Query Router Proposal

Unify web and desktop data loading patterns with one type-safe interface

    We have been making progress towards reducing redundant code when querying data from web and desktop. This proposal takes another big step in that direction.

    Our Current Situation

      When introducing a new feature that loads data from the daemon, we have to write a lot of near-duplicate code: one implementation for web and another for desktop.

      Some of our existing endpoints do not properly share code. For example, grpcClient.entities.searchEntities is called through different code paths on web and desktop (see the hypothetical illustration below), which:

        is harder to maintain

        may cause different behavior between platforms

        might force us to solve the same bug twice

      The easiest way to understand this issue is by looking at the implementations of desktop-universal-client and create-web-universal-client. Note how some of these code paths share the underlying implementation, such as fetchDirectory, but most of them do not.
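
      As a purely hypothetical illustration of that divergence (not our actual code; the /api/search route is invented), the same underlying call ends up duplicated per platform:

        declare const grpcClient: {
          entities: { searchEntities: (req: { query: string }) => Promise<unknown> };
        };

        // Desktop: calls gRPC directly and trusts the shape of the result.
        async function searchOnDesktop(query: string) {
          return grpcClient.entities.searchEntities({ query });
        }

        // Web: reaches the same data through a bespoke HTTP route with its
        // own parsing, so a fix in one path does not automatically reach
        // the other.
        async function searchOnWeb(query: string) {
          const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`);
          return res.json();
        }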

      An API router would act as a uniform solution to these two problems, reducing boilerplate and enforcing shared code patterns.

      This new pattern will make it harder to write fragmented code with divergent behavior and platform-specific bugs, so it will nudge us toward healthier shared code.

      In our current "universal client" we also have a mix of fetchers and hooks. We should move to the fetcher pattern, which will enable a more lightweight "universal client" and be more consistent than mixing the two patterns.
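
      As a rough sketch of the distinction (both signatures here are hypothetical), a fetcher is a plain async function that any caller can use, while a hook is bound to React:

        type Entity = { id: string; title: string };

        // Fetcher: a plain async function. It can back a React hook, a
        // route loader, or server-side rendering, so it composes broadly.
        declare function fetchEntity(id: string): Promise<Entity>;

        // Hook: bound to React's render lifecycle, so it only works inside
        // components and cannot be reused by non-React callers.
        declare function useEntity(id: string): { data?: Entity; isLoading: boolean };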

    An Incremental Solution

      We will introduce a new API router which allows us to specify endpoints for querying data. Each endpoint has (a sketch of a full endpoint definition follows this list):

        a key/name to identify it

        Zod (TypeScript) schemas for input and output

        an implementation that takes the strongly typed input and returns a strongly typed HMType, using the gRPC client

        optional: details for serializing and deserializing the input to/from a URL

        future: optional caching metadata for the HTTP cache and intelligent client-side caching
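
      Here is a minimal sketch of what one endpoint definition could look like. The defineEndpoint helper, the Endpoint type, and the URL path are assumptions for illustration; only grpcClient.entities.searchEntities comes from our existing code:

        import { z } from "zod";

        // Placeholder for the real gRPC client; the request/response shapes
        // of the one method used below are assumptions.
        type GrpcClient = {
          entities: {
            searchEntities: (req: { query: string }) => Promise<unknown>;
          };
        };

        // Hypothetical shape of an endpoint definition.
        type Endpoint<I extends z.ZodTypeAny, O extends z.ZodTypeAny> = {
          key: string;                                    // identifies the endpoint
          input: I;                                       // Zod schema for the input
          output: O;                                      // Zod schema for the output
          handler: (input: z.infer<I>, grpc: GrpcClient) => Promise<z.infer<O>>;
          serializeToUrl?: (input: z.infer<I>) => string; // optional URL mapping
        };

        const defineEndpoint = <I extends z.ZodTypeAny, O extends z.ZodTypeAny>(
          endpoint: Endpoint<I, O>
        ): Endpoint<I, O> => endpoint;

        const SearchOutput = z.object({
          entities: z.array(z.object({ id: z.string(), title: z.string() })),
        });

        // One endpoint, defined once and shared by web and desktop.
        export const searchEntities = defineEndpoint({
          key: "searchEntities",
          input: z.object({ query: z.string() }),
          output: SearchOutput,
          handler: async (input, grpc) => {
            const raw = await grpc.entities.searchEntities({ query: input.query });
            // Validate, never coerce: schema drift fails loudly and predictably.
            return SearchOutput.parse(raw);
          },
          serializeToUrl: (input) =>
            `/hm/api/search?q=${encodeURIComponent(input.query)}`,
        });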

      We can convert each endpoint, one at a time. Eventually the "desktop-universal-client" and "create-web-universal-client" files will lose all of their endpoint-specific fetchers and hooks, so you won't need to touch these files when introducing or modifying data endpoints.

      This solution will always use Zod validation, instead of type coercion, to ensure that the frontend code only receives data that is valid according to the schemas. Even when we have API drift (which should be rare anyway), the frontend will fail with predictable errors.
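
      For example, with a hypothetical output schema, drifted data produces a structured ZodError rather than a silently mistyped object:

        import { z } from "zod";

        const Output = z.object({ id: z.string(), updateTime: z.string() });

        // Simulated drifted response from the daemon: id is a number, not a
        // string, and updateTime is missing.
        const drifted: unknown = { id: 123 };

        const result = Output.safeParse(drifted);
        if (!result.success) {
          // The ZodError lists exactly which fields failed and why,
          // identically on web and desktop.
          console.error(result.error.issues);
        }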

      In addition, the client model hooks will be shared between platforms, so you will only have to implement a single hook when creating a new endpoint.
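
      A minimal sketch of such a shared hook, assuming a fetcher produced by the router (the fetcher name and its exact signature are hypothetical):

        import { useQuery } from "@tanstack/react-query";

        // Assumed fetcher from the router: on web it hits the HTTP route, on
        // desktop the gRPC-backed handler; both return the validated output.
        declare const fetchSearchEntities: (input: {
          query: string;
        }) => Promise<{ entities: { id: string; title: string }[] }>;

        // One hook definition, shared unchanged by both platforms.
        export function useSearchEntities(query: string) {
          return useQuery({
            queryKey: ["searchEntities", query],
            queryFn: () => fetchSearchEntities({ query }),
          });
        }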

      Take a look at the pull request here: https://github.com/seed-hypermedia/seed/pull/127

      Non-Goals

        For now we will continue using React Query

        For now we will continue with the pattern where the desktop frontend calls gRPC directly. Eventually we may decide to run the API router on the middle end, exposing the same HTTP interface to the desktop windows as we use on web. Until we have a more sophisticated invalidation system, there is little benefit in this change (and it may add a little overhead because of the extra server).

    Why this approach is better than GraphQL

      This is an incremental step that will be far easier to implement in our current codebase.

      Vanilla GraphQL does not support caching very well using HTTP primitives, and we want to make sure we can take advantage of HTTP caching.

      If we do decide to adopt GraphQL in the future, it will be easier to transition from this uniform pattern than from the mess that we have today.