Mirror: The highly customizable and versatile GraphQL client with which you add on features like normalized caching as you grow.

(docs) - Apply edits to automatically flagged errors (#1437)

+1 -1
docs/README.md
···
`urql` is a highly customizable and versatile GraphQL client with which you add on features like
normalized caching as you grow. It's built to be both easy to use for newcomers to
-GraphQL, as well as extensible, to grow to support dynamic single-app applications and highly
+GraphQL, and extensible, to grow to support dynamic single-app applications and highly
customized GraphQL infrastructure. In short, `urql` prioritizes usability and adaptability.
As you're adopting GraphQL, `urql` becomes your primary data layer and can handle content-heavy
+2 -2
docs/advanced/README.md
···
- [**Authentication**](./authentication.md) describes how to implement authentication using the `authExchange`
- [**Testing**](./testing.md) covers how to test components that use `urql` particularly in React.
- [**Authoring Exchanges**](./authoring-exchanges.md) describes how to implement exchanges from
-scratch and how they work internally. This is a good basis to understanding how some of the
+scratch and how they work internally. This is a good basis for understanding how some
features in this section function.
-- [**Auto-populate Mutations**](./auto-populate-mutations.md) presents the `populateExchange` addon which can make it easier to
+- [**Auto-populate Mutations**](./auto-populate-mutations.md) presents the `populateExchange` addon, which can make it easier to
update normalized data after mutations.
+35 -24
docs/advanced/authentication.md
···
## Typical Authentication Flow
-**Initial login** - the user opens the application and authenticates for the first time. They enter their credentials and receive an auth token.
+**Initial login** — the user opens the application and authenticates for the first time. They enter their credentials and receive an auth token.
The token is saved to storage that is persisted through sessions, e.g. `localStorage` on the web or `AsyncStorage` in React Native. The token is
added to each subsequent request in an auth header.
-**Resume** - the user opens the application after having authenticated in the past. In this case, we should already have the token in persisted
+**Resume** — the user opens the application after having authenticated in the past. In this case, we should already have the token in persisted
storage. We fetch the token from storage and add to each request, usually as an auth header.
-**Forced log out due to invalid token** - the user's session could become invalid for a variety reasons: their token expired, they requested to be
-signed out of all devices, or their session was invalidated remotely. In this case, we would want to also log them out in the application so they
+**Forced log out due to invalid token** — the user's session could become invalid for a variety of reasons: their token expired, they requested to be
+signed out of all devices, or their session was invalidated remotely. In this case, we would want to
+also log them out in the application, so they
could have the opportunity to log in again. To do this, we want to clear any persisted storage, and redirect them to the application home or login page.
-**User initiated log out** - when the user chooses to log out of the application, we usually send a logout request to the API, then clear any tokens
+**User initiated log out** — when the user chooses to log out of the application, we usually send a logout request to the API, then clear any tokens
from persisted storage, and redirect them to the application home or login page.
-**Refresh (optional)** - this is not always implemented, but given that your API supports it, the user will receive both an auth token and a refresh token,
-where the auth token is valid for a shorter duration of time (e.g. 1 week) than the refresh token (e.g. 6 months) and the latter can be used to request a new
+**Refresh (optional)** — this is not always implemented; if your API supports it, the
+user will receive both an auth token and a refresh token.
+The auth token is usually valid for a shorter duration of time (e.g. 1 week) than the refresh token
+(e.g. 6 months), and the latter can be used to request a new
auth token if the auth token has expired. The refresh logic is triggered either when the JWT is known to be invalid (e.g. by decoding it and inspecting the expiry date),
or when an API request returns with an unauthorized response. For GraphQL APIs, it is usually an error code, instead of a 401 HTTP response, but both can be supported.
When the token has been successfully refreshed (this can be done as a mutation to the GraphQL API or a request to a different API endpoint, depending on implementation),
···
```
We check that the `authState` doesn't already exist (this indicates that it is the first time this exchange is executed and not an auth failure) and fetch the auth state from
-storage. The structure of this particular`authState` is an object with keys for `token` and `refreshToken`, but this format is not required. You can
-use different keys or store any additional auth related information here. For example you could decode and store the token expiry date, which would save you from decoding
-your JWT every time you want to check whether your token is expired.
+storage. The structure of this particular `authState` is an object with keys for `token` and
+`refreshToken`, but this format is not required. We can use different keys or store any additional
+auth related information here. For example, we could decode and store the token expiry date, which
+would save us from decoding the JWT every time we want to check whether it has expired.
In React Native, this is very similar, but because persisted storage in React Native is always asynchronous, so is this function:
···
### Configuring `addAuthToOperation`
-The purpose of `addAuthToOperation` is to take apply your auth state to each request. Note that the format of the `authState` will be whatever
-you've returned from `getAuth` and not at all constrained by the exchange:
+The purpose of `addAuthToOperation` is to apply an auth state to each request. Note that the format
+of the `authState` will be whatever we've returned from `getAuth` and not constrained by the exchange:
```js
import { makeOperation } from '@urql/core';
···
};
```
-First we check that we have an `authState` and a `token`. Then we apply it to the request `fetchOptions` as an `Authorization` header.
-The header format can vary based on the API (e.g using `Bearer ${token}` instead of just `token`) which is why it'll be up to you to add the header
-in the expected format for your API.
+First, we check that we have an `authState` and a `token`. Then we apply it to the request
+`fetchOptions` as an `Authorization` header. The header format can vary based on the API (e.g. using
+`Bearer ${token}` instead of just `token`), which is why it'll be up to us to add the header
+in the expected format for our API.
### Configuring `didAuthError`
This function lets the exchange know what is defined to be an auth error for your API. `didAuthError` receives an `error` which is of type
-[`CombinedError`](../api/core.md#combinederror) and we can use the `graphQLErrors` array in `CombinedError` to determine if an auth error has occurred.
+[`CombinedError`](../api/core.md#combinederror), and we can use the `graphQLErrors` array in `CombinedError` to determine if an auth error has occurred.
The GraphQL error looks something like this:
···
}
```
-Most GraphQL APIs will communicate auth errors via the [error code extension](https://www.apollographql.com/docs/apollo-server/data/errors/#codes) which
-is the recommended approach. We'll be able to determine whether any of the GraphQL errors were due to an unauthorized error code, which would indicate an auth failure:
+Most GraphQL APIs will communicate auth errors via the [error code
+extension](https://www.apollographql.com/docs/apollo-server/data/errors/#codes), which
+is the recommended approach. We'll be able to determine whether any of the GraphQL errors were due
+to an unauthorized error code, which would indicate an auth failure:
```js
const didAuthError = ({ error }) => {
···
### Configuring `getAuth` (triggered after an auth error has occurred)
-If your API doesn't support any sort of token refresh, this is where you should simply log the user out.
+If the API doesn't support any sort of token refresh, this is where we could simply log the user out.
```js
const getAuth = async ({ authState }) => {
···
};
```
-Here, `logout()` is a placeholder that is called when we got an error, so that we can redirect to a login page again and clear our tokens from local storage or otherwise.
+Here, `logout()` is a placeholder that is called when we get an error, so that we can redirect to a
+login page again and clear our tokens from local storage or elsewhere.
-If we had a way to refresh our token using a refresh token, we can attempt to get a new token for the user first:
+If we had a way to refresh our token using a refresh token, we can attempt to get a new token for the
+user first:
```js
const getAuth = async ({ authState, mutate }) => {
···
When the application launches, the first thing we do is check whether the user has any auth tokens in persisted storage. This will tell us
whether to show the user the logged in or logged out view.
-The `isLoggedIn` prop should always be updated based on authentication state change e.g. set to `true` after the use has authenticated and their tokens have been
-added to storage, and set to `false` if the user has been logged out and their tokens have been cleared. It's important clear or add tokens to storage _before_
-updating the prop in order for the auth exchange to work correctly.
+The `isLoggedIn` prop should always be updated based on authentication state change. For instance, we may set it to
+`true` after the user has authenticated and their tokens have been added to storage, and set it to
+`false` once the user has been logged out and their tokens have been cleared. It's important to clear
+or add tokens to storage _before_ updating the prop in order for the auth exchange to work
+correctly.
+8 -7
docs/advanced/authoring-exchanges.md
···
**Second,** operations are checked against the cache. Depending on the `requestPolicy`,
cached results can be resolved from here instead, which would mean that the cache sends back the
-result and the operation doesn't travel any further in the chain.
+result, and the operation doesn't travel any further in the chain.
-**Third,** operations are sent to the API and the result is turned into an `OperationResult`.
+**Third,** operations are sent to the API, and the result is turned into an `OperationResult`.
**Lastly,** operation results then travel through the exchanges in _reverse order_, which is because
exchanges are a pipeline where all operations travel forward deeper into the exchange chain, and
···
```
This exchange does nothing other than forward all operations and return all results. Hence, it's
-called a `noopExchange` - an exchange that doesn't do anything.
+called a `noopExchange` — an exchange that doesn't do anything.
### Forward and Return Composition
···
### Only One Operations Stream
When writing an Exchange we have to be careful not to _split_ the stream into multiple ones by
-subscribing multiple times. Streams are lazy and immutable by default. Every time you use them, a new chain of streaming operators is created; since Exchanges are technically side-effects, we don't want to
-accidentally have multiple instances of them in parallel.
+subscribing multiple times. Streams are lazy and immutable by default. Every time you use them,
+a new chain of streaming operators is created; since Exchanges are technically side effects, we don't
+want to accidentally have multiple instances of them in parallel.
The `ExchangeIO` function receives an `operations$` stream. It's important to be careful to either only
use it once, or to _share_ its subscription.
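A hedged sketch of that pattern, using Wonka's `share`, `filter`, and `merge` operators to use the stream twice without splitting it:
```js
import { pipe, share, filter, merge } from 'wonka';

// Sketch: use operations$ more than once by sharing its subscription first.
const splitExchange = ({ forward }) => operations$ => {
  const shared$ = pipe(operations$, share);
  const queries$ = pipe(shared$, filter(op => op.kind === 'query'));
  const others$ = pipe(shared$, filter(op => op.kind !== 'query'));
  // Recombine into a single stream before forwarding.
  return forward(merge([queries$, others$]));
};
```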
···
synchronous. The `fetchExchange` is asynchronous since
it makes a `fetch` request and waits for a server response.
-When you're adding more exchanges it's often crucial
-to put them in a specific order. For instance - an authentication exchange
+When you're adding more exchanges, it's often crucial
+to put them in a specific order. For instance, an authentication exchange
will need to go before the `fetchExchange`, and a secondary cache will probably have to
go in front of the default cache exchange.
+5 -5
docs/advanced/auto-populate-mutations.md
···
The `populateExchange` allows you to auto-populate selection sets in your mutations using the
`@populate` directive. In combination with [Graphcache](../graphcache/README.md) this is a useful
-tool to update the data in your application automatically following a mutation, when your app grows
+tool to update the data in your application automatically following a mutation, when your app grows,
and it becomes harder to track all fields that have been queried before.
-> **NOTE:** The `populateExchange` is currently _experimental_! Certain patterns and usage paths
+> **NOTE:** The `populateExchange` is _experimental_! Certain patterns and usage paths
> like GraphQL field arguments aren't covered yet, and the exchange hasn't been extensively used
> yet.
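A typical setup, sketched with an assumed schema import and a placeholder URL, could look like this:
```js
import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';
import { populateExchange } from '@urql/exchange-populate';

import schema from './schema.json'; // assumption: an introspection result generated for your API

const client = createClient({
  url: 'http://localhost:3000/graphql', // placeholder URL
  exchanges: [
    dedupExchange,
    // populateExchange needs the schema to know which fields it can add to @populate selections.
    populateExchange({ schema }),
    cacheExchange({}),
    fetchExchange,
  ],
});
```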
···
## Example usage
-Consider the following queries which have been requested in other parts of your application:
+Consider the following queries, which have been requested in other parts of your application:
```graphql
# Query 1
···
### Choosing when to populate
-You may not want to populate your whole mutation response. In order to reduce your payload, pass populate lower in your query.
+You may not want to populate your whole mutation response. To reduce your payload, pass populate lower in your query.
```graphql
mutation addTodo($id: ID!) {
···
### Using aliases
If you find yourself using multiple queries with variables, it may be necessary to
-[use aliases](https://graphql.org/learn/queries/#aliases) in order to allow merging of queries.
+[use aliases](https://graphql.org/learn/queries/#aliases) to allow merging of queries.
> **Note:** This caveat may change in the future or this restriction may be lifted.
+3 -3
docs/advanced/debugging.md
···
## Devtools
-The quickest way to debug `urql` is to use the [`urql` devtools.](https://github.com/FormidableLabs/urql-devtools/)
+It's easiest to debug `urql` with the [`urql` devtools.](https://github.com/FormidableLabs/urql-devtools/)
It offers tools to inspect internal ["Debug Events"](#debug-events) as they happen, to explore data
as your app is seeing it, and to quickly trigger GraphQL queries.
···
### Tips
-Lastly, in summary, here are a few tips, dos, and don'ts that are important when we're adding new
-Debug Events to custom exchanges.
+Lastly, in summary, here are a few tips that are important when we're adding new Debug Events to
+custom exchanges:
- ✅ **Share internal details**: Frequent debug messages on key events inside your exchange are very
useful when later inspecting them, e.g. in the `devtools`.
+3 -3
docs/advanced/persistence-and-uploads.md
···
hash and sends this hash instead of the full query. If the server has seen this GraphQL query before
it will recognise it by its hash and process the GraphQL API request as usual, otherwise it may
respond using a `PersistedQueryNotFound` error. In that case the client is supposed to instead send
-the full GraphQL query and the hash together, which will cause the query to be "registered" with the
+the full GraphQL query, and the hash together, which will cause the query to be "registered" with the
server.
Additionally we could also decide to send these hashed queries as GET requests instead of POST
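Wiring this up usually looks roughly like the following sketch; the GET option and URL are assumptions:
```js
import { createClient, dedupExchange, cacheExchange, fetchExchange } from 'urql';
import { persistedFetchExchange } from '@urql/exchange-persisted-fetch';

const client = createClient({
  url: 'http://localhost:3000/graphql', // placeholder URL
  exchanges: [
    dedupExchange,
    cacheExchange,
    // Handles persisted queries; the fetchExchange afterwards still handles everything else.
    persistedFetchExchange({
      preferGetForPersistedQueries: true, // assumption: send persisted queries as GET requests
    }),
    fetchExchange,
  ],
});
```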
···
### Customizing Hashing
The `persistedFetchExchange` also accepts a `generateHash` option. This may be used to swap out the
-exchange's default method of generating SHA256 hashes. By default the exchange will use the
+exchange's default method of generating SHA256 hashes. By default, the exchange will use the
built-in [Web Crypto API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Crypto_API) on the
browser, which has been implemented to support IE11 as well. In Node.js it'll use the [Node
Crypto Module](https://nodejs.org/api/crypto.html) instead.
···
```
If you're using the `persistedFetchExchange` then put the `persistedFetchExchange` in front of the
-`multipartFetchExchange`, since only the latter is a full replacement for the `fetchExchange` and
+`multipartFetchExchange`, since only the latter is a full replacement for the `fetchExchange`, and
the former only handles query operations.
[Read more about `@urql/multipart-fetch-exchange` in our API
+2 -2
docs/advanced/retry-operations.md
···
# Retrying Operations
The `retryExchange` lets us retry specific operations; by default it will
-retry only network errors but we can specify additional options to add
+retry only network errors, but we can specify additional options to add
functionality.
## Installation and Setup
···
We have the `initialDelayMs` to specify at what interval the retrying should start; this means that if we specify `1000`, then when our `operation` fails we'll wait 1 second and then retry it.
-Next up is the `maxDelayMs`, our `retryExchange` will keep increasing the time between retries so we don't spam our server with requests it can't complete, this option ensures we don't exceed a certain threshold. This time between requests will increase with a random `back-off` factor multiplied by the `initialDelayMs`, read more about the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem).
+Next up is `maxDelayMs`: our `retryExchange` will keep increasing the time between retries so we don't spam our server with requests it can't complete, and this option ensures we don't exceed a certain threshold. This time between requests will increase with a random back-off factor multiplied by the `initialDelayMs`; read more about the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem).
Speaking of increasing the delay randomly, `randomDelay` allows us to disable this. When this option is set to `false`, we'll only increase the time between attempts by the `initialDelayMs`. This means if we fail the first time we'll wait 1 second, on the next failure 2 seconds, and so on.
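Put together, a configuration along the lines described above might look like this sketch (the URL and attempt limit are placeholders):
```js
import { createClient, dedupExchange, cacheExchange, fetchExchange } from 'urql';
import { retryExchange } from '@urql/exchange-retry';

const client = createClient({
  url: 'http://localhost:3000/graphql', // placeholder URL
  exchanges: [
    dedupExchange,
    cacheExchange,
    retryExchange({
      initialDelayMs: 1000, // wait 1 second before the first retry
      maxDelayMs: 15000, // never wait longer than 15 seconds between retries
      randomDelay: true, // apply the random back-off factor described above
      maxNumberAttempts: 2, // assumption: give up after two attempts
      retryIf: error => !!(error && error.networkError), // retry only network errors
    }),
    fetchExchange,
  ],
});
```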
+8 -8
docs/advanced/server-side-rendering.md
···
`;
```
-This will provide `__URQL_DATA__` globally which we've used in our first example to inject data into
+This will provide `__URQL_DATA__` globally, which we've used in our first example to inject data into
the `ssrExchange` on the client-side.
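For example, a client-side setup that rehydrates from that global might look roughly like this (the URL is a placeholder):
```js
import { createClient, dedupExchange, cacheExchange, fetchExchange, ssrExchange } from 'urql';

// On the client we rehydrate the ssrExchange from the serialized data.
const ssr = ssrExchange({
  isClient: true,
  initialState: typeof window !== 'undefined' ? window.__URQL_DATA__ : undefined,
});

const client = createClient({
  url: 'http://localhost:3000/graphql', // placeholder URL
  exchanges: [dedupExchange, cacheExchange, ssr, fetchExchange],
});
```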
Alternatively you can also call `restoreData` as long as this call happens synchronously before the
···
or `renderToNodeStream`](https://reactjs.org/docs/react-dom-server.html#rendertostring).
For React, `urql` has a "Suspense mode" that [allows data fetching to interrupt
-rendering](https://reactjs.org/docs/concurrent-mode-suspense.html). However, suspense is currently
+rendering](https://reactjs.org/docs/concurrent-mode-suspense.html). However, Suspense is
not supported by React during server-side rendering.
Using [the `react-ssr-prepass` package](https://github.com/FormidableLabs/react-ssr-prepass) however,
···
### With Preact
If you're using Preact instead of React, there's a drop-in replacement package for
-`react-ssr-prepass`, which is called `preact-ssr-prepass`. It only has a peer dependency on Preact
+`react-ssr-prepass`, which is called `preact-ssr-prepass`. It only has a peer dependency on Preact,
and we can install it like so:
```sh
···
npm install --save preact-ssr-prepass preact
```
-All above examples for `react-ssr-prepass` will still be the exact same, except that instead of
+All above examples for `react-ssr-prepass` will still be the same, except that instead of
using the `urql` package we'll have to import from `@urql/preact`, and instead of `react-ssr-prepass`
we'll have to import from `preact-ssr-prepass`.
···
this integration contains convenience methods specifically for `Next.js`.
These will simplify the above setup for SSR.
-To setup `next-urql`, first we'll install `next-urql` with `react-is` and `urql` as
+To set up `next-urql`, first we'll install `next-urql` with `react-is` and `urql` as
peer dependencies:
```sh
···
This will automatically set up server-side rendering on the page. The `withUrqlClient` higher-order
component function accepts the usual `Client` options as an argument. This may either just be an
-object or a function that receives the Next.js' `getInitialProps` context.
+object, or a function that receives Next.js' `getInitialProps` context.
One added caveat is that these options may not include the `exchanges` option because `next-urql`
injects the `ssrExchange` automatically at the right location. If you're setting up custom exchanges
···
### Resetting the client instance
-In rare scenario's you possibly will have to reset the client instance (reset all cache, ...), this is an uncommon scenario
-and we consider it "unsafe" so evaluate this carefully for yourself.
+In rare scenarios you may have to reset the client instance (resetting all caches, ...); this
+is an uncommon scenario, and we consider it "unsafe", so evaluate this carefully for yourself.
When this does seem like the appropriate solution, any component wrapped with `withUrqlClient` will receive the `resetUrqlClient`
property; when invoked, this will create a new top-level client and reset all prior operations.
+5 -2
docs/advanced/subscriptions.md
···
In the above example, we add the `subscriptionExchange` to the `Client` with the default exchanges
added before it. The `subscriptionExchange` is a factory that accepts additional options and returns
the actual `Exchange` function. It does not make any assumptions about the transport protocol and
-scheme that is used. Instead, we need to pass a `forwardSubscription` function which is called with
+scheme that is used. Instead, we need to pass a `forwardSubscription` function, which is called with
an "enriched" _Operation_ every time the `Client` attempts to execute a GraphQL Subscription.
When we define this function it must return an "Observable-like" object, which needs to follow the
···
`Client`'s `subscription` method for one-off subscriptions. This method is similar to the ones for
mutations and queries [that we've seen before on the "Core Package" page.](../basics/core.md)
-This method will always [returns a Wonka stream](../architecture.md#the-wonka-library) and doesn't have a `.toPromise()` shortcut method, since promises won't return the multiple values that a subscription may deliver. Let's convert the above example to one without framework code, as we may use subscriptions in a Node.js environment.
+This method always [returns a Wonka stream](../architecture.md#the-wonka-library) and doesn't
+have a `.toPromise()` shortcut method, since promises won't return the multiple values that a
+subscription may deliver. Let's convert the above example to one without framework code, as we may
+use subscriptions in a Node.js environment.
```js
import { pipe, subscribe } from 'wonka';
+7 -7
docs/advanced/testing.md
···
In the section ["Stream Patterns" on the "Architecture" page](../architecture.md) we've seen that
all methods on the client operate with and return streams. These streams are created using
-[the Wonka library](../architecture.md#the-wonka-library) and we're able to create streams
+[the Wonka library](../architecture.md#the-wonka-library), and we're able to create streams
ourselves to mock the different states of our operations, e.g. fetching, errors, or success with data.
You'll probably use one of these utility functions to create streams:
···
### Fetching
-Fetching states can be simulated by returning a stream which never returns. Wonka provides a utility for this, aptly called `never`.
+Fetching states can be simulated by returning a stream that never returns. Wonka provides a utility for this, aptly called `never`.
-Here's a fixture which stays in the _fetching_ state.
+Here's a fixture that stays in the _fetching_ state.
```tsx
import { Provider } from 'urql';
···
### Response (success)
-Response states are simulated by providing a stream which contains a network response. For single responses, Wonka's `fromValue` function can do this for us.
+Response states are simulated by providing a stream that contains a network response. For single responses, Wonka's `fromValue` function can do this for us.
**Example snapshot test of response state**
···
});
```
-The above client we've created mocks all three operations — queries, mutations, and subscriptions — to always remain in the `fetching: true` state.
+The above client we've created mocks all three operations — queries, mutations and subscriptions — to always remain in the `fetching: true` state.
Generally when we're _hoisting_ our mocked client and reusing it across multiple tests, we have to be
mindful not to instantiate the mocks outside of Jest's lifecycle functions (like `it`, `beforeEach`,
`beforeAll` and such) as it may otherwise reset our mocked functions' return values or
···
If you prefer to have more control on when the new data is arriving you can use the `makeSubject` utility from Wonka. You can see more details in the next section.
-Here's an example of testing a list component which uses a subscription.
+Here's an example of testing a list component that uses a subscription.
```tsx
import { OperationContext, makeOperation } from '@urql/core';
···
Simulating multiple responses can be useful, particularly testing `useEffect` calls dependent on changing query responses.
-For this, a _subject_ is the way to go. In short, it's a stream which you can push responses to. The `makeSubject` function from Wonka is what you'll want to use for this purpose.
+For this, a _subject_ is the way to go. In short, it's a stream that you can push responses to. The `makeSubject` function from Wonka is what you'll want to use for this purpose.
Below is an example of simulating subsequent responses (such as a cache update/refetch) in a test.
+9 -9
docs/api/core.md
···
# @urql/core
-The `@urql/core` package is the basis of all framework bindings. Every bindings package,
-like [`urql` for React](./urql.md) or [`@urql/preact`](./preact.md) will reuse the core logic and
+The `@urql/core` package is the basis of all framework bindings. Each bindings package,
+like [`urql` for React](./urql.md) or [`@urql/preact`](./preact.md), will reuse the core logic and
reexport all exports from `@urql/core`.
Therefore if you're not accessing utilities directly, aren't in a Node.js environment, and are using
framework bindings, you'll likely want to import from your framework bindings package directly.
···
| `fetchOptions` | `RequestInit \| () => RequestInit` | Additional `fetchOptions` that `fetch` in `fetchExchange` should use to make a request |
| `fetch` | `typeof fetch` | An alternative implementation of `fetch` that will be used by the `fetchExchange` instead of `window.fetch` |
| `suspense` | `?boolean` | Activates the experimental React suspense mode, which can be used during server-side rendering to prefetch data |
-| `requestPolicy` | `?RequestPolicy` | Changes the default request policy that will be used. By default this will be `cache-first`. |
+| `requestPolicy` | `?RequestPolicy` | Changes the default request policy that will be used. By default, this will be `cache-first`. |
| `preferGetMethod` | `?boolean` | This is picked up by the `fetchExchange` and will force all queries (not mutations) to be sent using the HTTP GET method instead of POST. |
| `maskTypename` | `?boolean` | Enables the `Client` to automatically apply the `maskTypename` utility to all `data` on [`OperationResult`s](#operationresult). This makes the `__typename` properties non-enumerable. |
···
This is a shorthand method for [`client.executeQuery`](#clientexecutequery), which accepts a query
(`DocumentNode | string`) and variables separately and creates a [`GraphQLRequest`](#graphqlrequest) using [`createRequest`](#createrequest) automatically.
-The returned `Source<OperationResult>` will also have an added `toPromise` method so the stream can
+The returned `Source<OperationResult>` will also have an added `toPromise` method, so the stream can
be conveniently converted to a promise.
```js
···
`true` or `false` to tell the `ssrExchange` whether to
write to (server-side) or read from (client-side) the cache.
-By default `isClient` defaults to `true` when the `Client.suspense`
+`isClient` defaults to `true` when the `Client.suspense`
mode is disabled and to `false` when the `Client.suspense` mode
is enabled.
···
during the server-side rendering pass, and allows you to populate
the cache on the client-side with the same data.
-During React rehydration this cache will be emptied and it will
+During React rehydration this cache will be emptied, and it will
become inactive and won't change the results of queries after
rehydration.
···
### makeOperation
This utility is used to either turn a [`GraphQLRequest` object](#graphqlrequest) into a new
-[`Operation` object](#operation) or to copy an `Operation`. It adds the `kind` property and the
+[`Operation` object](#operation) or to copy an `Operation`. It adds the `kind` property, and the
`operationName` alias that outputs a deprecation warning.
It accepts three arguments:
···
and marks every `__typename` property as non-enumerable.
The [`formatDocument`](#formatdocument) is often used by `urql` automatically and adds `__typename`
-fields to all results. However, this means that data can often not be passed back into variables or
-inputs on mutations, which is a common use-case. This utility hides these fields which can solves
+fields to all results. However, this means that data often cannot be passed back into variables or
+inputs on mutations, which is a common use-case. This utility hides these fields, which can solve
this problem.
It's used by the [`Client`](#client) when the `maskTypename` option is enabled.
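A small sketch of calling the standalone utility directly (the sample object is invented for illustration):
```js
import { maskTypename } from '@urql/core';

const data = { __typename: 'Todo', id: '1', text: 'Write docs' };
// Marks the __typename property as non-enumerable so the object can be reused,
// e.g. as a mutation input, without sending __typename back to the API.
const input = maskTypename(data);
```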
+6 -6
docs/api/urql.md
···
This hook returns a tuple of the shape `[result, executeQuery]`.
- The `result` is an object with the shape of an [`OperationResult`](./core.md#operationresult) with
-an added `fetching: boolean` property, indicating whether the query is currently being fetched.
+an added `fetching: boolean` property, indicating whether the query is being fetched.
- The `executeQuery` function optionally accepts
[`Partial<OperationContext>`](./core.md#operationcontext) and reexecutes the current query when
it's called. When `pause` is set to `true` this executes the query, overriding the otherwise
···
`[result, executeMutation]`.
- The `result` is an object with the shape of an [`OperationResult`](./core.md#operationresult) with
-an added `fetching: boolean` property, indicating whether the mutation is currently being executed.
+an added `fetching: boolean` property, indicating whether the mutation is being executed.
- The `executeMutation` function accepts variables and optionally
[`Partial<OperationContext>`](./core.md#operationcontext) and may be used to start executing a
mutation. It returns a `Promise` resolving to an [`OperationResult`](./core.md#operationresult).
···
it's called. When `pause` is set to `true` this starts the subscription, overriding the otherwise
paused hook.
-Since a subscription may proactively closed by the server, the additional `fetching: boolean`
-property on the `result` may update to `false` when the server ends the subscription.
-By default `urql` is not able to start subscriptions, since this requires some additional setup.
+The `fetching: boolean` property on the `result` may change to `false` when the server proactively
+ends the subscription. By default, `urql` is unable to start subscriptions, since this requires
+some additional setup.
[Read more about how to use the `useSubscription` API on the "Subscriptions"
page.](../advanced/subscriptions.md)
···
This component is a wrapper around [`useMutation`](#usemutation), exposing a [render prop
API](https://reactjs.org/docs/render-props.html) for cases where hooks aren't desirable.
-The `Mutation` component accepts a `query` prop and a function callback must be passed to `children`
+The `Mutation` component accepts a `query` prop, and a function callback must be passed to `children`
that receives the mutation result and must return a React element. The second argument of
`useMutation`'s returned tuple, `executeMutation` is passed as an added property on the mutation
result object.
+10 -10
docs/architecture.md
···
If `urql` was a train it would take several stops to arrive at its terminus, our API. It starts with us
defining queries or mutations. Any GraphQL request can be abstracted into their query documents and
-their variables. In `urql`, these GraphQL requests are treated as unique objects which are uniquely
+their variables. In `urql`, these GraphQL requests are treated as unique objects, which are uniquely
identified by the query document and variables (which is why a `key` is generated from the two). This
`key` is a hash number of the query document and variables and uniquely identifies our
[`GraphQLRequest`](./api/core.md#graphqlrequest).
···
![Operations and Results](./assets/urql-event-hub.png)
It's the `Client`'s responsibility to accept an `Operation` and execute it. The bindings internally
-call the `client.executeQuery`, `client.executeMutation`, or `client.executeSubscription` methods
+call the `client.executeQuery`, `client.executeMutation`, or `client.executeSubscription` methods,
and we'll get a "stream" of results. This "stream" allows us to register a callback with it to
receive results.
In the diagram we can see that each operation is a signal for our request to start, at which point
we can expect to receive our results eventually on a callback. Once we're not interested in results
anymore a special "teardown" signal is issued on the `Client`. While we don't see operations outside
-of the `Client`, they're what travel through the "Exchanges" on the `Client`.
+the `Client`, they're what travel through the "Exchanges" on the `Client`.
## The Client and Exchanges
To reiterate, when we use `urql`'s bindings for our framework of choice, methods are called on the
-`Client` but we never see the operations that are created in the background from our bindings. We
+`Client`, but we never see the operations that are created in the background from our bindings. We
call a method like `client.executeQuery` (or it's called for us in the bindings), an operation is
issued internally when we subscribe with a callback, and later our callback is called with results.
···
our perspective:
- We subscribe to a "stream" and expect to get results on a callback
-- The `Client` issues the operation and we'll receive some results back eventually as either the
-cache responds (synchronously) or the request gets sent to our API.
-- We eventually unsubscribe and the `Client` issues a "teardown" operation with the same `key` as
+- The `Client` issues the operation, and we'll receive some results back eventually as either the
+cache responds (synchronously), or the request gets sent to our API.
+- We eventually unsubscribe, and the `Client` issues a "teardown" operation with the same `key` as
the original operation, which concludes our flow.
The `Client` itself doesn't actually know what to do with operations. Instead, it sends them through
···
unique `key`.
- This operation is sent into the **exchanges** and eventually ends up at the `fetchExchange`
(or a similar exchange)
-- The operation is sent to the API and a **result** comes back which is wrapped in an `OperationResult`
+- The operation is sent to the API and a **result** comes back, which is wrapped in an `OperationResult`
- The `Client` filters the `OperationResult` by the `operation.key` and — via a callback — gives us
a **stream of results**.
···
But, **what are streams?**
Generally we refer to _streams_ as abstractions that allow us to program with asynchronous events
-over time. Within the JavaScript context we're thinking specifically in terms of of
+over time. Within the context of JavaScript we're specifically thinking in terms of
[Observables](https://github.com/tc39/proposal-observable)
and [Reactive Programming with Observables.](http://reactivex.io/documentation/observable.html)
These concepts may sound intimidating, but from a high-level view what we're talking about can be
thought of as a combination of promises and iterables (e.g. arrays). We're dealing with multiple
-events but our callback is called over time. It's like calling `forEach` on an array but expecting
+events, but our callback is called over time. It's like calling `forEach` on an array but expecting
the results to come in asynchronously.
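As a tiny illustration of that analogy (a sketch using Wonka's own helpers):
```js
import { pipe, fromArray, subscribe } from 'wonka';

// Subscribing to a stream calls our callback once per value, much like `forEach`,
// except values may also arrive asynchronously over time.
pipe(
  fromArray([1, 2, 3]),
  subscribe(value => {
    console.log(value);
  })
);
```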
As a user, if we're using one of the framework bindings that we've seen in [the "Basics"
+17 -16
docs/basics/core.md
···
## Installation
As we said above, if we are using bindings then those will already have installed `@urql/core` as
-they depend on it. They also all re-export all exports from `@urql/core`, so we can use those no
-matter which bindings we've installed. However, it's also possible to explicitly install
+they depend on it. They also all re-export all exports from `@urql/core`, so we can use those
+regardless of which bindings we've installed. However, it's also possible to explicitly install
`@urql/core` or use it standalone, e.g. in a Node.js environment.
```sh
···
`;
```
-This will all look familiar when coming from the `graphql-tag` package. The functionality is
-identical and the output is approximately the same. The two packages are also intercompatible.
-However, one small change that `@urql/core`'s implementation makes is that your fragment names don't
-have to be globally unique, since it's possible to create some one-off fragments every now and then.
+This usage will look familiar when coming from the `graphql-tag` package. The `gql` API is
+identical, and its output is approximately the same. The two packages are also intercompatible.
+However, one small change in `@urql/core`'s implementation is that your fragment names don't
+have to be globally unique, since it's possible to create some one-off fragments occasionally,
+especially for `@urql/exchange-graphcache`'s configuration.
It also pre-generates a "hash key" for the `DocumentNode` which is what `urql` does anyway, thus
avoiding some extra work compared to when the `graphql-tag` package is used with `urql`.
···
At the bare minimum we'll need to pass an API's `url` when we create a `Client` to get started.
Another common option is `fetchOptions`. This option allows us to customize the options that will be
-passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object or
+passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object, or
a function returning an options object.
In the following example we'll add a token to each `fetch` request that our `Client` sends to our
···
without it. However, another important option on the `Client` is the `exchanges` option.
This option passes a list of exchanges to the `Client`, which tell it how to execute our requests
-and how to cache data in a certain order. By default this will be populated with the list of
+and how to cache data in a certain order. By default, this will be populated with the list of
`defaultExchanges`.
```js
···
supports by adding new exchanges to this list. On [the "Architecture" page](../architecture.md)
we'll also learn more about what exchanges are and why they exist.
-For now it's sufficient for us to know that our requests are executed using the logic in the
+For now, it's enough for us to know that our requests are executed using the logic in the
exchanges in order. First, the `dedupExchange` deduplicates requests if we send the same queries
twice, the `cacheExchange` implements the default "document caching" behaviour (as we'll learn about
on the ["Document Caching"](./document-caching.md) page), and lastly the `fetchExchange` is
···
In the above example we're executing a query on the client, passing some variables, and
calling the `toPromise()` method on the return value to execute the request immediately and get the
-result as a promise. This may be useful when we don't plan on cancelling queries or we don't
+result as a promise. This may be useful when we don't plan on cancelling queries, or we don't
care about future updates to this data and are just looking to query a result once.
The same can be done for mutations by calling the `client.mutation` method instead of the
`client.query` method.
Similarly there's a way to read data from the cache synchronously, provided that the cache has
-received a result for a given query before. The `Client` has a `readQuery` method which is a
+received a result for a given query before. The `Client` has a `readQuery` method, which is a
shortcut for just that.
```js
···
```
This code example is similar to the one before. However, instead of sending a one-off query, we're
-subscribing to the query. Internally, this causes the `Client` to do the exact same, but the
+subscribing to the query. Internally, this causes the `Client` to do the same, but the
subscription means that our callback may be called repeatedly. We may get future results as well as
the first one.
···
immediately if our cache already has a result for the given query. The same principle applies here!
Our callback will be called synchronously if the cache already has a result.
-Once we're not interested in any results anymore we need to clean up after ourselves by calling
+Once we're not interested in any results anymore, we need to clean up after ourselves by calling
`unsubscribe`. This stops the subscription and makes sure that the `Client` doesn't actively update
the query anymore or refetch it. We can think of this pattern as being very similar to events or
event hubs.
-We're using [the Wonka library for our streams](https://wonka.kitten.sh/basics/background) which
+We're using [the Wonka library for our streams](https://wonka.kitten.sh/basics/background), which
we'll learn more about [on the "Architecture" page](./architecture.md). But we can think of this as
React's effects being called over time, or as `window.addEventListener`.
···
- [`CombinedError`](../api/core.md#combinederror) - our abstraction to combine one or more `GraphQLError`(s) and a `NetworkError`
- `makeResult` and `makeErrorResult` - utilities to create _Operation Results_
-- [`createRequest`](../api/core.md#createrequest) - a utility function to create a request from a query and some variables (which
-generates a stable _Operation Key_)
+- [`createRequest`](../api/core.md#createrequest) - a utility function to create a request from a
+query and some variables (which generates a stable _Operation Key_)
There are other utilities not mentioned here. Read more about the `@urql/core` API in the [API docs](../api/core.md).
+6 -7
docs/basics/document-caching.md
···
# Document Caching
-By default `urql` uses a concept called _Document Caching_. It will avoid sending the same requests
+By default, `urql` uses a concept called _Document Caching_. It will avoid sending the same requests
to a GraphQL API repeatedly by caching the result of each query.
This works like the cache in a browser. `urql` creates a key for each request that is sent based on
···
## Request Policies
-The _request policy_ that is defined will alter what the default document cache does. By default the
+The _request policy_ that is defined will alter what the default document cache does. By default, the
cache will prefer cached results and will otherwise send a request, which is called `cache-first`.
In total there are four different policies that we can use:
···
## Document Cache Gotchas
-This cache has a small trade-off! If we request a list of data and the API returns an empty list,
-the cache won't be able to see the `__typename` of said list and won't invalidate.
+This cache has a small trade-off! If we request a list of data, and the API returns an empty list,
+then the cache won't be able to see the `__typename` of said list and invalidate it.
There are two ways to fix this issue: supplying `additionalTypenames` to the context of your query or [switching to "Normalized Caching"
instead](../graphcache/normalized-caching.md).
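In React, passing `additionalTypenames` via the context could look roughly like this sketch (the `Todo` typename and the query are placeholders):
```js
import { useMemo } from 'react';
import { useQuery } from 'urql';

const TodosQuery = `query { todos { id text } }`; // placeholder query

const Todos = () => {
  // Memoize the context so the query isn't re-executed on every render.
  const context = useMemo(() => ({ additionalTypenames: ['Todo'] }), []);
  const [result] = useQuery({ query: TodosQuery, context });
  return null; // rendering omitted
};
```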
···
Now the cache will know when to invalidate this query even when the list is empty.
-We also have the possibility to use this for `mutations`.
-There are moments where a mutation can cause a side-effect on your server side and it needs
-to invalidate an additional entity.
+We may also use this feature for mutations, since occasionally mutations must invalidate data that
+isn't directly connected to a mutation by a `__typename`.
```js
const [result, execute] = useMutation(`mutation($name: String!) { createUser(name: $name) }`);
+1 -1
docs/basics/errors.md
···
- The `networkError` property will contain any error that stopped `urql` from making a network
request.
- The `graphQLErrors` property may be an array that contains [normalized `GraphQLError`s as they
-were returned in the `errors` array from a GraphQL API.](https://graphql.org/graphql-js/error/)
+were received in the `errors` array from a GraphQL API.](https://graphql.org/graphql-js/error/)
Additionally, the `message` of the error will be generated and combined from the errors for
debugging purposes.
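For instance, a result's error might be inspected roughly like this:
```js
// Sketch: distinguishing a network failure from GraphQL errors on a CombinedError.
function logError(error) {
  if (!error) return;
  if (error.networkError) {
    console.log('Network error:', error.networkError.message);
  }
  (error.graphQLErrors || []).forEach(graphQLError => {
    console.log('GraphQL error:', graphQLError.message);
  });
}
```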
+16 -16
docs/basics/react-preact.md
···
### Installation
-Installing `urql` is as quick as you'd expect and you won't need any other packages to get started
+Installing `urql` is as quick as you'd expect, and you won't need any other packages to get started
with at first. We'll install the package with our package manager of choice.
```sh
···
At the bare minimum we'll need to pass an API's `url` when we create a `Client` to get started.
Another common option is `fetchOptions`. This option allows us to customize the options that will be
-passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object or
+passed to `fetch` when a request is sent to the given API `url`. We may pass in an options object, or
a function returning an options object.
In the following example we'll add a token to each `fetch` request that our `Client` sends to our
···
);
```
-Now every component and element inside and under the `Provider` are able to use GraphQL queries that
+Now every component and element inside and under the `Provider` can use GraphQL queries that
will be sent to our API.
## Queries
Both libraries offer a `useQuery` hook and a `Query` component. The latter accepts the same
-
parameters but we won't cover it in this guide. [Look it up in the API docs if you prefer
+
parameters, but we won't cover it in this guide. [Look it up in the API docs if you prefer
render-props components.](../api/urql.md#query-component)
### Run a first query
···
Here we have implemented our first GraphQL query to fetch todos. We see that `useQuery` accepts
options and returns a tuple. In this case we've set the `query` option to our GraphQL query. The
-tuple we then get in return is an array that contains a result object and a re-execute function.
+tuple we then get in return is an array that contains a result object, and a re-execute function.
-The result object contains several properties. The `fetching` field indicates whether we're currently
+The result object contains several properties. The `fetching` field indicates whether the hook is
loading data, `data` contains the actual `data` from the API's result, and `error` is set when either
the request to the API has failed or when our API result contained some `GraphQLError`s, which
we'll get into later on the ["Errors" page](./errors.md).
···
`POST` request body that is sent to our GraphQL API.
Whenever the `variables` (or the `query`) option on the `useQuery` hook changes, `fetching` will
-switch to `true` and a new request will be sent to our API, unless a result has already been cached
+switch to `true`, and a new request will be sent to our API, unless a result has already been cached
previously.
### Pausing `useQuery`
···
Let's pause the query we've just
written to not execute when these variables are empty, to prevent `null` variables from being
-executed. We can do this by means of setting the `pause` option to `true`:
+executed. We can do this by setting the `pause` option to `true`:
```jsx
const Todos = ({ from, limit }) => {
···
than just `query` and `variables`. Another option we should touch on is `requestPolicy`.
The `requestPolicy` option determines how results are retrieved from our `Client`'s cache. By
-default this is set to `cache-first`, which means that we prefer to get results from our cache, but
+default, this is set to `cache-first`, which means that we prefer to get results from our cache, but
are falling back to sending an API request.
Request policies aren't specific to `urql`'s React API, but are a common feature in its core. [You
···
The `useQuery` hook updates and executes queries whenever its inputs, like the `query` or
`variables` change, but in some cases we may find that we need to programmatically trigger a new
-query. This is the purpose of the `reexecuteQuery` function which is the second item in the tuple
+query. This is the purpose of the `reexecuteQuery` function, which is the second item in the tuple
that `useQuery` returns.
Triggering a query programmatically may be useful in a couple of cases. It can for instance be used
-to refresh data that is currently being displayed. In these cases we may also override the
-`requestPolicy` of our query just once and set it to `network-only` to skip the cache.
+to refresh the hook's data. In these cases we may also override the `requestPolicy` of our query just
+once and set it to `network-only` to skip the cache.
```jsx
const Todos = ({ from, limit }) => {
···
## Mutations
Both libraries offer a `useMutation` hook and a `Mutation` component. The latter accepts the same
-parameters but we won't cover it in this guide. [Look it up in the API docs if you prefer
+parameters, but we won't cover it in this guide. [Look it up in the API docs if you prefer
render-props components.](../api/urql.md#mutation-component)
### Sending a mutation
···
### Using the mutation result
When calling our `updateTodo` function we have two ways of getting to the result as it comes back
-from our API. We can either use the first value of the returned tuple — our `updateTodoResult` — or
+from our API. We can either use the first value of the returned tuple, our `updateTodoResult`, or
we can use the promise that `updateTodo` returns.
```jsx
···
```
The result is useful when your UI has to display progress on the mutation, and the returned
-promise is particularly useful when you're adding side-effects that run after the mutation has
+promise is particularly useful when you're adding side effects that run after the mutation has
completed.
### Handling mutation errors
···
## Reading on
This concludes the introduction for using `urql` with React or Preact. The rest of the documentation
-is mostly framework-agnostic and will apply to either `urql` in general or the `@urql/core` package,
+is mostly framework-agnostic and will apply to either `urql` in general, or the `@urql/core` package,
which is the same between all framework bindings. Hence, next we may want to read one of
the following to learn more about the internals:
+8 -8
docs/basics/svelte.md
···
The `operationStore` function creates a [Svelte Writable store](https://svelte.dev/docs#writable).
You can use it to initialise a data container in `urql`. This store holds on to our query inputs,
-like the GraphQL query and variables, which we can change to launch new queries, and also exposes
+like the GraphQL query and variables, which we can change to launch new queries. It also exposes
the query's eventual result, which we can then observe.
### Run a first query
···
<button on:click={nextPage}>Next page</button>
```
-The `operationStore` provides getters too so it's also possible for us to pass `todos` around and
+The `operationStore` provides getters as well, so it's also possible for us to pass `todos` around and
update `todos.variables` or `todos.query` directly. Both updating `todos.variables` and
`$todos.variables` in a component, for instance, will cause `query` to pick up the update and execute
our changes.
···
started at will. Instead, the `query`'s third argument, the `context`, may have an added `pause`
option that can be set to `true` to temporarily _freeze_ all changes and stop requests.
-For instance we may start out with a paused store and then unpause it once a callback is invoked:
+For instance, we may start out with a paused store and then unpause it once a callback is invoked:
```html
<script>
···
most interesting option the `context` may contain is `requestPolicy`.
The `requestPolicy` option determines how results are retrieved from our `Client`'s cache. By
-default this is set to `cache-first`, which means that we prefer to get results from our cache, but
+default, this is set to `cache-first`, which means that we prefer to get results from our cache, but
are falling back to sending an API request.
In total there are four different policies that we can use:
···
...
```
-As we can see, the `requestPolicy` is easily changed here and we can read our `context` option back
+As we can see, the `requestPolicy` is easily changed, and we can read our `context` option back
from `todos.context`, just as we can check `todos.query` and `todos.variables`. Updating
`operationStore.context` can be very useful to also refetch queries, as we'll see in the next
section.
···
The default caching approach in `@urql/svelte` typically takes care of updating queries on the fly
quite well and does so automatically. Sometimes it may be necessary though to refetch data and to
execute a query with a different `context`. Triggering a query programmatically may be useful in a
-couple of cases. It can for instance be used to refresh data that is currently being displayed.
+couple of cases. It can for instance be used to refresh data.
We can trigger a new query update by changing out the `context` of our `operationStore`.
···
## Mutations
The `mutation` function isn't dissimilar from the `query` function but is triggered manually and
-can accept a [`GraphQLRequest` object](../api/core.md#graphqlrequest) too while also supporting our
+can accept a [`GraphQLRequest` object](../api/core.md#graphqlrequest), while also supporting our
trusty `operationStore`.
### Sending a mutation
···
## Reading on
This concludes the introduction for using `urql` with Svelte. The rest of the documentation
-is mostly framework-agnostic and will apply to either `urql` in general or the `@urql/core` package,
+is mostly framework-agnostic and will apply to either `urql` in general, or the `@urql/core` package,
which is the same between all framework bindings. Hence, next we may want to read one of
the following to learn more about the internals:
+8 -7
docs/comparison.md
···
# Comparison
-> This comparison page aims to be accurate, unbiased, and up-to-date. If you see any information that
+> This comparison page aims to be detailed, unbiased, and up-to-date. If you see any information that
> may be inaccurate or could be improved otherwise, please feel free to suggest changes.
The most common question that you may encounter with GraphQL is what client to choose when you are
-getting started. We aim to provide an unbiased and accurate comparison of several options on this
+getting started. We aim to provide an unbiased and detailed comparison of several options on this
page, so that you can make an **informed decision**.
All options come with several drawbacks and advantages, and all of these clients have been around
for a while now. A little known fact is that `urql` in its current form and architecture has already
-existed since February of 2019, and its normalized cache has been around since September 2019.
+existed since February 2019, and its normalized cache has been around since September 2019.
Overall, we would recommend making your decision based on whether your required features are
supported, which patterns you'll use (or restrictions thereof), and you may want to look into
-
whether all of the parts and features you're interested in are well maintained.
+
whether all the parts and features you're interested in are well maintained.
## Comparison by Features
···
Typically these are all additional addon features that you may expect from a GraphQL client, no
matter which framework you use it with. It's worth mentioning that all three clients support some
-
kind of extensibility API which allows you to change when and how queries are sent to an API. These
+
kind of extensibility API, which allows you to change when and how queries are sent to an API. These
are easy-to-use primitives, particularly in Apollo with links and in `urql` with exchanges. The
major difference in `urql` is that all caching logic is abstracted in exchanges too, which makes
it easy to swap the caching logic or other behavior out (and hence makes `urql` slightly more
···
`@urql/exchange-graphcache` we chose to include it as a feature since it also strengthened other
guarantees that the cache makes.
-
Relay does in fact have similar guarantees as [`urql`'s Commutativity Guarantees](./graphcache/under-the-hood.md)
+
Relay does in fact have guarantees similar to [`urql`'s Commutativity
+
Guarantees](./graphcache/under-the-hood.md),
which are more evident when applying list updates out of order under more complex network
conditions.
···
- Parts of the `graphql` package tree-shake away and may also be replaced (e.g. `parse`)
- All packages in `urql` reuse parts of `@urql/core` and `wonka`, which means adding all their total
-
sizes up doesn't give you an accurate result.
+
sizes up doesn't give you an accurate estimate of the total bundle size to expect.
- These sizes may change drastically given the code you write and add yourself, but can be managed
via precompilation (e.g. with `babel-plugin-graphql-tag` or GraphQL Code Generator for Apollo and
`urql`)
+5 -5
docs/graphcache/README.md
···
# Graphcache
-
In `urql`, caching is fully configurable via [exchanges](../architecture.md) and the default
+
In `urql`, caching is fully configurable via [exchanges](../architecture.md), and the default
`cacheExchange` in `urql` offers a ["Document Cache"](../basics/document-caching.md), which is
-
sufficient for sites that heavily rely and render static content. However as an app grows more
+
usually enough for sites that heavily rely on static content. However, as an app grows more
complex, it's likely that the data and state that `urql` manages will also grow more complex and
introduce interdependencies between data.
···
how [data is often structured in
Redux.](https://redux.js.org/recipes/structuring-reducers/normalizing-state-shape/)
-
In `urql`, normalized caching is an opt-in feature which is provided by the
+
In `urql`, normalized caching is an opt-in feature, which is provided by the
`@urql/exchange-graphcache` package, _Graphcache_ for short.
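As a quick sketch (the URL is a placeholder), opting in usually means swapping the default document cache for the `cacheExchange` that `@urql/exchange-graphcache` exports:

```js
import { createClient, dedupExchange, fetchExchange } from 'urql';
import { cacheExchange } from '@urql/exchange-graphcache';

const client = createClient({
  url: 'https://example.com/graphql', // placeholder API URL
  exchanges: [
    dedupExchange,
    // replaces urql's default document cacheExchange
    cacheExchange({}),
    fetchExchange,
  ],
});
```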
## Features
-
The following pages introduce different features in _Graphcache_ which together make it a compelling
+
The following pages introduce different features in _Graphcache_, which together make it a compelling
alternative to the standard [document cache](../basics/document-caching.md) that `urql` uses by
default.
- 🔁 [**Fully reactive, normalized caching.**](./normalized-caching.md) _Graphcache_ stores data in
-
a normalized data structure. Query, mutation, and subscription results may update one another if
+
a normalized data structure. Query, mutation and subscription results may update one another if
they share data, and the app will rerender or refetch data accordingly. This often allows your app
to make fewer API requests, since data may already be in the cache.
- 💾 [**Custom cache resolvers**](./local-resolvers.md) Since all queries are fully resolved in the
+32 -32
docs/graphcache/cache-updates.md
···
that is found in a result will be stored under the entity's key.
A query's result is represented as a graph, which can also be understood as a tree structure,
-
starting from the root `Query` entity which then connects to other entities via links, which are
+
starting from the root `Query` entity, which then connects to other entities via links, which are
relations stored as keys, where each entity has records that store scalar values, which are the
tree's leaves. On the previous page, on ["Local Resolvers"](./local-resolvers.md), we've seen how
resolvers can be attached to fields to manually resolve other entities (or transform record fields).
···
[quote](./normalized-caching.md/#storing-normalized-data):
> Any mutation or subscription can also be written to this data structure. Once Graphcache finds a
-
> keyable entity in their results it's written to its relational table which may update other queries
-
> in our application.
+
> keyable entity in their results it's written to its relational table, which may update other
+
> queries in our application.
This means that mutations and subscriptions still write and update entities in the cache. These
-
updates are then reflected on all queries that our app currently uses. However, there are
-
limitations to this. While resolvers can be used to passively change data for queries, for mutations
+
updates are then reflected on all active queries that our app uses. However, there are limitations to this.
+
While resolvers can be used to passively change data for queries, for mutations
and subscriptions we sometimes have to write **updaters** to update links and relations.
This is often necessary when a given mutation or subscription delivers a result that is more granular
than what the cache needs in order to update all affected entities.
···
An "updater" may be attached to a `Mutation` or `Subscription` field and accepts four positional
arguments, which are the same as [the resolvers' arguments](./local-resolvers.md):
-
- `result`: The full API result that's currently being written to the cache. Typically we'd want to
+
- `result`: The full API result that's being written to the cache. Typically we'd want to
avoid coupling by only looking at the current field that the updater is attached to, but it's
worth noting that we can access any part of the result.
- `args`: The arguments that the field has been called with, which will be replaced with an empty
···
- `cache`: The `cache` instance, which gives us access to methods allowing us to interact with the
local cache. Its full API can be found [in the API docs](../api/graphcache.md#cache). On this page
we use it frequently to read from and write to the cache.
-
- `info`: This argument shouldn't be used frequently but it contains running information about the
+
- `info`: This argument shouldn't be used frequently, but it contains running information about the
traversal of the query document. It allows us to make resolvers reusable or to retrieve
information about the entire query. Its full API can be found [in the API
docs](../api/graphcache.md#info).
The return value of cache updaters is disregarded (and typed as `void` in TypeScript), which makes any
-
method that they call on the `cache` instance a side-effect, which may trigger additional cache
+
method that they call on the `cache` instance a side effect, which may trigger additional cache
changes and updates all affected queries as we modify them.
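As a rough sketch of the shape this configuration takes (the `addTodo` and `newTodo` fields are purely illustrative):

```js
import { cacheExchange } from '@urql/exchange-graphcache';

cacheExchange({
  updates: {
    Mutation: {
      // illustrative mutation field; the updater's return value is ignored
      addTodo: (result, args, cache, info) => {
        // call methods on `cache` here to update links and lists
      },
    },
    Subscription: {
      // updaters for subscription fields have the same signature
      newTodo: (result, args, cache, info) => {},
    },
  },
});
```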
## Manually updating entities
···
> [the `gql` tag function](../api/core.md#gql) because `writeFragment` only accepts
> GraphQL `DocumentNode`s as inputs, and not strings.
-
### Cache Updates outside of updates
+
### Cache Updates outside updates
-
Cache updates are **not** possible outside of `updates`. If we attempt to store the `cache` in a
-
variable and call its methods outside of any `updates` functions (or functions, like `resolvers`)
+
Cache updates are **not** possible outside `updates`. If we attempt to store the `cache` in a
+
variable and call its methods outside any `updates` functions (or functions, like `resolvers`)
then Graphcache will throw an error.
-
Methods like these cannot be called outside of the `cacheExchange`'s `updates` functions, because
+
Methods like these cannot be called outside the `cacheExchange`'s `updates` functions, because
all updates are isolated to be _reactive_ to mutations and subscription events. In Graphcache,
out-of-band updates aren't permitted because the cache attempts to only represent the server's
state. This limitation keeps the data of the cache true to the server data we receive from API
results and makes its behaviour much more predictable.
-
If we still manage to call any of the cache's methods outside of its callbacks in its configuration,
+
If we still manage to call any of the cache's methods outside its callbacks in its configuration,
we will receive [a "(2) Invalid Cache Call" error](./errors.md#2-invalid-cache-call).
## Updating lists or links
···
Here we use the `cache.updateQuery` method, which is similar to the `cache.readQuery` method that
we've seen [on the "Local Resolvers" page before](./local-resolvers.md#reading-a-query).
-
This method accepts a callback which will give us the `data` of the query, as read from the locally
-
cached data and we may return an updated version of this data. While we may want to instinctively
+
This method accepts a callback, which will give us the `data` of the query, as read from the locally
+
cached data, and we may return an updated version of this data. While we may want to instinctively
opt for immutably copying and modifying this data, we're actually allowed to mutate it directly,
since it's just a copy of the data that's been read by the cache.
This `data` may also be `null` if the cache doesn't actually have enough locally cached information
to fulfil the query. This is important because resolvers aren't actually applied to cache methods in
-
updaters. All resolvers are ignored so it becomes impossible to accidentally commit transformed data
+
updaters. All resolvers are ignored, so it becomes impossible to accidentally commit transformed data
to our cache. We could safely add a resolver for `Todo.createdAt` and wouldn't have to worry about
an updater accidentally writing it to the cache's internal data structure.
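To illustrate, here's a sketch of an updater that appends a new item to a list query. The `TodosQuery` document and the `addTodo` mutation field are assumptions for this example:

```js
import { gql } from '@urql/core';

// Assumed query that the app uses to display its list of todos
const TodosQuery = gql`
  query {
    todos {
      id
      text
    }
  }
`;

const updates = {
  Mutation: {
    addTodo: (result, _args, cache) => {
      cache.updateQuery({ query: TodosQuery }, data => {
        // `data` may be null if the cache can't fully resolve the query
        if (data) data.todos.push(result.addTodo);
        return data;
      });
    },
  },
};
```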
···
the cache. However, we've used a rather simple example when we've looked at a single list on a known
field.
-
In many schemas pagination is quite common and when we for instance delete a todo then knowing which
-
list to update becomes unknowable. We cannot know ahead of time how many pages (and using which
-
variables) we've already accessed. This knowledge in fact _shouldn't_ be available to Graphcache.
-
Querying the `Client` is an entirely separate concern that's often colocated with some part of our
+
In many schemas pagination is quite common, and when we for instance delete a todo, the
+
lists to update become unknowable. We cannot know ahead of time how many pages (and which variables)
+
we've already accessed. This knowledge in fact _shouldn't_ be available to Graphcache. Querying the
+
`Client` is an entirely separate concern that's often colocated with some part of our
UI code.
```graphql
···
}
```
-
Suppose we have the above mutation which deletes a `Todo` entity by its ID. Our app may query a list
+
Suppose we have the above mutation, which deletes a `Todo` entity by its ID. Our app may query a list
of these items over many pages with separate queries being sent to our API, which makes it hard to
-
know which fields should be checked:
+
know the fields that should be checked:
```graphql
query PaginatedTodos ($skip: Int) {
···
}
```
-
Instead, we can **introspect an entity's fields** to find out dynamically which fields we may want
-
to update. This is possible thanks to [the `cache.inspectFields`
-
method](../api/graphcache.md#inspectfields). This method accepts a key or a keyable entity like the
+
Instead, we can **introspect an entity's fields** to find the fields we may want to update
+
dynamically. This is possible thanks to [the `cache.inspectFields`
+
method](../api/graphcache.md#inspectfields). This method accepts a key, or a keyable entity like the
`cache.keyOfEntity` method that [we've seen on the "Local Resolvers"
page](./local-resolvers.md#resolving-by-keys) or the `cache.resolve` method's first argument.
···
- `arguments`: The arguments for the given field, since each field that accepts arguments can be
accessed multiple times with different arguments. In this example we're looking at
`arguments.skip` to find all unique pages.
-
- `fieldKey`: This is the field's key which can come in useful to retrieve a field using
+
- `fieldKey`: This is the field's key, which can come in useful to retrieve a field using
`cache.resolve(entityKey, fieldKey)` to prevent the arguments from having to be stringified
repeatedly.
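As a sketch, assuming the `PaginatedTodos` query above has been parsed into a `PaginatedTodosQuery` document and that a hypothetical `deleteTodo` mutation removes an item, we can enumerate every cached `todos` field on `Query` and update each page:

```js
const updates = {
  Mutation: {
    deleteTodo: (_result, args, cache) => {
      cache
        .inspectFields('Query')
        // only look at the paginated `todos` fields, whatever their arguments
        .filter(field => field.fieldName === 'todos')
        .forEach(field => {
          cache.updateQuery(
            { query: PaginatedTodosQuery, variables: { skip: field.arguments.skip } },
            data => {
              if (data) {
                data.todos = data.todos.filter(todo => todo.id !== args.id);
              }
              return data;
            }
          );
        });
    },
  },
};
```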
···
We may use the cache's [`cache.invalidate` method](../api/graphcache.md#invalidate) to either
invalidate entire entities or individual fields. It has the same signature as [the `cache.resolve`
-
method](../api/graphcache.md#resolve) which we've already seen [on the "Local Resolvers" page as
+
method](../api/graphcache.md#resolve), which we've already seen [on the "Local Resolvers" page as
well](./local-resolvers.md#resolving-other-fields). We can simplify the previous update we've written
with a call to `cache.invalidate`:
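For instance, a sketch under the same assumptions as before (a hypothetical `deleteTodo` mutation field and a `Todo` type):

```js
const updates = {
  Mutation: {
    deleteTodo: (_result, args, cache) => {
      // Invalidating the entity removes it from the cache; queries that
      // referenced it become cache misses and will be refetched.
      cache.invalidate({ __typename: 'Todo', id: args.id });
    },
  },
};
```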
···
If we know what result a mutation may return, why wait for the GraphQL API to fulfill our mutations?
In addition to the `updates` configuration, we may also pass an `optimistic` option to the
-
`cacheExchange` which is a factory function using which we can create a "virtual" result for a
+
`cacheExchange`, which is a factory function with which we can create a "virtual" result for a
mutation. This temporary result can be applied immediately to the cache to give our users the
illusion that mutations were executed immediately, which is a great method to reduce waiting time
and to make our apps feel snappier.
···
- `cache`: The `cache` instance, which gives us access to methods allowing us to interact with the
local cache. Its full API can be found [in the API docs](../api/graphcache.md#cache). On this page
we use it frequently to read from and write to the cache.
-
- `info`: This argument shouldn't be used frequently but it contains running information about the
+
- `info`: This argument shouldn't be used frequently, but it contains running information about the
traversal of the query document. It allows us to make resolvers reusable or to retrieve
information about the entire query. Its full API can be found [in the API
docs](../api/graphcache.md#info).
The usual `parent` argument isn't present since optimistic functions don't have any server data to
handle and instead create this data themselves. When a mutation is run that contains one or more
-
optimistic mutation fields, Graphcache picks these up and generates immediate changes which it
+
optimistic mutation fields, Graphcache picks these up and generates immediate changes, which it
applies to the cache. The `resolvers` functions also trigger as if the results were real server
results.
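For illustration, a sketch of an `optimistic` configuration; the `favoriteTodo` field and its result shape are assumptions:

```js
import { cacheExchange } from '@urql/exchange-graphcache';

cacheExchange({
  optimistic: {
    // receives the mutation's variables, the cache, and info
    favoriteTodo: (variables, cache, info) => ({
      __typename: 'Todo',
      id: variables.id,
      favorite: true,
    }),
  },
});
```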
···
Sometimes it's not possible for us to retrieve all data that an optimistic update requires to create
a "fake result" from the cache or from all existing variables.
-
This is why Graphcache allows for a small escape hatch for these scenarios which allows us to access
-
additional variables which we may want to pass from our UI code to the mutation. For instance, given
+
This is why Graphcache provides a small escape hatch for these scenarios, which allows us to access
+
additional variables that we may want to pass from our UI code to the mutation. For instance, given
a mutation like the following, we may add more variables than the mutation specifies:
```graphql
+15 -15
docs/graphcache/errors.md
···
**This document lists out all errors and warnings in `@urql/exchange-graphcache`.**
-
Any unexpected behaviour, condition, or error will be marked by an error or warning
-
in development, which will output a helpful little message. Sometimes however, this
-
message may not actually tell you everything about what's going on.
+
Any unexpected behaviour or condition will be marked by an error or warning
+
in development, which outputs a helpful little message. Sometimes, however, this
+
message may not actually tell you about everything that's going on.
This is a supporting document that explains every error and attempts to give more
information on how you may be able to fix some issues or avoid these errors/warnings.
···
## (1) Invalid GraphQL document
> Invalid GraphQL document: All GraphQL documents must contain an OperationDefinition
-
> node for a query, subscription, or mutation.
+
> node for a query, subscription or mutation.
There are multiple places where you're passing in GraphQL documents, either through
methods on `Cache` (e.g. `cache.updateQuery`) or via `urql` using the `Client` or
hooks like `useQuery`.
-
Your queries must always contain a main operation, so either a query, mutation, or
+
Your queries must always contain a main operation, one of: query, mutation, or
subscription. This error occurs when this is missing, because the `DocumentNode`
may be empty or only contain fragments.
···
> operations like write or query, or as part of its resolvers, updaters,
> or optimistic configs.
-
If you're somehow accessing the `Cache` (an instance of `Store`) outside of any
+
If you're somehow accessing the `Cache` (an instance of `Store`) outside any
of the usual operations then this error will be thrown.
Please make sure that you're only calling methods on the `cache` as part of
-
configs that you pass to your `cacheExchange`. Outside of these functions the cache
+
configs that you pass to your `cacheExchange`. Outside these functions the cache
must not be changed.
However, when you're not using the `cacheExchange` and are trying to use the
···
initialised correctly.
This is a safeguard to prevent any asynchronous work from taking place, or to
-
avoid mutating the cache outside of any normal operation.
+
avoid mutating the cache outside any normal operation.
## (3) Invalid Object type
···
Check whether your schema is up-to-date or whether you're using an invalid
field somewhere, maybe due to a typo.
-
As the warning states, this won't lead any operation to abort or an error
+
As the warning states, this won't cause any operation to abort or an error
to be thrown!
## (5) Invalid Abstract type
···
As data is written to the cache, this warning is issued when `undefined` is encountered.
GraphQL results should never contain an `undefined` value, so this warning will let you
-
know which part of your result did contain `undefined`.
+
know the part of your result that did contain `undefined`.
## (14) Couldn't find \_\_typename when writing.
···
> If this is intentional, create a `keys` config for `???` that always returns null.
This error occurs when the cache can't generate a key for an entity. The key
-
would then effectively be `null` and the entity won't be cached by a key.
+
would then effectively be `null`, and the entity won't be cached by a key.
Conceptually this means that an entity won't be normalised but will indeed
be cached by the parent's key and field, which is displayed in the first
···
But if your entity at that place doesn't have any `id` fields, then you may
have to create a custom `keys` config. This `keys` function either needs to
-
return a unique ID for your entity or it needs to explicitly return `null` to silence
+
return a unique ID for your entity, or it needs to explicitly return `null` to silence
this warning.
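For example, a sketch of a `keys` config covering both cases, with a hypothetical `Image` type that has no usable key and a `Todo` type keyed by a `uuid` field:

```js
import { cacheExchange } from '@urql/exchange-graphcache';

cacheExchange({
  keys: {
    // no usable key: embed the entity on its parent and silence the warning
    Image: () => null,
    // keyed by a non-standard field
    Todo: data => data.uuid,
  },
});
```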
## (16) Heuristic Fragment Matching
···
`@populate` directive to fields it first checks whether the type is valid and
exists on the schema.
-
If the field does not have sufficient type information because it doesn't exist
+
If the field does not have enough type information because it doesn't exist
on the schema or does not match expectations then this warning is logged.
Check whether your schema is up-to-date or whether you're using an invalid
···
## (21) Invalid mutation
-
> Invalid mutation field `???` is not in the defined schema but the `updates` option is referencing it.
+
> Invalid mutation field `???` is not in the defined schema, but the `updates` option is referencing it.
When you're passing an introspected schema to the cache exchange, it is
able to check whether your `opts.updates.Mutation` is valid.
···
## (22) Invalid subscription
-
> Invalid subscription field: `???` is not in the defined schema but the `updates` option is referencing it.
+
> Invalid subscription field: `???` is not in the defined schema, but the `updates` option is referencing it.
When you're passing an introspected schema to the cache exchange, it is
able to check whether your `opts.updates.Subscription` is valid.
+9 -9
docs/graphcache/local-resolvers.md
···
- `args`: The arguments that the field is being called with, which will be replaced with an empty
object if the field hasn't been called with any arguments. For example, if the field is queried as
`name(capitalize: true)` then `args` would be `{ capitalize: true }`.
-
- `cache`: Unlike in GraphQL.js this will not be the context but a `cache` instance, which gives us
+
- `cache`: Unlike in GraphQL.js this will not be the context, but a `cache` instance, which gives us
access to methods allowing us to interact with the local cache. Its full API can be found [in the
API docs](../api/graphcache.md#cache).
-
- `info`: This argument shouldn't be used frequently but it contains running information about the
+
- `info`: This argument shouldn't be used frequently, but it contains running information about the
traversal of the query document. It allows us to make resolvers reusable or to retrieve
information about the entire query. Its full API can be found [in the API
docs](../api/graphcache.md#info).
···
We may also run into situations where we'd like to generalise the resolver and not make it dependent
on the exact field it's being attached to. In these cases, the [`info`
object](../api/graphcache.md#info) can be very helpful as it provides us information about the
-
current query traversal and which part of the query document the cache is currently processing. The
-
`info.fieldName` property is one of these properties and lets us know which field the resolver is
-
currently operating on. Hence, we can create a reusable resolver like so:
+
current query traversal, and the part of the query document the cache is processing. The
+
`info.fieldName` property is one of these properties and lets us know the field that the resolver is
+
operating on. Hence, we can create a reusable resolver like so:
```js
const transformToDate = (parent, _args, _cache, info) =>
···
```
The `__typename` field is required. Graphcache will [use its keying
-
logic](./normalized-caching.md#custom-keys-and-non-keyable-entities) and your custom `keys`
+
logic](./normalized-caching.md#custom-keys-and-non-keyable-entities), and your custom `keys`
configuration to generate a key for this entity and will then be able to look this entity up in its
local cache. As with regular queries, the resolver is known to return a link since the `todo(id:
$id) { id }` will be used with a selection set, querying fields on the entity.
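As a sketch, such a resolver may look like this, assuming a `Query.todo` field that accepts an `id` argument:

```js
import { cacheExchange } from '@urql/exchange-graphcache';

cacheExchange({
  resolvers: {
    Query: {
      // resolves `Query.todo(id)` to the cached "Todo" entity with that id
      todo: (_parent, args) => ({ __typename: 'Todo', id: args.id }),
    },
  },
});
```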
···
## Resolving other fields
-
In the above two examples we've seen how a resolver can replace Graphcache's logic which usually
+
In the above two examples we've seen how a resolver can replace Graphcache's logic, which usually
reads links and records only from its locally cached data. We've seen how a field on a record can
use `parent[fieldName]` to access its cached record value and transform it, and how a resolver for a
link can return a partial entity [or a key](#resolving-by-keys).
···
- `entity`: This is the entity on which we'd like to access a field. We may either pass a keyable,
partial entity, e.g. `{ __typename: 'Todo', id: 1 }` or a key. It takes the same inputs as [the
-
`cache.keyOfEntity` method](../api/graphcache.md#keyofentity) which we've seen earlier in the
+
`cache.keyOfEntity` method](../api/graphcache.md#keyofentity), which we've seen earlier in the
["Resolving by keys" section](#resolving-by-keys). It also accepts `null` which causes it to
return `null`, which is useful for chaining multiple `resolve` calls for deeply accessing a field.
- `fieldName`: This is the field's name we'd like to access. If we're looking for the record on
···
### Reading a query
-
At any point, the `cache` allows us to read entirely separate queries in our resolvers which starts
+
At any point, the `cache` allows us to read entirely separate queries in our resolvers, which starts
a separate virtual operation. When we call `cache.readQuery` with a query and
variables, we can execute an entirely new GraphQL query against our cached data:
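As a sketch of what that can look like inside a resolver (the `TodosQuery` document, its variables, and the `todosCount` field are assumptions for this example):

```js
const resolvers = {
  Query: {
    // illustrative local field resolving to the number of cached todos
    todosCount: (_parent, _args, cache) => {
      const data = cache.readQuery({
        query: TodosQuery, // assumed parsed query document
        variables: { from: 0, limit: 10 },
      });
      return data ? data.todos.length : null;
    },
  },
};
```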
+3 -3
docs/graphcache/offline.md
···
# Offline Support
_Graphcache_ allows you to build an offline-first app with built-in offline and persistence support,
-
by means of adding a `storage` interface. In combination with its [Schema
+
by adding a `storage` interface. In combination with its [Schema
Awareness](./schema-awareness.md) support and [Optimistic
Updates](./cache-updates.md#optimistic-updates) this can be used to build an application that
serves cached data entirely from memory when a user's device is offline and still display
···
`offlineExchange`. The `storage` is an adapter that contains methods for storing cache data in a
persisted storage interface on the user's device.
-
By default we can use the default storage option that `@urql/exchange-graphcache` comes with. This
+
Out of the box, we can use the default storage option that `@urql/exchange-graphcache` comes with. This
default storage uses [IndexedDB](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API) to
persist the cache's data. We can use this default storage by importing the `makeDefaultStorage`
function from `@urql/exchange-graphcache/default-storage`.
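A sketch of setting this up, where the `idbName` and `maxAge` values are arbitrary choices:

```js
import { offlineExchange } from '@urql/exchange-graphcache';
import { makeDefaultStorage } from '@urql/exchange-graphcache/default-storage';

const storage = makeDefaultStorage({
  idbName: 'graphcache-v3', // name of the IndexedDB database that is created
  maxAge: 7, // how long cached data is kept around, in days
});

const cache = offlineExchange({ storage });
```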
···
have different strategies for dealing with this.
[The API docs list the entire interface for the `storage` option.](../api/graphcache.md#storage-option)
-
There we can see which methods we need to implement to implement a custom storage engine.
+
There we can see the methods we need to implement for a custom storage engine.
Following is an example of the simplest possible storage engine, which uses the browser's
[Local Storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage).
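A rough sketch of such an adapter, implementing only the `readData` and `writeData` methods and skipping the optional metadata and `onOnline` methods that full offline support would need:

```js
const makeLocalStorage = () => {
  const cache = {};

  return {
    // called once on startup to rehydrate the in-memory cache
    readData: () => {
      const local = localStorage.getItem('graphcache-data');
      Object.assign(cache, JSON.parse(local || '{}'));
      return Promise.resolve(cache);
    },
    // called with a partial "delta" of changed cache entries
    writeData: delta => {
      Object.assign(cache, delta);
      localStorage.setItem('graphcache-data', JSON.stringify(cache));
      return Promise.resolve();
    },
  };
};
```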
+6 -6
docs/graphcache/schema-awareness.md
···
Previously, [on the "Normalized Caching" page](./normalized-caching.md) we've seen how Graphcache
stores normalized data in its store and how it traverses GraphQL documents to do so. What we've seen
-
is that just using the GraphQL document for traversal and the `__typename` introspection field
+
is that, using just the GraphQL document for traversal and the `__typename` introspection field,
Graphcache is able to build a normalized caching structure that keeps our application up-to-date
across API results, allows it to store data by entities and keys, and provides us configuration
options to write [manual cache updates](./cache-updates.md) and [local
···
- Fragments will be matched deterministically: A fragment can be written to be on an interface type
or multiple fragments can be spread for separate union'ed types in a selection set. In many cases,
if Graphcache doesn't have any schema information then it won't know what possible types a field
-
can return and may sometimes make a best guess and [issue a
+
can return and may sometimes make a guess and [issue a
warning](./errors.md#16-heuristic-fragment-matching). If we pass Graphcache a `schema` then it'll
be able to match fragments deterministically.
- A schema may have non-default names for its root types: `Query`, `Mutation`, and `Subscription`.
···
start checking whether any of the configuration options actually don't exist, maybe because we've
typo'd them. This is a small detail, but it can make a large difference in a longer configuration.
- Lastly, a schema contains information on **which fields are optional or required**. When
-
Graphcache has a schema it knows which fields can be made optional and it'll be able to generate
+
Graphcache has a schema, it knows which fields are optional and may be left out, and it'll be able to generate
"partial results".
### Partial Results
As we navigate an app that uses Graphcache we may be in states where some of our data is already
-
cached and some isn't. Graphcache normalizes data and stores it in tables for links and records for
+
cached while some isn't. Graphcache normalizes data and stores it in tables for links and records for
each entity, which means that sometimes it can even execute a query against its cache that it
hasn't sent to the API before.
···
before it sent an API result.](../assets/partial-results.png)
Without a `schema` and information on which fields are optional, Graphcache will consider a "partial
-
result" as a cache miss. If we don't have all of the information for a query then we can't execute
+
result" as a cache miss. If we don't have all the information for a query then we can't execute
it against the locally cached data after all. However, an API's schema contains information on which
-
fields are required and which fields are optional, and if our apps are typed with this schema and
+
fields are required and which are optional, and if our apps are typed with this schema and
TypeScript, can't we then use and handle these partial results before a request is sent to the API?
This is the idea behind "Schema Awareness" and "Partial Results". When Graphcache has `schema`
+1 -1
docs/showcase.md
···
# Showcase
`urql` wouldn't be the same without our growing and loving community of users,
-
maintainers, and supporters. This page is specifically dedicated to all of you!
+
maintainers and supporters. This page is specifically dedicated to all of you!
## Used by folks at