Given a GraphQL schema that contains data like the following
type Person {
    name: String!
    age: Int!
    friends(filter: FriendsFilter): [Person!]!
    hobbies(filter: HobbiesFilter): [Hobby!]!
}
I can create a schema mapping in my controller which looks like the following
@SchemaMapping
public List<Person> friends(
        @Argument FriendsFilter filter,
        Person person) {
    // Fetch and return friends
}
However, this runs us into the N+1 problem. So to solve that, we need to batch. What I would expect to be able to do is modify my code to the following
@BatchMapping
public Map<Person, List<Person>> friends(
        @Argument FriendsFilter filter,
        List<Person> people) {
    // Fetch and return friends in bulk
}
I have found that Spring for GraphQL does not support this. While built-in support would be ideal, I'm willing to work around it, but all the other answers I'm finding lose the type information and register a batch loader for the type pair Person.class, List.class, which is insufficient because I have two fields that both return lists. What exactly is the simplest and most correct way forward here? I have to solve the N+1 problem and I have to preserve the filtering functionality of my API.
I've read through the closed issues asking for this feature and I still haven't found the answer I'm looking for. I could really use some help finding the right approach for this case, where a filter is required and we can't sacrifice the generic type of the returned List.
1 Answer
The batching feature in graphql-java and Spring for GraphQL is not "just" a way to work around the N+1 issue for a single data fetcher. It is a more general mechanism for loading elements in batches and caching their resolution for the lifetime of the GraphQL request.
More specifically, the DataLoader API is a contract for loading objects given a key (usually their id). DataLoader#load(...) calls can be invoked from different parts of the query, which may have different arguments and different selection sets. Futures are kept around until their resolution is triggered with DataLoader#dispatch().
Batch loading functions do have access to the BatchLoaderEnvironment, which contains the main GraphQLContext and key contexts (but this is outside the scope of this question). @BatchMapping methods are merely shortcuts to registering a batch loading function and using it in a data fetcher.
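To make that shortcut concrete, here is a minimal sketch of a @BatchMapping method, assuming for a moment a friends field with no arguments and a hypothetical friendService (neither is part of this answer's working example). Spring for GraphQL registers the batch loading function and the data fetcher for you, and there is no place to declare an @Argument on such a method:
@BatchMapping
public Map<Person, List<Person>> friends(List<Person> people) {
    // one bulk call covering all Person parents of this field in the current request
    // (friendService is a hypothetical collaborator used only for illustration)
    return friendService.loadFriendsFor(people);
}
Because the same batch loading function can serve DataLoader#load(...) calls made from different parts of the query with different field arguments, there is no single filter value the framework could bind here.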
For your case, I would say that there are two possible approaches: fetching then filtering, or doing a tailored fetch.
Let's use the following schema for this example:
type Query {
    me: Person
    people: [Person]
}

input FriendsFilter {
    favoriteBeverage: String
}

type Person {
    id: ID!
    name: String
    favoriteBeverage: String
    friends(filter: FriendsFilter): [Person]
}
Fetching then filtering
One approach would be to fetch all friends for a given person, possibly caching their values for the entire lifetime of the GraphQL request.
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

import org.dataloader.DataLoader;
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.graphql.data.method.annotation.SchemaMapping;
import org.springframework.graphql.execution.BatchLoaderRegistry;
import org.springframework.stereotype.Controller;

import reactor.core.publisher.Mono;

@Controller
public class FriendsController {

    private final Map<Integer, Person> people = Map.of(
            1, new Person(1, "Rossen", "coffee", List.of(2, 3)),
            2, new Person(2, "Brian", "tea", List.of(1, 3)),
            3, new Person(3, "Donna", "tea", List.of(1, 2, 4)),
            4, new Person(4, "Brad", "coffee", List.of(1, 2, 3, 5)),
            5, new Person(5, "Andi", "coffee", List.of(1, 2, 3, 4))
    );

    public FriendsController(BatchLoaderRegistry registry) {
        registry.forTypePair(Integer.class, Person.class).registerMappedBatchLoader((personIds, env) -> {
            // fetch all friends and do not apply the filter, caching each Person by id
            Map<Integer, Person> friends = new HashMap<>();
            personIds.forEach(personId -> friends.put(personId, people.get(personId)));
            return Mono.just(friends);
        });
    }

    @QueryMapping
    public Person me() {
        return this.people.get(2);
    }

    @QueryMapping
    public Collection<Person> people() {
        return this.people.values();
    }

    @SchemaMapping
    public CompletableFuture<List<Person>> friends(Person person, @Argument FriendsFilter filter, DataLoader<Integer, Person> dataLoader) {
        // load all friends THEN apply the given filter
        return dataLoader
                .loadMany(person.friendsId())
                .thenApply(filter::apply);
    }

    public record Person(Integer id, String name, String favoriteBeverage, List<Integer> friendsId) {
    }

    public record FriendsFilter(String favoriteBeverage) {
        List<Person> apply(List<Person> friends) {
            return friends.stream()
                    .filter(person -> person.favoriteBeverage().equals(this.favoriteBeverage))
                    .collect(Collectors.toList());
        }
    }
}
In practice, this request:
query {
    me {
        name
        friends(filter: {favoriteBeverage: "tea"}) {
            name
            favoriteBeverage
        }
    }
    people {
        name
        friends(filter: {favoriteBeverage: "coffee"}) {
            name
            favoriteBeverage
        }
    }
}
Will yield:
{
  "data": {
    "me": {
      "name": "Brian",
      "friends": [
        {
          "name": "Donna",
          "favoriteBeverage": "tea"
        }
      ]
    },
    "people": [
      {
        "name": "Andi",
        "friends": [
          {
            "name": "Rossen",
            "favoriteBeverage": "coffee"
          },
          {
            "name": "Brad",
            "favoriteBeverage": "coffee"
          }
        ]
      },
      {
        "name": "Brad",
        "friends": [
          {
            "name": "Rossen",
            "favoriteBeverage": "coffee"
          },
          {
            "name": "Andi",
            "favoriteBeverage": "coffee"
          }
        ]
      },
      {
        "name": "Donna",
        "friends": [
          {
            "name": "Rossen",
            "favoriteBeverage": "coffee"
          },
          {
            "name": "Brad",
            "favoriteBeverage": "coffee"
          }
        ]
      },
      {
        "name": "Brian",
        "friends": [
          {
            "name": "Rossen",
            "favoriteBeverage": "coffee"
          }
        ]
      },
      {
        "name": "Rossen",
        "friends": []
      }
    ]
  }
}
Note: two different parts of the query fetch friends with different filters here, but both go through the same batch loading function.
- Pros: Person values are well shared in the DataLoader cache, meaning you will fetch more values but perform fewer I/O operations.
- Cons: If people have lots of friends and filtering operations are costly, the server will consume more memory/CPU instead of delegating that work to the data store.
Tailored fetch
Let's try to fetch only the values we need.
@Controller
public class FriendsController {

    private final Map<Integer, Person> people = Map.of(
            1, new Person(1, "Rossen", "coffee", List.of(2, 3)),
            2, new Person(2, "Brian", "tea", List.of(1, 3)),
            3, new Person(3, "Donna", "tea", List.of(1, 2, 4)),
            4, new Person(4, "Brad", "coffee", List.of(1, 2, 3, 5)),
            5, new Person(5, "Andi", "coffee", List.of(1, 2, 3, 4))
    );

    public FriendsController(BatchLoaderRegistry registry) {
        // we're now using a composed key
        registry.forTypePair(FriendFilterKey.class, Person[].class).registerMappedBatchLoader((keys, env) -> {
            // perform efficient fetching by delegating the filter operation to the data store
            Map<FriendFilterKey, Person[]> result = new HashMap<>();
            keys.forEach(key -> {
                Person[] friends = key.person().friendsId().stream()
                        .map(people::get)
                        .filter(friend -> key.friendsFilter().matches(friend))
                        .toArray(Person[]::new);
                result.put(key, friends);
            });
            return Mono.just(result);
        });
    }

    @QueryMapping
    public Person me() {
        return this.people.get(2);
    }

    @QueryMapping
    public Collection<Person> people() {
        return this.people.values();
    }

    @SchemaMapping
    public CompletableFuture<Person[]> friends(Person person, @Argument FriendsFilter filter, DataLoader<FriendFilterKey, Person[]> dataLoader) {
        return dataLoader.load(new FriendFilterKey(person, filter));
    }

    public record Person(Integer id, String name, String favoriteBeverage, List<Integer> friendsId) {
    }

    public record FriendsFilter(String favoriteBeverage) {
        boolean matches(Person friend) {
            return friend.favoriteBeverage().equals(this.favoriteBeverage);
        }
    }

    // because this key contains both the person and the filter, we will need to fetch the same friend multiple times
    public record FriendFilterKey(Person person, FriendsFilter friendsFilter) {
    }
}
- Pros: We only fetch the friends we need for a given person, delegating the memory/filter operations to the data store.
- Cons: We're performing one call per list of friends and the DataLoader cache usage is not optimal.
Note: we can't use a per-friend key like record FriendFilterKey(Integer personId, FriendsFilter friendsFilter) {} here and load each friend individually. If we did, the batch loading function would return null for filtered-out friends, leading to null entries in the response (a sketch of that rejected variant follows the example below):
"me": {
"name": "Brian",
"friends": [
{
"name": "Donna",
"favoriteBeverage": "tea"
},
null // Rossen is filtered out
]
},
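For illustration only, here is a rough sketch of that rejected per-friend variant, reusing the Person, FriendsFilter, and people map from the "Tailored fetch" controller above; the FriendKey record and the per-friend loadMany call are hypothetical and not part of this answer's working code:
public record FriendKey(Integer friendId, FriendsFilter filter) {
}

public FriendsController(BatchLoaderRegistry registry) {
    registry.forTypePair(FriendKey.class, Person.class).registerMappedBatchLoader((keys, env) -> {
        Map<FriendKey, Person> result = new HashMap<>();
        keys.forEach(key -> {
            Person friend = people.get(key.friendId());
            // a filtered-out friend gets no entry in the returned map,
            // so its DataLoader future completes with null
            if (key.filter().matches(friend)) {
                result.put(key, friend);
            }
        });
        return Mono.just(result);
    });
}

@SchemaMapping
public CompletableFuture<List<Person>> friends(Person person, @Argument FriendsFilter filter,
        DataLoader<FriendKey, Person> dataLoader) {
    // loadMany returns one entry per requested key, including null for the missing ones
    return dataLoader.loadMany(person.friendsId().stream()
            .map(friendId -> new FriendKey(friendId, filter))
            .toList());
}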
Let's follow up on the issue you've created. We'll work there on some documentation improvements.