Fragmented Thought

GraphQL file upload mutations

By Lance Gliser


Getting file uploads into our stack has been a joy. I wanted to break down our approach in the hopes it saves someone a bit of heartache. Here are the tools we'll be using:

- Apollo Server (apollo-server-express)
- graphql-upload
- @graphql-tools for schema loading and stitching
- GraphQL Code Generator from The Guild
- supertest for integration testing

For the full example of my GraphQL server setup, see the example project on GitHub.

GraphQL

I'll omit a great deal, as the focus is only on uploads. Your implementation should define its own file and ID handling. Mine is based on ArangoDB, but that's outside this article's scope.

In this example I've removed my typical GraphQL namespacing, which would nest queries and mutations as the example below shows. Someday I'll get around to an entry on that...

```graphql
mutation AddFile($file: Upload!) {
  file {
    add(file: $file) {
      _key
      displayName
    }
  }
}
```

For a basic upload, we're looking at something similar to this:

```graphql
scalar Upload

type Mutation

extend type Mutation {
  "Uploads a file to a cloud provider and returns file object"
  addFile(file: Upload!): File
}

type File implements IDocument & ICreated & IDisplayName {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
  "ISO date time string for the time this resource was created"
  createdAt: String
  "Unique identifier for user that created this resource"
  createdBy: ID
  "The original preformatted name safe to display in any HTML context"
  displayName: String
  "A renderable source safe to display in HTML context"
  displayImageUrl: String
  "The public url of the file"
  publicUrl: String
  "The storage provider for this file"
  storageProvider: CloudProvider
  "The storage location or file path to this file"
  bucket: String
  "The actual file name as used in storage"
  fileName: String
  "The file size in bytes"
  size: Int
  mimetype: String
  encoding: String
}

enum CloudProvider {
  GoogleCloudPlatform
  AmazonWebServices
}

interface IDocument {
  "Unique identifier for the resource across all collections"
  _id: ID
  "Unique identifier for the resource within its collection"
  _key: ID
  "Unique identifier for revision"
  _rev: String
}

interface ICreated {
  "ISO date time string for the time this resource was created"
  createdAt: String
  "Unique identifier for users that created this resource"
  createdBy: ID
}

interface IDisplayName {
  "A preformatted display name safe to display in HTML context"
  displayName: String
}
```

Resolvers

I typically split my code into logical components. At the top level I use an index file like this to gather all the resolvers and GraphQL schemas.

src/components/index.ts

```typescript
import { GraphQLFileLoader } from "@graphql-tools/graphql-file-loader";
import { loadSchema } from "@graphql-tools/load";
import { IResolvers } from "@graphql-tools/utils/Interfaces";
import { GraphQLSchema } from "graphql";
import { addResolversToSchema } from "@graphql-tools/schema";
import { addResolvers as addFileResolvers } from "./files/files.resolvers";
import { GraphQLUpload } from "graphql-upload";

export const getSchemaWithResolvers = async (): Promise<GraphQLSchema> => {
  // Create a schema based on all `.graphql` files in src/
  const schema = await loadSchema("./src/**/*.graphql", {
    // Load from multiple files using glob
    loaders: [new GraphQLFileLoader()],
  });

  // Prepare resolvers by adding any static packages like Upload or JSON
  const resolvers: IResolvers = {
    Query: {},
    Mutation: {},
    Upload: GraphQLUpload,
  };

  // Augment the static resolvers with any component resolvers we've written
  addFileResolvers(resolvers);

  // Return the whole package back to the application layer so Apollo can run
  return addResolversToSchema({
    schema,
    resolvers,
  });
};
```

src/components/files/files.resolvers.ts

The component resolvers can then focus on adding just their own logic, extending the Query and Mutation objects as required. I use the same approach for field-specific resolvers, as you can see commented out.

```typescript
import {
  FileResolvers,
  MutationResolvers,
  Resolvers,
} from "../../generated/types";
import { addFile } from "./files.utils";

export const addResolvers = (resolvers: Resolvers): Resolvers => {
  // resolvers.Query = { ...resolvers.Query, ...query };
  resolvers.Mutation = { ...resolvers.Mutation, ...mutation };
  // resolvers.File = fileResolvers;
  return resolvers;
};

const mutation: MutationResolvers = {
  addFile: (parent, args, context) => addFile(args, context),
};

// const fileResolvers: FileResolvers = {
//   publicUrl: (parent, args, context) => {
//     if (parent.publicUrl) {
//       return parent.publicUrl;
//     }
//     // TODO if requested, make the file public and return the url
//   },
// };
```

Where did MutationResolvers come from?

The autocomplete types generated by GraphQL codegen are just so tasty it's unbelievable. Implementation time and error counts dropped significantly once we started using them. Huge thanks to The Guild's code generator. 🚀
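For context, the generator runs off a config file. Here's a minimal sketch of what such a config might look like; the file paths and the `contextType` value are illustrative assumptions, not my exact setup:

```typescript
import type { CodegenConfig } from "@graphql-codegen/cli";

// Hypothetical codegen config: paths and plugin options are illustrative
const config: CodegenConfig = {
  // Read the same .graphql files the server loads
  schema: "./src/**/*.graphql",
  generates: {
    "./src/generated/types.ts": {
      plugins: ["typescript", "typescript-resolvers"],
      config: {
        // Type every resolver's third argument with our own context
        contextType: "../components/context#GraphQLContext",
      },
    },
  },
};

export default config;
```

The `typescript-resolvers` plugin is what produces types like `MutationResolvers`, and `contextType` is how your own context ends up typed on every resolver.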

To say "it's complicated" would be an understatement. Here's a sample of what gets generated:

```typescript
// ContextType is my own definition passed through to the generator
export type MutationResolvers<
  ContextType = GraphQLContext,
  ParentType extends ResolversParentTypes["Mutation"] = ResolversParentTypes["Mutation"]
> = {
  /** Uploads a file to a cloud provider and returns file object */
  addFile?: Resolver<
    Maybe<ResolversTypes["File"]>,
    ParentType,
    ContextType,
    RequireFields<MutationsAddFileArgs, "file">
  >;
};

export type ResolverTypeWrapper<T> = Promise<T> | T;

export type File = ICreated &
  IDisplayName &
  IDocument & {
    __typename?: "File";
    /** Unique identifier for the resource across all collections */
    _id?: Maybe<Scalars["ID"]>;
    /** Unique identifier for the resource within its collection */
    _key?: Maybe<Scalars["ID"]>;
    /** Unique identifier for revision */
    _rev?: Maybe<Scalars["String"]>;
    /** The storage location or file path to this file */
    bucket?: Maybe<Scalars["String"]>;
    /** ISO date time string for the time this resource was created */
    createdAt?: Maybe<Scalars["String"]>;
    /** Unique identifier for user that created this resource */
    createdBy?: Maybe<Scalars["ID"]>;
    /** A renderable source safe to display in HTML context */
    displayImageUrl?: Maybe<Scalars["String"]>;
    // ...
  };
```

The upshot of all their generated work is perfectly translated GraphQL objects (with interfaces!) and static types for your resolvers. Many of my resolvers are written to take advantage of all that typing:

```typescript
const fileResolvers: FileResolvers = {
  someField: ({ id: fileId }, { limit, offset, ...args }, { user }) => {
    // ... some implementation
  },
};
```

If you're not salivating yet, I'll throw one more treat on the pile. We use the same code generation on the React side to generate our queries and mutations, giving us automatic state management and typings:

```typescript
const [addFile, { loading, error, data }] = useAddFileMutation({});

const onClick = async () => {
  await addFile({
    variables: {
      file: null, // your state variable
    },
  });
};
```
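The remaining client-side piece is handing the mutation an actual browser `File` from an `<input type="file">`. A minimal sketch of that glue, assuming an upload-capable Apollo link such as apollo-upload-client is configured (the helper and its types are my own illustration, not part of the generated code):

```typescript
// Hypothetical glue between a file input's change event and the generated
// hook. `addFile` stands in for the mutate function from useAddFileMutation.
// Generic over the file type so the sketch stays independent of DOM typings.
export const handleFileInputChange = <F>(
  addFile: (options: { variables: { file: F } }) => Promise<unknown>,
  files: ArrayLike<F> | null
): Promise<unknown> | undefined => {
  const file = files?.[0];
  if (!file) return undefined;
  // The upload link serializes the File into the multipart request body
  return addFile({ variables: { file } });
};
```

In a component you'd call this from the input's `onChange` with `event.target.files`.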

Utilities

Apollo Server will have handled most of the requirements up to now, based on just the GraphQL spec and the resolvers we added. The custom bits happen in our utilities. Your own file storage needs are going to be different, so I'll show just the basics: getting the file somewhere you could store it, and returning a result.

src/components/files/files.utils.ts

```typescript
import { ApolloError } from "apollo-server-express";
import { GraphQLContext } from "../context";
import fs from "fs";
import path from "path";
import {
  CloudProvider,
  File,
  MutationsAddFileArgs,
} from "../../generated/types";

const defaultProvider: CloudProvider = CloudProvider.AmazonWebServices;
const bucket = "uploads";

export const addFile = async (
  // { file: Upload } as defined by graphql-upload
  args: MutationsAddFileArgs,
  // Logging, repositories, and user details
  context: GraphQLContext
): Promise<File> => {
  // Wait for the capacitor upload to complete
  const file = await args.file;

  // When a file is uploaded to the API, the extension is .tmp.
  // Before we can move the file along we need to restore the previous name.
  const sourcePathComponents: string[] = file.capacitor._path
    .split(path.sep)
    .slice(0, -1);
  const targetPath = [...sourcePathComponents, file.filename].join(path.sep);
  fs.renameSync(file.capacitor._path, targetPath);

  // TODO your own storage logic on the file at targetPath

  return {
    bucket,
    displayName: file.filename,
    encoding: file.encoding,
    fileName: file.filename,
    mimetype: file.mimetype,
    size: file.capacitor.size,
    storageProvider: defaultProvider,
  };
};
```
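For what it's worth, you can avoid reaching into graphql-upload's private capacitor fields by using its public `createReadStream` API. A sketch of saving an upload that way; the helper name and target directory are mine, and the `FileUpload` shape mirrors what graphql-upload resolves to:

```typescript
import { createWriteStream } from "fs";
import { pipeline } from "stream/promises";
import path from "path";

// The shape graphql-upload's Upload scalar resolves to
interface FileUpload {
  filename: string;
  mimetype: string;
  encoding: string;
  createReadStream: () => NodeJS.ReadableStream;
}

// Hypothetical helper: stream an upload to disk using only the public API
export const saveUploadToDisk = async (
  upload: Promise<FileUpload>,
  directory: string
): Promise<string> => {
  const { filename, createReadStream } = await upload;
  const targetPath = path.join(directory, filename);
  // Stream the multipart body straight to disk without buffering it all
  await pipeline(createReadStream(), createWriteStream(targetPath));
  return targetPath;
};
```

This sidesteps the `.tmp` renaming dance entirely, at the cost of one extra copy of the bytes.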

Testing

This example will draw slightly on my previous entry about using context functions to share state between tests.

I've offloaded much of the describe and it logic, as we're only interested in how to take a static file and use supertest to run an integration test from all the way outside the app, through the resolvers and database, and back.

src/components/files/files.test.ts

```typescript
import path from "path";
import { print } from "graphql";
import { gql } from "apollo-server-express";
import supertest from "supertest";
import { contentType } from "mime-types";
import { File, MutationsAddFileArgs } from "../../generated/types";
// withResolverContext, ResolverContext, GRAPHQL_URI, and
// expectGraphQLSuccessResponse come from the shared test helpers
// covered in my previous entry on context functions.

describe("files", () => {
  describe("resolvers", () => {
    it("should add file", async () => {
      await withFileContext(async ({ file }) => {
        expect(file.storageProvider).toBeTruthy();
        expect(file.bucket).toBeTruthy();
        expect(file.fileName).toBeTruthy();
      });
    });
  });
});

const fileName = "static.text";
const filePath = path.resolve(`./tests/${fileName}`);
let file: File | undefined;

type TWithFileContext = (
  context: ResolverContext & {
    file: File;
  }
) => Promise<void>;

const withFileContext = async (fn: TWithFileContext) => {
  await withResolverContext(async (context) => {
    if (!file) {
      // In this case the "file" needs a full path to upload
      file = await resolverCreateFile(context, { file: filePath });
      expect(file.fileName).toBe(fileName);
    }
    await fn({ ...context, file });
  });
};

export const resolverCreateFile = async (
  // Application, authentication token
  context: ResolverContext,
  variables: MutationsAddFileArgs
): Promise<File> => {
  // Use `gql` to create an AST your IDE can validate against your schemas,
  // then use `print` to turn that AST back into the required query string
  const query = print(gql`
    mutation AddFile($file: Upload!) {
      addFile(file: $file) {
        _id
        displayName
        storageProvider
        bucket
        fileName
      }
    }
  `);

  // We need to produce a multipart request that's distinct from the standard
  // GraphQL request. The -------#### boundaries are generated by the form post.
  // The "operations" field defines the things we'll be running.
  // The "map" field defines what to substitute in during execution.

  // Start a request; we're going to need to attach to it
  const request = supertest(context.application)
    .post(GRAPHQL_URI)
    .set("authorization", context.authorization)
    // Goal: {"operationName":"AddFile","variables":{"file":null},"query":"mutation AddFile($file: ...
    .field(
      "operations",
      JSON.stringify({
        operationName: "AddFile",
        variables: { file: null }, // This null value is substituted below
        query,
      })
    )
    // Goal: {"1":["variables.file"]}
    .field(
      "map",
      JSON.stringify({
        1: Object.keys(variables).map((key) => `variables.${key}`),
      })
    );

  // Attach the files and fields
  Object.values(variables).forEach((value, i) => {
    if (contentType(path.extname(value))) {
      request.attach(`${i + 1}`, value);
    } else {
      request.field(`${i + 1}`, value);
    }
  });

  // Send the request
  const response = await request;
  expectGraphQLSuccessResponse(response);
  return response.body.data.addFile;
};
```
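The `operations`/`map` convention above comes from the GraphQL multipart request spec, and the shape is easy to get subtly wrong. Here's a small sketch of building those two fields for an arbitrary set of upload variables; the helper is my own illustration, not part of supertest or Apollo:

```typescript
// Build the "operations" and "map" multipart fields described by the
// GraphQL multipart request spec. Hypothetical helper for illustration.
interface MultipartFields {
  operations: string;
  map: string;
}

export const buildMultipartFields = (
  operationName: string,
  query: string,
  variables: Record<string, unknown>
): MultipartFields => {
  // Every upload variable is sent as null inside "operations"...
  const nulledVariables = Object.fromEntries(
    Object.keys(variables).map((key) => [key, null])
  );
  // ...and "map" tells the server which attached part fills which variable
  const map = Object.fromEntries(
    Object.keys(variables).map((key, i) => [`${i + 1}`, [`variables.${key}`]])
  );
  return {
    operations: JSON.stringify({
      operationName,
      query,
      variables: nulledVariables,
    }),
    map: JSON.stringify(map),
  };
};
```

For a single `file` variable this produces a `map` of `{"1":["variables.file"]}`, matching the Goal comments in the test above.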

Hopefully this helps someone out there. If you liked it or have a question, please send me a tweet.