Automate performance monitoring in GraphQL

A quick cheatsheet to integrate Sentry Performance in your Apollo Server

Enrique Fueyo
Building Lang.ai

--

The first version of this article started with some paragraphs about why performance matters, why we should monitor our systems, and other concepts around alerting and optimization. If you are reading this article, you most likely already know all that and just want the piece of code to start monitoring the performance of your GraphQL resolvers. Let’s dive in!

Stack Definition

  • We use Sentry in our systems for error monitoring and recently added their Performance functionality.
  • Lang.ai’s User Interface is a React App using a GraphQL API.
  • We use Apollo to connect the frontend and the GraphQL backend.

Problem

  • Some months ago we started experiencing degradation in a few queries. They were complex (big) queries, and it was tricky to pinpoint which precise part of the request was slowing down the whole response.

Goal

  • We wanted to monitor the performance of each resolver.
  • Ideally, we wanted to monitor the performance of every resolver without instrumenting each one explicitly (we didn’t want to manually add start and stop lines all over our functions’ bodies).

Solution

  • Create a Sentry transaction for each request.

A transaction is created for each request. Transactions are initialized together with the context and attached to it, so the rest of the request can reach them.
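As a sketch (not the article’s exact code), the context factory could start one transaction per request and return it as part of the context. The function name `createSentryContext` and the `op`/`name` values are illustrative, and it assumes the pre-v8 `@sentry/node` API, where `Sentry.startTransaction` exists. The Sentry module is injected as a parameter to keep the sketch easy to test:

```javascript
// Hypothetical sketch: build the GraphQL context and attach one Sentry
// transaction per request. `sentry` is injected; in production you would
// pass the @sentry/node module itself.
function createSentryContext(sentry) {
  return ({ req }) => {
    const transaction = sentry.startTransaction({
      op: "gql",
      // Fall back to a generic name when the operation is anonymous.
      name: (req.body && req.body.operationName) || "GraphQLRequest",
    });
    return { transaction };
  };
}
```

The `context` option of `ApolloServer` would then be `createSentryContext(require("@sentry/node"))`.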

  • Add a span for each resolver

To intercept the life-cycle of each resolver and create and finish a span, we needed to create an Apollo Plugin.
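A minimal sketch of such a plugin, using the Apollo Server 2/3 plugin lifecycle (`requestDidStart` → `executionDidStart` → `willResolveField`, whose returned function runs when the resolver completes). It assumes the transaction was attached to the context as described above; the `op` values and the plugin name are illustrative:

```javascript
// Hypothetical Apollo plugin: wrap every resolver in a Sentry span.
const sentryPlugin = {
  requestDidStart() {
    return {
      executionDidStart() {
        return {
          willResolveField({ context, info }) {
            const { transaction } = context;
            if (!transaction) return undefined;
            const span = transaction.startChild({
              op: "resolver",
              description: `${info.parentType.name}.${info.fieldName}`,
            });
            // Apollo invokes this function when the resolver finishes.
            return () => span.finish();
          },
        };
      },
      willSendResponse({ context }) {
        // Close the request-level transaction once the response is ready.
        if (context.transaction) context.transaction.finish();
      },
    };
  },
};
```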

  • Finally, connect all the pieces at server initialization
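The wiring at start-up might look like the following. This is a hedged sketch, not the article’s exact code: it assumes `@sentry/node` with the pre-v8 `startTransaction` API, an Apollo Server 2/3 constructor, and hypothetical names (`createSentryContext`, `sentryPlugin`, `typeDefs`, `resolvers`) for the pieces described above:

```javascript
// Illustrative wiring only.
const { ApolloServer } = require("apollo-server");
const Sentry = require("@sentry/node");

// Initialize Sentry first; tracesSampleRate: 1.0 records every request.
Sentry.init({ dsn: process.env.SENTRY_DSN, tracesSampleRate: 1.0 });

const server = new ApolloServer({
  typeDefs,                             // your schema
  resolvers,                            // your resolver map
  context: createSentryContext(Sentry), // one transaction per request
  plugins: [sentryPlugin],              // one span per resolver
});

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});
```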

Once your server starts receiving requests, it will send the information for every transaction to your configured Sentry account. You should see something like this:

Transactions list

And you can also see the detail of each individual transaction with its resolvers.

Transaction detail

One last consideration

We could have created the transaction directly in the plugin, inside the requestDidStart function, and omitted any reference to the context. But by making the transaction accessible from the context, each resolver can reach it, and we can create more spans inside the resolvers for finer-grained information.
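For example, a resolver could time just its expensive data-access step with an extra child span. Everything here is hypothetical (the `report` resolver, the `loadReportRows` helper, and the `op` value), assuming the transaction was placed on the context as described above:

```javascript
// Placeholder for whatever expensive work the resolver performs.
async function loadReportRows(id) {
  return [{ id }];
}

const resolvers = {
  Query: {
    async report(parent, args, context) {
      // Open a child span only when a transaction reached the context.
      const span = context.transaction
        ? context.transaction.startChild({ op: "db", description: "loadReportRows" })
        : null;
      try {
        return await loadReportRows(args.id);
      } finally {
        if (span) span.finish();
      }
    },
  },
};
```

The `try`/`finally` ensures the span is closed even when the data access throws, so failed requests still report accurate timings.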

Accessing the transaction from the resolver should also be helpful for Distributed Tracing.

Check the other articles in our Building Lang.ai publication. We write about Machine Learning, Software Development, and our Company Culture.
