
Reference Architecture

Reference for Enterprise Deployment of GraphOS


While you can run the Apollo Router regardless of your Apollo plan, connecting the router to GraphOS requires an Enterprise plan. You can test it out by signing up for a free Enterprise trial.

In this guide, learn about the fundamental concepts and configuration underlying Apollo's reference architecture for enterprise deployment of GraphOS with a self-hosted router. Use this overview as a companion reference to the reference architecture repository.

About Apollo's reference architecture

In a modern cloud-native stack, your components must be scalable with high availability. The Apollo Router is built with this in mind. The router is much faster and less resource-intensive than Apollo Gateway, Apollo's original runtime.

Apollo provides a reference architecture for self-hosting the router and subgraphs in an enterprise cloud environment using Kubernetes and Helm. It demonstrates how to deploy the router and subgraphs in a cloud environment, including the use of several enterprise features of the Apollo Router.

Furthermore, the reference architecture demonstrates how to use the router with OpenTelemetry to collect and analyze performance and utilization metrics for your supergraph, as well as how to test performance using k6.

💡 TIP

Check out Apollo's blog post on its internal use of the router to learn about the performance improvements and resource utilization reductions it achieved.

Getting started

To get started with the reference architecture, follow the README in the reference architecture repository. The README provides a how-to guide that walks you through building a supergraph with the reference architecture.

Architecture overview

The reference architecture uses two Kubernetes clusters, one for a development environment and the other for a production environment. Each cluster has pods for:

  • Hosting the router
  • Hosting subgraphs
  • Hosting a client
  • Collecting traces
  • Load testing with k6 and viewing results with Grafana

For both environments, GraphOS serves as a schema registry. Each subgraph publishes schema updates to the registry via CI/CD, and GraphOS validates and composes them into a supergraph schema. The router regularly polls an endpoint called Apollo Uplink to get the latest supergraph schema and routing configurations from GraphOS.

[Diagram: In your infrastructure, subgraphs publish schema updates to the schema registry in Apollo GraphOS, and the router polls Apollo Uplink for schema and configuration changes.]
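As a sketch of what this looks like in a cluster, the Deployment below runs the router image and supplies the graph API key and graph ref the router uses to authenticate to Uplink. All names here (graph ref, Secret name, image tag) are illustrative, and the reference architecture itself provisions the router via Helm:

```yaml
# Minimal sketch of a router Deployment that polls Apollo Uplink.
# The image tag, graph ref, and Secret name are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: router
spec:
  replicas: 2
  selector:
    matchLabels:
      app: router
  template:
    metadata:
      labels:
        app: router
    spec:
      containers:
        - name: router
          image: ghcr.io/apollographql/router:v1.33.0 # pin a real release
          env:
            - name: APOLLO_GRAPH_REF # which graph variant to serve
              value: my-graph@production
            - name: APOLLO_KEY # graph API key, stored in a Secret
              valueFrom:
                secretKeyRef:
                  name: apollo-credentials
                  key: apollo-key
          ports:
            - containerPort: 4000 # the router's default listen port
```

With those two environment variables set, the router fetches its supergraph schema from Uplink at startup and keeps polling for updates.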

The router also pushes performance and utilization metrics to GraphOS via Uplink so you can monitor your supergraph's usage and performance.

Development environment

The development environment consists of the router and subgraphs hosted in a Kubernetes cluster in either AWS or GCP. GraphOS validates subgraph schema changes using schema checks and makes composed supergraph schemas available to the router via Uplink. The router also reports usage metrics back to GraphOS.

[Diagram: Developers push subgraph changes to GitHub, where GitHub Actions CI/CD runs schema checks and publishes to the GraphOS schema registry/Uplink. In the dev cluster (AWS or GCP), a client and the router are each exposed through an ingress, the router routes requests to the subgraphs, and usage metrics flow back to GraphOS.]

Production environment

The production environment is similar to the development environment with some additions.

  • The router and subgraphs send their OpenTelemetry data to a collector. You can then view the data in Zipkin.
  • A k6 load tester sends traffic to the router and stores load test results in InfluxDB for viewing in Grafana.

[Diagram: The production cluster has the same GitHub, GraphOS, client ingress, and router ingress setup as the development environment, plus an OTel Collector feeding Zipkin with traces from the router and subgraphs, and a K6 Load Tester storing results in InfluxDB for display in Grafana.]
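For the tracing piece, the router's telemetry configuration can export spans over OTLP to the collector, which then forwards them to Zipkin. A minimal sketch, where the key layout follows recent Router 1.x releases and the collector address is illustrative:

```yaml
# Sketch of the telemetry section of a router.yaml. The router sends
# spans over OTLP/gRPC to the OpenTelemetry Collector, which forwards
# them to Zipkin (see the collector config later in this guide).
telemetry:
  exporters:
    tracing:
      common:
        service_name: router
      otlp:
        enabled: true
        endpoint: http://otel-collector:4317
```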

CI/CD

The reference architecture uses GitHub Actions for its CI/CD. These actions include:

  • PR-level schema checks
  • Building containers using Docker
  • Publishing subgraph schemas to Apollo Uplink
  • Deployments for:
    • Subgraphs
    • Client
    • OpenTelemetry Collector
    • Grafana
  • Running load tests using k6

Development actions

When a PR is submitted to one of the subgraphs, GitHub Actions runs GraphOS schema checks to validate the proposed schema changes.

When the PR is merged, GitHub Actions publishes schema updates to Uplink, and GraphOS validates them using schema checks before making them available to the router. Additionally, the subgraph service is deployed.

[Diagram: Submitting a PR triggers a schema checks job against GraphOS. Merging the PR triggers a job that builds and deploys the subgraph to the dev cluster and publishes its schema, which GraphOS checks and makes available to the router via Uplink.]
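The following is a minimal sketch of such a subgraph workflow using Rover, Apollo's CLI. The graph ref (my-graph@dev), subgraph name (products), schema path, and routing URL are all placeholders:

```yaml
# Sketch of a subgraph CI workflow: schema checks on PRs, publish on merge.
name: subgraph-ci
on:
  pull_request:
  push:
    branches: [main]
env:
  APOLLO_KEY: ${{ secrets.APOLLO_KEY }} # authenticates Rover to GraphOS
jobs:
  schema-check:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Rover
        run: |
          curl -sSL https://rover.apollo.dev/nix/latest | sh
          echo "$HOME/.rover/bin" >> "$GITHUB_PATH"
      - name: Run GraphOS schema checks
        run: rover subgraph check my-graph@dev --name products --schema ./products/schema.graphql
  publish:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Rover
        run: |
          curl -sSL https://rover.apollo.dev/nix/latest | sh
          echo "$HOME/.rover/bin" >> "$GITHUB_PATH"
      - name: Publish subgraph schema to GraphOS
        run: |
          rover subgraph publish my-graph@dev \
            --name products \
            --schema ./products/schema.graphql \
            --routing-url http://products.dev.svc.cluster.local:4001
      # ...build the container with Docker and deploy it to the cluster
```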

Production deploy

When you manually trigger a production deployment, GitHub Actions publishes schema updates to Uplink and GraphOS validates them using schema checks before making them available to the router. Additionally, the subgraph service is deployed.

[Diagram: Triggering a production deployment runs a job that builds and deploys the subgraph to the production cluster and publishes its schema, which GraphOS checks and makes available to the router via Uplink.]
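A sketch of the manual trigger, with a hypothetical input and the job body elided (the build, deploy, and publish steps mirror the development workflow above, pointed at the production graph ref):

```yaml
# Sketch: production deploys run only on manual dispatch.
name: deploy-production
on:
  workflow_dispatch:
    inputs:
      subgraph:
        description: Subgraph to deploy
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...build and deploy ${{ inputs.subgraph }}, then publish its
      # schema with `rover subgraph publish my-graph@prod ...`
```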

Deploy router

When you manually trigger a router deployment, GitHub Actions deploys the router to the Kubernetes cluster.

[Diagram: Triggering a router deployment runs a job that deploys the router to the dev or production cluster.]
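As one possible shape for that job, the step below uses Apollo's published router Helm chart. The release name, namespace, and graph ref are placeholders, and the chart's value names can vary by chart version:

```yaml
# Sketch of a deploy step using the router Helm chart.
- name: Deploy router with Helm
  run: |
    helm upgrade router oci://ghcr.io/apollographql/helm-charts/router \
      --install \
      --namespace router --create-namespace \
      --set managedFederation.graphRef="my-graph@prod" \
      --set managedFederation.apiKey="$APOLLO_KEY"
  env:
    APOLLO_KEY: ${{ secrets.APOLLO_KEY }}
```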

Deploy OpenTelemetry collector

When you manually trigger an OpenTelemetry deployment, GitHub Actions deploys the OpenTelemetry Collector and Zipkin to the Kubernetes cluster.

[Diagram: Triggering an OpenTelemetry deployment runs a job that deploys the OTel Collector and Zipkin to the production cluster.]
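A minimal sketch of a Collector configuration for this pipeline, accepting OTLP traffic from the router and subgraphs and exporting traces to Zipkin (the Zipkin address is illustrative):

```yaml
# Sketch of an OpenTelemetry Collector config: OTLP in, Zipkin out.
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  zipkin:
    endpoint: http://zipkin:9411/api/v2/spans
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
```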

Deploy load test infrastructure

When you manually trigger a load test infrastructure deployment, GitHub Actions deploys the K6 Load Tester, Grafana, the load tests, and InfluxDB to the Kubernetes cluster.

[Diagram: Triggering a load test infrastructure deployment runs a job that deploys the K6 Load Tester, the Load Tests, Grafana, and InfluxDB to the production cluster.]

Run load tests

When you manually trigger a load test run, GitHub Actions triggers the K6 Load Tester to pull the Load Tests from the environment, run the tests against the router, and store the results in InfluxDB.

[Diagram: Triggering a load test run starts a job in which the K6 Load Tester pulls the Load Tests, runs them against the router and subgraphs, and stores the results in InfluxDB.]
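One way to express such a run is a Kubernetes Job. This sketch assumes the test scripts are mounted from a hypothetical ConfigMap and uses k6's built-in InfluxDB v1 output; the image tag and URLs are placeholders:

```yaml
# Sketch of a Job that runs a k6 load test against the router and
# writes results to InfluxDB for Grafana dashboards.
apiVersion: batch/v1
kind: Job
metadata:
  name: k6-load-test
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: k6
          image: grafana/k6:latest
          args:
            - run
            - --out
            - influxdb=http://influxdb:8086/k6 # InfluxDB v1 output
            - /scripts/load-test.js
          volumeMounts:
            - name: scripts
              mountPath: /scripts
      volumes:
        - name: scripts
          configMap:
            name: k6-load-tests # hypothetical ConfigMap of test scripts
```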

