The Graph
Update - Osmosis indexing is back to normal
Mar 25, 2023 - 16:45 UTC
Monitoring - A fix has been implemented and we are monitoring the results.
Mar 25, 2023 - 08:01 UTC
Update - We are continuing to investigate this issue.
Mar 23, 2023 - 12:23 UTC
Investigating - We are currently investigating this issue.
Mar 17, 2023 - 05:01 UTC

About This Site

Status, incident, and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
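For a lightweight alternative to Zapier, the feed can be polled directly and forwarded to a Slack incoming webhook. A minimal sketch follows; the feed URL and the webhook URL are assumptions (Statuspage-hosted sites typically expose a history feed, but check the status page and your Slack app settings for the real values):

```python
# Sketch: poll a status RSS feed and forward each item to a Slack
# incoming webhook. FEED_URL and SLACK_WEBHOOK below are assumed
# placeholders, not confirmed endpoints.
import json
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.thegraph.com/history.rss"  # assumed feed location
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def parse_feed_items(xml_text: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        }
        for item in root.iter("item")
    ]

def post_to_slack(item: dict) -> None:
    """Send one feed item to Slack as a plain text message."""
    payload = json.dumps({"text": f"{item['title']} {item['link']}"}).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    with urllib.request.urlopen(FEED_URL) as resp:
        for entry in parse_feed_items(resp.read().decode()):
            post_to_slack(entry)
```

In practice you would also persist the last-seen item (e.g. its link) between runs so only new updates are forwarded; Discord accepts the same webhook pattern with a `content` field instead of `text`.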

Component status (uptime over the past 90 days):

Network - Gateway: Operational, 100.0% uptime
Network - Explorer: Operational, 100.0% uptime
Network - Subgraph Studio: Operational, 100.0% uptime
Network - Network Subgraph: Operational, 100.0% uptime
Hosted Service - Queries: Operational, 99.94% uptime
Hosted Service - Subgraph Health: Operational, 100.0% uptime
Hosted Service - Indexing: Major Outage, 98.93% uptime
Mainnet: Operational, 99.97% uptime
Kovan: Operational, 100.0% uptime
Rinkeby: Operational, 100.0% uptime
Ropsten: Operational, 100.0% uptime
Goerli: Operational, 100.0% uptime
POA Core: Operational, 100.0% uptime
POA Sokol: Operational, 100.0% uptime
xDai: Operational, 99.84% uptime
Polygon (Matic): Operational, 100.0% uptime
Mumbai: Operational, 100.0% uptime
Fantom: Operational, 100.0% uptime
BSC: Operational, 99.65% uptime
BSC Chapel: Operational, 100.0% uptime
Clover: Operational, 99.53% uptime
Avalanche: Operational, 99.88% uptime
Avalanche Fuji: Operational, 100.0% uptime
Celo: Operational, 99.97% uptime
Celo Alfajores: Operational, 99.67% uptime
Fuse: Operational, 99.85% uptime
Moonbeam: Operational, 99.97% uptime
Arbitrum One: Operational, 99.98% uptime
Arbitrum Testnet (on Rinkeby): Operational, 100.0% uptime
Optimism: Operational, 100.0% uptime
Optimism Testnet (on Kovan): Operational, 100.0% uptime
Moonriver: Operational, 99.99% uptime
NEAR: Operational
Aurora: Operational, 100.0% uptime
Aurora Testnet: Operational, 100.0% uptime
Osmosis: Operational, 75.5% uptime
Arweave: Operational, 100.0% uptime
Cosmos: Major Outage, 69.9% uptime
Hosted Service - Miscellaneous: Operational, 99.98% uptime
Deployments: Operational, 99.93% uptime
IPFS: Operational, 100.0% uptime
Subgraph Logs: Operational, 100.0% uptime
Explorer: Operational, 100.0% uptime
thegraph.com: Operational, 100.0% uptime
System metrics (live charts on the status page): Network: Gateway (Response Time); Network Subgraph (Test Query); Network: Explorer (Response Time)
Past Incidents
Mar 26, 2023
Resolved - This incident has been resolved.
Mar 26, 11:58 UTC
Investigating - We are currently investigating this issue.
Mar 26, 11:18 UTC
Mar 25, 2023

Unresolved incident: Cosmos/Osmosis chain stuck.

Mar 24, 2023
Resolved - This incident has been resolved.
Mar 24, 20:28 UTC
Update - [Comment from Opsgenie] lutter acknowledged alert: "[FIRING:1] Hosted Service - Queries: too many failed requests Production (partial_outage high 9rM_ehMVz)"
Mar 24, 19:12 UTC
Investigating - We are currently investigating this issue.
Mar 24, 19:08 UTC
Mar 23, 2023
Resolved - This incident has been resolved.
Mar 23, 21:21 UTC
Investigating - We are currently investigating this issue.
Mar 23, 20:47 UTC
Resolved - This incident has been resolved.
Mar 23, 01:37 UTC
Investigating - We are currently investigating this issue.
Mar 23, 00:32 UTC
Mar 22, 2023

No incidents reported.

Mar 21, 2023

No incidents reported.

Mar 20, 2023
Resolved - This incident has been resolved.
Mar 20, 12:16 UTC
Investigating - We are currently investigating this issue.
Mar 20, 10:49 UTC
Resolved - This incident has been resolved.
Mar 20, 04:41 UTC
Investigating - We are currently investigating this issue.
Mar 20, 04:36 UTC
Mar 19, 2023
Resolved - This incident has been resolved.
Mar 19, 22:55 UTC
Investigating - We are currently investigating this issue.
Mar 19, 22:45 UTC
Resolved - This incident has been resolved.
Mar 19, 22:55 UTC
Update - [Comment from Opsgenie] Tiago Guimaraes acknowledged alert: "[FIRING:1] Arbitrum One: Block ingestor lagging behind Production (partial_outage hyf78l5nz high A ZPGle2MVz)"
Mar 19, 22:51 UTC
Investigating - We are currently investigating this issue.
Mar 19, 22:45 UTC
Mar 18, 2023

No incidents reported.

Mar 17, 2023
Mar 16, 2023
Completed - The scheduled maintenance has been completed.
Mar 16, 21:19 UTC
Update - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 16, 21:11 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 16, 21:00 UTC
Scheduled - We're going to upgrade the IPFS node in the production Hosted Service to version `0.18.1`.
IPFS will be unavailable during the upgrade; we expect up to 30 minutes of downtime.

Mar 16, 19:58 UTC
Mar 15, 2023

No incidents reported.

Mar 14, 2023
Resolved - This incident has been resolved.
Mar 14, 20:49 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[FIRING:1] Hosted Service - Queries: too many failed requests Production (partial_outage high 9rM_ehMVz)"
Mar 14, 20:43 UTC
Investigating - We are currently investigating this issue.
Mar 14, 20:39 UTC
Resolved - This incident has been resolved.
Mar 14, 18:17 UTC
Update - [Comment from Opsgenie] Leonardo Schwarzstein acknowledged alert: "[FIRING:1] Clover: Block ingestor lagging behind Production (partial_outage AoM_ehGVk)"
Mar 14, 09:23 UTC
Investigating - We are currently investigating this issue.
Mar 13, 09:16 UTC
Resolved - This incident has been resolved.
Mar 14, 15:12 UTC
Investigating - We're going to upgrade one IPFS node to version 0.18.0. IPFS will be unavailable during the upgrade; we expect up to 30 minutes of downtime.
Mar 14, 14:46 UTC
Mar 13, 2023
Mar 12, 2023
Resolved - This incident has been resolved.
Mar 12, 04:50 UTC
Investigating - We are currently investigating this issue.
Mar 12, 04:10 UTC