The Graph
Identified - Polygon Mumbai has stopped producing new blocks.
Apr 12, 2024 - 03:52 UTC
Identified - The Goerli testnet has stopped producing blocks, so we have closed subgraph support for it.
Mar 21, 2024 - 08:20 UTC
Investigating - When deploying your subgraph to `osmosis` / `cosmos`, you might see this warning:

"Unfortunately, proper support for Osmosis in the Hosted Service can not be guaranteed for the time being. Please refer to https://status.thegraph.com for more information. If this is affecting you in any way, please reach out to info+osmosis@thegraph.foundation."

We are currently investigating this issue.

Mar 08, 2024 - 12:45 UTC

About This Site

Status, incident and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
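Alternatively, a minimal self-hosted sketch that polls the feed and forwards new entries to a Discord webhook. The feed URL (a common Statuspage path) and the webhook URL are assumptions / placeholders, not confirmed by this page:

```python
import time

import feedparser  # pip install feedparser
import requests    # pip install requests

# Assumptions (not confirmed by the status page itself): the feed lives at the
# common Statuspage path below, and you have created a Discord webhook.
FEED_URL = "https://status.thegraph.com/history.rss"
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

# Prime with existing entries so only new incidents are forwarded.
seen = {entry.link for entry in feedparser.parse(FEED_URL).entries}

while True:
    time.sleep(300)  # poll every 5 minutes
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.link in seen:
            continue
        seen.add(entry.link)
        # Forward the incident title plus a link back to the status page entry.
        requests.post(
            WEBHOOK_URL,
            json={"content": f"{entry.title}\n{entry.link}"},
            timeout=10,
        )
```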

Component status (uptime over the last 90 days):

Network - Gateway: Operational, 99.84% uptime
Network - Explorer: Operational, 100.0% uptime
Network - Subgraph Studio: Operational, 98.75% uptime
Network - Network Subgraph: Operational, 100.0% uptime
Hosted Service - Queries: Operational, 99.97% uptime
Hosted Service - Subgraph Health: Operational, 100.0% uptime
Hosted Service - Indexing: Major Outage, 92.56% uptime
Mainnet: Operational, 99.92% uptime
xDai: Operational, 100.0% uptime
Polygon (Matic): Operational, 99.93% uptime
Mumbai: Operational, 98.61% uptime
Fantom: Operational, 100.0% uptime
BSC: Operational, 97.9% uptime
BSC Chapel: Operational, 100.0% uptime
Clover: Operational, 100.0% uptime
Avalanche: Operational, 99.18% uptime
Avalanche Fuji: Operational, 100.0% uptime
Celo: Operational, 100.0% uptime
Celo Alfajores: Operational, 100.0% uptime
Fuse: Operational, 100.0% uptime
Moonbeam: Operational, 100.0% uptime
Arbitrum One: Operational, 100.0% uptime
Optimism: Operational, 100.0% uptime
Moonriver: Operational, 100.0% uptime
NEAR: Operational
Aurora: Operational, 100.0% uptime
Aurora Testnet: Operational, 100.0% uptime
Osmosis: Major Outage, 0.59% uptime
Arweave: Operational, 100.0% uptime
Cosmos: Major Outage, 0.59% uptime
zkSync Era: Operational, 100.0% uptime
Polygon zkEVM Testnet: Operational, 99.91% uptime
Polygon zkEVM: Operational, 99.76% uptime
Fantom Testnet: Operational, 100.0% uptime
Base: Operational, 100.0% uptime
Base Testnet: Major Outage, 67.66% uptime
Harmony: Operational, 100.0% uptime
Sepolia: Operational, 100.0% uptime
Astar zkEVM Mainnet: Operational, 99.85% uptime
Polygon Amoy: Operational, 100.0% uptime
Arbitrum Sepolia: Operational, 100.0% uptime
Holesky: Operational, 100.0% uptime
Optimism Sepolia: Operational, 100.0% uptime
Scroll Mainnet: Operational, 100.0% uptime
Scroll Sepolia: Operational, 100.0% uptime
Linea Mainnet: Operational, 100.0% uptime
zkSync Era Sepolia: Operational, 100.0% uptime
Blast Mainnet: Operational, 100.0% uptime
Blast Testnet: Operational, 100.0% uptime
Sei Testnet: Operational, 100.0% uptime
X Layer Mainnet: Operational, 100.0% uptime
X Layer Sepolia: Operational, 100.0% uptime
Theta Testnet: Operational, 100.0% uptime
Astar zkEVM Sepolia: Operational, 100.0% uptime
Etherlink Testnet: Operational, 70.49% uptime
Hosted Service - Miscellaneous: Operational, 99.99% uptime
Deployments: Operational, 99.99% uptime
IPFS: Operational, 100.0% uptime
Subgraph Logs: Operational, 100.0% uptime
Explorer: Operational, 100.0% uptime
thegraph.com: Operational, 100.0% uptime
Firehose Indexing: Operational, 99.93% uptime
Firehose Service: Operational, 99.93% uptime
Past Incidents
Apr 19, 2024
Resolved - This incident has been resolved.
Apr 19, 09:43 UTC
Investigating - We are currently investigating this issue.
Apr 17, 12:26 UTC
Apr 18, 2024
Resolved - This incident has been resolved.
Apr 18, 13:37 UTC
Identified - Subgraphs indexing the Etherlink testnet may be stuck or lagging behind the chain head. We are working on it.
Apr 17, 16:22 UTC
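If you want to check whether one of your own deployments is lagging like this, here is a minimal sketch that queries graph-node's indexing status API and reports how far each chain is behind chain head. The endpoint URL and deployment ID are assumptions / placeholders:

```python
import requests  # pip install requests

# Assumptions (placeholders, not confirmed by this page): the hosted service's
# index-node status endpoint below and a deployment ID of your own.
STATUS_ENDPOINT = "https://api.thegraph.com/index-node/graphql"
DEPLOYMENT_ID = "QmYourDeploymentHashHere"

QUERY = """
query ($ids: [String!]!) {
  indexingStatuses(subgraphs: $ids) {
    subgraph
    health
    synced
    fatalError { message }
    chains { network chainHeadBlock { number } latestBlock { number } }
  }
}
"""

resp = requests.post(
    STATUS_ENDPOINT,
    json={"query": QUERY, "variables": {"ids": [DEPLOYMENT_ID]}},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json()["data"]["indexingStatuses"]:
    if status["fatalError"]:
        print(f"{status['subgraph']} failed: {status['fatalError']['message']}")
    for chain in status["chains"]:
        head = chain["chainHeadBlock"]
        latest = chain["latestBlock"]
        if head is None or latest is None:
            print(f"{status['subgraph']} on {chain['network']}: no block data yet")
            continue
        lag = int(head["number"]) - int(latest["number"])
        print(f"{status['subgraph']} on {chain['network']}: "
              f"{lag} blocks behind chain head (health: {status['health']})")
```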
Resolved - This incident has been resolved.
Apr 18, 08:27 UTC
Identified - While deploying your subgraph to the graph node, you might see an error like:

"✖ Failed to deploy to Graph node https://api.studio.thegraph.com/deploy/: Could not deploy subgraph on graph-node: subgraph deployment error: store error: could not create file "xxxx/167xx/13368xxxx": No space left on device. Deployment: Qm... UNCAUGHT EXCEPTION: Error: EEXIT: 1"

The issue has been identified and a fix is being implemented.

Apr 18, 08:22 UTC
Apr 17, 2024
Apr 16, 2024

No incidents reported.

Apr 15, 2024
Resolved - This incident has been resolved.
Apr 15, 13:52 UTC
Investigating - [FIRING:1] DatasourceNoData Production (partial_outage c448b23e-d637-4e71-a420-d83567efd033 high P2-High A Mumbai: Block ingestor lagging behind Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = DatasourceNoData
- cmp_Mumbai = partial_outage
- datasource_uid = c448b23e-d637-4e71-a420-d83567efd033
- grafana_folder = Production
- level = high
- priority = P2-High
- ref_id = A
- rulename = Mumbai: Block ingestor lagging behind
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DDatasourceNoData&matcher=cmp_Mumbai%3Dpartial_outage&matcher=datasource_uid%3Dc448b23e-d637-4e71-a420-d83567efd033&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=ref_id%3DA&matcher=rulename%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 23:42 UTC
Apr 14, 2024

No incidents reported.

Apr 13, 2024

No incidents reported.

Apr 12, 2024
Resolved - This incident has been resolved.
Apr 12, 23:00 UTC
Update - [Comment from Opsgenie] Leonardo Schwarzstein acknowledged alert: "[Grafana]: DatasourceNoData"
Apr 12, 10:42 UTC
Investigating - [FIRING:1] DatasourceNoData Production (partial_outage c448b23e-d637-4e71-a420-d83567efd033 high P2-High A Mumbai: Block ingestor lagging behind Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = DatasourceNoData
- cmp_Mumbai = partial_outage
- datasource_uid = c448b23e-d637-4e71-a420-d83567efd033
- grafana_folder = Production
- level = high
- priority = P2-High
- ref_id = A
- rulename = Mumbai: Block ingestor lagging behind
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DDatasourceNoData&matcher=cmp_Mumbai%3Dpartial_outage&matcher=datasource_uid%3Dc448b23e-d637-4e71-a420-d83567efd033&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=ref_id%3DA&matcher=rulename%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 09:42 UTC
Resolved - The chain has halted.
Apr 12, 03:50 UTC
Investigating - [FIRING:1] Mumbai: Block ingestor lagging behind Production (partial_outage high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Mumbai: Block ingestor lagging behind
- cmp_Mumbai = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=cmp_Mumbai%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 01:52 UTC
Resolved - This incident has been resolved.
Apr 12, 01:42 UTC
Investigating - [FIRING:1] Mumbai: Block ingestor lagging behind Production (partial_outage high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Mumbai: Block ingestor lagging behind
- cmp_Mumbai = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=cmp_Mumbai%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 00:42 UTC
Apr 11, 2024
Resolved - This incident has been resolved.
Apr 11, 18:00 UTC
Identified - You may find the errors below in your subgraph logs or at deploy time. We are working on resolving them.

A non-deterministic fatal error occured at block 199949625: Failed to transact block operations: subgraph `QmT3h6pogdPkxfWsBxKNtpq7kR9fqKaQ9jGxe7fZx7MUVE` has already processed block `199949622`; there are most likely two (or more) nodes indexing this subgraph


✖️ Failed to deploy to Graph node https://api.studio.thegraph.com/deploy/: Could not deploy subgraph on graph-node: subgraph validation error: [the specified block must exist on the Ethereum network]. Deployment: QmUU1pAqyQxFFm45ptiqQ1KC3Tu73XL2h1yu6Me6W9CUo

Apr 11, 16:14 UTC
Resolved - This incident has been resolved.
Apr 11, 13:07 UTC
Update - We are continuing to investigate this issue.
Apr 10, 00:53 UTC
Update - We are continuing to investigate this issue.
Apr 10, 00:52 UTC
Investigating - We are currently investigating this issue.
Apr 9, 21:47 UTC
Resolved - This incident has been resolved.
Apr 11, 05:27 UTC
Identified - Subgraphs indexing Arbitrum Sepolia are seeing errors like:

"Unable to connect to endpoint: status: Cancelled, message: \"Timeout expired\", details: [], metadata: MetadataMap { headers: {} }: transport error: Timeout expired, provider: arb-sep-firehose-pinax, deployment: Qm..., component: FirehoseBlockStream"

We are working on a fix.

Apr 10, 11:31 UTC
Apr 10, 2024
Resolved - This incident has been resolved.
Apr 10, 09:35 UTC
Identified - You might see new data not being indexed for Base subgraphs, with errors like:

"Failed to get block number: Ingestor error: no adapter for chain base Subgraph instance failed to run: A matching Ethereum network with NodeCapabilities { archive: false, traces: false } was not found."

We have identified the issue and are working on a fix.

Apr 10, 08:30 UTC
Resolved - This incident has been resolved.
Apr 10, 06:39 UTC
Update - [Comment from Opsgenie] Ruslan Rotaru acknowledged alert: "[Grafana]: Mainnet: Block ingestor lagging behind"
Apr 10, 01:16 UTC
Investigating - [FIRING:1] Mainnet: Block ingestor lagging behind Production (partial_outage high P2-High Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=5
Labels:
- alertname = Mainnet: Block ingestor lagging behind
- cmp_Mainnet = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Infrastructure
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/GnG_e2G4z/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DMainnet%3A+Block+ingestor+lagging+behind&matcher=cmp_Mainnet%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=43

Apr 10, 01:16 UTC
Resolved - This incident has been resolved.
Apr 10, 01:25 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[Grafana]: Avalanche: Block ingestor lagging behind"
Apr 9, 23:25 UTC
Investigating - [FIRING:1] Avalanche: Block ingestor lagging behind Production (partial_outage high P2-High Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Avalanche: Block ingestor lagging behind
- cmp_Avalanche = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Infrastructure
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/fxMle2M4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DAvalanche%3A+Block+ingestor+lagging+behind&matcher=cmp_Avalanche%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=78

Apr 9, 23:25 UTC
Apr 9, 2024
Apr 8, 2024
Completed - The scheduled maintenance has been completed.
Apr 8, 13:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 8, 07:00 UTC
Update - We will be undergoing scheduled maintenance during this time.
Mar 26, 19:49 UTC
Scheduled - The Graph’s hosted service will undergo scheduled database maintenance beginning April 8, 2024, 07:00 UTC (convert to your local time here: https://dateful.com/convert/utc?t=7am&d=2023-04-08).

Up to 15 minutes of downtime will apply to:
- All subgraphs on The Graph’s hosted service
- The Developer Preview URL within Subgraph Studio
- Subgraphs on chains with preliminary support via the upgrade Indexer

The 15-minute downtime will begin within a six-hour time window (07:00-13:00 UTC).

Note: Users of multiple subgraphs may notice that some subgraphs experience downtime during the listed time window while others do not – this is expected and typical during routine maintenance.

Learn how to upgrade your subgraph in a few clicks via the new upgrade flow and start using The Graph Network.

For additional assistance, reach out in The Graph Discord’s hosted service channel: https://thegraph.com/discord

Mar 26, 19:48 UTC
Resolved - This incident has been resolved.
Apr 8, 09:11 UTC
Update - We are continuing to investigate this issue.
Apr 8, 05:11 UTC
Update - We are continuing to investigate this issue.
Apr 8, 05:08 UTC
Investigating - We are currently investigating this issue.
Apr 8, 04:58 UTC
Apr 7, 2024

No incidents reported.

Apr 6, 2024

No incidents reported.

Apr 5, 2024

No incidents reported.