The Graph
Update - We are continuing to work on a fix for this issue.
Apr 24, 2024 - 21:50 UTC
Identified - The issue has been identified and a fix is being implemented.
Apr 24, 2024 - 21:49 UTC
Investigating - When deploying your subgraph to `osmosis` / `cosmos`, you might see this warning:

"Unfortunately, proper support for Osmosis in the Hosted Service can not be guaranteed for the time being. Please refer to https://status.thegraph.com for more information. If this is affecting you in any way, please reach out to info+osmosis@thegraph.foundation."

We are currently investigating this issue.

Mar 08, 2024 - 12:45 UTC

About This Site

Status, incident and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
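Zapier is one way to do this; a small script can also poll the same feed and forward new entries to a chat webhook directly. Below is a minimal sketch, assuming the page exposes a standard Statuspage RSS feed at https://status.thegraph.com/history.rss and that WEBHOOK_URL is a Discord incoming webhook you control; both of these are assumptions rather than details stated on this page.

```python
# Minimal sketch: poll the status RSS feed and forward new entries to a Discord webhook.
# Assumptions (not stated on this page): the feed lives at the standard Statuspage
# path /history.rss, and WEBHOOK_URL is a Discord incoming webhook you control
# (a Slack webhook would expect a {"text": ...} payload instead of {"content": ...}).
import json
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.thegraph.com/history.rss"  # assumed Statuspage feed path
WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # placeholder: your webhook URL


def fetch_items():
    """Yield {title, link, published} dicts for every <item> in the RSS feed."""
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
        }


def post_to_webhook(entry):
    """Send one feed entry to the chat webhook as a plain-text message."""
    body = json.dumps({"content": f"{entry['title']}\n{entry['link']}"}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=30)


if __name__ == "__main__":
    seen = set()  # links already forwarded; the first pass sends the current history
    while True:
        for entry in fetch_items():
            if entry["link"] and entry["link"] not in seen:
                seen.add(entry["link"])
                post_to_webhook(entry)
        time.sleep(300)  # poll every 5 minutes
```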

Component status, with uptime over the past 90 days:

Network - Gateway: Operational (99.84% uptime)
Network - Explorer: Degraded Performance (100.0% uptime)
Network - Subgraph Studio: Degraded Performance (99.97% uptime)
Network - Network Subgraph: Operational (100.0% uptime)
Hosted Service - Queries: Operational (99.97% uptime)
Hosted Service - Subgraph Health: Operational (100.0% uptime)
Hosted Service - Indexing: Major Outage (92.66% uptime)
Mainnet: Operational (99.92% uptime)
xDai: Operational (100.0% uptime)
Polygon (Matic): Operational (99.92% uptime)
Mumbai: Operational (98.61% uptime)
Fantom: Operational (100.0% uptime)
BSC: Operational (99.92% uptime)
BSC Chapel: Operational (100.0% uptime)
Clover: Operational (100.0% uptime)
Avalanche: Operational (99.18% uptime)
Avalanche Fuji: Operational (100.0% uptime)
Celo: Operational (100.0% uptime)
Celo Alfajores: Operational (100.0% uptime)
Fuse: Operational (100.0% uptime)
Moonbeam: Operational (100.0% uptime)
Arbitrum One: Operational (100.0% uptime)
Optimism: Operational (100.0% uptime)
Moonriver: Operational (100.0% uptime)
NEAR: Operational
Aurora: Operational (100.0% uptime)
Aurora Testnet: Operational (100.0% uptime)
Osmosis: Major Outage (0.54% uptime)
Arweave: Operational (100.0% uptime)
Cosmos: Major Outage (0.54% uptime)
zkSync Era: Operational (100.0% uptime)
Polygon zkEvm Testnet: Operational (99.91% uptime)
Polygon zkEvm: Operational (99.76% uptime)
Fantom Testnet: Operational (100.0% uptime)
Base: Operational (100.0% uptime)
Base Testnet: Major Outage (59.83% uptime)
Harmony: Operational (100.0% uptime)
Sepolia: Operational (100.0% uptime)
Astar ZkEVM Mainnet: Operational (99.87% uptime)
Polygon Amoy: Operational (100.0% uptime)
Arbitrum Sepolia: Operational (100.0% uptime)
Holesky: Operational (100.0% uptime)
Optimism Sepolia: Operational (100.0% uptime)
Scroll Mainnet: Operational (100.0% uptime)
Scroll Sepolia: Operational (100.0% uptime)
Linea Mainnet: Operational (100.0% uptime)
Zksync Era Sepolia: Operational (100.0% uptime)
Blast Mainnet: Operational (100.0% uptime)
Blast Testnet: Operational (100.0% uptime)
Sei Testnet: Operational (100.0% uptime)
X Layer Mainnet: Operational (100.0% uptime)
X Layer Sepolia: Operational (100.0% uptime)
Theta Testnet: Operational (100.0% uptime)
Astar ZkEVM Sepolia: Operational (100.0% uptime)
Etherlink Testnet: Operational (91.14% uptime)
Hosted Service - Miscellaneous: Degraded Performance (99.85% uptime)
Deployments: Operational (99.99% uptime)
IPFS: Operational (100.0% uptime)
Subgraph Logs: Operational (100.0% uptime)
Explorer: Operational (100.0% uptime)
thegraph.com: Degraded Performance (100.0% uptime)
The Graph Network Subgraph - Arbitrum: Operational (74.44% uptime)
Firehose Indexing: Operational (99.97% uptime)
Firehose Service: Operational (99.97% uptime)
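The component states listed above can also be read programmatically. Below is a minimal sketch, assuming status.thegraph.com is a standard Atlassian Statuspage instance that exposes a public summary endpoint at /api/v2/summary.json; that endpoint is an assumption, not something documented on this page.

```python
# Minimal sketch, assuming status.thegraph.com is a standard Atlassian Statuspage
# instance; such pages usually expose a public JSON summary at /api/v2/summary.json.
# Treat the endpoint and response shape as assumptions rather than documented facts.
import json
import urllib.request

SUMMARY_URL = "https://status.thegraph.com/api/v2/summary.json"  # assumed endpoint


def component_states():
    """Return {component name: status} from the assumed Statuspage summary feed."""
    with urllib.request.urlopen(SUMMARY_URL, timeout=30) as resp:
        summary = json.load(resp)
    # Each component typically carries a name and a status such as "operational",
    # "degraded_performance", "partial_outage", or "major_outage".
    return {c["name"]: c["status"] for c in summary.get("components", [])}


if __name__ == "__main__":
    for name, status in sorted(component_states().items()):
        print(f"{name}: {status}")
```
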
Past Incidents
Apr 26, 2024
Resolved - This incident has been resolved.
Apr 26, 05:53 UTC
Investigating - [FIRING:1] Bitcoin: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Bitcoin: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Bitcoin: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivfxbgf7kf/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DBitcoin%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 26, 01:03 UTC
Apr 25, 2024
Resolved - This incident has been resolved.
Apr 25, 18:12 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[Grafana]: Bitcoin: Block ingestor lagging behind"
Apr 25, 15:23 UTC
Investigating - [FIRING:1] Bitcoin: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Bitcoin: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Bitcoin: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivfxbgf7kf/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DBitcoin%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 25, 15:23 UTC
Resolved - This incident has been resolved.
Apr 25, 12:47 UTC
Identified - The Goerli testnet has stopped producing blocks, so we have closed subgraph support for it.
Mar 21, 08:20 UTC
Resolved - This incident has been resolved.
Apr 25, 12:47 UTC
Identified - Polygon Mumbai has stopped producing new blocks.
Apr 12, 03:52 UTC
Resolved - This incident has been resolved.
Apr 25, 12:29 UTC
Identified - The issue has been identified and a fix is being implemented.
Apr 25, 11:58 UTC
Resolved - This incident has been resolved.
Apr 25, 07:35 UTC
Investigating - You may find The Graph Network subgraph on Arbitrum lagging behind. We are working on a fix.
Apr 24, 13:11 UTC
Resolved - This incident has been resolved.
Apr 25, 07:24 UTC
Investigating - Subgraphs indexing `scroll` and `scroll-sepolia` are experiencing degraded performance due to RPC maintenance.
Apr 25, 06:42 UTC
Apr 24, 2024
Completed - The scheduled maintenance has been completed.
Apr 24, 09:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Apr 24, 05:00 UTC
Update - We will be undergoing scheduled maintenance during this time.
Apr 23, 22:30 UTC
Scheduled - We will be undergoing scheduled maintenance during this time.
Apr 23, 22:07 UTC
Apr 23, 2024
Resolved - This incident has been resolved.
Apr 23, 15:52 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[Grafana]: Boba: Block ingestor lagging behind"
Apr 23, 15:47 UTC
Investigating - [FIRING:1] Boba: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Boba: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Boba: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivhz9g7pca/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DBoba%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 23, 15:47 UTC
Resolved - This incident has been resolved.
Apr 23, 15:01 UTC
Update - We are continuing to investigate this issue.
Apr 23, 10:20 UTC
Investigating - You might see your newly published subgraph not being indexed by the upgrade indexer. We are currently investigating this issue.
Apr 23, 10:19 UTC
Resolved - This incident has been resolved.
Apr 23, 14:26 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[Grafana]: Boba: Block ingestor lagging behind"
Apr 23, 13:12 UTC
Investigating - [FIRING:1] Boba: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Boba: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Boba: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivhz9g7pca/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DBoba%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 23, 13:12 UTC
Resolved - This incident has been resolved.
Apr 23, 00:23 UTC
Update - [Comment from Opsgenie] Ruslan Rotaru acknowledged alert: "[Grafana]: Etherlink-Sepolia: Block ingestor lagging behind"
Apr 23, 00:03 UTC
Investigating - [FIRING:1] Etherlink-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Etherlink-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Etherlink-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/adjeivi6c5uyod/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DEtherlink-Sepolia%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 23, 00:03 UTC
Apr 22, 2024
Resolved - This incident has been resolved.
Apr 22, 18:02 UTC
Update - [Comment from Opsgenie] ruslan acknowledged alert: "[Grafana]: Aurora-Testnet: Block ingestor lagging behind"
Apr 22, 15:18 UTC
Investigating - [FIRING:1] Aurora-Testnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Aurora-Testnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Aurora-Testnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/bdjeivfqb8oaoa/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DAurora-Testnet%3A+Block+ingestor+lagging+behind&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1

Apr 22, 15:18 UTC
Apr 21, 2024
Resolved - This incident has been resolved.
Apr 21, 12:58 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DPolygon+%28Matic%29%3A+Block+ingestor+lagging+behind&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=61

Apr 21, 12:57 UTC
Resolved - This incident has been resolved.
Apr 21, 12:51 UTC
Update - [Comment from Opsgenie] muhammad acknowledged alert: "[Grafana]: Polygon (Matic): Block ingestor lagging behind"
Apr 21, 12:51 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DPolygon+%28Matic%29%3A+Block+ingestor+lagging+behind&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=61

Apr 21, 12:13 UTC
Apr 20, 2024

No incidents reported.

Apr 19, 2024
Resolved - This incident has been resolved.
Apr 19, 09:43 UTC
Investigating - We are currently investigating this issue.
Apr 17, 12:26 UTC
Apr 18, 2024
Resolved - This incident has been resolved.
Apr 18, 13:37 UTC
Identified - You may find your subgraphs indexing the Etherlink testnet stuck or lagging behind the chain head. We are working on it.
Apr 17, 16:22 UTC
Resolved - This incident has been resolved.
Apr 18, 08:27 UTC
Identified - While deploying your subgraph to the graph node, you might see an error like:

"✖ Failed to deploy to Graph node https://api.studio.thegraph.com/deploy/: Could not deploy subgraph on graph-node: subgraph deployment error: store error: could not create file "xxxx/167xx/13368xxxx": No space left on device. Deployment: Qm... UNCAUGHT EXCEPTION: Error: EEXIT: 1"

The issue has been identified and a fix is being implemented.

Apr 18, 08:22 UTC
Apr 17, 2024
Apr 16, 2024

No incidents reported.

Apr 15, 2024
Resolved - This incident has been resolved.
Apr 15, 13:52 UTC
Investigating - [FIRING:1] DatasourceNoData Production (partial_outage c448b23e-d637-4e71-a420-d83567efd033 high P2-High A Mumbai: Block ingestor lagging behind Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = DatasourceNoData
- cmp_Mumbai = partial_outage
- datasource_uid = c448b23e-d637-4e71-a420-d83567efd033
- grafana_folder = Production
- level = high
- priority = P2-High
- ref_id = A
- rulename = Mumbai: Block ingestor lagging behind
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DDatasourceNoData&matcher=cmp_Mumbai%3Dpartial_outage&matcher=datasource_uid%3Dc448b23e-d637-4e71-a420-d83567efd033&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=ref_id%3DA&matcher=rulename%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 23:42 UTC
Apr 14, 2024

No incidents reported.

Apr 13, 2024

No incidents reported.

Apr 12, 2024
Resolved - This incident has been resolved.
Apr 12, 23:00 UTC
Update - [Comment from Opsgenie] Leonardo Schwarzstein acknowledged alert: "[Grafana]: DatasourceNoData"
Apr 12, 10:42 UTC
Investigating - [FIRING:1] DatasourceNoData Production (partial_outage c448b23e-d637-4e71-a420-d83567efd033 high P2-High A Mumbai: Block ingestor lagging behind Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = DatasourceNoData
- cmp_Mumbai = partial_outage
- datasource_uid = c448b23e-d637-4e71-a420-d83567efd033
- grafana_folder = Production
- level = high
- priority = P2-High
- ref_id = A
- rulename = Mumbai: Block ingestor lagging behind
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DDatasourceNoData&matcher=cmp_Mumbai%3Dpartial_outage&matcher=datasource_uid%3Dc448b23e-d637-4e71-a420-d83567efd033&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=ref_id%3DA&matcher=rulename%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 09:42 UTC
Resolved - The chain has halted.
Apr 12, 03:50 UTC
Investigating - [FIRING:1] Mumbai: Block ingestor lagging behind Production (partial_outage high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Mumbai: Block ingestor lagging behind
- cmp_Mumbai = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=cmp_Mumbai%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 01:52 UTC
Resolved - This incident has been resolved.
Apr 12, 01:42 UTC
Investigating - [FIRING:1] Mumbai: Block ingestor lagging behind Production (partial_outage high P2-High Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Mumbai: Block ingestor lagging behind
- cmp_Mumbai = partial_outage
- grafana_folder = Production
- level = high
- priority = P2-High
- team = Network Engineering
Annotations:
Source: https://thegraph.grafana.net/alerting/grafana/wOMle2G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=alertname%3DMumbai%3A+Block+ingestor+lagging+behind&matcher=cmp_Mumbai%3Dpartial_outage&matcher=grafana_folder%3DProduction&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=63

Apr 12, 00:42 UTC