The Graph

All Systems Operational

About This Site

Status, incident, and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
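For consumers who would rather script against the feed directly than route it through Zapier, a Statuspage-style RSS feed can be parsed with the Python standard library alone. This is a minimal sketch: the real feed URL (assumed to follow Statuspage's usual `/history.rss` layout) is only referenced in a comment, and the sample item below is illustrative, not an actual entry.

```python
import xml.etree.ElementTree as ET

# In real use, fetch the feed with urllib.request from the assumed
# Statuspage URL: https://status.thegraph.com/history.rss
SAMPLE_RSS = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>The Graph Status - Incident History</title>
    <item>
      <title>Celo: Block ingestor lagging behind</title>
      <pubDate>Wed, 26 Mar 2025 18:31:00 +0000</pubDate>
      <description>Investigating - Block ingestor lagging behind.</description>
    </item>
  </channel>
</rss>"""

def parse_incidents(rss_text: str) -> list[dict]:
    """Return one dict per <item> with its title, date, and latest update."""
    root = ET.fromstring(rss_text)
    return [
        {
            "title": item.findtext("title"),
            "published": item.findtext("pubDate"),
            "latest_update": item.findtext("description"),
        }
        for item in root.iter("item")
    ]

incidents = parse_incidents(SAMPLE_RSS)
print(incidents[0]["title"])  # Celo: Block ingestor lagging behind
```

From here, forwarding each parsed item to a Slack or Discord webhook is a one-line POST, which is effectively what the Zapier integration does on your behalf.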

All components are currently Operational. 90-day uptime by component:

- Network - Gateway: 100.0 %
- Network - Explorer: 100.0 %
- Network - Subgraph Studio: 99.93 %
- Network - Network Subgraph: 100.0 %
- Upgrade Indexer - Queries: 99.93 %
- Upgrade Indexer - Subgraph Health: 99.9 %
- Token API: 100.0 %
- Upgrade Indexer - Indexing: 99.94 %
- Ethereum Mainnet: 100.0 %
- Abstract Mainnet: 100.0 %
- Abstract Testnet: 100.0 %
- Arbitrum One: 100.0 %
- Arbitrum Sepolia: 100.0 %
- Arbitrum Nova: 100.0 %
- Arweave: 100.0 %
- Astar ZkEVM Mainnet: 100.0 %
- Aurora: 100.0 %
- Aurora Testnet: 100.0 %
- Avalanche: 100.0 %
- Avalanche Fuji: 99.98 %
- Base: 100.0 %
- Base Sepolia: 100.0 %
- Berachain: 100.0 %
- Berachain bArtio Testnet: 100.0 %
- Bitcoin Substreams: 100.0 %
- Blast Mainnet: 100.0 %
- Blast Testnet: 100.0 %
- Boba: 100.0 %
- Boba Testnet: 100.0 %
- Boba BNB Testnet: 100.0 %
- Boba BNB: 100.0 %
- Botanix Testnet: 100.0 %
- BSC: 99.28 %
- BSC Chapel: 100.0 %
- Celo: 100.0 %
- Celo Alfajores: 100.0 %
- Chiliz: 100.0 %
- Chiliz Testnet: 100.0 %
- Corn Maizenet (Mainnet): 100.0 %
- Corn Testnet: 100.0 %
- Ethereum Sepolia: 100.0 %
- Ethereum Holesky: 100.0 %
- Etherlink Mainnet: 100.0 %
- Etherlink Testnet: 100.0 %
- EXPChain Testnet: 100.0 %
- Fantom: 100.0 %
- Fantom Testnet: 100.0 %
- Fraxtal Mainnet: 100.0 %
- Fuse: 100.0 %
- Fuse Testnet: 100.0 %
- Gnosis: 100.0 %
- Gnosis Chiado: 100.0 %
- Gravity: 100.0 %
- Gravity Testnet: 100.0 %
- Harmony: 100.0 %
- Hemi: 100.0 %
- Hemi Sepolia: 100.0 %
- Ink: 100.0 %
- Ink Sepolia: 100.0 %
- Iotex Mainnet: 100.0 %
- Iotex Testnet: 100.0 %
- Japan Open Chain Mainnet: 100.0 %
- Japan Open Chain Testnet: 100.0 %
- Kaia: 100.0 %
- Kaia Testnet: 100.0 %
- Linea Mainnet: 100.0 %
- Linea Sepolia: 100.0 %
- Lens Testnet: 100.0 %
- Lumia: 100.0 %
- Mint: 100.0 %
- Mint Sepolia: 100.0 %
- Mode Mainnet: 100.0 %
- Mode Testnet: 100.0 %
- Monad Testnet: 100.0 %
- Moonbeam: 100.0 %
- Moonbase: 100.0 %
- Moonriver: 100.0 %
- Near Mainnet: 100.0 %
- Near Testnet: 100.0 %
- NeoX: 100.0 %
- NeoX Testnet: 100.0 %
- Optimism: 100.0 %
- Optimism Sepolia: 100.0 %
- Polygon (Matic): 95.35 %
- Polygon Amoy: 100.0 %
- Polygon zkEVM: 100.0 %
- Polygon zkEVM Cardona Testnet: 100.0 %
- Rootstock: 100.0 %
- Rootstock Testnet: 100.0 %
- Scroll Mainnet: 100.0 %
- Scroll Sepolia: 100.0 %
- Sei Mainnet: 100.0 %
- Sei Atlantic Testnet: 100.0 %
- Solana: 100.0 %
- Solana Devnet: 100.0 %
- Soneium: 100.0 %
- Soneium Testnet: 100.0 %
- Sonic: 100.0 %
- Starknet Substreams: 100.0 %
- Unichain Testnet: 100.0 %
- Vana: 100.0 %
- Vana Moksha Testnet: 100.0 %
- Viction: 100.0 %
- X Layer Mainnet: 100.0 %
- X Layer Sepolia: 100.0 %
- Zetachain: 100.0 %
- zkSync Era: 100.0 %
- zkSync Era Sepolia: 100.0 %
- Upgrade Indexer - Miscellaneous: 99.99 %
- Deployments: 100.0 %
- IPFS: 99.98 %
- Subgraph Logs: 100.0 %
- Explorer: 100.0 %
- thegraph.com: 100.0 %
- The Graph Network Subgraph - Arbitrum: 100.0 %
- Firehose Indexing: 100.0 %
- Firehose Service: 100.0 %
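The component list can also be read programmatically: Statuspage instances conventionally expose a JSON summary endpoint. The endpoint path and payload shape below follow the standard Statuspage v2 schema, but the exact URL for this site is an assumption, and the sample payload is illustrative.

```python
# Statuspage instances conventionally serve /api/v2/summary.json
# (assumed here for status.thegraph.com); fetch it with urllib.request
# in real use. The sample payload below mimics that schema.
SAMPLE_SUMMARY = {
    "components": [
        {"name": "Network - Gateway", "status": "operational"},
        {"name": "Polygon (Matic)", "status": "degraded_performance"},
        {"name": "IPFS", "status": "operational"},
    ]
}

def unhealthy(summary: dict) -> list[str]:
    """Names of components whose status is anything but 'operational'."""
    return [
        c["name"]
        for c in summary.get("components", [])
        if c.get("status") != "operational"
    ]

print(unhealthy(SAMPLE_SUMMARY))  # ['Polygon (Matic)']
```

Polling this once a minute and alerting on a non-empty result is a simple alternative to the RSS route when you only care about component state, not incident prose.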
Apr 3, 2025
Resolved - zkEVM is retired.
Apr 3, 01:12 UTC
Investigating - [FIRING:1] Astar-zkEVM-Mainnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Astar-zkEVM-Mainnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Astar-zkEVM-Mainnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivifaark0b/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivifaark0b&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743638810000&orgId=1&to=1743642445448
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743638810000&orgId=1&to=1743642445448&viewPanel=138
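The Silence link in each alert encodes the alert's label matchers as URL-encoded `matcher` query parameters; decoding them recovers the same labels listed in the payload. A small sketch, using the Silence URL from the alert above:

```python
from urllib.parse import urlsplit, parse_qs

# Silence link copied from the alert payload; each `matcher` query
# parameter encodes one label as "key=value".
SILENCE_URL = (
    "https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana"
    "&matcher=__alert_rule_uid__%3Dfdjeivifaark0b"
    "&matcher=cmp%3Dpartial_outage"
    "&matcher=component%3Dindexers"
    "&matcher=priority%3DP2-High"
    "&matcher=statuspage%3Dtrue"
    "&matcher=team%3DTech+Support"
    "&orgId=1"
)

def matchers(url: str) -> dict[str, str]:
    """Decode the repeated `matcher` params into a {label: value} dict."""
    qs = parse_qs(urlsplit(url).query)
    return dict(m.split("=", 1) for m in qs.get("matcher", []))

labels = matchers(SILENCE_URL)
print(labels["team"])  # Tech Support
```

This is handy when triaging these dumps in bulk: the matchers are the machine-readable version of the "Labels" block.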

Apr 3, 01:07 UTC
Resolved - This incident has been resolved.
Apr 3, 00:06 UTC
Investigating - [FIRING:1] Astar-zkEVM-Mainnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Astar-zkEVM-Mainnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Astar-zkEVM-Mainnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivifaark0b/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivifaark0b&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743418010000&orgId=1&to=1743421640393
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743418010000&orgId=1&to=1743421640393&viewPanel=138

Mar 31, 11:47 UTC
Apr 2, 2025
Resolved - This incident has been resolved.
Apr 2, 20:03 UTC
Identified - Subgraph deployments are failing due to a missing subscriptions field in the response from graph-node. This field was recently removed, which causes deployment failures when going through the deployment router.

Engineers are working on a fix: updating the deployment router to no longer expect the subscriptions field.

Apr 2, 17:42 UTC
Apr 1, 2025
Resolved - This incident has been resolved.
Apr 1, 22:03 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743541080000&orgId=1&to=1743544710515
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743541080000&orgId=1&to=1743544710515&viewPanel=61

Apr 1, 21:58 UTC
Mar 31, 2025
Mar 30, 2025
Resolved - This incident has been resolved.
Mar 30, 20:00 UTC
Update - [Comment from Opsgenie] Brandon Graham acknowledged alert: "Arbitrum-Sepolia: Block ingestor lagging behind"
Mar 30, 19:14 UTC
Investigating - [FIRING:1] Arbitrum-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Arbitrum-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Arbitrum-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/edjeivgjdk54wd/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dedjeivgjdk54wd&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743355230000&orgId=1&to=1743358867148
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743355230000&orgId=1&to=1743358867148&viewPanel=139

Mar 30, 18:21 UTC
Resolved - This incident has been resolved.
Mar 30, 19:56 UTC
Update - [Comment from Opsgenie] Brandon Graham acknowledged alert: "[Grafana]: Base: Block ingestor lagging behind"
Mar 30, 19:13 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743358300000&orgId=1&to=1743361930288
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743358300000&orgId=1&to=1743361930288&viewPanel=118

Mar 30, 19:12 UTC
Resolved - This incident has been resolved.
Mar 30, 19:53 UTC
Update - [Comment from Opsgenie] Brandon Graham acknowledged alert: "Base-Sepolia: Block ingestor lagging behind"
Mar 30, 19:14 UTC
Investigating - [FIRING:1] Base-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Base-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivf5alfk0a/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivf5alfk0a&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743358400000&orgId=1&to=1743362030547
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743358400000&orgId=1&to=1743362030547&viewPanel=127

Mar 30, 19:13 UTC
Resolved - This incident has been resolved.
Mar 30, 19:52 UTC
Update - [Comment from Opsgenie] Brandon Graham acknowledged alert: "Arweave-Mainnet: Block ingestor lagging behind"
Mar 30, 19:14 UTC
Investigating - [FIRING:1] Arweave-Mainnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Arweave-Mainnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Arweave-Mainnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/bdjeividov3lsc/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbdjeividov3lsc&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743357130000&orgId=1&to=1743360760389
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743357130000&orgId=1&to=1743360760389&viewPanel=108

Mar 30, 18:52 UTC
Mar 29, 2025

No incidents reported.

Mar 28, 2025

No incidents reported.

Mar 27, 2025

No incidents reported.

Mar 26, 2025
Resolved - This incident has been resolved.
Mar 26, 20:16 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743015680000&orgId=1&to=1743019310320
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743015680000&orgId=1&to=1743019310320&viewPanel=75

Mar 26, 20:01 UTC
Resolved - This incident has been resolved.
Mar 26, 19:01 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743010280000&orgId=1&to=1743013910384
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743010280000&orgId=1&to=1743013910384&viewPanel=75

Mar 26, 18:31 UTC
Completed - The scheduled maintenance has been completed.
Mar 26, 06:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 26, 03:00 UTC
Scheduled - We will be undergoing scheduled maintenance during this time.
Mar 25, 15:22 UTC
Resolved - This incident has been resolved.
Mar 26, 05:41 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742959280000&orgId=1&to=1742962910300
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742959280000&orgId=1&to=1742962910300&viewPanel=75

Mar 26, 04:21 UTC
Mar 25, 2025
Resolved - This incident has been resolved.
Mar 25, 14:53 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0, B1=0
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=61

Mar 12, 16:23 UTC
Mar 24, 2025
Resolved - This incident has been resolved.
Mar 24, 11:09 UTC
Update - [Comment from Opsgenie] Leonardo Schwarzstein acknowledged alert: "[Grafana]: Subgraph Health: Too many indexing errors"
Mar 19, 15:27 UTC
Investigating - [FIRING:1] Subgraph Health: Too many indexing errors Production (partial_outage indexers high P2-High true Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = Subgraph Health: Too many indexing errors
- cmp_Hosted Service - Subgraph Health = partial_outage
- component = indexers
- grafana_folder = Production
- level = high
- priority = P2-High
- statuspage = true
- team = Network Engineering
Annotations:
- Error = [sse.dataQueryError] failed to execute query [A]: invalid character '
- grafana_state_reason = Error
- statuspage = Subgraph Health: Too many indexing errors
Source: https://thegraph.grafana.net/alerting/grafana/GGM_62G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DGGM_62G4k&matcher=cmp_Hosted+Service+-+Subgraph+Health%3Dpartial_outage&matcher=component%3Dindexers&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=69

Mar 19, 14:21 UTC
Mar 23, 2025

No incidents reported.

Mar 22, 2025

No incidents reported.

Mar 21, 2025
Resolved - This incident has been resolved.
Mar 21, 15:13 UTC
Investigating - [FIRING:1] Polygon-zkEVM: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Polygon-zkEVM: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Polygon-zkEVM: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivgbjdm2oe/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivgbjdm2oe&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742562790000&orgId=1&to=1742566427141
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742562790000&orgId=1&to=1742566427141&viewPanel=116

Mar 21, 14:13 UTC
Mar 20, 2025

No incidents reported.