The Graph

All Systems Operational

About This Site

Status, incident and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
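If you would rather not use Zapier, a small script can poll the feed and relay new entries to a chat webhook directly. The sketch below is an illustration only: the feed URL and WEBHOOK_URL are assumptions, not taken from this page (Statuspage-hosted sites usually publish a feed at /history.rss, but check the page's subscribe menu), and the payload shape assumes a Slack-style incoming webhook.

```python
# Sketch: relay new status-page RSS items to a chat webhook.
# Assumptions (not from this page): FEED_URL, WEBHOOK_URL, and a
# Slack-style {"text": ...} JSON webhook payload.
import json
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.thegraph.com/history.rss"   # assumed feed location
WEBHOOK_URL = "https://hooks.example.com/your-webhook" # placeholder

def fetch_items(url):
    """Yield (guid, title, link) for each <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield (
            item.findtext("guid") or item.findtext("link"),
            item.findtext("title", default=""),
            item.findtext("link", default=""),
        )

def post_to_webhook(text):
    """POST a simple JSON message to the incoming webhook."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30).read()

seen = set()
first_pass = True
while True:
    for guid, title, link in fetch_items(FEED_URL):
        if guid not in seen:
            seen.add(guid)
            if not first_pass:  # don't replay the whole history on startup
                post_to_webhook(f"{title}\n{link}")
    first_pass = False
    time.sleep(300)  # poll every 5 minutes
```

Discord incoming webhooks accept a near-identical JSON body, with a "content" field in place of "text".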

All components are Operational. Uptime over the past 90 days is 100.0 % except where noted below.

Network - Gateway
Network - Explorer
Network - Subgraph Studio
Network - Network Subgraph
Network - Substreams
Upgrade Indexer - Queries
Upgrade Indexer - Subgraph Health
Token API
Upgrade Indexer - Indexing (99.99 % uptime)
Ethereum Mainnet
Abstract Mainnet
Abstract Testnet
Arbitrum One
Arbitrum Sepolia
Arbitrum Nova
Astar ZkEVM Mainnet
Avalanche
Avalanche Fuji
Base
Base Sepolia
Berachain
Berachain bArtio Testnet
Bitcoin Substreams
Blast Mainnet
Blast Testnet
Boba
Boba Testnet
Boba BNB Testnet
Boba BNB
Botanix Testnet
BSC (99.96 % uptime)
BSC Chapel
Celo
Chiliz
Chiliz Testnet
Corn Maizenet (Mainnet)
Corn Testnet
Ethereum Sepolia
Etherlink Mainnet
Etherlink Testnet
EXPChain Testnet
Fantom
Fantom Testnet
Fraxtal Mainnet
Fuse
Fuse Testnet
Gnosis
Gnosis Chiado
Gravity
Gravity Testnet
Harmony
Hemi
Hemi Sepolia
Ink
Ink Sepolia
Iotex Mainnet
Iotex Testnet
Japan Open Chain Mainnet
Japan Open Chain Testnet
Kaia
Kaia Testnet
Linea Mainnet
Linea Sepolia
Lens Testnet
Lumia
Mint
Mint Sepolia
Monad Testnet
Moonbeam
Moonbase
Moonriver
Near Mainnet
Near Testnet
NeoX
NeoX Testnet
Optimism
Optimism Sepolia
Polygon (Matic) (99.9 % uptime)
Polygon Amoy
Polygon zkEVM
Polygon zkEVM Cardona Testnet
Rootstock
Rootstock Testnet
Scroll Mainnet
Scroll Sepolia
Sei Mainnet
Sei Atlantic Testnet
Solana
Solana Devnet
Soneium
Soneium Testnet
Sonic
Starknet Substreams
Unichain Testnet
Vana
Vana Moksha Testnet
Viction
X Layer Mainnet
X Layer Sepolia
Zetachain
zkSync Era
zkSync Era Sepolia
Upgrade Indexer - Miscellaneous
Deployments
IPFS
Subgraph Logs
Explorer
thegraph.com
The Graph Network Subgraph - Arbitrum
Firehose Indexing
Firehose Service
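The component list above can also be read programmatically. Assuming this page is hosted on Atlassian Statuspage, which exposes a public JSON API under /api/v2/ on the page's own domain, something like the following reproduces the table; the host name is a guess, not something stated on this page.

```python
# Sketch: fetch component statuses from a Statuspage-style JSON API.
# Assumption: the page is Statuspage-hosted and serves the standard
# /api/v2/components.json endpoint; BASE is a guessed host name.
import json
import urllib.request

BASE = "https://status.thegraph.com"  # assumed host

with urllib.request.urlopen(f"{BASE}/api/v2/components.json", timeout=30) as resp:
    components = json.load(resp)["components"]

for c in components:
    # "status" is e.g. "operational", "degraded_performance",
    # "partial_outage", or "major_outage" in the Statuspage schema.
    print(f"{c['name']}: {c['status']}")
```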
Dec 6, 2025

No incidents reported today.

Dec 5, 2025

No incidents reported.

Dec 4, 2025
Resolved - This incident has been resolved.
Dec 4, 08:58 UTC
Investigating - [FIRING:1] Polygon-zkEVM: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Polygon-zkEVM: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Polygon-zkEVM: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivgbjdm2oe/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivgbjdm2oe&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764773590000&orgId=1&to=1764777225375
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764773590000&orgId=1&to=1764777225375&viewPanel=116

Dec 3, 15:53 UTC
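The "Investigating" entries here and below are raw Grafana notifications that follow one fixed template: a [FIRING:n] title, a Value line, "Labels:" and "Annotations:" sections of "- key = value" lines, and Source/Silence/Dashboard/Panel links. If you consume these incidents via RSS or an API, a small parser targeting this observed layout (a sketch, not a general Grafana parser) can recover the structured fields:

```python
# Sketch: parse the Grafana alert text posted in these incidents into
# structured fields. Tailored to the exact layout seen on this page.
import re

def parse_alert(text):
    out = {"labels": {}, "annotations": {}, "links": {}}
    section = None
    for line in text.splitlines():
        line = line.strip()
        if line == "Labels:":
            section = "labels"
        elif line == "Annotations:":
            section = "annotations"
        elif m := re.match(r"(Source|Silence|Dashboard|Panel): (\S+)", line):
            out["links"][m.group(1).lower()] = m.group(2)
            section = None
        elif section and (m := re.match(r"- (.+?) = (.*)", line)):
            out[section][m.group(1)] = m.group(2)
    return out
```

For the entry above, parse_alert(text)["labels"]["alertname"] yields "Polygon-zkEVM: Block ingestor lagging behind", and parse_alert(text)["links"]["panel"] the panel URL.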
Dec 3, 2025
Resolved - This incident has been resolved.
Dec 3, 16:38 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B=1
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764775980000&orgId=1&to=1764779610449
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764775980000&orgId=1&to=1764779610449&viewPanel=61

Dec 3, 16:33 UTC
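The from= and to= query parameters in the Dashboard and Panel links are Unix timestamps in milliseconds, so the window each link covers can be decoded directly; the helper below is a small convenience, with the two sample values taken from the Dec 3 links above.

```python
# Sketch: decode the epoch-millisecond from=/to= values in the Grafana links.
from datetime import datetime, timezone

def ms_to_utc(ms):
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S UTC"
    )

# From the Dec 3 incident's dashboard link:
print(ms_to_utc(1764775980000))  # 2025-12-03 15:33:00 UTC
print(ms_to_utc(1764779610449))  # 2025-12-03 16:33:30 UTC
```

Here the hour-long window ends right at the 16:33 UTC Investigating post.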
Dec 2, 2025

No incidents reported.

Dec 1, 2025
Resolved - This incident has been resolved.
Dec 1, 23:38 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B=1
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764628380000&orgId=1&to=1764632012681
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764628380000&orgId=1&to=1764632012681&viewPanel=61

Dec 1, 23:33 UTC
Resolved - This incident has been resolved.
Dec 1, 14:48 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B=1
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764594480000&orgId=1&to=1764598111746
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764594480000&orgId=1&to=1764598111746&viewPanel=61

Dec 1, 14:08 UTC
Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.

Nov 28, 2025

No incidents reported.

Nov 27, 2025
Resolved - This incident has been resolved.
Nov 27, 15:48 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B=1
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764251580000&orgId=1&to=1764255210520
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764251580000&orgId=1&to=1764255210520&viewPanel=61

Nov 27, 14:53 UTC
Resolved - This incident has been resolved.
Nov 27, 12:22 UTC
Update - [Comment from Opsgenie] Ruslan Rotaru acknowledged alert: "[Grafana]: Base: Block ingestor lagging behind"
Nov 27, 09:32 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764228700000&orgId=1&to=1764232333494
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764228700000&orgId=1&to=1764232333494&viewPanel=118

Nov 27, 08:32 UTC
Resolved - This incident has been resolved.
Nov 27, 10:53 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B=1
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764236880000&orgId=1&to=1764240511389
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764236880000&orgId=1&to=1764240511389&viewPanel=61

Nov 27, 10:48 UTC
Resolved - This incident has been resolved.
Nov 27, 07:02 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764220000000&orgId=1&to=1764223630483
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764220000000&orgId=1&to=1764223630483&viewPanel=118

Nov 27, 06:07 UTC
Resolved - This incident has been resolved.
Nov 27, 04:37 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764212500000&orgId=1&to=1764216130292
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764212500000&orgId=1&to=1764216130292&viewPanel=118

Nov 27, 04:02 UTC
Resolved - This incident has been resolved.
Nov 27, 02:32 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764204400000&orgId=1&to=1764208031086
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764204400000&orgId=1&to=1764208031086&viewPanel=118

Nov 27, 01:47 UTC
Nov 26, 2025
Resolved - This incident has been resolved.
Nov 26, 21:18 UTC
Investigating - [FIRING:1] Polygon-zkEVM: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: A=0, B=1
Labels:
- alertname = Polygon-zkEVM: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Polygon-zkEVM: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivgbjdm2oe/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivgbjdm2oe&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1764182590000&orgId=1&to=1764186225145
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1764182590000&orgId=1&to=1764186225145&viewPanel=116

Nov 26, 19:43 UTC
Nov 25, 2025

No incidents reported.

Nov 24, 2025

No incidents reported.

Nov 23, 2025

No incidents reported.

Nov 22, 2025
Resolved - This incident has been resolved.
Nov 22, 12:21 UTC
Update - [Comment from Opsgenie] Theodore Butler acknowledged alert: "[Grafana]: Subgraph Health: Too many indexing errors"
Nov 21, 12:55 UTC
Investigating - [FIRING:1] Subgraph Health: Too many indexing errors Production (partial_outage indexers high P2-High true Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = Subgraph Health: Too many indexing errors
- cmp_Hosted Service - Subgraph Health = partial_outage
- component = indexers
- grafana_folder = Production
- level = high
- priority = P2-High
- statuspage = true
- team = Network Engineering
Annotations:
- Error = [sse.dataQueryError] failed to execute query [A]: invalid character 'e' looking for beginning of value
- grafana_state_reason = Error
- statuspage = Subgraph Health: Too many indexing errors
Source: https://thegraph.grafana.net/alerting/grafana/GGM_62G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DGGM_62G4k&matcher=cmp_Hosted+Service+-+Subgraph+Health%3Dpartial_outage&matcher=component%3Dindexers&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1763722280000&orgId=1&to=1763729602741
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1763722280000&orgId=1&to=1763729602741&viewPanel=69

Nov 21, 12:53 UTC