The Graph

All Systems Operational

About This Site

Status, incident, and maintenance information for The Graph.
Use the RSS feed together with https://zapier.com/apps/rss to have updates sent straight to Slack or Discord.
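
As a hedged illustration of that workflow (Zapier is the route suggested above; this is only a sketch, not an official integration), the page's Atom history feed can also be polled directly and forwarded to a Slack or Discord incoming webhook. The feed path and webhook URL below are assumptions to be replaced with your own values.

# Minimal sketch, not an official integration: poll the status page's Atom
# history feed and forward new entries to a chat webhook. FEED_URL and
# WEBHOOK_URL are assumed placeholders; substitute your own values.
import json
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://status.thegraph.com/history.atom"           # assumed feed location
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXX"  # your webhook URL
ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_entries():
    """Yield (id, title, link) for each entry in the Atom feed."""
    with urllib.request.urlopen(FEED_URL) as resp:
        root = ET.fromstring(resp.read())
    for entry in root.findall(ATOM + "entry"):
        link = entry.find(ATOM + "link")
        yield (entry.findtext(ATOM + "id"),
               entry.findtext(ATOM + "title"),
               link.get("href") if link is not None else "")

def post(text):
    """POST a plain-text payload to the incoming webhook."""
    # Slack expects {"text": ...}; a Discord webhook expects {"content": ...} instead.
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def main(poll_seconds=300):
    seen, first_pass = set(), True
    while True:
        for entry_id, title, link in fetch_entries():
            if entry_id not in seen:
                seen.add(entry_id)
                if not first_pass:           # skip the existing backlog on startup
                    post(f"The Graph status update: {title} {link}")
        first_pass = False
        time.sleep(poll_seconds)

if __name__ == "__main__":
    main()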

Component status and uptime over the past 90 days (all components Operational at the time of this snapshot):

Network - Gateway: Operational, 100.0% uptime
Network - Explorer: Operational, 100.0% uptime
Network - Subgraph Studio: Operational, 99.97% uptime
Network - Network Subgraph: Operational, 100.0% uptime
Upgrade Indexer - Queries: Operational, 99.93% uptime
Upgrade Indexer - Subgraph Health: Operational, 99.9% uptime
Token API: Operational, 100.0% uptime
Upgrade Indexer - Indexing: Operational, 99.91% uptime
Ethereum Mainnet: Operational, 100.0% uptime
Abstract Mainnet: Operational, 100.0% uptime
Abstract Testnet: Operational, 100.0% uptime
Arbitrum One: Operational, 100.0% uptime
Arbitrum Sepolia: Operational, 100.0% uptime
Arbitrum Nova: Operational, 100.0% uptime
Arweave: Operational, 100.0% uptime
Astar ZkEVM Mainnet: Operational, 100.0% uptime
Aurora: Operational, 100.0% uptime
Aurora Testnet: Operational, 100.0% uptime
Avalanche: Operational, 100.0% uptime
Avalanche Fuji: Operational, 99.98% uptime
Base: Operational, 100.0% uptime
Base Sepolia: Operational, 100.0% uptime
Berachain: Operational, 100.0% uptime
Berachain bArtio Testnet: Operational, 100.0% uptime
Bitcoin Substreams: Operational, 100.0% uptime
Blast Mainnet: Operational, 100.0% uptime
Blast Testnet: Operational, 100.0% uptime
Boba: Operational, 100.0% uptime
Boba Testnet: Operational, 100.0% uptime
Boba BNB Testnet: Operational, 100.0% uptime
Boba BNB: Operational, 100.0% uptime
Botanix Testnet: Operational, 100.0% uptime
BSC: Operational, 99.28% uptime
BSC Chapel: Operational, 100.0% uptime
Celo: Operational, 100.0% uptime
Celo Alfajores: Operational, 100.0% uptime
Chiliz: Operational, 100.0% uptime
Chiliz Testnet: Operational, 100.0% uptime
Corn Maizenet (Mainnet): Operational, 100.0% uptime
Corn Testnet: Operational, 100.0% uptime
Ethereum Sepolia: Operational, 100.0% uptime
Ethereum Holesky: Operational, 100.0% uptime
Etherlink Mainnet: Operational, 100.0% uptime
Etherlink Testnet: Operational, 100.0% uptime
EXPChain Testnet: Operational, 100.0% uptime
Fantom: Operational, 100.0% uptime
Fantom Testnet: Operational, 100.0% uptime
Fraxtal Mainnet: Operational, 100.0% uptime
Fuse: Operational, 100.0% uptime
Fuse Testnet: Operational, 100.0% uptime
Gnosis: Operational, 100.0% uptime
Gnosis Chiado: Operational, 100.0% uptime
Gravity: Operational, 100.0% uptime
Gravity Testnet: Operational, 100.0% uptime
Harmony: Operational, 100.0% uptime
Hemi: Operational, 100.0% uptime
Hemi Sepolia: Operational, 100.0% uptime
Ink: Operational, 100.0% uptime
Ink Sepolia: Operational, 100.0% uptime
Iotex Mainnet: Operational, 100.0% uptime
Iotex Testnet: Operational, 100.0% uptime
Japan Open Chain Mainnet: Operational, 100.0% uptime
Japan Open Chain Testnet: Operational, 100.0% uptime
Kaia: Operational, 100.0% uptime
Kaia Testnet: Operational, 100.0% uptime
Linea Mainnet: Operational, 100.0% uptime
Linea Sepolia: Operational, 100.0% uptime
Lens Testnet: Operational, 100.0% uptime
Lumia: Operational, 100.0% uptime
Mint: Operational, 100.0% uptime
Mint Sepolia: Operational, 100.0% uptime
Mode Mainnet: Operational, 100.0% uptime
Mode Testnet: Operational, 100.0% uptime
Monad Testnet: Operational, 100.0% uptime
Moonbeam: Operational, 100.0% uptime
Moonbase: Operational, 100.0% uptime
Moonriver: Operational, 100.0% uptime
Near Mainnet: Operational, 100.0% uptime
Near Testnet: Operational, 100.0% uptime
NeoX: Operational, 100.0% uptime
NeoX Testnet: Operational, 100.0% uptime
Optimism: Operational, 100.0% uptime
Optimism Sepolia: Operational, 100.0% uptime
Polygon (Matic): Operational, 95.33% uptime
Polygon Amoy: Operational, 98.2% uptime
Polygon zkEVM: Operational, 100.0% uptime
Polygon zkEVM Cardona Testnet: Operational, 100.0% uptime
Rootstock: Operational, 100.0% uptime
Rootstock Testnet: Operational, 100.0% uptime
Scroll Mainnet: Operational, 100.0% uptime
Scroll Sepolia: Operational, 100.0% uptime
Sei Mainnet: Operational, 100.0% uptime
Sei Atlantic Testnet: Operational, 100.0% uptime
Solana: Operational, 100.0% uptime
Solana Devnet: Operational, 100.0% uptime
Soneium: Operational, 100.0% uptime
Soneium Testnet: Operational, 100.0% uptime
Sonic: Operational, 100.0% uptime
Starknet Substreams: Operational, 100.0% uptime
Unichain Testnet: Operational, 100.0% uptime
Vana: Operational, 100.0% uptime
Vana Moksha Testnet: Operational, 100.0% uptime
Viction: Operational, 100.0% uptime
X Layer Mainnet: Operational, 100.0% uptime
X Layer Sepolia: Operational, 100.0% uptime
Zetachain: Operational, 100.0% uptime
zkSync Era: Operational, 100.0% uptime
zkSync Era Sepolia: Operational, 100.0% uptime
Upgrade Indexer - Miscellaneous: Operational, 99.99% uptime
Deployments: Operational, 100.0% uptime
IPFS: Operational, 99.98% uptime
Subgraph Logs: Operational, 100.0% uptime
Explorer: Operational, 100.0% uptime
thegraph.com: Operational, 100.0% uptime
The Graph Network Subgraph - Arbitrum: Operational, 100.0% uptime
Firehose Indexing: Operational, 100.0% uptime
Firehose Service: Operational, 100.0% uptime
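
Pages hosted on Atlassian Statuspage typically also expose the data behind this list through a public JSON API. Assuming this page follows that convention (the status.thegraph.com host below is an assumption, not confirmed here), the component list above can be fetched programmatically with a short sketch like this:

# Hedged sketch: read component names and statuses from the standard
# Statuspage v2 summary endpoint. The host is an assumed value.
import json
import urllib.request

SUMMARY_URL = "https://status.thegraph.com/api/v2/summary.json"  # assumed host

def fetch_components():
    """Return (name, status) pairs for every component on the page."""
    with urllib.request.urlopen(SUMMARY_URL) as resp:
        summary = json.load(resp)
    return [(c["name"], c["status"]) for c in summary.get("components", [])]

if __name__ == "__main__":
    for name, status in fetch_components():
        # Statuses follow Statuspage conventions, e.g. "operational",
        # "degraded_performance", "partial_outage", "major_outage".
        print(f"{name}: {status}")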
Mar 28, 2025

No incidents reported today.

Mar 27, 2025

No incidents reported.

Mar 26, 2025
Resolved - This incident has been resolved.
Mar 26, 20:16 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743015680000&orgId=1&to=1743019310320
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743015680000&orgId=1&to=1743019310320&viewPanel=75

Mar 26, 20:01 UTC
Resolved - This incident has been resolved.
Mar 26, 19:01 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1743010280000&orgId=1&to=1743013910384
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1743010280000&orgId=1&to=1743013910384&viewPanel=75

Mar 26, 18:31 UTC
Completed - The scheduled maintenance has been completed.
Mar 26, 06:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Mar 26, 03:00 UTC
Scheduled - We will be undergoing scheduled maintenance during this time.
Mar 25, 15:22 UTC
Resolved - This incident has been resolved.
Mar 26, 05:41 UTC
Investigating - [FIRING:1] Celo: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Celo: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Celo: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivigqqmtdb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742959280000&orgId=1&to=1742962910300
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742959280000&orgId=1&to=1742962910300&viewPanel=75

Mar 26, 04:21 UTC
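
The Grafana links in these alert payloads are machine-readable: the Silence URL repeats a matcher query parameter carrying URL-encoded label=value pairs, and the Dashboard and Panel URLs carry the firing window as epoch-millisecond from/to parameters. A purely illustrative, standard-library sketch for unpacking both, using the links quoted in the first Mar 26 incident above, follows:

# Illustrative only: decode the matcher and time-range parameters carried by
# the Grafana "Silence" and "Dashboard" links quoted in the alert payloads.
from datetime import datetime, timezone
from urllib.parse import parse_qs, urlparse

SILENCE_URL = ("https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana"
               "&matcher=__alert_rule_uid__%3Dcdjeivigqqmtdb&matcher=cmp%3Dpartial_outage"
               "&matcher=component%3Dindexers&matcher=priority%3DP2-High"
               "&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1")
DASHBOARD_URL = ("https://thegraph.grafana.net/d/7rcuDImZk"
                 "?from=1743015680000&orgId=1&to=1743019310320")

def matchers(url):
    """Return the label matchers (label, value) encoded in a silence link."""
    query = parse_qs(urlparse(url).query)          # parse_qs decodes %3D and '+' for us
    return [tuple(m.split("=", 1)) for m in query.get("matcher", [])]

def time_range(url):
    """Return the (from, to) window of a dashboard link as UTC datetimes."""
    query = parse_qs(urlparse(url).query)
    as_utc = lambda ms: datetime.fromtimestamp(int(ms) / 1000, tz=timezone.utc)
    return as_utc(query["from"][0]), as_utc(query["to"][0])

if __name__ == "__main__":
    print(matchers(SILENCE_URL))      # [('__alert_rule_uid__', 'cdjeivigqqmtdb'), ('cmp', 'partial_outage'), ...]
    print(time_range(DASHBOARD_URL))  # roughly 19:01 to 20:01 UTC on Mar 26, 2025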
Mar 25, 2025
Resolved - This incident has been resolved.
Mar 25, 14:53 UTC
Investigating - [FIRING:1] Polygon (Matic): Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0, B1=0
Labels:
- alertname = Polygon (Matic): Block ingestor lagging behind
- cmp_Polygon (Matic) = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- statuspage = Polygon (Matic): Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ZdG_62MVk/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DZdG_62MVk&matcher=cmp_Polygon+%28Matic%29%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=61

Mar 12, 16:23 UTC
Mar 24, 2025
Resolved - This incident has been resolved.
Mar 24, 11:09 UTC
Update - [Comment from Opsgenie] Leonardo Schwarzstein acknowledged alert: "[Grafana]: Subgraph Health: Too many indexing errors"
Mar 19, 15:27 UTC
Investigating - [FIRING:1] Subgraph Health: Too many indexing errors Production (partial_outage indexers high P2-High true Network Engineering)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: [no value]
Labels:
- alertname = Subgraph Health: Too many indexing errors
- cmp_Hosted Service - Subgraph Health = partial_outage
- component = indexers
- grafana_folder = Production
- level = high
- priority = P2-High
- statuspage = true
- team = Network Engineering
Annotations:
- Error = [sse.dataQueryError] failed to execute query [A]: invalid character '
- grafana_state_reason = Error
- statuspage = Subgraph Health: Too many indexing errors
Source: https://thegraph.grafana.net/alerting/grafana/GGM_62G4k/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3DGGM_62G4k&matcher=cmp_Hosted+Service+-+Subgraph+Health%3Dpartial_outage&matcher=component%3Dindexers&matcher=level%3Dhigh&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DNetwork+Engineering&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=69

Mar 19, 14:21 UTC
Mar 23, 2025

No incidents reported.

Mar 22, 2025

No incidents reported.

Mar 21, 2025
Resolved - This incident has been resolved.
Mar 21, 15:13 UTC
Investigating - [FIRING:1] Polygon-zkEVM: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Polygon-zkEVM: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Polygon-zkEVM: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivgbjdm2oe/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivgbjdm2oe&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742562790000&orgId=1&to=1742566427141
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742562790000&orgId=1&to=1742566427141&viewPanel=116

Mar 21, 14:13 UTC
Mar 20, 2025

No incidents reported.

Mar 19, 2025
Resolved - This incident has been resolved.
Mar 19, 15:36 UTC
Investigating - [FIRING:1] Arbitrum-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Arbitrum-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Arbitrum-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/edjeivgjdk54wd/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dedjeivgjdk54wd&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742394030000&orgId=1&to=1742397666881
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742394030000&orgId=1&to=1742397666881&viewPanel=139

Mar 19, 15:21 UTC
Mar 18, 2025
Resolved - This incident has been resolved.
Mar 18, 16:53 UTC
Investigating - [FIRING:1] Linea-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Linea-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Linea-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivigqqmtca/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivigqqmtca&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1
Panel: https://thegraph.grafana.net/d/7rcuDImZk?orgId=1&viewPanel=149

Mar 18, 15:48 UTC
Mar 17, 2025

No incidents reported.

Mar 16, 2025
Resolved - This incident has been resolved.
Mar 16, 00:08 UTC
Investigating - [FIRING:1] Base-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Base-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/cdjeivf5alfk0a/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dcdjeivf5alfk0a&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078900000&orgId=1&to=1742082530435
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078900000&orgId=1&to=1742082530435&viewPanel=127

Mar 15, 23:48 UTC
Resolved - This incident has been resolved.
Mar 16, 00:01 UTC
Investigating - [FIRING:1] Mainnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Mainnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Mainnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivfrf71tsb/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivfrf71tsb&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078450000&orgId=1&to=1742082085253
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078450000&orgId=1&to=1742082085253&viewPanel=43

Mar 15, 23:41 UTC
Resolved - This incident has been resolved.
Mar 16, 00:01 UTC
Investigating - [FIRING:1] Arbitrum-Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Arbitrum-Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Arbitrum-Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/edjeivgjdk54wd/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dedjeivgjdk54wd&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742075730000&orgId=1&to=1742079366929
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742075730000&orgId=1&to=1742079366929&viewPanel=139

Mar 15, 22:56 UTC
Mar 15, 2025
Resolved - This incident has been resolved.
Mar 15, 23:59 UTC
Investigating - [FIRING:1] Sepolia: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Sepolia: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Sepolia: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/fdjeivgj62g3kd/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dfdjeivgj62g3kd&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078960000&orgId=1&to=1742082595308
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078960000&orgId=1&to=1742082595308&viewPanel=120

Mar 15, 23:49 UTC
Resolved - This incident has been resolved.
Mar 15, 23:57 UTC
Investigating - [FIRING:1] Arweave-Mainnet: Block ingestor lagging behind Production (partial_outage indexers P2-High true Tech Support)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Arweave-Mainnet: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Tech Support
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Arweave-Mainnet: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/bdjeividov3lsc/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dbdjeividov3lsc&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DTech+Support&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078830000&orgId=1&to=1742082466926
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078830000&orgId=1&to=1742082466926&viewPanel=108

Mar 15, 23:47 UTC
Resolved - This incident has been resolved.
Mar 15, 23:56 UTC
Investigating - [FIRING:1] Base: Block ingestor lagging behind Production (partial_outage indexers P2-High true Infrastructure)
https://thegraph.grafana.net/alerting/list

**Firing**

Value: B0=0
Labels:
- alertname = Base: Block ingestor lagging behind
- cmp = partial_outage
- component = indexers
- grafana_folder = Production
- priority = P2-High
- statuspage = true
- team = Infrastructure
Annotations:
- description = Triggered when the block ingestor is lagging
- statuspage = Base: Block ingestor lagging behind
Source: https://thegraph.grafana.net/alerting/grafana/ddjeivil1kc8we/view?orgId=1
Silence: https://thegraph.grafana.net/alerting/silence/new?alertmanager=grafana&matcher=__alert_rule_uid__%3Dddjeivil1kc8we&matcher=cmp%3Dpartial_outage&matcher=component%3Dindexers&matcher=priority%3DP2-High&matcher=statuspage%3Dtrue&matcher=team%3DInfrastructure&orgId=1
Dashboard: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078800000&orgId=1&to=1742082436920
Panel: https://thegraph.grafana.net/d/7rcuDImZk?from=1742078800000&orgId=1&to=1742082436920&viewPanel=118

Mar 15, 23:47 UTC
Mar 14, 2025

No incidents reported.