Twelve Months of Disruption: How Cyberattacks Tested U.S. Emergency Response

Texas National Guard Soldiers assist residents onto a high-water vehicle during Hurricane Harvey response, Aug. 27, 2017. Photo by Staff Sgt. Tim Pruitt, Texas National Guard, via Flickr (CC BY 2.0).

Guest contribution by Joseph Topping, security researcher. The views expressed are solely the author’s and do not represent any employer, agency, or organization. He writes in a personal capacity using only publicly available information.


On the 20th anniversary of Hurricane Katrina, I keep coming back to a simple question: if we faced a “Katrina 2.0” today, could a well-timed cyberattack make an already bad day even worse? Disaster response—the coordinated mobilization of agencies and resources to protect life and property when normal systems fail—is a system of systems. Emergency response—911 call-taking, dispatch, field units, and emergency departments (EDs)—is one layer of that system. Over the past year, I tracked how cyber incidents actually affected it.

All findings derive from publicly available sources—official statements, regulatory/breach notices, and reputable media. Leak-site claims are treated as unverified unless corroborated.

From August 2024 to July 2025, I reviewed 115 U.S. incidents involving organizations that deliver or directly enable emergency services. Of those, 35 (30.4%) had verified operational impacts (ER diversions/slowdowns; EHR/pharmacy outages; dispatch software failures; reported phone outages or degradations), 33 (28.7%) were explicitly “no impact,” and 47 (40.9%) were undetermined—not because nothing happened, but because public detail was thin or delayed.

Verified impacts clustered in healthcare: providers accounted for 22 of the 35. Municipal governments reported five, county governments three, law enforcement two, tribal governments two, and dispatch/ECCs one. Even when 911 remained operational, failures in phones, EHRs, and CAD (computer-aided dispatch) systems forced workarounds—paper packets, radio, alternative numbers—that kept people safe but slowed everything down.

That’s why this look-back is framed as a disaster-response story, not just a cybersecurity one. In a large-scale event, it doesn’t take a catastrophic hack to change outcomes. A small, well-timed nudge—jamming call-taking, degrading dispatch, hobbling an ED, or constraining a lifeline like water or fuel—can multiply the damage. What follows is where the system bent, where it held, and what needs to change before the next storm and the next outage arrive together.

Monthly Incident Timeline (Aug 2024–Jul 2025)

Scope & methodology

Timeframe. Aug 1, 2024–Jul 31, 2025; 115 U.S. incidents tied to emergency services.

Who’s in scope. Entities whose disruption can change real-world response:

  • Healthcare providers (hospitals/health systems, clinics, imaging).
  • 911/ECC/PSAPs (public safety answering points / emergency communications centers).
  • Law enforcement (police, sheriff).
  • Municipal/county governments where IT outages plausibly touch 911/EMS/field operations.
  • Tribal governments and health services.
  • Critical lifelines / supply chain (e.g., blood supply, water, fuel).

Case study: Blood supply operations — New York Blood Center Enterprises (Jan–Feb 2025)

On January 26, 2025, New York Blood Center Enterprises (NYBCe) detected suspicious activity later confirmed as ransomware, forcing the nonprofit to take systems offline and postpone donor appointments and community blood drives while it stabilized operations. Work shifted to manual workflows, slowing processing and check-in, and inbound calling was disrupted across several divisions—creating longer waits and rescheduling cascades at the worst possible moment for inventories. NYBCe, which supplies blood to 400+ hospitals, coordinated with peer centers to maintain hospital supply while restoring capacity. By February 3, collections resumed system-wide, but some manual processes remained as phones and IT were brought back in stages. The episode shows how quickly a cyber event can ripple through the blood chain: scheduling → donor flow → testing/labeling → distribution, with even brief telecom and IT outages amplifying delays. It’s an illustrative case for downtime playbooks, redundant scheduling/call intake, and cross-center mutual aid.

Incident Locations (Aug 2024–Jul 2025)

How events were found. A curated seed list plus systematic searches of official statements, reputable media, state/tribal notices, health-breach disclosures, and sector reporting. Leak-site posts were treated as claims, not facts, absent corroboration.

How “impact” was decided.

  • Yes—credible sources explicitly reported operational effects on emergency services or patient care (e.g., ER diversion/slowdown; EHR/pharmacy outages affecting care; CAD/dispatch degradation; 911 or critical telephony disruption). “Verified” means corroborated by at least one public source.
  • No—officials explicitly stated critical operations were unaffected.
  • Undetermined—impact not addressed or unverifiable; no inference made from “ransomware” alone.
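
Expressed as a decision procedure, the rule above can be sketched in a few lines of Python. This is illustrative only: the field names (for example, reported_effects and official_no_impact_statement) are hypothetical, and the actual determinations were made by reading sources, not by script.

    # Illustrative sketch of the Yes / No / Undetermined labeling rule described above.
    # Field names are hypothetical; real determinations were made by reading sources.
    OPERATIONAL_EFFECTS = {
        "er_diversion", "er_slowdown", "ehr_outage", "pharmacy_outage",
        "cad_dispatch_degradation", "911_disruption", "critical_telephony_disruption",
    }

    def determine_impact(incident: dict) -> str:
        """Label one incident record as Yes, No, or Undetermined."""
        effects = set(incident.get("reported_effects", []))
        # Yes: a credible public source explicitly reported operational effects.
        if incident.get("credibly_sourced") and effects & OPERATIONAL_EFFECTS:
            return "Yes"
        # No: officials explicitly stated critical operations were unaffected.
        if incident.get("official_no_impact_statement"):
            return "No"
        # Undetermined: impact not addressed or unverifiable; never inferred from
        # the word "ransomware" alone.
        return "Undetermined"

Under this rule, a credibly sourced report of an ER diversion labels Yes, while a leak-site claim with no stated service effects stays Undetermined.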

Dates & attribution. Where sources gave conflicting dates, the occurrence/onset date took precedence. Actors were listed only when credibly named; otherwise Unknown.

Data hygiene. Names normalized, near-duplicates deduped, labels standardized, and entries rechecked against sources before final counts.

Exclusions. Purely administrative hiccups with no plausible bearing on emergency services; incidents outside the window; entries lacking evidence the entity supports emergency response.

Ethics & sourcing. Public information only; background input used for context, not for publishing unpublished specifics.

Limitations. Many agencies share little operational detail (biasing toward Undetermined); leak-site noise outpaces confirmation; some impacts surface only in after-action reviews. Where certainty wasn’t possible, labels stayed conservative.

Where and how response was impacted

Count of Determination by Sector (Aug 2024–Jul 2025)

Hospitals & Health Systems

Impacts clustered around EHR and pharmacy outages that forced paper workflows, phone/portal interruptions, and ER diversions/delays. Triage slowed, throughput dipped, and rescheduling cascaded. Clinicians spent more time finding charts and meds, calling in orders, or routing patients elsewhere. Health systems accounted for most verified impacts.

Case Study: ER diversion & clinical slowdowns — Kettering Health (OH) (May–June 2025)

A ransomware attack attributed to Interlock triggered a system-wide tech outage across Kettering Health’s 14 hospitals and 120+ sites. Epic EHR, phone lines, scheduling, and other apps were taken offline, pushing staff onto downtime workflows and paper charting. Elective procedures were canceled, and ambulances were diverted away from Kettering EDs for days, straining regional capacity. The system reported that EDs came off diversion May 28 and then began a staged restoration: core Epic components returned June 2 (allowing backlog entry), followed by stabilized phones/call centers and the resumption of retail pharmacy and surgeries. The episode illustrates how combined EHR + telecom outages, even without a total ED closure, can degrade throughput and force regional load-balancing.

911/ECC/Dispatch

Disruptions were less frequent but consequential. CAD outages and telephony issues (including TDoS, telephony denial-of-service) pushed centers to card-based dispatch and radio workarounds. Service continued, but call-handling/dispatch intervals lengthened and cognitive load rose.

Case Study: Dispatch on paper — South Shore Regional ECC (MA), Aug 2025

When the South Shore Regional Emergency Communications Center (SSRECC) detected “significant issues” with its CAD around 9:30 a.m. on Aug. 2, call takers didn’t stop—they switched to manual. Serving Cohasset, Hingham, Hull, and Norwell, the center kept 911 up using radios, paper cards, and preplanned downtime procedures. Performance degraded—but didn’t fail. Dispatchers logged calls by hand, matched units from memory and status boards, and radioed updates that CAD would normally automate (unit recommends, timestamps, AVL). Leadership emphasized that staff train for this scenario; still, the loss of CAD meant added handling time and higher cognitive load until systems were restored. The incident is a clean example of “resilience with delay”: services remained available, but throughput dipped and error risk rose as automation vanished. For regions relying on a single CAD instance, the lesson is clear—practice the paper playbook, segment supporting systems, and ensure redundant call-routing paths before the next outage.

Law enforcement

Peripheral systems—records portals, agency websites, tip lines, non-emergency phones—were hit more than radios/dispatch. Patrol and emergency response generally continued with reroutes and manual intake; the main cost was slower public-facing services and transparency.

Case study: Non-emergency degradation — Hamilton County Sheriff’s Office (OH) (Nov 2024)

A cyberattack on county IT forced the Hamilton County Sheriff’s Office to operate with degraded non-emergency services while patrol and 911 response continued. As county systems were isolated and rebuilt, the sheriff’s office rerouted non-emergency lines, suspended some online reporting/records lookups, and shifted internal coordination to radios and cell phones. Patrol units stayed on the road and calls for service flowed through 911 as usual, but the loss of back-office systems created friction: delays in records, case-number queries, and public information; manual workarounds for warrants/booking checks; and temporary interruptions to web updates and community tip lines. Supervisors emphasized that field response was intact because radio and CAD access were prioritized and segmented, but acknowledged the cost in time and transparency while administrative platforms were restored.

Counties & cities

Courts, billing, portals, email, and websites saw wider outages, while 911 usually remained insulated. Residents encountered friction—longer queues, offline forms, paper detours. Frontline effects were indirect unless phones, CAD, or radios were touched.

Case study: County resilience — Wexford County, MI (Nov 2024–Feb 2025)

On Election Day 2024, Wexford County was hit by ransomware that knocked out large swaths of county IT—including courthouse systems, phones, and email—yet 911 stayed reachable and core public safety response continued. County leaders cut network connections and shifted to workarounds: staff used cell phones and laptops, while non-emergency public calls into county offices failed. The Register of Deeds—critical for real estate transactions—remained offline for months, forcing title and closing delays and prompting the county to approve hundreds of thousands of dollars for recovery and modernization. Meanwhile, officials reported lingering issues on some sheriff’s office lines, even as emergency services remained available. The episode shows a common county-level resilience pattern: administrative and revenue-facing functions (records, payments, websites) absorb the heaviest disruption, while 911 and field response are insulated through separate systems and practiced contingencies. It also illustrates the operational and financial tail of recovery long after the initial outage ends.

Tribal governments & tribal health

Several incidents hit government and health simultaneously. The Sault Ste. Marie Tribe of Chippewa Indians (Feb 9–10, 2025) stood out: a ransomware attack cascaded across tribal administration, health centers, casinos, convenience stores, and fuel pumps—canceling clinic appointments, closing casinos, and forcing cash-only with pumps offline at some locations. Payments and telecom disruptions demanded wide workarounds. In remote areas with fewer alternatives, tightly coupled systems amplify risk.

Case study: Tribal closure ripple effects — Sault Ste. Marie Tribe of Chippewa Indians (Feb 2025)

A February 9 ransomware attack rippled across the Sault Ste. Marie Tribe’s networked footprint—tribal government, health centers, Kewadin Casinos, and MidJim convenience stores/gas stations—showing how flat or tightly coupled environments can magnify harm. Clinics and government offices opened with limited capabilities the next business day, prioritizing urgent needs while phones and IT systems were restored. All five Kewadin Casinos temporarily closed, and MidJim locations went cash-only, with fuel pumps offline at several sites, cutting off routine payment and fueling options for residents. Public updates emphasized staged recovery and resilience, but the episode illustrates why segmentation and tested downtime playbooks matter: when clinical access, payments, and fuel are intertwined, even short interruptions compound—from missed appointments to longer trips for gas to reduced local revenue. RansomHub later claimed responsibility on its site; at the time of writing, public attribution beyond that claim has not been independently confirmed.

What didn’t break (and why)

Design buffers. Many jurisdictions isolate PSAP/ECC networks from enterprise IT. When ransomware hit email or finance, call-taking stayed up because the ESInet/CPE path (emergency services IP network/customer premises equipment) sat outside the infected domain. Rollover numbers, carrier IVRs, and analog fallbacks preserved ingress.

PSAP isolation in practice. What failed most often was CAD, not call-taking. Centers with a drilled “paper mode” kept dispatching by radio/cards and backfilled timestamps later. Segmented credentials shortened outages; shared identity stores prolonged them.

Radio continuity when MDTs fail. LMR (land-mobile radio) carried the load when MDT/AVL logins broke. Voice status checks, whiteboard assignments, and read-backs preserved service—even if slower—because LMR is built to run when IP is sick (site trunking, failsoft, UPS/generator contracts).

Circuit & platform diversity. Dual carriers, separate fiber paths, and independent admin/emergency voice platforms turned TDoS and carrier hiccups into delay, not outage. In hospitals, downtime kits and paper order sets didn’t eliminate slowdowns but prevented unsafe stops.

Takeaway. Segment for containment; drill manual playbooks for time. That combination turned several potential crises into “resilience with delay.”
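
To make that pattern concrete, here is a minimal sketch of a dispatch center that keeps taking calls when CAD is unreachable by dropping to manual logging and radio coordination. It is a hypothetical model; the class and field names are invented for illustration and do not correspond to any vendor product.

    # Hypothetical model of "resilience with delay": call intake continues when CAD
    # is down, at the cost of manual logging and later back-entry.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Call:
        caller: str
        nature: str
        received_at: datetime = field(default_factory=datetime.now)

    class DispatchCenter:
        def __init__(self, cad_available: bool = True):
            self.cad_available = cad_available   # flip to False during a CAD outage
            self.paper_log: list[Call] = []      # manual fallback: run cards, status board

        def handle_call(self, call: Call) -> str:
            if self.cad_available:
                # Normal mode: CAD recommends units, stamps times, and tracks AVL.
                return f"CAD dispatch for {call.nature} at {call.received_at:%H:%M:%S}"
            # Degraded mode: log by hand, assign from the status board, radio updates,
            # and back-enter timestamps once CAD is restored.
            self.paper_log.append(call)
            return f"Paper dispatch for {call.nature}; queued for back-entry"

    center = DispatchCenter(cad_available=False)
    print(center.handle_call(Call("resident", "structure fire")))

The point of the toy model is the shape of the fallback, not the code: service stays available while throughput and record-keeping degrade until CAD returns.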

Threat landscape and claims vs. confirmations

Named groups encountered. Where named, actors are those publicly identified by the victim, law enforcement, or widely cited security researchers at the time of writing. Attributions, where present, skew toward ransomware crews (e.g., Babuk2/SatanLock, Beast/GigaKick, BlackSuit, Chort, DragonForce, Eldorado/BlackLock, Embargo, Helldown, INC, Interlock, Kairos, LockBit3, Medusa, Qilin, RansomHub, Rhysida, RunSomeWares, SafePay, Termite, ThreeAM/3AM, ValenciaLeaks, VanHelsing). Many entries remain Unknown.

Top Named Threat Actors & Most Affected Sectors (Aug 2024–Jul 2025)

Leak-site / breach-announcement reality check. 33 of 115 incidents (28.7%) were first seen via breach announcements (leak sites/third-party notices). Of those 33, 1 (3.0%) was verified impact, 5 (15.2%) were no impact, and 27 (81.8%) remained undetermined. At the dataset level, that’s 0.9% Yes, 4.3% No, 23.5% Undetermined of all 115 incidents. Bottom line: claims ≠ consequences; we count impact only when credible sources document service effects.
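
The percentages above follow directly from the raw counts, and readers can check the arithmetic in a few lines; the counts below are taken from this article, and the snippet is only a convenience for reproducing the math.

    # Reproduce the leak-site "claims vs. consequences" breakdown from the raw counts.
    TOTAL_INCIDENTS = 115
    first_seen_via_breach_announcement = {"Yes": 1, "No": 5, "Undetermined": 27}

    subset_total = sum(first_seen_via_breach_announcement.values())                      # 33
    print(f"First seen via breach announcements: {subset_total / TOTAL_INCIDENTS:.1%}")  # 28.7%

    for label, count in first_seen_via_breach_announcement.items():
        print(f"{label}: {count / subset_total:.1%} of the subset, "
              f"{count / TOTAL_INCIDENTS:.1%} of all incidents")
    # Yes: 3.0% and 0.9%; No: 15.2% and 4.3%; Undetermined: 81.8% and 23.5%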

Tactics of consequence. The big three: ransomware (encrypt/steal → EHR, pharmacy, scheduling, phones offline), telephony pressure/TDoS (slower intake/alternate numbers), and vendor/third-party pathways (hosted EHR/CAD/schedulers) that turn one compromise into many degradations. When these touch EHR/pharmacy or CAD/telephony, the result is “resilience with delay.”

The phone problem

Phones glue the operation together: scheduling, callbacks, coordination, and surge comms. Across hospitals, clinics, PSAPs, and local governments, we saw intermittent or extended outages of inbound lines and internal calling. Staff fell back to cell numbers, shared inboxes, and radios—good enough to keep moving, but slower, noisier, easier to drop. The fix isn’t glamorous: separate voice from enterprise IT, pre-publish alternates, keep analog/cellular failover, and drill “phone-down” playbooks.
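
As one way to picture that fix, the sketch below walks an ordered failover chain for inbound voice: primary platform, then a diverse secondary carrier, then analog or cellular fallback, then a pre-published alternate number, and finally the phone-down playbook. The path names and ordering are my own illustration under assumed conditions, not a reference architecture.

    # Hypothetical failover chain for inbound voice, illustrating a "phone-down" playbook:
    # try each pre-planned path in order and report which one carried the call.
    FAILOVER_CHAIN = [
        "primary_voice_platform",   # day-to-day VoIP, ideally separated from enterprise IT
        "secondary_carrier",        # diverse carrier on a separate fiber path
        "analog_or_cellular",       # POTS line or cellular failover at the facility
        "published_alternate",      # pre-published alternate number answered elsewhere
    ]

    def route_inbound_call(path_status: dict[str, bool]) -> str:
        """Return the first available path, or fall back to the phone-down playbook."""
        for path in FAILOVER_CHAIN:
            if path_status.get(path, False):
                return path
        return "phone_down_playbook"    # radios, out-of-band staff channel, status boards

    # Example: TDoS or an outage takes the primary platform down; the next path carries the call.
    print(route_inbound_call({"primary_voice_platform": False, "secondary_carrier": True}))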

Policy and preparedness implications

  • Make cyber an operational hazard. Bake it into NIMS/ICS (National Incident Management System / Incident Command System) plans, checklists, and exercises; rehearse downtime transitions.
  • Protect call-taking & dispatch first. Anti-TDoS at carriers/CPE; segment CAD; paper mode practiced quarterly; alternate numbers pre-published.
  • Hospitals/EMS. Pre-built paper order sets; downtime pharmacy/EHR playbooks; comms trees; clear diversion triggers and re-entry criteria.
  • Counties/tribes/cities. Continuity for non-emergency lines and portals; public messaging templates; alternate payment/process paths.
  • Vendors/third parties. Contract RTO/RPO, testing evidence, and notification SLAs; require segmentation-friendly architectures and status pages.

Recommendations (actionable checklist)

  • Minimum viable downtime kit (ER/ECC). Quick-cards; pre-printed forms; call trees; alternate numbers/talkgroups; status boards; radios/batteries; “phone-down/CAD-down” SOPs.
  • Quarterly no-notice drills (15–30 min). Trigger “phones” or “CAD/EHR” down; switch to paper; route critical comms via radio/out-of-band; hotwash and fix the binder.
  • Dual-path comms. Separate emergency vs. admin voice; verify LMR coverage and fail modes; maintain a single out-of-band staff channel.
  • Known-good paper workflows (top 10). ER and ECC one-pagers with fields/owners/back-entry steps.
  • Pre-approved messaging (first 2 hours). Public, EMS partners, staff, and vendor templates ready to go.

Limitations & data gaps

Sparse operational detail (silence ≠ no impact); fluid attribution; reliance on public reporting (lag and selection bias); granularity mismatches in multi-entity incidents. Treat the numbers as directional, not exhaustive; use them to inform preparedness, not as a census.

What to watch next year

NG911 rollout risks and interop; phone-system targeting; rural/tribal capacity constraints; vendor compromise routes into PSAPs and hospitals; shifts in actor tradecraft from data theft to timed operational disruption.

Credits & data access

Most media-verified incident reports were surfaced by The Dysruption Hub; breach notifications by ransomware.live. Thanks to institutions that communicated early and often; their transparency separates claims from confirmed impacts. A public data page with incident details, sources, and field definitions is available.


