
We have seen no evidence that sensitive data was accessed

How cyber breach statements reassure the public without saying much at all

[Image] A collage of heavily redacted government documents underscores how much of the story can be blacked out before it reaches the public.

When a company suffers a cyberattack, there is one line you can almost set your watch by:

“At this time, we have seen no evidence that sensitive data was accessed.”

It sounds reassuring. It suggests that customer records are safe, no one’s identity is at risk, and the incident is under control. But read it carefully. The statement is not about what actually happened to the data. It is about what the organization has been able to see.

In many incidents, visibility is the core problem. Logs may be incomplete, overwritten, or misconfigured. Critical systems may never have been monitored at all. Ransomware operators are not in the habit of leaving a tidy audit trail of every file they quietly staged for exfiltration before encrypting the network. In that context, “we have seen no evidence” can simply mean “we lack the ability to know for sure.”


Timing matters too. These statements often appear within days of an attack being discovered, long before a full forensic review is complete. At that stage, “no evidence” often means “we have not finished looking” or “our investigators have not yet found the smoking gun.” Weeks or months later, when breach notification letters go out, the same organization may concede that names, Social Security numbers, medical information, or financial data were in fact exposed.

There is also a legal angle. Saying “we have seen no evidence” is safer than saying “no data was accessed.” The first is a description of the company’s current knowledge. The second is a factual claim that can be disproven in court. Carefully chosen language allows organizations to appear confident today while leaving plenty of room to revise the story tomorrow.

For the public, the phrase should be treated as a starting point, not a conclusion. When you see “we have seen no evidence that sensitive data was accessed,” the fair translation is closer to: either nothing was stolen, or we cannot tell yet, or we do not have the telemetry to know one way or the other. The words are technically accurate. They are also, on their own, almost meaningless.

Downplaying impact

The first bucket of breach language is all about making a serious incident sound small. Companies rarely say “a lot of data was exposed.” Instead they reach for phrases like “we have seen no evidence that sensitive data was accessed,” “we have no evidence of misuse,” and “only a small subset of individuals was affected.”

Each of these lines tells you more about the organization’s messaging strategy than about the actual scope of the breach. “No evidence” and “no evidence of misuse” focus on what the company can see from its narrow vantage point, not on what may be happening with stolen data elsewhere. Once information leaves the network, the victim organization has almost no ability to track how it is used, resold, or combined with other leaks. The absence of visible fraud is not proof that the data is safe. It often just reflects the limits of the organization’s view.

“Small subset” sounds modest, but it is usually presented without context. A “small subset” of a million customers could still be tens of thousands of people. A “limited number” of records could be enough to fill a state’s identity theft help line for months. Without a numerator, a denominator, or even an order of magnitude, the phrase is functionally meaningless. It shrinks the perceived harm without providing any facts that can be tested.

These formulas are designed to nudge readers toward a comforting conclusion: the incident was contained, the risk is modest, and most people can relax. In practice, they should prompt the opposite reaction. When you see “no evidence of misuse” or “small subset,” the real takeaway is that the organization is choosing soft, elastic language instead of concrete numbers and clear risk statements.

This is not just a matter of a few sloppy press releases. An investigation by The 74 and Wired into more than 300 K-12 school cyberattacks found the same pattern playing out over and over again: districts offering early assurances that no sensitive information had been exposed, then backtracking months later when breach notices and leak sites told a different story. The reporters traced much of that messaging to “breach coach” law firms and insurers running incident response under attorney–client privilege, a structure that rewards technically accurate language that keeps families in the dark for as long as possible.

Obscuring cause

The second bucket of breach language focuses on what happened, while carefully avoiding why it was possible in the first place. This is where phrases like “a sophisticated cyberattack,” “a highly resourced threat actor,” or the vague “cyber event” come into play.

“Sophisticated attack” is the workhorse here. It suggests that the organization was targeted by an exceptional adversary, the sort of operator who would have gotten in no matter what. Sometimes that is true. More often, the entry point is depressingly ordinary: an employee who clicked a phishing email, a remote access service without multifactor authentication, a firewall or VPN left unpatched for months. Calling that “sophisticated” shifts attention away from those basic failures and toward a faceless, elite opponent.

“Cyber event” and “network incident” are even more evasive. They describe an entire ransomware outbreak or mass data exfiltration in the same neutral language you might use for a brief Wi-Fi outage. The choice of terms is deliberate. “Ransomware attack” and “data breach” have regulatory consequences and invite scrutiny. “Cyber event” sounds technical and harmless, a temporary disturbance in the IT department rather than a major breakdown of risk management.

This vocabulary has another advantage for the organization: it buys time. As long as the incident is described only as a “cyber event,” they can avoid answering specific questions about how attackers got in, how long they were inside, or what controls failed. By the time those details emerge in required notification letters, the news cycle has usually moved on. Readers are left with a hazy image of a powerful, mysterious attack, rather than a clear view of preventable, human decisions.

Reassurance theater

The third bucket is all about projecting control, competence, and concern, even when the facts are still unsettled. This is where familiar lines like “the security of your information is our top priority,” “we are working with leading third-party cybersecurity experts,” and “out of an abundance of caution” make their appearance.

“The security of your information is our top priority” is almost always paired with an admission that the organization failed to protect that same information. It is a branding statement, not a factual one. If security truly were the top priority, many of the basic safeguards that would have prevented the breach – patching, segmentation, logging, multifactor authentication – would have been in place already. The phrase persists because it signals empathy and seriousness without requiring accountability.

“We are working with leading experts” serves a similar function. It sounds impressive, but it tells you nothing about what the organization is actually doing to fix the problems that allowed the intrusion. In practice, almost every victim hires outside forensic support, whether they call them “leading experts” or not. The phrase is meant to reassure readers that the situation is being handled by professionals, while quietly pushing questions about internal capability into the background.

Then there is “out of an abundance of caution,” perhaps the most flexible piece of reassurance theater. It is used when offering credit monitoring, forcing password resets, or issuing notifications that regulators effectively require. What the phrase really communicates is: we are taking steps we have to take, but we want you to believe we are doing even more than necessary. It reframes compliance or damage control as voluntary overperformance.

All of these statements share a common purpose: to create the impression of responsibility and transparency without committing to specifics. They soothe readers, but they often leave them no clearer about what went wrong, how much data was exposed, or what the organization will do differently going forward.

Framing it as “technical issues”

There is also a special category of breach language designed to avoid saying the word “cyberattack” for as long as possible. When systems go down, organizations often announce “technical issues,” “IT outages,” or “a system disruption” even when internal teams already know they are dealing with ransomware or unauthorized access. This is not an accident. It is a deliberate choice to buy time, control the narrative, and avoid committing to a term that has legal and regulatory weight.

“Technical issues” is the most common placeholder. It suggests a broken server, a glitchy update, or some routine maintenance mishap. What it really means is: we do not want to call this a cyber incident yet. In many cases, by the time the public sees that phrase, the organization has already isolated compromised systems, pulled logs for forensic review, or called an incident response firm. They know what they are responding to. They simply have not said it out loud.

“System disruption” or “IT outage” serves the same purpose. These phrases lump ransomware encryption, command and control activity, and deliberate sabotage into the same category as a failed storage array or a misconfigured firewall. It is a way to keep the nature of the problem deliberately vague until executives have had time to assess legal exposure, notify insurers, and coordinate talking points.

Why avoid the word “cyberattack” in the early hours? Because the moment an organization uses that term, expectations shift. Reporters start calling. Regulators start paying attention. Customers want details the organization may not yet have, or may not want to share. By keeping the language generic, they preserve flexibility. If it turns out to be ransomware, they can later say they were cautious in the early stages. If details are slow to emerge, they can argue the investigation was evolving.

The result is a familiar pattern:
Day 1: “We are experiencing technical issues.”
Day 3: “We are investigating an IT security matter.”
Day 5: “We experienced a cybersecurity incident.”
Day 20: “Some personal information was accessed.”

The language evolves only when circumstances force it. For readers, the translation is simple: when you see “technical issues” coinciding with widespread outages, system shutdowns, or calls for patience, assume the organization suspects a cyberattack. They just are not ready to say the word yet.

A simple decoder ring

If breach statements feel slippery, it is because many of the most common phrases have a second meaning hiding under the surface. Here is a quick translation guide for readers who want to understand what these lines typically signal in practice.

  • “We have seen no evidence that sensitive data was accessed.”
    Translation: We do not have proof yet, but we also do not have the logging or visibility to rule it out.
  • “We have no evidence of misuse.”
    Translation: We have no way to track what happens to the data once it leaves our network.
  • “A small subset of individuals was affected.”
    Translation: We are not telling you how many people were impacted, but we want you to assume it is small.
  • “A sophisticated cyberattack.”
    Translation: The entry point may have been embarrassingly simple, but we are framing it as unstoppable.
  • “Cyber event” or “network incident.”
    Translation: We are not ready to say “ransomware” or “data breach,” and we would prefer you do not ask yet.
  • “We take your privacy and security very seriously.”
    Translation: This is boilerplate filler added to soften the fact that our controls failed.
  • “We are working with leading third-party cybersecurity experts.”
    Translation: We hired an incident response firm, like everyone does in these situations.
  • “Out of an abundance of caution, we are…”
    Translation: Regulators or legal counsel told us to do this, but we would like credit for being proactive.
  • “As soon as we became aware…”
    Translation: We are starting the clock at the moment we noticed, not when the attackers first got in.
  • “The investigation is ongoing, and we cannot share further details.”
    Translation: We are not prepared to discuss what went wrong, or we are worried the specifics will look bad.

None of these translations assume malice. They reflect how organizations navigate legal exposure, regulatory requirements, messaging control, and limited technical visibility during a crisis. But once you understand the difference between what the words imply and what they actually state, breach statements become a lot easier to read for substance, and for silence.

A more honest way forward

Critiquing breach language is not the same as blaming every organization that finds itself in a crisis. Cyberattacks happen even to well-prepared environments, and early information is often incomplete. But there is a way to communicate with the public that respects both uncertainty and accountability without leaning on empty phrases.

More honest communication starts with concrete numbers or ranges. If an investigation is still underway, organizations can say so while still giving meaningful estimates: tens, hundreds, thousands. Even rough orders of magnitude tell readers far more than “a limited number.”

It also means offering quantified timelines. Instead of “as soon as we became aware,” provide real markers: when the suspicious activity began, when it was detected, when systems were taken offline, when notifications went out. Timelines not only inform the public, they demonstrate that the organization understands its own incident response.

Honesty includes acknowledging specific security changes, not just that “safeguards have been enhanced.” If multifactor authentication is being rolled out, if logging gaps are being closed, if a vulnerable system is being retired, say so. These details matter because they show the organization is correcting root causes rather than managing optics.

Most importantly, clear communication demands a straightforward distinction between “we do not know yet” and “we know, but it is bad.” Uncertainty is not a failure. Obscuring known damage is. When organizations draw that line openly, they build trust even in a difficult moment. When they blur it, readers learn to treat every word as spin.

In the end, better breach communication is not complicated. It is factual, proportional, and transparent about what is known and what is still unfolding. It replaces formulas with clarity, and performance with candor. In a landscape where cyber incidents are no longer rare exceptions, that kind of honesty is not just refreshing. It is necessary.

Joseph Topping

A writer, intelligence analyst, and technology enthusiast passionate about the connection between the digital and physical worlds. The views expressed here are his own and do not necessarily reflect those of his employer; he writes as an individual.
