
Anantha.P

An attack never begins with alarms.
Nobody in the IT team gets a notification that says "UNKNOWN ACCESS DETECTED" or "DATA BREACHED — ALERT!!!" (three exclamation marks and all). It's mostly just a silent login that should not have happened. No forced entry. No dramatic breach. Just access: access so normal that it goes completely unnoticed.
And what follows over the next 24 hours isn't chaos. It's a quiet, methodical process that is entirely invisible to everyone inside the organisation.
Hour 0–3: Nothing looks wrong. That's the problem.
An attacker gets in.
It could be through a stolen password picked up from a previous data leak or simply bought off the dark web for a few dollars. It could be a phishing link that an employee clicked on a tired Tuesday afternoon. A misconfigured system that nobody got around to fixing. A vendor connection that was set up six months ago and never reviewed since.
The method varies. The outcome doesn't.
From the moment access is gained, no alarms go off because nothing has technically broken. The system sees what looks like a valid user logging in. It does exactly what it's supposed to do. It lets them in.
That's the first problem. And it isn't a technology problem. It's a visibility problem.
Hour 3–8: They're learning your environment.
The attacker isn't rushing. There's no reason to.
They're mapping what systems exist, where sensitive data might live, what their current level of access allows. If their access is limited, they start looking for ways to expand it. This could mean exploiting a software vulnerability, finding an admin account with a weak password, or simply poking around until they find a system that wasn't locked down properly.
This phase is quiet and deliberate. Most organisations don't detect anything here, not because the tools don't exist, but because nothing looks urgent enough to flag. Everything appears normal. Because to the system, it still is.
Hour 8–12: Access spreads.
The attacker begins moving laterally across systems, through internal tools, along the same trusted pathways your own teams use every day.
To make this concrete: imagine a hospital environment. The attacker starts with access to one staff login, maybe a receptionist's account. From there, they can reach the appointment scheduling system. That system is connected to the patient records database. The patient records database shares credentials with the billing software. And the billing software has an integration with the insurance claims portal.
One login. Four systems.
This is also where shared credentials become a serious problem. Shared credentials mean that multiple people, sometimes an entire department, use the same username and password to access a system. Everyone knows it's a bad idea. Nobody changes it because it's inconvenient. And when an attacker gets hold of that one password, they don't just have one account. They have everyone's access at once.
At this point, the attacker isn't just "in." They're operating inside the environment, moving, exploring, and expanding, just like any regular employee would.
Hour 12–18: The breach becomes real.
Sensitive data is located. Accessed. Sometimes copied, sometimes staged for extraction, sometimes simply observed.
The objective isn't always immediate theft. It could be persistence: staying inside long enough to be useful at a later date. It could be leverage for a ransom demand. It could be quiet preparation for a larger move. In some cases, the attacker isn't even planning to use the data themselves; they're collecting it to sell.
But here's the thing. Impact has already begun. Whether anyone inside the organisation knows it or not.
Hour 18–22: Something feels off.
An unusual login from an odd location. A system running slightly slower than usual. An alert sitting in a queue among fifty others that nobody's gotten to yet.
It gets noticed. Checked. Questioned. And, in many cases, dismissed or pushed down the priority list, because the signal isn't clear enough to justify raising an alarm. And raising an alarm has consequences: it means pulling in leadership, potentially disrupting operations, and being wrong in front of everyone if it turns out to be nothing.
So people hesitate. They wait for more certainty.
This is where time is lost. Not because people aren't capable, but because the signal is unclear, nobody is sure who owns the decision to escalate, and certainty feels like a prerequisite for action.
It isn't.
Hour 22–24: The dots start connecting.
Multiple anomalies. Access patterns that don't match normal behaviour. Systems doing things outside their usual parameters.
Now it's no longer just a check. It's an incident.
Leadership gets pulled in. Access starts getting restricted. Forensic investigation begins. And the conversation shifts from "what happened here?" to "what is still happening right now?"
For the first time in 24 hours, the organisation is moving fast. Unfortunately, the attacker has had a 24-hour head start.
What this actually means for your business
A breach is not a moment. It's a sequence. And the first 24 hours are rarely defined by dramatic failure; they're defined by delayed clarity.
Most organisations don't struggle because they lack tools. They struggle because early signals don't get acted on, nobody is sure who owns the response, and the structure for making fast decisions under uncertainty simply isn't there.
By the time certainty arrives, the situation has already moved on.
