Most cloud teams I work with have immutable backups configured. Few can tell me which resources are covered, and that gap is exactly what attackers count on.
What are immutable backups?
Immutable backups are copies of data that, once written, cannot be modified, deleted, or overwritten by anyone during a defined retention period.
Traditional backups rely on permissions, and permissions can be misconfigured, overridden, or compromised. That's how backups end up overwritten by misfiring retention jobs, encrypted by ransomware, or deleted by mistake.
Immutability removes that vulnerability by enforcing data permanence at the storage layer itself, not just through permissions. The key distinction is that immutability is about storage state, not access controls.
A locked object cannot be altered during the lock period, even if an attacker gains full administrative access to the backup software.
How do immutable backups work?
Immutable backups work by enforcing write-once rules at the storage layer, making it technically impossible to alter or delete backup data during a defined retention period.
Control plane vs. data plane
I keep coming back to this concept because it’s why some immutable implementations survive an attack while others don’t.
The data plane is where your data lives: files, snapshots, storage itself. The control plane is where decisions get made: admin portals, backup software, API configuration, and credentials.
Real immutability requires keeping these completely separate. If the credentials used to write data to backup storage are the same ones used to change retention policies or delete objects, then compromising those credentials means compromising everything.
In cloud environments, the console login you use to configure an S3 bucket policy is the control plane. The API key your backup software uses to write data is the data plane. Give the API key only the permissions it needs. Nothing more.
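Here's a minimal sketch of what "data plane only" looks like in practice, using boto3. The bucket and policy names are hypothetical; the point is what's missing from the policy: no delete, no bucket-policy, and no retention actions.

```python
import json

import boto3

# A minimal sketch of a data-plane-only IAM policy. Bucket and policy
# names are hypothetical; adjust to your environment.
DATA_PLANE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # The backup writer can read, write, and list objects...
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-backup-bucket",
                "arn:aws:s3:::example-backup-bucket/*",
            ],
        }
        # ...and nothing else. No s3:DeleteObject, no s3:PutBucketPolicy,
        # no s3:PutObjectRetention. Those belong to the control plane.
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="backup-writer-data-plane-only",
    PolicyDocument=json.dumps(DATA_PLANE_POLICY),
)
```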
Object locking
Think of it as a time-lock safe: once the door closes, nothing opens it until the timer runs out, no matter whose credentials are presented.
Here’s how it works in practice on AWS S3:
- Set a retention policy in the S3 console: nothing can be deleted for 30 days.
- Generate an API key for your backup software with data plane access only.
- That API key can read and write data, but cannot delete any locked object, regardless of who controls the backup software.
Even if an attacker extracts that API key from a compromised backup server, they cannot instruct S3 to delete locked objects. The lock is enforced at the storage layer, not the application layer.
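For the curious, here's roughly what that flow looks like with boto3. Names are hypothetical, the bucket must have been created with Object Lock (and therefore versioning) enabled, and the COMPLIANCE mode used here is explained just below.

```python
from datetime import datetime, timedelta, timezone

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Write a backup object locked for 30 days. The bucket itself must have
# been created with Object Lock enabled, which is a control-plane decision.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/db-2025-01-15.dump",
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)

# Even with valid credentials, a permanent delete of the locked version
# is rejected by the storage layer itself.
versions = s3.list_object_versions(
    Bucket="example-backup-bucket", Prefix="backups/db-2025-01-15.dump"
)
version_id = versions["Versions"][0]["VersionId"]
try:
    s3.delete_object(
        Bucket="example-backup-bucket",
        Key="backups/db-2025-01-15.dump",
        VersionId=version_id,
    )
except ClientError as err:
    print("Delete rejected:", err.response["Error"]["Code"])  # AccessDenied
```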
AWS S3 Object Lock runs in two modes:
- Governance mode: Users holding the s3:BypassGovernanceRetention permission can override the lock. Useful for testing, not for production data that needs to survive a credential compromise.
- Compliance mode: No one, including the root account, can override or shorten the retention period. This is what I recommend for anything business-critical. The sketch below shows how small the difference looks in code, and how large it is in practice.
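The practical difference comes down to a single API parameter. A hedged sketch with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# In Governance mode, a caller holding s3:BypassGovernanceRetention can
# remove a locked version by explicitly asking to bypass the lock:
s3.delete_object(
    Bucket="example-backup-bucket",
    Key="backups/db-2025-01-15.dump",
    VersionId="example-version-id",
    BypassGovernanceRetention=True,  # succeeds in Governance mode only
)

# In Compliance mode the same call fails for every identity, including
# the account root user, until the retain-until date passes.
```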
Azure Immutable Blob Storage and GCS Retention Policies follow the same principle, with slightly different implementation details per provider.
Logically air-gapped vaults
For the highest level of protection, some platforms now support logically air-gapped backup vaults. Unlike traditional air-gapping (which requires physical disconnection), logical air-gapping uses cloud-native isolation techniques to create a vault that's functionally isolated from your day-to-day infrastructure.
Eon's approach is a logically air-gapped vault hosted in a separate Eon-managed cloud account, distinct from the customer's infrastructure. Backup data lives in an isolated account boundary that an attacker compromising production credentials cannot reach.
Combined with compliance-mode object locking, this provides defense-in-depth: even if an attacker compromises the primary environment, they can't access the isolated vault to delete or encrypt recovery points.
Snapshots
Snapshots capture the exact state of your data at a point in time. As data changes, new blocks are written elsewhere while the snapshot keeps the originals intact.
The part most teams miss: snapshots are only immutable if control-plane credentials are kept separate from data-plane credentials. If the same credentials that control your storage layer also control the share your backup software writes to, a single compromise gives an attacker access to both, including the ability to delete snapshots.
Backup API keys should not carry admin-level IAM permissions on the underlying storage account. Write access is not control access. The sketch below shows the snapshot-side equivalent of an object lock.
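AWS now extends the same storage-layer locking to EBS snapshots. Here's a sketch, assuming the EBS Snapshot Lock feature is available to you; the IDs are hypothetical and the parameters are worth verifying against current AWS documentation for your region and SDK version.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a snapshot with data-plane credentials and wait for it to finish.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])

# Lock it in compliance mode: no credential, including the one that
# created the snapshot, can delete it for 30 days.
ec2.lock_snapshot(
    SnapshotId=snapshot["SnapshotId"],
    LockMode="compliance",
    LockDuration=30,  # days
)
```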
Replication credential isolation
The direction of replication has a direct impact on how resilient your backups are under attack, yet I rarely see teams explicitly consider this.
The principle that matters is credential isolation between source and destination, regardless of whether replication is push or pull. If the source environment is compromised, the destination should not depend on the source's credentials or policies to maintain its own backup integrity.
In a self-built backup architecture, pull replication (where the destination pulls from the source) often makes credential isolation easier because the source has no credentials for the destination.
On platforms that orchestrate replication via cloud provider APIs, push-based orchestration with separate IAM roles for the source and destination provides the same protection: a source compromise doesn't automatically compromise replicated copies.
The architectural goal is the same either way: a credential boundary between source and destination that an attacker can't cross by compromising one side.
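As a concrete sketch of that boundary in an S3-to-S3 setup (all ARNs and account IDs hypothetical, and versioning must be enabled on both buckets): the replication role can write replicas into the destination account, but the destination's retention settings are governed by the destination account alone.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="example-production-bucket",
    ReplicationConfiguration={
        # This role can write replicas to the destination, nothing more.
        # It is not the role that manages the destination's retention.
        "Role": "arn:aws:iam::111111111111:role/replication-writer-only",
        "Rules": [
            {
                "ID": "replicate-to-isolated-vault",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # The destination lives in a separate account whose
                    # control plane the source account cannot touch.
                    "Bucket": "arn:aws:s3:::example-backup-vault",
                    "Account": "222222222222",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)
```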
Eon, for example, uses push orchestration through cloud provider APIs for efficiency and cost-effectiveness, with separate IAM roles for source and destination so that compromising the source doesn't automatically compromise replicated copies.
Why immutable backups matter in cloud environments
Immutable backups matter because ransomware, insider threats, and accidental deletion can all reach backup data. In cloud environments, the attack surface is larger and harder to see than most teams expect.
Ransomware targets the backup first
Experienced ransomware attackers don't encrypt first. They find your backups first, because an organization that can't recover has no choice but to pay.
I've reviewed enough post-incident reports to know the sequence: credential compromise through phishing, lateral movement using native cloud tools, backup storage reached within hours, backups deleted or encrypted, then production systems hit. At that point, there is no clean recovery path.
The scale of this problem shows up clearly in independent research. Verizon's 2025 Data Breach Investigations Report, which analyzed more than 12,000 confirmed breaches, found ransomware present in 44% of all breaches reviewed, up from 32% the prior year.
CISA's updated Akira ransomware advisory names backup servers as a primary target, alongside VPNs and edge devices, with Akira alone claiming roughly $244 million in ransomware proceeds as of late September 2025.
Attacks are also timed deliberately. Long weekends, holiday periods, reduced teams. The window between compromise and detection is longer, which gives attackers more time to reach backup infrastructure unnoticed.
Immutability closes the backup-destruction vector. A properly locked object cannot be deleted or encrypted during the retention period, regardless of who has the API key.
Accidental deletion and policy drift are more common
Most backup data loss I see isn’t from ransomware. It’s from misconfigured lifecycle rules and accidental deletion, and it gets far less attention.
A misconfigured lifecycle rule silently deletes months of backups. An admin removes what they think are redundant copies. A cloud migration moves data to a new account structure, but the backup policy doesn’t follow.
Immutability provides a hard floor: deletion requests are rejected by the storage layer.
Compliance requires proof, not just policy
GDPR, HIPAA, and SOC 2 don't just require data retention. They require proof that the retained data hasn't been tampered with.
Immutable backups provide cryptographic integrity at the storage layer. For teams subject to HIPAA audit requirements or SOC 2 Type II, demonstrating that backup data has remained unchanged since it was written is fundamentally different from merely confirming that backups ran successfully.
Immutable backup vs. traditional backup
Traditional backups rely on permissions. Immutable backups enforce protection at the storage layer. That difference is what determines whether your backups survive an attack.
The gap nobody talks about: immutable doesn’t mean covered
Having immutable backup settings configured is not the same as knowing every cloud resource in your environment is protected.
This is the most common problem I find when reviewing large cloud backup architectures. Everyone assumes coverage is complete. In practice, that assumption breaks quickly.
Scale is where it breaks. In a 50-account, multi-region AWS environment, new resources spin up continuously. A new RDS instance, a DynamoDB table, and an EKS cluster: none of them automatically inherit the backup policy from existing resources unless the setup explicitly enforces it.
Without automated discovery and policy assignment, those resources go unprotected from day one.
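You can see the shape of the problem with a simple check. Here's a sketch that compares the RDS instances that exist in one account and region against what AWS Backup reports as protected; a real environment multiplies this across every account, region, and resource type, which is exactly what CBPM automates.

```python
import boto3

rds = boto3.client("rds")
backup = boto3.client("backup")

# Every RDS instance that exists in this account and region.
existing = {
    db["DBInstanceArn"]
    for page in rds.get_paginator("describe_db_instances").paginate()
    for db in page["DBInstances"]
}

# Every resource AWS Backup currently protects.
protected = {
    res["ResourceArn"]
    for page in backup.get_paginator("list_protected_resources").paginate()
    for res in page["Results"]
}

# Anything in the first set but not the second has no backup coverage.
for arn in sorted(existing - protected):
    print("UNPROTECTED:", arn)
```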
Policy drift makes it worse. A resource gets migrated to a new account, re-tagged, or restructured during a modernization project. The backup policy that covered it before doesn’t follow. Nobody notices until a recovery is attempted.
The situation I find most often isn't an organization with no immutable backups. It's one where backups are configured on part of the environment, and nobody can tell me what's outside the policy.
This is the problem Cloud Backup Posture Management (CBPM) is built to solve. CBPM continuously discovers and classifies cloud resources across accounts and regions, enforces backup policies based on data type and criticality, and surfaces coverage gaps before they become recovery failures.
Policies follow the resource, not the tag. Every new RDS instance, DynamoDB table, or EKS cluster inherits the right policy based on what it is, not what someone remembered to label it. No manual tagging, no periodic audits that are already out of date by the time they run.
Best practices for immutable backups in cloud environments
The best practices for immutable backups go beyond setting a retention lock. They cover how you enforce coverage, validate backup integrity, and confirm you can recover what you need.
1. Use Compliance mode for critical data
Governance mode allows users with specific IAM permissions to override the retention period. That’s not actual immutability. It’s immutable unless someone with the right permissions decides otherwise.
Use Compliance mode for production databases, financial records, and anything with a regulatory retention requirement. Governance mode is fine for lower-criticality workloads. Know the difference before you configure anything.
2. Separate control plane and data plane credentials
The API key your backup software uses to write data should not carry IAM permissions to modify bucket policies, disable Object Lock, or delete locked objects. These are separate functions that need separate permissions.
Create least-privilege IAM roles for backup write operations. Keep the console credentials that configure retention policies completely separate, with MFA enforced. Mixing these into a single credential defeats the architectural purpose of immutability.
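One concrete guardrail, sketched with hypothetical names: an explicit deny on retention-related actions for any session that didn't authenticate with MFA. Test in a sandbox before applying broadly.

```python
import json

import boto3

# A sketch of a control-plane guardrail: deny retention changes unless
# the caller authenticated with MFA. Names are hypothetical.
CONTROL_PLANE_GUARDRAIL = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "s3:PutObjectRetention",
                "s3:PutBucketObjectLockConfiguration",
                "s3:BypassGovernanceRetention",
            ],
            "Resource": "arn:aws:s3:::example-backup-bucket*",
            # Deny whenever MFA is absent from the session.
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="backup-control-plane-mfa-guardrail",
    PolicyDocument=json.dumps(CONTROL_PLANE_GUARDRAIL),
)
```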
3. Automate coverage
In environments with dozens of accounts and hundreds of resources, the only way to maintain consistent backup coverage is through automated discovery and enforcement.
Every new resource should automatically receive the appropriate backup policy based on its data classification, without anyone needing to remember to configure it. This is what Eon's CBPM does: policies follow resources automatically across multi-account, multi-region environments, classifying them and enforcing policies based on data type rather than manual tags.
4. Set retention periods intentionally
Short retention is where I see teams get caught off guard by dormant ransomware.
Attacks that sit undetected in your environment for 60 or 90 days will appear in every backup within a short retention window. If your retention period is 30 days and the infection is 45 days old, every available restore point is already compromised.
On the other side: retention periods that run too long compound storage costs quickly at scale. The 30-50% cost reduction from deduplication, compression, and tiered storage becomes significant when retention is calibrated to actual requirements.
Review retention settings at least annually and whenever your environment changes.
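That review doesn't have to be manual. A small sketch that lists each bucket's default Object Lock retention, so missing or stale settings stand out:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Print each bucket's default Object Lock retention mode and duration.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_object_lock_configuration(Bucket=name)
        rule = conf["ObjectLockConfiguration"].get("Rule", {})
        retention = rule.get("DefaultRetention", {})
        print(name, retention.get("Mode"), retention.get("Days"))
    except ClientError:
        # The call raises if the bucket has no Object Lock configuration.
        print(name, "NO OBJECT LOCK")
```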
5. Isolate replication credentials between source and destination
Replication resilience comes down to credential isolation, not direction. The destination account should not depend on the source account's credentials or policies to maintain its own backup integrity.
On platforms that orchestrate replication via cloud provider APIs, this means maintaining separate IAM roles for source and destination operations. A source compromise should never automatically compromise replicated copies. Whether your architecture pulls or pushes, the credential boundary is what protects you.
6. Validate that backups are clean before you need them
An immutable backup of already-encrypted data is useless. Immutability guarantees the backup can't be changed after it's written. It says nothing about whether the data was clean when written.
This is where Eon's ransomware detection differs from file-scanning tools. Eon is the only vendor doing logical content analysis on managed database backups (Amazon RDS, Aurora, Google Cloud SQL, and Azure SQL), where there's no filesystem for traditional anti-ransomware tools to scan. Detection works through:
- Row-count anomaly detection that flags sudden mass changes inconsistent with normal write patterns.
- Cardinality shift analysis that identifies when column distributions change in ways that match encryption behavior.
- Schema validation that catches structural changes introduced by ransomware when it rewrites database tables.
For VM- and file-based workloads, Eon adds entropy analysis (encrypted data has higher entropy than unencrypted files) and ClamAV integration for known malware signatures. For object storage, mass-deletion detection identifies the bulk-delete patterns that often precede or accompany ransomware.
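To make the entropy signal concrete, here's a toy illustration, emphatically not Eon's implementation: encrypted or compressed data approaches 8 bits of entropy per byte, while typical documents and database dumps sit well below that. The file name and threshold are illustrative.

```python
import math
from collections import Counter


def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte, from 0.0 to 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


# Sample the first 1 MB of a (hypothetical) backup file.
with open("backup-sample.bin", "rb") as f:
    entropy = shannon_entropy(f.read(1024 * 1024))

# The threshold is illustrative; real detectors combine many signals.
if entropy > 7.5:
    print(f"High entropy ({entropy:.2f} bits/byte): possible encryption")
```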
When detection fires, Eon's Ransomware Protection surfaces the timeline directly, showing which recovery points are clean, which look risky, and where the safe cutoff is. That clean recovery point identification is your actual restore target.
7. Plan for granular recovery
I ask every team I work with the same question: if you need to recover a single DynamoDB table at 2 am, how long does that take from your current setup?
Most can’t answer it. And that’s the problem.
Recovering 10TB from a full-environment snapshot can take days. If your RTO is measured in hours, a full restore strategy doesn’t support it. File-level, table-level, and record-level granular recovery changes the calculation entirely.
NETGEAR illustrates the difference. Recovery time for a 10 TB SQL Server database dropped from 24 hours to under 3 hours after replacing native snapshots with granular restores. Restore only what's affected, not the whole environment. For teams with strict RTOs, restore granularity matters as much as backup immutability.
8. Follow the 3-2-1-1-0 rule
The classic 3-2-1 backup rule has two additions that matter for cloud environments. The “1 immutable” and the “0 unverified” close the gaps left open by traditional backup strategies.
Here's the breakdown:
- 3 copies of your data
- 2 different storage media types
- 1 copy off-site
- 1 copy immutable or logically air-gapped
- 0 unverified backups
That last zero is the one most teams skip. A backup that has never been tested is an assumption, not a recovery option.
9. Test recovery, not just backup completion
Most teams verify that backup jobs have completed successfully. Few verify they can restore within their required timeframe.
Recovery testing should be a full scenario: what gets restored, where it goes, how long it takes, and whether the restored environment functions. Run it at least annually for critical systems.
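A recovery drill can be scripted and timed. Here's a sketch with hypothetical identifiers, restoring a snapshot to a scratch RDS instance and measuring time-to-available; run it against non-production resources and tear the instance down afterward.

```python
import time

import boto3

rds = boto3.client("rds")

# Restore a snapshot to a throwaway instance and time the operation.
start = time.monotonic()
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="recovery-drill-2025",
    DBSnapshotIdentifier="rds:example-db-2025-01-15",
)

# Block until the restored instance is available. Large restores take a
# while, so widen the waiter's limits beyond the defaults.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(
    DBInstanceIdentifier="recovery-drill-2025",
    WaiterConfig={"Delay": 60, "MaxAttempts": 240},
)

elapsed_min = (time.monotonic() - start) / 60
print(f"Restore completed in {elapsed_min:.1f} minutes")  # compare to RTO
```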
NIST CSF recommends regular recovery testing. DORA mandates it for financial entities in the EU. Either way, treating recovery testing as optional is a risk most regulated teams can't afford.
What immutable backup recovery looks like
Here’s a scenario I’ve seen play out more than once in multi-account AWS environments.
An engineer’s credentials are compromised through a phishing email. The attacker moves laterally using native AWS tools and gains access to backup storage within hours. With mutable backups, the objects are simply deleted or encrypted. There’s no clean recovery path.
With S3 Object Lock in Compliance mode, separate control-plane credentials, and anomaly detection enabled, the attacker reaches the backup bucket, and every delete request is rejected.
The anomaly detection flags the unusual API activity and identifies when the compromise started.
The recovery team has a clean, locked backup. They know the last unaffected snapshot. They restore only the affected databases and files, not a full environment rebuild, not days of downtime. A targeted restore of exactly what the attack touched.
SoFi is an example of this scenario in practice. Operating across five AWS regions using native snapshots, the team experienced fragmented coverage and a prior firewall outage that caused a full-day recovery delay. After switching to Eon, recovery dropped from a full day to under five minutes.
Whether recovery takes days or hours comes down to three things:
- Whether the architecture was correct.
- Whether coverage was complete.
- Whether recovery was granular enough to be practical.
Immutable backups and compliance reporting
For teams under SOC 2, HIPAA, or GDPR, immutable backups provide a verifiable audit trail that proves data integrity over time, not just a policy claim that backups ran.
The difference between “we have backups” and “we can prove our backups haven’t been altered” is significant in an audit. Immutable storage with cryptographic integrity provides the latter.
One practical note on GDPR: organizations are required to delete personal data when it's no longer needed, but deletion can be delayed if the data sits in immutable backups that can't be selectively erased.
This area of guidance continues to evolve, so review your retention and deletion practices with your compliance team regularly.
Continuous coverage reporting (which resources have active policies, which have drifted, and when the last successful backup ran) tells you what’s actually protected without having to piece it together during an audit.
The bottom line
Immutability is the floor, not the ceiling, of a solid cloud backup strategy.
Getting the floor right means Compliance mode (not Governance mode) for critical data. It means separate control-plane and data-plane credentials. Credential isolation between the replication source and the destination. Retention periods based on actual recovery requirements.
The harder question is whether every resource in a large, constantly changing cloud environment is covered. Whether you'd know within minutes if something weren't. And whether recovery is granular enough to meet real business timeframes.
Teams that discover those gaps during an incident spend days rebuilding. Teams that find them in advance are back online in hours.
See what’s covered in your environment
If you can’t tell me which cloud resources have active backup policies and when coverage last drifted, that’s the gap Eon is built to close.
Eon is the first intelligent cloud infrastructure platform for backup, data lakes, and AI. CBPM discovers and classifies your cloud resources across accounts and regions, enforces policies without manual tagging, and gives you a continuous view of what's protected and what isn't.
Logical content analysis catches ransomware in managed databases where file-scanning tools can't reach. Recovery is granular by default.
Book a demo to see how it works across your environment.
Frequently asked questions
What is an immutable backup?
An immutable backup is a copy of data that cannot be modified, deleted, or overwritten after it is written. Immutability is enforced at the storage layer through mechanisms like S3 Object Lock, GCS Retention Policies, or WORM storage, for a defined retention period. A properly locked object cannot be altered by anyone during the lock period.
Can ransomware delete or encrypt immutable backups?
Ransomware cannot delete or encrypt a properly locked backup. An attacker who extracts your backup software's API credentials still can't remove locked objects because the lock lives at the storage layer, not the application layer. The one caveat is that the backup must have been written before the infection reached the data.
What is S3 Object Lock, and how does it support immutable backups?
S3 Object Lock is an AWS feature that prevents objects from being deleted or overwritten for a defined period. It supports immutable backups by enforcing that protection at the storage layer, so locked backup data can’t be altered even if backup software or credentials are compromised.
How long should immutable backups be retained?
Immutable backups should be retained based on your recovery goals, compliance requirements, and threat model. Most teams use 30–90 days of daily backups for ransomware protection, with longer retention for compliance. If retention is too short, every available backup may already be compromised.
Is an immutable backup the same as an air-gapped backup?
No. An air-gapped backup is disconnected from the network. An immutable backup stays network-accessible but cannot be modified during its retention period. Eon combines both through a logical air-gap: an isolated, Eon-managed vault account separate from customer infrastructure, paired with compliance-mode object locking.
How do I know if all my cloud resources are covered by immutable backup policies?
You know if your cloud resources are covered by immutable backup policies by using automated discovery and continuous coverage monitoring. Manual audits can’t keep up with infrastructure changes, so gaps and drift go unnoticed.


