Securing Cloud Storage with Zero Trust


Cloud Storage as a Primary Attack Target

Cloud storage services like Amazon S3, Azure Blob Storage, and Google Cloud Storage are among the most frequently targeted resources in cloud environments. The reasons are straightforward: storage buckets contain the actual data that attackers want, misconfigured access policies are common, and the flat namespace of object storage makes it easy to exfiltrate large volumes of data once access is obtained. High-profile breaches involving exposed S3 buckets, publicly accessible Blob containers, and overly permissive GCS IAM bindings demonstrate that default configurations and human error continue to create exploitable gaps.

Applying Zero Trust to cloud storage means treating every access request to every object as potentially malicious, regardless of whether the request originates from within the same cloud account, the same VPC, or even the same application. This requires layered controls spanning identity-based access policies, network-level restrictions, encryption with customer-managed keys, and continuous monitoring of access patterns. No single control is sufficient; the defense-in-depth approach ensures that a failure in one layer does not result in data exposure.

Identity-Based Storage Access Policies

In AWS, S3 bucket policies and IAM policies work together (and sometimes against each other) to determine access. A Zero Trust S3 configuration starts by denying all access at the bucket policy level, then explicitly allowing specific IAM principals to perform specific actions. The bucket policy should use condition keys to further restrict access: aws:PrincipalOrgID ensures only principals from your AWS Organization can access the bucket, aws:SourceVpc restricts access to specific VPCs, and s3:x-amz-server-side-encryption requires encryption on every uploaded object.
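Those three condition keys can be sketched as a deny-oriented bucket policy, expressed here as a Python dict ready for json.dumps; the bucket name, organization ID, and VPC ID are placeholders:

```python
import json

BUCKET = "example-data-bucket"        # placeholder identifiers
ORG_ID = "o-exampleorgid"
VPC_ID = "vpc-0123456789abcdef0"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny any principal outside the AWS Organization.
            "Sid": "DenyOutsideOrg",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": ORG_ID}},
        },
        {
            # Deny requests that do not originate from the designated VPC.
            "Sid": "DenyOutsideVpc",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:SourceVpc": VPC_ID}},
        },
        {
            # Deny uploads that omit the server-side encryption header.
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that aws:SourceVpc only evaluates for requests arriving through a VPC endpoint, so the second statement assumes endpoint-based access; explicit Deny statements like these always override Allow statements elsewhere in the policy.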

Azure Blob Storage uses Azure RBAC for data plane access, which should replace the legacy shared key and SAS token model wherever possible. The Storage Blob Data Reader and Storage Blob Data Contributor roles grant access through Entra ID authentication, which integrates with Conditional Access policies for context-aware authorization. Where Shared Access Signatures are unavoidable, prefer user delegation SAS, which are signed with Entra ID credentials rather than the account key, carry short expiration times and IP restrictions, and can be revoked by revoking the underlying user delegation key. Shared key authorization and the storage account access keys should then be disabled entirely through Azure Policy; note that this also invalidates account-level and service-level SAS, which are signed with the account key.
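One way to enforce the last point is a custom Azure Policy rule that denies storage accounts permitting shared key authorization. The sketch below expresses the policy rule as a Python dict; the field alias mirrors the ARM property name, and a built-in policy with the same effect also exists:

```python
import json

# Hedged sketch: denies creation or update of storage accounts whose
# allowSharedKeyAccess property is not explicitly false.
policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
            {
                "field": "Microsoft.Storage/storageAccounts/allowSharedKeyAccess",
                "notEquals": "false",
            },
        ]
    },
    "then": {"effect": "deny"},
}

print(json.dumps(policy_rule, indent=2))
```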

GCP Cloud Storage uses IAM bindings and ACLs, with a strong recommendation to use uniform bucket-level access that disables per-object ACLs. IAM bindings on the bucket should grant roles to specific service accounts rather than allUsers or allAuthenticatedUsers. The storage.objectViewer role grants read-only access, while storage.objectAdmin grants full control. For finer granularity, IAM Conditions can restrict access based on resource attributes like object name prefix, enabling per-directory access control within a single bucket.
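A per-prefix IAM Condition of this kind can be sketched as a bucket-level binding; the bucket name, service account, and reports/ prefix are placeholders, and the condition expression uses CEL over the object's full resource name:

```python
import json

BUCKET = "example-analytics-bucket"   # placeholder bucket and principal
SERVICE_ACCOUNT = "reporting@example-project.iam.gserviceaccount.com"

# Read-only role, restricted to objects under the reports/ prefix.
binding = {
    "role": "roles/storage.objectViewer",
    "members": [f"serviceAccount:{SERVICE_ACCOUNT}"],
    "condition": {
        "title": "reports-prefix-only",
        "expression": (
            f'resource.name.startsWith("projects/_/buckets/{BUCKET}/objects/reports/")'
        ),
    },
}

print(json.dumps(binding, indent=2))
```

Because conditions are evaluated per request, this gives directory-style scoping inside one bucket without per-object ACLs, which is exactly what uniform bucket-level access is meant to encourage.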

Network-Level Storage Isolation

Network controls add a second layer of defense that prevents access even if identity-based policies are misconfigured. In AWS, S3 VPC endpoints (gateway endpoints) route S3 traffic through the AWS backbone without traversing the public internet. Endpoint policies on the VPC endpoint restrict which buckets can be accessed through that endpoint, and bucket policies restrict which VPC endpoints can access the bucket. This bidirectional restriction creates a closed circuit where only specific applications in specific VPCs can reach specific buckets.

  • Deploy S3 gateway endpoints in every VPC and attach endpoint policies restricting access to authorized buckets
  • Use bucket policies with aws:sourceVpce condition to deny access from any source other than designated VPC endpoints
  • In Azure, use Private Endpoints for Storage accounts and deny public network access at the account level
  • In GCP, use VPC Service Controls perimeters to restrict storage access to authorized projects and networks
  • Enable Storage Firewall rules (Azure) or Access Control Lists (GCP) as additional network-layer restrictions
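The second bullet above can be sketched as a single deny statement keyed on aws:sourceVpce; the bucket name and endpoint ID are placeholders:

```python
BUCKET = "example-data-bucket"               # placeholder identifiers
ALLOWED_ENDPOINTS = ["vpce-0a1b2c3d4e5f67890"]

# Deny all S3 access except requests arriving through the designated
# gateway endpoints.
vpce_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonApprovedEndpoints",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": ALLOWED_ENDPOINTS}
            },
        }
    ],
}
```

A policy like this also blocks requests made from outside the VPC, including the AWS console, so pair it with a break-glass exception for an administrative role before applying it.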

Azure Private Endpoints assign a private IP address from your virtual network to the storage account, and a private DNS zone resolves the storage account’s public FQDN to this private IP. Applications connect to the storage account using the same connection string as before, but the traffic stays on the Azure backbone. The storage account’s public network access should be set to Disabled, ensuring that only Private Endpoint connections succeed. This configuration eliminates an entire class of misconfigurations because there is no public endpoint to accidentally expose.

Encryption and Key Management

All three major cloud providers encrypt storage data at rest by default with provider-managed keys. Zero Trust requires upgrading to customer-managed keys (CMK) that you control through the cloud provider’s key management service (AWS KMS, Azure Key Vault, GCP Cloud KMS). With CMK, you control the key lifecycle (creation, rotation, disabling, deletion) and the key policy (which principals can use the key for encryption and decryption). Disabling the key, revoking usage grants in the key policy, or scheduling the key for deletion renders all data encrypted with that key inaccessible (permanently, once deletion completes), providing a cryptographic kill switch for data in compromised buckets.
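A minimal CMK key policy separating lifecycle control from usage might look like the following sketch; the account ID and role names are placeholders:

```python
ACCOUNT_ID = "111122223333"   # placeholder account and role names
APP_ROLE = f"arn:aws:iam::{ACCOUNT_ID}:role/app-data-reader"
ADMIN_ROLE = f"arn:aws:iam::{ACCOUNT_ID}:role/kms-key-admin"

# One role administers the key lifecycle, a different role may use it
# for encrypt/decrypt; removing the KeyUsage statement is the
# "cryptographic kill switch" described above.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": ADMIN_ROLE},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Disable*", "kms:Put*", "kms:Update*",
                       "kms:Revoke*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            "Sid": "KeyUsage",
            "Effect": "Allow",
            "Principal": {"AWS": APP_ROLE},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}
```

Keeping the admin role out of the KeyUsage statement means even key administrators cannot decrypt data, which is the separation-of-duties property Zero Trust is after.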

Client-side encryption adds protection against the cloud provider itself and against compromised cloud credentials. The application encrypts data before uploading it to cloud storage, using keys from an on-premises HSM or a separate key management system. The cloud provider stores and serves encrypted ciphertext that it cannot decrypt. AWS S3 Client-Side Encryption using the AWS Encryption SDK, Azure Storage Client-Side Encryption using Azure Key Vault, and Google Tink library for GCS provide standardized implementations. The trade-off is increased complexity: client-side encryption breaks server-side features like search, indexing, and server-side copy operations.
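The client-side pattern can be sketched with the cryptography package's Fernet recipe; this is an illustrative stand-in for the SDKs named above, not the AWS Encryption SDK itself, and key handling is deliberately simplified:

```python
# Illustrative client-side encryption before upload. In practice the
# data key would come from an on-premises HSM or separate KMS; here it
# is generated locally for the sketch.
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes, data_key: bytes) -> bytes:
    """Encrypt object contents with a locally held data key."""
    return Fernet(data_key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, data_key: bytes) -> bytes:
    """Decrypt object contents after download."""
    return Fernet(data_key).decrypt(ciphertext)

key = Fernet.generate_key()
blob = encrypt_for_upload(b"quarterly-report.csv contents", key)
# s3.put_object(Bucket=..., Key=..., Body=blob)  # provider stores only ciphertext
restored = decrypt_after_download(blob, key)
```

The trade-off mentioned above is visible here: the provider only ever sees `blob`, so any server-side feature that needs plaintext (search, indexing, server-side copy with re-encryption) stops working.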

In-transit encryption is enforced through bucket policies and service configurations. S3 bucket policies should include a condition that denies any request where aws:SecureTransport is false, forcing all connections over TLS. Azure Storage accounts should have the “Secure transfer required” setting enabled, which rejects HTTP connections. GCP Cloud Storage serves the JSON API over HTTPS, but the XML API also accepts plain-HTTP requests, so clients should be configured to use HTTPS endpoints exclusively. TLS version minimums should be set to 1.2 across all storage accounts to prevent downgrade attacks.
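For S3, both transport requirements can be captured in two deny statements, using the aws:SecureTransport and s3:TlsVersion condition keys; the bucket name is a placeholder:

```python
BUCKET = "example-data-bucket"   # placeholder bucket name

# Deny plaintext connections and TLS versions below 1.2.
tls_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            "Sid": "DenyOldTls",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"NumericLessThan": {"s3:TlsVersion": "1.2"}},
        },
    ],
}
```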

Data Lifecycle and Retention Controls

Zero Trust extends to data lifecycle management because compromised credentials can be used to delete data (ransomware) or modify data (integrity attacks) just as easily as they can be used to exfiltrate it. Object versioning preserves every version of every object, enabling recovery from accidental or malicious deletions and modifications. S3 Versioning, Azure Blob Versioning, and GCS Object Versioning should be enabled on all buckets containing business-critical data.

Object lock and immutability policies prevent even privileged users from deleting or modifying data during a retention period. S3 Object Lock in Compliance mode cannot be overridden by any user, including the root account, making it suitable for regulatory retention requirements. Azure Immutable Blob Storage with legal hold or time-based retention provides equivalent protection. GCP Bucket Lock and Object Retention policies enforce immutability at the bucket and object level respectively. These controls protect against insider threats and compromised administrator credentials that would otherwise have unrestricted access to data destruction.

  • Enable versioning on all production buckets and configure lifecycle policies to retain versions for at least 90 days
  • Use Object Lock in Governance mode for operational protection with override capability, or Compliance mode for regulatory requirements
  • Configure cross-region replication to a separate account for disaster recovery and ransomware resilience
  • Use MFA Delete on S3 buckets to require multi-factor authentication for version deletion operations
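The first two bullets can be sketched as the S3-style configuration payloads below; the retention periods are the illustrative values from the list, and Object Lock must be enabled when the bucket is created (which also requires versioning):

```python
# Lifecycle rule: keep noncurrent versions for 90 days before expiry.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "retain-noncurrent-versions-90d",
            "Status": "Enabled",
            "Filter": {},  # applies to every object in the bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }
    ]
}

# Default retention for new objects: GOVERNANCE permits a privileged
# override (s3:BypassGovernanceRetention); COMPLIANCE does not.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}
    },
}

# With boto3 these would be applied roughly as:
# s3.put_bucket_versioning(Bucket=..., VersioningConfiguration={"Status": "Enabled"})
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=lifecycle_configuration)
# s3.put_object_lock_configuration(Bucket=..., ObjectLockConfiguration=object_lock_configuration)
```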

Monitoring and Anomaly Detection for Storage

Continuous monitoring of storage access patterns is the verification layer that makes Zero Trust controls effective. S3 Server Access Logging and CloudTrail Data Events capture every read, write, and delete operation on S3 objects. These logs should feed into a SIEM or analytics platform where baseline access patterns are established and anomalies trigger alerts. A function that normally reads 100 objects per hour suddenly reading 10,000 objects indicates either a legitimate traffic spike or data exfiltration in progress.
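The baseline-and-alert idea can be sketched as a simple statistical check over hourly object-read counts; this is a minimal illustration, not a production detector, and the 24-hour window and z-score threshold are assumed values:

```python
from statistics import mean, stdev

def is_anomalous(hourly_counts: list[int], current: int,
                 min_history: int = 24, z_threshold: float = 3.0) -> bool:
    """Flag the current hour's read count if it sits more than
    z_threshold standard deviations above the historical mean."""
    if len(hourly_counts) < min_history:
        return False  # not enough history to establish a baseline
    mu = mean(hourly_counts)
    sigma = stdev(hourly_counts) or 1.0  # guard against zero variance
    return (current - mu) / sigma > z_threshold

# ~100 reads/hour baseline, as in the example above.
baseline = [100, 95, 110, 105, 98] * 5
print(is_anomalous(baseline, 10_000))  # burst of 10,000 reads
print(is_anomalous(baseline, 110))     # within normal variation
```

A real pipeline would key baselines per principal and per bucket and combine volume with other signals (new source IPs, unusual prefixes), but the shape of the verification loop is the same.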

Amazon Macie, Microsoft Purview (formerly Azure Purview), and GCP Sensitive Data Protection (formerly Cloud DLP) provide automated discovery and classification of sensitive data stored in cloud storage. These services scan objects for PII (Social Security numbers, credit card numbers, email addresses), credentials (API keys, private keys, connection strings), and other sensitive patterns. In a Zero Trust model, the sensitivity classification of data should drive the access controls applied to it. A bucket containing classified data should have stricter IAM policies, mandatory CMK encryption, network isolation through VPC endpoints, and enhanced logging compared to a bucket containing public marketing assets.

Cross-account and cross-region access to storage resources deserves special monitoring attention. While legitimate use cases exist (cross-account data sharing, disaster recovery replication), these access patterns are also indicators of data exfiltration. Alert rules should flag any storage access from principals outside the owning account, any new cross-region copy operations, and any access from IP addresses not previously seen in the access logs. These alerts, combined with the preventive controls described above, create the continuous verification loop that Zero Trust demands for your most valuable cloud asset: the data itself.
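The first of those alert rules can be sketched as a filter over CloudTrail-style S3 data events; the event records here are hypothetical, trimmed-down examples:

```python
def flag_cross_account(events: list[dict], owning_account: str) -> list[dict]:
    """Return events whose calling principal belongs to a different
    account than the bucket owner."""
    flagged = []
    for e in events:
        caller = e.get("userIdentity", {}).get("accountId")
        if caller and caller != owning_account:
            flagged.append(e)
    return flagged

# Hypothetical trimmed event records for illustration.
events = [
    {"eventName": "GetObject", "userIdentity": {"accountId": "111122223333"}},
    {"eventName": "GetObject", "userIdentity": {"accountId": "999988887777"}},
]
alerts = flag_cross_account(events, owning_account="111122223333")
```

In production this logic would run in the SIEM as a streaming rule, enriched with an allowlist of known sharing partners so that legitimate cross-account replication does not page anyone.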