C1 – S3 Buckets

It is raining data from CloudTrail, VPC Flow Logs, GuardDuty, Inspector, and all the container logs; we need a bucket to catch it. I speak, of course, of an S3 bucket.
From the AWS documentation:
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps. With cost-effective storage classes and easy-to-use management features, you can optimize costs, organize data, and configure fine-tuned access controls to meet specific business, organizational, and compliance requirements.
In my experience, this is one of the more straightforward AWS services to configure and, unfortunately, one of the most misconfigured.
The basic security requirements for a properly configured S3 bucket are as follows:
- It must not be publicly accessible
- Data must be encrypted at rest
- Server and object access must be logged
AWS recommends a set of security best practices in its documentation; it is worth reviewing.
First, log into the AWS management console, search for S3, and select S3 from the results panel.

In the S3 dashboard, we will select the Create bucket button on the right-hand side and be presented with the create bucket panel. First, we will need a bucket name; I suggest something very descriptive, as this name must be globally unique, be 3 to 63 characters long, contain only lowercase letters, numbers, hyphens, and dots, and start and end with a letter or number. (Did I say this was straightforward?) The information to provide is as follows (a scripted sketch follows the list):
- Bucket Name (see above limits)
- AWS Region
- ACLs disabled/enabled
- Block Public Access
- Bucket Versioning
- Tags
- Default Encryption
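For those who prefer to script bucket creation, here is a minimal sketch of the same settings using boto3. The bucket name, region, and tag are hypothetical placeholders; adjust them to your environment.

```python
import boto3

REGION = "us-east-2"                  # hypothetical region
BUCKET = "example-secure-log-bucket"  # hypothetical name; must be globally unique

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket with ACLs disabled
# (outside us-east-1, a LocationConstraint is required)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
    ObjectOwnership="BucketOwnerEnforced",
)

# Block all four forms of public access
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Default encryption at rest (SSE-S3 here; swap in aws:kms plus a key ID if you need KMS)
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Versioning, plus a tag for cost tracking
s3.put_bucket_versioning(Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"})
s3.put_bucket_tagging(
    Bucket=BUCKET,
    Tagging={"TagSet": [{"Key": "purpose", "Value": "security-logs"}]},
)
```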

Once the bucket is created, we will return to the S3 dashboard, review the bucket, and set up our access logging. In the S3 dashboard, select the bucket by clicking on the bucket name; this will open the bucket details panel.
We will scroll to the Server access logging panel and select Edit on the right-hand side. In the Server access logging panel, we will enable access logging and select a bucket to send the logs to. (If one does not exist, feel free to create one using this document; I suggest NOT logging server access logs to the server access logging bucket itself.)
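The same setting, scripted with boto3; the source and target bucket names are hypothetical, and note that the target is deliberately a different bucket:

```python
import boto3

s3 = boto3.client("s3")

# Deliver server access logs for the log bucket to a separate logging bucket
s3.put_bucket_logging(
    Bucket="example-secure-log-bucket",  # hypothetical source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            # hypothetical target; never the source bucket itself
            "TargetBucket": "example-access-log-bucket",
            "TargetPrefix": "s3-access/example-secure-log-bucket/",
        }
    },
)
```

One caveat: when doing this through the API with ACLs disabled, the target bucket also needs a bucket policy allowing the `logging.s3.amazonaws.com` service principal to write; the console wires this up for you.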

When complete, one can run the gutters and downspouts (VPC Flow Logs, CloudTrail, Kinesis, etc.) to catch the rain.
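Each of those downspouts needs permission to write into the bucket. As one example, the bucket policy CloudTrail expects before it will deliver logs has roughly this shape (the account ID and bucket name are hypothetical placeholders; consult the CloudTrail documentation for the exact statements your trail needs):

```python
import json
import boto3

s3 = boto3.client("s3")

ACCOUNT_ID = "111122223333"           # hypothetical account ID
BUCKET = "example-secure-log-bucket"  # hypothetical bucket

# The general shape of the bucket policy CloudTrail requires for log delivery
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```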
One may want to enable a data lifecycle rule for compliance and cost-control reasons. The recommended archive process is to keep 90 days of logs in active storage and then move them to “cold storage” (Glacier) for seven years. NOTE: Check with your compliance officer or attorney for THEIR log archive requirements.
To enable and configure an archival process, access the S3 dashboard and the bucket's Management tab as before. Then, scroll to Lifecycle rules and select Create lifecycle rule.
In the Create lifecycle rule panel, we will set a lifecycle rule name (in this case, 90-live-2555-cold) and select the following actions:
- Move current versions of objects between storage classes, i.e., from live to cold storage
- Expire current versions of objects; at some point, we will delete the data
- Delete expired objects and incomplete multipart uploads; essentially, garbage collection

In the Transition section, we will set up our first data move from live storage to Glacier Deep Archive and set the date to transition. Note this is measured in days after object creation, so after 90 days you will have a consistent set of data in live storage.

We will then set the expiration after 2555 days (7 years). Note that expiration is also counted from object creation, not from the transition, so if you need a full seven years in cold storage after the 90 live days, set 2645 days instead.

Review the transitions (a scripted equivalent follows the list):
- 90 days after object creation, move data to Glacier Deep Archive
- 2555 days after object creation, expire (delete) the object.
- Select Create rule
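For completeness, here is a sketch of the same rule applied via boto3; the bucket name is a hypothetical placeholder, and the expiration comment reflects the days-from-creation caveat above:

```python
import boto3

s3 = boto3.client("s3")

# 90 days live, then Glacier Deep Archive, then expire
# (all day counts run from object creation)
s3.put_bucket_lifecycle_configuration(
    Bucket="example-secure-log-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "90-live-2555-cold",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
                ],
                # from creation; use 2645 for a full 7 years in archive
                "Expiration": {"Days": 2555},
                # garbage-collect abandoned multipart uploads
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```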

On a refresh of the Lifecycle configuration panel, we note that the rule has been created.

Returning to the bucket management tab, we note the rule is applied to our bucket.
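If scripting, the same verification is a short check against the API (bucket name hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Confirm the lifecycle rule is attached and enabled
resp = s3.get_bucket_lifecycle_configuration(Bucket="example-secure-log-bucket")
for rule in resp["Rules"]:
    print(rule["ID"], rule["Status"])
```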
