Deep Dive on S3

  • S3 has a flat structure. S3 is a key/value store designed to store objects. Objects consist of the following (a boto3 sketch follows this list):
    • Key: The name of the object, unique within the bucket. It can be a simple name or consist of prefixes and delimiters, which allows S3 to present its flat structure as a hierarchy.
    • Version ID: when versioning is enabled, every object version is given an ID.
    • Value: The actual content of the object. From 0 to 5 TB in size.
    • Metadata: Extra key value data for each object, user or system defined metadata.
    • Subresources: Torrent related info for the object.
    • Access Control Information: permissions attached at the object level.
  • First byte latency: the time elapsed between making a request and receiving the first byte of the content.
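A minimal boto3 sketch of these pieces, assuming credentials are already configured; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Key + value + user-defined metadata in one upload.
resp = s3.put_object(
    Bucket="example-bucket",             # placeholder bucket name
    Key="reports/2021/summary.txt",      # prefixes + "/" delimiter fake a hierarchy
    Body=b"hello s3",                    # the value: 0 bytes up to 5 TB
    Metadata={"owner": "team-a"},        # user-defined metadata
)
print(resp.get("VersionId"))             # only present when versioning is enabled

# System-defined metadata (size, content type, ...) comes back from a HEAD request.
head = s3.head_object(Bucket="example-bucket", Key="reports/2021/summary.txt")
print(head["ContentLength"], head["Metadata"])
```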

2 ways to access S3 objects:

  • Set appropriate public-read permissions and ACLs
  • Mount it on an EC2 instance with an FTP service

Security and Access

  • There are two levels of public access settings: one for the bucket and one for the whole account. You can also grant public access at the object level.
  • By default, a bucket trusts only its owning account: only the resource owner (of the bucket or object) has access.
  • Bucket Policy: You can allow unauthenticated users or external users from another account to access the bucket. There is one bucket policy per bucket, but it can contain multiple statements. The permissions apply to all of the objects within that bucket. The policy specifies what actions are allowed or denied for a particular user of that bucket, such as (see the first sketch after this list):
    • Granting access to anonymous users.
    • Who (a principal) can execute certain actions like PUT or DELETE.
    • Restricting access based on an IP address (commonly used for CDN management).
    • Restricting access to certain times of the day.
    • Restricting access based on tags, e.g. only allowing users to access objects carrying certain tags.
  • S3 Access Control Lists: Legacy form of permissions.
    • Grant public access to the entire bucket, grant access to other AWS services such as log delivery, or allow external AWS accounts to access the bucket. Bucket policies are the recommended mechanism.
    • They can also be applied to a particular object using the object URL.
  • Pre-signed URLs: You might want to give a particular user permanent or temporary access to a particular object inside an S3 bucket, without granting permission to the wider bucket, via a pre-signed URL. The URL contains the access key ID; authentication and authorisation are baked into the URL along with an expiry time. They are generally used by applications to provide temporary access. You can even generate a pre-signed URL for an object that does not exist yet (see the sketch after this list).
  • Suppose you create a pre-signed URL from an EC2 instance that is assuming a role. The URL will only be valid for the duration of the role's temporary credentials, even if the URL's expiry time is later.
  • Encryption in S3
    • SSE - Server Side Encryption: You send S3 an unencrypted object and receive it back unencrypted; encryption only happens at rest on the S3 side. If the data needs to be protected end to end, you have to apply client-side encryption yourself or via one of the AWS SDKs.
      • SSE with S3 Managed Keys (SSE-S3): This is the default encryption method and uses the 'envelope' method: each object is encrypted with a unique key, and that unique key is encrypted with a master key.
      • SSE-C with Customer Provided Keys: You provide an encryption key with each upload. S3 uses the key to encrypt the object and then discards the key, so you are responsible for all management and rotation of the keys. It is not supported with cross-region replication. If you want to use an on-premises HSM, this is the method to use.
      • SSE with AWS KMS Keys (SSE-KMS): S3 uses a KMS customer master key. S3 asks KMS to generate a data encryption key (returned in both plaintext and encrypted form), uses the plaintext key to encrypt the object, and stores the encrypted data key alongside the encrypted object. The reverse happens when S3 needs to decrypt the object. This lets you split the roles of key management and data access (sketches follow this list).
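A sketch of the bucket policy idea above, using boto3; the bucket name, prefix and IP range are placeholders and the policy is illustrative only:

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow anonymous GETs on one prefix, but only from a given IP range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadFromOneRange",
        "Effect": "Allow",
        "Principal": "*",                                    # anonymous users
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/public/*",  # placeholder bucket/prefix
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```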
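A sketch of generating pre-signed URLs with boto3 (bucket and key names are placeholders); the URL inherits whatever credentials the caller is using:

```python
import boto3

s3 = boto3.client("s3")

# Pre-signed GET URL, valid for one hour. The recipient needs no AWS credentials;
# authentication and authorisation are baked into the query string.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "private/report.pdf"},
    ExpiresIn=3600,
)
print(url)

# A pre-signed PUT works the same way and can point at a key that does not exist yet.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/new-file.bin"},
    ExpiresIn=900,
)
```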
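And a sketch of requesting server-side encryption per object (the KMS key alias is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3-managed keys, the default envelope method.
s3.put_object(Bucket="example-bucket", Key="a.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: S3 asks KMS for a data key under the given customer master key.
s3.put_object(Bucket="example-bucket", Key="b.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/example-key")
```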

Data read/write consistency for S3

  • Read after write consistency for PUTS of new objects
  • Eventual consistency for overwrite PUTS and DELETE (can take some time to propagate)
  • S3 is a key/value store: the key is the object's name and the value is the actual data.
  • If your requests to S3 are typically a mix of GET, PUT, DELETE, or GET Bucket (list objects), choosing appropriate key names for your objects will ensure better performance by providing low-latency access to the Amazon S3 index. It will also ensure scalability regardless of the number of requests you send per second.
  • Workloads that are GET-intensive – If the bulk of your workload consists of GET requests, Amazon CloudFront is the recommended content delivery service.
  • Metadata is not searchable in S3; it is just attributes on the object. For searchable metadata, an external index such as DynamoDB is recommended.

How do you select the right STORAGE CLASS for your use case?

  • S3 Standard for Big Data Analysis, Content Distribution, Static Website Hosting; general purpose data. The availability SLA is 99.9%. Millisecond first byte latency. It is replicated across at least 3 AZs in a region.
  • S3 Infrequent Access (Standard-IA) for Backup & Archive, Disaster Recovery, File Sync & Share, and long-retained data that still needs immediate access (e.g. medical imaging / patient data). It is replicated across at least 3 AZs and is roughly half the price of S3 Standard. Two caveats: a 30-day minimum storage charge per object, and a per-GB data retrieval fee.
  • S3 One Zone-Infrequent Access is the same as Standard-IA but stored in only 1 AZ. Substantially cheaper than the above; for non-critical, easily reproducible data.
  • Glacier is actually a separate product. It is used for Long Term Archive, Digital Preservation and magnetic tape replacement. There is a 90-day minimum storage charge per object, and it takes several hours to retrieve data.
  • Glacier Deep Archive is similar but retrieval takes a lot longer, and there is a 180-day minimum storage charge per object.
  • S3 Intelligent-Tiering is useful for data with unpredictable access patterns. You pay a per-object monitoring & automation fee for S3 to move objects between tiers automatically; there is no retrieval fee (see the sketch below).
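A sketch of picking a storage class at upload time, and changing it later, with boto3 (bucket, keys and body are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload straight into Infrequent Access.
s3.put_object(Bucket="example-bucket", Key="backups/db.dump",
              Body=b"backup contents", StorageClass="STANDARD_IA")

# Move an existing object into Intelligent-Tiering by copying it over itself.
s3.copy_object(Bucket="example-bucket", Key="logs/old.log",
               CopySource={"Bucket": "example-bucket", "Key": "logs/old.log"},
               StorageClass="INTELLIGENT_TIERING")
```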

S3 Analytics

  • S3 Analytics (storage class analysis) observes the access patterns of your data and lets you visualise them. It also measures data age, i.e. when data becomes infrequently accessed, so you can then apply a lifecycle policy to that data.

Automate Data Management Policies

  • Lifecycle policy: Transitions move data to different storage classes; Expiration deletes objects after a specified time. You can scope policies to a bucket, a prefix or even tags (a boto3 sketch follows this list).
  • You cannot move an object from Glacier to any other tier once it’s there.
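A minimal lifecycle rule via boto3; the bucket name, logs/ prefix and the transition/expiry days are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Move logs/ to IA after 30 days, Glacier after 90 days, and delete after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},    # could also filter on tags
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }]
    },
)
```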

Versioning and Locking

  • Versioning: Protects from unintended deletes or application logic failures. Every upload is created as a new version of the object.
  • Versioning adds a unique ID to each object version. Once enabled, versioning can only ever be suspended, not removed. A new object with the same name does not replace the existing version; both keep their unique IDs, and the new one becomes the 'current' version. When you delete an object, AWS puts a 'delete marker' on it; you can also delete specific versions directly using their version IDs. If you delete the delete marker, the previous version of the object reappears in the bucket (a boto3 sketch follows this list).
  • Object Locking: Restrictions on what you can do to objects on a bucket. Cross-region replication is not supported when object locking is enabled.
    • Retention Period: prevents updates or deletions of a given object for a period of time, set in days or years. Two modes:
      • Compliance: can’t be adjusted, deleted, overwritten even by account root user until retention expires. Example might be medical, policing etc.
      • Governance: special permissions can be granted allowing lock settings to be adjusted. If you want to prevent accidental deletion.
    • Legal Holds: same but don’t have an expiration date. Used for legal investigation and auditing style situations.
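A sketch of turning on versioning and inspecting versions and delete markers with boto3 (bucket and prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning (afterwards it can only ever be suspended, not removed).
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Re-uploading the same key now creates a new 'current' version, and a plain
# DELETE only adds a delete marker on top.
listing = s3.list_object_versions(Bucket="example-bucket", Prefix="reports/")
for v in listing.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
for m in listing.get("DeleteMarkers", []):
    print("delete marker:", m["Key"], m["VersionId"])
```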

AWS S3 Event Notifications

  • Automate with trigger-based workflow. You can set up event notifications when objects are created via PUT, POST, COPY, Multipart Upload, or DELETE.
  • You can filter on key name prefixes and suffixes
  • Publish push notifications to an SNS topic, an SQS queue (consumed asynchronously by a worker fleet) or a Lambda function based on these events (see the sketch below)
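A sketch of wiring an event notification to a Lambda function with boto3; the function ARN is a placeholder and must already allow s3.amazonaws.com to invoke it:

```python
import boto3

s3 = boto3.client("s3")

# Invoke a Lambda function whenever a .jpg object lands under uploads/.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:eu-west-1:123456789012:function:thumbnailer",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "uploads/"},
                {"Name": "suffix", "Value": ".jpg"},
            ]}},
        }]
    },
)
```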

Cross-region Replication

  • There might be compliance reasons to put your data in different regions, enhance security by replicating, take advantage of spot instance pricing, low-latency access etc.
  • You add a replication configuration to the source bucket that tells S3 the destination region and bucket. Objects are then automatically and asynchronously replicated to the destination bucket. You can choose to replicate the full bucket or only certain prefixes, and even replicate into a different storage class (a boto3 sketch follows this list).
  • You can change the storage class or ownership of the objects/buckets replicated.
  • Lifecycle events are not replicated; only user-triggered actions are replicated.
  • Replication is not retroactive, and versioning must be enabled on both the source and destination buckets.
  • It’s a one-way replication only from source to the destination.
  • There is also Same-Region-Replication (SRR).
  • Why would you use replication?
    • SRR - Log Aggregation
    • Prod and Test Sync
    • Resilience with strict sovereignty
    • CRR - Global Resilience Improvements
    • CRR - Latency improvements.
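A sketch of the replication configuration described above, via boto3; the IAM role, bucket names and prefix are placeholders, and versioning must already be on for both buckets:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-logs",
            "Status": "Enabled",
            "Prefix": "logs/",                        # replicate only this prefix
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket",
                "StorageClass": "STANDARD_IA",        # replica lands in a cheaper class
            },
        }],
    },
)
```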

Cross-Origin Resource Sharing

  • This is a method of allowing a web application located in one domain to use resources from another domain. It allows web applications running JavaScript or HTML5 to access objects in another S3 bucket (see the sketch below).
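A sketch of a CORS configuration with boto3; the allowed origin and bucket name are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Let JavaScript served from https://www.example.com read objects in this bucket.
s3.put_bucket_cors(
    Bucket="example-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)
```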

S3 Transfer Acceleration

  • You may have customers uploading content to a centralised bucket, transferring large amounts of data frequently. Transfer Acceleration leverages the AWS CloudFront edge location network to automatically route your data to the closest edge endpoint, so it travels the shortest possible distance over the public internet before entering the AWS network. Standard TCP/HTTP(S) is used; no client software is required (see the sketch below).
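A sketch of enabling Transfer Acceleration and then using the accelerate endpoint from boto3 (bucket name and local file are placeholders):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn acceleration on for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then point the client at the accelerate (edge) endpoint for transfers.
fast_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
fast_s3.upload_file("big-file.bin", "example-bucket", "uploads/big-file.bin")
```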

Parallelize PUTs with multipart uploads

  • A standard, single-stream upload is limited to 5 GB per object and also limits the data transfer speed between you and the S3 endpoint. Multipart upload lets you upload large objects in smaller parts and parallelize the parts to get the most out of your network bandwidth. You can parallelize GETs too: for large objects, use range-based GETs and align your ranges with your parts. The maximum object size with multipart upload is 5 TB, using up to 10,000 parts of between 5 MB and 5 GB each (see the sketch below).
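A sketch of a parallel multipart upload and an aligned range GET with boto3; the file, bucket and part-size values are placeholders:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Anything over 100 MB is split into 100 MB parts, uploaded 10 at a time.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=10)
s3.upload_file("big-file.bin", "example-bucket", "uploads/big-file.bin",
               Config=config)

# Parallelize GETs too: fetch a byte range aligned with the first 100 MB part.
part = s3.get_object(Bucket="example-bucket", Key="uploads/big-file.bin",
                     Range="bytes=0-104857599")
data = part["Body"].read()
```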

Higher Transaction per Second (TPS) by Distributing Key Names

  • Use a key naming scheme with randomness at the beginning for high TPS. This matters most if you regularly exceed 100 TPS on a bucket; avoid key names that start with a date or a monotonically increasing number. You only need to worry about this once you need more than 3,500 PUTs (write TPS) and 5,500 GETs (read TPS) (see the sketch below).
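One common way to add that randomness is to prepend a short hash of the natural key; a minimal sketch (the key format is just an example):

```python
import hashlib

def randomized_key(natural_key: str) -> str:
    """Prefix the key with a short hash so writes spread across the S3 index."""
    prefix = hashlib.md5(natural_key.encode()).hexdigest()[:4]
    return f"{prefix}/{natural_key}"

# '2021-06-01/host1.log' -> e.g. 'a7b3/2021-06-01/host1.log'
print(randomized_key("2021-06-01/host1.log"))
```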

Object Tags

  • S3 object tags are key/value pairs. You can base IAM policies, lifecycle policies and metrics on S3 tags (see the sketch below).
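A sketch of tagging an existing object with boto3 (bucket, key and tag values are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rules, IAM conditions and metrics filters can then match on these tags.
s3.put_object_tagging(
    Bucket="example-bucket",
    Key="reports/2021/summary.txt",
    Tagging={"TagSet": [
        {"Key": "classification", "Value": "internal"},
        {"Key": "project", "Value": "phoenix"},
    ]},
)
```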

Audit and Monitor Access - AWS CloudTrail Data Events

  • Use cases:
    • Perform security analysis
    • Meet your IT auditing and compliance needs
    • Take immediate action on activity
  • How it works:
    • Capture S3 object-level requests
    • Enable at the bucket level
    • Logs are delivered to an S3 bucket of your choice
  • Monitor performance and operation:
    • Amazon CloudWatch metrics for S3 generate metrics for the data of your choice; you can create alerts based on alarms on those metrics.

  • Amazon S3 doesn’t get configured as a Trusted Signer for CloudFront. A Trusted Signer is an AWS account with a CloudFront key pair. The CloudFront Behavior is then instructed to let that key pair create signed URLs.

Restrictions

  • 100 buckets per account by default. Subdivide a bucket (e.g. with prefixes) rather than creating many buckets.
  • Bucket names must follow the S3 naming conventions and be globally unique.

Cost Aspects

  • There is a cost for transferring data out, and the rate differs depending on the destination. There is also a small per-request charge that differs between storage classes.
  • In general, bucket owners pay for all S3 storage and data transfer costs associated with their bucket. A bucket owner can, however, configure a bucket as a "Requester Pays" bucket (see the sketch below).
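A sketch of flipping a bucket to Requester Pays and making a request against it with boto3 (names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Make the bucket Requester Pays.
s3.put_bucket_request_payment(
    Bucket="example-bucket",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# Requesters must then acknowledge the charge on each call.
obj = s3.get_object(Bucket="example-bucket", Key="dataset.csv",
                    RequestPayer="requester")
```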

Glacier

  • Vaults contain archives and must have names that are unique per region per account. Operations are generally asynchronous. A description can only be added at the time an archive is uploaded. A vault can store an unlimited number of archives, and accounts can have 1,000 vaults per region.
  • Retrieving an inventory of a vault's contents is an asynchronous operation, with notifications available via SNS. Glacier performs an automatic inventory of every vault roughly every 24 hours. A vault inventory lists each archive's ID, creation date and size, but NOT user-defined metadata; archives only have a unique ID and an optional description. Archives cannot be edited; you can only delete an archive and upload a new one.
  • Three different retrieval speeds:
    • Expedited: typically completed in 1-5 minutes, for archives under 250 MB
    • Standard: generally completed in 3-5 hours
    • Bulk: economic option for large amounts of data - completed within 5-12 hours
  • If you are storing local metadata for an application, make sure you put it in a separate archive containing only that component; otherwise a retrieval pulls back the whole archive in one go (see the sketch below).
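A sketch of keeping the metadata in its own archive and retrieving it with the boto3 Glacier client; vault name, archive contents and tier are placeholders:

```python
import boto3

glacier = boto3.client("glacier")

# Upload the application metadata as its own small archive, so a retrieval
# doesn't have to pull back the whole data set.
resp = glacier.upload_archive(vaultName="app-backups",
                              archiveDescription="app metadata only",
                              body=b'{"app": "example", "version": 3}')
archive_id = resp["archiveId"]

# Retrievals are asynchronous jobs; choose a speed/price tier.
glacier.initiate_job(
    vaultName="app-backups",
    jobParameters={"Type": "archive-retrieval",
                   "ArchiveId": archive_id,
                   "Tier": "Expedited"},   # or "Standard" / "Bulk"
)
```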