AWS re:Invent 2020 announcement: S3 Strong Read-After-Write Consistency

Andy Jassy presented many new services and features during his opening keynote at AWS re:Invent 2020, but the one that caught my attention was S3 Strong Read-After-Write Consistency. So let’s take a deeper dive into this new S3 feature.

Amazon S3 (Simple Storage Service) is an object storage service designed for scalability, high availability, and low latency, offering 99.999999999% durability and 99.99% availability by replicating data across multiple servers within AWS data centers. It’s used by many types of customers for many purposes: for example hybrid cloud storage, websites, enterprise applications, cloud-native applications, mobile applications, backups, big data analytics, and data lakes.

AWS has now improved it by introducing Strong Read-After-Write Consistency.

In the past, after a PUT API call that stored or modified data, or a DELETE API call that deleted data, there was a small time window in which the change was not yet visible to LIST or GET requests.

Now, immediately after modifying an existing object or writing a new one, any GET or LIST request returns the latest version of the object, with no change to performance or availability and no new global dependencies.
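To make the guarantee concrete, here is a minimal sketch of what read-after-write and read-after-delete consistency mean in practice. `FakeS3` is a hypothetical in-memory stand-in for an S3 client (it is not part of any AWS SDK), used here only so the example runs without AWS credentials; the call shapes loosely mirror boto3’s `put_object`/`get_object`/`delete_object`.

```python
class FakeS3:
    """Hypothetical in-memory stand-in for an S3 client (illustration only)."""

    def __init__(self):
        self._objects = {}

    def put_object(self, Bucket, Key, Body):
        # Store (or overwrite) the object.
        self._objects[(Bucket, Key)] = Body

    def get_object(self, Bucket, Key):
        # Return the object, or fail like S3's NoSuchKey error.
        body = self._objects.get((Bucket, Key))
        if body is None:
            raise KeyError("NoSuchKey")
        return {"Body": body}

    def delete_object(self, Bucket, Key):
        # Remove the object if present.
        self._objects.pop((Bucket, Key), None)


s3 = FakeS3()

# With strong read-after-write consistency, a GET issued immediately
# after a successful PUT (new object or overwrite) returns the new data.
s3.put_object(Bucket="my-bucket", Key="report.csv", Body=b"v2")
assert s3.get_object(Bucket="my-bucket", Key="report.csv")["Body"] == b"v2"

# Likewise, a GET after a DELETE never returns the deleted object.
s3.delete_object(Bucket="my-bucket", Key="report.csv")
try:
    s3.get_object(Bucket="my-bucket", Key="report.csv")
    deleted = False
except KeyError:
    deleted = True
assert deleted
```

Before this change, code could not rely on the two assertions above holding immediately, and defensive retry loops were common.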

This is a big improvement for customers using S3 for their data lakes or big data workloads, since those applications need access to the latest data as soon as it is saved. They will no longer have to resort to third-party tools to get strong consistency.

Last but not least, the service is available at no additional cost in all regions.

You can find more information in the launch video or in Jeff Barr’s blog post:
https://aws.amazon.com/blogs/aws/amazon-s3-update-strong-read-after-write-consistency/

AWS Certified Solutions Architect | AWS Community Builder | IT lover and addict