Ultimate Guide to S3 Express One Zone Costs

Srinivas Devaki | December 26, 2023

A brief comparison between "S3 Express One Zone" and "S3 Standard."

First off, set up the S3 Express Gateway Endpoint!

Similar to S3 Standard, by default all access to S3 Express One Zone is routed through the NAT Gateway and IGW, which carry an insane per-GB processing charge of $0.045. And just like with S3 Standard, the free gateway endpoint needs to be set up to avoid these charges.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-networking.html
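A minimal sketch of that setup with boto3; the VPC and route table IDs are placeholders, and the service name is an assumption taken from the networking guide linked above, so confirm it there before use.

```python
import boto3

# Sketch only: create the free gateway VPC endpoint so S3 Express traffic
# bypasses the NAT Gateway. IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                    # placeholder
    ServiceName="com.amazonaws.us-east-1.s3express",  # assumed; see the linked AWS docs
    RouteTableIds=["rtb-0123456789abcdef0"],          # placeholder
)
```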

S3 Express APIs are 50% cheaper, but is that always true?

While read and write S3 Express APIs are 50% cheaper than S3 Standard, S3 Express charges additional per-GB bandwidth costs when the object size is greater than 512 KB.

Plugging in the charges, it turns out that once the object size exceeds roughly 645 KB, the read and write APIs of S3 Express One Zone become costlier than the S3 Standard APIs. So the statement “S3 Express One Zone offers 10x better latency at 50% lower cost” needs to be treated carefully: it is only true when the object size is less than about 645 KB; otherwise, the S3 Express API is costlier in every single dimension.
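A small sketch of that break-even arithmetic. The per-request prices below are the published S3 Standard prices, with S3 Express at 50% of them; the per-GB overage prices, which the pricing page charges on the portion of a request above 512 KB, change over time, so they are left as parameters to be read off the current pricing page.

```python
# Break-even object size at which one PUT + one GET on S3 Express One Zone
# stops being cheaper than on S3 Standard (sketch, not a billing tool).
STD_PUT, STD_GET = 0.005 / 1000, 0.0004 / 1000    # $ per request, S3 Standard list prices
EXP_PUT, EXP_GET = 0.0025 / 1000, 0.0002 / 1000   # $ per request, S3 Express at 50% of Standard

def break_even_object_kb(upload_gb_price: float, retrieve_gb_price: float) -> float:
    """Object size (KB) where the per-GB overage eats up the request savings."""
    request_savings = (STD_PUT + STD_GET) - (EXP_PUT + EXP_GET)             # $ saved per PUT+GET pair
    overage_per_kb = (upload_gb_price + retrieve_gb_price) / (1024 * 1024)  # $ per KB above 512 KB
    return 512 + request_savings / overage_per_kb
```

Plugging in the per-GB prices from the pricing page at the time of writing is how the roughly 645 KB figure above falls out.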

Are there any Cost Benefits of Storing Files in 512KB Chunks?

Since there is no cost benefit to storing files larger than 645 KB, one could store files in chunks of 512 KB to maximise the benefit of the 50% cheaper read/write APIs. However, the average latency keeps increasing as the number of chunks grows, because a read only completes once its slowest chunk arrives, and p99 request latencies can be as high as 250 ms. For objects beyond 50 MB, the average latency of this approach reaches the same levels as S3 Standard while still being more costly than S3 Standard in terms of storage.

So there is no major cost benefit to storing files in 512 KB chunks beyond 50 MB: the latency reaches the same levels as S3 Standard while the storage remains more expensive than S3 Standard.
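A toy simulation (not a benchmark) of that latency effect: when a file is split into N chunks fetched in parallel, the read completes only when the slowest chunk arrives, so the average completion time climbs toward the per-request tail latency as N grows. The latency distribution below is invented purely for illustration.

```python
import random

def sample_chunk_latency_ms() -> float:
    # Assumed shape: single-digit ms typically, rare slow requests out to ~250 ms.
    if random.random() < 0.99:
        return 5 + random.expovariate(1 / 10)
    return random.uniform(100, 250)

def avg_read_latency_ms(num_chunks: int, trials: int = 2000) -> float:
    # A parallel chunked read is only as fast as its slowest chunk.
    return sum(
        max(sample_chunk_latency_ms() for _ in range(num_chunks))
        for _ in range(trials)
    ) / trials

for n in (1, 8, 32, 100):   # 100 chunks of 512 KB is roughly a 50 MB file
    print(f"{n:>3} chunks -> avg read completion ~{avg_read_latency_ms(n):.0f} ms")
```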

Can S3 Express act as a cheaper layer for compaction?

Not in most cases: only if the object files before compaction are smaller than 512 KB. In most data workloads, even with decent buffering, it's quite possible to buffer and store files of at least 1 MB before compaction, so it's really rare for S3 Express to become more affordable than S3 Standard for the compaction use case.

The calculations below ignore S3 Express storage costs since, for compaction, the object files don't have to stay for more than a few minutes, or a few hours at most.

For 1,000 files, the comparison looks roughly like this:
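A sketch under assumed prices (S3 Standard list prices per request, Express at 50%; the Express per-GB overage, which applies to the portion of each request above 512 KB, is left as a parameter to be taken from the AWS pricing page):

```python
# Request cost of staging n_files pre-compaction files (write once, read back
# once for compaction). Storage is ignored per the note above, and the
# compacted output is written to S3 Standard in both cases, so it cancels out.
STD_PUT, STD_GET = 0.005 / 1000, 0.0004 / 1000
EXP_PUT, EXP_GET = 0.0025 / 1000, 0.0002 / 1000

def staging_cost(file_kb: float, n_files: int = 1000, express: bool = False,
                 upload_gb_price: float = 0.0, retrieve_gb_price: float = 0.0) -> float:
    if not express:
        return n_files * (STD_PUT + STD_GET)
    # Overage applies to the portion of each upload and retrieval above 512 KB.
    overage_gb = max(file_kb - 512, 0) * n_files / (1024 * 1024)
    return (n_files * (EXP_PUT + EXP_GET)
            + overage_gb * (upload_gb_price + retrieve_gb_price))
```

Below 512 KB per file, Express halves the request cost; once the buffered files approach 1 MB, the per-GB overage pushes them past the roughly 645 KB break-even above and S3 Standard comes out cheaper again.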

As a Caching Layer on top of S3 Standard

The latency benefits are quite clear, but what is misleading is the implication that this layer is somehow 50% cheaper than S3 Standard in most cases.

As we have seen in the two sections above, if a significant share of your objects are larger than 645 KB, it is costlier to store them in S3 Express One Zone than in S3 Standard.

This means that unless most of your objects are smaller than 645 KB, you are paying a premium for this latency-optimised caching layer.

Low Latency & High-Cost Storage

So overall, S3 Express is mostly for low-latency use cases, and only in rare and specific cases is it cheaper than S3 Standard.

So it’s necessary to estimate costs before trusting the AWS statement, “S3 Express One Zone can improve data access speeds by 10x and reduce request costs by 50% compared to S3 Standard.” - Link

Short-term object storage

Even though the storage cost is almost 7x that of S3 Standard, the 50% cheaper API cost makes it really enticing for storing small files for short durations.

Unlocking New Architectures

Taking latency out of the equation, I don't see many cases where S3 Express can reduce costs compared to S3 Standard. But S3 Express surely unlocks a number of awesome cost-saving architectures.

DynamoDB Cache Storage

S3 Express can act as a great caching layer for databases where document sizes are slightly bigger (>4 KB): items that aren't a perfect fit for the database but also not worth the high latency of S3 Standard. For example, S3 Express makes a great cache for large objects stored in DynamoDB. The eventually consistent read API cost of DynamoDB is $0.125 per million per 4 KB, while the S3 Express read cost is a flat $0.2 per million, so for any object larger than 4 KB the DynamoDB cost is already $0.25 per million. And since DynamoDB's maximum item size is 400 KB, well under the 512 KB threshold, the cost model stays simple.
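Putting those numbers into a small comparison (prices as quoted above; treat them as assumptions and check the current pricing pages):

```python
import math

DDB_EC_READ_PER_MILLION_PER_4KB = 0.125   # eventually consistent on-demand reads
EXP_GET_PER_MILLION = 0.2                 # flat for objects up to 512 KB

def dynamodb_read_cost_per_million(item_kb: float) -> float:
    # DynamoDB bills one eventually-consistent read unit per 4 KB, rounded up.
    return math.ceil(item_kb / 4) * DDB_EC_READ_PER_MILLION_PER_4KB

def s3_express_read_cost_per_million(object_kb: float) -> float:
    assert object_kb <= 512, "above 512 KB the per-GB overage would apply"
    return EXP_GET_PER_MILLION

for kb in (4, 8, 64, 400):   # 400 KB is DynamoDB's maximum item size
    print(f"{kb:>3} KB  DynamoDB ${dynamodb_read_cost_per_million(kb):>6.3f}/M"
          f"  vs  S3 Express ${s3_express_read_cost_per_million(kb):.3f}/M")
```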

As the numbers show, using S3 Express as cache storage for larger DynamoDB items gets dramatically cheaper, with the cost gap growing roughly linearly with item size, and without trading away the latency requirements.

Lack of TTL

Even though a lot of re:Invent talks and S3 Express case studies showcase various forms of using S3 Express as a cache, it surprisingly doesn't support lifecycle policies to delete objects.

Since the storage cost of S3 Express is almost 7x that of S3 Standard, it's really important to remove any cached data that has gone cold to avoid surprise bills. I hope the AWS team introduces lifecycle policies for S3 Express soon; until then, users will have to schedule deletion events using the Amazon EventBridge Scheduler and a Lambda (in many cases, it would still be cheaper even after this custom expiration stack).
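A minimal sketch of that workaround, assuming a cleanup Lambda (invoked by the schedule) that calls delete_object; the Lambda ARN, scheduler role ARN, and naming scheme are placeholders rather than an established pattern.

```python
import json
from datetime import datetime, timedelta, timezone
from uuid import uuid4

import boto3

scheduler = boto3.client("scheduler")

def schedule_expiry(bucket: str, key: str, ttl: timedelta,
                    cleanup_lambda_arn: str, scheduler_role_arn: str) -> None:
    """Register a one-time EventBridge Scheduler invocation to delete a cached object."""
    expire_at = datetime.now(timezone.utc) + ttl
    scheduler.create_schedule(
        Name=f"s3express-expire-{uuid4().hex}",                   # placeholder naming scheme
        ScheduleExpression=f"at({expire_at:%Y-%m-%dT%H:%M:%S})",  # one-time schedule
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": cleanup_lambda_arn,        # Lambda that calls s3.delete_object(bucket, key)
            "RoleArn": scheduler_role_arn,    # role EventBridge Scheduler assumes to invoke it
            "Input": json.dumps({"bucket": bucket, "key": key}),
        },
    )
```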

Replication Layer

It is now much cheaper and faster to implement a database or pub-sub replication layer using S3 Express One Zone. Since the data can be deleted immediately after replication, the 7x storage cost is of no concern. Durability is also not a major concern, because the bulk of the data can be replicated via S3 Express while the lightweight acknowledgements are driven over normal VPC networking.

With S3 Standard, the latency is too high (250 ms write + 250 ms read) for replication through object storage to be viable, which is why Aurora replicates over the normal network before durably writing to S3 Standard. But with S3 Express, end-to-end replication comes down to around 20 ms on average, which is a pretty decent wait time for a write to be durably replicated across AZs.

A 20 ms write wait for durable storage is acceptable in a number of databases like ClickHouse, Cassandra, Elasticsearch/OpenSearch, DocumentDB, MongoDB, etc.
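A minimal sketch of that write path (not a real replication protocol), assuming a recent boto3 that handles S3 Express session auth transparently; the directory-bucket name and the peer-notification transport are placeholders.

```python
import boto3

s3 = boto3.client("s3")
DIRECTORY_BUCKET = "replication-log--use1-az4--x-s3"   # placeholder directory-bucket name

def replicate_batch(batch_id: str, payload: bytes, notify_peers) -> None:
    """Land the bulk data in S3 Express, then send only a small pointer over the VPC network."""
    key = f"wal/{batch_id}"
    s3.put_object(Bucket=DIRECTORY_BUCKET, Key=key, Body=payload)   # low-latency bulk write
    notify_peers({"batch_id": batch_id, "key": key})                # lightweight acknowledgement path

def apply_on_peer(message: dict) -> bytes:
    # Peers pull the bulk payload from the bucket instead of receiving it over the network.
    obj = s3.get_object(Bucket=DIRECTORY_BUCKET, Key=message["key"])
    return obj["Body"].read()
```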

This replication pattern isn't limited to databases; it also applies to pub-sub systems like Kafka. For example, the WarpStream agent architecture could be transformed from S3-based durability into quorum durability, where the quorum's job is to durably store data in S3 Standard. Since the PUT request cost is halved in S3 Express, the flush interval could easily be halved without any impact on PUT cost while reducing write latency. And since durability is guaranteed by the quorum, the S3 Standard PUT request cost could even be amortised over longer intervals, i.e. one flush per second rather than one per 100 ms; this decoupling also helps with the fact that a flush currently happens per agent per 100 ms.

ML & AI

Most common ML & AI inference use cases need to fetch a significant amount of data related to the common entities every business manages. AWS SageMaker Feature Store solves this today, but its cost model is very similar to DynamoDB plus DAX/ElastiCache; as of now, the only way to reduce the cost of high-throughput features is an in-memory caching layer, where the amount of data you can keep is quite low and the storage cost is too high. Similar to the DynamoDB cache above, S3 Express unlocks much cheaper storage for a feature store.


Concluding

Aurora first unlocked object storage as a database backend almost a decade ago, and a lot of awesome distributed systems have been built on top of object storage since, like WarpStream, Axiom, Confluent Kora, MemQ, etc. I'm really excited to see what new systems will be built on top of S3 Express. 🚀


At Opti Owl, we are rethinking cloud cost optimization, not as a central command-and-control mechanism but as a continuous, bottom-up process that fosters a frugal culture among engineering teams. To learn more about Opti Owl and how we can reduce your cloud and SaaS vendor costs, contact us or schedule a demo today! 🙂
