A true feat of engineering, Amazon Web Services’ Simple Storage Service, or S3, has been instrumental in revolutionizing how we store data and connect to services on the Internet. What started as a simple storage service holding 20 billion objects has grown to serve 100 trillion objects by 2020. We interviewed 24 AWS S3 product managers, engineers, and designers about how S3 has scaled over the years. Here is their oral history, in their own words:
“We were even storing a paragraph from Wikipedia. It said that if the Internet were a country, it would be the fifth largest in the world.” – Laura M. Holson, NY Times, 2011
“We were so excited about Amazon S3; we felt like we had built a rocket ship. It was designed to scale to infinity and beyond.” – Peter De Santis, Amazon Web Services, 2006
“I think it was around 2007 or 2008 when you started hearing about this thing called S3. That was just a year or two after it launched. And I remember we really started focusing on this thing called the S3 interface. The idea was that customers could run apps on top of it. That was the first thing.” – Scott Mowry, AWS S3 Senior Product Manager, 2012
“When we launched S3 in 2006, it had no support for third-party applications whatsoever. We had no idea when we launched it that it would turn into the infrastructure backbone for so many of the services that you use today.” – Werner Vogels, Amazon CTO
“AWS was a relatively new business at that time, with the goal of being a reliable, low-cost cloud provider. One of the things it needed to prove was that it could deliver petabytes of data reliably. Because S3 offered consistent, durable storage, it proved itself to be an important component. That was exactly what AWS needed to prove at the time.” – Werner Vogels, Amazon CTO
“I think we were very bullish on S3. We were building out a global network of data centers, and delivering a petabyte-scale storage system was a requirement for AWS at that time.” – Daisuke Murota, Senior Technical Leader, AWS S3, Q4 2008
“We had gone from synchronous replication to asynchronous replication and the continuous archiving capabilities. There was the ability to restore data from any point in time within the last 30 days. I think that was one of the things that really made AWS tick and made people say, ‘Wow, this is a storage service I can actually use.’” – Dan Adams, Senior Technical Leader, AWS S3
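The pattern Adams describes can be illustrated with a toy model: the primary acknowledges writes immediately and queues them for a replica that applies them later (asynchronous replication), while a per-key version history supports restoring any point in time inside a retention window. This is a minimal sketch for illustration only, not AWS’s implementation; every name in it is invented.

```python
import bisect

RETENTION_SECONDS = 30 * 24 * 3600  # the 30-day restore window quoted above

class ReplicatedStore:
    """Toy model of asynchronous replication plus point-in-time restore."""

    def __init__(self):
        self.versions = {}  # key -> list of (timestamp, value), append-ordered
        self.pending = []   # writes acknowledged but not yet replicated
        self.replica = {}   # eventually-consistent copy

    def put(self, key, value, ts):
        # The primary acknowledges immediately; replication happens later.
        self.versions.setdefault(key, []).append((ts, value))
        self.pending.append((key, value))

    def drain_replica(self):
        # The asynchronous step: apply queued writes to the replica.
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()

    def restore(self, key, as_of, now):
        # Return the value `key` held at time `as_of`, if inside the window.
        if as_of < now - RETENTION_SECONDS:
            raise ValueError("outside the 30-day restore window")
        history = self.versions.get(key, [])
        stamps = [t for t, _ in history]
        i = bisect.bisect_right(stamps, as_of)
        return history[i - 1][1] if i else None

store = ReplicatedStore()
store.put("report.csv", "v1", ts=100)
store.put("report.csv", "v2", ts=200)
assert store.replica == {}                  # async: nothing replicated yet
store.drain_replica()
assert store.replica["report.csv"] == "v2"
assert store.restore("report.csv", as_of=150, now=200) == "v1"
```

The key trade-off the quote hints at: the replica can lag the primary (here, until `drain_replica` runs), in exchange for writes that do not wait on a second machine.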
“I think we were trying to create an easy way for our customers to get started with cloud computing—and here we had this unusual product called S3. Suddenly, ‘cloud’ became one of those keywords and everyone wanted it. There was a huge shift in how people viewed the cloud: it wasn’t just for startups and non-profits anymore. Really, it was about how you make the cloud work for you and use it to solve virtually any kind of problem.” – Jeff Barr, AWS S3 Product Manager
“There was a large base of customers with varied needs, so we had this new idea that we could create a marketplace, a platform where customers could choose from different providers. We had a long list of vendors interested in joining the platform and offering their services on top of ours. And so we started doing this, and it really grew organically.” – Jeff Barr, AWS S3 Product Manager
“There was a lot of math involved. For instance, the designers wanted to make sure that access times for users would be under 30 ms. The goal was to have a billion objects stored on each node within each region within one year.
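The targets in that last quote can be sanity-checked with back-of-the-envelope arithmetic. The 30 ms and billion-object figures come from the quote; the calculation itself is ours:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000

# One billion objects landing on a node within a year implies a
# sustained ingest rate of roughly 32 objects per second per node.
objects_per_node = 1_000_000_000
ingest_rate = objects_per_node / SECONDS_PER_YEAR
print(f"{ingest_rate:.1f} objects/sec per node")  # ~31.7
```

At steady state, then, each node would have to absorb about 32 new objects every second, every second of the year, while still answering reads inside the 30 ms budget.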