What is S3 Storage?
Amazon Simple Storage Service (S3) is a massively scalable storage service based on object storage technology. It provides very high durability, availability, and performance. Data can be accessed from anywhere over the Internet, through the AWS Management Console and the S3 API.
S3 storage provides the following key features:
- Buckets—data is stored in buckets. Each bucket can store an unlimited amount of unstructured data.
- Elastic scalability—S3 has no storage limit. Individual objects can be up to 5 TB in size.
- Flexible data structure—each object is identified using a unique key, and you can use metadata to flexibly organize data.
- Downloading data—easily share data with anyone inside or outside your organization and enable them to download data over the Internet.
- Permissions—assign permissions at the bucket or object level to ensure only authorized users can access data.
- APIs—the S3 REST API has become an industry standard and is integrated with a large number of existing tools (a legacy SOAP interface also exists but is deprecated).
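As a sketch of how the S3 API is typically consumed, here is a minimal example using the boto3 Python SDK (one of many S3-compatible clients; the choice of boto3 is an assumption, not something the article prescribes):

```python
# Minimal sketch of accessing S3 through the boto3 SDK.
# Requires AWS credentials to be configured in the environment.

def bucket_names(s3_client):
    """Return the names of all buckets visible to the caller's account."""
    response = s3_client.list_buckets()
    return [b["Name"] for b in response.get("Buckets", [])]

if __name__ == "__main__":
    import boto3  # imported here so the helper stays dependency-free
    s3 = boto3.client("s3")
    print(bucket_names(s3))
```

Because the same API is exposed by many S3-compatible systems, code like this can usually be pointed at a non-AWS endpoint by passing `endpoint_url` to the client.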
This is part of an extensive series of guides about cloud storage.
In this article, you will learn:
- How Does S3 Storage Work?
- Amazon S3 Storage Classes
- S3 Storage Q&A
How Does S3 Storage Work?
Amazon S3 data is stored as objects. This approach enables highly scalable storage in the cloud. Objects can be placed on a variety of physical disk drives distributed throughout the data center. Amazon data centers use specialized hardware, software, and distributed file systems to provide true elastic scalability.
Amazon provides redundancy and versioning at the object level. Data is automatically stored in multiple locations, distributed across multiple disks, and in some cases, multiple availability zones or regions. The Amazon S3 service periodically verifies the integrity of the data using checksums. If data corruption is detected, the redundant copies are used to repair the object.
S3 lets you manage your data via the Amazon Console and the S3 API.
Buckets are logical containers in which data is stored. S3 provides unlimited scalability, and there is no official limit on the amount of data and number of objects you can store in an S3 bucket. The size limit for objects stored in a bucket is 5 TB.
An S3 bucket name must be unique across all S3 users, because the bucket namespace is shared across all AWS accounts.
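Because the namespace is global, creating a bucket can fail if another account already owns the name. A sketch of handling this with boto3 (the bucket name and region below are placeholders), including the special case that `us-east-1` must not be given an explicit location constraint:

```python
# Sketch: build the arguments for an S3 CreateBucket call.
# us-east-1 is special-cased because the API rejects an explicit
# LocationConstraint for that region.

def create_bucket_args(name, region):
    args = {"Bucket": name}
    if region != "us-east-1":
        args["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return args

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3", region_name="eu-west-1")
    try:
        s3.create_bucket(**create_bucket_args("my-unique-bucket-name", "eu-west-1"))
    except s3.exceptions.BucketAlreadyExists:
        # the global namespace means another account may own this name
        print("bucket name already taken")
```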
When you upload an object to a bucket, the object gets a unique key. The key is a string that mimics a directory hierarchy. Once you know the key, you can access the object in the bucket.
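Storing and retrieving an object by key can be sketched as follows (bucket and key names are illustrative placeholders):

```python
# Sketch: store and fetch an object by key. Slash-separated keys give
# the appearance of folders, though S3 itself has a flat namespace.

def put_text(s3_client, bucket, key, text):
    s3_client.put_object(Bucket=bucket, Key=key, Body=text.encode("utf-8"))

def get_text(s3_client, bucket, key):
    obj = s3_client.get_object(Bucket=bucket, Key=key)
    return obj["Body"].read().decode("utf-8")

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3")
    put_text(s3, "example-bucket", "reports/2021/summary.txt", "hello")
    print(get_text(s3, "example-bucket", "reports/2021/summary.txt"))
```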
The bucket name, key, and version ID uniquely identify every object in S3. S3 provides two URL structures you can use to directly access an object: the virtual-hosted style (https://bucket-name.s3.region.amazonaws.com/key-name) and the path style (https://s3.region.amazonaws.com/bucket-name/key-name).
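The two URL structures can be sketched as simple string builders, assuming the standard public S3 endpoints (the bucket, region, and key values are placeholders):

```python
# Sketch of S3's two object URL styles, assuming the standard
# public endpoints.

def virtual_hosted_url(bucket, region, key):
    # bucket name becomes part of the hostname
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def path_style_url(bucket, region, key):
    # bucket name becomes the first path segment
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"
```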
Amazon has data centers in 24 geographical regions. To reduce network latency and minimize costs, store your data in the region closest to its users.
Unless you manually migrate your data, data stored in a specific AWS Region will never leave that region’s data center. AWS Regions are separated from each other to provide fault tolerance and reliability.
Each region is made up of at least three availability zones, which are separated, independent data centers. Data is replicated across availability zones to protect against outage of equipment in a specific data center, or disasters like fires, hurricanes and floods.
Related content: read our guide to object storage deployment
Amazon S3 Storage Classes
S3 provides storage tiers, also called storage classes, which can be applied at the bucket or object level. S3 also provides lifecycle policies you can use to automatically move objects between tiers, based on rules or thresholds you define.
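A lifecycle rule can be sketched as follows; the prefix and day thresholds below are illustrative, not recommendations:

```python
# Sketch: a lifecycle rule that transitions objects under a prefix
# to cheaper tiers as they age. Thresholds are illustrative.

def aging_rule(prefix, ia_days=30, glacier_days=90):
    return {
        "ID": f"age-out-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": ia_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_days, "StorageClass": "GLACIER"},
        ],
    }

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={"Rules": [aging_rule("logs/")]},
    )
```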
The main storage classes are:
- Standard—for frequently accessed data
- Standard-IA—standard infrequent access
- One Zone-IA—one-zone infrequent access
- Intelligent-Tiering—automatically moves data to the most appropriate tier
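A storage class can also be chosen per object at upload time, as in this sketch (the bucket and key names are placeholders):

```python
# Sketch: pick a storage class at upload time. STANDARD_IA suits
# data that is written once and read rarely.

def put_infrequent(s3_client, bucket, key, body):
    s3_client.put_object(
        Bucket=bucket, Key=key, Body=body, StorageClass="STANDARD_IA"
    )

if __name__ == "__main__":
    import boto3
    put_infrequent(boto3.client("s3"), "example-bucket", "archive/2020.csv", b"data")
```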
Below we expand on the more commonly used classes.
Amazon S3 Standard
The S3 standard tier provides:
- Durability of 99.999999999% by replicating objects to multiple Availability Zones
- 99.99% availability backed by Service Level Agreement (SLA)
- SSL/TLS encryption for data in transit, with support for server-side encryption of data at rest
Amazon S3 Standard-Infrequent Access
The S3 Standard-IA tier is for infrequently accessed data. It has a lower cost per GB/month, compared to the Standard tier, but charges a retrieval fee. The S3 Standard-IA tier provides:
- The same performance and latency as the Standard tier
- The same durability—99.999999999% across multiple Availability Zones
- 99.9% availability backed by SLA
S3 Storage Archive
S3 provides Glacier and Deep Archive, storage classes intended for archived data that is accessed very infrequently. Cost per GB/month is lower than S3 Standard-IA.
- S3 Glacier—data must be stored for at least 90 days and can be restored within 1-5 minutes, with expedited retrieval.
- S3 Glacier Deep Archive—data must be stored for at least 180 days, and can be retrieved within 12 hours. There is a discount on bulk data retrieval, which takes up to 48 hours.
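Objects in the Glacier classes must be explicitly restored before they can be read. A sketch of issuing such a request with boto3 (bucket, key, and the 7-day window are placeholders):

```python
# Sketch: request a temporary restore of a Glacier object. Tier may be
# "Expedited" (minutes), "Standard" (hours), or "Bulk" (cheapest, slowest).

def restore_request(days, tier="Standard"):
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3")
    s3.restore_object(
        Bucket="example-bucket",
        Key="archive/old-logs.tar",
        RestoreRequest=restore_request(7, "Expedited"),
    )
```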
S3 Storage Q&A
How Much Data Can I Store in Amazon S3?
You can store an unlimited amount of data in Amazon S3. Other storage related limits include:
- Individual objects are limited to 5 TB
- A single PUT operation can upload up to 5 GB
- For objects larger than 100 MB, Amazon recommends Multipart Upload
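Multipart Upload splits a large object into parts that are uploaded independently and then assembled. With boto3's transfer manager, multipart is used automatically above a configurable threshold; the sketch below (file and bucket names are placeholders, and the 100 MB part size is illustrative) also shows how the part count relates to object size:

```python
import math

# Sketch: how many parts a multipart upload would use for a given
# object size, with an illustrative 100 MB part size.

def part_count(object_size, part_size=100 * 1024 * 1024):
    return max(1, math.ceil(object_size / part_size))

if __name__ == "__main__":
    import boto3
    from boto3.s3.transfer import TransferConfig
    # upload_file switches to multipart automatically above the threshold
    config = TransferConfig(multipart_threshold=100 * 1024 * 1024)
    boto3.client("s3").upload_file(
        "backup.tar", "example-bucket", "backups/backup.tar", Config=config
    )
```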
How is Amazon S3 Data Organized?
Amazon S3 is an object store. Each object has a unique key that can be used to retrieve it later. You can define any string as a key, and keys can be used to create a hierarchy, for example by including a directory structure in the key. Another option is to organize objects using metadata, using S3 Object Tagging.
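Object tags are sent to S3 as a list of key-value pairs. A sketch of building and applying a tag set with boto3 (tag names and object key are illustrative):

```python
# Sketch: S3 object tags are sent as a list of {"Key", "Value"} pairs.

def tag_set(tags):
    """Convert a plain dict into the TagSet shape the S3 API expects."""
    return {"TagSet": [{"Key": k, "Value": v} for k, v in sorted(tags.items())]}

if __name__ == "__main__":
    import boto3
    s3 = boto3.client("s3")
    s3.put_object_tagging(
        Bucket="example-bucket",
        Key="reports/2021/summary.txt",
        Tagging=tag_set({"department": "finance", "status": "final"}),
    )
```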
How Reliable is Amazon S3?
Amazon S3 provides 11 nines (99.999999999%) durability. With regard to availability, S3 guarantees:
- 99.99% availability for Standard storage class
- 99.9% availability for Standard-IA, Glacier and Deep Archive
- 99.5% availability for One Zone-IA
Meet Cloudian: S3-Compatible, Massively Scalable On-Premise Object Storage
Cloudian® HyperStore® is a massive-capacity object storage device that is fully compatible with Amazon S3. It can store up to 1.5 Petabytes in a 4U Chassis device, allowing you to store up to 18 Petabytes in a single data center rack. HyperStore comes with fully redundant power and cooling, and performance features including 1.92TB SSD drives for metadata, and 10Gb Ethernet ports for fast data transfer.
HyperStore is an object storage solution you can plug in and start using with no complex deployment. It also offers advanced data protection features, supporting use cases like compliance, healthcare data storage, disaster recovery, ransomware protection and data lifecycle management.
Learn more about Cloudian® HyperStore®.
See Our Additional Guides On Cloud Storage
We have authored in-depth guides on several other topics that can also be useful as you explore the world of cloud storage.
File Upload and Sharing Technologies
Authored by Cloudinary
File uploads are a common method of collecting file data from users and creating interactivity in services. For example, file uploads are used to enable users to edit their own images or submit documents for translation.
This guide explains what file uploads are, covers the most common types of file upload methods, and explains how you can use Cloudinary to upload files through a variety of languages and frameworks.
See top articles in our file upload guide:
- File Upload With Angular to Cloudinary
- Uploading PHP Files and Rich Media the Easy Way
- AJAX File Upload – Quick Tutorial & Time Saving Tips
Google Cloud Storage
Authored by NetApp
Google Cloud offers a variety of storage options for you to choose from. These services form the base of many other services in the cloud and understanding what your options are can help you manage your cloud more efficiently. This guide explains what Google Cloud Storage options exist and their common uses.
See top articles in our Google Cloud storage guide: