Cloudian Blog

S3 Buckets: Accessing, Managing, and Securing Your Buckets


What Is an AWS S3 Bucket?

Amazon Simple Storage Service (Amazon S3) is an object storage solution that provides data availability, performance, security and scalability. Organizations from all industries and of every size may use Amazon S3 storage to safeguard and store any amount of information for a variety of use cases, including websites, data lakes, backup and restore, mobile applications, archives, big data analytics, IoT devices, and enterprise applications.

To retain your information in Amazon S3, you use resources called objects and buckets. A bucket is a container that houses objects. An object contains a file and all metadata used to describe the file.

To store an object in Amazon S3, you create a bucket and upload the object into it. Once the object is in the bucket, you can move it, download it, or open it. When you no longer need the bucket or object, you can delete them to reduce your resource usage.

In this article:

  • How to Use an Amazon S3 Bucket
  • Tutorial: Creating a Bucket
  • What Is an S3 Bucket Policy?
  • S3 Bucket URLs and Other Methods to Access Your Buckets
    • Virtual-Hosted-Style Access
    • Path-Style Access
    • Accessing a Bucket Via S3 Access Points
    • Accessing a Bucket Using S3://
  • S3 Bucket Configuration: Understanding Subresources
  • Best Practices for Keeping Amazon S3 Buckets Secure
    • Block Public S3 Buckets at the Organization Level
    • Implement Role-Based Access Control
    • Encrypt Your Data
  • S3 Bucket with Cloudian

 

This is part of an extensive series of articles about S3 Storage.

How to Use an Amazon S3 Bucket

An S3 customer starts by creating a bucket in the AWS Region of their choosing and assigning it a unique name. AWS suggests that customers select Regions that are geographically close to them in order to minimize costs and latency.

After creating the bucket, the user chooses a storage tier based on the usage requirements for the data—there are various S3 tiers that differ in price, accessibility, and redundancy. A single bucket can retain objects from distinct S3 storage tiers.

The user may then assign particular access privileges for the objects retained in the bucket using various mechanisms, including bucket policies, the AWS IAM service, and ACLs (access control lists).

An AWS customer may work with an Amazon S3 bucket via the APIs, the AWS CLI, or the AWS Management Console.

Related content: Read our guide to the S3 API

Tutorial: Creating a Bucket

Before you can store content in S3, you need to create a new bucket, selecting a bucket name and Region. You may also wish to select additional storage management options for your bucket. Once you have created a bucket, you cannot modify its Region or name.

The AWS account that created the bucket remains its owner. You may upload as many objects as you like to the bucket. By default, each AWS account can have up to 100 buckets.

S3 lets you create buckets using the S3 Console or the API.

Keep in mind that buckets are priced according to the volume of data stored in them, among other criteria. Learn more in our guide to S3 pricing.

To create an S3 bucket via the S3 console:

  1. Access the S3 console.
  2. Select Create bucket.
  3. In Bucket name, create a DNS-accepted name for your bucket.

 


The bucket name must be unique, begin with a lowercase letter or number, be between 3 and 63 characters long, and must not contain any uppercase characters.

  4. Select the AWS Region for the bucket. Choose a Region near you to keep latency and cost to a minimum and to meet regulatory requirements. Keep in mind that there are additional charges for transferring objects out of a Region.
  5. In Bucket settings for Block Public Access, specify whether you want to allow or block access from external networks.
  6. Optionally, enable the Object Lock feature under Advanced settings > Object Lock.
  7. Select Create bucket.
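
If you prefer to script bucket creation instead of using the console, the sketch below shows the same steps with the AWS SDK for Python (boto3). The bucket name, Region, and object key are placeholder values.

```python
import boto3

# Assumes AWS credentials are already configured (for example via `aws configure`).
s3 = boto3.client("s3", region_name="us-east-1")

bucket_name = "example-bucket-one"  # must be globally unique and DNS-compliant

# In us-east-1 no location constraint is needed; other Regions require
# CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket=bucket_name)

# Upload an object, then list the bucket contents to confirm it arrived.
s3.put_object(Bucket=bucket_name, Key="kitty.png", Body=b"...image bytes...")
for obj in s3.list_objects_v2(Bucket=bucket_name).get("Contents", []):
    print(obj["Key"], obj["Size"])
```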

What Is an S3 Bucket Policy?

S3 provides the concept of a bucket policy, which lets you define access permissions for a bucket and the content stored in it. Technically, it is an Amazon IAM policy, which employs a JSON-based policy language.

For instance, policies permit you to:

  • Enable read access for anonymous users
  • Restrict a particular IP address from accessing the bucket
  • Limit access to requests from a particular HTTP referrer
  • Require multi-factor authentication
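
To make this concrete, here is a minimal sketch (using boto3, with a hypothetical bucket name and IP range) that applies a policy denying all S3 actions from one address range:

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical policy: deny every S3 action on the bucket when the request
# originates from the 203.0.113.0/24 address range.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOneIpRange",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket-one",
            "arn:aws:s3:::example-bucket-one/*",
        ],
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

s3.put_bucket_policy(Bucket="example-bucket-one", Policy=json.dumps(policy))
```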

S3 Bucket URLs and Other Methods to Access Your Buckets

You can perform almost any operation using the S3 console, with no need for code. However, S3 also provides a powerful REST API that gives you programmatic access to buckets and objects. You can reference any bucket or the objects within it via a unique Uniform Resource Identifier (URI).

Amazon S3 supports both path-style and virtual-hosted-style URLs for bucket access. Because buckets are accessible through these URLs, it is recommended that you create buckets with DNS-compliant names.

Virtual-Hosted-Style Access

In a virtual-hosted-style request, the bucket name is a component of the domain name within the URL.

Amazon S3 virtual-hosted-style URLs employ this format:

https://bucket-name.s3.Region.amazonaws.com/key name

For example, if you name the bucket bucket-one, select the US East 1 (Northern Virginia) Region, and use kitty.png as your key name, the URL will look as follows:

https://bucket-one.s3.us-east-1.amazonaws.com/kitty.png

Path-Style Access

In Amazon S3, path-style URLs use this format:

https://s3.Region.amazonaws.com/bucket-name/key name

For example, if you created a bucket in the US East (Northern Virginia) Region and named it bucket-one, the path-style URL you use to access the kitty.jpg object in the bucket will look like this:

https://s3.us-east-1.amazonaws.com/bucket-one/kitty.jpg
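
Since both URL styles are built from the same three pieces of information, a small helper function (illustrative only) can generate them:

```python
def s3_urls(bucket: str, region: str, key: str) -> dict:
    """Return the two common S3 URL styles for a bucket/key pair."""
    return {
        "virtual_hosted": f"https://{bucket}.s3.{region}.amazonaws.com/{key}",
        "path_style": f"https://s3.{region}.amazonaws.com/{bucket}/{key}",
    }

print(s3_urls("bucket-one", "us-east-1", "kitty.jpg"))
```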

Accessing a Bucket Via S3 Access Points

As well as working with a bucket directly, you can work with a bucket via an access point.

S3 access points exclusively support virtual-host-style addressing. To address a bucket via an access point, you must employ the following format:

https://AccessPointName-AccountId.s3-accesspoint.region.amazonaws.com

Accessing a Bucket Using S3://

Certain AWS services require you to specify an Amazon S3 bucket using the S3:// scheme, which follows this format:

S3://bucket-name/key-name

Note that when employing this format the bucket name does not feature the AWS Region. For example, a bucket called bucket-one with a kitty.jpg key will look like this:

S3://bucket-one/kitty.jpg

S3 Bucket Configuration: Understanding Subresources

AWS provides various configuration tools for Amazon S3 buckets. An IT specialist may enable versioning for S3 buckets to retain every version of an object when an operation, such as a copy or delete, is carried out on it. This helps prevent objects from being deleted accidentally. Similarly, when creating a bucket, a user can set up server access logs, tags, object-level API logs, and encryption.

S3 Transfer Acceleration can assist with the execution of secure and fast transfers from the client to an S3 bucket via AWS edge locations.

Amazon S3 provides support for different alternatives for you to configure your bucket. Amazon S3 offers support for subresources so you can manage and retain the bucket configuration details. You can employ the Amazon S3 API to manage and develop these subresources. You may also utilize the AWS SDKs or the console.

These are known as subresources because they function in the context of a specific object or bucket. The list below describes the subresources that let you manage bucket-specific configurations.

cors (cross-origin resource sharing):  You may configure your bucket to permit cross-origin requests.

event notification: You may permit your bucket to alert you of particular bucket events.

lifecycle: You may specify lifecycle regulations for objects within your bucket that feature a well-outlined lifecycle.

location: When you establish a bucket, you choose the AWS Region where you want Amazon S3 to develop the bucket. Amazon S3 retains these details in the location subresources and offers an API so you can gain access to this information.

logging: Logging lets you monitor requests for access to the bucket. All access log records give details regarding one access request, including bucket name, requester, request action, request time, error code, and response status.

object locking: Enables the object lock feature for a bucket. You may also wish to configure a default period of retention and mode that applies to the latest objects that are uploaded to the bucket.

policy and ACL (access control list): Both buckets and the objects stored within them are private, unless you specify otherwise. ACL and bucket policies are two ways to grant permissions for an entire bucket.

replication: This option lets you automatically copy the contents of the bucket to one or more destination buckets, in the same or a different AWS Region. Replication is asynchronous.

requestPayment: By default, the AWS account that sets up a bucket also receives bills for requests made to the bucket. This setting lets the bucket creator pass on the cost of downloading data from the bucket to the account downloading the content.

tagging: This setting allows you to add tags to an S3 bucket. This can help you track and organize your costs on S3. AWS shows the tags on your charges allocation report, with costs and usage aggregated via the tags.

transfer acceleration: Transfer acceleration enables easy, secure and fast movement of files over extended distances between your S3 bucket and your client. Transfer acceleration leverages the globally distributed edge locations via Amazon CloudFront.

versioning: Versioning assists you when recovering accidental deletes and overwrites.

website: You may configure the bucket for static website hosting.
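
As a brief illustration of working with two of these subresources, the sketch below (boto3, with a placeholder bucket name and prefix) enables versioning and adds a lifecycle rule that archives and eventually expires log objects:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket-one"  # placeholder name

# Turn on versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Example lifecycle rule: move objects under logs/ to Glacier after 90 days
# and expire them after one year.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }],
    },
)
```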

Best Practices for Keeping Amazon S3 Buckets Secure

AWS S3 Buckets may not be as safe as most users believe. In many cases, AWS permissions are not correctly configured and can expose an organization’s AWS S3 buckets or some of their content.

Although misconfigured permissions are by no means a novel occurrence for many organizations, one permission in particular carries increased risk. If you allow objects to be public, this creates a pathway for cyberattackers to access, and even write to, S3 buckets they should not have permission to use. Misconfigured buckets are a major root cause behind many well-known attacks.

To protect your S3 buckets, you should apply the following best practices.

Block Public S3 Buckets at the Organization Level

Designate specific AWS accounts for public S3 use and prevent all other S3 buckets from accidentally becoming public by enabling S3 Block Public Access. Use AWS Organizations service control policies (SCPs) to ensure that the Block Public Access setting cannot be altered. S3 Block Public Access provides a layer of protection that works at the account level as well as on individual buckets, including those you create in the future.

You retain the ability to block existing public access, whether it was granted by a policy or an ACL, and to ensure that public access is not granted to newly created items. As a result, only the designated AWS accounts can have public S3 buckets, and every other AWS account is prevented from making buckets public.
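
A minimal boto3 sketch of the bucket-level setting follows; the bucket name is a placeholder, and account- or organization-wide enforcement would be handled separately through AWS Organizations and SCPs as described above.

```python
import boto3

s3 = boto3.client("s3")

# Block all four categories of public access for a single bucket.
s3.put_public_access_block(
    Bucket="example-bucket-one",  # placeholder name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```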

Implement Role-Based Access Control

Outline roles that cover the access needs of users and objects. Make sure those roles have the least access needed to carry out the job so that if a user’s account is breached, the damage is kept to a minimum.

AWS security is built on AWS Identity and Access Management (IAM) policies. A principal is an identity that can be authenticated, for example, with a password. Roles, users, applications, and federated users (from external systems) can all be principals. When an authenticated principal requests an entity, resource, service, or other asset, authorization begins.

Authorization policies determine what access the principal has to the requested resource. Access is granted based on identity-based or resource-based policies. Evaluating each applicable policy against the authenticated principal determines whether the request is permitted.
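
As one way to express least privilege in practice, the sketch below (boto3; the role name, policy name, and bucket are hypothetical) attaches an inline policy that grants read-only access to a single bucket:

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege inline policy: read-only access to one bucket.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket-one",
            "arn:aws:s3:::example-bucket-one/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="analytics-reader",          # placeholder role
    PolicyName="s3-read-only-example",
    PolicyDocument=json.dumps(read_only_policy),
)
```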

Another data security methodology is splitting or sharing data into different buckets. For instance, a multi-tenant application could require separate Amazon S3 buckets for every tenant. You can use another AWS tool, Amazon VPC, which grants your endpoints secure access to sections of your Amazon S3 buckets.

Encrypt Your Data

Even with your greatest efforts, it remains good practice to assume that information is always at risk of being exposed. Given this, you should use encryption to stop unauthorized individuals from using your information if they have managed to access it.

Make sure that your Amazon S3 buckets are encrypted in transit and at rest. If you have just a single bucket, this is likely not complex, but if buckets are created dynamically, it may be difficult to keep track of them and manage encryption appropriately.

On the server side, Amazon S3 buckets support encryption, but it has to be enabled. Once encryption is turned on, the information is encrypted at rest. Encrypting the bucket ensures that anyone who manages to access the data will still need the encryption key to read it.
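
For illustration, here is a minimal boto3 sketch that turns on default server-side encryption for a bucket. The bucket name is a placeholder; you could swap in "aws:kms" and a key ID to use a customer-managed KMS key instead.

```python
import boto3

s3 = boto3.client("s3")

# Enable default server-side encryption with Amazon-managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket="example-bucket-one",  # placeholder name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},
        }],
    },
)
```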

For transport security, HTTPS is used to ensure that information is encrypted end to end. Each newer version of Transport Layer Security (TLS) makes the protocol more secure and does away with outdated, now insecure, encryption methods.

S3-Compatible Storage On-Premises with Cloudian

Cloudian® HyperStore® is a massive-capacity object storage device that is fully compatible with Amazon S3. It allows you to easily set up an object storage solution in your on-premises data center, enjoying the benefits of cloud-based object storage at much lower cost.

HyperStore can store up to 1.5 Petabytes in a 4U Chassis device, allowing you to store up to 18 Petabytes in a single data center rack. HyperStore comes with fully redundant power and cooling, and performance features including 1.92TB SSD drives for metadata, and 10Gb Ethernet ports for fast data transfer.


HyperStore is an object storage solution you can plug in and start using with no complex deployment. It also offers advanced data protection features, supporting use cases like compliance, healthcare data storage, disaster recovery, ransomware protection and data lifecycle management.

Learn more about Cloudian® HyperStore®.


Object Storage in the Cloud: 4 Providers Compared

What Is Object Storage in the Cloud?

As your business expands, you have to manage isolated but rapidly growing pools of data from various sources, which are used for a variety of business processes and applications. Nowadays, many organizations grapple with a fragmented storage portfolio that slows down innovation and adds complexity to an organization’s applications. Object storage can help your organization break down these silos. It provides cost-effective, highly scalable storage that can retain any type of data in its original format.

Object storage is highly suitable for the cloud as it is flexible and elastic, and it can be scaled into many petabytes to support indefinite data growth. This architecture manages and stores data as objects, as opposed to block storage, which handles data as blocks within logical volumes, and file storage, which stores data in hierarchical files and folders.

Related content: Read our guides on object storage vs block storage and object storage vs file storage.

In this article:

  • 4 Cloud Object Storage Options
    • AWS Object Storage
    • Azure Object Storage
    • Google Cloud Storage
    • IBM Cloud Object Storage
    • IBM Cloud Object Storage class tiers:
  • Cloud Object Storage Pros and Cons
  • Object Storage in the Cloud with Cloudian

4 Cloud Object Storage Options

Let’s review the object storage offerings by some of the world’s leading cloud providers: Amazon Web Services, Microsoft Azure, Google Cloud, and IBM Cloud.

AWS Object Storage

AWS provides a distinct variety of storage classes for different use cases. Amazon S3 is the main object storage platform of AWS, with S3 Standard-IA providing cool storage, and Glacier providing cold storage:

  • Amazon S3 Standard—this is the storage choice for information that is often accessed, and is great for numerous use cases including dynamic websites, cloud applications, content distribution, data analytics and gaming. It delivers high throughput as well as low latency.
  • Amazon S3 Standard-Infrequent Access (Amazon S3 Standard-IA)—this is a storage alternative for data which is accessed less often, such as disaster recovery and long-term backups.
  • Amazon Glacier—this highly durable storage system is optimized for data that is not often accessed, or “cold” data, such as end-of-lifecycle data kept for compliance and regulatory backup purposes. Data is archived for long-term storage, and is immutable and encrypted.

Azure Object Storage

Microsoft offers Azure Blob Storage for object storage in the cloud. Blob storage is suited to storing any form of unstructured data, such as binary or text. This includes videos, images, documents, audio and more. Azure storage offers high-quality data integrity, flexibility and mutability.

Blob storage is employed for serving documents or images directly to a browser, for retaining files for distributed access, streaming audio and video, writing to log files, disaster recovery, storing data for restore and backup, and archiving, so it can be analyzed by an Azure-hosted or on-premises service.

Azure has several storage tiers, including:

  • Hot access tier—for data that is in active use or anticipated to be, and data staged for processing and subsequent migration to the Cool tier.
  • Cool access tier—for data that is intended to stay in the Cool tier for more than 30 days. This includes short-term backup and disaster recovery datasets, older media content that must remain immediately available when accessed, and large data sets.
  • Archive access tier—for data which will stay in the Archive tier for more than 180 days, and which can tolerate hours of retrieval latency.

Note: The Archive storage tier is not accessible at the storage account level, but only at the blob level. Azure also provides a Premium tier, which is for workloads that need consistent and fast response times.

Google Cloud Storage

Google Cloud Storage (GCS) provides unified object storage for all workloads. It has four classes covering high-performance object storage as well as backup and archival storage. All four classes provide high durability and low latency:

  • Hot (high-performance) storage—GCS provides regional and multi-regional storage for frequently accessed data.
    • Multi-regional storage—for data that is frequently accessed around the world, such as streaming video, website content serving, or mobile and gaming applications.
    • Regional storage—for data that is frequently accessed in the same region as your Google Compute Engine instances or Google Cloud Dataproc clusters, for example in data analytics.
  • Nearline (cool) storage—for data that needs to be accessed less than once a month but several times a year. Suitable for backups and long-tail multimedia content.
  • Coldline (cold) storage—for data that needs to be accessed less than once a year. Suitable for archival data and disaster recovery.

IBM Cloud Object Storage

IBM Cloud provides scalable and flexible cloud storage with policy-driven archive abilities for unstructured data. This cloud storage service is intended for data archiving, for example for the long term retention of data that is infrequently accessed, including for mobile and web applications, and for backup and analytics.

IBM has four storage-class tiers integrated with an Aspera high-speed data transfer option. This allows for the easy transfer of data to and from Cloud Object Storage, as well as query-in-place functionality.

IBM Cloud Object Storage class tiers:

  • Standard storage—for active workloads that need high performance and low latency, and data that requires frequent access multiple times a month. Example use cases include active content repositories, analytics, mobile and web content streaming, collaboration, and DevOps.
  • Vault storage—for less active workloads that need real-time, on-demand access only infrequently, up to once a month. Use cases include digital asset retention and backup.
  • Cold vault—for cold workloads, where data is mainly archived but needs real-time, on-demand access when required, for example data that is accessed a few times a year. Common use cases include long-term backup and the preservation of large data sets such as older media content and scientific data.
  • Flex storage—this class tier is used for dynamic workloads (combining hot and cold workloads) where access patterns vary. Typical use cases include cognitive workloads, cloud-native analytics, and user-generated content applications.

Cloud Object Storage Pros and Cons

The following are some of the key advantages and disadvantages of object storage in the cloud.

Cloud Object Storage Pros

The key advantages of object storage include:

  • Data is highly distributed, which makes it more resilient to hardware failures and disasters. This way, it remains available even if several nodes fail.
  • Objects are kept in a flat address space, which minimizes complexity and scalability issues.
  • Data protection is built into this architecture in the form of erasure coding or replication technology.
  • Object storage is most suitable for cloud storage and static data. Common use cases for object storage include archiving and cloud backup—the technology functions best with data that is more frequently read than written to.
  • Object storage has developed to the point where it scales at the exabyte level and represents trillions of objects. The use of VMs or commodity hardware enables nodes to be added easily, with the disk space being used more efficiently.
  • Object storage systems, via the use of object IDs (OIDs) or identifiers, can access any piece of data without knowing which physical storage device, directory, or file system it resides on. This abstraction lets object storage devices operate with storage hardware configured in a distributed node architecture. This way, processing power can scale together with data storage capacity.
  • I/O requests don’t need to pass via a central controller, allowing for a true global storage system for large amounts of data overseen by objects, physically kept anywhere, and retrieved through the internet or a WAN.

Cloud Object Storage Cons

The key disadvantages of object storage include:

  • Object storage systems are not well suited to real-time systems, including transactional databases. An environment or application with a high transaction rate is a poor use case for object storage.
  • Object storage doesn’t guarantee that read requests will produce the most up-to-date version of the data.
  • This technology isn’t always appropriate for applications that have high performance demands.
  • Cloud-based storage often ends up being more expensive because you need to pay for storage on an ongoing basis. With on-premises equipment, you pay once and the storage is yours.

Bring Object Storage On-Premises with Cloudian

Cloudian® HyperStore® is a massive-capacity object storage device that is fully compatible with Amazon S3. It allows you to easily set up an object storage solution in your on-premises data center, enjoying the benefits of cloud-based object storage at much lower cost.

HyperStore can store up to 1.5 Petabytes in a 4U Chassis device, allowing you to store up to 18 Petabytes in a single data center rack. HyperStore comes with fully redundant power and cooling, and performance features including 1.92TB SSD drives for metadata, and 10Gb Ethernet ports for fast data transfer.

HyperStore is an object storage solution you can plug in and start using with no complex deployment. It also offers advanced data protection features, supporting use cases like compliance, healthcare data storage, disaster recovery, ransomware protection and data lifecycle management.

Learn more about Cloudian® HyperStore®.


What Is Object Storage: Definition, How It Works, and Use Cases

What Is Object Storage?

Object storage is a data storage architecture that stores and manages unstructured data in units called objects. Objects can be any size or format, and can include data, metadata, and a unique identifier.

Unlike other storage systems, object storage is not organized into folders or a hierarchical path. Instead, objects are stored in a flat data environment and can be accessed through multiple paths.

Objects can store photos, videos, emails, audio files, network logs, or any other type of structured or unstructured data. All of the major public cloud services, including Amazon, Google and Microsoft, employ object storage as their primary storage.

This is part of an extensive series of guides about data security.

In this article:

  • Object Storage Definition
  • Object Storage Architecture: How Does It Work?
  • Object Storage Benefits
  • Object Storage Use Cases
  • How to Choose an Object-Based Storage Solution

Object Storage Definition

Object storage is a technology that manages data as objects. All data is stored in one large repository which may be distributed across multiple physical storage devices, instead of being divided into files or folders.

It is easier to understand object-based storage when you compare it to more traditional forms of storage – file and block storage.


File Storage

File storage stores data in folders. This method, also known as hierarchical storage, simulates how paper documents are stored. When data needs to be accessed, a computer system must look for it using its path in the folder structure.

File storage uses TCP/IP as its transport, and devices typically use the NFS protocol in Linux and SMB in Windows.

Block Storage

Block storage splits a file into separate data blocks, and stores each of these blocks as a separate data unit. Each block has an address, and so the storage system can find data without needing a path to a folder. This also allows data to be split into smaller pieces and stored in a distributed manner. Whenever a file is accessed, the storage system software assembles the file from the required blocks.

Block storage uses FC or iSCSI for transport, and devices operate as direct attached storage or via a storage area network (SAN).

Object Storage

In object storage systems, data blocks that make up a file or “object”, together with its metadata, are all kept together. Extra metadata is added to each object, which makes it possible to access data with no hierarchy. All objects are placed in a unified address space. In order to find an object, users provide a unique ID.

Object-based storage uses TCP/IP as its transport, and devices communicate using HTTP and REST APIs.

Metadata is an important part of object storage technology. Metadata is determined by the user, and allows flexible analysis and retrieval of the data in a storage pool, based on its function and characteristics.

The main advantage of object storage is that you can group devices into large storage pools, and distribute those pools across multiple locations. This not only allows unlimited scale, but also improves resilience and high availability of the data.

Object Storage Architecture: How Does It Work?

Anatomy of an Object

Object storage is fundamentally different from traditional file and block storage in the way it handles data. In an object storage system, each piece of data is stored as an object, which can include data, metadata, and a unique identifier, known as an object ID. This ID allows the system to locate and retrieve the object without relying on hierarchical file structures or block mappings, enabling faster and more efficient data access.

Objects can be any size or format, and can store photos, videos, emails, audio files, network logs, or any other type of structured or unstructured data.

Data Storage Layer: Flat Data Environment

The data storage layer is where the actual data objects are stored. Object storage is not organized into folders or a hierarchical path; objects are kept in a flat data environment and can be accessed through multiple paths.

In an object storage system, data is typically distributed across multiple storage nodes to ensure high performance, durability, and redundancy. Each storage node typically contains a combination of hard disk drives (HDDs) and solid-state drives (SSDs) to provide the optimal balance between capacity, performance, and cost. Data objects are automatically replicated across multiple nodes, ensuring that data remains available and protected even in the event of hardware failures or other disruptions.

Metadata Index

The metadata index is a critical component of object storage architecture, as it maintains a record of each object’s unique identifier, along with other relevant metadata, such as access controls, creation date, and size. This information is stored separately from the actual data, allowing the system to quickly and efficiently locate and retrieve objects based on their metadata attributes. The metadata index is designed to be highly scalable, enabling it to support millions or even billions of objects within a single object storage system.

API Layer

The API layer is responsible for providing access to the object storage system, allowing users and applications to store, retrieve, and manage data objects. Most object storage systems support a variety of standardized APIs, such as the Simple Storage Service (S3) API from Amazon Web Services (AWS), the OpenStack Swift API, and the Cloud Data Management Interface (CDMI). These APIs enable developers to easily integrate object storage into their applications, regardless of the underlying storage technology or vendor.

5 Expert Tips to help you better optimize your object storage

Jon Toor, CMO

With over 20 years of storage industry experience in a variety of companies including Xsigo Systems and OnStor, and with an MBA in Mechanical Engineering, Jon Toor is an expert and innovator in the ever growing storage space.

Leverage lifecycle policies to manage storage costs: Implement object lifecycle management to automatically transition objects between storage classes based on their age or access patterns. This can help you reduce storage costs by moving infrequently accessed data to colder storage tiers.

Optimize metadata for faster search and analytics: Invest time in designing your object metadata schema. Adding meaningful, searchable metadata can dramatically enhance retrieval speed and enable powerful analytics without needing to process the entire object.

Use erasure coding for efficient data protection: While replication is common, erasure coding provides more efficient storage utilization, especially in environments with large datasets. It offers high durability while using less storage space than simple replication.

Enable versioning for data integrity and compliance: Activate object versioning to protect against accidental overwrites or deletions. This is critical for compliance in industries where data integrity is required over long retention periods.

Implement policy-driven data tiering: Automate data movement between hot, warm, and cold storage using policy-based rules. This approach allows you to maximize cost efficiency by aligning storage costs with data value and access frequency.

Object Storage Benefits

Exabyte Scalable

Unlike file or block storage, object storage services enable scalability beyond exabytes. While file storage can hold many millions of files, you will eventually hit a ceiling. With unstructured data growing at 50+% per year, more and more users are hitting those limits, or expect to in the future.

Scale Out Architecture

Object storage makes it easy to start small and grow. In enterprise storage, a simple scaling model is golden. And scale-out storage is about as simple as it gets: you simply add another node to the cluster and that capacity gets folded into the available pool.

HyperStore is an S3-compatible storage system. HyperFile is a connector that allows files to be stored on HyperStore.

Customizable Metadata

While file systems have metadata, the information is limited and basic (date/time created, date/time updated, owner, etc.). Object storage allows users to customize and add as many metadata tags as they need to easily locate the object later. For example, an X-ray could have information about the patient’s age and height, the type of injury, etc.
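
As an illustration of custom metadata through the S3 API (the bucket, key, and metadata values below are hypothetical), boto3 lets you attach user-defined tags when an object is uploaded and read them back later:

```python
import boto3

s3 = boto3.client("s3")

# Attach user-defined metadata to an object; S3 stores these values as
# x-amz-meta-* headers alongside the object.
s3.put_object(
    Bucket="example-bucket-one",
    Key="scans/xray-0001.dcm",
    Body=b"<image bytes>",
    Metadata={
        "patient-age": "42",
        "injury-type": "fracture",
        "body-part": "wrist",
    },
)

# The metadata comes back on a HEAD request, without downloading the object.
head = s3.head_object(Bucket="example-bucket-one", Key="scans/xray-0001.dcm")
print(head["Metadata"])
```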

High Sequential Throughput Performance

Early object storage systems did not prioritize performance, but that’s now changed. Now, object stores can provide high sequential throughput performance, which makes them great for streaming large files. Also, object storage services help eliminate networking limitations. Files can be streamed in parallel over multiple pipes, boosting usable bandwidth.

Flexible Data Protection Options

To safeguard against data loss, most traditional storage options utilize fixed RAID groups (groups of hard drives joined together), sometimes in combination with data replication. The problem is, these solutions generally lead to one-size-fits-all data protection. You cannot vary the protection level to suit different data types.

Object storage solutions employ a technique called erasure coding that is similar to old-fashioned RAID in some ways, but is far more flexible. Data is striped across multiple drives or nodes as needed to achieve the protection required for that data type. Between erasure coding and configurable replication, data protection is both more robust and more efficient.
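
For example, in a 4+2 erasure coding configuration, an object is split into 4 data fragments plus 2 parity fragments spread across 6 drives or nodes. Any 4 of the 6 fragments can rebuild the object, so the system tolerates two simultaneous failures at a storage overhead of 6/4 = 1.5x, versus 3x for keeping three full replicas.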

Support for the S3 API

Back when object storage solutions were launched, the interfaces were proprietary, and few application developers wrote to them. Then Amazon created the Simple Storage Service, or “S3”, along with a new interface called the “S3 API”. The S3 API has since become a de facto standard for object storage data transfer.

The existence of a de facto standard changed the game. Now, S3-compatible application developers have a stable and growing market for their applications. And service providers and S3-compatible storage vendors such as Cloudian have a growing user set deploying those applications. The combination sets the stage for rapid market growth.

Lower Total Cost of Ownership (TCO)

Cost is always a factor in storage. And object storage services offer the most compelling story, both in hardware/software costs and in management expenses. By allowing you to start small and scale, this technology minimizes waste, both in the form of extra headcount and unused space. Additionally, object storage systems are inherently easy to manage. With limitless capacity within a single namespace, configurable data protection, geo replication, and policy-based tiering to the cloud, it’s a powerful tool for large-scale data management.

To learn more about Cloudian’s fully native S3-compatible storage in your data center, and how it can cut down your TCO, check out our free trial. Or visit cloudian.com for more information.

Object Storage Use Cases

There are numerous use cases for object storage, thanks to its scalability, flexibility, and ease of use. Some of the most common use cases include:

Backup and archiving
Object storage is an excellent choice for storing backup and archive data, thanks to its durability, scalability, and cost-effectiveness. The ability to store custom metadata with each object allows organizations to easily manage retention policies and ensure compliance with relevant regulations.

Big data analytics
The horizontal scalability and programmability of object storage make it a natural choice for storing and processing large volumes of unstructured data in big data analytics platforms. Custom metadata schemes can be used to enrich the data and enable more advanced analytics capabilities.

Media storage and delivery
Object storage is a popular choice for storing and delivering media files, such as images, video, and audio. Its scalability and performance make it well-suited to handling large volumes of media files, while its support for various data formats and access methods enables seamless integration with content delivery networks and other media delivery solutions.

Internet of Things (IoT)
As the number of connected IoT devices continues to grow, so too does the amount of data they generate. Object storage is well-suited to handle the storage and management of this data, thanks to its scalability, flexibility, and support for unstructured data formats.

How to Choose an Object-Based Storage Solution

When choosing an object storage solution, there are several factors to consider. Some of the most important factors include:

  • Scalability: One of the primary strengths of object storage is its ability to scale horizontally, so it’s essential to choose a platform that can grow with your organization’s data needs. Look for a solution that can easily accommodate massive amounts of data without sacrificing performance or manageability.
  • Data durability and protection: Ensuring the integrity and availability of your data is critical, so look for an object storage platform that offers robust data protection features, such as erasure coding, replication, or versioning. Additionally, consider the platform’s durability guarantees – how likely is it that your data will be lost or corrupted?
  • Cost: Cost is always a consideration when choosing a storage solution, and object storage is no exception. Be sure to evaluate the total cost of ownership (TCO) of the platform, including factors such as hardware, software, maintenance, and support costs. Additionally, if you’re considering a cloud-based solution, be sure to factor in the costs of data transfer and storage.
  • Performance: While object storage is not typically designed for high-performance, low-latency workloads, it’s still important to choose a platform that can deliver acceptable performance for your organization’s specific use cases. Consider factors such as throughput, latency, and data transfer speed when evaluating performance.
  • Integration and compatibility: The ability to integrate the object storage platform with your existing infrastructure and applications is essential. Look for a solution that supports industry-standard APIs and protocols, as well as compatibility with your organization’s preferred development languages and tools.

See Additional Guides on Key Data Security Topics

Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of data security.

Data Lake

Authored by Cloudian

  • [Guide] What Is a Data Lake? Architecture and Deployment
  • [Guide] Data Lakehouse: Is It the Right Choice for You?
  • [Whitepaper] Object Storage: Customer Insights and Best Practices
  • [Product] HyperStore Object Storage 

Veeam

Authored by Cloudian

  • [Guide] Veeam: Solutions, Use Cases, and Implementation Steps
  • [Guide] Veeam Backup: 5 Key Solutions, Features and Capabilities
  • [Whitepaper] TCO Report: Cloudian HyperStore File 
  • [Product] HyperStore Object Storage 

PCI Compliance

Authored by Exabeam

  • [Guide] What Is PCI Compliance? The 12 Requirements 
  • [Guide] PCI Security: 7 Steps to Becoming PCI Compliant 
  • [Blog] Cybersecurity Threats: Everything you Need to Know

