S3 Compatible Storage Solutions Compared

S3 Compatible Storage, On-Prem

Today’s emerging on-prem enterprise storage medium is S3 compatible storage. Initially used only in the cloud, S3 storage is now being extended to on-prem and private cloud deployments.

The term “S3 compatible” means that the storage employs the S3 API as its “language.” Applications that speak the S3 API should be able to plug and play with S3 compatible storage.

A growing number of applications now support this storage type, thus benefitting from its unique attributes:

  • Scale: Designed to grow limitlessly within a single namespace
  • Geo-distribution: A single storage system can span multiple sites
  • Cost: Purpose-built to run on industry-standard servers, thus benefitting from the volume and efficiencies of that industry
  • Reliable data transport: The only storage type invented in the age of the Internet, S3-compatible storage is built to manage and move massive data volumes over WANs

Cloudian specializes in S3-compatible storage, but other examples of applications and devices that now employ S3 include Rubrik, Veeam, Commvault, Splunk, Pure Storage, Adobe, VERITAS, Hadoop, NetApp, EMC, Komprise, and more.

This is part of an extensive series of articles about S3 Storage.

Clarifying the Terms

But what is S3-compatible storage? This storage type goes by multiple names and can also be called:

Object storage: The underlying technology for S3 compatible storage is object storage. Over the years, multiple APIs have been used to access object storage, but the S3 API is now the most common.

Cloud storage: Most large-scale cloud storage today is object storage, and most of it employs the S3 API. There are multiple ways of referring to essentially the same thing: S3-compatible storage.

Benefits of S3 Compatible Storage On-Prem

There are 5 key reasons to deploy S3 compatible storage in your data center:

  1. Scale: S3-compatible solutions are designed to scale in a single namespace, and without disruption, to an exabyte. Grow your storage without adding workload.
  2. 70% less cost than public cloud: With industry-standard hardware, these solutions deliver the greatest value: lower cost per GB and higher density. Also, no ingress/egress fees.
  3. Performance: Hardware is in your data center for low latency and high bandwidth.
  4. Control: Data is behind your firewall, so you consistently apply security and control access.
  5. Cloud compatibility: S3 is compatible with cloud storage, so you can employ cloud when you need it, without disruption. Capitalize on the growing ecosystem of S3 compatible applications. Seamlessly move data and applications from on-prem to cloud.

The S3 API

S3 compatible storage is built on the Amazon S3 Application Programming Interface, better known as the S3 API, the most common way in which data is stored, managed, and retrieved by object stores. Originally created for the Amazon S3 Simple Storage Service (read about the API here), the widely adopted S3 API is now the de facto standard for object storage, employed by vendors and cloud providers industry-wide.
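Because the API is the same everywhere, pointing an application at an on-prem system instead of AWS is usually just a matter of changing the endpoint. A minimal sketch of the addressing difference (the endpoint hostnames and bucket names are illustrative, not real services):

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = True) -> str:
    """Build the URL for an S3 object.

    Path-style addressing (endpoint/bucket/key) is common for on-prem
    S3-compatible systems; virtual-hosted style (bucket.endpoint/key)
    is the AWS default.
    """
    if path_style:
        return f"https://{endpoint}/{bucket}/{key}"
    return f"https://{bucket}.{endpoint}/{key}"

# The same application logic can target AWS or an on-prem endpoint:
aws_url = object_url("s3.amazonaws.com", "backups", "db/2024-01-01.bak", path_style=False)
onprem_url = object_url("s3.example.internal", "backups", "db/2024-01-01.bak")
```

With an SDK such as boto3, the equivalent change is passing a different `endpoint_url` when the client is created; the calls the application makes stay the same.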

Not All S3 Compatible Storage APIs Are Equal

Compared with established file protocols such as NFS, the S3 API is relatively new and rapidly evolving. Among object storage vendors, S3 API compliance varies from below 50% to over 90%. This difference becomes material when an application (or an updated version of that app) fails due to S3 API incompatibility.

Cloudian is the only object storage solution to exclusively support the S3 API. Launched in 2011, Cloudian’s many years of S3 API development translate to the industry’s highest level of compliance.

Employing the S3 API makes an object storage solution flexible and powerful for three reasons:

1) Standardization in S3 Compatible Storage

With Cloudian, any object written using the S3 API can be used by other S3-enabled applications and object storage solutions; the existing code works out of the box.

2) Maturity 

The S3 API provides a wide variety of features that meet virtually every need for an object store. End users planning to deploy object stores can access the plentiful resources of the S3 community — both individuals and companies.

3) Rich Feature Set

The S3 API is the only storage “language” created in the era of the internet. The other common storage protocols (SMB and NFS) were created prior to the internet’s meteoric growth and therefore did not factor in the needs of this infrastructure. As a result, only the S3 API includes features such as multipart upload that make it easy to reliably transfer large files over unreliable WAN links.
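Multipart upload is a good illustration: a large object is split into parts that are transferred (and, on failure, retried) independently, so one dropped connection does not restart the whole transfer. A sketch of the part-planning arithmetic, based on the published S3 limits of a 5 MiB minimum part size and 10,000 parts per object:

```python
import math

MIN_PART = 5 * 1024 * 1024   # S3 minimum part size (5 MiB), except the final part
MAX_PARTS = 10_000           # S3 maximum number of parts per object

def plan_parts(object_size: int, part_size: int = 8 * 1024 * 1024):
    """Return (part_size, part_count) for a multipart upload of object_size bytes."""
    part_size = max(part_size, MIN_PART)
    # Grow the part size if the object would otherwise exceed the part limit.
    if math.ceil(object_size / part_size) > MAX_PARTS:
        part_size = math.ceil(object_size / MAX_PARTS)
    return part_size, math.ceil(object_size / part_size)

# A 1 TiB object at 8 MiB per part would need 131,072 parts,
# so the planner grows the part size to stay within 10,000 parts.
size, count = plan_parts(1024**4)
```

Each part can be re-sent on its own if a WAN link drops, which is why this feature matters for moving massive data volumes over the internet.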

 

The Cloudian Difference

Among the S3 compatible storage vendors, only Cloudian HyperStore was built from the start on the S3 API.

Cloudian S3 compatible storage API is designed into the Cloudian storage layer

 

Translation Layers Introduce Potential Compatibility Challenges

Competitive solutions employ a translation layer (or some sort of “access layer” or software gateway), which introduces the risk of compatibility challenges. Cloudian has no translation layer, hence we refer to it as “S3 Native.”

Translation layer leads to incompatibility

Cloud Storage in the Data Center

The combination of object storage and the de facto language standard now creates the option for cloud-connected storage in the data center. For the cloud, AWS has set the standard with the S3 Storage Service. Now data center managers can capitalize on that identical set of capabilities in their own data center with Cloudian S3 compatible storage.

See the S3 API at Work

The City of Montebello uses the S3 API as a mechanism for streaming live video from buses to a central monitoring facility, where it is recorded and stored with metadata to assist with search.

MSPs Look to Cloudian to Help Grow S3-Compatible Object Storage Business

In cloud storage, there are hundreds of managed service providers (MSPs) who offer enterprise-grade, S3-API storage services. Increasingly, these MSPs look to Cloudian as the supplier for their underlying object storage infrastructure.

Local and regional MSPs offer their customers great value in several ways. They act as a trusted guide for large companies, and they provide rich, turnkey service offerings for smaller ones. Furthermore, their regional focus builds deep connections with local businesses as well as expertise in compliance and regulatory issues, benefits that remove technical barriers, reduce risk, and accelerate cloud adoption.

A wide range of high-value services fall within their offerings:  storage, archive, collaboration, and data protection, among others. These services require cost competitive storage, with scalability that is both limitless and painless. They also require S3 interoperability that ensures trouble-free operation and high customer satisfaction. That’s where Cloudian HyperStore comes in.

Over the past year, we’ve been pleased to help dozens of global MSPs deploy and manage successful cloud storage offerings, building profitable businesses on Cloudian’s infinitely scalable platform.

Why MSPs Choose Cloudian

Cloudian object storage uniquely addresses this market with MSP-oriented features, including:

  • 100% native S3 API: Ensures trouble-free operation with best-in-class interoperability
  • Quality of Service (QoS): Eliminates the “noisy neighbor” problem with bandwidth controls
  • Multi-tenancy: Unique namespace for each client
  • Branded UI: Provides customers with a management portal that highlights your brand
  • Configurable data protection: Lets customers configure data protection settings to meet their specific needs, rather than one-size-fits-all

Additionally, our object storage platform contains a full suite of other features that bring key advantages to MSPs:

  • Start small and grow: HyperStore allows MSPs to scale with demand. Start with a few dozen terabytes and scale up to hundreds of petabytes and beyond — with zero disruption
  • Appliance or Software: Deploy Cloudian HyperStore as a pre-configured appliance — with full support — or as software on your server.
  • Geo-distribution: You can deploy Cloudian HyperStore in a single data center, or distribute across multiple data centers with data replication to ensure uninterrupted service, even in the event of a data center outage
  • File services: Deploy NFS/SMB file services for additional value-add

 

A Growing List of MSP Customers

A sample of the MSPs now offering Cloudian-based services includes:

Learn more about the value these MSPs saw by checking out our solutions.

 

Object Storage Bucket-Level Auto-Tiering with Cloudian

As discussed in my previous blog post, ‘An Introduction to Data Tiering’, there is huge value in using different storage tiers within a data storage architecture to ensure that your different data sets are stored on the appropriate technology. Now I’d like to explain how the Cloudian HyperStore system supports object storage ‘auto-tiering’, whereby objects can be automatically moved from local HyperStore storage to a destination storage system on a predefined schedule based upon data lifecycle policies.

Cloudian HyperStore can be integrated with any of the following destination cloud storage platforms as a target for tiered data:

  • Amazon S3
  • Amazon Glacier
  • Google Cloud Platform
  • Any Cloud service offering S3 API connectivity
  • A remotely located Cloudian HyperStore cluster

Granular Control with Cloudian HyperStore

For any data storage system, granularity of control and management is extremely important – data sets often have varying management requirements with the need to apply different Service Level Agreements (SLAs) as appropriate to the value of the data to an organization.

Cloudian HyperStore provides the ability to manage data at the bucket level, providing flexibility at a granular level to allow SLA and management control (note: a “bucket” is an S3 data container, similar to a LUN in block storage or a file system in NAS systems). HyperStore provides the following as control parameters at the bucket level:

  • Data protection – Select from replication or erasure coding of data, plus single or multi-site data distribution
  • Consistency level – Control of replication techniques (synchronous vs asynchronous)
  • Access permissions – User and group control access to data
  • Disaster recovery – Data replication to public cloud
  • Encryption – Data at rest protection for security compliance
  • Compression – Reduction of the effective raw storage used to store data objects
  • Data size threshold – Variable storage location of data based upon the data object size
  • Lifecycle policies – Data management rules for tiering and data expiration

Cloudian HyperStore manages data tiering via lifecycle policies as can be seen in the image below:

Auto-tiering is configurable on a per-bucket basis, with each bucket allowed different lifecycle policies based upon rules. Examples of these include:

  1. Which data objects to apply the lifecycle rule to. This can include:
  • All objects in the bucket
  • Objects for which the name starts with a specific prefix (such as prefix “Meetings/2015/”)
  2. The tiering schedule, which can be specified using one of three methods:
  • Move objects X number of days after they’re created
  • Move objects if they go X number of days without being accessed
  • Move objects on a fixed date — such as December 31, 2016
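In S3 API terms, rules like these form a bucket lifecycle configuration. A hedged sketch of what such a policy looks like as the body of a lifecycle-configuration request (the rule ID, prefix, and storage class are illustrative):

```python
def tiering_rule(rule_id: str, prefix: str, days: int, storage_class: str = "GLACIER") -> dict:
    """Build one lifecycle rule that transitions matching objects after `days` days."""
    return {
        "ID": rule_id,
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": storage_class}],
    }

lifecycle = {
    "Rules": [
        # Tier 2015 meeting recordings 30 days after creation
        tiering_rule("archive-meetings", "Meetings/2015/", 30),
    ]
}
```

A creation-date schedule uses `Days`, as above; a fixed calendar date corresponds to a `Date` field in the transition instead.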

When a data object becomes a candidate for tiering, a small stub object is retained on the HyperStore cluster. The stub acts as a pointer to the actual data object, so the data object still appears as if it’s stored in the local cluster. To the end user, there is no change to the action of accessing data, but the object does display a special icon denoting the fact that the data object has been moved.

For auto-tiering to a Cloud provider such as Amazon or Google, an account is required along with associated account access credentials.

Accessing Data After Auto-Tiering

To access objects after they’ve been auto-tiered to public cloud services, the objects can be accessed either directly through a public cloud platform (using the applicable account and credentials) or via the local HyperStore system. There are three options for retrieving tiered data:

  1. Restoring objects – When a user accesses a data file, they are directed to the local stub file held on HyperStore, which then redirects the user request to the actual location of the data object (the tiered target platform).

A copy of the data object is restored back to a local HyperStore bucket from the tiered storage and the user request will be performed on the data object once copied back. A time limit can be set for how long to retain the retrieved object locally, before returning to the secondary tier.

This is considered the best option to use when accessing data relatively frequently and you want to avoid any performance impact incurred by traversing the internet and any access costs applied by service providers for data access/retrieval. Storage capacity must be managed on the local HyperStore cluster to ensure that there is sufficient “cache” for object retrievals.

  2. Streaming objects – Streams data directly to the client without restoring the data to the local HyperStore cluster first. When the file is closed, any modifications are made to the object in situ on the tiered location. Any metadata modifications will be updated both in the local HyperStore database and on the tiered platform.

This is considered the best option to use when accessing data relatively infrequently and concern about the storage capacity of the local HyperStore cluster is an issue, but performance will be lower as the data requests are traversing the internet and access costs may be applied by the service provider every time this file is read.

  3. Direct access – Objects auto-tiered to public cloud services can be accessed directly by another application or via your standard public cloud interface, such as the AWS Management Console. This method fully bypasses the HyperStore cluster. Because objects are written to the cloud using the standard S3 API, and include a copy of the object’s metadata, they can be referenced directly.

Storing objects in this openly accessible manner — with co-located rich metadata — is useful in several instances:

  1. A disaster recovery scenario where the HyperStore cluster is not available
  2. Facilitating data migration to another platform
  3. Enabling access from a separate cloud-based application, such as content distribution
  4. Providing open access to data, without reliance on a separate database to provide indexing

HyperStore provides great flexibility for leveraging hybrid cloud deployments where you get to set the policy on which data is stored in a public or private cloud. Learn more about HyperStore here.

 

YOU MAY ALSO BE INTERESTED IN

Object Storage vs. Block Storage: What’s the Difference?

An Introduction to Data Tiering

Not all data is equal, due to factors such as frequency of access, security needs, and cost considerations; data storage architectures therefore need to provide different storage tiers to address these varying requirements. Storage tiers may differ in disk drive types and RAID configurations, or may even be completely different storage sub-systems, each offering a different I/O profile and cost impact.

Data tiering allows the movement of data between different storage tiers, which allows an organization to ensure that the appropriate data resides on the appropriate storage technology. In modern storage architectures, this data movement is invisible to the end-user application and is typically controlled and automated by storage policies. Typical data tiers may include:

  1. Flash storage – High value, high-performance requirements, usually smaller data sets; cost is less important compared to the performance Service Level Agreement (SLA) required
  2. Traditional SAN/NAS Storage arrays – Medium value, medium performance, medium cost sensitivity
  3. Object Storage – Less frequently accessed data with larger data sets. Cost is an important consideration
  4. Public Cloud –  Long-term archival for data that is never accessed

Typically, structured data sets belonging to applications/data sources such as OLTP databases, CRM, email systems and virtual machines will be stored on data tiers 1 and 2 as above. Unstructured data is more commonly moving to tiers 3 and 4 as these are typically much larger data sets where performance is not as critical and cost becomes a more significant factor in management and purchasing decisions.

Some Shortcomings of Data Tiering to Public Cloud

Public cloud services have become an attractive data tiering solution, especially for unstructured data, but there are considerations around public cloud use:

  1. Performance – Public network access will typically be a bottleneck when reading and writing data to public cloud platforms, along with data retrieval times (based on the SLA provided by the cloud service). Backup and recovery windows are still incredibly important, so it is worth considering holding the most relevant backup sets onsite and archiving only older backup data to the cloud.
  2. Security – Certain data sets/industries have regulations stipulating that data must not be stored in the cloud. Being able to control what data is sent to the cloud is of major importance.
  3. Access patterns – Data that is re-read frequently may incur additional network bandwidth costs imposed by the public cloud service provider. Understanding your use of data is vital to control the costs associated with data downloads.
  4. Cost – As well as bandwidth costs associated with reading data, storing large quantities of data in the cloud may not make the most economical sense, especially when compared to the economics of on-premises cloud storage. Evaluate both before committing.

Using Hybrid Cloud for a Balanced Data Tier Strategy

For unstructured data, a hybrid approach to data management is key; an automation engine, data classification, and granular control of data are necessary requirements to really deliver on this premise.

With a hybrid cloud approach, you can push any data to the public cloud while also affording you the control that comes with on-premises storage. For any data storage system, granularity of control and management is extremely important as different data sets have different management requirements with the need to apply different SLAs as appropriate to the value of the data to an organization.

Cloudian HyperStore is a solution that gives you that flexibility for easily moving between data tiers 3 and 4 listed earlier in this post. Not only do you get the control and security from your data center, you can integrate HyperStore with many different destination cloud storage platforms, including Amazon S3/Glacier, Google Cloud Platform, and any other cloud service offering S3 API connectivity.

Learn more about our solutions today.

Learn more about NAS backup here.

 

New HyperStore 4000: Highest density storage

Rack space and budget. Most data centers are short on both. Yet somehow, you’re expected to accommodate a 50% increase in unstructured data volume annually. That’s a problem.

The new solution is the HyperStore 4000. With 700TB in just 4U of rack height, it’s nearly 2X the density of our earlier models. And it delivers storage in your data center at prices on par with the public cloud: about ½ cent per GB per month. Space savings and cost savings in one.

The HyperStore 4000, by the numbers

The HyperStore 4000 appliance was built to handle massive amounts of data. It’s housed in a 7-inch-high (4U) enclosure with two nodes and a max capacity of 700TB. Drive sizes range from 4TB to 10TB, and the appliance has 256GB of memory (128GB per node).

Better yet, the appliance reduces storage costs by 40% (versus other Cloudian solutions) – data management now costs half a cent per GB per month. Even with the reduced cost, there is no drop in data availability – with a three-appliance cluster, you’ll still see 99.999999% data durability.
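Those headline numbers are easy to sanity-check with back-of-the-envelope arithmetic (the per-GB price is the approximate figure quoted above, used here for illustration only):

```python
capacity_tb = 700
price_per_gb_month = 0.005           # roughly half a cent per GB per month (illustrative)

capacity_gb = capacity_tb * 1000     # decimal TB -> GB
monthly_cost = capacity_gb * price_per_gb_month   # 3500.0 dollars per month

density_tb_per_u = capacity_tb / 4   # 700 TB in 4U -> 175 TB per rack unit
```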

For the full list of specs, check out our datasheet.

Save time, your scarcest commodity

With most storage systems, as your needs grow, your management headaches grow with them. But the HyperStore 4000 grows painlessly. You just add nodes. No disruption and no data migration.

We specifically designed the HyperStore 4000 appliance to help customers in industries with huge data needs such as life sciences, healthcare, and entertainment. These are the industries where data growth is exploding every year and where modern data centers feel the most burden, as they need high-density storage with peak performance for data protection, video surveillance, research, archival, and more. Now you can meet these growing needs without growing pains.

Finally, the HyperStore 4000 has a 100% native S3 API, and has the industry’s highest level of S3 API compatibility. In fact, we guarantee it to work with your S3-enabled applications.

Be sure to also take a look at Cloudian’s other solutions to see which one is right for you.

Cloudian Customer Receives Commvault Innovation Award

Cloudian Customer Receives Commvault Innovation Award for Data Protection with Object Storage

A Cloudian customer, Schuberg Philis, has been recognized by Commvault for their innovation in a data protection deployment with Cloudian object storage. Several aspects of this award-winning solution illustrate advancements that make backup a very exciting topic right now:

  • Object storage as a target: On-premises S3-compatible storage is the backup target in this solution
  • Backup as a service: 3800 clients employ this environment
  • Local and remote backup: Clients being protected are both local (within the Schuberg facility) and remote

Data protection is alive with innovation, and this illustrates why. Data center managers now have more options than ever to reduce headaches, cut costs, and increase service levels.

Object storage helps by providing a seamlessly scalable backup target that a) works with most backup solutions, including Commvault, b) delivers disk performance at costs approaching tape, and c) includes a broad range of capabilities including compression, encryption, and deduplication.

Backup as a service is now more practical than ever, thanks to the S3 protocol that enhances data delivery over network connections.

Schuberg Philis brought these innovations together to offer Data Management as a Service (DMS). This is a multi-tenant data protection solution that’s based on Commvault software and Cloudian storage. It runs within Schuberg Philis’ Mission Critical Cloud Infrastructure.

As a centralized backup and restore platform, DMS includes a wide swath of features such as object storage, SQL AlwaysOn, clustering, and encryption. These features make it easier for customers to manage data protection options without sacrificing data integrity. Commvault took notice and awarded Schuberg Philis a global Service Provider Innovation Award.

We’re very proud that we could be a part of this great solution!

To learn more about the economics of object storage, read this Object Storage Buyer’s Guide. Learn how you too can save a bundle, and beat your SLAs, all with the backup software you already have.

Better Backup With the Software You Already Have

You know the challenges of the backup process. Veritas and Commvault are good products, but backup is still a chore. Your three choices for a backup target all have challenges: Tape is troublesome, disk is expensive, and backup to the cloud is slow.

How to save cost, reduce stress, and keep using the software you already know

The New Backup Target: Hybrid Cloud

As an IT manager, you pick the best solution you can afford, but you’re often forced to make compromises along the way. Too often, the result is busted backup windows and unmet RTO and RPO SLAs, not to mention hours of wasted time and accumulated stress.

Now there’s a fourth backup target option: hybrid cloud (see the Backup Solutions Note).

Hybrid cloud as a target gives you a faster, more reliable, lower cost process — free of capacity constraints. It works right now with the software you already know. And you can get started at zero upfront cost.

How the Hybrid Cloud Helps

Hybrid cloud integrates an on-premises disk-based target with a cloud-based target. Both the on-prem storage and cloud storage use the same interface and are managed as a single storage pool.

Their respective functions are:

  • On-prem target: Fast disk-backup. Provides predictable backup time; ensures immediate access for RTO/RPO SLAs
  • Public cloud target: DR repository; low-cost and offsite, it provides the ideal long-term archive, plus overflow capacity for limitless scalability

Works with Existing Backup Software

Backup procedures are proven through years of development. And you know well the software you have. The hybrid cloud approach leverages all of that investment and learning by preserving your existing processes.

To the backup software, the hybrid cloud appears exactly as cloud storage. (Connectors to Amazon S3 and other services are now available with most popular backup software.)

With hybrid cloud, that connector is simply directed at the on-prem storage. The on-prem storage then connects to the cloud. The two are managed as a single, limitlessly scalable storage pool.

The on-prem S3-compatible storage is then directed at the S3 public cloud for data tiering purposes. The most recent backups — i.e., the ones you’re most likely to use — are kept on-prem. The older copies are migrated to the cloud.

The combined solution becomes a simple, drop-in replacement for existing backup target technologies. The result: on-site storage for fast access, and cloud storage for low-cost archive and DR.
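The tiering policy described above can be sketched as a simple age-based decision; the 30-day threshold and dates below are illustrative assumptions, not product defaults:

```python
from datetime import date

def backup_tier(created: date, today: date, keep_local_days: int = 30) -> str:
    """Decide where a backup copy should live under a simple age-based policy.

    Recent backups (the ones most likely to be restored) stay on the
    on-prem tier; older copies migrate to the public cloud for low-cost
    archive and DR.
    """
    age_days = (today - created).days
    return "on-prem" if age_days <= keep_local_days else "cloud"

today = date(2017, 6, 1)
recent = backup_tier(date(2017, 5, 20), today)   # 12 days old -> "on-prem"
old = backup_tier(date(2017, 1, 5), today)       # months old  -> "cloud"
```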

In summary, hybrid combines a petabyte-scalable, high-performance on-premises backup target with seamless cloud storage integration. Together they let you retain a familiar workflow while ensuring success on the objectives that matter to you: backup window predictability, and repeatable RTO / RPO.

Start Small and Grow

Best of all, you can start with a small deployment, prove it out, and grow. On-prem S3 storage can be deployed on servers you already have, or deployed as preconfigured appliances.

There are even zero-upfront-cost options using Amazon metered-by-use software from the Amazon Marketplace.

Eight Ways Hybrid Cloud from Cloudian Makes Backup Better

Cloudian is the on-prem storage node in a hybrid storage configuration. It features the industry’s highest level of S3 compatibility, ensuring full interoperability with Veritas, Commvault, and Rubrik.

The Cloudian architecture is a scale-out storage cluster composed of shared-nothing storage nodes. Your media servers connect to the on-prem Cloudian cluster via Ethernet and communicate via an S3-compatible API. Your backup software views the cluster exactly as it views cloud storage, and stores data to Cloudian exactly as it would to cloud storage.

The difference between Cloudian and cloud-only storage is that all recent backups are stored locally for quick recovery when needed. Policy-based migration then allows older snapshots to be migrated to the public cloud. This frees up local capacity and also provides an offsite copy for DR use.

Here are eight ways this helps:

1) Performance to handle the largest environments

Cloudian scales to petabytes with a scaling model that grows in both capacity and bandwidth. Predictable backup windows result from Cloudian’s high streaming bandwidth: Writes in excess of 5000 MB/s can be achieved, or 18TB per hour.
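The arithmetic behind that claim checks out: 5000 MB/s sustained for an hour moves 18 TB. A quick sketch of the backup-window estimate:

```python
def backup_window_hours(data_tb: float, write_mb_per_s: float) -> float:
    """Estimate hours needed to stream data_tb terabytes at a sustained write rate."""
    megabytes = data_tb * 1_000_000   # decimal TB -> MB
    return megabytes / write_mb_per_s / 3600

# 18 TB at 5000 MB/s takes one hour:
hours = backup_window_hours(18, 5000)   # 1.0
```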

2) Petabyte-scalable

You can start small with just three nodes, and scale to petabytes simply by adding nodes. Scaling is seamless and does not require downtime.

3) 70% less cost than conventional disk

Built on industry-standard hardware, Cloudian drives down the cost of on-prem, disk-based storage to 1¢/GB/month or less, depending on capacity.

4) Manage one data pool

Cloudian maintains data in a single pool across all nodes. You get one-to-many auto-replication, enhancing data durability. No need to juggle what’s “active” or “passive,” create complex policies and snapshot management techniques, or track which sites are replicating to where.

5) Distributed architecture for global data protection

Enterprises struggle to manage backup at remote offices. With Cloudian, clustered nodes can be deployed globally and interconnected, allowing data to be automatically replicated across sites. Because the nodes form a single namespace, you can implement policy-based data migration to the cloud for DR purposes. You get global data protection with fast local recovery, all managed from a single location.

6) Deploy as appliances, or on your own servers

Cloudian is built on industry-standard hardware. You have the flexibility to buy either pre-configured, fully supported appliances, or software for installation on the servers you choose. Either way, you benefit from the value of commodity hardware.

7) Drop-in integration

Cloudian can be immediately integrated with backup software packages that support cloud storage, including Veritas NetBackup, Veritas Backup Exec, Commvault Simpana and Rubrik. Cloudian is viewed exactly as cloud storage for both backup and recovery. For information that has been migrated to the cloud, Cloudian transparently retrieves that data and presents it to the media server.

8) Start small, even at zero upfront cost

Contact Cloudian to get started. We can even show you options that get you started at zero upfront cost, with Cloudian from the Amazon Marketplace.

For more information, read the Backup Solutions Note or our specific data protection solutions.

Configuration Guides are also available for:

  • Veritas
  • Commvault

Why Internet Unie Chose Cloudian for Hybrid Cloud Storage

Internet Unie, a service provider in the Netherlands, has recently deployed an innovative hybrid cloud service, combining Cloudian object storage in their data center together with Amazon S3 storage.

The new service allows their colocation customers to employ local S3 storage in their data center, with additional capacity available in the AWS public cloud.

Why would a service provider launch a service that employs another service provider (in this case Amazon)?

The answer is simple: it fills a real business need and gives Internet Unie a competitive advantage.

By offering their customers this hybrid service, Internet Unie meets multiple possible requirements:

    • Performance: Local storage provides cloud-compatible capacity without the latency of a long network hop
    • Data governance: Locally stored data does not leave the data center
    • Capacity flexibility: Data can be tiered off to the cloud when desired, meaning capacity is always there
    • Disaster recovery: Backup information can be moved off site at any time
    • Cost: Locally stored information costs nothing to access, meaning that cloud storage invoices become far more predictable
    • Archival storage: Cloud archival services are very cost effective for rarely accessed information
    • Business simplicity: One invoice for both on prem and cloud storage, thanks to the Amazon Marketplace metered-by-use program

Internet Unie summed it up this way:

“This hybrid service opens up enormous possibilities for those using the AWS service cloud offerings and need to store certain data types in a private cloud, for reasons such as data governance policies. With Cloudian’s new offering on AWS, our customers can point their applications to either cloud storage or on-premises storage, and it’s completely transparent,” said Arvid Cauwels, Sales Director at Internet Unie. “With AWS metering now available for Cloudian storage, customers get one AWS invoice for both their public and private cloud storage usage.”

Cloudian is a natural fit due to our native support for the Amazon S3 API, which makes it easy to tier between a Cloudian storage system in the Internet Unie data center and AWS cloud storage. Additionally, Cloudian supports AWS metering, which pulls all usage and billing (for both public and private cloud) into a single monthly AWS invoice.

Hybrid cloud represents a ‘best of both worlds’ solution, giving customers extra flexibility and control while providing limitless scalability. Read our blog post to learn more about why you should consider a hybrid cloud solution.

 

AWS re:Invent Attendees See Benefits of Hybrid Storage

AWS re:Invent was a fantastic show this year. The show has seen phenomenal growth, with over 32,000 attendees, up from 18,000 attendees last year.

AWS re:Invent

Many visitors were looking for solutions to let them integrate their on-premises operations with the cloud. By adopting a hybrid cloud storage approach, they would be able to capitalize on the scalability and cost of cloud storage when appropriate, while also maintaining the cost predictability and control of on-prem storage.

For these visitors, Cloudian proved to be the perfect fit. We provide 100% native Amazon S3 object storage, with automated tiering between the data center and the cloud. Our HyperStore solution is also available directly from AWS Marketplace, which means users can get all their usage and billing data within a single monthly invoice from AWS.

Steve Varner, Principal Data Engineer at Motorola Solutions, visited our booth and shared positive feedback afterwards.

Interested in learning more about Cloudian? Contact us or try it out for yourself.