New Solution with VMware Tanzu Greenplum Data Warehouse

Cloudian is expanding its collaboration with VMware with a new solution combining Cloudian HyperStore with VMware Tanzu Greenplum, a massively parallel data warehouse platform for enterprise analytics at scale.

Integrating Cloudian enterprise-grade object storage with VMware Tanzu Greenplum enables new efficiencies and savings for Greenplum users while also supporting the creation and deployment of petabyte-scale advanced analytics models for complex enterprise applications. This is especially timely: the amount of data consumed and generated by enterprises is accelerating at an unprecedented pace, and these applications need to capture, store, and analyze that data rapidly and at scale.

[Diagram: VMware Tanzu Greenplum with Cloudian HyperStore]

Whether your analytics models use traditional enterprise database data; log and security data; web, mobile, and clickstream data; video and voice data; IoT data; or JSON, XML, geo, and graph data, the need for a modern data analytics platform that is affordable, manageable, and scalable has never been greater.

Cloudian HyperStore, with its native S3 API and limitless scalability, is simple to deploy and easy to use with VMware Tanzu Greenplum. HyperStore storage supports the need for data security, multi-cluster deployments, and geo-distributed architectures across multiple use cases (a short sketch follows the list):

  • Storing database backups
  • Staging files for loading and unloading data
  • Enabling federated queries via the Greenplum Platform Extension Framework (PXF)
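
Because HyperStore exposes a native S3 API, any standard S3 client can address it directly. As a minimal sketch, the Python snippet below uses boto3 to store a Greenplum backup file on HyperStore; the endpoint URL, credentials, bucket, and file names are hypothetical placeholders for your own deployment.

    # Minimal sketch: store a database backup on HyperStore via its S3 API.
    # Endpoint, credentials, bucket, and file names are hypothetical.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://hyperstore.example.com",  # your HyperStore S3 endpoint
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    s3.create_bucket(Bucket="greenplum-backups")  # bucket for database backups
    s3.upload_file(
        "backup_20240101.tar.gz",          # local backup file, e.g. produced by a backup utility
        "greenplum-backups",
        "backups/backup_20240101.tar.gz",  # object key in HyperStore
    )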


Learn more about this new solution here, and see it in the Greenplum Partner Marketplace.

See how Cloudian and VMware are collaborating: https://cloudian.com/vmware

Learn more about Cloudian® HyperStore®

NAS Backup & Archive Solution with Rubrik NAS Cloud Direct

Cloudian and Rubrik are simplifying enterprise data protection with a best-in-class NAS backup and archival solution that combines Cloudian HyperStore and Rubrik’s NAS Cloud Direct. This simple solution makes it easy to manage and migrate massive amounts of NAS data to Cloudian on-prem storage without impacting production environments. Cost-effective and highly scalable, this solution delivers new levels of operational efficiency and flexibility to solve challenges for large-scale NAS data management.

With the surging growth in NAS data volumes, the need for a simple, cost-effective approach to data life cycle and storage management at scale has never been greater. Enterprise organizations must be able to store massive amounts of data while also ensuring that moving data across data centers and to the cloud is simple, seamless, and secure.

Combining Cloudian HyperStore with Rubrik NAS Cloud Direct, a software-only product with a direct-to-object capability, provides a single data management fabric with automated, policy-based protection and allows users to store their NAS backup and archive data in one or multiple geographically separated regions or data centers. Enterprises can extend and scale their Cloudian capacity as needed and non-disruptively while keeping NAS data storage costs to a minimum.
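
Because HyperStore is addressed through the S3 API, geo-separated copies can be expressed with standard S3 tooling. The Python sketch below shows one way this could look using an AWS-style S3 replication rule via boto3; the endpoint, bucket names, and role ARN are hypothetical placeholders, and the exact replication mechanism HyperStore exposes may differ from this configuration.

    # Hedged sketch: replicate a backup bucket to a second site using the
    # standard S3 replication API. All names and ARNs are placeholders.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://hyperstore.example.com")

    # Replication requires versioning on the source bucket.
    s3.put_bucket_versioning(
        Bucket="nas-backups",
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_replication(
        Bucket="nas-backups",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder
            "Rules": [
                {
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {"Prefix": ""},  # replicate every object
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::nas-backups-dr"},  # second data center
                }
            ],
        },
    )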

Rubrik NAS Cloud Direct is deployed as a virtual machine and can be up and running within minutes, protecting data from any local or remote NAS platform to Cloudian HyperStore.

At any scale – from terabytes to petabytes of data and millions to billions of files – Cloudian HyperStore and NAS Cloud Direct eliminate the complexity of tape solutions and the vendor lock-in of disk-to-disk backup solutions, all at a lower cost.

Learn more about this new solution: Download Brief

See how Cloudian and Rubrik are collaborating: https://cloudian.com/rubrik/

Learn more about Cloudian® HyperStore®

An Introduction to Data Tiering

Not all data is equal, due to factors such as frequency of access, security needs, and cost considerations; data storage architectures therefore need to provide different storage tiers to address these varying requirements. Storage tiers may differ by disk drive type, RAID configuration, or even entirely separate storage subsystems, each offering a different I/O profile and cost impact.

Data tiering is the movement of data between different storage tiers, allowing an organization to ensure that the appropriate data resides on the appropriate storage technology. In modern storage architectures, this data movement is invisible to the end-user application and is typically controlled and automated by storage policies (a toy policy sketch follows the list). Typical data tiers may include:

  1. Flash storage – High value, high-performance requirements, usually smaller data sets; cost matters less than meeting the required performance Service Level Agreement (SLA)
  2. Traditional SAN/NAS Storage arrays – Medium value, medium performance, medium cost sensitivity
  3. Object Storage – Less frequently accessed data with larger data sets. Cost is an important consideration
  4. Public Cloud – Long-term archival for data that is rarely or never accessed
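
As a toy illustration of such a policy engine, the Python sketch below maps simple metadata about a data set onto the four tiers above. The fields and thresholds are illustrative assumptions; real storage platforms implement this logic internally.

    # Toy tiering policy: pick a tier from simple, illustrative metadata.
    from dataclasses import dataclass

    @dataclass
    class DataSet:
        name: str
        size_tb: float
        days_since_last_access: int
        performance_critical: bool

    def choose_tier(ds: DataSet) -> str:
        if ds.performance_critical and ds.size_tb < 10:
            return "Tier 1: flash"
        if ds.days_since_last_access < 30:
            return "Tier 2: SAN/NAS array"
        if ds.days_since_last_access < 365:
            return "Tier 3: object storage"
        return "Tier 4: public cloud archive"

    print(choose_tier(DataSet("oltp-db", 2, 1, True)))             # Tier 1: flash
    print(choose_tier(DataSet("video-archive", 500, 400, False)))  # Tier 4: public cloud archive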

Typically, structured data sets belonging to applications and data sources such as OLTP databases, CRM, email systems, and virtual machines will be stored on tiers 1 and 2 above. Unstructured data more commonly moves to tiers 3 and 4, as these are typically much larger data sets where performance is not as critical and cost becomes a more significant factor in management and purchasing decisions.

Some Shortcomings of Data Tiering to Public Cloud

Public cloud services have become an attractive data tiering solution, especially for unstructured data, but there are considerations around public cloud use:

  1. Performance – Public network access will typically be a bottleneck when reading and writing data to public cloud platforms, along with data retrieval times (based on the SLA provided by the cloud service). For backup data especially, backup and recovery windows remain critically important, so it is worth considering holding the most recent backup sets onsite and archiving only older backup data to the cloud.
  2. Security – Certain data sets/industries have regulations stipulating that data must not be stored in the cloud. Being able to control what data is sent to the cloud is of major importance.
  3. Access patterns – Data that is re-read frequently may incur additional network bandwidth costs imposed by the public cloud service provider. Understanding your use of data is vital to control the costs associated with data downloads.
  4. Cost – As well as bandwidth costs associated with reading data, storing large quantities of data in the cloud may not make the most economical sense, especially when compared with the economics of on-premises cloud storage, so evaluate both options (a rough cost model follows this list).
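
To make point 4 concrete, here is a back-of-the-envelope model in Python comparing monthly public cloud cost (storage plus egress) with flat-rate on-premises object storage. All prices are illustrative assumptions, not quotes from any provider.

    # Back-of-the-envelope cost model; all prices are illustrative assumptions.
    def cloud_monthly_cost(stored_tb, read_tb, storage_per_gb=0.023, egress_per_gb=0.09):
        return stored_tb * 1000 * storage_per_gb + read_tb * 1000 * egress_per_gb

    def on_prem_monthly_cost(stored_tb, storage_per_gb=0.005):
        return stored_tb * 1000 * storage_per_gb  # no egress charge on-premises

    # Example: 500 TB stored, 50 TB re-read per month.
    print(f"cloud:   ${cloud_monthly_cost(500, 50):,.0f}/month")  # $16,000
    print(f"on-prem: ${on_prem_monthly_cost(500):,.0f}/month")    # $2,500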

Using Hybrid Cloud for a Balanced Data Tier Strategy

For unstructured data, a hybrid approach to data management is key, with an automation engine, data classification, and granular control of data as the necessary requirements to really deliver on this premise.

With a hybrid cloud approach, you can push any data to the public cloud while retaining the control that comes with on-premises storage. For any data storage system, granularity of control and management is extremely important: different data sets have different management requirements and need different SLAs, applied according to the value of the data to the organization.

Cloudian HyperStore gives you that flexibility, easily moving data between tiers 3 and 4 listed earlier in this post (sketched below). Not only do you get the control and security of your data center, you can also integrate HyperStore with many different destination cloud storage platforms, including Amazon S3/Glacier, Google Cloud Platform, and any other cloud service offering S3 API connectivity.
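
One way tier-3-to-tier-4 movement can be expressed is a standard S3 lifecycle rule that transitions objects to an archive storage class after a set age. The Python sketch below shows such a rule via boto3; the endpoint and bucket name are hypothetical, and HyperStore's own tiering policies may be configured differently in practice.

    # Hedged sketch: S3 lifecycle rule moving objects to Glacier after 90 days.
    import boto3

    s3 = boto3.client("s3", endpoint_url="https://hyperstore.example.com")

    s3.put_bucket_lifecycle_configuration(
        Bucket="archive-data",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-objects",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )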

Learn more about our solutions today.

Learn more about NAS backup here.

 

New HyperStore 4000: Highest density storage

Rack space and budget. Most data centers are short on both. Yet somehow, you’re expected to accommodate a 50% increase in unstructured data volume annually. That’s a problem.

The new solution is the HyperStore 4000. With 700TB in just 4U of rack height, it’s nearly 2X the density of our earlier models. And it delivers storage in your data center at prices on par with the public cloud: about ½ cent per GB per month. Space savings and cost savings in one.

The HyperStore 4000, by the numbers

The HyperStore 4000 appliance was built to handle massive amounts of data. It's housed in a 7-inch-high 4U enclosure with two nodes and a max capacity of 700TB. Drive sizes range from 4TB to 10TB, and the appliance has 256GB of memory (128GB per node).

Better yet, the appliance reduces storage costs by 40% (versus other Cloudian solutions) – data management now costs half a cent per GB per month. Even with the reduced cost, there is no drop in data availability – with a three-appliance cluster, you’ll still see 99.999999% data durability.
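
The headline numbers are easy to sanity-check. Here is a minimal sketch using only figures from this post:

    # Arithmetic behind the headline numbers (figures from this post).
    capacity_tb = 700
    rack_units = 4
    price_per_gb_month = 0.005  # half a cent per GB per month

    print(f"density: {capacity_tb / rack_units:.0f} TB per rack unit")  # 175 TB/U
    print(f"monthly: ${capacity_tb * 1000 * price_per_gb_month:,.0f} for a full appliance")  # $3,500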

For the full list of specs, check out our datasheet.

Save time, your scarcest commodity

With most storage systems, as your needs grow, your management headaches grow with them. But the HyperStore 4000 grows painlessly. You just add nodes. No disruption and no data migration.

We specifically designed the HyperStore 4000 appliance to help customers in industries with huge data needs such as life sciences, healthcare, and entertainment. These are the industries where data growth is exploding every year and where modern data centers feel the most burden, as they need high-density storage with peak performance for data protection, video surveillance, research, archival, and more. Now you can meet these growing needs without growing pains.

Finally, the HyperStore 4000 offers a 100% native S3 API with the industry's highest level of S3 API compatibility. In fact, we guarantee it will work with your S3-enabled applications.

Be sure to also take a look at Cloudian’s other solutions to see which one is right for you.