$500 Billion in Lost Market Value: VC Firm Estimates Impact of Public Cloud Costs


Cloud computing and on-prem computing will always co-exist, we believe. A recent article from the venture capital firm Andreessen Horowitz makes a compelling case for that. The article (“The Cost of Cloud, a Trillion Dollar Paradox”) examined the impact of public cloud costs on public company financials and found that they reduce the total market value of companies using cloud at scale by at least $500 billion.

Here are some of the article’s key findings:

  • “If you’re operating at scale, the cost of cloud can at least double your infrastructure bill.”: The authors note that public cloud list prices can be 10-12X the cost of running your own data centers. Although use-commitment and volume discounts can reduce the difference, the cloud is still significantly more expensive.
  • “Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X.”: Cloud spend can be hard to predict, resulting in spending that often exceeds plan. Companies surveyed for the article indicate that actual spend is often 20% higher than committed spend and at least 2X in some cases.
  • “Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud.”: This takes into account the TCO of everything from server racks, real estate, and cooling to network and engineering costs.
  • “The cost of cloud ‘takes over’ at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”: While public cloud delivers on its promise early on, as a company scales and its growth slows, the impact of cloud spend on margins can start to outweigh the benefits. Because this shift happens later in a company’s life, it’s difficult to reverse.
  • “Think about repatriation upfront.”: By the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or modular architectural investment early on reduces the work needed to repatriate workloads in the future. In addition, repatriation can be done incrementally and in a hybrid fashion.
  • “Companies need to optimize early, often, and, sometimes, also outside the cloud.”: When evaluating the value of any business, one of the most important factors is the cost of goods sold (COGS). That means infrastructure optimization is key.
  • “The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud.”: Developers faced with larger-than-expected cloud bills have become more savvy about the need for greater rigor when it comes to cloud spend.
  • “For large companies — including startups as they reach scale — that [cloud flexibility] tax equates to hundreds of billions of dollars of equity value in many cases.”: This tax is levied long after the companies have committed themselves to the cloud. However, one of the primary reasons organizations have moved to the cloud early on – avoiding large CAPEX outlays – is no longer limited to public clouds. There are now data center alternatives that can be built, deployed, and managed entirely as OPEX.


In short, the article highlights the need to think carefully about which use cases are better suited for on-prem deployment. Public cloud can provide flexibility and scalability benefits, but at a cost that can significantly impact your company’s financial performance.

Cloudian was founded on the idea of bringing public cloud benefits to the data center, and we now have nearly 700 enterprise and service provider customers that have deployed our award-winning HyperStore object storage platform in on-prem and hybrid cloud environments. On-prem object storage can deliver public cloud-like benefits in your own data center, at less cost and with performance, agility, security and control advantages. In addition, as long as the object storage is highly S3-compatible, it can integrate easily with public cloud in a hybrid cloud model.
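
To make the hybrid model concrete, here is a minimal sketch in Python with boto3. The endpoint URL, bucket names, and credentials are hypothetical placeholders, not Cloudian specifics; the point is that the same S3 code path works against an on-prem, S3-compatible endpoint and against AWS, with only the endpoint changing.

import boto3

# On-prem, S3-compatible object store (endpoint and credentials are placeholders)
onprem = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.internal",
    aws_access_key_id="ONPREM_KEY",
    aws_secret_access_key="ONPREM_SECRET",
)

# Public cloud client, using default AWS credential resolution
aws = boto3.client("s3")

# Write to the on-prem tier...
onprem.put_object(Bucket="media-archive", Key="clip.mp4", Body=b"...")

# ...and copy the same object to a public cloud bucket when needed.
obj = onprem.get_object(Bucket="media-archive", Key="clip.mp4")
aws.put_object(Bucket="cloud-archive", Key="clip.mp4", Body=obj["Body"].read())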

To learn more about how we can help you find the right cloud storage strategy for your organization, visit cloudian.com/solutions/cloud-storage/. You can also read about deploying HyperStore on-prem with AWS Outposts at cloudian.com/aws.

 

Jon Toor, CMO, Cloudian


LinkedIn Live: Secure Data with VMware vSAN & Cloudian HyperStore


Grant Jacobson, Director of Technology Alliances and Partner Marketing, Cloudian



Protecting Your Data with VMware vSAN and Cloudian HyperStore

Each month, VMware and Cloudian collaborate to promote our joint solution in a series of short (~15-minute) LinkedIn Live sessions. Each session highlights a new solution use case, and for today’s session, the fourth in our series, we talked about Data Protection and how to keep data safe. These are lively conversations about the solution and how our customers can take advantage of it to meet their evolving needs. Last month, we covered the new Splunk SmartStore use case, with a 44% TCO savings compared with traditional storage.

Our joint solution became available in February and combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple, and it serves any cloud-native or traditional IT application requiring S3-compatible storage. The solution enables many new use cases, with Data Protection being one that cuts across all segments: everyone needs to ensure their data stays safe, especially from the accelerating increase in ransomware and other cyberattacks.


If you missed it, watch it here:

If you’d like more information about our solutions with VMware, see our dedicated webpage:
You can also reach us at [email protected]

Object Storage: Better Monetizing Content by Transitioning from Tape

As media organizations look for new ways to monetize their ever-growing content archives, they need to ask themselves whether they have the right storage foundation. In a recent article I wrote for Post Magazine, I discussed the advantages of object storage over tape when it comes to managing and protecting content. Below is a reprint of the article.


David Phillips, Principal Architect for M&E Solutions, Cloudian




Object Storage: Better Monetizing Content by Transitioning from Tape

Media and entertainment companies derive significant recurring revenue from old content. From traditional television syndication to YouTube uploads, this content can be distributed and monetized in several different ways. Many M&E companies, particularly broadcasters, store their content in decades-old LTO tape libraries. With years of material, including thousands of episodes and millions of digital assets, these tape libraries can grow so large that they become unmanageable. Deployments can easily reach several petabytes of data and may sprawl across multiple floors in a broadcaster’s media storage facility. Searching these massive libraries and retrieving specific content can be a cumbersome, time-consuming task – like trying to find a needle in a haystack.

Object storage provides a far simpler, more efficient and cost-effective way for broadcasters to manage their old video content. With limitless scalability, object storage can easily grow to support petabytes of data without occupying a large physical footprint. Moreover, the technology supports rich, customizable metadata, making it easier and quicker to search and retrieve content. Organizations can use a Google-like search tool to immediately retrieve assets, ensuring that they have access to all existing content, no matter how old or obscure, and can readily monetize that content.

Here’s a deeper look at how the two formats compare in searchability, data access, scalability and management.

Searchability and data access

LTO tape was created to store static data for the long haul. Accessing, locating and retrieving this data was always an afterthought. In the most efficient tape libraries today, staff may be able to find a piece of media within a couple of minutes. But even in this scenario, if there were multiple jobs queued up first in the library, finding that asset could take hours. And this assumes that the tape containing the asset is stored in the library and in good condition (i.e., it can be read and doesn’t jam).

This also assumes the staff has the proper records to find the asset at all. Because of the limitations of the format, LTO tape files do not support detailed metadata. This means that organizations can only search for assets using basic file attributes, such as date created or title. It’s impossible to conduct any sort of ad hoc search. If a system’s data index doesn’t contain the file attributes a user is looking for, the only option is to look manually, an untenable task for most M&E organizations with massive content libraries. This won’t change in the future, as tape cannot support advanced technologies such as artificial intelligence (AI) and machine learning (ML) to improve searchability.

On the other hand, object storage makes it possible to immediately search and access assets. The architecture supports fully-customizable metadata, allowing staff to attach any attributes they want to any asset, no matter how specific. For example, a news broadcast could have metadata identifying the anchors or describing the type of stories covered. When trying to find an asset, a user can search for any of those attributes and rapidly retrieve it. This makes it much easier to find old or existing content and use it for new monetization opportunities, driving much greater return on investment (ROI) from that content. This value will only increase as AI and ML, which are both fully supported in object storage systems, provide new ways to analyze and leverage data (e.g., facial recognition, speech recognition and action analysis), increasing opportunities to monetize archival content.
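
As a rough illustration of how this works at the API level, the sketch below attaches user-defined metadata to an asset over the S3 interface and reads it back. The bucket, key, and attribute names are hypothetical; a search index built over fields like these is what enables the fast lookups described above.

import boto3

s3 = boto3.client("s3", endpoint_url="https://hyperstore.example.internal")

# Store a news clip with custom metadata describing its content.
with open("evening.mxf", "rb") as f:
    s3.put_object(
        Bucket="broadcast-archive",
        Key="news/2019-06-01-evening.mxf",
        Body=f,
        Metadata={
            "anchor": "jane-doe",
            "topics": "weather,local-politics",
            "resolution": "4k",
        },
    )

# Read the attributes back later; any of them can feed a metadata search.
head = s3.head_object(Bucket="broadcast-archive", Key="news/2019-06-01-evening.mxf")
print(head["Metadata"])  # {'anchor': 'jane-doe', ...}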

Scalability and management

Organizations must commit significant staff and resources to manage and grow an LTO tape library. Due to their physical complexity, these libraries can be difficult and expensive to scale. In the age of streaming, broadcasters are increasing their content at breakneck speed. And with the adoption of capacity-intensive formats like 4K, 8K and 360/VR, more data is being created for each piece of content. Just several hundred hours of video in these advanced formats can easily reach a petabyte in size. In LTO environments, the only way to increase capacity is to add more tapes, which is particularly difficult if there are no available library slots. When that’s the case, the only choice is to add another library. Many M&E companies’ tape libraries already stretch across several floors, leaving little room for expansion, especially because new content (in higher resolution formats) tends to use larger data quantities than older content.

Object storage was designed for limitless scalability. It treats data as objects that are stored in a flat address space, which makes it easy to grow deployments via horizontal scaling (scaling out) rather than vertical scaling (scaling up). To increase a deployment, organizations simply add more nodes or devices to their existing system, rather than adding entirely new systems (such as LTO libraries). Because of this, object storage is simple to scale to hundreds of petabytes and beyond. With data continuing to grow exponentially, especially for video content, being able to scale easily and efficiently helps M&E companies maintain order and visibility over their content, enabling them to easily find and leverage those assets for new opportunities. Expanding a sprawling, messy tape library does exactly the opposite.

Tape libraries also lack centralized management across locations. To access or manage a given asset, a user has to be near the library where it’s physically stored. For M&E organizations that have tape archives in multiple locations, this causes logistical issues, as each separate archive must be managed individually. As a result, companies often need to hire multiple administrators to operate each archive, driving up costs and causing operational siloing.

Object storage addresses the challenge of geo-distribution with centralized, universal management capabilities. Because the architecture leverages a global namespace and connects all nodes together in a single storage pool, assets can be accessed and managed from any location. While companies can only access data stored on tape directly through a physical copy, object storage enables them to access all content regardless of where it is physically stored. One person can administer an entire globally-distributed deployment, enforcing policies, creating backup copies, provisioning new users and executing other key tasks for the whole organization.

Conclusion

M&E companies still managing video content in LTO tape libraries suffer from major inefficiencies and, in turn, lost revenue. The format simply wasn’t designed for the modern media landscape. Object storage is a much newer architecture that was built to accommodate massive data volumes in the digital age. Object storage’s searchability, accessibility, scalability and centralized management help broadcasters boost ROI from existing content.

 


To learn more about Cloudian’s Media and Entertainment solutions, visit cloudian.com/solutions/media-and-entertainment/.

Tape — Does It Measure Up?


Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian



Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator.

Historically, organizations have bitten the proverbial bullet and implemented tape for long-term data archiving and backup, inheriting along with it all the associated problems. However, the remote work of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large data tape libraries. Security and secure remote access have also become critical elements when considering data protection and business continuity. With production and engineering teams alike finding themselves “locked out of the building,” managing physical tape media and remediating mechanical issues with tape libraries has proved difficult, if not impossible.

The drawbacks of tape highlighted even more by the pandemic include:

  • Accessibility: This one is obvious. The lack of immediate and complete accessibility has never been more problematic than during the pandemic.
  • Durability: Mechanical failures in tape library robotics and failures of the tape media inside have meant truck rolls to the tape vaults – not ideal for a shelter-in-place situation.
  • Compatibility: New tape drive hardware has limited backward compatibility, which has required re-copying data to newer formats at a time when data availability has been the prime objective for business continuity.
  • Security: Ransomware attacks have become commonplace during the pandemic. Considering the various drawbacks associated with tapes, the rationale for using tapes for ransomware protection is up for reevaluation. As they say, data not retrievable in the right timeframe is data not protected. This is especially true in the case of ransomware.


As companies look to increase the capacity of their storage, as well as the frequency with which they access it, object storage checks off all the right boxes in terms of data durability, availability, performance, and accessibility. Whether in the public or private cloud, object storage overcomes the limitations of LTO tape listed above and has become the go-to for most IT administrators looking for a better solution. If you’re running tape today, it makes a lot of sense to evaluate the benefits of switching to object storage before the limitations of your current solution impact your business more severely — and the sooner the better. As tape infrastructure ages, the transition only becomes more difficult.

As with any major technology shift, there are many important factors to take into consideration.


Tape: Does it Measure Up?
An Insider’s Guide to Data Center Modernization

To read an insider’s view on data center modernization focused on this topic, please visit
https://cloudian.com/lp/data-center-modernization/


S3-Compatible Storage for VMware Cloud Director

We’re excited to announce Cloudian Object Storage for VMware Cloud Director, an integrated storage platform that enables VMware Cloud Providers and their customers to deploy, manage and consume S3-compatible storage within their services environment.

Read the datasheet

View the demo

View the VMware lightboard video

Read about the integration

Scalable, Cost-Effective Storage for Unstructured Data

This new offering does for unstructured data – such as images and files – what vSAN does for structured data: it provides an integrated S3-compatible storage solution that is provisioned and managed within the VCD framework.


Furthermore, Object Storage for VMware Cloud Director enables a limitlessly scalable storage pool, where up to an exabyte of data can be managed within a single namespace, and at far less cost than other storage types.

VMware Cloud Director Integration

Jointly engineered by VMware and Cloudian, the solution consists of two elements:

  • VMware Cloud Director Object Storage Extension: Extensible object storage middleware in VMware Cloud Director that provides the storage management framework.
  • Cloudian Object Storage: The storage layer that provides the S3-compatible storage environment.

As with vSAN, object storage is seamlessly integrated within the management environment.


The Simple Path to New, High-Value Add Services

For VMware Cloud Providers, this platform opens the door to new service revenue streams in use cases such as:

  • Storage-as-a-service
  • Backup-as-a-service
  • Archive-as-a-service
  • Container storage services, with VMware PKS

Furthermore, a growing ecosystem of S3-compatible applications creates many other service options. Whether in big data, healthcare, media & entertainment, video surveillance or others, a scalable, S3-compatible platform gives CSPs new opportunities to build differentiated services offerings.

Fully S3-Compatible Storage Platform

Designed exclusively to support the S3 API, Cloudian Object Storage features a native S3 API implementation and offers the industry’s best S3 compatibility. This makes it an ideal platform for S3-compatible services offerings and software development.

Storage Management via VMware Cloud Director

All commonly-used storage management functions are accessible via VMware Cloud Director. Create users and groups, provision storage, set policies, and monitor usage, all without leaving the VCD UI. This eliminates the console-hopping that saps productivity and allows management tasks to be automated within the VCD framework.


Self-Service for Cloud Providers’ Tenants and Users

On the customer side, users gain a self-service portal, letting them accomplish storage management tasks on their own via VCD. For the cloud provider, this translates to increased productivity and higher customer satisfaction.

Deployment Options

Cloud providers have two deployment options (both are managed via VMware Cloud Director):

Software-Defined Storage: Deploy Cloudian software on your existing VMware compute and storage platform and leverage the storage you already have. Storage appears as a scalable S3-compatible storage pool. A utility-based pricing model lets you license Cloudian software for just the object storage capacity in use. (This option will be available in summer 2019.)

Appliance: Deploy as a pre-configured storage appliance from Cloudian. Start small and seamlessly scale to an exabyte without interruption. (Available July 2019)


Example Workflow

From end to end, cloud providers and their clients can manage entire workflows via VMware Cloud Director. Consider this backup-as-a-service offering: A service provider can configure the storage target (Cloudian Object Storage), configure the backup software, and create new tenant users, all from a single VCD screen. The tenant can then create and schedule backup jobs, monitor progress, and perform restores, also through VCD.


Free up Space From VCD Datastores

For the service provider, this platform can also increase storage efficiency by offloading vApps not currently in use, thus freeing up storage space in VMware Cloud Director datastores. When required, restore the vApp back into the datastore for continued use.

Ideal Feature Set for Service Providers

Built for service providers, the Cloudian platform includes the full range of features needed to build and manage a profitable services business:

  • Multi-tenant Resource Pooling: Create isolated, secure storage pools within a shared storage platform. Customers have independent role-based authentication and fine-grained access controls.
  • Geo-Distribution and Cloud Migration: Policy-based tools enable simple, secure storage migration and management across sites for disaster recovery and storage resource optimization, all within a single namespace.
  • Integrated Management: Manage commonly used storage functions, such as reporting and configuration of users and groups, with access provided from within the VMware Cloud Director user interface. For advanced functions, a single sign-on provides seamless access to the Cloudian user interface.
  • Quality of Service: Manage service level agreements with bandwidth controls to ensure a consistent customer experience.
  • Billing: Generate client billing information using selected usage parameters.
  • Modular Scalability: Start small and grow without interruption to an exabyte within a single namespace.
  • Data Durability up to 14 Nines: Deployment options, including erasure coding and data replication, allow for configurable data durability up to 99.999999999999%.
  • Data Security and Compliance: Data is secured with AES-256 server-side encryption for data stored at rest and SSL for data in transit (HTTPS). WORM and audit trail logging are provided for compliance.
  • Granular Storage Management: Manage data protection and security at the bucket level to tailor capabilities for specific users.
  • Self-service Management: Role-based access controls allow customers to select and provision storage on demand from a service catalog via a self-service portal.

General availability of Cloudian Object Storage for VMware Cloud Director is July 2019. We’re looking forward to helping our cloud provider partners and their customers build new business opportunities to capitalize on the growing ecosystem of S3-compatible applications!

Read the datasheet

View the demo

View the VMware lightboard video

Read about the integration

VNAs and Object Storage: Changing Patient Outcomes with Consolidated Data

While medical professionals have more medical imagery resources at their disposal than ever, the technology that generates and stores those images is often proprietary. Nurses, doctors and others who need that information must devote time to retrieving files from disparate systems – something that inevitably takes away from the time that could be spent working with patients.

This frustrating scenario is common enough that it’s given rise to a solution: the vendor neutral archive, or VNA. A VNA acts as a single shared storage environment, pulling together imagery from multiple platforms. This allows medical staff to make more informed care decisions based on the complete picture of a patient’s condition. The IT department likes VNAs, too – because they allow solutions from multiple vendors, IT gains equipment sourcing flexibility and can keep costs down. The shared storage pool also consolidates management into a simpler storage environment where data can be more effectively managed and protected with a reduced IT workload.

Object storage will play an important role in this approach. It’s uniquely suited to the scalability needs of modern medicine – the volume of medical data hospitals must manage is only going up. Object storage allows organizations to start small and scale to petabytes.

As long as that data can be provided to medical professionals in an organized manner, it can be used to help patient outcomes improve. The metadata search capabilities of object storage can help with that.

A VNA also helps make medical data more secure. Storing images in multiple repositories multiplies the security risks and data protection challenges – each silo introduces its own management burdens. By centralizing information, managers can apply security and DR protocols to the data under management in the VNA, making their jobs much easier.

And by the way, object storage can do this at one-third the cost of traditional enterprise storage.

If you’re looking for an example of a VNA and object storage system in action, we have a great example for you. Today, Cloudian and Hyland announced a new solution that combines Cloudian HyperStore with Hyland Acuo VNA. Acuo VNA employs the HyperStore platform to consolidate imaging information from across the healthcare organization into a single storage pool. To read more about the solution, check out the solution brief – or, if you’re going to be at the Health Information Management Systems Society 2018 (HIMSS2018) Conference in Las Vegas, March 6-8, stop by booth 1633 and see the power of the solution in person!

Read more in our guide to Medical Record Retention.

This is part of a series of articles about Health Data Management.

The World is Being Changed by Camera Data & Object Storage

Impactful Imaging: How the World is Being Changed by Camera Data and Object Storage

Last month, Cloudian and Axis, the market leader in network video, announced a partnership that will allow the data captured by Axis’ network cameras to be saved directly to Cloudian’s HyperStore via the internet for economical archiving. That means that even as data volumes increase, Axis customers can manage video data effectively and economically while also positioning themselves to deal with future demands for video and storage.

This is yet another example of how powerful imaging technology and scale-out storage are combining to change business, security and public life. Today, cameras count people at busy places, recognize faces for access, and monitor production processes. When you access a parking garage, your license plate is read and stored. If you drive on a Tokyo highway, cameras recognize your car and billboards project a targeted advertisement based on the type of car you drive. And if you drive certain models of automobile, your car itself is recording video as you drive.

Cameras are being used in a host of new and ingenious ways – not just to monitor people for security reasons but to create entirely new ways of serving customers and generating new revenue.

Automotive and security applications may be among the first applications of video that spring to mind, but farming is another example of an industry that is adopting camera imaging. Traditional farming relies on managing fields based on regional conditions and historical data. Today, farmers can add sensors, robots, GPS, mapping tools and data-analytics software to customize the care that plants receive without increasing labour. Stationary or robot-mounted sensors and camera-equipped drones send images and data on individual plants to a computer, which looks for signs of potential problems, and AI/ML helps find a proven remedy for each problem based on historical data. This allows farmers to receive feedback in real time and act accordingly, delivering water, pesticide or fertilizer to only the areas that need it. The technology also helps farmers decide when to plant and harvest crops.

But it is not only productivity and cost savings that count when it comes to cameras. The City of Montebello, California, equipped 79 city buses with five IP cameras each. With over 8 million passengers traveling every year, the city has an enormous responsibility to ensure the safety of those being transported. To make sure that the right response arrives quickly in an emergency situation, the city combined an advanced mobile security system with Cloudian object storage.

Thanks to this new combination, the City of Montebello can simultaneously record all five bus-mounted cameras currently under testing, with real-time metadata tagging (time, location, vehicle, etc.). The city also improved upload reliability by allowing large clips to be broken up and the parts streamed concurrently, as opposed to consecutive streams, which must be restarted in the event of an error. But most importantly, Montebello Bus Lines has real-time visibility to protect the millions of passengers who depend on its service, and it is able to find specific information quickly when an emergency arises.

With newer applications of video, and higher resolutions, comes growing demand for data storage, which can become very expensive very quickly with traditional storage systems. Conventional network cameras usually store captured images on media such as SD cards or on Network Attached Storage (NAS), but there is a limit to how much data these media can preserve and manage over time. The scale-out capabilities of object storage and its use of low-cost hard drives are two more reasons why object storage works very well with camera data.

Object storage and cameras are made for each other. Using the right technology for video data saves money, increases efficiency and lowers your ecological footprint.

To learn more about how Cloudian can help you manage your video storage needs, visit our Media and Entertainment page.

 

How to Prepare Your Organisation for GDPR Compliance

The EU’s General Data Protection Regulation (GDPR) was approved last year, and the enforcement date of May 25, 2018 is fast approaching. After that, organisations found to be in non-compliance will face heavy fines. With only nine months until the enforcement date, it’s important to understand the potential problem areas in your data storage architecture and how you can improve it in time to be GDPR-compliant.

 

What is the GDPR?

The GDPR was designed to harmonize data privacy laws across Europe, bolstering privacy protection for EU citizens and empowering them to better control how their data is used. The regulation introduces the ‘Rights of the Data Subjects’, which essentially states that data belongs to the individual, not the organisation. For individuals, this means that they can access their personal data that’s being stored, and can request changes or even removal. They also have the right to compensation if their rights are violated. For organisations, information must be held only as long as it’s required, and in many cases they’ll need to appoint a Data Protection Officer to ensure that personal data is not compromised.

Organisations are now facing challenges interpreting what the new regulations mean to them and understanding what they need to do to ensure compliance. Just deploying technology is not a good answer here, as organisations need to understand the data they are storing to ensure they have a legitimate reason for holding it. It’s important to keep in mind six core principles when storing personal data. Data must be:

  • Processed lawfully, fairly, and transparently
  • Collected for specified, explicit, and legitimate purposes
  • Relevant and limited to what is necessary
  • Accurate and up to date
  • Retained for only as long as necessary
  • Processed in an appropriate manner to maintain security

The Path to GDPR Compliance

Because of the greater control individuals have over their personal data, it is the organisation’s duty to ensure that nothing happens to that data. There are two big questions you should ask yourself when assessing how compliant your organisation is with the GDPR:

1. Is the data protected?

If the personal data your organisation stores ends up compromised, the organisation will be held accountable. You must make sure your data is protected from:

  • Device failures – This includes any physical storage component, such as disk drives, storage controllers, and data centres.
  • Logical/soft failures – This refers to human errors such as accidental deletion/overwrite, as well as viruses and file data corruption. This currently accounts for up to 80% of data losses.
  • Security breaches – Data must be secure from forceful entry/hacks.

Data availability must be guaranteed not only for the security and privacy of personal data, but also in the event that individuals want to make changes to their data.

2. Can I find the data?

The second question you should ask is around data location awareness. If someone requests their personal data, would you be able to quickly locate and procure it? Not only does the data you’re storing need to be housed in GDPR-compliant systems and data centres, but the data itself needs to be searchable and well-organised. If you are not able to produce the requested data in a timely fashion, you may face fines under the new regulations.

 

Turning to Object Storage

One way you can start moving your organisation towards GDPR compliance is by looking to object storage. The inherent capabilities of object storage give you some real advantages in achieving compliance:

Customizable metadata tags: To ensure compliance, you must be able to find information. Traditional file systems only allow you to view limited metadata information on a file, such as the owner and the date created. With object storage metadata, you have no limit on how you tag your data, making it easily searchable for data requests.
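
S3-style object stores typically expose both user-defined metadata and object tags; the hedged sketch below uses object tags, which can be updated without rewriting the object – useful when records must be labelled after the fact. The endpoint, bucket, key, and tag names are illustrative assumptions, not a specific product API.

import boto3

s3 = boto3.client("s3", endpoint_url="https://object-store.example.internal")

# Label a record with the data subject it relates to at write time.
s3.put_object(
    Bucket="customer-records",
    Key="invoices/2018/inv-0042.pdf",
    Body=b"...",
    Tagging="data-subject=cust-12345&contains-pii=true",
)

# A subject-access request then becomes a tag lookup rather than a
# manual hunt through file shares.
tags = s3.get_object_tagging(Bucket="customer-records", Key="invoices/2018/inv-0042.pdf")
print(tags["TagSet"])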

Scalability: When data is consolidated, it’s much more easily searched and checked for duplicate records. The limitless capacity of object storage makes it feasible to consolidate data to a single, searchable pool.

Data protection features: Data must be available at all times. With data protection features such as erasure coding, replication, and multi-tenancy (to segregate users), you can ensure that data can still be retrieved no matter what situations arise.
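
To make the erasure-coding idea concrete, here is a toy sketch: two data shards plus one XOR parity shard, so any single lost shard can be rebuilt from the remaining two. Production systems use k+m erasure codes (such as Reed-Solomon) spread across drives and nodes; this illustrates only the principle, not an implementation.

# Toy 2+1 erasure code: split data into two shards and keep an XOR parity.
def make_shards(data: bytes):
    half = (len(data) + 1) // 2
    a = data[:half]
    b = data[half:].ljust(half, b"\x00")  # pad so both shards match in length
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def rebuild(surviving: bytes, parity: bytes) -> bytes:
    # XOR of the surviving shard with the parity recovers the lost shard.
    return bytes(x ^ y for x, y in zip(surviving, parity))

a, b, p = make_shards(b"personal data that must stay available")
assert rebuild(a, p) == b  # shard b lost, then reconstructed from a and parity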

Full GDPR compliance will not be an easy task, but you can start prepping your organisation for the enforcement date by making sure your data is protected, available, and searchable.


Data Management Partners Unite to Provide Comprehensive Object Storage

We just announced our Data Management Partners program to help our customers solve more capacity management problems in less time. The program combines technology, testing, and support to make it easy to put object storage to work. Inaugural members of this program are Rubrik, Komprise, Evolphin, and CTERA Networks.

Here’s why this program is exciting: object storage has the potential to solve many capacity management problems in the data center. It’s 2/3 less costly and infinitely scalable. In a recent survey, Gartner found that capacity management was the #1 concern of Infrastructure and Operations managers, so these are important benefits.

The question is how to get started with object storage. You can piece together solutions on your own, but that can be risky. We’ve done the homework for you and proved out these solutions.

The Solution for Unstructured Data Consolidation

These solutions solve capacity-intensive challenges where Cloudian’s scalability and cost benefits deliver huge savings. Cloudian consolidates data into one big storage pool, so you can add as many nodes as you want. With one set of users, groups, permissions, file structures, etc., storage managers still see only one thing to manage. This cuts management workloads by 90% and makes it possible to grow with less headache and cost.

Solution areas in this program include:

  • Data protection: Rubrik and Cloudian together unify and automate backup, instant recovery, replication, global indexed search, archival, compliance, and copy data management into a single scale-out fabric across the data center and public cloud.
  • Data lifecycle management: Komprise and Cloudian tackle one of the biggest challenges in the data center industry, unstructured data lifecycle management, with solutions that offload non-critical data – typically 70%+ of the footprint – from costly Tier-1 NAS to a limitlessly scalable storage pool.
  • Media active archiving: Evolphin and Cloudian help media professionals address capacity-intensive formats (e.g., 4k, 8k, VR/360) with the performance to handle time-pressed workflows.
  • File sync and share: CTERA Networks and Cloudian provide enterprises with tools for collaboration in capacity-rich environments.

Reducing Risk with Proven Partners

This program is built on 100% proven solutions. All are deployed with customers in live production data centers right now. They solve real capacity management problems and do not create new problems along the way.

Object storage is seeing rapid adoption. It costs significantly less than traditional storage and fixes the capacity problem with infinite scalability. If you’re looking into object storage, make sure you’re getting a complete solution, though. Learn more about our Data Management Partners today.

 

Bring Object Storage to Your Nutanix Cluster with Cloudian HyperStore

Your Nutanix-powered private cloud provides fast, Tier 1 storage for the information you use every day. But what about the information that’s less frequently used, or requires more capacity than your Nutanix cluster has to spare? Cloudian HyperStore is on-prem storage that provides extra capacity for your large-scale storage demands.

HyperStore Enterprise Object Storage Overview

Cloudian HyperStore is petabyte-scalable, on-prem object storage for unstructured data. It employs the S3 interface, so most applications that include public cloud connectivity will work with HyperStore.

Like Nutanix, HyperStore is a scale-out cluster. When you need more capacity you simply add nodes. All capacity resides within a single namespace, so it remains easy to manage. Key features of Cloudian HyperStore include:

  • 100% native S3 interface, so it works with most cloud-enabled applications
  • Scales from TBs to PBs without disruption
  • Fourteen-nines data durability with erasure coding and replication
  • 70% less cost than traditional NAS

Scalable Storage for Data-Intensive Applications

Cloudian HyperStore’s scalability and exceptional data durability make it ideal for use cases such as:

  • Backup and archive: Scalable backup target, compatible with Veritas, Commvault, Veeam, and Rubrik data protection solutions
  • Media and entertainment: HyperStore provides an active archive that’s 100X faster to access than tape, and ⅓ the cost of NAS; compatible with most media asset managers.
  • File management: Offload Tier 1 NAS to extend capacity with zero user disruption

HyperStore is guaranteed compatible with all applications that support the S3 interface, the same interface used by AWS and Google GCP. Think of HyperStore as hyperconverged storage, bringing together multiple data types to one, super-scalable pool.

Multiple Deployment Options

Choose from multiple HyperStore deployment options including:

  • HyperStore within your Nutanix cluster: Run HyperStore software on a Nutanix VM and store data to your Nutanix disk. No additional hardware required. A fast, cost-effective way to get started or to develop S3-enabled applications.
  • HyperStore as a stand-alone appliance: Deploy HyperStore appliances in your data center for high-capacity, cost effective storage. Locate all nodes locally, or spread them out across multiple locations for distributed storage.

Nutanix is the perfect platform for your frequently used or performance-sensitive data. For everything else, there’s Cloudian. To learn more about our work with Nutanix, come find us at Nutanix .NEXT 2017 at booth G7. Additionally, Sanjay Jagad, our Director of Products and Solutions, will be presenting on how to bring object storage to your Nutanix cluster on June 30th, 11:15am in room Maryland D.

To learn more about Cloudian and sign up for a free trial, visit us at https://cloudian.com/free-trial/.

 

Object Storage Bucket-Level Auto-Tiering with Cloudian

As discussed in my previous blog post, ‘An Introduction to Data Tiering’, there is huge value in using different storage tiers within a data storage architecture to ensure that your different data sets are stored on the appropriate technology. Now I’d like to explain how the Cloudian HyperStore system supports object storage ‘auto-tiering’, whereby objects can be automatically moved from local HyperStore storage to a destination storage system on a predefined schedule based upon data lifecycle policies.


Cloudian HyperStore can be integrated with any of the following destination cloud storage platforms as a target for tiered data:

  • Amazon S3
  • Amazon Glacier
  • Google Cloud Platform
  • Any cloud service offering S3 API connectivity
  • A remotely located Cloudian HyperStore cluster

Granular Control with Cloudian HyperStore

For any data storage system, granularity of control and management is extremely important – data sets often have varying management requirements, with the need to apply different Service Level Agreements (SLAs) as appropriate to the value of the data to an organisation.

Cloudian HyperStore provides the ability to manage data at the bucket level, providing flexibility at a granular level to allow SLA and management control (note: a “bucket” is an S3 data container, similar to a LUN in block storage or a file system in NAS systems). HyperStore provides the following as control parameters at the bucket level:

  • Data protection – Select from replication or erasure coding of data, plus single or multi-site data distribution
  • Consistency level – Control of replication techniques (synchronous vs asynchronous)
  • Access permissions – User and group control access to data
  • Disaster recovery – Data replication to public cloud
  • Encryption – Data at rest protection for security compliance
  • Compression – Reduction of the effective raw storage used to store data objects
  • Data size threshold – Variable storage location of data based upon the data object size
  • Lifecycle policies – Data management rules for tiering and data expiration

Cloudian HyperStore manages data tiering via lifecycle policies.

Auto-tiering is configurable on a per-bucket basis, with each bucket allowed different lifecycle policies based upon rules. Examples of these include (a sketch of such a rule, expressed through the standard S3 API, follows this list):

  1. Which data objects to apply the lifecycle rule to. This can include:
    • All objects in the bucket
    • Objects for which the name starts with a specific prefix (such as prefix “Meetings/2015/”)
  2. The tiering schedule, which can be specified using one of three methods:
    • Move objects X number of days after they’re created
    • Move objects if they go X number of days without being accessed
    • Move objects on a fixed date – such as December 31, 2016
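
Here is a hedged sketch of what such a rule looks like through the standard S3 lifecycle API. The bucket name, prefix, day counts, and storage class are placeholders, and whether a given HyperStore release accepts this exact call is an assumption here; HyperStore itself configures the tiering destination through its own policy settings.

import boto3

s3 = boto3.client("s3", endpoint_url="https://hyperstore.example.internal")

# Lifecycle rule: move objects under Meetings/2015/ 30 days after creation,
# and expire them outright after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="meetings",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-meetings",
                "Filter": {"Prefix": "Meetings/2015/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)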

When a data object becomes a candidate for tiering, a small stub object is retained on the HyperStore cluster. The stub acts as a pointer to the actual data object, so the data object still appears as if it’s stored in the local cluster. To the end user, there is no change to the action of accessing data, but the object does display a special icon denoting the fact that the data object has been moved.

For auto-tiering to a Cloud provider such as Amazon or Google, an account is required along with associated account access credentials.

Accessing Data After Auto-Tiering

To access objects after they’ve been auto-tiered to public cloud services, the objects can be accessed either directly through a public cloud platform (using the applicable account and credentials) or via the local HyperStore system. There are three options for retrieving tiered data:

  1. Restoring objects – When a user accesses a data file, they are directed to the local stub file held on HyperStore, which then redirects the request to the actual location of the data object (the tiered target platform).

A copy of the data object is restored from the tiered storage back to a local HyperStore bucket, and the user request is performed on the object once it has been copied back. A time limit can be set for how long to retain the retrieved object locally before it returns to the secondary tier.

This is considered the best option when data is accessed relatively frequently and you want to avoid both the performance impact of traversing the internet and any access/retrieval costs applied by service providers. Storage capacity must be managed on the local HyperStore cluster to ensure that there is sufficient “cache” for object retrievals.

  2. Streaming objects – Streams data directly to the client without first restoring the data to the local HyperStore cluster. When the file is closed, any modifications are made to the object in situ in the tiered location. Any metadata modifications are updated both in the local HyperStore database and on the tiered platform.

This is considered the best option when data is accessed relatively infrequently and the storage capacity of the local HyperStore cluster is a concern, but performance will be lower, as data requests traverse the internet and access costs may be applied by the service provider every time the file is read.

  3. Direct access – Objects auto-tiered to public cloud services can be accessed directly by another application or via your standard public cloud interface, such as the AWS Management Console. This method fully bypasses the HyperStore cluster. Because objects are written to the cloud using the standard S3 API, and include a copy of the object’s metadata, they can be referenced directly (see the sketch after the list below).

Storing objects in this openly accessible manner — with co-located rich metadata — is useful in several instances:

  1. A disaster recovery scenario where the HyperStore cluster is not available
  2. Facilitating data migration to another platform
  3. Enabling access from a separate cloud-based application, such as content distribution
  4. Providing open access to data, without reliance on a separate database to provide indexing
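
As a rough sketch of direct access, the snippet below reads an auto-tiered object straight from the public cloud bucket with a plain AWS client, bypassing HyperStore entirely; the bucket and key are illustrative.

import boto3

# Plain AWS client – no on-prem endpoint involved.
aws = boto3.client("s3")

obj = aws.get_object(Bucket="tiered-archive", Key="Meetings/2015/q3-review.mp4")
video = obj["Body"].read()

# The object's user metadata was tiered along with it, so it can be used
# for indexing without consulting the HyperStore database.
print(obj.get("Metadata", {}))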

HyperStore provides great flexibility for leveraging hybrid cloud deployments where you get to set the policy on which data is stored in a public or private cloud. Learn more about HyperStore here.

 
