Cloudian Achieves Veeam Ready – Object Certification

Today Cloudian was honored to be among the first to receive Veeam’s new object storage certification. Called Veeam Ready – Object, this certification gives storage administrators a worry-free, out-of-the-box integrated path to employ Cloudian object storage as a backup target for Veeam.

Veeam and Cloudian have been working together for several years to meet the data protection needs of enterprise, corporate, service provider, and government customers. In January 2019, Veeam made the solution integration even better with the launch of Veeam Availability Suite 9.5 Update 4. This release added native support for S3-compatible storage targets through the Veeam Cloud Tier offering.
Because Cloudian offers the storage industry’s best S3 compatibility, Veeam data protection customers are now able to take full advantage of HyperStore’s limitless scale and value benefits as a backup, archive, and disaster recovery storage target.
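Because HyperStore exposes a native S3 API, any standard S3 client can target it simply by overriding the endpoint. As a minimal sketch in Python with boto3 (the endpoint URL, credentials, and bucket name below are hypothetical placeholders):

```python
# Point a stock S3 client at an on-prem HyperStore cluster instead of AWS.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.hyperstore.example.com",  # hypothetical HyperStore endpoint
    aws_access_key_id="HYPERSTORE_ACCESS_KEY",         # placeholder credentials
    aws_secret_access_key="HYPERSTORE_SECRET_KEY",
)

# Any S3-compatible application call now lands on the local cluster.
s3.put_object(Bucket="veeam-backups", Key="smoke-test.txt", Body=b"hello")
print(s3.list_objects_v2(Bucket="veeam-backups")["KeyCount"])
```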

The new Veeam Ready – Object certification ratifies this integration. Veeam’s rigorous certification process stress-tests the storage platform with backup workloads of more than one million objects. The entire write operation (a Scale-Out Backup Repository, or SOBR, offload) must complete in under 4.5 hours, and a bulk delete must also complete within 4.5 hours. These criteria represent real-world scenarios in which a backup chain of expected size is offloaded to, and an expired backup chain deleted from, Cloudian HyperStore object storage.
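For a sense of scale, one million objects in 4.5 hours works out to a sustained rate of roughly 62 objects per second. The bulk-delete half of the test maps naturally onto the standard S3 DeleteObjects call, which accepts up to 1,000 keys per request. Below is a minimal sketch of that pattern; the endpoint, bucket, and key names are placeholders, not the actual Veeam test harness:

```python
# A rough sketch of a batched bulk delete against an S3-compatible
# endpoint. DeleteObjects takes at most 1,000 keys per request, so one
# million objects is about 1,000 round trips.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")  # hypothetical endpoint

def bulk_delete(bucket: str, keys: list[str]) -> None:
    for i in range(0, len(keys), 1000):
        batch = [{"Key": k} for k in keys[i:i + 1000]]
        s3.delete_objects(Bucket=bucket, Delete={"Objects": batch})

# Illustrative key names standing in for an expired backup chain.
keys = [f"sobr/backup-{n:07d}.blk" for n in range(1_000_000)]
bulk_delete("veeam-backups", keys)
```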

Veeam Ready – Object certification gives joint Veeam-Cloudian customers and prospects confidence that the joint solution — Veeam Cloud Tier and Cloudian HyperStore object storage — will function properly and perform at a level that satisfies the expected customer workload, even when the data requirement scales to a large size distributed across multiple locations.

To learn more about the solution, please visit cloudian.com/solutions/data-protection/veeam/

Refining Your GDPR Strategy – Addressing User Data

The European Union’s General Data Protection Regulation (GDPR) deadline for implementation has come and gone. Many organizations have achieved a basic level of compliance, so now is the time to dig deeper, tie up loose ends and try to simplify the process. It is also a time for organizations not directly impacted by GDPR to tighten up their data security and protection practices. Organizations around the world need to realize that GDPR was just the initial warning shot and concepts like data privacy are more critical than ever.

**This is a reprint of a blog published by Storage Switzerland on July 24, 2018. Join us for our upcoming live webinar “How to Design a Compliant and GDPR Ready Collaboration System” on July 26th at 11:30 am ET / 8:30 am PT.**

An area that deserves attention is user data: the files that users within an organization create and share with internal employees and external business partners. Protecting, managing, and ensuring compliance for user data is one of the more overlooked topics in the organization, yet user data is also one of the data sets most susceptible to a breach.

A major weakness is how users share their data with external parties. IT today has very little control over how users share data and even less oversight of when they share it. Most users still rely on consumer-grade, cloud-based file sync and share services, which give IT almost no control over with whom, or for how long, data is shared.

Enterprise File Sync and Share May Not Be Enough

The immediate answer to a file sharing problem is to move the organization to enterprise file sync and share (EFSS). These solutions do provide IT with oversight and control over how files are shared and by whom. The problem is that most providers of these solutions did not design them with compliance in mind. They may encrypt data at rest and in flight, but compliance with GDPR and the more stringent regulations to come requires more than encryption.

First, a compliant EFSS solution requires identity management. It should integrate with Active Directory, LDAP, and SAML for secure single sign-on. Second, the EFSS solution needs to cover more than just one data store. It needs to provide time expiration of shares, password protection, and download restrictions across all corporate data storage. It also needs to provide geolocation restrictions.

Third, the EFSS solution shouldn’t burden IT. For example, IT can’t be expected to predict every possible reason for sharing or not sharing a file. Instead, the solution should provide sharing policies where users are required to outline why they created a shared link; IT then reviews the justification. Additionally, the solution needs to provide full file-event auditing to track file access by date and time as well as by whom and why. File auditing also allows an organization to prove file deletion in response to a right-to-be-forgotten request.
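To make the auditing idea concrete, here is a hypothetical sketch of what a file-event audit record might capture. The field names are illustrative only, not the schema of any particular EFSS product:

```python
# A toy audit record for file events: who did what, when, and why.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FileAuditEvent:
    file_path: str       # which file was touched
    action: str          # e.g. "share_created", "download", "delete"
    actor: str           # who performed the action
    justification: str   # user-supplied reason for creating the shared link
    timestamp: str       # UTC time of the event

event = FileAuditEvent(
    file_path="/finance/q2-report.xlsx",
    action="share_created",
    actor="alice@example.com",
    justification="External audit review",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
# Append to an immutable log; deletion events later prove a
# right-to-be-forgotten request was honored.
print(json.dumps(asdict(event)))
```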

Finally, the EFSS solution needs to provide complete discovery of personal data. Personal data as defined by GDPR is any data that relates to an identified or identifiable natural person. Finding personal data within and across the organization is a big challenge. The EFSS solution needs to index content under its management so authorized users can search it.

StorageSwiss Take

User data is the most exposed data set in an organization, and it is also the most likely to violate regulations and corporate governance policies. Enterprise file sync and share needs to evolve beyond just simple file sharing with encryption to meet the challenge of GDPR and other upcoming data privacy laws. The answer is to manage file data as a unique data set and provide advanced capabilities like auditing and content search.

To learn more about modernizing EFSS as well as how to build a backend storage architecture to support it, join us for our upcoming live webinar “How to Design a Compliant and GDPR Ready Collaboration System” on July 26th at 11:30 am ET / 8:30 am PT.

How to Implement File Sharing for GDPR Compliance

Employees are going to share files. It’s an essential part of collaboration. For any project involving more than a few people, this is likely to involve a cloud-based file sharing solution. In environments requiring GDPR compliance, that can be a problem, especially when regulations dictate how data can be used and where it is stored, and require that you be able to find and delete information on request.

In EMEA, GDPR is now in effect. And in the US, one of the country’s toughest privacy regulations, the California Consumer Privacy Act of 2018, was voted into law on June 29.

New storage solutions can help you remain in compliance, but first let’s consider the problem.

GDPR Compliance Places New Demands on File Sharing

Users appreciate the simplicity of cloud-based file sharing, but this may come at the cost of IT control. In the cloud, do you know what data is being stored, how it is protected and who has access?

Loosely managed assets can run afoul of regulations that impose requirements to:

  • Maintain data within specific physical boundaries
  • Control use of personal data
  • Delete instances of personal data if requested (aka, “the right to be forgotten”)

When data is shared among users and further replicated across the cloud, control is lost and the potential penalties mount. From IT’s perspective, what’s just as troubling is that your ability to respond to regulatory demands may be lost. When you receive a data subject access request (DSAR), can you quickly find all instances of the information?

The right to be forgotten requires tight control. You cannot be sure of “forgetting” someone if you cannot locate every instance of their data. A single GDPR compliance lapse can cost the company many thousands of euros.
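As a rough illustration of what answering a DSAR demands, here is a minimal sketch that sweeps a bucket for objects tagged with a subject identifier. It assumes personal data has already been indexed into object metadata under a hypothetical "subject-id" key; a production system would use a proper content index rather than a full scan:

```python
# Sweep every object in a bucket and collect those whose metadata
# matches a given data subject. Endpoint and names are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")

def find_subject_objects(bucket: str, subject_id: str) -> list[str]:
    matches = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            head = s3.head_object(Bucket=bucket, Key=obj["Key"])
            if head["Metadata"].get("subject-id") == subject_id:
                matches.append(obj["Key"])
    return matches

# Every instance must be found before "forgetting" can be proven.
print(find_subject_objects("shared-files", "subject-12345"))
```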

Solution: Cloud-like File Sharing and On-Prem Storage with Cloudian + SME

Cloudian now offers a simple solution: Cloudian storage plus Storage Made Easy (SME) collaboration software.

The combined solution is cloud-like file sharing software and an on-prem storage system that is under your control… and behind your firewall.

This combines the best of both worlds:

  • Ease-of-use: A cloud-like experience for your users makes it easy to adopt and use the service
  • Your security framework: The shared data repository receives the same protection as any other file, and the same access controls (VPN, AD, LDAP)

This lets you handle collaboration just as you would manage and monitor any other file service, with the same controls, same firewall, and your preferred data protection method.

Personal Data / Personally Identifiable Information Management

Personal data, or PII, is central to GDPR compliance and data privacy laws. Passport numbers, social security numbers, credit card numbers, and the like are ideally not being shared at all, but we’ve seen too many instances of laptop theft resulting in the disclosure of sensitive PII.

The Cloudian/SME solution scans documents for PII, and takes action or sends notification as defined by your policy. Out of the box, it recognizes over 60 forms of PII, and you can add definitions to suit your needs.
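In spirit, pattern-based PII detection looks something like the toy sketch below. The two regexes are illustrative stand-ins; the actual Cloudian/SME scanner ships with more than 60 detectors and supports custom definitions:

```python
# A toy PII scanner: match text against named regex patterns.
import re

PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> dict[str, list[str]]:
    # Report every match, grouped by PII type, so policy can act on it.
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items()
            if pat.search(text)}

print(scan_for_pii("Card 4111 1111 1111 1111, SSN 123-45-6789"))
```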

Shared Links Include Time Limits and Password Protection

Shared links to files can be password protected and time limited, providing an additional level of control. No more evergreen links that can be widely shared outside of your control.
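Time-limited links are a native concept in the S3 world: a presigned URL carries its own expiry, after which it simply stops working. A minimal sketch (bucket and key are placeholders; password protection is enforced by the SME layer rather than by S3 itself):

```python
# Generate a download link that expires automatically after 7 days.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "shared-files", "Key": "contracts/nda.pdf"},
    ExpiresIn=7 * 24 * 3600,  # seconds; no more evergreen links
)
print(url)
```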

Easy-to-Use

The solution is as simple to use as any cloud service. Files can be accessed from Windows, Mac, Linux, iOS, and Android platforms. You can view files and folders in Explorer or Finder, as with any storage system, or within the app’s own UI. The included UI adds capabilities such as viewing the physical location of the file’s storage system, an important attribute for compliance. And you can see at a glance what personal data is present.

Highly Rated Storage

Best of all, the storage repository is Cloudian Object Storage, the most highly rated object storage system on Gartner Peer Insights. This limitlessly scalable system earned the highest “recommended” level at 96% positive, and the highest rating with 4.8 out of 5 stars. With up to 14 nines data durability and integrated data protection, it’s the ideal foundation for enterprise collaboration.

Find out more about this solution and GDPR compliance at cloudian.com/collaboration.

Survey: Media Archive Storage Adopting Hybrid Cloud and AI/ML

At the NAB Show 2018, Cloudian surveyed over 300 attendees about their media archive storage. The goal was to learn about the archive challenges faced by post-production professionals, and how these challenges are being met.

The results were surprising, indicating an industry very much in transition. Workflows are demanding more from media archive storage, driving capacity and search challenges. Media owners are responding by migrating away from tape, toward new disk and hybrid cloud solutions, and by integrating advanced tools to accelerate search. A cool infographic with the results is here.

Capacity Management Challenge with Growing Media Archive Storage

Capacity management is a growing challenge, with 4K media now pervasive and 8K media on the way. Furthermore, producers now want to capture and keep every bit of the shoot. Cameras are on all the time, and all of that footage now gets archived.

In our survey the average archive size was 1.2PB, and 52% of respondents reported ongoing capacity challenges.

Media Archive Storage Search Headaches Increasing*

* But solutions are on the way!

A majority of respondents (58%) reported ongoing problems with finding assets. We heard this admission more than once: “It’s sometimes easier to re-buy stock footage than to find the footage we already have.”

Search tools are not always as advanced as you might hope, with 30% of respondents saying they still use a combination of Excel sheets and labels on media. More surprisingly, that 30% figure was true for archives of all sizes, even those over 500TB.

Furthermore, users employing MAMs or other databases still had challenges, with 73% of those users reporting time-consuming management.

The good news is that help is on the way. A popular topic at the show was metadata, and how it can help with media archive storage search. 79% of respondents seek AI and machine learning tools that can enrich metadata to boost search intelligence.

Hybrid Cloud Combines Best of Both Cloud and On-Prem Media Archive Storage

Respondents indicated significant interest in hybrid cloud, with 78% indicating an interest in moving to hybrid cloud within three years, up from just 16% using hybrid cloud today.

A hybrid cloud combines on-premises storage and cloud storage in a single management environment. The combination provides the quick access and security of on-prem storage with the utility of cloud storage. In this setting, the cloud might be used for disaster recovery copies, content distribution, or with cloud-based applications for transcoding or metadata enrichment.

Tape is Going Away

Tape as an archive medium appears to be on the decline, with 51% of tape users saying they plan to move away from it in the next three years. The reasons given are: too much management overhead (a tape lasts only 7 years, according to Steve Anastasi, VP of Global Media Archives at Warner Bros.), too much physical handling, unreliability, and slowness in finding and retrieving assets.

Comments from Media Professionals

Speakers at NAB echoed the same themes in the talks linked below. Media archive storage is in a transition that will make it easier to store, find and protect your company’s most valuable assets. Learn more about how Cloudian helps here, and download the cool infographic here.

Listen to Matt Yonks, archivist for Saturday Night Live, talk about his move from tape here:

Listen to Shane Miner of WGBH (Frontline, Nova, Antiques Roadshow) talk about his move from tape here:

Listen to Sarah Semlear of Vox Media (SB Nation) talk about her move from tape here:

WGBH Accelerates Archive with Hybrid Cloud and Object Storage

How WGBH Improved Active Archive Storage with Hybrid Cloud and Object Storage

At the end of every program produced by a PBS station, there’s a little sting – a video/audio combination that tells you which station produced it. KQED, WNET and KCET are among the most familiar to viewers, but one station’s sting is fairly ubiquitous across the PBS schedule: WGBH in Boston.

WGBH produces four out of 10 programs that PBS offers to its affiliates. That means the station has a very busy set of production teams working on shows like Masterpiece, American Experience, Antiques Roadshow, Nova and many more. They’ve been at this for a long time, too – meaning their archive spans 50 years. It includes hard drives, tapes and even reels of film from years gone by.

At the NAB Show in April 2018 – at 4 p.m. on April 11 – WGBH will present a session called “How (and Why) We Built a Hybrid Cloud Active Archive.” The why is pretty clear – to get production teams content faster, to make the archives more readily searchable, to create a disaster-resilient storage strategy for preserving a half-century of priceless content, and to scale to accommodate more content shot in increasingly data-intensive formats.

The how involves active archive storage with Cloudian in a hybrid cloud configuration. By combining on-premises object storage with Amazon S3 for disaster recovery, the WGBH team combined fast search and ready access with automatic data replication to the cloud.

For more information on this fascinating look into how active archive storage can capitalize on object storage scalability for even the busiest media production organizations, check out this link to WGBH’s session, or read this case study that explains in depth how and why WGBH made the leap to object storage.

At NAB Las Vegas: New Solutions to Store, Protect, and Find Archive Media

If you’re at NAB Las Vegas 2018, a visit with Cloudian will offer you new media storage insights.

You know the impact of technology. Not only does it allow you to do more things than you ever thought it could, it also creates a new set of complexities: Where are things stored? How fast can I retrieve them? Are they safe and secure? And how long can I keep adding content to my storage system before it stops working?

If those questions sound familiar, you owe it to yourself to stop by the Cloudian booth (SL6321) at the NAB show in Las Vegas. Cloudian’s M&E experts will explain how object storage can make you more productive, make scaling a breeze, and make it much easier to find the content you need.

Not to spoil the surprise, but they’ll tell you about a specific workflow. First, you’ll do your editing in an application like Adobe Premiere, or perhaps catalog using a MAM. That content can be extensively tagged with metadata describing its details. From there, the content is stored in HyperStore, and the media files can then be replicated to the cloud for disaster recovery purposes.

The cloud can also be used to apply AI to the media – for instance, to search for faces or landmarks – and then automatically add additional metadata. Not only does this enable you to gain an even more granular view of your content, it allows elastic search to work increasingly well over time, enabling you to find what you need when you need it. Wherever your files go, the metadata goes with them; you can pull them back into a MAM and the metadata comes with them, allowing you to search within the MAM environment, if so desired.
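As a minimal sketch of that metadata flow, the snippet below tags a clip at upload time and later filters the archive on those tags. The endpoint, bucket, and tag names are illustrative, and a real deployment would search via a MAM or index rather than listing every object:

```python
# Tag a media asset with descriptive metadata, then find it again later.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")

# Metadata is stored with the object and travels with it when replicated.
s3.upload_file(
    "clip_0412.mxf", "media-archive", "2018/clip_0412.mxf",
    ExtraArgs={"Metadata": {"location": "las-vegas", "subject": "keynote"}},
)

# Later: list the archive and pick out everything shot in Las Vegas.
for obj in s3.list_objects_v2(Bucket="media-archive").get("Contents", []):
    head = s3.head_object(Bucket="media-archive", Key=obj["Key"])
    if head["Metadata"].get("location") == "las-vegas":
        print(obj["Key"])
```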

It’s an elegant solution to what can be a very complex and thorny problem – and it makes more sense when you see it in person than when you read it in print. That’s why you need to come by the Cloudian booth at the NAB show to see it in person. We’ll make it worth your while – visitors will have a chance to win a Ducati motorcycle or $20,000 in cash. And, on April 10, we’ll be offering free drinks in a happy hour event in our booth with our partner WWT.

Not only will the booth feature speakers from many Cloudian customers and sponsors, but a session dedicated to the value of object storage, featuring WGBH and a long-running late-night weekend comedy program, will delve deeper into object storage’s impact on archives. Register for the session here!

One Man’s Buying Journey to the Dark Side and Back

It’s the classic IT question: Do I stick with the incumbent vendor, or look at the alternatives? Here’s a story of one IT manager’s 2 1/2 year-long journey to hell and back.

Since this is the Cloudian blog, you can guess the ending, but it’s a fun read provided by Cloudian partner, SymStor.

The IT manager writing here prefers to remain nameless. Enjoy!

Well here we go…

I’ve just started a meeting with my incumbent storage vendor, and today they want to talk to us about data protection. They are polite and professional, and introduce us to our new account manager, the third one this year. So, with a little smirk, I note that the first account manager left, the second has been moved to another vertical, and the third… well… err… his watch is bigger than his face and I’m finding it a little difficult to take him seriously.

The vendor pitch…

Round table introductions have now been completed, and they start the PowerPoint slides. Not one of them has asked about how our current environment is performing, what we have, what our requirements are, or what sort of issues we experience. They’ve just jumped straight into presentation mode.

Let the blind-siding begin, I thought. They have bedazzled everyone in the room, including my manager who makes the final decision, with an amazing data protection PowerPoint showing how they can solve all our problems.

A momentary pause arrives. I grab the chance to intervene and start to explain our current issues and how we envisage the future of data protection for our environment taking shape. As soon as I take a breath, the incumbent hijacks the conversation half-way through and describes how our infrastructure is architected with ANOTHER slide. This time, the slide details how applications, virtual and physical systems can be protected with cloud integration scalability.

At that point, I switch off and wait to voice my concerns to my colleagues and the management after the meeting.

Let’s fast forward six months; the decision’s been made… long sigh… whilst I exhale! My company has signed a five-year deal with ‘said’ incumbent vendor. The vendor’s professional services team rocks up and starts deploying the “NEW” data protection platform. They are a nice bunch of chaps… wait for it… and they will be on-site for the next three to six months deploying, configuring and tweaking the platform.

What got delivered…

As I got started and into the swing of it (I didn’t want the project to fail), I began to feel that certain aspects of the truth were being stretched a little thin. The virtual backup server struggles to be “highly available”. We need to deploy multiple servers for reporting, media and device management. Every hypervisor storage platform requires a virtual proxy… and the list goes on and on.

In total, the following products and servers had to be deployed to enable data protection for up to 2,500 servers:

  • 1 x Backup server
  • 1 x Backup Management Console
  • 1 x Reporting Database server
  • 1 x Reporting Application server
  • 1 x License Manager server
  • 4 x Media & Device Management nodes
  • 7 x Virtual proxies for backing up virtual machines
  • 2 x Data target deduplication appliances
  • 2 x Cloud appliances (different deduplication algorithm than above?)
  • 2 x Virtual Machine Recovery appliances

That’s just bonkers…

The above deployment comprised seven different technologies that were supposedly integrated and seamless. Casting my mind back, I referred to the copy of the PowerPoint that the vendor left us. They must really pay their marketing department big bucks to produce such a slick slide deck!

Operational nightmare and the cost…

Now, fast forward 2.5 years and I’m rapidly losing the will to be in this industry. Over the last 30 months, we opened 47 support calls. Remember, we are half-way through our five-year term, and now we need to pay for more hardware to expand our deduplication appliances, increased support renewal costs, and professional services to come back and upgrade our backup software, etc.

What’s that on the horizon?

The moment I took that call, the voice on the other end of the phone claimed that they do backup better, simpler, faster and with a TCO that will stack up against all the upgrades that I am about to be burdened with from our existing vendor.

Not sure if I should laugh or cry at this point… The chap on the phone said he was technical and proceeded to ask me questions about our existing environment: capacities, applications, platforms, issues, challenges and so on.

At this point I had nothing to lose, so I opened up and talked to them for over an hour. It was an enjoyable hour, as they were genuinely interested in making sure that they could deliver everything that I wanted.

At first, I was pessimistic, with a pinch of “all vendors tell me that they can do it all”. So, we agreed to have an online demonstration a few days later to give me a chance to herd my co-workers together.

Well, what can I say? The demonstration was amazing, easy to use, simple to configure and deploy. All applications were protected, ultra-fast search was available for backup data, application recovery was a no brainer and recovery of systems on-premise or in the cloud was just…. WOW!

How do we proceed, we asked? “Well, what we would like to do is understand how much you have to spend over the next three years on your existing backup platform, including software upgrades, hardware expansion, maintenance renewals and professional services engagements.”

All this was shared with the company, which performed due diligence and kept it realistic. They worked out that we would save £1.3 million over the next three years compared with the cost of keeping our existing backup platform.

The result…

The company officially presented to our management team. Our finance team verified and validated the savings. Two weeks of discussions followed, and we then agreed that a purchase order be placed with terms, conditions and criteria to be met for their solution to replace the existing data protection platform.

What did SymStor deliver?

SymStor spent four days implementing the whole solution, two of which were spent on physical racking and stacking. The other two days were spent configuring data protection, archiving and cloud tiering policies.

Do you know what the nicest thing was? The customer thanked SymStor for delivering such an easy-to-use data protection platform.

Bring Object Storage to Your Nutanix Cluster with Cloudian HyperStore

Your Nutanix-powered private cloud provides fast, Tier 1 storage for the information you use every day. But what about the information that’s less frequently used, or requires more capacity than your Nutanix cluster has to spare? Cloudian HyperStore is on-prem storage that provides extra capacity for your large-scale storage demands.

HyperStore Enterprise Object Storage Overview

Cloudian HyperStore is petabyte-scalable, on-prem object storage for unstructured data. It employs the S3 interface, so most applications that include public cloud connectivity will work with HyperStore.

Like Nutanix, HyperStore is a scale-out cluster. When you need more capacity you simply add nodes. All capacity resides within a single namespace, so it remains easy to manage. Key features of Cloudian HyperStore include:

  • 100% native S3 interface, so it works with most cloud-enabled applications
  • Scales from TBs to PBs without disruption
  • Fourteen-nines data durability with erasure coding and replication
  • 70% less cost than traditional NAS

Scalable Storage for Data-Intensive Applications

Cloudian HyperStore’s scalability and exceptional data durability make it ideal for use cases such as:

  • Backup and archive: Scalable backup target, compatible with Veritas, Commvault, Veeam, and Rubrik data protection solutions
  • Media and entertainment: HyperStore provides an active archive that’s 100X faster to access than tape, and ⅓ the cost of NAS; compatible with most media asset managers.
  • File management: Offload Tier 1 NAS to extend capacity with zero user disruption

HyperStore is guaranteed compatible with all applications that support the S3 interface, the same interface used by AWS and Google GCP. Think of HyperStore as hyperconverged storage, bringing together multiple data types to one, super-scalable pool.

Multiple Deployment Options

Choose from multiple HyperStore deployment options including:

  • HyperStore within your Nutanix cluster: Run HyperStore software on a Nutanix VM and store data to your Nutanix disk. No additional hardware required. A fast, cost-effective way to get started or to develop S3-enabled applications.
  • HyperStore as a stand-alone appliance: Deploy HyperStore appliances in your data center for high-capacity, cost effective storage. Locate all nodes locally, or spread them out across multiple locations for distributed storage.

Nutanix is the perfect platform for your frequently used or performance-sensitive data. For everything else, there’s Cloudian. To learn more about our work with Nutanix, come find us at Nutanix .NEXT 2017 at booth G7. Additionally, Sanjay Jagad, our Director of Products and Solutions, will be presenting on how to bring object storage to your Nutanix cluster on June 30th, 11:15am in room Maryland D.

To learn more about Cloudian and sign up for a free trial, visit us at https://cloudian.com/free-trial/.

SNL Deploys Cloudian Object Storage for Active Archive

At the NAB Show in April, the folks from Saturday Night Live delivered a great talk on their next-generation active archive. They are migrating to Cloudian object storage, and away from tape, a change that is already delivering operational benefits. This project is worth a close look, as it speaks to the unique attributes of object storage and why it’s ideally suited to media archive applications. (Storage Switzerland just posted a blog post on this talk here)

The talk was delivered by Matt Yonks, who has spent the last 19 years at SNL as their post-production supervisor. Matt discussed three fundamental challenges SNL was looking to solve, all of which are familiar to anyone managing a media archive:

  1. Scaling the environment: A perennial challenge for any archivist is ensuring sufficient capacity on hand. SNL has an impressive archive, with more than 40 years of history, all of which is now digitized. SNL has multiple PBs of data, but the problem can be just as vexing in any studio. Many production companies now shoot exclusively in 4K, consuming a TB of storage with just three hours of shooting.

  2. Grappling with the chain of dependencies: Ensuring access to assets requires that all parts of the delivery stack work together. Application software, drivers, components and operating systems must all work together. If any part of the chain becomes obsolete or out-of-support, it can immediately affect access. For a media archive, where longevity and assured access are fundamental assumptions, this is an ongoing risk. You can keep certain systems “under glass” to maintain access, but this approach ultimately has its limits.

  3. Facilitating search: Matt put this succinctly in his talk. “An asset is only as good as your ability to find it.” The job of finding media is typically left to the media asset manager software, but this too has its limits. Assets move across regions. MAMs can themselves become obsolete. Making sure assets are findable, even after 20 or 40 years, is the ultimate goal of an archive.

 

Object storage addresses each of these with elements unique to this storage type.

Scalability: Object storage is not limited in scale. With a flat file layout and shared-nothing cluster architecture, capacity and performance both expand with added nodes.

Open architecture: The chain of dependencies that vexes other storage types becomes a non-issue with object storage for three reasons. First, object storage employs internet protocols and an API. There is no specific driver software. Second, objects themselves are portable. Users can, for example, migrate objects from Cloudian to Amazon S3 and immediately access them with cloud-based applications. Third, object storage is built on industry-standard hardware, thus eliminating dependence on proprietary hardware.
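The portability point can be illustrated in a few lines: the same client library talks to both systems, so moving an object from HyperStore to Amazon S3 is a get followed by a put. A minimal sketch with placeholder endpoints and names (a real migration would stream multipart uploads rather than buffering whole objects):

```python
# Copy one object from an on-prem HyperStore cluster to Amazon S3.
import boto3

hyperstore = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")
aws = boto3.client("s3")  # default AWS endpoint

obj = hyperstore.get_object(Bucket="archive", Key="1999/ep25.mxf")
# Buffers in memory for simplicity; fine for a sketch, not for petabytes.
aws.put_object(Bucket="cloud-archive", Key="1999/ep25.mxf", Body=obj["Body"].read())
```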

Metadata for search: Object storage is built for search. Each object includes metadata that describes the contents. Generated by applications or users, that metadata facilitates search irrespective of where the object is located. Whether on-prem or in the cloud, any specific asset can always be found with a search tool. In the case of SNL, their Evolphin MAM maintains a copy of the metadata set within the MAM itself, and a second copy with each object, thus ensuring long-term access and peace-of-mind for the archive manager.

The SNL example is a great use case that demonstrates the value of object storage and its key attributes — scalability, metadata, open architecture – in solving large-scale storage challenges. 

Object Storage Bucket-Level Auto-Tiering with Cloudian

As discussed in my previous blog post, ‘An Introduction to Data Tiering’, there is huge value in using different storage tiers within a data storage architecture to ensure that your different data sets are stored on the appropriate technology. Now I’d like to explain how the Cloudian HyperStore system supports object storage ‘auto-tiering’, whereby objects can be automatically moved from local HyperStore storage to a destination storage system on a predefined schedule based upon data lifecycle policies.

Cloudian HyperStore can be integrated with any of the following destination cloud storage platforms as a target for tiered data:

  • Amazon S3
  • Amazon Glacier
  • Google Cloud Platform
  • Any cloud service offering S3 API connectivity
  • A remotely located Cloudian HyperStore cluster

Granular Control with Cloudian HyperStore

For any data storage system, granularity of control and management is extremely important – data sets often have varying management requirements with the need to apply different Service Level Agreements (SLAs) as appropriate to the value of the data to an organisation.

Cloudian HyperStore provides the ability to manage data at the bucket level, providing flexibility at a granular level to allow SLA and management control (note: a “bucket” is an S3 data container, similar to a LUN in block storage or a file system in NAS systems). HyperStore provides the following as control parameters at the bucket level:

  • Data protection – Select from replication or erasure coding of data, plus single or multi-site data distribution
  • Consistency level – Control of replication techniques (synchronous vs asynchronous)
  • Access permissions – User and group control access to data
  • Disaster recovery – Data replication to public cloud
  • Encryption – Data at rest protection for security compliance
  • Compression – Reduction of the effective raw storage used to store data objects
  • Data size threshold – Variable storage location of data based upon the data object size
  • Lifecycle policies – Data management rules for tiering and data expiration

Cloudian HyperStore manages data tiering via lifecycle policies.

Auto-tiering is configurable on a per-bucket basis, with each bucket allowed different lifecycle policies based upon rules; a minimal API sketch follows the list below. Examples of these rules include:

  1. Which data objects to apply the lifecycle rule to. This can include:
  • All objects in the bucket
  • Objects for which the name starts with a specific prefix (such as prefix “Meetings/2015/”)
  2. The tiering schedule, which can be specified using one of three methods:
  • Move objects X number of days after they’re created
  • Move objects if they go X number of days without being accessed
  • Move objects on a fixed date — such as December 31, 2016
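Since HyperStore exposes the standard S3 lifecycle API, a rule like the prefix example above can be expressed programmatically. A hedged sketch (the endpoint, bucket, and storage class names are illustrative; the actual tiering destinations are configured in HyperStore itself):

```python
# Define a lifecycle rule: tier objects under "Meetings/2015/" thirty
# days after creation.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")

s3.put_bucket_lifecycle_configuration(
    Bucket="meetings",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-old-meetings",
            "Filter": {"Prefix": "Meetings/2015/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```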

When a data object becomes a candidate for tiering, a small stub object is retained on the HyperStore cluster. The stub acts as a pointer to the actual data object, so the data object still appears as if it’s stored in the local cluster. To the end user, there is no change to the action of accessing data, but the object does display a special icon denoting the fact that the data object has been moved.

For auto-tiering to a Cloud provider such as Amazon or Google, an account is required along with associated account access credentials.

Accessing Data After Auto-Tiering

To access objects after they’ve been auto-tiered to public cloud services, the objects can be accessed either directly through a public cloud platform (using the applicable account and credentials) or via the local HyperStore system. There are three options for retrieving tiered data:

  1. Restoring objects – When a user accesses a data file, they are directed to the local stub file held on HyperStore, which then redirects the request to the actual location of the data object (the tiered target platform); a restore-API sketch follows these lists.

A copy of the data object is restored to a local HyperStore bucket from the tiered storage, and the user request is performed on the object once it has been copied back. A time limit can be set for how long to retain the retrieved object locally before it returns to the secondary tier.

This is considered the best option to use when accessing data relatively frequently and you want to avoid any performance impact incurred by traversing the internet and any access costs applied by service providers for data access/retrieval. Storage capacity must be managed on the local HyperStore cluster to ensure that there is sufficient “cache” for object retrievals.

  2. Streaming objects – Streams data directly to the client without restoring the data to the local HyperStore cluster first. When the file is closed, any modifications are made to the object in situ on the tiered location. Any metadata modifications will be updated both in the local HyperStore database and on the tiered platform.

This is considered the best option when accessing data relatively infrequently and storage capacity on the local HyperStore cluster is a concern. Performance will be lower, as the data requests traverse the internet, and the service provider may apply access costs every time the file is read.

  3. Direct access – Objects auto-tiered to public cloud services can be accessed directly by another application or via your standard public cloud interface, such as the AWS Management Console. This method fully bypasses the HyperStore cluster. Because objects are written to the cloud using the standard S3 API, and include a copy of the object’s metadata, they can be referenced directly.

Storing objects in this openly accessible manner — with co-located rich metadata — is useful in several instances:

  1. A disaster recovery scenario where the HyperStore cluster is not available
  2. Facilitating data migration to another platform
  3. Enabling access from a separate cloud-based application, such as content distribution
  4. Providing open access to data, without reliance on a separate database to provide indexing
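For the first option above, a restore can also be requested through the standard S3 restore call, which is how Glacier-style retrievals are normally driven. A minimal sketch, assuming HyperStore honors this API for tiered objects (names are placeholders, and the Days value mirrors the local retention window described earlier):

```python
# Ask for a tiered object to be copied back to local storage for 7 days.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.hyperstore.example.com")

s3.restore_object(
    Bucket="meetings",
    Key="Meetings/2015/board-review.mp4",
    RestoreRequest={"Days": 7},  # how long the retrieved copy stays local
)
```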

HyperStore provides great flexibility for leveraging hybrid cloud deployments where you get to set the policy on which data is stored in a public or private cloud. Learn more about HyperStore here.

SNL and Object Storage: Archiving Media Assets for the Long Run

Picture all of your media assets today. How much space do they take up, and how well does your current storage solution work? Now what if you had over 40 years of assets? Would the same solution work just as efficiently?

Tape storage is currently the preferred method for archiving media assets, but tape is a limited-life solution with many different ways it can be compromised. When thinking long-term, tape becomes less and less viable.

For a prime example of why we need to move away from tape storage, let’s look at Saturday Night Live. One of the longest-running network programs in the US, SNL has generated 42 seasons of content consisting of 826 episodes and 2,966 cast members. In terms of data, that’s 42 years’ worth of archive data, multiple petabytes across two data centers.

That’s a lot of data, and for SNL, having a huge archive is useless unless they can easily access it. That’s why SNL utilized object storage to help digitize and store their 42 years of assets. Each asset can be tagged with as many metadata tags as needed, making it easy and fast to find, organize, and assemble clips from the show’s long history.

If your media assets are just sitting in cold storage, it may be time to rethink your strategy. By creating an efficient archival solution today, you can accelerate your workflows and continue to monetize those assets 40 years from now, just as SNL is doing today. 

We’ll be delving further into this topic at NAB along with Matt Yonks, who is the Post Production Supervisor for Saturday Night Live. The session will take place on April 25 at 3:30pm and will include a drawing for a 4K video drone. Register early for extra chances to win!