Building and Protecting Data Lakehouse Projects with Cloudian and Vertica

See how to start a data lakehouse with Vertica Eon Mode and Cloudian, extend the data lakehouse with Vertica external tables and Cloudian, and protect Vertica datasets with data backup to Cloudian.


Over the past year, Cloudian has greatly expanded its support for data analytics through new partnerships. One of those key partnerships is with Vertica, where the combination of Vertica and Cloudian HyperStore enables organizations to build and protect data lakehouses for modern data analytics applications.

This blog highlights the three main use cases we’re currently serving together:

  • Starting a data lakehouse with Vertica in Eon Mode and Cloudian
  • Extending the data lakehouse with Vertica external tables and Cloudian
  • Protecting Vertica datasets with data backup to Cloudian

Just as a reminder, Vertica is a unified analytics data warehouse platform, based on a massively scalable architecture, and Cloudian is a software-defined, limitlessly scalable, S3-compatible object storage platform for on-premises and hybrid cloud environments.

Starting a Data Lakehouse with Vertica in Eon Mode and Cloudian

In the data analytics space, Vertica is known for performance, whether it runs in “Enterprise Mode” or “Eon Mode.” In Enterprise Mode, each database node stores a portion of the dataset and performs a portion of the computation. In Eon Mode, Vertica brings its cloud architecture to on-premises deployments by decoupling compute and storage: each Vertica node can access a shared communal storage space via the S3 API. The advantages are that a) compute can be scaled as required without having to scale storage, meaning no more server sprawl, and b) storage can be consolidated into a single platform and accessed by various data tools.

Building out Vertica communal storage on Cloudian is easy. For this exercise, we assume we have both a functional Vertica instance and a Cloudian HyperStore instance that can communicate via HTTP(S):

  1. Configure a bucket via Cloudian Management Console (CMC) on your HyperStore cluster:
      • Let’s use the name “verticabucketoncloudian” for this example.

  2. Create an auth_params.conf file:
    • On your Vertica node, create an auth_params.conf file that will be accessible when you create the Vertica database instance.
      The required auth_params.conf values are:

      awsauth = Access_Key:Secret_Key
      awsendpoint = HyperStoreAddress:Port (either 443 or 80)
      awsenablehttps = 0 (required only if not using HTTPS)
  3. Create your Vertica in Eon Mode database instance:
    • On your Vertica node, create the database instance. Specify the location of your auth_params.conf file to leverage a Cloudian S3 bucket for communal storage.

      admintools -t create_db -x auth_params.conf \
      --communal-storage-location=s3://verticabucketoncloudian \
      --depot-path=/home/dbadmin/depot --shard-count=6 \
      -s vnode01,vnode02,vnode03,vnode04,vnode05,vnode06 -d verticadb -p 'YourDBAdminPasswordHere'
  4. Success! Let’s test.
    • Once the above command returns successfully, you can test the Vertica in Eon Mode instance.
    • Connect to your db instance and load a dataset.
    • Connect to the bucket “verticabucketoncloudian” via the CMC or an S3 browser, and you will see objects in the bucket.
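As a minimal smoke test of the new Eon Mode database, something like the following can be run from vsql (the table name and values here are illustrative, not from the original setup):

```sql
-- Illustrative smoke test: create a table, load a row, read it back.
CREATE TABLE eon_smoke_test (id INT, note VARCHAR(64));
INSERT INTO eon_smoke_test VALUES (1, 'hello from Eon Mode on Cloudian');
COMMIT;
SELECT * FROM eon_smoke_test;
-- After the commit, new objects should be visible in verticabucketoncloudian.
```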

Extending the Data Lakehouse with Vertica External Tables and Cloudian

One of the key tenets of a successful data lakehouse initiative is the ability to access and analyze datasets that have been generated by other analytics platforms.

Prior to the data lakehouse, an ETL (Extract Transform Load) operation would have been required to move data from one analytics platform to another. Today, Vertica can analyze the data in-place by leveraging external tables, without the need for complex and expensive data moves.

Let’s consider the following scenario: we have an ORC dataset, generated by an Apache Hive instance and stored on Cloudian, that we need to connect to with Vertica. To analyze this dataset in place, use the following Vertica syntax to connect to the ORC dataset:
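As a hedged sketch (the table name, columns, and bucket path below are illustrative assumptions), a Vertica external table over an ORC dataset on Cloudian looks roughly like this:

```sql
-- Hypothetical Hive-generated ORC dataset stored on Cloudian
CREATE EXTERNAL TABLE hive_sales (
    order_id   INT,
    order_date DATE,
    amount     FLOAT
) AS COPY FROM 's3://hivebucketoncloudian/warehouse/sales/*' ORC;

-- Query the data in place; no ETL required
SELECT COUNT(*) FROM hive_sales;
```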

That is much simpler and easier than working through a data ETL process.

Here are the details for the S3 parameters and configuration.
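For reference, the session-level S3 parameters can be set in Vertica as follows (the endpoint and credentials shown are placeholders):

```sql
-- Point the session at the Cloudian HyperStore endpoint (placeholder values)
ALTER SESSION SET AWSEndpoint = 'hyperstore.example.local:80';
ALTER SESSION SET AWSAuth = 'accesskeyid:secretaccesskey';
ALTER SESSION SET AWSEnableHttps = 0;  -- only when HTTPS is not used
```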

Protecting Vertica Datasets with Data Backup to Cloudian

As with all datasets, backups are key to protecting and preserving data. For this purpose, Vertica has its own backup and recovery tool called “vbr,” and Vertica can leverage Cloudian as a backup target.

Vertica has thoroughly documented the process, but here’s a condensed version:

  1. Configure connectivity and credentials for HyperStore
    1. HyperStore credentials are important. They are configured within the database as a security function, and as environment variables to allow vbr to connect.
      • For the database that is going to be backed up, set the AWSAuth credentials (S3 credentials):
        ALTER DATABASE DEFAULT SET AWSAuth = 'accesskeyid:secretaccesskey';
    2. Configure vbr HyperStore URL address and credentials

      export VBR_COMMUNAL_STORAGE_ENDPOINT_URL=http://
      export VBR_COMMUNAL_STORAGE_ACCESS_KEY_ID=
      export VBR_COMMUNAL_STORAGE_SECRET_ACCESS_KEY=
      export VBR_BACKUP_STORAGE_ENDPOINT_URL=http://
      export VBR_BACKUP_STORAGE_ACCESS_KEY_ID=
      export VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY=

      • Keep in mind that you can back up to the same endpoint using the same credentials as the communal storage, but to a different bucket. Or backup can be to a second endpoint with different credentials. Most users will want to back up to a different bucket to reduce associated cost.
  2. Setting the configuration file for vbr
    1. Some additional parameters must be stored in a configuration file for Vertica to successfully back up and restore with Cloudian.
    2. Create a file called “eon_backup_restore.ini” in the home directory of dbadmin.
      As a quick reference, /opt/vertica/share/vbr/example_configs contains examples for cloud backups

      eon_backup_restore.ini
      [CloudStorage]
      cloud_storage_backup_path = s3://verticabackuponcloudian/fullbackup/
      cloud_storage_backup_file_system_path = []:/home/dbadmin/backup_locks_dir/
      cloud_storage_concurrency_backup = 10
      cloud_storage_concurrency_restore = 10
      [Misc]
      snapshotName = EONbackup_snapshot
      tempDir = /tmp/vbr
      restorePointLimit = 1
      [Database]
      dbName =
      dbPromptForPassword = True
      dbUser = dbadmin
  3. Target initialization and performing data backup
    1. Vertica requires the S3 bucket to be initialized prior to use
      • vbr -t init -c eon_backup_restore.ini
        Initializing backup locations.
        Backup locations initialized.
    2. Run the Vertica backup
      • vbr -t backup -c eon_backup_restore.ini
        Enter vertica password:
        Starting backup of database VMart.
        Participating nodes: v_vmart_node0001, …, v_vmart_node0006.
        Snapshotting database.
        Snapshot complete.
        Approximate bytes to copy: x of y total.
        [================================================] 100%
        Copying backup metadata.
        Finalizing backup.
        Backup complete!
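A backup is only useful if it can be restored. As a hedged sketch (using vbr’s standard task names; consult the Vertica documentation for the exact restore prerequisites), the restore side looks like this:

```shell
# List the restore points recorded in the backup location
vbr -t listbackup -c eon_backup_restore.ini

# Restore from the backup; for a full restore the database must be down
vbr -t restore -c eon_backup_restore.ini
```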

I hope this tech blog post helps make your Cloudian and Vertica data lakehouse project a success.

For more information about Cloudian data lakehouse / data analytics solutions, go to S3 Data Lakehouse for Modern Data Analytics.

 


Henry Golas, Director of Technology, Cloudian

View LinkedIn Profile

VMware Cloud Service Providers Can Expand Their Business with Ransomware Protection

By helping their customers protect against ransomware, VCSPs can grow their footprint with existing clients and attract new ones. It’s easy with Object Lock technology.

 

Van Flowers, Senior Systems Engineer, Cloudian

View LinkedIn Profile

 




VMware Cloud Service Providers (VCSPs) have emerged as an excellent alternative to hyperscalers for organizations that want to store their data in the cloud. By employing object storage, such as Cloudian HyperStore, these service providers can deliver hyperscaler-like scalability and flexibility while addressing organizations’ individual performance, data sovereignty, budget and security needs. Data security has become especially important due to the proliferation of ransomware attacks over the past few years, and VCSPs that can help their customers protect their data against this threat can expand their business by growing their footprint with existing clients and attracting new ones.

The best way to protect data against ransomware is with data immutability using Object Lock technology. As perimeter security solutions increasingly prove ineffective in preventing ransomware from getting in, having an immutable data backup copy ensures that this data cannot be deleted or encrypted. In the event of a ransomware attack, organizations can easily recover the unchanged backup without having to pay ransom.

Object Lock can be implemented as part of an automated backup workflow. For example, VCSPs using Veeam, Rubrik or Commvault can deploy Cloudian’s S3 Object Lock to seamlessly integrate a ransomware-proof, immutable S3 bucket into their customer backup solution.

So how do you do it? All that’s needed to create the immutable bucket is to tick a slider in the bucket-creation dialog, and you’re in the ransomware protection business! This simple task is performed at the time the bucket is created, and once created, the data written into the bucket cannot be deleted or altered in any way until the defined immutability period expires. The bad guys – even a rogue administrator – can’t change it, but restores can be done in a flash!
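The same immutable bucket can also be created programmatically. As a hedged sketch (the endpoint URL and bucket name are placeholders), any S3-compatible client can enable Object Lock at bucket-creation time:

```shell
# Create a bucket with Object Lock enabled via the standard S3 API
# (Object Lock must be enabled when the bucket is created)
aws --endpoint-url https://hyperstore.example.local s3api create-bucket \
    --bucket immutable-backups \
    --object-lock-enabled-for-bucket
```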

Ransomware attacks are growing by the thousands every day, and VCSPs’ ability to offer this vital protection for their customers’ data can be a key contributor to continued growth, and to their customers’ peace of mind.

VCSPs – don’t let another minute go by without having immutable storage available for your customers. Nothing is easier to configure and integrate, or does more to fortify your customers’ data security, than immutable storage built on Cloudian. Contact your local Cloudian representative for more information or drop me a note ([email protected]) or a tweet (@avf925). I’m always happy to help!

And to learn more about Cloudian solutions for VCSPs, visit Object Storage for VMware Cloud Director | Cloudian.

Fight Kubernetes Ransomware with Kasten and Cloudian

Adam Bergh
Cloud Native Technical Partnerships at Kasten by Veeam
LinkedIn Profile


Amit Rawlani
Director Technology Alliances, Product & Solution Marketing, Cloudian Inc.

LinkedIn Profile

The threat of ransomware should be treated as a serious problem for all enterprises. According to an annual report on global cyber security, there were 304 million ransomware attacks worldwide in 2020 — a 62% increase from 2019. While most IT organizations are aware of the continuously rising threat of ransomware to traditional applications and infrastructure, modern applications running on Kubernetes are also at risk. The rapid rise of critical applications and data moving into Kubernetes clusters has caught the attention of those seeking to exploit what is perceived to be a new and emerging space. This can leave many organizations ill-prepared to fight back.

Kubernetes Vulnerabilities

Kubernetes itself and many of the most common applications that run on it are open source, meaning the underlying code is freely available for anyone to review and probe for vulnerabilities. While not overly common, this can lead to exploitable bugs being discovered by malicious actors. In addition, misconfigured access controls can unintentionally lead to unauthorized access to applications or even the entire cluster. Kubernetes is updated quarterly, and some applications as often as every week, so it’s crucial for organizations to stay up to date with patching.

Surprisingly, many organizations that use Kubernetes don’t yet have a backup and recovery solution in place — which is a last line of defense against an attack. As ransomware becomes more sophisticated, clusters and applications are at risk of being destroyed, and without a means to restore them, you could suffer devastating data and application loss in the case of an attack.

What to Look for In a Kubernetes Ransomware Protection Platform

When looking for an effective defense against ransomware in your K8s environment, think about these core capabilities:

  1. Backup integrity and immutability: Since backup is your last line of defense, it’s important that your backup solution is reliable, and it’s critical to be confident that your backup target storage locations contain the information you need to recover applications in case of an attack. Having guaranteed immutability of your backup data is a must.
  2. High-performance recovery: No one wants to pay a ransom because decrypting the data was faster than recovering it from the backup system. The ability to recover resources quickly is critical, as the cost of ransom typically increases over time. You need to be confident that your recovery performance can meet target requirements even as the amount of data grows over time.
  3. Operational Simplicity: Operations teams must work at scale across multiple clusters in hybrid environments that span cloud and on-premises locations. When you’re working in a high-pressure environment following a ransomware attack, simplicity of operations become paramount.

Cloudian and Kasten by Veeam Have the Solution

Kasten by Veeam and Cloudian have teamed up to bring a truly cloud-native approach to this mission-critical problem. The Kasten K10 data management software platform has been purpose-built for Kubernetes. K10’s deep integrations with Kubernetes distributions and cloud storage systems provide protection and mobility for your entire Kubernetes application. Cloudian HyperStore is an enterprise-grade, S3-compatible object storage platform running in your data center. Cloudian makes it easy to use private cloud storage to protect your Kubernetes applications through a verified integration with Kasten. With native support for the cloud-standard S3 API, including S3 Object Lock data immutability, Kasten and Cloudian offer seamless protection for modern applications at up to 70% less cost than public cloud.


Fast recovery: Cloudian provides a local, disk-based object storage target for backing up modern apps with Kasten K10 over your local, high-speed network. The solution lets you back up and restore large data sets in a fraction of the time required with public cloud storage, improving Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO).

Security and Ransomware Protection

Cloudian is a hardened object storage system that includes enhanced security features such as secure shell, encryption, an integrated firewall and RBAC/IAM access controls to protect backup copies against malware in a shared-storage environment. In addition, to protect data from ransomware attacks, Cloudian HyperStore and Kasten support Object Lock for air-tight data immutability, all the way up to the operating system root level.

Kasten-Validated Solution

Cloudian is Kasten-validated to ensure trouble-free integration. Kasten’s native support for the S3 API enables seamless integration with Cloudian HyperStore.

Easy as 1-2-3

Setting up Kasten K10 and Cloudian ransomware protection takes just three easy steps:

1. Create a new target bucket on Cloudian HyperStore and enable Object Lock.



2. After Kasten K10 installation, check the “Enable Immutable Backups” box when adding a target S3 object storage bucket.



3. Validate the Cloudian object storage bucket and specify your protection period.


GET STARTED WITH KASTEN K10 TODAY!

NAS Backup & Archive Solution with Rubrik NAS Cloud Direct

Cloudian and Rubrik are simplifying enterprise data protection with a best-in-class NAS backup and archival solution that combines Cloudian HyperStore and Rubrik’s NAS Cloud Direct. This simple solution makes it easy to manage and migrate massive amounts of NAS data to Cloudian on-prem storage without impacting production environments. Cost-effective and highly scalable, this solution delivers new levels of operational efficiency and flexibility to solve challenges for large-scale NAS data management.

With the surging growth in NAS data volumes, the need for an affordable, simple and cost-effective approach to data life cycle and storage management at scale has never been greater. Enterprise organizations must be able to store massive amounts of data while also ensuring that data moving across data centers and to the cloud is simple, seamless, and secure.

Combining Cloudian HyperStore with Rubrik NAS Cloud Direct, a software-only product with a direct-to-object capability, provides a single data management fabric with automated, policy-based protection and allows users to store their NAS backup and archive data in one or multiple geographically separated regions or data centers. Enterprises can extend and scale their Cloudian capacity as needed and non-disruptively while keeping NAS data storage costs to a minimum.

Rubrik NAS Cloud Direct is deployed as a virtual machine that can be up and protecting data from any local or remote NAS platform to Cloudian HyperStore within minutes.

At any scale – from terabytes to petabytes of data and millions to billions of files – Cloudian HyperStore and NAS Cloud Direct eliminate the complexity of tape solutions and the vendor lock-in of disk-to-disk backup solutions, all at a lower cost.

Learn more about this new solution: Download Brief

See how Cloudian and Rubrik are collaborating: https://cloudian.com/rubrik/

Learn more about Cloudian® HyperStore®

5 Reasons Ransomware Protection Needs to Be a Board-Level Conversation

It is not just the responsibility of the IT/IS department to keep the business safe, but the obligation of every CXO and Board member to ask for and implement stringent cyber security measures starting with zero trust, perimeter security, and employee training.

“We are on the cusp of a global pandemic,” Christopher Krebs, the first director of the Cybersecurity and Infrastructure Security Agency (CISA), told Congress in May 2021. He wasn’t talking about a virus-borne pandemic; rather, he was referring to a pandemic of cyber-attacks and data breaches. This warning rang especially true when the Colonial Pipeline ransomware attack crippled the US energy sector the following week.

Your files are encrypted

For the uninitiated, ransomware is the fastest-growing malware threat, targeting users and organizations of all types. It works by encrypting the user’s data, rendering the source data and backup data useless, and demanding a ransom, with the threat of holding the data hostage until payment is received. Payments are usually demanded in untraceable cryptocurrencies, which can (and in many cases do) end up with state-sponsored bad actors.

Today, protection against and mitigation for a ransomware attack are information technology and information security responsibilities with the C-Suite and Board taking a relatively hands-off approach. But that must change and in some cases is already changing. Here’s why C-Suite and Board members should take this threat seriously and be the driving force to protect the organization against ransomware. 

1. To Pay or not to Pay: Financial Impacts of Ransomware

Ransomware impacts organizations of all sizes, across all industries. The security company Sophos(1) found that 51% of companies surveyed responded affirmatively when asked whether they had been attacked by ransomware in 2020 – the year of the pandemic. In 73% of those cases, data was successfully encrypted, bringing the business to its knees. More than a quarter of respondents (26%) admitted to paying the ransom, at an average of $761K per incident – a huge increase from previous years, when a similar report had pegged the average at $133K.

The financial implication of paying the ever-increasing ransom demands aside, the real impact of ransomware is on the business itself. It cripples businesses and renders services ineffective and undeliverable. There is also the threat of data exfiltration which can expose sensitive customer data and leave the organization open to lawsuits and additional financial penalties. This does not even account for the loss of business due to downtime, or the brand damage that the ransomware can cause. 

Just these impacts alone will rope in the Director of IT or IS, CFO, General Counsel, Public Relations, Chief Privacy Officer, CIO, and CISO. The CEO will also be roped in and will have to break the news to her board of directors. It would be far better if she remembers this as the day she was able to say, “We were prepared. We already have the business back up and running. We will not be paying.”

2. The Moral (and Regulatory) Low Ground of Paying a Ransom

Then there is the moral and regulatory dilemma associated with paying a ransom. This practice is actively discouraged by US government agencies, as it encourages and fosters similar and copycat attacks. Added to this is the October 2020 advisory from the Department of the Treasury(2), OFAC (Office of Foreign Assets Control) and FinCEN (Financial Crimes Enforcement Network), which discusses “Potential Sanctions Risks for Facilitating Ransomware Payments.” Given that most ransomware payments are untraceable, this exposes organizations, their executives, and board members to US government sanctions violations.

3. Cyber Insurance: How to Get, Keep, and Save on This Must-Have for Business Continuity

Cyber security insurance, the fastest-growing insurance segment, is another important consideration. As a safeguard, most large organizations require cyber insurance as part of their cyber defense strategy. But insurance companies are not immune to US sanctions violations if a payment is made to rogue nations. Therefore, premiums for ransomware coverage are high or may require up to 50% coinsurance. In some cases, insurers may NOT even cover businesses unless they can show significant cyber security arrangements, along with data immutability, as part of their cyber security plans.

 


 

4. The Human Cost of Ransomware

Finally, in addition to the business, insurance, and regulatory impacts, the most reprehensible danger of ransomware is its human cost. This applies across all industries: from impacted critical utilities in the energy sector, to declined credit card and bank transactions in the financial sector, to delayed patient care, emergency treatments, and even death in the healthcare sector, the impact of ransomware is real, direct, and all too inhumane.

5. Getting Organized: Plan, Don’t Pay

Without a regularly drilled, top-down plan for how a business will respond to a ransomware attack, an organization is going to make mistakes in the heat of an attack. It will pay the costs of those mistakes, whether to masked malware attackers, through ransomware-induced PR nightmares, or via increased cyber insurance premiums levied for lack of proper preparation and protection. It is not just the responsibility of the IT/IS department to keep the business safe; it is the obligation of every CXO and Board member to ask for and implement stringent cyber security measures, starting with zero trust, perimeter security, and employee training. But don’t forget to protect the attackers’ ultimate prize – your backup data – in immutable WORM storage.

For all these reasons, ransomware MUST be a C-suite and Board-led conversation. Forrester analysts write: “Implementing an immutable file system with underlying WORM storage will make the system watertight from a ransomware protection perspective.” Data immutability through WORM features such as S3 Object Lock is also now a requirement for many cyber insurance policies to cover the threat of ransomware. 


To learn more about solutions for ransomware protection, please visit
https://cloudian.com/lp/lock-ransomware-out-keep-data-safe-ent/

Citations:

  1. https://www.sophos.com/en-us/medialibrary/Gated-Assets/white-papers/sophos-the-state-of-ransomware-2020-wp.pdf
  2. https://home.treasury.gov/system/files/126/ofac_ransomware_advisory_10012020_1.pdf

 

 

Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian

View LinkedIn Profile

$500 Billion in Lost Market Value: VC Firm Estimates Impact of Public Cloud Costs

VC firm Andreessen Horowitz examined the impact of public cloud costs on public company financials and found that they reduce the total market value for those companies using cloud at scale by at least $500 billion.

Cloud computing and on-prem computing will always co-exist, we believe. A recent article from the venture capital firm Andreessen Horowitz makes a compelling case for that. The article (“The Cost of Cloud, a Trillion Dollar Paradox”) examined the impact of public cloud costs on public company financials and found that they reduce the total market value for those companies using cloud at scale by at least $500 billion.

Here are some of the article’s key findings:

  • “If you’re operating at scale, the cost of cloud can at least double your infrastructure bill.”: The authors note that public cloud list prices can be 10-12X the cost of running your own data centers. Although use-commitment and volume discounts can reduce the difference, the cloud is still significantly more expensive.
  • “Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X.” Cloud spend can be hard to predict, resulting in spending that often exceeds plan. Companies surveyed for the article indicate that actual spend is often 20% higher than committed spend and at least 2X in some cases.
  • “Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud.”: This takes into account the TCO of everything from server racks, real estate, and cooling to network and engineering costs.
  • “The cost of cloud ‘takes over’ at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”: While public cloud delivers on its promise early on, as a company scales and its growth slows, the impact of cloud spend on margins can start to outweigh the benefits. Because this shift happens later in a company’s life, it’s difficult to reverse.
  • “Think about repatriation upfront.” By the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or modular architectural investment early on reduces the work needed to repatriate workloads in the future. In addition, repatriation can be done incrementally, and in a hybrid fashion.
  • “Companies need to optimize early, often, and, sometimes, also outside the cloud.”: When evaluating the value of any business, one of the most important factors is the cost of goods sold (COGS). That means infrastructure optimization is key.
  • “The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud.”: Developers faced with larger-than-expected cloud bills have become more savvy about the need for greater rigor when it comes to cloud spend.
  • “For large companies — including startups as they reach scale — that [cloud flexibility] tax equates to hundreds of billions of dollars of equity value in many cases.”: This tax is levied long after the companies have committed themselves to the cloud. However, one of the primary reasons organizations have moved to the cloud early on – avoiding large CAPEX outlays – is no longer limited to public clouds. There are now data center alternatives that can be built, deployed, and managed entirely as OPEX.


In short, the article highlights the need to think carefully about which use cases are better suited for on-prem deployment. Public cloud can provide flexibility and scalability benefits, but at a cost that can significantly impact your company’s financial performance.

Cloudian was founded on the idea of bringing public cloud benefits to the data center, and we now have nearly 700 enterprise and service provider customers that have deployed our award-winning HyperStore object storage platform in on-prem and hybrid cloud environments. On-prem object storage can deliver public cloud-like benefits in your own data center, at less cost and with performance, agility, security and control advantages. In addition, as long as the object storage is highly S3-compatible, it can integrate easily with public cloud in a hybrid cloud model.

To learn more about how we can help you find the right cloud storage strategy for your organization, visit cloudian.com/solutions/cloud-storage/. You can also read about deploying HyperStore on-prem with AWS Outposts at cloudian.com/aws.

 

Jon Toor, CMO, Cloudian

View LinkedIn Profile

LinkedIn Live: Secure Data with VMware vSAN & Cloudian HyperStore

Our joint solution combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple and easy, and serves any cloud-native or traditional IT application requiring S3-compatible storage. 

Grant Jacobson, Director of Technology Alliances and Partner Marketing, Cloudian

View LinkedIn Profile


Protecting Your Data with VMware vSAN and Cloudian HyperStore

Each month, VMware and Cloudian collaborate to promote our joint solution in a series of short (~15-minute) LinkedIn Live sessions. Each session highlights a new solution use case, and today’s session, the fourth in our series, covered data protection and how to keep data safe. These are lively conversations about the solution and how our customers can take advantage of it to meet their evolving needs. Last month, we covered the new Splunk SmartStore use case, with a 44% TCO savings compared with traditional storage.

Our joint solution became available in February and combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple and easy, and serves any cloud-native or traditional IT application requiring S3-compatible storage. The solution enables many new use cases, with data protection being one that cuts across all segments: everyone needs to ensure their data stays safe, especially from the accelerating increase in ransomware and other cyberattacks.


If you missed it, watch it here:

If you’d like more information about our solutions with VMware, see our dedicated webpage:
You can also reach us at [email protected]

Object Storage: Better Monetizing Content by Transitioning from Tape

As media organizations look for new ways to monetize their ever-growing content archives, they need to ask themselves whether they have the right storage foundation. In a recent article I wrote for Post Magazine, I discussed the advantages of object storage over tape when it comes to managing and protecting content. Below is a reprint of the article.


David Phillips, Principal Architect for M&E Solutions, Cloudian




Object Storage: Better Monetizing Content by Transitioning from Tape

Media and entertainment companies derive significant recurring revenue from old content. From traditional television syndication to YouTube uploads, this content can be distributed and monetized in several different ways. Many M&E companies, particularly broadcasters, store their content in decades-old LTO tape libraries. With years of material, including thousands of episodes and millions of digital assets, these tape libraries can grow so large that they become unmanageable. Deployments can easily reach several petabytes of data and may sprawl across multiple floors in a broadcaster’s media storage facility. Searching these massive libraries and retrieving specific content can be a cumbersome, time-consuming task, like trying to find a needle in a haystack.

Object storage provides a far simpler, more efficient and cost-effective way for broadcasters to manage their old video content. With limitless scalability, object storage can easily grow to support petabytes of data without occupying a large physical footprint. Moreover, the technology supports rich, customizable metadata, making it easier and quicker to search and retrieve content. Organizations can use a Google-like search tool to immediately retrieve assets, ensuring that they have access to all existing content, no matter how old or obscure, and can readily monetize that content.

Here’s a deeper look at how the two formats compare in searchability, data access, scalability and management.

Searchability and data access

LTO tape was created to store static data for the long haul. Accessing, locating and retrieving this data was always an afterthought. In the most efficient tape libraries today, staff may be able to find a piece of media within a couple minutes. But even in this scenario, if there were multiple jobs queued up first in the library, finding that asset could take hours. And this is assuming that the tape that contains the asset is stored in the library and in good condition (i.e., it can be read and doesn’t suffer from a jam).

This also assumes the staff has the proper records to even find the asset. Because of the limitations of the format, LTO tape files do not support detailed metadata. This means that organizations can only search for assets using basic file attributes, such as date created or title. It’s impossible to conduct any sort of an ad hoc search. If a system’s data index doesn’t contain the file attributes that a user is looking for, the only option is to look manually, an untenable task for most M&E organizations that have massive content libraries. This won’t change in the future, as tape cannot support advanced technologies such as artificial intelligence (AI) and machine learning (ML) to improve searchability.

On the other hand, object storage makes it possible to immediately search and access assets. The architecture supports fully-customizable metadata, allowing staff to attach any attributes they want to any asset, no matter how specific. For example, a news broadcast could have metadata identifying the anchors or describing the type of stories covered. When trying to find an asset, a user can search for any of those attributes and rapidly retrieve it. This makes it much easier to find old or existing content and use it for new monetization opportunities, driving much greater return on investment (ROI) from that content. This value will only increase as AI and ML, which are both fully supported in object storage systems, provide new ways to analyze and leverage data (e.g., facial recognition, speech recognition and action analysis), increasing opportunities to monetize archival content.
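As an illustrative sketch of how attribute-based search over custom metadata works: objects carry arbitrary user-defined attributes, and a search layer can filter on any combination of them. The asset names, attributes, and in-memory index below are hypothetical examples, not Cloudian's actual search service (real deployments would attach metadata via the S3 API and query an index service):

```python
# Toy model of metadata-driven asset search: each archive entry pairs
# an object key with fully customizable metadata, and a search matches
# on any subset of attributes.

def search(assets, **criteria):
    """Return the keys of assets whose metadata matches every criterion."""
    return [
        a["key"]
        for a in assets
        if all(a["metadata"].get(k) == v for k, v in criteria.items())
    ]

# Hypothetical broadcast archive with custom, per-asset attributes.
archive = [
    {"key": "news/2006-03-14.mxf",
     "metadata": {"anchor": "J. Smith", "topic": "elections", "format": "SD"}},
    {"key": "news/2019-11-02.mxf",
     "metadata": {"anchor": "A. Jones", "topic": "elections", "format": "4K"}},
    {"key": "sports/2019-11-02.mxf",
     "metadata": {"anchor": "A. Jones", "topic": "playoffs", "format": "4K"}},
]

print(search(archive, topic="elections"))               # both election broadcasts
print(search(archive, anchor="A. Jones", format="4K"))  # any attribute combination
```

The same ad hoc query ("all election coverage", "everything anchored by A. Jones in 4K") is exactly what a tape index limited to file name and creation date cannot answer.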

Scalability and management

Organizations must commit significant staff and resources to manage and grow an LTO tape library. Due to their physical complexity, these libraries can be difficult and expensive to scale. In the age of streaming, broadcasters are increasing their content at breakneck speed. And with the adoption of capacity-intensive formats like 4K, 8K and 360/VR, more data is being created for each piece of content. Just several hundred hours of video in these advanced formats can easily reach a petabyte in size. In LTO environments, the only way to increase capacity is to add more tapes, which is particularly difficult if there are no available library slots. When that’s the case, the only choice is to add another library. Many M&E companies’ tape libraries already stretch across several floors, leaving little room for expansion, especially because new content (in higher resolution formats) tends to use larger data quantities than older content.

Object storage was designed for limitless scalability. It treats data as objects that are stored in a flat address space, which makes it easy to grow deployments via horizontal scaling (or scaling out) rather than vertical scaling (scaling up). To increase a deployment, organizations simply have to add more nodes or devices to their existing system, rather than adding new systems (such as LTO libraries) entirely. Because of this, object storage is simple to scale to hundreds of petabytes and beyond. With data continuing to grow exponentially, especially for video content, being able to scale easily and efficiently helps M&E companies maintain order and visibility over their content, enabling them to easily find and leverage those assets for new opportunities. Increasing the size of a sprawling, messy tape library is exactly the opposite.
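The scale-out behavior described above can be illustrated with a toy consistent-hash ring: when a node is added, only the objects the new node takes over are relocated, while everything else stays put. This is a simplified sketch of horizontal scaling in a flat address space, not Cloudian's actual data-placement algorithm:

```python
import hashlib
from bisect import bisect_right

def ring_position(name):
    """Map a name onto a 32-bit hash ring."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    """Toy consistent-hash ring: each key belongs to the next node clockwise."""
    def __init__(self, nodes):
        self.ring = sorted((ring_position(n), n) for n in nodes)

    def add_node(self, node):
        self.ring.append((ring_position(node), node))
        self.ring.sort()

    def node_for(self, key):
        positions = [p for p, _ in self.ring]
        i = bisect_right(positions, ring_position(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
keys = [f"asset-{i}" for i in range(1000)]
before = {k: ring.node_for(k) for k in keys}

ring.add_node("node-d")  # horizontal scaling: add a node, not a new system
after = {k: ring.node_for(k) for k in keys}

moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of {len(keys)} objects moved")  # only the new node's share relocates
```

Contrast this with adding an entire new tape library, where content placement and indexing start over from scratch for the new silo.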

Tape libraries also lack centralized management across locations. To access or manage a given asset, a user has to be near the library where it’s physically stored. For M&E organizations that have tape archives in multiple locations, this causes logistical issues, as each separate archive must be managed individually. As a result, companies often need to hire multiple administrators to operate each archive, driving up costs and causing operational siloing.

Object storage addresses the challenge of geo-distribution with centralized, universal management capabilities. Because the architecture leverages a global namespace and connects all nodes together in a single storage pool, assets can be accessed and managed from any location. While companies can only access data stored on tape directly through a physical copy, object storage enables them to access all content regardless of where it is physically stored. One person can administer an entire globally-distributed deployment, enforcing policies, creating backup copies, provisioning new users and executing other key tasks for the whole organization.

Conclusion

M&E companies still managing video content in LTO tape libraries suffer from major inefficiencies and, in turn, lost revenue. The format simply wasn’t designed for the modern media landscape. Object storage is a much newer architecture that was built to accommodate massive data volumes in the digital age. Its searchability, accessibility, scalability and centralized management help broadcasters boost ROI from existing content.



To learn more about Cloudian’s Media and Entertainment solutions, visit cloudian.com/solutions/media-and-entertainment/.

Tape — Does It Measure Up?


Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian



Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator.

Historically, organizations have bitten the proverbial bullet and implemented tape for long-term data archiving and backup, inheriting all the associated problems along with it. However, the remote work of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large tape libraries. Security and secure remote access have also become critical considerations for data protection and business continuity. With production and engineering teams alike finding themselves “locked out of the building,” managing physical tape media and remediating mechanical issues with tape libraries has proved difficult, if not impossible.

The drawbacks of tape that the pandemic has highlighted even more include:

  • Accessibility: This one is obvious. The lack of immediate and complete accessibility has never been more problematic than during the pandemic.
  • Durability: Mechanical failures in tape library robotics and failures of the tape media inside have meant truck rolls to the tape vaults, which is far from ideal in a shelter-in-place situation.
  • Compatibility: New tape drive hardware has limited backward compatibility, which has required recoding at a time when data availability has been the prime objective for business continuity.
  • Security: Ransomware attacks have become commonplace during the pandemic, and given tape’s other drawbacks, the rationale for using it for ransomware protection is up for reevaluation. As they say, data that can’t be retrieved in the right timeframe is data not protected, and that is especially true with ransomware.


As companies look to increase the capacity of their storage, as well as the frequency with which they access it, object storage checks off all the right boxes in terms of data durability, availability, performance, and accessibility. Whether in the public or private cloud, object storage overcomes the limitations of LTO tape listed above and has become the go-to for most IT administrators looking for a better solution. If you’re running tape today, it makes a lot of sense to evaluate the benefits of switching to object storage before the limitations of your current solution impact your business more severely — and the sooner the better. As tape infrastructure ages, the transition only becomes more difficult.

As with any major technology shift, there are many important factors to take into consideration.


Tape: Does it Measure Up?
An Insider’s Guide to Data Center Modernization

To read an insider’s view on data center modernization focused on this topic, please visit
https://cloudian.com/lp/data-center-modernization/
