New Solution with VMware Tanzu Greenplum Data Warehouse

Cloudian is expanding its collaboration with VMware with a new solution combining Cloudian HyperStore with VMware Tanzu Greenplum, a massively parallel data warehouse platform for enterprise analytics at scale.

Integrating Cloudian enterprise-grade object storage with VMware Tanzu Greenplum enables new efficiencies and savings for Greenplum users while also supporting the creation and deployment of petabyte-scale advanced analytics models for complex enterprise applications. This is especially timely as the amount of data consumed and generated by enterprises accelerates at an unprecedented pace, and these applications need to capture, store, and analyze data rapidly and at scale.

Greenplum Tanzu Cloudian Diagram

Whether your analytics models use traditional enterprise database data; log and security data; web, mobile, and clickstream data; video and voice data; IoT data; or JSON, XML, geo, and graph data, the need for a modern data analytics platform that is affordable, manageable, and scalable has never been greater.

Cloudian HyperStore, with its native S3 API and limitless scalability, is simple to deploy and easy to use with VMware Tanzu Greenplum. HyperStore storage supports the needs for data security, multi-cluster deployments, and geo-distributed architectures across multiple use cases:

  • Storing database backups
  • Staging files for loading and unloading file data
  • Enabling federated queries via VMware Tanzu Greenplum Extension Framework (PXF)
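The staging use case above can be sketched in a few lines of Python: build a CSV extract in memory, then stage it in a HyperStore bucket over the S3 API so Greenplum can load it (or query it in place through PXF). The endpoint, bucket, and key names below are hypothetical placeholders, and the boto3 call is shown commented out since it requires a live endpoint and credentials.

```python
import csv
import io

# Build a small CSV extract in memory; in practice this would be an
# unload from an operational database or an upstream pipeline.
rows = [("id", "amount"), (1, 9.99), (2, 4.50)]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
staged = buf.getvalue().encode("utf-8")

# Stage it in HyperStore via the S3 API (names are placeholders):
# import boto3
# s3 = boto3.client("s3", endpoint_url="https://hyperstore.example.com")
# s3.put_object(Bucket="gp-staging", Key="loads/orders.csv", Body=staged)
```

From there, Greenplum's s3 protocol or a PXF external table can read the staged object directly.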


Learn more about this new solution here, and find it in the Greenplum Partner Marketplace.

See how Cloudian and VMware are collaborating: https://cloudian.com/vmware

Learn more about Cloudian® HyperStore®

5 Reasons Ransomware Protection Needs to Be a Board-Level Conversation

It is not just the responsibility of the IT/IS department to keep the business safe, but the obligation of every CXO and Board member to ask for and implement stringent cyber security measures starting with zero trust, perimeter security, and employee training.

“We are on the cusp of a global pandemic,” Christopher Krebs, the first director of the Cybersecurity and Infrastructure Security Agency (CISA), told Congress in May of 2021. He wasn’t talking about a virus-borne pandemic, but rather a pandemic of cyberattacks and data breaches. The warning rang especially true when the Colonial Pipeline ransomware attack crippled the US energy sector the following week.

Your files are encrypted

For the uninitiated, ransomware is the fastest-growing malware threat, targeting users and organizations of all types. It works by encrypting the victim’s data, rendering both source and backup data useless, and then demands a ransom, holding the data hostage until payment is received. Payments are usually demanded in untraceable cryptocurrencies, which can (and in many cases do) end up with state-sponsored bad actors.

Today, protection against and mitigation for a ransomware attack are information technology and information security responsibilities with the C-Suite and Board taking a relatively hands-off approach. But that must change and in some cases is already changing. Here’s why C-Suite and Board members should take this threat seriously and be the driving force to protect the organization against ransomware. 

1. To Pay or not to Pay: Financial Impacts of Ransomware

Ransomware impacts organizations of all sizes, across all industries. The security company Sophos(1) found that 51% of companies surveyed were attacked by ransomware in 2020 – the year of the pandemic. In 73% of those cases, data was successfully encrypted, bringing the business to its knees. More than a quarter of respondents (26%) admitted to paying the ransom, at an average of $761K per incident – a huge increase from previous years, when a similar report had pegged the average at $133K.

The financial implication of paying the ever-increasing ransom demands aside, the real impact of ransomware is on the business itself. It cripples businesses and renders services ineffective and undeliverable. There is also the threat of data exfiltration which can expose sensitive customer data and leave the organization open to lawsuits and additional financial penalties. This does not even account for the loss of business due to downtime, or the brand damage that the ransomware can cause. 

These impacts alone will rope in the Director of IT or IS, the CFO, General Counsel, Public Relations, the Chief Privacy Officer, the CIO, and the CISO. The CEO will also be roped in and will have to break the news to her board of directors. It would be far better if she remembers this as the day she was able to say, “We were prepared. We already have the business back up and running. We will not be paying.”

2. The Moral (and Regulatory) Low Ground of Paying a Ransom

Then there is the moral and regulatory dilemma associated with paying a ransom. The practice is actively discouraged by US government agencies because it encourages and funds similar and copycat attacks. Added to this is the October 2020 advisory from the Department of the Treasury(2), OFAC (Office of Foreign Assets Control) and FinCEN (Financial Crimes Enforcement Network), on “Potential Sanctions Risks for Facilitating Ransomware Payments.” Because most ransomware payments are untraceable, paying exposes organizations, their executives, and board members to US government sanctions violations.

3. Cyber Insurance: How to Get, Keep, and Save on This Must-Have for Business Continuity

Cyber security insurance, the fastest-growing insurance segment, is another important consideration. As a safeguard, most large organizations require cyber insurance as part of their cyber defense strategy. But insurance companies are not immune to US sanctions violations if a payment is made to rogue nations. As a result, premiums for ransomware coverage are high, and policies may require up to 50% coinsurance. In some cases, insurers may not cover a business at all unless it can show significant cyber security arrangements, including data immutability, as part of its cyber security plans.

 


4. The Human Cost of Ransomware

Finally, in addition to the business, insurance, and regulatory impacts, the most reprehensible danger of ransomware is its human cost, and it applies across all industries. From crippled critical utilities in the energy sector, to declined credit card and bank transactions in the financial sector, to delayed patient care, postponed emergency treatments, and even death in the healthcare sector, the impact of ransomware is real, direct, and all too inhumane.

5. Getting Organized: Plan, Don’t Pay

Without a regularly drilled, top-down plan on how a business will respond to a ransomware attack, an organization is going to make mistakes in the heat of an attack. It will pay the costs of those mistakes, whether to masked malware attackers, through ransomware-induced PR nightmares, or via increased cyber insurance premiums levied for lack of proper preparation and protection. It is not just the responsibility of the IT/IS department to keep the business safe, but the obligation of every CXO and Board member to ask for and implement stringent cyber security measures starting with zero trust, perimeter security, and employee training. But don’t forget to protect the attackers’ ultimate prize, your backup data, in immutable WORM storage.

For all these reasons, ransomware MUST be a C-suite and Board-led conversation. Forrester analysts write: “Implementing an immutable file system with underlying WORM storage will make the system watertight from a ransomware protection perspective.” Data immutability through WORM features such as S3 Object Lock is also now a requirement for many cyber insurance policies to cover the threat of ransomware. 
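As a minimal sketch of what that immutability looks like in practice, the parameters below describe an S3 put_object call that writes a backup object under compliance-mode Object Lock retention; once written, the object cannot be altered or deleted until the retention date passes, even by an administrator. The bucket name, key, and 90-day retention period are hypothetical, and the kwargs would be passed to a boto3 client pointed at any Object Lock-capable S3 endpoint.

```python
from datetime import datetime, timedelta, timezone

# Retain the backup immutably for 90 days (hypothetical policy).
retain_until = datetime.now(timezone.utc) + timedelta(days=90)

put_kwargs = {
    "Bucket": "backups",                      # hypothetical bucket
    "Key": "db/nightly-2021-06-01.bak",       # hypothetical object key
    "Body": b"...backup bytes...",
    "ObjectLockMode": "COMPLIANCE",           # WORM: no shortening, no deletion
    "ObjectLockRetainUntilDate": retain_until,
}
# s3.put_object(**put_kwargs)  # s3 = boto3.client("s3", endpoint_url=...)
```

A ransomware attacker who compromises the backup server still cannot encrypt or delete these objects before the retention date.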


To learn more about solutions for ransomware protection, please visit
https://cloudian.com/lp/lock-ransomware-out-keep-data-safe-ent/

Citation:

  1. https://www.sophos.com/en-us/medialibrary/Gated-Assets/white-papers/sophos-the-state-of-ransomware-2020-wp.pdf
  2. https://home.treasury.gov/system/files/126/ofac_ransomware_advisory_10012020_1.pdf

 

 

Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian

View LinkedIn Profile

$500 Billion in Lost Market Value: VC Firm Estimates Impact of Public Cloud Costs

VC firm Andreessen Horowitz examined the impact of public cloud costs on public company financials and found that they reduce the total market value for those companies using cloud at scale by at least $500 billion.

Cloud computing and on-prem computing will always co-exist, we believe. A recent article from the venture capital firm Andreessen Horowitz makes a compelling case for that. The article (“The Cost of Cloud, a Trillion Dollar Paradox”) examined the impact of public cloud costs on public company financials and found that they reduce the total market value for those companies using cloud at scale by at least $500 billion.

Here are some of the article’s key findings:

  • “If you’re operating at scale, the cost of cloud can at least double your infrastructure bill.”: The authors note that public cloud list prices can be 10-12X the cost of running your own data centers. Although use-commitment and volume discounts can reduce the difference, the cloud is still significantly more expensive.
  • “Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X.” Cloud spend can be hard to predict, resulting in spending that often exceeds plan. Companies surveyed for the article indicate that actual spend is often 20% higher than committed spend and at least 2X in some cases.
  • “Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud.”: This takes into account the TCO of everything from server racks, real estate, and cooling to network and engineering costs.
  • “The cost of cloud ‘takes over’ at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”: While public cloud delivers on its promise early on, as a company scales and its growth slows, the impact of cloud spend on margins can start to outweigh the benefits. Because this shift happens later in a company’s life, it’s difficult to reverse.
  • “Think about repatriation upfront.” By the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or modular architectural investment early on reduces the work needed to repatriate workloads in the future. In addition, repatriation can be done incrementally, and in a hybrid fashion.
  • “Companies need to optimize early, often, and, sometimes, also outside the cloud.”: When evaluating the value of any business, one of the most important factors is the cost of goods sold (COGS). That means infrastructure optimization is key.
  • “The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud.”: Developers faced with larger-than-expected cloud bills have become more savvy about the need for greater rigor when it comes to cloud spend.
  • “For large companies — including startups as they reach scale — that [cloud flexibility] tax equates to hundreds of billions of dollars of equity value in many cases.”: This tax is levied long after the companies have committed themselves to the cloud. However, one of the primary reasons organizations have moved to the cloud early on – avoiding large CAPEX outlays – is no longer limited to public clouds. There are now data center alternatives that can be built, deployed, and managed entirely as OPEX.


In short, the article highlights the need to think carefully about which use cases are better suited for on-prem deployment. Public cloud can provide flexibility and scalability benefits, but at a cost that can significantly impact your company’s financial performance.

Cloudian was founded on the idea of bringing public cloud benefits to the data center, and we now have nearly 700 enterprise and service provider customers that have deployed our award-winning HyperStore object storage platform in on-prem and hybrid cloud environments. On-prem object storage can deliver public cloud-like benefits in your own data center, at less cost and with performance, agility, security and control advantages. In addition, as long as the object storage is highly S3-compatible, it can integrate easily with public cloud in a hybrid cloud model.

To learn more about how we can help you find the right cloud storage strategy for your organization, visit cloudian.com/solutions/cloud-storage/. You can also read about deploying HyperStore on-prem with AWS Outposts at cloudian.com/aws.

 

Jon Toor, CMO, Cloudian

View LinkedIn Profile

LinkedIn Live: Secure Data with VMware vSAN & Cloudian HyperStore

Our joint solution combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple and easy, and serves any cloud-native or traditional IT application requiring S3-compatible storage. 

Grant Jacobson, Director of Technology Alliances and Partner Marketing, Cloudian

View LinkedIn Profile


Protecting Your Data with VMware vSAN and Cloudian HyperStore

Each month, VMware and Cloudian collaborate to promote our joint solution in a series of short (~15-minute) LinkedIn Live sessions. Each session highlights a new solution use case, and today’s session, the fourth in our series, covered data protection and how to keep data safe. These are lively conversations about the solution and how our customers can take advantage of it to meet their evolving needs. Last month, we covered the new Splunk SmartStore use case, with a 44% TCO savings compared with traditional storage.

Our joint solution became available in February and combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple and easy, and it serves any cloud-native or traditional IT application requiring S3-compatible storage. The solution enables many new use cases, with data protection being one that cuts across all segments: everyone needs to ensure their data stays safe, especially from the accelerating increase in ransomware and other cyberattacks.


If you missed it, watch it here:

If you’d like more information about our solutions with VMware, see our dedicated webpage at cloudian.com/vmware.
You can also reach us at [email protected]

Object Storage: Better Monetizing Content by Transitioning from Tape

As media organizations look for new ways to monetize their ever-growing content archives, they need to ask themselves whether they have the right storage foundation. In a recent article I wrote for Post Magazine, I discussed the advantages of object storage over tape when it comes to managing and protecting content. Below is a reprint of the article.


David Phillips, Principal Architect for M&E Solutions, Cloudian

View LinkedIn Profile



Object Storage: Better Monetizing Content by Transitioning from Tape

Media and entertainment companies derive significant recurring revenue through old content. From traditional television syndication to YouTube uploads, this content can be distributed and monetized in several different ways. Many M&E companies, particularly broadcasters, store their content in decades-old LTO tape libraries. With years of material, including thousands of episodes and millions of digital assets, these tape libraries can grow so large that they become unmanageable. Deployments can easily reach several petabytes of data and may sprawl across multiple floors in a broadcaster’s media storage facility. Searching these massive libraries and retrieving specific content can be a cumbersome, time-consuming task, like trying to find a needle in a haystack.

Object storage provides a far simpler, more efficient and cost-effective way for broadcasters to manage their old video content. With limitless scalability, object storage can easily grow to support petabytes of data without occupying a large physical footprint. Moreover, the technology supports rich, customizable metadata, making it easier and quicker to search and retrieve content. Organizations can use a Google-like search tool to immediately retrieve assets, ensuring that they have access to all existing content, no matter how old or obscure, and can readily monetize that content.

Here’s a deeper look at how the two formats compare in searchability, data access, scalability and management.

Searchability and data access

LTO tape was created to store static data for the long haul. Accessing, locating and retrieving this data was always an afterthought. In the most efficient tape libraries today, staff may be able to find a piece of media within a couple of minutes. But even in this scenario, if there were multiple jobs queued up first in the library, finding that asset could take hours. And this is assuming that the tape that contains the asset is stored in the library and in good condition (i.e., it can be read and doesn’t suffer from a jam).

This also assumes the staff has the proper records to even find the asset. Because of the limitations of the format, LTO tape files do not support detailed metadata. This means that organizations can only search for assets using basic file attributes, such as date created or title. It’s impossible to conduct any sort of an ad hoc search. If a system’s data index doesn’t contain the file attributes that a user is looking for, the only option is to look manually, an untenable task for most M&E organizations that have massive content libraries. This won’t change in the future, as tape cannot support advanced technologies such as artificial intelligence (AI) and machine learning (ML) to improve searchability.

On the other hand, object storage makes it possible to immediately search and access assets. The architecture supports fully-customizable metadata, allowing staff to attach any attributes they want to any asset, no matter how specific. For example, a news broadcast could have metadata identifying the anchors or describing the type of stories covered. When trying to find an asset, a user can search for any of those attributes and rapidly retrieve it. This makes it much easier to find old or existing content and use it for new monetization opportunities, driving much greater return on investment (ROI) from that content. This value will only increase as AI and ML, which are both fully supported in object storage systems, provide new ways to analyze and leverage data (e.g., facial recognition, speech recognition and action analysis), increasing opportunities to monetize archival content.
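A toy sketch of the idea, with hypothetical object keys and attribute names: user-defined metadata rides along with each object (surfaced as x-amz-meta-* headers on any S3-compatible store), and a metadata-aware search reduces retrieval to a simple attribute filter.

```python
# Objects as they might be written to an S3-compatible store, each with
# user-defined metadata. Keys and attribute names are hypothetical.
assets = [
    {"Key": "news/2021-05-03.mxf",
     "Metadata": {"anchor": "jane-doe", "topic": "energy"}},
    {"Key": "news/2021-05-04.mxf",
     "Metadata": {"anchor": "john-roe", "topic": "weather"}},
]

# A metadata-aware search is then an attribute filter rather than a
# manual hunt through tape records:
energy_clips = [a["Key"] for a in assets
                if a["Metadata"].get("topic") == "energy"]
```

With tape, finding the same clip would mean consulting an external index (if one exists) and mounting the right cartridge; here it is a single query over attributes the organization chose itself.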

Scalability and management

Organizations must commit significant staff and resources to manage and grow an LTO tape library. Due to their physical complexity, these libraries can be difficult and expensive to scale. In the age of streaming, broadcasters are increasing their content at breakneck speed. And with the adoption of capacity-intensive formats like 4K, 8K and 360/VR, more data is being created for each piece of content. Just several hundred hours of video in these advanced formats can easily reach a petabyte in size. In LTO environments, the only way to increase capacity is to add more tapes, which is particularly difficult if there are no available library slots. When that’s the case, the only choice is to add another library. Many M&E companies’ tape libraries already stretch across several floors, leaving little room for expansion, especially because new content (in higher resolution formats) tends to use larger data quantities than older content.

Object storage was designed for limitless scalability. It treats data as objects that are stored in a flat address space, which makes it easy to grow deployments via horizontal scaling (or scaling out) rather than vertical scaling (scaling up). To increase a deployment, organizations simply have to add more nodes or devices to their existing system, rather than adding new systems (such as LTO libraries) entirely. Because of this, object storage is simple to scale to hundreds of petabytes and beyond. With data continuing to grow exponentially, especially for video content, being able to scale easily and efficiently helps M&E companies maintain order and visibility over their content, enabling them to easily find and leverage those assets for new opportunities. Increasing the size of a sprawling, messy tape library is exactly the opposite.

Tape libraries also lack centralized management across locations. To access or manage a given asset, a user has to be near the library where it’s physically stored. For M&E organizations that have tape archives in multiple locations, this causes logistical issues, as each separate archive must be managed individually. As a result, companies often need to hire multiple administrators to operate each archive, driving up costs and causing operational siloing.

Object storage addresses the challenge of geo-distribution with centralized, universal management capabilities. Because the architecture leverages a global namespace and connects all nodes together in a single storage pool, assets can be accessed and managed from any location. While companies can only access data stored on tape directly through a physical copy, object storage enables them to access all content regardless of where it is physically stored. One person can administer an entire globally-distributed deployment, enforcing policies, creating backup copies, provisioning new users and executing other key tasks for the whole organization.

Conclusion

M&E companies still managing video content in LTO tape libraries suffer from major inefficiencies and, in turn, lost revenue. The format simply wasn’t designed for the modern media landscape. Object storage is a much newer architecture that was built to accommodate massive data volumes in the digital age. Object storage’s searchability, accessibility, scalability and centralized management help broadcasters boost ROI from existing content.

 


To learn more about Cloudian’s Media and Entertainment solutions, visit cloudian.com/solutions/media-and-entertainment/.

Tape — Does It Measure Up?

Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator. Historically, organizations have bitten the proverbial bullet and implemented tapes for long-term data archiving and backup, inheriting along with it all the associated problems. However, the remote aspect of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large data tape libraries.

Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian

View LinkedIn Profile


Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator.

Historically, organizations have bitten the proverbial bullet and implemented tapes for long-term data archiving and backup, inheriting along with it all the associated problems. However, the remote aspect of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large data tape libraries. Also, security and secure remote access have become critical elements when considering data protection and business continuity. With production and engineering teams alike finding themselves “locked out of the building,” managing physical tape media and remediating mechanical issues with tape libraries has proved difficult if not impossible.

The drawbacks of tape that are even more highlighted by the pandemic include:

  • Accessibility: This one is obvious. The lack of immediate and complete accessibility has never been more problematic than during the pandemic.
  • Durability: Mechanical failures around tape library robotics and tape media failures inside have meant truck rolls into the tape vaults – not ideal for a shelter-in-place situation.
  • Compatibility: New tape drive hardware has limits to its backward compatibility, which have required recoding at a time when data availability has been the prime objective for business continuity.
  • Security: Ransomware attacks have become commonplace during the pandemic. Considering the various drawbacks associated with tapes, the rationale for using tapes for ransomware protection is up for reevaluation. As they say, data not retrievable in the right timeframe is data not protected. This is especially true in the case of ransomware.


As companies look to increase the capacity of their storage, as well as the frequency with which they access it, object storage checks off all the right boxes in terms of data durability, availability, performance, and accessibility. Whether in the public or private cloud, object storage overcomes the limitations of LTO tape listed above and has become the go-to for most IT administrators looking for a better solution. If you’re running tape today, it makes a lot of sense to evaluate the benefits of switching to object storage before the limitations of your current solution impact your business more severely — and the sooner the better. As tape infrastructure ages, the transition only becomes more difficult.

As with any major technology shift, there are many important factors to take into consideration.


Tape: Does it Measure Up?
An Insider’s Guide to Data Center Modernization

To read an insider’s view on data center modernization focused on this topic, please visit
https://cloudian.com/lp/data-center-modernization/


An Introduction to Data Tiering

All data is not equal: frequency of access, security needs, and cost considerations vary, so data storage architectures need to provide different storage tiers to address these varying requirements. Storage tiers may differ by disk drive type, RAID configuration, or even entirely separate storage sub-systems, each offering a different I/O profile and cost impact.

Data tiering allows the movement of data between different storage tiers, which allows an organization to ensure that the appropriate data resides on the appropriate storage technology. In modern storage architectures, this data movement is invisible to the end-user application and is typically controlled and automated by storage policies. Typical data tiers may include:

  1. Flash storage – High value, high-performance requirements, usually smaller data sets; cost is less important compared to the performance Service Level Agreement (SLA) required
  2. Traditional SAN/NAS storage arrays – Medium value, medium performance, medium cost sensitivity
  3. Object storage – Less frequently accessed data with larger data sets; cost is an important consideration
  4. Public cloud – Long-term archival for data that is rarely or never accessed

Typically, structured data sets belonging to applications/data sources such as OLTP databases, CRM, email systems and virtual machines will be stored on data tiers 1 and 2 as above. Unstructured data is more commonly moving to tiers 3 and 4 as these are typically much larger data sets where performance is not as critical and cost becomes a more significant factor in management and purchasing decisions.

Some Shortcomings of Data Tiering to Public Cloud

Public cloud services have become an attractive data tiering solution, especially for unstructured data, but there are considerations around public cloud use:

  1. Performance – Public network access will typically be a bottleneck when reading and writing data to public cloud platforms, along with data retrieval times (based on the SLA provided by the cloud service). Backup and recovery windows are still incredibly important, so it is worth considering keeping the most recent backup sets onsite and archiving only older backup data to the cloud.
  2. Security – Certain data sets and industries have regulations stipulating that data must not be stored in the cloud. Being able to control what data is sent to the cloud is of major importance.
  3. Access patterns – Data that is re-read frequently may incur additional network bandwidth costs imposed by the public cloud service provider. Understanding your use of data is vital to controlling the costs associated with data downloads.
  4. Cost – As well as the bandwidth costs associated with reading data, storing large quantities of data in the cloud may not make the most economical sense, especially when compared to the economics of on-premises cloud storage. Evaluations should be made.
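To make the access-pattern and cost points concrete, here is a back-of-envelope sketch of the recurring egress charge for frequently re-read data. The archive size, re-read fraction, and per-GB rate are hypothetical placeholders, not quotes from any provider.

```python
stored_tb = 500          # hypothetical archive size in TB
reread_fraction = 0.10   # assume 10% of the archive is read back each month
egress_per_gb = 0.09     # assumed $/GB egress rate (placeholder)

monthly_egress_gb = stored_tb * 1024 * reread_fraction
monthly_egress_cost = monthly_egress_gb * egress_per_gb  # ~$4,608/month here
```

Even modest re-read rates compound into a significant recurring bill, which is why understanding access patterns matters before tiering data to a public cloud.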

Using Hybrid Cloud for a Balanced Data Tier Strategy

For unstructured data, a hybrid approach to data management is key, with an automation engine, data classification, and granular control of data as the necessary requirements to really deliver on this premise.

With a hybrid cloud approach, you can push any data to the public cloud while also affording you the control that comes with on-premises storage. For any data storage system, granularity of control and management is extremely important as different data sets have different management requirements with the need to apply different SLAs as appropriate to the value of the data to an organization.

Cloudian HyperStore is a solution that gives you that flexibility for easily moving between data tiers 3 and 4 listed earlier in this post. Not only do you get control and security from your data center, but you can also integrate HyperStore with many different destination cloud storage platforms, including Amazon S3/Glacier, Google Cloud Platform, and any other cloud service offering S3 API connectivity.
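As one illustration of automated movement between tiers 3 and 4, the dict below is a standard S3 lifecycle configuration that transitions objects under a prefix to a colder storage class 30 days after creation; it could be applied with boto3’s put_bucket_lifecycle_configuration. The bucket name, prefix, rule ID, and storage class are assumptions for the sketch, not a specific HyperStore policy.

```python
# Standard S3 lifecycle rule: tier objects under archive/ to a colder
# storage class 30 days after creation. Names and classes are assumptions.
lifecycle_cfg = {
    "Rules": [{
        "ID": "tier-old-archives",
        "Status": "Enabled",
        "Filter": {"Prefix": "archive/"},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]
}
# s3.put_bucket_lifecycle_configuration(
#     Bucket="media-archive", LifecycleConfiguration=lifecycle_cfg)
```

Because the policy lives with the bucket, the tiering then happens automatically and invisibly to the application, as described above.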

Learn more about our solutions today.

Learn more about NAS backup here.