5 Reasons Ransomware Protection Needs to Be a Board-Level Conversation

It is not just the responsibility of the IT/IS department to keep the business safe, but the obligation of every CXO and Board member to ask for and implement stringent cyber security measures starting with zero trust, perimeter security, and employee training.

“We are on the cusp of a global pandemic,” Christopher Krebs, the first director of the Cybersecurity and Infrastructure Security Agency (CISA), told Congress in May of 2021. Krebs wasn’t talking about a viral pandemic; rather, he was referring to the pandemic of cyber-attacks and data breaches. This warning rang especially true when the Colonial Pipeline ransomware attack crippled the US energy sector the following week.

Your files are encrypted

For the uninitiated, ransomware is the fastest-growing malware threat, targeting users and organizations of all types. It works by encrypting the victim’s data, rendering both source data and backup data useless, and then demanding a ransom, with the data held hostage until payment is received. Payments are usually demanded in untraceable cryptocurrencies, which can (and in many cases do) end up with state-sponsored bad actors.

Today, protection against and mitigation of ransomware attacks are information technology and information security responsibilities, with the C-Suite and Board taking a relatively hands-off approach. But that must change, and in some cases it already is changing. Here’s why C-Suite and Board members should take this threat seriously and be the driving force in protecting the organization against ransomware.

1. To Pay or not to Pay: Financial Impacts of Ransomware

Ransomware impacts organizations of all sizes, across all industries. The security company Sophos (1) found that 51% of companies surveyed reported being attacked by ransomware in 2020 – the year of the pandemic. In 73% of those cases, data was successfully encrypted, bringing the business to its knees. More than a quarter of respondents (26%) admitted to paying the ransom, at an average of $761K per incident – a huge increase from previous years, when a similar report had pegged the average at $133K.

The financial implications of paying ever-increasing ransom demands aside, the real impact of ransomware is on the business itself. It cripples businesses and renders services ineffective and undeliverable. There is also the threat of data exfiltration, which can expose sensitive customer data and leave the organization open to lawsuits and additional financial penalties. This does not even account for the loss of business due to downtime, or the brand damage that a ransomware attack can cause.

These impacts alone will rope in the Director of IT or IS, the CFO, General Counsel, Public Relations, the Chief Privacy Officer, the CIO, and the CISO. The CEO will also be roped in and will have to break the news to her board of directors. It would be far better if she remembers this as the day she was able to say, “We were prepared. We already have the business back up and running. We will not be paying.”

2. The Moral (and Regulatory) Low Ground of Paying a Ransom

Then there is the moral and regulatory dilemma associated with paying a ransom. The practice is actively discouraged by US government agencies because it encourages and fosters similar and copycat attacks. Added to this is the October 2020 advisory from the Department of the Treasury’s Office of Foreign Assets Control (OFAC) and Financial Crimes Enforcement Network (FinCEN) (2) on the “Potential Sanctions Risks for Facilitating Ransomware Payments”. Given that most ransomware payments are untraceable, this exposes organizations, their executives, and board members to US government sanctions violations.

3. Cyber Insurance: How to Get, Keep, and Save on This Must-Have for Business Continuity

Cyber security insurance, the fastest-growing insurance segment, is another important consideration. As a safeguard, most large organizations require cyber insurance as part of their cyber defense strategy. But insurance companies are not immune to US sanctions violations if a payment is made to rogue nations. As a result, premiums for ransomware coverage are high, or policies may require up to 50% coinsurance. In some cases, insurers may not even cover businesses unless they can show significant cyber security arrangements, along with data immutability, as part of their cyber security plans.

 


 

4. The Human Cost of Ransomware

Finally, in addition to the business, insurance, and regulatory impacts, the most reprehensible danger of ransomware is its human impact, and it applies across all industries. From crippled critical utilities in the energy sector, to declined credit card and bank transactions in the financial sector, to delayed patient care, emergency treatments, and even death in the healthcare sector, the impact of ransomware is real, direct, and all too inhumane.

5. Getting Organized: Plan, Don’t Pay

Without a regularly drilled, top-down plan for how the business will respond to a ransomware attack, an organization is going to make mistakes in the heat of an attack. It will pay the costs of those mistakes, whether to masked malware attackers, through ransomware-induced PR nightmares, or via increased cyber insurance premiums levied for lack of proper preparation and protection. It is not just the responsibility of the IT/IS department to keep the business safe, but the obligation of every CXO and Board member to ask for and implement stringent cyber security measures starting with zero trust, perimeter security, and employee training. And don’t forget to protect the attackers’ ultimate prize, your backup data, with immutable WORM storage.

For all these reasons, ransomware MUST be a C-suite and Board-led conversation. Forrester analysts write: “Implementing an immutable file system with underlying WORM storage will make the system watertight from a ransomware protection perspective.” Data immutability through WORM features such as S3 Object Lock is also now a requirement for many cyber insurance policies to cover the threat of ransomware. 
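To make the immutability requirement concrete, here is a minimal sketch in Python with the boto3 SDK (the bucket name, object key, and retention period are hypothetical; any S3-compatible system that supports Object Lock exposes the same calls):

import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")  # or an S3-compatible endpoint that supports Object Lock

# Object Lock must be enabled when the bucket is created
s3.create_bucket(Bucket="backup-vault", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: the object cannot be overwritten or deleted by any user
# until the retain-until date passes
s3.put_object(
    Bucket="backup-vault",
    Key="backups/2021-06-01.tar.gz",
    Body=open("2021-06-01.tar.gz", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)

Even an attacker holding valid storage credentials can read an object written this way, but cannot encrypt it in place or delete it before the retention date expires – which is exactly the property insurers and the Forrester guidance above are asking for.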


To learn more about solutions for ransomware protection, please visit
https://cloudian.com/lp/lock-ransomware-out-keep-data-safe-ent/

Citations:

  1. https://www.sophos.com/en-us/medialibrary/Gated-Assets/white-papers/sophos-the-state-of-ransomware-2020-wp.pdf
  2. https://home.treasury.gov/system/files/126/ofac_ransomware_advisory_10012020_1.pdf

 

 

Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian


$500 Billion in Lost Market Value: VC Firm Estimates Impact of Public Cloud Costs

VC firm Andreessen Horowitz examined the impact of public cloud costs on public company financials and found that they reduce the total market value of companies using cloud at scale by at least $500 billion.

Cloud computing and on-prem computing will always co-exist, we believe, and a recent article from the venture capital firm Andreessen Horowitz makes a compelling case for that view. The article (“The Cost of Cloud, a Trillion Dollar Paradox”) examined the impact of public cloud costs on public company financials and found that they reduce the total market value of companies using cloud at scale by at least $500 billion.

Here are some of the article’s key findings:

  • “If you’re operating at scale, the cost of cloud can at least double your infrastructure bill.”: The authors note that public cloud list prices can be 10-12X the cost of running your own data centers. Although use-commitment and volume discounts can reduce the difference, the cloud is still significantly more expensive.
  • “Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X.” Cloud spend can be hard to predict, resulting in spending that often exceeds plan. Companies surveyed for the article indicate that actual spend is often 20% higher than committed spend and at least 2X in some cases.
  • “Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud.”: This takes into account the TCO of everything from server racks, real estate, and cooling to network and engineering costs.
  • “The cost of cloud ‘takes over’ at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”: While public cloud delivers on its promise early on, as a company scales and its growth slows, the impact of cloud spend on margins can start to outweigh the benefits. Because this shift happens later in a company’s life, it’s difficult to reverse.
  • “Think about repatriation upfront.” By the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or modular architectural investment early on reduces the work needed to repatriate workloads in the future. In addition, repatriation can be done incrementally, and in a hybrid fashion.
  • “Companies need to optimize early, often, and, sometimes, also outside the cloud.”: When evaluating the value of any business, one of the most important factors is the cost of goods sold (COGS). That means infrastructure optimization is key.
  • “The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud.”: Developers faced with larger-than-expected cloud bills have become more savvy about the need for greater rigor when it comes to cloud spend.
  • “For large companies — including startups as they reach scale — that [cloud flexibility] tax equates to hundreds of billions of dollars of equity value in many cases.”: This tax is levied long after the companies have committed themselves to the cloud. However, one of the primary reasons organizations have moved to the cloud early on – avoiding large CAPEX outlays – is no longer limited to public clouds. There are now data center alternatives that can be built, deployed, and managed entirely as OPEX.


In short, the article highlights the need to think carefully about which use cases are better suited for on-prem deployment. Public cloud can provide flexibility and scalability benefits, but at a cost that can significantly impact your company’s financial performance.

Cloudian was founded on the idea of bringing public cloud benefits to the data center, and we now have nearly 700 enterprise and service provider customers that have deployed our award-winning HyperStore object storage platform in on-prem and hybrid cloud environments. On-prem object storage can deliver public cloud-like benefits in your own data center, at less cost and with performance, agility, security and control advantages. In addition, as long as the object storage is highly S3-compatible, it can integrate easily with public cloud in a hybrid cloud model.

To learn more about how we can help you find the right cloud storage strategy for your organization, visit cloudian.com/solutions/cloud-storage/. You can also read about deploying HyperStore on-prem with AWS Outposts at cloudian.com/aws.

 

Jon Toor, CMO, Cloudian


LinkedIn Live: Secure Data with VMware vSAN & Cloudian HyperStore

Our joint solution combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple and easy, and serves any cloud-native or traditional IT application requiring S3-compatible storage. 

Grant Jacobson, Director of Technology Alliances and Partner Marketing, Cloudian



Protecting Your Data with VMware vSAN and Cloudian HyperStore

Each month, VMware and Cloudian collaborate to promote our joint solution in a series of short (~15-minute) LinkedIn Live sessions. Each session highlights a new solution use case, and for today’s session, the fourth in our series, we talked about Data Protection and how to keep data safe. These are lively conversations about the solution and how our customers can take advantage of it to meet their evolving needs. Last month, we covered the new Splunk SmartStore use case, with a 44% TCO savings compared with traditional storage.

Our joint solution became available in February. It enables many new use cases, with Data Protection being one that cuts across all segments: everyone needs to ensure their data stays safe, especially from the accelerating increase in ransomware and other cyberattacks.



If you’d like more information about our solutions with VMware, see our dedicated webpage.
You can also reach us at [email protected]

Object Storage: Better Monetizing Content by Transitioning from Tape

As media organizations look for new ways to monetize their ever-growing content archives, they need to ask themselves whether they have the right storage foundation. In a recent article I wrote for Post Magazine, I discussed the advantages of object storage over tape when it comes to managing and protecting content. Below is a reprint of the article.


David Phillips, Principal Architect for M&E Solutions, Cloudian




Object Storage: Better Monetizing Content by Transitioning from Tape

Media and entertainment companies derive significant recurring revenue through old content. From traditional television syndication to YouTube uploads, this content can be distributed and monetized in several different ways. Many M&E companies, particularly broadcasters, store their content in decades-old LTO tape libraries. With years of material, including thousands of episodes and millions of digital assets, these tape libraries can grow so large that they become unmanageable. Deployments can easily reach several petabytes of data and may sprawl across multiple floors in a broadcaster’s media storage facility. Searching these massive libraries and retrieving specific content can be a cumbersome, time-consuming task, like trying to find a needle in a haystack.

Object storage provides a far simpler, more efficient and cost-effective way for broadcasters to manage their old video content. With limitless scalability, object storage can easily grow to support petabytes of data without occupying a large physical footprint. Moreover, the technology supports rich, customizable metadata, making it easier and quicker to search and retrieve content. Organizations can use a Google-like search tool to immediately retrieve assets, ensuring that they have access to all existing content, no matter how old or obscure, and can readily monetize that content.

Here’s a deeper look at how the two formats compare in searchability, data access, scalability and management.

Searchability and data access

LTO tape was created to store static data for the long haul. Accessing, locating, and retrieving this data was always an afterthought. In the most efficient tape libraries today, staff may be able to find a piece of media within a couple of minutes. But even in this scenario, if multiple jobs are queued up ahead of it in the library, finding that asset could take hours. And this is assuming that the tape containing the asset is stored in the library and in good condition (i.e., it can be read and doesn’t suffer from a jam).

This also assumes the staff has the proper records to even find the asset. Because of the limitations of the format, LTO tape files do not support detailed metadata. This means that organizations can only search for assets using basic file attributes, such as date created or title. It’s impossible to conduct any sort of ad hoc search. If a system’s data index doesn’t contain the file attributes that a user is looking for, the only option is to look manually, an untenable task for most M&E organizations with massive content libraries. This won’t change in the future, as tape cannot support advanced technologies such as artificial intelligence (AI) and machine learning (ML) to improve searchability.

On the other hand, object storage makes it possible to immediately search and access assets. The architecture supports fully-customizable metadata, allowing staff to attach any attributes they want to any asset, no matter how specific. For example, a news broadcast could have metadata identifying the anchors or describing the type of stories covered. When trying to find an asset, a user can search for any of those attributes and rapidly retrieve it. This makes it much easier to find old or existing content and use it for new monetization opportunities, driving much greater return on investment (ROI) from that content. This value will only increase as AI and ML, which are both fully supported in object storage systems, provide new ways to analyze and leverage data (e.g., facial recognition, speech recognition and action analysis), increasing opportunities to monetize archival content.
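To make this concrete, here is a minimal sketch in Python with the boto3 SDK (the endpoint, bucket, key, and attribute names are all hypothetical) of attaching custom attributes to a broadcast asset at upload time and reading them back:

import boto3

# Placeholder endpoint for an S3-compatible object store
s3 = boto3.client("s3", endpoint_url="https://s3.example-archive.local")

# Attach free-form, searchable attributes to a news broadcast at upload time
s3.put_object(
    Bucket="news-archive",
    Key="broadcasts/2019-11-04-evening.mxf",
    Body=open("2019-11-04-evening.mxf", "rb"),
    Metadata={"anchor": "jane-doe", "topics": "energy,security", "resolution": "4k"},
)

# The attributes come back with a simple HEAD request, ready for a search index
head = s3.head_object(Bucket="news-archive", Key="broadcasts/2019-11-04-evening.mxf")
print(head["Metadata"])  # {'anchor': 'jane-doe', 'topics': 'energy,security', ...}

A search layer that indexes these attributes is what delivers the Google-like retrieval described above.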

Scalability and management

Organizations must commit significant staff and resources to manage and grow an LTO tape library. Due to their physical complexity, these libraries can be difficult and expensive to scale. In the age of streaming, broadcasters are increasing their content at breakneck speed. And with the adoption of capacity-intensive formats like 4K, 8K and 360/VR, more data is being created for each piece of content. Just several hundred hours of video in these advanced formats can easily reach a petabyte in size. In LTO environments, the only way to increase capacity is to add more tapes, which is particularly difficult if there are no available library slots. When that’s the case, the only choice is to add another library. Many M&E companies’ tape libraries already stretch across several floors, leaving little room for expansion, especially because new content (in higher resolution formats) tends to use larger data quantities than older content.

Object storage was designed for limitless scalability. It treats data as objects that are stored in a flat address space, which makes it easy to grow deployments via horizontal scaling (or scaling out) rather than vertical scaling (scaling up). To increase a deployment, organizations simply add more nodes or devices to their existing system, rather than adding entirely new systems (such as LTO libraries). Because of this, object storage is simple to scale to hundreds of petabytes and beyond. With data continuing to grow exponentially, especially for video content, being able to scale easily and efficiently helps M&E companies maintain order and visibility over their content, enabling them to easily find and leverage those assets for new opportunities. Increasing the size of a sprawling, messy tape library offers exactly the opposite.

Tape libraries also lack centralized management across locations. To access or manage a given asset, a user has to be near the library where it’s physically stored. For M&E organizations that have tape archives in multiple locations, this causes logistical issues, as each separate archive must be managed individually. As a result, companies often need to hire multiple administrators to operate each archive, driving up costs and causing operational siloing.

Object storage addresses the challenge of geo-distribution with centralized, universal management capabilities. Because the architecture leverages a global namespace and connects all nodes together in a single storage pool, assets can be accessed and managed from any location. While companies can only access data stored on tape directly through a physical copy, object storage enables them to access all content regardless of where it is physically stored. One person can administer an entire globally-distributed deployment, enforcing policies, creating backup copies, provisioning new users and executing other key tasks for the whole organization.

Conclusion

M&E companies still managing video content in LTO tape libraries suffer from major inefficiencies and, in turn, lost revenue. The format simply wasn’t designed for the modern media landscape. Object storage is a much newer architecture that was built to accommodate massive data volumes in the digital age. Object storage’s searchability, accessibility, scalability, and centralized management help broadcasters boost ROI from existing content.

 


To learn more about Cloudian’s Media and Entertainment solutions, visit cloudian.com/solutions/media-and-entertainment/.

Tape — Does It Measure Up?

Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and the overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator. Historically, organizations have bitten the proverbial bullet and implemented tapes for long-term data archiving and backup, inheriting along with them all the associated problems. However, the remote aspect of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large data tape libraries.

Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian




Security and secure remote access have also become critical elements when considering data protection and business continuity. With production and engineering teams alike finding themselves “locked out of the building,” managing physical tape media and remediating mechanical issues with tape libraries has proved difficult if not impossible.

The drawbacks of tape, highlighted even more by the pandemic, include:

  • Accessibility: This one is obvious. The lack of immediate and complete accessibility has never been more problematic than during the pandemic.
  • Durability: Mechanical failures around tape library robotics and tape media failures inside have meant truck rolls into the tape vaults – not ideal for a shelter-in-place situation.
  • Compatibility: New tape drive hardware has limits to its backward compatibility, which has required recoding at a time when data availability has been the prime objective for business continuity.
  • Security: Ransomware attacks have become commonplace during the pandemic. Considering the various drawbacks associated with tapes, the rationale for using tapes for ransomware protection is up for reevaluation. As they say, data not retrievable in the right timeframe is data not protected. This is especially true in the case of ransomware.


As companies look to increase the capacity of their storage, as well as the frequency with which they access it, object storage checks off all the right boxes in terms of data durability, availability, performance, and accessibility. Whether in the public or private cloud, object storage overcomes the limitations of LTO tape listed above and has become the go-to for most IT administrators looking for a better solution. If you’re running tape today, it makes a lot of sense to evaluate the benefits of switching to object storage before the limitations of your current solution impact your business more severely — and the sooner the better. As tape infrastructure ages, the transition only becomes more difficult.

As with any major technology shift, there are many important factors to take into consideration.


To read an insider’s view on data center modernization focused on this topic, see “Tape: Does It Measure Up? An Insider’s Guide to Data Center Modernization” at
https://cloudian.com/lp/data-center-modernization/


5 Reasons to Adopt Hybrid Cloud Storage for your Data Center

Are you weighing the benefits of cloud storage versus on-premises storage? If so, the right answer might be to use both, and not just in parallel, but in an integrated way. Hybrid cloud is a storage environment that uses a mix of on-premises and public cloud services, with data mobility between the two platforms.

IT professionals are now seeing the benefit of hybrid solutions. According to a recent survey of 400 organizations in the U.S. and UK conducted by Actual Tech, 28 percent of firms have already deployed hybrid cloud storage, with a further 40 percent planning to implement within the next year. The analyst firm IDC agrees: in its 2016 Futurescape research report, the company predicted that by 2018, 85 percent of enterprises would operate in a multi-cloud environment.

Hybrid has piqued interest as more organizations look to the public cloud to augment their on-premises data management. There are many drivers for this, but here are five:

  1. We now have a widely-accepted standard interface.

The emergence of a common interface for on-prem and cloud storage changes everything. The world of storage revolves around interface standards. They are the glue that drives down cost and ensures interoperability. For hybrid storage, the de facto standard is the Amazon S3 API, an interface that began in cloud storage and is now available for on-premises object storage as well. This standardization is significant because it gives storage managers new flexibility to deploy common tools and applications on-prem and in the cloud, and easily move data between the two environments to optimize cost, performance, and data durability.
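As a rough illustration of that flexibility, the sketch below points the standard AWS SDK for Python at an on-prem, S3-compatible system simply by overriding the endpoint (the endpoint and credentials are placeholders); the application code is otherwise identical to code written for Amazon S3:

import boto3

# Same SDK and same API calls as Amazon S3; only the endpoint differs
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-onprem.example.local",  # placeholder on-prem endpoint
    aws_access_key_id="YourS3AccessKey",             # placeholder credentials
    aws_secret_access_key="YourS3SecretKey",
)

print([b["Name"] for b in s3.list_buckets()["Buckets"]])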

  2. Unprecedented hybrid scalability delivers operational efficiency.

Managing one large, scalable pool of storage is far more efficient than managing two smaller ones. And hybrid storage is hands-down the most scalable storage model ever devised. It combines on-prem object storage – which is itself scalable to hundreds of PBs – with cloud storage that is limitlessly scalable, for all practical purposes. This single-pool storage model reduces data silos, and simplifies management with a single namespace and a single view — no matter where the data originated or where it resides. Further, hybrid allows you to keep a copy of all metadata on-premises, ensuring rapid search across both cloud and on-premise data.

  3. Best-of-breed data protection is now available to everyone.

Data protection is fundamental to storage. A hybrid storage model offers businesses of all sizes incredible data protection options, delivering data durability that previously would have been affordable to only the most well-heeled storage users. In a hybrid configuration, you can back up data to object storage on premises, then automatically tier data to the cloud for long-term archive (Amazon Glacier, Google Coldline, Azure Blob). This gives you two optimal results: a copy of data on-site for rapid recovery when needed, and a low-cost, long-term offsite archive copy for disaster recovery. Many popular backup solutions, including Veritas, Commvault, and Rubrik, provide Amazon S3 connectors that enable this solution as a simple drop-in.
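As a sketch of how that tiering is typically configured (Python with boto3 again; the bucket name and prefix are placeholders), a single lifecycle rule can move backup objects to a cold archive tier automatically:

import boto3

s3 = boto3.client("s3")  # point endpoint_url at an on-prem system to manage private-cloud buckets

# Move objects under backups/ to the Glacier storage class 30 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="backup-archive",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-to-cold-archive",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)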

  4. Hybrid offers more deployment options to match your business needs.

Your storage needs have their own nuances, and you need the operational flexibility to address them. Hybrid can help with more deployment options than other storage models. For the on-premise component, you can select from options that range from zero-up-front cost software running on the servers you already own, to multi-petabyte turnkey systems. For the cloud component, a range of offerings meet both long-term and short-term storage needs. Across both worlds, a common object storage interface lets you mix-and-match the optimal solution. Whether the objective is rapid data access on-premises or long-term archival storage, these needs can be met with a common set of storage tools and techniques.

  5. Hybrid helps meet data governance rules.

External and internal data governance rules play a big part in data storage planning. In a recent survey, 59% of respondents reported the need to maintain some of their data on premises. On average, that group stated that only about half of their data can go to the cloud. Financial data and customer records in particular are often subject to security, governance, and compliance rules, driven by both internal policy and external regulation. With a hybrid cloud model, you can more easily accommodate these changing needs: you can set policies to ensure compliance, tailoring migration and data protection rules to specific data types.

While many are seeing the natural advantages of hybrid, some are still unsure. What other factors play in that I haven’t mentioned? With more and more being digitized and retained into perpetuity, what opportunities is your organization exploring to deal with the data deluge?

Embracing Hybrid Storage

It’s no surprise that Amazon Web Services (AWS) is a dominant force when it comes to the public cloud – it’s a $10B a year business, with nearly 10% of Amazon’s Q2 net sales attributed to AWS.


While AWS has been touting public cloud since its inception, only recently has it started to acknowledge the need for hybrid storage solutions. Why? Because it’s simply not realistic for many companies to move all their data to the public cloud.

Private vs. Public Cloud

 

A company may choose to stay with private, on-premises storage solutions if they have existing data centers already in place. Or they may prefer the enhanced performance and extra measure of control that comes with on-premises storage.

Nonetheless, public cloud storage has significant advantages. It’s easy to implement, scales on demand, and automates many of the data management chores.

Neither option is clearly better than the other – in fact, customers are spending more than ever on both private and public cloud solutions. IDC forecasts that total IT spending on cloud infrastructure will increase by 15.5% in 2016 to reach $37.1B. The bottom line is that companies need both on-prem and cloud solutions.

The Best of Both Worlds: Hybrid Storage

 

What’s needed is a solution that allows you to enjoy the advantages of both — the speed and control of on-prem and the on-demand scalability of cloud. And ideally, you’d get both within a single, simple management model.

That’s what Cloudian HyperStore is. It’s S3 cloud storage that physically sits in your data center. And, it looks and behaves exactly like Amazon S3 cloud storage, so your apps that work with Amazon will work with Cloudian. Best of all, you can manage the combined Cloudian + Amazon S3 storage pool as a single, limitlessly scalable storage environment.

Amazon Makes It Easy

 

Fortune summed up Amazon’s need for a hybrid compute model in their recent article, stating:

It’s become clear that AWS, which is the leader in public cloud, will have to address this issue of dealing with, if not embrace, customers’ on-premises computing.

Thankfully, in the storage world they’ve already addressed this by adding Cloudian HyperStore directly to the AWS Marketplace. We announced this last month, but it bears repeating because it’s an important step in AWS’s evolution.

The advantages in moving towards hybrid storage are numerous. Everything folds up to AWS, so even usage and billing from private cloud will be centralized in the monthly AWS invoices. More importantly, Cloudian HyperStore was built from day one to be fully S3 compatible, which ensures complete investment protection.

So if you’re debating between public and private cloud options for your company, remember that you can still get the best of both worlds. Check out Cloudian HyperStore for a better hybrid storage solution with AWS and Amazon S3.

AWS CLI with S3-Compatible Storage

There’ve been a lot of discussions about Amazon’s Simple Storage Service (S3) and Amazon Web Services (AWS). It seems to me that everyone is saying that they are Amazon S3-compatible or that they work with S3 storage. That makes me wonder: what is the best way to validate a solution or test it out to see if the storage solution will meet my object storage needs? Well, why not just use Amazon’s own S3 APIs and the AWS Command Line Interface (CLI)?

AWS CLI is a unified tool developed to help manage AWS services. I believe this is the best way to test out any solution that claims to be S3-compatible storage, such as Cloudian HyperStore. So let’s hop to it and get started. The following shows how to install and use the AWS CLI with Cloudian HyperStore on your Linux server.

Prerequisite:

You will need to install pip to simplify your AWS CLI installation. You can copy the following Python script to your Linux server, and it will help you install pip and awscli. The script is provided as-is, but feel free to copy, modify, and improve it to your liking.

# Python 2 script (the default interpreter on the RHEL/CentOS releases
# this was tested on): download the pip installer, then install pip and awscli.
import urllib
import os

PIP = 'get-pip.py'

# Fetch the official pip bootstrap script
urllib.urlretrieve("https://bootstrap.pypa.io/get-pip.py", PIP)

# Install pip, print the working directory, then install the AWS CLI
os.system("python get-pip.py")
os.system("pwd")
os.system("pip install awscli")

Process:

  1. Download the dc_getpip.py script (the Python script from the Prerequisite section above) to your Linux server. The script has been tested on RHEL and CentOS. The Cloudian S3 region used in this example is s3-region.addomain.local.
  2. Run python dc_getpip.py. This script will download pip and install AWS CLI for you.
  3. When the AWS CLI is successfully installed, continue with configuring AWS CLI with Cloudian HyperStore.
  4. Execute aws configure and provide the Cloudian credentials along with the Cloudian S3 region information (a sketch of the resulting files follows this list).
  5. cd ~/.aws, because the config and credentials files for the AWS CLI are located in your user directory. In this example, this is the root user directory.
  6. There are 2 files in .aws directory:
    1. config
    2. credentials
  7. Update the config file with the Cloudian region information. Include [profile cloudian] in your update.
  8. Update the credentials file with the Cloudian information. Include [cloudian] in your update.
  9. Run the following aws command to validate connectivity to your Cloudian HyperStore cluster. Using s3 ls will list the buckets of the tenant that was configured.
    1. aws --profile=cloudian --endpoint-url=http://s3-region1.addomain.local s3 ls
    2. Replace s3-region1.addomain.local with your Cloudian region.
    3. You can use aws --profile=cloudian --endpoint-url=http://s3-region1.addomain.local s3 cp file s3://bucket to test upload to your s3 bucket.
  10. Your AWS CLI is successfully configured with Cloudian HyperStore S3.
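For reference, here is roughly what the two files might look like after the updates (the region and keys are placeholders matching the example above; note that the AWS CLI expects the profile prefix in the config file but not in the credentials file):

~/.aws/config:

[profile cloudian]
region = s3-region

~/.aws/credentials:

[cloudian]
aws_access_key_id = YourS3AccessKey
aws_secret_access_key = YourS3SecretKey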

 

If you are curious to learn more about S3, download Cloudian HyperStore’s community edition and validate the solution for yourself.

Learn more about hybrid cloud management here.

IBM Spectrum Protect with Amazon S3 Cloud Storage

The IBM Spectrum Protect (formerly IBM Tivoli Storage Manager) solution provides the following benefits:

  • Supports software-defined storage environments
  • Supports cloud data protection
  • Easily integrates with VMware and Hyper-V
  • Enables data protection by minimizing data loss with frequent snapshots, replication, and DR management
  • Reduces the cost of data protection with built-in efficiencies such as source-side and target-side deduplication

IBM Spectrum Protect has also enhanced its offerings by providing support for Amazon S3 cloud storage (version 7.1.6 and later), and IBM Spectrum Protect version 7.1.6 was just released on June 17th, 2016. I was actually a little nervous and excited at the same time. Why? Because Cloudian HyperStore has an S3 guarantee. What better way to validate that guarantee than by trying a plug-and-play with a solution that has just implemented support for Amazon S3?

Overview of IBM Spectrum Protect with Amazon S3 cloud storage

And the verdict? Cloudian HyperStore configured as “Cloud type: Amazon S3” works right off the bat with IBM Spectrum Protect. You can choose to add a cloud storage pool from the V7.1.6 Operations Center UI or use the Command Builder. The choice is yours.

We’ll look at both the V7.1.6 Operations Center UI and the Command Builder to add our off-premise cloud storage.

NOTE: Cloudian HyperStore can be deployed as your on-premise S3 cloud storage but it has to be identified as an Amazon S3 off-premise cloud storage and you have to use a signed SSL certificate.

Here’s how you can add an Amazon S3 cloud storage or a Cloudian HyperStore S3 cloud storage into your IBM Spectrum Protect storage pool:

From the V7.1.6 Operations Center UI

 

From the V7.1.6 Operations Center console, select “+Storage Pool”.

Adding 'Storage Pool' to the IBM Spectrum Protect V7.1.6 Operations Center console

In the “Add Storage Pool:Identity” pop-up window, provide the name of your cloud storage and the description. In the next step of the “Add Storage Pool:Type”, select “Container-based storage:Off-premises cloud”.

IBM Spectrum Protect cloud storage description

Click on “Next” to continue. The next step in the “Add Storage Pool:Credentials” page is where it gets exciting. This is where we provide the information for:

  • Cloud type: Amazon S3 (Amazon S3 cloud type is also used to identify a Cloudian HyperStore S3)
  • User Name: YourS3AccessKey
  • Password: YourS3SecretKey
  • Region: Specify your Amazon S3 region (for Cloudian HyperStore S3, select “Other”)
  • URL: If you had selected an Amazon S3 region, this will dynamically update to the Amazon region’s URL. If you are using a Cloudian HyperStore S3 cloud storage, input the S3 Endpoint Access (HTTPS).

Complete the process by clicking on “Add Storage Pool”.


NOTE: Be aware that there is currently no validation performed to verify your entries when you click on “Add Storage Pool”. Your S3 cloud storage pool will be created. I believe the IBM Spectrum Protect group is addressing this with a validation process for the creation of a S3 cloud storage pool. I hope the step-by-step process that I have provided will help minimize errors with your Amazon S3 cloud storage pool setup.

From the V7.1.6 Operations Center Command Builder

 

From the V7.1.6 Operations Center Command Builder, you can use the following define stgpool command and you are done adding your off-premise S3 cloud storage pool:

  • define stgpool YourCloudName stgtype=cloud pooltype=primary cloudtype=s3 cloudurl=https://s3.cloudianstorage.com:443 access=readwrite encrypt=yes identity=YourS3AccessKey password=YourS3SecretKey description="Cloudian"

NOTE: You can review the server instance dsmffdc log if there are errors. It is located in the server instance directory. It is also possible that the signed SSL certificate is not correct.

For example:

[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdcloud.c:3145) com.tivoli.dsm.cloud.api.ProviderS3 handleException com.amazonaws.AmazonClientException Unable to execute HTTP request: com.ibm.jsse2.util.h: PKIX path building failed: java.security.cert.CertPathBuilderException: unable to find valid certification path to requested target
[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdcntr.c:8166) Error 2903 creating container ibmsp.a79378e1333211e6984b000c2967bf98/1-a79378e1333211e6984b000c2967bf98
[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdio.c:1956) Did not get cloud container. rc = 2903

 

Importing A Signed SSL Certificate

 

You can use the IBM Spectrum Protect keytool -import command to import the signed SSL certificate. However, before you perform the keytool import process, make a copy of the original Java cacerts.

The Java cacerts is located in IBM_Spectrum_Protect_Install_Path > TSM > jre > security directory.

You can run the command from IBM_Spectrum_Protect_Install_Path > TSM > jre > bin directory.
For example, on Windows:

    • ./keytool -import -keystore ../lib/security/cacerts -alias Cloudian -file c:/locationofmysignedsslcert/admin.crt

 

Enter the keystore password when prompted. If you haven’t updated your keystore password, the default is changeit and you should change it for production environments. When you are prompted to “Trust this certificate?”, input “yes”.

NOTE: Keep track of the “Valid from: xxxxxx” of your signed SSL certificate, you will have to import a new certificate when the current one expires.

By the way, if you encounter error “ANR3704E sdcloud.c(1636): Unable to load the jvm for the cloud storage pool on Windows 2012R2”, update the PATH environment variable on the Spectrum Protect Server:
IBM_Spectrum_Install_Path\Tivoli\TSM\jre\bin\j9vm and also set the JVM_LIB to jvm.dll.

Here’s what your Amazon S3 cloud storage type looks like from IBM Spectrum Protect V7.1.6 Operations Center console:

Operations Center console final result after adding Amazon S3 cloud storage to IBM Spectrum Protect V7.1.6

And you’re off! If you encounter any issues during this process, feel free to reach out to our support team.

You can also learn more by downloading our solution brief.

How-To: S3 Your Data Center

As the Storage Administrator or a Data Protection Specialist in your data center, you are likely looking for an alternative storage solution to help meet all your big data growth needs. And with all that’s been reported by Amazon (stellar growth, strong quarterly earnings), I am pretty sure their Simple Storage Service (S3) is on your radar. S3 is a secure, highly durable, and highly scalable cloud storage solution that is also very robust. Here’s an API view of what you can do with S3:

S3 API view

As a user or developer, you can securely manage and access your bucket and your data, anytime and anywhere in the world where you have web access. As a storage administrator, you can easily manage and provision storage to any group and any user on always-on, highly scalable cloud storage. So if you are convinced that you want to explore S3 as a cloud storage solution, Cloudian HyperStore should be on your radar as well. I believe a solution that is easy to deploy and use helps accelerate the adoption of the technology. Here’s what you will need to deploy your own cloud storage solution:

  • Cloudian’s HyperStore Software – Free Community Edition
  • Recommended minimum hardware configuration
    • Intel-compatible hardware
    • Processor: 1 CPU, 8 cores, 2.4GHz
    • Memory: 32GB
    • Disk: 12 x 2TB HDD, 2 x 250GB HDD (12 drives for data, 2 drives for OS/Metadata)
    • RAID: RAID-1 recommended for the OS/Metadata, JBOD for the Data Drives
    • Network: 1x1GbE Port


You can install a single Cloudian HyperStore node for non-production purposes, but it is best practice to deploy a minimum 3-node HyperStore cluster so that you can use logical storage policies (replication and erasure coding) to ensure your S3 cloud storage is highly available in your production cluster. It is also recommended to use physical servers for production environments.

Here are the steps to set up a 3-node Cloudian HyperStore S3 Cluster:

  1. Use the Cloudian HyperStore Community Edition ISO for OS installation on all 3 nodes. This will install CentOS 6.7 on your new servers.
  2. Log on to your servers
    1. The default root password is password (Update your root access for production environments)
  3. Under /root, there are 2 Cloudian directories:
    1. CloudianTools
      1. configure_appliance.sh allows you to perform the following tasks:
        1. Change the default root password
        2. Change time zone
        3. Configure network
        4. Format and mount available disks for Cloudian S3 data storage
          1. Available disks that were automatically formatted and mounted during the ISO install for S3 storage will look similar to the following /cloudian1 mount:
            Format and mount available disks for Cloudian S3 data storage
    2. CloudianPackages
      1. Run ./CloudianHyperStore-6.0.1.2.bin cloudian_xxxxxxxxxxxx.lic to extract the package content from one of your nodes. This will be the Puppet master node.
        S3 Puppet master mode
      2. Copy sample-survey.csv to survey.csv
        sample-survey.csv
      3. Edit the survey.csv file
        Edit survey.csv
        In the survey.csv file, specify the region, the node name(s), IP address(es), DC, and RAC of your Cloudian HyperStore S3 Cluster (a hypothetical sample follows these steps).

        NOTE: You can specify an additional NIC on your x86 servers for internal cluster communication.

      4. Run ./cloudianInstall.sh and select “Install Cloudian HyperStore”. When prompted, input the survey.csv file name. Continue with the setup.
        NOTE: If deploying in a non-production environment, it is possible that your servers (virtual/physical) may not have the minimum resources or a DNS server. You can run your install with ./cloudianInstall.sh dnsmasq force. Cloudian HyperStore includes an open source domain resolution utility to resolve all HyperStore service endpoints.
      5. In the following screenshot, the information that we provided in the survey.csv file is used in the Cloudian HyperStore cluster configuration. In this non-production setup, I am also using a DNS server for domain name resolution with my virtual environment.
      6. Your Cloudian HyperStore S3 Cloud Storage is now up and running.
        Cloudian HyperStore S3 cloud storage
      7. Access your Cloudian Management Console. The default System Admin group user ID is admin and the default password is public.
        Cloudian Management Console
      8. Complete the Storage Policies, Group, and SMTP settings.
        Cloudian HyperStore - near final
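As a concrete illustration, a survey.csv for this 3-node cluster might look like the following (the node names, IP addresses, and DC/RAC labels are hypothetical placeholders for the fields described in the survey.csv step above):

region1,node1,10.10.1.11,DC1,RAC1
region1,node2,10.10.1.12,DC1,RAC1
region1,node3,10.10.1.13,DC1,RAC1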

Congratulations! You have successfully deployed a 3-node Cloudian HyperStore S3 Cluster.

Can Scale-Out Storage Also Scale-Down?

Private cloud storage can scale out to meet demands for additional storage capacity, but can it scale down to meet the needs of small and medium-sized organizations that don’t have petabytes of data?

The answer is, yes it can, and you should put cloud storage vendor claims to the test before making your decision to build a private storage cloud.


The Importance of Scale-Down Private Cloud Storage

A private storage cloud that can cost-efficiently store and manage data on a smaller scale is important if you don’t need petabyte-capacity to get started. A petabyte is a lot of data. It is equivalent to 1000 terabytes. If you have 10 or 100 terabytes of data to manage and protect, a scale-down private storage cloud is what you need to do that. And in the future, when you need additional storage capacity, you must be able to add it without having to rip-and-replace the storage you started with.

Key Characteristics of Scale-Down Private Cloud Storage

The characteristics of scale-down, private cloud storage make it attractive for organizations with sub-petabyte data storage requirements.


You can start with a few storage servers and grow your storage capacity using a mix of storage servers and storage capacities from different manufacturers. A private storage cloud is storage server hardware agnostic so you can buy what you need when you need it.

Peer-to-Peer Architecture in Scale-Down Private Cloud Storage

Scale-down, private cloud storage should employ a “peer-to-peer” architecture, which means the same software elements are running on each storage server.

A “peer-to-peer” storage architecture doesn’t use complex configurations that require specialized and/or redundant servers to protect against a single point of failure. Complexity is not a good thing in data storage. After all, why would you choose a private cloud storage solution that is too complex for your needs?

Ease of Use and Management

Scale-down, private cloud storage should also be easy-to-use and easy-to-manage.

Easy-to-use means simple procedures to add, remove or replace storage servers. It also means using storage software with built-in intelligence that can protect your data and keep it accessible without a lot of fine tuning or tinkering to do it.

Easy-to-manage means you don’t need a dedicated storage administrator to keep your private cloud storage cluster running. An in-house computer systems administrator can do it or you can hire out administration to a managed services provider who can do it remotely.

Determining the Right Storage Size for Your Needs

So just how small is small when it comes to building your own private cloud storage? Small is a relative term, but a practical minimum from a hardware perspective would be about 10 terabytes of usable storage. There is nothing hard and fast about starting with 10 terabytes of usable storage, but once you start moving data into your private storage cloud, you should have an amount of usable storage that is appropriate for the uses you have in mind.

Choosing the Right Private Cloud Storage Vendor

If you have never built your own private cloud storage, you will need to determine which private storage cloud vendor has a simple, easy-to-use and easy-to-manage, private cloud storage solution that will work for you.

Conducting a Proof-of-Concept (POC)

The best way to help you make your decision is to conduct a Proof-of-Concept (POC) to determine which vendor will best meet your requirements for private cloud storage. Every vendor will tell you how easily their cloud storage scales out, but they may not mention if it can easily scale-down to meet the needs of organizations with sub-petabyte data storage requirements.

A Proof-of-Concept is not a whiteboard exercise or a slide presentation. A POC is done by having vendors show you how their storage software works, running on their storage hardware or on yours. A vendor who cannot commit to a small-scale POC may not be a good fit for your requirements.

Consideration of Vendor Ecosystems and Compatibility

The applications you plan to use with your private storage cloud should also be included in your POC. If you are not writing your own applications, then it is important to consider the size of the application “ecosystem” supported by the storage vendors participating in your POC.

After ten years in the public cloud storage business, Amazon Web Services (AWS) has the largest “ecosystem” of third-party applications written to use their Simple Storage Service (S3). The AWS S3 Application Programming Interface (API) constitutes a de facto standard that every private storage cloud vendor supports to a greater or lesser degree, but only Cloudian guarantees that applications that work with AWS S3 will work with Cloudian HyperStore. The degree of AWS S3 API compliance among storage vendors is something you can test during your POC.
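One low-effort way to begin that test is to run ordinary AWS CLI operations against each candidate’s endpoint and watch for API-level failures. A sketch (the profile, endpoint, and bucket names are placeholders):

aws --profile=vendor --endpoint-url=https://s3.vendor-under-test.local s3 mb s3://poc-bucket
aws --profile=vendor --endpoint-url=https://s3.vendor-under-test.local s3 cp testfile s3://poc-bucket/
aws --profile=vendor --endpoint-url=https://s3.vendor-under-test.local s3 ls s3://poc-bucket

The applications in a vendor’s ecosystem exercise far more of the API than these basic operations, so treat the CLI as a first pass, not a full compatibility verdict.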

The Value of a Proof-of-Concept for Private Cloud Storage

Running a POC will cost you some time and money, but it is a worthwhile exercise because storage system acquisitions have meaningful implications for your data. It is worth spending a small percentage of the acquisition cost on a POC in order to make a good decision.

The Future of Software-Defined Private Cloud Storage

The future of all data storage is being defined by software. Storage software running on off-the-shelf storage server hardware defines how a private storage cloud works. A software-defined private storage cloud gives you the features and benefits of large public cloud storage providers, but does it on your premises, under your control, and on a scale that meets your requirements. Scale-down private cloud storage is useful because it is where many small and medium-sized organizations need to start.

Tim Wessels is the Principal Consultant at MonadCloud LLC, which designs and builds private cloud storage for customers using Cloudian HyperStore. Tim is a Cloudian Certified Engineer and MonadCloud is a Preferred Cloudian Reseller Partner. You can call Tim at 978.413.0201, email [email protected], tweet @monadcloudguy, or visit http://www.monadcloud.com

YOU MAY ALSO BE INTERESTED IN:

Object Storage vs File Storage: What’s the Difference?

 

This is a guest post by Tim Wessels, the Principal Consultant at MonadCloud LLC.

Shifting Technology Habits and the Growth of Object Storage

Technology is, for many of us, a vital and inextricable part of our lives. We rely on technology to look up information, keep in touch with friends and family, monitor our health, entertain ourselves, and much more.


However, technology wasn’t always so ubiquitous – it wasn’t too long ago that our wireless phones had limited features and even fewer users actually using those features. According to a 2004 study from the Yankee Group, just over 10 years ago less than 50% of cell phones had internet access and less than 10% had cameras. And even though 50% of phones had internet access, only 15% of users took advantage of the feature.


By contrast, consider a survey conducted by Pew Research in 2014:

Among the 18-29 age group, text messaging and internet are more frequently used features than phone calls, which is indicative of the tremendous shift in technology use over the past few years. This study doesn’t even cover a major feature that many users use their phones for: pictures. As younger users turn almost exclusively to smartphone cameras for their photos (and, of course, #selfies), they turn to photo-sharing sites to host and display their images.

Photos are just one type of the ever-growing deluge of unstructured data, though. For enterprises, unstructured data also includes emails, documents, videos, audio files, and more. In order for companies to cost-effectively store this data (while keeping it protected and backed up for end-users), many of them are starting to turn to object storage over traditional network-attached storage (NAS).

Some of the benefits of object storage include a lower total cost of ownership (TCO) and the ability to easily scale up as data needs grow. That by itself is not enough, though. With a solution like our very own HyperStore, in addition to the affordable price (as low as 1c per GB per month) and infinite scalability (from tens of terabytes to hundreds of petabytes), we offer easy management and access control, plus strong data protection with both erasure coding and replication settings. You can read about all of HyperStore’s features and benefits here.

Unstructured data use is only going to continue to grow. Smartphones and other data-intensive technologies will only become more prevalent, and you’ll want to be prepared to meet that growth. Learn more about Cloudian’s hardware and software solutions today.