$500 Billion in Lost Market Value: VC Firm Estimates Impact of Public Cloud Costs


We believe cloud computing and on-prem computing will always co-exist, and a recent article from the venture capital firm Andreessen Horowitz makes a compelling case for that view. The article (“The Cost of Cloud, a Trillion Dollar Paradox”) examined the impact of public cloud costs on public company financials and found that they reduce the total market value of companies using cloud at scale by at least $500 billion.

Here are some of the article’s key findings:

  • “If you’re operating at scale, the cost of cloud can at least double your infrastructure bill.”: The authors note that public cloud list prices can be 10-12X the cost of running your own data centers. Although use-commitment and volume discounts can reduce the difference, the cloud is still significantly more expensive.
  • “Some companies we spoke with reported that they exceeded their committed cloud spend forecast by at least 2X.”: Cloud spend can be hard to predict, resulting in spending that often exceeds plan. Companies surveyed for the article indicated that actual spend is often 20% higher than committed spend, and at least 2X higher in some cases.
  • “Repatriation results in one-third to one-half the cost of running equivalent workloads in the cloud.”: This takes into account the TCO of everything from server racks, real estate, and cooling to network and engineering costs.
  • “The cost of cloud ‘takes over’ at some point, locking up hundreds of billions of market cap that are now stuck in this paradox: You’re crazy if you don’t start in the cloud; you’re crazy if you stay on it.”: While public cloud delivers on its promise early on, as a company scales and its growth slows, the impact of cloud spend on margins can start to outweigh the benefits. Because this shift happens later in a company’s life, it’s difficult to reverse.
  • “Think about repatriation upfront.”: By the time cloud costs start to catch up to or even outpace revenue growth, it’s too late. Even modest or modular architectural investment early on reduces the work needed to repatriate workloads in the future. In addition, repatriation can be done incrementally and in a hybrid fashion.
  • “Companies need to optimize early, often, and, sometimes, also outside the cloud.”: When evaluating the value of any business, one of the most important factors is the cost of goods sold (COGS). That means infrastructure optimization is key.
  • “The popularity of Kubernetes and the containerization of software, which makes workloads more portable, was in part a reaction to companies not wanting to be locked into a specific cloud.”: Developers faced with larger-than-expected cloud bills have become more savvy about the need for greater rigor when it comes to cloud spend.
  • “For large companies — including startups as they reach scale — that [cloud flexibility] tax equates to hundreds of billions of dollars of equity value in many cases.”: This tax is levied long after the companies have committed themselves to the cloud. However, one of the primary reasons organizations have moved to the cloud early on – avoiding large CAPEX outlays – is no longer limited to public clouds. There are now data center alternatives that can be built, deployed, and managed entirely as OPEX.


In short, the article highlights the need to think carefully about which use cases are better suited for on-prem deployment. Public cloud can provide flexibility and scalability benefits, but at a cost that can significantly impact your company’s financial performance.

Cloudian was founded on the idea of bringing public cloud benefits to the data center, and we now have nearly 700 enterprise and service provider customers that have deployed our award-winning HyperStore object storage platform in on-prem and hybrid cloud environments. On-prem object storage can deliver public cloud-like benefits in your own data center, at less cost and with performance, agility, security and control advantages. In addition, as long as the object storage is highly S3-compatible, it can integrate easily with public cloud in a hybrid cloud model.

To learn more about how we can help you find the right cloud storage strategy for your organization, visit cloudian.com/solutions/cloud-storage/. You can also read about deploying HyperStore on-prem with AWS Outposts at cloudian.com/aws.

 

Jon Toor, CMO, Cloudian

View LinkedIn Profile

LinkedIn Live: Secure Data with VMware vSAN & Cloudian HyperStore


Grant Jacobson, Director of Technology Alliances and Partner Marketing, Cloudian

View LinkedIn Profile


Protecting Your Data with VMware vSAN and Cloudian HyperStore

Each month, VMware and Cloudian collaborate to promote our joint solution in a series of short (~15-minute) LinkedIn Live sessions. Each session highlights a new solution use case, and in today’s session, the fourth in our series, we talked about Data Protection and how to keep data safe. These are lively conversations about the solution and how our customers can take advantage of it to meet their evolving needs. Last month, we covered the new Splunk SmartStore use case, which delivers a 44% TCO savings compared with traditional storage.

Our joint solution became available in February and combines Cloudian Object Storage with VMware’s vSAN Data Persistence platform through VMware Cloud Foundation with Tanzu. Adding Cloudian object storage software to vSAN is simple, and it serves any cloud-native or traditional IT application requiring S3-compatible storage. The solution enables many new use cases, with Data Protection being one that cuts across all segments: everyone needs to ensure their data stays safe, especially from the accelerating increase in ransomware and other cyberattacks.


If you missed it, watch it here:

If you’d like more information about our solutions with VMware, see our dedicated webpage:
You can also reach us at [email protected]

Object Storage: Better Monetizing Content by Transitioning from Tape

As media organizations look for new ways to monetize their ever-growing content archives, they need to ask themselves whether they have the right storage foundation. In a recent article I wrote for Post Magazine, I discussed the advantages of object storage over tape when it comes to managing and protecting content. Below is a reprint of the article.


David Phillips, Principal Architect for M&E Solutions, Cloudian

View LinkedIn Profile



Object Storage: Better Monetizing Content by Transitioning from Tape

Media and entertainment companies derive significant recurring revenue through old content. From traditional television syndication to YouTube uploads, this content can be distributed and monetized in several different ways. Many M&E companies, particularly broadcasters, store their content in decades-old LTO tape libraries. With years of material, including thousands of episodes and millions of digital assets, these tape libraries can grow so large that they become unmanageable. Deployments can easily reach several petabytes of data and may sprawl across multiple floors in a broadcaster’s media storage facility. Searching these massive libraries and retrieving specific content can be a cumbersome, time-consuming task, like trying to find a needle in a haystack.

Object storage provides a far simpler, more efficient and cost-effective way for broadcasters to manage their old video content. With limitless scalability, object storage can easily grow to support petabytes of data without occupying a large physical footprint. Moreover, the technology supports rich, customizable metadata, making it easier and quicker to search and retrieve content. Organizations can use a Google-like search tool to immediately retrieve assets, ensuring that they have access to all existing content, no matter how old or obscure, and can readily monetize that content.

Here’s a deeper look at how the two formats compare in searchability, data access, scalability and management.

Searchability and data access

LTO tape was created to store static data for the long haul. Accessing, locating and retrieving this data was always an afterthought. In the most efficient tape libraries today, staff may be able to find a piece of media within a couple minutes. But even in this scenario, if there were multiple jobs queued up first in the library, finding that asset could take hours. And this is assuming that the tape that contains the asset is stored in the library and in good condition (i.e., it can be read and doesn’t suffer from a jam).

This also assumes the staff has the proper records to even find the asset. Because of the limitations of the format, LTO tape files do not support detailed metadata. This means that organizations can only search for assets using basic file attributes, such as date created or title. It’s impossible to conduct any sort of ad hoc search. If a system’s data index doesn’t contain the file attributes that a user is looking for, the only option is to look manually, an untenable task for most M&E organizations that have massive content libraries. This won’t change in the future, as tape cannot support advanced technologies such as artificial intelligence (AI) and machine learning (ML) to improve searchability.

On the other hand, object storage makes it possible to immediately search and access assets. The architecture supports fully-customizable metadata, allowing staff to attach any attributes they want to any asset, no matter how specific. For example, a news broadcast could have metadata identifying the anchors or describing the type of stories covered. When trying to find an asset, a user can search for any of those attributes and rapidly retrieve it. This makes it much easier to find old or existing content and use it for new monetization opportunities, driving much greater return on investment (ROI) from that content. This value will only increase as AI and ML, which are both fully supported in object storage systems, provide new ways to analyze and leverage data (e.g., facial recognition, speech recognition and action analysis), increasing opportunities to monetize archival content.
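To make the idea concrete, here is a toy sketch of attribute-based search over object metadata. The in-memory catalog, field names, and search helper below are all illustrative; they are not Cloudian or S3 APIs.

```python
# Hypothetical sketch: how customizable object metadata enables ad hoc search.
# The asset names, metadata fields, and search() helper are illustrative only.

asset_store = {
    "news-2019-03-04.mxf": {"type": "news", "anchors": ["J. Smith"], "topics": ["weather", "sports"]},
    "drama-s01e01.mxf":    {"type": "episode", "series": "Harbor", "resolution": "4K"},
    "news-2021-11-02.mxf": {"type": "news", "anchors": ["A. Jones"], "topics": ["election"]},
}

def search(store, **criteria):
    """Return object keys whose metadata matches every given attribute."""
    hits = []
    for key, meta in store.items():
        if all(meta.get(k) == v or (isinstance(meta.get(k), list) and v in meta[k])
               for k, v in criteria.items()):
            hits.append(key)
    return hits

print(search(asset_store, type="news", topics="election"))
# -> ['news-2021-11-02.mxf']
```

Because any attribute can be attached at write time, the same one-line query works for anchors, topics, resolution, or anything else the organization chooses to tag, with no fixed index schema.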

Scalability and management

Organizations must commit significant staff and resources to manage and grow an LTO tape library. Due to their physical complexity, these libraries can be difficult and expensive to scale. In the age of streaming, broadcasters are increasing their content at breakneck speed. And with the adoption of capacity-intensive formats like 4K, 8K and 360/VR, more data is being created for each piece of content. Just several hundred hours of video in these advanced formats can easily reach a petabyte in size. In LTO environments, the only way to increase capacity is to add more tapes, which is particularly difficult if there are no available library slots. When that’s the case, the only choice is to add another library. Many M&E companies’ tape libraries already stretch across several floors, leaving little room for expansion, especially because new content (in higher resolution formats) tends to use larger data quantities than older content.

Object storage was designed for limitless scalability. It treats data as objects that are stored in a flat address space, which makes it easy to grow deployments via horizontal scaling (or scaling out) rather than vertical scaling (scaling up). To increase a deployment, organizations simply have to add more nodes or devices to their existing system, rather than adding new systems (such as LTO libraries) entirely. Because of this, object storage is simple to scale to hundreds of petabytes and beyond. With data continuing to grow exponentially, especially for video content, being able to scale easily and efficiently helps M&E companies maintain order and visibility over their content, enabling them to easily find and leverage those assets for new opportunities. Increasing the size of a sprawling, messy tape library does exactly the opposite.
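The scaling mechanics can be sketched with a minimal consistent-hash ring. This is an illustration of the flat-address-space idea only, not Cloudian's actual data placement algorithm: keys hash into a ring of nodes, and adding a node relocates only a fraction of the objects instead of requiring a whole new system.

```python
import bisect
import hashlib

def _h(s):
    # Hash a string into the flat address space.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring: adding a node moves only ~1/N of the keys."""
    def __init__(self, nodes, vnodes=64):
        # Each node owns many virtual points on the ring for even spread.
        self.ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, key):
        # A key belongs to the first ring point clockwise from its hash.
        i = bisect.bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[i][1]

keys = [f"asset-{i:04d}" for i in range(1000)]
before = HashRing(["node1", "node2", "node3"])
after  = HashRing(["node1", "node2", "node3", "node4"])  # scale out by one node

moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
print(f"{moved} of {len(keys)} objects move; the rest stay put")
```

With plain modulo hashing, nearly all keys would move when a node is added; the ring keeps most placements stable, which is what makes incremental scale-out practical.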

Tape libraries also lack centralized management across locations. To access or manage a given asset, a user has to be near the library where it’s physically stored. For M&E organizations that have tape archives in multiple locations, this causes logistical issues, as each separate archive must be managed individually. As a result, companies often need to hire multiple administrators to operate each archive, driving up costs and causing operational siloing.

Object storage addresses the challenge of geo-distribution with centralized, universal management capabilities. Because the architecture leverages a global namespace and connects all nodes together in a single storage pool, assets can be accessed and managed from any location. While companies can only access data stored on tape directly through a physical copy, object storage enables them to access all content regardless of where it is physically stored. One person can administer an entire globally-distributed deployment, enforcing policies, creating backup copies, provisioning new users and executing other key tasks for the whole organization.

Conclusion

M&E companies still managing video content in LTO tape libraries suffer from major inefficiencies, and in turn, lost revenue. The format simply wasn’t designed for the modern media landscape. Object storage is a much newer architecture that was built to accommodate massive data volumes in the digital age. Object storage’s searchability, accessibility, scalability and centralized management help broadcasters boost ROI from existing content.

 


To learn more about Cloudian’s Media and Entertainment solutions, visit cloudian.com/solutions/media-and-entertainment/.

Tape — Does It Measure Up?


Amit Rawlani, Director of Solutions & Technology Alliances, Cloudian

View LinkedIn Profile


Anyone who has worked with LTO tapes is well aware of the challenges. Let’s just say that they are anything but easy to deal with. From the lack of accessibility to the complexity of management and overall costs of maintaining and expanding aging tape libraries, the challenges have been a thorn in the side of many an IT administrator.

Historically, organizations have bitten the proverbial bullet and implemented tapes for long-term data archiving and backup, inheriting along with them all the associated problems. However, the remote work of distributed teams during the COVID-19 pandemic has accentuated the accessibility and maintenance challenges inherent to large data tape libraries. Security and secure remote access have also become critical elements when considering data protection and business continuity. With production and engineering teams alike finding themselves “locked out of the building,” managing physical tape media and remediating mechanical issues with tape libraries has proved difficult, if not impossible.

The drawbacks of tape that are even more highlighted by the pandemic include:

  • Accessibility: This one is obvious. The lack of immediate and complete accessibility has never been more problematic than during the pandemic.
  • Durability: Mechanical failures around tape library robotics and tape media failures inside have meant truck rolls into the tape vaults – not ideal for a shelter-in-place situation.
  • Compatibility: New tape drive hardware has limits to its backward compatibility, which have required recoding at a time when data availability has been the prime objective for business continuity.
  • Security: Ransomware attacks have become commonplace during the pandemic. Considering the various drawbacks associated with tapes, the rationale for using tapes for ransomware protection is up for reevaluation. As they say, data not retrievable in the right timeframe is data not protected. This is especially true in the case of ransomware.


As companies look to increase the capacity of their storage, as well as the frequency with which they access it, object storage checks off all the right boxes in terms of data durability, availability, performance, and accessibility. Whether in the public or private cloud, object storage overcomes the limitations of LTO tape listed above and has become the go-to for most IT administrators looking for a better solution. If you’re running tape today, it makes a lot of sense to evaluate the benefits of switching to object storage before the limitations of your current solution impact your business more severely — and the sooner the better. As tape infrastructure ages, the transition only becomes more difficult.
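One property commonly cited in the object-storage ransomware story is write-once (WORM-style) immutability: once written, an object cannot be overwritten in place. The toy model below illustrates that idea only; it is not a real object-lock implementation and makes no claims about any specific product's API.

```python
# Toy model of write-once (WORM-style) protection. A real object store would
# enforce this server-side with retention policies; this sketch just shows
# why in-place encryption by ransomware fails against immutable objects.

class ImmutableStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        # Reject any attempt to overwrite an existing object.
        if key in self._objects:
            raise PermissionError(f"{key} is locked; overwrite denied")
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

store = ImmutableStore()
store.put("backup-2021-06.tar", b"clean backup data")

try:
    store.put("backup-2021-06.tar", b"encrypted by ransomware")
except PermissionError as e:
    print(e)                                  # the tampering attempt is rejected

print(store.get("backup-2021-06.tar"))        # original data is still intact
```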

As with any major technology shift, there are many important factors to take into consideration.


Tape: Does it Measure Up?
An Insider’s Guide to Data Center Modernization

To read an insider’s view on data center modernization focused on this topic, please visit
https://cloudian.com/lp/data-center-modernization/

LTO tape library

IBM Spectrum Protect with Amazon S3 Cloud Storage

The IBM Spectrum Protect (formerly IBM Tivoli Storage Manager) solution provides the following benefits:

  • Supports software-defined storage environments
  • Supports cloud data protection
  • Easily integrates with VMware and Hyper-V
  • Enables data protection by minimizing data loss with frequent snapshots, replication, and DR management
  • Reduces the cost of data protection with built-in efficiencies such as source-side and target-side deduplication

IBM Spectrum Protect has also enhanced its offering with support for Amazon S3 cloud storage (version 7.1.6 and later), and version 7.1.6 was just released on June 17th, 2016. I was a little nervous and excited at the same time. Why? Because Cloudian HyperStore comes with an S3 compatibility guarantee. What better way to validate that guarantee than by trying a plug-and-play with a solution that has just implemented support for Amazon S3?

Overview of IBM Spectrum Protect with Amazon S3 cloud storage

And the verdict? Cloudian HyperStore configured as “Cloud type: Amazon S3” works right off the bat with IBM Spectrum Protect. You can choose to add a cloud storage pool from the V7.1.6 Operations Center UI or use the Command Builder. The choice is yours.

We’ll look at both the V7.1.6 Operations Center UI and the Command Builder to add our off-premise cloud storage.

NOTE: Cloudian HyperStore can be deployed as your on-premise S3 cloud storage, but it has to be identified as Amazon S3 off-premise cloud storage, and you have to use a signed SSL certificate.

Here’s how you can add an Amazon S3 cloud storage or a Cloudian HyperStore S3 cloud storage into your IBM Spectrum Protect storage pool:

From the V7.1.6 Operations Center UI

 

From the V7.1.6 Operations Center console, select “+Storage Pool”.

Adding 'Storage Pool' to the IBM Spectrum Protect V7.1.6 Operations Center console

In the “Add Storage Pool:Identity” pop-up window, provide the name of your cloud storage and the description. In the next step of the “Add Storage Pool:Type”, select “Container-based storage:Off-premises cloud”.

IBM Spectrum Protect cloud storage description

Click on “Next” to continue. The next step in the “Add Storage Pool:Credentials” page is where it gets exciting. This is where we provide the information for:

  • Cloud type: Amazon S3 (Amazon S3 cloud type is also used to identify a Cloudian HyperStore S3)
  • User Name: YourS3AccessKey
  • Password: YourS3SecretKey
  • Region: Specify your Amazon S3 region (for Cloudian HyperStore S3, select “Other”)
  • URL: If you had selected an Amazon S3 region, this will dynamically update to the Amazon region’s URL. If you are using a Cloudian HyperStore S3 cloud storage, input the S3 Endpoint Access (HTTPS).

Complete the process by clicking on “Add Storage Pool”.


NOTE: Be aware that there is currently no validation performed to verify your entries when you click on “Add Storage Pool”; your S3 cloud storage pool will be created regardless. I believe the IBM Spectrum Protect group is addressing this by adding a validation process for the creation of an S3 cloud storage pool. I hope the step-by-step process I have provided will help minimize errors in your Amazon S3 cloud storage pool setup.

From the V7.1.6 Operations Center Command Builder

 

From the V7.1.6 Operations Center Command Builder, you can use the following define stgpool command and you are done adding your off-premise S3 cloud storage pool:

  • define stgpool YourCloudName stgtype=cloud pooltype=primary cloudtype=s3 cloudurl=https://s3.cloudianstorage.com:443 access=readwrite encrypt=yes identity=YourS3AccessKey password=YourS3SecretKey description="Cloudian"

NOTE: If there are errors, you can review the server instance dsmffdc log, which is located in the server instance directory. One likely cause is a signed SSL certificate that is not correct.

For example:

[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdcloud.c:3145) com.tivoli.dsm.cloud.api.ProviderS3 handleException com.amazonaws.AmazonClientException Unable to execute HTTP request: com.ibm.jsse2.util.h: PKIX path building failed: java.security.cert.CertPathBuilderException: unable to find valid certification path to requested target
[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdcntr.c:8166) Error 2903 creating container ibmsp.a79378e1333211e6984b000c2967bf98/1-a79378e1333211e6984b000c2967bf98
[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdio.c:1956) Did not get cloud container. rc = 2903
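When a dsmffdc log grows large, a small helper can pull out just the FFDC error entries. The script below is purely illustrative (it is not an IBM tool) and assumes the line format shown in the example above.

```python
import re

# Illustrative helper for scanning a dsmffdc log for FFDC_GENERAL_SERVER_ERROR
# entries, based on the "[timestamp][ CODE ]: (file:line) message" line format.

LINE = re.compile(
    r"\[(?P<ts>[\d\- :.]+)\]\[ (?P<code>\w+) \]: \((?P<src>[\w.]+:\d+)\) (?P<msg>.*)"
)

def ffdc_errors(log_text):
    """Yield (timestamp, source location, message) for each FFDC error line."""
    for line in log_text.splitlines():
        m = LINE.match(line.strip())
        if m and m.group("code") == "FFDC_GENERAL_SERVER_ERROR":
            yield m.group("ts"), m.group("src"), m.group("msg")

sample = "[06-20-2016 11:58:26.150][ FFDC_GENERAL_SERVER_ERROR ]: (sdio.c:1956) Did not get cloud container. rc = 2903"
for ts, src, msg in ffdc_errors(sample):
    print(src, "->", msg)
```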

 

Importing A Signed SSL Certificate

 

You can use the IBM Spectrum Protect keytool -import command to import the signed SSL certificate. However, before you perform the keytool import process, make a copy of the original Java cacerts.

The Java cacerts is located in IBM_Spectrum_Protect_Install_Path > TSM > jre > security directory.

You can run the command from IBM_Spectrum_Protect_Install_Path > TSM > jre > bin directory.
For example, on Windows:

    • ./keytool -import -keystore ../lib/security/cacerts -alias Cloudian -file c:/locationofmysignedsslcert/admin.crt

 

Enter the keystore password when prompted. The default is changeit; you should change it for production environments. When you are prompted to “Trust this certificate?”, enter “yes”.

NOTE: Keep track of the “Valid from: xxxxxx” dates of your signed SSL certificate; you will have to import a new certificate when the current one expires.

By the way, if you encounter the error “ANR3704E sdcloud.c(1636): Unable to load the jvm for the cloud storage pool on Windows 2012R2”, update the PATH environment variable on the Spectrum Protect server to include IBM_Spectrum_Install_Path\Tivoli\TSM\jre\bin\j9vm, and also set JVM_LIB to jvm.dll.

Here’s what your Amazon S3 cloud storage type looks like from IBM Spectrum Protect V7.1.6 Operations Center console:

Operations Center console final result after adding Amazon S3 cloud storage to IBM Spectrum Protect V7.1.6

And you’re off! If you encounter any issues during this process, feel free to reach out to our support team.

You can also learn more by downloading our solution brief.

New Use Cases for Smart Data and Deep Learning

In case you missed it, we recently announced a project with advertising giant Dentsu, QCT (Quanta Cloud Technology) Japan, and Intel Japan. Using deep learning analysis and Cloudian HyperStore’s smart data storage, we’re launching a billboard that can automatically recognize vehicles and display relevant ads.

The system has ‘seen’ 3,000-5,000 images per car so that it can distinguish all the various features of a particular car and identify the make, model, and year with an average accuracy of 94%. For example, if someone is driving an older Mercedes, the billboard could advertise the latest luxury car. Or, if someone is driving a Prius, then the billboard could show eco-friendly products. It’s important to note that none of this data is stored – it is simply processed and then relayed into a relevant ad.
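The ad-selection step described above can be sketched as a simple rule lookup over the recognizer's output. The rules and function below are hypothetical; the deep-learning recognizer itself is out of scope here and its output is mocked.

```python
# Hypothetical sketch of the final pipeline stage: mapping a recognized
# vehicle (make, model, year) to an ad category. The rules mirror the
# examples in the text and are illustrative only.

def choose_ad(make, model, year, current_year=2017):
    """Pick an ad category from the recognized vehicle attributes."""
    if model == "Prius":
        return "eco-friendly products"
    if make == "Mercedes" and current_year - year > 8:
        return "latest luxury car"
    return "general-interest ad"

print(choose_ad("Mercedes", "S-Class", 2005))   # -> latest luxury car
print(choose_ad("Toyota", "Prius", 2015))       # -> eco-friendly products
```

Note that, as the text says, nothing persists here: the recognized attributes are consumed once to pick an ad and then discarded.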

Our smart data system sifts through thousands of images to accurately identify vehicles

You can also turn to this piece from CNN Money to learn a bit more about the project. The first billboard will be up and running later this year in Tokyo.

Broader Potential for Innovative Technology

 

One of the reasons this technology is possible is the use of metadata. Typically, big data is just stored passively for future analysis. Because this data is unorganized and untagged, it takes significant effort to discover and pull out specific information.

Objects, on the other hand, can have metadata tags attached to them. We run the data through real-time classification and auto-recognition/discrimination, which means these metadata tags are attached on the fly. As a result, we use this ‘deep learning’ to turn big data into smart data.

How IoT and deep learning combine to make smart data

So what are the implications of this technology beyond advertising? There is potential for tremendous applications of deep learning in other fields, such as improved object recognition for self-driving cars, higher quality screening for manufacturing equipment, or even better tumor detection in MRIs.

Still skeptical? Sign up for a free trial and test out our smart data storage for yourself.

Shifting Technology Habits and the Growth of Object Storage

Technology is, for many of us, a vital and inextricable part of our lives. We rely on technology to look up information, keep in touch with friends and family, monitor our health, entertain ourselves, and much more.


However, technology wasn’t always so ubiquitous – it wasn’t too long ago that our wireless phones had limited features and even fewer users actually using these features. Here’s the breakdown from 2004, according to a study from the Yankee Group:

This means that just over 10 years ago, less than 50% of cell phones had internet access and less than 10% had cameras. Even with 50% of phones having internet access, only 15% of users took advantage of this feature.


By contrast, look at this survey conducted by Pew Research in 2014:

Among the 18-29 age group, text messaging and internet are more frequently used features than phone calls, which is indicative of the tremendous shift in technology use over the past few years. This study doesn’t even cover a major feature that many users use their phones for: pictures. As younger users turn almost exclusively to smartphone cameras for their photos (and, of course, #selfies), they turn to photo-sharing sites to host and display their images.

Photos are just one type of the ever-growing deluge of unstructured data, though. For enterprises, unstructured data also includes emails, documents, videos, audio files, and more. In order for companies to cost-effectively store this data (while keeping it protected and backed up for end-users), many of them are starting to turn to object storage over traditional network-attached storage (NAS).

Some of the benefits of object storage include a lower total cost of ownership (TCO) and the ability to easily scale up as data needs grow. That by itself is not enough, though. With a solution like our very own HyperStore, in addition to the affordable price (as low as 1c per GB per month) and infinite scalability (from tens of terabytes to hundreds of petabytes), we offer easy management and access control, plus strong data protection with both erasure coding and replication settings. You can read about all of HyperStore’s features and benefits here.
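The replication and erasure coding settings mentioned above trade raw capacity for protection in different ways. A back-of-the-envelope comparison, using illustrative 3x replication and 4+2 erasure-coding parameters (not any specific HyperStore default):

```python
# Illustrative raw-capacity comparison of the two protection schemes
# mentioned above. The 3x and 4+2 parameters are example values only.

def replication_overhead(copies):
    """Raw capacity needed per byte of usable data with n full copies."""
    return copies

def erasure_overhead(k, m):
    """Raw capacity per usable byte for a k-data + m-parity erasure code."""
    return (k + m) / k

print(f"3x replication:     {replication_overhead(3):.2f}x raw capacity")
print(f"4+2 erasure coding: {erasure_overhead(4, 2):.2f}x raw capacity")
# Both configurations tolerate the loss of two devices, but the erasure
# code needs only half the raw space of triple replication.
```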

Unstructured data use is only going to continue to grow. Smartphones and other data-intensive technologies will only become more prevalent, and you’ll want to be prepared to meet that growth. Learn more about Cloudian’s hardware and software solutions today.

Lenovo Solves Data Storage Needs with a New Appliance

As our lives become increasingly digital, we’ll generate more and more data. By current estimates, storage needs are doubling in size every two years. That means that by 2020, we will reach 44 zettabytes – or 44 trillion gigabytes – of data, with most of that growth as unstructured data for backups, archives, cloud storage, multimedia content, and file data. This growth in data is quickly outpacing IT budgets. It’s clear we need a new storage approach if we hope to keep up with this deluge of data.

Introducing a New Appliance by Lenovo and Cloudian

 

Lenovo, together with Cloudian, is attacking the $40B storage market with an innovative new capacity storage appliance for low-cost, scalable storage that addresses 80% of customers’ data needs. We are proud to introduce the Lenovo DX8200C powered by Cloudian as the storage building block that can scale to this challenge and further drive datacenter efficiency and investment protection.

The Lenovo DX8200C powered by Cloudian is an affordable and scalable object storage solution.

Offered as part of Lenovo’s StorSelect software-defined storage program, this factory integrated appliance is built upon Lenovo’s industry-leading servers and features:

  • S3: S3 is the de facto cloud storage standard as stated by Gartner. Cloudian is the only native S3-compatible mass capacity storage solution on the market, enabling customers and partners to take advantage of the $38B AWS ecosystem
  • Affordability: Lower the total cost of ownership (TCO) to $0.01 per GB per month
  • Scalability: The flexible design allows you to start small and scale up to 112 TB of storage capacity per node
  • Security: Utilize always-on erasure coding and replication to ensure your data is protected
  • Simplicity: Single SKU for full appliance and support

The Lenovo DX8200C powered by Cloudian delivers a fully-integrated and ready-to-deploy capacity storage system, reducing risks and variables in the datacenter. Global support is provided by Lenovo’s top-rated support team.

Additionally, what sets this appliance apart from others is the use of Cloudian’s HyperStore storage platform, bringing with it a full host of key features, including:

 

In a news announcement today, David Lincoln, GM of the Data Center Group at Lenovo, stated that “the Cloudian HyperStore solution enables us to deliver leading innovative, software-defined storage capabilities to enterprises and service providers worldwide.”

Michael Tso, CEO and co-founder of Cloudian, reiterated this point by stating that “enterprises and value-added resellers (VARs) can maximize their business investment and revenue opportunities with this fully turnkey, channel-ready, 100 percent S3 object storage solution.”

With more and more industries requiring massive amounts of data to be stored, this partnership with Lenovo represents a vital next step – one where pre-loaded appliances make it easy for companies to both integrate with existing infrastructure and scale out for large deployments.

The Lenovo DX8200C powered by Cloudian will be available worldwide in the third quarter of 2016; in the meantime, Lenovo and Cloudian are working closely together to address all customer needs.

Start Small and Grow with Unlimited Scale

Much of the current conversation around data revolves around how much of it there is and how much there will be in the coming years. While this macro-level perspective is important and should help inform how data is stored, it’s also important to focus on the micro-level use cases.

Many companies tout that they can start big and go bigger. The issue with this approach is that it ignores a large swath of customer needs. What if you don’t need hundreds of TBs of storage immediately? What if you want to start small, but anticipate growth down the line?

Cloudian HyperStore 6.0

Scale as you grow with Cloudian HyperStore

 

Cloudian offers the flexibility to start small without sacrificing any of the robust features in our HyperStore operating environment. We offer both software and hardware solutions so you can start with as little as tens of TB of storage and scale up to hundreds of PBs.

Cloudian HyperStore can be deployed on off-the-shelf commodity hardware for 1c per GB per month, making it both easy and affordable to scale out as your data grows. As you add more data, HyperStore automatically diverts data from heavily used disks to less used disks to avoid imbalance. Of course, as you scale, security and data resiliency become ever more vital, which is why this smart disk balancing is only one part of the wider array of protection features in HyperStore.
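The disk-balancing idea above can be sketched in a few lines. This is a deliberately simplified stand-in for HyperStore’s balancing logic, not its actual algorithm:

```python
def pick_disk(disk_usage: dict) -> str:
    """Return the least-utilized disk for the next write. disk_usage maps
    a disk id to its fraction used (0.0-1.0). Steering new data to the
    emptiest disk keeps utilization even as the cluster fills."""
    return min(disk_usage, key=disk_usage.get)

usage = {"disk0": 0.82, "disk1": 0.35, "disk2": 0.60}
print(pick_disk(usage))  # disk1
```

Over many writes, this greedy choice converges toward even utilization, which is the imbalance-avoidance behavior described above.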

Big protection for all your data

 

No matter how much data you’re storing, we’ve built in some of the most robust security features possible to protect it. On each read request, all replicas of the object are checked, and missing or out-of-date replicas are automatically updated or replaced. As a result, you don’t have to worry about reading outdated data.
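The read-time replica check can be modeled simply: find the newest version among the replicas, and flag any replica that is missing or behind for repair. This is a simplified illustration of the behavior described above, not Cloudian’s implementation:

```python
def read_repair(replicas: dict):
    """replicas maps a replica location to the object version it holds
    (None = missing). Return the newest version together with the list
    of locations that need updating or replacing."""
    newest = max(v for v in replicas.values() if v is not None)
    stale = [loc for loc, v in replicas.items() if v != newest]
    return newest, stale

version, to_repair = read_repair({"rack1": 3, "rack2": 3, "rack3": 2})
print(version, to_repair)  # 3 ['rack3']
```

The read is served from an up-to-date replica while the stale ones are brought forward in the background.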

The Cloudian Management Console lets you monitor your system’s health and get alerts when things are off. Be proactive by utilizing replication or erasure coding (or both!) to properly protect your data. Plus, spread your data out among geographically independent data centers as an added contingency against data loss. If you need to conduct a more granular check-up on your system, we’ve implemented an “object GPS” so you can quickly and easily locate any specific object within a given bucket.
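The replication-versus-erasure-coding trade-off mentioned above comes down to raw-capacity overhead. A short sketch of the arithmetic (the 3-way and 4+2 schemes are illustrative choices, not Cloudian defaults):

```python
def replication_overhead(copies: int) -> float:
    """Raw-to-usable ratio for N-way replication: N copies = N x raw."""
    return float(copies)

def erasure_overhead(data_frags: int, parity_frags: int) -> float:
    """Raw-to-usable ratio for a k+m erasure code: (k+m)/k."""
    return (data_frags + parity_frags) / data_frags

# 3-way replication stores 3x the data; a 4+2 erasure code stores only
# 1.5x while still surviving the loss of any two fragments.
print(replication_overhead(3), erasure_overhead(4, 2))  # 3.0 1.5
```

Replication repairs faster and reads cheaply; erasure coding costs far less capacity. Using both, as suggested above, lets you match the scheme to each dataset’s value.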

As your organization grows, your access needs will change as well. HyperStore gives you multi-tenancy controls so that you can give role-based access to administrators and users.

From the very beginning, we believed strongly in providing customers with all the tools they needed to create the storage platform that works for them. In addition to the HyperStore software, we also have turnkey appliances that enable small deployments with the potential to scale up to many PBs.

The Cloudian HyperStore 1500 Appliance offers hot-swappable hardware, automated data tiering, and unlimited scale.

If you’d like to try Cloudian HyperStore for yourself, sign up for a free trial today.

Betting on Software-Defined Storage

Picking a company to advise is not always easy, but sometimes it just clicks

Take, for example, my recent decision to join Cloudian’s advisory board. As outlined in my recent blog post, I look at several factors before deciding to advise a company:

  • The potential for growth
  • How well they know their target audience
  • The quality of the team
  • How passionate everyone is (not just the employees of the company, but also my own passion and excitement)

Of course, these are broad factors I always consider. When it came to Cloudian, I had plenty of other questions as well:

  • Does the product/service actually work?
  • Does it scale?
  • Does it save money?
  • Does it enable a more agile operations environment?

Everything I’ve heard from customers – and from the team, of course – indicates that the answer is a definite YES. But before I go into more detail on what Cloudian does, a bit of background on object storage.

Storage: Then and Now

Storage was long dominated by firms that did a good job protecting your data – and serving it up for the then-dominant vertically scalable workloads – but that, in turn, locked you into their proprietary hardware and software, earning some of the largest margins in IT infrastructure. Back in 2008, many firms (Intel and LSI among them), the whitebox server providers, and entrepreneurs including myself thought: storage is taking more and more of the IT budget but is not keeping up with new application architectures, new patterns of data generation, or a new generation of storage managers. There had to be a better way.

And that better way is now called software-defined storage.

Today, storage is much better than it was in 2008, with far better economics, business models that pass the benefits of flash and network improvements on to customers, and a shift towards scale-out, developer-friendly architectures. Much of this change has been forced by Amazon and S3, who set the bar quite high for easy-to-use, massively scalable, and comparatively less expensive storage.

How Cloudian Fits In

Cloudian provides on-premise software-defined storage at 1c/GB. This by itself does not set the company apart, but they made a smart move early on – they bet on Amazon’s S3 API. Instead of inventing another proprietary API in a sea of proprietary APIs, they decided to focus on S3 from day one. This gives them a unique offering – 100% S3-compliant storage that uses metadata in interesting and intelligent ways.

So if your developers and the software you are running can interact with S3, they can interact with Cloudian. Incidentally, Cloudian’s management interfaces blow away the AWS storage GUI, giving you a firm-wide view into your data, grouped into blobs, and the policies you’ve applied to those blobs. Your developers are happy because it scales and is essentially S3 out the front, and your storage teams are happy because they retain the control and visibility they need to do their jobs. And that’s not to mention “native” multi-tenancy, which is one of many reasons service providers like NTT rely on Cloudian.

The results have been outstanding: customers are providing excellent reviews, and this growing word of mouth has accelerated sales.

I went into the Cloudian offices recently to talk a bit more about my reasons for joining the advisory board, and you can watch the video here:

I’m excited to get a front-row seat as Cloudian continues to ramp up their momentum and grow, and I’m looking forward to getting to know everyone in this wonderful team of people.

Hypervisor Agnostic Cloud Storage for VDI Home Directories with Cloudian

Virtual Desktop Infrastructure (VDI) is, simply put, the process of running virtualized desktops for users within an organization using server-based computing. You have the option of running persistent or non-persistent virtual desktops.

If you are interested in knowing more about these types of deployments, you can find out more by using your favorite search engine. If you are like me and you just want to click on a link and have the information pop up on your screen, here is a blog post that explains VDI in more detail.

For example, a VDI deployment with Hyper-V can be explained with the diagram below. The RDP client logs in through the RD Web Access server, and the RD Connection Broker server lists and orchestrates the virtual machines. The AD server authenticates access, the RD Session Host server redirects the RDP client to the right virtual machine, and the RD Gateway server publishes the VM and makes it available to the authorized user.

VDI deployment with Hyper-V

Whether you are running persistent and/or non-persistent virtual desktops, your organization will still require storage for your virtual desktop users’ home directories. You can deploy additional LUN storage. However, with traditional LUN storage, you are likely to run through the following process:

  1. You, as the VDI administrator, define the storage requirement for each user.
  2. You submit the storage requirement request to your storage administrator.
  3. The storage administrator creates the LUN storage based on your requirements.
  4. You create your master/golden image for your virtual desktops.
  5. You provision the virtual desktops to your users when you are ready.
  6. You field late-night support calls when a user runs out of disk space for the files/presentations/media in their home directory.
  7. You call your storage administrator to provision more storage at 1 a.m.
  8. You and your storage administrator work through the wee hours on the support calls.
  9. Repeat steps 6 through 8…monthly, weekly, or even daily!

 

A Better Solution for VDI Users with Cloud Storage

 

Did the previous process flow sound familiar? What if you and your storage administrator could host your own hypervisor-agnostic, on-premise cloud storage solution within your own firewalls for your virtual desktop users’ home directories? What if the storage administrator could increase the bucket capacity for each user with a few clicks when a support call comes in? All of this is possible because software-defined cloud storage solutions such as Cloudian are designed from the ground up to be:

  • Simple
  • Highly scalable
  • Always-on with secure and encrypted access

 

Hyper-V VDI deployment with cloud storage for VDI users’ home directories

Instead of fielding late-night support calls, easily provision S3 buckets for each virtual desktop user and provide secure S3 portal access without missing a beat. With some on-premise cloud storage solutions, you can:

  • Use the inherent multi-tenancy feature to create and deploy storage for all VDI user home directories.
  • Use QoS to throttle each user’s PUTs and GETs.
  • Monitor per user usage and easily review reports for chargeback purposes.
  • Use replication or erasure coding on a per-storage-policy basis to ensure each group has the right data protection.

Per-bucket and per-user granularity is possible because some cloud storage solutions are fully Amazon S3 compliant. Think about it: rather than creating additional LUN storage for your VDI users’ home directory requirements, you can simply create a master image with the secure on-premise cloud portal for every group and every virtual desktop user defined in the web browser of the master/golden image.

From that one golden image, you can deploy hundreds or thousands of virtual desktops with a secure on-premise cloud storage solution for all your virtual desktop users, who can access their cloud-hosted home directories from any web browser. Best of all, many cloud storage solutions also support NFS/SMB/FTP via native file access integration. This means you get industry-standard file protocol access without any third-party gateways!

With cloud storage solutions for user home directories and for file sharing purposes, we get the following benefits for virtual desktop deployments:

  • Simplified virtual desktop deployment with a hypervisor agnostic cloud storage solution. Minimize your late night “virtual desktop user is out of storage” support calls.
  • Scale-out and highly available home directories for virtual desktop user storage. Each storage bucket capacity is tunable for each virtual desktop user. There is no single point of failure with cloud storage.
  • Secure in-flight data and data-at-rest with AES-256 encryption.
  • Simplified and flexible data protection. Virtual desktop users can manage their own data protection and retention requirements.
  • Predictive analytics for storage planning. Use built-in analytics to manage your storage growth requirements.
  • Fully S3 compliant storage. This means you can support hundreds of S3 compatible applications using your very own on-premise cloud storage solution.
  • Software-defined. Use any x86 commodity server to deploy your own hybrid, private, and public cloud storage solution and minimize your operating cost.
  • Manage access and performance with QoS throttle. Easily manage user PUTs and GETs by using QoS throttles at the group or user level.
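The QoS throttling in the last bullet is commonly built on a token bucket. Here is a minimal sketch of one such throttle, one instance per user or group; the rates are illustrative and this is not Cloudian’s actual QoS mechanism:

```python
class TokenBucket:
    """Token-bucket throttle: tokens refill at a steady rate up to a
    burst cap, and each PUT or GET spends one token."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec   # tokens added per second
        self.tokens = burst        # start with a full bucket
        self.burst = burst         # bucket capacity
        self.last = 0.0            # timestamp of the last check

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, burst=2.0)
print(bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0))  # True True False
```

A burst of requests drains the bucket quickly, after which requests are paced to the refill rate – exactly the per-user PUT/GET smoothing the bullet describes.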

With the availability of secure cloud storage solutions for on-premise deployments, it’s a no-brainer to simplify our lives and get away from repetitive middle-of-the-night user storage support calls. Say goodbye to conversations such as:


“Help! I can’t save my executive briefing presentation and videos on my desktop. I need to securely share the presentation and videos with our Tokyo office. I am getting an error on my desktop pointing out that my e:\ drive and my user home directory is full. And NO, I cannot delete any of the existing files to make space for these new files and videos because I need all of it to be always accessible to me.”

To find out more about the advantages of hybrid and private cloud storage solutions, visit Cloudian.

Cheers,

Dominic

Data Availability & Data Protection for the IoT World

New York cityscape

New York, “The City That Never Sleeps” – a fitting moniker for a city full of energy and excitement. Servers in data centers all around the world are constantly crunching numbers and generating analytics for every financial institution in New York. Why are some of these servers located worldwide? For a variety of reasons, but in my humble opinion, it is to ensure that data is always on and always available. After all, we are talking about billions of dollars in capital managed electronically by the New York Stock Exchange alone.

By 2020, it is predicted that there will be more than 20 billion internet-connected devices. As your business grows, so will the amount of data and storage you need. We’ll obviously need solutions to protect our data on-premise or in the cloud. A company that can make sure customers’ data is always on, secure, highly available, and protected rules the IoT world.

modern storage requirements

But in order to serve and protect your data for the always-on, always-available IoT world, what requirements should we take into account before deploying any data protection or storage solution? If you are a data protection geek, you’ll most likely see some of your own requirements listed on the right. If you are a data protection solutions provider, you definitely rock! Data protection solutions such as Commvault, NetBackup, Rubrik, and Veeam are likely what you have in-house to protect your corporate data centers and mobile devices. These are software-defined and designed to be highly available for on-premise or in-the-cloud data protection.

What about storage? What would you consider? I am sure there are many well-known storage providers you can easily name. But with the new kids on the block disrupting the storage market, would lowering your operating costs ($0.005/GB per month) while meeting the above-listed requirements pique your interest?

Amazon S3 and Cloudian
Cloudian is a software-defined storage company. The solution is fully S3 compliant, which means that if you are familiar with Amazon S3, you’ll love the features that come with it. If you are not, then as a data protection geek with more than 15 years of experience, I invite you to give the Cloudian HyperStore free trial a shot. The features and capabilities of Cloudian HyperStore as a scale-out storage solution with true multi-tenancy are pretty cool in my book. Imagine being able to deploy and grow storage as you need it for your corporate user home directories, backups, archiving, and even object storage for virtualization solutions (i.e., Red Hat OpenStack). The use cases for scale-out storage solutions are vast. And there is no more hardware vendor lock-in: you can easily choose between a Cloudian HyperStore appliance and commodity servers to roll your own scale-out storage with Cloudian HyperStore software.

Imagine that you, as a storage administrator, can easily provide Storage as a Service (STaaS) to all your users. Take a look at the image below. The granular, object-level management available on a per-user basis is pretty sweet. I can grant read and/or write permissions to my files/objects with object-level ACLs and share an object via a public URL.
Cloudian object level management

To top it all off, I can also limit the maximum number of downloads of a specific object that I want to share. As a service provider, you can also use the analytics inherent in the solution to implement chargeback for every account you manage with the Cloudian HyperStore smart storage solution.
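The download cap on a shared object boils down to a counter checked on each request. A hedged sketch of that control (the URL is hypothetical; real share URLs are generated by the storage system):

```python
class SharedLink:
    """Public share link with a maximum-download limit, modeling the
    capped-download control described above."""

    def __init__(self, url: str, max_downloads: int):
        self.url = url
        self.remaining = max_downloads

    def download(self) -> bool:
        """Allow the download only while the cap has not been reached."""
        if self.remaining <= 0:
            return False
        self.remaining -= 1
        return True

link = SharedLink("https://storage.example.com/briefing.pptx", 2)
print(link.download(), link.download(), link.download())  # True True False
```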

Best of all, if you decide that you want to move your data to Amazon, you can use Cloudian HyperStore’s built-in auto-tiering feature to dynamically move data to Amazon S3. You don’t have to take my word for it: Cloudian offers a 45-day free trial. Try it out today.
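Auto-tiering policies are typically expressed as a rule evaluated per object. A minimal sketch of an age-based rule; the 30-day threshold is an illustrative policy choice, not a HyperStore default:

```python
def tier_for(age_days: int, threshold_days: int = 30) -> str:
    """Age-based tiering rule: objects older than the threshold are
    moved to Amazon S3, newer ones stay on local storage."""
    return "amazon-s3" if age_days > threshold_days else "local"

print(tier_for(45), tier_for(5))  # amazon-s3 local
```

Run against each object’s last-modified age, a rule like this keeps hot data local while cold data drifts to the cheaper cloud tier.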