Bringing Cloud-Native Applications to the Edge


Gary Ogasawara, CTO, Cloudian


In the article below, recently published in The New Stack, I discuss the benefits of cloud-native applications for edge use cases and the challenge that lack of standardization poses to broader adoption.



The Challenge of Bringing Cloud Native Apps to the Edge

With the help of the Cloud Native Computing Foundation (CNCF), enterprises have made major progress in adopting cloud native technologies in public, private, and hybrid cloud environments. The next step is to bring these technologies to the edge. Just like those environments, the edge will benefit tremendously from the better portability, improved agility, faster app development cycles, and reduced vendor lock-in that cloud native adoption delivers. Before this can happen, however, the industry must first overcome a major challenge: the lack of standardization.

How Cloud Native Apps Benefit Edge Use Cases

App portability and agility are perhaps the biggest advantages of cloud native technology. Using Kubernetes, with its resource-aware scheduling and its abstraction of the underlying operating system and hardware, a software developer can come close to the ideal of creating an app once and then running it anywhere.

This flexibility is extremely valuable for all kinds of different edge use cases. Consider a common example: video surveillance. Imagine a security camera monitoring an electrical substation. The camera is continually collecting raw video at this edge endpoint. With massive volumes of streaming data being generated, the IT team naturally needs an app to filter out all the unimportant footage (when no motion is occurring or if there’s insignificant motion, such as a plane flying in the distant sky) and send only the meaningful footage (when significant motion is occurring, such as a person approaching the substation) to a central cloud or hub for human review.

In this case, a single cloud native app can be run at both the edge and the cloud. Furthermore, it can be used for different content transformation purposes in each location. At the edge, the app’s machine learning capabilities perform content filtering and send only important footage to the cloud. In the cloud, humans use the app’s analytics capabilities to perform additional editing and post-processing on that video footage to determine whether suspicious activity is occurring and help identify people when necessary.

With a cloud native app, certain aspects can run at the edge while others run in the cloud. Similarly, the same app can be deployed at many different edge endpoints, and in each location the app can be employed differently. For example, the content transformation app could have unique motion sensitivity settings at every surveillance camera, changing which video footage is filtered from each device.
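To make the per-endpoint behavior concrete, here is a hypothetical sketch of such a filter. The device IDs, thresholds, and motion-score scale are all invented for illustration; a real deployment would load this configuration per endpoint.

```python
# Hypothetical per-device content filtering: the same app logic runs at every
# camera, but each endpoint applies its own motion-sensitivity setting.
SENSITIVITY = {
    "substation-cam-01": 0.8,  # busy site: forward only strong motion
    "substation-cam-02": 0.3,  # remote site: forward weaker motion too
}

def should_forward(device_id, motion_score, default=0.5):
    """Forward a clip to the central cloud/hub only when its motion score
    meets or exceeds the device's configured sensitivity threshold."""
    return motion_score >= SENSITIVITY.get(device_id, default)
```

The same function runs at every endpoint; only the configuration table differs, which is what lets one cloud native app behave differently per location.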

Standardization

There’s a major hurdle preventing cloud native apps from being widely deployed at the edge today. CNCF projects are all open source — and open source approaches will be challenging to implement at the edge. Why? For open source projects to succeed, they require standardization of both software and hardware.

However, as it stands today, there isn’t much standardization in either edge software or hardware, particularly the latter. Just look at how fractured the current hardware market is: leading home edge devices from Amazon, Apple, and Google all employ their own standards and offer little interoperability. For cloud native technology to work at the edge, therefore, the industry must focus its efforts on achieving broad software and hardware standardization.

When it comes to edge deployments, the software players are certainly further along in these efforts than their hardware counterparts. While there’s still much to be done, vendors such as IBM/Red Hat and SUSE/Rancher Labs have led the way in driving early edge software standardization, including work on operating systems and APIs. This isn’t surprising, as the same organizations have also recently been at the front of the pack in promoting on-prem Kubernetes and cloud native adoption.

API standardization is an especially important piece of the puzzle for software. High-quality, standardized APIs are key to supporting certain functionalities in any computing environment. For instance, Amazon Web Services’ S3 API, standardized by Amazon, provides limitless scalability for object storage deployments (and it works well on-prem, in the public cloud, and at the edge).

There are existing storage and networking APIs, currently used in highly distributed computing environments, that can effectively be extended to the edge. As mentioned above, the S3 API is one. Still, the industry must standardize many more APIs to support very specialized functions that are unique to the edge.
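As an illustration of this portability, the same boto3 S3 client code can target the public cloud, an on-prem store, or an edge endpoint simply by changing the endpoint URL. The endpoint and credentials below are hypothetical placeholders.

```python
def make_s3_client(endpoint_url, access_key, secret_key):
    """Build an S3 client against any S3-compatible endpoint; the calling
    code is identical whether the endpoint is AWS, on-prem, or at the edge."""
    import boto3  # deferred import: only needed when a client is actually built
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

# The same calls then work unchanged at any location, e.g.:
#   s3 = make_s3_client("https://edge-node.example.local", "KEY", "SECRET")
#   s3.put_object(Bucket="footage", Key="cam01/clip.mp4", Body=b"...")
#   s3.get_object(Bucket="footage", Key="cam01/clip.mp4")
```

This is the property that makes a standardized storage API extendable to the edge: applications are written once against the API, not against a location.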

Streaming APIs are among the most critical. An autonomous vehicle’s steering software has to make life-or-death decisions in real time. While there are existing streaming APIs that can be applied to edge use cases like video surveillance, more robust streaming APIs must be developed and standardized to support next-generation applications such as those for self-driving cars. In addition to streaming APIs, edge apps also need standardized APIs to enable important operations like data windowing and mapping, as well as better control plane APIs.

Efforts to standardize edge hardware are only in their infancy. Chipmakers such as Intel and Nvidia will play the biggest role here. Edge devices need to be low-power and cost-effective. It’s the chip technology, which provides the computing and storage capabilities, that determines how energy- and cost-efficient these devices can be. Intel and Nvidia’s design decisions will ultimately influence how device manufacturers build edge products. Eventually, these types of manufacturers will need to work together to help standardize certain components.

Conclusion

These standardization challenges will eventually be solved. Organizations have realized edge computing’s tremendous value. And they’re recognizing that to fulfill its potential, the edge is where innovative new technology, such as cloud native tech, needs to be developed and deployed. In addition, over the past 15-20 years, leading software and hardware vendors have made major progress supporting highly distributed systems and have found a way to make loosely coupled devices work well together. The industry will leverage this expertise, along with a rapidly maturing Kubernetes ecosystem, to bring cloud native apps to the edge.

Scalable S3-Compatible Storage On-Prem for AWS Outposts

AWS Outposts gives you cloud-like services in your data center. Now Cloudian provides AWS-validated S3-compatible storage on-prem to help you do more with Outposts. With Cloudian, you can expand your Outposts use cases to applications where data locality and latency are key.

Jon Toor, CMO, Cloudian



AWS Outposts gives you access to many of the compute services that are offered in the public cloud, meaning that applications written for EC2 can now run in your own data center. However, in use cases that require large-capacity storage on-prem, you have a challenge: a maximum of 96TB of S3-compatible storage is available in the Outposts rack. In capacity-intensive use cases — such as imaging, financial records, data analytics, IoT, or AI/ML — this presents an obstacle.

Cloudian HyperStore overcomes that limitation with exabyte-scalable S3-compatible storage that you can deploy alongside your Outposts. Cloudian storage lets you start small and grow without disruption, so you get the local capacity you need for any use case.

While AWS Outposts can employ AWS S3 in the cloud as a storage repository, this is not an acceptable option in every use case. On-prem storage may be required for several reasons:

  • Data locality: Regulated industries often have specific requirements around data location. With Cloudian, all data is maintained on-prem, within the Cloudian cluster.
  • Access latency: When performance is a concern, the cloud-access latency incurred by a WAN link may be unacceptable. Cloudian puts the data right next to the compute. All-flash and HDD-based storage options let you select the performance level you need.

Applications for On-Prem S3 Compatible Storage

Here are a few examples of industries that can do more with Outposts plus on-prem scalable storage:

  • Financial Services: Meet data governance requirements for data locality. Cloudian is already deployed at some of the world’s largest banks and is certified compliant with SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Rule 1.31(c)-(d). If desired, Cloudian also offers immutable data storage for compliance, governance, or legal hold. These applications are now possible for AWS Outposts users as well.
  • Healthcare: Regulatory-compliant local storage for the ever-growing volumes of patient health records, scans, MRIs, and x-rays. Cloudian allows Outposts users to scale to an exabyte to meet growing requirements.
  • M&E & Imaging: Scalable capacity for all unstructured data types. Cloudian’s parallel processing and S3-compatible multi-part upload ensure fast streaming performance.
  • Telco & Public Sector: Secure, compliant and cost-effective storage for data collection and S3-compatible storage services.

Outposts Ready Designation

Cloudian achieved Outposts Ready designation as part of the AWS Service Ready Program, which means that Outposts users can now expand their Outposts environments with Cloudian as it integrates seamlessly with applications that employ the AWS S3 protocol.

To learn how you can do more with Cloudian and AWS Outposts, please visit cloudian.com/aws.

A Streaming Feature Store Based on Flink and the AWS SageMaker Feature Store API


Gary Ogasawara, CTO, Cloudian


The Edge Needs a Streaming Feature Store

The next wave of digital transformation is a synthesis of on-premise and cloud workloads where the same compute and storage cloud services are also available on-premise, in particular at the “edge” — near or at the location where the data is generated.  For this edge-cloud synthesis to work, the cloud services — specifically programming APIs — must be available at the edge without requiring cloud access.  For example, popular AWS services such as S3 for data storage and Lambda for serverless computing also need to work at the edge independently of the cloud.

Some APIs are more useful to have at the edge due to the specific nature of the edge, where raw data is generated and real-time decisions are needed.  A “feature store” that rapidly stores data in the form of records composed of multiple features is an example of API functionality that is needed at the edge.  There are various descriptions of a feature store (e.g., AI Wiki), but it’s essentially a shared data store holding records of features, used for analysis and machine learning.  For example, the records can be used to train and validate a deep learning model.  Once the records are in the feature store, they can be archived and queried by different teams for tasks like machine learning, data exploration, and other analysis such as summarization.

SageMaker Feature Store provides feature store functionality for the AWS cloud, but users also want the same capabilities at the edge to ingest raw data and use it to make real-time decisions without needing to make a round-trip to the cloud.

Another property of edge data is that it is often continuously generated as a stream of data.  Ingesting the data stream can be done with a feature store API, and then users can transform the continuous data stream into metadata usable for analysis.  Streaming software like Apache Flink can first partition and window the streaming data, then apply transformations including aggregation functions (e.g., average), filtering, and map functions.

Cloudian Streaming Feature Store (SFS)

Cloudian has developed the Streaming Feature Store (SFS) that implements the SageMaker Feature Store API, adds data stream processing functionality with Flink, and is deployed as Kubernetes-managed software at the edge.

Figure 1: Major components of Streaming Feature Store (SFS).


Feature Store API

The SageMaker FeatureStore API has two parts: Feature Group API and Record API.  The Feature Group API manages feature groups that are logical groupings of features.  Each feature is defined by a name (String) and a type (String, Integral, Fractional) — e.g., “name:Age, type:Integral”.  Each feature group must define a feature that is the unique identifier for the feature group and a feature that is the event time of the feature group used to determine the time ordering of the records.  The Record API is a REST API for PUT/GET/DELETE record where a record is a single instance of a feature group.

Example Feature Group “TemperatureSensor”:

FeatureName   FeatureType   Notes
sensor_id     String        Unique identifier
val           String
time          String        Event time
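The feature group above maps directly onto a CreateFeatureGroup request. The sketch below mirrors the field names of the AWS SageMaker CreateFeatureGroup API; note that on AWS additional parameters (such as a role ARN and an offline-store S3 location) are also required, and the client/endpoint wiring shown in the comment is an assumption about an SFS deployment.

```python
# Definition of the example feature group, using the field names of the
# SageMaker CreateFeatureGroup API.
TEMPERATURE_SENSOR = {
    "FeatureGroupName": "TemperatureSensor",
    "RecordIdentifierFeatureName": "sensor_id",  # unique-identifier feature
    "EventTimeFeatureName": "time",              # event-time feature
    "FeatureDefinitions": [
        {"FeatureName": "sensor_id", "FeatureType": "String"},
        {"FeatureName": "val", "FeatureType": "String"},
        {"FeatureName": "time", "FeatureType": "String"},
    ],
}

def create_group(sagemaker_client, definition=TEMPERATURE_SENSOR):
    # e.g. sagemaker_client = boto3.client("sagemaker", endpoint_url=<SFS endpoint>)
    return sagemaker_client.create_feature_group(**definition)
```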

Example Record:

FeatureName   ValueAsString
sensor_id     sensor324
val           28.4
time          2022-10-29T09:38:41Z
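The example record above is written with the PutRecord call of the SageMaker Feature Store Runtime API. A minimal sketch (the `to_record` helper is illustrative, and the client/endpoint in the comment is an assumption about an SFS deployment):

```python
def to_record(features):
    """Convert a {feature_name: value} dict into the FeatureName/ValueAsString
    list format that the PutRecord API expects."""
    return [{"FeatureName": name, "ValueAsString": str(value)}
            for name, value in features.items()]

record = to_record({
    "sensor_id": "sensor324",
    "val": 28.4,
    "time": "2022-10-29T09:38:41Z",
})

# With a runtime client (e.g. boto3.client("sagemaker-featurestore-runtime",
# endpoint_url=<SFS endpoint>)), the record would be written with:
#   runtime.put_record(FeatureGroupName="TemperatureSensor", Record=record)
```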

As depicted in Figure 1, SFS has both a key-value database system and an object store. These two types of storage systems are used to implement the online store and offline store of the SageMaker FeatureStore.

The online store is a low-latency store for real-time lookup of records, storing the latest (by event time) record for each feature group.  The low query latency makes it suitable for real-time analysis.  For example, when a point-of-sale device processes a credit card transaction, real-time analysis using a trained deep learning model predicts whether the transaction is fraudulent.  This type of decision-making must happen in real time; a delayed fraud-or-not decision is unusable at the point of sale.
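Such a real-time lookup corresponds to the GetRecord call of the SageMaker Feature Store Runtime API. A sketch, assuming a boto3-style runtime client pointed at the feature store endpoint and reusing the TemperatureSensor example:

```python
def latest_record(runtime, feature_group, record_id):
    """Low-latency online-store lookup of the most recent record for one
    unique identifier, using the SageMaker FeatureStore GetRecord call shape."""
    resp = runtime.get_record(
        FeatureGroupName=feature_group,
        RecordIdentifierValueAsString=record_id,
    )
    # The response carries the latest record as FeatureName/ValueAsString pairs.
    return {f["FeatureName"]: f["ValueAsString"] for f in resp.get("Record", [])}
```

With a real client this would be `boto3.client("sagemaker-featurestore-runtime", endpoint_url=...)`, where the endpoint is the edge deployment rather than the AWS cloud.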

The offline store is a high-capacity S3 object store where all data is versioned, thereby making the data available for point-in-time queries.  As record data streams in, the older records for the same unique identifier are automatically migrated from the online store to the offline store.  The records are stored in Parquet format for space-efficient as well as performant columnar storage.  The offline store is implemented using an S3 API compatible object store, Cloudian’s HyperStore. This provides a horizontally scalable system with the full power of the S3 API. By configuration of the S3 endpoint and credentials, any fully S3-compatible object store can be used.


Stream Processing

As mentioned earlier, a feature store for edge applications needs to operate with continuously streaming data like sensors emitting readings, surveillance cameras with live video, and manufacturing processes detecting bad components.  To make it a “streaming” feature store, it must have the capability to analyze continuous streams of data.

Stream processing within SFS is done using Flink’s DataStream API.  As new records are added via the PUT record API, a data stream transformation (e.g., filtering, defining windows, aggregating) is optionally executed, with the transformed data written to a data sink: an S3 bucket, a file, or another Feature Group.  By writing the transformed data to a Feature Group, multiple transformations can be chained together in a pipeline.  For example, raw transaction data can be aggregated into a 10-minute summary statistics Feature Group, which can then be fed into a 24-hour summary statistics Feature Group.

Windows are used to split the data stream into groups of finite size which can then be processed.  Flink defines multiple window types, but currently SFS implements only SlidingProcessingTimeWindows, which has size and slide parameters used to create sliding windows of a fixed size (e.g., 1 minute) every slide period (e.g., 10 seconds).  Below is an example of a Feature Group to hold aggregated data collected for each sliding window.

Example Feature Group “AggregatedTemp60”:

FeatureName   FeatureType   Notes
key           String        Unique identifier
start         String        Event time
count         String        The number of records in the window
max           String        The maximum value in the window
min           String        The minimum value in the window
sum           String        The sum of the values in the window
duration      String        The time duration of the window
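To illustrate what a sliding window computes, here is a plain-Python sketch of the aggregation. This is illustrative only, not Flink code: Flink’s SlidingProcessingTimeWindows windows on processing time, while the sketch below windows over supplied timestamps.

```python
from datetime import datetime, timedelta

def sliding_aggregates(events, size_s=60, slide_s=10):
    """events: list of (timestamp: datetime, value: float).
    Emits one AggregatedTemp60-style record per sliding window of
    size_s seconds, with a new window starting every slide_s seconds."""
    if not events:
        return []
    first = min(t for t, _ in events)
    last = max(t for t, _ in events)
    out = []
    start = first
    while start <= last:
        # Collect the values falling inside this window.
        window = [v for t, v in events
                  if start <= t < start + timedelta(seconds=size_s)]
        if window:
            out.append({
                "start": start.isoformat(),
                "count": len(window),
                "max": max(window),
                "min": min(window),
                "sum": sum(window),
                "duration": size_s,
            })
        start += timedelta(seconds=slide_s)
    return out
```

Because windows overlap (size 60 s, slide 10 s), each event can contribute to several consecutive aggregate records, which is exactly what makes sliding windows useful for smooth trend statistics.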


Infrastructure to Enable the Edge

I have attempted to show how a streaming feature store is a useful component for edge applications, where streaming data can be rapidly ingested and then analyzed in order to make real-time decisions.  SFS is part of Cloudian’s HyperStore Analytics Platform (HAP), a Kubernetes-managed infrastructure for compute and storage at the edge.  Using built-in Kubernetes functions, HAP schedules Pods flexibly and dynamically based on available resources (CPU/GPU/RAM/disk).  Other HAP components include an S3-compatible object store, a scalable distributed S3 SELECT processor, and an S3 Object Lambda processor.

How to Set up HyperStore S3 Object Storage on VMware Cloud Foundation with Tanzu


Gary Ogasawara, CTO, Cloudian


We recently announced general availability of Cloudian’s HyperStore object storage on VMware Cloud Foundation with Tanzu.  HyperStore® is S3-compatible object storage, which now integrates with VMware’s vSAN Data Persistence platform to provide a containerized version of HyperStore managed by Kubernetes.  In this blog, I describe the steps to set up HyperStore in this environment so that apps can consume S3 object storage.  The setup can use either kubectl command-line operations or the vSphere Client User Interface (VC UI).  Here we focus on using the VC UI.  As a prerequisite, we assume VMware Cloud Foundation with Tanzu, including the vSAN Data Persistence platform, is installed.  The vSAN Data Persistence platform enables HyperStore to use vSAN storage with a shared-nothing architecture (SNA), where data durability and availability are managed by HyperStore instead of vSAN.

Enabling HyperStore

As a virtual infrastructure (VI) admin you can enable HyperStore in the VC UI Supervisor Services section. Enabling HyperStore triggers the creation of a new Kubernetes Namespace for HyperStore and the download and creation of a HyperStore Operator Pod and a HyperStore UI Plugin Pod.  It also creates two vSAN storage policies (vSAN SNA, vSAN Direct) for HyperStore if those vSAN resources are available. Because these vSAN storage policies do not do data replication and rebuild, they are a good fit with software-defined storage software like HyperStore that itself manages data replication and rebuild.

In the VC UI, from the Workload-Cluster, select Configure → Supervisor Services → Services, select Cloudian HyperStore from the list of available services, and click ENABLE.

[Screenshot: configuring the workload cluster]

This brings up the Enable Supervisor Service screen where you can set the HyperStore Operator version and other parameters.

[Screenshot: the Enable Supervisor Service screen]

For the “Version” field pull-down menu, select “v1.0.0” or a later version. If you want to use a custom Docker image repository, then set the parameters for the Repository endpoint, Username, and Password.  The images must have previously been stored in this repository.  This method is how an air-gapped installation can be done.  If the repository endpoint is not set, then the default is to use https://quay.io as the registry where the images are available.

Optionally, custom parameters can be added by setting the Key-Value pairs under “Advanced settings”.  HyperStore supports timeout parameters before starting certain rebuild actions.  The above figure shows a custom parameter “rebuildTimerEMM.”  Details about the custom parameters can be found in the documentation, but for a standard installation, they can be left unspecified.

After the parameters screen, the Cloudian End-User License Agreement (EULA) URL is displayed.  Click through and read the agreement carefully before selecting the checkbox to accept the terms of the license agreement and clicking “FINISH”.

[Screenshot: the EULA acceptance screen]

HyperStore is now enabled.  This creates a new Kubernetes namespace prefixed with “hyperstore-domain-” and starts a HyperStore Operator Pod and UI Plugin Pod in that namespace.  In the picture below, the left pane shows the newly created namespace “hyperstore-domain-c8” and the Operator and UI Plugin Pods.

[Screenshot: the workload cluster with the new namespace and Pods]

The HyperStore Operator Pod uses the Operator SDK to manage HyperStore using Kubernetes principles, notably a control loop to reconcile desired and current states.

The UI Plugin Pod implements a VC UI plugin to configure and monitor HyperStore.  For example, the UI Plugin is used to configure and create a new HyperStore Instance.

Creation of a HyperStore Instance

A HyperStore Instance is a StatefulSet of Pods that store data using replication and/or erasure coding, providing object storage capabilities with an S3 API.  Using VC UI and the underlying Kubernetes infrastructure, a new HyperStore Instance can be created simply.

Role-based access control (RBAC) is enforced at the Kubernetes Namespace level.  In VC UI, a Namespace can be created and then configured for permissions and storage policies.  The “edit” permission in the Namespace is required for a user to create a HyperStore Instance in that Namespace.  For storage policies, two HyperStore-specific storage policies are available for vSAN Direct and vSAN SNA policies.

[Screenshot: the HyperStore-specific storage policies]

A new HyperStore Instance is created by using the VC UI under the VC cluster Configure → Cloudian HyperStore → Overview, and then clicking “NEW INSTANCE”.

[Screenshot: the New Instance dialog]

As an alternative to the UI Plugin, a new HyperStore Instance can be created by running kubectl apply on a Custom Resource (CR) file that specifies the configuration parameters, such as the apiVersion and memory requirements.
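For reference, here is a hypothetical sketch of such a CR file. The API group/version, kind, and field names below are illustrative guesses, not the documented HyperStore schema; consult the Cloudian documentation for the real parameters.

```yaml
# Hypothetical HyperStore Custom Resource -- all names below are illustrative.
apiVersion: hyperstore.cloudian.com/v1alpha1   # assumed API group/version
kind: HyperStore
metadata:
  name: hyperstore-demo
  namespace: demo-namespace        # a Namespace where you have "edit" permission
spec:
  nodes: 3                         # number of Pods in the StatefulSet
  storagePolicyName: vsan-sna      # vSAN SNA or vSAN Direct storage policy
  memoryRequire: 16Gi              # per-Pod memory request
```

The file would then be applied with `kubectl apply -f hyperstore.yaml`, after which the Operator reconciles the cluster toward the declared state.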
After entering the parameters and clicking “OK”, a new HyperStore Instance is created in the specified Namespace with a limited and temporary HyperStore license. The HyperStore image is downloaded from the image registry and used in a new HyperStore Pod.  Once the image is downloaded and started, the Pod transitions from Pending to Running status, and the HyperStore installation process starts to create and configure the additional Pods in the StatefulSet.

One function of the HyperStore Operator is to report on the cluster health status that the VC UI monitoring uses.  During the initial installation, the health status is RED.  When the instance’s health status changes to GREEN, the HyperStore Instance is ready for S3 traffic.

[Screenshot: HyperStore Instance health status]

The vSAN Data Persistence platform on VMware Cloud Foundation with Tanzu provides a powerful framework to deploy and manage HyperStore S3 object storage. As a foundation, Kubernetes provides functions like auto-scaling, resource scheduling, and role-based access control.  Layering on VMware’s vSAN Data Persistence platform enables efficient use of vSAN storage with management functions like maintenance mode and health monitoring.  The result is an environment for apps managed within VMware Cloud Foundation with Tanzu where HyperStore S3 object storage can be created and monitored from the VC UI, a convenient “single pane of glass.”

To see a demo of this new combined solution, go to cloudian.com/resource/demos-and-videos/demo-vmware-vsan/.

To learn more about Cloudian solutions for VMware environments, go to cloudian.com/vmware/.

VMware Cloud Director and Cloudian: A Closer Look at the Integration

Cloudian and VMware now offer an integrated solution that provides a seamless experience for all vCloud Director service providers and their customers/tenants to leverage Cloudian HyperStore object storage.


Read the overview

Read the datasheet

View the VMware lightboard video

The integrated solution for the first time brings S3 API support and Cloudian object storage to VMware vCloud Director environments.

The solution combines the power of:

  • VMware Cloud Director — a leading cloud service-delivery platform used by thousands of cloud providers to operate and manage successful cloud-service businesses
  • Cloudian HyperStore — an S3 API-based, infinitely scalable, durable and multi-tenant cloud object storage platform used by customers worldwide to address their ever-growing storage capacity needs

Now, cloud providers can deliver new S3-compatible storage and other high-value services to enterprises and IT teams across the world.

Under the Hood

So let’s dig a little deeper to better understand what this partnership and integrated solution offer. Every IT team has cloud on its mind, and with vCloud Director, VMware is leading the charge by powering a network of thousands of cloud providers who guide their customers’ journey from on-premises to private cloud, hybrid cloud, or even multi-cloud rollouts.

What was missing was a scalable, cost-effective storage layer. This is now addressed with the release of Object Storage Extension (OSE) and the integration of Cloudian HyperStore with VMware Cloud Director. The VMware Cloud Director admin can install OSE — just like they would install any other extension — which allows them to integrate and manage Cloudian HyperStore via the VMware Cloud Director admin portal. The VMware Cloud Director admin can also leverage SSO to sign on to the Cloudian management console to set up and configure a Cloudian HyperStore cluster.

VMware Cloud Director creates virtual data centers with elastic pools of cloud resources that are seamless to provision and easy to consume. It creates a fluid hybrid cloud fabric between on-premises infrastructure and a cloud service provider, offering a best-in-class private/hybrid cloud with on-demand elasticity, streamlined on-ramp, native security, and hybridity.

Deep Integration for Seamless Management

This integration is not just about offering S3 API-based storage. It’s fully integrated management. Now, a VMware Cloud Director admin can centrally manage, monitor and consume Cloudian HyperStore just like they would any other storage resource, such as vSAN. This integration covers three areas:

  1. Data APIs: S3 APIs have become the de facto language of cloud storage. Cloudian has a fully native implementation of the S3 API, making it the industry’s most compliant S3 API solution. This is key because a service provider building services on the S3 API needs to support all of the S3 API operations, such as multipart upload (MPU), Signature Version 4 (SigV4), and tagging. Cloud service providers don’t have visibility into customers’ applications and which S3 API calls they use, so failing to support certain calls results in poor customer satisfaction and higher support costs, thereby impacting profit. Cloudian offers the highest S3 API support, ensuring the best customer experience.
  2. Object Storage Features: VMware Cloud Director is a multi-tenant framework, a key component of the VMware Cloud Provider platform. So, for a storage solution to fit seamlessly into that framework, it must be securely sharable and limitlessly scalable. Cloudian is a scale-out platform that offers multi-tenancy, QoS, geo-distribution, a global namespace, and integrated billing and reporting. It is cloud provider-ready.
  3. Control Plane APIs: Most important are the control plane APIs that allow the VMware Cloud Director admin to seamlessly manage, operate, and report from a central VMware Cloud Director portal. They also allow VMware Cloud Director tenants to self-service their environment: create users and buckets, assign policies, and produce reports at a granular level.

With these, cloud providers can deploy and manage profitable, high-value services in use cases such as:

  • Storage-as-a-Service (STaaS)
  • Backup-as-a-Service (BaaS)
  • Archive-as-a-Service (AaaS)
  • Disaster-Recovery-as-a-Service (DRaaS)
  • Big Data-as-a-Service (BDaaS)
  • Containers-as-a-Service (CaaS)
  • Software Test/Dev


S3-Compatible Storage for VMware Cloud Director

We’re excited to announce Cloudian Object Storage for VMware Cloud Director, an integrated storage platform that enables VMware Cloud Providers and their customers to deploy, manage and consume S3-compatible storage within their services environment.

Read the datasheet

View the demo

View the VMware lightboard video

Read about the integration

Scalable, Cost-Effective Storage for Unstructured Data

This new offering does for unstructured data – such as images and files – what vSAN does for structured data: it provides an integrated S3-compatible storage solution that is provisioned and managed within the VCD framework.


Furthermore, Object Storage for VMware Cloud Director enables a limitlessly scalable storage pool, where up to an exabyte of data can be managed within a single namespace, and at far less cost than other storage types.

VMware Cloud Director Integration

Jointly engineered by VMware and Cloudian, the solution consists of two elements:

  • VMware Cloud Director Object Storage Extension: Object Storage middleware in VMware Cloud Director that is extensible and provides the storage management framework.
  • Cloudian Object Storage: The storage layer that provides the S3-compatible storage environment.

As with vSAN, object storage is seamlessly integrated within the management environment.


The Simple Path to New, High-Value Add Services

For VMware Cloud Providers, this platform opens the door to new service revenue streams in use cases such as:

  • Storage-as-a-service
  • Backup-as-a-service
  • Archive-as-a-service
  • Container storage services, with VMware PKS

Furthermore, a growing ecosystem of S3-compatible applications creates many other service options. Whether in big data, healthcare, media & entertainment, video surveillance, or other fields, a scalable, S3-compatible platform gives CSPs new opportunities to build differentiated service offerings.

Fully S3-Compatible Storage Platform

Designed exclusively to support the S3 API, Cloudian Object Storage features a native S3 API implementation and offers the industry's best S3 compatibility. This makes it an ideal platform for S3-compatible service offerings and software development.
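Because the interface is the standard S3 API, any S3 SDK can target a Cloudian cluster simply by overriding the endpoint URL. The sketch below uses boto3 to illustrate this; the endpoint address, credentials, and bucket name are placeholders, not real Cloudian defaults.

```python
# Minimal sketch: pointing a standard S3 SDK at an S3-compatible endpoint.
# The endpoint URL, credentials, and bucket name below are hypothetical.

def s3_client_kwargs(endpoint_url, access_key, secret_key):
    """Build the arguments for an S3 client aimed at a non-AWS,
    S3-compatible endpoint such as a Cloudian cluster."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,  # Cloudian endpoint instead of AWS
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }

# With boto3 installed, an application would then use standard S3 calls:
#   import boto3
#   s3 = boto3.client(**s3_client_kwargs(
#       "https://s3.cloudian.example.com", "ACCESS_KEY", "SECRET_KEY"))
#   s3.create_bucket(Bucket="tenant-app-data")
#   s3.put_object(Bucket="tenant-app-data", Key="hello.txt", Body=b"hello")
```

The only change from stock AWS usage is the `endpoint_url` override, which is why existing S3 applications can move to the platform without code changes.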

Storage Management via VMware Cloud Director

All commonly-used storage management functions are accessible via VMware Cloud Director. Create users and groups, provision storage, set policies, and monitor usage, all without leaving the VCD UI. This eliminates the console-hopping that saps productivity and allows management tasks to be automated within the VCD framework.


Self-Service for Cloud Providers' Tenants and Users

On the customer side, users gain a self-service portal that lets them accomplish storage management tasks on their own via VCD. For the cloud provider, this translates to increased productivity and higher customer satisfaction.

Deployment Options

Cloud providers have two deployment options (both are managed via VMware Cloud Director):

Software-Defined Storage: Deploy Cloudian software on your existing VMware compute and storage platform and leverage the storage you already have. Storage appears as a scalable S3-compatible storage pool. A utility-based pricing model lets you license Cloudian software for just the object storage capacity in use. (This option will be available Summer 2019.)

Appliance: Deploy as a pre-configured storage appliance from Cloudian. Start small and seamlessly scale to an exabyte without interruption. (Available July 2019)


Example Workflow

From end to end, cloud providers and their clients can manage entire workflows via VMware Cloud Director. Consider this Backup-as-a-Service offering: a service provider can configure the storage target (Cloudian Object Storage), configure the backup software, and create new tenant users, all from a single VCD screen. The tenant can then create and schedule backup jobs, monitor progress, and perform restores, also through VCD.
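At the storage layer, such a backup job reduces to standard S3 operations against the Cloudian target. The sketch below, with hypothetical tenant, job, and bucket names, shows one way backup software might lay out objects so that restores and per-tenant policies can target a common prefix.

```python
from datetime import datetime, timezone

def backup_object_key(tenant, job, when=None):
    """Lay out backup objects per tenant and job so that restores and
    lifecycle policies can target them at the prefix level.
    (Tenant/job naming here is illustrative, not a Cloudian convention.)"""
    when = when or datetime.now(timezone.utc)
    return f"{tenant}/{job}/{when:%Y-%m-%dT%H%M%SZ}.tar.gz"

# With an S3 client (e.g. boto3) configured for the Cloudian endpoint,
# the backup software would then upload and restore with standard calls:
#   key = backup_object_key("acme", "nightly-db")
#   s3.upload_file("db.tar.gz", "backups", key)
#   s3.download_file("backups", key, "restore.tar.gz")
```

Keying objects by tenant and job prefix also makes per-tenant reporting and quota enforcement straightforward, since both can be scoped to a prefix.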


Free up Space From VCD Datastores

For the service provider, this platform can also increase storage efficiency by offloading vApps not currently in use, thus freeing up storage space from VMware Cloud Director datastores. When required, restore the vApp back in the datastore for continued use.

Ideal Feature Set for Service Providers

Built for service providers, the Cloudian platform includes the full range of features needed to build and manage a profitable services business:

  • Multi-tenant Resource Pooling: Create isolated, secure storage pools within a shared storage platform. Customers have independent role-based authentication and fine-grained access controls.
  • Geo-Distribution and Cloud Migration: Policy-based tools enable simple, secure storage migration and management across sites for disaster recovery and storage resource optimization, all within a single namespace.
  • Integrated Management: Manage commonly used storage functions, such as reporting and configuration of users and groups, with access provided from within the VMware Cloud Director user interface. For advanced functions, a single sign-on provides seamless access to the Cloudian user interface.
  • Quality of Service: Manage service level agreements with bandwidth controls to ensure a consistent customer experience.
  • Billing: Generate client billing information using selected usage parameters.
  • Modular Scalability: Start small and grow without interruption to an exabyte within a single namespace.
  • Data Durability up to 14 Nines: Deployment options, including erasure coding and data replication, allow for configurable data durability up to 99.999999999999%.
  • Data Security and Compliance: Data is secured with AES-256 server-side encryption for data stored at rest and SSL for data in transit (HTTPS). WORM and audit trail logging are provided for compliance.
  • Granular Storage Management: Manage data protection and security at the bucket level to tailor capabilities for specific users.
  • Self-service Management: Role-based access controls allow customers to select and provision storage on demand from a service catalog via a self-service portal.

General availability of Cloudian Object Storage for VMware Cloud Director is July 2019. We’re looking forward to helping our cloud provider partners and their customers build new business opportunities to capitalize on the growing ecosystem of S3-compatible applications!
