Bringing Cloud-Native Applications to the Edge

With the help of the Cloud Native Computing Foundation, enterprises have made major progress in adopting cloud native technologies in public cloud, private cloud, and hybrid cloud environments. The next step is to bring these technologies to the edge. Just like public, private, and hybrid cloud environments, the edge will benefit tremendously from the better portability, improved agility, expedited app development cycles, and minimized vendor lock-in that cloud native adoption delivers. Before this can happen, however, the industry must first overcome a major challenge: the lack of standardization.

In the article below, recently published in The New Stack, I discuss the benefits of cloud-native applications for edge use cases and the challenge that lack of standardization poses to broader adoption.

The Challenge of Bringing Cloud Native Apps to the Edge


How Cloud Native Apps Benefit Edge Use Cases

App portability and agility are perhaps the biggest advantages of cloud native technology. Using Kubernetes and its resource-aware scheduling and abstraction of the underlying operating system and hardware, a software developer can approach the goal of creating an app once and then running it anywhere.

This flexibility is extremely valuable for all kinds of different edge use cases. Consider a common example: video surveillance. Imagine a security camera monitoring an electrical substation. The camera is continually collecting raw video at this edge endpoint. With massive volumes of streaming data being generated, the IT team naturally needs an app to filter out all the unimportant footage (when no motion is occurring or if there’s insignificant motion, such as a plane flying in the distant sky) and send only the meaningful footage (when significant motion is occurring, such as a person approaching the substation) to a central cloud or hub for human review.

In this case, a single cloud native app can be run at both the edge and the cloud. Furthermore, it can be used for different content transformation purposes in each location. At the edge, the app’s machine learning capabilities perform content filtering and send only important footage to the cloud. In the cloud, humans use the app’s analytics capabilities to perform additional editing and post-processing on that video footage to determine whether suspicious activity is occurring and help identify people when necessary.

With a cloud native app, certain aspects can be run at the edge and other aspects can be run in the cloud. Similarly, the same app can be deployed at many different edge endpoints, and in each location, the app can be employed differently. For example, the content transformation app could have unique motion sensitivity settings at every surveillance camera, changing which video footage is filtered from each device.
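As a toy sketch of this idea (plain Python with made-up frame data; the `sensitivity` setting is a hypothetical per-endpoint parameter, and a real deployment would use an ML model rather than simple pixel differencing):

```python
# Hypothetical per-camera motion filter: frames are modeled as 2D grayscale
# arrays, and "sensitivity" stands in for a per-endpoint app setting.

def motion_score(prev_frame, frame):
    # Mean absolute pixel difference between two consecutive frames.
    total = sum(abs(a - b)
                for prev_row, row in zip(prev_frame, frame)
                for a, b in zip(prev_row, row))
    return total / (len(frame) * len(frame[0]))

def should_upload(prev_frame, frame, sensitivity):
    # Send footage to the central cloud only when motion exceeds
    # this camera's threshold.
    return motion_score(prev_frame, frame) > sensitivity

still = [[10, 10], [10, 10]]
distant_plane = [[10, 11], [10, 10]]    # insignificant motion
person_nearby = [[10, 90], [80, 10]]    # significant motion
```

Tuning the threshold per endpoint is exactly what lets one app behave differently at every camera.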

Standardization

There’s a major hurdle preventing cloud native apps from being widely deployed at the edge today. CNCF projects are all open source — and open source approaches will be challenging to implement at the edge. Why? For open source projects to succeed, they require standardization of both software and hardware.

However, as it stands today, there isn’t much standardization in either edge software or hardware, particularly the latter. Just look at how fractured the current hardware market is: leading home edge devices from Amazon, Apple, and Google all employ their own standards and offer little interoperability. For cloud native technology to work at the edge, therefore, the industry must focus its efforts on achieving broad software and hardware standardization.

When it comes to edge deployments, the software players are certainly further along in these efforts than their hardware counterparts. While there’s still much to be done, vendors such as IBM/Red Hat and SUSE/Rancher Labs have led the way in driving early edge software standardization, including work on operating systems and APIs. This isn’t surprising, as the same organizations have also recently been at the front of the pack in promoting on-prem Kubernetes and cloud native adoption.

API standardization is an especially important piece of the puzzle for software. High-quality, standardized APIs are key to supporting certain functionalities in any computing environment. For instance, Amazon Web Services’ S3 API, standardized by Amazon, provides limitless scalability for object storage deployments (and it works well on-prem, in the public cloud, and at the edge).

There are existing storage and networking APIs, currently used in highly distributed computing environments, that can effectively be extended to the edge. As mentioned above, the S3 API is one. Still, the industry must standardize many more APIs to support very specialized functions that are unique to the edge.

Streaming APIs are among the most critical. An autonomous vehicle’s steering software has to make life-or-death decisions in real time. While there are existing streaming APIs that can be applied to edge use cases like video surveillance, more robust streaming APIs must be developed and standardized to support next-generation applications such as those for self-driving cars. In addition to streaming APIs, edge apps also need standardized APIs to enable important operations like data windowing and mapping, as well as better control plane APIs.

Efforts to standardize edge hardware are only in their infancy. Chipmakers such as Intel and Nvidia will play the biggest role here. Edge devices need to be low-power and cost-effective. It’s the chip technology, which provides the computing and storage capabilities, that determines how energy- and cost-efficient these devices can be. Intel and Nvidia’s design decisions will ultimately influence how device manufacturers build edge products. Eventually, these types of manufacturers will need to work together to help standardize certain components.

Conclusion

These standardization challenges will eventually be solved. Organizations have realized edge computing’s tremendous value. And they’re recognizing that to fulfill its potential, the edge is where innovative new technology, such as cloud native tech, needs to be developed and deployed. In addition, over the past 15-20 years, leading software and hardware vendors have made major progress supporting highly distributed systems and have found a way to make loosely coupled devices work well together. The industry will leverage this expertise, along with a rapidly maturing Kubernetes ecosystem, to bring cloud native apps to the edge.

 

 

Gary Ogasawara, CTO, Cloudian


New Solution with VMware Tanzu Greenplum Data Warehouse

Cloudian is expanding its collaboration with VMware with a new solution combining Cloudian HyperStore with VMware Tanzu Greenplum, a massively parallel data warehouse platform for enterprise analytics, at scale.

Integrating Cloudian enterprise-grade object storage with VMware Tanzu Greenplum enables new efficiencies and savings for Greenplum users while also supporting the creation and deployment of petabyte-scale advanced analytics models for complex enterprise applications. This is especially timely: the amount of data consumed and generated by enterprises is accelerating at an unprecedented pace, and these applications need to capture, store, and analyze data rapidly and at scale.

Greenplum Tanzu Cloudian Diagram

Whether your analytics models use traditional enterprise database data; log and security data; web, mobile, and clickstream data; video and voice data; IoT data; or JSON, XML, geo, and graph data, the need for a modern data analytics platform that is affordable, manageable, and scalable has never been greater.

Cloudian HyperStore, with its native S3 API and limitless scalability, is simple to deploy and easy to use with VMware Tanzu Greenplum. HyperStore storage supports the need for data security, multi-cluster deployments, and geo-distributed architectures across multiple use cases:

  • Storing database backups
  • Staging files for loading and unloading file data
  • Enabling federated queries via VMware Tanzu Greenplum Extension Framework (PXF)


Learn more about this new solution here, and see it in the Greenplum Partner Marketplace.

See how Cloudian and VMware are collaborating: https://cloudian.com/vmware

Learn more about Cloudian® HyperStore®

Scalable S3-Compatible Storage On-Prem for AWS Outposts

AWS Outposts gives you cloud-like services in your data center. Now Cloudian provides AWS-validated S3-compatible storage on-prem to help you do more with Outposts. With Cloudian, you can expand your Outposts use cases to applications where data locality and latency are key.

Jon Toor, CMO, Cloudian



AWS Outposts gives you access to many of the compute services that are offered in the public cloud, meaning that applications written for EC2 can now run in your own data center. However, in use cases that require large-capacity storage on-prem, you have a challenge: a maximum of 96TB of S3-compatible storage is available in the Outposts rack. In capacity-intensive use cases — such as imaging, financial records, data analytics, IoT, or AI/ML — this presents an obstacle.

Cloudian HyperStore overcomes that limitation with exabyte-scalable S3-compatible storage that you can deploy alongside your Outposts. Cloudian storage lets you start small and grow without disruption, so you get the local capacity you need for any use case.

While AWS Outposts can employ AWS S3 in the cloud as a storage repository, this is not an acceptable option in every use case. On-prem storage may be required for several reasons:

  • Data locality: Regulated industries often have specific requirements around data location. With Cloudian, all data is maintained on-prem, within the Cloudian cluster.
  • Access latency: When performance is a concern, the cloud-access latency incurred by a WAN link may be unacceptable. Cloudian puts the data right next to the compute. All-flash and HDD-based storage options let you select the performance level you need.

Applications for On-Prem S3 Compatible Storage

Here are a few examples of industries that can do more with Outposts plus on-prem scalable storage:

  • Financial Services: Meet data governance requirements for data locality. Cloudian is already deployed at some of the world’s largest banks and is certified compliant with SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Rule 1.31(c)-(d). If desired, Cloudian offers immutable data storage for compliance, governance, or legal hold. These applications are now possible for AWS Outposts users as well.
  • Healthcare: Regulatory-compliant local storage for the ever-growing volumes of patient health records, scans, MRIs, and x-rays. Cloudian allows Outposts users to scale to an exabyte to meet growing requirements.
  • M&E & Imaging: Scalable capacity for all unstructured data types. Cloudian’s parallel processing and S3-compatible multi-part upload ensure fast streaming performance.
  • Telco & Public Sector: Secure, compliant and cost-effective storage for data collection and S3-compatible storage services.

Outposts Ready Designation

Cloudian achieved the Outposts Ready designation as part of the AWS Service Ready Program. This means Outposts users can now expand their Outposts environments with Cloudian, which integrates seamlessly with applications that employ the AWS S3 protocol.

To learn how you can do more with Cloudian and AWS Outposts, please visit cloudian.com/aws.

A Streaming Feature Store Based on Flink and the AWS SageMaker Feature Store API

The next wave of digital transformation is a synthesis of on-premise and cloud workloads where the same compute and storage cloud services are also available on-premise, in particular at the “edge” — near or at the location where the data is generated. For this edge-cloud synthesis to work, the cloud services — specifically programming APIs — must be available at the edge without requiring cloud access. For example, popular AWS services such as S3 for data storage and Lambda for serverless computing also need to work at the edge independently of the cloud.

The Edge Needs a Streaming Feature Store


Some APIs are more useful to have at the edge due to the specific nature of the edge, where raw data is generated and real-time decisions are needed. A “feature store” that rapidly stores data in the form of records composed of multiple features is an example of API functionality that is needed at the edge. There are various descriptions of a feature store (e.g., AI Wiki), but it’s essentially a shared data store holding records of features, used for analysis and machine learning. For example, the records can be used to train and validate a deep learning model. Once the records are in the feature store, they can be archived and queried by different teams for tasks like machine learning, data exploration, and other analysis such as summarization.

SageMaker Feature Store provides feature store functionality for the AWS cloud, but users also want the same capabilities at the edge to ingest raw data and use it to make real-time decisions without needing to make a round-trip to the cloud.

Another property of edge data is that it is often continuously generated as a stream of data.  Ingesting the data stream can be done with a feature store API, and then users can transform the continuous data stream into metadata usable for analysis.  Streaming software like Apache Flink can first partition and window the streaming data, then apply transformations including aggregation functions (e.g., average), filtering, and map functions.
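The transformations named above can be sketched conceptually in plain Python (an illustration of the idea, not Flink code; Flink's DataStream API provides the same filter/map/window/aggregate operations at scale):

```python
# Conceptual stream pipeline: filter bad readings, map raw values to a feature,
# window into fixed-size batches, then aggregate each window with an average.

def transform(stream, window_size=3):
    window = []
    for reading in stream:
        if reading is None:                  # filter: drop missing readings
            continue
        celsius = (reading - 32) * 5 / 9     # map: Fahrenheit -> Celsius
        window.append(celsius)
        if len(window) == window_size:       # window: fixed-size batch
            yield sum(window) / len(window)  # aggregate: window average
            window = []

readings = [68.0, None, 86.0, 50.0, 95.0, 59.0, 77.0]
averages = list(transform(readings))         # -> [20.0, 25.0]
```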

Cloudian Streaming Feature Store (SFS)

Cloudian has developed the Streaming Feature Store (SFS) that implements the SageMaker Feature Store API, adds data stream processing functionality with Flink, and is deployed as Kubernetes-managed software at the edge.

 

Figure 1: Major components of Streaming Feature Store (SFS).

 

Feature Store API

The SageMaker FeatureStore API has two parts: Feature Group API and Record API.  The Feature Group API manages feature groups that are logical groupings of features.  Each feature is defined by a name (String) and a type (String, Integral, Fractional) — e.g., “name:Age, type:Integral”.  Each feature group must define a feature that is the unique identifier for the feature group and a feature that is the event time of the feature group used to determine the time ordering of the records.  The Record API is a REST API for PUT/GET/DELETE record where a record is a single instance of a feature group.

Example Feature Group “TemperatureSensor”:

FeatureName | FeatureType | Notes
sensor_id | String | Unique identifier
val | String |
time | String | Event time

Example Record:

FeatureName | ValueAsString
sensor_id | sensor324
val | 28.4
time | 2022-10-29T09:38:41Z

As depicted in Figure 1, SFS has both a key-value database system and an object store. These two types of storage systems are used to implement the online store and offline store of the SageMaker FeatureStore.

The online store is a low-latency store for real-time lookup of records, storing the latest (by event time) feature group data record.  The low-latency for queries makes it applicable for real-time analysis.  For example, if there is a point-of-sale device that is processing a credit card transaction, real-time analysis using a trained deep learning model is performed to predict whether the transaction is fraudulent or not.  This type of decision-making must be done in real-time.  Having the fraud-or-not decision be delayed makes it unusable at the point-of-sale.

The offline store is a high-capacity S3 object store where all data is versioned, thereby making the data available for point-in-time queries.  As record data streams in, the older records for the same unique identifier are automatically migrated from the online store to the offline store.  The records are stored in Parquet format for space-efficient as well as performant columnar storage.  The offline store is implemented using an S3 API compatible object store, Cloudian’s HyperStore. This provides a horizontally scalable system with the full power of the S3 API. By configuration of the S3 endpoint and credentials, any fully S3-compatible object store can be used.
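The online/offline split can be reduced to a small sketch, with a dict and a list standing in for the key-value store and the Parquet-on-S3 offline store:

```python
# Minimal model of the two stores: "online" keeps only the latest record per
# unique identifier; superseded records are versioned into "offline".

online = {}    # sensor_id -> latest record (low-latency lookup)
offline = []   # full history of superseded records (point-in-time queries)

def ingest(record):
    key = record["sensor_id"]
    current = online.get(key)
    if current is None:
        online[key] = record
    elif record["time"] >= current["time"]:
        offline.append(current)          # older record migrates offline
        online[key] = record
    else:
        offline.append(record)           # late-arriving record goes straight offline

ingest({"sensor_id": "s1", "val": "27.9", "time": "2022-10-29T09:38:00Z"})
ingest({"sensor_id": "s1", "val": "28.4", "time": "2022-10-29T09:38:41Z"})
```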

 

Stream Processing

As mentioned earlier, a feature store for edge applications needs to operate with continuously streaming data like sensors emitting readings, surveillance cameras with live video, and manufacturing processes detecting bad components.  To make it a “streaming” feature store, it must have the capability to analyze continuous streams of data.

Stream processing within SFS is done using Flink’s DataStream API.  As new records are added via the PUT record API, a data stream transformation (e.g., filtering, defining windows, aggregating) is optionally executed with the transformed data written to a data sink that could be either an S3 bucket, a file, or another Feature Group.  By writing the transformed data to a Feature Group, multiple transformations can be chained together in a pipeline.  For example, raw transaction data can be aggregated into a 10-minute summary statistics Feature Group which can then be fed into a 24-hour summary statistics Feature Group.

Windows are used to split the data stream into groups of finite size which can then be processed.  Flink defines multiple window types, but currently SFS implements only the SlidingProcessingTimeWindows that has size and slide parameters used to create sliding windows of fixed size (e.g., 1 minute) every slide period (e.g., 10 seconds).  Below is an example of a Feature Group to hold aggregated data collected for each sliding window.

Example Feature Group “AggregatedTemp60”:

FeatureName | FeatureType | Notes
key | String | Unique identifier
start | String | Event time
count | String | The number of records in the window
max | String | The max value in the window
min | String | The min value in the window
sum | String | The sum of values in the window
duration | String | The time duration of the window
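A minimal sketch of the size/slide mechanics in plain Python over integer timestamps (Flink's SlidingProcessingTimeWindows uses wall-clock processing time and runs continuously; this just shows how the windows overlap and what gets aggregated):

```python
# Each window covers [start, start + size) seconds, and a new window starts
# every `slide` seconds, so windows overlap when slide < size. The aggregates
# match the AggregatedTemp60 features above: count, max, min, sum, duration.

def sliding_windows(events, size, slide):
    # events: time-ordered list of (timestamp_seconds, value) pairs
    results = []
    start = 0
    while start <= events[-1][0]:
        vals = [v for t, v in events if start <= t < start + size]
        if vals:
            results.append({"start": start, "count": len(vals),
                            "max": max(vals), "min": min(vals),
                            "sum": sum(vals), "duration": size})
        start += slide
    return results

events = [(0, 10.0), (15, 30.0), (40, 20.0), (70, 50.0)]
windows = sliding_windows(events, size=60, slide=30)  # 60s window every 30s
```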

 

Infrastructure to Enable the Edge

I have attempted to show how a streaming feature store is a useful component for edge applications, where streaming data can be rapidly ingested and then analyzed in order to make real-time decisions. SFS is a part of Cloudian’s HyperStore Analytics Platform (HAP), a Kubernetes-managed infrastructure for compute and storage at the edge. Using built-in Kubernetes functions, Pods are scheduled flexibly and dynamically based on available resources (CPU/GPU/RAM/disk). Other HAP components include an S3-compatible object store; a scalable, distributed S3 SELECT processor; and an S3 Object Lambda processor.

 

Gary Ogasawara, Cloudian


 

 

How to Set up HyperStore S3 Object Storage on VMware Cloud Foundation with Tanzu

We recently announced general availability of Cloudian’s HyperStore object storage on VMware Cloud Foundation with Tanzu.  HyperStore® is S3-compatible object storage, which now integrates with VMware’s vSAN Data Persistence platform to provide a containerized version of HyperStore managed by Kubernetes. In this blog, I describe the steps to set up HyperStore in this environment so that apps can consume S3 object storage.  The setup can either use kubectl command-line operations or the vSphere Client User Interface (VC UI).  Here we focus on using the VC UI. 

Gary Ogasawara, CTO, Cloudian


As a prerequisite, we assume VMware Cloud Foundation with Tanzu, including the vSAN Data Persistence platform, is installed. The vSAN Data Persistence platform enables HyperStore to use vSAN storage with a shared-nothing architecture (SNA), where data durability and availability are managed by HyperStore instead of vSAN.

Enabling HyperStore

As a virtual infrastructure (VI) admin, you can enable HyperStore in the VC UI Supervisor Services section. Enabling HyperStore triggers the creation of a new Kubernetes Namespace for HyperStore and the download and creation of a HyperStore Operator Pod and a HyperStore UI Plugin Pod. It also creates two vSAN storage policies (vSAN SNA, vSAN Direct) for HyperStore if those vSAN resources are available. Because these vSAN storage policies do not perform data replication and rebuild, they are a good fit for software-defined storage such as HyperStore, which manages data replication and rebuild itself.

In the VC UI, from the Workload-Cluster, select Configure → Supervisor Services → Services, select Cloudian HyperStore from the list of available services, and click ENABLE.

configure workload cluster

This brings up the Enable Supervisor Service screen where you can set the HyperStore Operator version and other parameters.

enable supervisor service

For the “Version” field pull-down menu, select “v1.0.0” or a later version. If you want to use a custom Docker image repository, then set the parameters for the Repository endpoint, Username, and Password.  The images must have previously been stored in this repository.  This method is how an air-gapped installation can be done.  If the repository endpoint is not set, then the default is to use https://quay.io as the registry where the images are available.

Optionally, custom parameters can be added by setting the Key-Value pairs under “Advanced settings”.  HyperStore supports timeout parameters before starting certain rebuild actions.  The above figure shows a custom parameter “rebuildTimerEMM.”  Details about the custom parameters can be found in the documentation, but for a standard installation, they can be left unspecified.

After the parameters screen, the Cloudian End-User License Agreement (EULA) URL is displayed.  Click through and read the agreement carefully before selecting the checkbox to accept the terms of the license agreement and clicking “FINISH”.

EULA service
HyperStore is now enabled. This creates a new Kubernetes namespace prefixed with “hyperstore-domain-” and starts a HyperStore Operator Pod and UI Plugin Pod in that namespace. In the picture below, in the left pane, you can see the newly created namespace “hyperstore-domain-c8” along with the Operator and UI Plugin Pods.

workload cluster
The HyperStore Operator Pod uses the Operator SDK to manage HyperStore using Kubernetes principles, notably a control loop to reconcile desired and current states.
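The control-loop idea can be sketched as follows (a conceptual illustration of the reconcile pattern, not Cloudian's actual Operator code):

```python
# An operator repeatedly compares desired state (from the Custom Resource) with
# observed cluster state and emits corrective actions until they converge.

def reconcile(desired, observed):
    actions = []
    if observed["pods"] < desired["pods"]:
        actions.append(("scale_up", desired["pods"] - observed["pods"]))
    elif observed["pods"] > desired["pods"]:
        actions.append(("scale_down", observed["pods"] - desired["pods"]))
    if observed["version"] != desired["version"]:
        actions.append(("upgrade", desired["version"]))
    return actions                       # empty list means states match

desired = {"pods": 3, "version": "v1.0.0"}
observed = {"pods": 1, "version": "v0.9.0"}
actions = reconcile(desired, observed)
```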

The UI Plugin Pod implements a VC UI plugin to configure and monitor HyperStore.  For example, the UI Plugin is used to configure and create a new HyperStore Instance.

Creation of a HyperStore Instance

A HyperStore Instance is a StatefulSet of Pods that store data using replication and/or erasure coding, providing object storage capabilities with an S3 API.  Using VC UI and the underlying Kubernetes infrastructure, a new HyperStore Instance can be created simply.

Role-based access control (RBAC) is enforced at the Kubernetes Namespace level.  In VC UI, a Namespace can be created and then configured for permissions and storage policies.  The “edit” permission in the Namespace is required for a user to create a HyperStore Instance in that Namespace.  For storage policies, two HyperStore-specific storage policies are available for vSAN Direct and vSAN SNA policies.

storage policies
A new HyperStore Instance is created by using the VC UI under the VC cluster Configure → Cloudian HyperStore → Overview, and then clicking “NEW INSTANCE”.

new instance storage
As an alternative to the UI Plugin, a new HyperStore Instance can be created by running kubectl apply on a Custom Resource (CR) file that has the configuration parameters to use. Below is an illustrative CR file (the field names and apiVersion group are representative sketches, not the exact schema; consult the HyperStore documentation for the precise format):

apiVersion: hyperstore.cloudian.com/v1alpha1
kind: HyperStore
metadata:
  name: demo-hyperstore
  namespace: my-namespace
spec:
  replicas: 3
  memoryRequire: 16Gi
  storagePolicyName: vsan-sna
After entering the parameters and clicking “OK”, a new HyperStore Instance is created in the specified Namespace with a limited and temporary HyperStore license. The HyperStore image is downloaded from the image registry and used in a new HyperStore Pod.  Once the image is downloaded and started, the Pod transitions from Pending to Running status, and the HyperStore installation process starts to create and configure the additional Pods in the StatefulSet.

One function of the HyperStore Operator is to report on the cluster health status that the VC UI monitoring uses.  During the initial installation, the health status is RED.  When the instance’s health status changes to GREEN, the HyperStore Instance is ready for S3 traffic.

HyperStore Instance Health
The vSAN Data Persistence platform on VMware Cloud Foundation with Tanzu provides a powerful framework to deploy and manage HyperStore S3 object storage. As a foundation, Kubernetes provides functions like auto-scaling, resource scheduling, and role-based access control.  Layering on VMware’s vSAN Data Persistence platform enables efficient use of vSAN storage with management functions like maintenance mode and health monitoring.  The result is an environment for apps managed within VMware Cloud Foundation with Tanzu where HyperStore S3 object storage can be created and monitored from the VC UI, a convenient “single pane of glass.”

To see a demo of this new combined solution, go to cloudian.com/resource/demos-and-videos/demo-vmware-vsan/.

To learn more about Cloudian solutions for VMware environments, go to cloudian.com/vmware/.