Mobile Video Surveillance Solution for Montebello Bus Lines

Mobile video surveillance can do a lot to ensure safety on transit systems. After all, bus and train operators must focus on operating their vehicles, not on policing riders.

Real-time mobile video surveillance would allow one staff member to monitor multiple vehicles, which could save cost and increase safety.

The problem is that traditional technologies record video on the vehicle for retrieval only after the vehicle returns to the depot. There is no real-time view: when an incident occurs, you can only see what happened after the fact.

Also, when an incident occurs, finding the relevant clip takes a long time. The manual process consumes expensive staff resources and slows the response.

The City of Montebello devised a better video surveillance answer. View this video to learn more.

The Challenges in Storing Video Surveillance

Montebello Bus Lines (MBL) currently operates 72 buses that serve over 8 million passengers a year, and each bus houses five cameras and a recording system. Video was recorded only locally on the buses, and transferring the data to the operations center at the end of the day took time.

Then, MBL had to manually locate clips using time codes. This made it difficult to follow up on reported incidents in a timely manner.

Another storage issue was budget. Budget limitations meant MBL couldn’t keep the video data for more than a few days, and if someone filed a complaint after the video had been deleted, the City of Montebello faced financial risk.

Finding the Answer in Object Storage

What MBL needed was the ability to wirelessly upload video in addition to storing the data locally. This would allow for immediate review by transit staff or law enforcement and would serve as an additional layer of backup to prevent data loss.

MBL first tried a Network Attached Storage (NAS) system, but entry-level NAS systems simply aren’t fast enough, while the better-performing systems are cost-prohibitive. Another challenge was the file structure, which did not allow graceful transfer over a wireless network: an interrupted transfer meant restarting the entire process. Finally, NAS systems allowed only limited metadata tagging, capturing just the most basic information.

Backing Up Video Surveillance with Cloudian

This is where Cloudian steps in. With Cloudian and Transportation Security Systems (TSS) IRIS, MBL can now add metadata tags to its videos, and metadata search makes it easy to locate clips by parameters such as time, location, and vehicle.

Large clips are broken into smaller pieces that are transferred concurrently, making wireless data transfers reliable and practical. Additionally, object storage is more cost-efficient, meaning it’s easy (and affordable) to scale up as more video is stored.
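
To illustrate the idea, here is a minimal sketch using the standard S3 API (which Cloudian supports) via Python’s boto3: a large clip is uploaded in concurrent parts with searchable metadata attached. The endpoint URL, bucket, key, and credentials are placeholders rather than MBL’s actual configuration, and in the real deployment TSS IRIS handles this automatically.

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Placeholder endpoint and credentials for an S3-compatible HyperStore cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.local",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Break large clips into 8 MB parts and upload up to 4 parts at a time,
# so a dropped wireless connection only retries the failed part.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,
    multipart_chunksize=8 * 1024 * 1024,
    max_concurrency=4,
)

s3.upload_file(
    "clip.mp4",
    "surveillance-video",
    "bus-42/cam-3/2017-06-01T14-05.mp4",
    Config=config,
    ExtraArgs={"Metadata": {"vehicle": "bus-42", "camera": "3", "route": "10"}},
)
```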

David Tsuen, IT Manager for the City of Montebello, stated that “Cloudian and TSS together allowed us to solve a very challenging problem. We now have a path to significant cost savings for the City and a safer experience for our riders. That’s a genuine win-win.”

You can learn more about how we solved MBL’s challenges by reading our case study, or you can try Cloudian out for yourself with our free trial.

 

Cloudian Cisco Validated Design (CVD)

Cloudian HyperStore is certified for the Cisco UCS® S3260 storage server and integrated UCS Manager.

Read the Cisco Validated Design (CVD) document here.

One of the key benefits of our HyperStore solution is full S3 API compatibility, which results in hassle-free integration and interoperability with S3-compatible applications. Additional features and benefits of Cloudian HyperStore include:

  • Limitless scalability
  • Lower TCO
  • Erasure coding and replication for data protection
  • Multi-tenancy
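
That S3 compatibility means an existing S3 application typically only needs to be pointed at a different endpoint. Here is a minimal sketch in Python with boto3, assuming a hypothetical HyperStore endpoint and placeholder credentials:

```python
import boto3

# Point a standard S3 client at the on-prem HyperStore endpoint instead of AWS.
# Endpoint URL and credentials below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-hyperstore.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello, object storage")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```

Beyond the endpoint and credentials, no application-side changes should be needed.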

rediCloud Uses HyperStore on Cisco Servers

Our certification in the Cisco Solution Partner Program helped catch the eye of service provider rediCloud, who was looking for an object storage solution that was easier to manage and more reliable. The highly scalable and turnkey nature of HyperStore was key in rediCloud’s decision to deploy Cloudian on Cisco servers across two data centers. Read more about our deployment with rediCloud here.

To learn more, download the Cisco Validated Design Guide.

 

MSPs Look to Cloudian to Help Grow S3-Compatible Object Storage Business

In cloud storage, there are hundreds of managed service providers (MSPs) who offer enterprise-grade, S3-API storage services. Increasingly, these MSPs look to Cloudian as the supplier for their underlying object storage infrastructure.

Local and regional MSPs offer their customers great value in several ways. They act as a trusted guide for large companies, and they provide rich, turnkey service offerings for smaller ones. Furthermore, their regional focus builds deep connections with local businesses as well as expertise in compliance and regulatory issues, benefits that remove technical barriers, reduce risk, and accelerate cloud adoption.

Their offerings span a wide range of high-value services: storage, archive, collaboration, and data protection, among others. These services require cost-competitive storage, with scalability that is both limitless and painless. They also require S3 interoperability that ensures trouble-free operation and high customer satisfaction. That’s where Cloudian HyperStore comes in.

Over the past year, we’ve been pleased to help dozens of global MSPs deploy and manage successful cloud storage offerings, building profitable businesses on Cloudian’s infinitely scalable platform.

Why MSPs Choose Cloudian

Cloudian object storage uniquely addresses this market with MSP-oriented features, including:

  • 100% native S3 API: Ensures trouble-free operation with best-in-class interoperability
  • Quality of Service (QoS): Eliminates the “noisy neighbor” problem with bandwidth controls
  • Multi-tenancy: Unique namespace for each client
  • Branded UI: Provides customers with a management portal that highlights your brand
  • Configurable data protection: Lets customers configure data protection settings to meet their specific needs, rather than one-size-fits-all

Additionally, our object storage platform contains a full suite of other features that bring key advantages to MSPs:

  • Start small and grow: HyperStore allows MSPs to scale with demand. Start with a few dozen terabytes and scale up to hundreds of petabytes and beyond — with zero disruption
  • Appliance or Software: Deploy Cloudian HyperStore as a pre-configured appliance — with full support — or as software on your server.
  • Geo-distribution: You can deploy Cloudian HyperStore in a single data center, or distribute across multiple data centers with data replication to ensure uninterrupted service, even in the event of a data center outage
  • File services: Deploy NFS/SMB file services for additional value-add

 

A Growing List of MSP Customers

A sample of the MSPs now offering Cloudian-based storage services includes:

Learn more about the value these MSPs saw by checking out our solutions.

 

Cloudian Moves Private Cloud Beyond Backup

In a data protection application, Cloudian acts as a backup target. To excel in this role, the system of course needs to be scalable, fast, and cost-effective. But most importantly, it must ensure that data is never, ever corrupted or lost. Fortunately, data durability was the #1 design goal of Cloudian HyperStore.

Storage Switzerland recently wrote a product analysis of our HyperStore solution, and the verdict is very positive. Read on to understand why Cloudian is “an ideal storage target for backups and archives” and “a system that organizations should seriously consider for all their storage needs.”

Effective Backup and Archive

Our HyperStore object storage solution offers robust data protection features that can protect your data from incidents of all kinds, including drive failure, node failure, even the failure of an entire data center, if that’s the level of protection you require.

Features that support our 14-nines data durability include (see the overhead comparison sketch after this list):

  • Erasure coding
  • Data replication across nodes or across sites
  • Hybrid cloud integration, which enables replication to the public cloud
  • Proactive repair
  • Data GPS, which helps you locate objects
  • Repair-on-Read, which automatically checks replicas for missing or out-of-date copies and then replaces or updates them
  • Smart Redirect, which creates a local cache replication as well as an Amazon S3 copy
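
As a rough illustration of why erasure coding matters alongside replication, the short sketch below compares raw capacity overhead. The 4+2 erasure-coding and 3x replication figures are generic examples for illustration, not Cloudian’s specific defaults.

```python
def raw_capacity_needed(usable_tb, data_frags=0, parity_frags=0, replicas=0):
    """Raw capacity (TB) required to store `usable_tb` of user data."""
    if replicas:
        return usable_tb * replicas                                # N full copies
    return usable_tb * (data_frags + parity_frags) / data_frags    # erasure-coding overhead

print(raw_capacity_needed(100, replicas=3))                    # 300 TB raw for 3x replication
print(raw_capacity_needed(100, data_frags=4, parity_frags=2))  # 150 TB raw for 4+2 erasure coding
```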

Beyond backup applications, Cloudian HyperStore can play multiple other roles within your data center.

 

Multiple Applications, One Storage Pool

Other Cloudian use cases include:

  • NAS file server offload
  • Media archive management
  • File sync and share storage

Because Cloudian is a scale-out cluster, all of these can co-exist in a single, limitlessly scalable namespace.

We offer both a software solution and multiple hardware appliances which let you start with a small deployment (a few dozen TBs) and then scale up to multiple PBs and beyond.

 

AI and Machine Learning: The Future of Object Storage

Soon you will be seeing object storage widely used in the next big storage driver: data pools that support AI and Machine Learning. Object storage is ideal for this role due to its scalability and integrated metadata support.

You can read the full Storage Switzerland piece here, or check out our solutions to learn more.

 

Bring Object Storage to Your Nutanix Cluster with Cloudian HyperStore

Your Nutanix-powered private cloud provides fast, Tier 1 storage for the information you use every day. But what about the information that’s less frequently used, or requires more capacity than your Nutanix cluster has to spare? Cloudian HyperStore is on-prem storage that provides extra capacity for your large-scale storage demands.

HyperStore Enterprise Object Storage Overview

Cloudian HyperStore is petabyte-scalable, on-prem object storage for unstructured data. It employs the S3 interface, so most applications that include public cloud connectivity will work with HyperStore.

Like Nutanix, HyperStore is a scale-out cluster. When you need more capacity you simply add nodes. All capacity resides within a single namespace, so it remains easy to manage. Key features of Cloudian HyperStore include:

  • 100% native S3 interface, so it works with most cloud-enabled applications
  • Scales from TBs to PBs without disruption
  • Fourteen-nines data durability with erasure coding and replication
  • 70% less cost than traditional NAS

Scalable Storage for Data-Intensive Applications

Cloudian HyperStore’s scalability and exceptional data durability make it ideal for use cases such as:

  • Backup and archive: Scalable backup target, compatible with Veritas, Commvault, Veeam, and Rubrik data protection solutions
  • Media and entertainment: HyperStore provides an active archive that’s 100X faster to access than tape, and ⅓ the cost of NAS; compatible with most media asset managers.
  • File management: Offload Tier 1 NAS to extend capacity with zero user disruption

HyperStore is guaranteed compatible with all applications that support the S3 interface, the same interface used by AWS and Google Cloud Platform. Think of HyperStore as hyperconverged storage, bringing together multiple data types in one super-scalable pool.

Multiple Deployment Options

Choose from multiple HyperStore deployment options including:

  • HyperStore within your Nutanix cluster: Run HyperStore software on a Nutanix VM and store data to your Nutanix disk. No additional hardware required. A fast, cost-effective way to get started or to develop S3-enabled applications.
  • HyperStore as a stand-alone appliance: Deploy HyperStore appliances in your data center for high-capacity, cost-effective storage. Locate all nodes locally, or spread them out across multiple locations for distributed storage.

Nutanix is the perfect platform for your frequently used or performance-sensitive data. For everything else, there’s Cloudian. To learn more about our work with Nutanix, come find us at Nutanix .NEXT 2017 at booth G7. Additionally, Sanjay Jagad, our Director of Products and Solutions, will be presenting on how to bring object storage to your Nutanix cluster on June 30th, 11:15am in room Maryland D.

To learn more about Cloudian and sign up for a free trial, visit us at https://cloudian.com/free-trial/.

 

Object Storage Bucket-Level Auto-Tiering with Cloudian

As discussed in my previous blog post, ‘An Introduction to Data Tiering’, there is huge value in using different storage tiers within a data storage architecture to ensure that your different data sets are stored on the appropriate technology. Now I’d like to explain how the Cloudian HyperStore system supports object storage ‘auto-tiering’, whereby objects can be automatically moved from local HyperStore storage to a destination storage system on a predefined schedule based upon data lifecycle policies.

Cloudian HyperStore can be integrated with any of the following destination cloud storage platforms as a target for tiered data:

  • Amazon S3
  • Amazon Glacier
  • Google Cloud Platform
  • Any Cloud service offering S3 API connectivity
  • A remotely located Cloudian HyperStore cluster

Granular Control with Cloudian HyperStore

For any data storage system, granularity of control and management is extremely important: data sets often have varying management requirements, with different Service Level Agreements (SLAs) applied according to the value of the data to an organization.

Cloudian HyperStore manages data at the bucket level, providing granular control over SLAs and management policies (note: a “bucket” is an S3 data container, similar to a LUN in block storage or a file system in NAS systems). HyperStore provides the following control parameters at the bucket level (a brief configuration sketch follows the list):

  • Data protection – Select from replication or erasure coding of data, plus single or multi-site data distribution
  • Consistency level – Control of replication techniques (synchronous vs asynchronous)
  • Access permissions – User- and group-based access control for data
  • Disaster recovery – Data replication to public cloud
  • Encryption – Data at rest protection for security compliance
  • Compression – Reduction of the effective raw storage used to store data objects
  • Data size threshold – Variable storage location of data based upon the data object size
  • Lifecycle policies – Data management rules for tiering and data expiration

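Several of these controls map onto standard S3 bucket-level calls. Below is a rough sketch, assuming the HyperStore S3 endpoint honors these particular requests; the endpoint, bucket name, account ID, and credentials are placeholders. Cloudian-specific settings such as consistency level or data size threshold are typically configured through HyperStore’s own management tools rather than the S3 API.

```python
import json
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-hyperstore.example.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "finance-archive"   # hypothetical bucket

# Encryption: default server-side encryption for data-at-rest compliance.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Access permissions: lock the bucket down, then grant read access via a policy.
s3.put_bucket_acl(Bucket=bucket, ACL="private")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/auditor"},
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```
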
Cloudian HyperStore manages data tiering via lifecycle policies.

Auto-tiering is configurable on a per-bucket basis, and each bucket can have different lifecycle policies based upon rules (a configuration sketch follows the list below). Examples of these rules include:

  1. Which data objects to apply the lifecycle rule to. This can include:
  • All objects in the bucket
  • Objects whose names start with a specific prefix (such as “Meetings/2015/”)
  2. The tiering schedule, which can be specified using one of three methods:
  • Move objects X number of days after they’re created
  • Move objects if they go X number of days without being accessed
  • Move objects on a fixed date (such as December 31, 2016)
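
As a sketch of what such a rule can look like through the standard S3 lifecycle API, the example below tiers objects under the “Meetings/2015/” prefix 30 days after creation and expires them after roughly five years. The endpoint, bucket name, and the use of the GLACIER storage class as a stand-in for the configured tiering target are assumptions for illustration; how HyperStore maps the rule to its tiering destination depends on your configuration.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-hyperstore.example.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="department-share",                           # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-old-meetings",
            "Filter": {"Prefix": "Meetings/2015/"},      # only objects under this prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],  # move after 30 days
            "Expiration": {"Days": 1825},                # delete after ~5 years
        }]
    },
)
```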

When a data object is tiered, a small stub object is retained on the HyperStore cluster. The stub acts as a pointer to the actual data object, so the object still appears as if it’s stored in the local cluster. To the end user, there is no change in how data is accessed, but the object displays a special icon denoting that it has been moved.

For auto-tiering to a Cloud provider such as Amazon or Google, an account is required along with associated account access credentials.

Accessing Data After Auto-Tiering

Objects that have been auto-tiered to public cloud services can be accessed either directly through the public cloud platform (using the applicable account and credentials) or via the local HyperStore system. There are three options for retrieving tiered data:

  1. Restoring objects – When a user accesses a data file, they are directed to the local stub file held on HyperStore, which then redirects the request to the actual location of the data object on the tiering target platform.

A copy of the data object is restored from the tiered storage to a local HyperStore bucket, and the user request is performed on the object once it has been copied back. A time limit can be set for how long the retrieved object is retained locally before it returns to the secondary tier.

This is the best option when data is accessed relatively frequently and you want to avoid both the performance impact of traversing the internet and the access/retrieval charges applied by service providers. Storage capacity on the local HyperStore cluster must be managed to ensure there is sufficient “cache” for object retrievals.

  2. Streaming objects – Streams data directly to the client without first restoring it to the local HyperStore cluster. When the file is closed, any modifications are made to the object in situ in the tiered location. Any metadata modifications are updated both in the local HyperStore database and on the tiered platform.

This is the best option when data is accessed relatively infrequently and storage capacity on the local HyperStore cluster is a concern. Performance will be lower, however, as requests traverse the internet, and the service provider may apply access charges every time the file is read.

  3. Direct access – Objects auto-tiered to public cloud services can be accessed directly by another application or via your standard public cloud interface, such as the AWS Management Console. This method bypasses the HyperStore cluster entirely. Because objects are written to the cloud using the standard S3 API and include a copy of the object’s metadata, they can be referenced directly.

Storing objects in this openly accessible manner, with co-located rich metadata, is useful in several instances (a brief access sketch follows the list):

  1. A disaster recovery scenario where the HyperStore cluster is not available
  2. Facilitating data migration to another platform
  3. Enabling access from a separate cloud-based application, such as content distribution
  4. Providing open access to data, without reliance on a separate database to provide indexing
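
As a sketch of the direct-access path, a standard S3 client pointed at the public cloud (here AWS, using your normal AWS credentials) can read a tiered object and its metadata without touching the HyperStore cluster. The bucket and key names are placeholders.

```python
import boto3

aws = boto3.client("s3")   # regular AWS credentials; HyperStore is not involved

# Inspect the object's co-located metadata, then download it directly.
head = aws.head_object(Bucket="hyperstore-tiered-data", Key="Meetings/2015/budget.mp4")
print(head["Metadata"])

aws.download_file("hyperstore-tiered-data", "Meetings/2015/budget.mp4", "budget.mp4")
```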

HyperStore provides great flexibility for leveraging hybrid cloud deployments where you get to set the policy on which data is stored in a public or private cloud. Learn more about HyperStore here.

 

YOU MAY ALSO BE INTERESTED IN

Object Storage vs. Block Storage: What’s the Difference?

An Introduction to Data Tiering

Not all data is equal: frequency of access, security needs, and cost considerations vary, so data storage architectures need to provide different storage tiers to address these varying requirements. Storage tiers may differ by disk drive type, RAID configuration, or even entirely different storage sub-systems, each offering a different I/O profile and cost impact.

Data tiering allows the movement of data between different storage tiers, which allows an organization to ensure that the appropriate data resides on the appropriate storage technology. In modern storage architectures, this data movement is invisible to the end-user application and is typically controlled and automated by storage policies. Typical data tiers may include:

  1. Flash storage – High value, high-performance requirements, usually smaller data sets; cost is less important compared to the performance Service Level Agreement (SLA) required
  2. Traditional SAN/NAS storage arrays – Medium value, medium performance, medium cost sensitivity
  3. Object storage – Less frequently accessed data with larger data sets; cost is an important consideration
  4. Public cloud – Long-term archival for data that is never accessed

Typically, structured data sets belonging to applications/data sources such as OLTP databases, CRM, email systems and virtual machines will be stored on data tiers 1 and 2 as above. Unstructured data is more commonly moving to tiers 3 and 4 as these are typically much larger data sets where performance is not as critical and cost becomes a more significant factor in management and purchasing decisions.

Some Shortcomings of Data Tiering to Public Cloud

Public cloud services have become an attractive data tiering solution, especially for unstructured data, but there are considerations around public cloud use:

  1. Performance – Public network access will typically be a bottleneck when reading and writing data to public cloud platforms, as will data retrieval times (based on the SLA provided by the cloud service). For backup data especially, backup and recovery windows are still incredibly important, so it is worth considering holding the most recent backup sets onsite and archiving only older backup data to the cloud.
  2. Security – Certain data sets/industries have regulations stipulating that data must not be stored in the cloud. Being able to control what data is sent to the cloud is of major importance.
  3. Access patterns – Data that is re-read frequently may incur additional network bandwidth costs imposed by the public cloud service provider. Understanding your use of data is vital to control the costs associated with data downloads.
  4. Cost – As well as the bandwidth costs associated with reading data, storing large quantities of data in the cloud may not make the most economic sense, especially when compared to the economics of on-premises object storage, so both options should be evaluated (see the sketch after this list).
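
A quick back-of-the-envelope comparison can make the economics concrete. The sketch below uses made-up prices and volumes purely for illustration; substitute your provider’s actual rates and your own access patterns.

```python
# Rough monthly cost comparison: keeping 200 TB in a public cloud tier versus
# on-premises object storage. All prices are illustrative assumptions only.
capacity_gb = 200 * 1000          # 200 TB of archived data, in GB
reread_gb = 20 * 1000             # 20 TB re-read (downloaded) per month

cloud_storage_per_gb = 0.023      # assumed $/GB-month for public cloud storage
cloud_egress_per_gb = 0.09        # assumed $/GB egress (download) charge
onprem_per_gb = 0.01              # assumed all-in $/GB-month for on-prem object storage

cloud_monthly = capacity_gb * cloud_storage_per_gb + reread_gb * cloud_egress_per_gb
onprem_monthly = capacity_gb * onprem_per_gb

print(f"Public cloud: ${cloud_monthly:,.0f}/month  vs  on-prem: ${onprem_monthly:,.0f}/month")
```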

Using Hybrid Cloud for a Balanced Data Tier Strategy

For unstructured data, a hybrid approach to data management is key; an automation engine, data classification, and granular control of data are the requirements necessary to really deliver on this premise.

With a hybrid cloud approach, you can push any data to the public cloud while retaining the control that comes with on-premises storage. For any data storage system, granular control and management are extremely important, as different data sets have different management requirements and need different SLAs appropriate to the value of the data to an organization.

Cloudian HyperStore is a solution that gives you that flexibility for easily moving between data tiers 3 and 4 listed earlier in this post. Not only do you get the control and security from your data center, you can integrate HyperStore with many different destination cloud storage platforms, including Amazon S3/Glacier, Google Cloud Platform, and any other cloud service offering S3 API connectivity.

Learn more about our solutions today.

Learn more about NAS backup here.

 

SNL and Object Storage: Archiving Media Assets

Picture all of your media assets today. How much space do they take up, and how well does your current storage solution work? Now, what if you had over 40 years of assets? Would the same solution work just as efficiently?

Tape storage is currently the preferred method for archiving media assets, but tape is a limited-life solution with many different ways it can be compromised. When thinking long-term, tape becomes less and less viable.

For a prime example of why we need to move away from tape storage, let’s look at Saturday Night Live. One of the longest-running network programs in the US, SNL has generated 42 seasons of content consisting of 826 episodes and 2,966 cast members. In terms of data, that’s 42 years’ worth of archive material comprising multiple petabytes across 2 data centers.

That’s a lot of data, and for SNL, having a huge archive is useless unless they can easily access it. That’s why SNL utilized object storage to help digitize and store their 42 years of assets. Each asset can be tagged with as many metadata tags as needed, making it easy and fast to find, organize, and assemble clips from the show’s long history.

If your media assets are just sitting in cold storage, it may be time to rethink your strategy. By creating an efficient archival solution today, you can accelerate your workflows and continue to monetize those assets 40 years from now, just as SNL is doing today.

We’ll be delving further into this topic at NAB along with Matt Yonks, who is the Post Production Supervisor for Saturday Night Live. The session will take place on April 25 at 3:30pm and will include a drawing for a 4K video drone. Register early for extra chances to win!

New HyperStore 4000: Highest density storage

Rack space and budget. Most data centers are short on both. Yet somehow, you’re expected to accommodate a 50% increase in unstructured data volume annually. That’s a problem.

The new solution is the HyperStore 4000. With 700TB in just 4U of rack height, it’s nearly 2X the density of our earlier models. And it delivers storage in your data center at prices on par with the public cloud: about ½ cent per GB per month. Space savings and cost savings in one.

The HyperStore 4000, by the numbers

The HyperStore 4000 appliance was built to handle massive amounts of data. It’s housed in a 7-inch-high (4U) enclosure with 2 nodes and a maximum capacity of 700TB. Drive sizes range from 4TB to 10TB, and the appliance has 256GB of memory (128GB per node).

Better yet, the appliance reduces storage costs by 40% (versus other Cloudian solutions) – data management now costs half a cent per GB per month. Even with the reduced cost, there is no drop in data availability – with a three-appliance cluster, you’ll still see 99.999999% data durability.

For the full list of specs, check out our datasheet.

Save time, your scarcest commodity

With most storage systems, as your needs grow, your management headaches grow with them. But the HyperStore 4000 grows painlessly. You just add nodes. No disruption and no data migration.

We specifically designed the HyperStore 4000 appliance to help customers in industries with huge data needs such as life sciences, healthcare, and entertainment. These are the industries where data growth is exploding every year and where modern data centers feel the most burden, as they need high-density storage with peak performance for data protection, video surveillance, research, archival, and more. Now you can meet these growing needs without growing pains.

Finally, the HyperStore 4000 has a 100% native S3 API and the industry’s highest level of S3 API compatibility. In fact, we guarantee it will work with your S3-enabled applications.

Be sure to also take a look at Cloudian’s other solutions to see which one is right for you.