Cloud storage, like the other cloud services a provider offers, comprises several capabilities that run on the provider’s hardware and are accessed remotely. Each of these capabilities meets a different need, but all aim to provide the flexibility to pay only for what you use. The provider is responsible for maintaining the underlying hardware and ensuring the data is available, resilient, and protected.
The most common types of cloud storage include object, block, and file.
Storage types differ primarily in how they are accessed and how they perform. The application using the storage, and where that application runs, determine which type best fits a particular need.
Most traditional applications that run on a physical server and leverage physical drives in your data center use file storage. Operating systems like Linux or Windows Server present the applications that run on them with a file system, a single consistent set of rules and methods for storing and retrieving data. The operating system handles the details behind the scenes: Is the physical disk a solid state drive (SSD)? A traditional spinning disk hard drive? An optical disk? Or a remote network file share? Applications are insulated from those details, since they simply open, read from, write to, and save files according to programmed rules, not the precise physical characteristics, which may vary.
Cloud file storage presents the operating systems running on servers in the cloud with a standard network file share, similar to the network file shares that might run in your own data center, which the operating system in turn presents to applications as a file system. Applications running on compute instances in the cloud can then use this file storage accordingly. They don’t need to be modified to work with different storage when running in the cloud; they continue to run with the file storage they’ve always used.
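To illustrate the point, here is a minimal Python sketch: the application code below uses ordinary file I/O and works unchanged whether `DATA_DIR` points at a local disk or at a cloud file share mounted over NFS. The `/mnt/shared` mount point and the file names are illustrative assumptions, not part of any particular service.

```python
import os

# Hypothetical mount point: a cloud file share mounted over NFS looks to the
# application exactly like any other directory, so swapping DATA_DIR between
# a local disk path and the mounted share requires no code changes.
DATA_DIR = os.environ.get("DATA_DIR", "/mnt/shared")

def save_report(name: str, contents: str) -> str:
    """Write a file using ordinary file I/O; the OS hides the storage details."""
    path = os.path.join(DATA_DIR, name)
    with open(path, "w", encoding="utf-8") as f:
        f.write(contents)
    return path

def load_report(name: str) -> str:
    with open(os.path.join(DATA_DIR, name), encoding="utf-8") as f:
        return f.read()

if __name__ == "__main__":
    save_report("daily.txt", "storage is transparent to the application\n")
    print(load_report("daily.txt"))
```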
The cloud provider manages the hardware, including physical disks and networking, ensures the data is replicated and protected, and handles capacity planning so that available capacity automatically grows with usage. The advantages are clear when compared to the traditional approach of carefully planning scheduled purchases of network file systems to meet the needs of future growth, then managing the hardware and ensuring the protection of that data yourself.
The Archive Storage cloud service is the ideal solution for storing seldom-accessed data that requires long retention periods. Archive Storage is more cost-effective than Object Storage for preserving cold data. Unlike other storage options, however, data retrieval from Archive Storage is not instantaneous.
Cloud Storage buckets are logical containers for storing objects. A bucket is a single compartment with policies that determine which actions can be performed on the bucket and on the objects it contains.
When creating a bucket, organizations decide which default storage tier, Archive or Standard, is appropriate for their data. Object Storage can also work with Archive Storage to automatically move individual objects to the Archive tier while they logically remain in a Standard-tier bucket, as the sketch below shows.
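One way to set this up is with an object lifecycle policy. The following is a minimal sketch using the OCI Python SDK; the bucket name `reports-bucket` and the 90-day threshold are illustrative assumptions, and the policy takes effect only if lifecycle management has the appropriate permissions in your tenancy.

```python
import oci

# Load credentials from the default OCI config file (~/.oci/config).
config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Illustrative bucket name: substitute one of your own Standard-tier buckets.
BUCKET = "reports-bucket"

# Rule: objects untouched for 90 days move to the Archive tier automatically,
# while logically remaining in the same Standard-tier bucket.
rule = oci.object_storage.models.ObjectLifecycleRule(
    name="archive-cold-objects",
    action="ARCHIVE",
    time_amount=90,
    time_unit="DAYS",
    is_enabled=True,
)

client.put_object_lifecycle_policy(
    namespace_name=namespace,
    bucket_name=BUCKET,
    put_object_lifecycle_policy_details=(
        oci.object_storage.models.PutObjectLifecyclePolicyDetails(items=[rule])
    ),
)
```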
Block volumes are like cloud file storage in that they represent an enhanced version of a type of network storage you may already be running in your data center. Block volumes present a different abstraction of storage capacity to an operating system, one that incurs less network overhead and offers higher performance in return, but requires more configuration and management within the operating systems that use it.
They can be configured with different settings to increase performance or reduce costs. Unlike cloud file storage, block volumes must be configured with a specific size, but that size can be increased at any time while the volume remains online and available to applications using it.
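As a sketch of an online resize, the snippet below uses the OCI Python SDK to grow a volume while it remains attached and available; the volume OCID and the target size are placeholders.

```python
import oci

config = oci.config.from_file()
block_storage = oci.core.BlockstorageClient(config)

# Placeholder OCID: substitute the OCID of your own block volume.
VOLUME_ID = "ocid1.volume.oc1..example"

# Grow the volume to 2 TB while it stays online and attached; volume size
# can only ever be increased, never decreased.
block_storage.update_volume(
    VOLUME_ID,
    oci.core.models.UpdateVolumeDetails(size_in_gbs=2048),
)
```

Note that after the volume grows, the operating system must still rescan the disk and extend its partition and file system to use the new space, an example of the extra in-OS management that block volumes entail.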
As with any cloud service, the provider manages the hardware, capacity planning, and ensures the data is replicated and protected.
Object storage is accessed differently than the other storage types discussed. It is accessed by software applications directly, not through an operating system, so an application must be intentionally written to use object storage. Object storage is maintained remotely from the application, and it is accessed in two similar but importantly different scenarios. First, it is often accessed via the internet by applications running on individual computers, mobile devices, Internet-of-Things devices, and so on. Second, it can be used by applications running in the cloud.
Applications that use object storage can flexibly store and retrieve unstructured data in a remote location without using a file system, because the stored items are merely opaque “objects” to the cloud provider. This means the application developer retains maximum flexibility and has an essentially bottomless free-form data store in the cloud while being charged only for the amount stored and transferred.
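For example, storing and retrieving an object takes only a couple of API calls. The sketch below uses the OCI Python SDK; the bucket name `app-data` and the object key are illustrative assumptions.

```python
import oci

config = oci.config.from_file()
client = oci.object_storage.ObjectStorageClient(config)
namespace = client.get_namespace().data

# Illustrative names: any bucket you own and any object key will do.
BUCKET = "app-data"
OBJECT_NAME = "sensor/2024/reading-001.json"

# Store an arbitrary blob of unstructured data; the provider sees only bytes.
client.put_object(namespace, BUCKET, OBJECT_NAME,
                  b'{"temperature": 21.5, "unit": "C"}')

# Retrieve it later, from anywhere with network access and credentials.
blob = client.get_object(namespace, BUCKET, OBJECT_NAME).data.content
print(blob.decode("utf-8"))
```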
The downsides of object storage are two-fold: a bit more work for application authors, who must manage their own object formats, and performance limitations. Object storage is accessed by software making API calls, typically over the internet, so what might take direct-attached storage microseconds, and block or file storage milliseconds, may sometimes take object storage a second or more. For many use cases, such as end users running applications on their phones that connect to cloud storage, this performance is acceptable, especially in return for the “anywhere access” these applications provide. Of course, when an application using object storage runs in the same cloud where the objects are stored, performance is considerably higher because all the resources are in the same region on the cloud provider’s own network.
High-performance computing (HPC) is becoming more and more common as AI, machine learning, engineering simulations, and financial modeling applications are adopted by more companies. Advancements in recent years have made high-performance computing in the cloud possible, easily accessible, and affordable.
However, according to a recent Oracle Cloud Infrastructure blog post, shared file system throughput for compute clusters is often a barrier for simulations, artificial intelligence and machine learning, and complex modeling.
“We can provide this level of performance because our block storage is backed by NVMe SSD media and our data centers have a fast, flat network architecture. We believe so strongly in the capabilities of our storage offering that our block storage performance is backed by a unique performance SLA. Throughput varies by the size of the volume, and all volumes 1 TB and above provide the maximum 480 MB/s on the balanced performance tier, by default and at no additional charge.”
This type of storage requires manually building file server clusters from cloud compute instances with direct-attached solid state drives, but it provides the highest levels of performance (the highest throughput and lowest latency), which these HPC applications require.
As the IT market and ecosystem shift, the changes largely foreshadow and somewhat reflect a seismic remaking of the storage landscape. Shaped by the nonstop explosion in data growth and the costs of storing and protecting it, storage professionals envision that their storage environments and data centers of the future will look dramatically different from today. For enterprise storage managers, trying to keep up with data growth while juggling data security needs, archiving requirements, and cost-containment issues is like swimming upstream with a pile of un-virtualized storage arrays on their backs.
The cloud’s scalability and elastic pay-as-you-grow model mean organizations don’t have to shell out for a storage upgrade of any size or granularity, whether it’s a planned expansion or a short-term spike. In addition, consuming cloud services is almost always treated as OpEx and is often a monthly budget line item. Both aspects make it easier to forecast and control expenditures.
Here are some of the many use cases for cloud storage solutions across object storage, file storage, and block volumes:
Backup and recovery is the process of storing copies of data so they can be restored later, protecting organizations against data loss. Cloud backup is the approach of sending physical or virtual files or data to a secondary location for preservation in case of failure or disaster. A third-party storage vendor usually hosts the secondary server and data storage systems in the cloud.
Cloud data backup can cement an organization's data protection strategy without increasing the workload of information technology (IT) staff. Cloud storage backup services act as an offsite facility for many organizations. There are several approaches to cloud backups that can easily fit into an organization's existing data protection process, including: