
Understanding cloud bursting: definition and applications

Introduction: If an enterprise already has an on-premises system, at some point it may want to integrate its on-premises and external deployments. One way to achieve this is cloud bursting. But what exactly is cloud bursting, and what does "bursting to the cloud" mean?

Today, public clouds have quickly become a simple and accessible way to build IT infrastructure.


The term "cloud bursting" is not new; corporate IT departments have been discussing it for the past decade.

Cloud bursting means that a company scales an on-premises workload by migrating some (or all) of it to the public cloud. This is usually done to absorb a rapid increase in load, such as responding to peak demand.

Cloud bursting can also be used as a migration aid: an application is partially or fully moved to the cloud to relieve the load on internal infrastructure while it is upgraded or replaced.

The "on-demand" model of cloud outbreaks provides the ability to meet peak or peak workload demands without the need to keep a large number of unused and expensive devices on site.

Site traffic

For example, if a website's traffic peaks only three or four times a year, it makes sense to provision infrastructure for those needs on demand, only during the peak periods.

When demand decreases, the cloud computing resources can be shut down, which saves a great deal of money compared with an on-premises data center.
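
As a rough illustration, the burst decision can be reduced to a simple capacity calculation. The sketch below is a minimal Python model using assumed capacity figures; a real implementation would drive a cloud provider's autoscaling API rather than just return a number.

```python
import math

# Minimal sketch of an on-demand burst policy. The capacity figures are
# assumptions for illustration; a real system would call a cloud
# provider's API to start and stop the instances this function plans.

ONPREM_CAPACITY = 1000    # requests/sec the on-premises tier can absorb
INSTANCE_CAPACITY = 100   # requests/sec each cloud instance can absorb

def cloud_instances_needed(current_load: int) -> int:
    """Only the overflow above on-premises capacity bursts to the cloud."""
    overflow = max(0, current_load - ONPREM_CAPACITY)
    return math.ceil(overflow / INSTANCE_CAPACITY)

# During a traffic spike the overflow bursts into the public cloud...
assert cloud_instances_needed(1450) == 5
# ...and when demand falls, the cloud resources are shut down again.
assert cloud_instances_needed(800) == 0
```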

In addition, cloud bursting can reduce the need for enterprises to scale out their on-premises data centers.

Imagine that growing computing demand would require building a new data center or expanding an existing one. It makes sense for the company to shift some of its workload to the public cloud instead, reducing capital expenditure.

This situation is not strictly a cloud bursting scenario, because by definition bursting means a workload moves to the cloud for a period of time and eventually returns on-premises. However, it can serve as a temporary solution while an existing data center is upgraded.

Misconceptions about cloud bursting

Although cloud bursting sounds like a good idea, the process is actually very difficult. Many applications are not designed to run distributed across two (or more) computing environments at once, because they are monolithic in nature.

For example, consider a system built on top of a large relational database. Migrating it to the cloud means moving the entire application. Even if the application tiers can be separated (for example, the application logic and web tier from the database), the latency the cloud platform introduces between those tiers can make cloud bursting a challenge.
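
To see why that latency matters, consider a back-of-the-envelope sketch. The round-trip times and query count below are illustrative assumptions, not measurements:

```python
# A "chatty" application tier issues many small queries per page, so the
# inter-tier round-trip time is multiplied, not merely added once.

LAN_RTT_MS = 0.5        # app logic and database in the same data center
WAN_RTT_MS = 40.0       # app logic on-premises, database in a public cloud
QUERIES_PER_PAGE = 25   # assumed number of queries behind one page view

def db_wait_per_page(rtt_ms: float) -> float:
    return QUERIES_PER_PAGE * rtt_ms

print(db_wait_per_page(LAN_RTT_MS))  # 12.5 ms -- barely noticeable
print(db_wait_per_page(WAN_RTT_MS))  # 1000.0 ms -- a full second per page
```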

Therefore, although many organizations may be interested in cloud bursting, few implement the process in a truly dynamic way. In practice, many cloud bursting projects focus on permanently moving an entire application, or group of applications, into the public cloud.

Cloud Bursting and Storage

How is data storage implemented in a cloud bursting scenario?

First, storage plays an important role in enabling applications to move in and out of public clouds. Bursting an application out to a public cloud is typically based on either moving the application and its data together, or moving the data to another instance of the application that already exists there.

For example, most current applications are packaged as virtual machines, and vendors such as Velostrata (acquired by Google), Zerto and Racemi all offer the ability to migrate entire virtual machines to the cloud.

Cloud providers also have their own solutions. Some of these tools focus on moving the entire virtual machine in a one-off process. Velostrata, however, offers the ability to move only the active data and stream virtual machine updates back on-premises in a truly dynamic way.

This capability highlights one of the main issues with this type of migration: keeping applications and data in sync.

Moving virtual machines (or groups of them) across the network is expensive and time-consuming, especially when moving them back on-premises. Hyperscale cloud providers charge for data egress, which can make returning applications and data from the cloud financially impractical.
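
The arithmetic is straightforward. The sketch below uses an assumed egress rate, since actual pricing varies by provider, region and volume:

```python
# Illustrative egress-cost calculation; the $/GB rate is an assumption.

EGRESS_USD_PER_GB = 0.09      # assumed data-transfer-out rate
dataset_gb = 20 * 1024        # a hypothetical 20 TB application dataset

cost = dataset_gb * EGRESS_USD_PER_GB
print(f"Moving {dataset_gb / 1024:.0f} TB back on-premises: ${cost:,.0f}")
# -> Moving 20 TB back on-premises: $1,843
```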

Latency also needs to be considered. Applications are often unavailable while they are being moved between platforms, which can be a problem. Extended outages affect the user experience and need to be kept as short as possible.

Storage-centric cloud bursting

How is data moved to the public cloud? Simply using the public cloud as an extension of internal storage has been possible for a while. Backup vendors, as well as primary and secondary storage solution providers, offer the ability to push data to the public cloud as an archive.

From the perspective of controlling the cost of inactive data this is fine, but what about active applications? Enterprises need to consider a few things to make cloud bursting of active storage viable.

The first issue is a consistent view of the data, which means managing the metadata associated with it. For block storage, you need to track and access the latest version of any single block; for file and object storage, you need to know the latest version of a file or object.
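
A minimal sketch of that bookkeeping might look like the following. The structures and site names are hypothetical, and a real system would persist and replicate this map rather than keep it in memory:

```python
from dataclasses import dataclass

# Every endpoint must be able to answer: "what is the latest version of
# this block (or file/object), and where does that copy live?"

@dataclass
class BlockMeta:
    version: int    # monotonically increasing per block
    location: str   # which site holds the latest copy

metadata: dict[int, BlockMeta] = {}   # block number -> latest-version record

def record_write(block: int, site: str) -> None:
    prev = metadata.get(block)
    metadata[block] = BlockMeta(prev.version + 1 if prev else 1, site)

record_write(42, "on-prem")
record_write(42, "cloud")
print(metadata[42])   # BlockMeta(version=2, location='cloud')
```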

Metadata consistency is a challenge because every data update changes the metadata, whether it is information about a new file or an update to an existing one. These changes must be distributed as quickly and efficiently as possible to every endpoint that holds the data. This leads to another metadata-management problem: locking.

To ensure that two locations do not attempt to update the same content at the same time, one location takes a lock on the data and the other locations must wait.
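
The sketch below illustrates that discipline. A local threading.Lock stands in for a real distributed lock service, where even acquiring the lock would cost a WAN round trip:

```python
import threading

file_lock = threading.Lock()   # stand-in for a per-file distributed lock

def update_file(site: str, new_contents: str, store: dict) -> None:
    with file_lock:            # any other site trying to write waits here
        store["data"] = new_contents
        store["last_writer"] = site

store: dict = {}
update_file("on-prem", "v1", store)
update_file("cloud", "v2", store)
print(store)   # {'data': 'v2', 'last_writer': 'cloud'}
```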

This locking process can cause significant problems, such as unacceptable delays. The alternatives are to avoid locking altogether (by making copies read-only) or to use a "last writer wins" policy, as seen in object stores, where the last update effectively becomes the current copy of the data.

"The last writer wins" is an acceptable solution for storage platforms like object storage, but it is completely impractical for block-based storage solutions where data Consistency is determined by ensuring that each read and write is accurately reflected in chronological order.

Data Protection

The final consideration in building a distributed storage and application architecture is to understand how to recover from a failure.

What happens if an on-premises server fails? What happens if the cloud provider's service is interrupted? When data lives in multiple locations and one platform fails, it can be hard to know where the last consistent copy of the data exists. Avoiding data loss requires a good understanding of these failure scenarios.
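
One common way to reason about this is for each site to track the last write version it has durably committed, so that a recovery procedure can pick the best surviving copy. The sketch below is hypothetical; the site names and version numbers are illustrative:

```python
# Each site reports the highest write version it has durably committed.
committed = {"on-prem": 10482, "cloud-region-1": 10479}

def last_consistent_copy(sites: dict[str, int], reachable: set[str]) -> str:
    """Pick the most up-to-date copy among the sites still reachable."""
    candidates = {s: v for s, v in sites.items() if s in reachable}
    return max(candidates, key=candidates.get)

# If the on-premises site fails, recovery falls back to the cloud copy,
# and writes 10480-10482 are known to be lost:
print(last_consistent_copy(committed, {"cloud-region-1"}))  # cloud-region-1
```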

Cloud Bursting Storage Solutions

How are vendors addressing storage for cloud bursting? The major cloud providers identified this requirement early. AWS has a storage gateway product that can be deployed as a virtual machine in an on-premises data center; it exposes storage to local applications as iSCSI LUNs while archiving the data back to the AWS platform, where it can also be accessed remotely. AWS Storage Gateway now supports file and virtual tape formats as well.
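
As a small illustration of working with the service, the following sketch uses boto3, the AWS SDK for Python, to enumerate deployed gateways and their volumes. It assumes AWS credentials are configured and that at least one gateway already exists:

```python
import boto3

sgw = boto3.client("storagegateway")

for gw in sgw.list_gateways()["Gateways"]:
    # GatewayType distinguishes volume (iSCSI), file and tape gateways
    print(gw["GatewayName"], gw["GatewayType"])
    for vol in sgw.list_volumes(GatewayARN=gw["GatewayARN"])["VolumeInfos"]:
        print("  volume:", vol["VolumeARN"])
```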

A few years ago, Microsoft acquired StorSimple to provide iSCSI capabilities similar to those of the AWS storage gateway. More recently, the company acquired Avere Systems, whose vFXT technology allows on-premises file systems to be extended into public clouds.

Storage vendors including NetApp (Data Fabric), Scality (Zenko), Elastifile (CloudTier) and Cloudian (HyperFile/HyperStore) can move data on demand between on-premises and public clouds, and there are more examples of similar solutions across the industry.

What to expect

In the future, applications will be rewritten to distribute them across multiple public clouds and on-premises locations. In that case, cloud bursting will be an inherent feature of their design.

In the meantime, storage vendors are bringing us closer to a truly real-time distributed data ecosystem, although some companies still rely on proprietary solutions.