Distributed Cloud
In general, distributed in an information technology (IT) context means that something is shared among multiple systems, which may be in different locations.
Distributed cloud goes beyond edge computing: it speeds up communication for global services and enables more responsive communication for specific regions.
The distributed cloud model enables lower latency and better performance for cloud services. In a distributed cloud infrastructure, network functions and customer applications can share the same resources, which opens up a variety of business models and use cases.
The following are eight use cases for distributed cloud infrastructure:
1. Network applications / NFV:
The NFV evolution has made it possible to distribute virtual network functions (VNFs) in a more flexible way.
The infrastructure for NFV is an important starting point for the distributed cloud evolution. Today, an operator will often have several sites with independent installations of virtualization environments. A step forward is to handle resources and placement in a coordinated way, which opens up the possibility of formulating policies and constraints on VNF placement.
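As a rough sketch of what such placement logic could look like, the Python snippet below filters candidate sites against per-VNF policies and constraints and picks the best remaining one. The site names, attributes, and policy rules are all invented for illustration, not any particular orchestrator's API:

```python
# Minimal sketch of policy-constrained VNF placement (hypothetical data).
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    region: str
    free_vcpus: int
    latency_ms: float  # latency toward the subscriber edge

@dataclass
class VNF:
    name: str
    vcpus: int
    max_latency_ms: float  # constraint: worst acceptable latency
    allowed_regions: set   # policy: where this VNF may run

def place(vnf, sites):
    """Pick the lowest-latency site satisfying all policies and constraints."""
    candidates = [
        s for s in sites
        if s.region in vnf.allowed_regions
        and s.free_vcpus >= vnf.vcpus
        and s.latency_ms <= vnf.max_latency_ms
    ]
    if not candidates:
        raise RuntimeError(f"no feasible site for {vnf.name}")
    return min(candidates, key=lambda s: s.latency_ms)

sites = [Site("hub-1", "eu", 64, 2.0), Site("regional-1", "eu", 16, 8.0)]
upf = VNF("upf", vcpus=8, max_latency_ms=5.0, allowed_regions={"eu"})
print(place(upf, sites).name)  # -> hub-1
```

With coordinated resource handling across sites, the same constraint check can run over the operator's whole footprint instead of one site at a time.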
2. Content Delivery Networks:
To achieve a good consumer experience for video and other content-based services, the delivery infrastructure must become increasingly decentralized.
Increasingly, content delivery solutions run as applications on generic computing and storage platforms. This means that these platforms must support distribution across regional and hub sites as well as across multiple service providers. The benefits of a decentralized architecture are better response times for consumers, as well as efficiency in transport and peering costs.
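A minimal sketch of the routing decision a decentralized delivery platform makes for each request, assuming per-site round-trip times toward the consumer have already been measured (the site names and latency figures are made up):

```python
# Sketch: route a content request to the nearest cache site (made-up numbers).
edge_sites = {
    "stockholm-hub": 4.0,   # measured RTT in ms from this consumer
    "regional-pop": 11.0,
    "central-dc":   38.0,
}

def nearest_site(rtts):
    """Return the site with the lowest round-trip time."""
    return min(rtts, key=rtts.get)

print(nearest_site(edge_sites))  # -> stockholm-hub
```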
3. Data storage with regulatory compliance:
Enterprises are increasingly using cloud service providers for scalable storage of various data sets, and several studies have shown that security and regulatory constraints are major concerns.
An example is data sets that include personal information: several countries have regulations requiring that at least one copy of the data be kept within the country's borders. A decentralized architecture enables compliance with such regulations and ensures control of cost and policy with regard to the cloud service providers.
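One way to picture such a residency rule is as a filter over candidate storage sites. The sketch below is purely illustrative; the site list and the replica policy are assumptions, not any particular provider's API:

```python
# Sketch: enforce a data-residency policy when choosing storage sites.
sites = [
    {"name": "de-1", "country": "DE"},
    {"name": "us-1", "country": "US"},
    {"name": "se-1", "country": "SE"},
]

def storage_sites(record_country, sites, replicas=2):
    """At least one replica must stay within the record's country."""
    local = [s for s in sites if s["country"] == record_country]
    if not local:
        raise RuntimeError(f"no site inside {record_country}; policy violated")
    remote = [s for s in sites if s["country"] != record_country]
    return (local[:1] + remote)[:replicas]

print([s["name"] for s in storage_sites("DE", sites)])  # -> ['de-1', 'us-1']
```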
4. Hybrid enterprise cloud:
Enterprises want to use cloud service providers for elasticity and scalability, but they also want to control where applications are executed. A cloud platform can be deployed across on-premises and cloud resources so that applications and data are placed according to policy, performance constraints, and intents.
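A minimal sketch of such a placement policy, assuming a simple capacity threshold and a "sensitive workloads stay on-premises" rule; all names and numbers are hypothetical:

```python
# Sketch: place workloads on-prem first, burst to a cloud provider when full.
ON_PREM_CAPACITY = 100  # vCPUs available on-premises (invented figure)

def schedule(workloads):
    placement, used = {}, 0
    for name, vcpus, sensitive in workloads:
        # Policy: sensitive workloads must stay on-prem; others may burst.
        if sensitive or used + vcpus <= ON_PREM_CAPACITY:
            placement[name] = "on-prem"
            used += vcpus
        else:
            placement[name] = "cloud"
    return placement

jobs = [("db", 40, True), ("batch-1", 50, False), ("batch-2", 30, False)]
print(schedule(jobs))
# -> {'db': 'on-prem', 'batch-1': 'on-prem', 'batch-2': 'cloud'}
```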
5. IoT data stream processing:
Applications that collect and process IoT data are often composed of several components. In a pipeline, for example, the components include: data collection, data throttling, data pruning, anomaly detection, machine learning, and storage.
There is an opportunity to improve scalability and performance by placing these components at an optimal location in the network topology, which will lead to better response times for machines and users, as well as efficiency in transport and peering costs.
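As an illustration, the sketch below assigns the pipeline stages listed above to topology tiers based on how much data each stage still emits, so that high-volume stages sit close to the data source. The volume figures and the cutoff are invented:

```python
# Sketch: assign IoT pipeline stages to topology tiers by emitted data volume.
PIPELINE = [
    # (stage, share of the raw data volume it emits downstream)
    ("collection",        1.00),
    ("throttling",        0.50),
    ("pruning",           0.20),
    ("anomaly-detection", 0.05),
    ("machine-learning",  0.05),
    ("storage",           0.05),
]

def assign_tiers(pipeline, edge_cutoff=0.2):
    """Stages that still emit a large share of the raw volume run at the edge;
    the rest can move to regional or central sites."""
    return {
        stage: ("edge" if volume >= edge_cutoff else "regional/central")
        for stage, volume in pipeline
    }

for stage, tier in assign_tiers(PIPELINE).items():
    print(f"{stage:18} -> {tier}")
```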
6. Video processing:
Video is a good example of a data-intensive industrial application, for example, monitoring or surveillance in factories. Here, too, the processing is often composed as a pipeline: streaming source, computer vision (image feature detection), transport of metadata, anomaly detection, machine learning, and storage. A complete video surveillance application, for instance, can consist of components for computer vision, anomaly detection, and storage, each placed at a different site to optimize resource usage.
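The transport saving from placing the computer-vision stage at the edge can be sketched with back-of-the-envelope arithmetic; the frame and metadata sizes below are invented for illustration:

```python
# Sketch: an edge stage reduces raw frames to small feature records, so only
# metadata crosses the network toward the central site.
FRAME_BYTES = 2_000_000  # assumed raw frame size
META_BYTES  = 200        # assumed detected-feature record size

def edge_vision(n_frames):
    """Runs at the edge: turn each raw frame into a small feature record."""
    records = [{"frame": i, "objects": []} for i in range(n_frames)]
    return records, n_frames * META_BYTES

n = 1_000
records, sent = edge_vision(n)
print(f"forwarded {sent:,} bytes instead of {n * FRAME_BYTES:,} "
      f"(~{FRAME_BYTES // META_BYTES}x less transport to the central site)")
```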
7. Machine learning:
In today’s machine learning-oriented applications (context-aware advertising, for example), the machine learning models are often decomposed into layers or pieces, where common parts can be centralized, and personal/contextual parts can be placed closer to where they are used. The data behind the machine learning model can also be distributed across several geographic sites.
The benefits of a decentralized architecture here are better response times for data processing, which can translate into the ability to process more data within certain time limits, as well as efficiency in transport and peering costs and in regulatory compliance.
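A toy sketch of such a decomposition, with a shared base transform served centrally and a tiny per-user head served near the user; the transform and the weights are made up and stand in for real model layers:

```python
# Sketch: split a model into a shared central part and a contextual edge part.
def base_features(x):
    """Shared part, trained and served centrally: a fixed feature transform."""
    return [x[0] + x[1], x[0] - x[1]]

def personal_head(features, user_weights):
    """Contextual part, served at the site closest to the user."""
    return sum(f * w for f, w in zip(features, user_weights))

feats = base_features([0.7, 0.3])         # computed once, centrally
score = personal_head(feats, [0.9, 0.1])  # per-user weights live at the edge
print(round(score, 3))  # -> 0.94
```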
8. VR/AR:
Virtual reality (VR) and augmented reality (AR) are examples of applications that are both latency sensitive and bandwidth demanding.
In addition to consumer-oriented applications such as gaming, there are many professional and industrial use cases, for example, remote monitoring and inspection of equipment. Remote cameras can generate multiple wide-angle video streams. The necessary processing includes both stitching these video streams into a unified view, which is fairly compute intensive, and rendering the resulting images for the end user.
Depending on where in the topology the cameras (the data sources) and the end user (for example, an equipment specialist) are located, these transport- and processing-heavy application components should be distributed in an optimal way.
The benefits of a decentralized architecture are better response times toward end users, as well as efficiency in transport and decreased peering costs.
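A back-of-the-envelope comparison of placing the stitching stage at an edge site versus a central data center; all latency figures below are invented for illustration:

```python
# Sketch: end-to-end latency for two placements of the stitching stage.
def end_to_end_ms(camera_to_compute, compute_ms, compute_to_user):
    """One-way transport in, processing time, one-way transport out."""
    return camera_to_compute + compute_ms + compute_to_user

edge    = end_to_end_ms(camera_to_compute=2.0,  compute_ms=15.0, compute_to_user=3.0)
central = end_to_end_ms(camera_to_compute=25.0, compute_ms=15.0, compute_to_user=28.0)
print(f"edge placement:    {edge:.0f} ms")     # -> 20 ms
print(f"central placement: {central:.0f} ms")  # -> 68 ms
```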