Edge Computing Architecture

A Practical Guide

What Is Edge Computing Architecture?

Edge computing is an IT architecture that processes client data at the edge of the network, as close as possible to the data source. It moves some computing and storage resources from the centralized data center to peripheral processing points in a distributed system.

Instead of transmitting unprocessed data directly to the data center for analysis, the system performs the processing and analysis tasks close to the data’s point of origin. This point could be a store, factory, branch office, or smart city utility. 

Edge computing thus reduces traffic back to the central data center, sending only the final results of computing workloads performed at the edge. The main data center receives actionable data, such as business insights, maintenance information, and other real-time responses, which human teams can review.

In this article:

  • Key Elements of an Edge Computing Architecture
  • Edge IoT Architecture
  • How to Build an Edge Computing Architecture
  • Edge AI with Run:ai

Key Elements of an Edge Computing Architecture

The key idea behind edge computing architectures is that they move infrastructure components from a central location in an enterprise data center to multiple edge locations. This includes computing resources, storage, applications, and sensors. All these are deployed at the edge with network connectivity back to a central data center or cloud.

Edge devices and sensors are where information is collected, processed, or both. These devices provide sufficient bandwidth, memory, processing power, and performance to collect, process, and act on data in real time without assistance from the rest of the network.

A scaled-down on-premises edge server or data center can be easily deployed in a relatively small remote location. This creates flexible topology options to accommodate different environmental requirements and smaller footprints.

Edge IoT Architecture

The widespread adoption of Internet of Things (IoT) deployments and the proliferation of IoT devices have made edge computing increasingly important in recent years. Distributed, complex IoT networks require the right architecture to function properly. 

For the IoT, edge computing involves processing data near the IoT devices rather than sending it directly to a cloud-based or on-premises data center.

Edge IoT architectures encompass the entire network, including the endpoint hardware (e.g., appliances, sensors, actuators, and other devices) and an IoT gateway. This gateway provides a hub for the network’s communications and performs critical network functions like collecting sensor data, translating sensor protocols, and processing IoT data before forwarding it to the main cloud-based or on-premises network.
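The sketch below illustrates the gateway’s protocol-translation role in Python, using the paho-mqtt and requests libraries: sensor messages arrive over MQTT and get forwarded upstream over HTTP. The broker address, topic layout, and central endpoint are illustrative assumptions, not a specific product’s API.

```python
import json

import paho.mqtt.client as mqtt
import requests

CENTRAL_API = "https://central.example.com/ingest"  # hypothetical endpoint

def on_message(client, userdata, msg):
    """Translate an MQTT sensor message into an HTTP call upstream."""
    reading = json.loads(msg.payload)
    reading["topic"] = msg.topic
    requests.post(CENTRAL_API, json=reading, timeout=5)

client = mqtt.Client()             # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)  # broker running on the gateway itself
client.subscribe("sensors/#")      # hypothetical topic layout
client.loop_forever()
```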

Example of an edge IoT deployment

Take, for example, a branch office connected to an IoT network. There could be thousands of devices monitoring noise, light, temperature, and air quality, in addition to various security sensors. There may be actuators responding to the changes these sensors detect, such as turning the lights on or off, locking security doors, or adjusting the air conditioning based on weather data. These sensors and actuators might use different communication protocols (e.g., Bluetooth, Wi-Fi, MQTT, serial ports) when connecting to the IoT network, and each component might have a different security or management model.
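As a rough illustration of that actuator logic, the following Python sketch reacts to local sensor readings by driving actuators directly at the edge. The device names and thresholds are hypothetical, and the Actuator class stands in for a real device driver.

```python
class Actuator:
    """Stand-in for a real device driver (hypothetical)."""
    def __init__(self, name: str):
        self.name = name

    def command(self, action: str) -> None:
        print(f"{self.name}: {action}")

actuators = {name: Actuator(name) for name in ("lights", "hvac", "door")}

def on_reading(sensor: str, value: float) -> None:
    # Thresholds are illustrative, not recommendations.
    if sensor == "light_level" and value < 10.0:
        actuators["lights"].command("on")
    elif sensor == "temperature" and value > 26.0:
        actuators["hvac"].command("cool")
    elif sensor == "tamper" and value > 0:
        actuators["door"].command("lock")

on_reading("temperature", 27.5)  # -> hvac: cool
```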

Requirements for well-designed edge IoT

A well-designed edge IoT architecture should handle large traffic volumes from hundreds or thousands of devices and perform some data processing. Some edge devices, such as infrequently used low-power controllers, lack the computing power to analyze data; these rely on the central data center to process their data.

Certain functions, such as locking a security door, are too critical to wait for data to make the full trip from the endpoint device to the main data center. Some devices generate too much data to send directly to the data center and must process the data before forwarding it. 
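One way to express this split is to handle latency-critical events locally and queue everything else for bulk forwarding. The following minimal Python sketch shows the idea; the event names are made up for the example.

```python
CRITICAL = {"door_breach", "smoke_detected"}  # hypothetical event names
outbox: list[dict] = []  # non-critical telemetry queued for the data center

def act_locally(event: str, data: dict) -> None:
    print(f"edge action for {event}: {data}")  # e.g., lock the door now

def handle(event: str, data: dict) -> None:
    if event in CRITICAL:
        act_locally(event, data)  # no network round trip on the critical path
    else:
        outbox.append({"event": event, **data})  # batched and forwarded later

handle("door_breach", {"door": "B2"})   # acted on at the edge immediately
handle("temperature", {"value": 22.8})  # queued for bulk upload
```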

Benefits of edge IoT

Implementing an edge IoT network helps reduce system latency and increase application performance. It also helps minimize data traffic bottlenecks at the integration points between IoT endpoints and other network components.

How to Build an Edge Computing Architecture

There are two main things to consider when building an edge computing architecture: the infrastructure and the data processing methods. 

When implemented correctly, an edge computing architecture minimizes latency and improves resilience to internet outages and congestion. Processing data at the edge makes the overall workload faster and more reliable.

Infrastructure

When edge computing first emerged, architects had to build the infrastructure from scratch. They had to build an extended non-cloud infrastructure and find an appropriate hosting model (e.g., on-premises, private cloud, containerized). The coexistence of this custom architecture with the public cloud presented security and other technical challenges.

If the architects built a data center at one edge location, they would need to find a way to connect it to the other edge locations, usually via a cloud infrastructure. They had to find a place to store the data. Another challenge was ensuring the consistency, redundancy, and high availability of architectural components in diverse locations. 

However, the complexity of building an edge computing network is decreasing. Today, the major cloud vendors and communications service providers (CSPs) offer edge computing services, such as AWS’s comprehensive edge computing service suite. These services support various use cases, extending the public cloud infrastructure to the network edge and allowing clients to set up local data centers in various cities, on-premises, or within a 5G network.

The services offered by cloud providers introduce opportunities for edge computing implementations, making the infrastructure building process simpler and more flexible. Organizations can leverage these services to launch edge infrastructure quickly via on-demand offerings and standardized environments.

Edge Data Processing

Not all databases are equal when it comes to edge computing. Before installing a database, organizations must identify the features and capabilities they require and choose an appropriate solution. 

Enabling data processing at each ecosystem layer is important in a distributed edge computing architecture spanning from the cloud to edge devices. Every layer must have real-time access to data and connectivity to other layers, while also being able to run independently when the connection is lost.

An appropriate database should natively distribute the workloads and storage across various parts of the edge computing architecture. It should support instant replication and synchronization across multiple database instances regardless of location. The database should also be embeddable, allowing the direct integration of data storage into edge devices to facilitate offline processing. 

Ideally, embedded databases can operate without a central control point, automatically synchronizing with the rest of the data ecosystem whenever there is connectivity. By embedding databases at the edge, it is possible to ensure resilience in the face of outages and bottlenecks affecting the main network.
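As a minimal illustration of an embedded database at the edge, the Python sketch below uses SQLite (file-backed, no server process) to record readings locally, so the device keeps functioning even with no connection to the central network. The table layout and names are assumptions for this example.

```python
import sqlite3
import time

db = sqlite3.connect("edge.db")  # embedded, file-backed, no server process
db.execute("""CREATE TABLE IF NOT EXISTS readings
              (ts REAL, sensor TEXT, value REAL, synced INTEGER DEFAULT 0)""")

def record(sensor: str, value: float) -> None:
    """Store a reading locally; works with no central connection at all."""
    db.execute("INSERT INTO readings (ts, sensor, value) VALUES (?, ?, ?)",
               (time.time(), sensor, value))
    db.commit()

record("temperature", 22.4)  # rows accumulate offline and sync later
```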

Synchronization should be a bi-directional, controllable function to secure and optimize data flow across the edge architecture. For example, a smart factory network may capture high-velocity data at the assembly line, processing and analyzing it at the edge. However, it will only sync aggregated, processed data to the cloud for long-term storage, reducing the burden on the network’s bandwidth.
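Extending the embedded-database sketch above, this hedged example mirrors the smart factory pattern: it syncs only one aggregate row per sensor to a hypothetical cloud endpoint, keeping the high-velocity raw data at the edge.

```python
import requests

def sync_aggregates(db, url: str = "https://cloud.example.com/ingest") -> None:
    """Push one aggregate per sensor upstream, then mark raw rows synced."""
    rows = db.execute("""SELECT sensor, COUNT(*), AVG(value), MAX(value)
                         FROM readings WHERE synced = 0
                         GROUP BY sensor""").fetchall()
    for sensor, count, avg, peak in rows:
        requests.post(url, json={"sensor": sensor, "count": count,
                                 "avg": avg, "max": peak}, timeout=5)
    db.execute("UPDATE readings SET synced = 1 WHERE synced = 0")
    db.commit()
```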

Edge AI with Run:ai

Kubernetes, the platform on which the Run:ai scheduler is based, has a lightweight distribution called K3s, designed for resource-constrained computing environments like edge AI. Run:ai automates and optimizes resource management and workload orchestration for machine learning infrastructure, so you can run more workloads on your resource-constrained servers.

Here are some of the capabilities you gain when using Run:ai:

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can run workloads on fractions of GPU and manage prioritizations more efficiently.
  • A higher level of control—Run:ai enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Run:ai simplifies machine learning infrastructure pipelines, helping data scientists improve both their productivity and the quality of their deep learning models.

Learn more about the Run:ai GPU virtualization platform.