Edge Computing in IoT

Architecture and 6 Key Capabilities

How Does Edge Computing Affect the IoT?

Edge computing is a game changer for the IoT. It allows IoT devices to be more independent, storing, processing, and analyzing data locally instead of just sending it to a centralized server. This can improve the effectiveness of existing IoT devices, and make new devices and deployment topologies possible. 

The Internet of Things (IoT) refers to the process of connecting physical things to the Internet. The IoT consists of physical devices or hardware systems that receive and transmit data over a network without human intervention. A few examples are sensors, autonomous vehicles, smart homes, smart watches, and industrial IoT devices.

A typical IoT system works by continuously sending, receiving, and analyzing data in a feedback loop. Analytics can be performed in near real-time or over long periods of time, and is often aided by artificial intelligence and machine learning (AI/ML) algorithms to help derive insights from massive data volumes.

Edge computing moves computing, storage, and networking functions to locations at or near users and data sources. By moving computing services closer to these locations, users benefit from faster, more reliable services and a better experience, and organizations gain the ability to deploy new types of latency-sensitive applications.

Edge computing, when combined with the IoT, makes it possible for organizations to flexibly deploy workloads on IoT hardware, improving performance and enabling new use cases, such as low-latency, high-throughput applications, that were not possible with traditional IoT architectures.

How the IoT Benefits from Edge Computing

Internet of Things applications often work as monitoring systems that collect and analyze data to trigger informed actions. IoT apps might process data daily, hourly, or in response to external triggers. Edge computing benefits the IoT by moving computing processes closer to the device, reducing network traffic and latency to enable real-time insights.

IoT devices often send small data packets back to a central management platform for analysis. This system works well for some applications, but the expected growth of the IoT means that future networks risk being overburdened with devices. Edge computing optimizes bandwidth by processing data locally and sending only the data needed for long-term storage to the central platform, rather than all of it.

Managing security is another major challenge for organizations with large numbers of IoT devices. Attackers could exploit the large volume of connected devices to execute DDoS attacks. Edge computing does not automatically provide more security than private clouds, but the localized approach makes it easier to manage security. For example, it is useful for data sovereignty and compliance with local data protection regulations.

3 Edge Computing Architectures

Here are three common options for edge computing architecture:

  • Pure edge—deploying all compute resources on-premises. This is suitable for organizations with security or compliance requirements that do not allow sending data to the cloud. This requires a larger initial investment.
  • Thick edge + cloud—deploying an on-prem data center, cloud-based resources, and edge computing devices. This lets an organization leverage existing investments in on-premise data centers, but use the cloud for aggregating, analyzing, and storing some of the data.
  • Thin edge + cloud—this approach connects edge resources directly to the public cloud, with no on-premise data center. This is the most lightweight and flexible approach, which also has the lowest upfront costs. But it provides less control over the operating environment and might raise security issues.

Related content: Read our guide to edge computing architecture (coming soon)

The Evolution of IoT Edge Computing: 6 Key Capabilities

IoT edge computing systems have made tremendous progress over the past few years. Here are the most common features of edge computing and how they have evolved.

Consolidated Workloads

  • Traditional edge devices ran a real-time operating system (RTOS), with proprietary software on top of the RTOS. 
  • Modern IoT devices use a hypervisor that can run multiple operating systems. This makes it possible to flexibly run different workloads on the same IoT device, consolidate workloads on IoT devices, and reduce the physical footprint of each device.

Pre-Processing and Data Filtering

  • Traditionally, in an edge computing system a remote server would “poll” edge devices continuously for data, whether there was a recent change or not. 
  • Modern IoT edge computing pre-processes data at the edge and sends only relevant data to the cloud, as in the sketch below. This reduces network bandwidth requirements, improves performance, and reduces the need for massive cloud storage to hold IoT logs.
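
To make the idea concrete, here is a minimal Python sketch of "report by exception" filtering at the edge. The read_sensor() and send_to_cloud() stubs, the polling interval, and the 0.5-unit deadband are all illustrative assumptions, not a specific product API:

```python
# Edge-side pre-processing sketch: sample locally, forward to the cloud
# only when a reading changes meaningfully ("report by exception").

import random
import time

DEADBAND = 0.5  # minimum change (in sensor units) worth reporting; illustrative

def read_sensor() -> float:
    # Stand-in for a real driver call (e.g., an I2C or Modbus read).
    return 20.0 + random.uniform(-1.0, 1.0)

def send_to_cloud(value: float) -> None:
    # Stand-in for the uplink (MQTT publish, HTTPS POST, etc.).
    print(f"uplink: {value:.2f}")

def run(poll_interval_s: float = 1.0) -> None:
    last_sent = None
    while True:
        value = read_sensor()
        # Filter at the edge: only forward significant changes.
        if last_sent is None or abs(value - last_sent) >= DEADBAND:
            send_to_cloud(value)
            last_sent = value
        time.sleep(poll_interval_s)

if __name__ == "__main__":
    run()
```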

Scalable Management

  • Traditional edge devices used outdated serial communication protocols. 
  • Modern edge devices can be connected to a local area network (LAN) or wide area network (WAN), integrating IoT devices into the network ecosystem and enabling central management. This has given rise to edge management platforms that can help manage fleets of edge devices.

Open Architecture

  • Traditional edge devices used closed, proprietary architectures. This resulted in vendor lock-in, high integration costs, and high complexity of switching or updating equipment. 
  • Modern edge computing relies on an open architecture with standard protocols such as OPC UA and MQTT, and payload specifications like Sparkplug, which enable open data exchange (see the sketch below). This promotes interoperability, easy integration, and agility of edge systems.
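
As a rough illustration, the following Python sketch publishes a reading on a Sparkplug-style MQTT topic using the paho-mqtt library. The broker address, group and node IDs, and JSON payload are assumptions chosen for readability; real Sparkplug B payloads are protobuf-encoded:

```python
# One-shot MQTT publish on a Sparkplug-style topic, using paho-mqtt's
# convenience helper. Broker location is assumed to be a local instance.

import json
import time

import paho.mqtt.publish as publish

# Sparkplug-style topic namespace: spBv1.0/<group_id>/<message_type>/<edge_node_id>
TOPIC = "spBv1.0/plant1/DDATA/edge-node-01"

payload = json.dumps({
    "timestamp": int(time.time() * 1000),
    "metrics": [{"name": "temperature", "value": 21.7}],
})

# JSON is used here only for readability; Sparkplug B specifies protobuf.
publish.single(TOPIC, payload, hostname="localhost", port=1883)
```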

Edge Analytics 

  • Traditional edge devices were typically limited to performing a single task, such as ingesting data or reporting a specific metric. They had limited computing capacity by design.
  • Modern IoT edge systems have much more powerful processing capabilities, which go beyond “dumb” data collection to enable data analysis at the edge, as sketched below. This enables new use cases that require low latency and high data throughput.
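
For example, an edge node can flag anomalies locally instead of shipping every raw sample to the cloud. Here is a minimal sketch, where the rolling window size and the 3-sigma rule are illustrative choices:

```python
# Edge analytics sketch: keep a rolling window of recent readings and
# flag outliers on the device itself.

from collections import deque
from statistics import mean, stdev

class EdgeAnalyzer:
    def __init__(self, window: int = 60):
        self.samples = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > 3 * sigma
        self.samples.append(value)
        return anomalous

analyzer = EdgeAnalyzer()
for reading in [21.0, 21.2, 20.9, 21.1, 21.0, 21.2, 21.1, 20.8, 21.0, 21.1, 35.0]:
    if analyzer.observe(reading):
        print(f"anomaly detected at the edge: {reading}")
```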

Distributed Apps

  • Traditional IoT devices typically ran one proprietary application that performed a designated function.
  • Modern IoT edge computing systems de-couple applications from IoT hardware. This makes it possible to move and scale applications vertically (from edge resources to the cloud) and horizontally (from one edge resource to another).

Role of Machine Learning in IoT Edge Computing

Machine learning (ML) plays a key role in IoT edge runtimes and IoT applications, and many DevOps teams are incorporating machine learning into their application designs. Machine learning allows organizations to analyze and make predictions based on the data stored and processed by IoT devices.

ML application programming interfaces (APIs) can analyze data from IoT devices to identify data patterns, user behavior, trends, and more. By carrying out this analysis at the edge, an organization can reduce processing time and continuously update the analysis based on real-time data from IoT devices.
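As a rough sketch of the pattern, the device below re-fits a small model on its own recent data and predicts locally, with no cloud round trip. scikit-learn and the synthetic hourly temperature series are illustrative assumptions; production edge ML often uses compiled runtimes such as TensorFlow Lite:

```python
# Edge-side ML sketch: fit a small model on on-device history and
# make local predictions.

import numpy as np
from sklearn.linear_model import LinearRegression

# Recent on-device history: (hour, temperature) pairs; synthetic for the demo.
hours = np.arange(24).reshape(-1, 1)
temps = 18.0 + 0.3 * hours.ravel() + np.random.normal(0, 0.2, 24)

model = LinearRegression().fit(hours, temps)

# Predict the next few hours locally and act on the result at the edge.
future = np.arange(24, 27).reshape(-1, 1)
for hour, pred in zip(future.ravel(), model.predict(future)):
    print(f"hour {hour}: predicted {pred:.1f} degC")
```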

Role of an IoT Gateway

IoT gateways support device-to-device and device-to-cloud communication. Their key features include data filtering and analysis. They can also be programmed to handle authentication of data that needs to be sent to a cloud service. This can improve security for IoT data transfers.

When an edge agent needs to communicate with another device or the cloud, the IoT gateway processes the request, clears it (for example, by authenticating the sending device), and forwards the information to its destination, as in the sketch below. Organizations can analyze the transmitted data and use the results to monitor how the IoT network operates and improve system efficiency.
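
Here is a minimal Python sketch of that forwarding path, in which the gateway authenticates a device before relaying its message. The credential table, cloud endpoint URL, and message shape are illustrative assumptions:

```python
# IoT gateway sketch: authenticate the sender, then relay the message
# to a cloud ingestion endpoint.

import requests

CLOUD_URL = "https://example.com/ingest"            # assumed cloud endpoint
AUTHORIZED_DEVICES = {"sensor-01": "s3cr3t-token"}  # assumed credential store

def forward(device_id: str, token: str, message: dict) -> bool:
    # Authenticate the device before anything leaves the local network.
    if AUTHORIZED_DEVICES.get(device_id) != token:
        return False
    # Relay the cleared message to the cloud service.
    resp = requests.post(CLOUD_URL, json={"device": device_id, **message}, timeout=5)
    return resp.ok

if __name__ == "__main__":
    ok = forward("sensor-01", "s3cr3t-token", {"temperature": 21.7})
    print("forwarded" if ok else "rejected")
```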

Edge AI with Run:AI

Kubernetes, the platform on which the Run:AI scheduler is based, has a lightweight version called K3s, designed for resource-constrained computing environments like Edge AI. Run:AI automates and optimizes resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can run more workloads on your resource-constrained servers. 

Here are some of the capabilities you gain when using Run:AI: 

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can run workloads on fractions of a GPU and manage prioritization more efficiently.
  • A higher level of control—Run:AI enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and improve the quality of their deep learning models. 

Learn more about the Run.ai GPU virtualization platform.