Edge computing is the practice of processing client-side data close to the data source, rather than moving it to a central server or cloud location. It brings computing resources, data storage, and enterprise applications closer to where people and devices actually consume applications and data.
Computing tasks require an effective architecture, and an architecture that works well for one type of computing task may not suit another. Edge computing has emerged as an important distributed computing architecture, placing compute and storage resources very close to data sources (ideally in the same physical location).
The distributed computing model is not new; remote offices, branch offices, data center hosting, and cloud computing have been around for a long time. However, decentralization can be difficult, requiring a high level of supervision and control that is often overlooked when moving away from traditional centralized computing models.
Edge computing is growing in importance because it provides effective solutions to the network challenges associated with the vast amounts of data produced and consumed by today's mobile organizations. Edge computing can save not only costs but also time: applications increasingly rely on time-sensitive processing and response, which is much easier to achieve in edge computing architectures.
Learn more in our detailed guide to edge computing technology (coming soon)
Autonomous vehicles must analyze data in real time to operate reliably and safely. However, real-time analysis in the cloud requires moving the massive volumes of data (estimated in terabytes) generated by the vehicle, often resulting in latency or loss of connectivity.
While 5G technology can handle more capacity than 4G, it cannot transmit terabytes of data to the cloud for processing at the speed needed to ensure safe autonomous driving. Onboard computing power and edge data centers can support mission-critical processing for vehicle-to-vehicle communications, integration with smart cities, and navigation.
An example of edge computing in autonomous vehicles is Tesla's Autopilot system. The system uses cameras, ultrasonic sensors, and radars to gather data and make decisions about how the vehicle should navigate the road. The data is processed by onboard computers in the vehicle, rather than being sent to a centralized data center. This allows the vehicle to respond in real-time to its surroundings, improving the accuracy and speed of its decision-making.
Healthcare providers process and store massive amounts of data from many sources, such as medical devices located in doctors' offices, hospitals, and consumer wearables. Traditionally, healthcare providers move all this data to centralized servers for storage and analysis, leading to bandwidth congestion and high storage costs.
Edge devices can ingest and analyze data locally to identify which data can be discarded, which should be retained, and which requires immediate action. Edge computing also supports medical care delivery use cases such as robot-assisted surgery.
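The discard/retain/act triage described above can be sketched in a few lines. This is a minimal illustration, not a real medical device API: the field names and thresholds are made-up assumptions.

```python
# Hypothetical sketch: triaging sensor readings on an edge device.
# Field names and thresholds are illustrative assumptions, not taken
# from any specific medical device or vendor API.

def triage_reading(reading: dict) -> str:
    """Decide locally whether a reading is discarded, retained, or urgent."""
    hr = reading["heart_rate_bpm"]
    if hr < 40 or hr > 140:
        return "act"        # out of safe range: alert immediately
    if abs(hr - reading.get("baseline_bpm", hr)) > 15:
        return "retain"     # notable deviation: upload for later analysis
    return "discard"        # routine reading: never leaves the device

readings = [
    {"heart_rate_bpm": 72, "baseline_bpm": 70},
    {"heart_rate_bpm": 95, "baseline_bpm": 70},
    {"heart_rate_bpm": 150, "baseline_bpm": 70},
]
decisions = [triage_reading(r) for r in readings]  # ["discard", "retain", "act"]
```

Only the "retain" and "act" readings would ever consume upstream bandwidth; the routine majority is handled and dropped locally.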
An example of an edge computing product for healthcare is the GE Healthcare Clinical Decision Support System (CDSS). This system is a portable, edge-based solution that provides real-time clinical decision support to healthcare providers at the point of care. The CDSS integrates with various medical devices and electronic health records, and uses algorithms and machine learning to provide actionable insights and recommendations to healthcare providers.
Manufacturing facilities can employ millions of connected devices that gather and generate data on equipment performance, finished products, and production lines. Industrial IoT data is typically handled in centralized servers; this requires moving large volumes of data to servers on-premises or in the cloud, leading to high costs.
Here are several ways edge computing helps support manufacturing and industrial processes:
An example is the Rockwell Automation Edge CompactLogix 5370 L3 Controller. This device is a programmable automation controller that provides real-time control and data management for manufacturing processes. It performs functions such as motion control, data acquisition, and process control at the edge of the network, reducing the amount of data transmitted to the cloud.
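The pattern a controller like this implements, reacting locally and forwarding only exceptional events upstream, can be sketched as follows. This is an illustrative toy, not Rockwell Automation's API; the setpoint and tolerance values are assumptions.

```python
# Hypothetical sketch of edge-side process control: act on sensor
# values locally and forward only out-of-tolerance events upstream.
# Names, setpoint, and tolerance are illustrative assumptions.

def control_step(temperature_c: float, setpoint_c: float = 180.0):
    """Return a local actuator command plus an optional upstream event."""
    error = temperature_c - setpoint_c
    command = "heater_off" if error > 0 else "heater_on"
    # Only out-of-tolerance readings generate traffic to the cloud.
    event = {"alert": "overtemp", "value": temperature_c} if error > 10 else None
    return command, event

cmd, event = control_step(195.0)   # far above setpoint
# cmd == "heater_off"; event flags the overtemperature condition
```

The control decision never waits on a network round trip, which is what makes real-time process control feasible at the edge.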
Edge computing enables streaming and content delivery processes to operate with low latency, ensuring a positive user experience for existing and emerging functionalities like search functions, personalized experiences, interactive capabilities, and content suggestions. It helps deliver live events and regional and original content with a seamless user experience.
Akamai Edge Platform is a cloud service that provides a global network of servers for content delivery and acceleration. The platform uses edge computing to bring computation and storage closer to users, reducing latency, improving performance and reliability, and offloading traffic from centralized data centers.
Learn more in our detailed guide to edge computing applications (coming soon)
Edge computing helps minimize bandwidth usage and server resources. Bandwidth and cloud resources are limited and expensive. According to Statista, more than 75 billion IoT devices will be installed worldwide by 2025. Supporting all these devices requires moving a lot of computing to the edge.
One of the biggest benefits of moving processes to the edge is low latency. Each time a device needs to communicate with a remote server, there is a delay. By avoiding the need to communicate with that remote server, edge computing achieves much lower latency.
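The latency argument can be made concrete with a toy comparison: simulate the round trip to a remote server with a sleep, then handle the same request locally. The 50 ms round-trip figure is an assumption chosen for illustration.

```python
import time

# Toy illustration of the latency argument above: a simulated remote
# round trip versus local (edge) processing of the same request.
REMOTE_ROUND_TRIP_S = 0.05  # assumed network round-trip time (50 ms)

def process(data):
    return sum(data)  # stand-in for the actual computation

def handle_remotely(data):
    time.sleep(REMOTE_ROUND_TRIP_S)  # network delay to and from the server
    return process(data)

def handle_at_edge(data):
    return process(data)             # no network hop at all

data = [1, 2, 3]

t0 = time.perf_counter()
handle_remotely(data)
remote_latency = time.perf_counter() - t0

t0 = time.perf_counter()
handle_at_edge(data)
edge_latency = time.perf_counter() - t0
# edge_latency is orders of magnitude smaller than remote_latency
```

Real networks add variance on top of this fixed delay, which is why time-sensitive applications benefit so much from removing the round trip entirely.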
Edge computing can also provide new capabilities not previously available. For example, businesses can analyze data at the edge in real time, which is not feasible when every request must travel to a central server.
Downsides of edge computing include new attack vectors. The proliferation of IoT devices alongside embedded computers and smart edge equipment, such as edge servers, creates new opportunities for malicious attackers to compromise these devices.
Another disadvantage is that edge computing requires more expensive and complex local hardware. For example, an IoT camera needs only a basic onboard computer to send raw video data to a web server, but requires a far more sophisticated computer with greater processing power to run its own motion detection algorithms.
The Internet of Things (IoT) is the practice of connecting physical objects to the Internet. An IoT device is a physical device or hardware system that sends and receives data over a network without human intervention. A typical IoT system works by continuously sending, receiving, and analyzing data in a feedback loop. Analytics can be performed by humans or by artificial intelligence and machine learning (AI/ML) algorithms in near real time, or in batches over an extended period of time.
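The sense/analyze/act feedback loop described above can be sketched in a few lines. The sensor values, window size, and threshold below are illustrative assumptions.

```python
from collections import deque

# Minimal sketch of the IoT feedback loop: each incoming sample is
# analyzed against recent history (a rolling average) and turned into
# an action. Values, window, and threshold are made-up assumptions.

def feedback_loop(samples, window=3, threshold=30.0):
    """Yield an action for each sample based on a rolling average."""
    history = deque(maxlen=window)
    actions = []
    for value in samples:                    # receive: sample from sensor
        history.append(value)
        avg = sum(history) / len(history)    # analyze: near real time
        actions.append("cool" if avg > threshold else "idle")  # act
    return actions

actions = feedback_loop([25.0, 28.0, 34.0, 40.0])
# ["idle", "idle", "idle", "cool"] - action fires once the average rises
```

In a batch-oriented system the same analysis would run later over stored samples; at the edge it runs inline, sample by sample.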
Edge computing occurs at or near the physical location of users or data sources. By placing computing services closer to these locations, users can enjoy faster, more reliable services and better user experiences, and businesses can better support latency-sensitive applications to identify trends and deliver better products and services.
In the context of IoT, edge computing places computing power closer to where the physical device or data source actually resides. To allow IoT devices to respond more quickly and to mitigate issues as they arise, analytics should be performed at the edge rather than sending data back to a central site for analysis.
Edge AI is a combination of edge computing and artificial intelligence (AI): AI algorithms run on local devices with edge computing capabilities. Edge AI does not require constant connectivity to and integration with remote systems, allowing users to process data in real time on their devices.
Most AI processes today run on cloud-based hubs, because they require significant computing power. The downside is that network issues can cause service downtime or slow down AI services significantly. Edge AI addresses these challenges by making AI processing an integral part of edge computing devices. This saves time, allowing devices to aggregate data and serve users without communicating with other physical locations.
Learn more in our detailed guide to AI at the edge (coming soon)
An edge computing architecture consists of an ecosystem of distributed infrastructure components, spanning an enterprise data center or central server location and multiple edge locations. The ecosystem includes computing and storage equipment, applications, devices, sensors, and network connectivity to a central data center or cloud.
Devices and sensors are where information is collected, processed, or both. These devices have sufficient bandwidth, memory, and processing power to collect and process data in real time without help from the rest of the network. Some form of network connection allows communication between the device and a database at a central location.
A scaled-down, on-premises edge server or data center can easily be moved to and scaled for a smaller remote location. Flexibility and scalability are critical as a company's needs evolve. Flexible topology options can accommodate smaller footprints or varying environmental requirements, including intermittent network connections.
Learn more in our detailed guide to edge computing architecture
Cloud computing allows businesses to store, process, and manipulate data on remote servers accessed via the Internet. Commercial cloud computing providers offer a set of digital computing platforms and services that businesses can use to reduce or eliminate their physical IT infrastructure and associated costs. Cloud computing also enables organizations to provide secure remote work capabilities to their employees, and easily scale data and applications.
Edge computing enables data collection, processing, and analysis at the "edge", the most remote parts of an organization's network. This enables organizations to process data in near real time. There might be no need to communicate with a primary data center at all; if there is, only the most relevant data needs to be sent, which can reduce overall network latency and network costs.
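"Sending only the most relevant data" usually means aggregating raw samples into a compact summary before anything leaves the edge node. The sketch below illustrates the idea; the field names and sample values are assumptions.

```python
# Hedged sketch of edge-side aggregation: reduce a batch of raw
# readings to a small summary record before any upstream transfer.
# Field names and sample values are illustrative assumptions.

def summarize_for_upstream(samples):
    """Reduce a batch of raw readings to a handful of summary fields."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(sum(samples) / len(samples), 2),
    }

raw = [21.0, 22.5, 21.5, 35.0, 22.0]
summary = summarize_for_upstream(raw)
# Four fields cross the network instead of every individual sample.
```

The bandwidth saving scales with batch size: a day of per-second readings collapses into the same four-field record.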
Fog computing allows data to be temporarily stored and analyzed in the compute layer between the cloud and the edge, when edge data cannot be processed due to the limitations of edge device computing. The fog can send relevant data to cloud servers for long-term storage and future analysis. Fog computing allows businesses to offload cloud servers, and optimize IT efficiency, by sending only some edge device data to a central data center for processing.
It is important to note that edge computing does not depend on fog computing. Fog computing is an additional option that helps businesses achieve greater speed, performance and efficiency in certain edge computing scenarios.
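The edge/fog/cloud tiering described above amounts to routing each workload to the lowest tier capable of handling it. The sketch below illustrates this with made-up capacity limits; real systems route on richer criteria (latency budget, data sensitivity, device load).

```python
# Illustrative sketch of edge/fog/cloud tiering: each batch of
# readings is processed at the lowest tier able to handle it.
# The capacity limits below are made-up assumptions.

EDGE_LIMIT = 10      # max batch size the edge device can process
FOG_LIMIT = 1_000    # max batch size the fog layer can process

def route_batch(batch):
    """Pick the tier that should process a batch of readings."""
    if len(batch) <= EDGE_LIMIT:
        return "edge"    # handled locally, nothing leaves the device
    if len(batch) <= FOG_LIMIT:
        return "fog"     # intermediate layer between edge and cloud
    return "cloud"       # long-term storage and heavy analysis

tiers = [route_batch([0] * n) for n in (5, 200, 5_000)]
# ["edge", "fog", "cloud"]
```

Because the fog tier is optional, dropping the `FOG_LIMIT` branch leaves a plain edge-to-cloud split, matching the point that edge computing does not depend on fog computing.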
Before embarking on an edge computing project, it is important to ensure that it aligns with every stakeholder involved and with the end goal. Edge computing projects span three disciplines: information technology (IT), which manages information processing; communication technology (CT), which is responsible for the processing and transmission of information; and operational technology (OT), which manages and monitors the hardware and software on client endpoints. The challenge is to facilitate cooperation and coordination between these three disciplines. Breaking down silos is important to enable collaboration across all elements of an edge computing program.
Contrary to popular belief, edge and cloud are not competitors. Instead, the edge can be deployed to complement the cloud. Edge computing can also fuel organizations' digital transformation efforts alongside the cloud.
In most cases, implementing edge computing alone is not ideal. By implementing edge and cloud together, you can effectively scale your business operations. Combining edge computing with the cloud can yield positive results, especially in large-scale digital transformation.
Special attention should be paid to security. Businesses need to enforce security policies at the edge, just as they do across the rest of their IT landscape. Establishing corporate security practices isn't enough on its own, nor can you simply rely on patch management solutions whenever vulnerabilities are discovered.
A smart strategy helps create a secure edge environment. When considering edge computing security, you need the same level of security and service visibility that you have in a central data center. Start by adopting security best practices such as multi-factor authentication (MFA), anti-malware, endpoint protection, and end-user training.
It is important to check service level agreements (SLAs) and compliance in advance. In today's fast-paced business world, slowdowns or downtime can be detrimental to a business. All data and information collected must be protected from malicious third parties.
Therefore, it is important to consider everything from maintenance to resiliency, security, scalability and sustainability. Additionally, the edge computing environment must be robust enough to withstand technological change and simple enough to be upgraded over time.
Kubernetes, the platform on which the Run:AI scheduler is based, has a lightweight version called K3s, designed for resource-constrained computing environments like Edge AI. Run:AI automates and optimizes resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can run more workloads on your resource-constrained servers.
Here are some of the capabilities you gain when using Run:AI:
Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their deep learning models.
Learn more about the Run:ai GPU virtualization platform.