Edge Computing: Addressing the Physics of Data Delivery

In an increasingly digitized world, milliseconds matter. Almost imperceptible network delays can have negative consequences for a growing array of latency-sensitive applications, including voice, video, artificial intelligence, analytics and more. In a recent IDC survey, 90 percent of business leaders said they now depend on a variety of applications that can tolerate delays of no more than 10 milliseconds. (For context, consider that an eye blink takes about 400 milliseconds!)

To accommodate those demands, more than two-thirds of the survey respondents have begun implementing edge computing solutions. IDC further predicts that more than half of all new IT infrastructure will be deployed at the edge by next year.

Despite the growing enthusiasm for edge, the technology remains a bit of a mystery to many. At a very basic level, it is a computing model that addresses the physics of data delivery. In physics, speed is the distance an object travels per unit of time. Edge computing essentially seeks to improve the speed of data by reducing the distance it must travel.
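To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The distances and the signal speed in optical fiber (roughly 200,000 km/s, about two-thirds the speed of light in a vacuum) are illustrative assumptions, not figures from the survey:

```python
# Back-of-the-envelope propagation delay: latency grows with distance.
# Assumption: signals travel through fiber at roughly 200,000 km/s (~2/3 c).
FIBER_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1_000

# Illustrative distances: a regional cloud data center vs. a nearby edge node.
for label, km in [("cloud data center (1,500 km)", 1_500),
                  ("edge node (50 km)", 50)]:
    print(f"{label}: {round_trip_ms(km):.2f} ms round trip")

# Output:
#   cloud data center (1,500 km): 15.00 ms round trip  (over a 10 ms budget)
#   edge node (50 km): 0.50 ms round trip
```

And that is propagation delay alone; real requests also pay for routing, queuing and server processing, which only widens the gap.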

Bypassing Cloud Congestion

Edge computing is a distributed computing model in which computing resources are placed at the network’s edge in close proximity to data-collection sources such as mobile devices and IoT sensors. Variously known as cloudlets, micro data centers or fog nodes, these edge resources address some of the challenges created when organizations run increasingly data-heavy workloads in the cloud.

The cloud model transformed computing by centralizing data processing and storage inside large server farms from which organizations can access applications, services and data across Internet links. It is a proven model that creates significant operational benefits while dramatically reducing spending on on-premises infrastructure. However, increasing data volumes are making cloud computing impractical for low-latency applications.

Data generated by users and devices must travel across the network to those centralized data centers for processing, then back again to users and apps. The distance between users and cloud data centers lengthens that round-trip journey, and shuttling data back and forth to far-flung facilities creates network congestion and latency.
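Congestion compounds the problem by adding queuing delay on top of propagation delay. As an illustrative aside (not a claim from the survey), the classic M/M/1 queuing formula, average delay = 1 / (service rate − arrival rate), shows how waiting time balloons as a link nears saturation:

```python
# Toy M/M/1 queuing model: average time a packet spends in the system
# is 1 / (mu - lam), where mu is the service rate and lam the arrival rate.
def avg_delay_ms(service_rate_pps: float, arrival_rate_pps: float) -> float:
    """Average system delay in ms; valid only while arrivals < service rate."""
    assert arrival_rate_pps < service_rate_pps, "queue is unstable at full load"
    return 1_000 / (service_rate_pps - arrival_rate_pps)

SERVICE = 10_000  # packets per second the link can serve (assumed value)
for load in (0.50, 0.90, 0.99):
    print(f"{load:.0%} utilization: {avg_delay_ms(SERVICE, load * SERVICE):.2f} ms")

# Output:
#   50% utilization: 0.20 ms
#   90% utilization: 1.00 ms
#   99% utilization: 10.00 ms  (congestion alone can consume a 10 ms budget)
```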

Data Proximity

A landmark 2013 study by the University of California-San Diego and Google first demonstrated the validity of the edge concept. Researchers found that applications running on cloud computing systems ran up to 20 percent more efficiently when the data they needed was located nearby. They tested applications running in a warehouse-scale cloud server installation, then compared those results with tests on similar servers running in isolation rather than as part of a cloud.

The researchers found that apps running on the isolated servers ran significantly faster and more efficiently, largely because the proximity of data resources reduced latency. Apps requesting data from remote cloud servers had to wait longer for it to arrive.

A number of factors contribute to cloud latency, including the number of router hops or ground-to-satellite communication hops between an organization and its cloud provider’s data center. The edge computing model reduces latency by putting processing resources just a single hop away from end users. In the process, it improves the performance of many other critical applications by reducing the amount of data that must traverse the network to centralized data centers.
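A simple additive model makes the hop-count argument concrete. In this sketch, one-way latency is treated as propagation delay plus a fixed cost per router hop; the per-hop cost, hop counts and distances are assumed illustrative values, not measurements:

```python
# Additive latency model: propagation delay plus a per-hop processing/queuing cost.
FIBER_SPEED_KM_PER_S = 200_000  # assumed signal speed in fiber (~2/3 c)
PER_HOP_MS = 0.5                # assumed average router cost per hop

def one_way_latency_ms(distance_km: float, hops: int) -> float:
    """One-way latency: propagation over fiber plus per-hop overhead."""
    propagation_ms = distance_km / FIBER_SPEED_KM_PER_S * 1_000
    return propagation_ms + hops * PER_HOP_MS

# Illustrative comparison: many hops to a distant cloud vs. one hop to an edge node.
print(f"cloud (1,500 km, 12 hops): {one_way_latency_ms(1_500, 12):.2f} ms one way")
print(f"edge  (   50 km,  1 hop): {one_way_latency_ms(50, 1):.2f} ms one way")

# Output:
#   cloud (1,500 km, 12 hops): 13.50 ms one way
#   edge  (   50 km,  1 hop): 0.75 ms one way
```

Under these assumptions, the per-hop overhead contributes nearly as much delay as the distance itself, which is why collapsing the path to a single hop pays off twice.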

There are compelling use cases for edge computing across almost all industry verticals. In our next post, we’ll take a closer look at some of the important benefits and features of the model, along with some of the challenges the technology can introduce.

