Edge compute, edge of your seat

Feb. 10, 2020

Edge computing has generated buzz for a couple of years now as a way to run applications that previously required special hardware on systems located near customers. These applications often demand latency so low that microseconds matter.

While edge computing is often discussed in terms of networking and location, the workload or application that runs on the compute can also contribute to latency, said Shamik Mishra, VP, Altran. This is where hardware acceleration, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs), comes into play. Altran has been working jointly with CableLabs as part of Project Adrenaline to solve related challenges, develop open source software and build proofs of concept.

There currently aren't any set rules concerning which workloads should be accelerated on which platform, said Randy Levensalor, principal architect, infrastructure group, office of the CTO, CableLabs. In general, however, integer (whole number) operations are thought to be faster on a general-purpose CPU, floating point (decimal number) operations faster on GPUs, and bitwise operations (manipulating ones and zeros) faster on FPGAs. Decisions on where to place a workload should take into account the cost of transitioning that workload from one compute platform to another.
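
As a rough sketch of that trade-off (with entirely hypothetical numbers, not measurements), a toy cost model shows how charging a fixed copy penalty whenever consecutive pipeline stages change platforms can let a "stay put" placement beat the per-stage optimum:

```python
# Toy model of the placement trade-off: moving a task to its "best" platform
# only pays off if the speedup outweighs the cost of copying data there.
# All timings are made-up illustrative numbers, not measurements.

COMPUTE_TIME = {
    # (task type, platform): hypothetical seconds per task
    ("float_ops", "cpu"): 0.040,
    ("float_ops", "gpu"): 0.008,
    ("float_ops", "fpga"): 0.020,
    ("bitwise_ops", "cpu"): 0.030,
    ("bitwise_ops", "gpu"): 0.015,
    ("bitwise_ops", "fpga"): 0.004,
}

TRANSFER_COST = 0.012  # hypothetical cost of one cross-platform memory copy


def pipeline_time(tasks, placement):
    """Total time for a pipeline, charging TRANSFER_COST whenever
    consecutive tasks run on different platforms."""
    total = 0.0
    for i, (task, platform) in enumerate(zip(tasks, placement)):
        total += COMPUTE_TIME[(task, platform)]
        if i > 0 and placement[i - 1] != platform:
            total += TRANSFER_COST
    return total


tasks = ["float_ops", "bitwise_ops", "float_ops"]

# Chasing the per-task optimum ping-pongs between GPU and FPGA...
print(pipeline_time(tasks, ["gpu", "fpga", "gpu"]))  # 0.020 compute + 0.024 copies = 0.044
# ...while staying on the GPU is slower per stage but faster overall.
print(pipeline_time(tasks, ["gpu", "gpu", "gpu"]))   # 0.031 compute, no copies     = 0.031
```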

"There's a penalty for every memory copy, even within the same server. This means that running consecutive tasks within a pipeline on one platform can be faster than running each task on the platform that is best for that task," Levensalor wrote in a recent blog.

Challenges with hardware acceleration include the difficulty of writing applications, since there are fewer programming language options than for general-purpose CPUs. Frameworks like OpenCL make it possible for a single program to work on CPUs, GPUs, and FPGAs, but this interoperability currently comes with a performance cost, Levensalor said. "The good news is that several major accelerator hardware manufacturers are targeting the edge, releasing frameworks and pre-built libraries that will bridge this performance gap over time."
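
For a sense of what that interoperability looks like in practice, the sketch below (again assuming pyopencl) builds the same kernel source for every device the installed OpenCL drivers expose. One caveat: FPGA toolchains typically compile kernels offline, so runtime builds like this generally cover CPUs and GPUs:

```python
# The same kernel source, built at runtime for each device whose OpenCL
# driver supports online compilation. Illustrative sketch; assumes pyopencl.
import numpy as np
import pyopencl as cl

SRC = "__kernel void square(__global float *a) { int i = get_global_id(0); a[i] *= a[i]; }"

TYPE_NAMES = {
    cl.device_type.CPU: "CPU",
    cl.device_type.GPU: "GPU",
    cl.device_type.ACCELERATOR: "ACCELERATOR",  # FPGA boards usually report this type
}

host = np.arange(16, dtype=np.float32)

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        ctx = cl.Context([dev])
        queue = cl.CommandQueue(ctx)
        prg = cl.Program(ctx, SRC).build()  # same source, device-specific binary
        buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                        hostbuf=host)
        prg.square(queue, host.shape, None, buf)
        out = np.empty_like(host)
        cl.enqueue_copy(queue, out, buf)
        kind = TYPE_NAMES.get(dev.type, hex(dev.type))
        print(f"{kind:11} {dev.name.strip()}: ok={np.allclose(out, host * host)}")
```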

Additional challenges that come with accelerators like FPGAs and GPUs include managing the low-level software drivers needed to run them. "Hooks to install these drivers during the OS deployment have been added to SNAPS-Boot, including examples for installing drivers for some accelerators," Levensalor wrote.
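
SNAPS-Boot's actual hooks aside, a generic post-deployment sanity check might look like the sketch below, which probes a few well-known vendor CLIs (nvidia-smi, rocm-smi, clinfo) to report whether accelerator drivers are usable:

```python
# Generic post-deployment check for accelerator drivers. This is an
# illustrative sketch, not SNAPS-Boot's actual hook mechanism. The CLIs
# probed are real vendor tools, but which ones exist on a given edge node
# depends on what the deployment installed.
import shutil
import subprocess

PROBES = {
    "nvidia-smi": "NVIDIA GPU driver",
    "rocm-smi": "AMD ROCm GPU driver",
    "clinfo": "OpenCL runtime (CPU/GPU/FPGA)",
}

for cmd, desc in PROBES.items():
    if shutil.which(cmd) is None:
        print(f"MISSING  {desc} ({cmd} not on PATH)")
        continue
    result = subprocess.run([cmd], capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else f"ERROR rc={result.returncode}"
    print(f"{status:8} {desc}")
```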

Project Adrenaline and the CableLabs SNAPS open source effort have created a set of tools to allow a developer to "bootstrap" an environment for compute acceleration, Mishra wrote in a blog post. He described SNAPS-Boot as including automated installation software that has pre-built methods for both installing low-level drivers and building workloads, and mentioned SNAPS-Kubernetes as a bootstrap tool for creating a cloud-native ecosystem.

"In future, we intend to drive the project toward building a cloud-native, end-to-end model for application development, deployment and monitoring of hardware accelerators - essentially creating an accelerator-as-a-service," Mishra wrote.

Among the goals for Project Adrenaline, Mishra listed integration of more hardware accelerators; creating an integrated monitoring system for heterogeneous accelerators in cloud-native edge deployments; improving lifecycle management frameworks for acceleration resources; and developing zero-touch provisioning, multi-tenancy, quality of service (QoS), fault and security aspects of edge compute hardware accelerators.

"We strongly believe that a robust container ecosystem that supports accelerators is required for a successful edge strategy," Mishra said. "Another key area of interest is network hardware accelerators, such as smart network interface cards."

"Project Adrenaline only scratches the surface of what's possible with accelerated edge computing. The uses for edge compute are vast and rapidly evolving. As you plan your edge strategy, be sure to include the capability to manage programmable accelerators and reduce your dependence on single-purpose ASICs. Deploying redundant and flexible platforms is a great way to reduce the time and expense of managing components at thousands or even millions of edge locations," Levensalor wrote, noting that SNAPS-Kubernetes facilitates the tying together of these components to test in a lab.        
