ADLINK Technology Inc., a global leader in edge computing, today introduced the industry’s first embedded MXM graphics modules built on NVIDIA’s Turing architecture, designed to accelerate edge AI inference in SWaP-constrained applications. GPUs are increasingly used for AI inferencing at the edge, where size, weight and power (SWaP) are key considerations. The embedded MXM graphics modules deliver the high compute power required to transform data at the edge into actionable intelligence, and come in a standard form factor for systems integrators, ISVs and OEMs, increasing choice in both power and performance.
“The new embedded MXM graphics modules provide the perfect balance between size, weight and power for edge applications, where the demand for more processing power continues to increase,” said Zane Tsai, director of platform product center, ADLINK. “Leveraging NVIDIA’s GPUs based on the Turing architecture, our customers can now increase their edge processing performance with ruggedized modules that are fit for any environment, while remaining inside their SWaP envelope.”
ADLINK’s embedded MXM graphics modules accelerate edge computing and edge AI in a wide range of compute-intensive applications, particularly in harsh or environmentally challenging settings such as those with limited or no ventilation, or with exposure to corrosives. Examples include medical imaging, industrial automation, biometric access control, autonomous mobile robots, transportation, and aerospace and defense. The need for high-performance, low-power GPU modules grows increasingly critical as AI at the edge becomes more prevalent.
The ADLINK embedded MXM graphics modules:
● Provide acceleration with NVIDIA® CUDA®, Tensor and RT Cores
● Are one-fifth the size of full-height, full-length PCI Express graphics cards
● Offer more than three times the product lifecycle of non-embedded graphics cards
● Consume as little as 50 watts of power
With the introduction of the embedded MXM graphics modules based on the Turing architecture, ADLINK leads the market in delivering powerful computing and AI inferencing at the edge, while remaining within customers’ SWaP constraints.