8 Hardware Modules Explained

What Is NVIDIA Jetson?

NVIDIA Jetson is a line of advanced embedded systems that enable you to create artificial intelligence (AI) products for various scenarios. It is a power-efficient hardware platform for AI, consisting of modular, high-performance, small-form-factor edge computers. NVIDIA Jetson also offers the JetPack SDK for software acceleration, along with an ecosystem that speeds up the development of custom AI projects.

This is part of our series of articles about NVIDIA A100 and other NVIDIA AI offerings.

What Are the Benefits of the Jetson Platform?

NVIDIA Jetson is suitable for organizations of all sizes, and also for students or individual developers. It provides a set of modules that can be useful for anything from entry-level AI applications to highly complex AI-powered devices.

NVIDIA Jetson is powered by a unified software architecture that frees developers from the hassle of repetitive coding. Whenever they require AI/ML capability, they can add a relevant Jetson module to the device and it takes care of the heavy lifting.

The NVIDIA JetPack SDK comes with a Linux operating system, CUDA-X acceleration libraries, and APIs for various machine learning domains, including deep learning and computer vision. It also supports machine learning frameworks such as TensorFlow, Caffe and Keras, and computer vision libraries such as OpenCV.
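Because JetPack bundles accelerated builds of these frameworks, a deployment script might first probe which ones are actually installed before choosing a backend. A minimal sketch of that idea (the framework list is illustrative; the probe itself is plain Python and runs on any machine):

```python
import importlib.util

# Probe for commonly used JetPack-accelerated frameworks.
# (Illustrative list; "cv2" is OpenCV's Python module name.)
candidates = ["tensorflow", "keras", "cv2"]
available = [name for name in candidates
             if importlib.util.find_spec(name) is not None]

print("Detected ML backends:", available if available else "none")
```

A real application would then select, for example, a TensorRT- or OpenCV-backed inference path based on what was found.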

The NVIDIA Jetson platform supports cloud-native technologies and workflows such as containerization and orchestration, giving developers the agility to rapidly develop and scale up AI products.

NVIDIA Jetson Modules

The NVIDIA Jetson platform offers several hardware modules to help you build AI-powered appliances.

1. Jetson AGX Orin Developer Kit

This kit allows developers to build a full-featured AI application using Jetson Orin modules. It provides a power-efficient, high-performance Jetson AGX Orin module and can emulate other modules.

2. Jetson AGX Xavier

Jetson AGX Xavier is a module for building AI-powered autonomous machines. It runs on just 10 W and delivers up to 32 TOPS. AGX Xavier benefits from the NVIDIA ecosystem, including a rich selection of AI workflows and tools from the NVIDIA AI platform, which developers can use to train and deploy neural networks quickly.

Jetson AGX Xavier has NVIDIA JetPack SDK support, which helps you reduce costs by minimizing your development efforts.

3. Jetson AGX Xavier Industrial

This module is part of the NVIDIA Jetson AGX Xavier series. It has a pin-compatible form factor design, allowing you to leverage the most modern AI models for demanding applications. It offers extended shock, vibration, and temperature specifications, advanced security capabilities and features, and up to 20 times the performance and four times the memory of the NVIDIA Jetson TX2i module.

AGX Xavier Industrial is useful for manufacturers of robotics, automation, and other intelligent products. It lets them build safety-certified, ruggedized products, delivering high performance for AI-powered functional safety and industrial applications in a rugged, power-efficient form factor.

4. Jetson AGX Xavier Developer Kit

This developer kit lets you easily build and deploy an end-to-end AI-powered application for various robotics use cases, including manufacturing, retail, and delivery. It has wide support from NVIDIA JetPack, DeepStream SDKs, cuDNN, CUDA, and TensorRT, providing all the necessary tools to start an AI development project.

The Xavier processor provides the computing power, delivering 20 times the performance and 10 times the energy efficiency of the older TX2 processor.

5. Jetson Xavier NX

This module delivers 21 TOPS for modern AI workloads while consuming just 10 watts of power. Its form factor is smaller than a credit card. Xavier NX can run multiple neural networks simultaneously, processing data from multiple high-resolution sensors. It allows you to build applications for edge computing and embedded devices that require high performance under tight power, weight, and size constraints.
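Using only the figures quoted in this article (up to 32 TOPS at 10 W for AGX Xavier, 21 TOPS at 10 W for Xavier NX), performance per watt reduces to a simple ratio. These are vendor peak numbers, so treat the result as indicative rather than a benchmark:

```python
# Performance-per-watt from the peak figures quoted in this article.
modules = {
    "Jetson AGX Xavier": {"tops": 32, "watts": 10},
    "Jetson Xavier NX": {"tops": 21, "watts": 10},
}

for name, spec in modules.items():
    tops_per_watt = spec["tops"] / spec["watts"]
    print(f"{name}: {tops_per_watt:.1f} TOPS/W")
```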

6. Jetson Xavier NX Developer Kit

This developer kit contains a compact, power-efficient Xavier NX module for AI-powered edge devices. It adds cloud-native support and runs the accelerated NVIDIA software stack at just 10 W, with over ten times the performance of the popular Jetson TX2.

AI startups, intelligent machine manufacturers, and application developers can leverage the NVIDIA Jetson Xavier NX Developer Kit to build innovative products using a compact, power-efficient form factor, and highly accurate AI inference. 

7. Jetson TX2 Module

This module is a fast embedded AI computing device with high power efficiency (just 7.5 W), offering supercomputer-class capabilities for edge AI devices. Built on a GPU from the NVIDIA Pascal family, it has 8 GB of memory with 59.7 GB/s of memory bandwidth. It offers a variety of hardware interfaces, making it easier to integrate into different form factors and products.

8. Jetson Nano

This module is the smallest in the family, designed for embedded AI applications and IoT devices. It is extremely powerful for its size, delivering the power required for advanced AI projects in a $99 module.

NVIDIA Jetson Nano lets you get started quickly with the comprehensive JetPack SDK and accelerated libraries for computer vision, deep learning, graphics, multimedia, and other applications. It offers the functionality and performance required to run modern AI workloads, letting you easily add AI capabilities to your products.

NVIDIA Jetson and Edge AI with Run:AI

Kubernetes, the platform on which the Run:AI scheduler is based, has a lightweight version called K3s, designed for resource-constrained computing environments like edge devices based on NVIDIA Jetson. Run:AI automates and optimizes resource management and workload orchestration for machine learning infrastructure. With Run:AI, you can run more workloads on your resource-constrained servers. 

Here are some of the capabilities you gain when using Run:AI: 

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can run workloads on fractions of GPU and manage prioritizations more efficiently.
  • A higher level of control—Run:AI enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.
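In Kubernetes terms, a fractional-GPU request of this kind could be expressed through pod metadata. The sketch below is hypothetical: the `gpu-fraction` annotation key and the container image are illustrative placeholders, not verified Run:AI API names:

```python
# Hypothetical pod spec asking the scheduler for half a GPU.
# The "gpu-fraction" annotation and image name are placeholders.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "inference-job",
        "annotations": {"gpu-fraction": "0.5"},  # illustrative key
    },
    "spec": {
        "containers": [
            {
                "name": "model-server",
                "image": "registry.example.com/edge-inference:latest",
            }
        ]
    },
}

print(pod_spec["metadata"]["annotations"])
```

A scheduler that understands such metadata can pack several of these pods onto one physical GPU, which is how fractional sharing raises utilization on constrained edge hardware.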

Run:AI simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and the quality of their deep learning models. 

Learn more about the Run.ai GPU virtualization platform