Set your AI infrastructure free with DCX

DCX ensures data centers can run existing CUDA-based AI workloads on AMD GPU infrastructure—without application changes, rewrites, or operational disruption.

Request Technical Briefing

How DCX works

DCX adds a centralized control and execution layer to the data center, enabling AI workloads to operate across alternative GPU infrastructure. This allows operators to evaluate, purchase, and scale NVIDIA-alternative systems with operational confidence.

Control Plane

Hybrid AI orchestration layer

The OpenDC Control Plane provides the centralized orchestration, policy, and operational intelligence required to run Hybrid AI infrastructure at data center scale. It is deployed on standard x86 or ARM CPU servers and operates as the authoritative management layer across all OpenDC-enabled GPU resources.

The Control Plane is responsible for:

  • Orchestration and policy management across Hybrid AI environments
  • Scheduling workloads across heterogeneous GPU resources
  • Monitoring, observability, and health management
  • Acting as the system of record for Hybrid AI Infrastructure state

The Control Plane does not sit in the application path. Instead, it coordinates execution, enforces policy, and provides visibility while preserving existing operational models.
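To make the scheduling responsibility concrete, here is a minimal sketch of a policy-driven placement decision across heterogeneous GPU nodes, of the kind a control plane makes. All names here (GpuNode, place_workload, the policy fields) are hypothetical illustrations, not the actual OpenDC interfaces.

```python
# Hypothetical sketch: policy-based workload placement across mixed GPU nodes.
from dataclasses import dataclass

@dataclass
class GpuNode:
    name: str
    vendor: str        # e.g. "amd" or "nvidia"
    free_gpus: int

def place_workload(nodes, gpus_needed, preferred_vendor=None):
    """Return the first node satisfying capacity and optional vendor policy."""
    for node in nodes:
        if node.free_gpus < gpus_needed:
            continue  # capacity policy not met
        if preferred_vendor and node.vendor != preferred_vendor:
            continue  # vendor policy not met
        return node.name
    return None  # nothing satisfies the policy; caller may queue or relax it

nodes = [
    GpuNode("gpu-a01", "nvidia", free_gpus=0),
    GpuNode("gpu-b01", "amd", free_gpus=8),
]
print(place_workload(nodes, gpus_needed=4))  # gpu-b01
```

The point of the separation above: placement logic like this runs centrally, while execution stays on the GPU servers themselves.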

Deployment model

  • Installed on dedicated CPU servers within the data center
  • Designed for high availability and horizontal scalability
  • Integrated into existing data center networking and security domains

Example Control Plane server configurations

The OpenDC Control Plane runs on commercially available enterprise servers, including:

Dell Technologies PowerEdge R760
Dual Intel Xeon Scalable processors, 256–512GB RAM, NVMe storage

HPE ProLiant DL380 Gen11
Dual Intel Xeon or AMD EPYC, enterprise-grade I/O and redundancy

Supermicro Ultra SuperServer Series
High-core-count CPUs, optimized for control plane and management workloads

These systems are selected to align with existing data center standards and procurement practices.



Runtime Nodes

Hybrid AI execution layer

OpenDC Runtime Nodes are installed directly on GPU servers and form the execution layer of the OpenDC Suite. They enable Hybrid AI workloads to run transparently across heterogeneous GPU environments while remaining fully coordinated by the Control Plane.

Runtime Nodes are responsible for:

  • CUDA compatibility on AMD GPUs
  • Transparent execution of existing AI workloads in hybrid environments
  • Near-native performance without application or workflow changes

Deployment model

  • Installed alongside existing GPU software stacks on GPU servers
  • No changes required to AI applications or job submission workflows
  • Designed for data center-scale rollouts, not single-node experimentation

Control Plane interaction

Runtime Nodes maintain a continuous control channel with the OpenDC Control Plane, allowing:

  • Centralized scheduling and placement decisions
  • Policy enforcement at execution time
  • Real-time telemetry and performance reporting

This separation ensures execution remains efficient while coordination remains centralized and auditable.
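As an illustration of that control channel, the sketch below shows the general shape of a heartbeat a runtime node might send upstream. The field names and the build_heartbeat helper are hypothetical and do not reflect the actual OpenDC wire format.

```python
# Hypothetical sketch: a runtime-node heartbeat/telemetry payload.
import json
import time

def build_heartbeat(node_id, gpu_utilization, healthy=True):
    """Assemble a telemetry payload for the control plane."""
    return {
        "node_id": node_id,
        "timestamp": int(time.time()),
        "healthy": healthy,
        "gpu_utilization": gpu_utilization,  # one entry per GPU, 0.0-1.0
    }

payload = build_heartbeat("runtime-07", [0.92, 0.88, 0.00, 0.41])
message = json.dumps(payload)  # serialized for the control channel
```

A continuous stream of payloads like this is what lets scheduling, policy enforcement, and observability stay centralized without touching the application path.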

APIs

Hybrid AI integration surface

OpenDC APIs provide the integration layer that connects the OpenDC Control Plane, Runtime Nodes, and existing data center automation systems. They are designed to reduce operational friction and accelerate time-to-deployment for Hybrid AI environments.

The APIs enable:

  • Programmatic workload deployment and lifecycle management
  • DevOps and automation integration with existing platforms
  • Policy and scheduling control across heterogeneous GPU vendors
  • Telemetry, observability, and infrastructure visibility

Operational role

Rather than introducing a new developer framework, OpenDC APIs integrate directly into:

  • Existing scheduling systems
  • Infrastructure automation pipelines
  • Monitoring and observability platforms

This allows OpenDC to operate as a first-class data center system, preserving established operational practices while extending them to Hybrid AI Infrastructure.

OpenDC Suite

End-to-end Hybrid AI Infrastructure

On its own, a control plane, runtime, or API layer provides limited value. The OpenDC Suite unifies these layers into a single, cohesive system designed specifically for data-center-scale Hybrid AI deployments.

Together, the Control Plane, Runtime Nodes, and APIs create:

  • Workload portability across heterogeneous GPU infrastructure
  • A seamless operational model for Hybrid AI environments
  • A dramatically simplified deployment and expansion path

The OpenDC Suite enables data centers to move from evaluation to production rapidly—significantly reducing time-to-value for large-scale AI workloads.

By abstracting complexity at the infrastructure layer, OpenDC allows operators to:

  • Confidently deploy NVIDIA-alternative AI infrastructure
  • Scale Hybrid AI environments without operational disruption
  • Align procurement, operations, and long-term infrastructure strategy

OpenDC Suite is the foundation that makes Hybrid AI Infrastructure practical, scalable, and sustainable for modern data centers.

Inside ABD

Our company
Explore ABD's history and what sets us apart
Our people
See why our team of trusted industry leaders is uniquely positioned to lead the way in Hybrid AI

Take the next step

Discover how ABD can help you make the biggest impact for your AI data centers

Contact Us