GPU Servers

A leading AI computing provider fueling innovation across AI scenarios

  • Full stack

    The most comprehensive AI server portfolio, covering solutions from single node to cluster, with full-stack management software and application optimization services.

  • Extreme performance

    Leading performance in model training and inference, fueling R&D and applications across AI scenarios.

  • Widely recognized by global customers

    Trusted by CSPs and leading high-tech companies in AI+Science, AI+Graphics, and AIGC worldwide.

  • System-level open design

    Cooperating with leading AI chip vendors to develop mature OAI solutions.

All Products

KR4268V2 (4U, air cooling)
  Main application scenarios: AI inference, deep learning, metaverse, AIGC, AI+Science
  • Intel configuration
    Processor: 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W
    Accelerator card: supports 8x dual-slot FHFL PCIe GPU cards and ≥ 4 PCIe 5.0 x16 slots
    Memory: 32x DDR5 DIMMs, up to 4800 MT/s
    Storage: 24x 2.5-inch or 12x 3.5-inch SAS/SATA front drive bays, up to 16x NVMe or E3.S; built-in 2x M.2 NVMe/SATA SSD
  • AMD configuration
    Processor: 2x AMD EPYC™ 9004 Series Processors, max cTDP 400W
    Accelerator card: up to 10x full-height, full-length, double-width PCIe GPU cards
    Memory: 24x DDR5 DIMMs, up to 4800 MT/s
    Storage: 24x 2.5-inch or 12x 3.5-inch SAS/SATA front drive bays, up to 16x NVMe or E3.S; built-in 2x M.2 NVMe SSD

KR6288V2 (6U, air cooling)
  Main application scenarios: AI training, AIGC, metaverse
  • Intel configuration
    Processor: 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W
    Accelerator card: 1x NVIDIA HGX Hopper 8-GPU module, TDP up to 700W per GPU
    Memory: 32x DDR5 DIMMs, up to 4800 MT/s
    Storage: 24x 2.5-inch SSDs, up to 16x NVMe U.2
  • AMD configuration
    Processor: 2x AMD EPYC™ 9004 Series Processors, max cTDP 400W
    Accelerator card: 1x NVIDIA HGX Hopper 8-GPU module, TDP up to 700W per GPU
    Memory: 24x DDR5 DIMMs, up to 4800 MT/s
    Storage: 24x 2.5-inch SSDs, up to 16x NVMe U.2

KR6298V2 (6U, air cooling)
  Main application scenarios: pre-training, fine-tuning
  Processor: 2x 4th Gen Intel® Xeon® Scalable Processors, TDP 350W
  Accelerator card: 8x Intel Gaudi2/PVC OAMs, TDP up to 600W per OAM
  Memory: 32x DDR5 DIMMs, up to 4800 MT/s
  Storage: 24x 2.5-inch SSDs, up to 16x NVMe U.2
MotusAI
  Key capabilities: fine-grained GPU scheduling, data acceleration strategies, efficient distributed training, and a fault-tolerance mechanism.
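
MotusAI's own scheduling, data-acceleration, and fault-tolerance interfaces are not documented on this page, so the sketch below is only a generic illustration of what fault-tolerant multi-GPU training on an 8-GPU node such as the KR6288V2 typically looks like: a minimal PyTorch DistributedDataParallel script launched with torchrun. The model, batch, hyperparameters, and the file name ddp_train.py are placeholders, and none of the calls are MotusAI-specific.

    # ddp_train.py -- minimal sketch of multi-GPU data-parallel training on one 8-GPU node.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets LOCAL_RANK, RANK, and WORLD_SIZE for every worker it spawns.
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        dist.init_process_group(backend="nccl")  # NCCL is the usual backend for GPU collectives

        # Placeholder model and data; a real job would load its own network and dataset.
        model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

        for _ in range(100):
            batch = torch.randn(32, 1024, device=f"cuda:{local_rank}")
            loss = model(batch).square().mean()
            optimizer.zero_grad()
            loss.backward()          # DDP all-reduces gradients across the GPUs here
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

A single-node run could be launched with torchrun --standalone --nproc_per_node=8 ddp_train.py; multi-node, fault-tolerant runs add a rendezvous endpoint and --max-restarts so failed workers can be restarted automatically.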