Samsung unveils HBM4E, showcasing comprehensive AI solutions, NVIDIA partnership and vision at NVIDIA GTC 2026

Samsung Electronics will showcase its comprehensive AI solutions, including the new HBM4E, at NVIDIA GTC 2026. The event will highlight Samsung’s collaboration with NVIDIA and its advancements in AI infrastructure and semiconductor manufacturing.

Samsung Electronics, a prominent leader in semiconductor technology, has announced its participation in NVIDIA GTC 2026, which will take place in San Jose, California, from March 16-19. As the only semiconductor company offering a complete AI solution across memory, logic, foundry, and advanced packaging, Samsung will present its suite of products and solutions that empower customers to create innovative AI systems. Visitors can explore Samsung’s AI solutions at booth #1207 during the event.

A highlight of Samsung’s showcase will be HBM4, its sixth generation of high bandwidth memory, which is now in mass production and designed for the NVIDIA Vera Rubin platform. HBM4 is expected to accelerate the development of future AI applications, delivering pin speeds of 11.7 gigabits-per-second (Gbps), surpassing the 8Gbps industry standard, with potential enhancements up to 13Gbps.

By utilizing its advanced sixth-generation 10-nanometer (nm)-class DRAM process (1c), Samsung has achieved stable yields and top-tier performance. Additionally, Samsung will introduce its next-generation HBM4E for the first time at GTC 2026, delivering 16Gbps per pin and 4.0 terabytes-per-second (TB/s) of bandwidth.
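The quoted per-pin speeds and the 4.0 TB/s figure are related by simple arithmetic. A minimal sketch, assuming the 2048-bit per-stack data interface widely reported for HBM4-class memory (the interface width is an assumption, not a figure from this announcement):

```python
# Rough bandwidth arithmetic for an HBM stack.
# Assumption: a 2048-bit (2048-pin) data interface per stack,
# as widely reported for HBM4-class memory.
PINS_PER_STACK = 2048

def stack_bandwidth_tbps(pin_speed_gbps: float, pins: int = PINS_PER_STACK) -> float:
    """Total stack bandwidth in terabytes per second."""
    # pins * Gbps gives gigabits/s; divide by 8 for gigabytes/s,
    # then by 1000 for terabytes/s.
    return pin_speed_gbps * pins / 8 / 1000

# HBM4E at 16 Gbps per pin -> ~4.1 TB/s, consistent with the quoted 4.0 TB/s
print(round(stack_bandwidth_tbps(16.0), 2))   # 4.1
# HBM4 at 11.7 Gbps per pin -> ~3.0 TB/s
print(round(stack_bandwidth_tbps(11.7), 2))   # 3.0
```

Under the same assumption, the potential 13Gbps enhancement mentioned for HBM4 would put a single stack at roughly 3.3 TB/s.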

Samsung will also showcase its hybrid copper bonding (HCB) technology, a novel bonding method that enables next-generation HBM stacks of 16 or more layers while reducing thermal resistance by over 20 percent compared to thermal compression bonding (TCB).

The collaboration between Samsung and NVIDIA will be a focal point in the ‘NVIDIA Gallery,’ featuring a range of Samsung’s advanced technologies, such as HBM4, SOCAMM2, and PM1763 SSD, designed for NVIDIA AI infrastructure.

Samsung’s SOCAMM2, based on low-power DRAM, is a server memory module offering high bandwidth and flexible system integration for next-generation AI infrastructure. Samsung’s SOCAMM2 is the industry’s first to enter mass production.

Samsung’s PM1763 SSD is developed for next-generation AI storage solutions, utilizing the latest PCIe 6.0 interface for rapid data transfers and high capacities. Its performance will be demonstrated on servers using the NVIDIA SCADA programming model.

As part of the new NVIDIA BlueField-4 STX reference architecture for accelerated storage infrastructure within NVIDIA’s Vera Rubin platform, Samsung’s PM1763 SSD will illustrate enhancements in energy efficiency and system performance for inference workloads.

Samsung will also highlight its partnership with NVIDIA in AI Factory development at GTC 2026, focusing on scaling Samsung’s AI Factory using NVIDIA accelerated computing. This collaboration supports one of the world’s most comprehensive chip manufacturing infrastructures, covering memory, logic, foundry, and advanced packaging.

Yong Ho Song, Executive Vice President and Head of AI Center at Samsung Electronics, will elaborate on the strategic collaboration between the two companies during his presentation on March 17, 2026. His session, titled ‘Transforming Semiconductor Manufacturing with Agentic AI from Design and Engineering to Production,’ will detail the AI Factory and showcase innovative use cases where AI and digital twins are revolutionizing semiconductor manufacturing.

Samsung’s memory solutions also enhance efficiency for local AI workloads on personal devices. At GTC 2026, Samsung will present efficient solutions for personal AI supercomputers, including the Samsung PM9E3 and PM9E1 SSDs for NVIDIA DGX Spark.

Furthermore, Samsung will display DRAM solutions, LPDDR5X and LPDDR6, designed for premium smartphones, tablets, and wearables, offering faster data throughput and reduced latency. LPDDR5X achieves speeds of up to 25Gbps per pin while lowering power consumption by up to 15 percent, supporting ultra-responsive mobile experiences and AI-enhanced applications without compromising battery life. LPDDR6 advances bandwidth to a scalable 30-35Gbps per pin and introduces advanced power-management features, providing the performance needed for next-generation edge-AI workloads.
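The per-pin figures above translate into package bandwidth only once a bus width is fixed. A quick sketch of that conversion, assuming a 64-bit total data bus as an illustrative mobile configuration (the bus width is an assumption, not a figure from this announcement):

```python
# Peak bandwidth for a mobile LPDDR package.
# Assumption: a 64-bit total data bus, used here purely as an
# illustrative premium-phone configuration.
BUS_WIDTH_BITS = 64

def package_bandwidth_gbs(pin_speed_gbps: float, bus_bits: int = BUS_WIDTH_BITS) -> float:
    """Peak package bandwidth in gigabytes per second."""
    # bus_bits pins each moving pin_speed_gbps gigabits/s; divide by 8 for bytes
    return pin_speed_gbps * bus_bits / 8

print(package_bandwidth_gbs(25.0))  # LPDDR5X at 25 Gbps/pin -> 200.0 GB/s
print(package_bandwidth_gbs(30.0))  # low end of the LPDDR6 range -> 240.0 GB/s
```

The same arithmetic scales linearly with bus width, so a narrower wearable-class bus would deliver proportionally less peak bandwidth at the same pin speed.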