Memory Chip Makers Consider Thicker Designs That Could Delay Advanced Technology


Global, Friday, 6 March 2026.
Major semiconductor companies are discussing increasing High Bandwidth Memory thickness to 825-900 micrometers for next-generation 20-layer stacks, up from current 775 micrometer standards. This potential relaxation of thickness limits could slow adoption of hybrid bonding technology, which directly connects copper interconnects without traditional bumps between memory layers. The move reflects industry tensions between achieving higher memory capacity and maintaining manufacturing precision, as companies balance performance gains against technical complexity in AI-driven memory production.

Understanding the Semiconductor Memory Innovation

This development centers on semiconductor memory technology, specifically High Bandwidth Memory (HBM) used in artificial intelligence applications. Companies participating in JEDEC standardization discussions are considering raising HBM thickness limits to approximately 825-900 micrometers for future generations such as HBM4E and HBM5, which are expected to adopt 20-layer stacking [1]. This would be a significant increase over the current HBM4 standard of 775 micrometers and the 720-micrometer limit that applied through HBM3E [1]. The change aims to increase memory density and capacity for AI workloads, where memory bandwidth has become a critical bottleneck. Current AI applications require substantial memory resources, with HBM3E achieving a bandwidth of 1,178 GB/s per stack at 9.2 Gbps data rates [7].
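The cited per-stack bandwidth can be sanity-checked with simple arithmetic: peak bandwidth is the per-pin data rate times the interface width. A minimal sketch, using the standard 1024-bit HBM stack interface width and the 9.2 Gbps rate from the text:

```python
def hbm_stack_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak per-stack bandwidth in GB/s: per-pin rate (Gbps) * pins / 8 bits-per-byte."""
    return data_rate_gbps * bus_width_bits / 8

# 9.2 Gbps across a 1024-bit interface, as cited for HBM3E
print(round(hbm_stack_bandwidth_gbs(9.2), 1))  # -> 1177.6, matching the ~1,178 GB/s figure
```

The result, 1177.6 GB/s, rounds to the 1,178 GB/s per stack quoted in the article.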

Technical Benefits and Performance Gains

The thicker HBM configurations offer substantial benefits for AI inference and training applications. Every gigabyte of HBM consumes roughly three times the wafer capacity of DDR5, a ratio that rises further with HBM4 [5]. The added thickness allows higher-capacity stacks without compromising structural integrity, addressing the critical memory shortage in AI deployments. AI-related memory is projected to consume almost 20% of global DRAM wafer output in 2026, while annual DRAM capacity growth remains limited to 10-15% [5]. This supply constraint has made HBM particularly valuable: HBM3E commands approximately $15-20 per GB, 5-10 times the cost of commodity DRAM on a per-bit basis [7]. Thicker designs let manufacturers reach higher memory densities while maintaining manufacturing yields, which is crucial as the industry faces unprecedented demand for AI memory solutions.
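The pricing figures above imply a range for commodity DRAM. A back-of-envelope bracketing, using only the article's numbers (the arithmetic is illustrative, not a market forecast):

```python
# Figures cited in the text
hbm_price_per_gb = (15.0, 20.0)   # USD per GB for HBM3E
premium = (5.0, 10.0)             # HBM cost multiple over commodity DRAM, per bit

# Bracket the implied commodity DRAM price per GB
implied_dram_low = hbm_price_per_gb[0] / premium[1]   # cheapest HBM, largest premium
implied_dram_high = hbm_price_per_gb[1] / premium[0]  # priciest HBM, smallest premium

print(f"Implied commodity DRAM: ${implied_dram_low:.2f}-${implied_dram_high:.2f} per GB")
# -> Implied commodity DRAM: $1.50-$4.00 per GB
```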

How the Technology Works and Manufacturing Challenges

HBM stacks consist of DRAM dies interconnected by Through-Silicon Vias (TSVs), with specifications including 5-10 micrometer diameter, 40-55 micrometer pitch, and over 5,000 TSVs per stack for HBM3 [7]. The increased thickness to 825-900 micrometers allows for more robust 20-layer stacking while maintaining structural integrity during manufacturing. Current HBM die thickness ranges between 25-40 micrometers [7], and the additional overall thickness provides greater tolerance for manufacturing variations and thermal expansion. However, this approach could slow adoption of hybrid bonding technology, which directly connects the copper interconnects of chips and wafers, eliminating the need for bumps between DRAM layers [1]. Hybrid bonding requires ultra-clean surfaces with less than 0.5 nm roughness, plasma activation, and sub-micron alignment accuracy for the copper interconnects [7]. This technically challenging process may see declining yields when bonding up to 20 stacked chips, making relaxed thickness standards an attractive alternative for manufacturers prioritizing production reliability over maximum miniaturization.
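A rough thickness budget shows why 20-layer stacks push past the current limit. The die thickness range is from the text; the base-die allowance below is an illustrative assumption, not a JEDEC figure:

```python
N_LAYERS = 20
DIE_THICKNESS_UM = (25, 40)  # cited range for thinned HBM DRAM dies
BASE_DIE_UM = 50             # assumed base/logic die thickness (illustrative only)

# Total stack height, ignoring bond-layer overhead (hybrid bonding is bumpless;
# microbump-based stacking would add tens of micrometers more)
low = N_LAYERS * DIE_THICKNESS_UM[0] + BASE_DIE_UM
high = N_LAYERS * DIE_THICKNESS_UM[1] + BASE_DIE_UM

print(f"20-layer stack: {low}-{high} um vs. 775 um (current) / 825-900 um (proposed)")
# Twenty 40 um dies alone total 800 um, already above today's 775 um ceiling,
# which is the pressure behind the proposed 825-900 um relaxation.
```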

Industry Players and Geographic Distribution

Samsung Electronics is expected to introduce hybrid bonding partially, at the earliest in 16-layer HBM4E products [1], while major memory companies including SK Hynix, Samsung, and Micron are allocating cleanroom space to HBM production [5]. South Korean company Hanwha Semitech announced on February 25, 2026, the completion of its second-generation hybrid bonder ‘SHB2 Nano’, with plans to deliver it to customers during the first half of 2026 for performance testing [6]. This is the company's first such milestone in four years, following delivery of its first-generation hybrid bonder to customers in 2022 [6]. NVIDIA and Amazon Web Services are reportedly planning to adopt TSMC-SoIC advanced packaging technology [1], while TSMC’s CoWoS capacity was projected to reach 50-60K monthly wafer starts in 2025 and 80-100K in 2026 [7]. The timeline for widespread hybrid bonding adoption remains uncertain, with volume deployment for HBM-to-logic interfaces likely remaining in the 2027+ timeframe [7]. JEDEC typically finalizes key specifications about 1 to 1.5 years before commercialization, making the current discussions critical for establishing industry standards that will shape AI memory solutions through the remainder of the decade.

Sources

