

Micron Begins Volume Production of 36GB HBM4, 28 Gbps PCI Gen6 SSDs, & 192 GB SOCAMM2 Memory For NVIDIA Vera Rubin Platform

16 March 2026 at 21:00

Micron HBM4 memory chip displayed next to its exposed circuitry on a black background.

Micron has commenced volume production of HBM4 DRAM, PCIe Gen6 SSDs, and SOCAMM2 memory for NVIDIA's Vera Rubin platform.

NVIDIA Vera Rubin Gets Full Support From Micron With Volume Production Beginning On HBM4 DRAM, PCIe Gen6 SSDs & SOCAMM2 Memory

Press Release: Micron Technology has begun volume shipment of its HBM4 36GB 12H in the first quarter of calendar year 2026, designed for NVIDIA Vera Rubin. With HBM4, Micron achieves pin speeds of over 11 Gb/s, enabling bandwidth greater than 2.8 TB/s, a 2.3x bandwidth improvement and greater than 20% better power efficiency over its HBM3E. […]

Read full article at https://wccftech.com/micron-volume-production-hbm4-gen6-ssds-socamm2-nvidia-vera-rubin-platform/
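The quoted bandwidth figure follows directly from pin speed times interface width. A quick back-of-envelope check, assuming HBM4's 2048-bit per-stack data interface (per the JEDEC HBM4 standard; not stated in the excerpt itself):

```python
# Back-of-envelope check of Micron's HBM4 bandwidth claim.
# Assumption: a 2048-bit data interface per HBM4 stack (JEDEC HBM4).
pin_speed_gbps = 11.0   # per-pin data rate in Gb/s, from the press release
interface_bits = 2048   # data pins per stack (assumed)

bandwidth_gbs = pin_speed_gbps * interface_bits / 8  # GB/s per stack
print(f"{bandwidth_gbs / 1000:.2f} TB/s")  # ~2.82 TB/s, matching ">2.8 TB/s"
```

At exactly 11 Gb/s per pin the math gives 2816 GB/s, so "over 11 Gb/s" lines up with the "greater than 2.8 TB/s" claim.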

NVIDIA Vera Rubin Achieves 40 Million Times More Compute In 10 Years: 288 GB HBM4, 22 TB/s Bandwidth, 50 PFLOPs of AI Horsepower

16 March 2026 at 20:12

A lineup of NVIDIA hardware including 'Rubin,' 'Vera,' 'CX9,' 'BlueField-4,' 'NVLink-6 Switch,' 'Spectrum-X CPO,' 'Groq 3 LPU,' and various compute trays and servers displayed against a black background.

NVIDIA has officially unveiled its next-gen AI data center platform, Vera Rubin, powered by the Rubin GPU and Vera CPU architectures.

NVIDIA Vera Rubin AI Data Center Offers A Stunning 40,000,000x Compute Growth Within A Decade

The NVIDIA Vera Rubin platform is designed with a total of seven chips and six different racks, each serving a singular purpose, to power next-gen AI data centers. The seven chips were announced today. First up is the Vera Rubin compute tray; what has changed is the mounting system, with which AI data centers now take just 2 hours […]

Read full article at https://wccftech.com/nvidia-vera-rubin-achieves-40-million-times-more-compute-in-10-years/
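To put the headline figure in perspective, the implied compound annual growth can be worked out from the claim itself (this calculation is ours, not NVIDIA's):

```python
# What year-over-year growth does "40,000,000x in 10 years" imply?
total_growth = 40_000_000
years = 10

annual = total_growth ** (1 / years)  # compound annual growth factor
print(f"~{annual:.1f}x per year")     # roughly 5.8x year-over-year
```

In other words, compute roughly 5.8x-ing every year compounds to about 40 million-fold over a decade.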

Micron Ships Out the "World's First" 256GB SOCAMM2 Modules Targeted Toward the Agentic AI Frenzy

3 March 2026 at 15:34

A close-up of a circuit board featuring Micron SOCAMM2 and LPDDR5X memory chips.

Micron's latest breakthrough in the memory industry is the debut of more capable SOCAMM2 memory modules, featuring leading capacity and power efficiency.

Micron's Newer SOCAMM2 Focuses On Reducing KV-Cache Bottlenecks, Enabling Lower-Latency Workloads

At the applications layer of AI, the memory bottleneck grows as workloads continue to scale, which is why DRAM manufacturers have paid special attention to advancements in HBM and other AI-specific memory products. In its latest announcement, Micron has set a "new benchmark" with SOCAMM2 memory modules, ramping per-module capacity up to 256 GB, marking a […]

Read full article at https://wccftech.com/micron-ships-out-the-worlds-first-256gb-socamm2-modules/
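The KV-cache angle is easy to motivate with rough numbers. A sizing sketch for a single long-context inference request, with all model parameters hypothetical and chosen only for illustration, shows why hundreds of gigabytes per module matter:

```python
# Rough KV-cache sizing to illustrate why per-module capacity matters.
# Every model parameter below is hypothetical, for illustration only.
layers = 80            # transformer layers (assumed)
kv_heads = 8           # KV heads under grouped-query attention (assumed)
head_dim = 128         # dimension per head (assumed)
bytes_per_elem = 2     # FP16/BF16 storage
seq_len = 128_000      # context length in tokens (assumed)

# Factor of 2 covers keys and values; cost is per token, per layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
cache_gb = kv_bytes_per_token * seq_len / 1e9
print(f"{cache_gb:.1f} GB for one 128K-token sequence")  # ~41.9 GB
```

One long-context request already consumes tens of gigabytes, and serving many concurrent requests multiplies that, which is the bottleneck high-capacity modules like a 256 GB SOCAMM2 are aimed at.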

NVIDIA's CEO to Unveil Chips the "World Has Never Seen Before" at This Year's GTC, Likely Pointing Toward Rubin or Next-Gen Feynman AI Lineups

18 February 2026 at 11:52

A person in a shiny jacket gestures with a pen against a backdrop of Earth viewed from space, connected by glowing lines.

NVIDIA's CEO has spoken about what could come next at the company's GTC 2026 event, hinting at chips the "world has never seen before".

NVIDIA's Next Stage At GTC Could Feature Next-Gen Feynman Chips, Possibly Showcasing Groq's LPU Integration

NVIDIA has been at the forefront of the AI technological revolution, mainly through its compute portfolio and its ability to keep up with its product cycles. At CES 2026, Team Green showcased its Vera Rubin AI lineup, revealing that it was in full production and included six newly designed chips, including the Vera CPUs and […]

Read full article at https://wccftech.com/nvidia-ceo-to-unveil-chips-that-the-world-has-never-seen-before/
