The past few months in PC hardware have been eventful by any measure. Apple shipped the M5 Pro and M5 Max, Intel clarified its core architecture roadmap, and anyone trying to build a new PC has been quietly suffering through DRAM pricing that refuses to behave. These stories look separate on the surface. They are not.
The thread connecting all of them is memory, specifically the growing gap between what compute silicon can do and what the memory feeding it can deliver.
Start with Apple. The M5 Pro and M5 Max are genuinely interesting chips, not just because of the performance numbers but because of what Apple was forced to do architecturally to get there. Fusion Architecture, Apple's move to a dual-die SoC design, exists primarily because a single monolithic die cannot accommodate 40 GPU cores, 614 GB/s of unified memory bandwidth, and 18 CPU cores without hitting yield and cost walls. The memory bandwidth figure is the one that matters most for AI workloads running locally, and Apple knows it. The M5 Max at 614 GB/s is not chasing gaming benchmarks. It is chasing large language model inference throughput, and bandwidth is the bottleneck that determines how fast inference runs.
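A rough way to see why bandwidth dominates here: during autoregressive decode, generating each token requires streaming essentially every model weight from memory once, so the ceiling on tokens per second is roughly bandwidth divided by the model's in-memory footprint. A minimal back-of-envelope sketch, using illustrative numbers (the model size is a hypothetical, not an Apple figure):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on autoregressive decode speed for a memory-bound LLM.

    Each generated token must read (roughly) every weight once, so the
    ceiling is memory bandwidth divided by the model's in-memory footprint.
    Ignores KV-cache traffic and overlap, so real throughput is lower.
    """
    return bandwidth_gb_s / model_size_gb

# Hypothetical example: a 70B-parameter model quantized to 4 bits
# occupies about 70e9 * 0.5 bytes, i.e. roughly 35 GB.
ceiling = decode_tokens_per_sec(bandwidth_gb_s=614, model_size_gb=35)
print(round(ceiling, 1))  # prints 17.5
```

The point of the estimate is that no amount of extra compute raises that ceiling; only more bandwidth, or a smaller in-memory model, does.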
That bandwidth problem is not unique to Apple. It is an industry-wide crisis, and the full picture of why is considerably more complicated than most coverage lets on. The AI memory crisis running through the data centre right now traces back to physics: DRAM scaling has not kept pace with compute scaling, HBM production is constrained by TSV fabrication yields and advanced packaging capacity, and the most powerful AI systems on the planet spend more time waiting for data than actually processing it. That is not a software problem. It is a silicon and packaging problem, and it does not have a quick fix.
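The "waiting for data" claim can be made concrete with a simple roofline-style estimate: a kernel takes at least as long as its compute demands or its memory traffic demands, whichever is larger. A minimal sketch with illustrative accelerator numbers (not any specific product's specs):

```python
def kernel_time_s(flops: float, bytes_moved: float,
                  peak_flops: float, bandwidth_b_s: float) -> tuple[float, str]:
    """Simple roofline estimate of kernel runtime.

    Runtime is bounded below by compute time (flops / peak_flops) and by
    memory time (bytes_moved / bandwidth); the larger of the two wins and
    tells you which resource the kernel is bound by.
    """
    t_compute = flops / peak_flops
    t_memory = bytes_moved / bandwidth_b_s
    bound = "memory-bound" if t_memory > t_compute else "compute-bound"
    return max(t_compute, t_memory), bound

# Illustrative accelerator: 1000 TFLOP/s of compute but only 3 TB/s of HBM.
# An op doing just 2 FLOPs per byte of traffic is hopelessly memory-bound:
# the compute units sit idle for most of the kernel's runtime.
t, bound = kernel_time_s(flops=2e12, bytes_moved=1e12,
                         peak_flops=1e15, bandwidth_b_s=3e12)
print(bound)  # prints memory-bound
```

With these numbers the compute takes 2 ms of work but the memory traffic takes over 300 ms, which is exactly the imbalance driving the HBM scramble: adding FLOPs to such a chip buys nothing.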
For anyone building a PC right now, the consequences land differently, but they are still real. DRAM pricing has been pulled in two directions simultaneously: AI infrastructure demand is bidding directly on supply at the high end, while consumer DDR5 pricing has been volatile enough to meaningfully change the calculus on a new build from one month to the next. If you have been holding off on a memory upgrade, waiting for prices to settle, the honest answer is that the market dynamics driving this are structural rather than cyclical. Prices may ease, but the pressure from AI demand on overall DRAM supply is not going away.
On the Intel side, there has been a lot of noise about the company killing off its hybrid core architecture in favour of a unified core design. The reality, as is usually the case with Intel roadmap speculation, is more nuanced. Intel is not killing P-cores, at least not in the timeframe the headlines suggest. The unified core concept is a longer-term architectural direction, and the practical implications for anyone buying an Intel platform in the next year or two are limited. What matters more right now is whether Intel's current generation delivers the performance-per-watt improvements it needs to stay competitive, particularly in a market where Apple Silicon has reset expectations for mobile efficiency and AMD's desktop Zen 5 parts are putting pressure on the high end.
The bigger picture across all of this is straightforward: memory is the constraint that determines where performance goes next, whether that is Apple designing a new packaging approach to get more bandwidth, hyperscalers paying premiums to secure HBM allocation, or a consumer trying to figure out whether now is a sensible time to buy a 32 GB DDR5 kit. The compute side of the industry has never been more capable. The memory side is struggling to keep up, and that tension is shaping every major hardware decision being made right now.