
Yesterday — 8 May 2026

AMD Launches MI350P, Its First PCIe "Instinct" In Four Years – Packs CDNA 4 GPU With 4.6 PFLOPs AI Compute, 144 GB HBM3E at 600W

7 May 2026 at 14:45

The image shows an AMD Instinct MI350P graphics card against a dark, abstract background.

AMD has announced its new Instinct MI350P PCIe GPU accelerator, its first PCIe design in four years, aimed at AI workloads. The Instinct MI350P PCIe GPU Takes the MI350X Chip and Cuts It in Half for 128 CUs, 144 GB HBM3E & 600W Power With the Instinct MI350P PCIe GPU, AMD gives enterprise users an option to expand their AI computing capabilities without having to invest in expensive infrastructure. The PCIe form factor makes the MI350P an easy, drop-in solution that delivers plenty of performance in a standard dual-slot, server-focused design. Designed to help […]

Read full article at https://wccftech.com/amd-mi350p-first-pcie-instinct-in-four-years-cdna-4-gpu-4-6-pflops-ai-144-gb-hbm3e-600w/

Before yesterday

SpaceXAI Gives Anthropic A Fresh Injection of 220,000 NVIDIA GPUs While Also Planning on Multi-GW "Orbital" AI Compute Capacity

6 May 2026 at 17:55


SpaceXAI has announced that it will give Anthropic access to its Colossus 1 supercomputer with 220,000 NVIDIA GPUs, and that it is also planning orbital compute clusters. From Land To Space, SpaceXAI Has Offered Anthropic Multi-Gigawatt Compute Capacity For Its AI Needs Anthropic is starving for more compute for its AI requirements. The company is expanding its capabilities at an unmatched scale, while reportedly building in-house chips and working with several big chipmakers. The company has also announced a partnership with Amazon for 6 GW of Trainium chip capacity for its Claude AI models, among several other partnerships. Today, SpaceXAI has […]

Read full article at https://wccftech.com/spacexai-anthropic-fresh-injection-220000-nvidia-gpus-working-on-multi-gw-orbital-ai-compute-capacity/

NVIDIA's CEO Jensen Huang Draws a Hard Line on China: No Blackwell, No Rubin, But US Firms Must Still Fight In Global Markets

5 May 2026 at 17:30

A person is holding NVIDIA server hardware at a presentation, showcasing multiple circuit boards with visible chipsets.

NVIDIA's CEO has said that China should not have access to the company's most advanced AI chips, such as Blackwell or Rubin. Blackwell and Rubin AI Chips Are a No-Go for China, but NVIDIA's CEO Wants US Firms To Continue To Compete In Global Markets The US AI policy shift toward China has been ongoing since NVIDIA's Hopper generation of chips. Export controls restricted NVIDIA from selling its bleeding-edge chips to Chinese firms, and this ban has extended into the Blackwell generation. Reaffirming the US-first policy, NVIDIA CEO Jensen Huang has stated that the company's most advanced AI chips, Blackwell and […]

Read full article at https://wccftech.com/nvidia-ceo-jensen-huang-draws-a-hard-line-on-china-no-blackwell-no-rubin/

xAI Is Reportedly Using Just 11% of Its 550,000 NVIDIA GPUs, While Meta and Google Squeeze Out 43-46% From Their Fleets

3 May 2026 at 12:25


xAI is reportedly able to utilize just over 10% of its entire NVIDIA GPU fleet, as a report points to lackluster AI software stack optimizations. AI Software Stack Bottlenecks Are An Industry-Wide Problem, As xAI Is Only Able To Utilize 11% of Its Entire NVIDIA GPU Installation. The Information has reported that Elon Musk's xAI, the software firm behind Grok and other key AI-based components, is only able to utilize a small chunk of its total installed GPU capacity. Currently, xAI runs around 550,000 NVIDIA GPUs, a combination of H100s and H200s. These are deployed within xAI's Memphis and Colossus […]

Read full article at https://wccftech.com/xai-using-just-11-percent-gpus-while-meta-google-squeeze-out-much-more/

NVIDIA Wants Everyone To Rethink AI TCO, & Explains Why "Cost Per Token" Is The Only Metric That Matters

16 April 2026 at 16:00


As the AI industry enters its maturity phase, traditional metrics have become outdated, which is why NVIDIA suggests that AI TCO should now be evaluated on "Cost Per Token". NVIDIA Wants Everyone To Rethink AI TCO With The "Cost Per Token" Metric Tokens are the single most important metric for AI. While yesterday's data centers were evaluated on their raw computing power, today's AI factories are evaluated on their token output. But what matters is not simply who produces the most tokens; efficiency and cost are still the values that matter most. That is why […]

Read full article at https://wccftech.com/nvidia-wants-everyone-to-rethink-ai-tco-explains-why-cost-per-token-is-the-only-metric-that-matters/

China's Catch-22 Is Pushing NVIDIA to the Brink, and the Chipmaker Is Finally Fed Up With It

5 March 2026 at 17:21

A man in a black shiny jacket stands in front of a large circuit board against a background featuring the Chinese flag.

NVIDIA's ambitions for China are dimming by the day, as a new report indicates the AI giant is now looking to scale back H200 production in favour of ramping up Vera Rubin production. NVIDIA Plans to Shift H200 Production Toward Vera Rubin, as It Prefers 'Consistency' Over Revenue We have reported extensively on the NVIDIA-China saga in the past, and one of the more common threads in these stories is that both NVIDIA and China seem to be running in cycles, trying to catch each other. We'll discuss this aspect further ahead, but for now, according to […]

Read full article at https://wccftech.com/china-catch-22-is-pushing-nvidia-to-the-brink/
