Today, 12 May 2026 - Main stream

Zyphra & AMD Launch New Open AI Platform Powered By 15MW MI355X GPUs With Expansion Planned To MI450 & Beyond

11 May 2026 at 20:40

[Image: a server room scene with the text "Zyphra AMD 15 MW AMD MI355X GPUs"]

Zyphra has partnered with AMD to launch its brand-new, US-based open-source AI platform, a rival to DeepSeek, powered entirely by MI355X GPUs. Zyphra Cloud is an all-AMD powerhouse with 15 MW of compute through the MI355X and expansion planned to future GPUs. The Zyphra AI Cloud platform is an inference-optimized service for frontier open-weight models such as DeepSeek V3.2, Kimi K2.6, and GLM 5.1. The platform combines custom kernels, novel long-context inference algorithms, and advanced parallelism techniques to deliver high-throughput, low-latency AI performance, well suited to agentic, deep […]
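The article quotes only the 15 MW power figure, not a GPU count. As a rough, purely illustrative sketch, one can back out an order-of-magnitude accelerator count from that budget; the per-GPU power, PUE, and per-GPU node overhead below are all assumptions, not figures from the article:

```python
# Back-of-envelope: accelerators that fit in a 15 MW power budget.
# Only site_power_w comes from the article; everything else is assumed.
site_power_w = 15e6      # 15 MW total, per the article
pue = 1.3                # assumed power usage effectiveness (cooling, losses)
gpu_power_w = 1400       # assumed per-accelerator board power
node_overhead = 1.5      # assumed factor for CPUs, NICs, fans per GPU

it_power_w = site_power_w / pue
gpus = it_power_w / (gpu_power_w * node_overhead)
print(f"~{gpus:,.0f} GPUs")  # on the order of a few thousand accelerators
```

Varying the assumed overheads shifts the answer by perhaps 2x in either direction, but the cluster plausibly lands in the low thousands of GPUs.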

Read full article at https://wccftech.com/zyphra-amd-launch-open-ai-platform-powered-by-15mw-mi355x-gpus/

Before yesterday - Main stream

China’s LineShine Supercomputer To Hit 2 ExaFlops With 47,000 CPUs and Zero Reliance on Foreign Chips

27 April 2026 at 16:20

China Unveils Its 2 ExaFlops Supercomputer, Housing 47,000 CPUs Including Huawei Kunpeng & The World's Largest Liquid Cooling System

China has unveiled its new supercomputer, called LineShine, in Shenzhen, which will deliver 2 ExaFlops of compute performance. The Chinese LineShine supercomputer is set to become the fastest in the world, exceeding 2 ExaFlops. During a conference at the National Supercomputing Center in Shenzhen today, the fastest domestic supercomputer project was announced. Known as LineShine, the supercomputer will be built in two phases and will deliver over 2 ExaFlops of compute. Currently, the world's fastest supercomputer is the AMD-based El Capitan, which has a peak compute of 2.8 ExaFlops. Coming to the specifications, the LineShine supercomputer is purely a […]
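The two headline numbers in the excerpt can be combined into a quick sanity check on per-socket performance. This is simple arithmetic on the quoted figures only; it says nothing about the actual CPU model mix:

```python
# Rough per-socket arithmetic for the quoted LineShine figures.
total_flops = 2e18   # 2 ExaFlops peak, per the article
cpus = 47_000        # CPU count, per the article

per_cpu = total_flops / cpus
print(f"~{per_cpu / 1e12:.1f} TFLOPs per CPU")
```

A figure in the tens of TFLOPs per socket is high for a general-purpose CPU, which hints that the peak number likely counts wide-vector or accelerator-augmented silicon rather than scalar cores alone.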

Read full article at https://wccftech.com/chinas-lineshine-supercomputer-2-exaflops-47000-cpus-zero-reliance-on-foreign-chips/

DeepSeek V4 Squeezes Million-Token Context Into 10% of V3.2’s Memory, Escalating China’s AI Efficiency War With OpenAI

24 April 2026 at 18:24

Chinese artificial intelligence lab DeepSeek claims that its latest V4 model significantly reduces the computing and memory resources required for inference, according to its release notes. DeepSeek says the V4 model needs just 27% of the single-token inference FLOPs and 10% of the key-value (KV) cache of its predecessor, the DeepSeek V3.2 model. The smaller cache footprint conserves memory and increases the context length available to developers building on the model. How DeepSeek V4 Slashes Compute and Memory Costs: In its release notes for DeepSeek V4, DeepSeek outlines that the […]
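To see why a 10x KV-cache reduction matters at million-token context, consider a standard transformer KV-cache sizing calculation. The model shape below (layers, KV heads, head dimension) is hypothetical and chosen only for illustration; the sole figure taken from the article is the claimed 10% cache ratio:

```python
# Illustrative KV-cache sizing at 1M-token context.
# The model dimensions are assumed; only the 10% ratio is from the article.
layers, kv_heads, head_dim = 61, 8, 128   # hypothetical transformer shape
bytes_per_elem = 2                        # fp16/bf16 storage
tokens = 1_000_000

# K and V each store layers * kv_heads * head_dim values per token,
# hence the factor of 2.
baseline = 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens
v4 = 0.10 * baseline                      # claimed 10% of predecessor cache
print(f"baseline ~{baseline / 2**30:.1f} GiB, V4 ~{v4 / 2**30:.1f} GiB")
```

Under these assumptions the cache drops from roughly a couple hundred GiB to a few tens of GiB, which is the difference between spilling across many accelerators and fitting comfortably on a small number of them.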

Read full article at https://wccftech.com/deepseek-v4-cuts-kv-cache-by-90-at-1m-tokens-but-aggressive-compression-could-risk-needle-in-a-haystack-failures/

Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

23 February 2026 at 19:57
Anthropic accuses DeepSeek, Moonshot, and MiniMax of using 24,000 fake accounts to distill Claude’s AI capabilities, as U.S. officials debate export controls aimed at slowing China’s AI progress.