Amazing New AI Chip: 20x Faster Than NVIDIA

In a significant advance for artificial intelligence, Cerebras Systems has introduced the Cerebras Inference platform. It delivers AI inference speeds up to 20 times faster than solutions built on NVIDIA's H100 GPUs, thanks to Cerebras' wafer-scale processor, the Wafer Scale Engine 3 (WSE-3). This innovation promises to transform how AI models are deployed across a wide range of applications.

Performance Metrics

The Cerebras Inference platform sets a new standard for speed, processing 1,800 tokens per second on the Llama 3.1-8B model and 450 tokens per second on the Llama 3.1-70B model. Both figures are well beyond what NVIDIA H100-based solutions achieve in hyperscale cloud environments.
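
To put those throughput figures in perspective, here is a quick back-of-the-envelope sketch. The throughput numbers come from the announcement above; the 500-token response length is an illustrative assumption, not a Cerebras figure:

```python
# Rough estimate of end-to-end generation time for a typical chat response,
# using the throughput figures quoted above.
THROUGHPUT = {
    "Llama 3.1-8B": 1800,   # tokens per second (Cerebras figure)
    "Llama 3.1-70B": 450,   # tokens per second (Cerebras figure)
}

RESPONSE_TOKENS = 500  # assumed length of a typical answer (illustrative)

for model, tokens_per_sec in THROUGHPUT.items():
    seconds = RESPONSE_TOKENS / tokens_per_sec
    print(f"{model}: ~{seconds:.2f} s for a {RESPONSE_TOKENS}-token response")
```

At these rates, a full multi-paragraph answer arrives in roughly a quarter of a second on the 8B model and just over a second on the 70B model.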

Technology Behind the Speed

The heart of Cerebras' performance is the WSE-3, a wafer-scale processor with 900,000 cores and 44 GB of on-chip SRAM. Where conventional GPUs are cut from a silicon wafer into many individual chips, Cerebras keeps the entire wafer intact as a single processor. This minimizes data movement and yields roughly 7,000 times the memory bandwidth of an NVIDIA H100, attacking the central bottleneck in generative AI inference: memory bandwidth.
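
Why bandwidth dominates is easy to see with a standard roofline-style estimate: generating one token requires streaming every model weight through the compute units once, so per-user throughput is capped at memory bandwidth divided by model size. A sketch under stated assumptions (the bandwidth figures are approximate published specs; 16-bit weights are an illustrative assumption):

```python
# Roofline sketch: per-token generation must read every weight once,
# so tokens/s <= memory_bandwidth / bytes_of_weights.
PARAMS_70B = 70e9
BYTES_PER_WEIGHT = 2             # 16-bit weights (assumption)
model_bytes = PARAMS_70B * BYTES_PER_WEIGHT

H100_HBM_BW = 3.35e12            # ~3.35 TB/s HBM on one H100 (approx. spec)
WSE3_SRAM_BW = 21e15             # ~21 PB/s on-wafer SRAM (approx. Cerebras spec)

for name, bw in [("H100 (HBM)", H100_HBM_BW), ("WSE-3 (SRAM)", WSE3_SRAM_BW)]:
    print(f"{name}: <= {bw / model_bytes:,.0f} tokens/s per user (bandwidth bound)")
```

These are ceilings, not performance predictions, but the gap is the point: a single bandwidth-bound H100 tops out in the tens of tokens per second for a 70B model, while the quoted 450 tokens per second sits comfortably under the wafer-scale ceiling.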

Advantages Over Traditional GPUs

The Cerebras Inference platform provides several key advantages:

  • Speed: By keeping the entire AI model on a single chip, Cerebras avoids off-chip memory access, a major bottleneck in traditional GPU architectures. This leads to quicker token generation and lower latency, perfect for real-time applications.
  • Memory Bandwidth: With its massive SRAM and integrated computation, the WSE-3 chip ensures efficient data access and processing, reducing the need for high-speed, power-consuming interfaces.
  • Cost Efficiency: At 10 cents per million tokens for the Llama 3.1-8B model and 60 cents per million tokens for the Llama 3.1-70B model, Cerebras Inference is competitively priced against current GPU solutions (a quick cost illustration follows this list).
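
As a rough illustration of that pricing, here is what a sizable workload would cost per month. The prices are from the announcement; the 2-billion-token monthly volume is an assumed workload, not a Cerebras figure:

```python
# Monthly cost at the per-million-token prices quoted above.
PRICE_PER_MILLION = {"Llama 3.1-8B": 0.10, "Llama 3.1-70B": 0.60}  # USD

monthly_tokens = 2_000_000_000  # assumed workload (illustrative)

for model, price in PRICE_PER_MILLION.items():
    cost = monthly_tokens / 1_000_000 * price
    print(f"{model}: ${cost:,.2f}/month for {monthly_tokens:,} tokens")
```

That works out to $200 per month on the 8B model and $1,200 on the 70B model for two billion tokens.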

Accessibility and Integration

The Cerebras Inference platform is built for broad accessibility. It is exposed through an API compatible with the OpenAI Chat Completions API, so existing applications can migrate with minimal code changes. The service comes in three pricing tiers (Free, Developer, and Enterprise) to cover everything from basic access to custom enterprise deployments.
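
Because the API follows the OpenAI Chat Completions format, migration can be as small as swapping the base URL and model name. A minimal sketch using the official `openai` Python SDK (the base URL and model identifier shown are assumptions; verify them against Cerebras's current documentation):

```python
# Minimal Chat Completions call against an OpenAI-compatible endpoint.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed Cerebras endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # your Cerebras key
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Explain wafer-scale chips in one sentence."}
    ],
)
print(response.choices[0].message.content)
```

Only the client construction changes; the rest of an existing OpenAI-based codebase can stay as-is.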

Industry Impact

Cerebras Inference is poised to reshape industries that depend on real-time AI interactions. It can sharpen customer service with instant, accurate responses, and it opens headroom for natural language processing and real-time analytics, including support for larger and more complex models such as the 405-billion-parameter Llama 3.1 model.

User and Industry Feedback

Early adopters and industry experts have praised the Cerebras Inference platform for its performance. Kim Branson, SVP of AI/ML at GlaxoSmithKline, emphasized its significant impact on AI applications. Russell D'Sa, CEO and Co-Founder of LiveKit, noted its potential for creating ultra-low-latency, more human-like AI experiences. Denis Yarats, CTO and Co-Founder of Perplexity, highlighted the new user-interaction paradigms that ultra-fast inference could unlock for intelligent answer engines.

In summary, Cerebras' new inference platform marks a major leap for AI infrastructure. With its combination of speed, efficiency, and cost-effectiveness, it is well placed to power the next generation of AI applications that demand complex, real-time performance.