The biggest show in AI this week is Nvidia’s GTC developer conference in San Jose, Calif. At GTC yesterday, Nvidia CEO Jensen Huang unveiled the company’s newest graphics processing unit (GPU), the kind of chip that has become the workhorse of AI. The forthcoming Blackwell GPU will have 208 billion transistors, far exceeding the 80 billion in the company’s current top-of-the-line H100. That makes it twice as fast at training AI models and five times faster at inference, the term for generating an output from an already trained AI model. Nvidia is also offering a powerful new GB200 “superchip,” which couples two Blackwell GPUs with its Grace CPU and supersedes the Grace Hopper MGX units that Nvidia currently sells for use in data centers.

What’s interesting about the Blackwell is its power profile, and how Nvidia is using it to market the chip. Until recently, the trend was that more powerful chips also consumed more energy, and Nvidia didn’t spend much effort making energy efficiency a selling point, focusing instead on raw performance. But in unveiling the Blackwell, Huang emphasized that the new GPU’s greater processing speed means far lower power consumption during training than with the H100 and earlier A100 chips. He said training the latest ultra-large AI models using 2,000 Blackwell GPUs over 90 days would draw 4 megawatts of power, compared to the 8,000 older GPUs needed for the same job over the same period, which would draw 15 megawatts. Power consumption drives up both electricity costs and carbon emissions, and those two factors have made many companies reluctant to fully embrace the generative AI revolution, worried about the expense and about doing damage to net zero sustainability pledges. Nvidia knows this, hence its sudden emphasis on power consumption.
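Huang’s comparison can be sanity-checked with back-of-the-envelope arithmetic, converting each cluster’s sustained power draw over the 90-day run into total energy. The megawatt figures and cluster sizes below are the ones quoted above; the conversion to megawatt-hours is a simple illustration, not an Nvidia calculation.

```python
# Rough energy comparison for the 90-day training run Huang described.
# Energy (MWh) = constant power draw (MW) x duration (hours).

HOURS_PER_DAY = 24

def training_energy_mwh(power_mw: float, days: float) -> float:
    """Total energy in megawatt-hours for a constant power draw."""
    return power_mw * days * HOURS_PER_DAY

blackwell = training_energy_mwh(power_mw=4, days=90)    # 2,000 Blackwell GPUs
older = training_energy_mwh(power_mw=15, days=90)       # 8,000 older GPUs

print(f"Blackwell cluster: {blackwell:,.0f} MWh")       # 8,640 MWh
print(f"Older cluster:     {older:,.0f} MWh")           # 32,400 MWh
print(f"Ratio:             {older / blackwell:.2f}x")   # 3.75x
```

By this measure the Blackwell cluster would use roughly a quarter of the energy of the older one for the same training job.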
The company has also pointed out that many AI experts working on open-source models have found ways to mimic some aspects of the performance of much larger, energy-intensive models such as GPT-4, using models that are far smaller and consume far less power.
Full story: Why Nvidia’s new Blackwell chip might be an environmental nightmare.