Jensen Huang, founder and CEO of Nvidia, revealed a pivotal decision in 2018 that flew under the radar at the time but has since propelled the company into an entirely new realm, fundamentally altering the trajectory of the tech industry. Although the gamble has paid off tremendously, Huang emphasized that this is merely the dawn of an AI-driven era, one chiefly powered by Nvidia’s cutting-edge hardware. Was this audacious move luck or strategic brilliance? The answer, it appears, is both.
These insights came during Huang’s keynote address at SIGGRAPH in Los Angeles. The crucial turning point, five years earlier, was the decision to embrace AI-powered image processing in the form of ray tracing and intelligent upscaling: RTX and DLSS, respectively. “We recognized the limitations of rasterization,” Huang said, referring to the conventional method of rendering 3D scenes. He described 2018 as a “bet the company” moment that required reinventing hardware, software, and algorithms alike. “In the course of revolutionizing computer graphics with AI, we were simultaneously revolutionizing GPUs for AI applications.”
Although the integration of ray tracing and DLSS is still in progress across diverse consumer GPUs and gaming scenarios, the architecture conceived for this purpose has harmoniously aligned with the burgeoning machine learning community’s needs.
To meet the soaring computational demands of training increasingly complex generative models, the answer wasn’t simply bolting modest GPU capacity onto traditional data centers. It was systems like the H100, designed from the outset for scalable operation. That shift made Nvidia a cornerstone of the AI domain, with server and workstation sales keeping pace with fervent demand.
Huang, however, firmly asserted that this achievement is merely a prelude. Emerging models must not only be trained but also deployed in real time to millions, or even billions, of users on a regular basis.
“Human” emerges as the new programming language, ushering in an era where virtually every application integrates a large language model (LLM) at its front end. From visual effects to rapidly digitizing sectors like manufacturing and heavy industry, the incorporation of a natural language interface becomes inevitable.
Huang illustrated this profound shift, envisioning fully software-defined and robotic factories constructing autonomous vehicles. “It’s the emergence of robotically devised robots crafting robots,” he quipped, foreseeing a future shaped by these technological leaps.
Opinions may differ, but Huang’s outlook, plausible as it is, strongly aligns with Nvidia’s interests. Despite uncertainty about how far LLM integration will go, it’s hard to argue against its adoption to some degree, and even a conservative estimate of its applications and user base underscores the pressing need for substantial investment in new computing resources.
Pouring resources into outdated computing infrastructure like CPU-centric racks seems imprudent when advanced AI development hardware like the GH200 can accomplish the same tasks at a fraction of the cost and power consumption. Huang exhibited a captivating video featuring Grace Hopper computing units, elegantly assembling into a blade, then a rack, and finally an entire row of GH200s, interconnected at extraordinary speeds, effectively constituting the “world’s largest single GPU.” This collective assembly boasts a full exaflop of ML-focused computational might.
He playfully remarked, “This is not a metaphor; it’s actual scale,” commanding the stage’s center for dramatic effect. “And it might even run Crysis.”
These GH200 configurations are poised to form the foundational unit of the forthcoming AI-driven digital industry, Huang postulated.
Closing his speech, he humorously shared, “I don’t recall who said it, but… ‘the more you acquire, the more you economize.’ If I could leave you with one takeaway from my discourse today, let it be that.” His statement elicited laughter from the SIGGRAPH audience, heavily steeped in the gaming world.
Notably absent from Huang’s address were the multifaceted challenges inherent to AI, regulatory considerations, and the fluid nature of AI as a concept, which has undergone numerous transformations over the past year. It is undeniably a rosy perspective, but a fitting one for a company that thrives by supplying the essential tools during a technological gold rush.