
The artificial intelligence (AI) landscape is changing rapidly, and deep-pocketed incumbents like OpenAI and Google may soon face a new threat: rapidly multiplying open-source projects that push the state of the art and leave the dominant players on the defensive. While OpenAI and Google have a head start in funding and infrastructure, a leaked internal memo from Google makes it clear that neither company still has a moat.
The pace of OpenAI’s releases may seem impressive, but like new versions of iOS or Photoshop, they still arrive on a scale of months and years. The leak of Meta’s foundation language model, LLaMA, was a turning point. Within weeks, people tinkering on laptops and penny-a-minute servers had added core capabilities like instruction tuning, multiple modalities, and reinforcement learning from human feedback. OpenAI and Google were probably poking around the code too, but they couldn’t replicate the level of collaboration and experimentation happening in subreddits and Discords.
Currently, OpenAI and other companies are pursuing a business strategy modeled on Software as a Service (SaaS). But customers are starting to wonder why they’re paying for the largest, most general-purpose AI model ever created when all they want to do is match the language of a contract against a couple of hundred others. If GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?
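To make the point concrete: a task like "find the contract clause most similar to this one" can be sketched with nothing more than bag-of-words cosine similarity from the Python standard library. This is a deliberately crude illustration, not anyone's production system; the sample clauses and function names are invented for the example, and a real pipeline would use proper tokenization and term weighting (or a small open-source embedding model), but none of it requires a frontier-scale API.

```python
import math
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Crude whitespace tokenization; a real system would normalize
    # and weight terms (e.g. TF-IDF) or use learned embeddings.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalized by vector magnitudes.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_match(query: str, corpus: list[str]) -> str:
    # Return the corpus document most similar to the query clause.
    q = bag_of_words(query)
    return max(corpus, key=lambda doc: cosine_similarity(q, bag_of_words(doc)))

# Hypothetical contract clauses, invented for illustration.
contracts = [
    "the supplier shall deliver goods within thirty days of purchase order",
    "either party may terminate this agreement with ninety days written notice",
    "all disputes shall be resolved by binding arbitration in new york",
]
print(best_match("termination requires ninety days notice", contracts))
```

A few dozen lines handle the narrow matching task the customer actually cares about, which is exactly why a cheap "fruit stand" alternative starts to look attractive.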
For a business like OpenAI, this undermines the entire premise of the business: that these systems are so hard to build and run that they have to do it for you. It starts to look like these companies picked and engineered a version of AI that fit their existing business model, not vice versa! In the AI world, it didn’t take long for a large language model to run on a Raspberry Pi. Google and OpenAI weren’t the ones doing that optimization, and at this rate they may never be.
Being a Walmart has its benefits, and companies don’t want to hunt down a bespoke solution that performs their task 30% faster if they can get a decent price from their existing vendor without rocking the boat. And few enterprise IT departments are going to cobble together an implementation of Stability’s open-source derivative-in-progress of a quasi-legal leaked Meta model over OpenAI’s simple, effective API. The value of inertia in business cannot be overstated!
However, the distance from the first situation to the second is going to be much shorter than anyone expects, and there doesn’t appear to be a damn thing anybody can do about it. Open-source projects like LLaMA are multiplying rapidly and pushing the boundaries of AI; the dominant players will have to innovate at an even faster pace to stay ahead.