Ghost Autonomy, backed by OpenAI, plans to advance self-driving technology by incorporating multimodal large language models (LLMs). Despite skepticism from experts who question whether LLMs are suited to autonomous driving, Ghost remains committed. The collaboration with OpenAI aims to fine-tune existing models for reliability and performance, especially in complex scenarios like construction zones. While challenges persist, Ghost and OpenAI believe that LLMs could become a crucial tool in reshaping the future of autonomous vehicles.
The self-driving car industry is currently undergoing a critical period, marked by setbacks and skepticism. Cruise’s recent fleet recall and suspension from operating autonomous vehicles in California, coupled with public protests against testing in San Francisco, highlight the challenges faced by the sector. However, Ghost Autonomy, a startup backed by OpenAI, claims to have a solution that could address the safety concerns associated with self-driving technology. By incorporating multimodal large language models (LLMs), Ghost aims to enhance the capabilities of autonomous vehicles and reshape the industry’s trajectory.
Ghost Autonomy’s Vision:
Ghost Autonomy specializes in developing autonomous driving software for automakers, and it recently unveiled plans to explore the integration of multimodal LLMs into self-driving technology. The company’s collaboration with OpenAI, facilitated through the OpenAI Startup Fund, provides Ghost with early access to OpenAI systems and Azure resources, along with a $5 million investment. According to Ghost’s co-founder and CEO, John Hayes, LLMs offer a unique approach to understanding complex scenarios, addressing the limitations of current models.
How Ghost Utilizes LLMs in Self-Driving Technology:
Ghost’s strategy involves piloting software that uses multimodal models for higher-complexity scene interpretation. By relying on LLMs that understand both text and images, the software aims to make sophisticated road decisions, such as suggesting lane changes based on images captured by car-mounted cameras. Hayes emphasizes that Ghost will fine-tune existing models and develop new ones to enhance reliability and performance on the road, especially in challenging scenarios like construction zones.
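To make the approach concrete, here is a minimal sketch of how a camera frame and a driving question might be packaged into a multimodal request. The message schema mirrors OpenAI's publicly documented image-input chat format, but the model name, prompt, and surrounding logic are illustrative assumptions, not Ghost's actual pipeline; the code only builds the payload and does not call any API.

```python
import base64
import json

def build_scene_request(frame_bytes: bytes, prompt: str) -> dict:
    """Pair a camera frame with a text question in a chat-style
    multimodal payload. Model name and schema details are a sketch
    based on OpenAI's image-input format, not Ghost's real stack."""
    encoded = base64.b64encode(frame_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{encoded}"
                        },
                    },
                ],
            }
        ],
        "max_tokens": 100,
    }

# Example: ask whether a lane change is advisable given the current frame.
fake_frame = b"\xff\xd8\xff"  # stand-in for real JPEG bytes from a car camera
payload = build_scene_request(
    fake_frame,
    "A construction zone is ahead. Should the vehicle change lanes, and why?",
)
print(json.dumps(payload)[:60])
```

In a real system, the payload would be sent to a fine-tuned model and the free-text answer parsed into a driving decision; the hard problems Ghost describes, reliability and validation, live in that downstream step.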
Skepticism from Experts:
Despite Ghost Autonomy’s optimism, some experts remain skeptical about applying LLMs to self-driving technology. Os Keyes, a Ph.D. candidate at the University of Washington, dismisses Ghost’s invocation of LLMs as a marketing buzzword, suggesting that LLMs may not be the most efficient tool for autonomous driving. Keyes compares the situation to using a fancy tool for a simple task, questioning whether LLMs are appropriate for the challenges of vehicular autonomy.
Mike Cook, a senior lecturer at King’s College London, echoes Keyes’ concerns, emphasizing that multimodal models are not yet a perfected science. He argues against treating LLMs as a silver bullet in computer science, expressing reservations about their use in driving, a complex and potentially hazardous task. Cook points to the difficulty of validating the safety of LLMs even for routine tasks like answering essay questions, and questions the premature application of this technology to autonomous driving.
Ghost Autonomy and OpenAI’s Response:
Despite skepticism, Ghost Autonomy and OpenAI remain resolute in their pursuit of integrating LLMs into self-driving technology. Brad Lightcap, OpenAI’s COO and manager of the OpenAI Startup Fund, highlights the potential of multimodal models to expand the applicability of LLMs to various use cases, including autonomy and automotive applications. He envisions these models combining video, images, and sounds to understand and navigate complex environments.
John Hayes defends Ghost’s approach, asserting that LLMs could enable autonomous driving systems to reason about scenes holistically, utilizing broad-based world knowledge to navigate diverse and challenging situations. Hayes claims that Ghost is actively testing multimodal model-driven decision-making through its development fleet and collaborating with automakers to validate and integrate these models into Ghost’s autonomy stack.
While Ghost Autonomy’s vision of incorporating multimodal LLMs into self-driving technology is ambitious, skepticism remains prevalent among experts in the field. The challenges faced by well-established players like Cruise and Waymo underscore the complexities of achieving safe and reliable autonomous driving. Ghost’s commitment to refining and validating LLMs for specific applications in collaboration with automakers will ultimately determine the success of their approach. As the self-driving industry navigates its reckoning, Ghost Autonomy’s journey with LLMs could shape the future of autonomous vehicles.