Ghost, now OpenAI-backed, claims LLMs will overcome self-driving setbacks -- but experts are skeptical
It's not hyperbolic to say that the self-driving car industry is facing a reckoning.
Just this week, Cruise recalled its entire fleet of autonomous cars after a grisly accident involving a pedestrian that led the California DMV to suspend the company from operating driverless robotaxis in the state. Meanwhile, activists in San Francisco have taken to the streets -- literally -- to immobilize driverless cars as a form of protest against the city being used as a testing ground for the emerging technology.
But one startup says it holds the key to safer self-driving technology -- and thinks that this key will convince the naysayers.
Ghost Autonomy, a company building autonomous driving software for automaker partners, this week announced that it plans to begin exploring the applications of multimodal large language models (LLMs) -- AI models that can understand text as well as images -- in self-driving. To realize this, Ghost has partnered with OpenAI through the OpenAI Startup Fund to gain early access to OpenAI systems and Azure resources from Microsoft, OpenAI's close collaborator, plus a $5 million investment.
"LLMs offer a new way to understand 'the long tail,' adding reasoning to complex scenes where current models fall short," Ghost co-founder and CEO John Hayes told TechCrunch in an email interview. "The use cases for LLM-based analysis in autonomy will only grow as LLMs get faster and more capable."
But how, exactly, is Ghost applying AI models designed to explain images and generate text to controlling autonomous cars? According to Hayes, Ghost is piloting software that relies on multimodal models to "do higher complexity scene interpretation," suggesting road decisions (e.g. "move to the right lane") to car-controlling hardware based on pictures of road scenes from car-mounted cameras.
"At Ghost, we'll be working to fine-tune existing models and training our own models to maximize reliability and performance on the road," Hayes said. "For example, construction zones have unusual components that can be difficult for simpler models to navigate -- temporary lanes, flagmen holding signs that change, and complex negotiation with other road users. LLMs have been shown to be able to process all of these variables in concert with human-like levels of reasoning."
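To make the architecture Hayes describes more concrete, here is a minimal, purely illustrative sketch of that kind of pipeline: a camera frame goes to a multimodal model, which returns a structured driving suggestion that a downstream control layer vets before acting. Every name here is hypothetical, and the model call is stubbed out -- none of this reflects Ghost's actual software or OpenAI's production setup.

```python
import json

# Hypothetical sketch only: camera frame -> multimodal model -> vetted
# maneuver suggestion. The model call is a stub; no names here come from
# Ghost's or OpenAI's real systems.

# Whitelist of maneuvers the control layer will accept from the model.
ALLOWED_MANEUVERS = {"keep_lane", "move_left", "move_right", "slow_down", "stop"}

def query_multimodal_model(frame_jpeg: bytes) -> str:
    """Stand-in for a real multimodal LLM call (an API accepting an image
    plus a text prompt). Returns the model's raw JSON reply as a string."""
    # A real system would send `frame_jpeg` to a hosted model here.
    return json.dumps({
        "maneuver": "move_right",
        "reason": "construction cones ahead in current lane",
    })

def suggest_maneuver(frame_jpeg: bytes) -> str:
    """Parse and sanity-check the model's suggestion before it reaches the
    vehicle-control layer; fall back to a conservative default otherwise."""
    try:
        reply = json.loads(query_multimodal_model(frame_jpeg))
        maneuver = reply.get("maneuver", "")
    except json.JSONDecodeError:
        maneuver = ""
    # Never pass free-form, unvetted model output straight to the car.
    return maneuver if maneuver in ALLOWED_MANEUVERS else "slow_down"

print(suggest_maneuver(b"<jpeg bytes>"))  # prints "move_right" with the stub
```

The whitelist-and-fallback step reflects the safety concern the skeptics raise below: even in a sketch, an LLM's output is treated as an untrusted suggestion, not a command.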
The experts I spoke with are skeptical, however.
"[Ghost is] using 'LLM' as a marketing buzzword," Os Keyes, a Ph.D. candidate at the University of Washington focusing on law and data ethics, told TechCrunch via email. "Basically, if you take this pitch and replaced LLM with 'blockchain' and sent it back to 2016, it would be just as plausible -- and just as obviously a boondoggle."
Keyes posits that LLMs are simply the wrong tool for self-driving. They weren't designed or trained for this purpose, he asserts, and may even be a less efficient way of solving some of the outstanding challenges in vehicular autonomy.
"It's sort of like hearing your neighbor has been using a sheaf of treasury notes to hold a table up," Keyes said. "You could do it that way, and it's certainly fancier than the alternative, but... why?"
Mike Cook, a senior lecturer at King's College London whose research focuses on computational creativity, agrees with Keyes' overall assessment. He notes that multimodal models themselves are far from a solved science; indeed, OpenAI's flagship model invents facts and makes basic mistakes that humans wouldn't, like copying down text incorrectly and getting colors wrong.
"I don't believe there's any such thing as a silver bullet in computer science," Cook said. "There's simply no reason to put LLMs at the center of something as dangerous and complex as driving a car. Researchers around the world are already struggling to find ways to validate and prove the safety of LLMs for fairly ordinary tasks like answering essay questions, and the idea that we should be applying this often unpredictable and unstable technology to autonomous driving is premature at best -- and misguided at worst."
But Hayes and OpenAI won't be dissuaded.
In a press release, Brad Lightcap, OpenAI's COO and manager of the OpenAI Startup Fund, is quoted as saying that multimodal models "have the potential to expand the applicability of LLMs to many new use cases," including autonomy and automotive. He adds: "With the ability to understand and draw conclusions by combining video, images and sounds, multimodal models may create a new way to understand scenes and navigate complex or unusual environments."
TechCrunch emailed questions to Lightcap via OpenAI's press relations but hadn't heard back as of publication time.
As for Hayes, he argues that LLMs could allow autonomous driving systems to "reason about driving scenes holistically" and "utilize broad-based world knowledge" to "navigate complex and unusual situations" -- even situations they haven't seen before. He claims that Ghost is actively testing multimodal model-driven decision making via its development fleet and working with automakers to "jointly validate" and integrate new large models into Ghost's autonomy stack.
"No doubt the current models are not quite ready for commercial use in cars," Hayes said. "There's still a lot of work to do to improve their reliability and performance. But this is exactly why there's a market for application-specific companies doing R&D on these general models. Companies like ours with lots of training data and a deep understanding of the application will dramatically improve upon the existing general models. The models themselves will also improve .... Ultimately, autonomous driving will require a complete system to deliver safety, with many different model types and functions. [Multimodal models] are just one tool to help make that happen."
That's promising a lot with unproven tech. Can Ghost deliver? Given that companies as well-financed and well-resourced as Cruise and Waymo are experiencing major setbacks many years into testing self-driving vehicles on the road, I'm not so sure.