TL;DR
- A new report suggests that Amazon is struggling to develop a large language model (LLM) that can compete with those from OpenAI and Google.
- The model is set to power an already-announced Alexa upgrade that brings conversational abilities to the assistant.
- Limited datasets and a lack of vision have stalled progress, leading many talented researchers and scientists to leave.
A new report suggests that Amazon’s generative AI-powered Alexa is nowhere near completion and may never see the light of day. The report, compiled by Fortune through interviews with over a dozen former employees, lists several hurdles the company has faced while developing its conversational AI. Amazon demoed its next-generation Alexa late last year and promised to launch a limited preview in the US “soon.” Eight months have passed, however, with no sign of the new Alexa.
According to the report, a lack of high-quality data has slowed Amazon’s large language model (LLM) efforts tremendously. The e-commerce company lacks extensive datasets because it doesn’t operate a search engine like Google or social networks like Meta. As a result, its LLM was trained on just three trillion tokens, the smallest units of data a language model processes. For comparison, OpenAI’s GPT-4 is believed to have been trained on 13 trillion tokens.
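To make those token figures concrete, here is a minimal illustrative sketch (not from the report) that counts the tokens in a short sentence using OpenAI’s open-source tiktoken library; the “cl100k_base” encoding shown is the one associated with GPT-4-era models. Training corpora are measured in these same units, which is why the gap between three trillion and 13 trillion tokens matters.

```python
# Illustrative only: counting tokens with OpenAI's tiktoken library.
# pip install tiktoken
import tiktoken

# cl100k_base is the encoding used by GPT-4-era OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "Alexa, turn on the living room lights."
token_ids = enc.encode(text)

print(token_ids)       # a short list of integer token IDs
print(len(token_ids))  # how many tokens this one sentence costs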
A model codenamed Olympus, reportedly in development since late 2023, was previously rumored to have 2 trillion parameters, twice as many as GPT-4. A higher parameter count generally signals a larger and more capable model. However, a former researcher called Olympus “a joke,” adding that the current Alexa LLM stands at 100 billion parameters instead. A smaller model can still deliver excellent results; I recently tested the 8-billion-parameter Llama 3 model and found its responses on par with GPT-3.5.
But that’s not the only problem holding back Amazon’s generative AI efforts, as employees have also blamed the company’s senior executives for mismanagement. One stated:
It seems the leadership doesn’t know anything about LLMs—they don’t know how many people they need and what should be the expected time to complete each task for building a successful product like ChatGPT.
Amazon’s closest competitor in the digital assistant race has been Google, which has stayed competitive by pivoting fully to Gemini and phasing out the Assistant brand. Today, Gemini can perform real-world tasks like setting reminders and interacting with smart home devices. Amazon’s central LLM team, on the other hand, reportedly struggled to get these features working due to differing visions across its various sub-teams.
According to a machine learning scientist, efforts to fine-tune Amazon’s language model for smart home tasks would be rendered nearly useless once the Alexa Music team had finished fine-tuning it for their own goals. The company reportedly has a dozen Alexa-related teams, each with hundreds of employees working toward different goals like shopping and entertainment.
This may explain the feature set disparity between Gemini and the Google Assistant, the latter of which the company plans to retire completely at some point. The search giant also hasn’t been as ambitious as Amazon — we haven’t heard any promises to bring the new conversational AI to smart speakers.
An Amazon spokesperson told Fortune that the company hasn’t wavered in its mission to build the “world’s best personal assistant.” It also confirmed that the new LLM-based Alexa remains under active development, with ongoing customer testing of the “Let’s Chat” feature demoed late last year. Former employees, meanwhile, say that even if the revamped Alexa does roll out, it will be significantly limited compared to other AI chatbots like ChatGPT.