There’s been an explosion of activity since the public launch of ChatGPT, followed by the leak of the LLaMA weights and open access to the OpenAI API. LLMs have enabled an incredible number of applications that couldn’t be built before, or at least not with the current level of success.
Human language is going to become the universal API, which we’ll use to communicate with the whole ecosystem of software applications. This is something I read from PolyAI in this post https://poly.ai/our-voice-assistant-spoke-to-google-duplex-heres-what-happened/ and, with the advent of LLMs, I think we are not too far away from it. I also don’t think we are far from having the possibility to run a personal LLM, or another AI-related variant, directly on our smartphones.
Imagine saying to your phone: “I’d like to go to Italy for three weeks in July or August, send me some options” and then receiving those options by email or some other medium. But instead of a dumb application simply searching for all the possible tickets in the given date window, your personal AI knows your preferences and can also negotiate for you, interacting with a whole set of providers in order to pick the one offering you the best possible option. On the other side of the interaction there will be another AI agent, trying to offer you something that aligns with its business profit goals.
This is why I think we are going to need a private, personal AI. Google, Facebook, OpenAI/Microsoft, Apple and other companies will surely offer to create our personal AI on their platforms, but this will have at least two problems: we would be sharing really sensitive and valuable data (more than nowadays, because currently our personal data is distributed across different applications), and we would have no leverage when negotiating with other AI agents, because our personal AI would have the same or similar “intelligence” as that of every other user on the same platform.
And that’s why I’ll be focusing on this problem from now on.