“If you asked people what they wanted, they would have said a faster horse.” It takes one innovation to completely change the world, making it difficult to predict the future of technology. This is especially true for the upcoming wave of AI capabilities for new and existing Google apps.
The misunderstanding
Google wasn’t blind to what was to come. The company has publicly spoken about natural language understanding (NLU) and large language models (LLMs) at its last two I/O developer conferences, the biggest events of its year. In 2021 it introduced LaMDA (Language Model for Dialogue Applications) with interactive demos, including one where the model spoke as Pluto, and last year it showed LaMDA 2, which could be tried via the AI Test Kitchen app.
There’s also MUM (Multitask Unified Model), which could one day answer a question like, “I’ve hiked Mt. Adams, and I want to hike Mt. Fuji next fall. What should I do differently to prepare?” In the future, Google Lens could let you take a picture of a broken bike part and receive instructions on how to fix it.
Beyond detailing the technology, Sundar Pichai said that “natural conversational capabilities have the potential to make information and computing fundamentally more accessible and easier to use,” and that Google was looking to “[incorporate] better conversational features” into Search, Assistant, and Workspace.
But as the recent narrative attests, that alone wasn’t enough to stick in people’s minds. Rather, Google is guilty of not providing more concrete examples, the kind that capture the public’s imagination, of how these new AI capabilities could benefit the products we use every day.
Then again, even if May 2022 had offered more concrete examples, they would quickly have been overshadowed when ChatGPT launched later that year. OpenAI’s demos/products can be used (and paid for) today, and nothing is more concrete than hands-on experience. That has sparked a great deal of discussion about how direct answers will affect Google’s ad-based business model, the assumption being that users won’t need to click links when they already have the answer as a generated, summarized sentence.
What seems to have caught Google off guard was the speed at which competitors integrated these new AI advances into shipping apps. Given the Code Red, it’s clear the company didn’t see the need to roll out anything more than demos anytime soon. Safety and accuracy concerns are what Google has emphasized in its existing previews, and management is presumably quite worried about how quickly what’s on the market today could come for Google Search.
Future plans
On the same day the job cuts were announced, a leak appeared in The New York Times describing more than 20 AI products Google is slated to showcase at I/O 2023 in May of this year.
These announcements, presumably headlined by a “search engine with chatbot capabilities,” seem squarely intended to answer OpenAI. Of particular note is an “Image Generation Studio,” which appears to be a competitor to DALL-E, Stable Diffusion, and Midjourney, with a Pixel wallpaper creator likely an offshoot of it. Of course, Google will have to face head-on the backlash from artists that image-generating AI has provoked.
Aside from search (more on that later), none of the leaks seem to fundamentally change the way the average user interacts with Google products. Then again, that has never been Google’s approach, which has instead been to imbue existing products (or parts of them) with small conveniences as the technology becomes available.
Gmail, Google Chat, and Messages have Smart Reply, while Smart Compose in Docs and Gmail doesn’t write entire emails, but its autocomplete suggestions genuinely help.
On Pixel, Call Screen, Hold for Me, Direct My Call, and Clear Calling use AI to improve the phone’s original core use case, while on-device speech recognition enables a better Recorder app and a faster Assistant. Of course, there’s also computational photography and Magic Eraser.
That’s not to say Google isn’t using AI to create entirely new apps and services. Google Assistant is the result of advances in natural language understanding, while computer vision powers the search and organization in Google Photos, something still taken for granted more than seven years later.
More recently, there’s Google Lens, which lets you search visually by taking a picture and adding a question, while Google Maps’ Live View offers AR directions.
Search and AI
After ChatGPT, people are imagining a search engine that answers questions directly, with a sentence generated entirely for their query, rather than returning links or a “featured snippet” quoting a relevant website.
Looking at the industry conversation, I feel like I’m in the minority in my lack of enthusiasm for conversational experiences and direct answers.
One problem I foresee from my own experience is that I don’t always (or even often) want to read a full sentence to get an answer, especially when a single line in a knowledge panel will do: dates, times, or other simple facts.
Meanwhile, it will take time to trust the generations and summaries of any company’s chatbot search. At least with a featured snippet, you can immediately see the publication/source being quoted and decide whether you trust it.
In many ways, that direct sentence is what smart assistants have been waiting for. Today, Google Assistant reads out facts it already knows (dates, addresses, and the like from knowledge panels/graphs) and featured snippets. It’s a safe assumption that when you’re interacting by voice, you can’t immediately look at a screen and you want an answer right away.
I recognize that the history of technology is littered with iterative updates that get trampled by game-changing innovations, but I don’t feel the technology is there yet. It reminds me of the early days of voice assistants, which clearly tried to mimic humans. This coming wave of AI has a tinge of a human answering questions and performing tasks, but how long will the novelty last?