Google's cutting-edge AI technology has a familiar connection to the past - and in this case, that isn't a good thing.
Allow me to paint a purely hypothetical picture for you - for, erm, no reason in particular.
Imagine the following conversation about a groundbreaking new product someone is really excited to show you. I'll take the liberty of playing the role of you and responding as we go:
"Hey, look, this new magic instant dictionary is super convenient and easy to use!"
Oh, nice - so it gives you correct definitions and everything?
"Well, some of the time."
Huh. So sometimes it just doesn't tell you things?
"No, it always answers. It's just wrong with every fourth answer or so."
Weird. But you can tell when it's wrong, at least?
"No, it still says the answer really confidently, so you assume it's right. You just have to double-check its answers in a regular dictionary to be sure. Or, you know, just accept that you're going to have wrong definitions sometimes."
Isn't that...a problem?
"I mentioned how convenient and easy it is to use, right?"
Aaaand, scene.
You can probably sense where I'm going with this by now. But somehow, so many people don't seem to see this in the context of our current obsession with AI technology - specifically, the large-language model variety that's been all the rage since ChatGPT seeped into our public consciousness last year and Google got gung-ho about getting its Gemini equivalent out into the world.
My friend, we really need to talk.
The awkward asterisk with Google Gemini
Look - I'm no luddite. I love geeky goodies more than most and get embarrassingly excited about new tech advancements.
But for me, personally, it's the practical application of a new innovation that's the most interesting and important. Tech for tech's sake is neat, sure, but the best kind of technology is the kind that actually solves a relevant problem and makes our lives easier in some meaningful, even if minuscule, way.
And let's be brutally honest for a second: Google's Gemini system is not that technology. Not in its current form, anyhow, nor in the way Google is scrambling to cram it into every possible nook and cranny and have it act as the end-all answer for every imaginable tech purpose.
What's most frustrating of all is how few people - including, most of all, Google itself and the other companies pushing similar sorts of systems - are willing or able to acknowledge this.
The reality, though, is that large-language models like Gemini and ChatGPT are wildly impressive at a very small set of specific, limited tasks. They work wonders when it comes to unambiguous data processing, text summarizing, and other low-level, closely defined and clearly objective chores. That's great! They're an incredible new asset for those sorts of purposes.
But everyone in the tech industry seems to be clamoring to brush aside an extremely real asterisk to that - and that's the fact that Gemini, ChatGPT, and other such systems simply don't belong everywhere. They aren't at all reliable as "creative" tools or tools intended to parse information and provide specific, factual answers. And we, as actual human users of the services associated with this stuff, don't need this type of technology everywhere - and might even be actively harmed by having it forced into so many places where it doesn't genuinely belong.
That brings us back to our magic dictionary example from a minute ago. Would anyone in their right mind actually think that sounds like an appealing or advantageous real-world upgrade? Of course not. It's patently absurd - no two ways about it.
And yet, that's exactly the same scenario the Gemini-style AI tools are offering us as both virtual assistants and all-purpose search centers. For some reason, though - a pretty obvious reason, one might contend - the companies behind them are downplaying that reality as much as possible and trying to convince us that it's somehow all fine.
News flash: It isn't.
The Gemini reliability problem
Traditionally, tech teams have operated under a philosophy that something has to be damn near 100% reliable if it's gonna be effective and accepted by the masses. It's a lofty standard, but it actually makes a lot of sense: If you know something is going to give you a wrong answer or fail at what you need it to do even one out of every 10 times, you aren't going to be able to rely on it. You'll get frustrated with it quite quickly. And you'll ultimately stop using it.
Anecdotally speaking, it seems safe to say that Gemini and its contemporaries get things wrong much more often than that. Based on my own experiences and those I've heard from other folks, I'd say we'd be generous to claim they're right and reliable with high-quality answers, info, and output even 70% of the time.
But the worst part is that when they can't complete a task confidently, they don't give you an error or tell you they're unable to finish. They make something up and serve you incorrect information - just like our magic dictionary from a moment ago. It'd be completely comical if it weren't for the fact that companies like Google are pretending this isn't a problem and pushing these systems toward taking over as our phones' virtual assistants and the brains behind our online searches.
To be clear, it's not that they're somehow oblivious to this disconnect. All of these companies are covering themselves legally. Look closely, and you'll see a fine-print disclaimer beneath every AI system telling you that the system makes mistakes and that the onus is on you to double-check everything it tells you to confirm it's correct.
Erm, right. So you can rely on these systems for information - but then you need to go search somewhere else and see if they're making something up? In that case, wouldn't it be faster and more effective to, I don't know, simply look it up yourself in the first place? Maybe using the types of tools we had before these groundbreaking innovations came our way?
Even when you limit these systems to a small subset of specifically supplied documents or web pages, the results are wildly unpredictable. That's been my experience with Google's AI-powered NotebookLM service, which lets you upload your own private documents and ask questions about the associated data. I've tried inputting a bunch of my extremely cut-and-dried Android Upgrade Report Card data into the system and then asking it questions about that data, and it's returned fabricated, laughably inaccurate answers with an astounding degree of confidence.
It's not just me, either - or just Gemini, for that matter. The Verge Editor-in-Chief Nilay Patel shared an experience this week of asking the latest ChatGPT model to summarize an interview he'd done with Google's CEO - and, as he observed, it returned "a full hallucination complete with citations to things [they] did not talk about at all." Hell, even during its closely controlled on-stage presentation at Google I/O a week ago, Google featured factually inaccurate answers during a deliberate demo of Gemini's info-providing prowess.
How is any of this okay? The answer is simple: It isn't. And that brings us to the bigger issue here.
Gemini's minuses - and pluses
A Google UX design veteran who recently left the company shared some pointed words about this subject on LinkedIn earlier this week, saying that the AI projects he worked on within Google were "poorly motivated and driven by this panic that as long as it had 'AI' in it, it would be great":
This myopia is NOT something driven by a user need. It is a stone cold panic that they are getting left behind.
He went on to draw a parallel to the current situation with Gemini and the similar sort of "all in" sentiment around Google+ 13 years ago - when Google panicked about Facebook's then-rapid rise and the threat it posed to its business around the way people sought out information.
I was one of the few freaks who actually appreciated Google+, but there's no denying the frenzy around it was an ill-advised overreaction to an external factor. It was a new religion - a "this defines us and everything we do from this moment forward" pivot. And while G+ itself had its good qualities, it was ultimately that determination within Google to force it into every possible corner - whether or not it belonged there and whether its presence was a positive upgrade or a practical downgrade for Google's users - that doomed it from day one.
Beyond that, Google+ just wasn't the answer to a problem. It was a solution in search of a problem to solve. And, call me crazy, but that manner of thinking is starting to feel awfully familiar again. The main difference is that the stakes are substantially higher this time - with these generative AI systems actively serving up misinformation and threatening to exterminate the very industries they depend upon to exist.
The question we all have to ask ourselves is if we really want to accept this new "magic dictionary" that feeds us alarmingly inaccurate information alongside the occasionally convenient results. Google and the other companies chasing this AI fantasy are desperate to have us see these systems as a life-changing leap forward, but it's critically important for us to remain aware of the very real minuses that come with this latest shiny plus.
Check out my free Android Intelligence newsletter for three things to know and try in your inbox every Friday - a zesty blend of practical tips and plain-English perspective on all the juiciest Googley news.