Dr. Google Starts Sharing Regular Folks’ Advice As Chatbots Loom


By MICHAEL MILLENSON

“Dr. Google,” the nickname for the search engine that answers hundreds of millions of health questions every day, has begun including advice from the general public in some of its answers. The “What People Suggest” feature, presented as a response to user demand, comes at a pivotal point for traditional web search amid the growing popularity of artificial intelligence-enabled chatbots such as ChatGPT.

The new feature, currently available only to U.S. mobile users, is populated with content culled, analyzed and filtered from online discussions at sites such as Reddit, Quora and X. Though Google says the information will be “credible and relevant,” an obvious concern is whether an algorithm whose raw material is online opinion could end up as a global super-spreader of misinformation that’s wrong or even dangerous. What happens if someone is searching for alternative treatments for cancer or wondering whether vitamin A can prevent measles?

In a wide-ranging interview, I posed those and other questions to Dr. Michael Howell, Google’s chief clinical officer. Howell explained why Google initiated the feature and how the company intends to ensure its helpfulness and accuracy. Although he framed the feature within the context of the company’s long-standing mission to “organize the world’s information and make it universally accessible and useful,” the increasing competitive pressure on Google Search in the artificial intelligence era, particularly for a topic that generates billions of dollars in Search-related revenue from sponsored links and ads, hovered inescapably in the background.

Weeding Out Harm

Howell joined Google in 2017 from University of Chicago Medicine, where he served as chief quality officer. Before that, he was a rising star at the Harvard system thanks to his work as both researcher and front-lines leader in using the science of health care delivery to improve care quality and safety. When Howell speaks of consumer searches related to chronic conditions like diabetes and asthma or more serious issues such as blood clots in the lung – he’s a pulmonologist and intensivist – he does so with the passion of a patient care veteran and someone who’s served as a resource when illness strikes friends and family.

“People want authoritative information, but they also want the lived experience of other people,” Howell said. “We want to help them find that information as easily as possible.”

He added, “It’s a mistake to say that the only thing we should do to help people find high-quality information is to weed out misinformation. Think about making a garden. If all you did was weed things, you’d have a patch of dirt.”

That’s true, but it’s also true that if you do a poor job of weeding, the weeds that remain can harm or even kill your plants. And the stakes involved in weeding out bad health information and helping good advice flourish are far higher than in horticulture.

Google’s weeder-wielding work starts with digging out those who shouldn’t see the feature in the first place. Even for U.S. mobile users, the target of the initial rollout, not every query will prompt a What People Suggest response. The information has to be judged helpful and safe.

If someone’s looking for answers about a heart attack, for example, the feature doesn’t trigger, since it could be an emergency situation.

What the user will see, however, is what’s typically displayed high up in health searches: authoritative information from sources such as the Mayo Clinic or the American Heart Association. Ask about suicide, and in America the top result will be the 988 Suicide and Crisis Lifeline, with links to text or chat as well as a phone number. Also out of bounds are people’s suggestions about prescription drugs or a medically prescribed intervention such as preoperative care.

When the feature does trigger, there are other built-in filters. AI has been key, said Howell, adding, “We couldn’t have done this three years ago. It wouldn’t have worked.”

Google deploys its Gemini AI model to scan hundreds of online forums, conversations and communities, including Quora, Reddit and X, gather suggestions from people who’ve been coping with a particular condition and then sort them into relevant themes. A custom-built Gemini application assesses whether a claim is likely to be helpful or contradicts medical consensus and could be harmful. It’s a vetting process deliberately designed to avoid amplifying advice like vitamin A for measles or dubious cancer cures.

As an extra safety check before the feature went live, samples of the model’s responses were assessed for accuracy and helpfulness by panels of physicians assembled by a third-party contractor.

Dr. Google Listens to Patients

Recommendations that survive the screening process are presented as brief What People Suggest descriptions in the form of links inside a boxed, table-of-contents format within Search. The feature isn’t part of the top menu bar for results, but requires scrolling down to access. The presentation – not paragraphs of response, but short menu items – emerged out of extensive consumer testing.

“We want to help people find the right information at the right time,” Howell said. There’s also a feedback button allowing consumers to flag whether a suggestion was helpful, unhelpful or incorrect in some way.

In Howell’s view, What People Suggest capitalizes on the “lived experience” of people being “incredibly smart” in how they cope with illness. As an example, he pulled up the What People Suggest screen for the skin condition eczema. One recommendation for alleviating the symptom of irritating itching was “colloidal oatmeal.” That recommendation from eczema sufferers, Howell quickly showed via Google Scholar, is actually supported by a randomized controlled trial.

It will surely take time for Google to persuade skeptics. Dr. Danny Sands, an internist, co-founder of the Society for Participatory Medicine and co-author of the book Let Patients Help, told me he’s wary of whether “common wisdom” that draws voluminous support online is always wise. “If you want to really hear what people are saying,” said Sands, “go to a mature, online support community where bogus stuff gets filtered out from self-correction.” (Disclosure: I’m a longtime SPM member.)

A Google spokesperson said Search crawls the web, and sites can opt in or out of being indexed. She said several “robust patient communities” are being indexed, but she could not comment on every individual site.

Chatbots Threaten

Howell repeatedly described What People Suggest as a response to users demanding high-quality information on living with a medical condition. Given the importance of Search to Google parent Alphabet (whose name, I’ve noted elsewhere, has an interesting kabbalistic interpretation), I’m sure that’s true.

Alphabet’s 2024 annual report folds Google Search into “Google Search & Other.” It’s a $198 billion, highly profitable category that accounts for close to 60% of Alphabet’s revenue and includes Search, Gmail, Google Maps, Google Play and other sources. When that unit reported better-than-expected revenues in Alphabet’s first-quarter earnings release on April 24, the stock immediately jumped.

Health queries constitute an estimated 5-7% of Google searches, easily adding up to billions of dollars in revenue from sponsored links. Any feature that keeps users returning is important at a time when a federal court’s antitrust verdict threatens the lucrative Search franchise and a prominent AI company has expressed interest in buying Chrome if Google is forced to divest.

The larger question for Google, though, is whether health information seekers will keep turning to even popular features like What People Suggest and AI Overview at a time when AI chatbots are gaining ground. Although Howell asserted that individuals use Google Search and chatbots for different kinds of experiences, anecdote and evidence alike point to chatbots chasing away some Search business.

Anecdotally, when I tried out several ChatGPT queries on topics likely to trigger What People Suggest, the chatbot did not provide quite as much detailed or useful information; however, it wasn’t that far off. Moreover, I had repeated difficulty triggering What People Suggest even with queries that replicated what Howell had done.

The chatbots, on the other hand, were quick to respond and to do so empathetically. For instance, when I asked ChatGPT, from OpenAI, what it might recommend for my elderly mom with arthritis – the example used by a Google product manager in the What People Suggest rollout – the large language model chatbot prefaced its advice with a large dose of emotionally appropriate language. “I’m really sorry to hear about your mom,” ChatGPT wrote. “Living with arthritis can be tough, both for her and for you as a caregiver or support person.” When I accessed Gemini separately from the terse AI Overview version now built into Search, it, too, took a sympathetic tone, beginning, “That’s thoughtful of you to consider how to best support your mother with arthritis.”

There are more prominent rumbles of discontent. Echoing common complaints about the clutter of sponsored links and ads, Wall Street Journal tech columnist Joanna Stern wrote in March, “I quit Google Search for AI – and I’m not going back.” “Google Is Searching For an Answer to ChatGPT,” chipped in Bloomberg Businessweek around the same time. In late April, a Washington Post op-ed took direct aim at Google Health, calling AI chatbots “much more capable” than “Dr. Google.”

When I reached out to pioneering patient activist Gilles Frydman, founder of an early interactive online site for those with cancer, he responded similarly. “Why would I do a search with Google when I can get such great answers with ChatGPT?” he said.

Perhaps more ominously, in a study involving structured interviews with a diverse group of around 300 participants, two researchers at Northeastern University found “trust trended higher for chatbots than Search Engine results, regardless of source credibility” and “satisfaction was highest” with a standalone chatbot, rather than a chatbot plus traditional search. Chatbots were valued “for their concise, time-saving answers.” The study abstract was shared with me a few days before the paper’s scheduled presentation at an international conference on human factors in computer engineering.

Google’s Larger Ambitions

Howell’s team of physicians, psychologists, nurses, health economists, clinical trial experts and others works not just with Search but also with YouTube – which last year racked up a mind-boggling 200 billion views of health-related videos – as well as Google Cloud and the AI-oriented Gemini and DeepMind. They’re also part of the larger Google Health effort headed by chief health officer Dr. Karen DeSalvo, a prominent public health expert who’s held senior positions in federal and state government and academia, as well as serving on the board of a large, publicly held health plan.

In a post last year entitled, “Google’s Vision For a Healthier Future,” DeSalvo wrote: “We have an unprecedented opportunity to reimagine the entire health experience for individuals and the organizations serving them … through Google’s platforms, products and partnerships.”

I’ll speculate for just a moment how “lived experience” information might fit into this reimagination. Google Health encompasses a portfolio of initiatives, from an AI “co-scientist” product for researchers to Fitbit for consumers. With de-identified data, or data that individual consumers consent to share, “lived experience” information is just a step away from being transformed into what’s called “real-world evidence.” If you look at the kind of research Google Health already conducts, we’re not far from an AI-informed YouTube video showing up on my Android smartphone in response to my Fitbit data, perhaps with a handy link to a health system that’s a Google clinical and financial partner.

That’s all speculation, of course, which Google unsurprisingly declined to comment upon. More broadly, Google’s call for “reimagining the entire health experience” surely resonates with everyone yearning to transform a system that’s too often dysfunctional and detached from those it’s meant to serve. What People Suggest can be seen as a modest step in listening more carefully and systematically to the individual’s voice and needs.

But the coda in DeSalvo’s blog post, “through Google’s platforms, products and partnerships,” also sends a linguistic signal. It shows that one of the world’s largest technology companies sees an enormous economic opportunity in what is rightly called “the most exciting inflection point in health and medicine in generations.”

Michael L. Millenson is president of Health Quality Advisors and a regular THCB contributor. This first appeared in his column at Forbes.
