Google CEO Sundar Pichai speaks at the Google I/O conference at the Shoreline Amphitheater in Mountain View, Calif., on Wednesday. Melina Mara/The Washington Post

MOUNTAIN VIEW, Calif. — Google is fundamentally changing the way we search.

Here at its campus in the heart of Silicon Valley, executive after executive strolled onto the stage Wednesday to tout the company’s latest developments in artificial intelligence – most notably that it will begin answering some search queries directly with AI-generated results. The move has been dreaded by publishers and bloggers, who say it could upend the internet as we know it.

The change won’t affect all search results and will initially be small, rolling out only to people who specifically sign up for it. But it signals a monumental shift for the company, which will now be generating its own content drawn from sources around the web, rather than linking to, quoting, and summarizing other websites as it has done for the past 20 years. It’s a treacherous balancing act for the tech giant, which is attempting to keep up in an accelerating AI race without undermining its own $280-billion-a-year business model or damaging the sprawling ecosystem of publishers, bloggers, and other human content creators who rely on traffic from Google search to survive.

“It’s not just bolting a chatbot onto a search experience,” Cathy Edwards, a vice president of engineering at Google, said in an interview ahead of the announcement. “It’s being thoughtful at every stage about how you bring these generative AI technologies into the existing product that users know and love today. And how do we augment what’s already there without throwing out the product?”

Using AI to generate answers to search questions has been a hot topic since the end of last year when OpenAI released its ChatGPT chatbot. The bot was so good at answering complex questions on innumerable topics that some tech leaders, analysts, and investors called it a mortal threat to Google’s massive search business. Microsoft quickly incorporated ChatGPT into its search engine, Bing, and investors have pressured Google to do the same, predicting it could lose market share if it doesn’t keep pace with its archrival.

At the same time, the talk of replacing search results with AI-generated answers has roiled the world of people who make their living writing content and building websites. If a chatbot takes over the role of helping people find useful information, what incentive would there be for anyone to write how-to guides, travel blogs, or recipes?


“When ChatGPT came out and everyone was going, ‘Oh this is going to put people out of jobs,’ I didn’t expect that the first thing it would come for was my industry,” said Will Nicholls, a professional wildlife photographer from Bristol, England, who has spent years building up an online business writing tutorials and blogs for photographers.

Google insists that its approach takes all those concerns into account and that it is much more nuanced than what competitors are doing.

In demonstrations this week and during a presentation at the company’s annual developer conference on Wednesday, Google executives showed off the new version of search. The system does generate its answers but checks them for accuracy against real websites. It also posts those links directly next to the generated text, making it easy for people to click on them. For questions about sensitive topics such as health, finances, and hot-button political issues, the bot won’t write an answer at all, instead returning a news article or links to websites.

Google’s conversational chatbot, Bard, and the bots put out by other companies return information from their understanding of the trillions of phrases they were trained on, but the new search AI is “more directly grounded in our search results and in the information that’s on the web,” Edwards said.
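For readers curious what “grounded in search results” means in practice, the sketch below is a minimal, hypothetical illustration of the general pattern described above – retrieve sources first, generate an answer from them, show the links alongside, and fall back to plain links for sensitive topics. It is not Google’s code; every function, topic list, and data shape in it is an invented stand-in.

```python
# Illustrative sketch only: a generic retrieval-grounded answering loop,
# NOT Google's actual SGE pipeline. All function bodies are stand-ins.

SENSITIVE_TOPICS = {"health", "finance", "politics"}  # hypothetical category list


def classify_query(query: str) -> str:
    """Stand-in classifier: tag the query with a coarse topic."""
    lowered = query.lower()
    if "medication" in lowered or "symptom" in lowered:
        return "health"
    return "general"


def retrieve_sources(query: str) -> list[dict]:
    """Stand-in for a web search call; returns title/url/snippet records."""
    return [
        {"title": "Example guide", "url": "https://example.com/guide",
         "snippet": "Background material relevant to the query."},
    ]


def generate_grounded_answer(query: str, sources: list[dict]) -> str:
    """Stand-in for a language model prompted with the retrieved snippets."""
    cited = "; ".join(s["snippet"] for s in sources)
    return f"Draft answer to '{query}', grounded in: {cited}"


def answer(query: str) -> dict:
    # Sensitive topics fall back to plain links, mirroring the behavior
    # described above (no generated answer for health, finance, politics).
    if classify_query(query) in SENSITIVE_TOPICS:
        return {"generated": None, "links": retrieve_sources(query)}
    sources = retrieve_sources(query)
    return {
        "generated": generate_grounded_answer(query, sources),
        "links": sources,  # shown alongside the generated text
    }


if __name__ == "__main__":
    print(answer("best constellations for night photography"))
```

The key distinction the sketch tries to capture is that the generated text is conditioned on freshly retrieved pages rather than produced solely from what the model absorbed during training.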

The search bot does not have a name – the company is simply calling it the forgettable “Search Generative Experience,” or SGE – and it does not have a personality. Generative AI chatbots have shown that they can take on strange “personas” when pushed in a certain direction, and Google has worked to try to prevent that from happening by putting strict guardrails on its bot that will tamp down its creativity.

“It’s not going to talk about its feelings,” Edwards said.


Even as the company moves forward, Google CEO Sundar Pichai has been expressing his concerns about AI tech.

Generative AI could supercharge misinformation campaigns, and the fast pace of AI development means it is unlikely that society will be fully prepared for whatever comes of the tech, Pichai said during an interview with “60 Minutes” in April. But the level of debate and criticism is good, he said, showing people are working on heading off potential dangers before the tech becomes more widespread.

In all of its AI announcements over the last several months, Google has been repeating that it is being both “bold and responsible.” At the Wednesday event, the company’s head of technology and society, James Manyika, admitted there was tension between those two goals.

Manyika showed off a Google tool that takes a video and translates it into another language while modifying the speaker’s mouth movements to make it look like they’re speaking the new language.

“You can see how this could be incredibly beneficial, but some of the same underlying technology could be used by bad actors to generate deep fakes,” Manyika said.

Cathy Edwards, a vice president of engineering at Google, speaks at the Google I/O conference at the Shoreline Amphitheater in Mountain View, Calif., on Wednesday. Melina Mara/The Washington Post

Google is also adding tags to images in Google Search if it knows that they are computer-generated rather than actual photos, and is making it easier to see when an image first appeared on the web. Misinformation researchers already use Google’s reverse image search to check whether an image is old or new and where it came from, but the new tools are meant to make it easier for regular search users to do the same.


On Wednesday, the company also made announcements about incorporating more AI into its productivity tools, including Gmail, Google Docs, and Google Sheets, and showed off a new language model called PaLM 2. Language models are the foundation of the new AI products, and the Google model is being incorporated into 25 products, including search AI. Google also unveiled new hardware, including a folding phone.

The AI system automatically incorporates ads in its answers when it deems a search to be commercial, accommodating the millions of advertisers who pay the company to show their products in search results.

While the shift is a major departure for Google – its own technology now writes the answers – it continues changes the company has been making for years to move the way it presents search results away from the original “ten blue links” format.

For years, Google has borrowed content from the websites it links to – especially Wikipedia – in the form of “featured snippets” and “knowledge boxes.” Microsoft’s Bing has adopted the same practices, and Microsoft’s search results often look even more cluttered than Google’s.

The practice has created tension with publishers, some of whom say the search giant is stealing their content and making it less likely that people will click through to their sites. Google says that the features make search more helpful for regular people and that the amount of traffic it sends through to other websites continues to grow despite the changes.

But the new search chatbots take that practice to a new level. Not only do they provide full-fledged answers to questions that the writers of blog posts, Wikipedia entries, and how-to websites have already invested time in answering, but the AI was also trained on many of those websites. Google spokespeople and executives have declined to say what specific data their bots are trained on, but the “large language models” behind the tech are generally trained on trillions of words and phrases scraped from social media, news websites, blogs, and code databases across the web.

“It feels a bit like plagiarism; the bot is going onto our site and scraping the content and pulling it out in its own words,” said Nicholls, the photographer. When Google first showed off its chatbot tech at an event in February, it used as an example a question about the best constellations to look for when stargazing. As the bot generated an answer at the top of the search results page, it pushed down an article from Nicholls’s site Nature TTL – written by a human – describing which constellations photographers should look for.

The search AI that Google unveiled Wednesday has changed from that February demo, and executives argue that their approach thoughtfully uses the new technology while promoting websites created by people. The company’s research shows that users still want to go to outside websites and hear from other humans, Edwards said.

“We believe that while the AI can provide insights, fundamentally, what we think people want is to see information from other people,” Edwards said.