Synthetic focus groups and RAG in the contact centre: Bayer, Verizon, WPP on their AI deployments

"You can create a consumer, a brand strategist, a brand marketer, client, encoded with actual ground truth data, then critique the content that's been generated by the system with agents playing off against each other."

From radiology analysis to contact centre automation, via AI-generated “focus groups” and consumers, an industry panel showcased eclectic approaches to generative AI deployments at a Google Cloud event.

Stephan Pretorius, CTO, WPP; Kalyani Sekar, CDO, Verizon; Guido Mathews, VP Radiology, Bayer; and Josh Weiss, VP, IHG Hotels and Resorts joined GCP’s North America President Michael Clark for the discussion.

RAG-powered AI in the call centre

Sekar, an SVP and Chief Data Officer at multinational telco Verizon, said the company had used a Retrieval Augmented Generation (RAG) approach to building out an effective AI agent that supports call centre staff.

(RAG lets users “ground” an AI model in documentation, i.e. “asking the model to respond to a question by browsing through the content in a book, as opposed to trying to remember facts from memory.”)
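The retrieve-then-generate pattern behind this can be sketched in a few lines. The toy bag-of-words scoring and prompt template below are purely illustrative (production systems use neural embedding models and a vector database), and none of the names are Verizon's:

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words vector; real RAG systems use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question, documents, k=2):
    """Rank the corpus by similarity to the question; return the top k passages."""
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def grounded_prompt(question, documents):
    """'Ground' the model: instruct it to answer only from retrieved passages,
    rather than from whatever it remembers from training."""
    context = "\n".join(retrieve(question, documents))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

docs = [
    "The Unlimited Plus plan includes 30 GB of mobile hotspot data.",
    "Device upgrades are available after 50% of the device cost is paid off.",
    "International roaming is billed at a daily TravelPass rate.",
]
print(grounded_prompt("How much hotspot data does Unlimited Plus include?", docs))
```

The point of the pattern is the last function: the model is handed the relevant passages at question time, the "browsing through a book" half of the analogy, instead of being asked to recall plan details from its weights.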

After being trained on existing Verizon documentation it is now able to “answer any question about the customer, the plan” she said, so that reps “instead of going through multiple documents to answer the question, they just put the question [into the chatbot] and they get the answer…”

(The application is serving internal customers, i.e. Verizon staff, rather than customers directly, as an augmentative tool for contact centres.)

“Now we are really looking into how to take it to the next level. So instead of [an] agent typing the question, we are seeing whether [the AI] can be listening to the call; to start [automatically] producing the answers.”

She said this freed up call centre agents to focus on client needs and up-selling services, rather than the mechanics of sourcing answers.

Verizon’s CDO said the company was using Google’s Gemini heavily to underpin this approach, highlighting “the partnership we had in building the solution [because] to get… accuracy to more than 90% is not easy.”

Deployments get easier and faster…

With many of the questions coming into contact centres essentially repetitive, the process got easier once that heavy lifting had been done, she added.

“We ask the question to the model, and get the answer in the cache. So that next time the same question or the same context comes up, we don't go to the model; we take it from the cache and stand it up…”
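The cache-first flow she describes might look like the sketch below. This is an exact-match cache keyed on a normalised question (names and normalisation are illustrative, not Verizon's implementation; production systems typically match on embedding similarity so paraphrased questions also hit the cache):

```python
import hashlib

cache = {}

def normalise(question):
    """Canonicalise case and whitespace so trivial rewordings share a key."""
    return hashlib.sha256(" ".join(question.lower().split()).encode()).hexdigest()

def answer(question, call_model):
    key = normalise(question)
    if key in cache:                  # repeat question: skip the model entirely
        return cache[key]
    result = call_model(question)     # first occurrence: pay the model cost once
    cache[key] = result
    return result

# Stub model that records how often it is actually invoked:
calls = []
def fake_model(q):
    calls.append(q)
    return f"answer to: {q}"

answer("What is my data allowance?", fake_model)
answer("what is my  DATA allowance?", fake_model)  # case/whitespace differ: cache hit
assert len(calls) == 1
```

The design choice is the trade-off Sekar alludes to: the expensive, carefully-tuned model call happens once per distinct question, and repeat traffic, the bulk of contact-centre volume, is served from the cache.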

The knotty process of building out AI agents would be easier now, she said. “Now we did it for call centre. Tomorrow, we really would like to do a Q&A for network technicians. The content is different. And it is a different set of documentation. But it's a Q&A. It starts with a set of documents; you really need to organise the documents… what we do is we write it as a standard API where any use case that comes in, we take the [fundamental] design pattern… keep everything ready-made.”

“The first use case that is built is one design pattern... if it took six months, the second time, I would say it [will be] one-third or less than that.”
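The reuse she describes, one design pattern parameterised by a new corpus, could be sketched as below. The class and function names are hypothetical, not Verizon's "standard API"; the idea is only that the pipeline is fixed while the documents change per use case:

```python
from dataclasses import dataclass, field

@dataclass
class QAUseCase:
    """One instance of the RAG design pattern: same pipeline, different corpus."""
    name: str
    documents: list = field(default_factory=list)

    def ask(self, question, retrieve, generate):
        # The retrieval and generation steps are shared, "ready-made" components;
        # only the document set differs between use cases.
        context = retrieve(question, self.documents)
        return generate(question, context)

# Standing up a second use case is just supplying new documents:
call_centre = QAUseCase("call-centre", ["Plan FAQ ...", "Billing guide ..."])
technicians = QAUseCase("network-techs", ["Fibre install manual ...", "Router runbook ..."])
```

This is one plausible reading of "keep everything ready-made": the six months of work goes into the shared `retrieve`/`generate` machinery, and each subsequent use case mostly inherits it.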

Agents playing off each other! 

Stephan Pretorius is CTO at global advertising agency WPP.

He said: “In the last six weeks what has become an extremely active area of R&D for us, is using models like Gemini Pro, in a very large context – to build far more interesting chain-of-thought-based systems into our tools. 

“We have a utility product called Collaborative Studio, where you can effectively test ideas against personas. We now have a chain of thoughts system that can generate multimodal content. [So we can make it] create a persona for this brand in this market, create image, texts, demographic data, and then you can test that against a castle of experts. 

Pretorius explained: “So you can create a consumer, a brand strategist, a brand marketer, client, encoded with actual ground truth data, then critique the content that's been generated by the system. So you have these very involved processes of agents playing off against each other.”
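The loop he describes, a generator revising a draft against critiques from persona agents, can be sketched as below. This is an illustrative structure, not WPP's product: the personas, the reviser, and their toy "ground truth" rules are all stand-ins:

```python
def persona_critique(draft, personas, revise, max_rounds=3):
    """Agents 'playing off against each other': each persona critiques the
    draft, the generator revises, until every persona approves or rounds
    run out. A persona returns a critique string, or None to approve."""
    for _ in range(max_rounds):
        critiques = [critic(draft) for critic in personas]
        if all(c is None for c in critiques):   # every persona signs off
            break
        feedback = "; ".join(c for c in critiques if c)
        draft = revise(draft, feedback)          # generator responds to feedback
    return draft

# Stub personas, each 'encoded' with a trivial ground-truth rule:
consumer   = lambda d: None if "price" in d else "consumer: mention the price"
strategist = lambda d: None if "brand" in d else "strategist: reinforce the brand"

def revise(draft, feedback):
    # Toy reviser: append whatever the critics asked for.
    extras = [w for w in ("price", "brand") if w in feedback]
    return draft + " " + " ".join(extras)

final = persona_critique("New phone launch copy.", [consumer, strategist], revise)
```

In a real deployment each persona would be a model call seeded with its own brief (demographic data for the "consumer", brand guidelines for the "strategist"), but the control flow, critique, aggregate, revise, repeat, is the same.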

(On stage at Google Cloud NEXT WPP and GCP announced a partnership that includes using AI to predict how well marketing content will perform. The two noted that “Gemini 1.5 Pro’s large context window, which allows it to run one million tokens of information consistently, means that more brand content and guidelines can be used for prompts, such as a brand’s colour palette, fonts, voice, and even past marketing campaigns…”)

Asked whether AI was driving down client spend, he said no. Clients might spend less on a single campaign, but the ability to use AI to build niche/segmented campaigns for very specific demographics or geographies meant that they were spending more on hyper-local work.

Roboticising radiography

Bayer and GCP also announced an extended partnership at the event. 

“Radiologists and other clinicians face burnout due to the sheer volume of work they face every day. Gen AI can help tackle repetitive tasks and provide insights into massive data sets, saving valuable time and helping to positively impact patient outcomes,” said GCP CEO Thomas Kurian.

Joining the panel discussion, Bayer’s Guido Mathews told the audience that the pharmaceutical company was looking to build “a platform where [the] healthcare community can develop new applications in a regulated end-to-end environment… we see right now roughly 40 million [manual] diagnostic errors per year happening and we would like [industry] to engage with our offering so this number is getting [sic] reduced.”

See also: Google Distributed Cloud: AI, on-premises, without the drama?

That platform will launch later in 2024. 

GCP and Bayer described it in a press release as letting users:

“Analyze and experiment: The users will be able to uncover insights with AI-powered data analysis… explore and extract information from regulations and scientific papers, all within a collaboration platform.
“Build and validate: Developers will also be able to… generate documents in alignment with healthcare requirements to aid in gaining regulatory approval; leverage medical imaging core lab services from Bayer.
“Launch and monitor: … deploy AI medical solutions standardized for flexible integration across compatible healthcare systems; and analyze field data for insights, bias detection, and continuous improvement.”

"Jazz club recommendations near my hotel"

IHG’s Josh Weiss meanwhile said the global hotel chain would bake AI into its application later this year to “help people discover destinations among our more than 6,000 IHG hotels across 19 brands in over 100 countries” – using it to generate recommendations and ideas.

He told the panel: “Part of the reason that we were making the journey with Google is we migrated a lot of our data over to BigQuery” and that this approach had enabled it to clean up its data estate. IHG was going to be “very clear” when recommendations were AI-generated, he emphasised, rather than pulled from hard data across IHG’s systems. 

Caution, on the hype train...

For all that the conversation suggested these were programmes deep in deployment, on closer inspection every panelist, with the exception of Verizon, appeared to be still building out and stress-testing their “platform” propositions. Generative AI in customer-facing applications is still far less widespread than the noise suggests, and most organisations are still ironing out multiple challenges, including reputational risk.

Tellingly, it was RAG that had delivered most for the most advanced-seeming panelist, Verizon, whose CDO, in The Stack’s view, spoke with the most authority and detail on the lessons learned to date.

See also: “Scant evidence” – DeepMind’s AI chemistry claims misleading

Bayer’s platform particularly drew attention and it will be interesting to see what it manages to deliver in a year or two; watch this space. Claims, per the press release, that it will “help design breakthrough healthcare solutions by accessing a data ecosystem” seem somewhat premature. 

As Dr Gurpreet Singh, Head of Applied ML at Bayer, emphasised to The Stack back in 2022, discussing AI and drug discovery, it’s not just data or complex compute issues that matter, but experienced people.

He said: "Is [AI] going to revolutionise [drug discovery]? Yes. Is the hype justified? No... [The] drug discovery process is a very complex process. 

“It's not just one umbrella term. [You can't just say] 'here's the data; we'll come and click the button'. You have to understand your input domain... on the early discovery side, I will say, we are still trying to understand human biology. If you talk to any pharmaceutical company they will all say, 'I don't think we understand human biology completely even yet'. That speaks to the complexity of the environment that we are dealing with.”

See also: AI and drug discovery: "This is the start of things..."