My New Best Friend Is a Voice Assistant: Why Your New BFF Will Be a Bot

Updated on August 26, 2023

By Antonella Bonanni

“I was so tired, I started talking to Alexa.”

I can’t tell you how many times I’ve heard people refer to Alexa as something more than a digital assistant this past year. Many people are talking to and asking questions of voice-activated digital assistants and cognitive-powered colleagues more often than ever before. Comscore reported that nearly 19 million U.S. homes had a smart speaker. Juniper Research predicts that within four years, more than half of all American homes will have and use one: more than 70 million U.S. households, or 55 percent of all homes, by 2022.

Already, Alexa has more than 1,000 healthcare-related “skills,” which allow users to make queries and, in turn, let the bot respond. These skills let us ask about pharmaceutical-company-sponsored prescription medications, yoga, diseases and much more.
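
For a sense of what a “skill” actually is, here is a minimal sketch of one, using Amazon’s ask-sdk-core Python SDK. The intent name and the tip it returns are hypothetical; a real health skill would draw on vetted medical content.

```python
# Minimal sketch of a custom Alexa "skill" handler, using Amazon's
# ask-sdk-core Python SDK. The intent name and response text are
# hypothetical placeholders.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response


class YogaTipIntentHandler(AbstractRequestHandler):
    """Responds when the user asks for a yoga tip."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        # Fire only for the (hypothetical) YogaTipIntent.
        return is_intent_name("YogaTipIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        speech = "Try five minutes of gentle stretching before bed."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(YogaTipIntentHandler())
handler = sb.lambda_handler()  # Entry point when hosted on AWS Lambda
```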

As smart speakers continue a steady march into our homes and lives, we’ll be doing much more than just telling Alexa we’re tired. 

Before diving into the ins and outs of smart speakers and healthcare, it’s necessary to understand how the devices work. It’s important that healthcare consumers appreciate how and why we get the answers we do.

Head Games

Using natural language processing, smart speakers and the artificial intelligence (AI) that powers them can understand much of what we say and respond. Over time, they will get smarter, and their capabilities and uses will increase dramatically. (By 2030, AI is predicted to add nearly $11 trillion to the global economy.)

But we have a problem: AI is not without controversy and concern. Because AI is built by humans, it trusts our inputs. If we decide, for instance, that a dog is now a cat, AI believes us. It doesn’t know any better. Societal biases are an inherent part of AI today and fairly common at this point in the technology’s development.
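
The “a dog is now a cat” point is easy to demonstrate. Below is a toy sketch using scikit-learn: deliberately flip the training labels, and the model confidently repeats our mistake.

```python
# Toy illustration: a model trained on mislabeled data faithfully
# reproduces the error. The "dog vs. cat" labels stand in for any
# human-supplied ground truth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two well-separated clusters of "pet" features (weight kg, ear length cm).
dogs = rng.normal(loc=[25.0, 8.0], scale=1.0, size=(100, 2))
cats = rng.normal(loc=[4.0, 5.0], scale=1.0, size=(100, 2))
X = np.vstack([dogs, cats])

y_true = np.array([0] * 100 + [1] * 100)  # 0 = dog, 1 = cat
y_flipped = 1 - y_true                    # we decide "a dog is now a cat"

model = LogisticRegression().fit(X, y_flipped)
sample_dog = [[26.0, 8.5]]
print(model.predict(sample_dog))  # -> [1]: confidently calls the dog a cat
```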

And it’s not going unnoticed. The following are just a couple of examples:

  • “AI programmes are made up of algorithms that follow rules,” explains an article published by the World Economic Forum. “They need to be taught those rules, and this occurs by feeding the algorithms with data, which the algorithms then use to infer hidden patterns and irregularities. If the training data is inaccurately collected, an error or unjust rule can become part of the algorithm – which can lead to biased outcomes.”
  • In early 2018, the “White Paper on Artificial Intelligence Standardization,” penned somewhat ironically under the auspices of the Chinese Electronics Standards Institute, acknowledged that bias undoubtedly will creep into AI, if it hasn’t already. “We should also be wary of AI systems making ethically biased decisions. For example, if universities use machine learning algorithms to assess admissions, and the historical admissions data used for training (intentionally or not) reflect some bias from previous admission procedures (such as gender discrimination), then machine learning may exacerbate these biases during repeated calculations, creating a vicious cycle. If not corrected, biases will persist….”

Compounding the bias problem is that although humans create AI, we don’t always understand how it arrives at specific answers, which makes it difficult to eliminate biased ones. “The core challenge for AI is that deep learning models are ‘black boxes,’” writes tech analyst Paul Teich in Forbes. “It is very difficult—and often simply not possible—for mere humans to understand how individual training data points influence each output classification (inference) decision. It’s hard to trust a system when you can’t understand how it makes decisions.”

This is another part of the bias conundrum: Once AI is created, it tends to learn from itself. So those ingrained biases can multiply over subsequent generations of computer code.

Ethics will become particularly important as AI becomes more ubiquitous, and as AI systems increasingly learn from one another, not just from the inputs that humans provide; that is, when machine-learning AI applications ‘teach’ other AI applications.

Bias Control

To eliminate biases in artificial intelligence, or at least put up a fight against them, we need to recognize that bias exists and then act to eliminate prejudices so they don’t replicate over time. In addition, we must establish ways to eliminate bias before it becomes part of AI.

“With limited human direction, an artificial agent is only as good as the data it learns from. Automated learning on inherently biased data leads to biased results. The agent’s algorithms try to extract patterns from data with limited human input during the act of extraction. The limited human direction makes a case for the objectivity of the process. But data generation is often a social phenomenon (e.g., social media interactions, online political discourse) inflected with human biases,” RAND researchers write.

So controlling for bias starts before the algorithm is built; it must start with the inputs.
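
As a concrete illustration, here is a minimal sketch of what “starting with the inputs” can look like: a pre-training audit that compares outcome rates across groups in the data before any model is built. The column names and the 10-point threshold are hypothetical.

```python
# Sketch of a pre-training data audit: before any model is built,
# compare outcome rates across demographic groups in the training set.
# Column names and the 10-point threshold are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   0],
})

rates = df.groupby("group")["approved"].mean()
print(rates)

gap = rates.max() - rates.min()
if gap > 0.10:  # flag disparities above 10 percentage points
    print(f"Warning: {gap:.0%} approval gap between groups; "
          "inspect how these labels were generated before training.")
```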

To ensure that AI platforms are managed ethically, they need to be human-assisted and continuously supervised through a proper governance process. It’s the only way we can control machine learning to the point where human interaction is learned and fully understood. Truly cognitive virtual agent platforms, such as IPsoft’s 1Desk, can deliver a truly conversational, end-to-end service interaction. Unlike first-generation assistants such as Alexa and Siri, Amelia, the conversational agent on top of 1Desk, has the intelligence to mirror human conversation. Imagine the implications for healthcare. Amelia can empathize: she detects sentiment and satisfaction level as the conversation progresses, and can leverage that understanding to respond empathetically and adapt the conversation flow, including escalating to a human colleague where necessary.
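
IPsoft doesn’t publish Amelia’s internals, so the following is only a generic sketch of the pattern just described: score sentiment turn by turn and hand off to a human colleague once it stays negative. The keyword scorer and both thresholds are placeholder assumptions, not the product’s actual logic.

```python
# Generic sketch of sentiment-gated escalation. Not Amelia's actual
# implementation: the keyword scorer is a stand-in for a real sentiment
# model, and the thresholds are guesses.
import re

NEGATIVE_WORDS = {"frustrated", "angry", "useless", "pain", "worse"}

def sentiment_score(utterance: str) -> float:
    """Crude placeholder: fraction of words that signal distress."""
    words = re.findall(r"[a-z']+", utterance.lower())
    hits = sum(1 for w in words if w in NEGATIVE_WORDS)
    return -hits / max(len(words), 1)

def should_escalate(conversation: list[str],
                    threshold: float = -0.15,
                    strikes: int = 2) -> bool:
    """Escalate to a human colleague after `strikes` negative turns."""
    negatives = [u for u in conversation if sentiment_score(u) <= threshold]
    return len(negatives) >= strikes

chat = ["What can I eat today?",
        "That answer is useless, I'm frustrated.",
        "This is making my pain worse."]
print(should_escalate(chat))  # -> True: hand the conversation to a human
```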

In one of our recent reports, we suggest several ways in which biases may be controlled. It comes back to getting humans involved in the creation and training of AI. “This is not as arcane a process as one might think. The concept is familiar to parents who provide feedback and guidance to raise their children to be good members of society, and there are well-understood tools and frameworks from the world of human sciences that can be used to instill ethics into the design and operation of AI.”

AI biases and ethics bring us all the way back to smart speakers and healthcare.

Our Regularly Scheduled Program

I began this article with a funny anecdote about Alexa, but quickly discovered that, although we tend to treat smart speakers as members of the family, there’s much more to consider before using one to diagnose an ailment or provide treatment suggestions. It’s important to know that not all answers are created equal, particularly when it comes to how they may be influenced by those who built the software. So the exploration of biases in AI is an important stop on the way to using digital voice assistants for healthcare.

But this, really, isn’t much different from how medicine has been practiced for years. Healthcare providers are influenced by their biases as well, and it’s the healthcare consumer’s responsibility (and in her best interest) to understand how a course of treatment may affect her.

Intentional or unintentional bias will be something to keep in mind as we continue to rely on and ask questions of smart speakers, whether the technology is ready or not.

Some of the bias, from healthcare’s point of view, may have nothing to do with race, gender, age or any number of other identifiers that make us all different. Instead, it may be the way healthcare consumers are perceived:

  • Are we patients?
  • Sufferers?
  • Survivors?
  • Consumers?
  • Just regular people who want information?
  • Or some combination?

Sebastian Vedsted Jespersen, who writes frequently on the intersection of brands, people and the resulting relationships, believes many services miss the mark: “While many have tried to establish digital healthcare solutions to go beyond the product adherence, unfortunately, they often fall flat from being designed with a focus still on the product or treating the patient as a sufferer rather than an everyday person.”

Those supplying the answers to smart speakers would do well to view their customers as everyday people simply looking for information. As Jespersen points out, few of us will want to hear about products during our first healthcare encounter with a smart speaker. Rather, we’re more interested in learning about a specific illness or injury when we make a query.

To make a significant impact on the way healthcare is provided, smart speakers must do more, more quickly, than we’d typically expect. For example, one hospital uses smart speakers in patient rooms to expedite answers to common questions. But does it really work in a new way? Or is the smart speaker just a different way of getting the same old answers? In this particular case, it’s likely the latter. A patient can ask for a nurse, but the smart speaker still can’t tell the patient how long the wait will be. A patient at this hospital can ask “what is my diet”—which, really, no patient would ask; they would say “what can I eat today?”—and the response will be “a bland diet.” Now, a patient may know what that is, but the answer, and certainly the question, is constructed from the healthcare provider’s point of view, not the patient’s. Healthcare providers can tell you all day long what a bland diet consists of; I’d have to guess. Why not just tell me in the first place?
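
The fix argued for here can be sketched simply: match the question patients actually ask, and answer from the patient’s point of view rather than echoing the clinical label. All phrasings below are illustrative.

```python
# Sketch of the patient-first reframing described above: recognize the
# question patients actually ask and answer in plain language, instead
# of replying with the clinical label. All content is illustrative.
ANSWERS = {
    "what can i eat today": (
        "You're on a bland diet today. That means soft, mild foods: "
        "think rice, bananas, toast, and broth. Skip anything spicy, "
        "fried, or acidic. Your care team can adjust this if it changes."
    ),
}

def answer(question: str) -> str:
    key = question.lower().strip(" ?")
    return ANSWERS.get(key, "Let me get a member of your care team.")

print(answer("What can I eat today?"))
```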

As is the case with many emerging technologies, we are not quite there yet. The intentions, overall, are likely good, but developers must address AI bias, provide customer-focused answers and, finally, design applications that are truly built around the healthcare consumer.

With the number of smart speakers worldwide expected to grow—and retailers slashing prices to push acceptance—there’s no doubt a smart speaker will become your BFF sooner or later.

Antonella Bonanni is AVP and Chief Marketing Officer of Healthcare at Cognizant. She has an MBA from the University of Pittsburgh and a Master’s in Communications from the CUOA in Vicenza, Italy. She is a member of the Forbes Communications Council.

Antonella can be reached at [email protected], LinkedIn and Twitter.

