What is Conversational AI? The Security Risks of Conversational AI

Artificial intelligence (AI) technology continues to evolve, for better or worse. AI can benefit people who rely on the automation of routine and cumbersome tasks. However, AI can potentially be used to fuel increasingly complex scams over the phone and through chatbots.

This article summarizes what conversational AI is, the security risks it poses, and what can be done to mitigate them.

What is Conversational AI?

Conversational AI refers to the use of messaging apps, speech-based assistants, and chatbots to automate communication and create personalized customer experiences. 

Hundreds of millions of people use social media messaging platforms to communicate with their friends and family every day. Meanwhile, millions more converse regularly with speech-based assistants like Amazon’s Alexa and Google Assistant. Conversational AI offers new opportunities to automate everyday responsibilities.

As a result, messaging and speech-based programs are rapidly displacing traditional web and mobile apps to become the new platforms for interactive conversations.

Nearly three-quarters of people (73%) say they are unlikely to trust conversational AI to answer simple calls or emails for them, but experts expect this hesitancy to fade as the technology becomes more mainstream, despite the security risks.

Conversational AI Can Be Inaccurate or Hacked

As people adopt digital assistants and conversational AI interfaces that allow real-time dialog with machines, there are several security implications to consider.

For example, inaccurate speech recognition could result in incorrect messages being sent, or in messages going to the wrong people in your contact list.

This happened to a couple from Portland, Oregon in 2018, who discovered that their Alexa device had recorded a private conversation and sent it to one of the husband’s colleagues, even though they had never asked it to record or send anything.

Additionally, hackers could assume control of digital assistants. This would allow the hacker to eavesdrop on private or sensitive conversations, which could lead to identity theft or fraud.

Conversational AI Can Impersonate People

Impersonation is also a real concern. Last year, a messaging app company called Luka created a chatbot that impersonated the cast of HBO’s Silicon Valley.

Conversational AI also has the potential to trick people into thinking they are talking to someone they are not. Consider Google Duplex, Google’s voice assistant that can call restaurants and book reservations for users.

The technology made waves upon its release in 2018, mostly due to the eerily human-like nature of its conversations. Listeners often couldn’t tell which speaker was the AI and which was the human.

Meanwhile, AI company Lyrebird developed a program that could synthesize anyone’s voice by analyzing a one-minute recording. And Google’s WaveNet provides similar voice synthesis; it requires a much larger data set but sounds eerily real.

Put together, conversational AI could use replicas of people’s voices to hold conversations, tricking people into thinking they are speaking to a friend, family member, or colleague. People may then share private information without realizing they aren’t talking to a trusted confidant.

Simply put, conversational AI technology is advancing at an accelerating pace, and the potential to misuse that technology is a real concern.

How Much Should We Worry?

The public and AI developers alike need to apply common sense in every conversational AI situation, whether the conversation is with a live person or a chatbot.

By not sharing sensitive information via computer, mobile device, or voice assistant, people can minimize the potential for conversational AI to be misused for illicit purposes.