
Voice Could Become a Privacy Threat: How to Prevent AI from Exploiting It?

Source: KOMPAS | Technology (translated from Indonesian)

In the era of artificial intelligence (AI), digital footprints are no longer merely social media uploads, search histories, or online shopping transactions. There is one element far more intimate and often overlooked: our own voices.

Recent research shows that the human voice contains far more personal information than we realise. Moreover, AI technology has the potential to exploit it for harmful practices — ranging from discriminatory pricing and unfair profiling to harassment and stalking.

If one knows how to listen, a person’s voice can reveal clues about their education level, emotional condition, and even their profession and financial status. Humans typically pick up on cues such as nervousness, fatigue, or happiness. However, computers can go far deeper — and far faster.

A study published on 19 November 2025 in the Proceedings of the IEEE revealed that a person’s intonation patterns and word choices can indicate their political views and certain health conditions.

This means that every time we speak — in customer service calls, voice messages, or other voice-based interactions — we may be sharing sensitive information without realising it.

Tom Bäckström, a professor of speech and language technology at Aalto University and lead author of the study, warned that the potential for misuse of this technology is very real.

He explained that if companies can infer our economic circumstances or needs simply from our voice, this opens the door to discriminatory pricing, such as setting insurance premiums differently based on voice profiles.

“If major insurance companies realise they can increase profits by pricing selectively based on information from our voices using AI, what would stop them?” he said.

Jennalyn Ponraj, founder of Delaire and a futurist researching the regulation of human nervous systems amid technological advancement, said: “Very little attention is given to the physiology of listening. In crisis situations, people do not primarily process language. They respond to tone, rhythm, prosody, and breath, often before cognition gets to work.”

In other words, before we even understand the content of a conversation, the brain already responds to emotional signals in the voice. AI technology is now learning to do the same — but with far greater analytical capacity.

Bäckström added that technology to detect anger or toxicity in online games and call centres is widely discussed and pursues ethical aims. However, he sees potential for more questionable uses.

One example is an automated service system that adapts its speaking style to match the customer’s. On the surface this sounds innovative, but such adaptation means the system is analysing the user’s personal information in depth.

“I see many machine learning tools for privacy-invasive analysis already available, and their use for malicious purposes is not impossible,” Bäckström said.

“If someone is already aware of this, they could have a very significant advantage.”
