Can deepfake audio be detected? - Veridas (2024)

With the rise of deepfake audio in recent years, people have become increasingly aware of the potential dangers of synthetic fraud. From fake news to impersonation attempts, deepfakes have made their way into mainstream conversations.

However, in the world of digital identity, voice biometric authentication offers a secure and reliable solution, even in the presence of such attacks, especially when compared to traditional methods of authentication, such as passwords or question-based authentication.

Voice biometric authentication provides a more robust and user-friendly experience that is gaining traction in relevant industries such as the financial or insurance sectors. In this article, we will take a closer look at voice biometric authentication and explore how it can help protect your business against synthetic fraud.


How does deepfake audio work?

AI-generated or synthetic voices are computer-generated voices that sound like real human voices. These voices are created using complex algorithms that analyze and replicate the characteristics of human speech, such as intonation, pitch, and timbre. Synthetic voices can be programmed to sound like specific individuals.

These voices can be generated by combining a variety of techniques, such as text-to-speech software, deep neural networks, and other forms of machine learning.
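As a toy illustration of the idea that voice attributes such as pitch are just signal properties software can manipulate, the sketch below shifts the pitch of a tone by resampling. Real deepfake systems use trained neural text-to-speech models and vocoders; nothing here resembles a production pipeline, and the function name is ours:

```python
import numpy as np

def shift_pitch(audio: np.ndarray, factor: float) -> np.ndarray:
    """Crude pitch shift by resampling (it also changes duration).
    Real voice synthesis uses neural vocoders; this only shows that
    pitch is a signal property that software can alter."""
    positions = np.arange(0, len(audio), factor)
    return np.interp(positions, np.arange(len(audio)), audio)

sr = 16000                              # sample rate in Hz
t = np.arange(sr) / sr                  # 1 second of timestamps
tone = np.sin(2 * np.pi * 200 * t)      # a 200 Hz stand-in "voice"
higher = shift_pitch(tone, 2.0)         # plays back an octave higher
```

Played back at the same sample rate, the resampled signal sounds an octave higher; modern synthesis models perform this kind of manipulation implicitly, along with intonation and timbre.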

Pre-recorded voices can also be used to carry out fraud. For example, a fraudster might record a person’s voice during a phone conversation and then replay it to authenticate as that person in a subsequent call. This is known as a voice replay attack, and it is distinct from voice cloning, in which new speech is synthesized in the victim’s voice.


What is voice clone fraud?

Voice clone fraud involves the unauthorized use of someone’s voice to deceive others for malicious purposes. This fraudulent practice typically employs advanced technologies such as deepfake or voice cloning algorithms to replicate a person’s voice with remarkable accuracy.

Perpetrators may utilize these voice clones to impersonate individuals, such as public figures or acquaintances, in various scenarios, including phone calls, video conferences, or audio messages. The primary aim is often to manipulate, deceive, or defraud unsuspecting victims into divulging sensitive information, making financial transactions, or engaging in other harmful activities.

Voice clone fraud poses significant risks to personal privacy, security, and trust in digital communications, highlighting the importance of robust authentication measures and awareness of potential threats in today’s technology-driven world.

Deepfake audio fraud can manifest in various forms, such as impersonating a loved one in distress to solicit financial aid or mimicking a company executive to authorize fraudulent transactions. For instance, imagine receiving a phone call seemingly from a family member claiming to be in urgent need of financial assistance due to an unexpected emergency.

The caller’s voice sounds genuine, reflecting the familiar tone and mannerisms of your relative. However, unbeknownst to you, the voice is a meticulously crafted deepfake, designed to manipulate your emotions and prompt immediate action. In another scenario, a cybercriminal might impersonate a CEO during a video conference, using a sophisticated voice clone to issue instructions for unauthorized fund transfers.

These examples illustrate how deepfake audio fraud can exploit trust and deceive individuals or organizations, underscoring the importance of vigilance and verification in digital interactions.

How can deepfakes be detected?

At a time when deepfake technology casts doubt on the authenticity of voice-based engagements, Veridas introduces Voice Shield. Leveraging our proven AI-driven authentication techniques, Voice Shield now provides seamless liveness detection without the need for user registration or consent.

Within seconds, Voice Shield identifies the legitimacy of voices and protects against fraudulent impersonation during calls.


The rise of voice biometrics

Over the past two years, Veridas has seen a significant 325% increase in the use of voice biometrics among its main clients, based on real production data. This trend demonstrates that voice biometrics is gaining popularity among users, as it provides a secure and convenient method of authentication.

The rise of voice biometrics is not surprising, given its advantages over traditional authentication methods. Voice biometrics is convenient, does not require physical contact, and is highly accurate. As more companies adopt voice biometrics to enhance their security measures, it is expected to become a standard practice in the near future.

How does voice authentication work?

Voice biometrics works by analyzing various voice patterns, such as pitch, tone, and rhythm, to create a unique voiceprint for each individual. This voiceprint can then be used to verify the identity of the speaker in real-time, providing a secure and seamless authentication process.

During enrollment, the user’s voice is converted into a reference voiceprint. When they authenticate later, the user speaks again, generating another voiceprint that is compared to the one created at registration. Veridas’ voice biometric solution requires only 3 seconds of speech for authentication, providing a fast and seamless user experience while maintaining a 99% performance rate.
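The enroll-then-compare flow can be sketched as follows. The embedding function below is a hypothetical stand-in built from simple spectral statistics; a real engine such as Veridas’ uses a trained neural speaker encoder, and the 0.8 cosine-similarity threshold is arbitrary:

```python
import numpy as np

def voiceprint(audio: np.ndarray) -> np.ndarray:
    """Hypothetical speaker embedding: reduce audio to a fixed-size,
    unit-normalized vector of spectral statistics."""
    spectrum = np.abs(np.fft.rfft(audio, n=512))
    embedding = spectrum[:64]
    return embedding / (np.linalg.norm(embedding) + 1e-9)

def match(reference: np.ndarray, probe: np.ndarray,
          threshold: float = 0.8) -> bool:
    """Cosine similarity between enrollment and probe voiceprints
    (both are unit vectors, so the dot product is the cosine)."""
    return float(np.dot(reference, probe)) >= threshold

# Enrollment: store a reference voiceprint for the user.
rng = np.random.default_rng(0)
enrollment_audio = rng.standard_normal(16000)   # ~1 s at 16 kHz
reference = voiceprint(enrollment_audio)

# Later authentication: same speaker over a slightly noisier channel.
probe_audio = enrollment_audio + 0.01 * rng.standard_normal(16000)
assert match(reference, voiceprint(probe_audio))
```

In a production system, the threshold is tuned on evaluation data to trade off false accepts against false rejects rather than fixed by hand.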

Voice biometrics has been adopted by many industries, including banking, healthcare, and government agencies, as a reliable form of identity verification.

However, as with any new technology, there are also new risks involved. One of the most significant threats is synthetic fraud, where fraudsters use pre-recorded or synthetic voices to impersonate someone else.

These synthetic voices can be used in a variety of fraudulent activities, such as accessing someone else’s bank account, making unauthorized transactions, or even impersonating a government official to obtain sensitive information. These types of attacks can be costly for individuals and companies, leading to financial losses, damaged reputations, and compromised security.


How to detect AI calls with biometrics?

Relying on third-party verified technology

Veridas has been at the forefront of voice biometrics development, offering its clients the latest advancements in these technologies to meet the growing demand for secure and reliable identity verification. As more and more use cases arise, such as remote working, online banking, and e-commerce, voice biometrics is set to become even more widespread in the years to come.

Using a reliable and secure voice biometrics solution is crucial for companies looking to implement this technology as part of their identity verification process. Veridas offers a market-leading voice biometrics solution that is text and language independent, allowing users to speak in any language without having to repeat a specific phrase. Veridas is ranked among the top providers in the National Institute of Standards and Technology (NIST) Speaker Recognition Evaluation (SRE) rankings, which is the industry’s highest standard. By using third-party verified technology like Veridas, companies can ensure that their voice biometric authentication process is reliable, secure, and up-to-date with the latest fraud prevention measures.


Controlling the capture process

Voice anti-spoofing technology analyzes audio to detect what are known as “presentation attacks”. These are attacks in which the attacker plays pre-recorded audio through a loudspeaker, such as that of a mobile device, a stereo system, or a PC.

When audio is played through a loudspeaker, the voice it contains carries traces that differentiate it from an authentic voice produced by the vocal cords. Veridas’ anti-spoofing engine is trained to detect these traces and thus distinguish voices emitted by the vocal cords from voices reproduced through a loudspeaker.
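As a purely illustrative sketch (this is not Veridas’ engine, and real anti-spoofing models learn far subtler cues), one physically motivated trace is that small loudspeakers reproduce very low frequencies poorly. A toy detector could threshold the share of spectral energy below roughly 100 Hz:

```python
import numpy as np

def lowband_ratio(audio: np.ndarray, sr: int = 16000,
                  cutoff: float = 100.0) -> float:
    """Fraction of spectral energy below `cutoff` Hz."""
    power = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    return float(power[freqs < cutoff].sum() / (power.sum() + 1e-12))

def looks_replayed(audio: np.ndarray, threshold: float = 0.01) -> bool:
    """Flag audio whose low-frequency content is suspiciously weak."""
    return lowband_ratio(audio) < threshold

sr = 16000
t = np.arange(sr) / sr
# "Live" speech stand-in: strong 80 Hz fundamental plus a harmonic.
live = np.sin(2 * np.pi * 80 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
# Replay stand-in: a small speaker has stripped the 80 Hz component.
replayed = 0.5 * np.sin(2 * np.pi * 1000 * t)

assert not looks_replayed(live)
assert looks_replayed(replayed)
```

A single hand-picked feature like this is easy to defeat; production engines combine many learned cues, which is why loudspeaker quality (discussed next) is the axis that stresses them most.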

With this in mind, the main factor to consider when evaluating voice anti-spoofing systems is the loudspeaker used to reproduce the audio. The higher the quality of the sound equipment, the closer the reproduced voice will be to the same voice produced by the vocal cords. The effect is similar to listening to music: a song played on high-quality equipment sounds better to us than the same song played on a poor-quality device.

Currently, our anti-spoofing technology achieves an accuracy of approximately 97% when the reproduced audio comes from a low-end or mid-range speaker. When the speaker is high-end, detection capability is slightly reduced; last week we deployed a new version that increased detection on high-end speakers to 92%. These figures refer to the performance of Veridas’ voice biometrics engine, which is representative of the state of the art: no system to date has achieved higher accuracies.



Multi-factor biometric authentication

Another way to strengthen an identification process is to use multi-factor authentication, where voice biometrics is combined with other forms of authentication, such as a standard identity verification process that includes document verification and facial recognition. Linking multiple factors to the user’s identity makes it far more difficult for fraudsters to impersonate someone else.
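A minimal sketch of such score-level fusion, assuming each factor produces a match score in [0, 1]. The factor names, weights, and 0.75 threshold are illustrative, not a real product API:

```python
def multi_factor_decision(scores: dict[str, float],
                          weights: dict[str, float],
                          threshold: float = 0.75) -> bool:
    """Accept only if the weighted average of per-factor
    match scores clears the threshold."""
    fused = sum(weights[f] * scores[f] for f in weights) / sum(weights.values())
    return fused >= threshold

scores = {"voice": 0.92, "face": 0.88, "document": 0.95}
weights = {"voice": 0.5, "face": 0.3, "document": 0.2}
assert multi_factor_decision(scores, weights)           # strong on all factors
assert not multi_factor_decision({**scores, "voice": 0.1}, weights)
```

An AND-of-thresholds policy, where each factor must individually pass, is the stricter alternative; weighted fusion trades some strictness for robustness to one noisy factor.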

In addition to these measures, it is essential for companies to stay up-to-date with the latest advancements in voice biometric technology and fraud prevention. In this sense, Veridas is constantly researching and developing new methods to improve the accuracy and security of voice biometrics.

Voice biometrics, the next big trend in digital transformation

The rise of voice biometrics is making it a “must-have” for companies looking to digitally transform their operations. As the demand for secure and convenient authentication methods grows, voice biometrics is emerging as a leading solution for various industries, from finance to healthcare. With the rise of deepfake audio attacks and synthetic voice fraud, it’s crucial to ensure that your voice biometric system is secure and reliable.

By incorporating state-of-the-art voice biometric technologies, businesses can provide their users with a seamless and secure experience while protecting against fraud. As you consider your digital transformation strategy, make sure to explore the benefits of voice biometrics and choose a provider that offers reliable, secure, and user-friendly solutions.
