
Voice-Cloning Tech Coming to iPhone, iPad to Help Those With Disabilities Speak

The upcoming 'Personal Voice' feature for iOS and iPadOS is designed to help people at risk of losing their ability to speak due to conditions such as ALS.

Although voice-cloning software has already been raising alarm bells over its potential misuse, Apple is tapping the technology to help those with disabilities speak. 

The company today previewed its “Personal Voice” feature, which will arrive through a collection of new accessibility improvements later this year for the iPhone and iPad.

“Those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends,” Apple says.

The feature could be helpful for people with conditions such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, which breaks down nerve cells and can prevent someone from speaking in their normal voice.

How Personal Voice Works

Apple’s system has users record 15 minutes of audio on an iPhone or iPad, which should be enough for the software to clone their voice. From there, the user can type what they want to say and Apple’s hardware will read it out loud in their synthesized voice.

“This speech accessibility feature uses on-device machine learning to keep users’ information private and secure,” Apple adds. In other words, the cloned voice lives only on the user’s device, preventing others from accessing it.

The feature seems similar to Samsung’s voice cloning, which has launched on certain Galaxy phones in Korea. But Samsung pitches its version at consumers who are in a noisy area or need to stay silent during a call. Apple, on the other hand, has developed Personal Voice as an accessibility feature.

For people who can’t speak, Apple created a feature called Live Speech. “Users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations,” Apple says. 

Live Speech

The company is expected to release the accessibility features in iOS 17, which will likely be announced at WWDC in early June and launch in September. Another improvement is an overhauled iPhone/iPad interface called “Assistive Access,” which is designed for users with cognitive disabilities.

“The feature offers a distinct interface with high contrast buttons and large text labels, as well as tools to help trusted supporters tailor the experience for the individual they support,” Apple says.

Assistive Access interface

About Michael Kan