As part of its preview of iOS 17 accessibility updates coming this year, Apple announced a pair of new features called Live Speech and Personal Voice. Live Speech allows users to type what they want to say and have it spoken aloud.

Personal Voice, on the other hand, lets people who are at risk of losing their ability to speak, such as those with a recent diagnosis of ALS, create and save a voice that sounds like them.

With the first beta of iOS 17 now available, you can try out Personal Voice for yourself.

Here’s how Apple describes the new Live Speech feature coming later this year:

With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations. Users can also save commonly used phrases to chime in quickly during lively conversation with family, friends, and colleagues. Live Speech has been designed to support millions of people globally who are unable to speak or who have lost their speech over time.

Building on Live Speech is Personal Voice, an incredibly powerful feature that Apple says is designed for users at risk of losing their ability to speak. This includes people with a recent diagnosis of ALS (amyotrophic lateral sclerosis), a disease that progressively impacts speaking ability over time.

To create a Personal Voice, users are prompted to read along with a randomized set of text prompts, recording 15 minutes of audio on iPhone or iPad. Using on-device machine learning, the iPhone or iPad then creates a voice that sounds like them.

This voice feature then integrates with Live Speech, so users can speak with their Personal Voice in FaceTime calls and during in-person conversations.

Apple’s announcement:

For users at risk of losing their ability to speak — such as those with a recent diagnosis of ALS (amyotrophic lateral sclerosis) or other conditions that can progressively impact speaking ability — Personal Voice is a simple and secure way to create a voice that sounds like them. 

Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.

Essentially, this feature lets people create a synthetic voice on their iPhone just by reading through Apple’s pre-crafted prompts. Philip Green, who was diagnosed with ALS in 2018 and is a board member and advocate at the Team Gleason nonprofit, praised Apple’s efforts in a statement on Tuesday:

“At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, board member and ALS advocate at the Team Gleason nonprofit, who has experienced significant changes to his voice since receiving his ALS diagnosis in 2018. “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary.”

Apple says that these new accessibility features will start rolling out later this year. In addition to Live Speech and Personal Voice, Apple has announced a number of other new accessibility features as well.

9to5Mac’s Take

Apple has always been a leader in accessibility features, and today’s announcements are just the latest example of that. But more so than ever before, these features resonate with me.

My mom passed away in December after a short seven-month battle with ALS. Her voice was one of the first things she lost. In fact, by the time she was formally diagnosed with ALS, her voice was already mostly gone.

Just reading this press release moved me to tears. The Personal Voice feature gives me hope that people with ALS and other speech-impacting conditions might suffer ever-so-slightly less. I wish this had been a feature when our mom was here, but I’m thrilled it’s something on the horizon for others.

I’d even go so far as to say that everyone should spend 15 minutes setting up the Personal Voice feature once it’s available. As my sisters and I learned with our mom, your ability to speak can be taken away in a matter of weeks, and it might be too late at that point to set up something like Personal Voice.

While there are certainly some questions and specific details I’m waiting on Apple to answer about this feature, if there is one company I trust to get something like Personal Voice right, it’s Apple. Unlike other voice synthesis tools on the market, which require you to upload sample data of your voice, Personal Voice does everything entirely on-device, with no cloud processing whatsoever. Users will be able to opt in to syncing their voice to other devices using end-to-end iCloud encryption.

Coincidentally, May happens to be ALS Awareness Month. I implore you to learn more about it via the ALS Association’s website or via Team Gleason’s website.

Follow Chance: Twitter, Instagram, and Mastodon

