From intricate features to unknown risks to uncertain impact, new technology can be quite intimidating. Yet when emerging technology is used to help people with disabilities be more independent and connected, it can be truly life-changing.
Apple is one company with a long-standing commitment to making products for everyone, and it’s intriguing to watch the company develop innovative solutions aimed at empowering all individuals to use its products with greater ease and independence.
The company recently previewed new accessibility features coming with its iOS 17 updates, many of which are powered by on-device machine learning. These latest features are geared toward people who have disabilities or delays in cognition, vision, hearing, speech, and mobility.
So, what are these new accessibility features, and how can they be used by students with complex challenges? Here’s a breakdown of the latest features and how they work:
- Assistive Access. This feature aims to reduce cognitive load and make apps easier to use for people with cognitive disabilities. As part of it, Apple combines the Phone and FaceTime apps into a single Calls app and offers customized versions of Messages, Camera, Photos, and Music.
Assistive Access provides a distinct interface with large text and high-contrast buttons, and additional tools let each user tailor the experience. For example, for individuals who prefer communicating visually, Messages offers an emoji-only keyboard and the option to record a video message to share.
- Live Speech and Personal Voice. For individuals who can't speak, Live Speech lets users type what they want to say and have it spoken aloud during phone and FaceTime calls and even in-person conversations. They can also save commonly used phrases to respond quickly during conversations with family and friends. (For the technically curious, a rough code sketch of this type-to-speech idea follows below.)
Personal Voice, on the other hand, lets individuals record their own voice and create a synthesized voice that sounds like them. Apple explains, “Those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends.” This is especially meaningful for people with conditions that progressively affect their ability to speak.
To create a Personal Voice, a user reads a set of text prompts to record 15 minutes of audio on an iPhone or iPad. The feature then uses on-device machine learning to generate the voice, and the recordings stay private and secure.
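Here's what that type-to-speech idea can look like in code. The sketch below is purely illustrative, not Apple's implementation: it relies on Apple's public AVSpeechSynthesizer API, and the PhraseBoard class and its saved-phrase list are names we made up for the example.

```swift
import AVFoundation

// Illustrative sketch only: a tiny "type to speak" helper in the spirit of
// Live Speech, built on Apple's public AVSpeechSynthesizer API.
// PhraseBoard and its saved phrases are hypothetical names, not Apple's code.
final class PhraseBoard {
    private let synthesizer = AVSpeechSynthesizer()

    // Commonly used phrases the user can trigger quickly in conversation.
    var savedPhrases = ["Yes, please.", "Give me a moment.", "Thank you!"]

    // Speak any typed text out loud using a system voice.
    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate
        synthesizer.speak(utterance)
    }

    // Speak one of the saved phrases by its position in the list.
    func speakSavedPhrase(at index: Int) {
        guard savedPhrases.indices.contains(index) else { return }
        speak(savedPhrases[index])
    }
}

// Example usage:
// let board = PhraseBoard()
// board.speak("I'd like to join the video call.")
// board.speakSavedPhrase(at: 0)
```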
- Point and Speak. This feature is part of Detection Mode in Magnifier, which aids people who have vision disabilities or low vision. Users can point at an object with text on it, and the iPhone will track their finger and read aloud whatever they are pointing at.
Point and Speak combines input from the Camera app, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their finger across a keypad, such as on a household appliance. It can be used with other Magnifier features like People Detection, Door Detection, and Image Descriptions to help users navigate their physical environment.
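Along the same lines, here is a hedged sketch of the core "read the text you see" step. Apple's actual pipeline combining the Camera, LiDAR Scanner, and finger tracking isn't something we can reproduce, so the function below only approximates the idea with the public Vision and AVFoundation frameworks: it recognizes text in a still image and speaks it aloud.

```swift
import Vision
import AVFoundation
import CoreGraphics

// Illustrative sketch only: recognize text in a still image with Vision and
// read it aloud with AVSpeechSynthesizer. This approximates the "read the
// text you see" step of Point and Speak; it is not Apple's implementation
// and it ignores LiDAR and finger tracking entirely.
let synthesizer = AVSpeechSynthesizer()

func speakText(in image: CGImage) {
    // Ask Vision for accurate, on-device text recognition.
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else { return }

        // Take the best candidate string from each detected text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }

        // Speak the recognized text out loud.
        let utterance = AVSpeechUtterance(string: lines.joined(separator: " "))
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```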
To say we’re excited by the possibilities of this enthralling technology and its impact on students’ educational experiences is an understatement!
We applaud Apple’s continuous efforts to make technology more accessible to all, and we look forward to harnessing this and other emerging technology to help our students reach their full potential.
What do you think about these new features – and what are some ways you could envision this technology being helpful to students with complex challenges? Let us know your thoughts on Facebook.