I was working on a file the other day when my iPhone popped up a message: “A sound has been recognized that may be a doorbell.” Indeed, a doorbell had just rung.
This is one of a new collection of accessibility notifications for people who have trouble hearing. Apple has been rolling out a lot of these lately, and Google has been doing the same with Android.
In fact, the iPhone has quite a few sounds it’s trained to listen for: fire alarms, sirens, smoke alarms, cats and dogs, appliances (though I’m not clear exactly which appliances), car horns, doorbells, door knocks, glass breaking, kettles, running water, babies crying, coughing and shouting. Enabling the feature also deactivates “Hey, Siri” voice commands. It’s not clear why; if the phone is already listening, why not simply add “Hey, Siri” to the list of sounds it listens for?
But what if this sound recognition could be tweaked to handle core IT and operational chores? Think of it as an option to customize the phone to listen for sounds specific to your company. Like the classic machine-learning predictive-maintenance example, could the phone hear a sound in a work area and say, “That sounds like the XYZ component in that huge piece of machinery overheating”?
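For what it’s worth, the pieces for this already exist on iOS. Apple doesn’t open up the accessibility feature itself to developers, but its SoundAnalysis framework will run a custom-trained Core ML sound classifier against the live microphone feed. Here’s a rough sketch, assuming a hypothetical FactorySounds model you’d train yourself (say, with Create ML’s MLSoundClassifier) on recordings of your own machinery:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Sketch only. "FactorySounds" is a hypothetical Core ML sound classifier
// you would train yourself (e.g., with Create ML's MLSoundClassifier) on
// recordings of your own equipment; Apple ships no such model.
final class MachineListener: NSObject, SNResultsObserving {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        self.analyzer = analyzer

        // Wrap the custom model in a sound-classification request.
        let model = try FactorySounds(configuration: MLModelConfiguration()).model
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: self)

        // Stream microphone buffers into the analyzer as they arrive.
        input.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
            analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
        }
        try engine.start()
    }

    // Called each time the analyzer classifies a window of audio.
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first,
              top.confidence > 0.8 else { return }
        // e.g., "xyz_component_overheating" -> page the maintenance team
        print("Heard \(top.identifier) (confidence \(top.confidence))")
    }
}
```

You’d still need microphone permission and, of course, a model that can actually tell a healthy component from a dying one, which is the hard part.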
Or perhaps the feature could do something even more useful, such as detecting when a specific person is coming down the hall: “Alert! Ken from Legal is approaching. Hide now.” Or you could place the phone by an open window so it can listen for the sound of your boss’s car arriving.
It could also become an evil management tool, alerting someone if no keyboard clicks have been detected for a predetermined period of time. How about a helpful identifier? If caller ID doesn’t do the job, could the phone be trained on the voices of all employees so it can flag the name of whoever’s calling? (An evil version would identify employees who phone into an anonymous complaint line.)
Take this up a notch and a smartphone could be customized to identify any sounds you want, to help the business. We already know that videoconferencing systems are always listening, even when your mic is muted, but what if the phone could help identify who’s actually talking? Some systems offer speaker identification now, but it’s not universal, and it doesn’t always work reliably even on systems that claim to offer it.
Ever run into a fast-talker at work? What if the phone could listen and pipe a slower, clearer interpretation into your earbud? Yes, it could also display a real-time transcript on the screen, but it’s hard to stare at that screen constantly without being noticed. Earbud prompts are more discreet.
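The transcript half of that is already real: Apple’s Live Captions does it system-wide, and any app can roll its own with the Speech framework. A minimal sketch (the LiveTranscriber name is mine, and the slowed-down earbud replay remains fantasy):

```swift
import AVFoundation
import Speech

// Minimal live-transcription sketch using Apple's Speech framework.
// A real app must first call SFSpeechRecognizer.requestAuthorization
// and declare microphone/speech-recognition usage in Info.plist.
final class LiveTranscriber {
    private let engine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        request.shouldReportPartialResults = true  // update as each word lands

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            self.request.append(buffer)
        }
        try engine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result {
                // In practice this would drive on-screen captions; the slower
                // earbud rendition would need a text-to-speech step on top.
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```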
Then there are the inevitable real-time “lie detection” alerts. Imagine having a chat with your supervisor and hearing, “That’s likely a lie.” The feature could also help during board or audience presentations by listening for a high volume of sighs or yawns and issuing a cautionary prompt: “Wrap it up. You’re losing them.” Granted, a good speaker should sense that already, but a speaker focused on complicated material may not pick up on the audience getting distracted.
As Apple, Google and others work to perfect accessibility features that are genuinely useful and helpful, it's clear so much more can be done with these devices.