“To Err Is Human” but definitely not AI


With the rise of voice-controlled AI assistants, many videos and articles mock the technology's ability to handle questions, dialects, and more. The technology is getting there slowly (relative to our current perception of time) but surely…

Ben Evans, Andreessen Horowitz — Mobile is Eating the World, Dec 2016

What caused me to “err” on the topic in a new light while reading “OK, House. Get Smart: Make the Most of Your AI Home Minion” on Wired was this:

Think before you speak
In conversation, sometimes, um, people, you know, hem and haw and sort of, like, meander. But smart speakers are about as forgiving as your high school debate coach. Rudnicky’s advice: no trailing off, no stopping and starting.

Don’t be ambiguous
Humans are masters at using context to parse ambiguity. Machines? Not so much. “People often say things like ‘What about the other one?’ without specifying what the other one is,” says Vlad Sejnoha, CTO of voice systems outfit Nuance Communications. Say exactly what you want, without any room for misunderstanding — it will stave off your bot rage.

With children around such devices almost from infancy, how will this affect language and verbal communication for upcoming generations? Standardizing commands and conversations to meet AI understanding is scary. Language, culture and heritage, tone of voice, facial expressions, and body language all contribute to our understanding of any spoken word.

How much impact would this have on the future of verbal arts, communication, emotional intelligence, and things we haven’t even realized yet?

In no way is this a call to end all AI; the future is inevitable. But a careful awareness of how our surroundings are shifting and our priorities realigning may be worth considering.

This post originally appeared on Medium on Jun 21, 2017