In looking at voice technologies, it’s interesting to see the parallel between layers of technology and layers of development in the brain. In the brain, it was first the reptilian complex, then the limbic system, then the neocortex. Each layer adds complexity and an order-of-magnitude leap in the ability to perceive the world.
Voice technology seems to follow a similar trend. First came sound identification: digit recognition in the fifties, then words, then sentences, then natural language, then sentiment. And of course, we now handle all of these layers with much more sophistication. Soon there will be the ability to understand motivation, and we can imagine full psychoanalysis by machines.
In some respects, this has already happened. Machines have proven able to determine what will influence us, in ways we find detrimental. For example, if enough of us can be swayed by runaway fake Facebook postings to compromise how we govern ourselves, then we’ve already surrendered control.
The layers we’re building may be outgrowing us, and we’ll need to direct where and how they grow, or we’ll become overwhelmed.