Nosy AI or Mindful Neighbor?
A look at AI after reading The Inevitable
The Inevitable
I recently completed the audiobook of Kevin Kelly’s The Inevitable. Kevin Kelly is one of those individuals who both speaks and looks like a prophet. He’s been right on many occasions. In the book, he shares practical wisdom on how to live in a world filled with more and more technology.
One of the highlights of my career to date was visiting Kelly in California. I was dropping off an early version of the Ubi for him, and much to my surprise, he gave me a copy of one of his other books, Cool Tools, a huge tome filled with amazing illustrations.
When we met, he asked me what I thought about the idea of AI-as-a-Service. This was 2014, several years before AI was trending. My response was a shrug. I said it sounded interesting, but I wasn’t sure of any specific applications.
In my nervousness at meeting such an amazing author, it didn’t occur to me that AI was already working behind the scenes in so many of the services we were employing as part of the Ubi, and that we were consuming them as services. Speech recognition, natural language understanding, speech synthesis — all of these were using machine learning and AI-based systems in some form.
Listening to The Inevitable brought back memories of that meeting. As he was then, Kelly is remarkably prescient in the book about what’s going to happen over the next few years and how we might end up interacting with AI-driven systems.
One of the items that he brings up is how we’re going to have to deal with AI-based systems knowing more about us than we would have openly been willing to share in the past. Kelly goes through the many sources of information that are currently available to search engines and other network operators.
Duped + Perplexed = Duplex
Google, for example, might know all of your search history, your YouTube views, how, when, and to whom you’re writing your emails, when and from where you’re accessing different websites, every website you’ve visited (since these might be using Google Ads, AdSense or Analytics), your location history, which devices you own and use and how often, who is in or close to your home if you’re using Google Wi-Fi hardware, and with which other routers you’ve been in proximity.
Having access to information and being able to use it are different things. Just because Google could build some type of recommendation system based on your sleeping habits doesn’t mean it will. Applications might be limited by national laws around data usage, or simply by the creepiness factor of Google knowing all of these things and trying to influence us based on them.
Google showcased Duplex at its I/O conference back in 2018, and people were in an uproar that Google had duped them into interacting with hyper-realistic voice bots. It wasn’t the realism of the voices that irked people; it was the lack of transparency.
What limits companies today is consumers’ distaste for having their data used conspicuously and without much benefit to them. For the most part, when the outcome of this analysis is hidden inside the tailoring of their experience with a platform, consumers tend to be unaware of how much their reality is being shaped by the data mined from them. When a TED talk or some other high-profile story highlights how much these companies know about us, it leads to short-term outrage that usually peters out, because consumers can’t easily connect the mining of their data to how it influences their day-to-day interactions with different platforms.
If the use of data were more blatant, it would arouse suspicion and consternation from the public. For the individual, this might happen when they’ve visited a single site and then become consciously aware that the email newsletters they’ve subscribed to, their Facebook feed, and the sites they visit regularly are suddenly flooding them with the same advertisements. When companies use the data more effectively, we love the result. It’s the YouTube recommendation that seems fascinating, the Netflix suggestion that’s spot on, or the recommended Tweet that’s hilarious, promoting a product that’s exactly what we needed. The real-world equivalent is that we tend to be perfectly fine with a salesperson approaching us and recommending something if it happens to be what we’re looking for.
This Is Deep
Our tolerance towards the use of our data, especially a third party’s deep analysis of ourselves, is proportional to the usefulness we get back from the product. The value only has to be slightly higher than our perceived risk of someone abusing that data for us to be open to using the service. For example, in the early days of the Internet, people were terrified of using credit cards online because of the risk that someone would steal the card number and use it for fraud. This was a leftover fear from the “I never give my credit card number over the phone” era of phone purchases, which admittedly was much less secure but still carried a low risk of fraud.
In those early days, when encryption was a term most people didn’t understand and was reserved for cyber thrillers, there was indeed something to be concerned about. However, the convenience of ordering something and having it show up at your doorstep within a few days, without having to leave home, was enough to offset that risk and push consumers to adopt ecommerce widely.
Today, a lot more can be done with someone’s data than could have been imagined when we first started ordering stuff over the Internet. Google captures all the recordings of the requests that we make to Google Assistant or on our Google Home devices. Amazon, Apple, Facebook, and almost any provider of voice services can access large amounts of recordings of our voices. If you call a company and they record the conversation, it’s likely that recording can be used in the future for development.
Just Say It Wasn’t You
The use of this data for product improvement by “responsible” players is benign. However, someone with malicious intent could develop an extremely realistic synthesized version of a voice. Used creatively, this could wreak havoc on an individual. Forget about a bad credit rating; think about the social implications of a simulated you, with your deep-faked voice, calling all your contacts and harassing them — your boss, your coworkers, or your clients. Sure, it might be possible to call everyone back and explain what happened. Even so, they might doubt whether they’re speaking with the real you, and carry that doubt into every future interaction.
A custodian of our data must not only give us a useful benefit in exchange for that data, but also safeguard it as though their business depended on it. These are the table stakes for any service.
Trust in the System
A greater concern is the misuse of our data by those in positions of authority to develop abusive AIs. Governments might use data to undermine democracy by harassing the opponents of those in power; this is already becoming commonplace. If political opponents could access your emails, web searches, and other data sources, they could use them to develop bots that smear you in public. They could flood news outlets with deep-faked images and soundbites, or with machine-written posts, produced through natural language generation, that sound exactly like you.
One prerequisite for sharing data with AI-driven systems is equality under the law. Another is that the law not only works to prevent bad actors but actively discourages them through harsh and transparent penalties for abuse.
Should these two conditions be met, it then becomes possible to have an extremely open society where all sorts of information about yourself and your neighbors is publicly available. What would be the result of the data version of extreme honesty? What advances in scientific research, justice, medical care, economics, or other areas could we see if data was freely and widely available?
A Cup of Sugar
Kevin Kelly refers to this idea as that of the nosy neighbor. Sure, my neighbor might see when I come and go from my home, when I receive packages, what might be in those packages, what groceries fall out of my grocery bag when I bring them into the house, if I get into a heated exchange with my significant other, or other things I might be proud or not so proud of. They can also see when something looks off, if I’m in trouble and might need help, if someone’s trying to break into my house, and so on. Kelly suggests that this is a return to something more basic that humans evolved with over millennia. We were meant to live close to our neighbors. We watched over each other.
The Inner Sanctum
The slow, inevitable turn is that we are going to have at least two sides to us: a public side that shares more and more of what we’re thinking and doing, and a smaller inner sanctum where we keep our prized thoughts and share them only with specific individuals of our choosing.