…I really think people need to understand when we talk about AI (artificial intelligence), it’s evolving into a lot of different things. I’ve talked about big brother and we think about how we’ve evolved from that. AI is becoming that in a lot of different ways… Do you believe that we’ve ended up in this surveillance economy that we’re describing?
So, generally I agree that we’re in a surveillance economy. Maybe I’ll clarify this. What we tend to mean by surveillance economy is that our data that we produce online has been commodified and produces value. And oftentimes we exchange the value of that data for free services or things that we like, like Gmail. I personally love the photos that Facebook resurfaces for me every year of my family as they grow up. That’s part of the surveillance economy and it’s very much a part of our world.
So, then how did we get there? I mean, the way that we got there is that we're getting these new services and we love that. But at the same time, when we advance and we get these things, we also get the downsides. We get that baby monitor that gets hacked, and that just gets so darn creepy. Some guy or gal out there is hacking in and thinking it's funny, but it becomes so creepy and scary, and it's breaking the law. Those bad actors are out there, doing things to what we think is great, and it becomes bad.
Isn’t that the nightmare? And I think it’s with all things powerful. We’re aware that powerful things in the hands of cynical actors or nefarious characters, they can produce a lot of bad stuff. That’s why I believe that figuring out how we’re going to regulate, how we’re going to tax, and how we’re going to manage the surveillance economy is a very important question. I don’t tend to fall on the side of the conversation which is “Because it’s big and because it’s powerful, it’s bad,” but I definitely fall on the side of “Let’s be careful.”
And there are a couple of examples we can talk through: very great applications of the extraordinary things that we can do with data now, and also a couple of things that maybe we're not comfortable with, where we've got to figure out what legal frameworks are going to protect against misuse, or against the absence of true permission from the user who's generating the data.
I like to start conversations about the surveillance economy maybe on something sort of positive as we walk into the negative. Because one of the ways that we got here is by getting a lot in exchange that we’re pretty comfortable with.
So, I tend to go back to some pretty basic stuff that almost all people love and want and then look at the way that the surveillance economy is creating these experiences for us, and how are they not quite the same as when they happen in our social communities and in our families?
So, the three things I always point out are people want to be seen, right? What does my son say when he’s proud of something? He says, “Dad, look.” People want to be known. When I walk into a bar and they know my drink, I feel like I’m home. I want to be anticipated, right? When I find out that my friends are conspiring to figure out what I want for my birthday, I feel loved, right? These are the things in very broad strokes the surveillance economy is trying to produce for us, but it still leaves us with a little bit of an icky feeling. But the reason we’re attracted to it is these are really experiences that we crave in our social lives.
So, I look at it as: technology that improves our lives in any which way is great, but we have to be wary of it. We have to watch it. We have to know our limitations with it. The way you described that, being seen, known, and anticipated, being loved: it's improved our lives, but at the same time we have a responsibility to it.
Absolutely. And we have to ask some other questions, right? The conversation is tricky because there’s some obvious stuff that I’d say is good. And I watch people interact with technology and I do it myself and am delighted by a lot of it. But there’s at least three questions that are probably fertile ground for some thought experiments about how we get a handle around this.
One question is: Is it fair? For example, we're trading our data in exchange for free services. Is the balance of value appropriate? One could make the argument that Google is moving a bit too quickly to collect data from us that's far more valuable than a free email service should be. I'm not making that argument, but it's an interesting question: is the value bargain that we're making with Google in exchange for all of our data a good bargain?
I think another question that comes up is: how do we think about being intentional when we're constantly signing privacy agreements that we have no idea what we're signing? There's a smart home doorbell company where, if you were to read all of the privacy agreements that relate to it and to all of the third-party data buyers it sells its users' data to, you'd have to read 1,000 privacy contracts, right? How do you be intentional when that's the barrier to being intentional?
And then I think the third thing is: What's the potency of it? The things that we don't like come up when we start thinking about, for example, Facebook's emotional contagion experiment. About five years ago, they wanted to see whether they could plant emotions in their user base subliminally. Long story short, the answer is yes. We're not entirely sure how to think about a platform with that degree of potency being something that we interact with almost innocently in our daily lives, not really suspicious and not really thinking about what's going on behind the scenes.
When we look at all those things that you just described, is it that they're doing something deliberate that's almost deceitful, then? Because you and I both know the average person cannot read those kinds of privacy agreements. They don't even understand some of the words in them, because don't we say the average person has, what, a sixth-grade reading level?
Because of that, they're almost saying, "Look, you're going to sign it anyway." And for some services, you can't even get them without signing certain agreements.
Correct. Yeah, they leverage you into it.
And you have no choice. You have no choice if you want the service. You could say, "Well, then don't get it," but how do you get a smartphone, right? For some things you have no choice: sign it or get nothing. So how do you communicate in the world?
I think that’s probably one of the most important frontiers for us to tackle early. The idea of what does permission mean? And what are we signing and releasing when we’re agreeing to these privacy or lack of privacy agreements?
I do think that there’s a responsibility to make what you’re agreeing to intelligible. The agreements that we come across, I’m college educated, I spend a lot of time reading myself, even for me, they’re relatively impenetrable. And if I were to put the effort into it, they’re so dry and boring that I almost surrender just because of the torture of getting through the agreement.
If we’re going to exchange data, I do think that we have a responsibility to allow the user somewhat of an awareness around what they’re exchanging. And I think that we’ve got the evidence right now that not all surveillance platforms view it this way, right?
Around the same time that Mark Zuckerberg came out and declared that "the future is private," Facebook was arguing in a case in California that somebody who interacts on the Facebook platform doesn't have a reasonable claim to privacy. Well, which is it? Is the future private, or once you're on the platform do you relinquish all claims to privacy, or expectations thereof?
We've got to sort that through, because when it comes to the nefarious actor, I think that's part of where it begins to cut, right? Who is buying the data from the platforms that I decided I trust? So, I'll use an example. I have no feelings, good or bad, about this company, but it's an interesting example. There's an organization that many of us use to test our genetics, right? We spit in a tube, we send it off, and we find out we're going to live a long life and we're super smart, or whatever we're told by our genetic code. That's a lot of fun. And largely because of the branding and the visibility of the ownership and so on, we think to ourselves, "I'm okay with this bargain." But it turns out that roughly 80% of their users released their rights to their genetic data when they did that.
So, that organization has sold, in a $300 million deal to a large pharmaceutical company, the results of the genetic codes of all of those people who went in and tested. Now, I might think to myself, "The pharmaceutical company, fair enough; they're trying to cure diseases and heal people and make life better." But I didn't know they were going to sell that data to a pharmaceutical company. And who else might they sell it to?
Now your imagination can torture you, but I’ll give you an example of how your imagination could torture you here. If I have the genetic code for somebody and I can synthesize DNA, I can put them at the scene of a crime. Do I think that’s happening? No. And I don’t want to sound tin-hatty, but that’s sort of how your imagination goes when you think to yourself, “Well, if we take this power and we’re not aware of whose hands we’re putting it in, I can imagine some uses that really are disruptive and concerning.”
So, now you’ve put the paranoia factor into how people can think, which is exactly what happens, right?
Yeah, that’s right.
So, when we think about what reactions people have to some of this, well, that's what they started thinking about when we started thinking about big brother, right?
We started thinking about all the things that can happen. But then we think about the benefits, and you went back to curing diseases; those are the positives we have to keep in mind. But then how, to go back to your original statement, do you regulate?
We never like regulation, because regulation in some ways stifles innovation, and we want to be able to reach the sky, right? So how are we able to say, "Let's keep pushing the envelope as far as we can to get the most that we possibly can while we keep encouraging growth?"
This is why I lean in the direction of putting the responsibility on these organizations to, in fact, educate their user base. I don't love the idea of regulation. It's very, very difficult, as innovation proceeds at the rate that it does, to figure out what are the right rules, what are the right laws, what will an individual citizen really exchange their data for knowingly and comfortably, and do we really want to prevent that? I'm not sure, right? And like you said, generally speaking, I'm wary of putting a damper on something that also appears to produce some really great stuff. We've alleviated poverty globally in ways that we couldn't have imagined 20 years ago, and so on and so forth.
And then, all the way down to the trivial: I got my wife an Apple Watch for Christmas, and she loves it. Just today, we're sitting there at lunch, and she leans down to her watch and says, "Siri, put such and such on the grocery list." She loved it. It's really, really great.
So, I lean in the direction of we need to make sure that we’re putting effort into and responsibility on the companies that are providing these services to, in fact, educate what are you doing with my data, right? Who are you selling it to and how will I be updated? Because I may be perfectly comfortable with this genetic testing company selling my genetic data to a pharmaceutical company. Personally, that may be something that I’m totally okay with. But then if they’ve got a long tail of other actors, who I don’t really know much about, I may prefer that they don’t have my data. And being able to make that decision when I’m exchanging so much, I want to be able to think about that clearly and have a vote in that exchange.
Does the onus then go back on the companies if something happens? Just like what happens when an employee goes rogue? Should the regulation cover the case where you have a rogue employee, not a whistleblower but a rogue employee, who says, "Look, I wanted to see what I could do and show that anybody could hack into the information, and we could do whatever we want with it"? So, are there some precautions companies have to take with employees doing bad things with the information, not only with our personal information?
We have laws about negligence and I think that we should have interpretations of those laws in this context, a standard of responsibility around securing the data that is consumed and owned by the genetic testing company or the smart doorbell company.
In terms of ultimate culpability, if anything ever bad happens, I have a hard time quite getting there, but I do think that getting to a point of clear definitions of negligence and standard of care around the data should probably be legally in place, clearly understood, and enforced. Then on the same side of responsibility to the company, I do think that they have a responsibility of plain English and transparency.
One of the reasons I point at that is not just because I like the idea of informed consent as opposed to regulation, but because today it wouldn't be hard to make the argument that some of these surveillance economy companies are running in the opposite direction, right? I don't interpret most of what I watch some of these big companies doing as a sincere effort to educate on "What are you planning to do with my data?"
It seems to fall a little bit more into a pattern of: we're going to move first, we're going to run an encouraging campaign, we're going to get you used to what we're doing and let some habits form, and then we're going to redirect if we get into hot water, right? We'll say some nice things, claim ignorance, wait till the furor dies down, and then continue moving forward.
That I don’t think is healthy for our democracy. I don’t think it’s healthy for people in general, because it really breaks out from the dialogue the ability to have an informed consent and a rational agreement with these companies to say, “Yeah, I am happy to exchange my data in exchange for Facebook.” And I’m a Facebook user, I’m not criticizing Facebook. But for Facebook, I actually love seeing pictures of my children as they grow up when I log into Facebook. I love that. I love free email from Google, right? But I do want to know what am I exchanging and what are you doing with that data?
But here's the really good question that we need to be asking now: what about the gap between what their marketing implies happens and the reality of what actually does happen? Because I think there's some confusion about what people understand. We just talked about the understanding of these contracts, and the privacy agreements and what you're signing there. But even just the marketing now.
The marketing of what is happening and what you're agreeing to and what you believe, how we galvanize everybody to believe all these amazing things about AI and what it can do, and you're connecting all these things without understanding that you've let a bad guy get in.
Yeah, so I don't know how popular my opinion will be here. And I've not given this particular topic a ton of thought, so have patience with me if you think I've under-exercised this idea. It's very possible.
As societies change, as technology evolves, I do think that there’s an adaptation that occurs in society as well on how do we interact with these technologies, what are the consequences of them, and how do we find the right boundaries for them? I’m not sure that I’m convinced that the innovator also owns social change. That the innovator is also completely responsible for making sure that you have fully appreciated all of the consequences of this.
My father is 72 years old. He might interact with a platform… Actually in particular, I’ll give you a sort of a sufficiently vague case, but a real case from our lives. During a political season, he didn’t realize that everything he’s saying on there is, one, inflammatory, and, two, is going to everybody in his network. He’s just not aware. I tend to look at that as society learning how to interact with new technologies and ways of communicating. I don’t know that that’s Facebook’s problem.
Now, at the same time, we can imagine looking at that advertising as deceptive. I do have a problem with Facebook coming out with the tagline "The future is private." That makes it harder for society to adapt, because we can't quite figure out what's going on. So I don't know that I believe that's sincere advertising under any interpretation; it probably wouldn't be appropriate to advertise in the complete opposite direction of your intention. But at the same time, I don't know that I would put it on Facebook to educate my father on, "Here's the way you need to consider what you're putting online, because it's consumed in a way that nothing you've ever produced in your life will be consumed."
But doesn't it raise the question, though: in a society where you can't erase anything, are we saying the surveillance capitalism we're describing should invite critique and reconstruction for that very reason? You just described your father; he doesn't know that when he says something, it's out there forever now. He doesn't recognize what's actually happening. Even some younger children don't understand what they're doing. Doesn't that very fact throw into sharp relief the flaws of what's actually happening around us?
Yeah, so I do think that it requires critique and reconstruction from us. I think that we need to have a very healthy dialogue as a society, and to do it as much as possible without polarization, because it’s very, very hard to have a conversation soberly when you’ve introduced a cynic or a nefarious actor or evil. Because our moral instinct, no matter what you believe, if you start believing that this is a battle of good versus evil, you run to a side. It isn’t easy to have a conversation when we’re also playing team sports. And so I think moving the conversation forward as a society is very, very important because otherwise what we’re probably going to end up doing is growing fatigued of the discussion and simply giving in to the ease and simplicity of these new tools and technologies. And I don’t think that’s good.
For example, Pokemon Go is a game that took the world by storm. Pokemon Go is a surveillance economy strategy that was originally born out of Google and then, once it was fully developed, was moved outside of Google into a new organization and brought to market via a brand that nobody knew or had any thoughts about. Pokemon Go's ultimate objective was to bridge the digital world and the in-real-life world by delivering actual live foot traffic to advertisers.
So, you might think you're just a casual user going out and chasing some Pokemon; in reality, that pizza store down the street paid to bring you there. They put a Pokemon gym at their store to bring you to their pizzeria. Now, if I know that's going on, I could perhaps look at it this way. I can say, "Hey, I had a lot of fun, and that was a novel way to get to the pizzeria. And if I'm hungry when I get there and I want to buy a piece of pizza, I'm perfectly okay with that." But if I have no idea that any of that is happening at all, your imagination again begins to torture you. And the way we get to a place where we don't know it's going on at all is by growing weary of the conversation.
So let's go to the creepy scenario, where somebody draws a young child to some weird, creepy place. Who controls and regulates this new frontier so that somebody doesn't draw a young child into some creepy place through Pokemon? Who regulated who was allowed to draw people into their store? It wasn't necessarily a pizza parlor; maybe some creepy stalker wanted to draw a young child somewhere.
The truth is, I don’t know how to answer that question. I think that we need to keep asking it. And it’s a good question. I can think of other things where, granted, the magnitude and the intensity don’t match, but Ford sold a car to a kidnapper and the kidnapper used the car to kidnap somebody. At what point do I hold Ford responsible for providing the vehicle to a kidnapper?
I don’t know how to answer that. Because there is a different level of visibility on a digital platform of the user base and the user activities. But at what level does responsibility of the cynic or the nefarious actor start, and at what point does the responsibility of the platform stop? And again, as you move through this conversation, I think it’s so important that as families, as parents, as colleagues and as neighbors, we’re discussing what are the impacts on our society and what are the things that we might need to be mindful of?
I don't think you should give your 11-year-old kid a cellphone and let him run rampant, because that little device is a portal to all things good and all things bad. And so I wish I could answer that question. I tend to lean in the direction that what somebody does with the tool is still their responsibility. And like I said, it's not the same as a car, but that's one of the images that comes to mind. If I sell a car to somebody who does a bad thing, is Ford responsible?
Elon Musk says AI is evolving faster than our ability to understand it. Jack Ma, cofounder of Alibaba, says AI is nothing for street-smart people like them to be scared of. Where do you lean, then? Is it in the middle? It's an interesting paradigm based on what we just discussed, because it's rapidly changing. It's a new frontier, and depending on your perspective out there, you could go almost any way.
I'm uncomfortable with too much of a rose-colored view when we look at this stuff: if we look at this technology and just choose to say the glass is half full as a sort of force of will, as opposed to being persuaded that it's half full, or we put on our pink sunglasses and say we don't really need to worry about it. I don't want to over-characterize Jack Ma's opinion in this direction. But generally speaking, he tends to communicate that love is going to overcome, that the human spirit is ultimately dominant, and that we don't need to worry too much about the intersection of technology and humanity changing who we are, because we are stronger than technology. I love that sentiment. If I were to write a story, I would want a character in my story who had those beliefs. But I don't believe it. I don't think it's true, and I think we do need to be a bit more careful and a bit more intentional about thinking through the possible negative consequences of this.
And I’ll give you an example of if we’re not careful and if we’re not thoughtful, we might end up enabling things that we’re not crazy about. So, we put all of our photos online, right? When we put all of our photos online, the big data companies and the surveillance economy companies, they’re not actually interested in your photos per se, they’re interested in the metadata from your photos, because that enables them to train visual recognition software.
So, I won't name the company, but there's a company that has used all of the photos uploaded to its site to create some absolutely astonishing facial recognition. Not facial recognition to help you tag your friends faster, or facial recognition so advertising can be tailored to who you are when you log into your computer. Those are, let's just say, maybe odd but semi-delightful experiences. Facial recognition sold to an authoritarian state, used to control and observe its population and exert minor levels of influence so it appears as if they're not being forceful? Well, that's very uncomfortable, isn't it?
And so I do think that, as tempting as it is to say, "You know what, the human spirit is stronger and we don't need to worry about it," that's probably more fun to say than it is true. But it's no better to run in the other direction and say, "You can't do any of it. We're going to tear down the factories. Let's all become Luddites and resist technology." That's just the opposite extreme. We need to find the middle path, have the conversation, understand that this is a radical change in our society, and take seriously how it affects us and how we control where our data goes. Because I don't mind targeted advertising. I don't mind facial recognition that helps me tag my wife faster in a photo. I do mind thinking that the data I upload to a platform helps an authoritarian state do a better job of controlling and corralling people who should otherwise be free individuals, but aren't, because of the monitoring.
When you look at the big picture right now, how would you summarize everything we talked about?
So, I think there are three questions we really have to ask ourselves. One is: how do we tax this? For example, I'll give you a quick factoid. Our world is moving digital, which is pulling a lot of activity out of real life. The average Macy's store, back when we had malls, used to produce $36 million in tax revenue for the local economy. Those stores are gone; that business is now going to Amazon. So, how do we tax this? Because we do need a contribution from those who benefit from the society they're serving.
The second is: how do we regulate it, as we talked about? And the third is: how do we keep up? And then finally, from sort of a dispositional perspective, try to fight the temptation to assume that everything will be evil. Right? I don't think that's true, and I think it's probably an overreaction. And the result of overreacting is not actually winning; it's getting tired and giving in, and I don't think that's good for us.
On the other end, probably don’t go all the way to the optimism of Jack Ma, because we do need to take seriously that we can introduce things into our society that challenge our democracies, and that perhaps diminish the intimacy of our communities and we don’t want to do that either.
So, find the middle road, listen and participate in the dialogue, and avoid the extremes, because at one extreme you'll grow fatigued and at the other you'll be unsuspecting and deceived.