AI and Big Data in Government Policy - Transcript

May 3, 2018
Jerome Greene Annex, Columbia Law School

Presented by

The Software Freedom Law Center

and

Columbia University Law School

with

Paul Nemitz, Principal Advisor to the Director General for Justice and Consumer Protection of the European Commission

Daniel J. Weitzner, Founding Director of the MIT Internet Policy Research Initiative and Principal Research Scientist at the MIT Computer Science and Artificial Intelligence Lab

Moderated by Eben Moglen, Professor of Law at Columbia Law School and the founder of the Software Freedom Law Center

See event page for further information.

EBEN MOGLEN: You can tell when you’ve invited a White House guy to an event because he’s the one schmoozing the audience when you should be getting started. So, thank you all for coming. It’s a pleasure to see so many people willing to turn out for a boring subject like this in the first week of exams. So, I’m Eben Moglen, and I teach in the law school, and what I do is to try and figure out how freedom can exist in the twenty-first century, a problem which seems to me excessively difficult, so excessively difficult that I started trying to figure out what to do about it late in the twentieth century, and look where we wound up.

The predicate of what we’re doing, I guess, is pretty simple. Every day we open the newspaper and we discover that these two things called artificial intelligence and machine learning are going to change society–the fourth industrial revolution, I heard this morning, and much more dangerous than all the rest of them put together and so on. Everybody knows that everything is going to be different; nobody knows exactly how. The hype-to-reality proportion is always quite high, because the real problems are extremely difficult and nobody really knows how to begin solving them, and the unreal problems, the sunshine about how everything is going to be wonderful tomorrow, is so extraordinarily well worked out that we spend a lot of time, well, imagining, let us say, how everything is going to be. Governments around the world are both extremely conscious of the importance of what is happening and extremely stumped by what to do. The question of how public policy gets made in this world of new and differently scaled, differently scoped information technology is the most important problem on the long thinkers’ lists–not those worried about Brexit or the immediate running out of drinking water in their cities or something of that kind.

I wanted to have a conversation about that, and two of the very most deep-thoughted and long-experienced people in the world were willing to come and join us to talk about it. So, let me introduce them to you, then we’ll have a little bit of colloquy here and then, given the extraordinarily well-informed audience I see before me, we’ll try and do it all collectively. To my right, Paul Nemitz, the principal advisor to the Director General of the European Commission, which he has been for approximately a year now, after destroying the global economy with GDPR, a subject that we’ll come back to soon enough. In his previous post, as Director for Fundamental Rights and Union Citizenship in the Directorate-General for Justice of the European Commission, he led negotiations on the code of conduct against hate-speech on the net, the EU-US Privacy Shield, which we’ll come back to in a moment, and the GDPR, of sainted memory. He launched a justice policy workstream on democracy, freedom of speech, and press plurality in Europe, beginning with a colloquium on fundamental rights hosted by Frans Timmermans in November of 2016. Paul’s a visiting professor of law at the College of Europe in Bruges, and a great and important friend of everything that we do–GDPR, to the contrary, notwithstanding. (Laughter.)

DANIEL J. WEITZNER: That’s not fair.

MOGLEN: Okay, now that, that was really important–our first inter-continental agreement of the day. To my left, Daniel J. Weitzner, founding director of the MIT Internet Policy Research Initiative and the principal research scientist at CSAIL on policy questions. He teaches internet public policy in MIT’s Electrical Engineering and Computer Science Department–it is a very important milestone in the twenty-first century that you can’t teach electrical engineering and computer science at MIT without public policy. That’s a life-change for the thing called computer science, which was never about any of those things when I was young, which is why I didn’t want to take it. From 2011 to 2012, Danny was the United States Deputy Chief Technology Officer for Internet Policy in the White House, a role which allowed him to lose many very important policy decisions to the Director of National Intelligence and other American government officials, but gracefully and persistently, let us say, where he led initiatives on online privacy, cyber-security, internet copyright, and trade policies to promote the free flow of information. He was also Associate Administrator for Policy at the United States Department of Commerce’s National Telecommunications and Information Administration–NTIA, when we are not in a post-fact American government, is one of the most important factual research organizations in American government.

I’m not sure how much facts still matter, but if they matter it’s too bad that Danny’s successors at NTIA don’t have quite the leeway he had. He was a member of the Obama-Biden presidential transition team, it says here in this biography. Enough said about that. Danny has been a leader in the development of internet public policy from its inception, it says here, which is surely true–from the beginning of EFF and the beginning of CDT and the beginning of most of the policy shops that have considered these questions in relation to the United States government since–well, since all of us were young. He has made fundamental contributions to the successful fight for strong online free expression protection in the United States Supreme Court, crafting laws that provide protection against government surveillance of email and web-browsing data–we did have those, once. His work on U.S. legislation limiting the liability of internet service providers laid the foundations for social media services and supported the global free flow of information online… That’s an ironic distinction though, don’t you think? Safe harboring, we invented…

WEITZNER: I’ll own it, I’ll still own it.

MOGLEN: There you go, that’s why it’s so great to have these people with us. We are now, all of us, in the age of artificial intelligence, but you can’t do anything with artificial intelligence that you can’t do with real intelligence first, so I’m hoping that we’re going to get some real progress here this afternoon that no machine could have learned before us. I think the place to begin, Paul, is by saying, so, now that personal data privacy is a solved problem, artificial intelligence, big data, and public policy–how will the next European Commission, the one that will come into existence at the end of this decade… How will it face those problems? What do you think are the issues on which you are advising them to pay the most attention?

PAUL NEMITZ: Well, internet policy, of course, is a very broad field, and I think it’s right to have a holistic view of it, in theory, but in practice, of course, there are very few people who are able to deliver such a holistic view, and I also happen to believe that actually, in terms of academic work, we need something like a theory of the meta-dialogue between the digital and democracy. So with this in mind, I can’t give you a complete and exhaustive picture of what the next commission, which will take office in September 2019–we have elections in Europe in June 2019–will have to look at, but artificial intelligence is certainly a subject. And the way our legislative cycle works, from one year before the elections–so that means from June this year–we don’t start any new projects on legislation. That’s why everything that we do right now on artificial intelligence is soft policy: it’s scoping, it’s structures for deliberative process, and it can, maybe, prepare–depending on the development of problem identification–measures of law or policy starting at the end of 2019 or the beginning of 2020.

So what we have done is, on the 25th of April, we published a policy statement on artificial intelligence, which is largely inspired, first of all, by our industrial policy people who say, “We’ve got to catch up with America, we’ve got to catch up with China,” but in our processes any initiative which starts sooner or later comes to the guys who look at it from a fundamental rights–you would call it civil liberties–point of view, a democracy point of view. So this communication also has a chapter on law and ethics challenges in the area of artificial intelligence, and this chapter addresses, on the one hand, let’s say the more mundane issues of civil liability–these autonomous systems, can they break the chain of responsibility?–but it also, in a nutshell, spells out the question: don’t we need a principle that, by design, these programs, which will be pervasive, which will factually set the rules in society everywhere, starting from education via city management right to health, need to incorporate, by design, from the outset, the basic elements of the constitutional settlement–rule of law, meaning they have to comply with all existing law, fundamental rights, and democracy? And posing this question is already controversial, because the classic approach to technology regulation–the neo-liberal technology regulation approach–is let them move forward, let’s not stand in the way of innovation.

I would say that we have become a little bit wiser, all of us, in the United States and in Europe, that maybe that is not so smart, in particular when you deal with technologies which may have irreversible impacts. We learned that the first time, of course, with atomic power, but I would say that in every technology where you have invisible risks–and by invisible risks I mean invisibility to the electorate, the demos, not to the experts–there is a danger that politics and policy and democracy move too slowly, and, of course, this danger has to be taken seriously when it is possible–when you cannot with certainty exclude–that in the long term there would be irreversible negative consequences. That’s Hans Jonas, the Principle of Responsibility in 1979, which led to the Principle of Precaution in environmental policy, which in Europe is now primary law–not only for environmental policy but actually for all policy–so it obliges us to anticipate, to look into the future. And we have a principle called the Principle of Essentiality, which means that anything which happens in society which can either have an impact on the individual rights of people or can be very important for society as a whole is something which needs to be dealt with by the legislature, in particular when it involves the exercise of state power.

So with this in mind, it is true that we may be a little bit more anticipatory. We still have a network of technology impact assessment institutions for parliamentary purposes. In the United States, you had this, too, until it was closed in the mid-1990s–the Congressional Office of Technology Assessment. I think in the times in which we live this becomes very important again, and this is basically what we are doing right now. We’re trying to identify what exactly the challenges of artificial intelligence to individual rights are. What are the democracy challenges? And, honestly, one doesn’t need to search very far–there is already a lot of material on the market, including very good material from America, here in New York: AI Now–all this work is excellent work. There are already around fifteen catalogs of ethics codes for artificial intelligence out there. They have identified all the problems, so the question is, “How, then, are the solutions going to look?” And there the debate is, as I perceive it: can we now leave all this to ethics, the fashionable new word when we talk about artificial intelligence, or don’t we need the classic type of legal regulation? The difference being that law has the legitimacy of democracy; ethics codes don’t, because these are self-appointed groups of people–it’s the church or whoever. And, second, ethics codes are not enforceable while the law is, and in the world in which we live, which has very big companies with a lot of power–and they may talk very smooth and very smart and very sweet, but to get them to do something is a completely different ballgame–so, from time to time, I would say, it’s quite good to have enforceable law.

So, I think the challenge before us now–and I will stop there–for the next commission, is to identify which of the challenges that AI poses to the public interest, public policy, democracy, fundamental rights, and the rule of law can safely be left to ethics and self-regulation, and which, on the other hand, require binding law. And I will give you one example–and there are not so many yet, and I am interested in the discussion here, whether you have other ideas–where I am already quite convinced that we need binding law, and that is making visible, in the context of public debate in the automated public sphere–and I will say this becomes more urgent the closer we get to elections–that a machine is talking to you and not a human. So, when you wake up in the morning and you think, “Gee, everybody is in favor of this or that candidate today on the social media,” you need to know whether all these messages come from machines or from real humans, because democracy, otherwise, is not going to work anymore.

So, in the same way that we oblige those who call themselves journalists–the fourth power, very important–to make it visible when they receive money for their contributions–it has to be marked, at least in Europe, in the newspaper and also on TV, as sponsored content–in the same way, I would say, we have to mark machine participation in public discourse, whether in spoken or written language, and this must be a rule which is enforced, I would say, pretty tightly, because otherwise democratic processes, elections, and so on–and I’m not getting into “fake news” and propaganda and all those things–just this thing alone can destroy democracy. So we have to take these things seriously, and I would say that the time of the naive John Perry Barlow “stay out of this” Declaration of the Independence of Cyberspace, 1996–these times are definitely over. Unfortunately, too many people only learn by catastrophe. We have had a number of catastrophes, so let’s collectively learn from this and do what is necessary to make sure that AI will deliver the benefits and we will not suffer from the negatives.

MOGLEN: Just one question occurs to me before we ask Danny to weigh in–about your identification of industrial policy as the conversational interlocutor for this position… Should I think… Even more pointed: there are a lot of businesses in the twenty-first century that consist of pretending to be human. That’s a very important business model. The dating services need non-human potential dates to keep the flow running, and the advertisers need thought leadership and influencers whether real or not real. The legislation that you’re imagining, the sort of “no cheating on Turing Tests” bill, confronts a really serious economic pressure in the other direction. Having the machine interact with human beings on the basis of pretending to be human is a trillion-dollar source of wealth in the future. Isn’t the industrial policy of this going to turn out to be: in Europe, we need a Google and a Facebook and a Twitter of our own–where’s our Baidu, where’s our Tencent?

NEMITZ: I don’t–I mean, yes, that’s what we say, we would love to have it, but our policy is extremely open… The American companies and the Chinese companies can earn, and do earn, billions of dollars in Europe, and the internal market of five hundred million people benefits them first. Just to give you an example, Microsoft alone earns twenty billion euros, which is twenty-two or twenty-four billion U.S. dollars, only from licensing fees from public authorities in Europe–so big money is made by American companies in Europe. This whole propaganda about protectionism and so on–I think, honestly, it’s crap. And, I would say, if you believe in the primacy of democracy, you have to face the fight of being willing to say, “We don’t want a world that is ruled in the first place by technology–or, for that matter, by corporate and economic interests–we want a world that is ruled by democracy,” and, of course, one has to fight for it. These things, these regulations–to get them through, in my experience of six years of being bombarded by the lobby from the morning until the afternoon and the evening and the night, and not only the American lobby but also, of course, economic interests from within Europe and elsewhere in the world–you just have to stand through it, and I think that’s the job of elected politicians and also of the civil service, where, at least in our system, we are lifetime civil servants, so we can afford to not make friends with everyone.

MOGLEN: Alright. So, now is the moment for the American view, I think–if you don’t mind being the American view…

WEITZNER: I don’t really know what an American view is exactly anymore; I’ll give you Danny’s view. I’ll make kind of one general statement, but I actually want to start with a story. The web is on the order of twenty-five years old. The internet as a commercial entity, that is, an entity that anyone in the public could use, is a little bit older than that. What I hope to suggest is that there’s a lot to learn, both positive and negative, from the experience, both in the U.S. and around the world, of the way that we’ve approached policy, law, regulation, and social practice on the web in this roughly quarter century. I think there are some good parts of the story. I think there are some not-so-good parts of the story. There’s always the risk of, kind of, throwing the baby out with the bathwater, as it were, and I’m pretty committed to doing whatever I can to make sure that we don’t do that, because some of what I think we have gotten right is a really enormous change in the way that people access information–notwithstanding the fact that some of it is crap, as Paul would say–but I think around the world people’s relationship to information has fundamentally changed, people’s ability to speak has fundamentally changed, in a way that, I think, supports a lot of important values, and I would actually say that people’s relationship to democracy and ability to participate in the democratic process has also fundamentally changed, and I would still say for the net good.

We’re very focused on the effects of 2016, where there were clearly real problems, which I think are, in part, associated with choices that we’ve made about how we regulate the internet environment and the ability of clever adversaries like Mr. Putin to exploit it, but I’d also say that we shouldn’t forget 2008, when my experience working for the upstart candidate, then Senator Obama, was that he was not supposed to be the Democratic Party’s candidate–Hillary Clinton was supposed to be the Democratic Party’s candidate, it was obvious, she was the inevitable candidate–and a bunch of things happened, very much dependent upon the way the internet works, that made it possible for him to change the story, and so we shouldn’t forget that kind of effect when we’re thinking about the clearly very damaging effects on democracy that we’ve seen to date.

But I do think a lot of what we’ve gone through in the last twenty-five years has been kind of just the warm-up for the choices that we face going forward, and the reason for that, I’ll try to explain with a story: when I first arrived at MIT it was after about ten years working in Washington on a lot of these issues in the 1990s, very much in a kind of law and policy culture, and I arrived at MIT to work with Tim Berners-Lee, who was running this new thing called the World Wide Web consortium, which did a lot of the basic design of the web as it evolved and set the technical standards for the web, and Tim–he did this kind of personal orientation for everyone who came to work for him, and he said, “Rule No. 1 is if it’s not on the web, it doesn’t exist.”

That was his maxim, and he actually meant something, on the one hand, very small by that, which was basically: share your work. So if you’re writing something and you’re doing it with a bunch of people, don’t spend three months just writing away on your little, private hard drive, as we had been–put it on the web, that is, at least on the web that was accessible to all of us in the web consortium, and share it and make it available. And so, it was sort of a simple “share with people” view, but it actually reflected, I came to understand, a much bigger view of the world and a much bigger view of what the web was going to be.

Tim designed the web from the beginning so that essentially every piece of information in the world could be represented on the web in a common information space, and that was the extraordinary and revolutionary thing that he did with the web. The internet had the basis for doing that because of the way its addressing system worked, but the web kind of realized that by having a unified set of URLs–the addresses that we all know–so that everything, everything in the world, could be represented on the web. And that seemed like kind of a silly idea in the mid-1990s, because all we really had on the web were sort of random documents and the occasional song and some kind of MP3 file or something like that, but now we really do have–to a first approximation–pretty much everything on the web in one way or the other. Now, that doesn’t mean it’s all accurate or it’s all true, but it’s somehow all there, and what really enables a lot of the artificial intelligence technology that we’re talking about is the fact that everything is there and we want to learn from it, we want to do things with that information.

What’s tricky about that is that what’s on the web is what happened in the past and maybe what’s going on in the present, but we want to predict the future with it, and some of my colleagues in computer science maybe act as if this is all brand-new stuff and machine learning is this kind of extraordinary power that has descended from heaven. It really isn’t, as many of you know. It is really simply the re-invention of statistics. It’s the re-invention of the ability to make predictions–hopefully reliable predictions–from the past about the future, or from one sample of data to broader generalizations, and I think a lot of the questions that we’re going to have about the way to “regulate” artificial intelligence–which I think is a bit of a misnomer, and I’ll say why–but a lot of the questions that we legitimately have about regulating the uses of artificial intelligence have to do with the fact that we’re going to have to remember, all over again, all the things that we actually know about the use of statistics in public life.

So, as an example, no one gets to just make random statements about the population of the United States or any other country in the world or the population of New York City, right? We actually have ways of counting, and even if we can’t count everyone we have ways of taking samples of populations and inferring in a reliable way from those samples to what we think is true in the world, and we don’t let just anyone do that–we expect that you’re a statistician, we expect that you’re an economist, that you’re a demographer, you have some training, you use some method that is recognizable. What is a little hard about machine learning right now is that a lot of those methods don’t actually exist. If you ask people, “Well, how do you tell whether a machine vision algorithm is producing an accurate result? How do you measure the accuracy?” It’s still very hard to do–there are not really understood ways, standardized ways, of doing that. If you ask, “How likely is it that an autonomous vehicle is going to hit a divider in the road or hit a pedestrian, as occasionally now seems to happen?” The answer is, well, hopefully not very often, but how can you look at that system and actually make a prediction about when it’s going to happen next? You really can’t.
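
By way of illustration, here is a minimal sketch in Python of the sample-to-population reasoning Weitzner describes, with invented numbers rather than anything discussed on the panel: a point estimate of a model’s accuracy taken from a finite held-out sample, with an explicit margin of error attached.

    import math

    def accuracy_with_margin(n_correct, n_total, z=1.96):
        """Point estimate of accuracy plus an approximate 95% margin of error
        (normal approximation to the binomial)."""
        p = n_correct / n_total
        margin = z * math.sqrt(p * (1 - p) / n_total)
        return p, margin

    # Hypothetical: a vision model classified 940 of 1,000 held-out images correctly.
    p, m = accuracy_with_margin(940, 1000)
    print(f"estimated accuracy {p:.1%} +/- {m:.1%}")
    # The estimate says nothing about inputs that look unlike the held-out sample,
    # which is exactly the gap in standardized methods described above.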

So, we have the challenge of, as I said, re-inventing a sense of accuracy, reliability, and truth in these new systems that we’re building. What I think is going to be especially challenging is that the tools to do this–certainly the software tools, and to a lesser extent the data on which those software tools run–are, to a first approximation, available to anyone. Anyone can use them: you can go download Google’s TensorFlow, you can run it on a cloud version of that service–a bunch of other companies are coming along and doing that. So, people have the ability to be the equivalent of the U.S. Census Bureau’s chief economist and have no clue what they’re doing, and that, I think, is going to pose a very serious set of challenges. And, as opposed to just a couple of economists or a couple of companies like a Google or a Facebook or whatever, who we might, sort of, figure out how to subject to some set of standards, we’re really going to have a lot of people who are going to have these capabilities, who are going to integrate them all over the place, and I would say, for all that I absolutely agree with Paul that we ought to be setting standards through a democratic process, we ought to be setting expectations in a way that’s visible and accountable to society, we don’t yet know the terms on which we should be doing that. So, it’s much easier to say we should have those standards and much harder to say what they ought to be.
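
As an illustration of how low the barrier Weitzner is pointing at actually is, here is a sketch assuming TensorFlow 2.x installed via pip; the data is deliberately random noise, which is the point: nothing in the tooling stops an untrained user from producing confident-looking predictions.

    import numpy as np
    import tensorflow as tf

    # 500 made-up "people" with 4 behavioral features and arbitrary labels.
    x = np.random.rand(500, 4).astype("float32")
    y = (np.random.rand(500) > 0.5).astype("float32")

    # A few lines suffice to build and train a model, meaningful or not.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, verbose=0)

    print(model.predict(x[:3], verbose=0))  # "predictions" come out either way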

And I want to just say a couple of things very quickly about what I think we’ve learned from the experience of pretty rapidly having to integrate a lot of the new capabilities that the internet offered. I think they’re different in many ways, but they do represent a new technology that all of a sudden came into a lot of people’s hands very quickly and shifted power relationships quite dramatically, and there are a couple of things that I actually think we got right.

I think, number one, we did, in the U.S. and Europe and other parts of the world, identify some clear principles that we thought were important in the ways that the internet technology was going to be used. In the case of the internet, I think in the U.S. we were a little bit better at articulating the importance of free expression, maybe not quite as good at articulating the importance of privacy–Europe maybe got that in the opposite direction, though I would say that together the U.S. and Europe actually put together a pretty reasonably effective package of those principles.

We also did something that I think was extraordinarily important–we established some bright-line rules, bright-line rules that the new internet platforms, the new internet companies, could apply pretty clearly. So, Eben mentioned Section 230 of the Communications Decency Act, the liability limitation on platform providers that said that third-party speech, essentially, that’s available on platforms is not the responsibility of the platform provider but instead the responsibility of the speaker. We’re now discovering some of the limitations of that rule, but, nevertheless, that rule did enable platforms to grow very quickly, to make their services available very widely, I think, engendering quite a bit of social benefit, and they could do it because the rule was really clear. It wasn’t actually subject to much interpretation, and rule interpretation is expensive, number one, and hard to do for new technology developers.

The other thing I want to say about what I think we got right is that we were very clear, at least in the U.S., that existing law still applies–that just because something’s happening on the internet doesn’t make it dramatically different. So, there’s a bit of a caricature between the U.S. and Europe about Europe having a lot of laws and the U.S. having no laws–it’s a silly caricature; the U.S. has some of the oldest privacy laws, such as the Fair Credit Reporting Act. And pretty early on it was made very clear that, for example, if you were performing a service that looked like a credit reporting service–that is, if you were a company like Spokeo that tried to rate potential employees, or a company like Instant-Check that provided reliability ratings for roommates and tenants–even though that was happening on the internet, even though that used non-traditional sources of data, the Federal Trade Commission said very clearly, “You’re a credit reporting agency,” and you have all the responsibilities that we put on those agencies in 1970. So, we did remain clear, or eventually got clear, that just because something is happening in a new technology context, the rights and obligations that we put on individuals, on consumers, and on the companies that serve them in commerce still apply.

The final thing I’m going to say is that the other tool that I think has been very important in the development of the internet–in particular, in a lot of ways, of software and of digitally driven services in general–is that we did evolve a whole number of social conventions that shaped the way information flowed, that shaped the way things like software were available to people. These were–and I’m thinking specifically of the work that Eben and people in the free software movement have done for decades now–efforts that, not through any kind of government fiat, and often despite government fiats, organized people to have a particular relationship to software: to say that people should be able to see software, they should be able to interact with it, they should be able to change it, they should be able to understand how it works.

Similarly, Creative Commons licensing, which was a kind of way to provide open access to documents and other kinds of data, was a set of social conventions that governed how hundreds of thousands, millions of people–hundreds of millions of people, arguably–relate to information in this new, digitally driven environment, and I think we ought to make sure to give adequate credit to those efforts, which are regulatory efforts, which are efforts that shape the way we all live in this world. They didn’t come from government–they rely on certain government institutions to enforce–but I think that we’re going to need to use those kinds of tools every bit as much as we use tools that are derived directly from government action, simply because we have such a complex environment to figure out, to figure our relationship to… Sometimes it’s going to be better to do it that way than through a more traditional regulatory process.

MOGLEN: Alright, so let’s get a little closer to a couple of the technologies hidden inside those words, AI and big data, along the lines that you were offering. What we have that we call machine learning is pattern-matching on steroids, and what it is allowing us to do is to learn things about people that they don’t know about themselves, because we possess so much behavioral data being collected by so many highly motivated commercial parties, and that behavioral data can be used to generate inferences, both about individuals and about populations, that are obscure–they constitute a reservoir of hidden power. And the question of how democracy is supposed to interact with that, which both Paul and you have addressed in the different conceptual frameworks of your governmental systems, is not just an ethical or a legal problem. It is fundamental to a replacement of the social sciences by something new, what our friend, Sandy Pentland, called social physics, right? You have all those data about all those molecules bouncing around in the bottle, and now you can say something about those things that those molecules themselves could not possibly reach on their own. It’s not only statistics, it’s the basis of the statistics–it’s the thing we call data science.

When I went to give a talk in the new Columbia Data Science Institute–what was that, five years ago–called, “How Not To Be Evil While Data Mining,” nobody was really interested, I can assure you. It was not important. Let’s worry about the environmental consequences of the mining after we learn how to get the coal. But now we have it–quite a lot of it–and it has unintended consequences that we don’t necessarily expect. People upload their genetic data to a genealogy website in order to find their long lost third cousins and the next thing you know, law enforcement is using that to find serial killers on the run for decades. We’re all very happy that we’re catching serial killers on the run for decades, and we’re all a little bit creeped out by the fact that all of that genetic material uploaded by consumers for family discovery purposes is about to have enormous and far-reaching social consequences. What you, in the conversation before we started, called “horizontal effect everywhere”–uniformly transformative, but not necessarily evident.

The same thing applies to what we are calling AI these days.

I have been wondering about this artificial intelligence since the first time Marvin Minsky explained to me that it was going to happen and it was going to be perfect and all of that. What we mean is autonomous systems, and I do have a pretty strong intuition that there will be two kinds of autonomous systems in the twenty-first century.

There will be Chinese ones, which will not explain to anybody what they are doing, because that’s Western democracy, and we’re against it, and there will be whatever we invent.

And, from my point of view, this again is an industrial policy issue in a way–whatever it is that Mr. Mnuchin and Mr. Lighthizer are supposed to be agreeing about, presenting a common front in Beijing, that must be extremely difficult I would think… But whatever is the common front that they are going to be presenting in Beijing, what they’re really doing is standing astride “Made in China 2025” and trying to yell stop. They’re trying to present an alternate industrial policy in which Chinese autonomous systems are not more powerful and more effective than the autonomous systems of the democratic world.

That seems to me, in a way, a second-best objective, the better objective would be the one that we recommend when Paul’s colleagues get down on Google and Facebook–why don’t you invent your own? The real question is how are we going to have autonomous systems that know how to explain themselves and have an obligation to do that.

From my point of view, I feel the rightness of what you say: in the twentieth century, comrades of mine worried a lot about how to make it possible for people to understand computer programs, and the basic answer was hack copyright to make sure everybody gets the source code, and that worked to a very good extent, right? I mean, if what we’re talking about is stuff that runs on UNIX boxes, that was the correct answer… But it wasn’t the full answer even then, and it can’t be a full answer in a world where the code is simple and the training data is the whole subject.

Now we need data licensing, you’re quite right–without the Creative Commons element of our thinking in the twentieth century, we could make no progress here. But beyond the problem of the access to data, which actually determines the inferences that engines produce, we also have the problem of building forms of autonomy which are communicative, which are explanatory, which take responsibility for expressing what they are. Paul’s test, “no cheating on Turing Tests on this continent, please,” seems to me really important, but it’s kind of a low bar. If all we are doing is making autonomous systems identify themselves, “I’m an autonomous system, I’m not going to tell you what I’m doing,” the disclaimer is of limited value.

What we really require is a rule that says… I did once upon a time argue for the retro-fitting of the first law of robotics into our technology; now I realize that’s not good enough anymore. What we actually need is a fourth law, which says, “Robots will explain to human beings what they are doing, or else they can’t do it.”

When Ginni Rometty said on behalf of IBM that there is no acceptable AI that can’t explain itself, it seems to me that she was expressing a worthy goal–one that did not have a whole lot to do with any product that IBM is selling or knows how to make, but one which we all have to find out how to make, or “Made in China 2025” is what we should be thinking about: the social credit system, the overwhelming use of all the behavioral data in society to constrain who may buy a train ticket, who can get a residence permit to live in an attractive place, who can go to which schools. If we are saving all the learning behavior of everybody from birth onward, then we are producing alphas, betas, gammas in society without bothering to put any alcohol in the bottles.

Systems of teaching have to explain to learners what they are teaching them and why. Systems of transport have to explain where they are taking you and why they are taking you there. We are actually asking for forms of technological transparency about which some of us in the twentieth century had very primitive ideas, within very limited technological contexts. But with the two forms of technological development now rapidly going on–learning more about society than society knows through better pattern matching, and the creation of autonomous agents that operate in meatspace and run over people and cause harm, but even more importantly lurk behind all the systems that we are comfortable with, biasing outcomes in ways about which nobody gets to hear an explanation–we are up against the radical difficulties of technological obscurity. I don’t think that the choice is between regulation and ethics–that seems to me, as it seems to both of you, inadequate. We are talking about law, but what law? What is the proper role of the state in these subjects?

So, once again, let me try and be specific technologically. More than a decade ago, a former student of mine who was then a minister of social welfare in a western European democracy called me in to the ministry for an official consultation. His question was very simple, he said, “So, now I’m running this large social welfare bureaucracy in this state of millions of people, and I’ve learned that this big bureaucracy barely responds to me at all, but maybe as minister I can do one thing. So here’s the thing I want to do, Eben, tell me how do I use big data to make the lives of workers better in my country?”

“Okay,” I said, “That’s a really good question. We could assemble a team of experts around the world who could help you to answer that–how does big data make workers’ lives better? The thing is, if I come back here in a year and you’re the minister of defense it will all have gone for naught.”

Well, that’s basically what happened. He got promoted, he worked his way up a coalition government, he became deputy prime minister, and then his social democratic party got six and a half percent of the vote in the last general election in his country–wiped out like so many other traditional social democratic parties around Europe. Why? Well, one could say because he hadn’t figured out how to deliver better lives for workers in his country using big data.

We do have more than just protecting privacy or regulating misbehavior to be responsible for. We need to explain how public policy made around these forms of new information technology can actually make people’s lives better, and we need to explain that directly. Nandan Nilekani, who had so much to do with building the Aadhaar system in India, said last month that Indians should be willing to give up all their personal data in order to get better healthcare and cheaper loans. My view was that was a really low-ball offer, but it constituted what a British politician would call at least a retail offer to the voters in his great big democracy: this is what we’re going to deliver for you in return for your participation in a biometric database, embracing everybody, that is going to rule the world. Well, at least he explained what it was: cheaper loans and better healthcare.

My view is if we’re really going to talk about the role of public policy in connection with these things, we’re going to have to start explaining to voters pretty soon why this is good for them directly and immediately. Otherwise, all this obscure technology becomes more of the rigged system that I don’t understand that seems to be working for somebody else, which is empowering not my favorite politicians all around the democratic world.

So, that would be the thing that seems to me most important–we need law, we need to understand how to govern new social science and new forms of autonomy in society, technological autonomy, directly. We need to do that in a fashion which allows us actually to explain to voters in democracies why all this makes their lives better. It seems to me that the positive case is crucial, not just in order to protect the policymakers from populist pushback, but because this is either the real promise of this technology or it is just more inequality–those who possess the strongest inference engines will rule.

NEMITZ: But the job of the policymaker is not to make the lives of workers better by using big data. The sentence ends earlier: the job is to make lives better, and whether big data delivers that or other tools do is a totally different question, and in many cases it will not be big data delivering anything right now. If you look at the productivity increases in America with digitalization, they have gone down. There are companies which make huge profits, but at the same time the impacts on society are not only positive. So, I mean, honestly, I think the challenge for the policymakers, at least in Europe, is much more humble. Our job, first of all, if you look from the fundamental rights point of view, is rather defensive. We have to make sure no harm is done, and we are willing to look at the potentials and the positives, but these have to be demonstrated by those who want to sell these technologies. I mean, that’s… You do a little bit of support, a little bit of public subsidy–what in America comes out of the Department of Defense, we do as research aid, okay–but it’s not our job as policymakers to prove that these technologies are the great fulfillers of all dreams. They’re probably not. So, in our discussions I think we need a little bit of slowness of thinking. You are brilliant in bringing this all together, I can’t keep up with this, so I think we do need a little bit of slowness of thinking and to take the issues point by point.

So, you mentioned the very important issue of the sociological reality in our society, and I think there’s truth in that. The only ones who know, today, the sociological reality in our society are the Googles and the Facebooks of this world–they have all the raw data about our lives, about our ambitions, about our situations. Who has access to this data? Not the sociologists, not the political scientists, not the historians. It’s the corporations which have this data, and that’s a problem, and I think there the issue is opening up data–not only government data, re-use of government data, yes, let’s open it up–but there is also the challenge of “let’s open up data which is held by private companies,” and then, of course, we have to distinguish between personal data on the one hand and non-personal data on the other. That is certainly a challenge where we in Europe, at least, are also moving in terms of legislation, and in artificial intelligence this becomes even more pertinent, because if you need these data points to make artificial intelligence work, all the issues of who has the data, how you can use it, how it is constituted, whether there are competition issues in monopolizing the data–for example, on languages, through the scanning of all the books of this world, where supposedly Google has this great advantage–need to be addressed. So, I think that’s a big chapter: we need to open up the data while at the same time maintaining data protection and privacy–one challenge.

And another challenge, as you say–you ask the question, “How can we make sure that artificial intelligence explains itself?” Well, I can tell you how we make sure: by being clear, already today, that these programs will not be able to be used in government, in the judiciary–everywhere that public authority is exercised–if they are not able to motivate their decisions to a degree which, on the one hand, creates the trust among voters that is necessary in a democracy, and on the other hand allows judicial review. A judge can only review an act of government power if there is a motivation. That’s what our law says, and if the principle applies that the digital doesn’t end the application of the normal laws–well, it’s a normal law, and a rule of law in a democratic state, that power, that decisions of the government, have to be explained. That’s how we do it, and if there are programs which don’t do it, they cannot be used. So, that means a lot of business lost. If you are Microsoft–and, I understand, IBM–they want to sell to government. They make huge money from government, and they know what the requirements are. And there we have to stand firm–if technologists today come and say, “Guys, you don’t understand. This we can never explain–this new kind of system–and we will never be able to do it, but you have to accept that the benefit of running this system without explanation is so high that you just have to live with it,” the answer must be no, because that is the end of judicial control of power, that’s the end of the basic constitutional settlement under which we live. It just cannot be.

And as far as personal data is concerned–and then I will stop, because I think we also need a little bit of slowness of thinking, otherwise everything becomes a mix which actually doesn’t bring us to any conclusions–on the question of how this works with personal data, well, in our regulation–Articles 15 and 22, you can read them in the GDPR–you already have the right to object to automated decision-making–to say, I want to have human intervention–and you have the right to ask for meaningful information about the logic of the processing, its purposes, and its consequences.

So, it may not be one hundred percent perfect, but there is already a nucleus, and I’m pretty sure the judges will give this a rather broad interpretation because this is so important–and, by the way, for the serious law students among you, there is already also good literature, including in the United States, on this right to explanation under GDPR. Andrew Selbst here from Data and Society did an article about this, Sandra Wachter from Cambridge, so this is already something where academics now write and interpret the GDPR, and eventually judges will give us the jurisprudence–how far this right to explanation goes–but as far as personal data is concerned, I think we’re already very close to the goal post when it comes to motivation and reasoning and the right to ask for transparency. And I think we should advocate for a broad interpretation of this right because if it’s narrowly construed it will not fulfill its purpose.

MOGLEN: So, do you think, Paul, that that means any government-purchased autonomous vehicle–car, train, tram–will have to be able to explain its principles of operation in order to be…

NEMITZ: When exercising public power, of course; when the government provides services… And what I mean by exercising public power is when it’s not in a horizontal relationship with the individual but a vertical one, so the state tells you–it makes a decision which says you have to do this or that, or you don’t get the building permit, or you have to go to prison, and so on. So, when the state exercises public power there must be a motivation. When the state is a service provider, and thus operates like a private party–not exercising public power–then we are in a different area, because then we are in an area which, on the face of it, first of all, is not that relevant from the point of view of intrusion into fundamental rights. It can be relevant, but it’s not necessarily so. So, there we will have to analyze what the fundamental rights implications of this private-to-private relationship are, and what the requirements there are, and they then need to be laid down in law.

WEITZNER: Could I just suggest that… I’m all for explanation. I think it’s necessary. I don’t think it’s sufficient, and I think there’s actually a lesson from everything we’ve learned in the U.S. and Europe and around the world about privacy–that in privacy we have fetishized consent. We’ve fetishized notice and consent–we’ve said everyone should have a right to an explanation of privacy practices, and what we know is that no one does anything with them, and we know that it has not fundamentally made a difference, unfortunately, in the realization of what I think is really at the heart of privacy, which to me has much more to do with controlling the power relationship between individuals and institutions and the risk of chilling effects on individuals.

So, in privacy, because the thing that was sort of easier to talk about was consent and notice, we’ve spent a lot of time on that. I think there’s a bit of a risk that we do the same thing with explanation.

Now, let me just say as a caveat, I think the technical questions about how to generate explanations out of neural nets are really interesting, they’re important, we’re working on them, a lot of people are working on them, we need that, it’s necessary, but what I think is–the only thing that makes it sufficient is we have to figure out an accountability model. I care less about explanation and a lot more about accountability.

What I care about with a car, with an autonomous vehicle, is who’s responsible when it crashes; I care about who’s going to bear the loss. I have a fair amount of confidence that if we can work that out, these issues of explanation and transparency and everything else will follow, and I’m not saying they’re easy, but right now we have a lot of work on explanation and very little on accountability–and I actually think we have the same problem in privacy, by the way–but I think that, to me, it’s really an essential role for government, essentially, to manage the externalities associated with all of these autonomous systems.

We’ve talked, I think quite correctly, about analogies to environmental policy–ultimately, what is environmental policy but managing the externalities of individual and institutional behavior that causes harm to others? How we internalize that harm is kind of the whole ballgame in environmental policy–or ninety percent of the ballgame–and that is what I think we have to figure out in a lot of individual cases. I want to come back to your question, Eben: what’s going to benefit individuals, what’s going to benefit society in the use of advanced analytic technology? Well, I think there’s reason to believe that if we made data about people’s health status more widely available–phenotype and genotype and environmental data–we could actually learn a lot about treating a lot of diseases. That would be a good thing, but it has some cost–you keep all that data around, it could be misused, all kinds of things could happen.

What government has to do is figure out how to make sure that we have internalized the cost of whatever the risk is that is associated with that kind of potential benefit, and it’s really something that only government can do–or maybe government with insurance markets. But if we actually want to be able to live in a kind of rule-of-law version of this very data-driven environment, as opposed to a Chinese version–a very efficient version of the data-driven environment, but not necessarily a humane one, or one that recognizes individual rights–I think what we have to do at every step is say, “Where are there potential harms, and who’s responsible for them?”

And, again, I think that–Paul, as you suggested–I think it’s a step-by-step process. I think there’s one set of questions that you have to answer when it comes to health data, I think there’s another set of questions for autonomous vehicles. There’s a whole other set of questions for other kinds of transport data. It goes on and on and on, because they’re different harms. The underlying technology is going to look pretty similar, and I’m hoping, actually, even the underlying explanation capabilities are somewhat generalizable, but what we actually want from them is accountability and managing externalities.

MOGLEN: Yes, but let’s follow your point about wanting to know who is responsible when cars collide with things. So, we did that in the twentieth century, for a long time, by trying to assess fault. We took explanations, we subjected them to judicial testing, and we came to conclusions about who was at fault and, therefore, who ought to pay, and we discovered after a while that we would do much better if we got rid of the idea of fault and we socialized the processes of paying for all the harm done. It turned out that the effort to discover who was responsible was at cross purposes with the goal of making sure that…

WEITZNER: Oh, so I would suggest what we did–because we did that through insurance mostly, right? And so what we did was we had enough of an organized set of information about what seemed like a fair way to allocate costs programmatically, that is, if you’re under twenty-five you pay an awful lot more, if you’re a male you pay more, depending on where you live, if your car is parked on the street you pay more, all kinds of things. So, yes, I agree that we socialized it, but we socialized it based on a whole lot of actuarial data, and, yes, we switched from the sort of individual adjudications of fault to the kind of collective adjudications, essentially…
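
To make the kind of programmatic cost allocation Weitzner describes concrete, here is a sketch in Python with invented multipliers, not actual actuarial figures: a hypothetical base premium scaled by whichever risk factors a driver’s profile triggers.

    BASE_PREMIUM = 900.0  # hypothetical annual base rate, in dollars

    RISK_MULTIPLIERS = {
        "under_25": 1.6,       # younger drivers pay an awful lot more
        "male": 1.1,
        "street_parked": 1.2,  # car kept on the street rather than in a garage
        "urban_address": 1.3,  # depends on where you live
    }

    def premium(driver_traits):
        """Scale the base rate by every multiplier the driver's profile triggers."""
        rate = BASE_PREMIUM
        for trait in driver_traits:
            rate *= RISK_MULTIPLIERS.get(trait, 1.0)
        return round(rate, 2)

    print(premium({"under_25", "street_parked"}))  # 900 * 1.6 * 1.2 = 1728.0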

NEMITZ: Yes, but is it not also a big part of the system in America that there is still a fault determination–if you’re at fault, your insurance rate goes up?

WEITZNER: Most–I don’t know the numbers–but I think most of the accidents are handled as Eben suggested in this no-fault environment, but the fault is sort of hidden behind the insurance rates though, because everyone does not pay the same.

NEMITZ: But is there anything fundamental that changes since Calabresi’s “The Costs of Accidents”? He figured all this out…

MOGLEN: No, not even since Oliver Wendell Holmes Jr. thought up the cost of accidents at the end of the nineteenth century…

NEMITZ: But this question of civil liability–of course we talk about product liability rules, do they apply, joint and several liability of everybody in the chain, the guy who produces the program, the guy who supplies the data… And so, I mean, honestly, we now have a big working group on whether we need new rules or not, and I’m not convinced. I mean, the judges will have to adjudicate according to the rules as they exist. Yes, the autonomy of the system poses questions, but until further notice I would say it’s very clear that if you knowingly produce a technology which has this autonomy, it will not interrupt your responsibility. So is this really such a difficult issue? And I think it’s going to be…

WEITZNER: Well, I think the difficulty is that you want to make sure that you put the cost of harm on those who can reduce it, right? I mean, even in the automotive industry structure, you’ve got lots of parties–you’ve got the automakers, who are basically marketing companies that design cars, but then lots of other people who build them, and software developers way underneath there, who no one knows who they are or how to control them. So I guess my thought, Eben, is that hopefully–I mean, if all of the law and economics theorists are perfectly correct–what has happened over time in traditional automotive liability is that we’ve kind of gotten the costs allocated more or less effectively, efficiently, but I don’t think we yet quite know how to do that, and I don’t know that it’s purely a question of individual adjudication. Maybe it starts there, maybe it doesn’t. I don’t know.

MOGLEN: Well, my point was, I think, that in fact we’re not certain that the rules of aggregation are going to be the same. You were talking about a world of very low quality data: drivers below twenty-five are more likely to bang into things than drivers above twenty-five. Part of the question that we face is how individuated to get in those judgments. After all, the Chinese social credit system wants to know not just are you careless, do you get drunk, but do you have friends who are careless and get drunk. What does your social media show about whether you’re likely to pay back your loans? Our ability to reduce the level of aggregation and to achieve levels of precision in the allocation of fault and responsibility may actually become hypertrophied.

WEITZNER: The risk is we’ll probably do it too well…

MOGLEN: There you go.

WEITZNER: Right, right. I mean, it’s sort of like the healthcare debate that we had in the United States, where people kind of forgot what insurance was about and said, “Well, why should I have to pay for any healthcare ever, because I don’t ever get sick.” So, you can imagine–and we already have various insurance companies selling policies that allow you the potential of getting rebates on the cost of your insurance if your driving patterns match certain profiles…

NEMITZ: But in the automated car world, in the automated car world, I thought that’s coming up with artificial intelligence, all this is completely irrelevant…

WEITZNER: Well, first of all…

NEMITZ: Because you are not the driver!

WEITZNER: The actual… But–I think, first of all…

NEMITZ: The human making a mistake is not an issue, because you sit in your car and you read a book, and so actually the circle of those you have to look at becomes smaller.

MOGLEN: The day the manufacturers are prepared to agree that you can read a book in your autonomous car, they will be responsible for everything. We can all agree with that. But the gentleman in Britain who lost his ability to drive for eighteen months because he was sitting in the passenger seat of his Tesla with his hands behind his head rolling down the street at eighty kilometers an hour, Tesla will always say, “But autopilot is only there to assist an alert driver.” If we saw that median crash case go into litigation, Tesla would say, “But three times in the prior hour he took his hands off the wheel,” and suddenly we would be in an argument about the proportional degree of responsibility between the software that misread the median and the driver who took his hands off the wheel…

WEITZNER: And I think today we already have enormous problems about fair rules, about who gets access to data even from cars today. Today’s cars mostly have vehicle event recorders: little black boxes, kind of like what airplanes have. And there is a lot of contention about who can see that data, how much of it can be seen. The automakers really don’t want anyone looking at that. They claim proprietary interest in that data, because if you look at it too carefully it might be discovered that actually they should have designed the brakes a little better, or whatever else. So the rest of us who might have an interest in that data, and society, which arguably has an interest in making sure that we’re using it to allocate costs fairly and efficiently, are at this point left out. We haven’t even got into autonomous vehicles and we’re already at a point where there’s a significant disadvantage in open access to that kind of data.

NEMITZ: Yes, but the complexity of the issue–this is a transitional problem. You’re describing the complexity of the issue in automated driving, but when the human failure and all the related issues of fault don’t exist anymore, because the human is not driving but reading, it will reduce many of those issues which we are now discussing, because it takes the driver, as you know one of the main causes–was the guy drunk, was he driving right–out of the equation.

WEITZNER: Paul, but I…

NEMITZ: I think on this count, I mean I would say it will be easier.

WEITZNER: I’m happy to say this is a first. You’re more of a technological optimist than I, because I actually think, I really do think, that if you talk to people who are working in robotics, who are working in machine vision, what they will tell you is that we are certainly many years, maybe decades, away from the point when there’s really full autonomy. There’ll be incremental increases towards it, but it may be that we remain in that never-never land, in that transitional mode, for the better part of our lives. It’s really not obvious that you get there.

NEMITZ: But it’s a transitional problem. I mean, I think here the liability issue, eventually, at the end of the situation, when everything is automated in driving, it will be easier, because you don’t have the human, the difficult human, at the wheel anymore. So, I think that’s something where I would say yes, in the transition, and given the economic interest of the car industry, we have to invest brainpower in this logic, and there’s a competition element. The industry wants to have the right to trials on the streets, and so on. And there are people dealing with it. But I would say from a fundamental rights point of view and a democracy point of view, it is not the number one or the number two question.

MOGLEN: All right, so before we open it up to the audience, I just want to raise one more topic, and it felt particularly important to me given your point about the difference between government as hierarch and government as service provider. What do you think is the right role for government in identity management? Is it government’s role hierarchically to define who we are and to authenticate us everywhere? Or is that a service provision for which government is not subject to requirements of motivation? Should, in the end, identity be the function of the state to determine? For these purposes it is clear that the Chinese alternative is baked, and the Chinese alternative, baked as it is in China, is now the experiment of the Indians in Aadhaar, and cashlessness multiplies this twice over by interposing that state authentication and identity management function in every transaction, no matter how big or how small. Are we ultimately going to want, in your judgment, for the state to be the manager of identity, or are we going to run shy of that?

NEMITZ: Well, I mean, what we have is of course very different traditions even in Europe. In the UK you have no identity card. In many of our member states you have an identity card, which you have also been obliged for a long time to carry. Also the use of biometrics in passports and identity cards is a very thorny issue. But I would say this: there is no reason to go further in the direction of identity management. The state should not manage your identity, but it has provided in the past the basic ingredients for your ability to show others, I am Paul Nemitz, here is my passport, and I think this function has to be maintained in the digital world. I don’t think that there’s a good reason to hand this over to private parties by law, though it may actually be that today people identify themselves voluntarily with Facebook, and everybody in business, and maybe even the state, accepts it.

And at the same time, I would say one has to be very careful in going further, for example, in the discussion about the cashless society. Well, that is a huge gain in efficiency, maybe, but it’s a loss in freedom, and therefore we have a lot of political resistance, and I think for good reasons. People are worried about this. They want to be able to pay in cash, because cash payment is anonymous. And I would say: don’t rush into these types of systems, which only increase the potential for people management and surveillance of individuals. I mean, in the end, we want a state which is controlled by the people and not a state in which the state controls the people. So, of any tool which we put in place which increases the control power of the state, we have to be extremely critical, and there must be severe limitations. So my view is also, let’s say in the whole law enforcement area and the secret services, their capabilities are increasing simply because of the progress of technology, and this increase of capabilities in these central areas of government power, with deep intrusions into individual rights, has to be compensated by stricter oversight and stricter laws and limitations which are pretty tough.

And I would say one has to fight for this, because if one doesn’t do it, the progress of technology in the hands of the state will lead to a net loss of freedom.

WEITZNER: I mean, I’m very uncomfortable with the idea of the state controlling one’s ability to assert one’s identity. I think the good news is, even–I won’t speak to the situation in India because I think it’s A) complex and B) in flux–but if you really look at identity assertion, identity is essentially a statistical property now in any case where it’s interesting. So, how does MasterCard decide whether I’m Danny Weitzner? Not by checking a whole lot of documents but by analyzing my behavior. And at any given moment I’m either Danny or I’m not, depending on whether MasterCard believes it. And I’m actually perfectly happy in a world in which people have different manifestations of their identity. It may be the same name, the same identifier, but I think from a civil liberties and limited government perspective, that’s a good result.
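
To make “identity as a statistical property” concrete, here is a minimal sketch of behavior-based transaction scoring of the kind a card network might run; the features, weights, and threshold are invented for illustration and are far simpler than any real fraud model.

```python
# Illustrative only: score how consistent a transaction is with past behavior.
# Feature names, weights, and the decision threshold are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    hour: int            # hour of day, 0-23
    merchant_type: str

def anomaly_score(tx: Transaction, history: list[Transaction]) -> float:
    usual_countries = {t.country for t in history}
    usual_merchants = {t.merchant_type for t in history}
    avg_amount = sum(t.amount for t in history) / len(history)

    score = 0.0
    if tx.country not in usual_countries:
        score += 0.4                      # never seen this country before
    if tx.merchant_type not in usual_merchants:
        score += 0.2
    if tx.amount > 3 * avg_amount:
        score += 0.3                      # unusually large purchase
    if tx.hour < 5:
        score += 0.1                      # middle-of-the-night activity
    return score

# "Is this really the cardholder?" is answered probabilistically, not by documents.
history = [Transaction(40, "US", 12, "grocery"), Transaction(80, "US", 19, "restaurant")]
suspect = Transaction(900, "RO", 3, "electronics")
print("decline" if anomaly_score(suspect, history) > 0.5 else "approve")   # decline
```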

I think it’s also a result that recognizes that in most cases one’s identity is a kind of risk-based phenomenon. Again, my identity for the purpose of signing a mortgage on a half-million-dollar house is a very different proposition than my identity on Twitter. And I’m not saying which is more important, but they’re different. [Laughing] And so, I think it’s better that we have flexibility in society to have these different kinds of identity assertions function somewhat independently. It is interesting to me that in the context of fake news and concern about disinformation online…

As a sort of odd little data point, I had three different groups of students who were quite savvy from a computer security perspective and quite vociferous from a civil liberties perspective. All said, well, maybe we need the government to vouch for who’s speaking online and who isn’t. And that is an alarming data point to me–it’s anecdotal, but it’s alarming–simply because it seems like maybe we have run out of confidence in other sources of identity.

But to me the idea of the state controlling the ability to make political speech is horrifying. But maybe that’s going to be our answer. I don’t know.

MOGLEN: Well we at least ought to understand the source of the alternate argument which is the point you made yourself that in a world of statistical assessment of identity unsupported by state authentication, the incentive to collect as much behavior data as possible is maximized, because you can’t perform any of the basic functions of commerce without spying the fuck out of everybody.

WEITZNER: But I’m not, I’m not unhappy about that. As a MasterCard holder, I want them to spy on me, because I don’t want to be responsible when someone else gets my card or uses my number.

MOGLEN: But you do want them to be responsible if…

WEITZNER: Of course. Right. And there I do want the protection of the state to make sure that that spying only happens in very limited boundaries.

MOGLEN: Which is why eventually the state gets around to cutting out the middle man, whether it is student loans or identity.

WEITZNER: That, that’s right.

MOGLEN: And that’s why the irony gets so deep, right? Because those expectations are in conflict. And somebody wants to sell you a technical solution.

WEITZNER: No no no no. But I still think both in the U.S. and Europe we do a reasonable job of looking to governmental functions that have some independence, whether it’s the Federal Trade Commission, the Consumer Financial Protection Bureau, may it long live [Laughing]. The data protection authorities, may they become better enforcers. But we do have these kinds of governmental functions that are, nevertheless, I think, independent enough that I am okay with the FTC protecting me against MasterCard identity abuse, and I’m not that worried that the FBI is going to get its hands on what the FTC uses to protect me with. I mean we always have to be careful about that. But I think it’s reasonable to expect that we have almost this quasi-judicial enforcement function that is separate enough from other government functions that I think it’s kind of the best we have.

MOGLEN: We use patchy regulation in the United States. Vertical over here, empty over there.

WEITZNER: And Europe does as well.

MOGLEN: Well, with an ambition maybe for a more unified approach, which I think Paul is expressing. If we take it slow enough, we get it right once.

WEITZNER: I mean I think if anything the ambition is to make it a more independent approach.

NEMITZ: Well, I mean, I think we have both. We have the horizontal regulation on privacy, and I’m convinced that for artificial intelligence also we need some basic household rules of a horizontal nature. Then, of course, also on data protection, you will have, and you do have already now, specific rules on specific areas, and this will also be the case for artificial intelligence. So I think the classic quarrel between Europe and the United States has been: do we need horizontal rules? And my understanding was you guys wanted horizontal rules. You tried twice with President Obama to pass a baseline privacy act. But it didn’t work politically. So, I think there was agreement that we need some horizontal rules. And I would think that on artificial intelligence it would of course be good to agree on some basic horizontal rules, but it’s clear that…

WEITZNER: Ours were only partially… ours were horizontal in that they fill the gap. But they were not horizontal in that we continue, appropriately, to rely on the health privacy laws that we have, the financial privacy laws that we have. So, yes, we have a gap that needs to be filled, but that doesn’t mean that we’re replacing the sectoral approach to privacy that I think works well in the U.S., that is appropriate to the U.S. environment, with an omnibus approach, which I understand why it exists in Europe, but we don’t live in that world.

NEMITZ: But in this world of theoretically being able to connect all databases, it’s very important to keep them separate and to have purpose limitations, which means that if the data is collected for a certain purpose it cannot just be used for anything. So we have to also confront very self-confidently this theory of big data, which is: well, there must be a big data pot, and whatever is in there, everybody can use it. It sounds great. And you get these messages about maximizing the learning from big data. It’s all true. But, if you do it, you completely lose freedom and control over the data. So, our rules are just different. Our rules say the data must be collected for a certain purpose, and in principle it can only be used for this purpose, and it’s true then the learning…

WEITZNER: But we have rules all over the place in health privacy, in financial privacy. We have them all over for purpose limitations.

NEMITZ: No, but I mean also the discussion about the benefits of big data. Of course democracy and the rule of law have a cost. It will reduce the efficiency of big data learning if you say there’s a purpose limitation, that the data can only be used for this purpose. And those who dream of perfection in government, starting with our Chinese friends and maybe also others in this world–also in Europe we have this constantly, the security guys, they want to see everything, all the time, and there we have to fight and say no. Because that’s the end of freedom.

MOGLEN: Yes, I’m skeptical about both of you because I think what you really mean is we have very important U.S. restrictions defeated entirely by meaningless consent. And this was the point of—

[CrossTalk]

WEITZNER: No! No, I don’t think that we do. I think we have plenty of use restrictions that are not defeated by blind consent.

MOGLEN: Not formally defeated by consent, but practically defeated by consent. That’s how 87 million people got dragged into a political scam because their friends filled out a psychographic questionnaire. That’s what it meant when Facebook decided to move all terms of service for non-European parties out of Ireland last month.

WEITZNER: But Eben, the problem there, I would suggest, is very clearly a lack of enforcement, both in the U.S. and in Europe, when all of those events were happening as to Cambridge Analytica, when Facebook designed the APIs that made the collection of all these people’s data possible in 2011. Number one, the FTC was investigating Facebook and could have stopped those practices. And number two, Europe had on the books, through the implementations of the previous data protection law, an enormous amount of authority that should have been used to protect us all. And neither government did it, because neither government was confident enough in its enforcement abilities. So we really should be careful about thinking that more law is going to help us. What I think will help us is more vigorous enforcement, more eyes, and effective pressure.

NEMITZ: Yes, but the story in America is not over yet, and neither is it in Europe.

WEITZNER: Yes, but five years went by and no one did anything.

NEMITZ: But isn’t the FTC now investigating with a view, potentially on the basis of the consent decree, to imposing a pretty large fine? And the fine is possible also in Europe. I mean, after all…

WEITZNER: But all I’m saying is that we didn’t need the GDPR to go after those practices. We didn’t need any changes in U.S. privacy law to go after those practices.

MOGLEN: No, we needed what Paul described at the beginning, which is engineered-in rules about APIs in the private market. And the idea that the FTC is going to get technically tough enough, and well-funded and well-populated enough, to determine whether the APIs being used by platform companies in the private market design in adequate respect for human dignity…

WEITZNER: They would design it in. Facebook would design those controls in if they knew there was going to be a cost at the back end for failing to have them, for allowing the data to get out. It’s very simple. It is really well understood how to do it, and Facebook took an entirely calculated risk in choosing to open up those APIs. And again, to me it all comes down to the perception of these companies about what kind of enforcement risks they are under. It’s not about what is written in their privacy policies or anything–

MOGLEN: Which is why, which is why policy is such a trick. Because if they have Larry Summers one step from the Oval Office and he’s a friend, that’s going to have an effect on their willingness to take certain risks. On the other hand, if we tried to treat Facebook as an entity which doesn’t know that it has political leverage with governments around the world and therefore can be a little risk taking about things, you wouldn’t be understanding the power that they now exercise. So there’s the second order consequence of effect on political process, which policy wonks ought really to be concerned about.

OK, we need to get other people into this, because now we’re almost agreeing, which is the proper time to get other people into it. Please just wait for a microphone to be delivered to you when you raise your hand so that we can keep this all on the tape. Matt, why don’t you make that pick here.

AUDIENCE MEMBER: Oh, thank you. Thank you. Hi, I’m Rachel. I don’t work in this field at all, but I’m the consummate end user. And just two questions. One is kind of following up on all the talk about Facebook right now. I was at the hearings a couple of weeks ago in Washington with Mark Zuckerberg, just as a concerned citizen, and I kept thinking: why? First, I found out that they have–like the rest of the world–a 200-person team dedicated to counterterrorism at Facebook. And that actually really struck me: isn’t this the government’s role or job? Why is Facebook, a technology company, in the most simple policy terms, taking on the onus of surveillance and counter-surveillance, instead of their main job of connecting the world, which they do very well and which does make them powerful? So, that’s one question. And then my second question is related to autonomous vehicles. You did mention Tesla, and you mentioned the Tesla in the U.K. and the accident there. But what about some of the other big companies that are not automakers, which are, to my simple mind, kind of like the Facebook-doing-counterterrorism thing: Google is now creating and testing autonomous vehicles, Uber has been doing the same. So what does that landscape look like for the future, and the future of the auto industry, when big tech is working on these very same issues? Thank you.

WEITZNER: Maybe I’ll say one thing about Facebook and counterterrorism. So, first of all, my sense is that most of what those 200 people are doing, whatever the number is, is responding either to internally generated takedown requests–that is, this user has posted something that looks threatening–or to those requests that come from outside of Facebook. I don’t know how to characterize the reason that they’re doing it, but I think why they’re in the position of doing that is because of this very unusual challenge of scale. It’s a platform of two billion people, as you know. And so there’s going to be unusual behavior amongst that many people, and we’ve decided, for a whole bunch of reasons, that the first line of response is the Facebooks and the Googles and the other platforms. So, whether it’s on questions like the right to be forgotten, which is a right that Europe has now recognized–the European legal system has said to the platforms, if someone claims a right to be forgotten, you have to go figure out whether they in fact have that right in the case of the specific information, and take the information down. We’ve done the same thing with copyright enforcement. We’ve said to YouTube and Facebook and anyone else who’s in the position of hosting third-party video or audio or anything that’s potentially copyright-infringing that they’re kind of the first line of defense in responding to those sorts of requests. And as a very practical matter, it is simply because there’s no way governments could even begin to have enough people to do this. I guess they could, but we would end up–talking about social welfare–we’d end up spending money where…

MOGLEN: It would be hard to reduce taxes on the rich, very much.

WEITZNER: And so, but it goes back to the history of these platforms, specifically when the U.S. Congress said in 1996 that the platforms’ liability for third-party content would be limited. One of the reasons the U.S. Congress said that is because Congress actually wanted the platforms to self-police. Congress actually wanted Facebook to say, OK, we’re going to create a family-friendly environment, or we’re going to create an environment free of speech that various people may find offensive. Now, when Congress did that, there were 6,000-plus Internet service providers and hundreds and hundreds of Web hosts and other kinds of platforms. What I think is very complicated now is that we have a much smaller number of platforms, and the power that those platforms are exercising over content in some cases looks a whole lot more, as you suggest, like state power than it does like private power. So, there was a rationale behind that, which was that we actually didn’t want government in the position of making those kinds of choices about content, on free expression grounds. We wanted different platforms to have different personalities, if you will. But now that there are so few of them, I think some of those decisions look different.

MOGLEN: Paul, did you want to speak to this?

NEMITZ: Well, I mean, the banks have to make sure that there’s no money laundering. They have a legal obligation to check from certain amounts. When you come to the bank with one hundred thousand dollars in cash, they have to look a little bit at what kind of guy you are. And in the same way, the platforms, on terrorist content or pedophilia, pictures of rape, have legal obligations. And that’s what they have to comply with. Notice and takedown means, if they get notice of content which is illegal, they have to deal with it. And in America, everybody understands money: when it’s about copyright, because it’s the music industry and the film industry, everybody complies. But when it’s about other public interests, people start talking about freedom of speech. Well, I’m sorry, no. I mean, we cannot have a world in which freedom of speech means you can call for violence and say kill this guy, kill that guy, and this woman journalist, rape her, because we don’t like her contributions. And terrorism is the same thing.

WEITZNER: Someone is going to have to tell the President.

MOGLEN: Someone is going to have to tell the President.

NEMITZ: So, I think it’s completely legitimate. And I would say they have responsibilities, because they are so powerful, they are so big, and making profits and being a big owner of all these infrastructures and services also comes with responsibilities. And I would say we have to tighten the screw on the responsibilities, because they’re not doing enough. They still carry incitement to violence and hatred, antisemitism, hatred of Muslims, and so on. And they’ve got to get their shop in order. And they’re competing with journalism and the press, which have to do all this and have to carry the costs, and they have undermined them by taking away all the advertising. All the money in advertising goes to Facebook and Google now, and the journalists, the fourth power in the state, in democracy–they don’t exist anymore. They have to find the money on the street somewhere, and at the same time we want to continue saying, yes, but Google and Facebook, their freedom is so important, and we don’t want them to have to do this and that which the press has to do, the press with which they compete for attention and for money. So, there is something fundamentally wrong in this discourse about “don’t touch the platforms.” The platforms today are not any more just passive, telecommunications-like providers of technology. They are huge editors. They regroup the content. They are basically enterprises to attract, to produce an advertising-friendly environment. Yes. And they also have the capability to do this. And I’m sorry, big profits also oblige you to do what is necessary in the public interest. I have no… In America, I have to get my rant out. [Laughing]

MOGLEN: We are very glad to provide the platform for that.

NEMITZ: And I would say, on this platform liability privilege from 1996–and Europe copied it at the time–we have to start thinking about whether it is still justified today, because these guys are not any more passive platforms. They are active shapers of content, and with active shaping of content, taking the money away from the press in addition, come responsibilities.

WEITZNER: Could I have a counter-rant? [Laughing] Just very quickly, because I actually agree with Paul that the capabilities and behaviors of the platforms are much expanded, obviously, from what they were in ’96. So to me the role of platforms as advertisers, for example, as services that profile users and target ads, to me that is outside the scope of what should be protected by Section 230. However, exactly because the platforms are so powerful and because we depend on them so much, I still think that we have to be extra careful, because there is a lot of plain old ordinary speech that happens on these platforms. And we had debates, as you know, in U.S. law about speech in malls and other kinds of private places, and that didn’t really get resolved, I think, in a very satisfactory way ever. But I think that, for all the reasons Paul is saying about the power of the platforms and the fact that we depend on them as individuals, we also have to be very careful about the kind of pressure that governments put on these platforms. Obviously, there’s sort of agreement about pedophilia. You get into other kinds of speech, and it gets more complicated. And so, I think it’s a delicate balance, and to me the most important thing is that we should have more transparency. We should have adequate transparency into understanding what those decisions look like, because we have to understand if either governments are putting too much pressure on the platforms or the platforms are exercising authority they ought not to be able to exercise.

MOGLEN: So, I’ve got exactly two cents to contribute to that, which is to say that Danny is right. That of those 200 people, let us say 190 are busy responding to takedown requests and compliance activity. The other ten are spies. The most important story we cannot write yet because we don’t have the data concerns the real time merger between the platform companies and the intelligence services. The intelligence services in China, with respect to the Chinese companies, well, we all have a certain amount of understanding of that. And as usual, in the world since 2001, it’s actually more obscure what happens in the United States. Those of us who walk this beat have a bunch of information we can’t reliably share publicly because we have no way to verify it. But you should understand that from the point of view of those of us who do this work professionally, what is going on is a merger between intelligence services and the platform companies, seeking real time connections between the two forms of entities. And your idea that what is happening is that the private world is being conscripted to intelligence activity is partly right, and partly what is happening is that CIA officers are working at Google and Facebook. And that we can say with confidence even though we’re not in a position to point at Jimmy and Sally and Suzie over there. We know that it is going on. We see it in our sources inside the companies, which means that one of the things you need to understand is the depth of the obscurity of the process you are calling attention to.

WEITZNER: That was the answer to half of one question.

MOGLEN: Yes, I’m sorry to say it isn’t going very fast. Let’s do better. Matt, can you, yes. Thank you. That’s fine.

AUDIENCE MEMBER: Thank you. Thanks for your contributions. I’d like to make one remark and then ask a question, mostly for Mr. Nemitz but also for the panel. It seems like what’s on the table is this discussion of statistics and large-scale data collection, and another element I’d like to bring into it is the actual infrastructure that’s required to do this reasoning and to collect the data itself. And that bottoms out with devices like the phone that I have in my pocket, or the router that I bought that was unable to accept updates from the vendor, or did accept updates, something like this. So I’d just like to ask people to consider that materiality in the regulatory aspect. There are actual devices, just like cordless telephones or plugs that have fuses in them, that we can regulate and have regulated in the past. The question harkens back to something that Mr. Nemitz talked about at the very, very beginning, when he was explaining the rationale for the Commission’s interest: this idea that we have to preempt the harms or the dangers of this particular technology, just like atomic energy and just like environmental regulations–a doctrine of anticipation that justifies intervening in situations where the demos can’t understand or can’t foresee the harms just yet. And I was wondering if you could talk a little bit more about that. What makes, say, big data and artificial intelligence, or more generally statistics, one of these areas in which the Commission is obligated to intervene? Is it because it’s too difficult to explain? And if so, should we not try? Or is it because the harms are unforeseeable, or is it some combination of both? What are the specific aspects of this problem domain that mean there is a necessity for the Commission to intervene? Thank you.

NEMITZ: So, just to start with a misunderstanding: we are doing the thinking work, but the intervention in terms of making binding law will come from the legislator. But I would say it’s our duty to scope the issue, to learn and understand, and to come to an analysis of what will be the impact of these technologies. That is an essential part of precaution: that you equip yourself, that you invest in finding out what the capabilities of these technologies are and what they can lead to. And then you have to make a judgment. The Commission is only a body of initiative. We regulate nothing. We make the proposal to the legislator, and then we have to convince them, and often it takes a very long time to convince the legislator, because if the risk is invisible, people say, yes, but we don’t need legislation, everything is fine, let them continue innovating. So that is the mechanism. But see, climate change and environmental policy always take ages, and then smoking–the learning is often by catastrophe. The legislator acts after a catastrophe. And what I’m saying is, and I’ll give you a precise answer on the risks of AI: if technology is moving fast, maybe from time to time, if the risks are high on the horizon, we have to move fast. Why are the risks of AI potentially high? One problem is that we don’t actually know very well how these capabilities are developing behind the closed walls of big corporations who spend billions of dollars on it and have thousands of people working on it. They give TensorFlow to the public, but what is actually the capability? You get very different information. If you go to MIT, there are some people who say, oh, it’s going to take 20, 30 years until they can do this and this, and others say five. The quote I remember was: in five to ten years, AI will win all games. And when I asked, “What does it mean to win all games?”, the answer was: stock markets and elections. Right. So if people tell me, as a policy developer, that in ten years AI programs will win elections, well, then we start working on the question, how can we maintain democracy? And that needs to be looked at, because I would say this is something where precaution is justified. You cannot just let it go down the drain, because it may be irreversible. In fact, the modern technologies of today, in the hands of dictators, once they are in place, make a return to democracy increasingly difficult.

WEITZNER: I’m ready for all of the autonomous systems to get together and elect themselves to whatever political offices there are in the world. When we see that, then we really will have something. [Laughing]

AUDIENCE MEMBER: Hi, so we talked about these platforms, right, and why they are so locked in, and my question is why that’s happening on the Internet itself, right? So that has to come from the policy side. We talked about data leaks, but I feel what’s missing is data mobility itself. We don’t have open APIs, so there is no Facebook A or Facebook B, right? So the issue is, once we have this Cambridge Analytica issue, there is no option B for us. So, what are your ideas in terms of policy, and how government can help us in creating open APIs and helping consumers have a choice as well?

WEITZNER: So I would say that I find this question of data portability very complex and maddening. A lot of people talk about the importance of being able to move your Facebook profile around, to move your social network around. I frankly don’t understand what in the world that means without completely obliterating any sense of privacy of everyone else in my social network. So I think that’s an example of a policy direction that I am frankly a little bit underwhelmed by, to be blunt. I know it’s a right in the GDPR, but I don’t understand what good it really is unless we… I do think, on the other hand, we clearly have movement across different platforms. If you talk to people at Facebook, they will tell you they’re quite worried by the fact that they have fewer and fewer people under 18 spending any time at all on Facebook, because they spend time on Snapchat or other sorts of things. And to me what that says is there certainly is an important role for competition policy, to make sure that we don’t end up with platforms in such dominant positions that it’s hard to move. But I think that is a market phenomenon, not really so much a technical phenomenon. I do think there’s very interesting work happening with entirely different architectures of social networks. Eben is involved in the FreedomBox work. My colleague up at MIT, Tim Berners-Lee, has an architecture called Solid, which is a way of enabling people to store data wherever they want to store it and have applications that work on top of it, social applications that are separate from the data. And my hope is that those kinds of applications will develop so that we have the ability to exist in less centralized social media. I do think they all pose enormous, enormous privacy challenges. I think the more we decentralize these systems, the harder it will be to hold them accountable to any set of rules at all. I think there are some technical approaches there, but I think it’s a complicated problem.
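
As an illustration of the architectural idea behind separating social applications from user-controlled data stores, here is a minimal sketch; it is not Solid’s actual API, and the class and method names are hypothetical.

```python
# Illustrative only: the architectural idea of separating social applications
# from user-controlled data stores. This is NOT Solid's actual API; the class
# and method names here are hypothetical.
class PersonalDataStore:
    """Data lives wherever the user chooses; the owner grants access per app."""

    def __init__(self, owner: str):
        self.owner = owner
        self._posts: list[str] = []
        self._authorized_apps: set[str] = set()

    def grant(self, app_id: str) -> None:
        self._authorized_apps.add(app_id)

    def revoke(self, app_id: str) -> None:
        self._authorized_apps.discard(app_id)

    def read_posts(self, app_id: str) -> list[str]:
        if app_id not in self._authorized_apps:
            raise PermissionError(f"{app_id} is not authorized by {self.owner}")
        return list(self._posts)

    def write_post(self, app_id: str, text: str) -> None:
        if app_id not in self._authorized_apps:
            raise PermissionError(f"{app_id} is not authorized by {self.owner}")
        self._posts.append(text)

# The social app holds no user data of its own; it reads from stores it is granted.
alice_store = PersonalDataStore("alice")
alice_store.grant("photo-feed-app")
alice_store.write_post("photo-feed-app", "hello from my own server")
print(alice_store.read_posts("photo-feed-app"))
alice_store.revoke("photo-feed-app")   # the app loses access; the data stays with Alice
```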

MOGLEN: That was the crucial exchange, I think, right now for the issue you are talking about. Danny and Sir Tim and I have all wanted to decentralize the web for a very long time. Danny just said, yes, but accountability is really my bottom card, and re-decentralization is not pro-accountability, and so I’m beginning to get dubious. I’m not where he is about that, because, as prior exchanges have suggested, I’m not sure that I think accountability is all he cracks it up to be. But if he and Hal Abelson had been right about how to run data accountability at the end of the 20th century, I would now agree with him. It would have worked. It didn’t work because it didn’t have uptake then, when it needed to. So now I’m still in radical re-decentralization mode–insert FreedomBox advertisement here–but let’s just say that there are public policy steps you could take. The FCC could tell telecommunications service providers: no banning servers at the end points of your retail customers. We could stop discriminating in favor of a model where everybody’s a client and takes what she gets, and begin to use the telecommunications network in a way which was more populist in character. The consequence would be that more data would be in tiny little silos of people who really are friends with one another. It is true that it would be harder to hold that data accountable, but it would be much harder to aggregate it and turn it into behavior collection networks for advertising companies, right?

Then we do have the question: how should the advertising market be organized? Paul said: remember, they used to be media, they took advertisements. They had advertising respectability rules. They had acceptability departments. They didn’t let people put fraudulent advertising in their newspapers. They were regulated, but nobody said that violated the First Amendment, because everybody thought that false advertising was not protected speech. That suggests that we do have steps to take in the large-scale regulation of the advertising market that would change underlying incentive structures against data centralization and in favor of at least some form of agnostic or neutral attitude about whether you store it away or you store it at home. There are public policy steps, in other words, that could be taken. But the Zuckerberg television show was not a step in that direction. And that’s the problem, right? This layer that you want to get to, where we have to come to a conclusion about accountability as against decentralization as a way of controlling out-of-control behavior collection, can’t be completed, because although we can talk about it here, the policy makers are not actually listening to us, because the political conversation has been tilted in its usual sensationalist fashion.

NEMITZ: But, if I may, on accountability: of course, I understand your questions about what accountability means in the social network, but, by God, the Internet is not only social networks. We have loyalty programs, bank accounts, playlists. There are many areas where portability absolutely makes sense, raises no privacy issue, is good for competition and innovation, and we have to enforce it radically. I mean, when we negotiated that article–which is not a data protection article, it’s really a market provision–there was the classic resistance of incumbents, which we had when we imposed by law the portability of mobile telephone numbers and the portability of bank accounts. Huge resistance each time: we will go bankrupt, it’s socialism, I don’t know–all the devils of the world were invoked against portability.

But now we have it in law, and it will be enforced. And I think we have to complain and put the pressure on. And on this decentralization, I would say, well, the GDPR actually gives a big incentive to decentralization, in the sense that, if you can, you don’t want to hold all this personal data in a central repository. So, I mean, I see there are some interesting trends. If Apple says, well, we’d better keep it all on the phone, we’ll build an AI chip, we don’t want the data to come to us, rather leave it in the hands of the individual who produces the data–for me, that’s interesting, and it could be going in the right direction. I’m in favor of re-decentralization of the net for empowerment reasons, for reducing the centralized power of these corporations, and also for data protection, because if the data stays with the individuals, the likelihood is that the abuse will be much less. There will be some problems, but they will be smaller than the problems which we have now with all the piles of data in the repositories of Facebook and Salesforce.

MOGLEN: Well, the behavioral experiment of whether the GDPR is going to make them want to keep less data is now ongoing. We shall see what happens, that’s for sure.

AUDIENCE MEMBER: I actually had a question for Mr. Weitzner. As someone who is on this panel and who worked for the Obama administration, I was curious to know what your look-back is on the way the Obama administration used social network data in the run-up to the 2008 campaign. I know it used some Facebook data, and I was just sort of curious to hear your retrospective on that.

WEITZNER: So I didn’t have a huge amount to do with that. I would say that there was a kind of interesting comparison between the main Obama campaign app on Facebook in 2012–how that worked, which did have access to a very large amount of personal data–and how the Cambridge Analytica analytic function worked. The Obama app was an app that would suggest to the individual which of your friends you might want to communicate with about the campaign, but the communication was from the individual to whomever in your friend network. The Cambridge Analytica approach was obviously different. That was about placing ads or messages or other kinds of things based on a centrally developed profile. So, interestingly, it was the same universe of data when you look at it from above, but the use was quite different. And I had my preferences for which one I think was more respectful of and consistent with democratic process. I mean, there’s no question that politics is a personally intrusive activity. You’re trying to get in front of people and tell them what you think and what they should think. So, it’s not a leave-me-alone kind of environment. But I think it is an environment where you certainly have to expect that you know who’s talking to you, or at least you know whether a machine or a person is talking to you. My guess is we’re going to come back to some of the anonymous political speech questions that were raised in McIntyre a long time ago, and probably think about some of those again in this context.

MOGLEN: After the last Indian general election, my law partner, Mishi Choudhary, and I were talking to someone who had worked in the BJP IT cell on the Modi campaign, and he said, well, we’ve learned that there is really no way to get to a voter through SMS with an argument to the conscious mind, but we have learned that there is no limit to what you can do by getting to a voter through SMS in the unconscious mind. I think that the distinction that Danny was making between using the data for organizing purposes under human control on the one hand and reaping data for psychographic models that you keep private on the other hand is close to the correct distinction. The most important distinction is between the conscious and the unconscious in politics, and what is most important, in my judgment, about the machine’s intervention in politics around the world is that it is driving political discourse to interaction with the unconscious mind, which is destructive of democracy in a different and fundamentally, to my mind, more complete way. We are driving a bunch of technology against the idea that democracy is about making choices as opposed to responding to feelings, and the button pushing is now primarily going on at the unconscious level because professionals around the world have realized that that’s the level where they can be effective. Diana, I am sorry to keep you from asking your questions.

AUDIENCE MEMBER: Sorry, I had about eight questions in mind, but I’ll restrict myself to just a few. I work for a human rights group, so I’m interested in this issue from the perspective of the right to a remedy for a violation. How does the individual achieve a remedy? Which may not always be through a cause of action; it may be through public policy, as you’ve noted. I wanted, in that context, to ask about a few different issues. One of them is that we’ve talked about the problems of data aggregation. While certainly creating smaller silos might be helpful, it seems to me it doesn’t entirely answer the problem of aggregation as a circumvention of ways we might regulate, for example, against flat-out discriminatory profiling or something like that, because you can get to discrimination easily through aggregation. We also haven’t talked about the data sets being biased. We’ve talked about the need to be able to access data and the restrictions of having private pools of data, but we haven’t talked about how those private pools embed discrimination and all the ills of our world of perception. So in that context I ask: are there solutions, for example, involving data auditing, whether by individuals or litigants or a profession of auditors? Because this problem of review of data and who owns the data is very complex. And finally, when I talk to American engineers who genuinely care about human rights, they always bring up: but we’re going to fall behind China. Everybody brings this up constantly. So, in terms of solutions, should there be market barriers, quality controls, transfer regimes? What do you think the prospect is for regional or international standards on, essentially, AI acceptability or AI technology transfers and uses? And I worry about this a lot because, of course, it’s an issue both in private application and defense application. So those are three enormous issues that we haven’t talked about, and feel free to ignore any of them, because I know it’s late.

NEMITZ: OK. So first of all, at least in Europe, and particularly in the country I know best, which is Germany, we’ve been through a number of China crazes. Oh God, China’s going to take over all the machine building, it’s going to take over the car industry. So there are interests which drive these crazes, and these are classically people who say: if you don’t do this and this and this, then our industry will go down the drain. I’m not very impressed. It needs a very hard look. In any case, all of these technologies they develop have, according to European law, to comply with our rules. So, for example, if there are no privacy protections in the collection of data as the GDPR requires, they can’t provide the service in Europe. Finished. It’s as easy as that. And if they violate the rights of their own population, it’s terrible, and it’s a human rights issue on the global level, but they can’t use these technologies in Europe, and if they want to sell there and make money, they will have to adapt and learn. And by the way, I’m convinced that if the Chinese get richer, if the middle class is moving up, they will also seek these protections and these rights, because it’s part of the lifestyle of richer people that they don’t want to be controlled all the time by government and a total nanny state.

So how does enforcement work? Well, my conviction is it cannot be left to individual complaints only, because the individuals often don’t know what’s happening with the data. So you need ex officio investigations. That’s why we have these data protection authorities. Their job is to go in and do the checking and do the audits. Of course, as an individual under our law, you can ask: what do you have about me? And you can require full transparency. And I think, you know, that’s going to become thorny. Facebook today–I saw Yann LeCun, the chief AI engineer of Facebook, say, “We make 200 trillion predictions a day.” All right. All these predictions are personal data, because they predict how you, you, you, you will behave. It’s a thousand predictions on each of you. Well, I want to know how Facebook applies all the rules to these predictions, and I want to see all the predictions which they make about me, because I have that right under the GDPR. So the answer is: we must use our rights. We must make the DPAs do their jobs. That’s why under our regulation you can force the DPA to act. That’s a problem with the FTC: nobody can really force them to act. You have to deal with that in domestic law; nobody will help you there. And in parallel, of course, you can bring damage cases. You have the right under the GDPR now. I don’t think that’s going to be very effective, because of the huge cost of litigation and the difficulty of showing economic damage. So the core element of our system is these DPAs, data protection authorities, which have a constitutional-law position–in primary law, they’re independent. They have to be staffed with hard-nosed people. Up to now it’s a lot of talky-talky shop, and “we love this conference.” Well, in the future, put in a public prosecutor who has already done a good case, a number of good cases. Or put in an auditor. This is going to be very, very important, and I hope, and I think it’s possible, there will be some positive competition between the member states over which authority has the big juicy cases and imposes the big fines.

And these fines–up to 4 percent of worldwide turnover is the fine under the GDPR. The only reason why in America so many companies now invest in this is this fine. This is the big thing. Plus, of course, now the experience with Facebook’s stock market: a fifty-billion-dollar stock market value loss for Facebook after Cambridge Analytica. That’s also quite hefty. So, I would agree, we need the incentives. They are in our law. They are quite good, but they have to be realized now through tough enforcement, and this means also that NGOs, for example, can force a DPA to act. I have founded with Max Schrems, in Vienna, an NGO which is called None of Your Business, and this NGO will do strategic litigation on data protection against companies but also against DPAs who are not doing their job. Please all join it. It’s also good for Americans that this happens in Europe, because if you have a global business model, as we see with Facebook now, they try to differentiate, but the differentiation costs them. Ideally, they want to have one business model. So if our rules are working well, Americans will also benefit. So help us to strengthen the NGO world in Europe so that this strategic litigation also has an impact.

WEITZNER: Diana, could I? I just want to focus on the audit question, because I really like that, and I think it’s, in some ways, the heart of the matter in how we’re going to, over the long run, get to a more sensible set of policies. And what I think is interesting about it is it hopefully can inform a broad social and policy discussion about just what it is that we think is reasonable and not, and what standards we actually expect. I mean, I think there’ll be uses of data that are just outright either inaccurate or bad, and everyone will say, well, forget those, they’re just wrong. But then I think there’s going to be a lot of really hard problems. So, if you followed the research on recidivism prediction that Julia Angwin kicked off: when you start looking at that, you have people who are much smarter about math than I–Jon Kleinberg–who came in and actually proved that you kind of can’t have fairness both ways. That when you’re trying to predict amongst people who come from different populations with different crime histories, you can’t have fairness in every dimension. You can have fairness as to your position in your subpopulation, or you can have fairness as to the whole society, but then you have less accuracy. And that’s just a hard problem. I don’t know that there’s an obvious answer, and the only way we’re going to get to an answer is by having data out there that exposes that and has us all debate it. And I think that’s going to recur in health, and it’s going to recur in finance, and all kinds of things.
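
One way to see the tension that result points at is a toy simulation, offered here as a hedged sketch rather than the actual recidivism analysis: a score that is calibrated by construction (the score is each person’s true probability) still yields different false positive and false negative rates for two groups with different base rates. The distributions, base rates, and threshold below are invented.

```python
# Illustrative only: a perfectly calibrated risk score applied to two groups
# with different base rates still yields unequal error rates across groups.
# Distributions, base rates, and the threshold are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def error_rates(base_rate: float, n: int = 200_000, threshold: float = 0.5):
    # Each person's true risk p is drawn so the group mean equals base_rate;
    # using p itself as the score makes the score calibrated by construction.
    p = rng.beta(2 * base_rate, 2 * (1 - base_rate), size=n)
    outcome = rng.random(n) < p           # whether the predicted event occurs
    flagged = p >= threshold              # "high risk" decision
    fpr = flagged[~outcome].mean()        # non-offenders wrongly flagged
    fnr = (~flagged)[outcome].mean()      # offenders missed
    return fpr, fnr

for group, base_rate in [("A", 0.3), ("B", 0.6)]:
    fpr, fnr = error_rates(base_rate)
    print(f"group {group}: base rate {base_rate:.0%}  FPR {fpr:.2f}  FNR {fnr:.2f}")
```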

What I think is–and it is a little bit like the insurance debate, where, because we had the big Obamacare debate, it got everyone to start thinking, correctly and not correctly, from first principles about insurance. We’re going to have to start thinking from first principles about fairness. Right? And I think the only way to do that is going to be based on open audits, which are going to be–in my mind have to be–about the collective effect of these different analytic processes, not the individual effect. It’s a kind of established data protection right that you can ask how it affected me.

But I think the question that’s really hard is: how does it affect us as a society, and what are the balances that we’re willing to live with? Where are there fundamental rights, no-go lines? And then where are there more nuanced things? So I think it’s a great question. And, again, not to overly toot Julia Angwin’s horn, but I think she does great work, and one of the things she has done is say that journalists have to get much better at working with data. Well, I think human rights lawyers, as I know you guys are doing, are going to have to get much better at working with data, because it is going to be the terms on which we have these debates–not the doctrine so much as the sort of empirical underpinnings of what’s actually happening in different cases. So I think it’s a fantastic question.

MOGLEN: On the discriminatory datasets question, I think what we’re going to wind up with–we’re going to need legislation, the law won’t do it without it–is the equivalent of Title VII public accommodations law. What it’s going to say is that collections of information, behavioral data about people, are the equivalent of public accommodations, and the decision making that results from them is subject to both disparate treatment and disparate impact kinds of review. Now, how that law is going to develop is going to depend very much on how the legislation that roots it all happens, and we’re not even 10 years from that legislation yet. We need several Democratic congresses in a row, or we need Lyndon Johnson to arrive back from the grave. [Laughing] But the crucial problem will be that the equivalent of conciliation will be auditing. And the problem will be the relation between this self-regulatory inspection regime, lightly overseen by the equivalent of something between the EEOC and data protection authorities, on the one hand, and the role of the courts in systematically pursuing the Duke Powers of the future on the other. Right? Which is of course the same people we keep talking about.
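
As a sketch of what a disparate-impact style review of automated decisions might compute, the following compares selection rates across groups in the spirit of the EEOC’s “four-fifths” rule of thumb; the counts are invented.

```python
# Illustrative only: a disparate-impact style check on an automated decision.
# Counts are invented; the 0.8 threshold echoes the EEOC "four-fifths" rule of thumb.
decisions = {
    # group: (number approved, number of applicants)
    "group_1": (480, 800),
    "group_2": (210, 500),
}

rates = {g: approved / total for g, (approved, total) in decisions.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "  <-- potential disparate impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f}{flag}")
```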

WEITZNER: So, a great example of this is–I can’t remember if Human Rights Watch was a signatory of this letter from a number of civil rights organizations about police use of face recognition technology. It ends up, in large part because of the work of a graduate student of ours in the Media Lab at MIT, which has shown that there are radically different accuracy levels depending on the race of the person in the facial recognition system. And it actually goes all the way back to the fact that, A) the training sets are not representative, but even worse than that, photography itself doesn’t work very well for people who are not white, and so you have, all the way back to the beginning, this kind of problem of inaccuracy and an inability to recognize people correctly, which has this whole long-tail effect. And so that’s just one case. We are going to have to look and say: okay, well, what does that mean–does it mean that the technology is usable at all? Are there mitigations to deal with that kind of systematic bias? Or what do you do?
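
A minimal sketch of the kind of audit that surfaces such disparities: report accuracy separately per demographic subgroup rather than only in aggregate. The records below are invented; real audits use curated benchmark datasets.

```python
# Illustrative only: disaggregated accuracy audit for a classifier.
# Labels and predictions are invented; the point is per-subgroup reporting.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("darker-skinned female", "match", "no match"),
    ("darker-skinned female", "match", "match"),
    ("lighter-skinned male",  "match", "match"),
    ("lighter-skinned male",  "no match", "no match"),
    ("lighter-skinned male",  "match", "match"),
    ("darker-skinned female", "no match", "match"),
]

totals, correct = defaultdict(int), defaultdict(int)
for subgroup, truth, predicted in records:
    totals[subgroup] += 1
    correct[subgroup] += (truth == predicted)

# Aggregate accuracy (4/6 here) would mask the gap between subgroups.
for subgroup in totals:
    print(f"{subgroup}: accuracy {correct[subgroup] / totals[subgroup]:.0%} "
          f"({correct[subgroup]}/{totals[subgroup]})")
```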

MOGLEN: Which is going to turn out to be a lot like occupational testing and access to public employment and other sorts of situations for which we have at least operable models.

WEITZNER: What were the red line testers called? People who went into jobs?

MOGLEN: And all of this hinges upon the particular behavior of the Fair Housing Act, or of Title VII, or of the statutory scheme which sits underneath some quasi-administrative justice system which is supposed to handle most of it, leaving, one hopes, the courts for some kinds of class action activity that says, well, if you’re building an advertising platform which allows people to buy ads for folks who hate Jews and don’t want to rent apartments to them, then that should be low-hanging fruit. We should be able to collect that first. And we should be able to use that as the basis for the legislative activity that has to follow, in the same way that we saw it happen between 1954 and 1964 in the United States. But there has to be, at the end, a willingness on the part of Congress to do that work, and that’s a political rather than a policy question, and we’re nowhere close.

All right, well, I want to thank you all for staying 20 minutes extra. It was so wonderful that you came. Thank my guests please for me because they deserve it. [Clapping] Enjoy the lovely weather.

END.