When Software is in Everything: Future Liability Nightmares Free Software Helps Avoid

A speech given by Eben Moglen at the annual meeting of the Scottish Society for Computers and Law (SSCL) on June 30, 2010

Event records

[Iain Mitchell]

We are very, very privileged indeed to have Professor Eben Moglen speaking to us this evening. Last time he spoke in Edinburgh for our annual lecture, a couple of years ago, it was to mark the launch of the GPLv3. We haven’t got anything quite so headline-grabbing this evening. But we have something that I think is going to be very much an issue in the years to come, because the days when computers were merely things that sat on your desktop have long since disappeared. The true nature of a computer is that a computer is in everything. Software is in everything. And when software is in everything, what are the future nightmares and liabilities, and how can they be avoided? That is the theme of Eben’s talk this evening.

I don’t think I need to say very much about Professor Moglen, who is very well known throughout the world. But for those of you who may be unfamiliar with him, he had in his earlier career the distinction of clerking for Justice Marshall of the United States Supreme Court. He has taught at Columbia Law School and holds visiting appointments at Harvard University, Tel Aviv University and the University of Virginia. He is the founder of the Software Freedom Law Center. He has received the Electronic Frontier Foundation’s Pioneer Award for efforts on behalf of freedom in the electronic society. He is admitted to practice in the state of New York and before the United States Supreme Court. Would you please join with me in welcoming Professor Moglen.

[Clapping]

[Eben Moglen]

Thank you, Iain, thank you so much. It’s an honour and a very great pleasure to be here again among friends.

When I was here last, as Iain mentioned, I was coming to the end of a very long and complicated process of negotiating one particular matter of licensing free software. I spoke here partly out of desperation and partly out of hope.

I was desperate to begin thinking about something else after 16 months of the GPL all the time, and I was hopeful that we had done the job in a way which, although nobody was more conscious than I of the difficulties, the bumps and the primitiveness of our collaboration, might signal something good about the future.

I speak here today, I suppose, out of hope and desperation, which feels very different to me. I am concerned because one of the things I have been thinking about recently is the difficulties that we face when software is in everything, based on a little bit of the experience I have going around the world looking at all the places where software is.

And I speak out of hope because I do think it is possible that if we understand what is happening a little bit quicker than we usually do about large scale social change, we will avoid a lot of nastiness that is otherwise going to be pretty serious for us.

So the place I want to start is with the definition of a problem. The problem that I would like to define, I did try to put in the title, but “everything” is of course a remarkably indistinct word. So let me begin where I think the moment tells us to begin, by pointing out that software is in cars. Software is in medical devices. Software is in all other forms of vehicular transport, and software is now, and increasingly will be, more and more fully represented in the buildings themselves that are the fundamental constituents of the built environment where we dwell, work and take care of one another.

These aspects of software’s being in everything are merely reflective of everything (I don’t mean that is the exclusive list of everything), but let us, because I don’t mean to talk all evening, talk about them as though they were enough of everything to make the point.

Let us begin then with a few, what shall I say, speculations. Not facts, which would take too long and require something too much like proof. And all of us here will understand that when matters of large liabilities are at stake, proof is not a thing you can do lightly.

One of the things that we know this year in 2010, and I am going to stick to matters about now, is that sometimes cars mysteriously speed up and crash into things.

That is not a disputed statement. Why they mysteriously speed up and crash into things raises all the usual kinds of difficulties of causation and proof you would expect when liability is a serious social matter. But let us just say that we know that cars mysteriously speed up and crash into things and it is reasonable to wonder about software in relation to the cause of those accidents. And wondering about whether there is software behind some of those accidents raises some important questions.

There is software in the things that power people’s hearts, and it fails sometimes. That too is a fact. There is a lot more to be said, and once again proof is an important subject and I don’t mean to do more than speculate. The rules about what software you can put in medical devices and how you test it are not rules which would be regarded as sufficient to create safety in other industries whose liability profile is substantially lower than that of medical devices.

My organization, the Software Freedom Law Center, will next month be publishing a report on this particular subject, and so I am going to limit myself to speculations here in the knowledge that some facts will become publicly available shortly that will be illuminating.

Software flies airplanes. And sometimes software fails, perhaps, creating accidents. Once again, that is the most that one could say without entering into complicated discussion. But I think it would be useful to indicate the nature of the subject there a little more precisely. So let us give it an airline and a flight number: Air France 447, which went down in the Atlantic Ocean on the First of June, 2009, killing 228 people. Today, 13 months after that accident, the flight data recorders have not been discovered. The problem has been posed by one crash investigator as locating an object the size of a shoebox, in an area the size of Paris, three thousand meters below the surface of the ocean, in terrain as rugged as the Alps.

There is every reason to believe that those flight data recorders which have not been discovered in the last 13 months will never be discovered. And the only direct information available about the cause of the loss of Air France 447 will be the automated telemetry received from the plane in the last hour of its flight in thunderstorm activity over the central South Atlantic.

The telemetry shows that the aircraft experienced a loss of trust in one of its inertial navigation guidance systems. It is hypothesized that this may have occurred due to icing of a tube on the outside of the plane which registers air pressure changes for inertial guidance input. We know that one of the two redundant inertial guidance systems had failed in the opinion of the software that determines whether or not to trust the system, and that the standby air data system and the other inertial guidance inputs were disagreeing. Thus there was a disagreement between the two available sources of information, one source having already been ruled unreliable and not to be consulted, and in that condition the next thing registered was vertical falling of the passenger cabin, which led to, we infer, powered flight into ground. In other words, the only thing we are ever likely to know about Air France 447 is that there was a multiple-condition software failure in process on the airplane, after which it was lost.
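The shape of that failure mode is worth making concrete. What follows is a minimal sketch in Python, entirely hypothetical (the sensor names, the threshold, and the voting rule are assumptions of mine, not anything taken from the aircraft’s actual systems), of the kind of arbitration logic just described: once one redundant source has been ruled unreliable, a disagreement between the two that remain leaves the software with no value it can trust.

```python
# Hypothetical sketch of redundant-sensor arbitration, not actual avionics code.
# It illustrates the failure mode described above: one of three inputs has been
# ruled unreliable, and the two remaining inputs disagree, so the arbiter can
# no longer produce a trustworthy value.

from statistics import median

DISAGREEMENT_LIMIT = 15.0   # assumed tolerance between readings, in knots


def arbitrate(readings, trusted):
    """Return a value the system is willing to act on, or None."""
    usable = [value for name, value in readings.items() if name in trusted]

    if len(usable) >= 3:
        return median(usable)            # three sources: take the median vote

    if len(usable) == 2:
        a, b = usable
        if abs(a - b) <= DISAGREEMENT_LIMIT:
            return (a + b) / 2.0
        return None                      # two sources disagree, no tie-breaker

    return None                          # one or zero sources: no cross-check


# The hypothesized condition: one probe already distrusted, the other two
# sources of information in disagreement.
readings = {"adr1": 270.0, "adr2": 180.0, "standby": 265.0}
trusted = {"adr2", "standby"}            # adr1 has been ruled unreliable
print(arbitrate(readings, trusted))      # -> None: no trustworthy value left
```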

So what I am talking about then, is the inevitable occurrence of what we would regard as significant liability issues surrounding software failure as amongst the significant causal possibilities throughout society.

That is the definition of the problem.

The parties affected by the problem, in addition to the human beings killed, injured or otherwise subjected to losses for which liability may rest with someone else, are manufacturers and regulators around the world who face serious issues about operation at the edge of their ability to foresee.

Manufacturers face obviously the problem of constructing devices which meet both regulatory demands and market conditions in which we may treat the avoidance of avoidable liabilities as among their regulatory demands.

But they experience some secondary difficulties from time to time, in relation to the software they embed in the products that they make, whose failure may cause harm. One of the difficulties that they experience is that they acquire software from third parties with indemnities, or liability exclusions, which are extremely limited for them as purchasers. And a more serious problem is that sometimes they do not acquire software legitimately.

One of the difficulties one can speculate would be faced by an automobile manufacturer who learned that some of its fundamental control software is causing harm, one of the difficulties one might speculate about (I at least would, on the basis of my experience), is that the manufacturer might not be in a position to disclose about its software all of the matters one would expect it to know, like how it got it, because there is a lot of software in the world, doing jobs that we might think of as quite legally important with respect to the possible incurring of liability by the manufacturer, which was acquired through means that we would characterize as informal, if we were being exculpatory.

And if you are in a situation where you have software which you reasonably believe is malfunctioning, and which you may even be able to fix, but which has already caused very substantial harm, in your opinion, the last thing you would want to have to do is come forward and confess a sin in its acquisition, because that would lead to problems that you cannot control very easily. And therefore it is much simpler to fall back upon the difficulty of proving that the software had anything to do with it.

Now in the case of automobiles it is particularly easy, and in the case of aircraft crashes it is particularly hard, to suggest what the manufacturers mostly want to be able to suggest in a situation like that, which is that the person operating the product probably caused the harm.

The aircraft passenger is among the most passively vulnerable forms of modern human experience, as we are made to remember every single time we go to the airport and somebody prods us as though we were criminals. Your vulnerability, at least if you travel as much as I do, is always reinforced by the behavior authority deals out to you in international air transport. But once you are belted in, if computers on the airplane begin to disagree about what information should be presented to the expert human beings who are supposed to make the judgment, who have the fate of the aircraft, the passengers and the crew in their hands, that is something you can do nothing whatever about. And if the computers disagree, and the pilots don’t get to make an expert judgment, and the airplane falls out of the sky, which could conceivably have happened once already, at least, then obviously it would be very difficult for the manufacturer of the airplane to say that the passenger was in any way at fault, and he would limit himself to saying that the airline did not do with the airplane as it should. He would also fix anything that is wrong with the software. Which is why, oddly enough, the aircraft is not our biggest problem and I did not put it in the headline.

The automobile, on the other hand, is a very dangerous machine whose tendency to cause harm can always be blamed on the driver. And I would simply limit myself to pointing out to you that Toyota has had for many years, both as expert witnesses and as consultants, a number of social psychologists with distinguished appointments at American universities on the payroll, in order to testify in lawsuits that people often press the accelerator pedal under the mistaken impression that they are pressing the brake, particularly under conditions of stress.

This is one of those beautiful counter-intuitive results of social psychology, teaching you something about human beings which you are able then to marvel at, because it is a property of human beings which is apparently universal but which has never happened to you in your own life. I wager with great certainty that you have never actually pressed the accelerator pedal, accelerating down the highway, and crashed into something under the impression that you were holding your foot on the brake.

This is what you say when software malfunctions, sometimes, I would suggest. And lawyers make money doing it, and things that lawyers make money doing are unlikely to stop happening unless forced.

Regulators then have two problems. First they must have jurisdiction to regulate, and second they must have competence. Jurisdiction to regulate is not merely a formal question. It is a practical one. Japanese administrative agencies have authority in the jurisdictional sense to regulate automobile safety. But it is famously the case that automobile safety in Japan is a self-regulatory matter, as Internet privacy is in the United States, a subject I am not going to talk about today, but which would justify another visit to Edinburgh if you are ever inclined to invite me back.

Regulatory jurisdiction over software in particular, in other words, would mean regulators deciding to go into businesses they have largely left, each in their own way, to be adjusted by other people. If one could say only the best of regulatory conduct in this area, one would say that it had resulted in a lot of self-regulation. That is the good news. One of the other problems about regulation then (I won’t get to the whole of the bad news all at once, because I wish to emphasize hope over desperation, at least to some extent) is the extraordinary difficulty that regulators have in maintaining the competence to cover this portion of their jurisdiction, practically.

The National Highway Traffic Safety Administration in the United States, our chief automobile safety regulator, an agency which is comparatively active, extremely thorough, and from a technical point of view very well informed, but which often loses battles over recalls due to the politics of regulation in the United States, NHTSA, an organization which rarely has difficulty getting its facts right, was compelled to admit in the course of discussion about Toyota’s automobiles in the United States this spring that it had no engineers capable of providing independent testing of Toyota’s relevant software in its relevant models of automobile. No capable engineers, because this is an area so far outside the practical jurisdiction of even a quite conscientious regulator.

And so NHTSA has borrowed 50 software engineers from NASA in order to thicken its ability to conduct a meaningful investigation in this instance, which says nothing about how a continuing presence in this area would be managed if facts happened to justify the desire to look into the software in cars more thoroughly than has been done in the past. I am not going to restrict myself to beating up on North American regulators in this talk, and I am not going to restrict myself by any means to beating up on regulators, but, to offer another U.S. example, consider the Food and Drug Administration in the United States, which, modulo again its difficulties in the politics of regulation, is also a highly factually competent agency with a comparatively deep technical understanding of its subject. That FDA long ago outsourced to private commercial parties the job of testing the safety of medical devices, under a devolution of government activities into the private sector that I could call by some name that would be familiar to you but which might sound deprecatory.

At any rate, what has happened is that those organizations contract to test the safety of medical devices, and, as we shall report next month, the protocols governing how they test software for those purposes, protocols which are contractual in nature and which are therefore documented, would not be sufficient for testing software in a matter far less important than a pacemaker or an insulin pump.

Once again the fundamental difficulty will turn out to be that testing software is a complex activity. And simple testing of software, asserting that it behaves correctly under single-cause-of-failure conditions, is inadequate, even if the software cannot in its malfunction cause imminent death, as some of this software can, and perhaps has.
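To make the inadequacy of single-fault testing concrete, here is a small hypothetical sketch in Python (the device, the fault names, and the numbers are invented for illustration): every test that injects one fault at a time passes, yet a particular combination of two faults produces the dangerous behaviour.

```python
# Hypothetical illustration of why single-fault testing is inadequate.
# The controller handles each fault on its own, so a test plan that injects
# only one fault at a time passes; the dangerous case appears only when the
# two faults coincide.

import itertools


def dose_rate(sensor_failed, pump_degraded):
    """Toy infusion controller returning a dose rate in units per hour."""
    if sensor_failed and pump_degraded:
        # Unanticipated interaction: compensation logic for the degraded pump
        # runs on a stale sensor value and overdoses.
        return 12.0
    if sensor_failed:
        return 0.0      # fail safe: stop dosing without a sensor
    if pump_degraded:
        return 1.5      # compensate for the weak pump
    return 1.0          # normal operation


SAFE_LIMIT = 2.0

# Single-fault test plan: every case passes.
for fault in [(False, False), (True, False), (False, True)]:
    assert dose_rate(*fault) <= SAFE_LIMIT

# Exhaustive multi-fault plan: the combined-fault case is caught.
for faults in itertools.product([False, True], repeat=2):
    if dose_rate(*faults) > SAFE_LIMIT:
        print("unsafe under combined faults:", faults)   # (True, True)
```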

So once again what we shall discover is that regulatory authorities face significant constraints on their cognitive capacity, and on their ability to conduct the kind of testing, even if it is only sporadic spot testing, which we assume assures the safety and quality of materials used in society where harm imminently results from failure.

In the hotel in which I was staying here, a lovely establishment, but which I shall not name for reasons that will be apparent in a moment, there was an accident last week in which an elevator cable parted and an elevator containing guests in the hotel plummeted from the second story into the basement. When you check in at the hotel you merely see a sign that says “We are sorry that this elevator is not working. And we are apologetic about any inconvenience it may cause.” I know that the accident occurred because a gentleman I met in the course of my journey from New York to Edinburgh earlier this week was the employer of the two people who were in the car. And in casual conversation waiting for a delayed airplane the matter came out. I have not, I admit, looked into the question of elevator safety regulation in the municipality. But in every city in the world where buildings are tall (and they have been tall here in proportion to the environment for longer than they have in most parts of the world) elevator safety is a regulated matter, and there are periodic inspections, and people who are independent engineers, working at least in theory for the common good, are supposed to conduct such tests as would allow them to predict statistically that there is a very low likelihood of a fatal accident before the next regular inspection.

With most of the software in the world that causes harm if it fails, there is no regular inspection. There is no requirement to make the materials inspectable. And there are great doubts about the capacity of regulators, and of the technicians they can reasonably expect to employ within budgetary constraints, to conduct the kind of investigation to assure safety which is characteristic of the simple physical stuff out of which the dangerous parts of our world are built.

That is the full explication of why we are going to have liability nightmares. I recognize that there may be people in the room for whom the phrase “liability nightmare” sounds like a good thing. And this is part of why I speak out of desperation. Because oddly enough there are a lot of smart people on the other side of what I’m about to say.

Some of those people have business interests in being allowed to determine the quantum of this risk all by themselves and to lay it off as silently as possible. Because of course there are pathologies of private governance just as there are pathologies of public governance.

Oddly enough, under late capitalism when financial industries are strong, businesses’ incentives to study and prevent the risks of catastrophic loss can be remarkably low. The reason those incentives can be so low is that the avoidance of catastrophic but low-probability risks with real costs in the present looks like expenses you can cut.

And if you are leveraging your business, avoidance of catastrophic risks of low probability with substantial present costs in either time or money will cause failures to go under-prevented routinely as a result of the gravity of the balance sheet.

Allow me to mention in this context Lord Browne, whose creation of the don’t-call-it-British-Petroleum company, as we know it now for a short while more, resulted from the leveraged acquisition of large numbers of oil companies, building an immensity which then had to save money everywhere it could in order to manage the expenses against which it had to balance the costs of the immense leverage that had created it.

That BP became well understood throughout the world as a safety miser, and its record in every major jurisdiction where it functioned showed that its incentives had become to under-insure against low-frequency catastrophic risk because the avoidance of present expense was irrevocably determined by the gravity of the balance sheet. We had a refinery explosion in the United States; we had significant pipeline injury resulting from inadequate management in the United States; and now, I say no more.

So I am desperate because there are forces at work in all the places where justice must be made—that is, among the public regulators within the private businesses and even at the bar—there are forces that do not want to hear what I am going to say, which is that we can’t live this way.

This must not happen. This is another form of ecological harm resulting from our inability to understand the technological nature of our transformation of society shrewdly and rapidly enough to avoid serious human harms.

I said recently, I will admit, that Mr. Zuckerberg had done more harm to the human race than anybody else his age. And that’s an unfortunate fact about where we live now, but I need to point out to you that there are a lot of people in the world a lot older than Mr. [Zuckerberg]. Now we have a problem we must fix, and the bad news, as I have pointed out, is that we are not socially aligned even to recognize it, let alone fix it.

The hopeful part of my talk is unfortunately rather short but it’s rather intense, because the good news is, freedom foresaw the problem, and we could fix it if we were let. You see, the fundamental difficulty is a difficulty which arises from the inadequacy of regimes of inspection. Manufacturers have incentives for non-transparency, including non-transparent ways of creating the code they put in things. Regulators have an incentive for transparency, but they cannot manage the expensive cognitive machinery necessary to understand and to repair the liabilities created by software.

And legal rules, though of course productive of an exacting and thorough sort of justice, as we all know, are at their very best effective in certain forms of post-harm redistribution, against which I have nothing bad to say, except that they don’t prevent the nightmare. All they do is, after long litigation, move money around between insurers, which is not really a sufficient response.

We do possess the answers necessary to implement a different way of thinking about things in the free world. First of all, we produce transparently. Second, we avail ourselves of what has come to be known in the free world as Linus’s Law, named after Linus Torvalds, that in the presence of enough eyeballs, all bugs are shallow.

This is not a necessarily correct technical statement, but it is, in this context, an important social proposition. The correct way to maximize the available inspection of software that can fail is to use civil society’s full width to conduct inspection. I don’t need to explain to you what can be accomplished in this world by a single motivated hacker.

I don’t need to explain to you why it is that if you tell everybody on Earth, “the software that could fail, killing your mother the next time she takes an airplane, is on the Web, you might want to have a look at it,” there is a remarkably high number of very talented and thoughtful people around the world who will do exactly that.

So what I’m going to say, oddly enough, reduces to a couple of rather simple principles, which could avoid a great deal of liability nightmare around the world. On the downside, some lawyers would get less rich doing those liability nightmares, and I acknowledge, in an audience such as this, the legitimacy of that consideration.

But the upside is more substantial. We would actually avoid a lot of deaths.

Proprietary software is an unsafe building material. You can’t inspect it. You can’t assess its complex failure modes easily, by simply poking at the finished article. And most important of all, if you were aware of a problem whose repair would be safety-enhancing, a problem that you could fix, you couldn’t fix it.

If you were aware of a catastrophic failure mode, you couldn’t do anything about it, except ask the manufacturer to fix it, who of course sells almost all the software that it sells, if it sells to consumers, under a shrink-wrap with a Hadley v. Baxendale-ization of the whole thing. Which basically says: if the software fails catastrophically and obliterates your town, we’ll give you your money back.

So proprietary software is an unsafe building material. We shouldn’t use it for purposes that could conceivably cause harm, like running personal computers. Let alone should we use it for things like anti-lock brakes, or throttle control in automobiles. We wouldn’t allow people to build black-box elevators, you know. They’ve got to be inspectable, and they have to be repairable by the people in whose buildings they are.

That’s a sensible rule, arrived at over a long period of experience with what can happen when things fall, which you would expect us to carry unchanged into our experience of the digital environment, but which we have not. The basic difficulty we face is that we can’t see enough, and we can’t modify things fast enough, to do anything but assess, in an extraordinarily complex way that the legal system, too, will be no good at, what went wrong after it fails. What we actually need is the ability to harness civil society to prevent failure. This is a problem, in other words, which can be prevented more easily than it can be coped with after the fact.

The obscurity of my principle, the fact that it hasn’t been widely endorsed around the world, well, I will leave the question why everybody hasn’t seen it already to be discussed by others.

Because, after all, I really am, however desperately, an optimist. I actually think what we ought to do is just recognize the truth of this and fix it. I can’t imagine that there’s anybody who wouldn’t want to—unless they had existing incentives already not to want to.

And, so what we have is a democracy problem, because that’s how we deal with things like this. In other words, we need regulation, but the regulation that we need is regulation that prevents harm, a not-difficult proposition, usually, to offer to a legislator.

We need to use inspectable and testable building materials in constructing the artifacts that run our lives.

Well, that’s not a terribly difficult proposition to put before a legislature. Every legislature in the twentieth century accepted that to a great extent, from the municipalities around us, to the national governments, and beyond. The European Commission prohibits, flatly, the use of user-modifiable software in medical devices. The European Commission’s view is that the presence of modifiable software in medical devices causes risk. I perfectly understand this point of view, but it’s precisely backward.

On the whole, over the entirety of the problem, the availability of software you can read, understand, and repair, which can be vetted thoroughly, which can be fully disclosed to civil society, which can be assured to work, though in which who installs modifications in which devices can be rigidly controlled by many forms of law, including criminal law, makes sense.

The determination that every medical device will be a black box, fully testable only by its manufacturer, does not make sense. The existing compromises, including the European Commission’s view, are, unfortunately, not working.

In the United States, at least in theory, regulation makes more room for the possibility of free software in medical devices, but practice is, of course, very much the other way.

I will state, as grounded speculation resulting from my experience, that there is at least one major manufacturer in Europe who is out of compliance with the GPL, concerning GPL’ed software embedded in the medical devices they sell here, because they believe that it is less risky to disobey the GPL and risk copyright infringement lawsuits than to risk the wrath of the European Commission for using that GPL’ed software in medical devices.

If you were a large manufacturer of medical devices in Europe and that’s the choice your regulatory masters put you to, that would be a bad thing, I say, happening to believe that violating the GPL is a bad idea for practical as well as moral reasons.

But what we really benefit from is the recognition that the more brains we harness to the process of making this extraordinarily complex and failure-prone technological environment around us safe, the better we will do.

Failures in software that cause security problems are not the biggest difficulty. They’re over-emphasized, by several orders of magnitude. But they’re not trivial, and I would be remiss if I didn’t say something about them, which is that they offer an excellent demonstration of why it’s better to have more eyeballs on the code.

I appreciate that there is strong controversy around the world of whether proprietary operating systems or free operating systems are more secure. But you appreciate that that controversy is like the controversy over whether people sometimes press the accelerator when they meant to press the brake and keep it there long enough to drive down the highway and crash into things, because you have more Windows computers in your life, in all likelihood, than I have in mine, and so you know.

What we really recognize ourselves is also recognized by the regulators, and, to some extent, is recognized by the manufacturers, though they adopt our software primarily because it’s cheap for them. They also know it works, and “works” includes “doesn’t send their devices up in smoke” and other such things, which are, after all, not good for you, and which they wish to avoid. If they didn’t believe they were avoiding those risks, which are catastrophic to them, if not to the human beings around them, they wouldn’t use our stuff.

Even the lawyers know this would be a good idea because, as I’ve told you, and although I’m happy to answer hostile questions if anybody has any, the truth is this is common sense, really. And, despite predictions on the subject by non-lawyers, lawyers listen to common sense.

So we’re going to have to do it. We’re going to have to do it. It’s going to take some trouble to get it done, because there are going to be a lot of people on the other side, for reasons we’ve just investigated. And each one of the catastrophes that ought to be the last straw, there’s going to be argument about. There’s going to be discussion about causation and proof, and it’s going to be immensely complicated.

And, some of the people in this room will be adding smoke, because that’s their job and they do it well. So, it’s not going to work the way it ought to work, namely, “look, we’ve got to do something about that.” Unless people are willing to synthesize the data for themselves, and put it together, and add common sense to it, and make a democratic demand, it won’t occur.

And a lot of other things will occur that we will feel bad about, that we should have avoided, that I just told you we could raise our odds of avoiding very drastically, and all we’d have to do is be for freedom, which is surely the most desperate kind of hope anybody could have offered under these circumstances. Thank you very much.

[Iain Mitchell]

Eben has very kindly agreed to answer questions, so I was wondering if we have somebody who might like to kick off the discussion.

[Audience member]

I have several questions. Thank you so much. You raise so many interesting points. I am Paula from the Open Knowledge Foundation in Scotland and so I have a lot of questions. Is there a mailing list where we can ask them all, by the way?

[Eben Moglen]

So, there is a place called moglen@columbia.edu, and I’ll put a website up or add it to my blog, or do something. If it’s a useful conversation we’ll keep it around.

[Audience member]

There are several things, but I’m going to ask you just one.

We are learning how to use the “put a lot of eyeballs on the code” approach. I think, although there are issues, we can start. Would you recommend that we have many eyeballs on the licenses? My approach to open source licensing is that at the moment I see that there are a limited number of lawyers who are experts, and although the lawyers who are experts have been [inaudible]. So my approach would be: why don’t we open the licensing process to a group of people, even with different opinions, to try to make these licenses more reliable? This is something that I don’t see happening now, and I would like to have your opinion based on your experience.

[Eben Moglen]

So, as F. Scott Fitzgerald says, we beat on, boats against the current, borne back ceaselessly into the past. Well, that’s why GPLv3 was done the way it was done, because I wanted to put together a process like that, in which we could somehow model the social consequences of mixing everybody into a deliberative process, regardless of the size of organization, the geographic dispersion, or the nature of the technical or legal specialization of the parties. We spent 16 months putting a license together in that way, and the last time I was here, the talk I gave, which is rattling around the net somewhere, was about what I thought we might have learned on the basis of that early experiment with the process of making better licenses that way.

The Mozilla Foundation is currently engaged in a process of revising the Mozilla Public License, which pretty much adopts that general approach to the making of free software licenses, and given that MPL and the Free Software Foundation copyleft licenses are the most complex licenses that are used in the free world for most purposes, I think we’ve pretty much tried in a conscientious way to fulfill your request. I don’t know what would happen if you tried to get together a lot of people around the world to reconsider the MIT X11 license, or BSD. My guess is that people would say, yes, well, they are simple things, and they work, why fix them, they ain’t broken. And they don’t have to be very adaptable to circumstances because they basically defer to downstream users’ decision-making.

I think Creative Commons is correct that the process of manufacturing software licenses doesn’t need to occur in the Creative Commons process. There are answers that are important here, on which Diane Peters, the general counsel of Creative Commons, and I work closely at the moment. Diane sits on the board of the Software Freedom Law Center, and we are, I hope, valued colleagues.

What we have been talking about recently is the world in which we live, in which media objects are converging, so that both software and non-executable media bitstreams—video, audio, texts, and graphics—are living inside a single object from the user’s point of view, and we need to think about how multiple licenses exist and work together inside that object: one for the code, and one for the graphics, the text, the media of every kind.

There will be some adjustments around the edges and I have every reason to think that those, too, will occur in Wiki-like ways. We all are benefiting enormously from enhanced Web collaborations. I feel sure that license-making is going to go in that direction.

[Audience member]

I was just wondering, as well as having the software publicly available, do you think it would be useful if software had test suites that were publicly available?

[Eben Moglen]

Well, if you look at how most free world software works, that’s how it works. “./configure”, “make”, “make test”, “make install”, right? We do that. We’ve always done that, and not just in the free world, right?

[Audience member]

But should there be a regulator defining that there should be certain tests in the test suite…

[Eben Moglen]

Why worry about whether regulators define it? In the free world we define it. Developers define tests because they want to test their software. Testing is part of the process of making.

[Audience member]

The whole idea of a regulator is to ensure that it doesn’t go wrong.

[Eben Moglen]

Let’s suppose that regulators try to be maximally parsimonious. Let’s suppose they operated either in libertarian political environments, or under the rigid routine of having to explain to a political appointee everything they do, or in any of the other ways have limited budgets; let’s suppose that for any of these reasons regulators want to be parsimonious. The minimum set of regulations necessary is: you must make all parts available to inspection, and you must permit anybody to fix a safety problem at any time.

[Audience member]

There would be contention.

[Iain Mitchell]

Coming from a European legal perspective, the difficulty, of course, you’ve got, is, that regulation can never be a silver bullet. Think of the mass of regulation that surrounded the banking industry, and think of where that got us. I think that the point is, that Eben’s point is very well made, that regulation might be necessary on some stratum, but essentially you’ve got to rely upon commercial and market pressures, you’ve got to rely on public opinion, you’ve got to rely upon persuading politicians. Don’t think that regulation is the silver bullet that will cure everything.

[Eben Moglen]

One of the elements of this that’s contentious is that what you have to rely upon is society, sometimes known as socialism, which is why it’s so contentious.

What the businesses have learned is that they could socialize research and development in software to the free world. We did it for them with enormous efficiency, both in order to demonstrate a theoretical proposition, namely that freedom is good, and a practical proposition, namely that we could make neat stuff if people would let us. And as a consequence, we altered the way the software industry around the world works because we proved to them that socialization of research and development was highly profitable.

Now even Lawrence Ellison, a man who never had a research division—because what good is a research division in a company that makes and sells software?—now even Lawrence Ellison participates in socialism heavily, because he bought a relationship with the free world of enormous value and he paid seven billion dollars for it, which even to him is real money. You could race a sailboat for that.

Now, the consequence of relying on society is that the regulator gets a free ride the same way that the capitalist does. In the same way that the manufacturer who sells at a profit has socialized his R&D to great efficiency gain, so the regulator socializes the process of testing and fixing. The reason that it gets done is that people want it done; there’s an itch, and it gets scratched; and because we’re talking about software, when one guy fixes it everybody gets the benefit. We take advantage of the very same multiplicative effect in zero-marginal-cost economics that the manufacturers took advantage of. We use it for a different purpose, namely to achieve social good.

Well, that’s not an unprecedented activity. That’s what we did in the first place; that’s what we’re about. We use the socializability of software knowledge in the zero-marginal-cost economy to produce social gains with very little apparent social input, because we harness the creativity and ingenuity of people and we free that to do the work. All I’m pointing to is that with tiny regulatory interactions you can harness that same process to make the environment safer, and you will get immense safety from it. But, it will be contentious, yes, my goodness it will.

[Audience member]

No, I’m saying that…

[Eben Moglen]

No, it would, you’re right, it will.

[Audience member]

No, what I’m saying is, let me point out: if you say to somebody, you say, “it’s not safe, let me fix it.” How do I know that you’re going to make it more safe, and on top of that, I cannot sue you or anybody else for [inaudible]…

[Eben Moglen]

Then don’t use the fix. That’s easy!

[Audience member]

But what I’m saying is I question the competence of anyone who comes up to me and says “hey, I’m gonna make it more safe.”

[Eben Moglen]

That’s odd, because that’s how we do it now. We say to people “I can make it safer, I can make it more secure, I can make it use less energy, I can make it work better,” and we’re right. And if we’re wrong, people don’t use the fix. That’s what we’ve already done. I understand your suspicion, I appreciate the point, I come to you on that subject with proof in hand. A quarter of a century of work.

[Audience member]

In your model, what is going to exist with quality assessment [inaudible]…

[Eben Moglen]

Well, you can do it any way you want, can’t you, because everybody participates equally in that process in the free world. Regulators would surely want to participate. I would rather imagine they would participate in a variety of ways, including by putting some of the people who successfully fix things on the technical advisory committees that are so important to the functioning of the regulatory entities.

There’s nothing to prevent us from issuing trumps to the regulators if we want to. There’s nothing, for example, that prevents us from coupling the system of ‘everybody’s got a right to inspect and everybody’s got a right to nominate patches’ with the idea that a regulatory entity produces authoritative versions of things which are safety-critical. If the German government wants to decide what the German automotive operating system consists of, which they might, given my experience, that wouldn’t be a problem for me.

The point is that the software’s free availability and everybody’s opportunity to read it, think about it, deal with it, poke it, test it, modify it, and compose patches for it, crucially advantages that national regulator. And I point to the national operating systems built on free software that occasionally are discussed by national governments, as the Russian government is discussing one now.

I don’t necessarily think at any given moment that that’s a good idea - I have views in particular contexts about it - but there would be nothing to prevent a society from doing it and I wouldn’t think it was a bad response, unless some practical detail suggested it was poorly implemented. The goal here isn’t to establish all that regulators might do, the goal here is to establish a minimum that every society ought to do because it’s a predicate to doing it right - whatever ‘doing it right’ turns out to mean.

[Audience member]

Let’s take a simple example we’re all familiar with, a domestic heating boiler, which is controlled by British standards and European Union standards, and if you design a new pump, they have to approve it before you put it on the market.

Now let’s imagine you’ve got a bit of software in our pump and it’s gone free, as you’ve just described. Surely the only way that’s going to work, in terms of the consumer, is that there will then have to be a system for checking that the fixes are safe. And you’ll simply be putting the civically-enthusiastic fixer under the same burden as a manufacturer of pumps. And therefore people will not want to go and check our boilers, because when they find a fix they won’t feel confident about the regulatory system.

[Eben Moglen]

No, not necessarily. I appreciate that that’s a possible difficulty, and if it arises it needs to be solved in one of several ways. Generally speaking, standardization doesn’t involve making it impossible for free software authors to work; we work heavily in standardized areas, in fact I should say we work heavily in heavily standardized areas. We work best, it is true, in heavily standardized areas where the standards are open, that is, where everybody has an equal right to implement, and therefore we took the area that we standardized the most in, namely the web, and we created at the W3C an extraordinarily important open standards manufacturing policy, which is now a model for open standards discussion in, among other things, government regulatory entities around the world. The Software Freedom Law Center was providing, and is currently providing, some advice to the government of India on that subject. But the relationship between standardization and free development is not one of incompatibility such that it would be wrong to say that standards-making is a good way of doing, among other things, safety regulation, or that the free world would be somehow disadvantaged by it.

The major difficulty with using standards regulation as safety regulation is that standards are by-and-large purchasable outcomes of pay-to-play organizations. That’s how standards are made around the world by-and-large, and the result is that if you expect standards-making in software to be effective at producing safety, there will be difficulty, that’s all that I would say.

The OOXML standard mess is a reasonable example of how tame standard making can cause industry pathologies. If you spend $150 million around the world in bribes, as Microsoft did, you can make anything a standard. I’m not sure that’s what you want out of the thing you want to make your safety regulations from, but I would agree that standardization is a deeply important component of how things ought to be made safe.

The problem with thinking of software failure as cured by standardization, which is the last comment I want to make, is that standards are very general things in the world of software. With respect to your boiler, it’s true that a standard can define how valves work in a way which is important to safety criticality, but software standards don’t define what will happen under multiple-failure conditions and things like that. They define how things work under normal circumstances, they define how protocols work when they are properly implemented - they don’t define what happens when tubes freeze over and arbitration software has to decide which navigational system is to be relied upon. That’s not the sort of stuff standards do. If we tried to use standards to do it, we’d have to revise how we make standards.

[Audience member]

It’s evident that at the moment most manufacturers do not release the source for embedded software. Is your impression that their current reason for doing this is because they think it’s good and some other people might take it, or because it’s bad and some other people might find that out?

[Eben Moglen]

Mostly it is the former. It’s not merely that it’s good and somebody else might take it; it’s that every standardization reduces a downstream service monopoly that they can control. For example, with respect to the diagnostic codes emitted by complex automotive systems and how to understand them, every manufacturer in the United States, and as far as I know in the world economy, tries to control downstream access to the ability to access and interpret their codes. This despite the fact that the American Society of Automotive Engineers is supposed to standardize everything of importance about automobiles, and every couple of years a guy calls me up and wants me to help him challenge the inactivity of the American Society of Automotive Engineers in requiring standardization of the diagnostic code scam in the automotive industry, as they currently standardize the pitch and diameter of every screw and bolt in every automobile.

But standards structures don’t work well for that purpose in the area of software, and they allow manufacturers to derive various downstream anti-competitive advantages from the maintenance of their own proprietary software stacks. Whether there is any social good, resulting from any increase in profitability to the manufacturer, to balance that should at least be an explorable question. In my society regulatory interventions are supposed to occur on a cost-benefit basis, and I would abide the outcome of the cost-benefit investigation of that, just as you were suggesting. My guess is that manufacturers derive substantially less value, whatever it is, than the harm caused.

[Audience member]

I’m just curious to think about where software ends, because we’ve kind of got the situation now where perhaps 20 years ago hardware was relatively simple, but now we have open software sitting on the most incredibly complicated hardware devices. I can see that chip designs themselves are basically software now—we can classify them as software—but I’m just trying to think how far we can expand such a scheme. The chemistry of chip fabrication could be a cause of problems.

[Eben Moglen]

Well, oddly enough, chip manufacturers worry a great deal about that already. We don’t experience a lot of hardware failure in the world, in that context. Hardware—computing hardware, digital processing hardware—tends to fail catastrophically if it fails at all, because manufacturers are very good at dealing with the things that would cause the kinds of failures—the multiple-condition peculiarities. We know that gamma rays can distort unshielded hardware, and even so we worry about it very little, because we add an extra bit that doesn’t cost us anything in the memory and we fix single-bit errors when they happen.
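The “extra bit” remark refers to error-correcting memory. As an illustration only (this is a textbook Hamming(7,4) code in Python, not any particular manufacturer’s ECC circuitry), here is how a handful of redundant bits let hardware locate and repair any single flipped bit.

```python
# Illustrative single-bit error correction using a classic Hamming(7,4) code:
# 4 data bits are stored with 3 parity bits, and any single flipped bit can be
# located and repaired. Shown for illustration, not as real ECC hardware.

def encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7


def correct(word):
    """Locate and flip a single erroneous bit, returning the repaired word."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error; else error position
    if syndrome:
        w[syndrome - 1] ^= 1
    return w


codeword = encode([1, 0, 1, 1])
damaged = list(codeword)
damaged[5] ^= 1                        # a single bit flips in memory
assert correct(damaged) == codeword    # the error is found and repaired
```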

So we take even physical limitations in hardware and we deal with them. Hardware engineering is orders of magnitude more sophisticated than software engineering. I’ve said this before—I’ll be quick about it now. When I went to work at the IBM Santa Teresa laboratory, in July of 1979, it was one of the largest clusters of hardware in the world. We had 330 professional programmers producing software sold by IBM (databases, programming languages, and all sorts of other stuff), and we had acres, hectares, of 3330 and 3350 disk drives. I have the spec sheet of the laboratory hardware from the day I joined, a little piece of employee bumf: 330 people, 20 7168’s; the total capacity of that laboratory was 29 gigabytes, and we thought that was big.

Okay? 32 gigabytes on a thing the size of your thumbnail that costs $129 or a terabyte hard drive that costs $79, right? Hardware builders have built machines that dwarf what we expected could be achieved when I was young, they reduced them to less than the size of your hand, they put them on a table top for $200. Software is arguably worse—surely not substantially better. The great mystery of our world, unless you understand the harm done by the proprietization of software, is why software engineering is so primitive compared to hardware engineering.

So I can’t stand here and tell you that you’re at risk from catastrophic hardware failure that we can’t test, don’t diagnose, and that manufacturers don’t find. That would be untrue. Every once in a while, as you know, guys put out chips with some significant unexpected problem in them—Intel has had to fall on its sword twice in the personal computer era because there was some error in a floating-point unit that didn’t do its job right, in one revision of one chip. But this is not a difficulty like software, because software has been engineered differently, and although we in the free world would like to say we haven’t done it, and mostly we haven’t done it, the truth is software engineering has been held back for two generations by over-proprietization and we’ve just begun to fix the problem. But this would fix the problem in a bigger way.

Thank you all.