Software Governance and Automobiles - Session 1b
Automated Software Governance and Copyleft in Cars, by Mark Shuttleworth
EBEN MOGLEN: The place, then, we’re beginning may be a little idiosyncratic to the larger question of software governance in cars, but I don’t think so idiosyncratic as to be a poor beginning place. I’m shuffling here because what people gave me to use as biography readings is not in the order of presentation, and so I wanted to be sure that I have my co-author right. The worst thing that you can do in co-authorship is to introduce your co-author wrong. It means a lifetime of bitterness, so I’m reading off the page about my old friend, Mr. Shuttleworth.
Mark founded Thawte, an internet commerce security company, in 1996 while studying finance and IT at the University of Cape Town, which had water back then. In 2000, he founded HBD, an investment company, and created the Shuttleworth Foundation to fund innovative leaders in society with a combination of fellowships and investments–exceptional people doing exceptional things.
Every time I ever pointed somebody at him, I discovered it was not an exceptional enough person doing an exceptional enough thing, so the Shuttleworth Foundation has very high standards and has, indeed, created an opportunity for a lot of quite exceptional people.
Now we come to the fact that in 2002 he flew to the International Space Station (ISS) as a member of the crew of Soyuz mission TM-34, after a year of training in Star City, Russia. I have never met a geek who wasn’t in awe of that, I have to say. Ever…
After running a campaign to promote coding, science and mathematics to aspiring astronauts and other ambitious types at schools in South Africa, he started work on Ubuntu, a small thing which did, in fact, revolutionize the usability of free software on people’s personal computers all over the world. Tiny little thing.
Today, he lives on the lovely Mallard’s Botanical Garden in the Isle of Man along with eighteen ducks, the equally lovely Claire, two black bitches, and the occasional itinerant sheep. There are no sheep coming today, I promise you. We are lions not lambs here today. But on that sound biographical basis, I give you, for the discussion of copyleft and software governance in cars, Mark Shuttleworth.
MARK SHUTTLEWORTH: Eben, thank you very much for setting the stage for this discussion and for pulling this group together. I won’t say thank you for reading my entire biography because there are some things that you write at the end of a piece thinking nobody’s ever going to read them, but there we go… [LAUGHTER].
It’s very exciting to see the list of speakers today because it’s always interesting to pull together people with widely different backgrounds but common problems because you get much more creativity that way. It’s exciting also to be at a university because for me universities are just wonderful places to kind of get out of the–off the treadmill–and into a space where creativity can happen. And it’s exciting for me to be meeting people who have done great things in both commerce and free software because I think the combination of those things is very powerful and very interesting.
We all have lucky breaks, and for me, possibly the biggest lucky break was finding what we now call free and open source software (FOSS) at just the right time. I was a student in a space like this very interested in things that were completely inaccessible from the tip of Africa. I was very interested in the internet, which had barely reached the tip of Africa. I was very interested in cryptography, which was something that very much happened in an international arena. And I was trying to kind of combine these interests and not really making much progress until somebody handed me a stack of disks, and after quite a lot of wading through all of that I found the raw materials to sort of explore those ideas and, together with a couple of other lucky breaks, that worked out really quite well.
And so, I find myself in the position to sort of explore interests of all sorts, and after some consideration I came to the view that probably the most impactful way to spend the next couple of decades would be on enabling people to take their idiosyncratic interests and explore them faster, and it struck me that the raw materials that were available in free software were a basis for innovation unlike anything that we had ever seen before. But there was a ton of friction, friction in the consumption of that, in the use of that, and that friction was essentially reducing the number of people who could innovate, right?
And so that’s really how Ubuntu came to be. The idea was very, very simple: how do we, essentially, connect people who are innovating in different ways and in different places so that goodness can happen faster? And that’s essentially how I came to be interested in software delivery. My interest in being here is not because I have any passion for cars, but simply from the realization that cars are going to be one of the places that software has to be delivered.
So in this journey, you know, I started by pulling together a few–I thought–inspired developers from Debian and other distributions and trying to solve this problem of friction, right–getting rid of all the things that made it difficult to get to those raw materials effectively on which ideas could be explored, and that process was really the founding of Ubuntu, the beginning of Ubuntu. And what it turned into was a platform that enabled people to go and break the rules or explore boundaries or push back boundaries really, really quickly, and looking back it’s kind of extraordinary to me today that every single one of these institutions, and this is a very small subset, has essentially moved to using Ubuntu as the platform on which they create their future. It’s very exciting to me because everybody who graduates from one of those businesses then takes that idea onward, right? In a competitive world the ability to innovate faster and to move faster is a competitive advantage. And so, in a Darwinian environment, it’s a very successful meme. So, when I look at this picture, what I know is that this idea of spreading free software to spread innovation, to accelerate innovation, is working.
But along the way, one also realizes that as successful as something is, it always has limitations and problems, and we started to become aware of problems in the very, very nature of Linux–in the very nature of Ubuntu, problems that really only showed up when you started to look at something at scale. Remember, when we started we were very focused on the developer, and a developer is in complete control of their environment, right? In fact, a developer wants a malleable environment, an environment where they can just pull whatever they need, re-arrange it, turn it upside down, and connect the dots in completely new ways. Right? And so that’s what we built. We built an environment that is enormously productive for developers, and then you extend that out to systems which are well-administered, and so, again, there’s power and value in this flexibility and malleability.
So, a traditional system, essentially, consists of all the software that comes from somewhere else, it has provenance associated with it, and then you mix it all together, right–on your laptop, on your workstation, on your VM–and you make whatever you need, which is incredibly, incredibly powerful and efficient and fast.
It’s also messy. And so what we started to see were the consequences of the fact that Linux enables the mixing of software. And so you don’t really know what you’ve got when you’re at a system level… And when you have hundreds of thousands of systems–Netflix, north of one hundred thousand VMs on any given day, that’s hundreds of thousands of systems–that underlying uncertainty of what you actually have becomes friction.
So, we started out trying to solve friction for developers and we found ourselves increasingly engaged by people to try to help them solve questions of friction in the system itself. Right? If I’ve got hundreds of thousands of systems, and now I have questions of friction when I’m upgrading and changing, what have I actually got? All these things interact with each other. All these packages can write anywhere.
We also had another concern, which is that people said, “What can a bad piece of software do?” Right? Because, just like any package that comes from a trusted source can go anywhere, so can a package that, perhaps, shouldn’t be trusted. So that got us thinking about the question of trust. What does it really mean to trust software? Then we realized that we generally have–certainly in the community–very naive ideas about what it means to trust software. We’ve done an enormous amount of work in Ubuntu and in Debian and in other upstream projects to sort of establish provenance of code all the way through build processes down onto the disk, but all of that is lost when effectively you start mixing and spreading software around.
It’s very hard to know what to trust. There are ways–there have been ways since the early 2000s–to write complicated rules about exactly what any piece of software can do, but they’re so arcane and so complicated that in practice nobody uses them, and they’re also ultimately limited by the fact that because anything can legitimately go anywhere or have gone anywhere, it’s very hard to really write rules that you can count on.
The insight at the time that I recall was that it didn’t matter so much whether or not you trusted the software, the question was what you trusted it with. We’re all starting to mix software from lots of different places on our systems, phones, laptops, servers, VMs–that’s not going to change. This idea that all the software will come through one funnel and get curated and managed, that’s old. Now software is going to come from lots and lots of different places, and it’s not really a reasonable option as a developer or as a business to say, “I’m not going to trust anything that doesn’t come from one place,” because from a Darwinian point of view you’ll be costing yourself access to innovation.
So, you have to start asking a different question, and it’s, “What am I going to trust that software with?” And so that led us to–being in the software delivery game–a new approach to essentially putting software on those systems. Which is to say, let’s put it there, but put it in a box. And instead of mixing it in and then trying to write complicated rules about how it’s being mixed in, let’s just not mix it in. Let’s just keep software that comes from other places in a box because then we know what we’re trusting it with.
So, that got us started thinking about new ways to effectively put boundaries around what a piece of software on a system is doing, and so, as things do, they lead to one another. Very, very quickly Ubuntu started to spread in all of these mushy, elastic environments. More than half of the private clouds that are built with Linux are built with Ubuntu. Nearly two-thirds of all the workloads on the public cloud are Ubuntu. And that was all going swimmingly–we were starting to sort of try to solve these problems–then we realized the next thing was going to ratchet these problems up by one or two orders of magnitude, and that was the internet of things.
We saw this in advance–we get this early warning because we know what developers are doing with Ubuntu, what they’re asking us to do with Ubuntu–and we saw this very clear signal that effectively the next wave of innovation was going to be out at the edge, on devices. And there are all sorts of interesting things there, but we realized that the problems we were dealing with at the level of hundreds of thousands of VMs were going to be significantly compounded if you got up to a few billion devices, and the approaches that we were trying to take were going to struggle to scale. There’s a cost associated with a server and it’s considered a reasonable cost. In other words, having people devoted to servers, at some ratio of people to servers, is actually a cost on every server if you want to think about it that way, and that cost simply wouldn’t work out at the edge with cameras and GSM towers and cheap devices.
So, for all of these problems we’re going to have to find fundamentally better technology and fundamentally better ways to attack them. And so we said, well, instead of just having the system built the old way with mixed up files and external applications confined, what if we tried to build the entire system the same way?
And so, we eventually came to the view that we needed to publish two versions of Ubuntu at the same time with exactly the same content, just delivered differently. So in this latter edition of Ubuntu, effectively, instead of having all of those files that make up the root file system and the kernel and everything, we end up with very, very few files. We end up with one big zipfile that’s the kernel. We end up with another very big zipfile that’s the entire operating system, effectively, the base operating system. And, then, we end up with those applications in their boxes.
In other words, we take this idea of bounding the trust of some software and having that software effectively always in a protected, signed format, and extend that to the entire device–tricky, as you can imagine.
There’s a lot of illusion. There’s a lot of magic in the Linux kernel now–a lot of the tools to create magic, to create illusions effectively so that for that software, the fact that they’re in this entirely different packaging format is invisible. To an application written to run on the left, it’s not that hard, actually, to have it work just as well on the right because we can create the illusion to it that effectively everything is as it was before. But from an ownership and operations point of view, we essentially bring a whole lot of new characteristics to the platform.
The first is that we never, ever lose track of provenance. So, just like we can get hashes on git commits, which tell us who wrote a line of code, we can then take that through to various build systems that give us certainty of the relationship between where the code came from and where the binaries came from, right? What we used to lose was a lot of that provenance when we unpacked the packages and spread those files around, but now we’re never unpacking those packages. Essentially, all the files associated with an application or all the files associated with an operating system are one zipfile, and that one zipfile never changes, so we can sign it and we can check the signature when we install it and the next day. We always know where everything came from.
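The provenance property described here can be sketched as a small toy model: the whole application image is one immutable blob, so one recorded digest covers everything, at install time and forever after. The names and data are invented for illustration; this is not snapd’s actual code.

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content address of an immutable, never-unpacked package image."""
    return hashlib.sha256(artifact).hexdigest()

# At build time the publisher records the digest of the whole image.
image = b"entire application filesystem as one read-only blob"
published = {"name": "example-app", "revision": 42, "sha256": digest(image)}

# Because the image is never unpacked or mutated, the same check holds
# at install time and on any later day.
def verify(image: bytes, assertion: dict) -> bool:
    return digest(image) == assertion["sha256"]

assert verify(image, published)             # provenance intact
assert not verify(image + b"x", published)  # any change is detected
```

The point of the sketch is that one digest over one unchanging file replaces thousands of per-file checks that traditional, unpacked packaging would require.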
We can essentially allow people to share information about software knowing that they’re sharing that information about exactly the same software. So if somebody’s tested the software, they can say so. And everybody can know that. If somebody has proven that two pieces of software work together, they can say so and everybody knows that, and they know that forever. It isn’t a hint. It’s a fact. It’s a mathematical binding effectively. We get this deep sense of rigor around software, the integrity of the software can be validated and permissions can be asserted in a way that is mathematically consistent or mathematically attested.
So, signatures and assertions… Assertions are the term for those mathematical statements, effectively–digitally signed documents: GPG- or OpenPGP-signed documents.
We also get some operational primitives. So, because these pieces of software are never unpacked, we can keep multiple versions of them very, very cleanly because we’re not spreading files around the system and then getting a new version and starting to spread the files. When we do that with traditional packaging, we’re always in this slightly dangerous state where we might have put down only some of the new files when the power goes out, and now we’re not sure exactly what we’ve got.
In this world, you know you’re always running this version or you’re running that version and you’re running that version in its entirety. Perhaps more importantly, you can go back. So, we can essentially keep multiple versions of software on a device and choose, “Are we going to use this version or that version?” There is a very precise sense of going forward and going back, which is part of that attack on the underlying operational complexity and cost of having billions of devices, right? It’s expensive to administer software because things go wrong. If we can attack the places where things go wrong, then we can dramatically reduce the cost of administering software, and almost everything now is going to be ultimately tied to software and carry that cost.
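The forward-and-back versioning described above can be sketched as a toy model: every revision is kept as a complete, immutable image, so switching versions is an atomic pointer move rather than a file-by-file rewrite that a power cut could leave half-done. Class and method names are illustrative, not a real API.

```python
class VersionedApp:
    """Toy model of whole-image versioning with precise rollback."""

    def __init__(self, name: str, image: str):
        self.name = name
        self.revisions = [image]   # complete, immutable images
        self.current = 0

    def refresh(self, image: str) -> None:
        """Install a new revision; old revisions stay intact."""
        self.revisions.append(image)
        self.current = len(self.revisions) - 1   # atomic pointer flip

    def revert(self) -> None:
        """Go back one revision, in its entirety."""
        if self.current > 0:
            self.current -= 1

    def running(self) -> str:
        return self.revisions[self.current]

app = VersionedApp("media-player", "rev1")
app.refresh("rev2")
assert app.running() == "rev2"
app.revert()                      # a precise sense of going back
assert app.running() == "rev1"
```

Because no intermediate, mixed state can exist, "which version am I running?" always has exactly one answer.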
We can express the relationship between pieces of software. So, if somebody says, “Look, I’ve got a piece of software that needs a database,” we can allow them to say that they’ve tested it with that version of the database, and so we can have this very precise sense of what should work and what shouldn’t and when somebody is legally bound–when, for example, somebody’s warranties apply and when they don’t. They apply when they’ve made a commitment, and that commitment isn’t fuzzy anymore. That commitment is really, really precise and really easy to manage effectively.
So in these sorts of systems, you observe this great behavior where an application can be installed, a database can be installed, an update for the database can be available but it won’t automatically be applied because the application that’s depending on that database has said, “Look, I haven’t yet tested with that database.” So, there’s an update available. There’s a better version of the database available, but because the application hasn’t yet said we’re good with that, it won’t automatically update–only when the application says that we’ve tested that will it update.
Of course, as a user, you have the right to say, “No, no–I really need that update. I don’t believe that testing is going to take place or I can’t wait for it. I’ll take the risk or I’ll take the risk on this device.” So we get very nice frameworks for different entities–the publishers of software, the manufacturers of the device, the owners of the device–to express what they want and to manage the inherent tensions or rights in their relationship.
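The gating behavior in the last two paragraphs can be sketched as a toy policy check: an update is applied automatically only once the depending application has asserted that it tested against that version, while the owner retains an explicit override. All names and version numbers are invented for illustration.

```python
# Signed "tested-with" statements published by the application vendor
# (illustrative data; in a real system these would be verified assertions).
tested = {("example-app", "example-db", "10.6")}

def may_update(app: str, dep: str, new_version: str,
               owner_override: bool = False) -> bool:
    """An update applies when tested, or when the owner accepts the risk."""
    return owner_override or (app, dep, new_version) in tested

# Tested version: applies automatically.
assert may_update("example-app", "example-db", "10.6")
# Newer version is available but held back until tested.
assert not may_update("example-app", "example-db", "10.7")
# The owner can say, "I'll take the risk on this device."
assert may_update("example-app", "example-db", "10.7", owner_override=True)
```

The interesting part is who holds which lever: the publisher states what was tested, and the owner decides whether to wait for that statement or to override it.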
We also get really interesting primitives for integration. Remember, none of these pieces of software can simplistically read or write to the same file, by default. So, the integration between pieces of software becomes very, very interesting.
First, there is no integration by default. Nothing can simplistically assume that it can poke at something else. The only way to integrate is again through those digitally signed documents–by saying, “It is necessary for me to be able to talk to something else in this way and for the system to agree”–the system being a representation of what the manufacturer thinks and potentially what the owner of that system thinks. We can mediate that! So, these lines of communication aren’t arbitrary; they have to be shaped and designed and agreed. They can also be mediated. In other words, two pieces of software may think that they’re talking directly to each other but actually may be talking through something which asserts policy on the conversation. Yes, these two things are allowed to talk. Yes, they will think they are talking directly to each other, but, in fact, there can be something else, working to a policy that says, “If they start talking about things which are outside of policy, we can stop their line of communication.”
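That mediation idea can be sketched as a toy relay: the two endpoints think they have a direct line, but a mediator enforces an agreed policy and can cut the line when the conversation leaves its scope. The topics and class names are invented for illustration.

```python
class Mediator:
    """Toy policy mediator sitting invisibly on a communication line."""

    def __init__(self, allowed_topics: set):
        self.allowed = allowed_topics
        self.open = True

    def relay(self, message: dict):
        """Forward a message if it is within policy; else close the line."""
        if message["topic"] not in self.allowed:
            self.open = False          # out-of-policy: stop the conversation
        return message if self.open else None

# The media player and the display agreed on playback topics only.
line = Mediator(allowed_topics={"playback", "volume"})
assert line.relay({"topic": "volume", "value": 7}) is not None
assert line.relay({"topic": "engine-control"}) is None   # blocked
assert line.relay({"topic": "playback"}) is None         # line stays closed
```

Neither endpoint needs to know the mediator exists; the policy lives outside both of them, where the manufacturer or owner can govern it.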
And these primitives are, essentially, profoundly new ways of thinking about software delivery. The technology underneath all of this is the same sort of capabilities that power Docker, that have powered digital signatures for a long time. This is, as with all kinds of work, essentially a small step being taken on top of the work of many many people. Bringing it all together, specifically focused on software governance and software delivery is, for us, profoundly interesting.
In the end, I think what we are entering is a time where we can have really rigorous and reasoned–a really rigorous and reasoned–view of software security and governance, and to that I would add trust.
So, what about GPLv3?
Now, Eben and I have had the opportunity to discuss, debate, and work together on a number of different issues, and I approached Eben because I really wanted to understand the GPLv3 because increasingly I was getting anxious questions from partners and customers about the GPLv3. And the question was always basically the same, which is, “Can we avoid it, or what?”
Now, it’s very, very clear to me that the freedoms inherent in free software are the real driver of the innovation that ultimately I’ve benefited from and others have benefited from. I think it’s important that there’s a diverse set of approaches to open source. But if you had asked me where I stand, I’d say I’m always going to bet on the meme that ultimately drives innovation furthest. There’s this question about, you know, what happens when an unstoppable force meets an immovable object. I think quite clearly copyleft, and with it GPLv3, is an unstoppable force. I watch where it’s going, who’s adopting it, and the dependencies between those. In my view, it’s an unstoppable force.
The apparently immovable object is an industrial view that it’s somehow dangerous and unacceptable.
But, as an investor and as a betting person, I would always bet on the company or the team that has access to the bigger pool of innovation. And so in these conversations, clearly, my job is to work with people to get them to where they need to be. It’s perfectly straightforward for us to deliver capabilities that exclude GPLv3 code to customers and partners who wish to exclude GPLv3 code. It’s not hard. That’s what they want. I can support and facilitate that, and I can do it with rigorous governance and trust.
But the reason I’m really interested in this conversation is because I think that is the losing strategy for those parties because they will be excluding themselves from the mainstream track of innovation. They’ll face competition from, perhaps, more forward-looking institutions who will ride the full wave of that innovation.
And so that’s where Eben and I started to talk because the question for me was really, can we use some of these new primitives to manage the rights and responsibilities that are created in these contracts, in an elegant way–in a way that takes an unstoppable force meeting an immovable object and creates space effectively, not for compromise but for productive forward motion.
And I think that–I hope that’s a useful grounding. I hope that these are useful primitives, and I hope that, with the diverse opinions here today, we can effectively find ways to bridge perspectives, to bridge requirements. Everybody has legitimate requirements–absolutely. People have different views, but what’s always interesting to me is that the successful people in companies are the ones who find a way to move forward.
I think that’s it. So ultimately while our focus has been on the bits, the mechanisms here equally speak to the rights. In our thinking it’s very clear that there are multiple parties working in a device. Think of that projector–there’s a manufacturer, there may be applications from vendors, the university has rights. Managing what the software can do and what we trust the software with is, in a way, a different way of thinking about managing the rights of all of those parties.
Eben, thank you for the opportunity and for pulling this group together.
MOGLEN: Thank you, Mark. I think that shows what the source of the thinking is quite well. If you re-architect a distribution of software so that you have effectively isolated both the code that runs in read-only condition and you have given yourself the opportunity to update and roll-back in a controlled fashion without breaking dependencies, then you have the possibility of an agent within each computer in a device that can govern the software–can report exactly what is running, can report exactly when it was installed and what it replaced, and can, under appropriate circumstances, undo some portion of any existing install, knowing that breakage cannot occur. You have used digital signatures to assert in a testable fashion all of those pieces, and you have architected the operating system so that the inter-process communications and communications with other devices on a network are all controlled at a high level of granularity, program by program, function by function. And once again, you have stated all of that in documents which are simple to read but which are impossible to forge because they are digitally signed by somebody with authority to determine, at low level, beyond even role, in the sort of security-enhanced Linux sense, interaction by interaction with devices and other processes on a network which are permitted and which are not.
This now gives, as Mark says, a set of primitives for using copyleft–even copyleft, which requires that the user be allowed to request installation information to change the software for herself–in a very controlled setting.
So as you see in the paper that we wrote about this, suppose we imagine ourselves in a vehicle in which the OEM’s view about how the media player should work is not the parent’s view of how the media player should work, a phenomenon I think is likely to occur in cars with video screens and honking devices in them and so on… The manufacturer’s version of the media player may be one which an individual owner of an automobile wishes to change, she’s good at that kind of stuff–maybe that’s even what she works on. She’d like to be able to modify the media player in her car. The media player as distributed by the OEM may actually have some integration with parts of the vehicle operation, which it would not be a very good idea to permit another version of the software to capitalize upon. Safety might be compromised if, for example, the media player’s relationship to control electronics is hijacked.
But it is possible for the manufacturer to respond under GPLv3 to a request for installation information in the following form: here is a set of assertions, signed by us, for the vehicle with VIN such and such, and the software governance agent will install a modified version if you provide this assertion for it, and it will do everything that can be done by the media player in this automobile except for one or two things which our version can do but which we believe, if done by a modified version, could compromise the operation of the network or the safety of the vehicle–which GPLv3 makes a ground for a manufacturer’s exception to installation information. Modified versions must work on the network in the same way that unmodified versions do, unless there is a problem of compromising the network or the safety of the device. So, now the manufacturer says, “Yeah, I’ll let you modify that media player to do pretty much anything that media player can do, except one or two things that I think are safety-involving, and here’s the installation information that will allow you to do that in your vehicle, and I’m publishing that I’ve given a modification exception for this software in this VIN so that everybody can be sure that any modification to your vehicle’s software which occurred was made by you and not by some hostile third party.”
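The signed, per-vehicle permission described here can be sketched as a toy data structure: a grant scoped to one VIN, with a short list of safety-sensitive functions carved out. The VIN, field names, and excluded functions are all invented for illustration; a real grant would itself be a signed assertion.

```python
# Toy model of a per-vehicle installation-information grant from the OEM.
grant = {
    "vin": "1HGCM82633A004352",                  # hypothetical VIN
    "component": "media-player",
    "permitted": True,
    "excluded_functions": {"control-bus-write", "network-pairing"},
}

def may_invoke(grant: dict, vin: str, function: str) -> bool:
    """A modified component may do anything the grant covers for this VIN,
    except the explicitly excluded safety-sensitive functions."""
    return (grant["permitted"]
            and vin == grant["vin"]
            and function not in grant["excluded_functions"])

assert may_invoke(grant, "1HGCM82633A004352", "play")
assert not may_invoke(grant, "1HGCM82633A004352", "control-bus-write")
assert not may_invoke(grant, "OTHERVIN000000000", "play")   # wrong vehicle
```

Publishing such a grant is what lets everyone verify that a given vehicle's modifications were authorized for its owner and no one else.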
All of this, then, can be operated even more sensitively than that in the sense that this is just one more version being managed by those roll-back primitives. It’s possible for the vehicle to work in such a way that when it pulls onto a smart road or is in some other technically demanding environment, user modifications could be temporarily rolled back and then automatically re-installed when that condition is no longer present.
It becomes possible, in other words, to govern the software in the vehicle with high levels of accountability and extraordinarily high levels of granularity, thus allowing a manufacturer or any other party who controls the particular issuance of assertions for the software in the vehicle to determine exactly how users’ rights are balanced against other interests–regulatory interests, security interests, legal interests of the manufacturer giving permission, and so forth.
If even the most determined, activist free software license–the one about which everybody wants to know, “Can we avoid this?” because… rights for users? I mean… really?–if everybody can actually come to grips with the possibility, “Yes, that will work,” right? We have technical machinery at the level at which we package and distribute our software which gives us, ‘for free’ as it were, all the governance aspects that we would need in order to make that work, so that we both can tell what the state of the modified software is in the vehicle, in every computer, at every time. We can pull it out in order to do warranty service, or to prove things as we used to do on a 360 mainframe, right? When IBM’s S.E. showed up and you said, “Man, I got a big problem,” and they said, “Okay, pull your modifications and prove that it works on our software.”
That was easy for S.E.’s to demand of big companies when I was a kid programmer working on mainframe software. Nobody ever thought that we could do that with laser printers and home routers and stuff like that, but we now can, right? What Mark is talking about is a way of governing software at a granular enough level that we can actually say, okay, this software should be rolled back to base state to prove what it is that’s going on here, and then we can re-install user-modified software bit by bit, function by function, place by place, context by context, and the manufacturer issues a document when it permits modification which says exactly what modification it is permitting, in the sense that it is allowing the new code to do every bit as much as the old code did except for whatever is particularly sensitive to safety or the operation of the vehicle.
This, I think, is what we have. Mark has explained why it is that we have it–that is, how that is part of a much larger process of trying to figure out how to deliver software in the twenty-first century. And we have it for a common project, which I think we both care deeply about for the reasons that he has offered.
If we are to keep user innovation in this world of automotive devices, we need to be able to allow user modification. I am concerned that the alternative is a world in which we do not permit users, that is, people, to have any rights in automobiles. As we point out in the paper, there is a broad public policy coalition now building around the world under the rubric of shared mobility principles, sharedmobilityprinciples.org, a fine set of public policy recommendations about how we ought to think about shared mobility in cities of the future in which, if you will look, a whole bunch of world governments and a whole bunch of commercial organizations are signed up for point ten: in dense urban cores around the world, no private party should be allowed to own a self-driving vehicle, only fleets.
Why, you say, would one bar private ownership of something? Well, Zipcar’s founder is one of the major movers in sharedmobilityprinciples.org, and not for oligopolization or anti-competitive reasons. The answer is: because software governance is going to be such a nightmare, and we can’t trust people to maintain the software in their cars correctly–only Uber will know how to maintain the software correctly.
Now, this is why, from my point of view, whether you’re interested in cars or not, you face one of these questions about the fate of users’ rights in the twenty-first century that contains an awful lot of political importance, at least for people like me. We are now, actually, confronting the question: can we achieve important social goals–safety, limitation of liability, understanding of the unintended consequences of technology–without extinguishing all the freedom that the car created in the first place for human beings?
Are we really going to turn this into an oligopolized service, conducted over appliances you can’t understand, can’t mess with, can’t change? That fate is way worse than the removal of the carburetor and its replacement by fuel injection, and the reduction of the ability to tinker with the engine, in order to achieve cleaner air. This is a much more profound and much more complicated trade-off.
What I understand Mark and his colleagues at Canonical to have done in the evolution of snap packaging is to have given us, as he says, the primitives for the kind of governance that can achieve these social goals, in a period of rapid technological transformation in which, as usual, everybody is moving very fast and breaking a lot of things without actually understanding where the real dangers, technical or social, are.
That was why this work seemed to us to be so important and why it seemed to me to be worth gathering people around to think about it.
The most important kinds of people to think about it are, well, the people who think about making cars and putting free software in them, which is why the next presentation has to be by Daniel Patnaik–that’s how we have to have this conversation.