Software Governance and Automobiles - Session 3b

Systems Engineering and the Sins of Software, by Nicholas McGuire.

EBEN MOGLEN: So then the next speaker is Mr. Nicholas McGuire, who, I read in the biography, says he began fooling around with Linux in 1995, which means that was about 0.95 or so, about the same time I climbed on. Zero-nine-nine-one-hundred-twelve. Version numbers are very important in the way we think about software, aren’t they?

So Nicholas has worked in many good places with that Linux kernel, from the Technical University of Vienna to New Mexico, where he worked on RTLinux, a wonderful product with, what shall I say, an ethically challenged leadership, with whom Richard Stallman and I had many interesting conversations in the last part of the 20th century. After RTLinux became a proprietary activity to the extent it legally could, there was an RTLinux GPL variant, which Nicholas was primarily responsible for, and he has worked in China since, what, the turn of the century, roughly?

NICHOLAS MCGUIRE: 2004.

MOGLEN: 2004. Where he has been involved in systems engineering education around software and where I take it about six years ago autonomous vehicle software began to trouble his dreams and disturb his sleep. So why the sins of software engineering can only be repaired by systems engineering in the automotive sector is the subject of his talk. I give you Nicholas McGuire.

MCGUIRE: Thank you. I’ve got to give out a warning. My slides are generally quite ugly. I got a very friendly compliment that my slides reminded a German audience of the 1970s. I thought, well, that’s not that bad until he said in eastern Germany. [Laughing]

So, yes, I want to focus on the system engineering problem that we have, and will start with the basics. I’ll try and go fast.

So, the first problem is: why do we actually need regulations? I mean, generally we talk about regulations with a quite negative connotation of, “Oh, they’re prohibiting innovation, they’re irritating everybody,” and so on. But really, it’s about consolidating the solution space, or actually the problem space first, and then getting that context distributed, so that everybody actually has the same context to talk about. That’s the fundamental thing that actually enables research: having a sort of consolidated starting expectation. And if this consolidation doesn’t happen, then we also cannot do reasonable peer review, because we don’t have a common understanding of the solution space.

So the goal of regulation really is to get wide acceptance of technology that is developing. It’s of course not that regulations are there first and then technology develops; it’s a highly iterative thing. And if we look at how regulation came to be over the last few decades, it has not been a homogeneous process, but essentially what it’s trying to establish is this common understanding, so that we actually can start the process of innovation. So I think that regulation is an important part of actually achieving acceptance.

That brings us to acceptance. We’ve been discussing these numbers. These numbers might not be quite correct: it is around 1.2 million people that are killed per year in automobile accidents, and somewhere between zero and a thousand per year in the last few years in air traffic. But that’s what gets the media headlines, and what that really means is that we have a totally different acceptance of human casualties. Now some of this is due to different socializations or the different states that societies are in. We like to point fingers at a society and say, “Oh, how dare they do that!” But look at the development of the numbers, and you will see that societies that are still in a developing phase basically just have a time lag with respect to the human toll that they are accepting from technology.

And what really decides what we accept is controllability, and why this is going to be important in autonomous vehicles is because of what we’re doing now: the automotive industry is using ISO 26262 for some reason, don’t ask me why, it’s a really crappy standard. It’s not suitable for autonomous vehicles, and it’s fundamentally wrong because it’s assuming that there’s a driver in place, and if there’s a driver in place, we can allocate responsibility, and then we accept 1.2 million people being killed.

But as soon as it’s autonomous, we’re going to look at the numbers for rail and air traffic, where we don’t have the autonomy, or the decision, on our side, or at least not the perception that “we are deciding our fate,” and then we expect numbers that are lower by orders of magnitude. So that’s sort of one of these fundamental problems.

It’s really about controllability. That’s the knob that we are turning in autonomous vehicles, and that’s going to change the acceptance completely. So if we apply the wrong standard, then I don’t think we will be able to create versions that are acceptable to society.

So what is actually in a standard? Standards are actually quite a simple thing. As I said before, standards are a representation of regulation, or the sort of codification of regulation. Now I’m talking more about technical standards than legal standards here. And what we’re really doing is developing a state of the art. We do that in a quite chaotic way sometimes. Then this consolidates through research, through exchange of ideas. And once we have the state of the art, then we sit down and we codify it in different forms of functional standards, safety standards, communications standards, or whatever. We put together processes, metrics, techniques, and measures that we’re going to apply, and we put thresholds on them to classify something as suitable for purpose at some classification level. And that’s really what we’re trying to do when we create a standard.

But if we look at the state of automotive systems, what is the state of the art of, well, automotive systems or autonomous vehicles in general? We don’t have a consolidated state of the art, and therefore, by definition, we don’t have a suitable standard yet.

And for that matter, we don’t have a state of the art for verification of anything related to artificial intelligence or machine learning. And as long as we don’t have anything for verification, there is really nothing you can do with these things. I mean, you can enjoy them in computer games, and you can use them for toys, and you might be able to use them for mobile applications or for spell checking, which is one of the few applications where I’ve used it, or for spam filters. But you can’t use them for things on which we’re going to stake people’s lives.

And it’s not a matter of the regulation prohibiting technology from evolving. The problem is that for the technologies we have been prototyping, that are sort of starting to emerge, we don’t have a common understanding. It’s not consolidated yet. And you can see quite nicely in the scientific literature that there is no such consolidated basis yet.

Which gets me right to a little bit of ISO 26262 bashing, which I enjoy doing, because as I said, it’s a fundamentally crappy standard. It violated the very basic concepts of functional safety at the time. It tried to go back to what I, or the safety community, call “table-driven safety”: you say, well, we just comply with X, Y, Z, and then we’re safe. But especially in autonomous vehicles, if I prove that my matrix multiplication algorithm is implemented correctly, that proves absolutely nothing about the correctness of your trajectory calculation. In simple systems we have this notion of a strong correlation between correctness of code and behavior of the system. Now it’s not entirely true even in simple or low-complexity systems, but in low-complexity systems it’s reasonably close to reality, so we basically accept the discrepancy and say, okay, from time to time there’s a bit flip that wasn’t caught, wasn’t handled. Some car might hit the wall, but as long as it’s rare enough and as long as a drunk driver is the primary source of accidents, we’ll accept that.

And that’s exactly what… If we look at accident statistics, it’s actually quite rare that technical reasons cause accidents, let alone software reasons. And when it is a technical reason, the attention it gets in public changes dramatically. So there was that pedal issue (I forgot which company it was with, a Japanese car manufacturer. Toyota?). That killed on the order of 50 people, and there was a big, big story. In the same time that those 50 people got killed on the road, I am, without having the numbers, very sure that more people got killed in Toyotas without the pedal problem because they were drunk, because somebody else was violating traffic rules, or because they were driving in a way that wasn’t adequate to the weather situation. And that’s why we accept the one, and we don’t accept the other.

So this is why I think this standard is fundamentally the wrong standard for this, aside from the technical issues I noted: it is only suitable for low-complexity systems, and we’re not talking about low-complexity systems in this case at all.

So the problem really is that if we have no standard, then we can build systems and let them drive around the streets until they hit people, but we can’t verify whether they’re suitable or not. And using the wrong standard doesn’t really make the thing any better. That’s exactly what the automotive industry, or the industry for autonomous systems at large, is doing. We see the same thing happening in robotics. We see the same thing happening in medical devices; they’re using medical device standards like IEC 62304, which was never intended for this. And the authors of ISO 26262 also were not thinking about the autonomous systems that we’re talking about. They were thinking about microcontrollers with an RTOS and a restricted API on them. If anybody knows what the OSEK operating system looks like: if you’re forced to write a “hello world” on it, you will break your fingers. And that’s the type of operating system this standard was actually intended for.

So what is the state of the art of autonomous systems or autonomous vehicles? That’s really the problem, as I said before: we really don’t know. We have a lot of fundamental questions that are not answered yet. And a lot of these unanswered questions have to do with the fact that we are doing a context switch here. This is not an evolution. The automotive industry, like a lot of other industries, has been taking a very evolutionary approach. We get faster, bigger cars, stronger engines, and we add fuel efficiency measures, but we don’t fundamentally change the design. Even when switching from gas engines to electric engines, generally they weren’t dramatically changing the design of the systems.

The impact of that was that the fault pattern stayed the same, and that’s why applying well-tested approaches worked, even if it wasn’t ISO 26262, which probably didn’t exist at that time (it was issued in 2009 or 2010). These changes could be sort of subsumed under an evolutionary change with respect to the engineers working on this.

What we’re looking at now is not an evolutionary step. We’re starting to use floating point, which was sort of a no-go in safety-critical systems for a very long time. Ariane 5 is a beautiful example of floating-point problems in safety-critical systems: it was a beautiful firework for two billion dollars. Luckily it didn’t kill anybody, so it was worth it.

It seems like a trivial task, but really we don’t have the capacity to cover a lot of the complexity that is involved in the behavior of these systems, because our problem is that accidents are not the general case; they are the corner cases. But the general case is what engineering, what humans, focus on. We focus on managing the general case correctly.

Then look at the legal environment that we’re trying to operate these vehicles in. We already heard a lot about this today anyway: who owns the data, what the security problems are. We don’t even have a reasonably specified behavior of a car on the road. There is no legally documented minimum distance between two cars driving side by side. So what are you going to use as a specification? And then you are basically generating unknown or undefined behavior of two tons bolting down the road with a few hundred, not gallons, liters of highly explosive, combustible fuel in it.

So, going back to what classes of safety standards, or safety-related systems, we actually have that regulation is trying to cover. Normatively, IEC 61508, which is sort of the top of the stack of basic safety standards for functional safety and is focused on procedural issues, defines low complexity. Low complexity is quite easy to define: all faults of the system are known, and the behavior under fault conditions is understood or known.

Now, for quite simple systems, we can do that. And even for simple systems, this is not easy. If you look at the New York subway system, it’s still using an interlocking system based exclusively on Boolean algebra, because that’s sort of the level of complexity that they feel can be managed. It’s painful to do any updates in that interlocking system, but it’s really focused on keeping it under control. This ideal of 100 percent safety that we want to have is related to the complexity that we can manage.

We have been moving away from that because there simply were market needs to extend it, to introduce this type B system. Now, 61508 actually only normatively defines type A systems of low complexity, and by inference you can conclude what a type B system would look like. IEC 62061 for machinery actually formally specifies complex systems; that’s where the definition here comes from. It says, well, we don’t have a complete understanding of the faults, and we don’t have complete or well-defined behavior under fault conditions.

So you might say, well, okay, that’s getting at least closer to what we’re trying to do in autonomous systems, but what I think we are really trying to do in autonomous systems is what I’m going to call a Type C system. It’s a system where we can’t even specify what a fault is. And that’s the problem we have, because we know what a parity error is, and we can detect a parity error, and we can fix a parity error; but if you have a parity error in a video input, you don’t really care. We can have all sorts of floating-point rounding problems, and we could devise all sorts of nice mitigations for these low-level problems, but we actually don’t know whether these bit flips or whatever would have a significant impact on the behavior of an autonomous vehicle anyway.

And if it does have an impact, we would have a very hard time actually detecting that this was the cause, because, if you look at any neural network, you have beautiful, large matrices, and we don’t understand what the values in these matrices actually mean. You can take a simple neural network model in Keras, some default model that you download from the Internet, you can dump the weight matrices, and you can start flipping bits in there, and it takes a long time until you notice any difference in the behavior.
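A rough sketch of that kind of experiment, assuming TensorFlow/Keras and NumPy are available; MobileNetV2 is just an arbitrary stand-in for "some default model downloaded from the Internet," and its weight tensors are assumed to be float32:

```python
# Sketch: flip one random bit in one weight of a trained Keras model and
# check whether the output changes. Not the speaker's exact setup, only an
# illustration of the point made above.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

weights = model.get_weights()               # copies of every weight tensor
layer = np.random.randint(len(weights))     # pick one tensor at random
flat = weights[layer].ravel()
idx = np.random.randint(flat.size)          # pick one scalar weight in it

# Flip a single random bit in the float32 representation of that weight.
bits = flat[idx:idx + 1].view(np.uint32)
bits ^= np.uint32(1 << np.random.randint(32))
weights[layer] = flat.reshape(weights[layer].shape)

x = np.random.rand(1, 224, 224, 3).astype("float32")   # arbitrary test input
before = model.predict(x)
model.set_weights(weights)                  # load the corrupted weights
after = model.predict(x)

# Very often the top prediction does not move at all, which is the point:
# a classical low-level fault need not show up in the behavior.
print("max output difference:", float(np.abs(before - after).max()))
print("top class unchanged:", int(before.argmax()) == int(after.argmax()))
```

Typically only a flip in an exponent bit of an influential weight produces a visible change; most single-bit corruptions are invisible at the output, which is why classical fault categories say so little here.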

So we don’t have an understanding, or rather, the classical fault categories that we have traditionally been using don’t apply to this class of system. That’s also where we get into trouble with allowing people to modify systems, because it’s not that we necessarily say we don’t want anybody to modify them; we don’t have clear criteria to say, okay, this modification is sound, this modification is not sound.

So we have this decoupling that happens when we go to these highly complex, AI or machine-learning-based systems, and that is that software correctness really tells us nothing anymore about behavior. And that’s where we have the problems.

So, since I had it in the title of my slide, “The sins of software,” I did change it to “Sins of complex software.” I’m not going to go through this list, because it is really only a snapshot of a very long list.

The point is not to say, “Oh, they’re all doing such horrible things, this is incompetence,” or whatever. I think this is the evolutionary legacy of an industry that was successfully working in a quite confined area of isolated, non-connected vehicles, where software was basically considered an extension of what could no longer be done in hardware.

So, sometimes it was maybe a price issue; sometimes it’s a convenience issue. Fuel efficiency of engines is just better if you control them with a microcontroller than if you try to do it with mechanical connections between valves and crankshaft angle. I really know nothing about cars, so don’t take me seriously on any of the technical details.

So this was an optimization problem for them, but they weren’t anticipating that these would become connected systems with highly dynamic software. There was some level of dynamics in the software, but it is extremely low in current cars. It has been going up; that’s why we have been seeing software-based recalls, which makes automobile makers really happy. The consequence of that was the call for over-the-air updates, which blows up the complexity of current systems already, let alone what we are going to require for later systems.

So, in autonomous systems, aside from this whole problem of decoupling, that we can’t say anything about correctness anymore, there is one thing on that long list that I do want to point out, and that is this notion of testing popping back in. I heard it today a few times as well. That is really interesting, and I didn’t put it on the slide because I knew it would be popping up. Type A systems, these low-complexity systems that we fully understand, we can ideally test exhaustively, because we know all faults and we know the behavior under faults, so we can distinguish between the two.

And as systems became more complex in the history of safety-related systems, we moved away from testing, and you can see this encoded in the safety standards: rather than just testing, they call for analysis and testing. The higher the complexity of the system, the more the onus is on analysis. Now we’re going to even more complexity, and everybody’s going back to testing.

Why are we doing testing in autonomous vehicles? That’s because we don’t know what else to do. We don’t know how to analyze them.

There was a very nice story about Google cars, where a Google car was trying to get out of a supermarket parking lot. It just stood there for 20 minutes or so, and couldn’t ever figure out that the road was free and it could pull out. And somebody commented, “Well, if a student driver with a cumulative driving training of 27 years can’t get out of a parking lot, I would recommend he not get a driving license.” [Laughing] And I think that’s really where we are.

So, why did this happen? What was the big change? We said a little bit about that already: it’s really that the solution space changed completely when we go to autonomous vehicles. We are using different sensors. We’re using different actuators. We’re using totally different algorithms, especially nice non-deterministic algorithms, because all the deterministic algorithms are too inefficient. We don’t use something like depth-first search, because at best it’s polynomial time and we can’t accept these complexities, so we’re using a lot of heuristic-based search algorithms, and search is really not the most complex algorithm used in machine learning.

But if you look at something like a hill-climbing algorithm: highly efficient, but nobody can tell you which minimum it will find in the space it’s searching, whether it will find the absolute minimum or not. So what do we do when we’re building computers for chess? We just run it three or four times, take the minimum of it, and say, well, that’s about as good as it gets, and we’re happy with that. We’re doing that with autonomous cars.
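A toy sketch of that "run it a few times and keep the best result" strategy; the one-dimensional cost function here is made up purely for illustration and stands in for a real planning or evaluation problem:

```python
# Random-restart hill climbing: a non-deterministic local search run a few
# times, keeping the best result, with no guarantee it is the true optimum
# and no record of why a given run ended up where it did.
import math
import random

def cost(x):
    # Made-up multi-modal cost: many local minima, global minimum near x ~ -0.5.
    return x * x + 10 * math.sin(3 * x)

def hill_climb(start, step=0.1, iters=2000):
    """Greedy local search: accept a random neighbor only if it lowers the cost."""
    x = start
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if cost(candidate) < cost(x):
            x = candidate
    return x

# Four independent runs from random starting points; keep the best one.
runs = [hill_climb(random.uniform(-10, 10)) for _ in range(4)]
best = min(runs, key=cost)
print("minima found:", [round(r, 2) for r in runs])
print("best of four:", round(best, 2), "cost", round(cost(best), 2))
```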

That might still be an applicable strategy, but it makes it impossible after an accident to say what the cause of the accident was. We can just say, well, it was somewhere in its search algorithm, but we don’t know how it got there, because we’re basing a lot of these optimizations on non-determinism, or explicitly on random numbers.

So this means that we changed the solution space completely, and this solution space might actually be an evolving solution space, which might not be a bad thing, but I’m not going to get into that. It’s not clear whether we will actually have cars with the same software version, the same state of the software configuration, or different states of the configuration.

The second one is performance. That’s the one I find the most amusing, and I fell into the same naive trap. In 2012, we were asked by a company whether we could qualify mainline Linux. Now, I have been working on qualification strategies for mainline Linux for a little more than 10 years, so we finally said, okay, let’s try and get out of our cozy little university institute (it’s not that cozy in China, but it’s OK) and try and really do it.

And we found it is extremely hard to do. But we then found it’s impossible as soon as we raised the question of which hardware we would run this on. Because there is not a single multi-core CPU on this planet today that you can qualify to meet the integrity levels, so I don’t know what they’re going to put in their autonomous cars. And I’m not even talking about acceleration units with 256 vector units on them to do calculation of trajectories or whatever; just normal quad-core CPUs, not even really high-end. Nobody has a clue how to qualify them.

And when you then go and look at the details of things like branch prediction: it’s non-deterministic. Cache replacement: non-deterministic. Why? Because strictly deterministic logic would be far too slow. So we use non-synchronized logic, which is much faster, but it’s not deterministic. For a server system, you don’t care if it flushes the cache line that you just requested; if it does that with a probability of ten to the minus five, you won’t even notice. If the average speed of your system goes up, then you accept this non-determinism.

But as soon as we’re going into certified systems, we want to make a defined statement about that, and we can’t. We simply have no clue how to do that.

So that’s going to be an open issue. Nobody’s going to have anything qualified on the road in the next 10 years, because there’s not going to be any hardware for it.

Then we get into the legal situation, where a lot of legal questions are open. I don’t have legal expertise in any way, so I’m not going to say much about that, but there is a connection between the legal assumptions and the technical capabilities of this industry, or of all industries for that matter. And there is a significant disconnect, I think. Because when we (take the first two talks) say, oh, we could aggregate these systems and verify this by testing, and if testing shows A and B fit together then we can use this version, and they’re nicely encapsulated, that sounds good. But the fact is we can’t test whether A and B fit together without side effects. We simply can’t do that.

So this aggregation question is of course a real problem; that’s going to be one key problem to resolve. But we have to find far more capable means of judging whether something fits together or not, if you want to extend that beyond compliance into safety capabilities or systematic capabilities. And only if that happens will this ability actually be extendable to highly dynamic software, in the sense of frequent updates, which is a problem that industry has to solve independent of the question of whether we want owners to modify their software or not.

But as soon as industry solves the dynamics problem that it internally has anyway, then at that point we can open the discussion, and then there’s actually probably not really a good reason not to allow you to modify the software. Because as soon as the first kernel bug is shipped in an over-the-air update, who actually detected that kernel bug? Probably it was one of the open source users.

So there is a certain relation, I think, between the software dynamics that will have to transform the way we treat safety-related software, and as soon as that transformation happens, then I assume the question of open source could also be integrated. I’m probably going a little bit too slowly.

So, now to some of the solutions, or some of the mitigations.

Now, just saying everything is bad is of course the easy job. I don’t think everything is bad. I think there is actually opportunity for autonomous systems, because there is tremendous potential in them. The question is, how can we actually harvest that? And one of our primary problems is that we have been looking at these individual software entities all the time.

The automotive people love to call this “safety element out of context,” or SEooC, which is a fundamentally flawed concept for a safety-related system. Because if I have a 10-microfarad capacitor and I want to exchange it for another capacitor, that’s easy, because there are clear interface specifications and behavioral specifications; I have my charging curve and whatever. I can just exchange it. That capacitor, for a safety-related system, truly is a safety element out of context. I don’t care whether it goes into an airplane, as long as the specified environmental conditions are satisfied, or into a tractor or a mobile phone; we could use the same capacitor.

As soon as we get a reasonably complex system, that doesn’t work anymore. And the only mitigation for that is to reintroduce systems engineering. It’s a little bit of a paradoxical concept: what we actually want to do is maximize context for complex systems.

Why do we want to do that? I think that the only way to achieve safety is to actually be able to judge the relevant behavior, to sort of get the Type C system back into the Type B world, by saying, “Okay, we’re going to analyze the system to the point where we understand all the relevant components, so that we have a reasonable probability of catching all relevant faults.” And then we can start discussing what the behavior under these faults is.

Now, the drawback to this system-engineering approach is that it very severely restricts what can be added dynamically to the system, not only user modifications, but also just updates. You don’t want to re-certify the system every time. We have to find some compromise between re-engineering from the system context every time and actually being able to judge properties of an individual element, so as to retain safety.

I guess it’s kind of a no-brainer that we need a new set of standards. We don’t have any suitable standards for autonomous systems at the moment. We don’t have any standards for machine learning. Nobody even has a clue how to do serious verification of machine-learning algorithms; we’re doing it by hand-waving at the moment. And these standards, as I said before, are actually derived from the state of the art. So what that really means is that we’re going to have to establish this state of the art and then, from this established and agreed state of the art, start to derive these standards.

If that doesn’t happen, we’re going to run into the problem that we actually cannot say what the criteria are for calling a system acceptably safe. There is no 100 percent safety. But currently we really don’t know what the failure rates are, or which failure rate is acceptable. We don’t know it at the normative level, because we’re using the wrong standard, and the socially accepted value will be one or two orders of magnitude below what we have now in the automotive industry. And we don’t know how to assess something like machine learning, or generally AI, non-deterministic algorithms, or complex software, for this realm.

Just to maybe give you a number on this: currently ASIL D, which is the highest integrity level in ISO 26262, is rated at ten to the minus eight failures per hour. That sounds like never, but if you have ten to the eight cars on the road, then unfortunately your expected value is one failure per hour. This scalability problem is what has actually been protecting other domains from this problem.
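A quick back-of-the-envelope check of that scaling argument, using only the figures quoted in the talk:

```python
# Fleet-scaling arithmetic with the figures quoted above (1e-8 failures per
# operating hour per vehicle, roughly 1e8 vehicles on the road).
per_vehicle_rate = 1e-8          # ASIL D target: failures per hour, per vehicle
fleet_size = 1e8                 # vehicles on the road

# For a single vehicle this sounds like "never":
hours_per_year = 24 * 365
print("mean years to failure, one vehicle:",
      round(1 / (per_vehicle_rate * hours_per_year)))       # ~11,400 years

# For the whole fleet, the expected number of failures per hour of operation:
print("expected failures per fleet-hour:", per_vehicle_rate * fleet_size)  # 1.0
```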

That’s why we built nuclear power plants when people told us, oh, we can achieve ten to the minus nine failures per hour. That’s once every twelve thousand five hundred years, and everybody said, oh, that’s great, that’s never. Until enough power plants were built that one goes boom every 25 years. Which is quite precisely what they are doing, and will go on doing.

Now scale that to a system that has ten to the seven or ten to the eight units deployed. It will just be a bloody mess if we do that.

So these issues have to be addressed. We can address them; we have to address them in research. Let me come to the conclusions now.

So establishing the state of the art is the first step if you want to build any qualifiable or safe systems. As long as there is no state of the art, there is no point in trying to qualify anything. We have to come up with a common understanding of the problem. We have to standardize a lot of these things, and at the moment we can’t standardize even simple things like, as I said, the minimum distance to a neighboring car. Every carmaker is specifying these things implicitly or explicitly in their test software, but there is no regulated, agreed-on behavior of these systems.

And we have seen this in the past. If we look at rail systems in Germany around 1880, there were 13 different buffer heights and distances, and I think eight or nine different track specifications, so they couldn’t interoperate. For interoperability reasons, we solved these problems in the rail industry, and we have the same problem here: nobody knows how two autonomous vehicles will behave when they meet.

Currently it’s very funny: you see two Uber cars driving around in traffic. But did you ever see three Uber cars driving side by side, or two doing a head-on approach? We don’t know what they will do. I don’t know if Uber knows. Maybe they do, but it’s not about Uber; it’s true for all of them.

So we would have to start looking at how these things are going to behave. Maybe if we send a thousand autonomous cars across a large bridge, they will start swinging synchronously and the bridge will collapse. Nobody knows.

So we have to start standardizing these things, and we have to adjust methods and techniques, because with the current technical means of assessment, of design, of analysis, we can’t manage complex software, let alone complex non-deterministic software. That means a complete transition to the probabilistic world, which is my world; that’s why I like it. But that’s not where engineering is at the moment.

The problem is that this is research. This is not something where Company A can come along and say that in 2022 it will have autonomous cars on the road, and the management of Company B panics and says 2021, causing the panic of Company C, which says 2020. I think the lowest prediction was 2018: a French company said that by 2018 it would be on the road with autonomous vehicles.

And then you look at what they are trying to do. They’re trying to get away with testing, by taking all those assist systems that they have, lane assist, emergency braking assist, whatnot, stuffing it all into the car, and hoping that this thing will be an autonomous vehicle, which of course it will not be.

One last rant. Well, the whole thing was a rant so… [Laughing]

It’s interesting to note that other industries had the same problems. When the Vienna subway system was built in 1974, I think we were the last capital in Europe, or in the Western world, to get a subway. We’re a little bit slow on some things. But in 1974 we finally decided to build a subway system, and the first trains that were put on it were capable of driving autonomously.

Well, they never did, because people didn’t like it. But the upside is that we have no shared tracks in the Vienna subway system, so we have very few delays, because if one line breaks down, it doesn’t impact the others. That would be a nice model for New York, I guess.

And from those starting points, this has undergone a 50-year evolution until we now have autonomous trains: driving here at JFK, Nuremberg now has autonomous subways, Paris has autonomous subways. The next subway line in Vienna maybe really will be autonomous. It’s spreading.

This is fundamentally simpler technology than cars on roads running in unknown, untested environments. But it’s a model that I think would be a reasonable start for autonomous vehicles: say, well, a car is just a train running on virtual tracks with dynamic switches. We would have wayside equipment monitoring the cars, and the autonomous vehicle would sort of take care of its local trajectory only.

But if we don’t come up with a strategy like that and we let marketing drive these decisions, then of course it will be what it is now: it’s going to be a bloody mess. And as for the claim that this is much safer than human-operated cars, the numbers don’t show that, at least not so far.

So to get this fundamentally working, we need to agree on the foundations. We need to actually build these foundations. These foundations will be, of course, technology first: understood technology, not just working technology, because building a prototype and then saying, well, it didn’t kill anybody from A to B, is not evidence of it actually doing something reasonable.

Then, once we have the technology, we have to look at how we are going to get it into our regulations. How are we going to standardize this? How is this foundation going to be developed?

My estimate, or my best guess, well, it’s not my guess, it’s Deutsche Bank that analyzed this and said that before 2040 we’re not going to have any autonomous vehicles on public roads. I assume they were talking about safe autonomous vehicles, and I would say that’s about a reasonable timeline. If industry, academia, and innovative startups get together, they could make that date. Thanks. [Clapping]

MOGLEN: Questions? Yes. Please.

AUDIENCE MEMBER: So this is effectively like the chicken-and-egg kind of dichotomy. Do we establish what the standard is first, even though we kind of don’t have a good state of the art for autonomous vehicles, in order to replace ISO 26262 or whatever the long number is? It’s kind of like that?

MCGUIRE: I wouldn’t say that. The way you handle this in safety-related systems is you take the domain standard (ISO 26262 is the domain standard for automotive functional safety, for vehicles below three and a half tons), and if that doesn’t satisfy your needs, you go up the standards stack. Domain standards are type B standards, so you go up and say, okay, then I’ll go to the basic standard, to IEC 61508, which would be edition 2 at the moment, and apply that, which is much stricter than the domain standard. That would be an iteration that could start this off, or break this chicken-and-egg problem. But you’re fundamentally right: as long as we don’t understand the technology, even if I have a standard, I don’t really know how to evaluate what I have on the table. So it’s going to be an iteration, and this iteration will take a few years.

AUDIENCE MEMBER: So would it be accurate to say, threading open source into this part of the discussion, that until we set that standard, open source can’t really effectively be used by the OEMs and Tier Ones?

MCGUIRE: No, I would say it is exactly the opposite. We have to do this in open source so that we actually can establish the state of the art. If five geeks at BMW build something nice and don’t talk about it, what’s the use? We have to do this in the open, just like any novel technology. Most of the relevant novel technologies were actually developed by competitors together. That’s the only way to get all of the infancy problems out of it: spread the knowledge, get it peer reviewed, get critical reviews. So open source is exactly the tool to enable this. Proprietary approaches, at this complexity level, will for sure fail.

MOGLEN: Mike?

AUDIENCE MEMBER: So I’m suffering from a little moment of cognitive dissonance here, because basically what you’re saying is this isn’t going to work for 40 years, and to be honest, I agree with everything you just said. And yet Waymo, the mobility division of Google, just ordered 20,000 autonomous vehicles from Jaguar to be shipped in the next year or two. It reminds me of an old saying that pessimists are right, but optimists are rich. So is this one of these moments where corporate ambition and greed have so far outstripped reality that we’re just in for a pasting?

MCGUIRE: How many 3-D TVs were ordered? [Laughing]

MOGLEN: For every household in Korea.

MCGUIRE: Okay, well, I don’t know the numbers, but it was exactly the same hype. Everybody was seeing bags of money flying around and thought they would get rich off it. What happened is, well, they invested a lot. I mean, it wasn’t comparable to autonomous vehicles, but basically the same thing. And there was no safety regulation involved there; there just wasn’t the industry, the ecosystem, behind it. Here we have a similar situation, and standards and safety are part of the ecosystem. If the ecosystem is not prepared, you can put in an arbitrary amount of money, and you’ll hit the wall.

AUDIENCE MEMBER: So what’s– Maybe Eben knows the answer to the question: who is the regulator in the United States that’s going to let those 20,000 cars on the road?

MOGLEN: Up until a couple of weeks ago it was the governor of Arizona. [Laughing] Right? I mean, the same basic proposition that Nicholas is offering with respect to the engineering, it seems to me would be fair to offer with respect to the legal technology. A race to the bottom is going on because when there is so little information available, when there is so little data that a rational policymaker could apply, politics is what gets made rather than policy. And there’s competition to be the first and to be the most open. And so the structure will be regulators desiring as quickly as possible to get out of the way.

MCGUIRE: We must not forget that there is a lot of money involved, so there is political interest in attracting companies. We have the same thing going on in Austria. The party that is currently in government says, well, let’s make test roads near Graz, which is our automotive center in Austria. They had no regulations, no concept for it. They just did this because they want to prevent companies from moving their research divisions to Slovenia or to Germany. And the same thing is going on here in the U.S., as far as I understand it.

MOGLEN: Let’s stay with the technological problem before you go more deeply into the legal problem. I’m of course with you that we had better be able to read the code or we’re never going to be able to figure out what any of this stuff does. But part of the problem in the kinds of computer programs that we’re thinking about now is that the code is very simple and all the magic is in the training data, and we don’t have any legal licensing or other mechanisms for making sure that we understand the ecology of that data, which runs, or which defines, the behavior of machine learning systems, the way we have a few really rather primitive (I will say this about my own licenses) legal structures for making sure that people get a copy of the source code. Beyond the openness of the code itself, we’re going to need a whole series of agreements and understandings about the data, which contains the functional behavior of these things we don’t yet understand. How do you, from an engineering point of view, think we ought to go about trying to make rules about the qualifying of the data?

MCGUIRE: Okay, I’ll get to that. Just one sentence: the algorithms are much simpler, but that doesn’t mean we understand them.

MOGLEN: Fair enough.

MCGUIRE: Okay. When it comes to data, well, there are standards for this, because this is not the first industry to have this problem. If we look at classical automation in this building, you will find PLCs. These PLCs are configured by data; the PLC itself has not changed, it’s only configured. There are standards for what the qualification of data must be, and a number of industries have such qualification standards. This could be a starting point. None of it will be one-to-one applicable to an autonomous system, but it’s a starting point. And just like with any other complex system, evolution beats revolution. So what we have to do is start with a simple system and apply this guidance (these are not checklists, this is guidance), find what is deficient in the guidance, fix the guidance, build the next, more complex system, apply it again, and go through this iterative process. We’re going to do the same thing for data.

MOGLEN: If we’re allowed to. That is to say if people don’t proprietize the data even more than they proprietize the code.

MCGUIRE: If they try to do that, it’s not going to happen at all. That solves the problem from a safety perspective as well. [Laughter.]

MOGLEN: Or they will do it because people will let them and we wind up with a bloody mess. Yes, Dan?

AUDIENCE MEMBER: Let me just add: I think, of course, some standards are already there, available for these systems. And I also believe that we don’t make the jump from zero to 100, but evolve from one system to another, and we gather information, we gather experience. That is the approach we take. So we start with assist systems; then, at the moment, we are in the process of doing sensor fusion, so we have a system which can bring all the data and information together; and this is how we evolve the system step by step. I fully agree that it is probably too quick, from a regulatory side and from an experience side, to jump immediately from zero to 100.

MCGUIRE: It’s not an evolution. That’s the problem. You’re probably referring to the SOTIF standard, which is an attempt to regulate “let’s test the complex system until we believe it.” The reason why this is not an evolution is that you are building it on functionality. You are evolving the system from a functional perspective, and you are tolerating extremely large complexity. A lot of the test systems that I’ve seen are running on ROS. ROS is an open source project written by PhDs, and some of it is very beautiful code; some of it, as PhD code, is horrible. And nobody has a clue how to qualify that. KIT, the Karlsruhe Institute of Technology, is starting to think about how to take a subset of ROS and maybe do an assessment of non-compliant development, which is dubbed Route 3s in IEC 61508, to qualify a really small piece of it. There’s no clue how to do that. Nobody knows how to handle numeric instability. The mitigation you’re doing at the moment is saying this is an assist system, you may not rely on it, and then you design it so that it can only go toward the safe state; it shouldn’t be able to go to the unsafe state. That’s why you can do this now, but as soon as the safety of the car is determined by these algorithms, you can’t do it anymore. This is not an evolution. It’s an illusion of an evolution.

MOGLEN: One more question.

AUDIENCE MEMBER: Hi, thank you. Last month, in March, a Tesla on autopilot came to a fork in the road and took it. And I want to know: would you have expected at least the brakes to be applied, by either the driver, who was apparently asleep, or the autopilot? Because you have to be able to differentiate a barrier from a line in the road. It’s so basic.

MCGUIRE: No, it’s not basic. The problem is that you look at the single accident and say, how could this happen? But then you have to scale it and say, well, actually, how many Teslas are driving around? I don’t know, let’s say 20,000 or 30,000, something like that. So we’re exercising this autopilot all the time, and it’s actually doing fairly well, and from time to time it misses the target. And this is exactly the point: we don’t know why it missed, because we don’t understand what it was actually doing before. So you can’t deduce much from this individual accident, except that this situation should have been managed, and that it just wasn’t in the training data, or the weighting of this training scenario was weaker than it should have been for the actual environment it was operating in.

AUDIENCE MEMBER: What is kind of interesting is that the same driver who died had complained about the same barrier at the same location in a previous note to Tesla.

MCGUIRE: Okay, well, then maybe he would qualify for the Darwin Award or so, I don’t know. [Laughing] I mean, he gets into a car, complains to the manufacturer that the autopilot doesn’t work, and then goes to sleep.

MOGLEN: So, on the basis that the National Highway Traffic Safety Administration really does not want us to discuss intermediate conclusions in an ongoing investigation, I’m not going to attempt to figure out what that tells us. What it does tell us, I think, is that most of the people in this room who are in one way or another thoughtful about computer software and cars do not believe that software can drive cars. I think it’s sort of important to keep that in mind. We may all believe that software will eventually be able to drive cars.

MCGUIRE: Yes, it can, why not? NASA has proven that it can do it.

MOGLEN: That’s right. And on Mars, where it doesn’t snow very often, it works particularly well, and we do all this in California for a reason, right? Because it allows us to ignore a whole lot of what makes driving cars…

MCGUIRE: Okay, I do have to defend NASA. The sandstorms on Mars are not easy to handle.

MOGLEN: OK, fair enough. You’re absolutely right. And I’m not suggesting that it wouldn’t be much beyond what a Tesla can handle, because it would wipe out all the sensors immediately.

We did begin with skepticism about autonomous driving. We’re not going to conclude only with skepticism about autonomous driving. First, we’re going to take a brief break so people can go to the bathroom, and then we’re going to talk way more about autonomous driving.

Thank you, Nicholas. Very, very good.
