20 April 2017 | Magazine

“A good algorithm is like a poem”
Focus on Research: Self-Aware Vehicles

Sebastian Stiller is a mathematics professor, philosopher and author. He develops algorithms and explains why we shouldn’t be frightened of them. The scientist lives in Braunschweig with his wife and child (and a robotic vacuum cleaner).

The easiest question first: what is an algorithm?

Thinking in algorithms means thinking about how you think. You try to find a simple principle that applies to a wide variety of situations or things. For example, the pen-and-paper method of addition we learned in primary school is an algorithm, and it simplifies addition tremendously. Even five-digit numbers can be added up without much effort when you follow the scheme step by step. Laws and traffic regulations are algorithms, too. They provide simple operational rules for a wide range of people and circumstances.
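
A minimal sketch of that primary-school scheme, assuming the numbers are given as decimal strings; the function name and the sample numbers are illustrative and not taken from the interview:

```python
# Pen-and-paper addition: work through the columns from right to left,
# write down the last digit of each column sum and carry the rest.

def add_by_columns(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    digits = []
    for da, db in zip(reversed(a), reversed(b)):
        column_sum = int(da) + int(db) + carry
        digits.append(str(column_sum % 10))  # the digit written below the column
        carry = column_sum // 10             # carried over to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_columns("47815", "69327"))  # 117142
```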

Professor Sebastian Stiller, Institute for Mathematical Optimization of the TU Braunschweig. Credit: Hans Scherhaufer

When did you realise how commonplace algorithms really are?

During my studies, I obviously learned what an algorithm is, and then applied that knowledge. But I think I only truly realised it while writing my book.

Why did you write the book?

I really didn’t like how algorithms were being discussed in public, in the media and in politics. It only served to stir up unnecessary fears about how we would one day be ruled by machines. Algorithms are a powerful tool when applied in the right place and with common sense. But they are only a tool and cannot solve every problem. Algorithms expand our possibilities, but they cannot replace us.

Where can algorithms be useful?

Algorithms help us manage globalisation, allocate resources and energy, and coordinate the logistics of worldwide trade. In research, they help us gauge which experiments are likely to be successful. Algorithms can considerably speed up the search for new antibiotics and vaccines. I myself am working on algorithms that can make cars safer and space missions more reliable and cost-effective. In the future, such missions might be carried out with a fleet of spacecraft and with little to no knowledge of the conditions awaiting them at their destination. We will have to come up with suitable courses of action and equip the spacecraft with the appropriate algorithms.

If algorithms are so helpful, why are people so scared?

It’s partially due to misleading terminology, such as the “autonomous” car. Autonomous means “self-governing”. This may well lead to some people becoming afraid of robotic cars running wild, determining what they should do next completely on their own. That’s nonsense, of course, because scientists are the ones who determine the principles on which the cars operate. And autonomous cars do not make driving more dangerous; they make it safer. That’s why we are working on them.

What do you think of the term “artificial intelligence”?

That one is probably even worse. Scientifically speaking, it means a specific type of algorithm. But it has nothing to do with intelligence in the human sense. Human intelligence requires consciousness, a will, and the ability to assume responsibility and show remorse. Machines have none of these.

But what exactly is artificial intelligence?

Artificial intelligence algorithms are a type of artificial estimation. Today, they make it possible for machines to win complex games such as chess and Go, working just like humans, with rather half-baked rules that are influenced by their playing experience. The more often they play, the better they get. But it is still all a matter of guesswork. Or do you happen to know the principle for playing chess perfectly?
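
A toy sketch of such artificial estimation, assuming a made-up two-move game with invented win probabilities; real chess and Go programs are far more elaborate, this only shows how a rough guess can improve with playing experience:

```python
import random

# Hypothetical true win probabilities that the program does not know.
TRUE_WIN_PROB = {"move_a": 0.4, "move_b": 0.6}

estimates = {m: 0.5 for m in TRUE_WIN_PROB}  # initial half-baked guesses
plays = {m: 0 for m in TRUE_WIN_PROB}

for game in range(2000):
    # Mostly pick the move that currently looks best, sometimes explore.
    if random.random() < 0.1:
        move = random.choice(list(TRUE_WIN_PROB))
    else:
        move = max(estimates, key=estimates.get)
    won = random.random() < TRUE_WIN_PROB[move]  # play one simulated game
    plays[move] += 1
    # Update the running win-rate estimate with the new experience.
    estimates[move] += (won - estimates[move]) / plays[move]

print(estimates)  # the guesses drift towards the true win rates over time
```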

What can we do about fears that the world will soon be ruled by machines?

Above all, it is important to understand the criteria used in various algorithms. Once you do, algorithms stop being frightening. But that’s the sticking point. I have a feeling that even decision-makers in the political and economic spheres are pretty clueless in this arena. A basic understanding of algorithms should be just as much a part of everyone’s general knowledge as the topics of politics, nutrition and the climate. You don’t have to be a maths genius to understand them – any 7th grader can learn how a search engine algorithm works.

Are all critical views of algorithms unfounded?

No, you can always question an algorithm. Is it any good? And most importantly, how does it arrive at its decisions? If it is based on statistics, I have to assess the outcome quite differently than if it were a clear, exact calculation. We should always know how good an algorithm is and how changing circumstances affect its results. That is why we take the self-awareness approach at TU Braunschweig, for example in autonomous driving. The system should be able to recognise whether its capabilities are sufficient for the task at hand, or whether it would be better to play it safe and stop. A little like a person who is self-aware enough to leave their car keys with the bartender if they’ve had too much to drink.
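
A deliberately simplified sketch of that self-check, assuming a single confidence value and an invented threshold; the names are hypothetical and this is not the actual TU Braunschweig system:

```python
def decide(perception_confidence: float, required_confidence: float = 0.9) -> str:
    """Continue only if the system judges its own capability sufficient,
    otherwise fall back to the safe option of stopping."""
    if perception_confidence >= required_confidence:
        return "continue driving"
    return "pull over and stop safely"

print(decide(0.97))  # capability judged sufficient
print(decide(0.55))  # e.g. heavy fog: too uncertain, play it safe
```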

Can you give me some examples of bad algorithms?

(Thinks for a moment.) For example, there is this app that determines whether or not a person suffers from depression by analysing their Facebook likes. It has a higher accuracy rate than the person’s friends, but that is only because friends are really bad at this. I think applications like these are irresponsible. A psychologist or psychiatrist would be a much better reference point. Of course, it is possible to correlate all kinds of things, even Facebook likes and depression. But there is no valid theory behind it.

(Thinks some more.) Another example: we have a talking robotic vacuum cleaner at home. The other day, it said that its brush needed cleaning. It looked pretty clean to me, but still I cleaned it again and again, until my wife and I realised that the robot could only determine whether or not the brush was rotating. And sure enough, it was only jammed. The robot makes us feel as if it knows when its brush needs cleaning, so we stop thinking for ourselves. With an older vacuum cleaner, there would just have been a little light to indicate that something was wrong, and we probably would have found the problem much sooner.

What is it like to live with a robotic vacuum cleaner?

It’s quite alright. But it is a little alarming to realise how we sometimes treat it almost like a human. The other day, my son called out, “Robbie, don’t drive into that corner! Robbie, don’t!” A little surprised, we reminded him that it can’t understand him, but then we realised that we actually often talk about the robot as if it were human.

In your book, you say that a good algorithm is like a poem. Do you also feel an emotional bond with formulas?

Well, we scientists tend to get very enthusiastic about our work. It can sometimes happen that you fall into a kind of trance. And if you actually manage – usually around four in the morning – to develop an algorithm of a certain simplicity and clarity that does exactly what you want it to do, when it all comes together and is still working right when you wake up the next morning, that is quite a thrilling feeling!

Text: Andrea Hoferichter