Why should I care?
Algorithms and AI have become unavoidable. They are used for decision-making in many areas: not only social issues, but also climate research, the energy transition and healthcare.
There is a good chance that you will have to deal with AI: in your work, your education, the news or your personal life.
You may need to make decisions about the use of algorithms, or think about their opportunities and risks. To do this in a responsible way, it is important to understand how these algorithms and AI work.
Insight starts with asking questions. But what questions can you ask about AI to gain the insight you need? We help you with this by looking at a broad topic such as ‘data science’ at three levels, based on our main questions:
Source: where do the data come from?
Analysis: what happens to the data?
Outcome: how are source and analysis used?
What do you think about algorithms and AI?
If you think about it, it’s actually quite strange. The amount of data, and the complexity of how data are used and analyzed, keeps increasing. Meanwhile, our communication, even on these complex topics, keeps getting shorter. How much can you explain in a two-line tweet or a video clip of a few seconds? Sometimes you read positive, optimistic messages on a topic. At other times, the same topic is described in negative, critical terms. We will take a closer look at some of those contradictions. This helps us to keep asking questions and gain insight.
‘AI is everywhere and widely used’
‘AI is in its infancy and can hardly do anything’
Although the statements seem to contradict each other, it is of course quite possible that AI is both widely used and still in full development. It is important to pay attention to how the term ‘AI’ is used. Is it an algorithm that has learned to perform one task independently? Or is it a discussion about what can really be called artificial intelligence?
Machine learning algorithms are often called ‘AI’. They are used in many different ways and are widely applied. But self-learning is not the same as intelligent. A truly intelligent algorithm does not yet exist. Intelligence can be defined as the ability to learn, to use appropriate techniques to solve problems and to achieve goals, all while taking the context into account in an uncertain, ever-changing world. For a long time, AI research focused on machines programmed by humans to behave smartly, for example to play chess. Today, the emphasis is more on machines that can learn in a way that is somewhat similar to how humans learn.
AI researcher Professor Antal van den Bosch of the Meertens Institute (KNAW) is one of the scientists fascinated by this topic. What is intelligence and how do you ‘make’ it? “I often see some form of deep or general intelligence being attributed to algorithms that have at best learned one task very well. In my view, this says more about marketing than about the intelligence of the algorithm. Real intelligence, as we recognize it in people, is something we will not be able to replicate for a long time. How long will it take? Well, we are really just getting started. If we ever succeed, it will certainly take decades.”
‘Algorithms are objective’
‘Algorithms are full of prejudices’
By using an algorithm to make a choice, you can be sure that the decision was not based on human emotion. The algorithm knows no emotion. It will make the choice the same way every time. It is reproducible and traceable, properties that we associate with objectivity.
But which numbers does the algorithm use, and which choices are made on the basis of those numbers? Those settings come from people. Based on these settings, the algorithm makes assumptions. Making those assumptions transparent, and assessing how (un)desirable they are, is again a task for people. Precisely because an algorithm makes reproducible and traceable choices, it is possible to reveal and, more importantly, remove prejudices.
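To make this concrete, here is a minimal, hypothetical sketch in Python. The decision rule, the income threshold and the numbers are all invented for illustration; the point is that the rule is perfectly reproducible, yet the threshold itself is a human assumption.

```python
# Hypothetical sketch: a reproducible decision rule whose "objectivity"
# rests entirely on settings chosen by people. The threshold below is an
# invented example of such a human assumption.

INCOME_THRESHOLD = 30_000  # chosen by a person, not by the algorithm

def approve_loan(annual_income: float) -> bool:
    """Returns the same answer for the same input, every time."""
    return annual_income >= INCOME_THRESHOLD

print(approve_loan(29_000))  # False: reproducible and traceable...
print(approve_loan(31_000))  # True
# ...but whether 30,000 is a fair cut-off is a human judgement that
# can be made transparent, debated and changed.
```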
Professor Sennay Ghebreab is an AI scientist at the University of Amsterdam and works on developing AI so that it promotes equality in society: “How do we ensure that AI does not make the same mistakes as we do? This is the question that is central to my work. Humans live and learn in a world full of assumptions, which consciously and unconsciously contain certain prejudices. It is precisely a technology such as AI that makes it possible to hold up a mirror to our society and expose those prejudices. Only when we make them visible can we start working on solving them. So that we can ultimately ensure that AI does not make the same mistakes as we do.”
‘Predictive models can tell the future’
‘Predictive models are speculative and unreliable’
A model uses algorithms to calculate, on the basis of many different large data sets, how pieces of data influence each other. The precise values in the calculations, called weights or parameters, are adjusted so that the results of the model correspond as closely as possible to the outcomes registered in the data set. The algorithm learns to recognize patterns in the data, on the basis of which it can predict more accurately what the correct outcome will be. The model therefore uses data from the past to derive certain ‘rules’ or connections. It can then apply these rules to make predictions about future scenarios.
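As a minimal sketch of that fitting process, consider the toy Python example below. The data set, the single weight and the learning rate are all invented for illustration; real models adjust millions of weights, but the principle is the same.

```python
# Minimal sketch: adjusting a weight so that the model's predictions
# match the outcomes registered in a small, made-up data set.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, registered outcome)

weight = 0.0          # the parameter the algorithm will adjust
learning_rate = 0.05

for _ in range(200):  # repeatedly nudge the weight to reduce the error
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x  # step that shrinks squared error

print(f"learned weight: {weight:.2f}")          # close to 2: the 'rule' in the data
print(f"prediction for x=4: {weight * 4:.2f}")  # applying the rule to a new case
```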
The strength of such a model lies in the fact that it can discover and use complex relationships. Of course, even a very complex model is never so complete that it contains all variables. Predicting the entire future with certainty is impossible. Machine learning algorithms offer no guarantee that the patterns they learn are complete and correct. For example, self-driving cars used to mistake the full moon for an orange traffic light, and completely new situations can also be problematic for algorithms. Diverse and reliable data sets are therefore important, because they make it more likely that the algorithm, and thus the model, is reliable.
Researcher Stefan Buijsman is working on providing insight into AI. “With self-learning algorithms that learn from large amounts of data, it is often unclear why they give a certain outcome. Why does the model mark one person as a fraudster and not another? Because of the many calculations that precede the outcome, learned from the data, we cannot give a good answer. So we are developing new techniques, known as ‘explainable AI’, in which we try to explain why the algorithm gives that answer. That brings responsible AI a step closer. With more transparency about algorithms, we gain more insight into the choices they make and how acceptable those choices are. We hope that with more information there will also be more control over the use of algorithms.”
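One common idea behind explainable AI is to probe how a model’s output changes when a single input changes. The sketch below illustrates only that general idea; the ‘model’, its inputs and its numbers are invented for the example and do not represent the researchers’ actual techniques.

```python
# Toy illustration of one explainable-AI idea: probe how the model's
# output changes when a single input changes. The "model" and all the
# numbers here are invented for the example.

def fraud_score(n_transactions: int, avg_amount: float) -> float:
    # Stand-in for an opaque, learned model.
    return 0.01 * n_transactions + 0.002 * avg_amount

person = {"n_transactions": 40, "avg_amount": 250.0}
base = fraud_score(**person)

for feature, delta in [("n_transactions", 10), ("avg_amount", 100.0)]:
    probe = dict(person)
    probe[feature] += delta  # change one input, keep the rest fixed
    change = fraud_score(**probe) - base
    print(f"increasing {feature} by {delta} changes the score by {change:+.2f}")
```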
Dr Cynthia Liem also works as a researcher on techniques that can strengthen our trust in AI. “AI systems are based on mathematics and computer science. The ‘language’ of mathematics and computer science is much more rigid than the language used by people. So we have to pay close attention to whether the problems that people express keep enough of their nuance in data, algorithms and AI. In services like Spotify and Netflix, recommendation algorithms, which automatically try to suggest what you as a user would like, are crucial. These are often based on ‘engagement’: how often people watch an item. But that can also create unintended incentives, under which clickbait would be the ultimate type of content. I think we want to see more than that, don’t you?”
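To see how an engagement-only recommender creates that incentive, here is a deliberately simplified sketch. The catalogue, the titles and the view counts are invented; real recommenders at services like Spotify and Netflix are far more sophisticated.

```python
# Deliberately simplified sketch: a recommender that ranks purely on
# "engagement" (view counts). Titles and numbers are invented.

catalogue = [
    ("in-depth documentary", 1_200),
    ("nuanced interview", 900),
    ("YOU WON'T BELIEVE THIS!", 15_000),  # clickbait, heavily clicked
]

# An engagement-only recommender simply sorts by views, descending,
# so the most-clicked item always comes out on top.
recommendations = sorted(catalogue, key=lambda item: item[1], reverse=True)

for title, views in recommendations:
    print(f"{views:>6} views  {title}")
```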
Do you have a question or do you want to talk to an AI researcher? Mail us at info@inzicht-in-ai.nl.