Critical Thinking in the Age of AI

Elderly Man Thinking while looking at a chessboard. Image by pexels.com (free use)

My reflections on the matter start from the same question: with Artificial Intelligence taking over tasks and even replacing competencies that were once distinctly human, how will our way of thinking and living be affected?

I believe that, precisely because AI becomes more pervasive every day, there will be a growing need for interpersonal and humanistic competencies: the ones that distinguish us from every other being we know so far, natural and artificial alike.

There is an ongoing debate on the ethical implications of these new technologies and on the need for governance to establish what is allowed and what is not. Still, the law is one thing; beyond it lie our conscience and our ability to assess each situation on its own terms.

That is going to require more critical thinking than ever because we human beings, by design, reason in extremely imperfect ways. For example:

  • We believe that cold reasoning is better than gut feeling. In reality, they are two sides of the same coin, and both are valid and powerful once understood.
  • We think we see facts objectively, while in truth we tend (consciously or not) to construct stories that fit our understanding of the world. Our biases make us thoroughly subjective individuals.
  • We take many things for granted until somebody speaks up loudly enough for us to listen. Were slavery, women’s rights, and racism not problems until they were? Or were we simply blind to the obvious, because these things were considered acceptable? As you can see, our morality changes with history.

These points converge on one conclusion: since the data used to train AI is collected by people, and people have a limited view of the world, that data is not objective. The consequence is that human bias, if unchecked, propagates into artificial intelligence systems, which in turn can entrench existing societal inequities through their decision making and predictions.
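To make the point concrete, here is a minimal sketch in Python. The data and scenario are entirely hypothetical (a made-up hiring history with two groups), and the "model" is nothing more than frequency counting, but that is exactly the mechanism: a statistical model estimates patterns from its training data, so a skew in the data becomes a skew in the predictions.

```python
# Hypothetical sketch: bias in training data propagates into a model.
from collections import Counter

# Invented historical records: (group, hired). The data is skewed:
# group "A" was hired far more often than group "B".
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate P(hired | group) by simple frequency counts,
# which is, at heart, what many statistical models do.
counts = Counter(history)

def p_hired(group):
    yes = counts[(group, True)]
    no = counts[(group, False)]
    return yes / (yes + no)

# The model faithfully reproduces the historical skew: the bias,
# unchecked, becomes the decision rule.
print(p_hired("A"))  # 0.8
print(p_hired("B"))  # 0.3
```

Nothing in the code "decided" to discriminate; it simply learned the past. That is why curbing bias requires auditing the data and the outputs, not just the algorithm.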

This does not mean the challenge cannot be resolved. But while data scientists and AI engineers work on curbing the issue and building fair, robust technology, it is the responsibility of each of us to understand whether, and how, the way we use it can harm someone or something while benefiting us.

Once more, critical thinking is what can give us the upper hand. As of today, machines make decisions without understanding them; they learn to perform tasks without grasping their meaning. We humans, on the other hand, have awareness of the world and of ourselves, and ethical and moral values that shape our decision making. We can notice when the output of an AI does not feel right, and do something about it.

My conclusion is that while STEM disciplines (Science, Technology, Engineering, and Mathematics) will keep producing better technologies, humanistic disciplines will rise to extreme importance in assessing the use of technology with respect for human rights, and for non-human ones too: animals and the environment. Philosophy, psychology, art, and literature have always been great assets to the growth of cultures and societies, and they will be fundamental once again.

Humans and machines do not necessarily share the same values (and machines are far from understanding what “value” means in human terms; they only know it as the response to a parameter!). We should keep that in mind as the world progresses and, once more, learn and evolve accordingly.

Here is what you can do to better understand where you stand today, as a human being and a citizen of the world:

  • Know Thyself. All humans are biased by default. Knowing the hidden, underlying mechanisms of how we make decisions is essential. I recommend reading Thinking, Fast and Slow by Nobel laureate Daniel Kahneman. You should also watch the documentary Coded Bias.
  • Know your Surroundings. Reaching out to other, diverse points of view will open your eyes to the elephants in the room. Read the work of journalist and activist Caroline Criado-Perez: Invisible Women gives a grasp of how unequal modern societies still are.
  • Know your Rights. Artificial Intelligence will keep progressing and will undoubtedly merge into industries and society. Even if you are not technically knowledgeable on the matter, you can familiarize yourself with the blueprints established by researchers from around the world. Check out the 2017 Asilomar AI Principles on research, values, and issues in applying Artificial Intelligence to the real world. Refresh your understanding of data protection and privacy, as in this GDPR wrap-up. Get acquainted with local data and AI regulations: the European Union is currently discussing the AI Act, and other countries will follow with their own standards.
  • Reflect on all (or any) of the points above: where do you see yourself impacted? Why is it important for you to know? What can you do about it? How can you contribute (as an individual/citizen/professional/scientist…) to the coming change?

Published by Andrea Paviglianiti

I practice coaching, I love reading, and I work as a data scientist. I also recharge my batteries with meditation, martial arts, and video games. I perform career and skills coaching – thus I define myself as a “cognitive” coach: I help people improve their learning experience to succeed where they want. My method is based on behavioral analysis, psychology of learning, philosophy of dialogue, and classic literature. I write about how to get better at learning, the best books I read, and my personal philosophy of coaching. And I will not lie to you – I can get verbose at times! I’d be happy if you stick around and read more of what I have to share!
