Our artificial future - book review #1
Jan 19, 2026
Most of the blogs I write are about Power Platform, Microsoft 365, and how to apply technology in a practical way within organizations. Concrete, hands-on, and usually quite technical.
This year, I want to add something to that mix: occasionally stepping back and reflecting on the other side of technology.
One of the first books I read with that goal in mind is Onze kunstmatige toekomst: wat wij willen met AI (en AI met ons) (in English: Our Artificial Future – What we want with AI and what AI wants with us) by Joris Krijger. It’s not a guide on how to implement AI, but a book that forces you to think about what AI does to our society, our decision-making, and ultimately to ourselves.
And that is exactly why I think it’s relevant for people working in IT.
AI as a prediction machine, not a future thinker
One of the ideas that stood out to me most is how AI is described not as a system that predicts the future, but as one that largely repeats the past. AI learns from historical data and that data is never neutral.
That may sound abstract, but the consequences are very real. If historical data already contains inequalities, AI is likely to reinforce them rather than correct them. For some groups that works out well; for vulnerable or historically disadvantaged groups, it often does not.
No matter how advanced an AI model is, it remains dependent on:
- the data we feed it
- the assumptions embedded in that data
- and the choices we, as humans, make

These are things we don't always explicitly reflect on in IT projects.
The line between predicting and deciding
Krijger presents AI primarily as an extremely powerful prediction machine: systems that can process enormous amounts of data and detect patterns within them. And yes, that is impressive. I see the value of this every day in my own work.
But the book also makes an important distinction: AI can calculate probabilities, but it cannot decide what truly matters.
Which values do we prioritize? How do we weigh errors? What level of risk do we consider acceptable?
Those remain human decisions. In a world where AI increasingly influences decision-making, it is essential to recognize where that boundary lies. For me personally, this reinforces something I already believe: AI can support us, but ownership and responsibility should never be fully outsourced.
What AI does to our own skills
Another concept that stayed with me is cognitive atrophy. A technical term for something very human: if you stop using certain skills, they weaken, just like muscles.
If AI increasingly writes texts for us, summarizes information, or generates analyses, what does that mean for our own ability to think, write, and reason clearly? This is not an argument against using AI, but rather a call to use it consciously.
I notice this in my own work as well. AI can create a great first draft, but I still want to understand what’s there. I want to be able to challenge it and improve it myself. That balance matters.
Inequality, power, and platforms
The book also looks at the broader societal impact of AI, not only economically, but socially and politically. It highlights the role of major platforms like Meta, X, and TikTok, which have become essential channels for political communication.
That raises difficult questions about equal access, influence, and democratic values. Not because there are simple answers, but because AI forces us to make these questions explicit. Do we accept new forms of inequality as inevitable? Or do we try to compensate for them in a fair and deliberate way?
These are discussions that IT professionals are often indirectly involved in, yet rarely participate in actively.
Can we hold AI to higher moral standards than ourselves?
One of the more nuanced points in the book is the question of whether we can or should hold AI to a higher moral standard than humans. After all, people do not always act ethically. We have biases, conflicting interests, and incomplete information.
AI is designed by those same imperfect people. Expecting AI to behave in a morally flawless way can therefore feel unrealistic. At the same time, that doesn’t absolve us of responsibility. If anything, it increases it. Perhaps that is the core message: AI doesn’t just automate decisions, it exposes our own values, choices, and blind spots.
Why this book is relevant for IT professionals
In my day-to-day work, I’m not yet explicitly confronted with these ethical dilemmas. But that’s exactly why I think it’s valuable to engage with them now, before they become unavoidable challenges.
This book doesn’t offer technical solutions or clear-cut answers. But it does help you look more critically at the role AI plays and the role we play in shaping it.
And maybe that's what we need alongside all technological progress: moments of reflection, slowing down, and asking ourselves not just "What is possible?" but also "What do we actually want?"
Book information
- 📖 Link to the book (only available in Dutch): Onze kunstmatige toekomst
- 🌍 Website of the author: joriskrijger.nl
