ISO 42001:2023 places a duty on organisations adopting AI systems to understand their impact on individuals, groups of individuals, and society, both positive and negative. Artificial Intelligence Impact Assessments (AIIAs) are not a formality; they exist to protect not only fairness and compliance but also human capability. This responsibility matters because technological erosion rarely happens suddenly: it does not announce itself, but arrives quietly through comfort and convenience. A concern I have is the lack of balance between the financial benefits AI has to offer and its potential long-term consequences.
An AIIA should not only explore bias, privacy, and safety. It should also ask:
- What skills weaken if humans stop performing this task?
- Does this system build capability or remove the need for it?
- What are the societal implications if cognitive engagement declines?
- How do we avoid creating a workforce of operators instead of thinkers?
I experienced this kind of erosion myself as a child. I received my first computer at ten, and within months my handwriting had changed to look more like computer fonts. As a teenager and a young adult, I walked or cycled everywhere. After graduating from university, I needed a car to commute 20+ miles to work, and before long I was driving everywhere. Even a short walk to buy a newspaper started to feel unnecessary.
I recall watching Idiocracy, a satirical film set in a future where society slowly abandoned curiosity, learning, and thoughtful decision making, leading to a population that couldn’t solve basic problems. Joe Bauers, an average man from the present day, wakes up centuries later as the smartest person alive because he still possessed basic reasoning skills.
In no way should this article suggest that AI is bad. AI can simplify decision making, accelerate work, and make life more efficient in ways that amplify human capability. It can also weaken both skills and independence if society over-relies on AI as a substitute for thought rather than a support for it. The danger is not intelligence disappearing overnight. It is a slow, comfortable slide into convenience where fewer people understand how things work, fewer question information, and fewer develop deep expertise because automated systems handle everything: a gradual drift towards the world that Idiocracy mocks.
A healthier path is to use AI as a partner, not a replacement. We can let tools handle repetition while we apply judgment and creativity and continue to invest in professional development. There is much discussion of AI taking over the world, with parallels drawn to Terminator and John Connor, but we should not ignore the possibility that civilisation could sleepwalk into a future where the smartest person in the room is the one who can still think for themselves. The future belongs to those who can think with machines rather than depend on them.

Information security, risk management, internal audit, and governance professional with over 25 years of post-graduate experience gained across a diverse range of private and public sector projects in banking, insurance, telecommunications, health services, charities and more, both in the UK and internationally.
