Ethical considerations of AI

Ethical development, deployment, and use of artificial intelligence is essential to ensure responsible innovation, fairness, trustworthiness, and societal benefit.

  • When developing AI systems, it is crucial to prioritise human well-being, autonomy, and dignity.
    • AI should enhance user capabilities and decision-making processes.
    • Design systems to accommodate people of all abilities and demographics.
    • Provide clear, understandable explanations of AI functionality and outcomes.
    • Incorporate mechanisms to prevent harm, misuse, or unintended negative consequences.
    • Regularly incorporate user feedback to improve AI systems and address potential concerns.
  • Transparency builds trust and understanding between users and AI systems, making it essential to communicate AI processes.
    • Users should always be aware when they are interacting with AI technologies.
    • Provide detailed yet understandable explanations of how the AI operates and makes decisions.
    • Share potential risks, limitations, and intended uses of AI systems openly with stakeholders.
    • Be transparent about how AI models collect, use, and safeguard data.
    • Maintain an open dialogue with users, researchers, and regulators to ensure ongoing alignment with ethical standards.
  • Develop and maintain AI systems to promote equitable outcomes and avoid discrimination.
    • Conduct regular audits to identify and mitigate biases in data and algorithms.
    • Use diverse datasets to prevent systemic inequalities from being embedded into AI systems.
    • Test and validate systems to help ensure fair treatment for all users.
    • Build AI solutions that actively address and reduce societal inequities.
    • Ensure compliance with laws and ethical norms to safeguard fairness and equality.
  • Protecting user data and respecting privacy rights is critical when designing and implementing AI systems.
    • Only collect the data necessary for the intended purpose.
    • Ensure sensitive data is anonymised to protect user identities.
    • Employ appropriate security measures to protect data from breaches or misuse.
    • Obtain explicit, informed consent for data collection and usage.
    • Align all practices with relevant privacy laws and regulations such as GDPR.
  • Accountability mechanisms ensure the responsible use of AI and the ability to address ethical challenges effectively.
    • Establish specialised teams or committees to oversee ethical compliance.
    • Conduct periodic reviews to verify adherence to ethical policies.
    • Define transparent processes to identify, address, and resolve issues related to AI systems.
    • Provide ongoing education for teams to remain informed on best practices and emerging ethical challenges.
    • Maintain accessible avenues for reporting concerns or suggesting improvements.
  • As technology and societal expectations evolve, so should the ethical frameworks surrounding AI.
    • Regularly review and update policies to address new challenges and opportunities in AI ethics.
    • Partner with global AI ethics communities to exchange insights and best practices.
    • Stay informed of advancements and risks to refine ethical approaches proactively.
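Several of the points above, such as conducting regular audits to identify and mitigate bias, can be made concrete with a simple fairness check. The sketch below computes per-group approval rates and a disparate impact ratio over a set of decisions; the sample data, group labels, and the 0.8 warning threshold are illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

def demographic_parity(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values below ~0.8 are a common informal warning threshold."""
    return min(rates.values()) / max(rates.values())

# Illustrative loan decisions: (demographic group, approved?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = demographic_parity(decisions)
ratio = disparate_impact(rates)
```

A real audit would use established fairness tooling and multiple metrics, but even a check this simple can surface a skew worth investigating.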

I recently looked at Certified Ethical Emerging Technologist (CEET), a certification from CertNexus. The certification marketplace is expanding as more professional bodies offer qualifications in AI. CertNexus also offer the Certified AI Practitioner (CAIP) certification.

I chose to focus on the Artificial Intelligence Governance Professional (AIGP) from the International Association of Privacy Professionals (IAPP) and both Certified ISO/IEC 42001 Lead Auditor and Certified ISO/IEC 42001 Lead Implementer from the Professional Evaluation and Certification Board (PECB).

In a rapidly evolving field, embedding ethics into AI development is not a constraint but a critical enabler of long-term trust and value.

Unacceptable use of AI

The European Union’s Artificial Intelligence Act (EU AI Act) prohibits certain AI practices that pose unacceptable risks. The law considers these practices to significantly undermine fundamental rights, distort human behaviour, or cause harm to individuals or society. The following examples align with the EU AI Act’s definition of unacceptable risk practices, which are strictly prohibited within the EU:

  • Deploying subliminal, manipulative, or deceptive techniques that impair informed decision-making and cause significant harm.
    • AI-driven adverts with subliminal cues that exploit consumers’ unconscious desires, leading to overspending or unhealthy consumption habits.
    • Undermining informed decision-making with virtual assistants that guide users toward specific political agendas or products.
    • Use of AI in computer games to manipulate player behaviours into making excessive in-game purchases.
  • Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behaviour in a way that causes or is likely to cause significant harm.
    • Use of AI to exploit low-income individuals by encouraging them to take out high-interest loans.
    • Using educational software to manipulate children’s choices or limit their learning potential based on stereotypes or biases.
    • Targeting elderly individuals with deceptive offers for unnecessary products or services, capitalising on cognitive impairments or isolation.
  • Using biometric systems to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.
    • AI systems profiling individuals based on facial features to infer their religious beliefs, leading to discriminatory treatment in public services or employment.
    • Systems that categorise by sexual orientation or political views, resulting in exclusion or targeting in advertising or societal participation.
  • Evaluating or classifying individuals or groups based on social behaviour or personal traits in ways that result in detrimental or unfavourable treatment unrelated to the original purpose of data collection:
    • Scoring individuals based on social media activity to determine access to housing, loans, or educational opportunities.
    • Monitoring employees’ behaviour and punishing them for perceived misalignment with organisational culture.
    • Allocating public services based on AI-generated social scores that disadvantage vulnerable groups.
  • Assessing or predicting an individual’s likelihood of committing criminal offences solely based on profiling or personality traits, except when augmenting human assessments grounded in objective and verifiable facts directly linked to criminal activity:
    • Identifying potential offenders based on socioeconomic background, place of residence, or prior associations.
    • Predictive policing models that disproportionately target ethnic minorities or marginalised communities, reinforcing existing biases.
    • Systems that flag individuals as risks based on psychological assessments rather than direct criminal behaviour.
  • Compiling facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
    • AI systems that harvest facial images from social media platforms without consent to create mass surveillance databases.
    • Using AI to collect and store facial images via in-store cameras without proper notification or consent.
  • The EU AI Act prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes in situations such as:
    • Scanning crowds at peaceful protests to identify participants for potential targeting.
    • Using remote biometric identification in shopping malls to monitor and track individuals in real time without evidence of criminal activity.
    • Deploying real-time facial recognition for general surveillance rather than specific investigations at public events.
  • However, exceptions to this prohibition include:
    • Searching for missing persons, abduction victims, or individuals subjected to human trafficking or sexual exploitation.
    • Preventing a substantial and imminent threat to life, such as an act of terrorism.
    • Identifying suspects in serious crimes, including murder, rape, armed robbery, drug and weapons trafficking, and organised crime.

The EU AI Act places compliance obligations on businesses that use AI systems, even if licensed by third-party providers. Businesses remain responsible for ensuring the AI complies with the law when offered to EU customers. Using unacceptable AI within the EU is prohibited, even if it is developed or procured from outside the EU.

AI legal frameworks in the UK and the EU

Artificial Intelligence (AI) is transforming industries and societies across the globe, driving the need for robust legal frameworks to govern its use. In the United Kingdom (UK) and the European Union (EU), AI governance is a blend of existing legislation, including data protection, consumer rights, and intellectual property, and in the EU, dedicated legislative initiatives like the EU AI Act.

This article is intended as a high-level overview to provide a general understanding of the regulatory landscape and the differing strategic approaches between the UK and the EU.

United Kingdom

The UK government’s AI white paper, published in 2023, outlines its vision for AI regulation, guided by five cross-sectoral principles:

  • Safety, Security, and Robustness – ensuring AI systems operate reliably and mitigate risks.
  • Transparency and Explainability – ensuring AI systems are understandable to users and regulators.
  • Fairness – promoting equitable outcomes and preventing bias in AI systems.
  • Accountability and Governance – establishing clear roles and responsibilities for AI developers and users.
  • Contestability and Redress – ensuring mechanisms for users to challenge AI-driven decisions.

Instead of enacting new AI-specific legislation, the UK relies on existing regulators, such as the Information Commissioner’s Office (ICO), to enforce these principles within their domains.

European Union

The EU AI Act adopts a risk-based regulatory framework, classifying AI systems into four categories:

  • Unacceptable Risk – AI practices deemed harmful and prohibited, such as social scoring by public authorities or real-time biometric identification in public spaces (except for narrowly defined security purposes).
  • High Risk – includes AI systems used in critical areas such as healthcare, recruitment, and law enforcement. These are subject to stringent requirements, including risk assessments, transparency, and human oversight.
  • Limited Risk – includes applications like chatbots or recommendation systems requiring minimal obligations such as transparency notices.
  • Minimal or No Risk – covers most AI systems, which remain largely unregulated.
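An organisation cataloguing its AI systems might record this classification internally along the following lines. The category names follow the Act, but the example use cases, the mapping, and the default-to-high-risk rule are illustrative assumptions, not a legal determination.

```python
from enum import Enum

class AIActRisk(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "strict obligations"           # e.g. recruitment, healthcare, law enforcement
    LIMITED = "transparency obligations"  # e.g. chatbots, recommender systems
    MINIMAL = "no additional obligations"

# Illustrative internal mapping of use cases to categories (an assumption,
# not legal advice -- each system needs its own assessment).
USE_CASE_RISK = {
    "social scoring by public authority": AIActRisk.UNACCEPTABLE,
    "cv screening for recruitment": AIActRisk.HIGH,
    "customer support chatbot": AIActRisk.LIMITED,
    "spam filter": AIActRisk.MINIMAL,
}

def classify(use_case: str) -> AIActRisk:
    """Look up a use case; default to HIGH pending a proper assessment."""
    return USE_CASE_RISK.get(use_case, AIActRisk.HIGH)
```

Defaulting an unknown use case to the high-risk category is a deliberately conservative choice: it forces a review before any obligations are assumed away.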

The EU AI Act complements the EU Ethics Guidelines for Trustworthy AI, emphasising human autonomy, fairness, and societal well-being. Violations can result in significant fines, mirroring the General Data Protection Regulation (GDPR) enforcement model.

Leveraging existing laws

Both the UK and EU leverage existing laws to address challenges associated with AI:

  • Data Protection Laws – the EU’s General Data Protection Regulation (GDPR) and the UK’s Data Protection Act 2018 govern personal data collection, processing, and storage, which is integral to AI systems. Compliance with these laws is essential for ensuring responsible use of AI technology.
  • Consumer Protection Laws – AI-powered products and services must comply with laws like the EU’s Unfair Commercial Practices Directive and the UK’s Consumer Rights Act 2015 to prevent deceptive practices.
  • Intellectual Property (IP) Laws – governed by the UK Copyright, Designs and Patents Act 1988 and the EU Copyright Directive. Key challenges include:
    • Use of copyrighted materials to train AI models.
    • Ownership of AI-generated content.
    • Accountability if AI-generated content infringes on existing copyright.

As AI continues to evolve, the regulatory approaches of the UK and EU will shape not only compliance expectations but also how innovation unfolds within ethical and legal boundaries. While the UK relies on a flexible, principles-based model, the EU has introduced a comprehensive legal framework through the EU AI Act. Both regions aim to balance innovation with the protection of fundamental rights, addressing the ethical, legal, and societal challenges posed by emerging AI technologies.

Facing AI challenges

In a previous article, I mentioned the need to conduct Artificial Intelligence Impact Assessments (AIIA). Businesses are accountable for their use of AI, even when it is developed by third parties. Ethical standards and legal requirements are evolving in this direction, and this accountability includes:

  • Essential due diligence when purchasing new software.
  • Maintaining an inventory of software and its use of AI.
  • Understanding how the AI works and its impact on stakeholders.
  • Being responsible for the outcomes.

These are core requirements for the implementation of ISO 42001. AI can impact individuals, groups of individuals, and society as a whole. The following explores the impact in more detail.

AI can disrupt personal lives in ways that raise ethical, psychological, and practical concerns. These issues often stem from the misuse of data and lack of transparency. As I mentioned in an earlier article, the risk is that AI turns ‘Computer says NO’ into ‘Computer says NO on steroids’: automated decisions made without explanation or empathy, and with limited recourse.

  • Many systems collect and process personal data without user consent; even with widespread privacy legislation, individuals have little control over their personal information, and AI only exacerbates this.
  • Automation continues to replace manufacturing, retail, and transportation jobs, leaving many individuals without employment or requiring them to acquire new skills in a rapidly changing economy.
  • Personal decisions, such as loan approvals or job applications, may be influenced by biased AI systems that unfairly favour specific demographics and perpetuate existing discrimination.
  • Dependence on AI for everyday tasks, such as navigation, time management, drafting of documents and communication, may reduce critical thinking and problem-solving skills over time. “We believe that when you create a machine to do the work of a man, you take something away from the man.” – Star Trek: Insurrection

Under GDPR, people have the right not to be subjected to decisions based solely on automated processing, the right to a meaningful explanation of any decisions made, and the right to contest the outcome. These rights extend to the use of artificial intelligence and its impact on individual rights. In practice, the response to a request for a meaningful explanation is often limited to “we put the information into the computer and it gives us the answer”. Emerging AI legislation, including the EU AI Act, strengthens these rights.

AI poses profound challenges to society due to its scale and potential misuse:

  • AI-generated fake images, videos, and news erode trust in media, businesses, institutions, and elections.
  • Governments and corporations increasingly use AI for mass surveillance, raising ethical issues around civil liberties and human rights, such as discriminatory misuse of facial recognition.
  • Automation can disproportionately impact low-skilled workers, leading to unemployment and widening economic gaps as industries like retail and manufacturing transform rapidly. Businesses need to anticipate these changes and develop strategies for workforce transition, reskilling, and support.
  • AI decisions in critical areas such as healthcare and law enforcement prompt questions about responsibility when errors or harm occur. The EU AI Act includes an unacceptable risk category for banned use and a high-risk category with increased safeguarding requirements.

Adopting AI in business brings ethical, compliance, and operational risks, and a failure to address these can lead to financial, reputational, or legal repercussions:

  • AI systems can be vulnerable to hacking, adversarial attacks, and data breaches like any other software system. Manipulated AI outputs can compromise decision-making and business operations.
  • AI-powered recruitment, lending, and advertising tools may perpetuate biases, exposing businesses to reputational damage and legal liabilities.
  • Evolving laws, like the EU AI Act, require legal and technical expertise. Failing to comply risks fines and operational disruptions.
  • Misuse or failures in AI can harm trust. Biased recommendations or faulty product suggestions can alienate customers and damage corporate brands.
  • Heavy reliance on AI systems risks operational disruptions from bugs, data inaccuracies, or cyberattacks. Businesses must maintain contingency plans that cover AI as well as other software.
  • AI requires significant investment in infrastructure, training, and maintenance, often challenging businesses to demonstrate a return on investment.
  • AI-generated content raises questions of ownership and copyright, creating potential disputes over AI-driven designs or innovations.

Adopting AI-based software requires a clear understanding of its impacts to ensure responsible use and to avoid biases, privacy issues, and harmful inaccuracies that raise ethical and accountability concerns. The societal and economic effects, like job loss and trust erosion, highlight the need for proactive risk management.

By acknowledging these challenges, businesses and policymakers can shape AI systems that balance innovation with fairness, accountability, and long-term trust.