Concerns among professionals in the AI space

I am pleased to report that I have completed the next stage of my journey to become an AI subject matter expert: I passed the ISO 42001 Lead Auditor exam. Although I currently qualify as a PECB Certified ISO 42001 Provisional Auditor, I can upgrade to Auditor later this year.

This journey has included attending seminars, reading news articles, talking with other professionals, and generally trying to stay informed and remain current in a rapidly evolving field. This article summarises the concerns professionals raise most often; understanding them is a core part of delivering effective governance of artificial intelligence. The extent to which these risks already exist and unfold daily is open to debate and beyond the scope of this article. I leave that with you to consider.

Misinformation and disinformation

AI can create and amplify false or misleading information at an unprecedented scale, threatening trust in media and democratic institutions.

  • AI models can generate thousands of fake articles, social media posts, or reviews in seconds, tailored to spread specific narratives, making it easier for bad actors to manipulate public opinion.
  • AI can create realistic videos or audio clips of individuals saying or doing things they never actually did for purposes such as blackmail, propaganda, or to discredit public figures.
  • AI-powered automated bots can hijack social media platforms, amplifying false narratives or silencing dissenting voices.
  • As AI-generated content becomes more challenging to distinguish from genuine material, people may lose trust in legitimate sources of information, leading to societal instability.
  • State-sponsored actors could leverage AI to influence elections, destabilise economies, or sow discord among populations.

Bias and discrimination

AI systems are only as unbiased as their training data. Without careful oversight, they can perpetuate or even exacerbate discrimination.

  • AI learns from historical data, which often reflects societal inequalities. Recruitment algorithms, for example, trained on biased data might favour specific demographics over others.
  • Without transparency in AI decision-making processes, it is challenging to identify and address discriminatory outcomes.
  • AI tools and solutions developed by teams with limited diversity can lead to blind spots in understanding and addressing diverse needs.
  • Companies deploying biased AI systems can face reputational damage, lawsuits, and regulatory scrutiny.
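One check an audit of a recruitment algorithm might run is a comparison of selection rates across demographic groups. The sketch below is a minimal, hypothetical illustration: the audit data, group names, and the 0.2 warning threshold are invented for the example and are not drawn from any standard or regulation.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome rate for each demographic group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True when the system returned a favourable outcome.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Illustrative audit data: (demographic group, was the candidate shortlisted?)
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(audit_log)
gap = parity_gap(rates)
print(rates)
print(f"parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold only, not a legal standard
    print("WARNING: selection rates diverge across groups; investigate for bias")
```

A single metric like this cannot prove a system is fair, but a large gap is a concrete, reportable signal that the kind of audit described above should dig deeper.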

Job displacement and economic impact

AI is transforming the job market, raising concerns about unemployment and economic inequality.

  • Routine manufacturing, logistics, customer service, and transportation jobs are highly susceptible to automation. Self-driving vehicles could replace millions of drivers, for example.
  • Transitioning displaced workers into new roles requires significant investment in training programmes and education. The lag between technological advancement and workforce adaptation is an important concern.
  • AI may disproportionately benefit those who own and develop the technology, widening the gap between low- and high-income groups.
  • While AI boosts productivity, the economic benefits may not translate into job creation, potentially leaving millions without viable employment.

Privacy

AI systems thrive on data, but this dependency raises concerns about privacy violations, unethical data usage, and mass surveillance.

  • Companies and governments could collect vast amounts of personal data to train AI models without explicit consent.
  • AI-powered surveillance tools like facial recognition cameras can track movements and activities, often infringing on civil liberties.
  • The centralisation of data for AI training can increase the risk of breaches, exposing sensitive information to hackers.
  • Using AI to analyse and link disparate data sources can make it nearly impossible for individuals to remain anonymous.
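Two mitigations commonly discussed for these risks are data minimisation and pseudonymisation. The sketch below is a hypothetical illustration of both: the record fields and the key are invented for the example, and under GDPR pseudonymised data still counts as personal data, so this reduces rather than removes the risk.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative only

def pseudonymise(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Records can still be linked to each other, but not back to the
    person without the key -- pseudonymisation, not anonymisation.
    """
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields necessary for the stated purpose."""
    return {k: v for k, v in record.items() if k in allowed_fields}

raw = {"user_id": "alice@example.com", "age": 34,
       "postcode": "AB1 2CD", "purchase_total": 59.99}

# Suppose the stated purpose only needs identity linkage and spend.
cleaned = minimise(raw, allowed_fields={"user_id", "purchase_total"})
cleaned["user_id"] = pseudonymise(cleaned["user_id"])
print(cleaned)
```

The last bullet above is exactly why this is only partial protection: linking pseudonymised records with other data sources can still re-identify individuals.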

Loss of control

As AI systems grow more sophisticated, there is increasing concern about their autonomy and the potential for catastrophic misuse.

  • Advanced AI systems may act in ways their creators did not anticipate, potentially causing harm in critical areas such as healthcare or transportation.
  • AI-driven weapons could operate without human intervention, raising ethical and strategic dilemmas, including the potential for accidental escalation of conflicts.
  • If AI were to surpass human intelligence, it might prioritise its own objectives over the well-being of humanity, leading to existential threats.
  • Many AI algorithms are complex and opaque, making it challenging to understand decision-making processes. This lack of transparency can lead to dangerous or harmful outcomes.
  • Governments and organisations struggle to keep up with the pace of AI development, creating a gap in oversight that could allow harmful applications to flourish.

With international cooperation, proactive regulation, ethical development, and public awareness, we can collectively address these risks and shape a safer, more trustworthy AI future.

Ethical considerations of AI

The ethical development, deployment, and use of artificial intelligence are essential to ensure responsible innovation, fairness, trustworthiness, and societal benefit.

  • When developing AI systems, it is crucial to prioritise human well-being, autonomy, and dignity.
    • AI should enhance user capabilities and decision-making processes.
    • Design systems to accommodate people of all abilities and demographics.
    • Provide clear, understandable explanations of AI functionality and outcomes.
    • Incorporate mechanisms to prevent harm, misuse, or unintended negative consequences.
    • Regularly incorporate user feedback to improve AI systems and address potential concerns.
  • Transparency builds trust and understanding between users and AI systems, making it essential to communicate AI processes.
    • Users should always be aware of when they interact with AI technologies.
    • Provide detailed yet understandable explanations of how the AI operates and makes decisions.
    • Share potential risks, limitations, and intended uses of AI systems openly with stakeholders.
    • Be transparent about how AI models collect, use, and safeguard data.
    • Maintain an open dialogue with users, researchers, and regulators to ensure ongoing alignment with ethical standards.
  • Develop and maintain AI systems to promote equitable outcomes and avoid discrimination.
    • Conduct regular audits to identify and mitigate biases in data and algorithms.
    • Use diverse datasets to prevent systemic inequalities from being embedded into AI systems.
    • Test and validate systems to guarantee fair treatment for all users.
    • Build AI solutions that actively address and reduce societal inequities.
    • Ensure compliance with laws and ethical norms to safeguard fairness and equality.
  • Protecting user data and respecting privacy rights is critical when designing and implementing AI systems.
    • Only collect the data necessary for the intended purpose.
    • Ensure sensitive data is anonymised to protect user identities.
    • Employ appropriate security measures to protect data from breaches or misuse.
    • Obtain explicit, informed consent for data collection and usage.
    • Align all practices with relevant privacy laws and regulations such as GDPR.
  • Accountability mechanisms ensure the responsible use of AI and the ability to address ethical challenges effectively.
    • Establish specialised teams or committees to oversee ethical compliance.
    • Conduct periodic reviews to verify adherence to ethical policies.
    • Define transparent processes to identify, address, and resolve issues related to AI systems.
    • Provide ongoing education for teams to remain informed on best practices and emerging ethical challenges.
    • Maintain accessible avenues for reporting concerns or suggesting improvements.
  • As technology and societal expectations evolve, so should the ethical frameworks surrounding AI.
    • Regularly review and update policies to address new challenges and opportunities in AI ethics.
    • Partner with global AI ethics communities to exchange insights and best practices.
    • Stay informed of advancements and risks to refine ethical approaches proactively.

I recently looked at the Certified Ethical Emerging Technologist (CEET) certification from CertNexus. The certification marketplace is expanding as more professional bodies offer qualifications in AI. CertNexus also offers the Certified AI Practitioner (CAIP) certification.

I chose to focus on the Artificial Intelligence Governance Professional (AIGP) from the International Association of Privacy Professionals (IAPP) and both Certified ISO/IEC 42001 Lead Auditor and Certified ISO/IEC 42001 Lead Implementer from the Professional Evaluation and Certification Board (PECB).

In a rapidly evolving field, embedding ethics into AI development is not a constraint but a critical enabler of long-term trust and value.

Unacceptable use of AI

The European Union’s Artificial Intelligence Act (EU AI Act) prohibits certain AI practices that pose unacceptable risks. The law considers these practices to significantly undermine fundamental rights, distort human behaviour, or cause harm to individuals or society. The following examples align with the EU AI Act’s definition of unacceptable risk practices, which are strictly prohibited within the EU:

  • Deploying subliminal, manipulative, or deceptive techniques that impair informed decision-making and cause significant harm.
    • AI-driven adverts with subliminal cues that exploit consumers’ unconscious desires, leading to overspending or unhealthy consumption habits.
    • Undermining informed decision-making with virtual assistants that guide users toward specific political agendas or products.
    • Use of AI in computer games to manipulate player behaviours into making excessive in-game purchases.
  • Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behaviour in a way that causes or is likely to cause significant harm.
    • Use of AI to exploit low-income individuals by encouraging them to take out high-interest loans.
    • Using educational software to manipulate children’s choices or limit their learning potential based on stereotypes or biases.
    • Targeting elderly individuals with deceptive offers for unnecessary products or services, capitalising on cognitive impairments or isolation.
  • Using biometric systems to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.
    • AI systems profiling individuals based on facial features to infer their religious beliefs, leading to discriminatory treatment in public services or employment.
    • Systems that categorise by sexual orientation or political views, resulting in exclusion or targeting in advertising or societal participation.
  • Evaluating or classifying individuals or groups based on social behaviour or personal traits in ways that result in detrimental or unfavourable treatment unrelated to the original purpose of data collection:
    • Scoring individuals based on social media activity to determine access to housing, loans, or educational opportunities.
    • Monitoring employees’ behaviour and punishing them for perceived misalignment with organisational culture.
    • Allocating public services based on AI-generated social scores that disadvantage vulnerable groups.
  • Assessing or predicting an individual’s likelihood of committing criminal offences solely based on profiling or personality traits, except when augmenting human assessments grounded in objective and verifiable facts directly linked to criminal activity:
    • Identifying potential offenders based on socioeconomic background, place of residence, or prior associations.
    • Predictive policing models disproportionately target ethnic minorities or marginalised communities, reinforcing existing biases.
    • Systems that flag individuals as risks based on psychological assessments rather than direct criminal behaviour.
  • Compiling facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
    • AI systems that harvest facial images from social media platforms without consent to create mass surveillance databases.
    • Using AI to collect and store facial images captured by in-store cameras without proper notification or consent.
  • The EU AI Act prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes in situations such as:
    • Scanning crowds at peaceful protests to identify participants for potential targeting.
    • Using remote biometric identification in shopping malls to monitor and track individuals in real time without evidence of criminal activity.
    • Deploying real-time facial recognition for general surveillance rather than specific investigations at public events.
  • However, exceptions to this prohibition include:
    • Searching for missing persons, abduction victims, or individuals subjected to human trafficking or sexual exploitation.
    • Preventing a substantial and imminent threat to life, such as an act of terrorism.
    • Identifying suspects in serious crimes, including murder, rape, armed robbery, drug and weapons trafficking, and organised crime.

The EU AI Act places compliance obligations on businesses that use AI systems, even if licensed by third-party providers. Businesses remain responsible for ensuring the AI complies with the law when offered to EU customers. Using unacceptable AI within the EU is prohibited, even if it is developed or procured from outside the EU.

AI legal frameworks in the UK and the EU

Artificial Intelligence (AI) is transforming industries and societies across the globe, driving the need for robust legal frameworks to govern its use. In the United Kingdom (UK) and the European Union (EU), AI governance is a blend of existing legislation, including data protection, consumer rights, and intellectual property, and in the EU, dedicated legislative initiatives like the EU AI Act.

This article is intended as a high-level overview to provide a general understanding of the regulatory landscape and the differing strategic approaches between the UK and the EU.

United Kingdom

The UK government’s AI white paper, published in 2023, outlines its vision for AI regulation, guided by five cross-sectoral principles:

  • Safety, Security, and Robustness – ensuring AI systems operate reliably and mitigate risks.
  • Transparency and Explainability – making AI systems understandable to users and regulators.
  • Fairness – promoting equitable outcomes and preventing bias in AI systems.
  • Accountability and Governance – establishing clear roles and responsibilities for AI developers and users.
  • Contestability and Redress – ensuring mechanisms for users to challenge AI-driven decisions.

Instead of enacting new AI-specific legislation, the UK relies on existing regulators, such as the Information Commissioner’s Office (ICO), to enforce these principles within their domains.

European Union

The EU AI Act adopts a risk-based regulatory framework, classifying AI systems into four categories:

  • Unacceptable Risk – AI practices deemed harmful and prohibited outright, such as social scoring by public authorities or real-time biometric identification in public spaces (except for narrowly defined security purposes).
  • High Risk – AI systems used in critical areas such as healthcare, recruitment, and law enforcement, subject to stringent requirements including risk assessments, transparency, and human oversight.
  • Limited Risk – applications such as chatbots or recommendation systems, subject to minimal obligations such as transparency notices.
  • Minimal or No Risk – the majority of AI systems, which are largely unregulated.
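The four tiers above can be pictured as a simple triage lookup. The sketch below is a toy illustration of that classification logic only: the use-case names and obligation summaries are my own paraphrases, not a legal determination, and real classification under the Act depends on the specific system and its annexes.

```python
# Toy triage of AI use cases into the EU AI Act's four risk tiers.
# Categories and obligations are illustrative paraphrases, not legal advice.
RISK_TIERS = {
    "social_scoring_by_public_authority": ("unacceptable", "prohibited"),
    "cv_screening_for_recruitment":       ("high", "risk assessment, human oversight, logging"),
    "customer_service_chatbot":           ("limited", "transparency notice to users"),
    "spam_filter":                        ("minimal", "no specific obligations"),
}

def triage(use_case: str) -> str:
    """Return a one-line summary of a use case's tier and obligations."""
    tier, obligations = RISK_TIERS.get(
        use_case, ("unknown", "assess against the Act's criteria"))
    return f"{use_case}: {tier} risk -> {obligations}"

for case in RISK_TIERS:
    print(triage(case))
```

The default branch matters in practice: a use case absent from any internal register should trigger a fresh assessment, not a silent assumption of minimal risk.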

The EU AI Act complements the EU Ethics Guidelines for Trustworthy AI, emphasising human autonomy, fairness, and societal well-being. Violations can result in significant fines, mirroring the General Data Protection Regulation (GDPR) enforcement model.

Leveraging existing laws

Both the UK and EU leverage existing laws to address challenges associated with AI:

  • Data Protection Laws – the EU’s General Data Protection Regulation (GDPR) and the UK’s Data Protection Act 2018 govern the collection, processing, and storage of personal data, activities integral to AI systems. Compliance with these laws is essential for ensuring responsible use of AI technology.
  • Consumer Protection Laws – AI-powered products and services must comply with laws like the EU’s Unfair Commercial Practices Directive and the UK’s Consumer Rights Act 2015 to prevent deceptive practices.
  • Intellectual Property (IP) Laws – this is governed by the UK Copyright, Designs and Patents Act 1988 and the EU Copyright Directive. Key challenges include:
    • Use of copyrighted materials to train AI models.
    • Ownership of AI-generated content.
    • Accountability if AI-generated content infringes on existing copyright.

As AI continues to evolve, the regulatory approaches of the UK and EU will shape not only compliance expectations but also how innovation unfolds within ethical and legal boundaries. While the UK relies on a flexible, principles-based model, the EU has introduced a comprehensive legal framework through the EU AI Act. Both regions aim to balance innovation with the protection of fundamental rights, addressing the ethical, legal, and societal challenges posed by emerging AI technologies.