The Emergence of AI Insurance

Artificial intelligence is now embedded in decisions across finance, health, retail, and public services. With the upside comes exposure: model errors, bias, data misuse, and unpredictable failures. Insurers are beginning to write AI-specific cover, but as with car or home insurance, protection won’t be automatic. To secure a policy that is both affordable and viable for insurers, organisations will need to show reasonable precautions through robust governance, a living AI policy, and evidence that requirements are followed in practice.

The idea of insuring against AI risks may sound futuristic, but real-world harms are no longer hypothetical. From credit scoring to recruitment and medical imaging, failures are already creating measurable impacts. This shift explains why insurers are now exploring AI-specific products, and why businesses must increasingly demonstrate that they are managing these risks responsibly.

Lessons from other insurance products

The insurance industry has always operated on the principle of reasonable precautions. A policyholder must take steps to protect themselves and their property; otherwise, the insurer may decline a claim. These expectations are explicit in policy terms and not left to interpretation.

  • Car insurance – policyholders are expected to lock the doors, remove the keys from the ignition, and avoid leaving valuables visible. If a car is stolen while the engine is left running or the keys are inside, insurers will reject claims on the grounds of negligence.
  • Home insurance – insurers require homeowners to close windows and lock all external doors when leaving the property. Many policies also specify the use of approved locks. If entry was gained through an open window or an unlocked door, theft claims can be denied. Some insurers also mandate working smoke alarms as a condition for fire coverage.
  • Travel insurance – travellers must take reasonable care of their belongings, such as keeping passports and valuables on their person or in a hotel safe. Losses that occur because items were left unattended, for example, on a sunbed at the beach, are commonly excluded.
  • Health insurance – policyholders must accurately disclose pre-existing medical conditions when applying for cover. Failure to disclose such information may result in claims being refused or the policy being voided entirely.

These examples illustrate a clear pattern: insurance is not designed to cover reckless or preventable losses. Instead, it assumes that the insured party will act responsibly, follow mandatory controls, and reduce the likelihood of avoidable claims.

Economic viability of AI insurance

For AI insurance to succeed, the policy must balance two interests: it must be affordable and worthwhile for the business while remaining sustainable and profitable for the insurer. Without this balance, either premiums will be prohibitively high, or insurers will withdraw cover altogether.

For businesses:

  • Predictable costs – Companies want insurance to protect against catastrophic or unexpected AI failures, not to replace routine risk management. If premiums are too high, businesses will either self-insure (absorbing risks internally) or avoid purchasing cover altogether.
  • Fair pricing for good governance – Businesses that can demonstrate alignment with ISO/IEC 42001, robust AI policies, and effective lifecycle controls should be rewarded with lower premiums. This mirrors the way homes with burglar alarms or cars with immobilisers qualify for discounts.
  • Avoidance of hidden gaps – Businesses need clarity on what is excluded. “Silent AI exclusions” in cyber, professional indemnity, or liability cover can lead to gaps that make coverage ineffective. A policy is only economically viable if it provides genuine protection against the risks the company faces.
  • Support for compliance – As AI regulations tighten (e.g., under the EU AI Act), insurance aligned with those requirements helps companies offset compliance costs by reducing the financial impact of failures or incidents. This strengthens the value proposition of cover.

For insurers:

  • Risk selection and underwriting discipline – Insurers need assurance that customers are acting responsibly. Without this, claims could spiral in frequency and size, making AI policies unprofitable. Structured frameworks like ISO/IEC 42001 or mappings to the NIST AI Risk Management Framework provide underwriters with measurable checkpoints.
  • Loss prevention through controls – By requiring baseline precautions, such as model validation, bias testing, and incident response plans, insurers reduce the likelihood of avoidable losses. This preserves the claims ratio and ensures premiums reflect true residual risk rather than gross exposure.
  • Pricing for uncertainty – AI carries novel risks such as systemic bias, cascading errors, or IP infringement at scale. Insurers must price in these uncertainties while avoiding premiums so high that they deter buyers. The only way to narrow the pricing gap is to insist on strong governance evidence from policyholders.
  • Limiting systemic exposure – Some AI failures could create correlated losses across many policyholders (e.g., reliance on the same third-party model provider). To remain viable, insurers may cap aggregate exposures, share risk through reinsurance, or demand that businesses diversify suppliers.

The shared value proposition rests on balance. For businesses, AI insurance must deliver affordable premiums, meaningful protection, and the reassurance that one failure will not destabilise financial resilience. For insurers, it must reduce uncertainty, limit avoidable claims, and ensure that payouts respond to unforeseen incidents rather than preventable negligence. The bridge between these interests is demonstrable governance. An AI policy aligned with ISO/IEC 42001, not just written but embedded across the organisation, provides the evidence of responsibility that makes the economics viable. It is this visible governance that enables insurers to underwrite sustainably and allows businesses to access cover with confidence.

Artificial Intelligence Impact Assessment (AIIA)

Insurance ultimately exists to protect people, not just balance sheets. For AI, that means insurers will increasingly look beyond technical controls to the human and societal impacts of algorithmic decisions. An Artificial Intelligence Impact Assessment provides the missing bridge:

Individuals:

  • How are people affected if the AI system fails?
  • Are there safeguards against discrimination, wrongful denial of services, or reputational harm?
  • Insurers will expect evidence of user-centric risk analysis and redress mechanisms.

Groups:

  • Does the AI system disadvantage certain demographics, communities, or professions?
  • Could biases compound existing inequities?
  • Demonstrating proactive bias testing and mitigation will be essential for cover.

Society:

  • Could widespread adoption of the system cause systemic harm (e.g., destabilising financial markets, spreading misinformation, or eroding trust)?
  • Insurers may cap exposure to such risks, but businesses must still show they have mapped and mitigated them.

The business:

  • Beyond compliance, what impact would a failure have on brand, trust, and financial resilience?
  • AIIAs document these scenarios and demonstrate to insurers that risks are both understood and actively managed.

By embedding the AIIA into governance, businesses not only strengthen their insurance case but also align with regulators, stakeholders, and the broader social licence to operate.

Clearly defined AI policy

Insurers will expect companies to have a formal, leadership-approved AI policy that sets the tone for safe and responsible use of artificial intelligence. The policy should not only exist on paper but must be embedded into daily practice, communicated across the business, and reinforced by training and accountability.

  • Leadership-approved and organisation-wide.
  • Sets out principles of safe, lawful, and ethical AI use.
  • Communicated to staff and backed by regular training.
  • Assigns clear roles and responsibilities for oversight.

Defined scope and AI inventory

An organisation cannot manage what it does not know it has. Insurers will look for a clear inventory of AI systems that captures ownership, purpose, and risk classification. This inventory should help demonstrate that the business understands which systems are critical, which are high-risk, and how external stakeholder or regulatory expectations are being addressed.

  • Comprehensive list of AI systems in use.
  • Ownership and accountability documented.
  • Risk categories assigned (low, medium, high).
  • Stakeholder and regulatory requirements considered.
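In practice, the inventory can start very simply. The sketch below is a hypothetical illustration, assuming a small Python record per system; the field names and the three risk tiers are examples rather than anything prescribed by ISO/IEC 42001 or an insurer:

    # Hypothetical sketch of a single AI inventory entry.
    # Field names and risk tiers are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str                  # system or model identifier
        purpose: str               # business purpose of the system
        owner: str                 # accountable individual or team
        risk_tier: str             # "low", "medium", or "high"
        regulations: list = field(default_factory=list)  # applicable legal context
        last_reviewed: str = ""    # date of the last governance review

    # Example inventory with one high-risk system
    inventory = [
        AISystemRecord(
            name="credit-scoring-v3",
            purpose="Consumer credit risk scoring",
            owner="Head of Credit Risk",
            risk_tier="high",
            regulations=["UK GDPR", "Equality Act 2010"],
            last_reviewed="2025-01-15",
        ),
    ]

    # A quick governance query an underwriter might expect you to answer
    high_risk_systems = [s.name for s in inventory if s.risk_tier == "high"]

Even a spreadsheet with the same columns achieves the goal; what matters is that ownership and risk classification are recorded somewhere that can be reviewed and audited.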

Structured risk assessment

Insurance is built on understanding risk, and insurers will expect businesses to do the same for their AI systems. Companies must carry out structured assessments of legal, ethical, operational, and security risks and ensure these are revisited whenever a model is retrained, repurposed, or deployed in a new context.

  • Formal AI risk assessments completed and updated.
  • Legal, ethical, operational, and security risks identified.
  • High-risk systems subject to enhanced safeguards.
  • Regular reviews triggered by system changes or retraining.
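One way to make the “revisit on change” expectation concrete is to encode the review triggers. The sketch below is illustrative only: the trigger events and the annual review cycle are assumptions, not requirements taken from any standard or insurer:

    # Hypothetical sketch: deciding when a risk assessment must be repeated.
    # Trigger events and the 12-month cycle are illustrative assumptions.
    from datetime import date, timedelta

    REVIEW_TRIGGERS = {"retrained", "repurposed", "new_deployment_context"}
    MAX_REVIEW_AGE = timedelta(days=365)

    def reassessment_due(last_assessed, recent_events):
        """Return True if the structured risk assessment should be repeated."""
        overdue = (date.today() - last_assessed) > MAX_REVIEW_AGE
        triggered = bool(set(recent_events) & REVIEW_TRIGGERS)
        return overdue or triggered

    # Example: a retrained model is flagged for reassessment regardless of age
    print(reassessment_due(date(2025, 6, 1), {"retrained"}))  # True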

The Artificial Intelligence Impact Assessment (AIIA) covered earlier complements these assessments by addressing risks to individuals, groups, society, and the business.

Operational resilience

Insurers will expect to see strong operational controls that span the entire lifecycle of AI systems, from design to retirement. This means having processes for responsible development, secure data handling, rigorous validation, controlled retraining, and clear points where human judgment is required.

  • Documented lifecycle processes for design, training, deployment, monitoring, and retirement.
  • Strong data governance (provenance, rights, privacy, and security).
  • Validation for accuracy, bias, robustness, and reliability.
  • Change control processes and retraining safeguards.
  • Clear rules on human oversight and intervention.
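To illustrate the kind of validation evidence this implies, the sketch below computes a single, widely used fairness indicator: the ratio of selection rates between two groups. The data and the 0.8 benchmark (the informal “four-fifths rule”) are assumptions for the example; a real validation suite would cover accuracy, robustness, and several bias measures, not just one:

    # Hypothetical sketch of one bias check: comparing selection rates
    # across two groups. The data and 0.8 benchmark are illustrative only.

    def selection_rate(decisions):
        """Proportion of positive decisions (1 = approved, 0 = declined)."""
        return sum(decisions) / len(decisions)

    def disparate_impact_ratio(group_a, group_b):
        """Ratio of the lower selection rate to the higher one."""
        rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

    ratio = disparate_impact_ratio(group_a, group_b)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 benchmark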

Monitoring and incident readiness

Continuous monitoring is critical for keeping AI systems safe and reliable. Insurers will expect organisations to track performance, fairness, and compliance on an ongoing basis and to have an incident response plan ready for when things go wrong. That plan must be tested, so staff know how to act in real scenarios.

  • Ongoing monitoring for accuracy, bias, and compliance.
  • Documented incident response plan for AI-specific failures.
  • Defined escalation procedures and responsibilities.
  • Evidence of rehearsals or drills of the response plan.
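A monitoring regime only reassures an insurer if the escalation point is defined in advance. The sketch below is a simplified illustration: the metric names, thresholds, and escalation messages are assumptions, and in practice alerts would route to the named incident owner rather than print to a console:

    # Hypothetical sketch of monitoring with pre-agreed escalation thresholds.
    # Metric names and limits are illustrative assumptions only.

    ACCURACY_FLOOR = 0.90     # assumed minimum acceptable accuracy
    DRIFT_CEILING = 0.15      # assumed maximum tolerated drift score

    def check_and_escalate(metrics):
        """Compare live metrics to agreed limits; return any incidents raised."""
        incidents = []
        if metrics.get("accuracy", 1.0) < ACCURACY_FLOOR:
            incidents.append("Accuracy below floor: invoke incident response plan")
        if metrics.get("drift_score", 0.0) > DRIFT_CEILING:
            incidents.append("Data drift above ceiling: schedule revalidation")
        return incidents

    # Example monitoring snapshot that breaches both limits
    for incident in check_and_escalate({"accuracy": 0.87, "drift_score": 0.21}):
        print(incident)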

Evidence and documentation

In disputes or claims, evidence matters. Insurers will expect companies to maintain detailed records that prove risks were managed responsibly. This includes documentation of risk assessments, testing results, monitoring reports, corrective actions, and audit trails of significant model changes.

  • Documented records of all risk and control activities.
  • Audit trails for model changes and decision-making.
  • Retention of evidence to support claims or investigations.
  • Corrective actions recorded and tracked to closure.
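As one illustration of a tamper-evident audit trail for model changes, the sketch below chains each record to the hash of the previous entry. The field names and the hash-chaining approach are assumptions made for the example; the essential point is that changes are recorded, attributable, and difficult to alter after the fact:

    # Hypothetical sketch of an append-only, hash-chained model-change log.
    # Field names and the chaining scheme are illustrative assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def record_change(log, system, change, approved_by):
        """Append a model-change record chained to the previous entry's hash."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "change": change,
            "approved_by": approved_by,
            "prev_hash": log[-1]["entry_hash"] if log else "",
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        return entry

    audit_log = []
    record_change(audit_log, "credit-scoring-v3", "Retrained on Q1 2025 data", "Head of Credit Risk")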

Leadership and accountability

Responsible AI cannot be delegated entirely to technical teams. Insurers will expect senior leadership to be visibly involved in oversight, ensuring that governance is a cultural priority. Named individuals must be accountable for AI risks, and there should be evidence that leaders actively drive a culture of responsibility.

  • Senior management visibly engaged in AI oversight.
  • Named individuals accountable for risk and compliance.
  • Oversight committees or equivalent governance structures.
  • A culture of responsibility supported from the top down.

Regulatory alignment

Compliance is no longer optional; it is a condition for trust. Insurers will expect companies to demonstrate that their AI systems meet legal obligations, such as data protection, equality laws, and sector-specific requirements. Businesses should also show they are prepared to adapt quickly as new regulations emerge.

  • Systems designed to comply with data protection and discrimination laws.
  • Alignment with industry-specific regulations.
  • Processes in place to track regulatory change.
  • Adaptation mechanisms to meet new legal requirements.

Competence and training

Even the best policies fail if staff lack the knowledge to follow them. Insurers will expect evidence that employees at all levels are trained to use AI responsibly. This includes broad AI literacy for general staff and deeper, role-specific training for developers, auditors, and risk managers.

  • AI literacy training for all employees.
  • Specialist training for developers, auditors, and oversight roles.
  • Mandatory training tied to AI policy requirements.
  • Records of training completion and effectiveness.

Ongoing review and improvement

AI governance must evolve with the technology itself. Insurers will expect companies to review their systems regularly, track and close corrective actions, and commit to continual improvement. This shows that businesses are not just meeting today’s standards but preparing for tomorrow’s risks.

  • Regular reviews of AI risks, incidents, and performance.
  • Documented management reviews with action tracking.
  • Corrective actions logged and closed.
  • Commitment to continual improvement and adaptation.

From risk transfer to building trust

Insurers will expect companies to show that they are treating AI with the same care as any other business-critical system. Strong policies, structured risk management, lifecycle controls, and a culture of responsibility will be seen as the “locked doors and smoke alarms” of AI, essential for affordable and dependable coverage.

AI insurance is not just about transferring risk; it is about building trust in a technology that is both powerful and unpredictable. Just as locked doors and smoke alarms became shorthand for responsible home ownership, AI policies, impact assessments, and lifecycle controls will become the baseline for responsible AI adoption. Insurers, businesses, regulators, and society all stand to benefit if these foundations are laid early.

Protecting Human Capability in an AI World

ISO/IEC 42001:2023 places a duty on organisations adopting AI systems to understand the impact on individuals, groups of individuals, and society, both positive and negative. Artificial Intelligence Impact Assessments (AIIA) are not a formality: they exist to protect not only fairness and compliance, but human capability. This responsibility matters because technological erosion rarely happens suddenly. It doesn’t announce itself; it arrives quietly through comfort and convenience. A concern of mine is the lack of balance between the financial benefits AI has to offer and the potential long-term consequences.

An AIIA should not only explore bias, privacy, and safety. It should also ask:

  • What skills weaken if humans stop performing this task?
  • Does this system build capability or remove the need for it?
  • What are the societal implications if cognitive engagement declines?
  • How do we avoid creating a workforce of operators instead of thinkers?

I experienced something similar as a child. I received my first computer at ten, and within months my handwriting had changed to look more like computer fonts. As a teenager and as a young adult, I walked or cycled everywhere. After graduating from university, I needed a car to commute 20+ miles to work, and not long after that I began driving everywhere. Even a short walk to buy a newspaper started to feel unnecessary.

I recall watching Idiocracy, a satirical film set in a future where society slowly abandoned curiosity, learning, and thoughtful decision making, leading to a population that couldn’t solve basic problems. Joe Bauers, an average man from the present day, wakes up centuries later as the smartest person alive because he still possessed basic reasoning skills.

In no way should this article suggest that AI is bad. AI can simplify decision making, accelerate work, and make life more efficient in a way that amplifies human capability. It can also weaken both skills and independence if society over-relies on AI as a substitute for thought rather than a support for it. The danger is not intelligence disappearing overnight. It is a slow, comfortable slide into convenience where fewer people understand how things work, fewer question information, and fewer develop deep expertise because automated systems handle everything: a gradual drift into the world that Idiocracy mocks.

A healthier path is to use AI as a partner, not as a replacement. We can let tools handle repetition while we apply judgment, creativity, and continuous professional development. There is much discussion on AI taking over the world, and parallels to Terminator and John Connor, but we shouldn’t ignore the possibility that civilisation could sleepwalk into a future where the smartest person in the room is the one who can still think for themselves. The future belongs to those who can think with machines rather than depend on them.

The Fallacy of Being Irreplaceable

Imagine if one person held the only keys to a vital system, and then went on sudden sick leave. How long would it take before things ground to a halt?

There is a persistent fallacy in some workplaces: the belief that if you keep your knowledge to yourself, you become indispensable; that by becoming the only person who knows how to do a task, run a process, or fix a problem, you’re creating job security. In reality, this mindset introduces fragility, not strength. It’s a short-term tactic that fails the long game of career growth, leadership, and organisational resilience.

In a previous article, Winning the lottery or failing the bus test (8th June 2015), I explored a simple resilience thought experiment: What happens if a key person wins the lottery or gets hit by a bus? Whether someone leaves for joyful reasons or tragic ones, the point remains. Businesses must be prepared to continue functioning without any one individual.

That article focused on assessing the risk, whereas this one focuses more on one of its root causes: knowledge hoarding.

The Hidden Cost of Knowledge Hoarding

The idea that hoarding knowledge makes you valuable is deeply flawed. It’s understandable that people want to feel needed, but locking knowledge inside our heads doesn’t protect our position. In practice, it limits our growth and introduces single points of failure (or single points of success) into the business: situations where one person’s absence could derail critical operations or progress. These points of fragility are the antithesis of good governance, risk management, and succession planning.

I have experienced many situations over the years where we discussed specific topics and issues, identified the information we needed to proceed, assigned tasks in the meeting, and scheduled a follow-up, only to discover a week later that someone already had all the information but didn’t share it, wasting valuable time for everyone involved.

Understanding the Motives

People hoard knowledge for many different reasons. Sometimes the reasons are fear-based; other times they stem from past experiences or organisational dynamics. Here are some of the most common causes:

  • Fear of being replaced
  • Belief that knowledge equals power
  • Job insecurity
  • Ego or status-driven behaviour
  • Lack of trust in colleagues or leadership
  • Past experiences of being overlooked or unrewarded after sharing
  • Competitive or toxic workplace culture
  • Unclear job boundaries or expectations
  • Lack of recognition for knowledge-sharing efforts
  • High workloads and time pressure
  • Absence of easy-to-use documentation tools or systems

These motives rest on a fallacy: that being the only one who knows something creates job security. In truth, this behaviour can backfire spectacularly. The real value comes from empowerment, not from exclusivity.

Sharing knowledge also depends on trust. In organisations where people feel safe to ask questions, admit what they don’t know, and share openly, knowledge flows more naturally. This psychological safety underpins a learning culture and strengthens resilience.

I once had a conversation with someone who told me that, because I’d gone to university and earned a degree at great personal expense, my knowledge was my property, something to sell, not share. I’ve never fully agreed with this mindset. I’ve always felt that knowledge becomes more valuable when it helps others. It is one of the reasons I continue to write and publish articles here on Integritum; not because I have all the answers, but because collective thinking helps us all improve.

In resilient teams:

  • Knowledge flows through cross-training.
  • Processes are documented, shared, and improved collaboratively.
  • Team members cover each other’s workload during sickness and annual leave.
  • If someone receives a promotion, resigns, or retires, they can do so without the business suffering.

The Path to Growth

If someone else can do what you do, it doesn’t make you replaceable; it makes you promotable. Consequently, the objective is not to cling to tasks, but to enable others so we can move on to higher-value work. It is not about guarding secrets, but about creating capacity in others.

For succession planning, every role should have a shadow, a backup, or at least a process manual. This also supports onboarding new staff, covering holidays or sickness, or responding to emergencies. Shared knowledge builds resilience.

When we share knowledge, the benefits are wide-reaching. It builds trust, supports professional development, and improves operational continuity. For example:

  • We build trust with colleagues
  • We support a culture of learning
  • We position ourselves as leaders
  • We reduce organisational risk
  • We create the conditions for personal growth and advancement

The idea that ‘if no one else knows how we do this, they’ll always need us’ may feel like control, but in reality it only serves to trap us where we are. True job security doesn’t come from being irreplaceable; it comes from being so effective, helpful, and growth-oriented that people want us to succeed and want to give us more responsibility, not less. The more people who can do what we do, the more space we have to do something greater.

Standards such as ISO 27001, ISO 9001, and ISO 42001 define management system frameworks that reinforce the importance of documented processes and knowledge continuity as core aspects of resilience and risk management.

As AI continues to automate routine work, the value of human roles will shift even more toward strategic thinking, mentorship, and collaboration, all of which depend on shared knowledge.

Reflections on Client Confidentiality

In 2017 and 2018, I wrote a four-part series titled “How Much Info Is Too Much?” to challenge an uncomfortable norm: professionals, especially in IT and information security, are routinely expected to share confidential client details as proof of credibility. Whether in procurement discussions or during recruitment, the pressure to disclose private information to secure the next opportunity became standard practice.

Seven years later, this article recaps the original series, ties in a follow-up article on recruitment ethics, and considers whether we have changed our professional culture or whether client confidentiality is still treated as expendable when careers or contracts are on the line.

The Ethics of Disclosure

The series began with a direct comparison that exposed the double standards applied to confidentiality in IT and other professions.

“Imagine asking a solicitor about their past divorces to prove they can handle yours. It would never happen.”

Part 1 (4th December 2017) challenged the assumption that sharing previous client information demonstrates trustworthiness. It used everyday examples like taxi drivers, alarm installers, and lawyers to make a key point. In nearly every other industry, discussing former clients would be considered unprofessional, if not a breach of duty. Why should IT and information security be any different?

Even now, procurement teams and hiring managers sometimes equate name-dropping past clients with credibility. Sharing details about past clients to win future work may signal cooperation, but it also demonstrates a lack of discretion and professionalism.

During my years running a small business, I often found myself pressured to list previous clients. This expectation not only contradicted the NDAs I had signed, but also reflected a fundamental misunderstanding of professional discretion. This isn’t to say that referencing clients is always wrong, but it must be:

  • Done with clear consent.
  • Aligned with contractual terms.
  • Handled with the client’s reputation and privacy in mind.

Anything less risks crossing the line from credibility to compromise. In my case, I relied on client-provided testimonials, which offered a transparent and ethical way to demonstrate value without breaching confidentiality.

  • An obvious red flag during procurement is the request for client names during early-stage discussions. Referencing industries served or challenges solved is usually sufficient; naming clients is rarely necessary.
  • While IT lacks formal licensing, many adjacent fields, including law, healthcare, and finance, would consider such disclosure a breach of conduct. Expectations in tech are starting to catch up.
  • Professional trust is earned through process, integrity, and insight, not by exposing others’ confidential work.
  • Clients who ask you to sign a Non-Disclosure Agreement (NDA) while requesting information about your past clients fail to see the contradiction.
  • Professionalism includes protecting client reputations long after the contract ends, reinforcing trust and encouraging future referrals.

Spotting Red Flags in Early Conversations

The second article shifted from theory to practice, warning about misleading early-stage discussions.

“A 15-minute call that’s all about your past clients and nothing about current needs? This is not a sales lead; it is a red flag.”

Part 2 (12th December 2017) moved from principle to practice, focusing on identifying conversations that veer into information mining rather than genuine engagement. If a potential client spends more time asking about previous engagements than outlining their own needs, it’s likely not a real opportunity.

With AI-powered phishing, voice impersonation, deepfakes, spoofed job interviews, fake procurement opportunities, and corporate espionage now part of the landscape, the original advice has only grown in relevance.

  • If the caller avoids basic discovery questions like “What are your current pain points?” or “What do you need?”, it’s likely not a real opportunity.
  • Attackers use believable personas (recruiters, clients, journalists) to harvest sensitive business information.
  • Ethical conversations involve reciprocal openness, so we shouldn’t share anything confidential if the other party is unwilling to share anything about their needs.
  • Professionals don’t always know how to identify social engineering.
  • Having a clear discovery script or process can help deflect and detect bad actors while remaining professional.

When Disclosure Becomes Expected

Part three took a more candid tone, acknowledging that indiscretion is sometimes rewarded, even incentivised, in the workplace.

“Those who break confidentiality are often rewarded with a contract opportunity, not because they are professional, but because they cooperate in breaching confidentiality.”

Part 3 (18th December 2017) took a sharper tone, acknowledging the grim reality that disclosing confidential information helps people win work. This behaviour has become normalised in sectors like IT and information security, where there is no professional licence to revoke and no external body enforcing ethical standards.

We still see this today, especially in competitive bids where clients and employers reward name-dropping and logo slides. ISO standards, frameworks, and organisational codes of conduct are slowly shifting this culture by embedding expectations of privacy, discretion, and ethical information handling.

  • Selection teams often reward evidence of previous client activities, and from personal experience, I know how demoralising it feels to miss a shortlist because you refused to disclose confidential information. This perpetuates indiscretion as a competitive advantage.
  • I’ve seen contracts where the NDA was so vague that confidentiality was open to interpretation.
  • It feels like people and businesses are gradually reframing discretion as a strength rather than treating it as an obstruction. I’ve lost count of the opportunities I missed because I chose to uphold confidentiality. Today, especially in regulated sectors, disclosing client details without authorisation carries legal risk as well as reputational damage.

A Professional Alternative

To counter the trend of oversharing, the fourth article offered a proactive solution: shifting the focus to structured, client-first engagement.

“Credibility should come from solving real problems, not showcasing someone else’s private history.”

Part 4 (4th January 2018), and the final article in this series, offered a way forward: a professional five-step process for engaging with clients. It helps avoid off-topic digressions about past work and puts the focus where it belongs, on solving the client’s current problems through a structured, ethical dialogue.

As procurement and supplier due diligence processes become more rigorous, driven by regulatory scrutiny, AI governance, and Environmental, Social, and Governance (ESG), structured professional processes are no longer a luxury. They’re essential for resilience, trust, and legal protection.

  • A professional engagement process builds more trust than any client list ever could.
  • Clients need insights into their problems, not a retrospective view on someone else’s problems.
  • When done well, structured onboarding protects both sides and guards against phishing and reputational damage. Clients are increasingly judged by what they ask and how they conduct vendor selection.
  • I found that having repeatable processes helps maintain credibility through moments of pressure, ambiguity, or inconsistency. I don’t recall who, but someone once told me that “How people behave some of the time reflects how people behave all of the time”. I heard a similar quote recently: “How you do anything is how you do everything”. This applies to confidentiality: if I am willing to tell a potential client what I did for previous clients, it is reasonable for them to assume I would do the same in the future with their confidential information.

Recruitment: A Parallel Problem

The follow-up article expanded the issue into recruitment, highlighting the ethical risks of sharing confidential information when changing jobs.

“If you’re willing to use your current employer’s clients now to get a new job, you’ll likely do the same to your next employer.”

In this follow-up article, Avoid Revealing Employer’s Clients (12th April 2018), I explored how the same confidentiality breaches play out during recruitment. Candidates sometimes list their employer’s clients on public profiles and CVs or refer to them in interviews, thinking it shows breadth of experience. However, the ethical problem is the same: those clients aren’t theirs to share.

Today, these disclosures can end careers before they start. Many firms now treat unauthorised disclosure of client identities as a breach of NDA, contract, or even data protection law, and rightly so. Confidentiality applies just as much when leaving a company as when engaging with a new one.

  • Listing employer clients without authorisation can violate contracts and raise character concerns before the interview stage.
  • Employers increasingly filter out candidates who appear to treat sensitive data as a personal asset.
  • I recall refusing to give client names because of confidentiality, then pointing out that I had just passed their confidentiality test and that we could move on to discussing their requirements. They refused to do that. I saw this as a big red flag.
  • Some roles now include applicant-level risk profiling where client name-dropping is seen as a security weakness.
  • NDAs, employment contracts, and professional ethics don’t expire when you change jobs.
  • The best candidates increasingly demonstrate judgement, not just experience.
  • Organisations with strong governance cultures actively avoid hiring people who treat discretion as optional.
  • Some recruitment agents still encourage “name-dropping” for profile strength, but this can backfire for both candidates and the agency.

Additional Thoughts

Whether in recruitment or procurement, the heart of this matter remains unchanged. Professionalism in this context is about discretion, not disclosure. We can’t gain trust and establish credibility by revealing what we did for others; we can only demonstrate what we can do for future clients or employers.

That said, one fact remains the same. When people and businesses are forced to choose between disclosing client names (along with what was done and when) and risking the conversation about future work ending abruptly, they often choose the opportunity first.

Unfortunately, many professional cultures still reward indiscretion while overlooking integrity, especially when disclosure offers a short-term advantage. This isn’t just a question of professionalism; it’s a systemic problem.

It reminds me of the UK smoking ban in pubs. Many landlords wanted to implement a smoke-free policy years before it became law, not just for the health benefits but also to create a better environment for their customers and staff. They couldn’t because if one pub acted alone, the smokers would go next door, taking away a significant portion of their revenue. There were a few exceptions, but it wasn’t until the ban became a legal standard that everyone could act without fear of competitive loss.

Confidentiality suffers from a similar imbalance. Clients, employers, and recruiters are legally entitled to ask questions. Employees, consultants, and small business owners often feel compelled to respond, as not answering might mean losing the opportunity.

Change will remain slow until our professional culture matures to reward discretion and due process rather than indiscretion and shortcut credibility. Until then, it will remain almost impossible to lead alone, and the cycle will continue.