1000 Black Voices sat down with Raj Mahapatra, Senior Counsel in the London office of Wilson Sonsini Goodrich & Rosati, and Roop Bhadury, PhD Candidate in Information Systems and Innovation at the London School of Economics and Political Science (LSE), and an experienced entrepreneur, marketer and technologist.
We explored two topics that are increasingly front of mind: who owns AI ethics, and are we quietly outsourcing too much thinking to machines?
As AI moves into the core of how organisations work, ethics is increasingly framed as the responsibility of engineers, product teams, compliance and boards alike. But if everyone is responsible, a practical question follows: who is truly accountable when things go wrong?
Raj starts from the premise that responsibility must be widely shared, but draws a firm line between responsibility and accountability. Everyone involved in building and deploying AI is responsible for acting ethically in their part of the system, but accountability must sit at the top: the C‑suite and the board. Only they can design the structures, incentives and reporting lines that make ethical behaviour real. A board cannot check every line of code, but it must still be able to say: “We put the system in place, and we are answerable for it.”
Roop broadly shares this framing and leans into the practicalities of making it work. If we are serious about shared responsibility, he argues, we should use AI itself to help uphold it. For example, AI ‘monitoring agents’ that continuously watch for behaviour outside agreed ethical boundaries and flag issues to the right people. Developers remain responsible and executives remain accountable, but a supporting layer of tools makes distributed responsibility practical rather than purely aspirational.
When people talk about AI ethics, the conversation often narrows into one about regulation. But regulation is only one way societies encode their ethical choices into rules, and with AI those choices now need to work across borders and at machine speed.
Roop sees regulation and ethics as tightly linked but not identical. He points to a consortium model from advertising, where industry bodies like the Internet Advertising Bureau helped turn principles into day‑to‑day standards. In AI, he argues for industry‑led groups that translate fairness, transparency and harm‑reduction into concrete tests, disclosure norms and red lines, which regulators can then lift into law. You will never reach every “tinkerer”, he accepts, but if the major platforms and providers commit to a shared ethical baseline, you shape most of the impact and raise the floor for everyone else.
Raj agrees regulation is essential and frames it as an ethical project: law is meant to distil a society’s values into enforceable norms. With AI, that breaks down because systems act across borders and at speeds national laws struggle to match, and because countries start from very different ethical baselines on privacy, surveillance and bias. That encourages ‘ethics arbitrage’, with firms drifting to the weakest regime. He argues for international rules that are explicit about the principles they protect – human dignity, fairness, accountability – rather than a patchwork of narrow, national fixes.
Boards are legally bound to serve shareholders seeking innovation and profit, even as leaders are told to slow down, think long‑term and ‘do the right thing’ on AI. That tension sits at the heart of every serious ethics discussion.
Raj is frank that this is a structural problem: most company law bakes in a duty to maximise shareholder returns. If someone presents a highly profitable AI strategy – even one that destroys jobs or concentrates power in uncomfortable ways – directors are at least obliged to treat it as a live option. Until we change how we define a successful company in law and in capital markets, he believes AI ethics will swing between moments of panic and moments of spin. He calls for long‑horizon actors – pension funds, large investors, global policymakers – to rewrite the incentives that boards operate under.
Roop accepts this tension and highlights a missing link: evidence. He argues we have not yet built a strong, data‑driven case that robust AI ethics frameworks actually increase shareholder value over time. For example, by reducing catastrophic risk, preserving trust or opening safer, more durable markets. Until that link is clear and visible, ‘AI ethics’ will be prone to turning into ethics‑washing: attractive language with little impact on how money is spent. His prescription is serious collaboration between industry and researchers to generate proof that boards and investors cannot easily ignore.
As generative AI becomes a default part of knowledge work, leaders are starting to ask whether we are quietly outsourcing too much thinking. ‘Cognitive debt’ has become a useful – if worrying – phrase for this.
Roop welcomes the term and gives it a clear definition: cognitive debt is the gap between our rising reliance on AI and our continued ownership of the outcome. You cannot say, ‘the system made me do this’; the responsibility still sits with you. He pushes back on the idea that more AI use automatically makes us less sharp, using the shift from manual to automatic cars, or a founder delegating to a team, as examples. Humans are good at reallocating mental effort: as reliable tools take over some tasks, we free up capacity to tackle more complex, strategic problems.
Raj sees a real risk here, especially in the short term. Drawing on the arrival of calculators in the 1980s, he recalls how quickly people lost a feel for the ‘beauty of how maths works’, and with it the instinctive ‘sniff test’ for whether an answer looks right. Cognitive debt builds up when we rely on tools without keeping enough basic understanding to judge their outputs. We can and should stop trying to understand every detail, but we cannot afford to lose our grip on the fundamentals or on our own judgement.
People are not just worried about skills; they are worried about instinct: the gut feeling that spots danger on the road or senses when something is ‘off’ in a room. If AI takes over more of the sensing and deciding, do we lose something uniquely human?
Raj accepts there is a short‑term risk of ‘dumbing down’ whenever we lean heavily on a powerful tool, including AI. But he disputes the idea that this permanently erodes our humanity. History suggests that when certain skills (like hunting or manual driving) stop being relevant, we either redeploy those instincts elsewhere or allow them to fade without losing our core capacities. The important thing is to keep enough relevant fundamentals and attention in the areas that still matter, while accepting that some older skills will and should decay as the context changes.
Roop largely concurs and goes further in reframing intuition itself. What we call ‘gut feel’ is often just our brain processing large amounts of information and patterns in the background. Humans have a finite ‘cognitive load blob’ that we constantly reallocate: if we no longer need to micromanage driving, for example, we can invest that energy in harder intellectual or creative challenges. Today’s AI systems still cannot match the complexity of human intuition, but they are starting to generate genuinely new patterns rather than simply repeating training data, an early step towards machine intuition that can complement, rather than replace, our own.
We are clearly in a long transition, not a brief phase. The practical question for leaders is how to protect people – and performance – while making the most of what AI can offer.
Roop stresses tactical engagement with AI over the next 12–24 months. His advice is to encourage people and companies to use these systems deeply, “trust but verify”, and only put their name on outputs they truly understand – the cognitive equivalent of keeping your hands on the wheel in an almost self‑driving car. He reverses the usual risk lens, arguing that experienced professionals may actually be more vulnerable because rapidly improving tools give younger colleagues access to layers of insight and judgement that previously took decades to build. Cutting entry‑level roles purely to save money in the short term, he warns, treats AI as just a new outsourcing lever, and risks weakening the very organisations that most need strong human capability. It shrinks the pipeline of future leaders and reduces the human oversight that should be challenging AI’s conclusions, not just inheriting them.
Raj agrees the transition will be prolonged and uneven, warning that the interim period could last decades because we still do not know what the end‑state skills mix looks like. He is sceptical that governments, constrained by election cycles, can lead, and argues instead for wider oversight bodies and a renewed focus on skills such as critical thinking, contextual judgement and attention management. On workforce risk, he challenges the idea that senior people are most exposed, pointing out that AI is already replacing or reshaping many junior and mid‑level roles. Aggressively cutting entry‑level positions, in his view, is a tactical mistake that will leave organisations hollowed out and less resilient.
Conclusion: Where to Go From Here
As AI shifts from experiment to a routine part of how we work, ethics and cognitive debt can’t stay abstract; they are now questions about how we design roles, incentives and everyday decisions. The opportunity, and risk, is that we are setting the norms for a generation: who owns the output, who keeps thinking critically, and who gets left behind. 1000 Black Voices will continue to convene these conversations; if you’re shaping AI strategy, governance or talent, now is the moment to engage, challenge your assumptions, and put practical guardrails in place.