As algorithmic decision-making becomes deeply integrated into core infrastructure around the world, the conversation can’t stay abstract. We need to connect the macro questions – human dignity, the future of work, widening inequality – to the micro decisions organisations make every day: how we build, deploy, govern, and educate. Drawing on the recent Ethical AI initiative in Lagos and London and the Rome Conference on AI, we look at where to focus now.
AI is increasingly a confidant for young people. That brings benefits – 24/7 support, accelerated learning – but also risks when it surfaces harmful content. The response must be two‑fold:
Long term: schools and youth services should integrate AI literacy into the curriculum.
Near term: expert talks and workshops can bridge the gap while policy catches up.
The promise of AI will be unevenly distributed unless we invest in access and understanding. Across and within countries, disparities in connectivity, language, and digital skills are real. Practical steps include investing in last‑mile connectivity, supporting local‑language tools and content, and funding practical digital‑skills training.
Regulations such as the EU AI Act and the Nigerian AI policy are in force, but policy will always trail innovation and can feel abstract to operators. Organisations need simple, operational rules of thumb that translate compliance into everyday behaviour.
“AI evolves faster than law, but its impact runs deeper than any single industry. Without proactive regulation, profit will prevail over the public interest,” said Emmanuel Adebayo, Head of Innovation and Technology at Aluko & Oyebode.
AI amplifies the good and the harmful. Deepfakes, automated fraud, targeted manipulation, and model misuse are already here. A credible strategy blends:
Prevention: make abuse harder by stress‑testing systems and limiting powerful tools, such as bulk messaging, to the right people.
Detection: spot trouble early by monitoring for odd patterns and using authenticity checks (see the sketch after this list).
Response: move fast and be clear by having a simple playbook and communicating early to affected users, key platform partners, and your frontline teams and leaders.
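To make the detection step concrete, here is a minimal sketch of one way to monitor for odd patterns: flagging accounts whose send rate is a statistical outlier. The function name, data, and threshold are illustrative assumptions, not a prescribed tool; a median‑based modified z‑score is used so a single heavy sender cannot hide by skewing the average.

```python
from statistics import median

def flag_bulk_messaging_outliers(sends_per_hour: dict[str, int],
                                 threshold: float = 3.5) -> list[str]:
    """Flag accounts whose hourly send rate is a statistical outlier.

    Uses the modified z-score, built on the median absolute deviation,
    which stays robust even when the outlier itself would distort a
    mean/standard-deviation baseline.
    """
    rates = list(sends_per_hour.values())
    med = median(rates)
    mad = median(abs(r - med) for r in rates)
    if mad == 0:  # more than half the accounts behave identically
        return []
    return [account for account, rate in sends_per_hour.items()
            if 0.6745 * (rate - med) / mad > threshold]

# Example: one account sending far more than its peers gets flagged.
activity = {"user_a": 12, "user_b": 9, "user_c": 14, "user_d": 480}
print(flag_bulk_messaging_outliers(activity))  # ['user_d']
```

A flag like this is only a starting point: a production system would combine richer signals with authenticity checks and human review before any enforcement action.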
High‑level principles (fairness, safety, accountability) don’t implement themselves. Engineers need concrete support to turn company values into design choices, tests, and documentation.
According to AI ethics researcher Esther Galfalvi, “General principles are hard to translate into concrete developer actions. We need real developer support to translate two things into the concrete: principles such as fairness and justice, and context‑specific human values.”
In practice, that means giving teams concrete support: checklists that turn values into design choices, tests that make principles such as fairness checkable, and documentation templates that record the trade‑offs, as in the sketch below.
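As one illustration, here is a minimal sketch of a principle turned into a test, assuming a hypothetical loan‑approval model whose outputs arrive as (group, approved) pairs. The demographic‑parity metric, the data, and the tolerance are all illustrative assumptions; the right metric and threshold are context‑specific design decisions.

```python
from collections import defaultdict

def approval_rates_by_group(predictions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in predictions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def check_demographic_parity(predictions, tolerance=0.1):
    """Fail if any two groups' approval rates differ by more than `tolerance`."""
    rates = approval_rates_by_group(predictions)
    gap = max(rates.values()) - min(rates.values())
    assert gap <= tolerance, f"approval-rate gap {gap:.2f} exceeds {tolerance}"

# Example: group B is approved far less often, so the check fails loudly.
preds = ([("A", True)] * 80 + [("A", False)] * 20
         + [("B", True)] * 50 + [("B", False)] * 50)
try:
    check_demographic_parity(preds)
except AssertionError as err:
    print(err)  # approval-rate gap 0.30 exceeds 0.1
```

The point of the exercise: once a value is written down as a failing test, it can block a release the same way a functional bug does.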
We can’t treat this as a ‘wait and see’ problem. The choices we make today will shape tomorrow’s AI.
Raj Mahapatra, Senior Counsel at Wilson Sonsini, commented: “The hard part isn’t just building the tech; it’s making the big calls now: moral, ethical, legal, and technical. The choices we make today will set the rails for how AI evolves in the next few years and the decades to come.”
Waiting for perfect rules isn’t an option. Organisations can set their own bar today and invite regulators, peers, and users to inspect and improve it.
As AI becomes part of everyday life in many parts of the world, ethics must translate into practical choices across how we build, deploy, govern, and educate.
Governments in the Global North and South should set clear, forward‑looking rules that protect young people and close the digital divide, and co‑invest in reliable, clean energy and last‑mile connectivity. They should also use AI responsibly in public services to improve efficiency and sustainability with transparency and accountability.
Companies should invest in practical training for everyone who builds or uses AI, put a clear governance framework in place so roles and reviews are understood, give engineers the right tools to build safely and fairly, strengthen data governance and testing, and keep a simple incident playbook to find and fix issues quickly.