As algorithmic decision-making becomes deeply integrated into core infrastructure around the world, the conversation can’t stay abstract. We need to connect the macro questions – human dignity, the future of work, widening inequality – to the micro decisions organisations make every day: how we build, deploy, govern, and educate. Drawing on the recent Ethical AI initiative in Lagos and London and the Rome Conference on AI, we look at where to focus now.

 

1. Youth safety first: education and guardrails

AI is increasingly a confidant for young people. That brings benefits – 24/7 support, accelerated learning – but also risks when it surfaces harmful content. The response must be twofold:

  • Education that treats AI as a tool, teaching critical thinking, digital resilience, and the values behind ‘human dignity.’
  • Product‑level guardrails with clear escalation to human support and age‑appropriate protections.

Schools and youth services should integrate AI literacy into the curriculum; in the near term, expert talks and workshops can bridge the gap while policy catches up.

 

2. Closing the knowledge gap: literacy, access, inclusion

The promise of AI will be unevenly distributed unless we invest in access and understanding. Across and within countries, disparities in connectivity, language, and digital skills are real. Practical steps include:

  • AI literacy: embed it in school curricula and workplace training.
  • Everyday channels: use banking apps and mobile money for public education in mobile‑first markets.
  • Localisation beyond translation: build for dialects and cultural context.
  • Transparency in services: clearly communicate where AI is used (e.g., as a healthcare “second opinion,” not a diagnosis).
  • Energy and connectivity: invest in reliable, clean power and last‑mile networks in both the Global North and South, including grid resilience, microgrids, and community power so digital services and AI stay on.

 

3. Regulation vs practice: from policy to ‘Monday‑morning rules’

Regulations such as the EU AI Act and the Nigerian AI policy are in force, but policy will always trail innovation and can feel abstract to operators. Organisations need simple, operational rules of thumb that translate compliance into everyday behaviour (a minimal record‑keeping sketch follows the list):

  • Know your use case and risk categories.
  • Test before and after deployment, and document limitations.
  • Keep a human in the loop when needed.
  • Record data lineage and model changes.
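
To make this concrete, the sketch below (Python; every field name is illustrative rather than a standard or prescribed schema) shows one way a team could keep a machine‑readable record per AI use case covering risk category, tested limitations, human‑in‑the‑loop requirements, data lineage, and model versions:

```python
# Illustrative only: a minimal, machine-readable record a team might keep for
# each AI use case. Field names are hypothetical, not a regulatory schema.
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class UseCaseRecord:
    name: str                     # what the system does, in plain language
    risk_category: str            # e.g. "minimal", "limited", "high" in your own taxonomy
    known_limitations: list[str]  # documented during pre- and post-deployment testing
    human_in_the_loop: bool       # must a person confirm high-impact outputs?
    data_sources: list[str]       # data lineage: where training/evaluation data came from
    model_version: str            # so model changes stay traceable
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())


record = UseCaseRecord(
    name="Customer support reply drafting",
    risk_category="limited",
    known_limitations=["English only", "no legal or medical advice"],
    human_in_the_loop=True,
    data_sources=["anonymised support tickets, 2022-2024"],
    model_version="assistant-v3.2",
)

# Stored alongside the deployment, this gives reviews and audits something concrete to check.
print(json.dumps(asdict(record), indent=2))
```

Even a record this small turns “know your use case” and “record data lineage” from slogans into something a reviewer can open and challenge.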

“AI evolves faster than law, but its impact runs deeper than any single industry. Without proactive regulation, profit will prevail over the public interest,” said Emmanuel Adebayo, Head, Innovation and Technology, Aluko & Oyebode.

 

4. The double‑edged sword: confronting bad actors

AI amplifies the good and the harmful. Deepfakes, automated fraud, targeted manipulation, and model misuse are already here. A credible strategy blends:

  • Prevention: make abuse harder by stress‑testing systems and limiting powerful tools, such as bulk messaging, to the right people.
  • Detection: spot trouble early by monitoring for odd patterns and using authenticity checks.
  • Response: move fast and be clear by having a simple playbook and communicating early to affected users, key platform partners, and your frontline teams and leaders.

 

5. Builders’ responsibility: from principles to code

High‑level principles (fairness, safety, accountability) don’t implement themselves. Engineers need concrete support to turn company values into design choices, tests, and documentation.

According to AI ethics researcher Esther Galfalvi, “General principles are hard to translate into concrete developer actions. We need real developer support to translate two things: principles into the concrete – and values into the concrete. Principles such as fairness and justice, and context‑specific human values.”

In practice, that means giving teams:

  • Pattern libraries and checklists
  • Dataset health reports (an example sketch follows this list)
  • Bias and evaluation tools and model cards
  • Decision logs and clear escalation routes 
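
As one example, a dataset health report can start very small. The sketch below (Python with pandas; the columns and values are purely hypothetical) summarises missing values, label balance, and duplicates so reviewers can spot gaps before a dataset is used for training or evaluation:

```python
# Illustrative sketch of a "dataset health report" a team could attach to a
# training dataset. The example data and column names are hypothetical.
import pandas as pd


def dataset_health_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarise missing values, label balance, and duplicates for reviewers."""
    return {
        "rows": len(df),
        "missing_rate_per_column": df.isna().mean().round(3).to_dict(),
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
    }


df = pd.DataFrame(
    {
        "age": [34, 29, None, 41],
        "region": ["north", "south", "south", None],
        "approved": [1, 0, 0, 1],  # hypothetical label column
    }
)

print(dataset_health_report(df, label_col="approved"))
```

Keeping the output as a plain dictionary makes it easy to attach to a model card or decision log, which is where these artefacts reinforce each other.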

 

6. Decide now what future AI becomes

We can’t treat this as a ‘wait and see’ problem. The choices we make today will shape tomorrow’s AI.

Raj Mahapatra, Senior Counsel at Wilson Sonsini, commented: “The hard part isn’t just building the tech; it’s making the big calls now: moral, ethical, legal, and technical. The choices we make today will set the rails for how AI evolves in the next few years and the decades to come.”

Waiting for perfect rules isn’t an option. Organisations can set their own bar today and invite regulators, peers, and users to inspect and improve it.

 

Conclusion

As AI becomes part of everyday life in many parts of the world, ethics must translate into practical choices across how we build, deploy, govern, and educate.

Governments in the Global North and South should set clear, forward‑looking rules that protect young people and close the digital divide, and co‑invest in reliable, clean energy and last‑mile connectivity. They should also use AI responsibly in public services to improve efficiency and sustainability with transparency and accountability.

Companies should invest in practical training for everyone who builds or uses AI, put a clear governance framework in place so roles and reviews are understood, give engineers the right tools to build safely and fairly, strengthen data governance and testing, and keep a simple incident playbook to find and fix issues quickly.