
Responsible Artificial Intelligence is Good Business

Updated: Apr 23

ORADA Responsible AI Conference 2025 - Purpose and Reflections




What inspired the ORADA Responsible AI Conference? 


We found ourselves in many conversations, and reading many headlines, where artificial intelligence (AI) was either going to bring about the end times and be the worst thing ever to happen to humanity, or be the silver bullet that solves all our problems and heralds a golden era. Additionally, we weren’t - and still aren’t - seeing enough engagement around what can go wrong when: 


  1. AI models are trained on data that hasn’t been properly collected, organised, cleansed, prepared and verified as appropriate for use in the training and validation of AI models; 

  2. the implementation of artificial intelligence is directed at the “wrong” business or social problems, e.g. those that have an impact on human rights or people’s ability to access means of survival, without the proper guardrails; and

  3. decision makers blindly trust artificial intelligence and remove the human being from the workflow, an approach that overstates AI’s strengths while ignoring both its weaknesses and the strengths that humans bring to the table. 


We created the Opportunity, Responsibility and Accountability in Data & AI (ORADA) Conference because we recognise that AI - especially since deep learning models became widely accessible and usable - is incredibly powerful and presents unprecedented opportunities for commercial growth and social improvement that should be leveraged. However, this is only the case if building AI solutions as responsible, ethical, people-first technologies is addressed at the strategic business decision level - at board and executive levels - resulting in decisions and communications that prioritise responsible AI by design: the use of responsible AI (RAI) practices and the application of AI ethics standards and governance from concept to implementation. Making RAI by design a strategic pillar of the AI strategy, alongside other commercial and organisational goals, led to the theme of this year’s conference, “Balancing the Opportunity, Responsibility and Accountability in Data and AI”. The key, as we see it, is balancing the opportunity of data and AI solutions with RAI practices, so that the implementation of AI is aligned to organisational values, considers broad stakeholder groups and is sustainable in the long term. 


What were the key take-aways from the ORADA Responsible AI Conference 2025?


ORADA is designed to combine strategic insights and considerations through keynotes, engaging talks and panels as well as enable professional development and upskilling through hands-on data and AI workshops. 


The key takeaway from ORADA 2025 is that Responsible AI is good business. This breaks down into the following: 

  1. The opportunity is immense: In her opening keynote, DCA CEO Juanita Clark made attendees aware of the immense opportunities presented by connectivity doubling roughly every 2 to 3 years: businesses and organisations, especially in Africa, can prepare to offer digital, AI-augmented solutions to new markets - particularly previously unreachable digital markets - and explore new ways to solve long-standing problems. Currently, only 23% of people in rural areas in Africa have access to an internet connection, compared to 50% globally. Here, the opportunity is the challenge. 


    Additionally, Deborah Choi, Managing Director at Founderland, shared a success case of using a people-first, problem-first approach to AI in Founderland’s journey to democratise access to Venture Capital (VC) funding: creating a safe, constructive, repeatable upskilling space for VC pitch coaching through the AI-powered tool PAM. PAM frees Founderland’s members from the constraints they typically faced when trying to access mentors and coaches to guide them through the VC world, and it has successfully improved members’ ability to pitch and win investors. 


    Further, Thato Sopeng, VP of Technology at Sasol, showed how heavily regulated organisations with critical health, safety and environmental responsibilities can thoughtfully and conservatively leverage AI in a way that drives efficiencies and generates business value while leading from a people-centred perspective. She shared how AI has been successfully implemented in the energy industry for oil and gas exploration, seismic interpretation, predictive maintenance and business administration. 


    Dr. Christian Temath, Managing Director of KI-NRW, shared AI success cases from the German state of North Rhine-Westphalia and outlined how responsible AI centres humanity by putting humans in the “conductor” position of an intelligent automation orchestra, taking advantage of the strengths of both humans and technology and addressing pressing workforce challenges. One example is the roughly 550 000 open vacancies in Germany’s public sector.


  2. Transformation into an AI-led business cannot be driven as a “plus one” responsibility and requires adequate financial and human resources: Highlighting Bain statistics showing that only 37% of leadership teams have allocated budget to AI initiatives and only 41% have set up AI-focused teams, Thato Sopeng’s talk underscored the importance of treating the introduction and leveraging of AI as both a cultural and a technological transformation. This requires awareness, training and co-innovation efforts from board level all the way to the entry levels of the organisation: addressing the fears and concerns people within the organisation may have about AI, improving their understanding of AI, and creating room for collaborative teams that draw on institutional knowledge to innovate with AI. 


    Laetitia Cailleteau, Responsible AI & Generative AI Studios Lead at Accenture for Europe, the Middle East and Africa, showed AI value cases as well as “horror” stories of real harm to people and businesses where responsibility by design was not put in place. Her talk affirmed that the opportunities are immense, with Accenture clients using AI to improve the quality of outcomes while reducing product development timelines. She also cautioned against treating the transformation into an AI-led business as a “plus one” responsibility that is simply attached to existing roles, poorly resourced and not provided with the requisite executive-level sponsorship. Both Thato and Laetitia highlighted the importance of dedicating people and financial resources equal to the task of scaling AI solutions effectively, cautioning against underestimating the transformational effort at hand. 


  3. Guardrails and regulations create room for responsible AI innovation: Regulation and guardrails are often seen as expensive and cumbersome, but they allow organisations to innovate in a manner that is efficient, aligns to their organisational values, and protects their stakeholders. Attendees explored this through three sessions: unpacking the EU AI Act with Elena A. Kalogeropoulos, Managing Director at E K & the good lab; examining the risks that AI-powered online and cyber influence and hostile campaigns present to companies, brands and governments - and the mitigations that can protect against those risks - with Florian Frank, Director at Cyfluence Research Center gGmbH; and the panel on Cybersecurity in the Age of Data and AI with Wandile Mcanyana, Security Delivery Director at Accenture Security, Mpho Moseki, Head of Data Governance and Metadata at CIB Standard Bank, and Dominic White, Ethical Hacking Director & Managing Director at Orange Cyberdefense. Together, these sessions showed how regulation, a risk-based approach to RAI and the right guardrails can protect organisations from: 

    • Reputational and brand risks that can arise where harm is caused to customers and other stakeholder groups; 

    • Financial harm through a) the need to roll back solutions that have gone live without realising their financial and strategic objectives, b) regulatory fines, and c) long-winded court cases; 

    • Internal discord and negative talent retention implications that can arise where the application of big data and AI solutions is seen as misaligned to company values; and 

    • Hostile external actors seeking to use the power of AI tools to cause harm to organisations and businesses as well as successfully disrupt business operations. 


    On the other hand, the panel on The Impact of AI on Journalism, Communications and Media - with panelists Juliet Nanfuka, Research and Communications at CIPESA, Catherine White, Executive Director at Cat White Media, and Shoki Kandjimi, Head of Communications and Stakeholder Engagement at PETROFUND - highlighted the challenges AI poses to the authentic, trustworthy delivery of news and information, as well as the complexity of regulating a field that needs regulation but where that regulation risks being misused to restrain freedoms and attack human rights. 


  4. Inclusion and sustainability are key to Responsible AI: Khethiwe Nkuna, CEO at Skillsquest, took attendees through success cases showing how artificial intelligence can effectively include people and social groups that previously did not have access to solutions many of us take for granted - a perfect example of the conference’s expanded definition of Responsible AI as inclusive of #AIforGood. Examples included an AI-driven learning platform by Thooto that makes it easier for learners to achieve certification through individually tailored, curriculum-aligned support, and the use of AI by the Scott Morgan foundation to help Amyotrophic Lateral Sclerosis (ALS) patients continue to communicate effectively despite the impact of the condition. Noting that we still face a global connectivity gap, with 3 billion people lacking access to connectivity, and building on Juanita Clark’s talk on the importance of access to connectivity and digital literacy, Khethiwe’s talk further underscored the importance of closing this gap, building digital skills and prioritising mobile-first and offline AI solutions. 


    Sustainability and the environmental impact of running AI systems is a known challenge, exacerbated by the ubiquitous use of AI tools and systems. Several approaches to these challenges are being explored, including the use of green energy and efforts to address the impact that running and hosting AI solutions has on water resources. At ORADA 2025 the topic was engaged through an exploration of AI on the edge with Christina Ambos, CEO at AI Strategy Partner. Her talk explored the role that running some AI models on-site can play in driving efficiencies and reducing, though not completely eliminating, reliance on large data centres. 


  5. Building the skills required to be part of the responsible-AI-ready workforce (highlighted by the World Economic Forum as central to the future of work) is possible: Attendees at ORADA 2025 had access to three practical, hands-on workshops where they built skills they could apply as soon as they returned to their desks, allowing them to: 

    • Improve their understanding of artificial intelligence and how it works, and then build their own AI model, during the machine learning modelling workshop with Mosa Nyamande, Director of Delivery at Khonology; 

    • Acquire the ability to build AI Ethics Governance into every phase of the artificial intelligence project life cycle, with a specific focus on the machine learning project life cycle, during a workshop led by Dr. Ann Borda, Ethics Fellow in Ethics and Responsible Innovation and her team from The Alan Turing Institute’s Public Policy Programme; and

    • Gain a practical understanding of how to use the IBM AI Ladder to analyse and prepare a dataset for building an AI model, and to identify risks and issues such as bias in the data. Practical approaches for resolving the identified issues and mitigating those risks were also shared during the workshop led by Diana Pholo Stone, Data Scientist at Predictive Insights, on Creating a Trustworthy and Accountable AI Process Using the AI Ladder. 
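    To give a flavour of the kind of data check covered in such a workshop, here is a minimal sketch - not taken from the workshop materials, and using an entirely hypothetical toy dataset - of how one might surface a simple representation and outcome bias across a sensitive attribute before training a model:

```python
# Minimal, illustrative bias check on a toy dataset (hypothetical data).
# Each record is a (group, outcome) pair, e.g. a loan decision per applicant
# group. We compare per-group positive-outcome rates before model training.

from collections import Counter

records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def group_stats(rows):
    """Return {group: (sample_count, positive_outcome_rate)}."""
    counts = Counter(g for g, _ in rows)
    positives = Counter(g for g, y in rows if y == 1)
    return {g: (counts[g], positives[g] / counts[g]) for g in counts}

stats = group_stats(records)

# A large gap in positive rates between groups is one simple warning sign
# (often called the demographic-parity gap) worth investigating in the data.
gap = abs(stats["A"][1] - stats["B"][1])
print(stats)  # per-group counts and positive rates
print(gap)
```

    A check like this is only a starting point; in practice one would also examine how the data was collected and whether the labels themselves encode historical bias.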


The conference also illustrated that RAI is practical and a necessary part of leadership and business stewardship. Further, considering how much upskilling was possible within each short workshop, it’s clear that with the right prioritisation and collaboration, we can build the responsible-AI-ready workforce that’s needed. 


At Inno Yolo, we’re excited to continue to work with partners like the Digital Council Africa to bring you ORADA 2026 with the conference theme: Value Focused Responsible AI Innovation Made Easy. 


Avela Gronemeyer - Managing Director at Inno Yolo UG

