In recent months, the Centre for Multilateral Affairs (CfMA) undertook an initiative to explore critical issues surrounding Artificial Intelligence (AI) governance in Uganda. To deepen our understanding, we reviewed existing literature on AI at the national, sub-regional, and continental levels.
We engaged with a range of experts, including representatives from government ministries, departments, and agencies, as well as private sector actors, NGOs, and development partners. One of our four research questions focused on assessing the regulatory environment for AI governance in Uganda. Using a semi-structured interview guide, we conducted interviews with over 21 respondents. Their insights were analyzed through thematic and content analysis, considering both their stated opinions and the underlying assumptions that informed their views.
Key Findings
Our analysis revealed that while there is a broad understanding of Uganda’s digital governance laws and regulations, there is limited direct linkage to AI-specific policies. Respondents identified several laws, such as the Data Protection and Privacy Act (2019), the Fourth Industrial Revolution Strategy, the Digital Transformation Roadmap, and the Computer Misuse Act, as contributing to a potential AI policy landscape.
However, most participants emphasized that the AI sector in Uganda is still in its infancy, characterized as “evolving and growing,” with “no clear law or policy” currently in place. This lack of targeted AI governance raises concerns about Uganda potentially falling behind global advancements. Respondents also highlighted risks associated with AI biases and their potential negative impact on human rights, particularly in the context of surveillance technologies and the country’s preparations for the 2026 general elections.
There was also notable uncertainty among some respondents about whether Uganda has any existing AI-specific legislation. Additionally, many felt that the transformative potential of AI across various sectors of the economy remains largely underutilized and underappreciated within the political sphere.
Our research also examined the gender dimensions and human rights implications of Uganda's AI governance framework, or the lack thereof, adding depth to the discussion.
Challenges and Recommendations
Experts identified several challenges, including inadequate broadband connectivity, ICT literacy gaps, and a persistent gender digital divide. Concerns were also raised about biases in generative AI systems, which often fail to align with local contexts and may exploit marginalized communities. Another issue was the tendency of the Ugandan government to adopt “copy-and-paste” policies from the Global North without proper contextual adaptation.
To address these challenges, respondents proposed several recommendations:
- Capacity Building: Prioritizing the inclusion of minority groups in all AI initiatives.
- Investing in AI Research: Supporting research, sensitization, and awareness campaigns to highlight the benefits of AI.
- Ethical Guidelines: Establishing mechanisms for AI transparency, accountability, and ethical standards.
- Multi-Stakeholder Collaboration: Encouraging partnerships across different sectors to foster innovation and inclusivity.
Next Steps
This summary offers a snapshot of our draft findings. A detailed report exploring AI governance frameworks in Uganda will be published soon. The findings will first be validated by stakeholders and then presented to policymakers to stimulate ongoing debate and action on AI governance at the local, sub-national, and national levels in Uganda.