AfriLabs

Terms of Reference (ToR) for Reviewers/Markers - Llama 3.1 Impact African Hackathon

AfriLabs Background

AfriLabs is a network organization that supports innovation hubs, tech communities, and entrepreneurship in Africa. With a presence across 53 countries, AfriLabs plays a key role in fostering the growth of innovative tech solutions that address critical challenges in Africa. By connecting innovators, providing resources, and promoting collaboration, AfriLabs drives entrepreneurship and technological advancement across the continent. AfriLabs has been a strategic partner in various initiatives aimed at empowering African innovators, including hackathons, incubation programs, and capacity-building efforts.

BMGF and Meta Background

The Bill & Melinda Gates Foundation (BMGF), through its Gender Equality Digital Connectivity (GEDC) and Digital Public Infrastructure (DPI) teams, is committed to fostering digital inclusion and gender equality across Africa. The GEDC team focuses on developing digital tools and technologies that are sensitive to gender issues, ensuring equitable access to information and services for women.

Meta (formerly Facebook) is a global technology leader, known for its work in advancing artificial intelligence (AI), machine learning, and open-source tools. Meta supports the Llama Impact Grants Program, which seeks to empower innovators and startups to leverage open-source AI models like Llama to address societal challenges, including gender sensitivity and inclusivity.

Together, BMGF and Meta have partnered to launch a series of programs aimed at improving AI outputs in African languages and ensuring that these solutions are gender-sensitive, linguistically diverse, and culturally relevant.

About the Project

The Llama 3.1 Impact African Hackathon is an Africa-focused initiative that invites AI developers and innovators from Sub-Saharan Africa to create gender-sensitive AI solutions, with a particular focus on African languages. The hackathon will take place in Rwanda, with 100 participants working in 20 country-based teams.

Phase 1 of the project will culminate in a pitching session, where each team will present their AI solutions to a panel of judges. The selected top teams will advance to Phase 2, where they will receive either incubation or acceleration support to further develop and scale their gender-sensitive AI models.

Opportunity

We seek reviewers who will be responsible for reviewing, scoring, and selecting the top 100 participants from the pool of applications. Reviewers will assess each application based on predefined criteria, focusing on the applicant’s technical skills, creativity, and alignment with the hackathon’s objectives of developing gender-sensitive and linguistically diverse AI solutions. This role requires a strong understanding of AI technologies, gender sensitivity, linguistic diversity, and the African innovation landscape.

Scope of Work

  • Application Screening:
      • Review and assess applications submitted by prospective participants using a predefined rubric. This rubric will focus on technical expertise, innovation, problem-solving abilities, and the applicant’s interest in developing gender-sensitive and linguistically diverse AI solutions.
      • Ensure that all applications are evaluated fairly and consistently, adhering to the criteria set by the hackathon organizers.
  • Scoring and Shortlisting:
      • Score each application based on the rubric, ensuring that the evaluation process is objective, thorough, and aligned with the hackathon’s goals.
      • Shortlist the top applicants from the pool of applications, ensuring diverse representation in terms of gender, geography, and expertise.
  • Ensuring Diversity and Inclusion:
      • Ensure that the selection process reflects the hackathon’s commitment to diversity and inclusion, giving equal consideration to applicants from different regions, genders, and cultural backgrounds.
      • Support efforts to maintain gender balance and geographic diversity in the final selection of participants.
  • Feedback and Recommendations:
      • Provide constructive feedback on the review process, offering insights into the overall quality of applications and any trends observed during the evaluation.
      • Suggest improvements to the application process for future hackathons, ensuring that the criteria are continuously refined to attract high-quality, diverse participants.
  • Collaboration and Reporting:
      • Work closely with other reviewers to ensure that the evaluation process is consistent and transparent.
      • Submit a final report summarizing the evaluation outcomes, including key observations, challenges, and recommendations for selecting the top participants.

General Qualifications and Experience

All reviewers are expected to possess the following qualifications:

  • Previous Evaluation Experience: Experience in reviewing applications for hackathons, incubators, or competitive programs is required. Reviewers should have experience assessing both technical capabilities and innovative potential.
  • Analytical and Objective Thinking: Strong analytical skills and the ability to evaluate applications objectively, ensuring that each application is scored fairly and consistently.
  • Familiarity with the African Context: An understanding of the African innovation landscape is an added advantage.

Specialized Qualifications and Experience

Experience in one or more of the following areas is required.

  • Expertise in AI and Machine Learning: Reviewers should have a background in AI, machine learning, or software development, with a particular focus on applications that address societal challenges or gender-related issues.
  • Experience in Gender-Sensitive Projects: Experience working on projects that promote gender equity or inclusivity, particularly in relation to AI technologies.
  • Expertise in Linguistic Diversity: A comprehensive understanding of the differences and nuances among African languages. This expertise is essential for evaluating AI tools designed to accurately understand and translate these languages, ensuring that the solutions developed are both effective and culturally inclusive.

Timeline

  • Selected reviewers are expected to carry out this assignment over a total of seven (7) days.
  • The list of shortlisted participants must be submitted by 25th September 2024.

Remuneration: $500

Application Process

Interested candidates should submit their detailed CV and a sample of previous evaluation work by 19th September 2024 to procurement@afrilabs.com. The sample should demonstrate experience in evaluating applications for hackathons, competitive programs, or related initiatives.

Note: We are currently seeking application reviewers from Sub-Saharan Africa.