Our summer interns won 2nd place among 25 projects with 'Native vs. Non-Native Language Prompting: A Comparative Analysis'

This year, we offered two interesting and impactful projects, one of which investigated whether LLMs are truly inclusive in understanding and responding in Arabic. Together with our summer interns (Mohamed Bayan Kmainasi, Rakif Khan, Ali Shahrour, and Boushra Bendou), we explored how these models handle native and non-native prompts across different LLMs. Their project won second place among the 25 entries.

Our large-scale study involved 12 NLP datasets, covering areas such as factuality, subjective information, propaganda, and harmful content.

We examined three types of prompts (illustrated in the sketch below):
1) Native: the entire prompt is in Arabic.
2) Non-Native: only the input text is in Arabic.
3) Mixed: only the output label is in Arabic.
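To make the three setups concrete, here is a minimal sketch of how such prompts might be constructed for a sentiment-style classification task. The templates, the example Arabic sentence, and the label set are hypothetical illustrations, not the exact prompts used in the study.

```python
# Illustrative sketch of the three prompting setups described above.
# Templates and the example input are hypothetical, not taken from the paper.

ARABIC_TEXT = "الاقتصاد العالمي ينمو بوتيرة بطيئة هذا العام"  # example Arabic input text

PROMPTS = {
    # Native: instruction, input, and output labels are all in Arabic
    "native": (
        "صنّف النص التالي إلى: إيجابي أو سلبي أو محايد.\n"
        f"النص: {ARABIC_TEXT}\n"
        "التصنيف:"
    ),
    # Non-Native: instruction and labels in English, only the input text is in Arabic
    "non_native": (
        "Classify the following text as positive, negative, or neutral.\n"
        f"Text: {ARABIC_TEXT}\n"
        "Label:"
    ),
    # Mixed: instruction in English, but the model is asked to answer with an Arabic label
    "mixed": (
        "Classify the following text. Answer with one Arabic label: "
        "إيجابي، سلبي، أو محايد.\n"
        f"Text: {ARABIC_TEXT}\n"
        "التصنيف:"
    ),
}

if __name__ == "__main__":
    # Print each prompt variant so the differences between setups are easy to compare
    for setup, prompt in PROMPTS.items():
        print(f"--- {setup} ---\n{prompt}\n")
```

In practice, each variant would be sent to the same model on the same dataset so that any performance gap can be attributed to the prompt language rather than the task or the data.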

Surprisingly, our findings reveal that an Arabic-centric model struggles with native prompts. This raises an important challenge for the community: how can we build more inclusive LLMs? Inclusivity becomes even more critical when considering the complexities of Arabic dialects. As a community, we need to tackle this challenge together.

For more details, check out our paper, accepted at the WISE-2024 conference, at the link below.

📄 Read the Paper
💻 Explore the Code