The University of Connecticut has launched a study of how language shapes artificial intelligence (AI) and its interaction with humans. The project, titled “Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI,” investigates how terms like “intelligence,” “learning,” and “ethics” are interpreted differently across cultures. These nuances matter more as AI technologies become integral to everyday life. UConn aims to foster a more culturally nuanced approach to AI development, emphasizing how the varied meanings carried by language shape AI systems.
UConn’s research arrives at a crucial moment, as global studies continue to document disparities in multilingual AI models. A 2025 study by Johns Hopkins University observed that such models still favor dominant languages, above all English. Similarly, an MIT Sloan analysis found that the same AI prompt can yield different answers depending on the language in which it is asked. These findings underscore why UConn treats language as a cornerstone for building inclusive and accurate AI systems.
How Does Culture Shape AI?
At the heart of the UConn initiative are three themes: care, literacy, and rights, which the university uses to explore the interactions between people and machines. The care theme centers on empathy, highlighting how AI systems often misinterpret context when users write or speak in local dialects. As Ihsane Hmamouchi of the Université Internationale de Rabat noted, “Care begins with language,” pointing to the limitations of AI trained on narrow datasets.
Can Literacy in AI Bridge the Gaps?
Literacy, the project’s second theme, emphasizes understanding how AI systems create meaning rather than merely knowing how to operate them. This perspective is supported by a Cornell Global AI Initiative study showing that predictive-text tools tend to normalize Western phrasing, raising concerns that AI systems could gradually narrow linguistic expression.
Large language models inevitably reflect the biases of their training data, as MIT News has reported. In response, there is a push toward more inclusive AI: ETH Zurich is developing an open-source model spanning 1,000 languages to help address language preservation. Striking a more cautionary note, UConn’s Michael Lynch warned that growing trust in AI could erode creativity and critical thinking.
In a novel move, UConn’s team is developing an “anti-glossary” designed to adapt as languages and technologies evolve. The team argues for flexibility in AI terminology and encourages ongoing discussion among the stakeholders involved, since the words used in AI policy and governance shape how systems behave and how they are perceived.
Given the pace of advances in AI, UConn argues that terminology should reflect how language is actually used in the world. Its work suggests that a shared vocabulary is essential for effective dialogue between AI technologies and the humans who interact with them, with consequences for both economic and social life.
