As AI innovation snowballs, some developers are calling for a moratorium on new services. We asked stakeholders if the launch of new language-related AI should be put on hold.
John Worne, ÌìÃÀ´«Ã½ CEO
As language professionals, we find ourselves at what feels like a crossroads, with artificial intelligence rapidly transforming the context in which we work. The question of whether we should pause language-related AI development is thought-provoking, but a moratorium would likely be neither practical nor achievable. Realistically, we need to focus on how to harness AI’s potential while addressing its challenges. First, it’s crucial to recognise that ‘AI’ is not a monolithic entity but a diverse set of technologies with varying applications in the lives and work of linguists. From machine translation to speech recognition, many of these tools are used routinely.
As linguists, we have a unique responsibility to shape the development of language-related AI. Our expertise is invaluable. One key area where we can make a significant impact is in addressing AI bias. Large Language Models (LLMs) can perpetuate and amplify societal biases present in their training data. Furthermore, the data used is dominated by English and, as we are well placed to know, LLMs do a much, much poorer job in other languages. By pointing out tangible, memorable mistakes – e.g. ‘soy sauce’ rendered in Spanish as soy Sauce (‘I am Sauce’) on food ingredient lists – we can highlight the risks of unsupervised use of LLMs and generative AI while working to advocate for more inclusive and representative language.
In the realm of interpretation and translation, the growing sense is that AI’s best use is not in replacing human linguists but in augmenting our capabilities. The hope is that by utilising these tools we can focus on higher-order tasks that require real cultural understanding, where humans remain irreplaceable: context, nuance, humour, creativity, sensitivity, artistry.
Rather than a moratorium, what we need is a proactive approach to AI.
As the ÌìÃÀ´«Ã½, we have a natural role in this debate. By engaging with policymakers, our membership, universities and wider stakeholders, we can help to shape a future where AI enhances rather than diminishes the role of human linguists; but we cannot do it alone. Working with other bodies, such as the ITI, NRPSI, ATC and international associations, is vital – as is working with excellent university research centres focused on AI in languages, such as those at Surrey and Vienna. This is a real focus for us.
Ultimately, AI is not a battle to win, or a technology to ban; it is a capability we need to shape. And we are well placed to do so as it is built on what we do best: languages.
Christiane Ulmer-Leahey FCIL
With such rapid technological advancements, it is challenging to predict how language-related jobs will evolve. For example, will foreign language skills retain their value when AI can translate texts instantly through voice functions? It is important not to passively endure these changes but to actively shape the future working methods and goals of language professionals. This leads to the idea of a moratorium on development to provide breathing space for professionals to develop solutions. Such a pause could foster the establishment of think-tanks and interdisciplinary collaboration, mitigating long-term negative societal impacts.
Historically, technological innovations have presented themselves as a double-edged sword in all sectors, including translation, interpreting and language teaching. These advancements have transformed the workforce but they have not obliterated these professions. Instead, they have adapted and evolved. Progress has initiated the disappearance of certain occupations, but it has also generated new employment opportunities. For example, advanced translation programs, though diminishing the earnings of translators and interpreters, have expanded their roles into new communication contexts that require professional expertise. Similarly, online language learning programs have allowed language teachers to broaden their reach and save time on mundane tasks, enabling them to focus on creative activities.
Historical precedents suggest that development cannot be entirely stopped. Even if national and international bodies agreed to temporarily halt the development of language AI, there would be entities that would not comply. Thus, the focus should be on ensuring development progresses with positive and ethical standards.
Creating the right framework is paramount. Although past efforts have not always succeeded, continuous attempts are necessary due to the potential destructive power of AI if misused. AI’s influence on communication – a fundamental human capability – is profound, impacting the organisation of societal affairs. The economic implications are also significant, raising questions about who benefits from AI’s wealth creation, especially when jobs are lost due to automation.
The outlook on AI – positive or negative – depends on the broader perspective on life. With the push of a button, it is now possible to destroy the Earth or improve the lives of many people. It is crucial to define the necessary competencies and authority to act in connection with these advancements. Ethical considerations must be addressed through interdisciplinary platforms involving experts from various fields.
This has to take place with some sense of urgency without falling into a rushed panic. A comprehensive pause in development is unrealistic; instead, individual projects and questions should be managed independently, allowing time for thoughtful progression. The competition with potential darker forces in AI development remains uncertain, but the hope is that AI will ultimately enhance communication, provide time for creativity, and foster better intercultural understanding.
Sabine Braun, Surrey University
Despite its increasing role in meeting the demand for multilingual and accessible content, ‘language AI’ lacks understanding of the world, and the social, economic, cultural, political and other factors shaping human language use. It therefore remains unreliable, posing risks to multilingual and inclusive communication. To achieve human-level quality, intelligibility and accuracy, AI needs to go beyond identifying patterns and correlations; it must integrate human experience and knowledge of communication. This requires transformations in research and development, including a greater contribution from humanities-led language and translation studies.
Humanities-led research is well placed to shape the integration of AI tools into human translation/interpreting practices. More controversially, perhaps, such research should also pioneer human-centric and inclusive approaches to supplant conventional, risky data-driven methods in developing autonomous language systems (machine translation/interpreting) for situations where language professionals are unavailable or constrained by time and budget. In a highly multilingual society seeking equitable access to information for all, human professionals alone cannot meet all of the demand. Efforts should therefore be made to advance high-quality machine translation/interpreting, especially for ‘low-resource’ languages, to bridge global and intra-societal AI divides responsibly. However, without safeguards, these solutions risk perpetuating imperfect language AI, deteriorating the human experience and exacerbating inequalities.
Recent debate around rapid advances in generative AI has increased awareness of the risks inherent in the methods currently underpinning its development. But merely pausing AI development may not be sufficient or effective. A fresh approach is needed – one that sees the use of such technologies as part of a comprehensive solution for communication in a multilingual and inclusive society, that embraces the potential benefits of language AI while minimising its risks.
The guiding question must be how language AI can be developed and used safely to create multilingual and multimodal content serving users of language services with diverse linguistic, sensory and cognitive abilities. Ethical principles are key: human-centric development, inclusiveness, fairness, complementarity to the work of language professionals, transparency and accountability.
If such a shift can be achieved, language AI can have positive impacts on society and the economy by enhancing communication and accessibility, and promoting inclusive participation in digital society. A shift in direction and implementation of ethical principles will also foster genuine innovation in language AI, creating new market opportunities for many stakeholders.
Katharine Allen, SAFE-AI
Given the relentless global pace of technological progress, our focus should be on educating ourselves and the industry about these technologies, and on developing a legal framework that balances innovation with measurable quality improvements – ensuring the safe, fair, ethical and accountable adoption of this technology.
Along with my colleagues on the Interpreting SAFE-AI Task Force, I believe the challenge lies in shaping AI development to align with ethical standards and human values, not halting advancement. Our mission is to establish industry-wide guidance for the accountable adoption of AI in interpreting, facilitating dialogue among developers, vendors, buyers, practitioners and end-users. We track and assess AI capabilities in real-time language interpretation – covering translation captioning, multilingual captioning, speech-to-text, speech-to-speech, speech or text-to-sign and sign-to-speech or text.
LLMs have shown impressive advancements but cannot replace the human interpreter’s skillset to manage nuanced, culturally aware and contextually sensitive communication. Human communication encompasses not just language, but also emotions, cultural context, and non-verbal cues – elements AI struggles to replicate. Additionally, there are over 7,000 spoken and 300 sign languages, yet AI technology is only available for a handful, creating a digital divide where many are left behind.
Language barriers cannot be magically overcome by a single technology; the world needs to adopt a more nuanced understanding of human communication. Ongoing and widespread public and client education will be a crucial part of this work. To safeguard human expertise in interpreting, SAFE-AI advocates for robust ethical legislation framing AI development and how it is used for interpreting. This includes identifying where AI can enhance language services and where human intervention is essential. Our ‘Perception Survey and Advisory Group Research Report’ represents an initial step in building reliable knowledge about AI in interpreting.
As the next step, we recently published best practice guidance to determine when AI technologies can safely expand multilingual access and when the risks outweigh the benefits. Our goal is to provide actionable insights for a broad diversity of stakeholders, including policymakers, developers, educators, practitioners, buyers and vendors, so they can integrate AI interpreting as an effective and ethical option.
This article is reproduced from the Autumn 2024 issue of The Linguist.