Balancing Innovation with Responsibility
Artificial Intelligence (AI) has already revolutionised many industries, and it clearly offers the potential to transform the field of translation. AI-powered translation and localization tools are certainly powerful, but with great power comes great responsibility. It is vital that those who provide language services use AI advisedly and ethically.
The arrival of ChatGPT has highlighted recent advancements in AI. But can AI systems provide reliable translations, and what ethical issues does their use raise beyond those we have already seen with Neural Machine Translation (NMT)?
Accuracy and reliability
One of the primary concerns with AI translation is the accuracy of the outputs. While AI has made significant strides in translating texts, it is far from infallible. Low-quality translations can lead to misunderstandings, miscommunications, and even potentially harmful consequences, especially in fields such as medicine, law, and diplomacy.
It is important to understand that an AI tool does not have a mind of its own. It cannot think as such; instead, it analyses the linguistic structure of the training data it has been fed. When answering a query, an AI algorithm predicts which answer is most likely to be useful. Algorithms cope poorly with the concept of not providing an answer: if they lack the data required to answer accurately, they will answer the query inaccurately instead. AI is prone to such errors, which are known as hallucinations. And that is a problem.
No AI tool can draw on outside context or ask for clarification when language is ambiguous, which, of course, it often is. AI is forced to choose a translation while relying only on its training data for facts and context. If the appropriate factual or contextual information is absent, algorithms essentially invent an answer. It is a form of educated guessing that often delivers inappropriate responses.
The inaccuracy of AI is clearly an ethical issue. Businesses, professionals, public bodies and governments rely on language service providers (LSPs) to provide accurate translations of their documents, content and messages. Inaccurate outputs could be misleading, offensive or dangerous.
It would be unethical to risk delivering translations that could mislead or offend readers, or that could result in harm.
Nuance and cultural sensitivity
All languages are nuanced, and those nuances make translating from one language into another a challenging undertaking. Certain words in the source language may have no direct equivalent in the target language, and many words in most languages have multiple meanings. To choose the right words to convey the intended meaning of content, a translator requires an in-depth understanding of the languages and cultures concerned. AI still struggles with such nuances and so can deliver translations which are inappropriate or just plain wrong. Culturally specific references are particularly challenging for algorithms.
An understanding of cultural sensitivities is also crucial when translating. Subjects, expressions and references that are acceptable in one region may be jarring or offensive to people of another culture. AI cannot always detect the cultural implications of a text and render appropriate translations.
Could it ever be ethical to risk delivering translations that offend cultural sensitivities?
Idiomatic expressions
It isn’t hard to imagine that idioms are problematic when translating and that AI does not cope well with such expressions. AI will usually deliver literal translations which can be confusing or even incomprehensible.
This might appear to be a practical issue rather than an ethical dilemma, but confusion can lead to readers getting the wrong end of the proverbial stick. An AI-generated translation could leave readers in Japan wondering what a stick has to do with understanding the issue! More importantly, a mistranslation of an idiom has the potential to mislead or confuse, which would alienate readers.
Legal and medical material
It is absolutely vital that legal and medical texts are translated correctly; the implications of errors are huge. As no two countries have identical laws and regulations tend to be complex, the potential clearly exists for AI to make mistakes. Laws are constantly updated, and it is possible that AI training data would not have been updated immediately in response to such changes.
Medical translations involve the interpretation of complex and specialized terminology together with a deep understanding of medical concepts, cultural nuances, and contextual accuracy. Mistakes can have serious consequences, and it isn't possible to hold AI to account for any errors.
Given the likelihood of errors and the lack of accountability, it would be unethical to rely on AI translations of legal and medical material.
Data privacy and security
AI translation tools often require access to large amounts of text data to function effectively. This raises concerns about data privacy and security. Sensitive information, such as personal data or confidential business communications, can be exposed if proper safeguards are not in place.
Impact on human translators
The rise of AI in translation has sparked fears about job displacement among professional translators. While AI can handle straightforward and repetitive translation tasks, it lacks the creativity, contextual understanding, and emotional intelligence that human translators bring to the table.
Copyright issues and false news
The training data for large language models has included the work of journalists. This has led news outlets to file lawsuits against OpenAI and Microsoft for copyright infringement. Of course, journalists don't always cover themselves in glory when it comes to factual accuracy. Using such articles as training data could embed bias in the datasets and promote the spread of false news.
Could it ever be ethical practice to rely on AI translations that may have been influenced by factually incorrect training data?
AI and the environment
It is worth considering the environmental cost of AI. Large language models consume huge amounts of electricity. This should be factored into the carbon accounting of any business that seeks to promote its environmentally friendly credentials.
How can AI be used responsibly for translation?
AI lacks the nuanced understanding of language and the cultural sensitivity of human translators. Worse still, large language models do make factual errors. It simply wouldn't be responsible to generate translations using AI without having the results thoroughly checked by skilled humans. Poor or erroneous translations will negatively impact brands and businesses. More importantly, mistakes can cause offence or great harm.
That's not to say that AI can't come in handy or assist in the translation process. It can provide a framework around which translators can work, helping to speed up the process, as the sketch below illustrates. But it would not be ethical for language service providers to pass off AI-generated outputs as human translations or to allow such outputs to be published without them first being edited by professionals.
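By way of illustration only, here is a minimal sketch of such a human-in-the-loop workflow. The class and function names are hypothetical, and the machine translation call is a placeholder rather than any particular engine's API; the point is simply that nothing reaches publication without a named human reviewer signing off.

```python
from dataclasses import dataclass


@dataclass
class TranslationJob:
    """One segment moving through an MT-draft-plus-human-post-editing pipeline."""
    source_text: str
    machine_draft: str = ""
    final_text: str = ""
    reviewed_by: str | None = None  # must be set by a human linguist before publication


def machine_translate(text: str) -> str:
    # Placeholder for a call to whichever MT engine the provider actually uses.
    return f"[machine draft of: {text}]"


def publish(job: TranslationJob) -> str:
    # The ethical gate: refuse to release anything that has not been post-edited.
    if job.reviewed_by is None or not job.final_text:
        raise ValueError("Refusing to publish an unreviewed machine translation.")
    return job.final_text


# The AI provides a fast first draft...
job = TranslationJob(source_text="Contrat de services, clause 4.2")
job.machine_draft = machine_translate(job.source_text)

# ...and a professional translator edits it and takes responsibility for it.
job.final_text = "Service agreement, clause 4.2"
job.reviewed_by = "linguist_042"

print(publish(job))
```

The AI draft speeds the translator up, but accountability stays with a person, which is the ethical point rather than a technical one.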
Much of the responsibility for the ethical use of AI lies with its developers. The technology exists and so it will be used. Developers of AI translation tools should incorporate cultural competence into their algorithms by training models on diverse datasets.
Companies developing AI translation tools must implement robust data protection measures, such as encryption and anonymization. Users should be cautious about the type of information they input into these tools and ensure that the services they use comply with data protection regulations such as GDPR.
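As a minimal sketch of that caution in practice, the snippet below pseudonymises obvious personal identifiers before any text leaves the user's environment. The regular expressions and function names are assumptions made for illustration, not part of any vendor's tooling, and real personal-data detection would need to go far beyond two patterns.

```python
import re

# Illustrative pre-processing step: replace obvious personal data with
# placeholders before the text is sent to any external translation service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-()]{7,}\d"),
}


def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Swap sensitive tokens for placeholders and keep the mapping locally."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping


def restore(translated: str, mapping: dict[str, str]) -> str:
    """Re-insert the original values after the translation comes back."""
    for placeholder, original in mapping.items():
        translated = translated.replace(placeholder, original)
    return translated


safe_text, mapping = pseudonymise(
    "Contact Anna at anna@example.com or +44 20 7946 0958."
)
# safe_text can now be sent to the external engine; mapping never leaves the machine.
```

Keeping the placeholder-to-value mapping local means the external engine never sees the underlying personal data, which sits comfortably alongside GDPR's data-minimisation principle.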
AI developers should strive for transparency by providing explanations of how their algorithms work and the data they rely on. Users should be informed about the limitations and potential biases of AI translation tools. Establishing clear lines of accountability is also essential to address any issues that arise, whether from the AI-powered tools themselves or from the human oversight of their outputs.
Conclusions
AI has the potential to transform the field of translation, making communication across languages more accessible and efficient. However, ethical considerations must guide its development and use. Ensuring accuracy, cultural sensitivity, data privacy, and the welfare of human translators should be primary considerations when harnessing the benefits of AI translation. By adopting best practices and fostering a collaborative relationship between AI and human expertise, it is possible to create a more inclusive and effective translation ecosystem.