
Sustainability - At the Heart of What We Do

Author: Miriam Finglass | Translation Project Manager

At Context, sustainability is much more than just a buzzword. It’s one of our most important values and a guiding principle in what we do. It is at the core of our activities, informing our interactions with clients and suppliers and the way we collaborate and grow as a team. Here we highlight some practical aspects of our sustainability efforts in relation to the environment, health and wellbeing, and community.


Environmental Sustainability

The Context team works towards continuously reducing our environmental impact. Some of our actions and achievements to date include:

  • An almost paperless office, through digital invoicing and printing only when necessary
  • A solar PV system installed at the Context office that powers all Context servers and desktops
  • Switching to a provider that supplies electricity from 100% renewable energy sources
  • Zoned heating at our office to reduce our carbon footprint
  • An office environment that contributes to local biodiversity, with one acre of native woodland and wildflowers
  • Flexible hybrid work that spares our team members long commutes and reduces their carbon footprints
  • Participation in tree-planting at Hometree, Co. Clare, to help restore native Irish woodlands
  • Taking part in the Climate Heroes Challenge


Climate Heroes Challenge

From 15 to 26 April, Context team members, together with some of our freelance linguists, took part in the Climate Heroes Challenge organised by Global Action Plan. Two Context teams competed against each other and against other community groups around Ireland to reduce their carbon footprints. Each team member logged daily activities on the simple, easy-to-use Climate Heroes platform, which showed encouraging real-time calculations of our carbon savings. It was a fun experience and helped us develop habits that will stick with us into the future. And there were some nice prizes for the winners! Across all teams, participants in the challenge saved a combined total of 43 tonnes of CO₂. To put that into perspective, if everyone in Ireland did the same, it would amount to a 63% reduction in Ireland’s total annual consumption-based emissions. Context also made a donation to support community programmes in Global Action Plan’s GLAS community gardens and nature explorer programme. The next Climate Heroes Challenge takes place in October 2024, and we’ll be looking to improve on our performance even more!


Health and Wellbeing

At Context, we know that health and wellbeing are vital for working sustainably, so we run an optional, customisable health and wellbeing programme for employees. Participants form pairs of health and wellbeing buddies. Each employee chooses their own goals from categories covering Eating Well, (Home) Office Ergonomics, Financial Wellbeing, Personal-Professional Development, Physical Exercise, Positive Impact, Quality Sleep, Personal Activities, Social Interaction and Workload Balancing. Once they’ve decided on their goals, employees discuss them with their buddy, whom they then meet regularly to catch up on how things are going.

A great strength of the programme is that it’s completely up to the employee to decide on their goals and what they want to share with their buddy. Goals could include taking more exercise, getting better sleep, making more time for social activities or hobbies, or improving work-life balance, but they can be anything the employee wants to achieve in terms of their health and wellbeing. Because employees choose their own goals, those goals can be more realistic than is often the case in one-size-fits-all programmes.

Another advantage is that the programme is motivating without being stressful: regular catchups are a fun, friendly opportunity to pause, reflect on, define and discuss goals and progress on an ongoing basis, aiming for continuous improvement. In addition to their personal commitment, the catchups with their buddy create an increased sense of accountability for employees. And to top it off, employees who take part receive an extra annual leave day per quarter, a wellbeing day, a great opportunity to pursue their personal health and wellbeing objectives.


Community

At Context, we value the important contribution of all our freelance linguists, and we support their fair and just treatment in relation to rates and working conditions. We believe that freelance translation and interpreting should be a sustainable activity. My colleague Ulrike Fuehrer’s article What is a ‘Translator’? details the important work done by our community interpreters, often in very challenging situations, and highlights the need to create a robust and sustainable job profile for community interpreters. Despite the growing migrant communities in Ireland today, no coherent government approach to the training and accreditation of community interpreters exists. In light of this reality, Context supports community interpreters with training and resources. Without the vital work of our community interpreters, equal access to public services for migrant communities would not exist. Through our work with our community interpreters and our strong working relationships with community and public sector institutions, Context supports the equal access of migrant communities in Ireland to public sector services such as medical care, legal supports, asylum applications, citizenship rights and employment. We hope that this contributes to the creation of sustainable communities in Ireland, now and into the future.

What does sustainability mean to you? Will you join the next Climate Heroes Challenge?


Generative AI in Perspective: An Overview

Author: Miriam Finglass | Translation Project Manager at Context

In our recent post Where is the translation industry right now on the AI hype curve?, we shared our thoughts on AI and translation. To put the current AI boom into perspective, here we give an overview of developments in the field and look at some of the common terms currently encountered in relation to AI and machine learning.

Artificial intelligence is not new. Alan Turing was one of the first to conduct substantial research in what he termed “machine intelligence” and published his seminal paper “Computing Machinery and Intelligence” in 1950 (Turing, 1950). In this paper, he proposed an experiment called “The Imitation Game”, now called the “Turing Test”, under which a machine was considered intelligent if a human interrogator could not distinguish it in conversation from a human being. It was AI pioneer Arthur Samuel who popularised the term “machine learning”, describing it in 1959 as the “programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning” (Samuel, 1959). In other words, machine learning (ML) involved computers learning from experience, giving them the ability to learn without being explicitly programmed. Samuel appeared on television in 1956 to demonstrate a computer playing checkers against a human, having used machine learning techniques to program the computer to learn to play the game. The term “artificial intelligence” itself was coined by American cognitive and computer scientist John McCarthy for the Dartmouth research conference later in the same year, one of the first dedicated events in the AI field (McCarthy et al., 1955). See Karjian’s timeline of the history and evolution of machine learning for more details on the development of AI over the last eight decades.

How can machines learn without explicit instructions? The answer is data. In ML, machines are trained with large amounts of data. Most machine learning involves developing algorithms (sets of rules or processes) that use statistical techniques to analyse and draw inferences from patterns in data (Xlong, 2023). After training and testing, the ML algorithms or models have learned from existing data and can make decisions or predictions for unseen data. The more data the models analyse, the better they become at making accurate predictions. ML models have been built for a range of tasks and find application in many different fields, including image recognition, speech recognition, recommendation systems, data analysis, fraud detection, medical diagnostics and many more. They are also used in natural language processing (NLP), the branch of AI that enables computers to understand, generate and manipulate human language, covering tasks such as machine translation (MT), text classification and summarisation.
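
To make the train-then-predict workflow concrete, here is a minimal Python sketch using the open-source scikit-learn library and its bundled iris dataset (both chosen purely for illustration): a simple statistical model learns from labelled examples and is then scored on examples it has never seen.

```python
# A minimal sketch of the ML workflow described above: train a model on
# labelled data, then have it make predictions for unseen data.
# scikit-learn and the iris dataset are used purely for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # features and labels

# Hold back a quarter of the data so the model can be tested on
# examples it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)  # a simple statistical classifier
model.fit(X_train, y_train)                # "learning from experience"

print("accuracy on unseen data:", model.score(X_test, y_test))
```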

ML models that are trained to recognise and generate plausible human language are called language models. Language models produce a probability distribution over words or word sequences. Simply put, they look at all the possible words and their likelihoods of occurring in order to predict the next most likely word in a sentence, based on the words that came before (Kapronczay, 2022). They do this by converting text to numerical representations called tokens and, based on the context, estimating the probability of a token or sequence of tokens occurring next. The simplest language models are n-gram models. An n-gram is a sequence of n words, e.g. a 3-gram is a sequence of three words. These models estimate the likelihood of a word based on the context of the previous n-1 words. One of the main limitations of n-gram models is their inability to use long contexts when calculating the probability of the next word. Language models are the technology behind autocomplete, speech recognition and optical character recognition, and are also used in machine translation. For more information on types of language models and how they work, see Voita (2023).
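
To make the n-gram idea concrete, here is a toy bigram (2-gram) model in plain Python; the miniature corpus is invented purely for illustration. The model simply counts which words follow which and turns the counts into probabilities.

```python
# A toy bigram (2-gram) language model: estimate the probability of the
# next word from counts of word pairs in a (tiny, invented) corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def next_word_probs(prev):
    """Probability distribution over the next word, given the previous one."""
    counts = pair_counts[prev]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# "the" is followed by "cat" twice, "mat" once and "fish" once,
# so the model predicts "cat" as the most likely next word.
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Even this toy model shows the limitation mentioned above: it only ever sees one word of context.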

Most ML models today are based on artificial neural networks (ANNs), ML models inspired by the neural networks in the human brain. The origins of ANNs go back to the work of Warren McCulloch and Walter Pitts, who published the first mathematical model of a neural network in 1943, providing a way to describe brain functions in abstract terms and to create algorithms that mimic human thought processes (Norman, 2024). An artificial neural network is a statistical computational ML model made up of layers of artificial neurons (Mazurek, 2020). Data is passed between the neurons via the connections or synapses between them. A simple neural network consists of three layers: an input layer, a hidden layer and an output layer. The input layer accepts data for calculation and passes it to the hidden layer, where all calculations take place. The result of these calculations is sent to the output layer. Each synapse has a weight, a numerical value that determines the strength of the signal transmitted and how much it affects the final result of the calculation. During the training process, a training algorithm measures the difference between the actual and target output and adjusts the weights depending on the error, so that the ANN learns from its errors to predict the correct output for a given input (DeepAI.org). In this way, ANNs can be developed into special-purpose, task-specific systems. The first artificial neural network was developed in 1951 by Marvin Minsky and Dean Edmonds. The Perceptron, developed by Frank Rosenblatt in 1958, was a single-layer ANN that could learn from data and became the foundation for modern neural networks.
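
As a rough illustration of this structure, the following NumPy sketch builds a three-layer network (input, hidden, output) and adjusts its weights in proportion to the error at the output; the data, layer sizes and learning rate are all invented for illustration.

```python
# A minimal three-layer neural network: input -> hidden -> output.
# Data, layer sizes and learning rate are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                  # 4 examples, 3 input features
y = np.array([[0.0], [1.0], [1.0], [0.0]])   # invented target outputs

W1 = rng.normal(size=(3, 5))                 # weights: input -> hidden layer
W2 = rng.normal(size=(5, 1))                 # weights: hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: data flows through the weighted connections.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Measure the difference between actual and target output...
    error = output - y
    if step % 250 == 0:
        print(f"step {step}: mean squared error {np.mean(error ** 2):.4f}")

    # ...and adjust the weights in proportion to the error, so the
    # network gradually learns to predict the target for each input.
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden
```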

ANNs with at least two hidden layers are referred to as deep neural networks and were first developed in the late 1960s. In 2012, an event set off an explosion of deep learning research and implementation: AlexNet, an ML model based on a deep neural network architecture, won the ImageNet Large Scale Visual Recognition Challenge, a competition that evaluated ML algorithms’ ability in object detection and image classification. AlexNet (Krizhevsky et al., 2012) achieved an error rate more than 10.8 percentage points lower than that of the runner-up. Its success was largely based on the depth of the model and on the use of multiple GPUs (graphics processing units) in training, which reduced the training time and allowed a bigger model to be trained. Deep learning transformed computer vision and drove progress in many areas over the following decade, including NLP.

Neural network architectures also transformed language models. Neural language models use deep learning to predict the likelihood of a sequence of words. They differ from n-gram models in the way they compute the probability of a token based on the previous context: neural models encode the previous context as a vector representation and use this to generate a probability distribution for the next token. This means that neural language models are able to capture context better than traditional statistical models. They can also handle more complex language structures and longer dependencies between words. For further details on the mathematics behind these models, see Voita (2023).
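
Schematically, the core step looks something like the sketch below (in NumPy, with invented sizes and untrained weights): the context is encoded as a vector, which is then mapped to a probability distribution over the whole vocabulary. Here the context is encoded naively by averaging token embeddings; real models learn far richer encoders.

```python
# Schematic core of a neural language model: encode the context as a
# vector, then map it to a probability distribution over the vocabulary.
# Vocabulary, sizes and (untrained) weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat", "fish"]

embeddings = rng.normal(size=(len(vocab), 8))  # one vector per token
W_out = rng.normal(size=(8, len(vocab)))       # projection to vocabulary

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Encode the context "the cat" as a single vector by (naively)
# averaging its token embeddings.
context = [vocab.index("the"), vocab.index("cat")]
context_vector = embeddings[context].mean(axis=0)

# Probability distribution over the next token.
for word, p in zip(vocab, softmax(context_vector @ W_out)):
    print(f"{word}: {p:.3f}")
```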

Machine translation (MT) based on artificial neural networks is referred to as neural machine translation (NMT); NMT systems began to outperform statistical machine translation (SMT) systems in 2015. NMT models learn from parallel corpora using artificial neural networks, carrying out translation as a computational operation. Compared to previous MT systems, NMT offers improved quality and fluency for many language combinations in a variety of domains, although the apparent fluency can sometimes make errors more difficult to identify. NMT models are, for example, the technology behind Google Translate, DeepL and Microsoft Bing Translator.

Generative AI models, the reason for the current AI boom, are capable of generating text, images, video or other data. They are often thought of as the models we can interact with using natural language. But how do they differ from the earlier technology discussed above? These models also work by learning the patterns and structure of their input training data and using this to generate new data, and they are still based on neural architectures. The difference is that before generative AI models emerged, neural networks, due to the limitations of computer hardware and data, were usually trained as discriminative models: they were used to distinguish between classes of data, classifying rather than generating, a good example being their application in computer vision. The availability of more powerful computer hardware and even more immense datasets has since made it possible to train models that can generate data. In general, generative AI models tend to be very large and multi-purpose, whereas traditional models tend to be smaller and task-specific. For a detailed discussion of how generative AI differs from traditional AI/ML models and of common network architectures for generative AI models, see Zaamout (2024).

Large language models (LLMs) are deep neural networks trained on enormous amounts of data and capable of generating what appears to be novel, human-like content. They are the current technology behind many NLP tasks. They function in the same way as smaller language models, i.e. by producing a probability distribution over words as described above. The main differences are the amount of data on which they are trained and the type of neural network architecture, with most current models using the transformer architecture, discussed in more detail below.

OpenAI introduced the first GPT model, a type of LLM, in 2018. GPT stands for generative pre-trained transformer. The transformer architecture is a neural network model developed by Google in 2017 (Vaswani et al., 2017) that has since revolutionised the field of NLP and deep learning thanks to its attention mechanisms. Attention is a mathematical technique that enables a model to focus on the important parts of a sentence or input sequence, allowing it to take better account of context, capture relationships between words at longer distances from each other and resolve ambiguities for words with different contextual meanings (Shastri, 2024). Transformer models are also capable of processing input data in parallel, making them faster and more efficient. Pre-training involves training a model on a large amount of data before fine-tuning it on a specific task. GPT models are pre-trained on a vast dataset of text containing millions of websites, articles, books etc., learning patterns and structures that give them a general understanding of the language. After pre-training, the model is fine-tuned for specific tasks, for example translation, text summarisation, question answering or content generation. Following the first GPT, OpenAI has introduced successive releases, the most recent being GPT-4o. GPTs can be used to write many different types of content, including essays, emails, poetry, plays, job applications or code. ChatGPT, the chatbot service developed by OpenAI, is based on task-specific GPT models that have been fine-tuned for instruction following and conversational tasks, such as answering questions. Although it is a conversational, general-purpose AI model and not an MT system, it can be used for translation.
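
At its core, the scaled dot-product attention defined in Vaswani et al. (2017) can be written in a few lines. The sketch below (in NumPy, with invented dimensions and random inputs) shows the essential computation: each token’s output becomes a weighted mix of all the value vectors, with the weights reflecting how relevant every other token is to it.

```python
# Scaled dot-product attention (Vaswani et al., 2017): the mechanism that
# lets a transformer weigh the relevance of every token to every other.
# Dimensions and inputs are random, purely for illustration.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    # How strongly each query token "attends" to each key token.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns the scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output vector is a context-weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
tokens, d_model = 6, 16  # a 6-token sequence of 16-dimensional vectors
Q = rng.normal(size=(tokens, d_model))
K = rng.normal(size=(tokens, d_model))
V = rng.normal(size=(tokens, d_model))

print(attention(Q, K, V).shape)  # (6, 16): one new vector per token
```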

Studies have shown positive results for generative models and LLMs in the translation of well-resourced languages but poor quality for low-resource languages. For example, Hendy et al. (2023) tested three GPT models on high- and low-resource languages, finding competitive translation quality for the high-resource languages but limited capabilities for the low-resource ones. Castilho et al. (2023) investigated how online NMT systems and ChatGPT deal with context-related issues and found that the GPT system outperformed the NMT systems on contextual awareness except in the case of Irish, a low-resource language, where it performed poorly. It should also be remembered that such studies are limited to small-scale test sets and may not be generalisable across language pairs, specific domains and text types.

Some drawbacks of generative AI and GPTs/LLMs also need to be considered.

  • Transformer models are computationally expensive, requiring substantial computational resources during training and inference (when using the model to generate predictions), and training times and costs are high.
  • LLMs come at a high cost to the environment. They have a high carbon footprint, and as generative models have grown larger and larger to improve performance, their energy requirements have become immense. Large amounts of water are also needed to cool data centres, and demand has grown for the rare earth minerals required to manufacture GPUs.
  • Due to their highly complex architecture and the “black box” nature of the models’ internal workings, interpreting and explaining why certain predictions are made is difficult.
  • Due to the way the attention mechanisms of transformers work, transformer models are very sensitive to the quality and quantity of the training data and may inherit and amplify societal biases present in the data (Vanmassenhove, 2024).
  • LLMs require a large amount of training data. In the case of machine translation, a lack of data generally means poor-quality results for low-resource languages.
  • Hallucinations, i.e. the generation of text that is unfaithful to the source input or nonsensical (Ji et al., 2023), occur across models used for natural language generation (NLG). In the case of machine translation, LLMs, like traditional NMT models, can produce hallucinated translations. Since LLMs tend to generate fluent and convincing responses, their hallucinations are more difficult to identify, posing a risk of harmful consequences. Guerreiro et al. (2023) found that the types of hallucination differed between traditional NMT models and GPTs. In the case of LLMs, hallucinations also extend to deviations from world knowledge or facts. For more information on hallucinations in NLG, see Ji et al. (2023).

The EU AI Act, the first binding regulation on AI in the world, was adopted by the European Council in May 2024. It aims to “foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect of fundamental rights of EU citizens and stimulate investment and innovation on artificial intelligence in Europe”. There are questions as to whether the Act will be effective in protecting the environment from the impact of AI (see Warso and Shrishak (2024) and Laranjeira de Pereira (2024)), but it is clear that at this point in the development of AI, it is time proper consideration was given, and action taken, on its social and environmental consequences.

New developments in AI are happening at an ever-increasing pace and bring both opportunities and challenges to Translation and many other industries. We will continue to monitor changes in this space as well as the environmental repercussions of AI.

How has AI impacted your role or industry? What is your experience?


Communication Across Language Barriers: Guidelines for Success

Author: Ulrike Fuehrer | Director at Context

Successful communication is something we all strive for and at times may struggle with.

Effective communication, which leaves both sender and receiver satisfied, requires a deliberate approach when we do not speak the language of the other person and our conversation is mediated by an interpreter. In Ireland, scheduled appointments with public sector organisations that are facilitated by an interpreter occur approximately 1,000 times per working day. In addition, unplanned events in emergency rooms, at Garda stations or with social or asylum support services require language interpretation if members of the public are not sufficiently confident to hold the conversation in English.

The lack of a common language can be a source of frustration to both parties, the member of the public and the public service provider alike. Living in a country, or in a world, where you do not understand the spoken or signed language is deeply frustrating and leads to increased exclusion. The least we can do to initiate a virtuous circle of empowerment and equal access to public services, apart from supporting cultural awareness, community-level solidarity and progressive state-led policies, is to ensure that service users of all nationalities are well supported and can be heard.

In its Report on Refugees and Integration from November 2023, the Irish Joint Committee on Children, Equality, Disability, Integration and Youth references interpreting services in one of its 96 recommendations: ‘Refugees of all nationalities should be supported equally and offered the same services, in particular translation services.’ However, the recommendations extend solely to the use of remote online interpreting services, which may be suitable when it comes to exchanging facts and figures, but may not be appropriate for consultations on sensitive cases or with vulnerable children or adults. The lack of adequate video-conferencing facilities or even two-way telephone systems in most public service settings would be one obstacle, together with uncertainty about the role of an interpreter and how or where to source interpreting services.

If you currently use interpreting services for your client appointments or wish to prepare for when you will need an interpreter to assist, you may find these guidelines helpful:

1. Expect the interpreted appointment to take longer, schedule additional time
2. Establish what language the client speaks well before the actual appointment date
3. Book an interpreter of that language in good time, provide details of the reason for the appointment, so the interpreting company can select and brief the best suited interpreter
4. Before the appointment, introduce yourself to the client via the interpreter, and allow the interpreter to briefly outline their role, in both languages
5. During the appointment, talk directly to the client using plain language, and allow the interpreter to be both your and your client’s voice
6. Ensure that the interpreter meets the client in your presence only
7. Pick up on the client’s body language and ask for clarification via the interpreter
8. Summarise any actions/advice/instructions for your client at the end of the appointment
9. Rebook the interpreter for any follow-on appointments via their company
10. Provide any feedback and special requirements to the interpreting company.

You can contact us at interpreting@context.ie if you require staff training on ‘How to Work Well With Interpreters’ – we are happy to deliver this training onsite or online, to support you in communicating successfully with any service users who speak languages other than English.


The Power of Technology, Teamwork and Tenacity

Case Study: Striking Success

Author: Ulrike Fuehrer | Director at Context

Strikes occur in Brussels nearly as often as in Paris, Berlin or Madrid, and they frequently affect the transport system. They are as powerful as they are disruptive.

On 20 June 2022, access to all Belgian airports and train stations was severely disrupted, which also affected participation in two parallel meetings we were supporting in Brussels. As the project manager on site in Brussels, I received a flood of emails, phone calls and messages from participants and interpreters who were scheduled to arrive from 13 different countries and were now unable to reach Belgium.

The stakes of the meetings were high. The organisational effort of getting everyone together – after 2 years of lockdowns – had been considerable.

At 6pm that evening, I took stock of the situation on the ground: We had some participants in Brussels, some participants from the same and/or from other countries stuck at their home airports, some interpreters on site and some equally stranded elsewhere.

During the next 3 hours, I discovered the power of advanced conference technology and the pivotal role that conference technicians and a committed team of interpreters can play. With the help of two supremely dedicated and determined technicians and a lot of communication back and forth, a solution was designed that allowed all participants, whether in Brussels or at home, and likewise all interpreters, to connect successfully.

Both meetings went ahead the next day as scheduled. The software and hardware deployed in Brussels interfaced in a technically highly complex solution, supporting an event which ran as smoothly as the proverbial swans gliding across the lake. The amount of paddling beneath the water was extraordinary.

One interpreter received a call from her airline at 5am that morning offering her an early evening flight; she interpreted remotely on day 1, rushed to the airport at close of business, caught a flight and worked from the Brussels booth the next morning. Half of the participants made it to the destination on day 2, the other half logged in online and stayed where they were.

While travel may have been a small nightmare during that week, the technical challenges were of a different magnitude altogether. My gratitude and respect for our solution providers took on a new dimension. Our long-term take-aways have been: a creative, solution-focused, state-of-the-art technical provider is worth their weight in gold. A committed interpreter team of versatile travellers, unfazed by any eventualities and ready to surmount any obstacles, is as crucial as a group of excellent linguists. And all meetings need to be planned with a built-in virtual component, to be activated in the event of a volcanic ash cloud, an air traffic control strike, icy weather conditions or, as on this occasion, a general strike.


Where is the Translation industry right now on the AI hype curve?

Authors: Angelika Zerfass | Ulrike Fuehrer | Miriam Finglass


The Context Advisory Board and AI

May 2024 saw the inauguration of the Context Advisory Board as an information and consultancy resource for the operational Context team.

On that occasion, internationally renowned translation technology expert and Context Advisory Board member Angelika Zerfass made a very welcome and meaningful contribution to the Context discussion on AI and translation.


Key Takeaways from the Discussion

Here are the thoughts and thought-provoking nuggets we took away from Angelika’s presentation:

  • AI is machine learning. Machines are trained with large amounts of data. They use statistics to discern patterns in the data in order to be able to make decisions or predictions for unseen data.
  • Generative AI is capable of generating text, images, video or other data. It has been made possible by the availability of more powerful computer hardware and immense datasets.
  • AI is pattern matching. It’s very useful in areas such as radiography/healthcare where, for example, X-ray patterns can be established in seconds to feed into diagnosis and patient care but, as it stands, it is largely inadequate in situations where contextual knowledge and understanding are crucial.
  • AI hallucinates. Where there is no content, it makes it up by using the most probable combination (of words, sounds, pixels…). While the result looks plausible to the human user at first glance, these most probable combinations state something that is simply not true.
  • AI tools do not understand, cannot evaluate and do not know when something is incorrect, biased, inappropriate or untrue.
  • AI systems have been shown to produce text (and images) that perpetuate gender, racial and other biases.
  • Hence the content quality available on the internet may have been at its best up until recent years. As AI propagates its own mistakes and myths, content quality stands to deteriorate. Content may look great – and yet bear no relation to reality.


Where are the Human Competencies required?

  • While large language models (LLMs) and other AI tools can generate images, videos, songs, texts and translations, they rely on human-created and curated content as training material.
  • Human translations continue to be an essential component in the quality segment of the market.
  • Human intervention on machine-translated output is required, and new linguistic profiles can add value in:
    • light or full post-editing of machine-translated content
    • continuous development of QA tools for machine-translated output
    • clean TMs, term lists and added metadata
    • editing content created in the target language: checking facts, ensuring consistency, eliminating bias
    • determining which texts are suitable for machine translation post-editing and which are not, and possibly pre-editing texts to make them more suitable for machine translation
  • We’ll need to hear from linguists as to the quality of the machine translated output and how that might vary by domain, language pair or text type. We’ll need their feedback on the post-editing effort needed and their experience of the translation process, considering job motivation and satisfaction.
  • For smaller languages, insufficient training data is available; humans are crucial here as subject matter experts, product experts and language experts.


Environmental

The environmental impact of AI is huge. In their study, Strubell et al. (2019) look at machine learning models based on the transformer neural architecture, commonly used for machine translation. The graphics processing unit (GPU) emissions generated when training a large model were equivalent to the output of 1.5 cars over those cars’ 20-year lifetime. And that is only the training: it does not include the power and cooling requirements for the computers, or the carbon emissions generated each time one of these systems is used. Luccioni et al. (2023) highlight the additional emissions of generative AI compared to traditional “task-specific” systems.


Data Protection and IP

  • Confidentiality of data processed by AI systems must be a priority.
  • There are intellectual property considerations in terms of the source of data used in training AI systems and the copyright of its authors.


So where are we at Context on the hype curve that all new technology (all new products?) traverses? Perhaps more inclined to critically evaluate generative AI solutions, to discuss and pilot post-editing models with our linguists and clients, to embrace the creation of new specialised job profiles, and quite horrified at the environmental cost of AI.


Where do you sit on the curve?



References
Luccioni, A.S., Jernite, Y. and Strubell, E. (2023). ‘Power Hungry Processing: Watts Driving the Cost of AI Deployment?’ Available at: http://arxiv.org/abs/2311.16863

Strubell, E., Ganesh, A. and McCallum, A. (2019). ‘Energy and Policy Considerations for Deep Learning in NLP’. Available at: http://arxiv.org/abs/1906.02243


What is a ‘Translator’?

Author: Ulrike Fuehrer | Director at Context

“A big round of applause to our translators, without whom this meeting would not have been possible.” The world thinks very highly of conference interpreters. How can they possibly listen and speak at the same time, rendering in a different language what they just heard with a delay of only 3 seconds, or even finishing the sentence before the keynote speaker who is racing through their presentation at breakneck speed?

While I have interpreted at hundreds of meetings of this kind and assisted heads of state and government on their various missions abroad, my own admiration and deep respect goes to a different group of ‘translators’: those who facilitate communication with refugees and victims of war in reception centres, who assist the family of a terminally ill patient in a hospice, who communicate bad news to the parents of a newborn baby, and those who assist busy nurses, stressed front-line staff, prosthetic surgeons and A&E staff trying to manage long lines of patients on trolleys. All in the one day, every day.

I’m speaking of our freelance community interpreters.

Every community interpreter I have met is motivated by humanitarian considerations: by the desire to help, to facilitate communication, and to make a meaningful contribution to ensuring that patients, clients, refugees, children and vulnerable adults are provided with adequate support.

Community interpreters are a typically underpaid, underrated and untrained cohort of linguists without whom, however, equal access to public services would not exist. Their fate is largely determined by public procurement, where working conditions are set by tender competitions and service quality is compromised by hourly (or even per-minute) rates dictated by a tendering authority.

If we want to create a robust and sustainable job profile for community interpreters, so that they can choose this as a career path and develop to become the best professionals they can be, we need to talk about budgets, about ownership and about political will. Training and professionalisation for community interpreters come at a cost. There are some – albeit rudimentary – training courses currently available, and comprehensive training expertise in community interpreting does exist in Ireland.

Where simultaneous conference interpreters (who process the spoken word in real time) and translators (of written text) have university degrees and the prospect, or actual experience, of an adequate income, community interpreters struggle to make a living from their service. This is despite the key role community interpreters play in making health care, education, local government services and more accessible to everyone.

Fifteen years ago, the National Consultative Committee on Racism and Interculturalism (NCCRI) was about to set up a state register of trained and accredited community interpreters when the banking crisis defeated this project. Waiting for the tide to turn again does not address the current, urgent need for qualified community interpreters across a broad range of languages. At Context, we do what we can to recruit, onboard, support, train and professionalise hundreds of community interpreters to take on the daily challenges. We try to motivate our freelancers and fight for their recognition. Every respectful service user is appreciated. Every positive comment helps.

What is your experience, as an interpreter, service user or observer?