Hybrid Meetings – Technology and the Human Touch
![](https://www.context.ie/wp-content/uploads/2024/12/european-council-room.jpg)
Authors: Context Conference Interpreting Team
Hybrid meetings are becoming the norm, both for European Works Council meetings and for other types of events. ‘Hybrid’ means different things to different people: participants and interpreters on site with remote key speakers ‘dialling in’, or everybody on site and only the interpreters remote. Let’s take a closer look at the second scenario.
Language service providers (LSPs) and Remote Simultaneous Interpreting (RSI) platforms offer an apparently similar package including platform use and interpreters, so does it matter whose services you engage? The packages offered may look the same at first, but under the bonnet they are very different and suit different requirements.
Platforms draw from a huge pool of interpreters available at very short notice in all time zones. They are large organisations that offer a standard ‘on-demand’ product. Project Managers may be dispersed, difficult to contact and busy with several meetings running in parallel. Information sharing between the various layers of the organisation can be slow. Their websites and internal tools offer video tutorials and digital training material. Audio and video are usually managed via the platform without any additional equipment like table microphones and cameras on site. As a result, sound and vision are often less than ideal, which impacts the user experience.
Clients have access to a platform, interpreters are an add-on; a good choice if you are looking for a standard product available on demand and at short notice worldwide. Perhaps less suitable for sensitive negotiations where continued support by a steady team of interpreters, technicians and project managers ensures smooth meeting experiences for all stakeholders.
If you work with a hands-on LSP like Context, the approach will be radically different: we start by selecting the best interpreters for the job and focus on a tailor-made solution to meet your individual needs. Our team of experienced Project Managers, who are themselves seasoned interpreters, are based in Europe and will be happy to discuss with you the pros and cons of a range of different platforms to find the one best suited to your requirements and your budget. Not all platforms offer the same functionality! An advance introduction to the use of specialised platforms is delivered by competent humans and actively supported. Crucially, one of our support managers is always present in the background throughout your meeting, to assist participants and coordinate with interpreters and technicians. We work closely with a wide network of skilled equipment providers in many European locations to make sure that additional microphones and cameras are installed where necessary, and to achieve high-quality sound and vision. In a nutshell, we ensure that the participants in your meeting make the most of the service provided and communicate successfully.
So, what would work best for you? A standard on-demand package or a solution tailored to your company, your people and your goals? You can call us on +353 91 353820 or email conference@context.ie to discuss your needs.
Embracing Flexibility: The Impact of a Four-Day Work Week for Context’s Translation Project Management Team
![](https://www.context.ie/wp-content/uploads/2024/11/four-days-work-week.jpg)
Authors: Context Translation Project Management Team
In an era where work-life balance is more important than ever, the translation project management team at Context began its journey to explore the potential of a flexible working week in 2023. Inspired by a push to rethink traditional work structures and a desire to show up better at work and in our personal lives, we decided to trial a four-day work week from January to June 2024. As the Context work structure is based on self-managing teams, we had the authority to develop and ultimately implement this life-changing concept.
After researching successful implementations in various organisations, we tailored the concept to our unique Context team structure and workflow. The results have been incredibly positive, significantly improving our wellbeing and enhancing our productivity.
Rethinking the Work Week
The initiative began with an open discussion about how we could reimagine our working hours. Recognising the need for flexibility in our increasingly fast-paced environment, we explored existing literature and case studies on four-day work weeks to understand how similar projects had been implemented and their effects on people and organisations. This groundwork laid the foundation for our trial, which we implemented on 1 January 2024, after careful planning and enthusiastic support from our colleagues.
Positive Impacts on our Wellbeing
One of the most immediate benefits of the four-day work week has been the profound impact on our wellbeing. With an additional day off, team members reported feeling less stressed and more energised. Many used this time to run essential errands typically squeezed into busy weekdays or weekends, allowing for a more balanced approach to daily life.
This newfound flexibility has enabled us to engage more deeply in activities that enhance our personal lives, whether volunteering in the community, pursuing hobbies, or spending quality time with family. Some reported feeling more inclined to exercise and explore creative passions, fostering a more positive and vibrant team culture.
To ensure these positive impacts are sustainable, we implemented a system for tracking our wellbeing and work capacity daily. Each team member records their mood, energy levels, and workload at the end of each day. This data collection allows us to analyse trends over time, providing insights into how the four-day work week affects our overall wellbeing and productivity. By reviewing this data regularly, we can identify patterns and make informed decisions about workload distribution and support needs, ensuring a healthy work environment.
Boosting Productivity and Efficiency
The positive effects of the four-day work week extend beyond personal wellbeing; they also translate into tangible gains for our work. We have observed an increase in timely project deliveries. By structuring our week around a designated day off, we prioritise tasks more effectively, leading to improved planning and communication among team members.
It’s important to note that a four-day week was particularly suited to the specific type of project management work in our business line. For other teams, a flexible working week may look different and may involve shorter work days and/or flexible working hours. The core idea is to plan and prioritise effectively so that we don’t end up working extra hours on those four days. Yet, we acknowledge that there are times when we do put in extra hours, which is a testament to our commitment and willingness to make it work. Ultimately, we aren’t counting the time; it’s about the quality of time spent on our tasks and the outcomes we achieve.
Sustainability in the Workplace
An often-overlooked aspect of flexible work arrangements is their contribution to sustainability. The extra day off has encouraged more sustainable practices among team members. Many have used their free time for activities that promote sustainability. This collective engagement strengthens our connection to the environment and fosters a sense of shared responsibility in our team.
Building a Stronger Team Culture
The transition to a four-day work week has also reinforced our team culture and camaraderie. We’ve had more opportunities to collaborate, share ideas, and support one another, which has strengthened our professional relationships. We work interdependently and can manage each other’s tasks if we need to.
The focus on wellbeing and work-life balance has created an atmosphere of trust and respect. Team members feel valued not just for their output, but as individuals with lives outside of work. This cultural shift has resulted in increased job satisfaction, fostering an environment where everyone is motivated to contribute their best.
Conclusion
The decision to trial a four-day work week has proven to be a significant step forward for Context’s translation project management team. Not only have we seen improvements in our wellbeing and productivity, but we’ve also reinforced our commitment to sustainability and team cohesion.
As we continue to embrace this flexible approach, we believe that the benefits will extend beyond our immediate team, ultimately to the benefit of the entire organisation. Our experience highlights the value of a self-managing team and of flexibility in the workplace. We hope that our positive experience will encourage other organisations to consider similar initiatives for a healthier, more engaged, and sustainable working life.
What do we recommend you consider when moving to a four-day work week structure? Think about:
- What structure would work for your team to ensure continuity of service and minimal impact on clients and suppliers?
- Will workflows and processes need adjustment? Can you devise a contingency plan?
- How will you measure impact and results?
We would be delighted to hear from other teams and about similar initiatives!
Save Food, Save the Planet - the Context Food Waste Challenge
![](https://www.context.ie/wp-content/uploads/2024/10/save-food.jpg)
Authors: Miriam Finglass & Alice Gallanagh | Translation Project Managers
Following on from taking part in the Climate Heroes Challenge earlier in the year, the Sustainability team at Context is now organising our next Climate Challenges!
First, we will hold a Food Waste Challenge, starting Tuesday 29th October and running until Tuesday 5th November, with the aim of reducing our food waste.
Context and some of our freelance linguists will compete in two teams to reduce our food waste as much as possible over the week! The week-long challenge will help us develop better habits and awareness of our consumption and food waste patterns.
Globally, around 931 million tonnes of food go to waste each year. 61% of this comes from households, 26% from food service and 13% from retail (1). The EPA estimates that Ireland generated 750,000 tonnes of food waste in 2022 (2). This equates to 146 kg of food waste per person. Food waste is a significant contributor to climate change, with food loss and waste contributing to 8-10% of greenhouse gas emissions (3).
We’re really looking forward to learning about the steps we can take to reduce food waste. We know that there’s much more we can do to make a significant impact.
There will be a well-earned prize at the end of the challenge for the winner of each team. In the Climate Heroes Challenge we held earlier this year, we sent one of the winners for a weekend away to an eco-retreat which you can read about here.
After the Food Waste Challenge, we will hold a Plastics Challenge from 11th to 18th November. More details on that to follow!
How do you feel about food waste? Do you ever ask a restaurant or business about their food waste policy? Doing so can help businesses to see that their customers do care about the issue and can bring about some positive changes. Why not organise or take part in a food waste challenge?
Additional Resources:
(1) UN Food Waste Index Report 2024
(2, 3) Environmental Protection Agency – Food Waste Statistics
Tackling Climate Change - Strength in Numbers
![](https://www.context.ie/wp-content/uploads/2024/10/sustainable.jpg)
Author: Emily Scott | Translator
I started working with Context as a freelance translator in 2022. In April of this year, the Context Translation team invited me to take part in ‘Climate Heroes’, a fun, team-based challenge that invites you to learn about the contributing factors to climate change, develop positive habits and compete in reducing carbon emissions. Participants tracked their individual actions and efforts on an app and competed within their team as well as against almost 60 community groups from 18 counties across Ireland, with a total of 443 people participating nationwide. It’s a really worthwhile endeavour and free to join for community groups and organisations in Ireland. The donation Context made was of course appreciated.
I took part in the Climate Heroes Challenge with Context because I know that sustainability is one of their enduring values and I was already conscious of my impact on the environment. I’m also a big fan of friendly competition and was confident in my ability to rack up a few points on the scoreboard! While taking part, I found it fascinating trying to discover ways of tweaking my lifestyle in order to gain as many points as I could. During the challenge, I often found that there were numerous small changes I could make that would make a big difference in terms of my carbon footprint. For example, I’m already a vegetarian, so having oat milk some days wasn’t too much of a change and I enjoyed finding new vegan recipes to try. And while I couldn’t do them every day, doing a few big energy-saving actions was a real motivator; for example, on days when I wasn’t washing my hair, I challenged myself to shower in under four minutes in order to get those all-important points!
I was delighted to have topped the leader board and I thoroughly enjoyed taking part. The Climate Heroes Challenge has helped me become more climate conscious in my day-to-day life and I’ve kept some of the habits that I picked up while taking part in the event – I’ll definitely be looking to join in the next one! I was blown away by the fantastic prize offered by Context – I had a lovely weekend with my husband and my dog staying in a luxury bell tent in a secluded corner of the North York Moors, not far from where we live. We roasted marshmallows on the fire, cooked in the open air under the trees and fell asleep listening to the stream that ran through our camp, all while living in keeping with the site’s eco-friendly ethos.
We can all make small steps to be more sustainable and changes that seem big at first soon become second nature. Last year, I found out that UK supermarkets have recycling points for packaging like crisp packets and bread bags. Initially, it seemed quite cumbersome to remember to separate out this type of packaging and take it to the supermarket but it’s now become part of our routine at home and the amount of general waste we collect every two weeks has dramatically reduced. There are little things we can all do that will make a big difference.
That said, we as individuals can only do so much and need the cooperation of large companies in a number of industries in order to make a real difference. I believe that working with organisations who prioritise and are committed to climate conscious practices is essential for a sustainable future.
So, what are you waiting for? Visit the Climate Heroes website and enjoy the challenge of trying out some new habits to reduce your carbon footprint.
Winner of the Context Climate Curlews team,
Emily Scott
Generative AI in Perspective: An Overview
![context-generative-ai](https://www.context.ie/wp-content/uploads/2024/07/context-generative-ai-e1731318831553-uai-258x172.jpg)
Author: Miriam Finglass | Translation Project Manager at Context
In our recent post “Where is the translation industry right now on the AI hype curve?”, we shared our thoughts on AI and translation. To put the current AI boom into perspective, here we give an overview of developments in the field and look at some of the common terms currently encountered in relation to AI and machine learning.
Artificial intelligence is not new. Alan Turing was one of the first to conduct substantial research in what he termed “machine intelligence” and published his seminal paper “Computing Machinery and Intelligence” in 1950 (Turing, 1950). In this paper, he proposed an experiment called “The Imitation Game”, now called the “Turing Test”, under which a machine was considered intelligent if a human interrogator could not distinguish it in conversation from a human being. It was AI pioneer Arthur Samuel who popularised the term “machine learning”, describing it in 1959 as the “programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning” (Samuel, 1959). In other words, machine learning (ML) involved computers learning from experience, giving them the ability to learn without being explicitly programmed. Samuel appeared on television in 1956 to demonstrate a computer playing checkers against a human, having used machine learning techniques to program the computer to learn to play the game. The term “artificial intelligence” itself was coined by American cognitive and computer scientist John McCarthy for the Dartmouth research conference later in the same year, one of the first dedicated events in the AI field (McCarthy et al, 1955). See Karjian’s timeline of the history and evolution of machine learning for more details on the development of AI over the last eight decades.
How can machines learn without explicit instructions? The answer is data. In ML, machines are trained with large amounts of data. Most machine learning involves developing algorithms (sets of rules or processes) that use statistical techniques to analyse and draw inferences from patterns in data (Xlong, 2023). After training and testing, the ML algorithms or models have learned from existing data and can make decisions or predictions for unseen data. The more data the models analyse, the better they become at making accurate predictions. ML models have been built for a range of tasks and find application in many different fields, including image recognition, speech recognition, recommendation systems, data analysis, fraud detection, medical diagnostics and many more. They are also used in natural language processing (NLP), the branch of AI that enables computers to understand, generate, and manipulate human language, including tasks such as machine translation (MT), text classification or summarisation.
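To make the train-then-predict pattern concrete, here is a minimal sketch in Python. The library (scikit-learn) and the toy numbers are our own illustrative choices, not something prescribed by the field:

```python
# A minimal sketch of "learning from existing data, then predicting for
# unseen data". scikit-learn and the toy numbers are illustrative only.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: hours of study -> pass (1) / fail (0).
X_train = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)      # training: the model learns from the data

print(model.predict([[7.5]]))    # prediction for unseen data (expected: [1])
```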
ML models that are trained to recognise and generate plausible human language are called language models. Language models are trained to compute a probability distribution over words or word sequences. Simply put, they look at all the possible words and their likelihoods of occurring to predict the next most likely word in a sentence based on the words that came before (Kapronczay, 2022). They do this by converting text to numerical representations called tokens and, based on the context, estimating the probability of a token or sequence of tokens occurring next. The simplest language models are n-gram models. An n-gram is a sequence of n words, e.g. a 3-gram is a sequence of three words. These models estimate the likelihood of a word based on the context of the previous n-1 words. One of the main limitations of n-gram models is their inability to use long contexts in calculating the probability of the next word. Language models are the basis of the technology behind autocomplete, speech recognition and optical character recognition, and are also used in machine translation. For more information on types of language models and how they work, see Voita (2023).
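As an illustration of the idea, a bigram (2-gram) model fits in a few lines of Python. The toy corpus is invented for the example; a real model would be estimated from vastly more text:

```python
# Minimal bigram model: estimate P(next word | previous word) from counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # count each observed word pair

def next_word_probs(prev):
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(next_word_probs("the"))       # {'cat': 0.666..., 'mat': 0.333...}
```

Because this model only ever sees the single previous word, it also illustrates the limitation mentioned above: any longer context is invisible to it.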
Most ML models today are based on artificial neural networks (ANNs). These are ML models inspired by the neural networks in the human brain. The origins of ANNs go back to the work of Warren McCulloch and Walter Pitts, who published the first mathematical model of a neural network in 1943, providing a way to describe brain functions in abstract terms and to create algorithms that mimic human thought processes (Norman, 2024). An artificial neural network is a statistical computational ML model made up of layers of artificial neurons (Mazurek, 2020). Data is passed between the neurons via the connections or synapses between them. A simple neural network consists of three layers: an input layer, a hidden layer and an output layer. The input layer accepts data for calculation and passes it to the hidden layer, where all calculations take place. The result of these calculations is sent to the output layer. Each synapse has a weight, a numerical value that determines the strength of the signal transmitted and how much it affects the final result of the calculation. During the training process, a training algorithm measures the difference between the actual and target output and adjusts the weights depending on the error, so that the ANN learns from its errors to predict the correct output for a given input (DeepAI.org). In this way, ANNs can be developed to become special-purpose, task-specific systems. The first artificial neural network was developed in 1951 by Marvin Minsky and Dean Edmonds. The Perceptron, developed by Frank Rosenblatt in 1958, was a single-layer ANN that could learn from data and became the foundation for modern neural networks.
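The forward pass of such a three-layer network can be sketched in a few lines of Python with NumPy. The layer sizes and random weights below are purely illustrative; a real network learns its weights from training data as just described:

```python
# Forward pass of a tiny input -> hidden -> output network (illustrative only).
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # squashes each value into (0, 1)

rng = np.random.default_rng(42)

x = np.array([0.5, -1.2, 0.3])       # input layer: three data values
W1 = rng.normal(size=(4, 3)) * 0.5   # synapse weights, input -> hidden
W2 = rng.normal(size=(1, 4)) * 0.5   # synapse weights, hidden -> output

hidden = sigmoid(W1 @ x)             # hidden layer: where calculation happens
output = sigmoid(W2 @ hidden)        # output layer: the final result

print(output)
# Training would compare `output` with the target value and adjust
# W1 and W2 in proportion to the error, as the paragraph above describes.
```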
ANNs with at least two hidden layers are referred to as deep neural networks and were first developed in the late 60s. In 2012, there was an event that set off an explosion of deep learning research and implementation. AlexNet, an ML model based on a deep neural network architecture, won the ImageNet Large Scale Visual Recognition Challenge, a competition that evaluated ML algorithms’ ability in the area of object detection and image classification. AlexNet (Krizhevsky et al., 2012) achieved an error rate more than 10.8 percentage points lower than that of the runner-up. Its success was largely based on the depth of the model and the use of multiple GPUs (graphical processing units) in training the model, which reduced the training time, allowing a bigger model to be trained. Deep learning transformed computer vision and, from the late 2000s on, drove progress in many areas, including NLP.
Neural network architectures also transformed language models. Neural language models use deep learning to predict the likelihood of a sequence of words. Compared to n-gram models, they differ in the way they compute the probability of a token based on the previous context. Neural models encode context by generating a vector representation for the previous context and using this to generate a probability distribution of the next token. This means that neural language models are able to capture context better than traditional statistical models. Also, they can handle more complex language structures and longer dependencies between words. For further details on the mathematics behind these models, see Voita (2023).
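A minimal sketch of this idea, again with random, untrained weights and a simple averaging encoder (both our own simplifications), might look like this:

```python
# Sketch of a neural language model: encode the previous context as a
# vector, then map it to a probability distribution over the vocabulary.
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
dim = 8

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), dim))   # one vector per token
W_out = rng.normal(size=(len(vocab), dim))        # projection to vocab scores

def next_token_distribution(context_words):
    # Vector representation of the previous context (here: a simple average).
    ids = [vocab.index(w) for w in context_words]
    context_vec = embeddings[ids].mean(axis=0)
    scores = W_out @ context_vec
    probs = np.exp(scores) / np.exp(scores).sum() # softmax
    return dict(zip(vocab, probs.round(3)))

print(next_token_distribution(["the", "cat"]))
```

Unlike the bigram model sketched earlier, nothing here limits the context to one word: the encoder can compress an arbitrarily long sequence into the context vector.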
Machine translation (MT) based on artificial neural networks is referred to as neural machine translation (NMT), which outperformed statistical machine translation (SMT) systems in 2015. NMT models learn from parallel corpora using artificial neural networks, carrying out translation as a computational operation. NMT offers improved quality and fluency for many language combinations in a variety of domains compared to previous MT systems, although the apparent fluency can sometimes make errors more difficult to identify. NMT models are, for example, the technology behind Google Translate, DeepL and Microsoft Bing Translator.
Generative AI models, the reason for the current AI boom, are capable of generating text, images, video or other data. They are often thought of as the models we can interact with using natural language. But how are they different from all the previous technology discussed? These models also work by learning the patterns and structure of their input training data, using this to generate new data. And they are still based on neural architectures. The difference is that prior to the emergence of generative AI models, neural networks, due to limitations of computer hardware and data, were usually trained as discriminative models: they were used to distinguish between classes of data, classifying rather than generating data, a good example being their application in computer vision. However, the availability of more powerful computer hardware and immense datasets has made it possible to train models that are capable of generating data. In general, generative AI models tend to be very large, while traditional models tend to be smaller. Generative models also tend to be multi-purpose, whereas traditional models tend to be task-specific. For a detailed discussion on distinguishing generative AI from traditional AI ML models and common network architectures for generative AI models, see Zaamout (2024).
Large language models (LLMs) are deep neural networks trained on enormous amounts of data and are capable of generating what appears to be novel, human-like content. They are the current technology behind many NLP tasks. They function in the same way as smaller language models, i.e., computing a probability distribution over words as described above. The main differences are the amount of data on which they are trained and the type of neural network architecture, with most current models using the Transformer architecture, which is discussed in more detail below.
OpenAI introduced the first GPT model, a type of LLM, in 2018. GPT stands for generative pre-trained transformer. The transformer architecture is a neural network model that was developed by Google in 2017 (Vaswani et al., 2017) and has since revolutionised the field of NLP and deep learning, thanks to its attention mechanisms. Attention is a mathematical technique that enables a model to focus on important parts of a sentence or input sequence, allowing it to better consider context, consider relationships between words at a longer distance from each other and resolve ambiguities for words with different contextual meanings (Shastri, 2024). Transformer models are also capable of processing input data in parallel, making them faster and more efficient. Pre-training involves training a model on a large amount of data before fine-tuning it on a specific task. GPT models are pre-trained on a vast data set of text, containing millions of websites, articles, books etc, learning the patterns and structures to give them a general understanding of the language. After pre-training, the model is fine-tuned on specific tasks, for example translation, text summarisation, question answering or content generation. Following the first GPT, OpenAI introduced successive releases, the most recent being GPT-4o. GPTs can be used to write many different types of content, including essays, emails, poetry, plays, job applications or code. ChatGPT, the chatbot service developed by OpenAI, is based on task-specific GPT models that have been fine-tuned for instruction following and conversational tasks, such as answering questions. Although it is a conversational, general-purpose AI model and not an MT system, it can be used for translation.
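To make attention less abstract, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the transformer (Vaswani et al., 2017). The random matrices stand in for learned projections of the input tokens and are purely illustrative:

```python
# Scaled dot-product attention, the core mechanism of the transformer.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # relevance of each token to each other token
    weights = softmax(scores)         # "focus": each row sums to 1 over the sequence
    return weights @ V                # context-aware mix of the values

rng = np.random.default_rng(1)
seq_len, d_k = 5, 16                  # 5 tokens, 16-dimensional representations
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))

print(attention(Q, K, V).shape)       # (5, 16): one context-aware vector per token
```

Each row of the result is a weighted mix of information from the whole sequence, which is what allows the model to relate words that are far apart and to disambiguate words by context.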
Studies have shown positive results for generative models and LLMs in the translation of well-resourced languages but poor quality for low resource languages. For example, Hendy et al. (2023) tested three GPT models on high and low resource languages, finding competitive translation quality for the high resource languages but limited capabilities for low resource languages. Castilho et al. (2023) investigated how online NMT systems and ChatGPT deal with context-related issues and found the GPT system outperformed the NMT systems for contextual awareness except in the case of Irish, a low resource language, where it performed poorly. It should also be remembered that such studies are limited to small-scale test sets and may not be generalisable across language pairs, specific domains and text types.
Some drawbacks of generative AI and GPTs/LLMs also need to be considered.
- Transformer models are computationally expensive, requiring substantial computational resources during training and inference (when using the model to generate predictions), and training times and costs are high.
- LLMs come at a high cost to the environment. They have a high carbon footprint and as generative models have become larger and larger to improve performance, their energy requirements have become immense. Large amounts of water are also needed to cool data centres and demand has grown for the rare earth minerals required to manufacture GPUs.
- Due to their highly complex architecture and the “black box” nature of the internal working of the models, interpreting and explaining why certain predictions are made is difficult.
- Due to the way the attention mechanisms of transformers work, transformer models are very sensitive to the quality and quantity of the training data and may inherit and amplify societal biases present in the data (Vanmassenhove, 2024).
- LLMs require a large amount of training data. In the case of machine translation, a lack of data generally means poor quality results for low-resource languages.
- Hallucinations, i.e. the generation of text that is unfaithful to the source input or nonsensical (Ji et al., 2023), occur across models used for natural language generation (NLG). In the case of machine translation, LLMs, like traditional NMT models, can produce hallucinated translations. Since LLMs tend to generate fluent and convincing responses, it is more difficult to identify their hallucinations, posing a risk of harmful consequences. Guerreiro et al. (2023) found that the types of hallucination differed between traditional NMT models and GPTs. Hallucinations in the case of LLMs also extend to deviations from world knowledge or facts. For more information on hallucinations in the field of NLG, see Ji et al. (2023).
The EU AI Act, the first binding regulation on AI in the world, was adopted by the European Council in May 2024. It aims to “foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect of fundamental rights of EU citizens and stimulate investment and innovation on artificial intelligence in Europe”. There are questions as to whether the Act will be effective in protecting the environment from the impact of AI, see Warso and Shrishak (2024) and Laranjeira de Pereira (2024), but it is clear that, at this point in the development of AI, proper consideration must be given, and action taken, on its social and environmental consequences.
New developments in AI are happening at an ever-increasing pace and bring both opportunities and challenges to the translation industry and many others. We will continue to monitor changes in this space as well as the environmental repercussions of AI.
How has AI impacted on your role/industry? What is your experience?
Communication Across Language Barriers: Guidelines for Success
![](https://www.context.ie/wp-content/uploads/2017/09/translation-localisation-context-07-uai-258x172.jpg)
Author: Ulrike Fuehrer | Director at Context
Successful communication is something we all strive for and at times may struggle with.
Effective communication, which leaves both sender and receiver satisfied, requires a deliberate approach when we do not speak the language of the other person and our conversation is mediated by an interpreter. In Ireland, scheduled appointments with public sector organisations that are facilitated by an interpreter occur approximately 1,000 times per working day. Additionally, unplanned events in Emergency Rooms, at Garda Stations or with social or asylum support services require language interpretation if members of the public are not sufficiently confident to hold the conversation in English.
The lack of a common language can be a source of frustration to both parties: the member of the public and the public service provider alike. Living in a country or in a world where you do not understand the – spoken or signed – language is deeply frustrating and leads to increased exclusion. The least we can do to initiate a virtuous circle of empowerment and equal access to public services, apart from supporting cultural awareness, community-level solidarity and progressive state-led policies, is to ensure that service users of all nationalities are well supported and can be heard.
In its Report on Refugees and Integration from November 2023, the Irish Joint Committee on Children, Equality, Disability, Integration and Youth references interpreting services in one of its 96 recommendations: ‘Refugees of all nationalities should be supported equally and offered the same services, in particular translation services.’ However, the recommendations extend solely to the use of remote online interpreting services, which may be suitable when it comes to exchanging facts and figures, but may not be appropriate for consultations on sensitive cases or with vulnerable children or adults. The lack of adequate video-conferencing facilities or even two-way telephone systems in most public service settings would be one obstacle, together with uncertainty about the role of an interpreter and how or where to source interpreting services.
If you currently use interpreting services for your client appointments or wish to prepare for when you will need an interpreter to assist, you may find these guidelines helpful:
1. Expect the interpreted appointment to take longer, schedule additional time
2. Establish which language the client speaks best, well before the actual appointment date
3. Book an interpreter of that language in good time, provide details of the reason for the appointment, so the interpreting company can select and brief the best suited interpreter
4. Before the appointment, introduce yourself to the client via the interpreter, and allow the interpreter to briefly outline their role, in both languages
5. During the appointment, talk directly to the client using plain language, and allow the interpreter to be both your and your client’s voice
6. Ensure that the interpreter meets the client in your presence only
7. Pick up on the client’s body language and ask for clarification via the interpreter
8. Summarise any actions/advice/instructions for your client at the end of the appointment
9. Rebook the interpreter for any follow-on appointments via their company
10. Provide any feedback and special requirements to the interpreting company.
You can contact us at interpreting@context.ie if you require staff training on ‘How to Work Well With Interpreters’ – we are happy to deliver the relevant training to you, onsite or online, to support you in communicating successfully with any service users who speak languages other than English.
Where is the Translation industry right now on the AI hype curve?
![](https://www.context.ie/wp-content/uploads/2024/06/ai-hype-curve-context-uai-258x172.jpg)
Authors: Angelika Zerfass | Ulrike Fuehrer | Miriam Finglass
Context AB and AI
May 2024 saw the inauguration of the Context Advisory Board as an information and consultancy resource for the operational Context team.
On that occasion, internationally renowned translation technology expert and Context Advisory Board member Angelika Zerfass made a very welcome and meaningful contribution to the Context discussion on AI and Translation.
Key Takeaways from the Discussion
Here are the thoughts and thought-provoking nuggets we took away from Angelika’s presentation:
- AI is machine learning. Machines are trained with large amounts of data. They use statistics to discern patterns in the data in order to be able to make decisions or predictions for unseen data.
- Generative AI is capable of generating text, images, video or other data. It has been made possible by the availability of more powerful computer hardware and immense datasets.
- AI is pattern matching. It’s very useful in areas such as radiography/healthcare where, for example, X-ray patterns can be established in seconds to feed into diagnosis and patient care but, as it stands, mainly inadequate in situations where contextual knowledge and understanding are crucial.
- AI hallucinates. Where there is no content, it makes it up by using the most probable combination (of words, sounds, pixels…). While the result looks plausible to the human user at first glance, these most probable combinations state something that is simply not true.
- AI tools do not understand, cannot evaluate and do not know when something is incorrect, biased, inappropriate or untrue.
- AI systems have been shown to produce text (and images) that perpetuate gender, racial and other biases.
- Hence the content quality available on the internet may have been at its best up until recent years. As AI propagates its own mistakes and myths, content quality stands to deteriorate. Content may look great – and yet bear no relation to reality.
Where are the Human Competencies required?
- While large language models (LLMs) and other AI tools can generate images, videos, songs, texts and translations, they rely on human-created and curated content as training material.
- Human translations continue to be an essential component in the quality segment of the market.
- Human intervention on machine translated output is required; new linguistic profiles can add value in:
- light or full post-editing of machine translated content
- continuous development of QA tools for machine translated output
- cleaning TMs and term lists, adding metadata
- editing content created in the target language: checking facts, ensuring consistency, eliminating bias
- determining which texts are suitable for machine translation post-editing and which are not, possibly pre-editing texts to make them more suitable for machine translation
- We’ll need to hear from linguists as to the quality of the machine translated output and how that might vary by domain, language pair or text type. We’ll need their feedback on the post-editing effort needed and their experience of the translation process, considering job motivation and satisfaction.
- For smaller languages, insufficient training data is available; humans are crucial here as subject matter experts, product experts and language experts.
Environmental
The environmental impact of AI is huge. In their study, Strubell et al. (2019) look at machine learning models based on the transformer neural architecture, commonly used for machine translation. The graphical processing unit (GPU) emissions generated when training a large model were equivalent to the output of 1.5 cars over the 20-year lifetime of those cars. And that’s only considering the training. This doesn’t consider the power and cooling requirements for the computers or the carbon emissions generated each time one of these systems is used. Luccioni et al. (2023) highlight the additional emissions related to generative AI as compared to traditional “task-specific” systems.
Data Protection and IP
- Confidentiality of data processed by AI systems must be a priority.
- There are intellectual property considerations in terms of the source of data used in training AI systems and the copyright of its authors.
So where are we at Context on the hype curve that all new technology – all new products? – traverses? Perhaps more inclined to critically evaluate generative AI solutions, to discuss and pilot post-editing models with our linguists and clients, to embrace the creation of new specialised job profiles – and quite horrified at the environmental cost of AI.
Where do you sit on the curve?
References
Luccioni, A.S., Jernite, Y. and Strubell, E. (2023). ‘Power Hungry Processing: Watts Driving the Cost of AI Deployment?’ Available at: http://arxiv.org/abs/2311.16863
Strubell, E., Ganesh, A. and McCallum, A. (2019). ‘Energy and Policy Considerations for Deep Learning in NLP’. Available at: http://arxiv.org/abs/1906.02243