Pre-SMP (Lyon): Les rendez-vous de l’interprétation

LES RENDEZ-VOUS DE L’INTERPRÉTATION À LYON, January 23 and 24, 2020

Summary of Activities by Melanie Klemm (AIIC-Suisse) edited by Gillian Misener (AIIC-Canada)

Thursday, January 23, Salon Bellevue, 09:00 – 12:30

Atelier de la voix/Voice Workshop by Hans-Werner Mühle – AB Representative for AIIC France

Most interpreters do not use their voices correctly, and many are not aware of voice issues. The most frequent voice-related complaints include:

  • trembling
  • hoarseness
  • cracking voice
  • dry throat

Some interpreter training courses offer practice in articulation and pronunciation, but most do not teach interpreters how to use their voice effectively.

To illustrate that voice and posture go hand in hand, Hans asked the group to perform breathing and posture exercises. The natural position of the head is forward and up, with the jaw and neck muscles relaxed; when the neck is correctly aligned, the muscles are in balance. In the booth, interpreters tend to bend forward and, in doing so, restrict the normal flow of air through the larynx.

To project their voices, opera singers use the thorax and the sinuses as resonating chambers. Certain consonants (P-T-K) force diaphragm recruitment. Interpreters should practice specific breathing exercises to mobilize the diaphragm and use the thorax as a resonating chamber.

Most people naturally organize their ideas so that each fits into a single breath of air. This is also one of the reasons why interpreters chunk: longer strings of words must be broken up so they can be expressed in one breath.

Thank you, Hans, for a rich session on such a relevant topic, where you skilfully combined theory, group and individual practice.


Thursday, January 23, Salon Louis XV, 14:00 – 17:30

Glossaries for Interpreters 2.0:  Tech-savvy terminology management by Josh Goldsmith

[Photo: Glossaries workshop © M. Klemm, 2020]

Josh opened the half-day workshop by describing different online resources interpreters can use to find terminology. Participants discussed the pros and cons of digital terminology management tools: ideally, digital glossaries should be scalable, easy to share, import, and export, and be available both online and offline. Josh also described the advantages of including images in glossaries.

Participants had been asked to register for an InterpretersHelp trial subscription and to download an InterpretBank trial. During the workshop, participants had the opportunity to explore and compare the pros and cons of both technologies, discussing collaborative glossary-building, tech-supported translation suggestions, and self-translating glossaries.

After exploring both glossary tools, Josh touched on extraction tools, manual and automatic terminology extraction from monolingual and multilingual documents, automatic glossary generation, vocabulary trainers, and the role of automatic speech recognition in digital terminology work.


Friday January 24, Salon Pauline (09:00-12:30 and 14:00-17:30)

Tablet Interpreting: A hands-on workshop by Josh Goldsmith

[Photo: Tablet interpreting seminar © M. Klemm, 2020]

Josh’s full-day workshop on tablet interpreting offered 15 AIIC interpreters – 7 of whom also worked as trainers – the opportunity to take a deep dive into how tablets can be used for interpreting.

The training covered iPads, Android tablets and the Microsoft Surface, a hybrid between tablet and computer. Sessions focused on paperless assignment preparation, the use of tablets for consecutive interpreting, using tablets for Sim-Consec, digital terminology management, using tablets in the booth, and productivity tools.

In the afternoon, Josh was joined by a special guest, Techforword co-founder Alexander Drechsel, who helped participants practice using specific apps that are useful for both interpreters and interpreter trainers.


Friday, January 24, evening

Conference: Artificial Intelligence in Intellectual Settings

By Prof. Salima Hassas, Head of the Master in Artificial Intelligence, University of Lyon

On Friday evening PRIMS participants had the opportunity to hear from an expert in the field of Artificial Intelligence (AI). Prof. Hassas focused on the evolution of AI, pointing out that its foundations date back to the 1940s and that the field has since been explored through different approaches, evolving in waves.

AI has always been compared to human intelligence (HI), or to the natural system it imitates. For example, artificial neural networks mimic the way biological neurons interact and apply that principle to computing. Deep Learning (DL) builds on these networks by stacking many layers of artificial neurons. When Deep Learning is applied to image recognition, a computer is exposed to many different images of the same object and builds a statistical model between input and output that defines the object; such systems can involve a billion connections, each with a specific function. The same process used in image analysis is also applied to Deep Learning for language processing.

AI is extremely reliable in areas that require precision, analysis and logical decision-making, and in fields such as disease diagnosis and image analysis it is already competitive enough to replace experts. One advantage of AI is that, unlike humans, it does not experience fatigue.

AI and Language Processing

Under controlled conditions, AI can identify grammatical structures and recognize speech. However, as soon as other factors (such as speed, accent, irony and emotion) come into play, its performance declines. AI requires a set of parameters to identify meaning in written and spoken communication. Moreover, communication based on data may be biased and is shaped by culture and environment, which makes AI difficult to control and to rely on. Effective communication requires language comprehension, an understanding of social behaviour, and emotional intelligence, capacities that machines currently lack. AI is competitive in routine and repetitive tasks and in developing strategy and optimizing processes, as long as emotional factors are kept out of the equation. Emotions such as empathy and compassion characterize human interaction and communication, so in areas that require both empathy and analysis, AI is being introduced as an additional tool to support the analytical component.

Will machines replace interpreters?

[Photo: AI in intellectual settings © M. Klemm, 2020]

The goal of interpreting is to capture the essence of a message in one language and deliver it in another. As noted above, this requires strong analytical skills along with other components of communication that AI cannot reproduce. Some researchers say that AI will never break the barrier of meaning. Still, other approaches to understanding communication are being explored: humans learn languages through experience, and this same approach is now being applied to Deep Learning in the field of communication. It is important to point out, however, that this approach will take a long time to mature. The most likely, and most optimistic, scenario is that AI will complement the services provided by interpreters.

Note: We would like to thank volunteer interpreters Sebastien Longhurst and Merav Pinchassof (English booth) and Yuliya Tsaplina (Russian booth).

After the conference, participants enjoyed a delicious cocktail and had the opportunity to chat with both long-standing and recent colleagues in a friendly and relaxed atmosphere.

Don’t miss our article on the round-table discussion on confidentiality and professional ethics.

You will find more interesting information on other PRIMS sessions here.