IPRS Meetings & General Conferences of the 52nd Congress

52nd Intersteno Congress, July 13th – 19th, 2019

2019 Cagliari IPRS Meetings

IPRS sessions – Sunday, July 14th, 2019

Ethics and Technologies: Do businesses address ethical issues when developing new tools?

Iulia Mihalache, Canada

According to the TAUS Speech-to-Speech Translation Technology Report (2017), real-world speech-to-speech translation (S2ST) applications have been developed over the past years to serve various needs in different areas such as the medical or the military fields, in humanitarian settings or in multilingual communication contexts. The spread of real-time technologies for communication, the spike in the use of social technologies, the blending or convergence of tools and professions, and the new translation technology trends (augmented translation, virtual reality, deep learning, machine translation, Artificial Intelligence and big data) all underline the need to help people around the world engage in communication without limits. All of these systems are based on neural networks.

However, technology is not perfect and the use of translation technologies will always require human skills. Technology is the intermediary. Tools for speech translation that combine a speech recognizer with machine translation are also being tested for use in clinical settings, but translation tools are not adequate in their current form to translate clinical information.
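
Tools of this kind are usually cascaded. A rough sketch of that architecture, with placeholder component names rather than any vendor's actual API, might look as follows:

    from dataclasses import dataclass
    from typing import Callable

    # Minimal sketch of a cascaded S2ST pipeline, assuming three
    # pluggable components; real systems add segmentation,
    # punctuation restoration and confidence handling.
    @dataclass
    class S2STPipeline:
        asr: Callable[[bytes], str]  # audio -> source-language text
        mt: Callable[[str], str]    # source text -> target-language text
        tts: Callable[[str], bytes]  # target text -> synthesized audio

        def translate(self, audio: bytes) -> bytes:
            source_text = self.asr(audio)       # speech recognition
            target_text = self.mt(source_text)  # machine translation
            return self.tts(target_text)       # speech synthesis

Each stage can fail independently, and an ASR error propagates unchecked into the translation, which is one reason the human skills mentioned above remain necessary.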

In this context, how do businesses or users remain sensitive to such technological imperfections with regard to S2ST applications? Do businesses equip themselves with a technological and ethical framework that governs their product development? Ethics pertains to detecting bias in product development, complying with privacy laws, operating reliably, safely and consistently in order to prevent cyberattacks, tailoring the system to different audiences and contexts, and making sure the users know if a system is being used, and how and why the system suggested certain outcomes. What, therefore, is the companies’ rhetoric around ethics in relation to technology? Can S2ST tools be considered “ethical IT innovations”? Does artificial intelligence have not only an economic impact but also an ethical one?

The following innovations were mentioned: the wristband watch (with 44 languages), smart glasses, the translating pendant and earpieces. These innovations are driven by values that are not associated with short-term profit (a view which translates to the idea that humans serve the machines) but rather with a wish to help people grow by fulfilling higher individual and social needs such as reputation, creativity or belongingness.

The final conclusions are that there are various ethical challenges for translation technology developers: 1. privacy, security and reliability; 2. autonomy and agency; 3. economic implications; 4. translation quality and the impact of low quality translations on the users; and 5. human-machine interaction and the position of the human being.

The international Michela alphabets, an idea that is still current

Paolo Michela Zucco & Fabio Angeloni, Italy

The Michela method was born as “phonographic” and universal. The basis for English, French and German theories was illustrated in the first manual of 1882. Michela Zucco presents the most recent attempts to develop in practice the original theories for foreign languages and their adaptation to the digital scenario.

The Italian Senate uses Michela, the piano-like chorded machine. Are there possibilities for an international language? Is the French language the best for the Michela alphabet? Demonstrations show messages of 10,000 words per hour in French; is that also possible in other languages?

Videos show the use of the Michela alphabet in projects. Paolo points to the Stenotyping project, to show the briefs and the translation in English, and the simplified Arabic orthographic theory (an experimental system). The system is also used for “steno on music”, an artistic project. Michela’s shorthand codes were even used to construct a musical.

Despite still being at an experimental and investigative stage, the projects have enabled a more practical exploration of the system’s application to other languages. This work is actively ongoing and has now progressed to the development of a theory for Spanish, to which the conflict-free Michela theory for Italian, itself at an advanced stage of completion, can be applied after minor adjustments.

PerVoice technological solutions in diamesic translation

Paolo Paravento, Italy

PerVoice is a leading provider in speech recognition, based in Italy. Their services include transcription, speech analytics, reporting, subtitling and broadcasting. After 20 years of research and development they are currently working with more than 31 languages and over 51 language models.

In business, much speech is lost. However, some businesses collect more than 1 million hours of speech per year. They need to know their customers and the latest buying trends, and therefore need speech analytics, as provided by PerVoice. PerVoice offers three business lines: media intelligence, reporting and contact centers. Examples of those lines are broadcast monitoring, live subtitling and speech analytics.

In 2011 the PerVoice Audio Synchro Suite was introduced to the market, later followed by, for example, the VERBAMATIC T-solution for court reporters. The aim is to introduce systems that support and optimize the work of reporters in courts, congresses or parliaments. The Suite is a computer-assisted transcription tool delivering real-time automatic transcription, enriched with MP3, PDF, Word or video formats. This solution includes speaker recognition and, if a video format is used, video word search. PerVoice provides cloud services, like Flyscribe Cloud, as well as on-premise services, like Flyscribe transcriptor. When privacy is important and it is of the utmost importance that data is protected, the offline one-laptop service of Flyscribe can be the solution.

All speech recognition technology used by PerVoice is firmly based on the use of deep neural networks, a.k.a. machine learning or Artificial Intelligence (AI). They rely on TV recordings to train the acoustic and speech models of their speech recognition system. Here the motto applies: more data, better accuracy. PerVoice, however, does not claim an accuracy of 100%.

Sorizava Collaborative Artificial Intelligence Solutions

Sang Guy Kang, South Korea

Sorizava is a new provider in the field of automatic speech recognition, based in South Korea. They introduce, in their own words, an “Amazing Innovation to Change the Future of Shorthand”. Their claim to fame is Sorizava Alpha, an AI (Artificial Intelligence) solution that delivers live automatic speech recognition for reporters or stenographers. It aims at the growing global billion dollar market for automatic speech recognition.

Sorizava readily admits that Alpha is not perfect. Deep Learning lowered the word error rate remarkably. The Alpha network (4 layers, 1024 nodes) delivers recognition rates around 90-95%. In line with the ASR-solution of PerVoice, Alpha is not meant to replace reporters. Alpha is introduced as a partner for reporters, supporting their work. It aims at “human-AI collaboration”. So, what does Alpha do?

Firstly, Alpha uses an AI microphone that recognizes both small sounds (when one is speaking away from the microphone) and crosstalk/overlapping speech, as it uses individual recordings per speaker. Secondly, Alpha provides individual speaker recognition (multi-recognition), including on-time insertion in the text. Thirdly, Alpha automatically creates three files: text, video and SMI (subtitling), so video playback includes subtitling as well. Fourthly, Alpha provides live subtitling for live streaming.
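
SMI here refers to SAMI (Synchronized Accessible Media Interchange), a simple HTML-like subtitle format in which each <SYNC> tag carries a start time in milliseconds. A minimal sketch of writing such a file (the captions and style class are invented):

    # Writes a minimal SAMI (.smi) subtitle file; each <SYNC> gives
    # the caption's start time in milliseconds. Content is invented.
    captions = [(0, "Good morning."), (2500, "The meeting is now open.")]

    lines = ["<SAMI>", '<HEAD><STYLE TYPE="text/css"><!--',
             "P { font-family: sans-serif; }",
             ".ENCC { Name: English; lang: en-US; }",
             "--></STYLE></HEAD>", "<BODY>"]
    lines += [f"<SYNC Start={ms}><P Class=ENCC>{text}</P></SYNC>"
              for ms, text in captions]
    lines += ["</BODY>", "</SAMI>"]

    with open("meeting.smi", "w", encoding="utf-8") as f:
        f.write("\n".join(lines))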

ASR solutions like Alpha traditionally use word libraries to optimize recognition results. Sorizava calls their AI version the Hint Dictionary. It includes difficult-to-recognize terms such as names, place names and jargon. These terms can be registered beforehand and during a meeting. Alpha also automatically enters punctuation marks and can be set to automatically delete meaningless words.
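
Sorizava did not disclose how the Hint Dictionary works internally. One common approach, sketched below purely as an assumption, is to post-correct ASR output by fuzzy-matching tokens against the registered terms:

    import difflib

    # Hypothetical hint-dictionary post-correction, not Sorizava's
    # actual method: a registered term replaces any ASR token that
    # closely resembles it.
    HINTS = ["Cagliari", "Intersteno", "velotype"]

    def apply_hints(tokens, cutoff=0.8):
        corrected = []
        for token in tokens:
            match = difflib.get_close_matches(token, HINTS, n=1, cutoff=cutoff)
            corrected.append(match[0] if match else token)
        return corrected

    print(apply_hints(["the", "velotipe", "demo"]))
    # -> ['the', 'velotype', 'demo']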

Sorizava expects that their application will create a new type of job called “AI-stenographer”. They claim that reporters like working with Alpha, that reporters are more confident and motivated and that reporters indicate an increase in efficiency of 50%.
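
The talk gave only the bare dimensions of the Alpha network. Purely as an illustration of what a 4-layer, 1024-node network could look like, and not Sorizava’s actual architecture, here is a minimal PyTorch sketch with invented input and output sizes:

    import torch.nn as nn

    # Illustrative only: four hidden layers of 1024 nodes mapping
    # acoustic feature frames to per-frame symbol scores.
    N_FEATURES, N_SYMBOLS, HIDDEN = 80, 5000, 1024

    layers, in_size = [], N_FEATURES
    for _ in range(4):
        layers += [nn.Linear(in_size, HIDDEN), nn.ReLU()]
        in_size = HIDDEN
    layers.append(nn.Linear(HIDDEN, N_SYMBOLS))

    model = nn.Sequential(*layers)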

Official Reports and Body Language

John Vice, United Kingdom

John Vice, editor of debates in the House of Lords, gave a presentation about the different approaches editors use when it comes to reporting non-verbal communication during debates. With a lot of interesting examples he illustrated the challenges reporters face in capturing significant unspoken events in the official report, for example MPs who cough, shout or even sing during debates. Non-verbal communication contains important information relating to the debate. It tells the reporter something about the atmosphere or the mood or intention of the speaker. Non-verbal communication comes in many forms. Kinesics is a well-known example and relates to the interpretation of body motion communication such as facial expressions and gestures. But physical appearance, the use of space, paralinguistic information, such as the accent or fluency of the speaker, chronemics and haptics can also be important for the reader in order to fully understand the debate.

Parliamentary reporters want their reports to provide transparency for readers who were not present at the debate. In order to do that, it is helpful to reflect on some questions or assumptions. Who is the audience? What is the reporter’s relation to the debate? Is he or she an observer or an interpreter? What are the reporter’s terms of reference? What are the power relations? Who has the right to ask for or insist on changes in the report?

When it comes to reporting significant unspoken events and non-verbal communication, reporters use different approaches to verbally intervene and provide transparency for the reader. In general there are four strategies: only the words are reported, the reporter makes small changes to the words, a parenthetical description is used, or the reporter alludes to or ignores the non-verbal. These strategies were illustrated with interesting examples of different cases in parliaments around the world.

Everyday linguistic and editorial choices in parliamentary reporting

Eero Voutilainen, Finland

In his presentation Eero Voutilainen presented his research on linguistic and editorial principles that are being used by parliamentary reporting offices. The method used for collecting the data was a survey study that consisted of 84 questions based on academic and professional literature and professional practice. The aim of the research was to find out what kind of practical linguistic and editorial norms are used in parliamentary reporting offices around the world and how much variation there is between these different practices.

The data collected stems from 36 national and 3 regional parliaments. It should be taken into account that the majority of these parliaments are situated in the Global North and in western Europe. The respondents range from the heads of parliamentary reporting offices and reporters themselves to persons responsible for the development or approval of linguistic principles in parliamentary reporting offices.

The survey consisted of a Likert scale and a combination of multiple-choice questions and open questions. The Likert scale was based on the question “do you make these changes in parliamentary reporting?”, with 1 meaning practically never/not at all and 5 meaning practically always/completely. The linguistic and editorial principles were divided into different categories. During the presentation all these categories were illustrated with interesting examples from everyday editorial work.
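
As a toy illustration of how responses to one such item can be summarized (the numbers are invented, not the survey’s data):

    from statistics import mean, median

    # Invented responses to one Likert item (1 = practically never /
    # not at all, 5 = practically always / completely).
    responses = [4, 5, 3, 4, 5, 2, 4]
    print(f"mean = {mean(responses):.2f}, median = {median(responses)}")
    # -> mean = 3.86, median = 4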

After analyzing all the data, some general principles appear to be common to different parliamentary reporting offices:

  • Spoken pronunciation and grammar are usually standardized towards written standard language
  • Dialectal and informal words are generally not changed
  • Planning expressions and self-corrections are usually edited out if they are not lengthy and informative
  • Tone regulators and rhetorically significant repetitions are generally not removed
  • Contradictions and improper words are generally not removed
  • Incorrect citations and figures are often corrected
  • Incorrect facts are usually not corrected, unless they are clearly innocent blunders

It was stressed that the examples consist of decontextualized situations and that further practical discussion is needed and encouraged. A new platform for this discussion is Tiro, a new journal of professional reporting and transcription, of which the first issue is coming soon.

We have seen the reporting future and its name is… – Parliamentary reporting in 2030

Henk-Jan Eras, Deru Schelhaas & Germ Sikma, The Netherlands

In their presentation, Henk-Jan Eras, Deru Schelhaas and Germ Sikma of the Dutch parliament looked into the future of parliamentary reporting. How will digitization and robotization have changed the profession and the official report by 2030?

In an interactive quiz Eras tested the views of the audience. No consensus was reached among the participants on three questions. Will robots have taken over the jobs of a parliamentary reporter? Will the video registration of a debate have replaced the textual report? Will parliaments have introduced a VAR?

Schelhaas showed how the current digital representation of the report follows the rules of the printed book. Publishing the report online allows the reader to customize the report according to his interests and needs. In 2030 we will see a further integration of the text and parliamentary data from various sources, which will make the report more relevant and easier to use.

Sikma gave a quick overview of trends and influences that will affect the job of a parliamentary reporter. In 2030 the reporter will need a different skill set and use different tools to assist him in his task, such as automatic speech recognition and virtual reality. Developments inside and outside the political sphere, such as a further demand for transparency and the possibility of online plenary sessions, will also have their effects.


General Conferences – Tuesday, July 16th, 2019

Diamesic translation: a theoretical framework for reporting and captioning disciplines

Carlo Eugeni, Italy

Carlo Eugeni strives to create a new theoretical framework for all disciplines that deal with turning spoken words into written words. First, he recalls what steps have already been taken in formulating a theoretical framework to analyse translation and variations in language. Eugeni stresses that the act of translation not only takes place between languages (interlingual), but also within one language (intralingual). We can for instance translate words from one historical period or social group to another. Eugeni’s research focuses on diamesic translation, which he defines as “any process, or product thereof, in which a combination of spoken and non-verbal signs carrying communicative intention is replaced by a combination of written signs, in the same language, reflecting, or inspired by, the original entity”. Diamesic translation is at the core of various jobs and disciplines, such as transcription, ASR, reporting and subtitling.

Diamesic translation can take different forms, depending on how closely one sticks to the original language. While litteratim translation translates all characters, verbatim translation translates only words, and sensatim translation strives to capture the meaning of what is being said, which can be summarized or simplified. Regardless of the technique used, the act of translation always involves three steps: encoding the source text (listening), decoding it into meaning (the language of the mind or “mentalese”), and recoding it into the target language.
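
An invented example makes the three levels concrete:

    # The same utterance rendered at the three levels of diamesic
    # translation (example invented for illustration).
    source     = "uh, I... I think we, y'know, should postpone the vote"

    litteratim = source                                 # every sound, hesitations included
    verbatim   = "I think we should postpone the vote"  # complete words only
    sensatim   = "The speaker proposes postponing the vote."  # the meaning, condensed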

Eugeni has applied this model to EEGs of the brains of reporters and subtitlers. In the resulting images, the activities of encoding, decoding and recoding correspond with alpha, beta and gamma waves respectively. The relative strength of the waves differs depending on whether the translation is intralingual or interlingual and whether the translator has knowledge of the subject matter. Eugeni posits that the amount of working memory or energy available for translation is limited, so that if more effort is needed for listening, less is left to dedicate to recoding. The scans show that people who sit in silence or are only listening use their brains differently than people engaged in interpreting or reporting. The latter two show a similar distribution of brain waves, although these are more stratified in the case of the reporter, possibly because the acts of encoding, decoding and recoding are more separated in his work.

The presentation ends with an open question: what should we call the activity of the reporter? Eugeni provides many examples of possible answers, but the main point is that while the techniques used may differ, the job is essentially the same and can therefore be analysed through the same lens.

Linguistic ideologies and editorial principles in parliamentary reporting

Eero Voutilainen, Finland

Eero Voutilainen presents the results of a survey in which 39 parliaments from all over the world participated and that focuses on the question: what kind of linguistic ideologies and editorial principles are active in parliamentary reporting across the world?

The survey study consists of 84 questions: Likert scale questions, multiple choice questions and open questions. With the Likert scale questions, Eero hopes to find a deeper insight into the question: how well do the following claims describe your editorial principles in parliamentary reporting? The scale runs from 1 (not at all) to 5 (very much so). One concrete example of a Likert scale question is: “Chaotic speeches should be edited to appear more orderly for the sake of readability.”

The survey led to the following conclusions:

  • Standardisation and clarification are generally supported in order to increase readability and understandability of the report.
  • Stylization of speech to make it more formal or dignified is mostly not supported.
  • In addition to content, the styles, tones and the authentic spoken quality of the session are largely seen as worth preserving.
  • The majority of the participants are prepared to break the rules of written standard language in order to preserve rhetorical choices.
  • More than half of the participants feel that they edit less than before.

Italian shorthand machines in the open source era

Giulia Torregrossa & Daniele Casarola, Italy

Giulia Torregrossa’s presentation starts with a brief history of shorthand machines. The Italian Senate has been using the mechanical Michela steno keyboard for a long time. In the 80s the traditional machines were replaced by electrical Michela keyboards. The current keyboards are fully digital, which makes them much more versatile. They can, for instance, also be used to produce real-time subtitles.

Acquiring a professional shorthand system (soft- and hardware) means an investment of some $2,000. The Open Steno Project was initiated to make automated steno systems accessible to more people. One way to achieve this is by lowering the initial cost of learning to use automated steno systems, which OSP achieved by developing free open-source software called Plover. Plover is very versatile. For instance, with its help, a regular qwerty keyboard can be converted into a keyboard with the Michela layout. Plover also makes it possible to easily produce braille and Arabic script on traditional Michela machines.
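
Plover’s dictionaries are plain JSON files that map steno strokes (or “/”-separated stroke sequences) to output text. The entries below are invented, but they follow that format:

    import json

    # Invented example entries in Plover's JSON dictionary format.
    entries = {
        "PHEU/KHE/HRA": "Michela",
        "STOEUP": "stenotype",
    }
    with open("user.json", "w", encoding="utf-8") as f:
        json.dump(entries, f, indent=2)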

The Italian Senate regularly organises an event called A day at the Senate. On this day, the Senate amongst other things introduces Italian students to the Michela system. Thanks to the Plover software, a regular midi keyboard can now be used to simulate a professional Michela keyboard. Students can continue practising the Michela shorthand system at home, because they can use the Plover software for free and the Open Steno Project offers online courses.

Daniele Casarola has been a professional stenographer since 2017 and an autodidact at that. He decided to learn steno when his sight started to deteriorate. He tried different shorthand systems, but he found that they had shortcomings when it came to use by visually impaired persons. One day he discovered Plover. He was impressed by its versatility. He likes the fact that it can be used on many different devices, such as regular qwerty-keyboards and iPads.

Screen readers are important tools to make computers accessible for visually impaired persons. However, they are not very well suited for visually impaired stenographers, Daniele explains. A normal screen reader cannot read steno, for example. Also, when you are writing shorthand on a device, you don’t want the voice of the screen reader continuously interfering with the speech that you are recording. Someone helped him to develop a customized screen reader that is much more suitable for the job. For instance, this screen reader gives the stenographer feedback about where the cursor is situated in the text or about which word was just erased.
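
The implementation itself was not detailed in the presentation. As a library-free sketch of just the “which word was erased” feedback idea (an assumption, not the actual screen reader code):

    # Hypothetical sketch: given the text before and after an erase
    # operation, report which trailing word disappeared.
    def erased_word(before: str, after: str):
        if before.startswith(after):
            return before[len(after):].strip() or None
        return None

    print(erased_word("the motion is carried", "the motion is"))
    # -> carried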

Harmonised training in real-time intralingual subtitling

Rocío Bernabé Caro & Estella Oncins, Spain

Bernabé Caro and Oncins are involved in the European co-funded project Live Text Access (LTA). The LTA is a collaboration between educational and non-educational partners, which gathers trainers, employers, service providers, end users, and certifiers.

The project aims to train (aspiring) real-time intralingual subtitlers and to harmonize the theoretical and practical framework in which they operate. For this purpose, LTA will develop a MOOC (massive open online course) for respeakers and velotypists. It is open to everybody online and offers self-paced, flexible study through a modular curriculum. No certified qualifications are required.

Which skills does a real-time subtitling professional actually need? Part of the LTA project is finding the answer to this question through an online survey. A total of 121 stakeholders, such as professionals in the field and end users, gave input, which will find its way into the curriculum. LTA will train in five different contexts (cultural events, parliamentary assemblies, workplace, broadcasts and education) and for three working settings (face-to-face, online and by relay).

More information can be found at the website of the LTA-project: ltaproject.eu.

Captioning software using automatic speech recognition

Tatsuya Kawahara, Japan

Only 50 percent of the video lectures of the Online University of Japan have subtitles because of the high cost of preparing and editing the captions by hand. Can ASR (automatic speech recognition) be used to caption lectures? Kawahara and his team investigated. They were interested both in captioning pre-recorded video with post-editing and in live captioning.

In their experiments, the team found that an ASR accuracy of 87% is the threshold for usability in video captioning. With an accuracy of 93%, editing time can be reduced by a third compared to making captions from scratch. With a prepared script, the accuracy of the ASR software used is around 95%.
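
These accuracy figures are the complement of the word error rate (WER): the word-level edit distance between reference and hypothesis (substitutions, deletions, insertions) divided by the reference length, so 87% accuracy corresponds roughly to a WER of 0.13. A minimal sketch of the standard computation:

    # Word error rate via word-level edit distance.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[-1][-1] / len(ref)

    print(wer("the lecture starts now", "the lecture start now"))  # 0.25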

Live captioning lectures with ASR has proven promising, but it comes with its own challenges, apart from accuracy issues. Kawahara set up a test in which two methods, one stenotype-based and one ASR-based, were shown simultaneously on screen during a lecture. The audience found it hard to keep up with the ASR-created results because of the large stream of text, whereas human editors have summarization skills that produce a more readable result.

Eclipse

Daniel Glassman, United States

“Work less, finish faster” is the main idea of Eclipse. Eclipse is a language-based advanced-intelligence program which supports numerous languages. This Computer Aided Transcription (CAT) software is much more than a collection of powerful features; it is a fully integrated program with intelligent conflict resolution for reporting. It enhances translation, team editing and learning from other people’s editing choices.

The most important thing in reporting is to understand words and terms correctly. The US legal community, for instance, uses words very precisely. Reporting is done verbatim: every word and grunt is recorded, because in the legal environment the dropping of a single word like “yes” or “no” can completely change the outcome of, for instance, a lawsuit. That is why verbatim reporting is so important in the legal environment.

Most people cannot read at the speed of the spoken word, and so it is important to report the gist of what is spoken. Computer Aided Transcription (CAT) can help. Unfortunately CAT applications don’t always transcribe correctly, because they don’t always understand the jargon or the context and have problems with homophones, and editing takes a long time. Eclipse analyses and resolves such conflicts.

For more information: find the stand of Eclipse at the exhibit at the Manifattura Tabacchi!

Application introduction and equipment display of internet stenography technology

Jianlong Xiao, China

Xiao Jianlong introduces the application of internet stenography technology and the equipment for this purpose. He shares with us the opportunities of the technology and the necessity of networking and collaboration, with the help of artificial intelligence and CART.

As a result of internet stenography technology, shorthand can be used for a wider range of applications. Furthermore, training in shorthand will become easier, because it is available on the internet, and the pass rates of students will be higher. Another positive result is that customer costs will fall to the level of internet costs, which in fact means that the income of the stenographer will become higher.

With internet stenography technology the time and space of a conference or debate are not a problem anymore. Artificial Intelligence recognizes the voice and converts that into the text automatically. At the same time many stenographers from all around the world collaborate in order to transcribe the conferences, because Artificial Intelligence can help a lot, but unfortunately it is not familiar with uncommon words. Therefore a collaboration between stenographers worldwide and AI gives the best results. AI and the stenographers share abilities like speed, general knowledge and specific knowledge of dialects or other languages. All these things can be done on one platform in the cloud.

Internet stenography technology itself allows for very quick revision. The editing can be done in collaboration with the network of stenographers. The transcripts can be shown immediately, not only at the conference but anywhere. In conclusion: the positive effects of this collaboration are efficiency and quality. At the moment Jianlong Xiao and his team are working hard to achieve global full-language communication access realtime translation (CART).

Towards the creation of an international library on shorthand

Boris Neubauer, Germany

Prof. Boris Neubauer has dedicated himself to digitizing shorthand resources into a digital shorthand library. He has therefore scanned numerous instruction books and complete runs of magazines with scientific articles on shorthand into PDF format.

One reason for digitization is that it is easy to share digital files when doing scientific work on shorthand with scientists around the world. Another reason is that digital resources can be easily searched. Furthermore, a digital library facilitates cooperation over large distances and thus enhances accessibility in various places worldwide.

When collecting books or articles for the digital library, some obstacles can occur, for instance time-consuming scanning, problematic access to library collections and copyright. Copyright is the biggest problem, because officially publications can only be used 70 years after the author has passed away. In order to avoid problems, Neubauer uses a limit of 100 years.

Some examples of scanned publications are:

  • A History of Shorthand (1887). The print of this publication was so fine that 600 dpi had to be used.
  • Bollettino dell’Accademia Italiana di Stenografia. This is a high-level scientific journal on shorthand.
  • Intersteno Conference Records. They are scanned from 1887 up to 2017, because there is no problem with copyright.

Neubauer expects that traditional libraries will one day not be accessible anymore, so he hopes to have digitized as much material as possible by then.

Opening doors through legislation, machine shorthand and technological collaboration

D’Arcy McPherson, Canada

In his presentation, McPherson provides an insight into the ways the Debates and Publications Office of the Canadian Senate helps to increase the accessibility of the political process. The most important dimensions in terms of access are language (the official languages English and French, but also indigenous languages) and disability (for example visual, auditory, mobility or learning disabilities). McPherson shows how subsequent legislation has required the facilitation of full and equal participation of all people.

In the Canadian Senate, this facilitation is realised by the use of one French and one English team consisting of two reporters each: a writing reporter, who works on the Senate floor and takes care of interpretation, and a non-writing reporter. Alternating every half hour, they have 45 minutes to transcribe 10 minutes of spoken word. Accessibility concerns are also reflected in the setup of the newly renovated Senate chamber, for instance through the use of sign language interpreters and screens displaying closed captions. McPherson stresses that these captions are not only for the benefit of the deaf or hearing impaired; they can also help people focus and understand unclear speech or what is being said in noisy environments.

For the written report, an XML application is used to bring all types of information together and make them easily accessible. The text links to audio and video in a dynamic way. The data is automatically transferred to the translation office, allowing the creation of complete reports in both English and French. It is also published on the Senate website and made available to broadcasters, committees and the law clerks’ office. From links in the report, the reader can easily obtain more information, for example about a specific bill or senator.
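
The Senate’s actual schema was not shown. With invented element and attribute names, the general idea of linking the report text to speakers, bills and timecodes can be sketched as follows:

    import xml.etree.ElementTree as ET

    # Invented element and attribute names: one speech in the report,
    # linked to speaker metadata, a bill and an audio/video timecode.
    speech = ET.Element("speech", speaker="sen-0042", start="14:03:21")
    ET.SubElement(speech, "bill", number="C-81")
    para = ET.SubElement(speech, "para")
    para.text = "Honourable senators, accessibility is not optional."

    print(ET.tostring(speech, encoding="unicode"))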

Looking forward, McPherson stresses the importance of continuous communication with stakeholders and partners, building on progress and inclusion, applying advanced communications tools and integrating technological advancements. He expresses the hope that in a time when there is so much distrust and false information, reporters may be able to contribute to bringing about an age of truth.

Live subtitling in the Dutch House of Representatives

Michiel Haanen, Selma Hoogzand & Marleen Petrina-Bosch, The Netherlands

Selma Hoogzand begins this presentation by explaining the legal framework behind the subtitling service. In 2006, the Netherlands signed the United Nations Convention on the Rights of Persons with Disabilities, thus committing itself to ensuring access to information and communication technologies and systems for persons with disabilities, on an equal basis with others. Naturally, that also applies to information and communication by the government itself. In 2016, EU directive 2102 on the accessibility of websites and mobile applications of public sector bodies came into force. In 2018, both the UN Convention and the EU directive were made more concrete by the Dutch House of Representatives through the “Temporary order about digital accessibility”. This order states that all pre-recorded time-based media (audio-only, video-only, audio-video, audio and/or video) published after 23 September 2020 should be accessible, with an exception for live time-based media.

Today pre-recorded time-based media (i.e. past debates) are available on debatgemist.tweedekamer.nl. The official written report from the Parliamentary Reporting Office serves as subtitles. Live time-based media can be watched on www.tweedekamer.nl. However, only the weekly question time and a few high-interest debates have real-time subtitles.

In 2018, only France, Germany, the Republic of Ireland and the United Kingdom provided live subtitles for (some) debates. Marleen Petrina-Bosch explains which path the Dutch parliament took to join their ranks that same year:

  • 2002: parliament suggests that public broadcasting company provides subtitling
  • 2014: provisional subtitling set-up at Parliamentary Reporting Office, using the re-speaking technique
  • 2015: demo for management
  • 2016: green light from President of Parliament
  • 2017: pilot live subtitling
  • 2018: live subtitling of question time and pilot high-profile debates
  • 2019: extension to non-plenary sessions, addition of two workstations

Nowadays, the subtitling division is trying to raise its standards by professionalizing the training methods and developing quality standards.

Michiel Haanen looks at live subtitling from three different angles: that of the sender (speaker), that of the intermediary (subtitler), and that of the receiver (reader). The sender may be anyone who officially speaks in parliament (MPs, ministers, et cetera). A subtitler has to take certain properties of his or her speech into account, such as its logic, grammar, speed, fluency and intelligibility. The subtitler is a real multitasker, as he or she simultaneously has to listen attentively to what is being said, store information, reformulate and summarize where necessary, respeak fluently and operate the soft- and hardware. Finally, the reader has to be served properly by keeping the number of errors limited and the subtitle publication speed in line with the average reading speed.
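
A back-of-the-envelope check of that last constraint, with assumed figures (a commonly cited guideline is roughly 10 to 15 characters per second; these are not the division’s actual standards):

    # Illustrative only: does a subtitle's display time keep its
    # character rate within an assumed comfortable reading speed?
    subtitle = "The minister will answer the question shortly."
    display_seconds = 4.0

    cps = len(subtitle) / display_seconds
    print(f"{cps:.1f} characters per second")  # -> 11.5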

House of Commons Procedure: Access and scrutiny

Tony Minichiello, United Kingdom

Tony Minichiello, Managing Editor in the House of Commons in the United Kingdom, gave a presentation about the British “parliamentary procedure Bible”. The House of Commons has completely rewritten this document, thereby increasing citizen participation in politics and maximising the opportunities for MPs to hold the government to account, whilst also maximising accessibility.

In 1844, Thomas Erskine May wrote “A Treatise upon the Law, Privileges, Proceedings and Usage of Parliament”. This book has since evolved into the “Standing Orders of the House of Commons”, which provides an in-depth analysis of the development of House of Commons rules. Finding the right information within it was therefore hard for the public as well as for MPs. That’s why Minichiello’s department created a new “MPs’ Guide to Procedure”. This document was meant to be much clearer, more accessible and more usable for all people with an interest in the subject. In creating the guide, they used a type of “Ikea thinking”, with lots of how-to’s.

This new testament, so to speak, has a lot of new features compared with the old testament of Erskine May, such as different colours (it is a lot more colourful in general), graphics, pages with frequently asked questions, et cetera. Furthermore, the use of language and the style of explanation have been changed. For example, in Erskine May’s book you could find words that are rarely used in English these days. To go out with a bang, Minichiello told the audience that in the online version of the guide, you can even see how procedures have worked in the past … in a video of the procedure when it was actually used in parliament!

Audio Description: future perspectives into parliamentary accessibility

Joel Snyder, USA

Drawing from his almost four decades of experience in the world of audio description, the Director of the Audio Description Project of the American Council of the Blind, Joel Snyder (PhD), aimed to guide the audience into the world of audio description. Starting with the basics, Snyder explained that audio description is “the use of words that are succinct, vivid and imaginative to create or rather to convey the visual images from for example television and film”. Or to make it short: the visual is made verbal.

So what kinds of subjects could parliamentary reporters, for example, describe so that in the future debates can be audio described well? Snyder showed the audience an audio-described video clip, which made it very clear that audio description is a difficult art. Among other things, an audio describer should pay attention to:

  • What colours can be seen?
  • At what moment should the description appear in the video?
  • What’s actually happening?
  • What to include? And what to leave out?
  • How specific should you be?

On that last topic, Snyder told the audience that “precision creates images”. The same goes for comparisons. As an audio describer, you should constantly be aware of what you’re trying to tell your audience. But be careful: keep your subjective judgements on what you see to yourself. For example: crying can be caused by joy as well as sadness. Don’t patronize the audience by interpreting this in a subjective way as an audio describer, Snyder explained.

To round up his presentation, Snyder spoke about the four fundamentals of audio description: observation, editing, language and vocal skills. Everyone present learned a very valuable audio description lesson: you should always stick to W.Y.S.I.W.Y.S., meaning “What You See Is What You Say”.
