Subtitles occupy a curious position in the experience of screen media: they are simultaneously an aid to comprehension and a potential distraction from it, serving viewers with hearing impairments, language learners, and general audiences all at once, while raising persistent questions about translation, fidelity, and the relationship between spoken and written language. According to Mitterer and McQueen (2009), in their article "Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception", understanding foreign speech is difficult in general, largely because of the unfamiliar mapping between sounds and words. Listeners can use lexical knowledge about how words ought to sound to retune their perception of unusual speech sounds in a movie. Subtitles supply this lexical information and can therefore support perceptual learning about foreign speech.
According to Cai and Brysbaert (2010), in "SUBTLEX-CH: Chinese Word and Character Frequencies Based on Film Subtitles", subtitles also make video more accessible for people who are deaf or hard of hearing. Listening is difficult when an unfamiliar regional accent appears in an unfamiliar foreign language: unusual foreign vowels and consonants may map poorly onto native sound categories, or may fail to match any native category at all. This situation arises, for example, when a Malaysian listener, fluent in English and Chinese, watches a Thai movie. To adapt films for target-language cultures, filmmakers use dubbing, voice-over, and subtitles to make movies more accessible to foreign audiences. Subtitles can also translate speech, signs, and letters, and can represent sounds to help the deaf and hard of hearing understand and follow the plot of the film.
According to encyclopaedia sources, subtitles help audiences interpret speech in a movie that would otherwise be unintelligible to them. The ability to encode rules and to detect rule-violating events outside the focus of attention is vital for adaptive behaviour. Brain recordings reveal that violations of abstract auditory rules are processed even when the sounds are not attended. When subjects performed a task related to the sounds but not to the rule, rule violations impaired task performance and activated a network involving supratemporal, parietal, and frontal areas (Schröger, Bendixen, Trujillo-Barreto, & Roeber, 2007).
According to Wikipedia, subtitles are displayed at the bottom of a movie or television screen and translate or transcribe the dialogue or narrative, also giving additional information about content such as monologue, internal thought, and emotion. Subtitles fall into several categories: narrative, forced, content, titles only, bonus, localised, and extended/expanded. When content is distributed, subtitles appear in one of three forms: hard subtitles (also known as open subtitles), which are irreversibly merged into the video frames; prerendered subtitles, which are separate video frames overlaid on the original video stream; and soft subtitles (also known as closed subtitles), which are stored as separate timed text that the player renders during playback.
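Soft subtitles are commonly distributed as plain-text timed files, the SubRip (.srt) format being a widely used example. As an illustrative sketch (a minimal parser for a single well-formed cue, not a full SRT implementation), the following Python snippet shows how one cue decomposes into an index, start and end times, and the displayed text:

```python
import re

# An SRT timestamp looks like "00:01:02,500" (hours:minutes:seconds,milliseconds).
TIMESTAMP = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def parse_timestamp(ts: str) -> float:
    """Convert an SRT timestamp to seconds."""
    h, m, s, ms = map(int, TIMESTAMP.match(ts).groups())
    return h * 3600 + m * 60 + s + ms / 1000

def parse_cue(block: str):
    """Parse one SRT cue block into (index, start_seconds, end_seconds, text_lines)."""
    lines = block.strip().splitlines()
    index = int(lines[0])
    start, end = (parse_timestamp(t.strip()) for t in lines[1].split("-->"))
    return index, start, end, lines[2:]

cue = """1
00:00:05,000 --> 00:00:08,500
- Where are we going?
- To the cinema."""

print(parse_cue(cue))
```

Because the timing and text live in a separate file rather than in the video frames, soft subtitles can be switched on and off, restyled, or swapped for another language without re-encoding the video.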
The Positive Effects of Subtitles
According to Mitterer and McQueen, the greatest advantage of subtitles is that they preserve the original audio track. Thanks to subtitles, actors speak with their original voices, and the ambient sound is not disrupted by voice-over or dubbing recordings. When audiences read the subtitles, they better understand the action, emotion, and expression of the actors in the movie, and so become more engaged and better able to follow the plot. Subtitles also foreground the characters' speech: audience members can follow the movie by reading the text at the bottom of the screen. Subtitles serve as an aid not only for the deaf and hard of hearing, but also for people learning foreign languages or who need visual support alongside audio input for study. Subtitles in the language of the film indicate which words are being spoken, and so can boost perceptual learning of foreign speech sounds and pronunciation. Listeners can learn to interpret an ambiguous phoneme on the basis of disambiguating lexical contexts.
According to Cai and Brysbaert, subtitles may enable lexical retuning in second-language listening as well; that is, listeners may be able to retune speech-sound categories based on their knowledge of how foreign words ought to sound. English subtitles are worth providing in most films because English is an international language and the subtitles give extra information about the words a character is speaking. According to encyclopaedia sources, subtitles are a very useful way of communicating to deaf viewers what is being said in the movie. In addition, foreign movies are often subtitled to allow people who do not know the language to follow the action.
According to UK news and magazine sources, subtitles can provide an additional source of information about the words being spoken, and hence the sounds being heard, and so ought to reinforce lexically guided learning. Cai and Brysbaert note that their SUBTLEX-CH database is the first to include information about the contextual diversity of words and to provide good frequency estimates for multi-character words and the different syntactic roles in which the words are used.
The Negative Effects of Subtitles
“Subtitles are limited both in time and space. An average subtitle consists of one or two lines of text, maximum 35 characters each. This limitation combined with the fact that one subtitle is at most displayed for four seconds gives us hard time to cover long utterances in dialogs.” (Richard Curtis, 2003)
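The constraints quoted above, at most two lines of roughly 35 characters each, displayed for no more than about four seconds, are mechanical enough to be checked automatically. As a minimal sketch, assuming exactly the figures from the quotation (real subtitling guidelines vary by broadcaster and language), a cue could be validated like this:

```python
# Limits taken from the quotation above; actual style guides differ.
MAX_LINES = 2            # at most two lines per subtitle
MAX_CHARS_PER_LINE = 35  # at most 35 characters per line
MAX_DURATION_S = 4.0     # displayed for at most four seconds

def check_cue(lines, duration_s):
    """Return a list of constraint violations for one subtitle cue."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} > {MAX_LINES}")
    for i, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(
                f"line {i} too long: {len(line)} chars > {MAX_CHARS_PER_LINE}"
            )
    if duration_s > MAX_DURATION_S:
        problems.append(f"on screen too long: {duration_s}s > {MAX_DURATION_S}s")
    return problems

print(check_cue(["This line fits comfortably.", "So does this one."], 3.0))
print(check_cue(["This single line is far too long to fit within the limit."], 5.0))
```

A check like this makes the quoted difficulty concrete: a long utterance that exceeds these limits must be condensed or split across several cues, which is precisely why subtitles often compress dialogue.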
According to UK news and magazine sources, another main disadvantage is that subtitles are aimed chiefly at literate viewers with well-developed reading skills. They require the audience to read quickly while attending not only to the subtitles but also to the image above or just behind them, exercising both skills simultaneously. Is it truly possible to concentrate on both throughout a whole movie? Hearing viewers can still perceive the emotion in an actor's voice even in an unfamiliar language, but following so many elements at once is not easy. Other countries prefer to dub programmes and films, recording an actor speaking the language of the country where the work will be shown over the original dialogue; but dubbing is costly and time-consuming, and is worthwhile for the production crew only when the work will reach a large audience.
According to Wikipedia, the translation involved in subtitling is sometimes very different from the translation of written text. Usually, when a film is subtitled, the editor watches the picture and listens to the audio sentence by sentence. The editor may interpret what is meant rather than translating how it is said, meaning being more important than form. This can be frustrating to viewers who know some of the spoken language, because speech may contain verbal padding or culturally implied meaning that the written subtitles do not carry over. Mitterer and McQueen's results suggest that foreign-language subtitles are very helpful, but that native-language subtitles provide no benefit and may even be counterproductive.
According to encyclopaedia sources, native-language subtitles may be easier for viewers to read but may provide misleading information about the phonological forms being spoken. If the audience concentrates more on reading the subtitles than on watching the movie, they will not learn the new language. Repeatedly lowering one's gaze to the bottom of the screen to read the subtitles can become tiresome, and it distracts from details, facial expressions, and the emotions of the actors. The audience may become so absorbed in reading the subtitles that they stop watching the movie itself and never fully engage with the action.
As Vanderplank (2010) demonstrates in his review of subtitle research, the effectiveness of subtitles as a language learning tool depends heavily on the viewer’s proficiency level and the degree of match between the subtitle language and the target language being studied. For students of applied linguistics, media studies, and language education, the function of subtitles raises productive questions about the relationship between visual and auditory processing, the limits of reading speed, and the degree to which meaning can be transferred across multimodal channels without distortion. The field continues to evolve rapidly as streaming platforms make subtitling available in dozens of languages and adaptive subtitle features begin to accommodate individual viewer preferences.
References
Cai, Q., & Brysbaert, M. (2010). SUBTLEX-CH: Chinese word and character frequencies based on film subtitles. PLOS ONE, 5(6), e10729. https://doi.org/10.1371/journal.pone.0010729
Mitterer, H., & McQueen, J. M. (2009). Foreign subtitles help but native-language subtitles harm foreign speech perception. PLOS ONE, 4(11), e7785. https://doi.org/10.1371/journal.pone.0007785
Schröger, E., Bendixen, A., Trujillo-Barreto, N. J., & Roeber, U. (2007). Processing of abstract rule violations in audition. PLOS ONE, 2(11), e1131. https://doi.org/10.1371/journal.pone.0001131
Vanderplank, R. (2010). Déjà vu? A decade of research on language laboratories, television and video in language learning. Language Teaching, 43(1), 1–37. https://doi.org/10.1017/S0261444809990267