Research Issues in Translation Studies
Gyde Hansen, December 29, 2005
In empirical TS, combining methods and data has become an important research technique. This procedure is often called “triangulation”. For example, data from first-person observations (TAPs) are combined with data from third-person observations (the observer) in order to reach inter-subjectivity. Qualitative approaches are corroborated or complemented by quantitative approaches. This latter combination is often looked upon as especially useful, because quantitative data, which are a result of measuring and counting, are regarded as more objective and reliable than qualitative data, which are a result of persons’ perceptions and more subjective interpretations of a phenomenon (see Hansen, this website: February 17, 2005).
Triangulation is frequently applied in social sciences and in many other disciplines, and the terms combination and triangulation are often used as synonyms for a mix of procedures to grasp complex phenomena and to confirm or complete a study. Triangulation is regarded as a multimix of material, strategies, methods, purposes, perspectives or investigators in an attempt to add rigor to a study.
The use of the triangulation metaphor has been much discussed and challenged, and much has been written about triangulation as a useful approach, especially in qualitative research. In its original meaning, the term ‘triangulation’ refers to a geometrical procedure in which an unknown point is located from two given reference points: the angles towards the unknown point are measured at each reference point, and the remaining sides of the resulting triangle are then calculated. This means that in triangulation, as opposed to any other combination of methods or data, reference points, i.e. prior knowledge, are used in order to gain further results or further insight.
Having the meaning of the original metaphor of a triangle in mind, combination and triangulation could be kept apart. Combination is useful for all kinds of information collection involving multiple methods, investigators, tools, observations and data. Triangulation, in accordance with the original meaning of the term, can be an additional procedure for obtaining new results or new knowledge from existing results, and can thus provide clarity and coherence to the investigation and description of complex phenomena. In complex research projects, where many aspects have to be taken into consideration, the differentiation between combination and triangulation is a means to keep the variety of different observations under control and to make it easier to discuss, repeat and reevaluate the study. For example, data from interviews or questionnaires about the personal background of subjects can be combined with product data (evaluation of target texts), or the same data can be combined with process data from introspection. Triangulated, the results of both combinations can complete each other or reveal gaps or discrepancies and thus provide new knowledge about the relationship between personal profiles, processes and products. A complex study gains flexibility and scope when new results can be located via new constructions of triangles from known reference points (= results).
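To make the original geometric sense of the metaphor concrete, the procedure can be sketched in a few lines of code (Python; the coordinates and angles below are invented for illustration): an unknown point is located from two known reference points and the angles measured towards it.

```python
import math

def triangulate(a, b, alpha, beta):
    """Locate an unknown point from two known reference points a and b.

    alpha is the angle at a between the baseline a->b and the line of sight
    to the unknown point; beta is the corresponding angle at b. The law of
    sines gives the distance from a, which fixes the unknown point.
    """
    baseline = math.dist(a, b)
    gamma = math.pi - alpha - beta                 # third angle of the triangle
    d = baseline * math.sin(beta) / math.sin(gamma)
    theta = math.atan2(b[1] - a[1], b[0] - a[0])   # orientation of the baseline
    return (a[0] + d * math.cos(theta + alpha),
            a[1] + d * math.sin(theta + alpha))

# Two known reference points and the angles measured towards the unknown point:
p = triangulate((0.0, 0.0), (10.0, 0.0),
                math.atan2(6, 4),    # angle observed at the first point
                math.radians(45))    # angle observed at the second point
print(p)  # close to (4.0, 6.0)
```

The point of the sketch is the same as the point of the metaphor: the new result is not observed directly but computed from prior reference knowledge.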
November 27, 2005
Probabilistic laws lie somewhere between determinism, where a phenomenon is certain, and total uncertainty, where one has no idea as to whether the phenomenon will materialize and how; they attempt to quantify uncertainty. For instance, if it is known that the mean height of males in a population is
Uncertainty in empirical observation of phenomena may be due to the probabilistic nature of a phenomenon (i.e. variation is an intrinsic characteristic of the phenomenon). It may also be due to environmental reasons: the underlying entity itself may be regular in its nature and form, but its manifestations are influenced by external factors which are themselves probabilistic in nature or too complicated in their interactions with the phenomenon at hand to be predictable. Finally, uncertainty may be due to limitations or weaknesses in the detection and measurement of the phenomenon by man.
In many cases, researchers investigate the existence of a trend, not its quantitative contours. For instance, in TS, the explicitation hypothesis assumes the existence of a tendency, not how strong it is or to what extent a TT will be more explicit than the ST. In such a case, researchers are interested not in quantitative variation, but in its occurrence as such.
If such occurrence is irregular, this does not necessarily mean that the law is not true, or that it is probabilistic in nature. Other factors may have prevented it from being manifest. Lunar eclipses do not become probabilistic just because cloudy skies sometimes prevent people from seeing them. The trend to make TTs more explicit than STs may be universal without necessarily being manifest everywhere, for instance if cultural conventions or the client’s brief or time pressure etc. inhibit its expression in a particular set of translations.
Assuming that a phenomenon is probabilistic or “conditional” just because, in empirical studies, it is found not to occur regularly or is found to occur only under certain conditions is the same as assuming that lunar eclipses are probabilistic or conditional upon the absence of clouds in the sky. When irregularities occur, the natural procedure in research is to try to find what causes the irregularity, starting with the removal of environmental factors which may interfere, for instance through experimental research. Only when it is thought that all interfering environmental factors and observer-related factors have been removed and there is still irregularity in the occurrence of the phenomenon will one conclude that this irregularity suggests that the law is probabilistic.
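The eclipse analogy can be made concrete with a small simulation (Python; all parameters are invented): a perfectly regular, deterministic phenomenon, filtered through a probabilistic interfering factor, yields an irregular record of observations even though the underlying law is not probabilistic at all.

```python
import random

random.seed(1)  # fixed seed for a reproducible illustration

# A deterministic law: the phenomenon occurs exactly every tenth night.
nights = range(1, 201)
occurrences = [n for n in nights if n % 10 == 0]

# A probabilistic interfering factor: clouds hide the sky 40% of the time.
observed = [n for n in occurrences if random.random() > 0.4]

print(len(occurrences))  # 20 occurrences, perfectly regular
print(len(observed))     # typically fewer, and irregularly spaced
```

An observer who saw only the `observed` record might wrongly conclude that the phenomenon itself is probabilistic, which is precisely the inference the paragraph above warns against.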
Dr Maria Filippakopoulou, August 17, 2005
Some literary quarters harbour the suspicion that “theory” in liberal arts is but an attempt to purvey methods of analysis that enjoy grand-scale consensus, consensus of the kind that can bring out visible results. This concerns not this or that theory but rather major cultural movements, such as postmodernism or psychoanalysis, which have more or less changed the way in which critics talk to one another and conduct their business.
This view of theory as the “new science” in literary studies, albeit sizzling with unspoken resentment, is intriguing and has some relevance for the objectivistic pretensions - or efforts, depending on how one looks at it - of TS research. I would like to make it clear that it is research in literary translation that I am mostly interested in here, which is why I chose to start with this view.
It is puzzling to see that an objectivistic, scientific discourse, in inverted commas or not, is pushed again to the forefront of public discussion of TS (this website); many amongst us would think that a fair amount of ground has been covered in this respect since André Lefevere published his in many ways seminal Literary Knowledge in 1977. He sought to free literary studies - which he called “a pseudo-scientific discipline with a weak core” (Lefevere 1977: 26) - from the numbing embrace of logical neo-positivism and hermeneutics, restoring it instead as a source of literary knowledge (Lefevere 1977: 44-45). The fact that he became one of the leading scholars in TS by spearheading, with Susan Bassnett, the “cultural turn” testifies to the validity of his insights.
Assuming that the “scientific” discourse has been long and, in my mind at least, sufficiently debated, I would like to raise two questions related to it: the first concerns research ethos and the second research creativity. Undoubtedly the positivistic pursuit forges a certain research ethos. By insisting on the importance of “gathering data”, observing them with “objectivity”, “minimising bias” and so on, we advertise not only a specific methodology, but also a specific pedagogy: we encourage certain inclinations, privilege certain outlooks and educate young researchers to go for some rather than others. In effect, we tell students of translation what ethos they are to embrace in their professional enterprises.
However, this blanket insistence on the scientific ideal fails to mention the position of the observer in the institutional hierarchy, the use of her/his output in the knowledge market, and its social implications. It does not train researchers to go beyond the immediate context of “data” to the conditions under which such data emerged and became worthy of observation in the first place, nor to the specific ways in which they outflanked other competing data. The very nakedness of the neo-positivistic language readily distances researchers from the social makeup of their object of study, pre-empting any desire to link it back to the society and culture from which it was “derived”.
Shall we contemplate for a moment the effects of such an outlook, when successful? The translation typologies which revved up the institutionalisation of TS - grounded on a basic distinction between literary and specialised translation, or simply applied TS - are one instance of this disconcerting success. They have worked to drive a wedge between the teaching/research of literary translation and that of applied translation in most British academic departments; this in turn, given the predominant political economy of current academia, has led to the overgrowth of the latter and the shrinking of the former. The discipline has indeed made huge strides in training translators for an English-dominated globalised market on the one hand, and in utterly re-gentrifying literary translation on the other. If we wanted to forge for practitioners, teachers and researchers of translation an identity of professionals, well-adapted to the current cultural order, who seamlessly enter both the market and the market discourse, and if we aimed at producing a marginalised, self-aggrandising - or disenfranchised, as the case might be - class of its own in the case of literary translators, we couldn’t have done better.
My second concern is with research creativity. I wonder how the born-again positivistic ideal caters for blue-sky thinking, which is - let’s be frank about it - crucial for the formation of hypotheses and the selection of tools of analysis. Daniel Gile very correctly pointed out that “[t]he one essential advantage of science […] appears in the long run” (Gile 2005, this website). But the “long run” is made of our individual, far from well-rounded, usually hit-and-miss, in the first instance, attempts at making sense of translation phenomena. Quantitative considerations are all very well, but insisting too much on them could end up obscuring the lengthy, winding procedures by which we come by those hallowed quantitative observables. It doesn’t begin to tell us anything about how hypotheses are formulated in the first place, how they evolve, how they are discarded; it fails to show how new questions ever come to light. It’s far too tidy to provide any kind of enlightenment for people struggling to face those gaps in understanding, those blind spots in perception, that only open-mindedness or, indeed, an “existential” mindset seems adequate to grasp. It’s not a question of simply coming up with original research projects but also of ensuring the evolution of a discipline capable of self-reflection and re-invention.
In closing, I would simply like to offer for this discussion the distinction between commentary and critique (Benjaminian in its origins). It should help to develop a research culture which exposes the available expertise in the field to the socio-ideological reality from which it draws its relevance - or not. But that’s a different argument.
Gile, D. 22 January 2005. The liberal arts paradigm and the empirical science paradigm (website on Research Issues).
Lefevere, A. 1977. Literary Knowledge. A Polemical and Programmatic Essay on its Nature, Growth, Relevance and Transmission. Assen/Amsterdam: Van Gorcum.
A response to Maria Filippakopoulou
October 22, 2005
In her essay, Maria Filippakopoulou (MF) criticizes “scientific” discourse, which she considers “long and sufficiently debated”.
Firstly, regarding the reasons why most of the texts in this web page refer to “scientific discourse”:
1. Contributors simply happen to be interested in scientific discourse. There is nothing to prevent colleagues interested in other types of discourse from sending contributions as well.
2. Many TS scholars (including some who are interested in literary translation) happen to conduct studies within the Empirical Science Paradigm (ESP). Since there are recurring problems in these studies (see for example the last paper in Hansen et al. 2004), it is perhaps legitimate to consider that the topic has not been sufficiently debated and that further clarification might be helpful.
MF raises an ethical objection to science. This traditional criticism usually refers to the fact that scientists are trained to be objective, not to play an ethical role in society. MF also claims that the scientific approach “does not train researchers to go beyond the immediate context of “data” to the conditions under which such data emerged and became worthy of observation in the first place”, and that it “distances researchers from the social makeup of their object of study, pre-empting any desire to link it back to the society and culture from which it was “derived””. This is a somewhat puzzling statement: why could scientific methods not be applied precisely to study the social makeup of their object of study, as in sociology, ethnology, political science, etc.?
According to MF, scientific discourse generated a division between literary and non-literary translation. I challenge this claim; when I was a technical translator in the early 1970s, a strong distinction between the two was already traditional in circles of professional translators who had no interest in research.
Finally, according to MF, scientific discourse “doesn’t begin to tell us anything about how hypotheses are formulated in the first place”. It does not, unless scientific methods are used to try to investigate precisely this question. One might ask whether non-scientific discourse does tell us something about how hypotheses are formulated.
The scientific paradigm is not exclusively quantitative. It accommodates qualitative methods as well. However, it is essentially critical, precisely because it recognizes the possibility of personal and sociological biases in scholarly analysis of reality. Perhaps M. Filippakopoulou will understand why both logic and evidence would lead scientifically trained colleagues to question her statement that someone’s popularity in a discipline proves “the validity of his insights”.
On this website page, so far, most of the contributions have focused on one paradigm, but they have not excluded the other(s). The Liberal Arts Paradigm has advantages which the ESP does not have. I would argue for complementarity and would welcome further contributions from both sides.
Hansen, Gyde, Kirsten Malmkjær and Daniel Gile (eds). 2004. Claims, Changes and Challenges in Translation Studies. Amsterdam/Philadelphia: John Benjamins.
(Posted on July 30, 2005)
Department of Translation and Interpreting
Terminology: what makes up the distinction between term and word
In the process of specialised translation, terms can in some cases be distinguished from words clearly and without any problem, whereas in other cases this is not so obvious, especially where terms turn out to behave like words, as in disciplines such as psychology, sociology, art & art criticism, leisure & tourism, etc.
For the translation of terms, I discern a number of steps, of which term recognition is one of the most important in the translation process. I have a number of research issues in this respect:
▪ What is it that triggers the translator’s decision to translate the item in question as a term or as a word: is it, for example, the item’s morphology or etymology, the item’s meaning description and/or context hints as given in (specialised) dictionaries, indicators in the item’s context, the item’s behaviour in the source text regarded in terms of intertextuality, the explicitness and clarity of the subject area in question, the translator’s experience, etc?
▪ Can these triggers be used as guiding or discovery procedures?
▪ What can the translator do, and what aids does he have at his disposal, to make the appropriate decision: are term extraction tools the only such aids, or is there more?
▪ What is the use of the pre-translation macro-textual and micro-textual analysis?
▪ Are all these issues issues at all in the presence of translation memories and term banks?
▪ How can corpus linguistics help?
▪ Do the above research issues regarding term recognition also play a role in interpreting and if so, is this role the same as in translation; if not, what makes up the difference?
▪ In what way do theory and practice co-operate to help the translator / interpreter?
▪ Is it possible to generalise the findings of this type of research into rules and where will these rules be accommodated best: in theory or in practice?
In particular for students of specialised translation, it is important to know whether and how words can be distinguished from terms. If an item is a term, it should be translated by a term (if one is available; if not, the appropriate translation procedures should be applied); if it is not, the freedom of translation is greater and the student can decide what to do (under the constraints of the translation brief and the constraints of the target language & culture). For the teacher of specialised translation too, in particular the teacher of terminology, the term-word distinction is important from a didactic point of view: how should he explain the difference between words and terms and provide the student with appropriate aids to solve problematic cases? For the professional translator as well, the answers to the above questions can be relevant. Finally, the outcome of this research may also be of relevance to the discipline of terminology (and terminography): do terms and words behave similarly? If so, why not have one discipline instead of two, viz. terminology and lexicology? If there is a difference in behaviour, then what is it and how should it be accounted for?
Contributed on June 8, 2005
Visualisations can occur at certain stages in the comprehension process, and they may lead to creative translations. Creative translations can for our present purposes be defined as translations that show changes when compared with the source text, thereby bringing in something that is novel. In interpreting, visualisation as a type of deverbalisation was recognized as a method as early as 1968 by Danica Seleskovitch (I am referring to the English translation of her book L’interprète dans les conférences internationales, 1978: 55), and it was explicitly recommended as a teaching method by Seleskovitch & Lederer (1989: 24-26).
What, actually, is visualisation in translation? For the translation of the sentence “Children have forgotten how to eat, completely forgotten how to eat”, taken from an article on famine in Africa, Seleskovitch and Lederer suggest visualising a scene of a little child with bony legs and arms and a blown-up belly, a picture often seen in the media, in order to avoid a mistranslation such as “les enfants ont oublié comment manger” (they have forgotten their good manners) (Seleskovitch & Lederer 1989: 25-26).
This is an example taken from a teaching context. With the availability of empirical tools for documenting the translation process such as triangulations between Translog files and Think Aloud Protocols, Dialogue Protocols or Retrospective Interviews, it might be possible to actually observe visualisations normally hidden in the minds of the translators, and with the heuristic means of cognitive semantics at our hands we may now be able to see more precisely what types of mental visual images exist.
More specifically, I believe that notions like point of view, focus, prototypicality and Fillmore’s scenes-and-frames may help us to describe visualisations in greater detail. In the example just quoted we may say that the teachers suggested a prototypical (or stereotypical) scene. One might hypothesise that visualising, i.e. focussing on, prototypical elements of a scene will lead to adequate and creative translations.
In the kind of empirical research I propose to carry out, one might begin by gathering types of visualisations and trying to classify them. It will be important to see if visualisations actually lead to adequate and creative translations. It might well be that this is not always the case. Translators might visualise things that are only in their minds but not in the text in front of them, or they might focus on elements that are not prototypical. As a second step, we may try to find out if visual clues help to initiate creative translations. We might show (prototypical) pictures or give verbal descriptions of scenes to one group of subjects but not to another group and compare their translations. The problem, of course, is to keep the groups comparable.
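Once the translations of the two groups have been scored for creativity (by whatever rating scheme one adopts), the comparison could be analysed with a simple significance test. The sketch below (Python; the scores are invented for illustration) uses a permutation test, which makes no distributional assumptions about the ratings:

```python
import random

random.seed(0)

# Hypothetical creativity scores for the two groups (invented for illustration):
picture_group = [7, 8, 6, 9, 7, 8]   # subjects shown prototypical pictures
control_group = [5, 6, 5, 7, 6, 5]   # subjects given the source text only

observed_diff = (sum(picture_group) / len(picture_group)
                 - sum(control_group) / len(control_group))

# Permutation test: reshuffle the pooled scores many times and count how often
# a random split produces a difference at least as large as the observed one.
pooled = picture_group + control_group
n = len(picture_group)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:n]) / n - sum(pooled[n:]) / n
    if diff >= observed_diff:
        extreme += 1
p_value = extreme / trials

print(round(observed_diff, 2))  # 1.83
print(p_value)  # a small value suggests the difference is unlikely by chance
```

This only addresses the statistical side, of course; the harder methodological problem, as noted above, is obtaining comparable groups and defensible creativity scores in the first place.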
The kind of research I am proposing can be seen within the scope of investigating successful translation processes. (For more details see Kussmaul 2005.)
Kussmaul, Paul (2005): “Translation through Visualisation”. In:
Seleskovitch, Danica (1978): Interpreting for International Conferences,
Seleskovitch, Danica & Marianne Lederer (1989): Pédagogie raisonnée de l’interprétation. Collection Traductologie n° 4. Bruxelles: Didier Érudition.
Our knowledge about the world is partly experiential, i.e. obtained through direct sensory experience (what we see, what we hear, what we smell, etc.) and subsequent cognitive processing (the brain has to make sense of the sensory experience), and partly inherited, i.e. received from other people's statements, written or spoken (what we read and what we hear). Our perception of both 'reality' around us and other people's statements is distorted and limited by our sensory and cognitive limitations (we cannot see or hear everything, and there are limits to the amount of information we can process), and by affective bias (essentially our likes and dislikes, personal ambitions, self-image, etc.).
These limitations have been recognized from early times on. The so-called scientific method presented in many textbooks on research methods is a set of norms underlying research methods and rules for research criticism designed to push back such limitations to the best possible extent. These norms include the following:
The quality of scientific progress as a whole depends to a large extent on the individual scientists' compliance with these norms.
Note: This text introduces norms of science as it is defined by the so-called scientific method generally invoked in empirical disciplines, both natural and behavioural. I do not claim that these norms are universal, that science cannot be defined otherwise, or that research not in line with them is “unscientific” in any absolute sense of the word.
How does the scientific community make sure that individual members of the community comply with its norms? Firstly, through training. When scientists are trained, their instructors teach them not only theory and facts collected by other scientists, but also research methods which have been designed to implement scientific norms, in particular by raising their awareness with concepts such as validity and reliability and with tools such as inferential statistics. Students are thus socialized into these norms over several years of training. In some disciplines, such socialization starts during their undergraduate studies. In others, it only starts at graduate level. In all cases, it extends up to doctoral, and post-doctoral/habilitation level (the habilitation has been institutionalized in some countries as a post-doctoral qualification which gives access to the function of doctoral studies supervisor and which is a prerequisite to full professorship).
An interesting feature of the scientific community is that it institutionalizes tests of a sort for its members every time they want to publish results of their work in a reputable journal through peer-reviewing: a text submitted for publication is read critically by other members of the community who assess it and make comments and recommendations, in particular in favour or against its publication with or without corrections.
The scientific community has also made publication a vital part of the scientists' career, thus helping enforce the collective and communicative norms of science. Both the reputation and the chances of scientists to be promoted at university and research institutions are to a large extent determined by the number and quality of papers they manage to publish, in particular in reputable journals.
All these institutional measures combine to create considerable pressure on members of the community to comply with the norms throughout their career.
If my memory serves me right, the concept of “science” was never raised as an issue during my undergraduate and graduate studies in mathematics. Neither was the concept of “research methods”, which I encountered when I became a student of sociology. Many years later, when I migrated into TS, I found that both were central issues in the discipline.
Some TS scholars who come from established empirical disciplines such as psychology or neurophysiology tend to do research in compliance with the norms of the “scientific method”. I will refer to this type of research, sometimes mistakenly assumed to be characteristic of the natural sciences only, as the “Empirical Science Paradigm” (ESP). Other TS scholars come from a humanities background and tend to do research somewhat differently, in what I will refer to here as the "Liberal Arts Paradigm" (LAP). The Empirical Science Paradigm is demanding in terms of caution, of systematicity and of explicitness. By requiring individual authors to observe rigorously the 8 norms of the “scientific method”, it attempts to prevent authors from publishing claims without a relatively solid basis. The Liberal Arts Paradigm shares some of the norms, but allows authors to make claims that are not the only logical consequence of facts used to make the inference, to make them without informing the readers of the exact facts and methods used to make the inferences, and to make them without making sure that all relevant data have been used and point in the same direction.
I have found it sociologically counterproductive to try to determine which of the two is more “scientific” or which is more efficient to explore the world. Clearly, each has its advantages and its limitations: for instance, while ESP draws inferences more rigorously at the individual level, LAP can correct misperceptions through collective discussion, and gives scholars more freedom to express useful insights which cannot be tested through empirical methods.
An important fact to remember is that the two paradigms are distinct and may lead people with the same understanding of a situation to express their views differently. ESP scholars tend to only make claims which they can substantiate, while LAP scholars tend also to make claims based on what they feel intuitively. Misunderstandings in the literature can be explained by authors’ failure to take into account this inter-paradigmatic gap (see for instance Pöchhacker’s chapter 11 and Gile’s response in chapter 13 of Schäffner, Christina (ed.). 2004. Translation Research and Interpreting Research. Clevedon).
Is there anything special about “facts” as they are produced by science (in the sense of the ESP as explained in previous contributions on this page), as opposed to facts collected, presented or asserted outside of science? Are they more accurate, more reliable, more comprehensive in covering a given portion of reality, say translation process, translation quality, the translators' personality, etc.? Sometimes they are, if the same phenomenon has been the object of scientific enquiry by many scientists for a long time. However, this is not necessarily the case. Three reasons are given here by way of illustration:
When scientists start investigating a phenomenon, they often have a theory which they seek to prove. This may (inter alia) make them sensitive to some parts of reality and less so to others. Thus, they may misinterpret reality and “see” facts where other scientists with different theories would “see” other facts.
Another reason why scientific “facts” are not always reliable is the limited quality of tools used to collect them, be they physical tools (optical, electronic or otherwise) or intellectual tools such as observation grids, mathematical calculation methods, etc.
Thirdly, science is systematic, and may therefore look at specific parts of the phenomenon under study gradually rather than study it holistically. It may therefore take a lot of time before it covers all its important facets. In contrast, non-scientists who are in daily contact with the phenomenon may have deep intuitive knowledge of the same facets.
If so, why should one look to science to provide facts to explore the world or provide applications? The one essential advantage of science in this respect appears in the long term. Scientific facts are produced in compliance (to the best possible extent) with scientific norms that seek to reduce misperceptions and to correct errors and distortions through collective efforts, including criticism. Through this self-correction process, the factual basis collected and published in the literature eventually becomes increasingly accurate and reliable, whereas non-scientific facts may remain at the same level of reliability for a very long time. However, in order for the process to take its appropriate course, it is important that scientific norms be complied with.
Empirical research is based on data systematically derived from perception and observation of aspects of reality. In a research project, data collection, analysis and interpretation of the data entail choices as to the different methods, techniques and procedures which might be the most promising. In TS, many different quantitative and qualitative methods are used.
Quantitative methods are based on and proceed from the researcher's ideas and hypotheses about observed dimensions as well as calculable and measurable categories. Qualitative methods are based on interpretations of reports of the experiences and/or actions of individuals. Whereas the focus in quantitative research is on relations between a few isolated variables in larger samples, the focus in qualitative research is on relations between many variables investigated in smaller samples. Both quantitative and qualitative methods have advantages and limitations, but each mode of research makes its contribution to the attempt to increase knowledge. If, for example, we examine a human body, we can measure height, weight, foot size, blood pressure, etc. But as soon as we have to describe a person's complexion, hair colour, feelings or perception of pain, we have to rely on interpretations and reports that are based on experience.
The choice of qualitative and/or quantitative methods has to be made in relation to the particular research issue(s) under study. Sometimes quantitative methods can be used, for example, in TS, to investigate the length of pauses taken or the number of key strokes made during translation processes; in other cases, purely qualitative methods are useful, for example in reports on translation problems or on the personal involvement of the translator during the translation process. However, as qualitative data can in many cases be coded and counted, and as quantitative data and results always need to be interpreted and explained, both aspects will always be present. In TS, quantitative and qualitative methods can be used in a variety of combinations and triangulations. There is no universally "best way" of combining methods.
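As an illustration of the quantitative side, pause data of the kind mentioned above can be derived mechanically from a keystroke log. In the sketch below (Python), the log format is invented for illustration, not Translog's actual format; a pause is simply an inter-keystroke interval above a chosen threshold:

```python
from statistics import mean

# Hypothetical keystroke log: (timestamp in seconds, key pressed).
log = [(0.0, 'T'), (0.4, 'h'), (0.7, 'e'), (4.2, ' '),
       (4.5, 'c'), (9.8, 'a'), (10.1, 't')]

THRESHOLD = 2.0  # intervals shorter than this count as normal typing rhythm

# A pause is the interval between two consecutive keystrokes.
intervals = [b[0] - a[0] for a, b in zip(log, log[1:])]
pauses = [i for i in intervals if i >= THRESHOLD]

print(len(pauses))             # 2 pauses above the threshold
print(round(mean(pauses), 2))  # mean pause length: 4.4 seconds
```

Note how the counting itself is mechanical and reproducible, while the choice of threshold and the interpretation of what a pause means remain qualitative decisions, which is precisely the point made above that both aspects are always present.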
Qualitative research is "any type of research that produces findings not arrived at by statistical procedures or other means of quantification" (Strauss/Corbin: 10). It is an in-depth investigation of phenomena, taking as many variables into consideration as possible. It is interpretive, often employing naturalistic approaches to people's lives, experiences, emotions and behaviour, as well as to cultural phenomena, social or political interaction, etc. It is "multimethod in focus" and an attempt "to make sense of, or interpret, phenomena in terms of the meanings people bring to them" (Denzin/Lincoln 1994: 2).
The assumption in qualitative research is that a person who experiences or perceives a phenomenon can also give the most precise description of it.
Data in qualitative research are derived from a variety of empirical material, such as observations and explanations from personal perception, case studies, field notes, life stories, diaries, interviews, questionnaires, all kinds of texts and documents, as well as films or videotapes. Accordingly, a wide range of interconnected methods is used in an attempt to explore the complexity of a phenomenon holistically, because it is assumed that the whole is more than the sum of its parts.
In TS, the most popular qualitative methods are introspective methods, such as think-aloud (TA), retrospection, interviews, questions and questionnaires. Using these methods, researchers hope to increase knowledge about, for example, translators' intentions, problems, strategies, decisions, attitudes and preferences. When investigating translation processes, for example, observers can register pauses; but why the translator stops writing, and what he/she thinks during the pause, observers do not know. They have to rely on individual reports and interpret what the translators tell them.
Qualitative research is based on subjective reports, explanations and interpretations. In TS, we need qualitative methods, but we cannot make do with only specific, private findings that cannot be generalized. According to Gile (this website), "our perception of both 'reality' around us and other people's statements is distorted and limited by our sensory and cognitive limitations". Thus, the question is: how can we push back these limitations and follow the norms of the so-called scientific method, especially norm no. 4, the aspiration to objectivity? In other words, how can we, using qualitative methods, move from the individual, subjective level, represented in individual reports and interpretations, to a level of, if not objectivity, then at least subject-independence or inter-subjectivity? Given the fundamental scientific problem that data have to be gathered and interpreted by an observer, we also have to ask: how can bias from observer effects, i.e. from the observer's interests, prejudices and attitudes, be minimized or avoided?
Additional questions arise due to the complexity of the field "translation". In many projects, the connected whole has to be taken into consideration, because the experimental conditions are complex situations with subjects and their multifarious individual backgrounds. How can we take such complex situations with many variables into consideration without renouncing the possibility of obtaining results that can be comprehended and perhaps replicated by other scholars? Answers can be found in interdisciplinarity, and especially in intermethodology.
Translation is in itself an interdiscipline (Snell-Hornby 1986: 18), in the sense that this complex phenomenon consists of inseparably connected aspects from different disciplines such as linguistics, culture, communication and terminology. In TS, these disciplines are always relevant and are thus an inherent part of the research issue of translation.
But interdisciplinarity can also be understood differently, i.e. as an attempt to adopt methods and ideas from other disciplines bearing some resemblance to the multifaceted TS. Disciplines like psychology, sociology, cognitive sciences and health care share our questions as to research methods, because they also deal with complex issues involving individuals' attitudes, behaviour and reports. This kind of interdisciplinarity means that research issues, apart from and in addition to the usually "inseparable disciplines", can be investigated from different angles, using knowledge, methods, tools and techniques from different paradigms and disciplines, which at first glance might seem to have little in common with translation.
Qualitative methods are used in many disciplines in the social sciences, psychology, the human sciences and also in the natural sciences. Especially in approaches close to empirical research, such as phenomenology, grounded theory, ethnography, the psychology of perception and consciousness studies, great efforts have been made to accommodate qualitative research to certain "scientific" norms: a balancing act between the special purposes and conditions of qualitative research on the one hand, and the requirements of the natural sciences as to exactness, reliability, validity and credibility on the other. These empirical approaches from other disciplines provide us with useful discussions, attitudes, techniques and procedures. The most important for TS are: precise and transparent description, a reflective attitude, communication techniques, coding procedures, and combinations and triangulations of methods and data.
Contracts with publishers
As authors of papers and monographs and as editors of collective volumes, we are required to sign contracts with publishers. Such contracts are necessary not only to set out the precise duties and rights of all parties to the agreement, but also to protect them. Actually, since academic books in TS have a limited market and royalties never amount to huge sums, such protection is not essential for authors/editors. It can be much more important for the publishers, who could be taken to court by other publishers for violating copyright and might have to pay large sums. Out of honesty, and out of respect for publishers who often take financial risks when publishing our work, copyright clauses should be taken seriously and observed.
On the other hand, there is no reason why scholars should grant exclusive copyright to the publisher forever. Depending on the book or the paper, the publisher may make virtually all of its sales within 5 years of the date of publication and lose very little if the product is distributed for free afterwards. For the scholar's career, it may be important to be able to distribute his/her article for far longer. Why not change the terms of the contract proposed by the publisher, for instance by introducing the possibility for the author to put his/her paper online on his/her personal website, either from the time of publication or a few years after the publication of the hard copy?
A contract is not the law. It is a commitment by two or more parties, and can be and should be negotiated until its terms meet the interests of all parties. Publishers find it convenient to propose a standard contract, with standard clauses, drafted by lawyers who have in mind the interests of their client (the publisher), not the interests of individual scholars. Some provisions in standard contracts (provided by one party) are unjustified and entail unacceptable risks to the other party. Examples will be given in another contribution on this site. Many people disregard them, saying that it is unlikely that they will be implemented. Just as unlikely as an accident in which you might injure someone and have to pay huge amounts in damages. Do you conclude from that that you can drive your car without taking out an insurance policy? Insurance costs money – negotiating changes in a contract before it is signed does not. There is no reason why scholars should accept standard contracts as they are. They can and should read them carefully, and negotiate changes in provisions which they do not like. In my experience, publishers have often been reasonable about it.