Research Issues in TS

Message from the initiator of this section

Discussion is an essential part of research, and input from researchers, young and old, is always welcome as a contribution to the community. Short pieces in this section reflect their authors' views at the time they are posted, and are essentially published for the purpose of providing such input and stimulating discussion. They are not to be seen as final statements, and may be changed or withdrawn at any time at the request of their author(s).

All members are invited to contribute short texts (preferably less than 500 words as a rule) in any language (subject to limitations in the character sets that can be supported technically) as input to the discussion. Please make sure that your contribution does not infringe any copyright and confirm this in writing when sending it in. Thank you in advance for your contribution and cooperation.

D. Gile
December 2004



Research Issues in Translation Studies

 

Combination and Triangulation of Methods and Data

Gyde Hansen, December 29, 2005

 

 

In empirical TS, combining methods and data has become an important research technique. This procedure is often called “triangulation”. For example, data from first-person observations (TAPs) are combined with data from third-person observations (the observer) in order to reach inter-subjectivity. Qualitative approaches are corroborated or complemented by quantitative approaches. This latter combination is often looked upon as especially useful, because quantitative data, which are a result of measuring and counting, are regarded as more objective and reliable than qualitative data, which are a result of persons’ perceptions and more subjective interpretations of a phenomenon (see Hansen, this website: February 17, 2005). 

 

Triangulation is frequently applied in social sciences and in many other disciplines, and the terms combination and triangulation are often used as synonyms for a mix of procedures to grasp complex phenomena and to confirm or complete a study. Triangulation is regarded as a multimix of material, strategies, methods, purposes, perspectives or investigators in an attempt to add rigor to a study.

 

The use of the triangulation metaphor has been heavily discussed and challenged, and much has been written about triangulation as a useful approach, especially in qualitative research. In its original meaning, the term ‘triangulation’ refers to a geometrical procedure in which the position of a point is determined from two known reference points: the length of the baseline between the reference points and the angles measured towards the unknown point define a triangle from which the point’s position can be calculated. This means that in triangulation, as opposed to other combinations of methods or data, reference points, i.e. prior knowledge, are used in order to gain further results or further insight.
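As a purely illustrative aside on this original, geometrical sense of the term (not part of the argument above): the short Python sketch below locates an unknown point from two known reference points, the length of the baseline between them, and the angles measured at each of them towards the unknown point. All coordinates and angles are invented, and the sketch assumes the unknown point lies on one particular side of the baseline.

```python
import math

def triangulate(a, b, alpha, beta):
    """Locate point P from two known reference points a and b.

    alpha: angle at a between the baseline a->b and the line of sight a->P (radians)
    beta:  angle at b between the baseline b->a and the line of sight b->P (radians)
    The sketch assumes P lies on the counter-clockwise side of the baseline a->b.
    """
    ax, ay = a
    bx, by = b
    baseline = math.hypot(bx - ax, by - ay)   # the one known side of the triangle
    gamma = math.pi - alpha - beta            # angle at the unknown point P
    dist_a_p = baseline * math.sin(beta) / math.sin(gamma)   # law of sines
    base_dir = math.atan2(by - ay, bx - ax)   # direction of the baseline
    return (ax + dist_a_p * math.cos(base_dir + alpha),
            ay + dist_a_p * math.sin(base_dir + alpha))

# Invented example: reference points 100 m apart, sighting angles of 60 and 50 degrees
print(triangulate((0.0, 0.0), (100.0, 0.0), math.radians(60), math.radians(50)))
# prints roughly (40.76, 70.60)
```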

 

With the original meaning of the triangle metaphor in mind, combination and triangulation can be kept apart. Combination is useful for all kinds of information collection involving multiple methods, investigators, tools, observations and data. Triangulation, in accordance with the original meaning of the term, can be an additional procedure for obtaining new results or new knowledge from existing results, and can thus provide clarity and coherence to the investigation and description of complex phenomena. In complex research projects, where many aspects have to be taken into consideration, the differentiation between combination and triangulation is a means of keeping the variety of observations under control and of making it easier to discuss, repeat and reevaluate the study. For example, data from interviews or questionnaires about the personal background of subjects can be combined with product data (evaluation of target texts), or the same data can be combined with process data from introspection. Triangulated, the results of both combinations can complement each other or reveal gaps or discrepancies and thus provide new knowledge about the relationship between personal profiles, processes and products. A complex study gains flexibility and scope when new results can be located via new constructions of triangles from known reference points (= results). 

 

 

When is a law probabilistic?

Daniel Gile

November 27, 2005

 

Probabilistic laws lie somewhere between determinism, where a phenomenon is certain, and total uncertainty, where one has no idea as to whether the phenomenon will materialize and how. Probabilistic laws attempt to quantify uncertainty. For instance, if it is known that the mean height of males in a population is 180 cm (and the spread of individual heights, i.e. the standard deviation, is also known), it is possible to quantify the probability that in a random sample of 30 males, the mean height will be between 170 cm and 190 cm.
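As a worked illustration of such quantification (not part of the original text): if one also assumes a standard deviation for individual heights – the 7 cm used below is an assumption made for the example – the probability can be computed from the normal sampling distribution of the mean, as in the following Python sketch.

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

mu = 180.0    # known population mean height (cm), as in the example above
sigma = 7.0   # assumed standard deviation of individual heights (cm) -- not given in the text
n = 30        # sample size

se = sigma / sqrt(n)   # standard error of the sample mean
p = normal_cdf(190.0, mu, se) - normal_cdf(170.0, mu, se)
print(f"P(170 cm <= sample mean <= 190 cm) is roughly {p:.6f}")  # practically 1 for these values
```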

            Uncertainty in empirical observation of phenomena may be due to the probabilistic nature of a phenomenon (i.e. variation is an intrinsic characteristic of the phenomenon). It may also be due to environmental reasons: the underlying entity itself may be regular in its nature and form, but its manifestations are influenced by external factors which are themselves probabilistic in nature or too complicated in their interactions with the phenomenon at hand to be predictable. Finally, uncertainty may be due to limitations or weaknesses in the detection and measurement of the phenomenon by man.

            In many cases, researchers investigate the existence of a trend, not its quantitative contours. For instance, in TS, the explicitation hypothesis posits the existence of a tendency, not how strong it is or to what extent a TT will be more explicit than the ST. In such a case, researchers are interested not in the quantitative variation of the phenomenon, but in its occurrence as such.

            If such occurrence is irregular, this does not necessarily mean that the law is not true, or that it is probabilistic in nature. Other factors may have prevented it from being manifest. Lunar eclipses do not become probabilistic just because cloudy skies sometimes prevent people from seeing them. The trend to make TTs more explicit than STs may be universal without necessarily being manifest everywhere, for instance if cultural conventions or the client’s brief or time pressure etc. inhibit its expression in a particular set of translations.

             Assuming that a phenomenon is probabilistic or “conditional” just because, in empirical studies, it is found not to occur regularly or to occur only under certain conditions is the same as assuming that lunar eclipses are probabilistic or conditional upon the absence of clouds in the sky. When irregularities occur, the natural procedure in research is to try to find out what causes them, starting with the removal of environmental factors which may interfere, for instance through experimental research. Only when it is thought that all interfering environmental and observer-related factors have been removed and there is still irregularity in the occurrence of the phenomenon will one conclude that this irregularity suggests that the law is probabilistic.

 

 

 

 

Translation research: commentary or critique?

Dr Maria Filippakopoulou, August 17, 2005

 

Some literary quarters harbour the suspicion that “theory” in liberal arts is but an attempt to purvey methods of analysis that enjoy grand scale consensus, consensus of the kind that can bring out visible results. This concerns not this or that theory but rather major cultural movements, such as postmodernism, or psychoanalysis, which have more or less changed the way in which critics talk to one another and conduct their business.

 

This view of theory as the “new science” in literary studies, albeit sizzling with unspoken resentment, is intriguing and has some relevance for the objectivistic pretensions – or efforts, depending on how one looks at it – of TS research. I would like to make it clear that it is research in literary translation that I am mostly interested in here, which is why I chose to start with this view.

 

It is puzzling to see that an objectivistic, scientific discourse, in inverted commas or not, is pushed again to the forefront of public discussion of TS (this website); many amongst us would think that a fair amount of ground has been covered in this respect since André Lefevere published his in many ways seminal Literary Knowledge in 1977. He sought to free literary studies – referring to it as “a pseudo-scientific discipline with a weak core” (Lefevere 1977: 26) – from the numbing embrace of logical neo-positivism and hermeneutics, restoring it instead as a source of literary knowledge (Lefevere 1977: 44-45). The fact that he became one of the leading scholars in TS by spearheading with Susan Bassnett the “cultural turn” testifies to the validity of his insights.

 

Assuming that the “scientific” discourse has been long and, in my mind at least, sufficiently debated, I would like to raise two questions related to it: the first one concerns research ethos and the second one research creativity. Undoubtedly the positivistic pursuit forges a certain research ethos. By insisting on the importance of “gathering data”, observing them with “objectivity”, “minimising bias” and so on, we advertise not only a specific methodology, but also a specific pedagogy: we encourage certain inclinations, privilege certain outlooks and educate young researchers to go for some rather than others. In effect, we tell students of translation what ethos they are to embrace in their professional enterprises.

 

However, this blanket insistence on the scientific ideal fails to mention the position of the observer in the institutional hierarchy, the use of her/his output in the knowledge market, and its social implications. It does not train researchers to go beyond the immediate context of “data” to the conditions under which such data emerged and became worthy of observation in the first place, nor to the specific ways in which they outflanked other competing data. The very nakedness of the neo-positivistic language readily distances researchers from the social make of their object of study, pre-empting any desire to link it back to the society and culture from which it was “derived”.

 

Shall we contemplate for a moment the effects of such an outlook, when successful? The translation typologies which revved up the institutionalisation of TS – grounded on a basic distinction between literary and specialised translation, or simply applied TS – are one instance of this disconcerting success. This has worked to insert a wedge between the teaching/research of literary translation and that of applied translation in most British academic departments; this in turn, given the predominant political economy of current academia, has led to the overgrowth of the latter and the shrinking of the former. The discipline has indeed made huge strides in training translators for an English-dominated globalised market on the one hand, and utterly re-gentrifying literary translation on the other. If we wanted to forge an identity for practitioners, teachers and researchers of translation which is that of professionals, well-adapted to the current cultural order, who seamlessly enter both the market and the market discourse, and if we aimed at producing a marginalised, self-aggrandising – or disenfranchised, as the case might be – class of its own in the case of literary translators, we couldn’t have done better.

 

My second concern is with research creativity. I wonder how the born-again positivistic ideal caters for blue-sky thinking, which is – let’s be frank about it – crucial for the formation of hypotheses and the selection of tools of analysis. Daniel Gile very correctly pointed out that “[t]he one essential advantage of science […] appears in the long run” (Gile 2005, this website). But the “long run” is made of our individual, far from well-rounded, usually hit-and-miss, in the first instance, attempts at making sense of translation phenomena. Quantitative considerations are all very well, but insisting too much on them could end up obscuring the lengthy, winding procedures by which we come by those hallowed quantitative observables. It doesn’t begin to tell us anything about how hypotheses are formulated in the first place, how they evolve, how they are discarded; it fails to show how new questions ever come to light. It’s far too tidy to provide any kind of enlightenment for people struggling to face those gaps in understanding, those blind spots in perception that only open-mindedness or, indeed, an “existential” mindset seems adequate to grasp. It’s not a question of simply coming up with original research projects but also of ensuring the evolution of a discipline capable of self-reflection and re-invention.

 

In closing, I would simply like to offer for this discussion the distinction between commentary and critique (Benjaminian in its origins). It should help to develop a research culture which exposes the available expertise in the field to the socio-ideological reality from which it draws its relevance – or not. But that’s a different argument.

 

 

References

Gile, D. 22 January 2005. The liberal arts paradigm and the empirical science paradigm (website on Research Issues).

Lefevere, A. 1977. Literary Knowledge. A Polemical and Programmatic Essay on its Nature, Growth, Relevance and Transmission. Assen/Amsterdam: Van Gorcum.

 

 

Translation research: LAP versus/with ESP?

A response to Maria Filippakopoulou

Daniel Gile

October 22, 2005

 

 

In her essay, Maria Filippakopoulou (MF) criticizes “scientific” discourse, “long and sufficiently debated”.

     Firstly, regarding the reasons why most of the texts in this web page refer to “scientific discourse”:

1.  Contributors simply happen to be interested in scientific discourse. There is nothing to prevent colleagues interested in other types of discourse from sending contributions as well.

2.  Many TS scholars (including some who are interested in literary translation) happen to conduct studies within the Empirical Science Paradigm (ESP). Since there are recurring problems in these studies (see for example the last paper in Hansen et al. 2004), it is perhaps legitimate to consider that the topic has not been sufficiently debated and that further clarification might be helpful.

     MF raises an ethical objection to science. This traditional criticism usually refers to the fact that scientists are trained to be objective, not to play an ethical role in society. MF also claims that the scientific approach “does not train researchers to go beyond the immediate context of “data” to the conditions under which such data emerged and became worthy of observation in the first place”, and that it “distances researchers from the social make of their object of study, pre-empting any desire to link it back to the society and culture from which it was “derived””. This is a somewhat puzzling statement: why could scientific methods not be applied precisely to study the social make-up of their object of study, as in sociology, ethnology, political science, etc.?

     According to MF, scientific discourse generated a division between literary and non-literary translation. I challenge this claim; when I was a technical translator in the early 1970s, a strong distinction between the two was already traditional in circles of professional translators who had no interest in research.

     Finally, according to MF, scientific discourse “doesn’t begin to tell us anything about how hypotheses are formulated in the first place”. It does not, unless scientific methods are used to try to investigate precisely this question. One might ask whether non-scientific discourse does tell us something about how hypotheses are formulated.

     The scientific paradigm is not exclusively quantitative. It accommodates qualitative methods as well. However, it is essentially critical, precisely because it recognizes the possibility of personal and sociological biases in scholarly analysis of reality. Perhaps M. Filippakopoulou will understand why both logic and evidence would lead scientifically trained colleagues to question her statement that someone’s popularity in a discipline proves “the validity of his insights”.

     On this website page, so far, most of the contributions have focused on one paradigm, but they have not excluded the other(s). The Liberal Arts Paradigm has advantages which the ESP does not have. I would argue for complementarity and would welcome further contributions from both sides.

 

Reference

Hansen, Gyde, Kirsten Malmkjær and Daniel Gile (eds). 2004. Claims, Changes and Challenges in Translation Studies. Amsterdam/Philadelphia: John Benjamins.

 

 

 

 

 

***

 

Marcel Thelen

(Posted on July 30, 2005)

Department of Translation and Interpreting

Maastricht School of International Communication

Zuyd University (Hogeschool Zuyd)

m.m.g.j.thelen@hszuyd.nl

 

TERMINOLOGY

Terminology: what makes up the distinction between term and word

 

In the process of specialised translation, terms can in some cases be clearly and unproblematically distinguished from words, whereas in other cases this is not so obvious, especially where terms turn out to behave like words, as in disciplines such as psychology, sociology, art & art criticism, leisure & tourism, etc.

For the translation of terms, I discern a number of steps, of which – in the translation process – term recognition is one of the most important. I have a number of research issues in this respect:

            What is it that triggers the translator’s decision to translate the item in question as a term or as a word: is it, for example, the item’s morphology or etymology, the item’s meaning description and/or context hints as given in (specialised) dictionaries, indicators in the item’s context, the item’s behaviour in the source text regarded in terms of intertextuality, the explicitness and clarity of the subject area in question, the translator’s experience, etc?

            Can these triggers be used as guiding or discovery procedures?

            What can the translator do and what aids does he have at his disposal to make the appropriate decision: is this only term extraction tools or is there more?

            What is the use of the pre-translation macro-textual and micro-textual analysis?

            Are all these issues issues at all in the presence of translation memories and term banks?

            How can corpus linguistics help (see the sketch at the end of this contribution)?

            Do the above research issues regarding term recognition also play a role in interpreting and if so, is this role the same as in translation; if not, what makes up the difference?

            In what way do theory and practice co-operate to help the translator / interpreter?

            Is it possible to generalise the findings of this type of research into rules and where will these rules be accommodated best: in theory or in practice?

In particular for students of specialised translation, it is important to know whether and how words can be distinguished from terms. If an item is a term, it should be translated by a term (if there is one available; if not, the appropriate translation procedures should be applied); if it is not, the freedom of translation is greater and the student can decide what to do (under the constraints of the translation brief and the constraints of the target language & culture). Also for the teacher of specialised translation, in particular the teacher of terminology, the term-word distinction is important from a didactic point of view: how should he explain the difference between words and terms and provide the student with appropriate aids to solve problematic cases? Also for the professional translator, the answers to the above questions can be relevant. Finally, the outcome of this research may also be of relevance to the discipline of terminology (and terminography): do terms and words behave similarly? If so, why not have one discipline instead of two, viz. terminology and lexicology? If there is a difference in behaviour, then what is it and how should it be accounted for?
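Purely by way of illustration of how corpus linguistics might help with term recognition (one of the questions raised above), the following Python sketch scores words by how much more frequent they are in a small domain-specific text than in a general reference text, a simple relative-frequency heuristic for spotting term candidates. The miniature "corpora", the tokenizer and the threshold are all invented for the example and do not reproduce any particular term-extraction tool.

```python
import re
from collections import Counter

def term_candidates(domain_text, general_text, min_count=2):
    """Rank words by how much more frequent they are in the domain text
    than in the general reference text (a crude 'termhood' score)."""
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    domain, general = Counter(tokenize(domain_text)), Counter(tokenize(general_text))
    d_total, g_total = sum(domain.values()), sum(general.values())
    scores = {}
    for word, count in domain.items():
        if count < min_count:
            continue  # ignore words that are too rare to judge
        # add-one smoothing so that words absent from the general text do not divide by zero
        scores[word] = (count / d_total) / ((general[word] + 1) / (g_total + 1))
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Invented miniature "corpora"
domain = ("The torque converter couples the engine to the transmission. "
          "The converter multiplies torque.")
general = "The children walked to the school and talked about the weather and the news."

print(term_candidates(domain, general)[:3])  # 'torque' and 'converter' outrank 'the'
```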

 

 

 

 

 Observing visualisations

 

Paul Kussmaul

Contributed on June 8, 2005

 

Visualisations can occur at certain stages in the comprehension process, and they may lead to creative translations. Creative translations can for our present purposes be defined as translations that show changes when compared with the source text, thereby bringing in something that is novel. In interpreting, visualisation as a type of deverbalisation was recognized as a method as early as 1968 by Danica Seleskovitch (I am referring to the English translation of her book L’interprète dans les conférences internationales, 1978: 55), and it was explicitly recommended as a teaching method by Seleskovitch & Lederer (1989: 24-26).

What, actually, is visualisation in translation? For the translation of the sentence “Children have forgotten how to eat, completely forgotten how to eat”, taken from an article on famine in Africa, Seleskovitch and Lederer suggest visualising a scene of a little child with bony legs and arms and a blown-up belly, a picture often seen in the media, in order to avoid a mistranslation such as “les enfants ont oublié comment manger” (They have forgotten their good manners) (Seleskovitch & Lederer 1989: 25-26.)

This is an example taken from a teaching context. With the availability of empirical tools for documenting the translation process such as triangulations between Translog files and Think Aloud Protocols, Dialogue Protocols or Retrospective Interviews, it might be possible to actually observe visualisations normally hidden in the minds of the translators, and with the heuristic means of cognitive semantics at our hands we may now be able to see more precisely what types of mental visual images exist.

More specifically, I believe that notions like point of view, focus, prototypicality and Fillmore’s scenes-and-frames may help us to describe visualisations in greater detail. In the example just quoted we may say that the teachers suggested a prototypical (or stereotypical) scene. One might hypothesise that visualising, i.e. focussing on, prototypical elements of a scene will lead to adequate and creative translations.

In the kind of empirical research I propose to carry out, one might begin by gathering types of visualisations and trying to classify them. It will be important to see whether visualisations actually lead to adequate and creative translations. It might well be that this is not always the case. Translators might visualise things that are only in their minds but not in the text in front of them, or they might focus on elements that are not prototypical. As a second step, we may try to find out if visual clues help to initiate creative translations. We might show (prototypical) pictures or give verbal descriptions of scenes to one group of subjects but not to another group and compare their translations. The problem, of course, is to keep the group variable stable, i.e. to make sure that the two groups are comparable.

The kind of research I am proposing can be seen within the scope of investigating successful translation processes. (For more details see Kussmaul 2005.)

 

Kussmaul, Paul (2005). “Translation through Visualisation”. In: META, Vol. 50, n° 2, 378-391.

Seleskovitch, Danica (1978): Interpreting for International Conferences, Washington: Pen and Booth.

Seleskovitch, Danica & Marianne Lederer (1989): Pédagogie raisonnée de l’interprétation. Collection Traductologie  n° 4, Bruxelles : Didier Érudition.

 

Scientific norms
D.Gile, December 8, 2004

Our knowledge about the world is partly experiential, i.e. obtained through direct sensory experience (what we see, what we hear, what we smell, etc.) and ulterior cognitive processing (the brain has to make sense of the sensory experience), and partly inherited, i.e. received from other people's statements, written or spoken (what we read and what we hear). Our perception of both 'reality' around us and other people's statements is distorted and limited by our sensory and cognitive limitations (we cannot see or hear everything, and there are limitations to the amount of information we can process), and by affective bias (essentially our likes and dislikes, personal ambitions, self-image etc.).

These limitations have been recognized from early times on. The so-called scientific method presented in many textbooks on research methods is a set of norms underlying research methods and rules for research criticism designed to push back such limitations to the best possible extent. These norms include the following:

  1. Science is systematic: it looks at its object of study systematically, ideally leaving no stone unturned.
  2. Science is careful: it checks the evidence which is collected as well as the rationale followed, it systematically tests theories, and it avoids drawing conclusions and making claims when one or the other has weaknesses.
  3. Science is logical: the basis of its rationale in every study is Cartesian logic.
  4. Science aspires to be objective: it recognizes that personal bias is in the way of every scientist's attempts to explore the world, and therefore tries to reduce such personal bias or eliminate it whenever possible, partly through awareness-raising, and partly through procedures.
  5. Science is critical: criticism is one way for members of the scientific community to help each other, as it is often easier to see the flaws and make constructive suggestions to correct them in another researcher's work than in one's own.
  6. Science is collective: every scientist draws upon the work of other members of the community in terms of evidence, theories and methods, and every scientist contributes to the community by offering his/her own input on evidence, theories and methods.
  7. Science is communicative: in order for the collective building of science to occur, scientists communicate the results of their work in both oral papers and publications.
  8. Science is explicit: when communicating his/her input to other members of the community, a scientist reports his/her work explicitly, so that they can understand what s/he has done and build upon it and/or help him/her by criticizing it. In particular, s/he backs up assertions with explicit evidence, be it in the form of data or in the form of references to findings by other scientists.

The quality of scientific progress as a whole depends to a large extent on the individual scientists' compliance with these norms.

Note: This text introduces norms of science as it is defined by the so-called scientific method generally invoked in empirical disciplines, both natural and behavioural. I do not claim that these norms are universal, that science cannot be defined otherwise, or that research not in line with them is “unscientific” in any absolute sense of the word.



Institutional measures for norm enforcement
D. Gile, January 5, 2005

How does the scientific community make sure that individual members of the community comply with its norms? Firstly, through training. When scientists are trained, their instructors teach them not only theory and facts collected by other scientists, but also research methods which have been designed to implement scientific norms, in particular by raising their awareness with concepts such as validity and reliability and with tools such as inferential statistics. Students are thus socialized into these norms over several years of training. In some disciplines, such socialization starts during their undergraduate studies. In others, it only starts at graduate level. In all cases, it extends up to doctoral, and post-doctoral/habilitation level (the habilitation has been institutionalized in some countries as a post-doctoral qualification which gives access to the function of doctoral studies supervisor and which is a prerequisite to full professorship).

An interesting feature of the scientific community is that it institutionalizes tests of a sort for its members, through peer reviewing, every time they want to publish results of their work in a reputable journal: a text submitted for publication is read critically by other members of the community, who assess it and make comments and recommendations, in particular in favour of or against its publication with or without corrections.

The scientific community has also made publication a vital part of the scientists' career, thus helping enforce the collective and communicative norms of science. Both the reputation and the chances of scientists to be promoted at university and research institutions are to a large extent determined by the number and quality of papers they manage to publish, in particular in reputable journals.

All these institutional measures combine to create considerable pressure on members of the community to comply with the norms throughout their career.



The liberal arts paradigm and the empirical science paradigm
D. Gile, January 22, 2005

If my memory serves me right, the concept of “science” was never raised as an issue during my undergraduate and graduate studies in mathematics. Neither was the concept of “research methods”, which I encountered when I became a student of sociology. Many years later, when I migrated into TS, I found that both were central issues in the discipline.

Some TS scholars who come from established empirical disciplines such as psychology or neurophysiology tend to do research in compliance with the norms of the “scientific method”. I will refer to this type of research, sometimes mistakenly assumed to be characteristic of the natural sciences only, as the “Empirical Science Paradigm” (ESP). Other TS scholars come from a humanities background and tend to do research somewhat differently, in what I will refer to here as the "Liberal Arts Paradigm" (LAP). The Empirical Science Paradigm is demanding in terms of caution, of systematicity and of explicitness. By requiring individual authors to observe rigorously the 8 norms of the “scientific method”, it attempts to prevent authors from publishing claims without a relatively solid basis. The Liberal Arts Paradigm shares some of the norms, but allows authors to make claims that are not the only logical consequence of facts used to make the inference, to make them without informing the readers of the exact facts and methods used to make the inferences, and to make them without making sure that all relevant data have been used and point in the same direction.

I have found it sociologically counterproductive to try to determine which of the two is more “scientific” or which is more efficient to explore the world. Clearly, each has its advantages and its limitations: for instance, while ESP draws inferences more rigorously at the individual level, LAP can correct misperceptions through collective discussion, and gives scholars more freedom to express useful insights which cannot be tested through empirical methods.

The important fact to remember is that the two paradigms are distinct and may lead people with the same understanding of a situation to express their views differently. ESP scholars tend to only make claims which they can substantiate, while LAP scholars tend to also make claims based on what they feel intuitively. Misunderstandings in the literature can be explained by authors’ failure to take into account this inter-paradigmatic gap (see for instance Pöchhacker’s chapter 11 and Gile’s response in chapter 13 of Schäffner, Christina (ed). 2004. Translation Research and Interpreting Research. Clevedon, Buffalo, Toronto: Multilingual Matters.).



"Scientific facts"
D. Gile, January 22, 2005

Is there anything special about “facts” as they are produced by science (in the sense of the ESP as explained in previous contributions on this page), as opposed to facts collected, presented or asserted outside of science? Are they more accurate, more reliable, more comprehensive in covering a given portion of reality, say translation process, translation quality, the translators' personality, etc.? Sometimes they are, if the same phenomenon has been the object of scientific enquiry by many scientists for a long time. However, this is not necessarily the case. Three reasons are given here by way of illustration:

When scientists start investigating a phenomenon, they often have a theory which they seek to prove. This may (inter alia) make them sensitive to some parts of reality and less so to others. Thus, they may misinterpret reality and “see” facts where other scientists with different theories would “see” other facts.

Another reason why scientific “facts” are not always reliable is the limited quality of tools used to collect them, be they physical tools (optical, electronic or otherwise) or intellectual tools such as observation grids, mathematical calculation methods, etc.

Thirdly, science is systematic, and may therefore look at specific parts of the phenomenon under study gradually rather than study it holistically. It may therefore take a lot of time before it covers all its important facets. In contrast, non-scientists who are in daily contact with the phenomenon may have deep intuitive knowledge of the same facets.

If so, why should one look to science to provide facts to explore the world or provide applications? The one essential advantage of science in this respect appears in the long term. Scientific facts are produced in compliance (to the best possible extent) with scientific norms that seek to reduce misperceptions and to correct errors and distortions through collective efforts, including criticism. Through this self-correction process, the factual basis collected and published in the literature eventually becomes increasingly accurate and reliable, whereas non-scientific facts may remain at the same level of reliability for a very long time. However, in order for the process to take its appropriate course, it is important that scientific norms be complied with.



Qualitative and Quantitative Research and Empirical Translation Studies
Gyde Hansen, February 17, 2005 - Copenhagen Business School

Empirical research is based on data systematically derived from perception and observation of aspects of reality. In a research project, data collection, analysis and interpretation of the data entail choices as to the different methods, techniques and procedures which might be the most promising. In TS, many different quantitative and qualitative methods are used.

Quantitative methods are based on and proceed from the researcher's ideas and hypotheses about observed dimensions as well as calculable and measurable categories. Qualitative methods are based on interpretations of reports on the experiences and/or actions of individuals. Whereas the focus in quantitative research is on relations between a few isolated variables in larger samples, the focus in qualitative research is on relations between many variables investigated in smaller samples. Both quantitative and qualitative methods have advantages and limitations, but each mode of research makes its own contribution to the attempt to increase knowledge. If we, for example, examine a human body, we can measure height, weight, foot size, blood pressure etc. But as soon as we have to describe a person's complexion, hair colour, feelings or perception of pain, we have to rely on interpretations and reports that are based on experience.

The choice of qualitative and/or quantitative methods has to be made in relation to the particular research issue(s) under study. Sometimes quantitative methods can be used, for example when investigating, in TS, the length of pauses taken or the number of keystrokes made during translation processes; in other cases, purely qualitative methods are useful, for example in reports on translation problems or on the personal involvement of the translator during the translation process. However, as qualitative data can in many cases be coded and counted, and as quantitative data and results always need to be interpreted and explained, both aspects will always be present. In TS, quantitative and qualitative methods can be used in a variety of combinations and triangulations. There is no universally "best way" of combining methods.
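As an illustration of the kind of quantitative measure mentioned above, the following Python sketch counts keystrokes and extracts pause lengths from a list of keystroke timestamps. The log format and the 2-second pause threshold are invented for the example and are not taken from any particular logging tool.

```python
# Hypothetical keystroke log: (timestamp in seconds, character typed)
keylog = [(0.0, "T"), (0.4, "h"), (0.9, "e"), (4.2, " "), (4.8, "c"), (9.5, "a"), (9.9, "t")]

PAUSE_THRESHOLD = 2.0  # assumed cut-off (in seconds) for what counts as a pause

keystroke_count = len(keylog)
pauses = [
    later - earlier
    for (earlier, _), (later, _) in zip(keylog, keylog[1:])
    if later - earlier >= PAUSE_THRESHOLD
]

print("keystrokes:", keystroke_count)        # 7
print("number of pauses:", len(pauses))      # 2
print("mean pause length (s):",
      round(sum(pauses) / len(pauses), 1) if pauses else 0.0)   # about 4.0
```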



Qualitative research, methods and data
Gyde Hansen, February 23, 2005 Copenhagen Business School

Qualitative research is "any type of research that produces findings not arrived at by statistical procedures or other means of quantification" (Strauss/Corbin 1998: 10). It is in-depth investigation of phenomena, taking as many variables into consideration as possible. It is interpretive, often employing naturalistic approaches to people's lives, experiences, emotions and behaviour, as well as to cultural phenomena, social or political interaction, etc. It is "multimethod in focus" and an attempt "to make sense of, or interpret, phenomena in terms of the meanings people bring to them" (Denzin/Lincoln 1994: 2).

The assumption in qualitative research is that a person who experiences or perceives a phenomenon can also give the most precise description of it.

Data in qualitative research are derived from a variety of empirical material, such as observations and explanations from personal perception, case studies, field notes, life stories, diaries, interviews, questionnaires, all kinds of texts and documents, as well as films or videotapes. Accordingly, a wide range of interconnected methods is used in an attempt to explore the complexity of a phenomenon holistically, because it is assumed that the whole is more than the sum of its parts.

In TS, the most popular qualitative methods are introspective methods, such as think-aloud (TA), retrospection, interviews, questions and questionnaires. Using these methods, researchers hope to increase knowledge about, for example, translators' intentions, problems, strategies, decisions, attitudes and preferences. When investigating translation processes, for example, observers can register pauses, although they do not know why the translator stops writing or what he/she thinks during the pause. They have to rely on individual reports and interpret what the translators tell them.

References
Denzin, N.K. & Y.S. Lincoln. 1994. Introduction. Entering the field of qualitative research. In Denzin Norman K. & Yvonna S. Lincoln (eds.) Handbook of Qualitative Research. London: Sage. 1-17.
Strauss, A.L. & J. Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. London: Sage.



Qualitative research in TS - interdisciplinarity + intermethodology
Gyde Hansen, March 2, 2005 - Copenhagen Business School

Qualitative research is based on subjective reports, explanations and interpretations. In TS, we need qualitative methods, but we cannot make do with only specific, private findings that cannot be generalized. According to Gile (this website), "our perception of both 'reality' around us and other people's statements is distorted and limited by our sensory and cognitive limitations". Thus, the question is: how can we push back these limitations and follow the norms of the so-called scientific method, especially norm no. 4, which concerns the aspiration to objectivity? In other words, how can we, using qualitative methods, move from the individual, subjective level, represented in individual reports and interpretations, to a level of, if not objectivity, then at least subject-independence or inter-subjectivity? Especially in view of the fundamental scientific problem that data have to be gathered and interpreted by an observer, we also have to ask the question: how can bias from observer effects, i.e. the observer's interests, prejudices and attitudes, be minimized or avoided?

Additional questions arise due to the complexity of the field "translation". In many projects, the connected whole has to be taken into consideration, because the experimental conditions are complex situations with subjects and their multifarious individual backgrounds. How can we take such complex situations with many variables into consideration without renouncing the possibility of obtaining results that can be comprehended and perhaps replicated by other scholars? Answers can be found in interdisciplinarity, and especially in intermethodology.

Translation in itself is an interdiscipline (Snell-Hornby 1986: 18), in the sense that the complex phenomenon consists of inseparably connected aspects from different disciplines like linguistics, culture, communication and terminology. In TS, these disciplines are always relevant and thus an inherent part of the research issue translation.

But interdisciplinarity can also be understood differently, i.e. as an attempt to adopt methods and ideas from other disciplines bearing some resemblance to the multifaceted TS. Disciplines like psychology, sociology, cognitive sciences and health care share our questions as to research methods, because they also deal with complex issues involving individuals' attitudes, behaviour and reports. This kind of interdisciplinarity means that research issues, apart from and in addition to the usually "inseparable disciplines", can be investigated from different angles, using knowledge, methods, tools and techniques from different paradigms and disciplines, which at first glance might seem to have little in common with translation.

Qualitative methods are used in many disciplines in social sciences, psychology, human sciences and also in natural sciences. Especially in approaches close to empirical research, such as phenomenology, grounded theory, ethnography, psychology of perception and consciousness studies, great efforts have been made to accommodate qualitative research to some "scientific" norms; a balancing act between the special purpose and conditions of qualitative research on the one hand, and the requirements of natural sciences as to exactness, reliability, validity and credibility on the other hand. These empirical approaches from other disciplines provide us with useful discussions, attitudes, techniques and procedures. Most important for TS are: precise and transparent description, reflective attitude, communication techniques, coding procedures and combinations and triangulations of methods and data.
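One concrete example of a coding procedure that supports inter-subjectivity is to have two observers code the same data independently and to quantify their agreement. The following Python sketch computes Cohen's kappa, a standard chance-corrected agreement measure, for two hypothetical coders; the categories and codings are invented for illustration and are not taken from any actual study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders who coded the same items."""
    assert len(coder_a) == len(coder_b) and coder_a, "both coders must code the same items"
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten think-aloud segments into invented problem categories
coder_1 = ["lexis", "syntax", "lexis", "pragmatics", "lexis",
           "syntax", "lexis", "lexis", "pragmatics", "syntax"]
coder_2 = ["lexis", "syntax", "lexis", "lexis", "lexis",
           "syntax", "syntax", "lexis", "pragmatics", "syntax"]

print(round(cohens_kappa(coder_1, coder_2), 2))  # about 0.67: substantial but imperfect agreement
```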

References
Gile, D. 8 December 2004. Scientific norms. (website on research issues)
Snell-Hornby, M. 1986. Übersetzen, Sprache, Kultur. In Übersetzungswissenschaft. Eine Neuorientierung. Snell-Hornby, M. (ed). Tübingen: Francke. 9-29.


Contracts with publishers - Fundamentals
D.Gile, April 2, 2005

As authors of papers and monographs and editors of collective volumes, we are required to sign contracts with publishers. Such contracts are necessary not only to set out the precise duties and rights of all parties to the agreement, but also to protect them. Actually, since academic books in TS have a limited market and royalties never amount to huge sums, such protection is not essential for authors/editors. It can be much more important for the publishers, who could be taken to court by other publishers for violating copyrights and have to pay large sums. Out of honesty and out of respect for publishers who often take financial risks when publishing our work, copyright clauses should be taken seriously and observed.

On the other hand, there is no reason why scholars should grant exclusive copyrights forever to the publisher. Depending on the book or the paper, the publisher may make virtually all of the sales within 5 years from the date of publication and lose very little if the product is distributed for free afterwards. For the scholar’s career, it may be important to be able to distribute his/her article for a far longer time. Why not change the terms of the contract proposed by the publisher, for instance by introducing the possibility for the author to put her/his paper online on her/his personal website, either from the time of publication or a few years after the date of publication of the hard copy?

A contract is not the law. It is a commitment by two or more parties, and can be and should be negotiated until its terms meet the interests of all parties. Publishers find it convenient to propose a standard contract, with standard clauses, drafted by lawyers who have in mind the interests of their client (the publisher), not the interests of individual scholars. Some provisions in standard contracts (provided by one party) are unjustified and entail unacceptable risks to the other party. Examples will be given in another contribution on this site. Many people disregard them, saying that it is unlikely that they will be implemented. Just as unlikely as an accident in which you might injure someone and have to pay huge amounts in damages. Do you conclude from that that you can drive your car without taking out an insurance policy? Insurance costs money – negotiating changes in a contract before it is signed does not. There is no reason why scholars should accept standard contracts as they are. They can and should read them carefully, and negotiate changes in provisions which they do not like. In my experience, publishers have often been reasonable about it.

 



 

Description

Gyde Hansen, April 26, 2005 - Copenhagen Business School

 

Description of an observed phenomenon in empirical research is a kind of communication process. There is always an object of description and an activity consisting in describing it for a receiver. What is needed in order to get closer to objectivity is an unambiguous exchange of experience. Other researchers need the opportunity to take a stand on the validity of the described observations and to decide if they want to replicate the experiment. That is why descriptions in research have to be reflective, precise, careful, consistent, honest and sincere.

          The process of description is a sequence of impressive (i.e. impression-receiving) and expressive subprocesses: observation, perception, identification and classification, as well as verbalization and reception. An important goal of description in research is cognitive clarification, which entails finding the most precise expressions in order to facilitate optimal perception of the phenomenon under study. In this connection, it is of crucial importance that the sender's impressions are expressed in a manner such that the receiver understands exactly what is meant. This means that description processes are not static, but constantly influenced by pragmatic conditions.

          Description processes can consist of two complementary modes of description: an analytic mode and a synthetic mode.

          The analytic mode is a series of discriminating procedures and choices which aim at isolating the object of description systematically and at identifying and categorizing the phenomenon, so that there is no doubt as to the issue under focus. This dividing and categorizing procedure has its price, however, because the result of the description of an isolated object may be in contradiction to the way the object is experienced in its natural surroundings. As soon as we isolate, we risk losing the object, because it is taken out of its real mental connections. This explains why it is an advantage to complement the analytic mode with the synthetic mode of description.

          The synthetic mode of description regards the phenomenon as one among others in larger units and investigates it in connection to other phenomena or processes from its surroundings.

          Through a series of analytic and synthetic processes, the description becomes clearer and closer to adequately portraying the phenomenon. It is important to note that the process of description is dynamic and that both modes of description can be used complementarily through different kinds of classification and categorizations into new patterns, in an attempt to constantly improve clarity. As soon as one mode of description proves insufficient to characterize a phenomenon, the other mode can take over.

 




 

Description - Some examples from empirical TS

Gyde Hansen, June 2, 2005 - Copenhagen Business School

 

 

What is it that makes descriptions reflective, precise, careful, consistent, honest and complementary, when we try to aspire to a certain degree of objectivity? Some examples:

          Being reflective means keeping under control the complex relationship between knowledge production, the context of the research process and the involvement or influence of the personality of the observer. It means awareness of the most common kinds of bias arising from the experimental situation, for example the observer's influence, perspective, role and interests during the experiments and afterwards, when data and results are interpreted. Potential institutional interests can also have an influence. If such aspects are mentioned – or other aspects such as special incidents (e.g. misunderstandings) during the experiments – the reader of the description can take a stand on the study and its results in relation to that information.

          Being precise, careful, consistent and honest has to do with the scientific norm of being systematic and "ideally leaving no stone unturned". Having a hypothesis, researchers often – even unconsciously – tend to ignore observations which don't exactly fit. Automatically, we try to get our ideas corroborated and confirmed, but observations which don't fit at first glance should not end up as trash. Later in an empirical study, in connection with other results and new patterns, they can suddenly emerge as being extremely relevant. Working systematically when describing data also involves keeping documentation, reflections and conclusions apart. The reader then gets a chance to draw his/her own conclusions from the same data.

            Describing complementarily means categorizing and describing the phenomenon in focus both in isolation and, alternately, in relation to other relevant phenomena in its surroundings. In research on translation processes, for example, pauses are of great interest because they can be measured. They can also be characterized and categorized, but in order to gain insight into what is going on during the pauses, it is also necessary to look at non-pauses, i.e. what happens just before and immediately after the pauses, and also at the completed translation product.
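Purely as an illustration of what looking at the immediate surroundings of a pause could mean in practice, the following Python sketch extracts the keystrokes typed just before and just after each long pause from a hypothetical keystroke log. The log format, the 3-second threshold and the size of the context window are invented for the example and do not correspond to any particular logging tool.

```python
# Hypothetical keystroke log: (timestamp in seconds, character typed)
keylog = [(0.0, "D"), (0.3, "i"), (0.6, "e"), (0.9, " "), (5.2, "K"), (5.6, "a"),
          (6.0, "t"), (6.3, "z"), (6.6, "e"), (11.4, " "), (11.8, "s")]

PAUSE_THRESHOLD = 3.0   # assumed cut-off for a "long" pause (seconds)
WINDOW = 3              # keystrokes of context to keep on each side of the pause

for i in range(1, len(keylog)):
    gap = keylog[i][0] - keylog[i - 1][0]
    if gap >= PAUSE_THRESHOLD:
        before = "".join(ch for _, ch in keylog[max(0, i - WINDOW):i])
        after = "".join(ch for _, ch in keylog[i:i + WINDOW])
        print(f"pause of {gap:.1f} s between '{before}' and '{after}'")
```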

 
