Overkill and collateral damage in empirical research methodology

Daniel Gile

June 2, 2010

Research is targeted action: it aims at exploring research questions and/or advancing towards answering them. It takes place in ‘economic’ space and under ‘economic’ constraints of limited time, labour, and financial and human resources, on the side of both the observer and the observed. Therefore, ‘economic’ considerations are naturally part of any research design.

In ‘economic’ terms, overkill is a waste of resources: gathering and processing information takes resources, and the right balance needs to be found between the value of information and its cost in time and labour, which may then be unavailable for other research operations. Overkill can also reduce the efficiency with which more relevant information is collected. For instance, roughly speaking, the longer a questionnaire, the higher the risk that respondents will not complete it. In a questionnaire focusing on a particular phenomenon, some questions are essential, others are relevant, and still others could be useful for further investigations but are not really required for the investigation at hand. It is tempting to add the latter to the questionnaire, and this may be a wise strategy when the same respondents, or other respondents with similar characteristics, are thought likely to be difficult to access a second time; but it may entail collateral damage in the form of a lower response rate, so careful consideration needs to be given to priorities.

Another type of collateral damage resulting from overkill is more methodological in nature. Not unlike medical drugs, which have both beneficial and unwanted (toxic) side effects, research action can cause damage to itself. Ecological validity problems in experimental research are perhaps the best-known example: in order to observe and/or measure accurately, experimenters create a particular environment, at the risk of removing it so far from natural conditions that doubts arise as to whether what their experiments reveal is truly applicable to the real-life phenomena under investigation. Now that triangulation has become popular (not without reason) and new technologies allow multiple data collection modalities, there may be a trend to systematically accumulate three or four observation methods, such as direct human observation, keystroke logging, think-aloud protocols (TAPs) and retrospective interviews, with the drawback that one or two of them may add little information while distorting the process under study (for instance direct human observation in the working environment, or TAPs). Does each of the methods, in combination with the others, add enough value and cause sufficiently little potential damage to be included in the procedure? Rather than replicate a particular triangulation combination because it has been used by previous investigators, it may be wise to analyze the envisaged procedure in terms of cost and benefit and perhaps streamline it for efficiency.