_________________________________________________________________

VOLUME 5, ISSUE 1     PSYCHNEWS INTERNATIONAL          July 2000
                   -- AN  ONLINE  PUBLICATION --
_________________________________________________________________


SECTION C: ARTICLE


           DATA QUALITY IN ELECTRONIC INTERVIEWS


                     Kersten Vogt, Ph.D.


     It has long been standard to analyze survey data with
computers, and over the last decade computers have increasingly
been used in the data collection process itself. The Computer
Aided Telephone Interview (CATI) is widely known, but several
other forms of electronic interviews have emerged during the
last ten years. In the mid-1990s, Computer Aided Interviews
(CAI) were still divided into Computer Assisted Data Collection
(CADAC), in which interviewers entered the data into computers,
and computerized self-administered questionnaires (CSAQ), in
which the respondents entered their answers themselves.
Subgroups of CADAC were the Computer Aided Telephone Interview
(CATI), the Computer Aided Personal Interview (CAPI) and the
Stationary Computer Interview (SCI). Subgroups of CSAQ were
Disk-By-Mail surveys (DBM), Electronic Mail surveys (EM),
Download Questionnaires (DQ) and online questionnaires. Today
this differentiation is no longer in keeping with the times:
technical developments and the respondents' increased
familiarity with computers have given rise to a wide range of
CAPI and SCI variants.
     The advantages of data collection with computers include
savings in time and money, a reduction of work because the
coding of the data is done by the computer, a more pleasant
presentation of questions thanks to the graphical capabilities
of computers, the application of non-reactive instruments
(e.g. measuring the time needed to answer a question) and
easier handling of filter questions. Advantages in the form of
better data quality are seen in the reduction of interviewer
bias, greater obstacles to forging interviews, the elimination
of systematic biases through randomization of questions and
response formats, built-in consistency checks across answers,
and a higher standardization of the interview situation
(see Saris 1991: 20-25).
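
     As a concrete illustration, the sketch below shows how two
of these features, randomized question order and a built-in
cross-answer consistency check, might look in a computerized
self-administered questionnaire. This is a minimal Python
sketch, not code from any system discussed here; the questions
and the age/birth-year rule are hypothetical.

import random

# Minimal sketch with hypothetical questions: randomize the
# question order, then run a built-in consistency check
# across two of the answers.

SURVEY_YEAR = 2000  # assumed reference year for the check

QUESTIONS = [
    ("age", "How old are you?"),
    ("birth_year", "In which year were you born?"),
    ("children", "How many children do you have?"),
]

def run_interview():
    answers = {}
    order = QUESTIONS[:]
    # Randomized question order eliminates systematic
    # sequence effects across respondents.
    random.shuffle(order)
    for key, text in order:
        answers[key] = int(input(text + " "))
    # Built-in consistency check: reported age must match
    # the reported birth year within one year.
    implied_age = SURVEY_YEAR - answers["birth_year"]
    if abs(implied_age - answers["age"]) > 1:
        print("Age and birth year are inconsistent; please check.")
    return answers

if __name__ == "__main__":
    print(run_interview())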
     The aspect of data quality this article deals with
addresses a different issue: it has rarely been investigated
whether computerized questionnaires produce the same results as
questionnaires used in traditional interviews. To answer this
question, the results of an experiment are reported in which
the questionnaires of two traditional face-to-face interviews
(ALLBUS 1996 and ALLBUS 1998) were presented on a computer and
answered by the respondents via keyboard or mouse click. The
layout of the computerized questionnaires corresponded 1:1 to
the paper design.
     To test construct validity, the correlations (Pearson's r)
of an SES index built from school education, vocational
training, occupational prestige and income with self-reported
SES are compared across two traditional interviews and two
electronic interviews (DBM and EM).


Correlations between Self-Reported Status and the SES Index 

                           Self-Reported SES
                 E-Survey     ALLBUS     E-Survey     ALLBUS
                   1996        1996        1998        1998

SES index         0.52**      0.51**      0.52**      0.55**
N                  235        2573         241        1668  
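
     For readers who want to run this kind of validity check on
their own data, the sketch below shows one way to build such an
index and compute Pearson's r in Python. Two assumptions are
made: the article does not specify how the four components were
aggregated, so the sketch simply averages their z-standardized
values, and the data are random placeholders rather than the
ALLBUS or e-survey data.

import numpy as np

# Sketch of the validity check, under two stated assumptions:
# the SES index is taken as the mean of z-standardized
# components, and the data below are random placeholders.

rng = np.random.default_rng(0)
n = 235  # e.g. the E-Survey 1996 sample size

# Columns: school education, vocational training, occupational
# prestige, income (placeholder values only).
components = rng.normal(size=(n, 4))

# z-standardize each component, then average into one index.
z = (components - components.mean(axis=0)) / components.std(axis=0)
ses_index = z.mean(axis=1)

# Placeholder stand-in for the self-reported SES item.
self_reported = ses_index + rng.normal(scale=1.0, size=n)

# Pearson's r between the index and self-reported SES.
r = np.corrcoef(ses_index, self_reported)[0, 1]
print(f"Pearson's r = {r:.2f}")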


     Of course, the construction of the SES index itself could
be discussed further on methodological grounds. What matters
for this article is the finding that an index built in the same
way in all four surveys shows nearly the same correlation in
each of them with a self-report of SES that was worded in
exactly the same way.
     In conclusion, the construct validity of the SES index is
comparable across all four surveys (whatever its absolute level
may be). This coincides with other findings showing no effect
of the computer medium when the layout of the computerized
questionnaire is a 1:1 translation of the paper design (see
Klinck 1998).
     It is often argued that the traditional paper design of
questionnaires is not adequate for electronic presentation.
This argument is certainly right. But given the findings that
deviations in the electronic presentation of questions produce
deviations in the answers, I assert that deviating electronic
presentations need to be carefully tested before use, or that
new norms of the traditional instruments must be computed for
their (deviating) electronic-survey format (see Klinck 1998;
Kubinger & Farkas 1991).



References

Saris, Willem E. 1991: Computer Assisted Interviewing;
        Newbury Park/London/New Delhi: Sage.

Klinck, Dorothea 1998: Papier-Bleistift- versus computer-
        unterstuetzte Administration kognitiver Faehigkeits-
        tests: Eine Studie zur Aequivalenzfrage; in:
        Diagnostica 44/2: 61-70.

Kubinger, Klaus D. & Maria G. Farkas 1991: Die Brauchbarkeit
        der Normen von Papier-Bleistift-Tests fuer die
        Computervorgabe: Ein Experiment am Beispiel der SPM
        von Raven als kritischer Beitrag; in: Zeitschrift fuer
        Differentielle und Diagnostische Psychologie 12/4:
        257-266.


Kersten Vogt, Ph.D., is a research associate at Humboldt
University Berlin, Germany, Institute for Rehabilitation
Sciences, Dept. of Health Services Research.
His email address is kersten.vogt@rz.HU-Berlin.de


_________________________________________________________________