Eric G. Mart, Ph.D., ABPP | June 23, 2013
One of the major advances in psychological testing over the last decade has been the use of computers to both administer and score various types of psychological tests. Years ago, if a psychologist wanted to administer an objective personality test such as the Minnesota Multiphasic Personality Inventory, the subject was given a booklet with hundreds of questions and a “bubble sheet” to record his or her answers. To score the test, the psychologist would laboriously count the responses for each scale using Mylar overlays, and then calculate the scores using the manual and a profile calculation worksheet. This process was time-consuming, and it was easy to make simple scoring errors.
With the advent of the personal computer, the process improved radically. These days, the same tests can be given on the computer, and some versions will even read the questions out loud to the subject if he or she has trouble reading or has visual impairments. Once administered, the test can be almost instantly scored and printed. This process greatly improves accuracy and saves a great deal of time. These programs are available for a wide variety of tests of personality and cognitive ability.
But the use of computers for psychological testing also has potential disadvantages which many non-psychologists (and some psychologists) do not understand. One of the most problematic aspects of computerized testing is the indiscriminate use of narrative “canned” reports. Once the test is administered, the psychologist has a number of scoring options. The first of these is to use what is sometimes referred to as a profile report. When this scoring option is used, the psychologist is provided with the scores for the various scales of the test that was administered. Using these scores, the psychologist consults the manual and references to interpret the test.
The other option is for the psychologist to purchase what is sometimes referred to as a narrative or interpretive report. If this option is chosen, the test scores are accompanied by a boilerplate interpretation of the test, sometimes including diagnoses and treatment recommendations. While the use of such reports can be helpful in clinical settings, they are problematic in court-related cases. One such problem is that some psychologists use the computer-generated reports uncritically. With a few exceptions, it is not possible to know if the statements and descriptions in these interpretive reports are based on empirical research or the clinical experience of the author. Consequently, when a psychologist relies on these interpretive reports, he or she has no way of knowing the basis of the interpretations or the accuracy of the statements. A second problem with these reports is that there is a tendency for courts to place undue reliance on the computer-generated statements. Finally, some psychologists have obtained these narratives and simply cut and pasted selected statements directly into their reports. Worse, some mental health professionals wrongly believe that if they do not actually print out the interpretive report it is not discoverable. These practices are very problematic. The unattributed use of portions of the interpretive report is almost certainly a copyright violation, and “hiding the ball” by not producing the report creates ethical and evidentiary problems.
Attorneys who regularly cross-examine mental health professionals who use psychological tests can address these problems by taking several steps. Discovery requests should specifically ask for the full printout of any narrative or interpretive reports that were generated in the case, and the expert’s report should be scrutinized side-by-side with the computerized test report. Further, if such interpretive reports were employed and the computer-generated statements incorporated into the expert’s report, the expert should be questioned about the underlying research and data that support these interpretations. Awareness of the use of computer-generated reports and a bit of careful inquiry can be helpful in revealing whether the expert used testing to come to scientifically defensible conclusions, or whether interpretations from these products were simply cut and pasted into the expert’s own report without sufficient regard for their scientific foundation.