A Grammatical Analysis of the MMPI-2
"I can't say": truthful, thoughtful responding in theforensic context will (appropriately) invalidate the test.
By Elizabeth J. Kates, Esq.
This is not a radical article. It might seem that way at first, but upon reflection, the position should seem quite tame. Lawyers' standard advice to their litigation clients is to "answer questions truthfully". To think before speaking. Perhaps to respond succinctly. But in any case, to respond "truthfully". Only rarely does the advice deviate from this, e.g. for fifth amendment purposes, or for questions that violate attorney-client privilege, in which case the client is told not to respond at all. In preparing clients for deposition and court testimony, it also is common to review the kinds of questions a client might get from opposing counsel, and among other things, to caution the client to recognize "traps", that is, questions that appear to be demanding a simple yes or no, blanket agreement or denial, but which in fact just cannot be answered in that manner if they are to be answered truthfully (or at least without being misleading). But what happens when clients are ordered to take forensic examinations in the context of litigation? The same kind of counseling is not done. Should it be? This article argues that the answer to that question is "yes".
Psychological tests given in the family law forensic context are inherently invalid. They do not measure what they purport to measure (anything at all pertaining to parenting capacity). They cannot discern past facts. They have no accurate predictive ability. They in fact rarely shed any light on the legal issues to be decided, but instead often enable forensic evaluators to mischaracterize the parties and justify, under the pretext of adding in "objective science", their biased opinions. The problem likely is present in other forensic litigation contexts, but is particularly bad in the area of family law. That there is little interrater reliability in psychological diagnosing is well-known. That the introduction of psychological forensics in a child custody case creates burdensome expense, complicates cases, and creates new problems is well known. That parenting evaluators frequently get it wrong is well-known. That no psychological tests can discern who is or is not a "good parent" is well-known. But that child custody evaluators actually have little or no wisdom to add in a family law case for some reason is not also well-known. And it should be.
Isn't avoiding these kinds of distortions, errors, and malfeasances exactly why clients are prepared for testimony and told when they should not respond at all? Of course. So why -- contrary to all other aspects of the litigation, and arguably contrary to multiple constitutional and statutory rights that clients have (first, fourth, fifth amendment, privacy, etc.) -- are they not prepared adequately for psychological tests? Because psychologists make a lot of noise about "coaching" and "invalidating the tests"? Because the psychological trade promotion industry has sold the public, including legislatures and judges, one heck of a bill of goods regarding their so-called "expertise", which in fact amounts to nothing much at all in the area of parenting? Because the courts have become inundated with more and more unnecessary "expert testimony" in denigration of traditional rules of evidence and due process?
Psychological tests bolster the aura of expertise and the opinions uttered as "evidence" in court by forensic evaluators, creating a veneer of pseudo-objectivity and scientific validity.
The first line of defense, of course, is to object to having any client take these tests. Case law in multiple states indicates irreparable harm from them when mental health is not "at issue". That justifies an interlocutory appeal. But there's not always time or money for that. And some judges would forego ordering custody evaluations altogether if they merely duplicated what the judges recognize to be ordinary evidence. The idea of some kind of scientific or medical "assessment" that can ascertain "facts" beyond the ability of the judge to discern (like a medical lab test) is the selling point. So if avoiding this snake oil isn't possible, then the client at least should be well-prepared.
The granddaddy of psychological personality tests is the MMPI, or MMPI-2 in its revised version. It is one of the oldest of the psychological tests, and the one that has been most studied, and also the psychological test most often administered by psychologists for use in court. So this first article on psychological test taking in the forensic litigation context will address the MMPI-2.
The MMPI-2 is what is called a "forced choice" test. The primary (now revised) version of the test has 567 true/false questions. (There is a short form, as well as an adolescent version, and also ultra-dubious versions translated into different languages.) The test taker is told that he or she must answer "true" or "false" in response to the various statements. The problem with doing so is that most of the questions contain undefined words that could have multiple meanings, or are vague, or have compound parts, or do not permit exceptions, or contain false assumptions, or otherwise cannot -- if one actually were to think about them before responding -- be answered simplistically "true" or "false". For example, "Answer true or false: I have stopped beating my children." (The actual test questions are not given in this article, but they readily can be found in numerous relatively inexpensive books available to the public, e.g. mmpi-2 administration, mmpi-2 practitioner's guide, mmpi in court, etc.)
The instructions that litigants typically are given when they are sitting for the test do not clearly let them know that they have a third choice besides rushing through and guessing "true" or "false". That choice is "can't say". This is the choice of not responding to the question and leaving the answer blank. Not in a "non-cooperative" way, of course. But whenever the question cannot be truthfully and accurately answered either "true" or "false". Telling the client not to respond hastily, not to guess, and not to affirm or deny statements that cannot be affirmed or denied, is the same advice you would give the client preparing to sit for a deposition. Although advising your client about how to take the test is virtually certain to irritate the psychologist administering it, the overriding context here is in fact court. Legal advice prevails regarding what is or is not in the client's interests and what the goals are. The court case is important. A forensic psychological test is not otherwise of any benefit whatever to the client.
In the context of litigation, every individual who is court-ordered to respond to a question -- which includes taking a psychological test -- has the absolute right to respond only truthfully. That means that the litigant has the absolute right to select the "can't say" choice if there is any legitimate reason the litigant "can't say". If too many litigants did this, of course, that rapidly would make this and other psychological tests pretty useless in the courtroom context. And psychologists don't want that to happen because, like voo-doo, these tests help to validate the opinions of the forensic evaluator, and make those opinions appear to be important, grounded in mysterious and difficult scientific analysis, objective and authoritative. But what is good for the psychological industry is neither the lawyer's nor the client's concern.
Litigants are told by psychologist test administrators to try to answer all the questions, and if they are unsure, to answer based on whether the statement seems "more true" or "more false" for them. That's absurd, but it does encourage the litigant to go ahead and pick the "less wrong" choice. Typically, the litigant is nervous, and fearful of not seeming cooperative, and so, unless someone thinks beforehand to tell the litigant he or she does not have to do this, the litigant in fact will. And risk getting into hot water. It's the lawyer's job to make sure that the client understands the difference between "not cooperating" and being a patsy. Note that the length of this particular test also aids this kind of "cooperation" by virtually guaranteeing that the test taker must read quickly, and answer quickly without giving much thought to the questions. This is the opposite of how litigants are instructed to respond to questions in depositions and in the courtroom.
What lawyers must understand, and what they must convey to their clients, is this: whether or not, later down the road, the client is cross-examined on an "admission against interest" made in response to any specific test item, the client who is under compulsion to sit for a psychological examination is, in effect, in court. Everything said can and may be held against him. There is a cumulative effect in the responses to each item. Clients also should be told that this test is not in fact timed, that they should refuse to be rushed, that they are permitted to take all the time they need, and that if they grow tired, they are permitted to ask for a recess and to finish the test (and/or the rest of the tests they are expected to take) at another time.
At first look, the questions themselves don't seem all that difficult to answer. But the sheer number of questions discourages the litigant from thinking too much about any one question. The test is designed to get the subject to answer rapidly based on feelings, and on first impressions, rather than thinking. The test contains "traps" such as the consistency items, in which essentially the same question is asked again at a later point in the test (if it's answered "inconsistently", that's a ka-ching on the test's consistency and "lie"-catching scales.) Therefore the psychologist does not want the subject to pause and take much time before answering each question. Or to go back and forth to check previous responses. Thinking before responding to each item also would make the test unwieldy and unduly expensive from an administration standpoint, of course, but mostly, it is thinking that would throw a monkey wrench into how the test works.
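For readers who want to see the mechanics concretely, here is a minimal sketch, in Python, of how a consistency check of the kind just described could work: pairs of near-duplicate items are compared, and each mismatched pair adds a point to an inconsistency tally. The item numbers and pairings below are invented placeholders, not the actual MMPI-2 scoring key; the point is only that a "can't say" on either member of a pair cannot be scored against the test taker as an inconsistency.

# A minimal sketch (NOT the actual MMPI-2 key) of a consistency check:
# pairs of near-duplicate items are compared, and each mismatched pair
# adds a point to an inconsistency tally.  Item numbers and pairings are
# hypothetical placeholders.

from typing import Dict, List, Optional, Tuple

# responses: item number -> True, False, or None for "can't say" (omitted)
Responses = Dict[int, Optional[bool]]

# Hypothetical pairs of items assumed to ask essentially the same thing
# and expected to be answered the same way.
HYPOTHETICAL_PAIRS: List[Tuple[int, int]] = [(20, 310), (45, 498), (63, 520)]

def inconsistency_tally(responses: Responses,
                        pairs: List[Tuple[int, int]] = HYPOTHETICAL_PAIRS) -> int:
    """Count pairs answered in opposite directions; omitted items are skipped."""
    tally = 0
    for a, b in pairs:
        ra, rb = responses.get(a), responses.get(b)
        if ra is None or rb is None:
            continue  # a "can't say" on either item cannot be scored as inconsistent
        if ra != rb:
            tally += 1
    return tally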
The psychology industry, and particularly the test publisher, similarly do not want subjects to see the test questions in advance because this would encourage thinking instead of impressionistic responding. It is not because anyone is likely to easily remember all the questions or be able to "cheat" the test by selecting some kind of memorized well-adjusted pattern of true and false answers. It is because the test taker would have contemplation time. The test taker would realize how ridiculous most of the statements are, and not respond using gut-level "feelings" and guessing, but with the brain engaged. Truthfulness is not desired, nor is accuracy. Consider: this is the opposite of how lawyers would counsel their clients to respond to questions about themselves and their circumstances in any other context pertaining to their case! And why should it be different here? Because there's something valid and important that the psychologist is going to ascertain about a reasonably normal, healthy (albeit stressed) family law litigant by having him or her take a psychological test? Actually, no.
Even when the litigant is not court-ordered to take the test, however, it is the individual's absolute constitutional right to give only honest, thoughtful, truthful responses to any question asked. There is no penalty for "think time". No individual can be forced to speak falsely. But if, in fact, the litigant thinks, and thinks appropriately, about the various test questions, few of them will be able to be answered. Moreover, if there are too many "can't say" answers (sometimes as few as 30 will do it, although 70 is a sure bet), that will invalidate the test. It simply won't be able to be used at all. No inference of any kind can be made from an invalid test. Given that psychology really does not have the ability to do what psychologists pretend it can do, and given that so many custody evaluators simply spout horseshit, backed up by the implied objective validation of psych test magic, and aided by the handy phrases and suggestions given in the computer printouts of the results of these things, it may very well be preferable for the test to be completely invalidated than for the litigant to risk being misjudged, wrongly assessed, or incorrectly labeled. The psychology industry actively fights to keep this information from getting out. The test results are not in fact based on truthful answers but on sloppy habits of thinking, and are dependent upon the subject not considering that the questions themselves frequently beg for an answer of "can't say". In fact, numerous articles about these tests, so-called "research" write-ups, repeat completely unverified assumptions, such as the following, in which the psychology industry fools itself:
- "...an estimated administration time of between 1 and 2 hours for the majority of cases. In patients with severe psychopathology this administration time may extend to between 3 and 4 hours...Greene (1997) has estimated the expectable range of omissions at between 1 and 15 for normal subjects and between 0- 20 for psychopathological patients. In general, the administration protocol is considered to be invalid if the respondent leaves 30 or more items unanswered in the first 370; if these omissions occur after item 370, clinical interpretation can go ahead for the basic clinical scales and validity scales, but not for the rest of the scales. Excessive omission of items is usually considered to be related to patterns of defensiveness, indecision, carelessness, fatigue or inability to read and understand the items (Butcher & Williams, 1992; Graham, 1993)." ASSESSMENT OF RESPONSE DISTORTION IN MMPI-2, Héctor González Ordi and Iciar Iruarrizaga Díez, Papeles del Psicólogo, 2005. Vol. 26, pp. 129-137 [emphasis added]
Litigants are entitled to know that in litigation, the same rules apply to answering any question to an evaluator as apply to testimony in a deposition or in court. They are entitled to know that no one has the right to force them to respond with a guess, to endorse a false belief, to answer what they cannot honestly say they know (particularly if a question is phrased in the absolute), or to respond true or false to a compound question, or one that has implicit assumptions or undefined terms, or to respond quickly and inaccurately to any question that actually requires some amount of contemplation, reflection or recall of history.
Prior to taking the test, too, every litigant should be given a little grammar refresher, e.g. the difference in meaning between "I worry" versus "I have been worrying lately" versus "On occasions in the (recent/far) past I have worried". Every litigant should be reminded that all of us feel and behave differently in different kinds of settings and among different groups of people and at different times. Every litigant should be advised about the concept of the relative ("it depends") versus the absolute ("always" and "never"), as well as the dangers of answering without thinking, based on impressions or "feelings" (feelings aren't "thinking", and a "feeling" is not accurately a thought or a "belief".) It is easier and cheaper to prevent problems than to fix them after they have happened. In the forensic context, not having a "valid" MMPI-2 will avoid a good amount of discovery time, deposition time, cross-examination time, and countering-evidence time, including forensic consultant time, and all the attendant expense. Real evidence will have to carry the day. Is this radical? Not in the least. It's no more and no less than every lawyer attempts to achieve for every other circumstance in which his or her client talks about the client's case...
Thus the answers given by a test taker stereotype the subject as being similar to various arbitrary groupings of people who selected similar patterns of responses in various groups of questions. One issue is that this is indeed a stereotype. Even assuming that the people in the stereotype group who share a common characteristic ALL responded in a certain way ("all" never happens), that still does not mean that some other, as-yet unidentified group of persons would not also respond in this way. Thus, the MMPI-2 has, for example, an identified group of questions that together constitute the K scale. Subtle liars tend to answer "false" to many of the questions that supposedly more honest persons would endorse as "true", and people with higher educations from higher socio-economic backgrounds (especially those who are pilots with Air Force backgrounds and possibly government security clearances applying for jobs) tend to answer the same way. For all we know, people from certain towns in upstate New York with graduate degrees, an interest in carpentry, musical ability and curly red hair or going through a divorce might also tend to answer the same way. But no one knows. That latter group of people hasn't been researched.
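To make the "grouping" concrete: a scale is, mechanically, nothing more than a list of items, each keyed in one direction, and the raw score is a count of answers that match the key. The sketch below uses invented item numbers and keyed directions (it is not the real K-scale key). Note that an omitted, "can't say" item simply drops out of every scale it sits on, which is exactly why heavy omission deflates, and eventually invalidates, the scores.

# A minimal sketch of scale scoring as described above: a "scale" is a
# list of items, each keyed in one direction; the raw score is a count of
# answers matching the key.  The item numbers and keyed directions are
# invented placeholders, NOT the actual K-scale key.

from typing import Dict, Optional

HYPOTHETICAL_K_STYLE_KEY: Dict[int, bool] = {
    # item number -> keyed ("scored") response
    29: False, 37: False, 58: True, 76: False, 110: False,
}

def raw_scale_score(responses: Dict[int, Optional[bool]],
                    key: Dict[int, bool] = HYPOTHETICAL_K_STYLE_KEY) -> int:
    """Count responses matching the scale's keyed direction; omissions score nothing."""
    return sum(1 for item, keyed in key.items()
               if responses.get(item) is not None and responses[item] == keyed)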
How people are grouped into categories is arbitrary. Sex? Religion? Geography? Family background characteristics? Assumed culture? Some psychological trait? And so forth. The possibilities and categories and subcategories and breakdowns are endless. Also fairly arbitrary, and subject to research bias, is the assumption made as to the identified trait shared in common among members of the group that supposedly is so important that it's moving the results, so important that it warrants joining possibly very different people together in a group as if they were very similar. That identified trait may or may not be what has created the similar responses. Mistakes are made, and often. This is one reason repeat studies sometimes do not replicate the findings of prior studies. This kind of assumptive categorization (stereotyping) usually occurs before any research does, and before any attempted replicating research. So not only is the categorization arbitrary, but when it includes personality traits or mental disorders, it's not even a sure thing that all of those in the group that was studied in fact had the characteristic in common.
The point is that the test came first, the categorization next, and the research afterward, in a kind of hunt-and-peck style of research ("let's see if the people in this newly-created category respond to specific test items similarly..."). "Research" (basically hunting for patterns of responses, throwing out ideas for groupings that don't yield results and following up with more research on groupings that do) has gone on for decades. The groupings of people with various characteristics are arbitrary. Some are not even real, such as categories of people who are told to fake feeling depressed, or to lie about this or that to see if they can fool the researchers. Often the categories that presume to group people by a defining trait miss something more important that the members of the group also might have in common. For example, if people with borderline personality disorder (BPDs) from the midwest tend to respond to certain questions a certain way, would BPDs from Alabama respond the same way? Who knows. Would Hispanic BPDs respond the same way as Scientologist BPDs? Who knows. (By the way, what's an "Hispanic"? Does everyone who is labeled "Hispanic" share the same cultural, physical, biological, familial, socio-economic and educational characteristics? Not even close. It's not a category of people who share real traits, but a political category.) Would green-eyed BPDs tend to respond the same way as brown-eyed BPDs? Who knows. And, by the way, how do we know that they really were BPDs? Faith. Faith that the researchers isolated the defining trait, faith that, if the trait were something fuzzy such as a mental disorder rather than curly red hair, they diagnosed it correctly, faith in the DSM that there even is such a disorder... Faith. Religion. Voo-doo...
What this all means is that, like any profiling or stereotype, what groups of people as a whole tend to do means nothing as far as the truth, the facts, or the reality, when applied to any given individual -- especially in a court case. It's an hypothesis about "maybe". (This is a huge flaw in the field of applied psychology generally, and one reason why psychology is not science.) The MMPI-2 scales are generalizations or likelihoods about people (with more or less accuracy) of the sort usually not otherwise admissible in a court of law as "evidence" of anything about the specific person in question. Making things worse, the test is now roughly seven decades old, and the 567 statements are written with much odd, archaic, provincial, and outdated -- and occasionally offensive -- wording and idiom usage. The language is solidly midwest, middle-class, "white folks" circa World War II. (Does anyone have "spells" any more? Do "girls" take the MMPI-2?)
A litigant's unusual or unique traits or circumstances could very well make that litigant appear to be similar to people grouped by some negative attribute. A person with unusually virtuous character might answer similarly to groups of people who are lying or "faking good" or being overly "defensive". A person with real medical problems could match groups of people who are hypochondriacs, lying or malingering or "faking bad" (not even getting into the HIPAA and medical privacy violations raised by the medically-related test questions when medical condition is not at issue in the court case.) A person who is unusually smart or creative could end up appearing to be just like a deviant. A police officer may come off as aggressive and paranoid. If a subject's first language is not English, and he or she struggles with the meanings of deliberately equivocal words, the results are further distorted. A litigant under stress in a pending court case who inadvisedly leans his or her forced-choice guesses (because the individual questions, if not all of the "scales", are fairly transparent) toward those that he thinks will not harm him in court if isolated items were plucked out by opposing counsel on cross-examination (always a risk) will appear to be a liar -- a "catch-22". The better response to any ambiguous item, which also will avoid the admission against interest problem, is "can't say".
The clinical psychologist is supposed to use test results from a person seeking therapy to generate hypotheses regarding what might be problems requiring therapy. The forensic evaluator is supposed to do likewise, and verify the test hypotheses against additional tests and a gathering of non-test facts. It doesn't happen that way. [note] The computer printouts of hypotheses for these tests [see sample report] are like astrology readings, and can be applied any which way anyone wants to apply them, positively or negatively. The evaluator can dismiss an "hypothesis" generated by the test based on the evaluator's belief that it doesn't apply ("Hispanic men tend to have this scale elevated; it doesn't mean anything"), or, with the application of cognitive bias, discover that in fact the seemingly nice, normal, well-behaved litigant is a secret nutjob with emotional issues ("She answered similarly to people who expend a lot of energy to keep from showing their anger..." [5 pages later] "Her obvious anger..."). Like an astrology reading, it can always be applied.
People are told they should respond either "true" or "false" because "there are no right or wrong answers." But that's not actually the truth. Although no individual answer by itself may be "wrong", various answers are grouped together to make sub-scores. Together these groups of answers contribute to what is judged by the test assessor to be the equivalent of a good or bad -- essentially a right or wrong -- score for the collective group. And this is without even considering the problem of an evaluator who is deliberately or subconsciously biasing his or her evaluation.
The sub-groups, or "scales", purport to tell the psychologist such things as whether the subject appears similar to or maybe is a hypochondriac, depressed, hysterical or an attention freak, psychopathic, overly masculine or feminine, paranoid, self-critical, anxious, perfectionistic, confused or schizophrenic, manic or grandiose, or introverted or extroverted. There are many other scales, with scores based on collections of answers to selected items that have been compared to the average answers of various demographic groups. Over the years, more and more scales have been concocted. For example, some scales purport to illuminate such things as whether the subject has addictive tendencies, or is repressed or "overly controlled", hostile, a liar, or giving correct answers based on whether or not he or she is consistent in answering. There are multiple scales, and subscales, disputed and controversial scales, and newly invented scales being beta-tested. (Anyone wanting to know more about these details can refer to any of the many available books and articles on the subject. This article, however, is addressing something the rest don't.)
Additionally confounding the validity (or "reliability") problem with this and other psychological tests is that the same person could answer the questions very differently at another time, in another place, in another mood, or depending on the purpose for which the person is taking the test. That's most likely because people are guessing at what the statements mean and arbitrarily choosing "true" or "false" when, if honest, they should be skipping the question as a "can't say". That's also because today, having taken a walk through a spring garden on the way to the psych's office, Joe is feeling chipper and thinks he might like the work of a florist, whereas three months ago, when it was snowing out, this possibility didn't occur to him and he sort of thought he liked mechanic's magazines. (He no longer likes mechanic's magazines, having been sued for nonpayment of an inadvertently renewed subscription by one of them.) Susie, of course, couldn't say whether she "feels" that she "might like" the work of a nurse because "I think that I would like to be a nurse" is nonsensical, given that she actually has an R.N. degree, has worked for five years as a nurse, and is heading into med school. She also doesn't have a clue what a "mechanic's magazine" is.
The many questions on the lengthy test and the relatively limited time apparently allotted to respond encourage the spontaneity of giving "true" or "false" first impression responses that are essentially thoughtless. People who want to be cooperative will respond in ways that they would not respond if they gave more thought. People who have gone through school taking bubble tests with objectively right and wrong answers also get used to choosing what they think might be the "best guess" depending on the way they are leaning. They will, in the rush, seize upon words or parts of the sentence, which, save for the limited response choice, is really not much different from guessing ink blots. It would be kind of like showing someone a Rorschach inkblot and saying "This card depicts a butterfly. True or false?" None of this is a good thing if this test is being given involuntarily to assess a litigant's personality and mental functioning, and much is at stake depending on the evaluation of the answers, whether that be a court case or employment. It is possible that the litigant could guess his or her way into a "profile" that is similar to the "profiles" of answers given by people with personalities that are nothing at all similar to the way the litigant usually is.
The psychologist is supposed to use the test taker's scores, along with other information known about the subject, to generate hypotheses about the subject and the subject's personality. That might work for someone seeking therapy (or it might not). But for someone taking the test because he or she has been court-ordered to have a forensic psychological evaluation, the test administrator is not someone who (let's be real, okay?) knows anything much about the person (the conflicting allegations and hearsay statements received from both parties are common, and they are not established facts or evidence, especially when those allegations have yet to be tried in court. Neither are court pleadings. Neither do a few hours of talking with and observing the litigants in an artificial setting give more than a smidgen of information about how those people act "in real life".)
All the forensic evaluator has is a menu of test result options from which to cherry-pick hypotheses, which easily can be manipulated any which way the evaluator wants them to be -- and most easily in the very common event in which the psychologist refuses timely and adequately to disclose the testing materials and data in discovery. [Ever see a detailed computerized astrological summary? We all know that Taureans are stubborn, Leos are leaders, and everyone is going to have a significant family event occur soon, and some kind of negative work issue... Know what "cold reading" is? Forensic psychology is an interesting variant, but with less potential for fact-checking. Yes, yes... the practitioners are serious and have studied for years in accredited schools. Here's another...]
Honest, thoughtful answers by most clients will mean that the client is far more likely than not to be unable to answer 70, 100, or even 300 or more out of the 567 questions. [The author of this article is unable to answer in excess of 400 of the MMPI-2 questions. Perhaps she can't read, or is a psychotic, or is just unfortunately tainted with that malady Pearson desperately wants to prevent, near memorization of the test questions, as well as which ones are on a number of scales. Or maybe Libras just have these issues...]
Lawyers' standard advice to test takers is to "answer honestly" because lawyers have been assured by psychologists that collecting groups of the client's answers from items that most people "endorse", in other words, say "true" to, supposedly will give the psychologist insight into whether the client is a liar. (Never mind that credibility is supposed to be within the province of the trier of fact, and that things like polygraph tests are banned from court evidence.) A client in a court case does not want to appear to be prone to lying. And the client's lawyer, snookered by decades of psychology trade promotion, so-called scholarly articles from every side, pro and con, about strengths and weaknesses of psychological tests, and so forth, may not have given much thought to how the client should approach the situation in order to avoid looking that way.
It's not that hard, though. The client is not clearly told that "can't say" might be the most honest answer. So here goes: the correct answer very well may be "can't say". The lay public's confidence in these tests, as well as that of many mental health professionals, is based on myths deliberately promulgated by the psychology trade promotion groups and test developers and publishers. The industry that delivers quack therapies of all kinds also has created beliefs among the public that, by using these tests, psychologists somehow are able to discern things about the personality and mental workings of an individual, in a manner similar to the way physicians might do (objective) diagnostic testing for diseases. Well, they can't. This fraud is aided by the in terrorem effect of the so-called lie detection and malingering scales, as well as by the marketing propaganda and reams of literature instructing, pontificating and debating over the obscure and difficult fine points of protocol and research. For the most part, this conveys false messages. The tests are nonsense on many levels. And the MMPI-2 is supposedly the least flawed and nonsensical of them all.
If you're a lawyer, stop asking psychologists how to prep clients for these tests and start giving your clients legal advice. If you're a judge, stop wasting the time and resources of the court system, the litigants and the taxpayer on psychological forensics. Stop it. Stop it now.
-- liz
This webpage is: http://www.thelizlibrary.org/therapeutic-jurisprudence/custody-evaluator-testing/forensic-mmpi2.html
Discovery Issues: http://www.thelizlibrary.org/therapeutic-jurisprudence/custody-evaluator-testing/index.html