One of my long-term interests is understanding the nature of response biases in non-cognitive assessments and suggesting effective ways of combating them. Specific topics include:
- The cognitive processes behind motivated misresponse (also known as impression management or faking) in high-stakes personality testing; specifically, the situational and personal characteristics linked to applicants ‘faking good’ on employment tests, patients ‘faking bad’ on diagnostic tests to gain access to treatments, etc. Within this topic, I have been involved in research on the conditions influencing the susceptibility of forced-choice questionnaires to faking [3, 7] and on their use in assessments of socially undesirable characteristics [2].
- Biases in assessments of other people or services, for example halo effects in 360-degree assessments [1].
Current models of faking assume that faking behaviour is consistent (an individual fakes all items alike) and varies only between persons. To overcome these limitations, I proposed a “Faking as Grade-of-Membership” model [4], which describes so-called intermittent faking (i.e. faking on only some items or attributes but not others). The approach considers each test taker’s responses as a potential mixture of ‘real’ (retrieved) answers to questions and ‘ideal’ answers intended to create a desired impression. Depending on the particular mix of response types in the test taker’s profile, grades of membership in the ‘real’ and ‘ideal’ profiles are estimated. This approach has great potential for identifying and controlling for intermittent faking in collected responses to self-report questionnaires.
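As a rough illustration of the mixture idea (not the estimation method of the published model), the sketch below simulates an intermittent faker whose item-level responses are drawn from either a ‘real’ or an ‘ideal’ profile according to a grade of membership, and recovers that grade with a naive moment estimate. All profiles and parameter values are invented for illustration.

```python
import random

# Illustrative sketch: each observed response profile is a mixture of 'real'
# (honest) answers and 'ideal' (impression-managed) answers. The grade of
# membership g_ideal governs, item by item, which response type is produced.
# Profiles and values below are hypothetical, not from the cited paper.

random.seed(1)

N_ITEMS = 10
real_profile = [3] * N_ITEMS    # honest answers on a 1-5 rating scale
ideal_profile = [5] * N_ITEMS   # socially desirable answers

def simulate_responses(g_ideal):
    """Draw one profile: each item is an 'ideal' answer with probability g_ideal."""
    return [ideal_profile[i] if random.random() < g_ideal else real_profile[i]
            for i in range(N_ITEMS)]

def estimate_g(responses):
    """Naive moment estimate of the grade of membership in the 'ideal' profile."""
    ideal_matches = sum(r == i for r, i in zip(responses, ideal_profile))
    return ideal_matches / N_ITEMS

profile = simulate_responses(g_ideal=0.4)   # an intermittent faker
print(profile, estimate_g(profile))
```

A fully honest responder (g = 0) and a fully faking responder (g = 1) are the two extreme membership profiles; intermittent fakers fall in between.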
Furthermore, my PhD student Miriam Fuechtenhans and I developed the Activate‐Rank‐Edit‐Submit (A‐R‐E‐S) model [5] – a theoretical response process model of faking on forced-choice questionnaires, comprising the stages “Activate” (retrieve relevant information), “Rank” (order the choice alternatives), “Edit” (revise the ranking), and “Submit” (commit the final ranking).
References
- [1] Brown, A., Inceoglu, I., & Lin, Y. (2017). Preventing rater biases in 360-degree feedback by forcing choice. Organizational Research Methods, 20(1), 121-148. doi: 10.1177/1094428116668036
- [2] Guenole, N., Brown, A., & Cooper, A. J. (2018). Forced-choice assessment of work-related maladaptive personality traits: Preliminary evidence from an application of Thurstonian item response modeling. Assessment, 25(4), 513-526. doi: 10.1177/1073191116641181
- [3] Wetzel, E., Frick, S., & Brown, A. (2021). Does multidimensional forced-choice prevent faking? Comparing the susceptibility of the multidimensional forced-choice format and the rating scale format to faking. Psychological Assessment, 33(2), 156-170. doi: 10.1037/pas0000971
- [4] Brown, A., & Böckenholt, U. (2022). Intermittent faking of personality profiles in high-stakes assessments: A grade of membership analysis. Psychological Methods, 27(5), 895-916. doi: 10.1037/met0000295
- [5] Fuechtenhans, M., & Brown, A. (2023). How do applicants fake? A response process model of faking on multidimensional forced-choice personality assessments. International Journal of Selection and Assessment, 31, 105-119. doi: 10.1111/ijsa.12409
- [6] Guenole, N., Brown, A., & Lim, V. (2023). Can faking be measured with dedicated validity scales? Within-subject trifactor mixture modeling applied to BIDR responses. Assessment, 30(5), 1523-1542. doi: 10.1177/10731911221098434
- [7] Li, M., Zhang, B., Li, L., Sun, T., & Brown, A. (in press). Mix-keying or desirability-matching in the construction of forced-choice measures? An empirical investigation and practical recommendations. Organizational Research Methods.