An examination of rater selection, assessment methodology, and preference stability on preference and reinforcer assessment outcomes with children in general education
This study expanded on previous preference and reinforcer assessment research by comparing information provided by multiple informants (parent, teacher, typically developing student) and multiple preference assessment (PA) methods (verbal forced-choice [VFC], pictorial forced-choice [PFC], and multiple stimulus without replacement [MSWO]) to determine which informant or method best predicted the reinforcing value (i.e., potency) of stimuli. Reinforcer assessment (RA) rankings were used to determine potency. PA rankings were compared to assess stability over time, and administration times were compared to determine which method was quickest. Rater and method rankings were compared with the corresponding RA rank orders. Agreement measures consisted of exact and adjacent agreement: exact agreement was coded when the rater/method rank and the RA rank matched exactly, and adjacent agreement was coded when the rater/method rank matched exactly or fell within one rank-order position of the RA rank. The results were that (a) teachers had the highest exact and adjacent agreement, (b) the VFC method had the highest exact and adjacent agreement, (c) the MSWO method was administered in much less time than any other method, and (d) the PFC and VFC methods had the highest combined mean adjacent agreement scores for rank-order stability. Even though the teachers and the VFC method had the highest agreement, the mean agreement percentages were low. The results were consistent with previous research with respect to administration times and caregivers' inability to reliably predict reinforcers. Because the MSWO method could be administered in half the time and yielded the second highest adjacent agreement, it warrants further investigation as a time-saving alternative. The PFC and VFC rank-order stability measures were higher when adjacent agreement was used to evaluate stability. These scores suggest that typically developing students' preferences change over time and should be reassessed periodically.
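The exact and adjacent agreement measures described above can be sketched computationally. The following Python snippet is an illustration only; the stimulus names and rank values are hypothetical and do not reproduce the study's data.

```python
def agreement_scores(method_ranks, ra_ranks):
    """Return (exact %, adjacent %) agreement between a rater/method
    ranking and the reinforcer assessment (RA) ranking.

    Both arguments map each stimulus to its rank-order position
    (1 = most preferred). Exact agreement: the two ranks match exactly.
    Adjacent agreement: the ranks match or differ by one position.
    """
    n = len(ra_ranks)
    exact = sum(method_ranks[s] == ra_ranks[s] for s in ra_ranks)
    adjacent = sum(abs(method_ranks[s] - ra_ranks[s]) <= 1 for s in ra_ranks)
    return 100 * exact / n, 100 * adjacent / n

# Hypothetical example with five stimuli: the VFC ranking swaps
# positions 2 and 3 relative to the RA ranking.
ra = {"blocks": 1, "puzzle": 2, "music": 3, "book": 4, "ball": 5}
vfc = {"blocks": 1, "puzzle": 3, "music": 2, "book": 4, "ball": 5}
exact_pct, adjacent_pct = agreement_scores(vfc, ra)
# Three of five ranks match exactly (60% exact); all five fall within
# one position (100% adjacent).
```

Because adjacent agreement subsumes exact agreement, the adjacent percentage can never be lower than the exact percentage, which is why the study reports both.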
Issues were raised concerning the manner in which the reinforcer rankings were obtained (duration of time engaged vs. frequency of sessions selected): duration measures could underestimate reinforcer potency, whereas frequency measures could overestimate it. Future research should compare outcomes when reinforcer rankings are obtained using both duration and frequency measures.