Development and Validation of the Client Ratings of Counselor Competence: Applying the Rasch Measurement Model

Date Issued: 2016
Abstract/Description:
An important part of becoming a counselor is developing strong counselor competence, particularly for counselors-in-training. Thus, a central goal of counselor education is to develop students' competence so that they are able to practice as professional counselors. Assessing the competence of counselors-in-training remains a primary focus in counselor education and supervision (Bernard & Goodyear, 2014; McAuliffe & Eriksen, 2011; Swank & Lambie, 2012). There have been various attempts to measure the construct of counselor competence (e.g., Hughes, 2014; Swank, Lambie, & Witta, 2012; Urbani, Smith et al., 2002), and these attempts have sought to involve diverse voices around counselor competence in more comprehensive ways. Although numerous measures assess supervisor ratings of counselor competence, the counselor education literature still lacks clients' voices in the assessment of counselor competence and performance. In particular, there is a shortage of direct measures by which clients can assess counselor competence (Tate et al., 2014). Therefore, a new client-rated scale of counselor competence is needed to provide valuable information for enhancing counselors' professional competence as well as the quality of counselor preparation programs.

The purpose of this study was to use the Rasch model to assess the psychometric properties of a newly developed client-rated scale of counselor competence, the Client Ratings of Counselor Competence (CRCC). To this end, the CRCC was developed following the scale development procedures proposed for the Rasch measurement model. The development process consisted of (a) defining hierarchical attributes of what to measure, (b) generating a pool of items corresponding to the defined attributes, (c) determining the type of measurement scale, (d) conducting an expert review, (e) field testing the scale with a research sample, (f) evaluating the items using Rasch analysis, and (g) determining the final scale. Specifically, an initial pool of 85 items was generated and reduced to 36 items through expert review and a pilot test. The participants were 84 adult clients who received counseling services from counselor trainees at a community counseling center.

This study investigated multiple aspects of validity of the 36-item CRCC using the Rasch model, following the guidelines of Wolfe and Smith (2007). Specifically, content, substantive, structural, generalizability, and interpretability evidence was examined using the results of the Rasch analysis. The results showed that negatively worded items commonly misfit the model. The rating scale analysis indicated that a 3-point rating scale format could be more appropriate than the current 4-point scale. In addition, the item difficulty hierarchy perceived by clients was largely consistent with the hierarchical structure assumed in the test specification, empirically supporting the microskills hierarchy (Ivey et al., 2013). The dimensionality analysis suggested the presence of a possible additional dimension in the current CRCC. The reliability of the CRCC was acceptable, although the differential item functioning (DIF) analysis identified some poorly functioning items that performed differently across gender. Additionally, the practicum-level counselors-in-training in this study showed a level of competence above what the current CRCC items could measure.
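For context, the rating scale analysis described above is typically carried out with the Andrich rating scale model, the polytomous Rasch model commonly used for Likert-type instruments. The form below is the standard textbook version and is given here only as a reference sketch; the dissertation's exact parameterization is not stated in this record.

\[
\ln\!\left(\frac{P_{nik}}{P_{ni(k-1)}}\right) = \theta_n - \delta_i - \tau_k
\]

where \(\theta_n\) is the measure for respondent \(n\) (here, the level of counselor competence the client perceives), \(\delta_i\) is the difficulty of item \(i\), \(\tau_k\) is the threshold between adjacent rating categories \(k-1\) and \(k\), and \(P_{nik}\) is the probability that respondent \(n\) gives item \(i\) a rating in category \(k\).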
Lastly, implications of the study, limitations, and directions for future research are discussed. Implications of the findings include the following: (a) using the Rasch model to assess the psychometric properties of the CRCC can make the developing instrument more valid and reliable, overcoming major weaknesses of classical test theory; (b) item difficulty estimates from the Rasch analysis can be a useful tool for empirically demonstrating whether a theoretical concept or model, especially one with a hierarchical or developmental structure, holds in real data; (c) the item-person map produced by the Rasch model can provide useful information for evaluating the instrument as well as for interpreting test scores; and (d) after further revision and additional validation studies, the CRCC could be used as a supplementary assessment when counselor educators want to determine whether trainees have developed competence beyond the expected level, particularly from the clients' perspective.
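To illustrate implication (c), the sketch below prints a minimal text item-person (Wright) map that lines up respondents and items on a shared logit scale. All measures and item labels are hypothetical and are not estimates from the CRCC data; the labels merely echo the microskills hierarchy mentioned in the abstract.

# Minimal text item-person (Wright) map on a shared logit scale.
# All values below are hypothetical, for illustration only; they are
# NOT estimates from the CRCC data.

person_measures = [1.8, 1.5, 1.2, 0.9, 0.7, 0.4, 0.1, -0.3]  # perceived competence (logits)
item_difficulties = {                                         # item endorsability (logits)
    "attending behavior": -1.0,
    "reflecting feelings": -0.2,
    "confronting": 0.6,
    "influencing skills": 1.1,
}

def wright_map(persons, items, lo=-2.0, hi=2.0, step=0.5):
    """Print persons (as '#') and items side by side, high logits at the top."""
    n_bins = int(round((hi - lo) / step))
    for b in range(n_bins, 0, -1):
        lower, upper = lo + (b - 1) * step, lo + b * step
        p_count = sum(lower <= m < upper for m in persons)
        i_names = [name for name, d in items.items() if lower <= d < upper]
        print(f"{upper:5.1f} | {'#' * p_count:<8} | {', '.join(i_names)}")

wright_map(person_measures, item_difficulties)

A map of this kind makes it easy to see whether items cover the full range of person measures; in this study, many practicum-level trainees were rated above the most difficult CRCC items, which is exactly the gap such a map would reveal.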
Title: Development and Validation of the Client Ratings of Counselor Competence: Applying the Rasch Measurement Model.
Name(s): Jo, Hang, Author
Jones, Dayle, Committee Chair
Robinson, Edward, Committee Member
Hundley, Gulnora, Committee Member
Bai, Haiyan, Committee Member
University of Central Florida, Degree Grantor
Type of Resource: text
Date Issued: 2016
Publisher: University of Central Florida
Language(s): English
Identifier: CFE0006466 (IID), ucf:51419 (fedora)
Note(s): 2016-12-01
Ph.D.
Education and Human Performance, Dean's Office EDUC
Doctoral
This record was generated from author submitted information.
Subject(s): Counselor Competence -- Rasch Model -- Client Ratings -- Client Feedback -- Scale Development -- Counseling
Persistent Link to This Record: http://purl.flvc.org/ucf/fd/CFE0006466
Restrictions on Access: public 2016-12-15
Host Institution: UCF
