Development and Validation of the Sources of Self-Efficacy Inventory (SOSI): Exploring a New Measure of Teacher Efficacy

Kevin M. Kieffer
James A. Haley VA Medical Center, Tampa
and
Texas A&M University
 

Robin K. Henson
University of Southern Mississippi

______________
Paper presented at the annual meeting of the National Council on Measurement in Education, New Orleans, April 25, 2000.
 
 

Abstract

The present study described the development and construct validation of a new measure of teacher efficacy, the Sources of Self-Efficacy Inventory (SOSI), which was created to address shortcomings in previous instruments purporting to measure this construct. Development of the SOSI was based on a model of teacher efficacy posited by Tschannen-Moran, Woolfolk Hoy, and Hoy (1998) that incorporated the four sources of efficacy-building information proposed by Bandura (1997). The SOSI was examined in a sample of 252 precertification education teachers of varying experience levels at a large Southwestern university. Exploratory factor analysis of the 35 SOSI items yielded four interpretable factors that contained many of the intended items. However, a number of items were associated with unintended factors, and it was apparent that item and subscale revision is necessary. Results of a confirmatory factor analytic (CFA) study of another teacher efficacy instrument, the Teacher Efficacy Scale (TES), are also presented to further explore the teacher efficacy construct.

Development and Validation of the Sources of Self-Efficacy Inventory (SOSI): Exploring a New Measure of Teacher Efficacy

        Albert Bandura (1977, 1997) presented self-efficacy as a mechanism of behavioral change and self-regulation in his social cognitive theory. An efficacy belief refers to a perceived ability to carry out actions that will successfully lead toward a specific goal. Bandura proposed that efficacy beliefs are powerful predictors of behavior because they are ultimately self-referent in nature and directed toward specific tasks. Indeed, the predictive power of efficacy beliefs has been demonstrated empirically in the research literature (Bandura, 1997; Pajares, 1996; Tschannen-Moran et al., 1998).
        Researchers have applied concepts from Bandura's social cognitive theory to teachers; among the first to do so were Ashton and Webb (1982). These researchers argued that two items previously used by RAND researchers (Armor et al., 1976; Berman et al., 1977) to study teacher efficacy actually corresponded to the self-efficacy and outcome expectancy dimensions of Bandura's social cognitive theory. They labeled the two dimensions personal teaching efficacy (PTE) and general teaching efficacy (GTE), respectively. In an effort to further the study of teacher efficacy, Gibson and Dembo (1984) developed the Teacher Efficacy Scale (TES) to measure both of these constructs. The TES was the first substantive attempt to empirically develop a data collection instrument tapping this potentially powerful variable in teachers. The TES has subsequently become the predominant instrument in the study of teacher efficacy, leading Ross (1994, p. 382) to label it a "standard" measure in the field. Use of the TES has allowed researchers to classify teacher efficacy as one of the few teacher characteristics consistently related to positive teacher behavior and student outcomes (Anderson, Greene, & Loewen, 1988; Coladarci, 1992; Gibson & Dembo, 1984; Moore & Esselman, 1992; Podell & Soodak, 1993; Soodak & Podell, 1993).
        Recently, however, the TES has been scrutinized on the basis of the test authors' conceptualization of Bandura's (1997) self-efficacy and outcome expectancy dimensions. In particular, Tschannen-Moran et al. (1998) have argued that the GTE dimension of the TES is a measure of external locus of control rather than outcome expectancy. With this in mind, Tschannen-Moran et al. proposed a multidimensional model of teacher efficacy intended to coincide more closely with Bandura's social cognitive theory. The model takes into account Bandura's (1997) four sources of efficacy-building information: mastery experiences, vicarious experiences, social/verbal persuasion, and physiological/emotional arousal. Of these four, Bandura proposed that mastery experiences are the most powerful source of information for bolstering self-efficacy.
        The model proposed by Tschannen-Moran et al. (1998) promises new and potentially more precise study of teacher efficacy. However, empirical validation of the model from multiple perspectives is necessary to substantiate its accuracy. The present study is an attempt to explore a portion of this model in a sample of preservice teachers. Three questions guided the present study: (a) What is the structure of the Sources of Self-Efficacy Inventory (SOSI), an instrument developed to potentially assess Bandura's four sources of efficacy information?; (b) Is the structure of the TES valid in a sample of preservice teachers?; and (c) What are the relationships between the TES, an established teacher efficacy instrument, and the SOSI?
Method
Participants and Procedure
Participants in the present study were 252 undergraduate students at a large Southwestern university who were enrolled in a junior-level educational psychology course. During class time, students were given the opportunity to complete the two research instruments. The mean age of the participants was 20.94 years (SD = 2.35), and there were more females (218; 86.5%) than males (34; 13.5%). The majority of the respondents were nonminority students (215; 85.3%), although the sample included four (1.6%) African American, five (2.0%) Asian American, 22 (8.7%) Hispanic, and two (0.8%) Native American students (four students did not provide racial/ethnic origin information). A preponderance of the participants were juniors (114; 45.2%), with smaller percentages at the sophomore (51; 20.2%), senior (80; 31.7%), and graduate (7; 2.8%) levels.

Instrumentation
        Teacher Efficacy Scale (TES; Gibson & Dembo, 1984). The TES is a 16-item instrument that measures global (non-context-specific) teacher efficacy. Items are rated on a six-point Likert format ('1' = strongly disagree to '6' = strongly agree) and measure the two efficacy constructs described previously: PTE (nine items) and GTE (seven items). In the present sample, coefficient alphas for the two subscales were .4359 (GTE) and .7231 (PTE).
        Sources of Self-Efficacy Inventory (SOSI; Henson, 1999). The SOSI is a 35-item, Likert-type instrument ('1' = definitely not true for me to '7' = definitely true for me) that was constructed to measure sources of self-efficacy in teachers. Four scales were constructed based on the work of Bandura (1997): Mastery Experience (nine items), Emotional/Physiological Arousal (seven items), Vicarious Experience (nine items), and Social/Verbal Persuasion (10 items). The SOSI was developed after a thorough review of the literature, and items were written specifically to tap each of Bandura's (1997) four efficacy-building areas. Both positive and negative historical events can potentially provide information that impacts self-efficacy. For example, a vicarious experience in which a preservice teacher witnesses an experienced teacher succeed can bolster the preservice teacher's own belief in his or her ability to succeed at the task. Furthermore, depending on the preservice teacher's attributions, witnessing an experienced teacher fail may also bolster the preservice teacher's efficacy if he or she perceives himself or herself as having better skills than the observed teacher. The SOSI items were developed to capture these varied sources of efficacy information. In the present sample, coefficient alphas for the four subscales were .7081 (Mastery Experience), .6000 (Emotional/Physiological Arousal), .7797 (Vicarious Experience), and .4495 (Social/Verbal Persuasion). The items on the SOSI are presented in Appendix A.
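        As an illustrative aside, the subscale reliabilities reported above can be reproduced from raw item scores with coefficient alpha. The sketch below is a minimal Python/NumPy illustration of that computation; the random stand-in data, the variable names, and the choice of Python are assumptions for illustration only and are not part of the original analysis.

import numpy as np

def coefficient_alpha(item_scores):
    """Coefficient alpha for a respondents-by-items matrix of subscale scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                               # number of items in the subscale
    item_variances = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of the subscale total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Random stand-in for, e.g., the nine Mastery Experience items from 252 respondents
rng = np.random.default_rng(0)
stand_in_items = rng.integers(1, 8, size=(252, 9))         # 7-point Likert responses
print(round(coefficient_alpha(stand_in_items), 4))

        In practice, the function would be applied separately to the columns belonging to each subscale, yielding one alpha per subscale as reported above.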

Results

Construct Validation of the SOSI: Exploratory Factor Analysis
        We conducted an exploratory factor analysis (EFA) of the 35 items to determine instrument structure. A principal components extraction was applied to the 35-item correlation matrix. The eigenvalue-greater-than-1.0 rule (K1) and the scree test (Cattell, 1966) were used to determine the number of factors to retain. The K1 rule suggested retaining 10 factors, whereas examination of the scree plot indicated four factors. Based on the recommendations presented in Zwick and Velicer (1986), we retained the number of factors indicated by the scree test. Varimax rotation (Kaiser, 1958) of the four factors resulted in an interpretable solution.
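        To make these extraction and rotation steps concrete, the following is a minimal Python/NumPy sketch of a principal components extraction with a K1 count, the eigenvalues one would inspect in a scree plot, and a varimax rotation of four retained components. The random stand-in data, the hand-rolled varimax routine, and the choice of Python (rather than the software used for the original analysis) are assumptions for illustration only.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Kaiser's varimax rotation of a variables-by-factors loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))))
        rotation = u @ vt
        if s.sum() < criterion * (1 + tol):     # stop when the criterion no longer improves
            break
        criterion = s.sum()
    return loadings @ rotation

# Random stand-in for the 252 x 35 matrix of SOSI item responses
rng = np.random.default_rng(0)
sosi_items = rng.integers(1, 8, size=(252, 35)).astype(float)

corr = np.corrcoef(sosi_items, rowvar=False)                # 35 x 35 item correlation matrix
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]                       # largest eigenvalues first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

print("Factors retained under K1:", int((eigenvalues > 1.0).sum()))
print("Eigenvalues for the scree plot:", np.round(eigenvalues, 2))

# Unrotated principal component loadings for the four factors indicated by the scree test
loadings = eigenvectors[:, :4] * np.sqrt(eigenvalues[:4])
rotated_loadings = varimax(loadings)                        # orthogonal (varimax) solution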
        Based on the recommendations posited by Kieffer (1999), we compared oblique and orthogonal rotations; the comparison indicated that the orthogonal rotation was appropriate to interpret (factor correlations ranged from .019 to -.318, so no two factors shared more than roughly 10% common variance, i.e., (-.318)² ≈ .10). Results of this analysis indicated that the item structure posited by the present authors did not withstand empirical scrutiny, as only portions of the four subscales clustered together in the EFA. Consequently, we intend to conduct further analyses to examine subscale structure. Further, because our subscales were not supported by the EFA, we did not correlate them with the TES subscales in an effort to provide evidence of score validity. Results of the EFA of the SOSI are presented in Table 1.

Confirmatory Factor Analysis of the TES
    In examining the structure of the TES with our sample of 252 preservice teachers, we developed and tested two falsifiable models. Model 1A (v = 18) posited the instrument structure delineated by the test authors, in which two factors account for the 16 items on the scales; the two items generated by the RAND group were also included in the analysis. Model 2A (v = 18) posited that a single factor was responsible for the 18 test items. Analyses using AMOS version 3.6 provided stronger support for the Gibson and Dembo (1984) model, although both the one-factor and two-factor models failed to indicate acceptable model-to-data fit on the GFI and AGFI statistics (0.810, 0.759 and 0.861, 0.823, respectively). However, reasonable model fit was indicated by the root mean square error of approximation (RMSEA) for the two-factor model (0.078) (see Kieffer, 1999, for an explanation of these fit indices). Results of the CFA of the TES are presented in Table 2.
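    For readers less familiar with these indices, the sketch below shows how the RMSEA is commonly computed from a model chi-square, its degrees of freedom, and the sample size. The chi-square and degrees-of-freedom values shown are placeholders chosen for illustration only; they are not the values from the AMOS output.

import math

def rmsea(chi_square, df, n):
    """Root mean square error of approximation: sqrt(max(chi2 - df, 0) / (df * (N - 1)))."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))

# Placeholder values only (not the study's actual chi-square or degrees of freedom)
print(round(rmsea(chi_square=338.0, df=134, n=252), 3))     # prints 0.078

    By common convention, RMSEA values at or below roughly .08 are treated as indicating reasonable fit, which is consistent with the interpretation of the 0.078 value reported above for the two-factor model.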

Discussion

    As argued by Thompson (1994, p. 170), "replicability analyses are attempts to look at data from perspectives intimately associated with the sine qua non of science--finding noteworthy effects that replicate." Teacher efficacy has been one of the few variables consistently demonstrated to be important to positive teaching behavior and student outcomes. For example, Woolfolk and Hoy (1990) noted that, "Researchers have found few consistent relationships between characteristics of teachers and the behavior or learning of students. Teachers' sense of efficacy ... is an exception to this general rule" (p. 81). Given the current and potential educational value of this construct, concerted effort has been placed on how best to measure teacher efficacy.
        In the present study, we presented a new scale designed to assess sources of efficacy-building information. Such an instrument would help further the study of the teacher efficacy model proposed by Tschannen-Moran et al. (1998). We also presented further construct validation information for the TES. Results from this study indicated that reasonable model-to-data fit was obtained for the two-factor TES structure and that further analysis of the SOSI is needed to clarify subscale composition.

References

        Anderson, R., Greene, M., & Loewen, P. (1988). Relationships among teachers' and students' thinking skills, sense of efficacy, and student achievement. Alberta Journal of Educational Research, 34, 148-165.
        Armor, D., Conroy-Oseguera, P., Cox, M., King, N., McDonnell, L., Pascal, A., Pauly, E., & Zeilman, G. (1976). Analysis of the school preferred reading programs in selected Los Angeles minority schools (Rep. No. R-2007-LAUSD). Santa Monica, CA: RAND. (ERIC Document Reproduction Service No. ED 130 243)
        Ashton, P., & Webb, R. B. (1982, March). Teachers' sense of efficacy: Toward an ecological model. Paper presented at the annual meeting of the American Educational Research Association, New York.
        Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.
        Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
        Berman, P., McLaughlin, M., Bass, G., Pauly, E., & Zellman, G. (1977). Federal programs supporting educational change: Vol. VII. Factors affecting implementation and continuation (Rep. No. R-1589/7-HEW). Santa Monica, CA: RAND. (ERIC Document Reproduction Service No. ED 140 432)
        Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1, 245-276.
        Coladarci, T. (1992). Teachers' sense of efficacy and commitment to teaching. Journal of Experimental Education, 60, 323-337.
        Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569-582.
        Henson, R. K. (1999). The Sources of Self-Efficacy Inventory. Unpublished instrument. Hattiesburg, MS: University of Southern Mississippi.
        Kaiser, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23, 187-200.
        Kieffer, K. M. (1999). An introductory primer on the appropriate use of exploratory and confirmatory factor analysis. Research in the Schools, 6(2), 75-92.
        Moore, W., & Esselman, M. (1992, April). Teacher efficacy, power, school climate and achievement: A desegregating district's experience. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
        Pajares, F. (1996, April). Assessing self-efficacy beliefs and academic outcomes: The case for specificity and correspondence. Paper presented at the annual meeting of the American Educational Research Association, New York.
        Podell, D., & Soodak, L. (1993). Teacher efficacy and bias in special education referrals. Journal of Educational Research, 86, 247-253.
        Ross, J. A. (1994). The impact of an inservice to promote cooperative learning on the stability of teacher efficacy. Teaching and Teacher Education, 10, 381-394.
        Soodak, L., & Podell, D. (1993). Teacher efficacy and student problem as factors in special education referral. Journal of Special Education, 27, 66-81.
        Thompson, B. (1994). The pivotal role of replication in psychological research: Empirically evaluating the replicability of sample results. Journal of Personality, 62, 157-176.
        Tschannen-Moran, M., Woolfolk Hoy, A., & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68, 202-248.
        Woolfolk, A. E., & Hoy, W. K. (1990). Prospective teachers' sense of efficacy and beliefs about control. Journal of Educational Psychology, 82, 81-91.
        Zwick, W. R., & Velicer, W. F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99, 432-442.
 
 

APPENDIX A
Items Contained on the SOSI

1. I have had many positive opportunities to teach.

2. I remember clearly those times when I have taught groups well.

3. I have learned about how to be a teacher by watching other skillful teachers.

4. Listening to others talk about teaching gives me useful information on teaching.

5. I have developed many of my teaching skills by actually teaching.

6. When I say the wrong things to a class, I become anxious.

7. Watching other teachers make mistakes has taught me how to be a more effective teacher.

8. I learn little about how to actually teach effectively from suggestions of others.

9. Often my attempts to teach children are not as successful as I would like.

10. The idea of being in a classroom as a teacher makes me nervous.

11. I have had meaningful opportunities to observe teachers in action.

12. The feedback I receive from others does not help me teach better.

13. I have learned a great deal from teaching in classrooms.

14. I get excited when I do something right to help a child learn.

15. My classroom observations are valuable to me.

16. When people I respect tell me I will be a good teacher, I tend to believe them.

17. I have made many mistakes when trying to teach children.

18. Educational textbooks and journal articles have helpful information on how to teach.

19. My fears of making mistakes affect my ability to teach.

20. I believe I can teach as well as the teachers portrayed in popular movies.

21. Feedback from other teachers is valuable to me.

22. When I make instructional mistakes, I am able to learn from the experience.

23. I have felt my heart beat faster or harder when I have done well with a lesson.

24. I often compare my own abilities to other teachers.

25. My coursework has helped me develop effective teaching strategies and skills.

26. I often wish that I had done things differently after teaching a lesson.

27. I have developed confidence in my own teaching by observing the mistakes that other teachers make.

28. I tend not to believe others when they tell me I will be a good teacher.

29. Teaching well gives me a positive sense of personal success.

30. When I see other teachers do poorly, I am able to learn how to teach more effectively.

31. The things I learn in coursework do not help me be an effective teacher.

32. There have been opportunities for me to teach well.

33. When I have made mistakes teaching, I have felt my heart beat faster and harder.

34. I am able to improve my own instruction by noticing the errors that others make.

35. I often get important feedback from my professors about my teaching ability.