Thursday, September 1, 2011

The role of gender in teaching effectiveness ratings of faculty.

ABSTRACT

This paper examines the role of gender and its influence on student ratings of faculty teaching effectiveness. The study recorded professor effectiveness ratings by 930 undergraduate students, consisting of 472 females and 458 males. The results reveal several gender differences. Generally, female students rated faculty effectiveness higher than male students. When gender of faculty was considered, female students rated male faculty higher than male students did, but did not rate female faculty higher than male students did. Gender differences were also examined using an integrated model of student rating behavior. This model combined the theories of motivation, grade leniency/stringency, and construct validity, which were integrated into a structural equation model. Previous research generally treated these theories independently. The effect of gender on the role each of the competing theories plays was studied. Rating behavior was generally consistent between male and female students. Females appeared to exhibit lower academic expectations than males. The results of this study, in conjunction with previous studies, continue to show that significant bias exists in student ratings of faculty teaching effectiveness regardless of the gender of students or faculty.

LITERATURE REVIEW

Many studies have examined the various factors that influence student ratings of professor effectiveness. Those that have focused on gender differences have revealed inconsistencies related to faculty evaluations. Some studies have shown higher ratings for instructors by females, though in some instances same-sex preferences were also found (Ferber & Huber, 1975; Tieman & Rankin-Ullock, 1985). Other studies have shown little or no gender interaction (Elmore & LaPointe, 1974, 1975; Wilson & Doyle, 1976). Hancock, Shannon & Trentham (1993) considered gender and college disciplines (human sciences, liberal arts, etc.) and found no uniform patterns, though they did find that female students rated instructors higher than did males. Feldman (1993) conducted an extensive analysis and found minor differences and inconsistent results, though students rated same-sex instructors somewhat higher. Fernandez (1997) reviewed the literature and concluded that gender differences were minimal with regard to ratings of faculty. His own study supported these conclusions, stating "...that the effect of student and faculty gender on teaching quality assessment is slight to non-existent."

Other factors also impact teaching evaluations. Academic success expectancy has been studied, and small gender differences were found, with female expectancy slightly lower than that of males (Gigliotti & Seacrest, 1988). The effects of motivation on professor ratings are probably the most agreed-upon
systematic influence in student ratings of faculty. It has been demonstrated that student motivation, represented by student interest and course type (elective/required), plays a significant role in student ratings of professor effectiveness (Howard & Maxwell, 1980; Hoyt, 1973; Marsh, 1984; Marsh & Duncan, 1992). Howard and Maxwell (1980, 1982) modeled the relationship between student motivation, student learning, expected grades, and student satisfaction with the instructor and field of study. These and other studies show that motivation and learning are more highly correlated with ratings of professor effectiveness than is expected grade. The authors conclude that student motivation drives the correlation between grades and student satisfaction with the instructor. Therefore, the correlation between grades and ratings of professor effectiveness is an expected artifact, rather than an indication of a direct relationship between grades and ratings of professor effectiveness. Using path analysis, Marsh (1984) also concluded that prior subject interest had a stronger impact on student ratings of various professor effectiveness characteristics than did grades. Additionally, simple classifications (required versus elective) and expanded categories of course type have been found to be significantly correlated with ratings of professor effectiveness (Aleamoni, 1981; Centra, 1993; Feldman, 1978; Marsh & Dunkin, 1992).

Construct validity theory proposes that student ratings reflect student learning and, therefore, measure professor teaching effectiveness. That is, higher student ratings for the instructor indicate greater student learning. Some studies have demonstrated that, in multi-section classes, the sections with the highest student ratings also performed best on standardized final examinations (Marsh & Roche, 1997). In addition to the earlier studies that provided the foundation for validity theory, numerous factor analytic studies have been conducted to investigate the validity of student ratings (Cashin, 1988; Feldman, 1989; Howard, Conway, & Maxwell, 1985; Marsh, 1984; Marsh & Duncan, 1992).

Using SEM, Greenwald and Gillmore (1997a & b) found support for grade leniency theory by suggesting that only grade leniency allows for a negative workload-grade relationship. This relationship is explained by students' willingness to work harder in order to avoid very low grades.
This negative relationship between workload and grades has been observed in other studies (Marsh, 1980). However, other explanations have been offered for the negative relationship, such as subject difficulty and student capability (McKeachie, 1997). In an effort to integrate the competing theories, SEM analysis was conducted by Hatfield and Kohn (2004), which confirmed the presence of all the competing theories and their interactions.

HYPOTHESES

Based on the literature and the findings of several studies, we propose that there are differences in the way male and female students rate male and female faculty.

H1: Female students rate faculty higher than male students.

H2: Female students rate male faculty higher than male students.

H3: Female students rate female faculty higher than male students.

Theories of student rating behavior suggest a variety of hypotheses which form the basis of an integrated approach to studying gender differences. Several theories have been offered to explain the positive relationship between grades and ratings of faculty (Greenwald & Gillmore, 1997a). Grade leniency suggests that grades directly affect ratings of faculty. Construct validity and motivation assert that a third variable (learning) positively affects both grades and ratings, thus resulting in a positive relationship between grades and ratings. While these theories have been studied extensively in the past, the effect of gender has not been studied when all three theories are present simultaneously.

Grade Leniency/Stringency

This theory suggests that praise induces liking for the individual giving the praise (Aronson & Linder, 1965; Hatfield & Kohn, 2003). In the context of student ratings, praise is interpreted to be high grades, and liking is translated into high faculty ratings. Grade leniency theory suggests that there is a causal relationship between expected grades and ratings of faculty. Further, Greenwald and Gillmore (1997a) suggest that there is a negative relationship (grade stringency) between students working hard and expected grades. In courses that have strict grading policies, students have to work hard in order to avoid very low grades, yet overall grades are still lower than in classes with easy-grading professors. These premises suggest the following hypotheses:

H4: The higher the expected grade, the higher the professor effectiveness rating.

H5: The higher the student effort (worked harder), the lower the expected grade.

Construct Validity

This theory suggests that high instructional quality induces high student learning, which results in higher grades and higher professor ratings (Cashin & Downey, 1992; Cohen, 1981; Feldman, 1976 & 1989; Marsh, 1984). Therefore, the following hypotheses are provided to evaluate construct validity:

H6: The higher the student learning, the higher the professor effectiveness rating.

H7: The higher the student learning, the higher the expected grade.

H8: The higher the professor effectiveness rating, the higher the student learning.

H9: The higher the worked hard rating, the higher the student learning.

Student Motivation

This theory suggests that student motivation positively affects both grades and ratings of faculty, through student learning, thereby resulting in a positive correlation
between grades and ratings of faculty (Aleamoni, 1981; Braskamp & Ory, 1994; Centra, 1993; Kohn & Hatfield, 2001; Marsh, 1984; Marsh & Dunkin, 1992). Student motivation results in more student learning and appreciation for the course and instructor, which leads to higher grades and higher professor effectiveness ratings. Researchers have identified two measures of student motivation: course-specific and general (Howard & Maxwell, 1980; Marsh, 1984). These indicators of student motivation will be examined in this study: student interest in the subject matter of the rated course and course type (major or elective, versus required or core course). Student interest is a course-specific measure, whereas course type is a general measure. The following hypotheses are designed to test the impact of student motivation on student rating behavior:

H10: The higher the student interest, the higher the student learning.

H11: Lack of choice in course (required or core courses) results in lower student learning.

H12: The higher the student learning, the higher the expected grade.

H13: The higher the student learning, the higher the professor effectiveness rating.

RESEARCH METHODS

The student rating survey contained eight items, which students rated on a six-point Likert scale: (1) strongly agree, (2) agree, (3) slightly agree, (4) slightly disagree, (5) disagree, (6) strongly disagree. The first six items were designed to examine professor effectiveness, with the sixth item being a global item. Student learning was assessed by item 7 and course-specific student interest by item 8.

1. The course requirements, including grading system, were explained at the beginning of the semester.

2. The professor provides feedback on exams and assignments.

3. The professor is willing to answer questions and assist students upon request.

4. The professor uses examples and practical applications in class, which aid in my understanding of the material.

5. The professor encourages students to analyze, interpret, and apply concepts.

6. The professor was effective teaching this course.

7. I learned a significant amount in this course.

8. I am interested in the subject matter of this course.

These items were selected based on past research, which suggests the desirability of global items that address professor effectiveness (#6) and student learning (#7) factors, and the need to control for student interest (#8) (Cashin, 1995). Items one through five address commonly used dimensions of professor effectiveness in student rating research (Braskamp & Ory, 1994; Cashin, 1995; Centra, 1993; Feldman, 1989; Marsh, 1991).

Students completed a student data sheet that contained demographic items, two grade-related items, and one general student motivation item: (1) the grade I expect to achieve in this course, (2) I worked harder in this course than in most of my other courses, and (3) course type. All response options were designed so that students could use opscan sheets to report their ratings.
The scale for expected grade was: 1. A, 2. B, 3. C, 4. D, 5. F. The agree-disagree Likert scale noted above was also used for the 'worked harder' item. Five categories of course type were provided: A. required by major/minor, B. elective in major/minor, C. general education requirement, D. free elective, E. program core course. These items reflect commonly used measures in testing for grade leniency and motivation effects on student ratings of faculty (Greenwald & Gillmore, 1997a & b; Howard & Maxwell, 1980; Marsh, 1984).

SAMPLE AND PROCEDURES

Data were collected from students and professors in the three colleges (business, arts and sciences, and education) at Shippensburg University at the end of the first semester of the 1997-1998 academic year. Classes were included in the sample from professors volunteering and by request (in order to ensure adequate representation from all colleges and departments), with a mix of student classes (such as freshman and senior) and a mix of professor characteristics (such as gender, race, degree, and rank).

Nine hundred and thirty students (472 females and 458 males) and 44 professors (17 females and 27 males) were included in the sample, with the largest percentage of faculty (51 percent) in Arts and Sciences and equal percentages in Business and Education. The largest percentage of students were seniors, 36 percent, followed by sophomores at 19 percent, juniors at 18 percent, freshmen at 14 percent, and graduate students at 13 percent.

VARIABLES AND MEASURES

The professor effectiveness dependent variable is a composite measure, developed by averaging the ratings of the six professor effectiveness items. The reliability coefficient, alpha, for the composite professor effectiveness measure is 0.84. Expected Grade is both a dependent and independent variable, and is used directly as reported. Student Learning, Student Interest, and Worked Hard are also used as directly reported in the survey instrument. The Course Type categories were collapsed into a two-category independent variable: (1) major/minor/elective, and (2) required/core course. There are two measures of student motivation: Course Type and "Student interest in the subject matter of this course". There are two measures of grade leniency: Expected Grade and "Worked harder in this course than in most other courses". Student Learning, a self-reported rating, is a construct validity measure.

The scales for five variables (professor effectiveness, expected grade, student learning, student interest, worked hard) were reversed so that interpreting the findings would be more consistent with the way these variables are typically referred to, i.e., low to high. For example, the higher the student learning rating, the more the student learned. The course type variable is categorical and thus did not need to be reversed.
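To make the variable construction concrete, here is a minimal sketch of how the composite professor effectiveness score, the scale reversals, the course-type collapse, and the reliability coefficient could be computed. It is an illustration only, not the authors' code: the file name ratings.csv, the column names (item1...item8, expected_grade, worked_harder, course_type), and the exact mapping of the five course-type letters into the two collapsed categories are assumptions.

```python
import pandas as pd

# Hypothetical survey data: item1..item6 (professor effectiveness), item7
# (student learning), item8 (student interest), all on the 1-6 Likert scale,
# plus expected_grade (1=A..5=F), worked_harder (1-6), course_type (A-E).
df = pd.read_csv("ratings.csv")  # assumed file and layout

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

effect_items = df[[f"item{i}" for i in range(1, 7)]]
print("alpha =", round(cronbach_alpha(effect_items), 2))   # the paper reports 0.84

# Reverse the 1-6 agree/disagree scale (and the 1-5 grade scale) so that
# higher scores mean "more" of the construct, as described in the text.
df["prof_effect"] = 7 - effect_items.mean(axis=1)           # composite of items 1-6
df["student_learning"] = 7 - df["item7"]
df["student_interest"] = 7 - df["item8"]
df["worked_harder_r"] = 7 - df["worked_harder"]
df["expected_grade_r"] = 6 - df["expected_grade"]           # 5 = A ... 1 = F after reversal

# Collapse the five course-type categories into two; the assignment of
# letters to the two groups is an assumption for illustration only.
df["course_type_2"] = df["course_type"].map(
    {"A": 1, "B": 1, "D": 1,   # assumed: major/minor/elective
     "C": 2, "E": 2})          # assumed: required/core
```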
ANALYSIS AND RESULTS

Descriptive statistics (means, standard deviations, and correlations) for all the variables used in this study are provided in Tables 1, 2, and 3. The hypotheses will be tested on the within-class data using structural equation modeling (SEM) and the Amos 5.0 modeling software. While there are many goodness-of-fit statistics in SEM, this study will report three of the most popular measures (CFI, NFI, chi-square/df), with the Comparative Fit Index (CFI) being the primary fit statistic used in this study (see Endnotes). Path coefficients are tested for significance using critical ratios (CR). Amos 5.0 reports both the CRs and the P values for each path so that the level of significance can be determined. In addition, comparisons of professor effectiveness rating behavior will be made among the combined male and female, male only, and female only samples. Average scores will be tested for differences among the groups using a difference of means test. Finally, a comparison will be made to determine whether male and female students rate male and female faculty members differently. A one-way analysis of variance will be performed to see if there are any significant differences in rating behavior.

A comparison was performed to contrast average professor effectiveness rating scores among males and females combined, females only, and males only (see Tables 4 and 5). A test for the difference of means was performed (assuming unequal variances) and a highly significant difference (Z = 3.453, P < .001) was found between females only and males only. Thus, female average professor effectiveness rating scores are significantly higher than those of males. These results support H1 and similar findings (Benz and Blatt, 1995), which also found female ratings higher than those of males.

An analysis was conducted to determine whether the gender of either students or faculty played a role in the rating of faculty effectiveness. Because there were fewer missing data for this analysis, the total sample size was 936 instead of 930 students. The gender of the faculty was noted and a one-way analysis of variance was conducted among the four combinations of male and female students and male and female faculty. Table 6 presents the averages and standard deviations of faculty effectiveness ratings, along with sample sizes. Table 7 presents the results of the analysis of variance, indicating significant differences among groups (P < .001). Scheffé multiple comparisons were made between male and female student ratings for male and female faculty. Female students rated male faculty significantly higher than did male students (mean difference = .177, P < .05), supporting H2. There was no significant difference in female and male students' ratings of female faculty (mean difference = .103). Therefore, H3 is not supported.
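The two comparisons above can be approximately reproduced from the summary statistics alone. The sketch below is a rough check rather than the authors' analysis: it computes the unequal-variance Z statistic for H1 from the group means and variances in Table 4, and applies the Scheffé criterion to the two student-gender contrasts using the group statistics in Table 6 and the within-group mean square from Table 7. The numbers come out close to the reported Z = 3.453 and the reported pattern of significance for H2 and H3.

```python
from math import sqrt
from scipy import stats

# --- H1: difference of means, unequal variances (values from Table 4) ---
m_f, var_f, n_f = 5.412, 0.401, 472   # females only
m_m, var_m, n_m = 5.265, 0.442, 458   # males only
z = (m_f - m_m) / sqrt(var_f / n_f + var_m / n_m)
p = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"Z = {z:.3f}, p = {p:.4f}")     # ~3.45, consistent with Table 5

# --- H2/H3: Scheffé comparisons (values from Tables 6 and 7) ---
k, df_within, mse = 4, 932, 0.421      # four gender groups; ANOVA within-group df and MSQ
f_crit = stats.f.ppf(0.95, k - 1, df_within)

def scheffe_check(mean1, n1, mean2, n2):
    """A contrast is significant at .05 if |mean1 - mean2| exceeds
    sqrt((k-1) * F_crit * MSE * (1/n1 + 1/n2))."""
    crit = sqrt((k - 1) * f_crit * mse * (1 / n1 + 1 / n2))
    return round(abs(mean1 - mean2), 3), round(crit, 3)

# H2: male faculty rated by female vs. male students
print(scheffe_check(5.40, 283, 5.223, 305))   # diff ~0.177 > critical ~0.15 -> significant
# H3: female faculty rated by female vs. male students
print(scheffe_check(5.43, 189, 5.33, 159))    # diff ~0.10 (.103 reported) < critical ~0.20 -> n.s.
```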
NEW PERSPECTIVES ON STUDENT RATINGS

One of the problems with testing each of the above theories in isolation is that intervening and moderating effects on the predicted relationships are not taken into account. Such effects may suppress or reinforce the predicted relationships. Thus, to accurately assess the presence of the theorized relationships, all the variables of interest need to be included in the same model. This section integrates the findings predicted by the various theories using structural equation modeling. The impact of student gender is then studied to determine the role gender plays in rating behavior.

Following the methodology of Hatfield and Kohn (2004), all of the direct relationships predicted by the grade leniency/stringency, construct validity, and motivation theories were used to construct integrated structural models for males and females combined, males only, and females only. The initial analyses of these models are presented in the left-hand columns of Figures 1, 2, and 3. Analysis of the models reveals that the fit for all three was very good (CFI = .94 for all three models) and resulted in R² values of .35 (M & F), .41 (F), and .28 (M) for professor effectiveness. However, several path coefficients were not significant. For each model, removal of these paths was evaluated by iteratively removing the path with the highest P value greater than .05, rerunning the model with that path deleted, and then inspecting the P values of the remaining paths for values greater than .05. The procedure stopped when all remaining paths were significant. The final models are presented on the right-hand side of Figures 1, 2, and 3.

[FIGURES 1-3 OMITTED]

As a result of these procedures, both Course Type → Student Learning (Motivation, H12) and Professor Effectiveness → Student Learning (Construct Validity, H8) were deleted from all three models (M & F, M, F). In addition, Worked Harder → Student Learning (Construct Validity, H6) was deleted from the female model. Inspection of the modification indices of the final models for males and females combined, males only, and females only indicated that no additional paths would strengthen the models. These results are presented in the right-hand columns of Figures 1, 2, and 3. All paths of the final models are significant, the fit is very good (CFI = 1.00 (F) and .999 (M & F, M)), and the R² values are .35 (M & F), .41 (F), and .29 (M). While the R² values of the final models remained at their original levels, the fit indices of the models improved significantly. The R² values for all models are quite strong; the R² for females only is 46% higher than for males only. All three of these R² values are much higher than usually reported in many studies. In addition, there is a high degree of consistency in the structural nature of all models, with only the Worked Harder → Student Learning linkage being omitted from the female-only model.

Thus, 9 of the original 12 hypotheses were strongly supported, lending considerable support to the three theories of student rating behavior, regardless of gender.
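The pruning procedure described above (iteratively dropping the least significant path and refitting) is straightforward to prototype. The sketch below is a rough illustration only: it uses the open-source semopy package with lavaan-style syntax rather than the Amos 5.0 software the paper used, the variable names follow the hypothetical data-preparation sketch shown earlier, the reciprocal Professor Effectiveness → Student Learning path (H8) is omitted to keep this toy model recursive, and the column labels of semopy's parameter table are assumed and may differ across versions.

```python
import pandas as pd
import semopy  # assumed dependency: pip install semopy

# Simplified integrated model (reciprocal H8 path omitted in this sketch).
DESC = """
prof_effect      ~ expected_grade_r + student_learning
expected_grade_r ~ worked_harder_r + student_learning
student_learning ~ worked_harder_r + student_interest + course_type_2
"""

def prune_model(desc: str, data: pd.DataFrame, alpha: float = 0.05) -> str:
    """Backward elimination: refit after dropping the regression path with the
    largest p-value above alpha, until every remaining path is significant."""
    while True:
        model = semopy.Model(desc)
        model.fit(data)
        est = model.inspect()                 # parameter table (column names assumed)
        paths = est[est["op"] == "~"]
        worst = paths.loc[paths["p-value"].idxmax()]
        if worst["p-value"] <= alpha:
            print(semopy.calc_stats(model).T) # CFI, chi-square, df, etc.
            return desc
        print("dropping:", worst["lval"], "~", worst["rval"],
              "p =", round(worst["p-value"], 3))
        # Rebuild the model description without the dropped predictor.
        lines = []
        for line in desc.strip().splitlines():
            lhs, rhs = [s.strip() for s in line.split("~")]
            preds = [p.strip() for p in rhs.split("+")]
            if lhs == worst["lval"]:
                preds = [p for p in preds if p != worst["rval"]]
            if preds:
                lines.append(f"{lhs} ~ {' + '.join(preds)}")
        desc = "\n".join(lines)

# final_desc = prune_model(DESC, df)          # df as prepared in the earlier sketch
```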
Interestingly, both grade leniency (H4) and grade stringency (H5) are supported. While grade leniency is commonly understood and generally accepted, grade stringency (the negative workload-expected grade relationship) has rarely been observed (Greenwald and Gillmore, 1997a & b). In this analysis, it is observed not only in the combined sample of males and females but also in both sub-groups of males only and females only. Moreover, the standardized path coefficient between Worked Harder and Expected Grade is much stronger for females (-.31) than for males (-.17).

In all final models, the Professor Effectiveness → Student Learning path (H8) was removed. This hypothesis is part of construct validity theory, and its removal may indicate that the feedback loop between student learning and ratings of professor effectiveness is not as well defined as assumed. A possible explanation for this weakness is that higher effectiveness ratings may not be a good measure of teaching ability and thus do not lead to greater student learning.

Although found in other studies, the Course Type link (H12) was dropped from all final models. It is generally assumed that students are more motivated in electives or courses in their major and less so in required courses, and this facet of motivational theory then leads to higher professor effectiveness ratings. Our results do not indicate this to be so. Course Type is an indirect effect, influencing student learning, which in turn affects professor effectiveness ratings. In an integrated model, Student Interest has a major impact (path coefficients of .70 (M & F), .81 (F), and .64 (M)), eliminating the role of the Course Type component of motivation theory. Thus, in an integrated model Course Type may be a redundant variable, with Student Interest providing a much stronger indication of student motivation.

CONCLUSIONS

Some inconsistencies continue to show up in the study of gender effects on faculty effectiveness ratings. In general, female students rated faculty higher than did male students. When gender of faculty was considered, female students rated male faculty higher than did male students. Higher female rating scores have been observed in other studies, and the results of our study strongly support this difference. However, the differences ended there.

In addition, our findings provide strong support for the need to integrate theories that explain student rating behavior of faculty. Integration is necessary because of the interactions and indirect effects among the theoretical premises. Structural equation modeling provides an ideal analytical methodology for studying the complexities of student rating behavior.

Using an integrated approach based on SEM analysis, we have found surprisingly consistent results among all the models. Structurally, little difference in overall student rating behavior based on gender was observed. Except for the removal of one path for females (Worked Harder → Student Learning, H6, Construct Validity), the three models are identical.
All exhibit the identical simultaneous effects of construct validity, grade leniency and stringency, and motivational theories. All models fit the data very well and have much higher coefficients of determination than previously reported. These high values continue to lend support for the significant bias that exists in ratings of professor effectiveness, a bias that continues to be ignored by many schools.

Females experience the negative workload-expected grade effect to a much greater degree than males. Some studies have found that females have lower expectancy of success (Gigliotti and Seacrest, 1988). The results of our study tend to support these findings. Faced with a difficult course, females may be less likely to have confidence in themselves and thus expect a lower grade, though they will continue to work harder.

This study continues to find significant bias, including gender bias, in student ratings of faculty, suggesting the need to reconsider how student ratings are used to evaluate faculty teaching effectiveness. In order to more accurately evaluate professor effectiveness, administrators and faculty need to control for, or at least acknowledge, the complexity of student rating behavior.

ENDNOTES

A CFI or NFI of 1.0 suggests a perfect fit; if it is under .9, the model can probably be improved (Bentler and Bonnett, 1980). Chi-square/df ratios of up to 3 are indicative of acceptable model fit (Marsh and Hocevar, 1985). CFI is less affected by sample size than is NFI or the chi-square ratio (Kline, 1998).
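The fit-index thresholds in this endnote rest on simple formulas that can be computed directly from the model chi-square and the baseline (independence-model) chi-square. The helper below is a generic sketch of those standard formulas, not output from Amos; the numbers in the example call are made up purely to show the arithmetic.

```python
def fit_indices(chi2_model: float, df_model: int,
                chi2_null: float, df_null: int) -> dict:
    """Incremental fit indices referenced in the endnote.

    NFI = (chi2_null - chi2_model) / chi2_null
    CFI = 1 - max(chi2_model - df_model, 0) / max(chi2_null - df_null,
                                                  chi2_model - df_model, 0)
    """
    nfi = (chi2_null - chi2_model) / chi2_null
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model, 0.0)
    cfi = 1.0 - (d_model / d_null if d_null > 0 else 0.0)
    return {"NFI": round(nfi, 3), "CFI": round(cfi, 3),
            "chi2/df": round(chi2_model / df_model, 2)}

# Illustrative (made-up) values: a model chi-square of 12 on 6 df against a
# baseline chi-square of 600 on 15 df gives CFI ~ .99 and chi2/df = 2.
print(fit_indices(12.0, 6, 600.0, 15))
```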
REFERENCES

Aleamoni, L.M. 1981. Student ratings of instruction. In J. Millman (Ed.), Handbook of teacher evaluation (pp. 110-145). Beverly Hills, CA: Sage.

Anderson, J.C. & Gerbing, D.W. 1988. Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3): 411-423.

Aronson, E. & Linder, D.E. 1965. Gain and loss of esteem as determinants of interpersonal attractiveness. Journal of Experimental Social Psychology, 1: 156-171.

Bentler, P.M. & Bonnett, D.G. 1980. Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin, 88: 588-606.

Bentler, P.M. & Chou, C. 1987. Practical issues in structural modeling. Sociological Methods and Research, 16: 78-117.

Benz, C. & Blatt, S.J. 1995. Factors underlying effective college teaching: What students tell us. Mid-Western Educational Researcher, 8(1): 27-31.

Bollen, K.A. 1989. Structural Equations with Latent Variables. New York: Wiley.

Braskamp, L.A. & Ory, J.C. 1994. Assessing faculty work: Enhancing individual and institutional performance. San Francisco: Jossey-Bass.

Bridgeman, W.J. 1986. Student evaluations viewed as a group process factor. Journal of Psychology, 120: 183-190.

Cashin, W.E. 1988. Student ratings of teaching: A summary of the research. IDEA Paper No. 20. Manhattan: Kansas State University, Center for Faculty Evaluation and Development.

Cashin, W.E. 1995. Student ratings of teaching: The research revisited. IDEA Paper No. 32. Manhattan: Kansas State University, Center for Faculty Evaluation and Development.

Cashin, W.E. & Downey, R. 1992. Using global student rating items for summative evaluation. Journal of Educational Psychology, 84: 563-572.

Centra, J.A. 1993. Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.

Chacko, T.I. 1983. Student ratings of instruction: A function of grading standards. Education Research Quarterly, 8(2): 14-25.

Chapman, J.W. & Lawes, M.M. 1984. Consistency of causal attributions for expected and actual examination outcome: A study of the expectancy confirmation and egotism models. British Journal of Educational Psychology, 54: 177-188.

Cohen, P.A. 1981. Student ratings of instruction and student achievement: A meta-analysis of multi-section validity studies. Review of Educational Research, 51: 281-309.

D'Apollonia, S. & Abrami, P.C. 1997. Navigating student ratings of instruction. American Psychologist, 52(11): 1198-1208.

Davis, M.H. & Stephan, W.G. 1980. Attributions for exam performance. Journal of Applied Social Psychology, 10: 235-248.

Elmore, P.B. & LaPointe, K.A. 1975. Effects of teacher sex, student sex, and teacher warmth on the evaluation of college instructors. Journal of Educational Psychology, 67: 368-374.

Feldman, K.A. 1976.
Grades and college students' evaluations of their courses and teachers. Research in Higher Education, 4: 69-111.

Feldman, K.A. 1978. Course characteristics and variability among college students in rating their teachers and courses: A review and analysis. Research in Higher Education, 9: 199-242.

Feldman, K.A. 1989. The association between student ratings of specific instructional dimensions and student achievement: Refining and extending the synthesis of data from multi-section validity studies. Research in Higher Education, 30(6): 583-645.

Feldman, K.A. 1993. College students' views of male and female college teachers--Part II: Evidence from social laboratory and experiments. Research in Higher Education, 34(2): 151-211.

Fernandez, M.A.M. 1997. Student and faculty gender in ratings of university teaching quality. Sex Roles: A Journal of Research, 37(11-12): 997-1003.

Gigliotti, R.J. 1987. Expectations, observations, and violations: Comparing their effects on course ratings. Research in Higher Education, 26: 401-415.

Gigliotti, R.J. & Buchtel, F.S. 1990. Attributional bias and course evaluations. Journal of Educational Psychology, 82: 341-351.

Gigliotti, R.J. & Seacrest, S.E. 1988. Academic success expectancy: The interplay of gender, situation, and meaning. Research in Higher Education, 29: 281-297.

Greenwald, A.G. & Gillmore, G.M. 1997a. Grading leniency is a removable contaminant of student ratings. American Psychologist, 52(11): 1209-1217.

Greenwald, A.G. & Gillmore, G.M. 1997b. No pain, no gain? The importance of measuring course workload in student ratings of instruction. Journal of Educational Psychology, 89(4): 743-752.

Hair, J.F., Anderson, R.E., Tatham, R.L., & Black, W.C. 1998. Multivariate Data Analysis, 5th ed. New Jersey: Prentice Hall, 603-604.

Haladyna, T. & Hess, R.K. 1994. The detection and correction of bias in student ratings of instruction. Research in Higher Education, 35(6): 669-687.

Hancock, G.R., Shannon, D.M. & Trentham, L.L. 1993. Student and teacher gender in rating of university faculty: Results from five colleges of study. Journal of Personnel Evaluation in Education, 6: 235-248.

Hatfield, L. & Kohn, J.W. 2003. Attribution theory reveals grade-leniency/stringency effects in student ratings of faculty. Academy of Educational Leadership Journal, 7(2): 1-14.
Hatfield, L. & Kohn, J.W. 2004. Student ratings of faculty: Back to square one--Integrating theoretical perspectives using structural equation modeling. Academy of Educational Leadership Journal, 1(10): 29-46.

Holmes, D.S. 1972. Effects of grades and disconfirmed grade expectancies on students' evaluations of their instructor. Journal of Educational Psychology, 63(2): 130-133.

Howard, G.S. & Maxwell, S.E. 1980. Correlation between student satisfaction and grades: A case of mistaken causation? Journal of Educational Psychology, 72(6): 810-820.

Howard, G.S. & Maxwell, S.E. 1982. Do grades contaminate student evaluations of instruction? Research in Higher Education, 16: 175-188.

Howard, G.S., Conway, C.G. & Maxwell, S.E. 1985. Construct validity of measures of college teaching effectiveness. Journal of Educational Psychology, 77(2): 187-196.

Hoyt, D.P. 1973. Measurement of instructional effectiveness. Research in Higher Education, 1: 367-378.

Kennedy, W.R. 1975. Grades expected and grades received: Their relationship to students' evaluations of faculty performance. Journal of Educational Psychology, 67: 109-115.

Kline, R.B. 1998. Principles and Practices of Structural Equation Modeling. New York: Guilford Press.

Kohn, J.W. & Hatfield, L. 2001. Student ratings of faculty and motivational bias--A structural equation approach. Academy of Educational Leadership Journal, 5(1): 65-74.

Marsh, H.W. 1980. The influence of student, course, and instructor characteristics on evaluations of university teaching. American Educational Research Journal, 17: 219-237.

Marsh, H.W. 1984. Students' evaluations of university teaching: Dimensionality, reliability, validity, potential biases, and utility. Journal of Educational Psychology, 76(5): 707-754.

Marsh, H.W. 1986. Self-serving effect (bias?) in academic attributions: Its relation to academic achievement and self-concept. Journal of Educational Psychology, 78: 190-200.

Marsh, H.W. 1991. Multidimensional students' evaluations of teaching effectiveness: A test of alternative higher-order structures. Journal of Educational Psychology, 83: 285-296.

Marsh, H.W. & Duncan, M. 1992. Students' evaluations of university teaching: A multidimensional perspective. In J.C. Smart (Ed.), Higher education: Handbook of theory and research, 8: 143-233. New York: Agaton.

Marsh, H.W. & Hocevar, D. 1985. Application of confirmatory factor analysis to the study of self-concept: First- and higher-order factor models and their invariance across groups. Psychological Bulletin, 97(3): 562-582.

MacCallum, R.C., Roznowski, M. & Necowitz, L.B. 1992. Model modifications in covariance structure analysis: The problem of capitalization on chance. Psychological Bulletin, 111(3): 490-504.
Marsh, H.W. & Roche, L.A. 1997. Making students' evaluations of teaching effectiveness effective. American Psychologist, 52(11): 1187-1197.

McHugh, M.C., Fisher, J.E. & Frieze, I.H. 1982. Effect of situational factors on the self-attributions of females and males. Sex Roles, 8: 389-396.

McKeachie, W.J. 1997. Student ratings--The validity of use. American Psychologist, 52(11): 1218-1225.

Miller, D.C. 1991. Handbook of Research Design and Social Measurement. Newbury Park, California: Sage Publications, Inc.

Owie, I. 1985. Incongruence between expected and obtained grades and students' ratings of the instructor. Journal of Instructional Psychology, 12: 196-199.

Powell, R.W. 1977. Grades, learning, and student evaluation of instruction. Research in Higher Education, 7: 193-205.

Ross, M. & Fletcher, G.J.O. 1985. Attribution and social perception. In G. Lindzey & E. Aronson (Eds.), Handbook of Social Psychology (vol. 2, pp. 73-122). New York: Random House.

Simon, J.G. & Feather, N.T. 1973. Causal attribution for success and failure at university examinations. Journal of Educational Psychology, 64: 46-56.

Stumpf, S.A. & Freedman, R.D. 1979. Expected grade covariation with student ratings of instruction: Individual versus class effects. Journal of Educational Psychology, 71: 293-302.

Tieman, C.R. & Rankin-Ullock, B. 1985. Student evaluations of teachers. Teaching Sociology, 12: 177-191.

Jonathan Kohn, Shippensburg University

Louise Hatfield, Shippensburg University

Table 1: Descriptive Statistics, Males and Females: Correlations (two-tailed significance in parentheses), Means and Standard Deviations (N = 930)

                   Student       Student       Expect.       Worked        Course
                   Learn.        Interest      Grade         Harder        Type          Mean    Std. Dev.
Prof. Effect.      .565 (.000)   .381 (.000)   .350 (.000)   .065 (.047)   -.182 (.000)  5.34    .653
Student Learn.                   .570 (.000)   .314 (.000)   .196 (.000)   -.141 (.000)  5.00    .959
Student Interest                               .360 (.000)   .053 (.107)   -.174 (.000)  4.75    1.224
Expect. Grade                                                -.111 (.001)  -.079 (.016)  4.17    .775
Worked Harder                                                              -.178 (.000)  4.07    1.332
Course Type                                                                              1.26    .441

Table 2: Descriptive Statistics, Females Only: Correlations (two-tailed significance in parentheses), Means and Standard Deviations (N = 472)

                   Student       Student       Expect.       Worked        Course
                   Learn.        Interest      Grade         Harder        Type          Mean    Std. Dev.
Prof. Effect.      .619 (.000)   .433 (.000)   .367 (.000)   .053 (.255)   -.191 (.000)  5.41    .633
Student Learn.                   .602 (.000)   .332 (.000)   .140 (.002)   -.175 (.000)  5.07    .970
Student Interest                               .417 (.000)   .056 (.223)   -.203 (.000)  4.71    1.247
Expect. Grade                                                -.209 (.000)  -.041 (.372)  4.17    .775
Worked Harder                                                              -.206 (.000)  4.11    1.307
Course Type                                                                              1.26    .441

Table 3: Descriptive Statistics, Males Only: Correlations (two-tailed significance in parentheses), Means and Standard Deviations (N = 458)

                   Student       Student       Expect.       Worked        Course
                   Learn.        Interest      Grade         Harder        Type          Mean    Std. Dev.
Prof. Effect.      .506 (.000)   .340 (.000)   .317 (.000)   .072 (.126)   -.175 (.000)  5.27    .665
Student Learn.                   .543 (.000)   .284 (.000)   .251 (.000)   -.107 (.022)  4.94    .944
Student Interest                               .312 (.000)   .051 (.273)   -.144 (.002)  4.78    1.200
Expect. Grade                                                -.023 (.618)  -.119 (.011)  4.08    .770
Worked Harder                                                              -.150 (.001)  4.03    1.358
Course Type                                                                              1.26    .441

Table 4: Means and Variances for Faculty Effectiveness Ratings for Females and Males Combined, Females Only, Males Only

Gender               N     Mean    Variance
Males and Females    930   5.340   .426
Females Only         472   5.412   .401
Males Only           458   5.265   .442

Table 5: Difference in Mean Faculty Effectiveness Ratings, Females Only vs. Males Only

Difference          Z Score   P value
Females - Males     3.453     0.000

Table 6: Faculty Effectiveness Rating Scores by Gender Group

                     Female Fac./   Female Fac./   Male Fac./    Male Fac./
                     Female Stu.    Male Stu.      Female Stu.   Male Stu.
Average              5.43           5.33           5.40          5.223
Standard Deviation   0.627          0.640          0.638         0.6767
Sample size          189            159            283           305

Table 7: ANOVA Table: Faculty Effectiveness Rating Scores by Gender Groups

                  S.S.     D.F.   MSQ     F       Sig.
Between Groups    6.65     3      2.218   5.269   .001
Within Groups     392.30   932    .421
Total             398.95   935
