The Assessment of Teaching at a Large Urban Community College
Terri M. Manning and Denise Wells, Central Piedmont Community College
Lily Hwang, Morehouse College
Lynn Delzell, UNC-Charlotte

Presented to AIR, May 19, 2003, Tampa, FL

Why Do We Evaluate Teaching?

We do teaching evaluation for two reasons (with heavy emphasis on the first):
1. So faculty will have feedback from students that can be used to improve teaching.
2. So chairs/division directors can have one consistent indicator of students' perceptions about faculty (especially part-timers). These are often used as one of several means of teaching assessment for merit.

Problems in General with "Evaluation of Teaching" Tools

- Most are created internally.
- Committees don't always start at the beginning: "what is good teaching?"
- Most are not tested for (at least) validity and reliability.
- Many are thrown together rather quickly by a committee whose goal is a usable survey tool.

Very Few Tools Are For Sale

- Institutions are unique, and what they want to measure is unique (undergraduate, graduate, continuing ed, literacy and distance ed courses).
- Most institutions see these tools for what they are: "happiness coefficients."
- No one will stand behind them with a claim that "our tool is a valid measure of teaching."
- They would never stand up in court.
- So be very careful! Never cite your teaching evaluation as a reason for not renewing a contract.
Problems with the Use of Them

- The scores are used inappropriately and sometimes unethically (or at least stupidly).
- They are used for merit pay, promotion and tenure.
- Scores are treated like gospel: "you are a bad teacher because you scored below the department mean on the tool."

Problems with Use, cont.

- This is critical at the community college, where 100% of the job description is "to teach."
- They are used to make hiring and firing decisions.
- Teachers are placed in a "catch-22" situation (do I pretend this tool measures teaching, or blow it off? You could be in trouble either way).
- Who is included in group means for comparison purposes?

A Misconception

- You get a bunch of people together.
- Throw a bunch of questions together.
- Call it a teaching evaluation tool.
- And "hocus pocus," it is a valid, reliable, sensitive and objective tool.
- You can make merit, promotion and tenure decisions with it, no problem.

What Makes a Good Questionnaire?

- Validity: it truly (with proof) tests what it says it tests (good teaching).
- Reliability: it tests it consistently over time or over terms, across campuses and methods (a standard check is sketched after this list).
- Sensitivity (this is critical): it picks up fine or small changes in scores; when improvements are made, they show up (difficult with a 5-point Likert scale).
- Objectivity: participants can remain objective while completing the tool; it doesn't introduce bias or cause reactions in subjects.
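The Reliability bullet above has a standard quantitative check: internal consistency, usually reported as Cronbach's alpha. The presentation never says which statistic (if any) CPCC computed, so the sketch below is purely illustrative, using fabricated responses to four 1-5 Likert items:

```python
# Illustrative only: Cronbach's alpha for a set of Likert-type items.
# The response data below are invented; the presentation does not report
# which reliability statistic (if any) was computed for the CPCC tool.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = survey items."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering four 1-5 Likert items (fabricated data).
responses = np.array([
    [5, 4, 5, 5],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")  # ~0.91 here
```

Values above roughly 0.8 are conventionally read as good internal consistency, though alpha says nothing about whether the items measure "good teaching" (that is the validity question, not the reliability one).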
Problems Inherent in Teaching Evaluation with Validity

- What is "good teaching"? It isn't the same for all teachers, and it isn't the same for all students.
- We know it when it is not there or "absent," yet we don't always know it when we see it (if the style is different than ours).
- Who gets to define good teaching?
- How do you measure good teaching?
- How can you show someone how to improve it based on a "Likert-scale" tool? ("this is how you raise your mean by .213 points")

Problems Inherent in Teaching Evaluation with Reliability

- Students' perceptions change (e.g., giving them the survey just after a tough exam versus giving it to them after a fun group activity in class).
- From class to class of the same course, things are not consistent.
- Too much is reliant on how the students feel that day (did they get enough sleep, eat breakfast, break up with a boyfriend, feel depressed, etc.).
- Faculty are forced into a standard bell curve on scores.
- There is often too much noise (other interactive factors, e.g. student issues, classroom issues, time of day).

Greatest Problem: Sensitivity

- Likert scales of 1-5 leave little room for improvement.
- Is a faculty member with a mean of 4.66 really a worse teacher than a faculty member with a mean of 4.73 on a given item? (See the sketch after this list.)
- Can you document for me exactly how one can improve their scores?
- In many institutions, faculty have learned how to abuse these in their merit formulas.
- Faculty with an average mean across items of 4.88 still don't get into the highest rung of merit pay.
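The 4.66-versus-4.73 question above can be made concrete with a standard error calculation. The section sizes and standard deviations below are hypothetical (the slide gives only the two means), but with typical class sizes the gap between such means is far smaller than the sampling noise around them:

```python
# Hypothetical check: is a mean of 4.73 distinguishable from 4.66?
# Class sizes and standard deviations are invented for illustration;
# the presentation reports only the two means.
import math

n1, mean1, sd1 = 30, 4.66, 0.77   # instructor A's section (assumed)
n2, mean2, sd2 = 30, 4.73, 0.77   # instructor B's section (assumed)

# Standard error of the difference between two independent means.
se_diff = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
t = (mean2 - mean1) / se_diff
print(f"difference = {mean2 - mean1:.2f}, SE = {se_diff:.2f}, t = {t:.2f}")
# difference = 0.07, SE = 0.20, t = 0.35 -- nowhere near significance,
# so ranking the two instructors on this gap is statistically meaningless.
```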
The Standard Bell Curve

[Chart: IQ as an example of a (somewhat) normally distributed item; the key is the range. Mean marked at center, standard deviation = 15.]

The Reality of Our Tool - Question #1 (17,734 responses from Fall 2000)

[Chart: distribution of responses to item 1, "The instructor communicates course objectives, expectations, attendance policies and assignments." Item mean = 4.54, standard deviation = .77, mean marked.]

What Would the Scores Look Like?

[Chart: standard deviations above and below the mean; maximum score = 5.]
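The contrast these three charts were drawing (an IQ-style bell curve with a standard deviation of 15 versus an item with mean 4.54, SD .77, and a hard maximum of 5) can be reproduced with a small simulation. Assuming, for illustration only, that underlying student opinion is roughly normal:

```python
# Reproduce the ceiling effect behind the bell-curve slides: an item with
# mean 4.54 and SD .77 cannot be normally distributed on a 1-5 scale.
# The underlying-normal assumption here is ours, made for illustration.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(loc=4.54, scale=0.77, size=17_734)  # N from the slide
scores = np.clip(np.round(latent), 1, 5)                # force onto 1-5 scale

for value in range(1, 6):
    share = (scores == value).mean()
    print(f"{value}: {'#' * int(share * 50):<30} {share:5.1%}")
# Roughly half the responses land on 5, most of the rest on 4, and almost
# nothing falls below 3: the distribution is a spike against the ceiling,
# not a bell curve, so an improved teacher's mean has little room to move.
```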
How We Developed the Student Opinion Survey at CPCC

- We started with the old tool.
- An analysis was done (it was rather poor and proof of administrative reactions to current issues).
- The old tool contained 20 questions, mostly about the business of teaching (handing back exams, speaking clearly, beginning class on time, etc.).
- 91% of faculty received all 4s and 5s on each item.
- The less sophisticated students were, the higher they rated their teachers.

Next

- A subcommittee of the Institutional Effectiveness Committee was formed, consisting mainly of faculty.
- The committee spent one year studying the tools of other colleges and universities and lifting what we liked.
- We found virtually nothing for sale.
- What we did find were test banks of questions.

Next, cont.

- We started with 50-60 questions we liked off of other tools.
- We narrowed the questions down.
- We worked through every single word in each statement to make sure they were worded exactly like we wanted and that they measured what we wanted.
- We ended up with 36 questions on the new tool.

Next, cont.

- We worked on the answer scale.
- We found students had trouble processing the Likert scale (it wasn't defined).
- Students liked the A-F grading scale (it took far less time), but faculty didn't.
- We worked through the "excellent, good, fair, poor" type of scale and the "strongly agree to strongly disagree" scale.
- We tested two types during our pilot process.

Next, cont.

- We wanted to create subscales with a wider range of scores than a 1-5 scale: the art of teaching, the science of teaching, the business of teaching, the course, and the student. (The arithmetic is sketched below.)
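A quick sketch of the arithmetic behind the subscale idea: one 1-5 item yields only five possible values, while a subscale built by summing several items yields many more. The item-to-subscale grouping below is invented (the slides name the five subscales but not which of the 36 questions feed each one); it is chosen only so the counts total 36:

```python
# Sketch of how subscales widen the score range. The mapping of items to
# subscales is hypothetical; the presentation names the five subscales
# but does not say which of the 36 questions feed each one.
subscales = {
    "art of teaching":      8,   # e.g., 8 items each scored 1-5
    "science of teaching":  8,
    "business of teaching": 8,
    "the course":           6,
    "the student":          6,
}

for name, n_items in subscales.items():
    low, high = n_items * 1, n_items * 5
    print(f"{name:<22} {n_items} items -> scores {low}-{high} "
          f"({high - low + 1} possible values vs. 5 for a single item)")
```

A wider score range directly attacks the sensitivity problem raised earlier: small genuine improvements can move a 6-30 or 8-40 subscale score even when a single 1-5 item is pinned at its ceiling.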
Next, cont.

- We pilot tested the tool with about 10 classes and followed it up with focus groups (Fall 1999).
- We revised the tool.
- We pilot tested again (many sections, about 400 students) with two scales (Summer 2000): an A-F scale like grades, and an A-E scale with definitions for each score.

What We Found

Students rated faculty differently depending on the scale. Example, question 13 in its two wordings ("13. How would you rate the instructor on encouraging thinking and learning." versus "13. The instructor encourages thinking and learning."):

    A-F Scale                 Strongly Agree Scale
    Mean     3.56             Mean     3.48
    St.Dev.   .74             St.Dev.   .71
    A    241 (68.7%)          SA   203 (58.8%)
    B     75 (21.4%)          A    107 (31.0%)
    C     28  (8.0%)          PA    31  (9.0%)
    D      6  (1.7%)          D      4  (1.2%)
    F      1   (.3%)          SD     0
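The reported means and standard deviations can be recomputed from these frequency counts. The scoring is not stated on the slide, but coding A/SA = 4 down to F/SD = 0 reproduces the reported values exactly, so the sketch below assumes that coding:

```python
# Recompute the slide's means and standard deviations from the counts,
# assuming codes A/SA=4, B/A=3, C/PA=2, D/D=1, F/SD=0 (our inference;
# this coding reproduces the reported values, so it appears correct).
import math

def summarize(label, counts):  # counts ordered from code 4 down to 0
    codes = [4, 3, 2, 1, 0]
    n = sum(counts)
    mean = sum(c * k for c, k in zip(codes, counts)) / n
    var = sum(k * (c - mean) ** 2 for c, k in zip(codes, counts)) / n
    print(f"{label}: n={n}, mean={mean:.2f}, sd={math.sqrt(var):.2f}")

summarize("A-F scale     ", [241, 75, 28, 6, 1])   # -> mean 3.56, sd 0.74
summarize("Strongly Agree", [203, 107, 31, 4, 0])  # -> mean 3.48, sd 0.71
```

Note that the two means differ by only .08, but the shape shifts noticeably: the letter-grade version pulls far more students into the top category (68.7% A versus 58.8% SA).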
More Testing

We took the first full data set (Fall 2000) and did some comprehensive analysis on the tool. We found:
- Students rated the faculty in more difficult classes higher (we and the Deans thought the opposite would be true).
- Students rated mo…