
Tuesday, June 21, 2011

I'm too wordy for a "comment"


Linda,

I do not mean to suggest that "my contribution to a student's education is more valuable than yours." I am saying that my discipline has years of research demonstrating that small class sizes promote better student learning with respect to writing.
You could be right that the groups themselves will be no different from a traditional 134 class. But the idea that there would be 100 students in a room, regardless of how the groups were formed, is the sticking point for me, given our particular discipline. That said, you are right that I am making assumptions here. But as I have said, I am a detail person, and I need to understand how things will work before I can commit. I'm still uncertain what the "linked" courses will look like for students or faculty, and thus I'm not yet comfortable committing to them.

More important to me, based on your comments above, is the issue of assessment. The assessment I'm talking about is programmatic assessment. That is, Sustain is proposing a new and different way of teaching and learning. As you say above, "what we are now doing is not effective"--so somehow someone assessed student learning to make that claim. How is Sustain going to demonstrate that it is "effective"? How are we going to know at the end of the two quarters whether students are performing better and learning more? Are we going to compare grades in the individual courses? All the research I've read suggests that course grades are not indicative of actual student learning, which is why the "assessment movement" focuses on learning-objectives assessment separate from course grades. So if we aren't relying on course grades, what will we rely on?

For instance, when we assessed student writing as part of the ULO assessment project, we had a team of faculty from across colleges read essays from GE A3, C1, C2, C4, and D5 courses, as well as senior projects. Using a rubric, we scored these essays completely independent of their course/assignment grades. We learned that students' writing skills tend to improve in Area A and lower-division C, but drop fairly significantly by the junior and senior years. Students' grades in these courses, however, might differ from their scores on our assessment; course grades involve more than an individual paper, for instance. So typically course grades aren't the best indication of student learning, at least according to the assessment movement.

So that's what I mean by assessment. I'm not talking about faculty assigning student grades--of course I believe that everyone will evaluate students fairly. I'm talking about the kind of data-driven assessment that experiments need. At the end of the program, how will we determine whether it was successful? What does "success" mean to us? Are there plans to track these students through their entire CP careers? Doing so would give us even more evidence to demonstrate that the project increased students' abilities (and perhaps even their motivation, as Ginger mentioned). Will there be student and faculty surveys pre- and post-project? Such indirect assessment would be useful, too, I think.

In our program, for instance, we survey majors in the middle of their required curriculum and at the end. We also assess student writing independent of any class--students might write an explication essay, for example, and faculty score it according to a rubric we developed to test for explication skills. We use the data we collect to make curricular improvements. For instance, we created a new course this year because our majors weren't getting some of the knowledge and skills we thought they should. In three or four years, we'll test those students to see whether they have better mastery of that knowledge and those skills than our current students do. That's how we'll know whether the curricular change we made actually improved student learning (otherwise, why did we make it?).

This sort of assessment seems imperative to me for a project like Sustain. Doing it will provide a wealth of information about student learning and faculty collaboration, and it will give us data to help more reluctant faculty and units see the value of such an approach.


4 comments:

  1. Oh rats! Kathryn, about your question of assessment: the type of assessment you're referring to is exactly what NSF is interested in and what we have received the funding for! I'm sorry we didn't get to this conversation. If you like, we are happy to show you the plan and introduce you to the researchers who are contracted for the assessment. This really is where the majority of the $ is going...to analyzing the developmental changes in SUSTAIN participants (faculty and students) and a comparison group. In my view, it is a level of rigor in "programmatic assessment" that has never been attempted at Cal Poly (probably because it costs so darn much).

    As I said, I'm happy to show this to you, and if you have ideas on it, I'm also happy to hear them. We have been working with a set of researchers from Boston on this dimension. It is not perfect, but it does examine deeper questions of developmental change (not whether someone can calculate something, which is the grade-level stuff).

  2. About the room size...again, too bad we didn't get to answering these questions. We are unlikely to have more than 35 people in a room at a time. We knew this because, very practically, rooms at Poly are small. We will be meeting in 3-4 rooms in close proximity for almost everything we do. There may be occasions when we meet somewhere that will accommodate a larger group, especially when the community partners are involved.

  3. Oh, Kathryn, by the way, I am interested in the research on class size. I actually haven't seen any research around this issue. Much of the literature we've been poking around in examines the ecology of the learning environment, including pedagogy, faculty disposition, and so on, but not really anything about class size. So if you could point me to some of this research, I'd be very grateful. It has been our commitment to ground design decisions about the form of the learning in evidence.

    If you're at all interested in the research we have used for the initial field studies, I can share that with you. It is largely in the area of motivation: the link between autonomy and engagement for learning, the role of meaningfulness in learning engagement, and so on.

  4. Linda--I would love to see the research you mention here. In my classroom experience, these things have proven true.
