Tuesday, June 21, 2011

Further reflections: Kathryn

I’d like to respond to Pete’s posting, but my “comment” turned into another entry, so I’m putting it here instead.
I really appreciate Pete’s thoughts, but I think maybe I wasn't as clear as I could have been.
I don't distrust Pete, or Linda, or any individual in the room. In fact, our conversations over the past quarter have increased my belief that everyone in the room wants students to have a meaningful educational experience. And English doesn't inherently distrust any particular department or college. But we do have a history with some departments/colleges that indicates that some individuals don't value our discipline. My own history stems from working with the GE program for ten years. The fact that CENG has a different and reduced GE template suggests that CENG historically hasn't valued a well-rounded education for its students. But with respect to the individuals I've met in CENG, I would say that all of them value GE. So there's a disconnect between the "group" and the individual, and that's what I was trying to get at with trust. It's easier to trust individuals you work with and know than it is to trust an entity like a department or college. Does that make sense? But even so, I wouldn't say that I or the English Department distrusts any college or department inherently.
Pete asked what it would take for English to want to participate. I honestly can’t answer that until the fall when I can talk to my colleagues. Ginger, for instance, does want to participate, and there may be others who do as well. But one concern is this: if English 134, Coms 101/2, and English 145/8 are all offered as part of this project, it means that most, if not all, Sustain participants will complete all of their Area A GE requirements through this project. I’m wondering about that. There is no other GE area for which this will be the case, and Area A provides a foundation for all other writing-intensive courses in the curriculum. So if the experiment doesn’t work, an entire GE foundational area will be “sacrificed” and a primary University Learning Objective will go unmet. Of course, if the experiment does work (and we all hope it will), students will hopefully be better prepared. But I do worry about this issue, and it’s one reason that I proposed that English courses be tied to the projects but maintain a “regular” time slot. Additionally, I worry that if English classes are part of the “linked” courses, we lose the intimacy of small classes that we have fought hard to get and keep. National data illustrate over and over again that writing classes need to be kept small (even smaller than ours currently are) in order to achieve the best results. So in a way, the linking that Sustain is proposing actually violates a long-standing disciplinary tenet. Perhaps there are ways around this issue, but right now I don’t see them.
I am also still unclear about how we will assess the success of this project. How will we assess these students' progress as compared to others? And what are our goals? Do we expect that Sustain students will perform better than traditional students? Or are we shooting for "as well as"? Who will perform this assessment? These questions, for me, are still looming large, and I am hesitant to enter an experiment without understanding the assessment methods. Maybe these issues have already been decided and just haven't arisen in our discussions, but I don't know the answers.
Once again, I think I sound more negative than I feel. I am not opposed to this project, and I think collaborative teaching and learning can be extremely beneficial to students and faculty. But I’m not sure that English is willing to jump in with both feet just yet. Our “one foot in” approach, that of having our courses be tied to students’ projects but keeping a discrete class time, seems like the best approach for us as we continue to explore the issues above.


  1. Hi Kathryn, these comments are helpful. Thank you so much!

    About the risk to "Area A"...I wonder if you might consider that all programs are in the same boat as you? All of the programs that are participating are at risk, so to speak. I think the boundary of "Area A" being at risk is really very artificial, in that there is no "Area A" in the real world; it is something we have created. It is no more at risk than something someone else might highly value, such as statistics. We could draw another boundary and someone could say "Mechanics is at risk" because it is a critical area of physics and it is within the grouped courses. For me, the distinction of Area A being more at risk is not present.

    But if there is something saying "this is too important to risk," it seems to me you are prioritizing the value of that thing (whatever it is) in its current form over all the others. How do you know (how do any of us know) that the current form of what we are doing is its optimized form? Aren't we at just as much risk (more, I'd argue, because there is much in the research showing that what we are now doing is not effective) by keeping things as they are?

    For me, our collective inability to collaboratively change the way we do things is the way in which we all are participating in a system of "my contribution to a student's education is more valuable than yours."

    I understand your concern about the intimacy of small classes in writing, but how do you know that the experience in the grouped classes would not in fact be more intimate? Maybe the groups would be smaller? I'm asking these questions in the spirit of seeing and questioning our assumptions. Really, how are we certain that our assumptions are true?

    These are the things we didn't get a chance to talk about.

    Then, would students be assessed differently than they are now? How are you seeing assessment (formative or summative) as necessarily different than it is now?

    In the current system, I fully trust my colleagues teaching a course to come up with a reasonable and effective way to evaluate students' mastery. I'm confused about how this would be any different. How are you seeing it as different? Or am I confused? Are you talking about wanting to see the process that others are using to assess students' individual learning? Maybe I'm confused about what you mean.

  2. What do you mean by failure of the experiment? Maybe we should define this. Is it that students do not pass the courses? We have failure now. Is it that the learning objectives aren't covered? This probably happens now also, but there is no body (thankfully) that polices coverage of learning objectives. I am not sure what failure would look like.

    I am wondering if you are thinking about the assessment that is funded by the NSF grants. We have not really laid this out either, but that assessment is generally at a developmental level, not at the level of content learning objectives. The content LOs, as Linda said, will be assessed by the appropriate disciplinary faculty.

    All these questions bring up, for me, the point that we are having a hard time communicating. As we move forward to recruiting, this will continue to be an issue. I don't know how to resolve it.