I saw a wide variety of sessions at SIGCSE and had some amazing conversations.  With my position at CSNYC, I find I approach SIGCSE a little differently and assess papers and panels in a slightly different way.  Previously, I looked for practices that either (1) echoed what I was seeing in my own classroom and offered clarity or (2) offered something new and challenged me to think about my classroom, students, or policy in a new way.

Now I look for a few different things.  First, is the paper aligned with what I know of cognition, cognitive science, or theories of learning? If not, have the authors collected enough data to convince me that the result is not an artifact of an unmeasured influence rather than what is being claimed?  Second, could this become one of the implementable recommendations I make to the teachers I work with? Does it address a need my teachers have, or will it provide them with a tool or technique that can be applied easily? Third, could this yield a potential partnership?  CSNYC is beginning to craft a research agenda to measure implementation in the city, and we are looking for partnerships that leverage expertise as well as time for rigorous research.

With this shift of goals in mind, there were some standouts, as well as some papers I want to add to an “examples of not-good research” list on a training page for SIGCSE reviewers.  Let us focus on the good, and remember that the list is colored by the sessions I attended; I have not read the full proceedings and am not trying to imply “best in conference.”

Writing exam questions makes for better assessment outcomes.  I love the PeerWise research.  Aside from having a tool that is iterated on and used with lots of students, Paul Denny and the group at Auckland are using experimental design to be able to claim causality (not to mention I’m biased towards quantitative data from my CMU background).  In his paper <insert title>, Paul requires students to either write problems or simply practice with problems that others have written.  With a very small treatment (writing 3 questions) but a very large N (over 700 students), Paul found that authoring questions had a significant impact on assessment performance.

I also went to a session on grand challenges in computing education.  It was a discussion session about what people would like to see come out of CS Ed research and what the big challenges are for CS Education researchers (such as straddling two departments).  There were some interesting points raised, and my Twitter feed (@lsudol) has a running list of them from Friday afternoon.

Finally, on Saturday morning I attended the Technology We Can’t Live Without session and saw Eric Allatta present some of the tools he has been developing with New Visions.  Amazing.  He has a way to display the rubric, grades, student code, and data about students’ work patterns on screen simultaneously.  Eric is devoted to streamlining his processes so that he can not only be a productive, engaged teacher, but also a good father and husband.  This devotion to efficiency has led to some of the most efficient teaching practices I’ve seen from a third-year teacher.

Overall, SIGCSE was a great chance to connect.  I had some fantastic meetings, lent some expertise to friends whose missions I believe in, and had the pleasure of hanging out with folks I only see a few times a year but whose lives I follow on social media.  It is amazing to get the chance to step out of implementation into a space where people think you are doing good work and remind you that, despite small setbacks from time to time, the bigger picture is amazing.
