Jumping Back In – Academic Papers All CS Teachers Should Read

I find that my browser tabs are often about 25% active work or feeds (email, Google Docs I’m working on, Twitter, Trello, etc.) and 75% things I am waiting to read.  Right now I’m working on reading some follow-up references for ICER, some articles from the CS Education list put together on Twitter by Aman Yadav, and some recently released CS curricula (like San Francisco’s).

What pushed me past the threshold to write this post was the article “Five Academic Papers All Teachers Need to Read.”  In addition to the five papers presented there, I thought I would expand upon the list, as many teachers and faculty are preparing to return to the classroom for another year of eager young minds, who may (or may not) be waiting for you to educate them. First, I recommend you go and read those five papers. (Now – seriously – before looking at my list and getting distracted, at least go print them out and add them to your reading pile.)

Next, what are the papers that every CS teacher needs to read?

(1) Lister, R., Leaney, J., First Year Programming: Let All the Flowers Bloom (2003) SIGCSE

This paper changed the way that I assess and has formed a cornerstone of the assessment strategy we use at the Academy for Software Engineering.  Although Lister and Leaney are talking about examinations, I apply this to programming assignments.  Instead of “Differentiated Instruction,” I am a proponent of “Differentiated Assessment” – and not just by making our stronger students do *more* work, but deeper and richer work involving higher levels of Bloom’s Taxonomy.  Eric Allatta at AFSE likes to call this “Multiple Exit Points,” again flipping the educational jargon of “Multiple Entry Points” on its head. We start every student at the same place, with a requirement 0 that will get them to a passing grade by re-implementing worked examples from notes. (All kinds of good things here – ask and I will share.) Requirements 1 and 2 require more thought and reasoning and earn higher grades. An important part of this strategy is that ALL THREE REQUIREMENTS ARE THE SAME ASSIGNMENT. Here’s an example from a Scratch project about loops and variables called Font Size.

(2) Lister, R., The Middle Novice Programmer: Reading and Writing Without Abstracting (2007) NACCQ

Yes, I am a fan of Raymond Lister.  This paper is for anyone (college or HS) who teaches programming and thinks that reading and writing code are just two sides of the same skill. They are not. Nope, don’t tell me they are – go and read it and then we can talk.

(3) Institute of Education Sciences, Encouraging Girls in Math and Science (2007)

This practice guide sums up ways CS educators can easily encourage more girls to persist in their classrooms. Recommendations such as providing clear feedback, highlighting role models, and connecting to real-world applications are tied to specific examples.  The practice guides written by IES are awesome and targeted at educators, so they are actionable.

(4) Institute of Education Sciences, Organizing Instruction and Study to Improve Student Learning (2007)

What can I say, IES was on a roll in 2007 for generalizable educational practices. Again written for teachers but research-driven, this is another staple of the Introduction to CS Education class I taught at CMU.

(5) Mark Guzdial’s Blog

Let’s face it, new research comes out every day (or every few months, as the research cycle goes). Rather than point to one more paper, I would recommend Mark’s blog for updates, shameless student paper promotions, and careful thoughts. Lively discussions in the comments too!




August 24th, 2015, posted by ldelyser

Progressive Education and the Promises of CS Education

I just finished reading “Loving Learning: How Progressive Education Can Save America’s Schools” by Little and Ellison.  It’s been in my reading queue since Amazon recommended it to me, and it came up in library request roulette.

Throughout the book, it surprised me how the ‘features’ of progressive education often align with the features of computer science education that are often touted in the media, such as open-ended projects, just-in-time learning, real-world scenarios and issues, and sparking interest in students who were otherwise dissatisfied with school.

While I am on the fence about ‘progressive education’ vs. traditional education, especially in high schools where students have been acculturated into one or the other for years before arriving at our doors, I see many parallels between the lessons of progressive education and the attempts to bring computer science to the mainstream and incorporate the maker movement.

First, there is an ongoing conversation among instructors of early courses about balancing content and student interest, skills and open-ended projects. At AFSE we constantly have that conversation – how do you find the balance between student interest, self-directed projects, and student choice on one hand, and the need to cover specific curricular elements on the other? Identifying the appropriate mastery goals for students (what every student should know or be able to do at 100% before moving to the next course) and including instruction and practice in those goals is important. How do you tailor instruction in a classroom of 30+ students (twice the size of what progressive education argues should be the max), provide appropriate individualized feedback at two levels – micro for the assignment and macro for the overall progression through the course – and tailor assessments? It’s a monumental job and requires a balance between outcomes and student choice.

Apparently, traditional progressive education actually holds high standards, just assessed through alternative means.  From the book:

Over the past century, progressive schools have put a lot of effort and attention into developing effective alternative forms of assessments. Instead of the one-size-fits-all standardized exam … we have always favored the sorts of evaluations supported by research and described in the landmark NRC report How People Learn, as those that “provide students with opportunities to revise and improve their thinking, help students see their own progress over the course of weeks or months, and help teachers identify problems that need to be remedied.”

This seems to fit well with the iterative nature of programming assignments, as well as the College Board’s move to portfolio assessment for the CS Principles course.

Sean Stern, Eric Allatta, and I wrote a paper for the Every Child a Coder workshop in Boston this June.  I will post more about the workshop with a link to our paper when it becomes available, but the themes in the paper address our balance between content and rigor, assessment, and finding meaningful experiences for students.

More to come, but I’m curious what others see.

April 26th, 2015, posted by ldelyser


I saw a wide variety of sessions at SIGCSE and had some amazing conversations.  With my position at CSNYC I find I approach SIGCSE a little differently, and I tend to assess papers and panels in a slightly different way.  Previously, I looked for practices that either (1) echoed what I was seeing in my own classroom and offered clarity or (2) offered something new and challenged me to think about my classroom, students, or policy in a new way.

Now I look for a few different things.  First, is the paper aligned with what I know of cognition, cognitive science, or theories of learning? If not, have the authors collected enough data to convince me that the result is not an artifact of an unmeasured influence rather than what is being claimed?  Second, could this become one of the implementable recommendations I make to the teachers I work with? Does it address a need that my teachers have, or will it provide them with a tool or technique that can be applied easily? Third, could this yield a potential partnership?  CSNYC is beginning to craft a research agenda to measure implementation in the city, and we are looking for partnerships to leverage expertise as well as time for rigorous research.

With that shift of goals in mind, there were some standouts, and some papers I want to add to my “examples of not good research” on a training page for SIGCSE reviewers.  Let us focus on the good, and remember the list is colored by the sessions I attended; I have not read the full proceedings and am not trying to imply “best in conference.”

Writing exam questions makes for better assessment outcomes.  I love the PeerWise research.  Aside from having a tool that is iterated on and used with lots of students, Paul Denny and the group at Auckland are using experimental design to be able to claim causality. (Not to mention I’m biased towards quantitative data from my CMU background.)  In his paper <insert title>, Paul required students to either write problems or just practice with problems that others had written.  With a very small treatment (write 3 questions) but a very large N (over 700 students), Paul found a significant impact from authoring questions on assessment outcomes.

I also went to a session on grand challenges in computing education.  It was a discussion session about what people would like to see come out of CS Ed research and what the big challenges are for CS education researchers (such as straddling two departments).  There were some interesting points raised, and my Twitter feed (@lsudol) has a running list of them from Friday afternoon.

Finally, on Saturday morning I attended the Technology We Can’t Live Without session and saw Eric Allatta present some of the tools he has been developing with New Visions.  Amazing.  He has a way to put the rubric, grades, student code, and data about student work patterns on the screen simultaneously.  Eric is devoted to streamlining his processes so that he can not only be a productive, engaged teacher, but also a good father and husband.  This devotion has led to some of the most efficient teaching practices I’ve seen from a third-year teacher.

Overall, SIGCSE was a great chance to connect.  I had some fantastic meetings, lent some expertise to friends whose mission I believe in, and had the pleasure of hanging out with some folks whom I only see a few times a year but whose lives I follow on social media.  It is amazing to get the chance to step out of implementation into a space where people think you are doing good work and remind you that, despite small setbacks from time to time, the bigger picture is amazing.

March 9th, 2015, posted by ldelyser

The recursive calls got away from me…

Hello world again!

Apparently, finishing a PhD sends you down a spiral of recursive calls for which a base case is hard to find.  Please be patient with me as I update the main files at csadvocate.org, fix the sidebar information and widgets on this blog, and bring it back to fully functional again.

I was unhappy when, in the midst of the craziness that was grad school, I let my domain registration slip and someone else took it.  To my surprise, however, my domain hosting never lapsed and still had all of my data from the old site.  I will repost the class web pages with the assignments and worksheets in case anyone would like to use them as (slightly outdated) references, and I am going to pick up blogging again now that all of my free writing cycles are not devoted to the thesis.

Some housekeeping:

CMU awarded me a Ph.D. in Computer Science Education in December of 2014.  The thesis is available here.  I have a feeling I will be blogging about parts and submitting parts to various conferences over the next year.  I am presenting Friday morning at SIGCSE about the levels of abstraction in student think-aloud statements.

For those of you who don’t know, I am now working for the NYC Foundation for CS Education as a Program Manager.  Since 2012 I have been primarily focused on the opening of the Academy for Software Engineering (AFSE), a CS-focused, unscreened public high school in NYC.  This past summer I transitioned to a larger role at the foundation, and I now work with AFSE, our sister school in the Bronx, BASE, and other programs in the city.  It is a very exciting time, as we have spent the last two years scaling programs and now have programs in over 100 schools.

My current interests involve providing meaningful CS to a wide variety of students in NYC, ensuring equity of access and participation (two very different things), and thinking about what research to conduct in this amazing situation I’ve been placed in.

More to come, welcome back!  Please ignore the dust bunnies.

March 5th, 2015, posted by ldelyser

Intelligent Tutoring Systems and Educational Data Mining ’12

If you follow my Twitter feed (@lsudol) you will know that I just finished a 10-day double conference trip to the co-located Intelligent Tutoring Systems and Educational Data Mining conferences. I presented a poster at ITS and a short paper at EDM.

The conferences themselves were a great mix of learning sciences and data analysis, both of which are so important to computer science educators. In the ITS community the focus is on building intelligent systems (ones that respond to students as individuals rather than through prescribed interactions) in order to foster deeper learning. In CS we actually have a rich history of ITSs, starting with the Lisp Tutor developed at CMU. The Lisp Tutor and many others rely on a detailed, rule-based cognitive model that evaluates any misconceptions or “buggy rules” that students may exhibit as they produce code. Having been designed a number of years ago, the tutor relied heavily on cognitive scientists running a large number of protocols with students to determine what those rules would be and how to correct them. Now, much of the work focuses on extracting the types of behaviors that students exhibit from data rather than from qualitative interviews.
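To make the “buggy rules” idea concrete, here is a toy sketch of my own (not the Lisp Tutor’s actual machinery): hypothetical misconception patterns paired with the feedback a tutor might give, matched against student code with simple regular expressions. Real cognitive models are far richer than pattern matching, but the shape is the same – recognize a buggy rule, respond with targeted feedback.

```python
import re

# Hypothetical "buggy rules": each pairs a misconception pattern with
# the feedback a tutor might give. Toy illustration only.
BUGGY_RULES = [
    # Assignment inside a condition, e.g. "if (x = 5)" instead of "if (x == 5)"
    (re.compile(r"if\s*\([^=!<>]*[^=!<>]=[^=][^)]*\)"),
     "Assignment (=) used where a comparison (==) was likely intended."),
    # Loop counter starting at 1, a common off-by-one misconception
    (re.compile(r"for\s*\(\s*\w+\s*=\s*1\b"),
     "Loop starts at 1; did you mean to start at 0?"),
]

def diagnose(code: str) -> list[str]:
    """Return feedback for every buggy rule the code triggers."""
    return [msg for pattern, msg in BUGGY_RULES if pattern.search(code)]

feedback = diagnose("for (i = 1; i < n; i++) { if (x = 5) { y++; } }")
```

A real tutor would of course model the production rules behind the code, not just its surface text; this is only meant to show how misconception detection drives individualized feedback.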

I saw some great papers and would highly recommend them for anyone interested in CS education and taking a strong quantitative look at how students learn. We have some amazing data sets (Jamie Spacco’s and Matt Jadud’s separate works for a start). Also, the work that Tiffany Barnes and her students are doing is exciting.

At the Educational Data Mining conference, much of the work centered around data mining techniques that offer some predictive power as to whether a student will answer the next question correctly, or achieve competency on a post-test. The idea is that if we can predict that from data, then we know when to intervene in the learning process with scaffolding or support as needed. Really exciting work here as well, with individualized parameters being included in models that have been around for a long time, as well as some great generalizable work from Martina Rau about multiple representations and distributed practice during learning. Email or tweet at me if you want some recommendations – because depending on what you work on, my recommendations will change. Or just go and read all the abstracts – the proceedings of EDM are freely available online.
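One of those long-standing models is Bayesian Knowledge Tracing. The sketch below is my own minimal version, with made-up parameter values, just to show the shape of the observe-then-update loop that the individualized-parameter work builds on; it is not any particular paper’s implementation.

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayes-update the probability
    the student knows the skill given the observation, then apply a
    chance that the skill was learned before the next opportunity."""
    if correct:
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * learn

def predict_correct(p_know, guess=0.2, slip=0.1):
    """Predicted probability that the next answer is correct."""
    return p_know * (1 - slip) + (1 - p_know) * guess

# Trace a student who answers correctly three times in a row.
p = 0.3  # prior probability of knowing the skill (made up)
for obs in [True, True, True]:
    p = bkt_update(p, obs)
```

With each correct answer the estimate of mastery rises, which is exactly the signal a system would use to decide when to withdraw scaffolding.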

Overall, some great papers; some great work for me – I built some new models on my thesis data as part of a pre-conference workshop – and some great conversations that I hope to turn into collaborations in the future. I’m really excited by Ryan Baker’s move to Columbia (and NY), Zach Pardos’ work and his new position, the IRT details from MIT physics education researcher Yoav Bergner, and once again Tiffany Barnes’ work.

June 25th, 2012, posted by ldelyser

Let’s just blame the Intro CS…

It should not come as a surprise that I have not been writing many blogs lately. I am focused on writing as many pages of my thesis document as possible per day and that means any other writing gets put aside.

But, as I was reading today’s ACM TechNews post, I came across this article.

Don’t get me wrong, I believe that a well-rounded curriculum is needed to produce today’s and tomorrow’s computer scientists. And options for students who wish to get a minor in CS are also good. But blaming a lack of embedded-programming skills on universities’ focus on Java “in their introductory computer science courses”? Really? In fact, one expert was quoted:

“To be blunt,” Dewar wrote, “adopting Java to replace previous languages used in introductory programming courses – such as Pascal, Ada, C or C++ — was a step backward pedagogically. Many universities went to Java because ‘that’s where the jobs are,’ but ironically may have produced a generation of programmers with over-specific but superficial skills who are now losing jobs to overseas competition with broader and deeper talents.”

I think this should start a larger conversation in our community of CS education researchers. What are the goals of our introductory courses? To get students a job (or at least a foundation in a language that will get them a job)? To teach them basic skills in programming? To introduce them to the discipline of computer science? To provide a service course to the rest of the university for students in the sciences, mathematics, and humanities who may need to implement small programs in order to solve problems in their own disciplines?

Perhaps our intro courses are trying to be too many things to too many people. Perhaps we need to think critically about a CS for ____, where the blank gets filled in by the primary goal of the course. And if embedded systems is such an area of need within CS, maybe we need to include a course in the undergraduate curriculum focused on it. (In addition to the intro courses, data structures, algorithms, and other courses the author of the article indicated were taking too much of the students’ time.)

May 7th, 2012, posted by ldelyser

They will drop their coffee

This post is inspired by a long conversation with a friend. I recently spent a week with a woman colleague from Berlin who was visiting the US. She had spent a couple of months here working with various faculty members and ended her trip with a week to see NYC. We took one day to go shopping together at a nearby outlet mall, and as I have been doing lately, I was looking for some professional clothes to wear to meetings and eventually work when I am no longer a grad student.

Now, being a woman, and living and working either in or just outside NYC, I have a slightly less casual opinion of what is work “appropriate,” even if it is not work required. To be frank, most of my work clothes come from NY and Company or Ann Taylor and are slacks, shirts, skirts, and dresses. I like to feel good about the clothes I’m wearing, especially when going to an important meeting or a presentation, and even when I was teaching at the HS and college level I tried to always dress nicely – often more nicely than was required.

My friend and I found some wonderful dresses – very work appropriate – especially with a cardigan or jacket over them. Her comment was that she could never wear something like that to work or all the men in the department (CS) would “Drop their coffee” at her appearance.

This really made me think.

It’s not the first time I have heard women CS colleagues specifically say they could not dress in what any other work environment would consider professional (and not even revealing or sexy – just nice) because the men in the department would tease them for dressing up for work.

As a discipline that lacks gender balance we spend lots of time taking pride in who we are. I know many women faculty who own “This is what a computer scientist looks like” T-shirts, and are proud to speak out about the benefits of joining a CS department.

My response to my friend was, “So? What’s wrong with making them drop their coffee?” If we want women and girls to see CS as an environment for all, a professional atmosphere that is welcoming, perhaps we need to think about the way we dress as well. High school students are surrounded by media images of professionals, from the ads they see to the TV shows they watch. And clothing plays a large part in unspoken messages about who a person is and what they do. I know that many of the CS professionals who work in companies or in industry dress much better than the academics – but the academics are who students see and interact with first.

So here is my challenge. If you are happy with the clothes you wear to work every day, don’t change them. If you sometimes pass up a pretty blouse or a killer pair of shoes because you are worried that the men in the department might “drop their coffee” – go ahead, buy them and wear them with pride. They may drop the coffee the first couple of times, but after a while they will get used to it.

April 10th, 2012, posted by ldelyser

Standards vs. Implementation

One of the largest criticisms of the Running on Empty report is that the states that score highly in terms of standards are some of the states with the lowest implementation levels of those standards (as measured by CSTA surveys and the number of students taking the AP Computer Science exam). Today, Change the Equation posted a commentary entitled “Standards Are Less than Half the Battle.”

This may be especially true in computer science, as the courses are most likely not required, and often only count as an elective credit for graduation. There are absolutely cases where exceptional standards are only partially or sparingly implemented, and with no assessment we cannot tell what is going on in classrooms.

Does this mean that standards are not important? No. They are a starting point. They express to school officials, who may not be experts in a particular domain, the important concepts in the field and the appropriate learning outcomes for students at a variety of levels. I applaud Change the Equation for pointing out that the high and low ratings from outside agencies may not reflect the ground truth. Perhaps we need to include parallel discussions about both improving standards and highlighting the classroom practices that would accompany full implementation of excellent standards in a school.

February 2nd, 2012, posted by ldelyser

News implies research hypothesis and leads to pedagogical interventions

Subtitle: Attrition, Attribution, Self Efficacy, and Feedback

So, for those familiar with the work I am pursuing for my thesis, it should be no surprise that I believe feedback is an important part of any educational environment. The type of feedback we choose can have profound effects on student learning, but I am also convinced that it can impact motivation as well. This post was inspired by a recent WSJ article that was also picked up by the NYT.

There are many research hypotheses that have made their way into my “folder.” I keep a folder in my desk where I jot down notes based on other works, or just ideas for research studies I’d like to conduct if I have the time. I believe this folder will become useful if I ever want to make a change in my general research direction, or if I eventually have students who are looking for a project but are not sure where to start. Each idea contains references to the presentations or works that inspired the thought; I would let my students look through it for something that interested them and then have them read the work so that we could talk about their take on whatever principle applies to CS (or math or ed-tech) education. A very large subsection of that folder belongs specifically to self-efficacy and its impact on learning, performance, retention, engagement, and perseverance.

Computer science is an interesting discipline. Most scientific disciplines have a level you reach where the problems you are trying to solve are not prescriptive and take a number of attempts or false starts before you are on a path to success. For complex math proofs, it is often the case that you try several different methodologies before hitting on the one that works. For the natural sciences, there is a reason it’s called an “experiment” – what you try does not always work the first time when you are doing lab research. CS obviously has these problems as well – but what’s interesting is that at the novice level, even in the first course, we are often already in that state. Students are asked to complete a programming assignment that may have an abstract connection to what was done in class, but is rarely a direct translation of the lecture. Also, the feedback is much more immediate (in the form of a compiler and test cases) than in the other disciplines, where you might get a problem set back the next week with a marked grade.

Therefore, in CS, a novice’s experiences consist primarily of something telling them that they are “wrong.” Even when you fix one problem, the tools that we use when teaching do not celebrate the fix, but present the next bug. This has to have a negative impact on student self-efficacy and, in turn, their belief that they are being successful in this course of study. At some point I want to study this, and what we can do about the messaging that our pedagogical environments give our students in order to help them understand that there is more work to do to successfully complete the program, but also to celebrate the small victories in the problem-solving process early on.

Because I know a number of teachers read this blog, I’d like to include here some of the untested things I’ve used in my classroom in the past to scaffold students, and would like to evaluate rigorously in the future.

First, when introducing a new concept, give small, short programs that are very similar to the lecture, or have students code with you in class. Whether you choose to “grade” these at the same level as other assignments or not (i.e., 5-point practice programs vs. 100-point larger projects), the quick fix of putting together a short successful program can front-load the student’s self-efficacy (SE). Give them some early successes to build confidence before tossing the larger, more rigorous assignment at them.

I often did this in an embedded way. Using Bloom’s Taxonomy, I would have three graded levels for each assignment. A passing grade involved implementing what I called a one-step: a program one step further than the lecture. Students could type in the notes (any code I posted online would be in .gif form so they could not just copy/paste but actually had to type) and make one change. Step two would be a little harder and involve some application of other knowledge or integration of other concepts; this would earn them either a B+ or A-. In order to get an A they had to “impress me”: the last points were open-ended and required that students create some application of the program.

A great example is when I taught students to use buttons and click events. The lecture would show how to create one button and have a single text box pop up when the button was pressed. The one-step was to create a program with two buttons and two messages. The next level involved maintaining state across button presses – counting how many times each button was pressed. The last level was up to them, and I got everything from touch-screen-style McDonald’s ordering menus, to games, to a four-function calculator.
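Toolkit details aside (the original assignments were GUI programs), the middle requirement – maintaining state across button presses – can be sketched without a window at all. This Python sketch is my illustration, not the original classroom code:

```python
class ButtonCounter:
    """GUI-free model of the middle-level requirement: two buttons,
    each remembering how many times it has been pressed."""

    def __init__(self, *names):
        # One persistent counter per button; this is the "state
        # maintained across presses" the requirement asks for.
        self.counts = {name: 0 for name in names}

    def press(self, name):
        """Handle a click event: update state and return the message
        the text box would display."""
        self.counts[name] += 1
        return f"{name} pressed {self.counts[name]} time(s)"

ui = ButtonCounter("Left", "Right")
ui.press("Left")
ui.press("Left")
message = ui.press("Right")
```

The one-step version would drop the counters and just return a fixed message per button; the open-ended level is whatever the student builds on top of handlers like these.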

A second way I scaffolded students within each assignment was the use of unit testing. In addition to helping me out by providing automated grading, students could see how many tests they passed and failed, and rather than showing just the next error, many interfaces will show you all tests – with passed tests in green. Watching the number of green tests increase over time is a good way to feel like you are making progress, even if there are still errors to correct.
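As a rough illustration of the pattern (with a hypothetical word_count assignment, not one of my actual graded projects), a suite can be run programmatically so that progress is reported as a count of passing tests rather than only the next failure:

```python
import unittest

def word_count(text):
    """Hypothetical student submission for a small assignment."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(word_count("hello world"), 2)

    def test_empty(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_spaces(self):
        self.assertEqual(word_count("  a   b  "), 2)

# Run the suite programmatically and report progress as a count of
# passing tests -- the "green tests" number the student watches grow.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTests)
result = unittest.TestResult()
suite.run(result)
passed = result.testsRun - len(result.failures) - len(result.errors)
```

Surfacing `passed` out of `result.testsRun` (instead of stopping at the first red test) is the whole point: each fix visibly moves the number up.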

Overall, I think self-efficacy is an important measure of just how well we are creating content and pedagogy that have a chance of engaging students and reinforcing learning.

November 11th, 2011, posted by ldelyser


Recently I came across the 100Kin10 website. (Yes, I sent an email to Jan Cuny about it to see if she knew about it and had thought about how the CS10K project for CS educators fits with it.)

The underlying premise of the initiative is that in order to produce the next generation of scientists who imagine, build, design, and create the future technologies and products that will keep America competitive in the next century, we need great teachers. Students need to see STEM fields as something exciting and worth the time and effort to study. The goal is to train 100K new teachers in the next 10 years. Based on statistics from the NCES (National Center for Education Statistics), approximately 170K new teachers start work in a school district each year (based on 2005 data). Most of these replace departing teachers, although a few are new hires for expansion. Even if only 1/3 of these teachers are STEM teachers, that still means roughly 56,700 teachers being hired each year. I wondered if 100K was a little small of a number.

The movement is just getting started, and schools of education are committing to training teachers for the program. However, as a computer science education PhD who is looking for a faculty position in the fall, it is becoming very clear to me that the movement is going to be very heavily science- and mathematics-based. My fear is that CS departments are going to think I belong in the education school, and that there are NO CS education positions in ed schools (so I’m applying to math ed positions – I have a math ed undergrad degree, spent 10 years teaching high school math, and CS counts as math at the K-12 level in NY). If I am having trouble finding such a position, what’s going to happen to our share of the 100K?

Thinking about the next round of CE21 grant applications, I considered how to train teachers for the CS10K project (a similar idea, on a shorter timeline and with only 10K teachers). It would be helpful if I could get a position at a school of education and offer courses in pedagogy and methods, either for preservice teachers or for in-service credit for current teachers. I think either a blended or a wholly online approach may be the best way to reach the teachers who would be interested in such a course. Still thinking.

Overall, this post is a little bit of a ramble – I’m hoping that some of my followers who are also bloggers pick up the movement, start making the CS education community aware, and hopefully reach out to their schools of education to find out 1) if they are a cooperating institution, and 2) how CS can get in the mix.

November 8th, 2011, posted by ldelyser