Middle school math, continued

Me, aloud: “What should I write about tonight?”

Lisa: “Lisa!”

Challenge accepted.

Another evening of supervising/coaching/cheerleading middle school math homework. They’re focusing on simplifying expressions where a subexpression in parentheses is being subtracted. As I wrote about previously, there are technically a few different ways of approaching this (and that language was for the math-savvy, not something I would actually say to a middle-schooler).

In the end we settled on the “distribute the minus sign” approach, since they’ve been practicing the distributive property as well. She still doesn’t see “the point” of simplifying expressions, and perhaps at this point she doesn’t need to. But she’s willing to play along if for no other reason than she cares about doing well in school. I’ll take that.

She had over-learned a rule that I suspect many of her classmates have as well: “when there are things in parentheses you reverse the signs.” Nope! Not if the “things in parentheses” are simply being added. So we had to purge the “rule” and try to get to an explanation for why parentheses can essentially disappear when the subexpression is added, but the signs all flip when it is subtracted. Still not the way I would choose to structure a curriculum, but it’s what we’re stuck with at the moment.

What was really gratifying, though, is how she was adding, subtracting and multiplying integers essentially in her head and getting the signs right about 95% of the time. This reminds me of what my high school math teacher once claimed: when you are studying level N of mathematics, you’re really solidifying everything you learned at level N-1.  Her mother thinks she’s just coming out of metamorphosis – her brain had turned to liquid goo at the onset of adolescence and is starting to solidify back into something resembling a brain again.

I’m keeping it short tonight – time for a good night’s sleep. Yay for a short work week this week!

When prior scores are not destiny

This post is for statistics and assessment wonks. I’ve been really engaged in a bit of data detective work, and one of my findings-in-progress has whacked me upside the head, making me re-think my interpretation of some common statistics.

Here’s the setup. In lots of educational experimental designs we have some sort of measure of prior achievement – this can be last year’s end-of-year test score, or a pre-test administered in early Fall.  Then (details vary depending on the design) we have one group of students/teachers try one thing, and another group do something else. We then administer a test at the end of the course, and compare the test score distributions (center and spread) between the two groups. What we’re looking for is a difference in the mean outcomes between the two groups.

So, why do we even need a measure of prior achievement? If we’ve randomly assigned students/teachers to groups, we really don’t. In principle, with a large enough sample, those two groups will have roughly equal distributions of intellectual ability, motivation, special needs, etc. If the assignment isn’t random, though – say one group of schools is trying out a new piece of software, while another group of schools isn’t – then we have to worry that the schools using software may be “advantaged” as a group, or different in some other substantial way. Comparing the students on prior achievement scores can be one way of assuring ourselves that the two groups of students were similar (enough) upon entry to the study. I’m glossing over lots of technical details here – whole books have been written on the ins and outs of various experimental designs.

Here’s another reason we like prior achievement measures, even with randomized experiments: they give us a lot more statistical power. What does that mean? Comparing the mean outcome score of two groups is done against a background of a lot of variation. Let’s say the mean score of group A is 75% and that of group B is 65%. That’s a 10 percentage point difference. But let’s say the scores for both groups range from 30% to 100%. We’re looking at a 10 point difference against a background of a much wider spread of scores. It turns out that if the spread of scores is very large relative to the mean difference we see, we start to worry that our result isn’t “real” but is in fact just an artifact of some statistical randomness in our sample. In more jargon-y language, our result may not be “statistically significant” even if the difference is educationally important.

Prior scores to the rescue. We can use these to eliminate some of the spread of outcome scores by first using the prior scores to predict what the outcome scores would likely be for a given student. Then we look at the mean difference of two groups against not the spread of scores, but the spread around the predicted scores. That ends up reducing a lot of the variation in the background and draws out our “signal” against the “noise” more clearly. Again, this is a hand-wavy explanation, but that’s the essence of it. (A somewhat equivalent model is to look at the gains from pretest to posttest and compare those gains across groups. This requires a few extra conditions but is entirely feasible and increases power for the same reasons.)

In order for this to work, it is very helpful to have a prior achievement measure that is highly predictive of the outcome. When we have a strong predictor, we can (it turns out) be much more confident that any experimental manipulation or comparisons we observe are “real” and not due to random noise. And for many standardized tests across large samples, this is the case – the best predictor of how well a student does at the end of grade G is how well they were doing at the end of grade G-1. Scores at the end of grade G-1 swamp race, SES, first language… all of these predictors virtually disappear once we know prior scores.
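To make the noise-reduction idea concrete, here’s a toy simulation (entirely invented numbers, not from any real study, and not my actual analysis): a strongly predictive pretest, once regressed out, leaves a much smaller background spread against which to judge a group difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
pre = rng.normal(70, 12, n)                          # pretest scores
treat = rng.integers(0, 2, n)                        # random group assignment
post = 0.8 * pre + 5 * treat + rng.normal(0, 6, n)   # true group effect: 5 points

# Spread of raw outcomes vs. spread left over after the pretest prediction
raw_sd = post.std()
slope, intercept = np.polyfit(pre, post, 1)
resid_sd = (post - (slope * pre + intercept)).std()

print(f"SD of raw outcomes:      {raw_sd:.1f}")
print(f"SD of pretest residuals: {resid_sd:.1f}")
```

The residual spread comes out much smaller than the raw spread, so the same 5-point group difference stands out far more clearly. (A real analysis would include the group indicator in the model rather than regressing on pretest alone; this is just the intuition.)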

What happens when the prior test scores don’t predict outcomes very well? From a statistical power perspective, we’re in trouble – we may not have reduced the “noise” enough to detect our signal. Or, it could indicate technical issues with the tests themselves – they may not be very reliable (meaning the same student taking both tests close together in time may get wildly different scores). In general, I’ve historically been disappointed by low pretest/posttest correlations.

So today I’m engaged in some really interesting data detective work. A bunch of universities are trying out this nifty new way of teaching developmental math – that’s the course you have to take if your math skills aren’t quite what are needed to engage in college-level quantitative coursework. It’s a well-known problem course, particularly in the community colleges: students may take a developmental math course 2 or 3 times, fail it each time, accumulate no college credits, and be in debt after this discouraging experience. This is a recipe for dropping out of school entirely.

In my research, I’ve been looking at how different instructors go about using this nifty new method (I’m keeping the details vague to protect both the research and participant interests – this is all very preliminary stuff). One thing I noticed is that in some classes, the pretest predicts the posttest very accurately. In others, it barely predicts the outcome at all. The “old” me was happy to see the classrooms with high prediction – it made detecting the “outlier” students, those that were going against all predicted trends, easier to spot. The classes with low prediction were going to cause me trouble in spotting “mainstream” and “outlier” students.

Then it hit me – how should I interpret the low pretest-posttest correlation? It wasn’t a problem with test reliability – the same tests were being used across all instructors and institutions, and were known to be reliable. Restriction of range wasn’t a problem either (although I still need to document that for sure) – sometimes we get low correlations because, for example, everyone aces the posttest, leaving very little variation to “predict” in the first place.

Here’s one interpretation: the instructors in the low pretest-posttest correlation classrooms are doing something interesting and adaptive to change a student’s trajectory. Think about it – high pretest-posttest correlation essentially means “pretest is destiny” – if I know what you score before even entering the course, I can very well predict what you’ll score on the final exam. It’s not that you won’t learn anything – we can have high correlations even if every student learns a whole lot. It’s just that whatever your rank order in the course was when you came in, that’ll likely be your rank order at the end of the course, too. And usually the bottom XX% of that distribution fails the class.

So rather than strong pretest-posttest correlations being desirable for power, I’m starting to see them as indicators of “non-adaptive instruction.” This means whatever is going on in the course, it’s not affecting the relative ranking of students; put another way, it’s affecting each student’s learning somewhat consistently. Again, it doesn’t mean they’re not learning, just that they’re still distributed similarly relative to one another. I’m agnostic as to whether this constitutes a “problem” – that’s actually a pretty deep question I don’t want to dive into in this post.
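Here’s the hypothesis in toy-simulation form (again, invented numbers, not my actual data): a course where everyone gains roughly the same amount preserves the pretest-posttest correlation, while a hypothetical “adaptive” course that gives the biggest boost to the lowest starters drives the correlation down, even though everyone learns.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
pre = rng.normal(60, 10, n)

# "Non-adaptive" course: everyone gains about the same amount
uniform_post = pre + 15 + rng.normal(0, 4, n)

# "Adaptive" course: the lower you start, the bigger your extra boost
boost = np.clip(75 - pre, 0, None)
adaptive_post = pre + 15 + 0.8 * boost + rng.normal(0, 4, n)

r_uniform = np.corrcoef(pre, uniform_post)[0, 1]
r_adaptive = np.corrcoef(pre, adaptive_post)[0, 1]
print(f"uniform-gain correlation:  {r_uniform:.2f}")
print(f"adaptive-gain correlation: {r_adaptive:.2f}")
```

Both simulated courses show large mean gains; only the relative ranking gets scrambled in the adaptive one, which is exactly what a low pretest-posttest correlation would pick up.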

I’m intrigued for many reasons by the concept of effective adaptive instruction – giving the bottom performers extra attention or resources so that they may not just close the gap but leap ahead of other students in the class. It’s really hard to find good examples of this in general education research – for better or worse, relative ranks on test scores are stubbornly persistent. It also means, however, that the standard statistical models we use are not accomplishing everything we want in courses where adaptive instruction is the norm. “Further research is needed” is music to the ears of one who makes a living conducting research. 🙂

I’m going to be writing up a more detailed and technical treatment of this over the next months and years, but I wanted to get an idea down on “paper” and this blog seemed like a good place to plant it. It may turn out that these interesting classes are not being adaptive at all – the low pretest-posttest correlations could be due to something else entirely. Time will tell.

Music, math, learning and immediate gratification

Have you ever wanted to play guitar? I’ve been reflecting on my own learning lately, particularly after attending a workshop last night by one of my all-time favorite guitarists, Chris Proctor.

I started off in a fairly traditional manner, learning basic chords and strumming. This was in the context of a high school music class where every student either took beginning recorder or guitar lessons.  I enjoyed it so much that I wanted to continue after the course was over. Well, I knew this teacher had private students, and I knew which instruction book he was using (having seen his students carrying it around), so I picked up a copy of Solo Guitar Playing by Frederick Noad and dove in.

I didn’t think much of it at the time, but this book took a very different approach from the one my teacher had taken. In class, we had started with left hand positions for strumming chords – you lock down your left hand at various positions on the fretboard and then strike all of the strings simultaneously to produce harmonies. Noad’s book, by contrast, began with single-voice melodies – you learn the pitches of the open strings and start playing simple 3 note exercises on each string, ultimately combining strings to have a greater range of pitches. Well into the book we encounter the two-note chord – striking a bass note with the thumb while the fingers play a melody. It wasn’t until much later – after a year’s worth of study – that I encountered anything that looked like a “traditional” guitar chord. By then it was slowly dawning on me that the longer pieces with names (pieces other than short exercises) were all composed in the 18th century.

Eventually I came to realize that there were (at least) two pathways to learning the guitar. One can start with the chords and work up to adding melodic embellishment and finally full melody with harmony. This is the path of the folk/blues/rock guitarist. The classical path reverses the order – one begins with simple melodies, builds up to simple two-note harmonies, and eventually arrives at full melodies and chords.

Both learning paths can take us to the same place – being facile both melodically and harmonically. I would argue, however, that beginning with chords favors the short-term feedback and gratification needed to get through the frustrations of first learning an instrument. With a good teacher and some guidance, one can play recognizable music in the first one-hour lesson. The “fun” and reward of playing is available almost immediately.

Taking up a classical method requires a tolerance of delayed gratification. It’s not very “musical” to play three note ditties over and over while memorizing standard music notation and locating fingers on the frets. Mistakes (particularly buzzing strings) sound much more acute when playing single notes. If I had not already been learning some chords and “real” music to keep me amused, I might not have had the patience to stick with the classical instruction path.

Ever since my epiphany around the dual approach to learning guitar (I think I first had this insight while in college) I’ve been paying attention to other “dual paths” to learning. Nowadays I sometimes treat my own continuing statistics education as a dual-path approach. On the one hand we have the formal theory of conditional distributions described by density functions and how they interact. On the other is a purely computational/simulation approach – let’s try doing something a gazillion times and observe the distribution of the outcome. When I’m feeling a bit shaky on my theory, I often simulate a problem computationally to confirm that the results conform to my expectations.
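Here’s a small example of what I mean by the simulation path (my own toy check, nothing to do with the rest of this post): theory says that for a standard normal X, the conditional mean E[X | X > 1] equals φ(1)/(1 − Φ(1)). When I’m shaky on a formula like that, I just draw a gazillion values and compare.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Theory: E[X | X > 1] = phi(1) / (1 - Phi(1)) for a standard normal X
phi = math.exp(-0.5) / math.sqrt(2 * math.pi)    # normal density at 1
Phi = 0.5 * (1 + math.erf(1 / math.sqrt(2)))     # normal CDF at 1
theory = phi / (1 - Phi)

# Simulation: draw a couple million values and average the ones above 1
x = rng.standard_normal(2_000_000)
simulated = x[x > 1].mean()

print(f"theory:    {theory:.3f}")
print(f"simulated: {simulated:.3f}")
```

The two answers agree to a couple of decimal places, which is usually all the reassurance my shaky theory needs.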

In formal education I’ve recently come across two initiatives by the Carnegie Foundation for the Advancement of Teaching: Quantway and Statway. These are two “developmental” math courses for community college (i.e., “remedial” courses for those who are not yet ready to cover college level mathematics). One of the key features is that they allow students to engage in real mathematics by focusing on reasoning and applied statistics, developing the formalisms (the parts students generally get stuck on) along the way. This is analogous to how people learn folk guitar – begin by making real music, however simple, and then pick up the formalisms (tonic and dominant chord theory, for example) in the context of making music. I’m paying close attention to an evaluation of these programs, and am curious whether this approach helps struggling students over the hump.

Back to my roots – middle school math

Once upon a time, I was a high school math teacher. Twice upon a time, actually. Both times I flunked out. Well, not really flunked out – voluntarily withdrew from the profession. The first time I was 23 and teaching in a private school that turned out to be a bit of a cult (no exaggeration). Six years later I was in a public school with a “mentor” teacher who would leave the class with me when he needed to meet with his real estate clients (guess what his moonlighting job was). Sure, I could blame my early exits on the particulars of the situations, but the second go-around also taught me that I really didn’t want to work with adolescents so intensely day after day. I thought I did, but when push came to shove I preferred working with adults, which I’ve done ever since.

Now I live with two 7th graders. It’s homework time. The distributive property. Simplifying algebraic expressions. And everything I’ve been studying about middle-school mathematics teaching and learning is coming to the surface. There’s the basic “how do you teach algebra” question, but I had that pretty well down from the get-go. Then there’s the attitude question: “why do we have to learn this stuff?” Tonight’s sticking point – developing the mental habit of careful accounting for terms and signs. It’s not that the procedure for distributing terms is difficult, but it entails really understanding each component of the process, why a particular transformation is used, and careful bookkeeping. That’s tonight’s struggle.

An example: simplify 6b – 2(2b – 7) = 21.  The tricky part here is keeping track of the minus signs. Actually, working through this has exposed some confusion in the student between unary minus (“eyebrow level minus”, as my old teacher called it) and binary minus (“belly-button minus”). So, what gets distributed here?  There are a couple of ways to approach it. One is to treat the term 2(2b-7) as the basic unit to be unpacked, and you get:

6b – [2(2b) – 2(7)] = 21

Then you’ve got to deal with that minus sign before the expression in brackets – essentially distributing a (-1) again.

Or, you can try to distribute (-2) across the expression, and end up with

6b -2(2b) -2(-7) = 21

My student was not tracking the minus signs accurately, in part due to some odd (but not entirely incorrect) use of parentheses that obscured what was happening to the signs of each term.

Here’s where we hit the wall – I had started with a simpler version of this where all signs were +. Then I moved to just having a minus sign inside the parentheses – no problem. Then I switched to a minus sign after the 6b but a + sign within the parentheses – trouble! That’s where the diagnostic flag went up. How to either distribute that pesky minus sign, or block out the whole expression in parentheses. Neither was making sense to the student. Blocking out the whole expression in parentheses (my first expansion example above) was a non-starter. But distributing -2 as a factor ran into trouble because “that’s not a negative 2, that’s a minus!” (in the way only a 7th grade girl can whine). In the end she worked enough examples that she’s learned to be aware of the -a(b-c) forms, and that this expands to -ab + ac, but the learning is not robust. She hasn’t yet seen, for example, -a(-b-c) expressions, and I’m sure she’ll struggle as soon as she sees one.
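For what it’s worth, this kind of sign bookkeeping is easy to spot-check by plugging in numbers, which is something I could even show a 7th grader. A quick sketch (my own sanity check, not part of her homework):

```python
# Spot-check the distribution: 6b - 2(2b - 7) should simplify to 2b + 14.
def original(b):
    return 6*b - 2*(2*b - 7)

def simplified(b):
    return 2*b + 14

# If the simplification is right, the two agree for any value of b we try.
for b in [-3, 0, 1.5, 10]:
    assert original(b) == simplified(b)
print("6b - 2(2b - 7) = 2b + 14 checks out")
```

A mis-distributed sign (say, 2b - 14 or 2b + 7) fails this check at the very first value, which makes plugging in numbers a decent referee when the “eyebrow minus” and the “belly-button minus” get tangled.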

Attitude. As soon as I tried to work an example I got the “just tell me if it’s right!” impatience. She’s not really trying to “get” it, but “get through” it. And – tonight at least – there isn’t a lot I’m going to do to change that attitude. That’s a longer term… I was going to use the word “battle,” but that frames the situation as one of domination. How to get her to take an interest in how algebra works? That’s the question. If she had some degree of curiosity tonight, I could steer her energy in a productive direction. But what I’m feeling is a sense of drudgery, that the homework is something to be gotten through and then moved past.

(Her sister, on the other hand, has an entirely different set of issues: she grasps the concepts quickly, is actually curious about how things work, aces tests, but is “rebelling” by not actually handing in her work and lying about what she actually has to do in a given night. That’s a later story…)

So what to do? This situation, right here, repeated day after day as a high school teacher, is where I hit a wall. If a student isn’t really latching onto the problem, how do I inspire? I had a high school senior once say to me “Mr. G., I know you’re trying hard, but really, I’m gonna take this course again in community college next year anyway, so I just don’t care.”  Anybody who has ever found a passion – or even a modest interest – in life knows that feeling of “latching on.” One long-term question I’ve been really curious about is how to “transfer” that attitude of curiosity from situations where it occurs naturally to those where it might take a little work.

Then again, why *should* a student be deeply curious about the ins and outs of algebra? I was, but I wouldn’t claim that everyone *should* be. I was never that curious about British literature as a student, and still am not. But if I were in school right now with a general ed. requirement that included British literature, I would try to understand what the instructor saw in the subject matter. In fact, that’s one of the joys of getting to know somebody new – understanding what they’re interested in even if it’s not my own personal interest. But as a 7th grader, I was either interested in something or I wasn’t… but I have to wonder how my interests may have been shaped by talented teachers. I had a 6th grade social studies teacher who made world history fascinating for a year, and that was the only time I enjoyed a social studies class through 12 years of schooling.

I actually just asked the 7th grader about this – whether she was “interested” in understanding. She said she was actually interested when she started the homework, “but then it got hard.” She agrees that she’s not always interested in math, “not like that boy in class who gets excited whenever the teacher is about to do something new.” It makes me wonder, how much interest is enough?

To be continued…