Diversity and dispositions

Another post that touches on my professional life as an educator / researcher. This too may take multiple installments to get the thoughts fully fleshed out, but I want to start sketching out the issue.

In a previous post on education I brought up the issue of diversity and variation. Here’s a snippet of what I wrote:

Variation, dispersion… it’s no exaggeration to argue that life itself could not function without it. Biological evolution critically depends on variation, in particular variation in “fitness” for passing on one’s genes. Fitness is a relative concept – an organism is not universally “fit,” but is fit insofar as it can function well within its environment. Change the environment, and the organism’s fitness may rise or fall.

Now, there are places where we want to tame variation. Manufacturing comes to mind, particularly when safety is a concern. In producing turbines for aircraft engines we don’t want variation in the stiffness of the material or the weight of the individual fan blades. Variation in that case is a problem – drift too far away from the center and things start to go wrong very quickly. So let’s bear in mind that in some cases, variation is a good thing, and in other cases variation is to be avoided.

Education is a process of guiding human development. So, do we want lots of Darwinian variation, whereby some people are more “fit” for their environment than others, or do we want aircraft manufacturing, with very tight tolerances for assuring uniformity in the components? (Hint: it’s not a black or white answer. “It depends.”)

I want to come back to this question of where we desire variation, where we want to control or eliminate it, and what a “healthy” balance looks like both within and between individuals. In particular, I want to discuss interests, attitudes, and dispositions. This is going to draw on some ideas I’ve been sketching out on standards and standardization, as well as attitudes among middle schoolers.

Let’s start with interests writ large (I’m not going to analytically parse an exact definition of “interest” – let’s stick with the colloquial). It’s not controversial to hope that children and adults have a variety of healthy interests: sports, music, arts, academic subjects, Civil War re-enactments, bird watching, you name it. It also seems to be a generally agreed desire that children at least try a variety of things, and probably adopt a much smaller number as “main” interests, while continuing to cultivate a habit of curiosity and openness to new experiences.

Now let’s dive down a level. Parents and adults will differ on some of the particulars. Most would hope that their children develop some sort of strong interest in a socially acceptable, personally fulfilling and economically beneficial domain – arts, engineering, business, and the like. But I doubt most parents would find a complete lack of interest in any of these things perfectly okay – we’d worry about the child, not just for their future but for their present sense of well-being. A child who shows little interest in anything may be exhibiting signs of depression. (Yes, I realize a child can have strong interests in anti-social domains, too. If I try to footnote every exception this will start to read like an academic journal article. I’m trying to avoid that.)

So I’m going to postulate this: we care that our children develop interest(s) in some domains that we would consider “healthy” (a shorthand for fulfilling, productive and pro-social). But we don’t necessarily care about the particulars: computer science or culinary science; martial arts or fine arts. Or rather, across a society, individuals may care about these distinctions, but as a whole there’s a healthy mix of healthy interests.

It almost goes without saying that there are activities, pursuits, and lifestyles that few of us would wish for our children. Drug addiction leads to varying circles of Hell. Does anybody really want – as a first choice – that their child grow up to be an assassin for a gang? Backing off the obviously criminal, most of us would probably want more for our children than to sit on a sidewalk begging for spare change.

We have a healthy mix of positives, and a somewhat clear set of universal negatives. Are there any “must have” positives, something that virtually every adult wants for his or her child? And I mean this with some degree of specificity – not just “I wish my child to be fulfilled and happy.” (This reminds me of a joke about a Jewish mother telling her son he can be anything he wants: a cardiologist, a neurologist, a dermatologist, a surgeon…) At the moment I can’t think of any that jump out. Perhaps grow into a healthy romantic relationship of their own?

My main point, though, is that while we may have universal wishes for our children at a particular level of generality (I want my kids to find fulfilling work), we may disagree or even have no opinion about the particulars.

Now let’s talk about the “STEM crisis.” (STEM stands for Science, Technology, Engineering and Math). Lots of hand-wringing about how we aren’t producing enough STEM graduates in our schools. In particular, there are too few women choosing STEM careers. I’m asking – are these really problems?

What does it mean that we aren’t producing enough STEM-ready graduates? Generally it means that there are open jobs available on the market and not enough qualified individuals to fill them. In the US, that also means lobbying Congress to open up visas for skilled immigrants. But as economists and others have argued, this is not a STEM problem, it’s an economics problem. Basic supply-demand theory says if you raise the salaries for STEM employees, you’ll end up with a greater number of qualified applicants knocking on the door. So it’s not that there’s a STEM worker shortage – there is a shortage of workers willing to work for the current salary ranges.

I believe the supply-demand argument works up to a point. At some point, though, we’re going to bump into an interest limit. That is, there are reasonable, intelligent people who would say “I don’t care how much you want to pay me; you couldn’t pay me enough to major in engineering. I’d rather starve and live on the streets.” Perhaps this is a first-world problem, that those who have grown up in true poverty and suffering just couldn’t understand. But anybody who has been to an American university has run into students with this attitude. And it isn’t just STEM – change the subject to social work, kindergarten teaching, marketing… you’ll find people who would rather gouge their eyeballs out than partake of that work.

Likewise, what does it mean that there aren’t enough women interested in STEM careers? Superficially, it means that the proportions of women are lower than those of men in terms of STEM interest. Some of this, as has been documented, is due to barriers to women’s entry, including discouragement from teachers and an exclusionary culture in some STEM fields. So let’s assume that some of that gender disparity is due to structural impediments imposed from the outside. Still, at some point we’re going to hit the barrier defined by intrinsic interest – surely not every woman or man wants to go into a STEM field. And if not every, what is the “natural” base rate of interest? (again, given that this base rate is partially sensitive to the perceived rewards).

I’m choosing career interest as my illustrative case – we care that children become interested in something positive, but may care less about the actual details. What other choices are we happy to leave up to general variation? Not every child will want to take up music, for starters, and those that do will have different preferences for instruments and genres.

If we step back and think of our education system, there is not a lot of respect or room given for diversity of interests, at least until the upper levels of high school. The curriculum from K to roughly grade 10 is relatively standard. We give all kids a taste of everything – some they will take to, some they will want to reject, but they are required to at least try it (sort of like making sure kids eat their vegetables?). And we select winners (at least for university admissions) based on whether they were able to succeed (i.e., get A’s) at subjects, whether or not they actually enjoyed them. That’s a whole other topic for discussion, but I wouldn’t be the first to question the social consequences of that selection policy.

Specialization appears to be something that is left to the after-school or non-school part of a child’s life. Perhaps that is fine. I just want to mark that as the case.

That’s all I’m going to write for now. My main point was to push back a bit on the hand-wringing over the “STEM crisis” and distinguish between general and specific wishes for our children. This is a work in progress, but at some point I want to develop a clearer idea of how variation plays out in society and education.

When prior scores are not destiny

This post is for statistics and assessment wonks. I’ve been really engaged in a bit of data detective work, and one of my findings-in-progress has whacked me upside the head, making me re-think my interpretation of some common statistics.

Here’s the setup. In lots of educational experimental designs we have some sort of measure of prior achievement – this can be last year’s end-of-year test score, or a pre-test administered in early Fall.  Then (details vary depending on the design) we have one group of students/teachers try one thing, and another group do something else. We then administer a test at the end of the course, and compare the test score distributions (center and spread) between the two groups. What we’re looking for is a difference in the mean outcomes between the two groups.

So, why do we even need a measure of prior achievement? If we’ve randomly assigned students/teachers to groups, we really don’t. In principle, with a large enough sample, those two groups will have roughly equal distributions of intellectual ability, motivation, special needs, etc. If the assignment isn’t random, though – say one group of schools is trying out a new piece of software, while another group of schools isn’t – then we have to worry that the schools using software may be “advantaged” as a group, or different in some other substantial way. Comparing the students on prior achievement scores can be one way of assuring ourselves that the two groups of students were similar (enough) upon entry to the study. I’m glossing over lots of technical details here – whole books have been written on the ins and outs of various experimental designs.

Here’s another reason we like prior achievement measures, even with randomized experiments: they give us a lot more statistical power. What does that mean? Comparing the mean outcome score of two groups is done against a background of a lot of variation. Let’s say the mean score of group A is 75% and that of group B is 65%. That’s a 10 percentage point difference. But let’s say the scores for both groups range from 30% to 100%. We’re looking at a 10 point difference against a background of a much wider spread of scores. It turns out that if the spread of scores is very large relative to the mean difference we see, we start to worry that our result isn’t “real” but is in fact just an artifact of some statistical randomness in our sample. In more jargon-y language, our result may not be “statistically significant” even if the difference is educationally important.

Prior scores to the rescue. We can use these to eliminate some of the spread of outcome scores by first using the prior scores to predict what the outcome scores would likely be for a given student. Then we look at the mean difference of two groups against not the spread of scores, but the spread of predicted scores. That ends up reducing a lot of the variation in the background and draws out our “signal” against the “noise” more clearly. Again, this is a hand-wavy explanation, but that’s the essence of it. (A somewhat equivalent model is to look at the gains from pretest to posttest and compare those gains across groups. This requires a few extra conditions but is entirely feasible and increases power for the same reasons).
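The variance-reduction idea can be sketched with a tiny simulation. Everything here is hypothetical – made-up numbers, not data from any real study: we generate correlated pretest/posttest scores for two groups, give one group a modest “treatment” bump, and compare the spread of raw posttest scores to the spread of residuals after regressing posttest on pretest.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # students per group (arbitrary)

# Simulate a pretest and a correlated posttest; group 1 gets a 5-point "treatment" bump.
pre = rng.normal(70, 12, size=2 * n)
group = np.repeat([0, 1], n)
post = 0.8 * pre + rng.normal(0, 8, size=2 * n) + 5 * group + 10

# Raw comparison: the 5-point effect sits against the full spread of posttest scores.
raw_spread = post.std()

# Covariate adjustment: predict posttest from pretest, then compare the residuals.
slope, intercept = np.polyfit(pre, post, 1)
resid = post - (slope * pre + intercept)
adj_spread = resid.std()

print(f"spread of raw scores:        {raw_spread:.1f}")
print(f"spread after adjustment:     {adj_spread:.1f}")
print(f"group difference (raw):      {post[group == 1].mean() - post[group == 0].mean():.1f}")
print(f"group difference (adjusted): {resid[group == 1].mean() - resid[group == 0].mean():.1f}")
```

The group difference survives the adjustment essentially intact, but the background spread it is judged against shrinks – which is exactly where the extra power comes from.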

In order for this to work, it is very helpful to have a prior achievement measure that is highly predictive of the outcome. When we have a strong predictor, we can (it turns out) be much more confident that any experimental manipulation or comparisons we observe are “real” and not due to random noise. And for many standardized tests across large samples, this is the case – the best predictor of how well a student does at the end of grade G is how well they were doing at the end of grade G-1. Scores at the end of grade G-1 swamp race, SES, first language… all of these predictors virtually disappear once we know prior scores.

What happens when the prior test scores don’t predict outcomes very well? From a statistical power perspective, we’re in trouble – we may not have reduced the “noise” enough to detect our signal. Or, it could indicate technical issues with the tests themselves – they may not be very reliable (meaning the same student taking both tests near to one another in time may get wildly different scores). In general, I’ve historically been disappointed by low pretest/posttest correlations.

So today I’m engaged in some really interesting data detective work. A bunch of universities are trying out this nifty new way of teaching developmental math – that’s the course you have to take if your math skills aren’t quite what are needed to engage in college-level quantitative coursework. It’s a well-known problem course, particularly in the community colleges: students may take a developmental math course 2 or 3 times, fail it each time, accumulate no college credits, and be in debt after this discouraging experience. This is a recipe for dropping out of school entirely.

In my research, I’ve been looking at how different instructors go about using this nifty new method (I’m keeping the details vague to protect both the research and participant interests – this is all very preliminary stuff). One thing I noticed is that in some classes, the pretest predicts the posttest very accurately. In others, it barely predicts the outcome at all. The “old” me was happy to see the classrooms with high prediction – it made detecting the “outlier” students, those that were going against all predicted trends, easier to spot. The classes with low prediction were going to cause me trouble in spotting “mainstream” and “outlier” students.

Then it hit me – how should I interpret the low pretest-posttest correlation? It wasn’t a problem with test reliability – the same tests were being used across all instructors and institutions, and were known to be reliable. Restriction of range wasn’t a problem either (although I still need to document that for sure) – sometimes we get low correlations because, for example, everyone aces the posttest – there is therefore very little variation to “predict” in the first place.

Here’s one interpretation: the instructors in the low pretest-posttest correlation classrooms are doing something interesting and adaptive to change a student’s trajectory. Think about it – high pretest-posttest correlation essentially means “pretest is destiny” – if I know what you score before even entering the course, I can very well predict what you’ll score on the final exam. It’s not that you won’t learn anything – we can have high correlations even if every student learns a whole lot. It’s just that whatever your rank order in the course was when you came in, that’ll likely be your rank order at the end of the course, too. And usually the bottom XX% of that distribution fails the class.
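That “pretest is destiny” point can be illustrated with another toy simulation (again, purely hypothetical numbers): in one simulated class every student gains about 25 points and the pretest–posttest correlation stays high; in a second, “adaptive” class the lower scorers gain extra, and the correlation drops even though average learning is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
pre = rng.normal(50, 10, size=300)  # hypothetical pretest scores

# "Non-adaptive" course: everyone gains ~25 points; rank order is preserved.
uniform_post = pre + 25 + rng.normal(0, 3, size=pre.size)

# "Adaptive" course: the lower you start, the bigger your gain,
# compressing the distribution and scrambling relative ranks.
adaptive_post = pre + 25 + 0.8 * (50 - pre) + rng.normal(0, 3, size=pre.size)

r_uniform = np.corrcoef(pre, uniform_post)[0, 1]
r_adaptive = np.corrcoef(pre, adaptive_post)[0, 1]

print(f"pretest-posttest r, uniform gains:  {r_uniform:.2f}")
print(f"pretest-posttest r, adaptive gains: {r_adaptive:.2f}")
print(f"mean gain, uniform:  {(uniform_post - pre).mean():.1f}")
print(f"mean gain, adaptive: {(adaptive_post - pre).mean():.1f}")
```

Both simulated classes learn the same amount on average; only the second reshuffles who ends up where, and that reshuffling is what the correlation coefficient picks up.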

So rather than strong pretest-posttest correlations being desirable for power, I’m starting to see them as indicators of “non-adaptive instruction.” This means whatever is going on in the course, it’s not affecting the relative ranking of students; put another way, it’s affecting each student’s learning somewhat consistently. Again, it doesn’t mean they’re not learning, just that they’re still distributed similarly relative to one another. I’m agnostic as to whether this constitutes a “problem” – that’s actually a pretty deep question I don’t want to dive into in this post.

I’m intrigued for many reasons by the concept of effective adaptive instruction – giving the bottom performers extra attention or resources so that they may not just close the gap but leap ahead of other students in the class. It’s really hard to find good examples of this in general education research - for better or worse, relative ranks on test scores are stubbornly persistent. It also means, however, that the standard statistical models we use are not accomplishing everything we want in courses where adaptive instruction is the norm. “Further research is needed” is music to the ears of one who makes a living conducting research. :-)

I’m going to be writing up a more detailed and technical treatment of this over the next months and years, but I wanted to get an idea down on “paper” and this blog seemed like a good place to plant it. It may turn out that these interesting classes are not being adaptive at all – the low pretest-posttest correlations could be due to something else entirely. Time will tell.

Brief thoughts on learning and meaning

I feel like I’m fighting a bug, so tonight’s entry will be short. My colleague Cynthia D’Angelo posted today about Zelda Speed Runs, a form of “speed gaming” I had not heard of. Basically, one learns – over time and with much community support – how to efficiently traverse a game, gathering all the goodies (or similar goals) in a minimum amount of time. A single run can take 18 hours, so this is not for the faint of heart.  Edit: As Cynthia pointed out in a comment below, the speed run can take 18 minutes, while a “normal” run of the game can take 20 hours.

My first reaction may be similar to many readers’: this is what people spend time getting good at? You have to understand, it takes many, many repetitions of a game run (and remember, a normal play-through can take 20 hours-plus) to be competitive. People record glitches in the game code that might afford one a shortcut to a particular goal. This is a huge investment in time.

But then I thought about some of my own challenges. Among other pieces, I’m starting to work on Bach’s Chaconne transcribed for the guitar. 256 bars (actually, 64 variations on a 4-bar pattern). Professional violinists spend a lifetime mastering this one piece – once you get all the notes down (and with runs of 32nd notes, there are a lot of them) you still have to develop a feel for how the various variations string together, when to hold back, when to cut loose. It’s going to take me a very long time to master (if I ever do).

So, speed gaming and playing the Chaconne. Neither is intrinsically worth more than the other. To the individual practitioner both activities are engaging. There are audiences that derive pleasure from supporting / spectating the practitioner.

What makes both of these “challenging” – and this is the thought I want to mull over at length in a future post – is rooted in the very nature of human learning. Learning is fundamentally about training neurons to fire in new patterns. This takes time and repetition (in most cases). It’s been noted that humans have to strike a balance between complete inertia – the inability to learn anything new – and over-learning, or the ability to learn new behaviors so quickly that we never adopt habits.  (I don’t think I’m getting that contrast exactly right – I’ll try to research it in that future post I keep talking about). In short, there are good evolutionary reasons why I can’t just read through the Chaconne once and have it completely memorized. I have to train my fingers and my memory to anticipate passages, train my fingers to fly quickly enough over the strings, even work out individual fingerings for each note to make the passage more efficient. It’s a great deal of work.

Oh yes, I wish I could learn it quickly and simply start performing it. But that’s not how we’re wired. It’s both frustrating and rewarding. And with that, I bid you all good night. Really, at some point soon I want to reflect more deeply about biological origins of slow learning, but not tonight. I’m too fuzzy-headed. And as Dragonborn, I still have to locate an Elder Scroll to learn the Dragonrend shout and save the world from extinction.

Oops I did it again

Every so often I find myself writing a variation on the same post – the idea that working with wood entails largely non-reversible operations. Essentially, woodworking has no “undo” command. I’ve written about that back in October here, in Feb 2011 here, Aug 2009 here… oh, there’s a post from June 2007 and one from June 2006, too. Eight years and counting of noting the same problem – rushing forward without stopping to think one or two steps ahead. The consequences of “just try it” are often irreversible.

A friend at work had a Spanish cedar platter that had once been her grandmother’s. She packed it in a suitcase for a long flight and it cracked in transit. Fortunately it was a clean split and hadn’t completely separated into two pieces, so after looking at it I offered to glue it up. I wasn’t exactly sure what I would use for clamping pressure, but felt confident I could use a band clamp, a bunch of rubber bands, or if I had to, cut out clamping cauls on a bandsaw and use bar clamps. As it turns out, rubber bands did the trick. I flooded the joint with glue (in hindsight, I should have tried to be more economical and use a syringe that I’d forgotten I had) and clamped it up.

Yellow wood glue is a great adhesive. In a tight joint and with proper pressure, the adhesion will be stronger than the wood itself. That is, if you try to re-break the platter, it will split somewhere other than on the glue line. It also cleans up with water, doesn’t smell, and isn’t carcinogenic. One problem, though, is that if you just try to smear it off the wood while still wet, it can leave an invisible coating on the wood that will show up as a blotch when you apply a finish. So the general rule is to let it cure for about 10 minutes and then carefully scrape it off as it gets “gummy.” I’d flooded quite a bit on the joint (my first “mistake”) so I knew I’d be waiting more than 10 minutes to clean it up. Eventually, though, I did gently scrape most of it away.

Spanish cedar platter, glued

Here’s where I rushed it – after an hour or so (when I knew the glue had set) I saw that one part of the repair was just ever so slightly out of alignment. The halves were offset by maybe 5 thousandths of an inch, about the thickness of 2 sheets of paper. Not terribly visible, but I could still see it and more importantly feel the ridge as I ran my fingers over the joint. So then I did what I would normally do – I grabbed a small card scraper and started to scrape down the seam until the two sides were flush with one another.

Except… this platter was finished (varnish or shellac, I can’t quite tell yet, but I’m leaning toward shellac). Which means I put a nice scrape mark in the middle of a finished platter. Ugly! I hadn’t thought of the consequences of taking a scraper (or any abrasive) to the work – to do it right I’d probably have to strip off all the finish and re-finish the piece. Even that wasn’t such a problem, but this wasn’t my piece to play around with. Remember, it was my friend’s late grandmother’s. Now I feel really bad that I might have made things worse. I’ll take it in and talk through options over the next day or two, once I figure out what the finish actually is (easy test – if I rub it with alcohol and it dissolves into the cloth, it’s shellac. Otherwise, it’s varnish). Please let it be shellac – it’s easier to play with (again, dissolves in alcohol) and I might be able to just fix that one area and give the whole platter one more good top coat or two to make it all look even.

I’m hopeful that I won’t be referring back to this posting six months from now, but history has a way of repeating itself.

I weep for humanity

Okay, a bit of hyperbole in this posting title. Still, I came across an honest-to-Zeus conspiracy theory web site today, and like a car wreck on the side of the highway, I couldn’t help but slow down and gawk. As I’ve mentioned in a previous post, I have a strong interest in how people come “to know” things, in particular with how they learn to reason with evidence. This site – and the controversy – are a reminder of how reasoning can go astray.

Without further ado, the web site documenting the Chemtrails conspiracy. (This links to a specific page on a large web site – it’s a good pedagogical example of where “reasoning” can go astray).

First, a quick summary of the “controversy.” Proponents of the Chemtrail conspiracy believe that commercial jet liners are being used by the Government (both US and New World Order) to spew toxic chemicals into the atmosphere, causing those bright white trails one sees behind airplanes at altitude. The motivation appears to be some form of population control through selective poisoning, as well as weather control.

Oh, you thought those were merely water vapor trails resulting from hydrocarbon combustion? You poor fool, let us enlighten you.  </sarcasm>

So, how did I come across this? Totally by accident (although I’d heard of this group in the past). I was looking at postings a friend of a friend had made on Facebook, and noticed it was cross-posted to the Facebook group belonging to the site linked above. I was curious, and that’s when the can’t-look-away browsing started.

I could easily spend a few paragraphs writing “OMG can you believe these people?!?” comments – I had a really visceral reaction to perusing that site that made me physically ill. It’s like I have to purge that out of my system to feel well again. But there is no need to put you, dear reader, through that ordeal. Let me instead address what I thought were some of the more interesting aspects of the Chemtrails culture, again referencing the page I’ve linked to.

First this:

Since the writing of my series of articles exposing contrails, multiple professional airline pilots have contacted me and thanked me for my stance against the contrail deception

All of them told me personally that they have never seen trails come out of jet engines and that they appreciate my work exposing the disinformation about contrails. Every one of these pilots knew that contrails are so rare that most people will never see one in their lifetime, and if they do occur, they are at high altitudes that cannot be seen from the ground.

Each of these professional pilots have flown most of their lives and have always had a deep interest in aviation. Some of them fly mainstream commercial jets while others fly large jets for major parcel carriers.

We could go through the pedantic exercise of tagging and categorizing the logical and rhetorical fallacies here, but I just want to hit the highlights. First we have an appeal to authority – professionals who ought to know what they’re talking about are telling us that contrails (the common term for those visible exhaust trails) never come from a jet engine, or if they do they are so rare and at such extreme altitudes that most people will never observe one.

Next:

Those spreading disinformation about chemtrails would like nothing more than for you to believe that short, non-persistent plumes coming out of jets are harmless contrails. If they convince you of this, then you will ignore these plumes and allow them to spray you without objection, and this is exactly what they want. 

They will tell you that they’ve seen contrails since they were children. They will tell you that contrails are scientifically proven to contain water vapor. They will tell you anything necessary to make you believe short trails are harmless. This is exactly how disinformation works.

So, the simplest explanation (water vapor is a product of hydrocarbon combustion, which combined with freezing temperatures at high altitudes produces visible vapor and/or ice crystals) should not be believed! This is exactly what “they” want you to think!

Further down the page:

Numerous popular movies, cartoons, advertisements, music videos, and other media now show trails coming out of jets. When the disinformation target audience sees this on a regular basis, they simply conclude—either consciously or subconsciously—that this is normal, so when they see it in the sky, they simply ignore it. This process is called “Normalization” and is probably the most popular method of disinformation used against the public today.

You get the idea.

At various points on that web page we also see examples of the “Hegelian Dialectic.” I’m not even going to try to assess the accuracy of their use of that term – what strikes me most is that by presenting a fancy sounding phenomenon from philosophy (named after a famous philosopher, no less!) the authors continue to imply a sense of intellectual gravitas.

Here’s the real problem: the structure of this “argument” makes it impossible to argue against any of the claims. Why don’t we sample the contrails/chemtrails and analyze them? Seems simple enough. Oh, but who is going to actually conduct these studies? The “scientists?” They’re bought and paid for. Meanwhile, other disconnected facts are brought forth as evidence: traces of aluminum pollution (yes, little known fact, aircraft actually shed nanoparticles of aluminum, but in trace amounts) and mercury poisoning in the soils – must be those chemtrails!

Now, assume that a reasonable person stumbles across this site, a person who is somewhat distrustful of authority and the government in particular. Will this site just excite their confirmation bias? Likely. That, combined with the admonition to not believe the “naysayers” and “debunkers” (who are either naive, conformist, or paid propagandists.  No, really, read this) ensures a perfect echo chamber. If you dive into that web site you’ll see links to 911 conspiracies, using EMF for mind control, the works…

Carl Sagan popularized a method for refining our “baloney detectors.” (See a description here). It’s nothing new – again, most people who study logic, philosophy of science, and rhetoric have come across these. I wish – and hope – that these would become part of the warp and weft of K-12 education. It’s all too easy for otherwise reasonable people to stumble across these sites, pause and say “well, it could be true…”

Mid-November, bring on the SAD

A personal interlude. My energy was subdued today, and at 8 PM it feels like it’s been dark for hours. My appetite has been gravitating toward comfort eating and I’m wanting to sleep more. My mental concentration has turned fuzzy. I used to joke that I must be part bear, as I like to hibernate. The reality is that I’ve been afflicted with seasonal affective disorder (SAD) most of my adult life.

You can google SAD to see the symptoms – the Mayo Clinic page is the first one that comes up for me and it’s pretty accurate. Fortunately, the worst of the mood symptoms can be treated with medication. I still find it hard, though, to rally my energy and concentration during the low-sunlight months.

This raises an interesting question, one first explored by Peter Kramer in his book Listening to Prozac: what does it mean to “feel like oneself?” Dr. Kramer was a practicing psychiatrist when SSRIs such as Prozac first hit the market, and he was struck by how many of his patients would claim “I feel more like myself than I have in such a long time.” Having a bit of a philosophical disposition, he explored this aspect of identity and the possibility of “cosmetic psychopharmacology” in his seminal work.

If my “self” has a baseline of cognitively sharp, personally friendly, generally upbeat, and energetic, then SAD makes me not feel like myself. This leaves me with a couple of choices. I can use medication, light-box therapy, force myself to exercise when it’s the last thing I want to do… basically push myself through the self-care that once came easily, in the hopes of minimizing the effects of SAD. Or I can just accept SAD like the change of seasons.

Again, the fact that I can even consider letting SAD run its course has to do with good medication. I’m not suffering the worst aspects of depression, that crippling soul-pain coupled with spiraling, obsessive thoughts. Given that I’m clinically in a safe place, what about the other symptoms? Increased lethargy, increased need for sleep, decreased enthusiasm for just about anything… None of them are killers. In fact, they’re not even that painful. The problem is, they’re not “me.”

The Buddhists talk about the parable of the two darts. The normal slings and arrows of outrageous fortune are like getting struck by a dart – it’s painful. But when we start bemoaning the fact we were struck by a dart, worrying about what it all means, seeking solace in sensual pleasure… that’s all suffering. It’s like getting hit by a second dart. So with practice and awareness we can avoid the second dart – we can alter our response to the normal ups and downs of life. Reduce if not eliminate suffering.

I feel like I’m in exactly that position at the moment – how to orient my attitude and actions around the inescapable fact that my neurodynamics change with the seasons. I can reduce the pain of the immediate problem, just as one would dress and bind a dart wound. But to the degree that I struggle with “not feeling like myself” in these times, pushing myself to do all sorts of things that I would normally want to do but don’t want to do now… am I just causing suffering? I think the distinction has to do with which actions will likely result in better health (and to be clear, I don’t consider my current state merely “different” from usual, but truly “worse” – this isn’t just an equally desirable alternative to “normal”). If getting my butt in the saddle 2 or 3 times a week is going to alleviate the symptoms, that’s probably a good idea. Kicking myself for not reading as much, or for “slacking off” on my guitar project, is not – that’s true suffering.

Surprisingly, I’ve enjoyed the discipline of pushing myself to write something every day of NaBloPoMo – it reminds me to look inward and see what my mind has been ruminating on today, and expressing that brings a sense of peace, if not resolution.

Guitar tunings and adaptive technologies

To pick up on my post about (inadvertently) learning to play the classical guitar, I want to reflect on ways modern ideas in guitar music lower the barrier to musical enjoyment for both beginning and advanced players. This was inspired both by Chris Proctor’s workshop and by reviewing a page on the CAST website for yesterday’s post on the roles of variation in education.

Even most non-players know that guitar strings are tuned to different pitches. So, what pitches should they be tuned to? There is what is known as “standard” tuning, and on a six string guitar those pitches are E2, A2, D3, G3, B3, E4 (see this Wikipedia page for an explanation of the numbers following the pitches), running from the lowest sounding string to the highest. Adjacent pairs of strings are tuned to an interval of a perfect fourth¹ – if you hum the opening notes to “Here comes the bride” the ascension of pitch from “Here” to “comes” is a perfect fourth.
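For the programming-inclined, here’s a quick sketch that checks this claim by counting the semitone gaps between adjacent strings – a perfect fourth spans five semitones, a major third four. (The note-to-number mapping below is just MIDI-style pitch numbering, where middle C, or C4, is 60; it’s not anything specific to the guitar.)

```python
# Semitone offset of each natural note within an octave.
NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def pitch_number(name: str) -> int:
    """Convert scientific pitch notation (e.g. 'E2') to a MIDI-style number (C4 = 60)."""
    note, octave = name[:-1], int(name[-1])
    return 12 * (octave + 1) + NOTE_OFFSETS[note]

standard_tuning = ["E2", "A2", "D3", "G3", "B3", "E4"]  # low string to high
intervals = [pitch_number(hi) - pitch_number(lo)
             for lo, hi in zip(standard_tuning, standard_tuning[1:])]
print(intervals)  # [5, 5, 5, 4, 5]
```

Sure enough: every gap is a perfect fourth (five semitones) except the one from G3 to B3, which is the lone major third mentioned in the footnote.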

Here’s the problem: if you strum the open strings together, a bunch of stacked perfect fourths doesn’t sound very good (heck, even a single perfect fourth isn’t all that consonant). So in order to get a decent chord out of the guitar, we have to start fretting notes – pressing on a fret shortens the length of the string and raises the pitch. From there we can get more pleasant intervals of thirds, fifths, and sixths² to blend together.

So if you walk into a guitar store and pick up an instrument tuned in standard pitch and strum the strings, you get mush. You can hear the pitches on this Wikipedia page. Yuck.

How about this for an idea – what if we tune the open strings to different pitches so that when all of the open strings are strummed together, they sound harmonious? This is known as open tuning, and it opens up a whole lot of possibilities for players. For starters, the rankest beginner can instantly get a pleasing sound from the instrument. Want to play a different chord? The easiest thing to do is lay the index finger completely along a fret and strum the strings. Wow! Instant chords! Playing the 2 or 3 chords that comprise most folk and rock-and-roll tunes is now fairly straightforward.

But wait, there’s more! For advanced players open tunings open up all sorts of great musical possibilities. Bear in mind we have a resource constraint of four left-hand fingers (and occasionally the thumb) for altering pitches. A lot of instrumental music has 2 or 3 “voices” playing in tandem – there may be a bass line in the music with a lyrical melody played on top, for example. In standard tuning we need to devote at least one finger to hitting the right bass notes, another for the melody, and since we’re jumping around different strings those other two fingers are going to be busy, too.

But life gets a little easier with open tunings. For starters, bass lines are often alternations between the tonic and fifth note in a chord. In standard tuning one or both of these would have to be held down with a finger, but in open tuning they’re simply available on open strings, leaving all four fingers free to do something else. Even when changing keys or chords, we can lay down the index finger to shift all of the pitches in tandem, and for the price of one finger we’re in a different key with 3 fingers left to do some interesting work. It may not sound like it’s much of a change from standard tuning, but it’s a big deal.
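That index-finger barre is really just transposition: pressing across fret n raises every string by n semitones. A tiny sketch makes the point – here I’m using Open D tuning (D A D F# A D, one common open tuning) purely as an example:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
OFFSETS = {name: i for i, name in enumerate(NOTE_NAMES)}

def barre(tuning, fret):
    """Raise every open string by `fret` semitones (an index-finger barre)."""
    return [NOTE_NAMES[(OFFSETS[note] + fret) % 12] for note in tuning]

open_d = ["D", "A", "D", "F#", "A", "D"]  # Open D tuning, low string to high
print(barre(open_d, 2))  # ['E', 'B', 'E', 'G#', 'B', 'E'] -- an E major chord
```

One finger across the second fret and the whole D major chord becomes E major, with the remaining three fingers free for melody work – which is exactly the economy described above.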

Again, thinking of a beginner, open tunings make this kind of two-voice music much more accessible. The right-hand thumb can practice alternating between two open bass strings while the musician concentrates on picking out a melody. The left-hand work is much easier.

This whole shift to an open tuning reminds me so much of the philosophy of Universal Design for Learning – including the idea that we should remove unnecessary impediments from the learning environment, and build in supports for those who have differing needs. Open tunings provide a much easier on-ramp for an aspiring musician, including those who may have some limitations in left hand dexterity. I wonder how many kids might stick with the struggle of starting a new instrument if the strings were already tuned to something consonant?

So why aren’t all of the guitars hanging in the music store automatically tuned to open tunings? Mainly for the same reason we still use a QWERTY keyboard, even though QWERTY was actually designed to retard speedy typing – tradition. We have a ton of guitar music (with chord diagrams to assist the beginner) written for standard tuning, and guitarists often watch one another’s left hand to work out chord sequences. It helps if those chord shapes are consistent. There are several possible open tunings in general use, and music publishing just hasn’t caught up to the concept that the guitar can be (re)tuned into several configurations. There are a couple of other reasons that favor standard tuning, but they’re a bit more technical.³

Thinking of picking up the guitar, even just to doodle? Do you have one gathering dust in your closet? I invite you to give an open tuning a try, and just play around – you’ll be surprised at how quickly real music emerges from under your fingers.


¹ Except for the change from G3 to B3, which is a major third. So much for consistency.

² Musicians use ordinal numbers to count the spaces between pitches. It’s 1-based, so a “unison” is two identical notes. A “second” is the interval from A to B, or D to E, or E to F – two adjacent notes. A “third” is the interval from A to C, B to D, C to E, etc. Some intervals sound more pleasing to the ear than others.

³ Three footnotes in a single blog post? Okay, one reason standard tuning is preferred has to do with reading standard Western musical notation. One learns where C4 is located on the fingerboard (well, the 3 different places it can be fingered easily) and that 1:1 correspondence gets locked into our brains. If you change the string tuning, the location of all the notes changes and sight reading goes out the window. Since there are easily a dozen common open tunings, we can’t continue to use standard music notation – we have to switch to a tablature format, which has actually been around since the middle ages.

The second reason given for standard tuning is that it’s also an equal temperament tuning, meaning one can switch keys in a musical piece without some notes starting to sound out of tune. Open tunings invite a just intonation, but can limit the keys that can be played in a particular piece of music.