Just build the damn thing

Another guitar building installment. I took some early vacation time today and went back to working on guitar #002. It’s time to start fitting the top and back plates to the sides, and that entails some fine adjustments with sandpaper and block plane to get the fit just right. The top and back are both slightly arched, so their intersections with the sides do not form simple planes.

Another design decision is whether to chisel out little notches in the lining to hold the ends of the transverse braces of the top. The top has a couple of main “bars” running underneath for structural strength. There is some debate about whether those bars should terminate by being glued directly into the side linings, or whether they should terminate just shy so that the top is held to the sides only by the top material all the way around. The debate has to do with acoustic theory – is it better to leave the top free to vibrate all around its perimeter (like a speaker cone mounted in a flexible rubber ring), or to anchor it in place by the transverse braces and transmit some of that vibration into the sides and back directly? Different builders take different approaches.

Well, I ended up munching a bit of the lining when I was attempting to cut out a pocket for the brace, and that got me thinking about whether I wanted to bother cutting these pockets or just leave the braces free. That led to feeling a bit overwhelmed at having to review the literature I could get my hands on regarding the acoustic advantages of either method, and…

Stop. Just stop.

Larry, just build the damn thing. It’s guitar #002 – you’re still figuring out where all the tricky spots are in this process. Yeah, you found one. Go the easy route for now (don’t integrate the braces with the sides) – it’ll probably sound fine. And if not, try anchoring the braces in build #003 or #004. You know there will be others.

I’ve mentioned in the past a retired hand surgeon advising me that building a guitar was like surgery – if you plan it out and execute each move carefully, it will probably come out okay. Well, surgeons don’t start in on live patients until they’ve had plenty of practice on, I suppose, other things (cadavers? animals?). I need to start treating #002 more as a learning experience and less as a masterpiece (which I already know in advance it won’t be, but I can’t let go of that striving for perfection in this case).

My partner was surprised when I told her about this – she doesn’t see this sense of perfectionism or “obsessiveness” (her word) in my other activities. I’m really trying to do this guitar differently than a lot of my other projects – but I may be taking it a bit too far. I’m losing the joy in the creative process by worrying over details that in the end are going to be swamped by other variables I can’t (yet) easily control for.

I’m glad to have a few days off to get my hands back in the game. I’ll fix the side lining and tomorrow will start trimming back the braces along the top and back.

Scaling up vs. spreading out

I’ve been thinking of scale lately. Not the musical sort, but the concept of size. One of the common questions asked in education research is “how well does this scale up?” It’s already no small feat to invent a practice or intervention that makes a measurable difference to students on the scale of a classroom or school. It’s another thing entirely to “scale it up” to a district, state, or nation.

But is “scaling up” even a proper goal? This recent article in The New Republic calls into question the wisdom of scaling up successful interventions in the international development arena. The basic argument is that “context matters.” To quote from the article:

The repeated “success, scale, fail” experience of the last 20 years of development practice suggests something super boring: Development projects thrive or tank according to the specific dynamics of the place in which they’re applied. It’s not that you test something in one place, then scale it up to 50. It’s that you test it in one place, then test it in another, then another. No one will ever be invited to explain that in a TED talk.

The scaling up dynamic seems to work reasonably well in some fields of medicine. A drug or vaccine is developed, tested on a small group of volunteers, found effective in a larger scale trial, and eventually comes to market. The drug “scales up” after passing several smaller-scale tests. We don’t feel the need to re-test the efficacy of a polio vaccine in each and every city of the US, for instance.

But medical research and social research are different in key ways. First, the nature of the “treatment” in medicine is often a pill – something that can be standardized, and whose mechanism relies more on chemistry than on behavior or cognition. That is, we know that if you administer a polio vaccine to 100,000 children, some (high) percentage will develop immunity to polio. It’s basic immunology. Sure, there will be outliers for whom it doesn’t work, and we need to have backup plans for those few.

But the New Republic article cites a similar medical-like intervention – de-worming children. After implementing a de-worming program in Kenya, the researcher found:

The deworming pills made the kids noticeably better off. Absence rates fell by 25 percent, the kids got taller, even their friends and families got healthier. By interrupting the chain of infection, the treatments had reduced worm infections in entire villages. Even more striking, when they tested the same kids nearly a decade later, they had more education and earned higher salaries. The female participants were less likely to be employed in domestic services.

So of course one would want to scale up that intervention. It’s a pill, after all. Completely standardized and easy to administer. Why not de-worm an entire nation? Well, they tried that program within several states of India, and the results were… unclear. I strongly suggest you read the full article for the nuances, but here’s the author’s punch line for this part of the argument:

In 2000, the British Medical Journal (BMJ) published a literature review of 30 randomized control trials of deworming projects in 17 countries. While some of them showed modest gains in weight and height, none of them showed any effect on school attendance or cognitive performance. After criticism of the review by the World Bank and others, the BMJ ran it again in 2009 with stricter inclusion criteria. But the results didn’t change. Another review, in 2012, found the same thing: “We do not know if these programmes have an effect on weight, height, school attendance, or school performance.”

The underlying point is this: many, many things contribute to children’s health, school attendance, and intellectual development. Carrying parasites is just one of the problems they face. Eradicating worms is, by itself, an unquestioned “good thing” – but it may not (re)produce the outcomes one was expecting. And, like it or not, lots of “good things” have to get rated against one another when resources are limited.

So if something as controlled and unproblematic as giving a pill can have radically different results in different social settings, how in the world do we contemplate scaling up a much less standardized intervention like giving every child a laptop? Changing a textbook for an entire state? (I note in passing that textbooks are not like pills – teachers pick and choose which aspects of a book to use and which to ignore). Much of the Big Money in education research is looking for magic interventions that are scalable. As one who works in the trenches of evaluation, I can tell you just how hard it is to tease apart why some things work in some situations and not others.

Thus far I’m convincing myself that we should be conservative with our scaling up desires. Local context matters. Now let’s ask the question: when can one make a case for universal policy? I suspect this is a no-brainer for Policy 101 students, but my last policy analysis class was taught by an active alcoholic while I was in the throes of an undergraduate depression.

We have a national civil rights policy – rooted in education policy – that prohibits de jure discrimination by race. (Whether de facto discrimination is addressed appears to depend on the priorities of a given administration). But we don’t permit “local control” or “local choice” with matters of racial discrimination, nor should we. I can think of other nasty actions we try to outright prohibit on a large scale: corporal punishment and gender discrimination come to mind.

So where does local control run the risk that the locals will “get it wrong?” Where does a one-size-fits-all policy run the risk of having no effect or worse causing harm to a significant segment of the population? In particular I’m thinking of the Common Core standards movement, an attempt to bring unity to academic standards across the states (and, as a consequence, make it much easier and cheaper to have a single national achievement test). What “problem” does Common Core purport to solve? and is it as likely as not to cause problems if local variation is not supported?

I welcome any reader to chime in with a thought.

Middle school math, continued

Me, aloud: “What should I write about tonight?”

Lisa: “Lisa!”

Challenge accepted.

Another evening of supervising/coaching/cheerleading middle school math homework. They’re focusing on simplifying expressions where there is subtraction of a subexpression in parentheses. As I wrote about previously, there are technically a few different ways of approaching this (and that language was for the math-savvy, not something I would actually say to a middle-schooler).

In the end we settled on the “distribute the minus sign” approach, since they’ve also been practicing the distributive property. She still doesn’t see “the point” of simplifying expressions, and perhaps at this point she doesn’t need to. But she’s willing to play along if for no other reason than she cares about doing well in school. I’ll take that.

She had over-learned a rule that I suspect many of her classmates have, which is “when there are things in parentheses you reverse the signs.” Nope! Not if the “things in parentheses” are simply being added. So we had to purge the “rule” and try to get to an explanation for why parentheses can essentially disappear when the subexpression is added, but the signs all flip when being subtracted. Still not the way I would choose to structure a curriculum, but it’s what we’re stuck with at the moment.
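For the record, the two cases we were contrasting can be spot-checked numerically – a quick sketch in Python, with arbitrary made-up values:

```python
# Subtracting a parenthesized subexpression flips every sign inside it:
#   a - (b - c)  ==  a - b + c
# Adding it leaves all the signs alone, so the parentheses just vanish:
#   a + (b - c)  ==  a + b - c
# Spot-check both identities with a few arbitrary values.
for a, b, c in [(7, 2, 5), (-3, 4, -1), (0, 10, 10)]:
    assert a - (b - c) == a - b + c
    assert a + (b - c) == a + b - c
print("signs flip under subtraction, not under addition")
```

Not that I’d show a middle-schooler Python, but it’s the same purge-the-rule logic: the sign flip comes from the subtraction, not from the parentheses themselves.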

What was really gratifying, though, is how she was adding, subtracting and multiplying integers essentially in her head and getting the signs right about 95% of the time. This reminds me of what my high school math teacher once claimed: when you are studying level N of mathematics, you’re really solidifying everything you learned at level N-1.  Her mother thinks she’s just coming out of metamorphosis – her brain had turned to liquid goo at the onset of adolescence and is starting to solidify back into something resembling a brain again.

I’m keeping it short tonight – time for a good night’s sleep. Yay for a short work week this week!

Diversity and dispositions

Another post that touches on my professional life as an educator / researcher. This too may take multiple installments to get the thoughts fully fleshed out, but I want to start sketching out the issue.

In a previous post on education I brought up the issue of diversity and variation. Here’s a snippet of what I wrote:

Variation, dispersion… it’s no exaggeration to argue that life itself could not function without it. Biological evolution critically depends on variation, in particular variation in “fitness” for passing on one’s genes. Fitness is a relative concept – an organism is not universally “fit,” but is fit insofar as it can function well within its environment. Change the environment, and the organism’s fitness may rise or fall.

Now, there are places where we want to tame variation. Manufacturing comes to mind, particularly when safety is a concern. In producing turbines for aircraft engines we don’t want variation in the stiffness of the material or the weight of the individual fan blades. Variation in that case is a problem – drift too far away from the center and things start to go wrong very quickly. So let’s bear in mind that in some cases, variation is a good thing, and in other cases variation is to be avoided.

Education is a process of guiding human development. So, do we want lots of Darwinian variation, whereby some people are more “fit” for their environment than others, or do we want aircraft manufacturing, with very tight tolerances for assuring uniformity in the components? (Hint: it’s not a black or white answer. “It depends.”)

I want to come back to this question of where we desire variation, where we want to control or eliminate it, and what a “healthy” balance looks like both within and between individuals. In particular, I want to discuss interests, attitudes, and dispositions. This is going to draw on some ideas I’ve been sketching out on standards and standardization, as well as attitudes among middle schoolers.

Let’s start with interests writ large (I’m not going to analytically parse an exact definition of “interest” – let’s stick with the colloquial). It’s not controversial to hope that children and adults have a variety of healthy interests: sports, music, arts, academic subjects, Civil War re-enactments, bird watching, you name it. It also seems to be a generally agreed desire that children at least try a variety of things, and probably adopt a much smaller number as “main” interests, while continuing to cultivate a habit of curiosity and openness to new experiences.

Now let’s dive down a level. Parents and adults will differ on some of the particulars. Most would hope that their children develop some sort of strong interest in a socially acceptable, personally fulfilling and economically beneficial domain – arts, engineering, business, and the like.  But I doubt most parents would find a complete lack of interest in any of these things perfectly okay – we’d worry about the child, not just for their future but for their present sense of well-being. A child who exhibits little interest in anything may be exhibiting signs of depression. (Yes, I realize a child can have strong interests in anti-social domains, too. If I try to footnote every exception this will start to read like an academic journal article. I’m trying to avoid that).

So I’m going to postulate this: we care that our children develop interest(s) in some domains that we would consider “healthy” (a shorthand for fulfilling, productive and pro-social). But we don’t necessarily care about the particulars: computer science or culinary science; martial arts or fine arts. Or rather, across a society, individuals may care about these distinctions, but as a whole there’s a healthy mix of healthy interests.

It almost goes without saying that there are activities, pursuits, and lifestyles that few of us would wish for our children. Drug addiction leads to varying circles of Hell. Does anybody really want – as a first choice – that their child grow up to be an assassin for a gang? Backing off the obviously criminal, most of us would probably want more for our children than to sit on a sidewalk begging for spare change.

We have a healthy mix of positives, and a somewhat clear set of universal negatives. Are there any “must have” positives, something that pretty universally every adult wants for his/her child? And I mean this with some degree of specificity – not just “I wish my child to be fulfilled and happy.”  (This reminds me of a joke about a Jewish mother telling her son he can be anything he wants: a cardiologist, a neurologist, a dermatologist, a surgeon…)  At the moment I can’t think of any that jump out. Perhaps grow into a healthy romantic relationship of their own?

My main point, though, is that while we may have universal wishes for our children at a particular level of generality (I want my kids to find fulfilling work), we may disagree or even have no opinion about the particulars.

Now let’s talk about the “STEM crisis.” (STEM stands for Science, Technology, Engineering and Math). Lots of hand-wringing about how we aren’t producing enough STEM graduates in our schools. In particular, there are too few women choosing STEM careers. I’m asking – are these really problems?

What does it mean that we aren’t producing enough STEM-ready graduates? Generally it means that there are open jobs available on the market and not enough qualified individuals to fill them. In the US, that also means lobbying Congress to open up visas for skilled immigrants. But as economists and others have argued, this is not a STEM problem, it’s an economics problem. Basic supply-demand theory says if you raise the salaries for STEM employees, you’ll end up with a greater number of qualified applicants knocking on the door. So it’s not that there’s a STEM worker shortage – there is a shortage of workers willing to work for the current salary ranges. Edit: this article from Businessweek makes the same claim.

I believe the supply-demand argument works up to a point. At some point, though, we’re going to bump into an interest limit. That is, there are reasonable, intelligent people who would say “I don’t care how much you want to pay me; you couldn’t pay me enough to major in engineering. I’d rather starve and live on the streets.” Perhaps this is a first-world problem, that those who have grown up in true poverty and suffering just couldn’t understand. But anybody who has been to an American university has run into students with this attitude. And it isn’t just STEM – change the subject to social work, kindergarten teaching, marketing… you’ll find people who would rather gouge their eyeballs out than partake of that work.

Likewise, what does it mean that there aren’t enough women interested in STEM careers? Superficially, it means that the proportions of women are lower than those of men in terms of STEM interest. Some of this, as has been documented, is due to barriers to women’s entry, including discouragement from teachers and an exclusionary culture in some STEM fields. So let’s assume that some of that gender disparity is due to structural impediments imposed from the outside. Still, at some point we’re going to hit the barrier defined by intrinsic interest – surely not every woman or man wants to go into a STEM field. And if not every, what is the “natural” base rate of interest? (again, given that this base rate is partially sensitive to the perceived rewards).

I’m choosing career interest as my illustrative case – we care that children become interested in something positive, but may care less about the actual details. What other choices are we happy to leave up to general variation? Not every child will want to take up music, for starters, and those that do will have different preferences for instruments and genres.

If we step back and think of our education system, there is not a lot of respect or room given for diversity of interests, at least until the upper levels of high school. The curriculum from K to roughly grade 10 is relatively standard. We give all kids a taste of everything – some they will take to, some they will want to reject, but they are required to at least try it (sort of like making sure kids eat their vegetables?). And we select winners (at least for university admissions) based on whether they were able to succeed (i.e., get A’s) at subjects whether or not they actually enjoyed them. There’s a whole other topic for discussion, but I wouldn’t be the first to question the social consequences of that selection policy.

Specialization appears to be something that is left to the after-school or non-school part of a child’s life. Perhaps that is fine. I just want to mark that as the case.

That’s all I’m going to write for now. My main point was to push back a bit on the hand-wringing over the “STEM crisis” and distinguish between general and specific wishes for our children. This is a work in progress, but at some point I want to develop a clearer idea of how variation plays out in society and education.

When prior scores are not destiny

This post is for statistics and assessment wonks.  I’ve been really engaged in a bit of data detective work, and one of my findings-in-progress has whacked me upside the head, making me re-think my interpretation of some common statistics.

Here’s the setup. In lots of educational experimental designs we have some sort of measure of prior achievement – this can be last year’s end-of-year test score, or a pre-test administered in early Fall.  Then (details vary depending on the design) we have one group of students/teachers try one thing, and another group do something else. We then administer a test at the end of the course, and compare the test score distributions (center and spread) between the two groups. What we’re looking for is a difference in the mean outcomes between the two groups.

So, why do we even need a measure of prior achievement? If we’ve randomly assigned students/teachers to groups, we really don’t. In principle, with a large enough sample, those two groups will have roughly equal distributions of intellectual ability, motivation, special needs, etc. If the assignment isn’t random, though – say one group of schools is trying out a new piece of software, while another group of schools isn’t – then we have to worry that the schools using software may be “advantaged” as a group, or different in some other substantial way. Comparing the students on prior achievement scores can be one way of assuring ourselves that the two groups of students were similar (enough) upon entry to the study. I’m glossing over lots of technical details here – whole books have been written on the ins and outs of various experimental designs.

Here’s another reason we like prior achievement measures, even with randomized experiments: they give us a lot more statistical power. What does that mean? Comparing the mean outcome score of two groups is done against a background of a lot of variation. Let’s say the mean scores of group A are 75% and group B are 65%. That’s a 10 percentage point difference. But let’s say the scores for both groups range from 30% to 100%. We’re looking at a 10 point difference against a background of a much wider spread of scores. It turns out that if the spread of scores is very large relative to the mean difference we see, we start to worry that our result isn’t “real” but is in fact just an artifact of some statistical randomness in our sample. In more jargon-y language, our result may not be “statistically significant” even if the difference is educationally important.

Prior scores to the rescue. We can use these to eliminate some of the spread of outcome scores by first using the prior scores to predict what the outcome scores would likely be for a given student. Then we look at the mean difference of two groups against not the raw spread of scores, but the spread of the deviations from those predicted scores. That ends up reducing a lot of the variation in the background and draws out our “signal” against the “noise” more clearly. Again, this is a hand-wavy explanation, but that’s the essence of it. (A somewhat equivalent model is to look at the gains from pretest to posttest and compare those gains across groups. This requires a few extra conditions but is entirely feasible and increases power for the same reasons).
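A rough simulation of that power argument (numpy; every number here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pre = rng.normal(70, 10, n)               # prior achievement scores
post = 0.8 * pre + rng.normal(10, 5, n)   # outcomes track the pretest, plus noise

# Background spread we'd compare a mean difference against, with no pretest:
raw_sd = post.std()

# Use the pretest to predict the outcome, then look at the spread left over:
slope, intercept = np.polyfit(pre, post, 1)
residual_sd = (post - (slope * pre + intercept)).std()

print(f"raw spread: {raw_sd:.1f}, spread after prediction: {residual_sd:.1f}")
```

The leftover (“residual”) spread comes out much smaller than the raw spread, and that shrinkage is exactly what buys the extra power.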

In order for this to work, it is very helpful to have a prior achievement measure that is highly predictive of the outcome. When we have a strong predictor, we can (it turns out) be much more confident that any experimental manipulation or comparisons we observe are “real” and not due to random noise. And for many standardized tests across large samples, this is the case – the best predictor of how well a student does at the end of grade G is how well they were doing at the end of grade G-1. Scores at the end of grade G-1 swamp race, SES, first language… all of these predictors virtually disappear once we know prior scores.

What happens in the case when the prior test scores don’t predict outcomes very well? From a statistical power perspective, we’re in trouble – we may not have reduced the “noise” enough to detect our signal. Or, it could indicate technical issues with the tests themselves – they may not be very reliable (meaning the same student taking both tests near to one another in time may get wildly different scores). In general, I’ve historically been disappointed by low pretest/posttest correlations.

So today I’m engaged in some really interesting data detective work. A bunch of universities are trying out this nifty new way of teaching developmental math – that’s the course you have to take if your math skills aren’t quite what are needed to engage in college-level quantitative coursework. It’s a well-known problem course, particularly in the community colleges: students may take a developmental math course 2 or 3 times, fail it each time, accumulate no college credits, and be in debt after this discouraging experience. This is a recipe for dropping out of school entirely.

In my research, I’ve been looking at how different instructors go about using this nifty new method (I’m keeping the details vague to protect both the research and participant interests – this is all very preliminary stuff). One thing I noticed is that in some classes, the pretest predicts the posttest very accurately. In others, it barely predicts the outcome at all. The “old” me was happy to see the classrooms with high prediction – it made detecting the “outlier” students, those that were going against all predicted trends, easier to spot. The classes with low prediction were going to cause me trouble in spotting “mainstream” and “outlier” students.

Then it hit me – how should I interpret the low pretest-posttest correlation? It wasn’t a problem with test reliability – the same tests were being used across all instructors and institutions, and were known to be reliable. Restriction of range wasn’t a problem either (although I still need to document that for sure) – sometimes we get low correlations because, for example, everyone aces the posttest – there is therefore very little variation to “predict” in the first place.

Here’s one interpretation: the instructors in the low pretest-posttest correlation classrooms are doing something interesting and adaptive to change a student’s trajectory. Think about it – high pretest-posttest correlation essentially means “pretest is destiny” – if I know what you score before even entering the course, I can very well predict what you’ll score on the final exam. It’s not that you won’t learn anything – we can have high correlations even if every student learns a whole lot. It’s just that whatever your rank order in the course was when you came in, that’ll likely be your rank order at the end of the course, too. And usually the bottom XX% of that distribution fails the class.
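The contrast between high- and low-prediction classrooms is easy to see in a toy computation (pandas; both classrooms and all scores here are invented):

```python
import pandas as pd

# Two hypothetical classrooms taking the same pretest and posttest.
df = pd.DataFrame({
    "classroom": ["A"] * 5 + ["B"] * 5,
    "pretest":   [40, 50, 60, 70, 80, 40, 50, 60, 70, 80],
    "posttest":  [55, 63, 72, 81, 90, 75, 60, 88, 70, 82],
})

# Pearson correlation of pretest with posttest, within each classroom.
r = df.groupby("classroom").apply(
    lambda g: g["pretest"].corr(g["posttest"])
)
print(r)  # classroom A is near 1.0 ("pretest is destiny"); B is much lower
```

In classroom A everyone gained, but nobody changed rank; in classroom B the rank order got shuffled – the signature I’m tentatively reading as adaptive instruction.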

So rather than strong pretest-posttest correlations being desirable for power, I’m starting to see them as indicators of “non-adaptive instruction.” This means whatever is going on in the course, it’s not affecting the relative ranking of students; put another way, it’s affecting each student’s learning somewhat consistently. Again, it doesn’t mean they’re not learning, just that they’re still distributed similarly relative to one another. I’m agnostic as to whether this constitutes a “problem” – that’s actually a pretty deep question I don’t want to dive into in this post.

I’m intrigued for many reasons by the concept of effective adaptive instruction – giving the bottom performers extra attention or resources so that they may not just close the gap but leap ahead of other students in the class. It’s really hard to find good examples of this in general education research – for better or worse, relative ranks on test scores are stubbornly persistent. It also means, however, that the standard statistical models we use are not accomplishing everything we want in courses where adaptive instruction is the norm. “Further research is needed” is music to the ears of one who makes a living conducting research. :-)

I’m going to be writing up a more detailed and technical treatment of this over the next months and years, but I wanted to get an idea down on “paper” and this blog seemed like a good place to plant it. It may turn out that these interesting classes are not being adaptive at all – the low pretest-posttest correlations could be due to something else entirely. Time will tell.

Brief thoughts on learning and meaning

I feel like I’m fighting a bug, so tonight’s entry will be short. My colleague Cynthia D’Angelo posted today about Zelda Speed Runs, a form of “speed gaming” I had not heard of. Basically, one learns – over time and with much community support – how to efficiently traverse a game, gathering all the goodies (or similar goals) in a minimum amount of time. A single run can take 18 hours, so this is not for the faint of heart.  Edit: As Cynthia pointed out in a comment below, the speed run can take 18 minutes, while a “normal” run of the game can take 20 hours.

My first reaction may be similar to many readers': this is what people spend time getting good at? You have to understand, it takes many many repetitions of a game run (again, these can take 20 hours-plus once you get good at it) to be competitive. People record glitches in the game code that might afford one a shortcut to a particular goal. This is a huge investment in time.

But then I thought about some of my own challenges. Among other pieces, I’m starting to work on Bach’s Chaconne transcribed for the guitar. 256 bars (actually, 64 variations on a 4-bar pattern). Professional violinists spend a lifetime mastering this one piece – once you get all the notes down (and with runs of 32nd notes, there are a lot of them) you still have to develop a feel for how the various variations string together, when to hold back, when to cut loose. It’s going to take me a very long time to master (if I ever do).

So, speed gaming and playing the Chaconne. Neither is intrinsically worth more than the other. To the individual practitioner both activities are engaging. There are audiences that derive pleasure from supporting / spectating the practitioner.

What makes both of these “challenging” – and this is the thought I want to mull over at length in a future post – is rooted in the very nature of human learning. Learning is fundamentally about training neurons to fire in new patterns. This takes time and repetition (in most cases). It’s been noted that humans have to strike a balance between complete inertia – the inability to learn anything new – and over-learning, or the ability to learn new behaviors so quickly that we never adopt habits.  (I don’t think I’m getting that contrast exactly right – I’ll try to research it in that future post I keep talking about). In short, there are good evolutionary reasons why I can’t just read through the Chaconne once and have it completely memorized. I have to train my fingers and my memory to anticipate passages, train my fingers to fly quickly enough over the strings, even work out individual fingerings for each note to make the passage more efficient. It’s a great deal of work.

Oh yes, I wish I could learn it quickly and simply start performing it. But that’s not how we’re wired. It’s both frustrating and rewarding. And with that, I bid you all good night. Really, at some point soon I want to reflect more deeply about biological origins of slow learning, but not tonight. I’m too fuzzy-headed. And as Dragon-born, I still have to locate an Elder Scroll to learn the Dragonrend shout and save the world from extinction.

Oops I did it again

Every so often I find myself writing a variation on the same post – the idea that working with wood entails largely non-reversible operations. Essentially, woodworking has no “undo” command. I’ve written about that back in October here, in Feb 2011 here, Aug 2009 here… oh, there’s a post from June 2007 and one from June 2006, too. Eight years and counting of noting the same problem – rushing forward without stopping to think one or two steps ahead. The consequences of “just try it” are often irreversible.

A friend at work had a Spanish cedar platter that had once been her grandmother’s. She packed it in a suitcase for a long flight and it cracked in transit. Fortunately it was a clean split and hadn’t completely separated into two pieces, so after looking at it I offered to glue it up. I wasn’t exactly sure what I would use for clamping pressure, but felt confident I could use a band clamp, a bunch of rubber bands, or if I had to, cut out clamping cauls on a bandsaw and use bar clamps. As it turns out, rubber bands did the trick. I flooded the joint with glue (in hindsight, I should have tried to be more economical and used a syringe that I’d forgotten I had) and clamped it up.

Yellow wood glue is a great adhesive. In a tight joint and with proper pressure, the adhesion will be stronger than the wood itself. That is, if you try to re-break the platter, it will split somewhere other than on the glue line. It also cleans up with water, doesn’t smell, and isn’t carcinogenic. One problem, though, is that if you just try to smear it off the wood while still wet, it can leave an invisible coating on the wood that will show up as a blotch when you apply a finish. So the general rule is to let it cure for about 10 minutes and then carefully scrape it off as it gets “gummy.” I’d flooded quite a bit onto the joint (my first “mistake”) so I knew I’d be waiting more than 10 minutes to clean it up. Eventually, though, I did gently scrape most of it away.

Spanish cedar platter, glued


Here’s where I rushed it – after an hour or so (when I knew the glue had set) I saw that one part of the repair was just ever so slightly out of alignment. The halves were offset by maybe 5 thousandths of an inch, about the thickness of 2 sheets of paper. Not terribly visible, but I could still see it and more importantly feel the ridge as I ran my fingers over the joint. So then I did what I would normally do – I grabbed a small card scraper and started to scrape down the seam until the two sides were flush with one another.

Except… this platter was finished (varnish or shellac, I can’t quite tell yet, but I’m leaning toward shellac). Which means I put a nice scrape mark in the middle of a finished platter. Ugly! I hadn’t thought of the consequences of taking a scraper (or any abrasive) to the work – to do it right I’d probably have to strip off all the finish and re-finish the piece. Even that wouldn’t be such a problem, except this wasn’t my piece to play around with. Remember, it was my friend’s late grandmother’s. Now I’m feeling really bad that I might have made things worse. I’ll take it in and talk over options in the next day or two, once I figure out what the finish actually is (easy test – if I rub it with alcohol and it dissolves into the cloth, it’s shellac. Otherwise varnish). Please let it be shellac – it’s easier to play with (again, dissolves in alcohol) and I might be able to just fix that one area and give the whole platter one more good top coat or two to make it all look even.

I’m hopeful that I won’t be referring back to this posting six months from now, but history has a way of repeating itself.