Review: Boaler (2015) - What's Math Got to Do With It?
I have read a lot of research papers, books, magazine articles, and blog posts about math education and find it impossible to keep them all straight. Writing a review helps me process and remember the main points and gives me something I can look back on. I hope it helps you too. Check out all my reviews.
What's Math Got To Do With It? is the hot math book of the moment, and author Jo Boaler is a current superstar of the math education world. Originally from England, Boaler moved to the US in 1999. Despite outrageous harassment from a pair of Californian mathematics professors, she has made a name for herself as a promoter of best practices in math education. What's Math Got To Do With It? is her attempt to synthesize research and present it in an accessible way to parents and teachers to improve the quality of math education in this country.
She highlights many of the well-known issues in math education: the ineffectiveness of lecture and rote practice, the over-emphasis on summative assessments (particularly multiple-choice tests), tracking students by ability, and reduced expectations for female students. Along the way she makes a mostly well-articulated case for how to improve math education in the United States.
I mention the US specifically because this book is clearly written with the United States education system in mind. While some of Boaler's research took place in England -- including her significant longitudinal studies -- the book begins by raising alarms about the US's performance on international tests. Throughout, Boaler contrasts the practices of more successful countries (such as Finland and Singapore) with those of the United States. Rather than go through all of the ideas in the book, I want to share some of the big points that struck me as interesting or particularly well-argued.
Pseudocontexts
In discussing why many students see school math as useless, Boaler describes ways in which students develop the idea that math is not applicable to their lives. She blames this partially on the prevalence of pseudocontexts. A pseudocontextual problem is one that applies mathematics to a concrete situation but does not allow students to rely on any of their real-world knowledge. For example: if you can buy 1/4 of a pizza for $4, how much would a whole pizza cost? In the real world, we know that buying larger quantities of food often means paying a reduced rate, but as Boaler explains, "none of this matters in Mathland."[31]Page 52 Dan Meyer makes a similar case in his talk on modeling, which he has made available for free and which is great.
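The pizza example can be made concrete with a quick sketch. Both functions below are my own hypothetical illustrations, not from the book: the first applies Mathland's pure proportional reasoning, while the second folds in an assumed bulk discount of the kind a student's real-world knowledge might suggest.

```python
def mathland_price(part_price, fraction):
    """Pure proportional reasoning: the whole costs 1/fraction times the part."""
    return part_price / fraction

def realworld_price(part_price, fraction, bulk_discount=0.25):
    """Hypothetical real-world pricing: buying the larger quantity earns a discount."""
    return (part_price / fraction) * (1 - bulk_discount)

# The "correct" textbook answer: a whole pizza at the same per-slice rate
print(mathland_price(4.00, 0.25))   # 16.0

# A student drawing on real-world knowledge might reasonably expect less
print(realworld_price(4.00, 0.25))  # 12.0
```

Both answers are arithmetically defensible; the trouble is that only the first is accepted, which teaches students to leave their real-world knowledge at the classroom door.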
I was unfamiliar with the concept of pseudocontexts prior to reading this book. I definitely understand her point and appreciate the way in which pseudocontexts can contribute to unproductive student beliefs; however, I am not sure I agree with the idea that contexts "should only be used when they are realistic and when the contexts offer something to the students, such as increasing their interest or modeling a mathematical concept."[32]Page 53 Maybe I am splitting hairs, but I think pseudocontexts can still be used beneficially as long as time and attention are spent helping students become aware of the assumptions they are making. Nearly any modeling of the world using elementary mathematics is going to involve assumptions; we just need to make sure that students are thinking about them.
Multiple-Choice Testing in America
While discussing the United States' commitment to standardized testing, Boaler dropped this bomb: "It is hard to find a single multiple-choice question used in Europe--in any national assessment, in any subject, at any level, in any country--yet almost all of America's test questions are of a multiple-choice format."[33]Page 65 I found this shocking. I am not as familiar with testing practices because I work at a private school, but this section of the book contradicted some of my prior understanding.
First, I was shocked that multiple-choice questions play little or no role in assessing students in Europe. I know that multiple-choice tests are a poor way of assessing student achievement, but I was under the impression that other kinds of tests were too expensive because of how time-consuming they are to grade.
My understanding was that no one had been able to develop a test that measures real student understanding and can be used widely at a reasonable cost. Does this mean that Europe has a solution to this problem? Or are they just testing so much less that the costs are relatively minor? If anyone has a recommendation for further reading on this topic, I would greatly appreciate the reference.
Assessment - Chapter 4
In the same chapter where Boaler mentions testing in Europe, she addresses the distinction between formative and summative assessments[34]She uses "assessment for learning" to refer to formative assessments and "assessment of learning" to refer to summative assessments. as well as standards-based grading. While I agree with her conclusions, I thought her argument was too unspecific to be useful to most parents and teachers. Concrete examples of both formative assessments and standards-based grading would have strengthened the chapter.
One point she makes well is that for feedback to be constructive, percent and letter grades cannot go on tests. She cites a study in which children were put into three groups.[35]She cites a study by Ruth Butler from the Hebrew University of Jerusalem. The description begins on page 99. One group received traditional grades, another received only comment-based feedback, and a third received both types of feedback. Unsurprisingly, the students who received comment-based feedback outperformed the students who received traditional grades. However, the group receiving ONLY comments also outperformed the group receiving BOTH comments and grades.
This fits with my experience with standards-based grading. I found that students, and especially parents, gravitate towards the grade and ignore the feedback. Removing the grade forces them to engage with the feedback and standards in order to understand how they are progressing. Here are three additional quotes from the chapter that are great:
"When students are sent a label telling them where they stand compared to other students, rather than where they stand in their learning of mathematics, it offers no helpful information and is harmful to many students."[36]Page 90
"Diagnostic comment-based feedback is now known to promote learning and should be the standard way in which students' progress is reported."[37]Page 100
"Importantly, students should be given information that refers to the content area being assessed, not to other students."[38]Page 101
Tracking - Chapter 5
Boaler's argument against tracking in chapter 5 was the strongest part of the book. She outlines a clear case against tracking students by ability. I'll allow her quotations to demonstrate:
- "Countries that reject ability group (...) are among the most successful in the world."[39]Page 103
- "the United States was found to have the greatest amount of (...) tracking" in the world.[40]Page 104
- "[Tracking] has some serious flaws, including the difficulty of identifying students correctly when children develop at different rates".[41]Page 104
- "Research tells us that approaches that keep students as equal as possible and that do not group by 'ability' help not only those who would otherwise be placed in low tracks, which seems obvious, but also those who would be placed in high tracks too."[42]Page 105
- "At around seventh and eighth grade, students in the United States typically get placed into different levels of classes which determines their future for many years to come." Some of these students have the opportunity to finish algebra 1 before high school. "The critical information that schools rarely provide is that (...) students cannot take calculus unless they have already passed" algebra 1 before high school.[43]Pages 106-107
- "when students moved into ability groups in grade seven, positive growth-mind-set thinking reduced, and the students who were most negatively affected were those going into the top track."[44]Page 110
- "ability grouping not only limits opportunities, it influences the sorts of people our children will become." "Through my own research I have found that students in tracked classes in American high schools not only developed ideas about their own potential, but they began to categorize others in unfortunate ways -- as smart or dumb, quick or slow"[45]Page 114
Boaler does a wonderful job of clearly outlining the case for eliminating tracking. She shows not only how it could help the US improve its math education but also how it would benefit all students, including the high achievers. If you are making the case for eliminating tracking at your school, I would highly recommend this chapter as a persuasive case that could be shared with colleagues, teachers, parents, and administrators.
Conclusion
This is a great book! I think it suffers somewhat from trying to write for too many audiences, but it makes many important points clearly and in a way that is accessible to most. I definitely recommend that you check it out.
July 30, 2015 @ 6:37 am
I'm a teacher in Scotland and would be happy to answer any questions you have about how we assess students. I think the assessment practices are not as different as this book makes out, but obviously I can only speak for Scotland.
July 30, 2015 @ 4:39 pm
Thanks for commenting, Fiona. It is awesome to hear from someone across the Atlantic!
Let me ask a few more specific questions:
1. Does Scotland do a national assessment for all students? In the US, for example, all of the students in public school take a standardized test at the end of every school year in multiple subjects. This assessment is not designed by the teacher or the school but by the state or a government agency.
2. If yes to #1, how often do students take that assessment? Is it multiple choice? Are there essays or open-response questions?
3. If yes to #1, are those assessments taken by all students? Every year? On all subjects?
Interested to hear more details!
July 31, 2015 @ 9:26 am
1. In Scotland we do have standardized tests, which are taken at the end of Primary 7 (11-12 years old) and S2 (13-14 years old). These assessments are used as a tracking tool and to provide Target Grades for pupils once they move into the examination stage of secondary school. The tests are a mixture of multiple-choice questions and written-answer questions. For the examination stage, S4-S6 (ages 15-18), a written exam covering various levels (National 3, the lowest, to Advanced Higher, the highest) is taken every May. For the most part these exams are written-answer only, but until very recently the Higher examination (2nd highest) comprised multiple choice for 40 out of 130 possible marks. This is all in maths, but other subjects absolutely use multiple choice.
2. The P7 and S2 assessments are taken by all pupils unless there is a reason to withdraw them from the assessment. The S4-S6 stage pupils only sit the examinations if they are at the appropriate level. National 3 & 4 are assessed internally with 3/4 unit assessments, whereas National 5 onwards has 3 unit assessments and an external exam provided by SQA (Scottish Qualifications Authority). All pupils across Scotland who are sitting these levels will sit the exact same exam. These exams, although they come from SQA, are written by a team of teachers who are employed by SQA.
I think I'm getting this all right; it's quite a fiddly system when explaining it to people who aren't teaching it.
August 2, 2015 @ 9:12 pm
Thank you for all of this info. I have two follow-up questions:
1. Who corrects/grades these tests? Is it sent to a testing company or is it done by the school?
2. You said "These assessments are used as a tracking tool and to provide Target Grades for pupils once they move into examination stage of secondary school". I am interested in the concept of Target Grades. Can you tell me more about what they are and what they mean?
As a follow-up to the post... Boaler's NO MULTIPLE CHOICE ANYWHERE statement seems not very true based on this info. I wonder if anyone has done an analysis of all the standardized testing done around the world. Sounds interesting to me. Does TIMSS have this info?
August 3, 2015 @ 3:54 pm
1. The tests at P7 and S2 are marked by an outside company with the help of computers, as these assessments are taken online. Right/wrong questions are marked by the computer, and written-response questions are marked by employees of the outside company. At the S4-S6 stage some marking is done internally by the class teacher but can be verified by SQA, who provide our assessments. Verification just makes sure you are marking in line with SQA policy, not being too harsh or too lenient, and that any assessments you have adapted are still at the required degree of difficulty. Pupils who are completing National 5, Higher, or Advanced Higher level courses sit one externally marked exam, and that is marked by a team of teachers trained by SQA. Teachers are not allowed to mark any assessments from their own school or schools they have previously worked in, and since our marking is done online, test questions are put in to check your marking is up to standard.
2. This is a tough question to answer in writing, so hopefully this makes sense. Target grades are generated using the results from previous year groups who achieved the same score and what they went on to achieve in their final exams. S4 pupils who've never taken a qualification course before have their targets generated by the outside company who administer the standardised test. Once they have completed a qualification, their next target grades are generated by our local authority (equivalent to a county in the US). Target grades are what our headteacher (principal) evaluates department performance against. If we meet or exceed targets, we've done a great job. If not, there are issues to be addressed. It is a flawed system, as it takes no account of any problems that arise for students during the year, like illness, family loss, or mental health issues.
August 3, 2015 @ 5:17 pm
"Math With Bad Drawings" talked about testing in the US vs. UK a while back. I found it interesting:
http://mathwithbaddrawings.com/2015/06/17/us-vs-uk-who-grades-students/
And Finland (an oft-cited success story) doesn't test annually, just once at the end of high school:
http://www.seattletimes.com/seattle-news/finlands-educational-success-story-less-testing-more-trusting/ Don't know the form of those tests, but if it's only done once for each student, there's a chance it wouldn't have to be multiple-choice.
I agree with your complaint about Boaler's call for use of only very realistic contexts. As you imply, the mathematics of being totally real-world is insanely complicated. If elementary students are to do any kind of modeling at all, contexts will have to be greatly simplified from reality.
Furthermore, it doesn't leave room for being fanciful or silly. One look at her "YouCubed" site shows lots of fanciful, not-very-real-world contexts. So I'm not totally sure what she meant by that statement, but I don't think she meant what she said.
August 3, 2015 @ 9:55 pm
Thanks for sharing those two articles. They are really interesting and informative. It seems like a reasonable way to give better tests is to give them less often. With fewer total tests, more resources can go towards grading them, which means actual graders instead of computers. I find learning about all of the international testing and schooling differences fascinating. Always interested in reading more there.
As for pseudocontexts, Boaler made the point that IF contexts are used, they should be true contexts where students can apply their real-world knowledge. She is still open to purely mathematical questions devoid of context. This cogs task, for example, is pretty context-less. Students likely don't know anything about cogs, and if they do, nothing in this task is likely to contradict their background knowledge. Still, I think there is more of a continuum from abstract to accurate contexts than Boaler acknowledges. We just have to make sure we aren't repeatedly giving kids unrealistic contexts that support the belief that there is one math (and one kind of understanding) for in school and a different one for out of school.