At researchED Vancouver in February, I had the opportunity to discuss Nilli Lavie’s *Load Theory of Attention* and how it relates to *Cognitive Load Theory* and AD/HD. If you didn’t get a chance to attend, you can find a YouTube version of my talk here. This blog post is a summary of how I think *Cognitive Load Theory* and *Load Theory of Attention* complement each other.

Sweller’s *Cognitive Load Theory* discusses the concepts of intrinsic and extraneous cognitive load through the use of the two major players in cognitive architecture: the working memory and the long-term memory store. Currently, it is theorized that the long-term memory store is infinite in capacity; yet the working memory, or processing space, has limited capacity. Not only can facts/information enter the working memory from the environment, but it also has some ability to “search out” stored facts/information from our long-term memory. When the capacity of the working memory is filled, and processing power becomes limited, we say an individual is experiencing cognitive overload.

One could imagine that if a lot of information, or cognitive demand, is coming from the environment, this may quickly take up working memory space and lead to cognitive overload. With respect to teaching, the load generated by the way we set up, explain, and execute our lessons is referred to as *extraneous cognitive load*. In other words, *Cognitive Load Theory* states that the way we present information and tasks to the learner is important. For instance, inquiry-based learning has a high extraneous load for novice learners compared to worked-out examples; yet worked-out examples tend to have a high extraneous load for expert learners compared to inquiry-based learning (an oddity known as the *expertise-reversal effect*).

The same is true in the other direction. That is, one could imagine that if a question is intrinsically hard, then our working memory might try to “search out” a bunch of past knowledge from our long-term memory. With respect to teaching, *Cognitive Load Theory* states that we need to be mindful of how much mental effort is required of our learners to perform a particular task – this is known as *intrinsic cognitive load*. This type of cognitive load is often measured using *element interactivity*. For example, solving a related rates problem in calculus has high intrinsic load, as learners are required to keep track of many interacting elements (the picture or model, variables representing changing quantities, implicit derivatives, the original word problem); yet finding the derivative of *y = x^2* is likely to have low intrinsic load.

Keeping Sweller’s *Cognitive Load Theory* in mind, let’s turn our focus to Lavie’s *Load Theory of Attention*. We will keep the two major players from before (working memory and long-term memory), and add in one new player: the sensory memory. It is believed that the sensory memory holds stimuli coming from the environment just long enough to be transferred to our working memory. Lavie’s *Load Theory of Attention* (or *Perceptual Load Theory*) discusses attentional capacities through the use of task-relevant stimuli and distractors. She theorizes that our attentional resources are of finite capacity, and that the perceptual load of task-relevant stimuli determines whether or not distractors get processed.

One could imagine that, for a given task, there will be task-relevant and task-irrelevant information (or stimuli). Let’s call task-irrelevant stimuli *distractors*; these are the items we would like to keep away from our processing space. Suppose that the task-relevant stimuli don’t demand all of our attentional capacity in the sensory memory. In these cases, under *low perceptual load*, the leftover capacity is taken up by any number of distractors, and both the task-relevant stimuli and distractors are sent to the working memory space. Now distractors begin competing for processing space. For instance, consider a lecturer using PowerPoint. Assuming that you know the material very well, and it is easy to read the slides, you are in a scenario of low perceptual load. The free attentional capacity allows distractors to get through to your working memory space. You might begin to think about what is for dinner, the last song that played on the radio, why the person in front of you is wearing socks and sandals, or what new notification you received on your cell phone.

In contrast to the above scenario, imagine that the task-relevant stimuli do demand all of our attentional capacity in the sensory memory. In this case, under *high perceptual load*, no spare attentional capacity remains, so distractors do not take up any of this space. Now only task-relevant stimuli are sent to the working memory for processing. For instance, consider the case where we are listening to a lecturer again. This time, we don’t know the material as well, and she is writing at the board in a handwriting style that is slightly messy. In this case of high perceptual load, our attentional capacity is maxed out trying to decode the handwriting and the language used to explain the concepts. It is more challenging to think about distractors, as they aren’t vying for processing space in your working memory.

As you might be able to see, the two theories developed by Sweller and Lavie seem highly complementary. Of interest to me is that it does not seem as though either theory has acknowledged the other as of yet. However, it does seem as though *Perceptual Load Theory* and *Cognitive Load Theory* might offer insights into each other’s realm:

- How much of the extraneous load of a task comes from processing distractors?
- Do distractors affect intrinsic cognitive load?
- If we decrease extraneous cognitive load, does this always lead to less processing of distractors?
- How can we create lessons in such a way to ensure that perceptual load is “high enough”? And how high is “high enough”?

I’m sure there are other concepts that could be intertwined as well, but these are some of the first questions that come to my mind. As always, I welcome your thoughts and questions on this reflection.

Consider the following problem: Find all roots of the function *y = (x-3)^2 + 4*.

Now, most of us will know this as the equation of a parabola in the *xy*-plane; one whose vertex is at the point *(3,4)*.

And most of us would be happy noting that this equation does not have any roots over the real numbers. For those of you who want to travel down the rabbit hole of complex numbers though, let’s take a walk.

Assume that we now have the equation *y = (z-3)^2 + 4*, where *z = a + bi* is permitted to be a complex number. We can do a bit of algebra to find the two complex roots of this equation: setting *(z-3)^2 + 4 = 0* gives *(z-3)^2 = -4*, so *z – 3 = ±2i* and *z = 3 ± 2i*.

Now, one of my clever calculus students was trying to make the connection between distance and the modulus of a complex number. He was trying to connect the modulus of these particular solutions to the distance from the origin *(0,0)* to the vertex of the parabola *(3,4)*. He wrote down *|z| < 5*? and *|z| > 3*?

Notice that he was thinking about the right triangle formed below, where the 5 is the hypotenuse and the 3 and 4 are the legs:

I believe he was on this track because I had mentioned that the modulus formula for *z = a + bi* is given by *|z| = sqrt(a^2 + b^2)*. So it makes sense that he was thinking about the distance to the origin here (just not on the correct plane). After a bit more discussion, he was still adamant about *3 < |z| < 5*, which is certainly true for this particular example, since *|z| = sqrt(3^2 + 2^2) = sqrt(13)*.
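For readers who like to double-check with a computer, here is a quick numerical sanity check of this particular example (a Python sketch of my own, not part of the original discussion):

```python
import cmath
import math

# Solve (z - 3)^2 + 4 = 0 over the complex numbers: z = 3 +/- 2i
roots = [3 + cmath.sqrt(-4), 3 - cmath.sqrt(-4)]

for z in roots:
    assert abs((z - 3) ** 2 + 4) < 1e-12        # each root satisfies the equation
    assert math.isclose(abs(z), math.sqrt(13))  # |z| = sqrt(3^2 + 2^2) = sqrt(13)
    assert 3 < abs(z) < 5                       # the student's conjecture holds here
```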

Then I stopped and thought about this. I found it weird that the modulus of the solution was greater than the length of the smaller leg, yet smaller than the length of the hypotenuse. I dove in to see if it would work in general.

Assume *y = (x – a)^2 + b* is a parabola in the *xy*-plane that lies above the *x*-axis with *b > 1*. Extend this parabola naturally over the complex numbers and find its roots: *(z – a)^2 + b = 0* gives *z = a ± i·sqrt(b)*.

Consider the modulus of these complex roots: *|z| = sqrt(a^2 + b)*.

Now, since *b > 1*, we can see that

*a^2 < a^2 + b < a^2 + b^2*, i.e. *a < |z| < sqrt(a^2 + b^2)*

and this is fascinating to me because it tells me that the complex modulus of our roots will lie on some kind of ring in the complex plane! In fact, we know *a < |z| < c*, where *a* is the *x*-coordinate of the vertex of the parabola and *c* is the distance of the vertex to the origin (see the blue triangle and parabola given above). In the Re/Im plane we would get a region that looks like this:
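As a quick sanity check of the general claim, here is a small Python sketch (the sample values of *a* and *b* are my own choice, purely illustrative):

```python
import math

# For y = (x - a)^2 + b with a > 0 and b > 1, the complex roots are
# z = a +/- i*sqrt(b), so |z| = sqrt(a^2 + b). The claim: a < |z| < c,
# where c = sqrt(a^2 + b^2) is the distance from the vertex (a, b) to the origin.
for a in [0.5, 1.0, 3.0, 10.0]:
    for b in [1.1, 2.0, 4.0, 50.0]:
        mod_z = math.sqrt(a ** 2 + b)
        c = math.sqrt(a ** 2 + b ** 2)
        assert a < mod_z < c, (a, b)
```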

All in all we didn’t get too far discussing the complex modulus, but it was definitely still a bull’s-eye in my books.

Here are the first few pages of a recent calculus midterm of one of my students who has been diagnosed with ADHD. I’ll let you take a peek at what you see before I give my reflection.

Now, I want you to go back and take a look at the first page, where question #1 ii required the knowledge of the derivative of *log_{3}(x)*. You can see that the student set up the equation *log_{3}(x) = b* in order to help him determine the derivative using the quotient rule. But the giant “*?*” beside *b’* caught my interest. (Of course, if there is anything else of interest to you, please leave a comment!)

Now, he begins playing around at the top of the page, recalling rules for how to deal with logarithms. There is a *y = 3^x* and a *y = log_{3}(x)* indicating to me that he was thinking about potentially finding the derivative using inverse functions or implicit differentiation. However, not much happens here, so we will catch up in a few pages.

The next page is nothing special, in that he tackles the next couple of derivative questions without giving any more thought to the log base three problem he is having. But check out the top of the third page. Here, he correctly gets the relationship between exponentials and logarithms: *3^x = b* means *log_{3}(b) = x* (or vice versa). Then there is a little bit of play at the bottom of the page, trying to re-write this relationship in various ways to potentially get a nice equation to differentiate. Aside from now having the inverse relationship solidified, not much headway is gained on the initial problem.

Finally, on the last page, we see one last attempt to think about *3^x = u*, perhaps a nod to the variables I use when doing the chain rule (*dy/dx = dy/du * du/dx*). This is the final attempt to determine the solution to the log base three problem, and the rest of the test continues in a normal fashion.

The most interesting thing from my perspective is embedding what I see in a cognitive load theory setting. We know that the working memory has limited capacity to hold and synthesize information. This information can come either as environmental stimuli or as schemata entering from long-term memory. I was always under the impression that cutting back environmental stimuli for students with ADHD was a must, as this allows the working memory to focus more on the task at hand. However, seeing this test had me thinking a bit deeper.

At the college level, we are typically good at minimizing outside distractions; doors are closed, rooms are quiet and I cross my fingers that maintenance has fixed any lights that are in strobe-mode. However, as I do not have ADHD myself, I cannot comment on what outside stimuli might still be entering the working memory. Perhaps a song that was heard earlier that morning? Whether or not he forgot his lunch at home? What plans are for after school? So let’s assume that some working memory space has been allocated to this.

Now it’s test time. Since this particular student is quite adept at mathematics, most schema enter the working memory quite effortlessly. We can see this demonstrated on page 2, where some complex derivatives are handled. From my perspective, it is actually the snag of not knowing the derivative of *log_{3}(x)* that pushes the working memory over its capacity. Look at how often he returns back to the problem – at the top of page 1, the top & bottom of page 3, and at the top of page 4.

Just how taxing is it on the working memory to be subconsciously processing this log base three problem over the course of four test questions? How debilitating would this be if there were not well-developed schemata to draw from when writing this test? How much more success would there have been if he was able to dislodge this log base three problem from his working memory, instead of it continually returning to occupy his focus? I find these questions super interesting, and I have thoughts, but no particular answers. If there are any readers who have studied cognitive load theory from the perspective of individuals with ADHD, I’d love to read a bit more on this topic.

In my last two blog posts, I discussed the concepts of element interactivity, as well as intrinsic and extraneous cognitive load. We say information has high element interactivity if there are many elements of the information that must be processed together at the same time. High element interactivity generally implies high intrinsic cognitive load. Here, intrinsic cognitive load refers to a working memory load caused by the intrinsic nature of information that we are trying to process. Finally, extraneous cognitive load refers to a working memory load imposed by the pedagogical nature of the information being taught.

**Defining Understanding**

Now that we know about element interactivity, we can use this concept to define understanding. In a cognitive load setting, understanding is the ability to process all interacting elements in working memory at one time. Since the focus is on interacting elements, it does not make sense to apply this definition to individual elements, such as learning one French vocabulary word (cat = chat).

Let’s analyze our previous examples. Consider the math fact *3 + 5 = 8*. According to our definition, if a learner is able to answer *3 + 5 = ?* correctly, processing all of the interacting elements at once, we would say that she has demonstrated understanding of the question at hand. I would argue, then, that using a strategy such as tallying up three and five on her fingers would display a lack of understanding. Even beginning with three fingers and counting up to eight, whilst being a more effective strategy, still displays a lack of understanding, as she is processing some or all of the elements individually. Of course, I am not arguing that students shouldn’t be permitted to use these counting strategies. It is likely that these are crucial stepping stones in the learning trajectory, and the instructor needs to be mindful of when the student seems ready to move beyond these strategies.

**Understanding and Incorrectness**

One aspect of the definition that I am curious about is when the learner makes a mistake in the process. Consider solving for *x* in *3x – 10 = 5.* Is it possible for the student to understand, yet be incorrect? Are these mutually exclusive events? Let’s say the student solution is

*3x – 10 = 5*
*3x = 15*
*x = 6*

(To be concrete, suppose the slip comes in the final step: 15 divided by 3 is 5, not 6.) This is incorrect, but it still shows us that the student understands the process of solving for *x*, and that they can process all of this information in working memory at once. Does understanding come down to a judgement call on the side of the instructor in these cases?

**Instructional Implications – A Case for Quick Math Fact Recall**

Let’s try to deconstruct our current pedagogy in light of this definition of understanding. Consider all of the multiplication facts that our students must recall. There is element interactivity within one individual fact (*3 x 4 = 12*), as well as interaction amongst all of the multiplication facts for three, and amongst all facts up to *9 x 9 = 81*! Working memory might get overwhelmed, as intrinsic load is high due to the number of facts that must be remembered.

Think also about what our current curriculum states: students should be comfortable with knowing other concepts, such as knowing *3 x 4 = 4 + 4 + 4 = 12*, building array models, or knowing about the commutative property. All of this increases extraneous cognitive load, thus requiring more time and effort for the students to move the facts to long-term memory. I would argue that this is why we have seen a shift toward moving recall of the multiplication facts to later grade levels. In British Columbia, students aren’t expected to recall facts for 3s or 4s until Grade 5; and there is no mention of the harder facts like 7s, 8s or 9s.

To compare, I had my multiplication facts memorized by the end of Grade 3 in the 80s in Ontario. Some might argue that we were taught without *understanding* (this alternate definition is a bit fuzzy, but is typically interpreted as knowing how to complete a question utilizing a model). This is false, as I have many documents showing that we indeed used models. But the key difference here is that *the focus of instruction was on automatization of facts*, and that models were used to introduce concepts and as help when students weren’t understanding.

For a large task such as learning the multiplication facts, why not have students learn the individual facts first? Using techniques such as interleaved and spaced practice, and introducing new fact families after long exposure to previous ones, would be beneficial for learning. After students are comfortable with recall of the facts, then we can focus our teaching on developing *understanding* (the fuzzier definition) of how multiplication is connected to other concepts. Of course, once students can recall the multiplication facts, they have displayed understanding in the cognitive load sense, as they can process all of the elements together at once. So why would we want to learn our facts first, before connecting to other concepts? Once the facts are remembered well, they can be retrieved quickly and efficiently, leading to lowered intrinsic load and more working memory capacity to work on the current problem of connecting the fact to another concept.
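To make the scheduling idea concrete, here is a hypothetical Python sketch of an interleaved practice sequence (the fact families, function name, and shuffling scheme are all my own invention, purely illustrative, not a prescription):

```python
import random

def interleaved_questions(families, up_to=9, seed=0):
    """Mix questions from several fact families into one practice
    sequence, rather than blocking one family at a time."""
    rng = random.Random(seed)  # seeded so the sequence is reproducible
    pool = [(a, b) for a in families for b in range(1, up_to + 1)]
    rng.shuffle(pool)  # adjacent questions now rarely share a family
    return [f"{a} x {b} = ?" for a, b in pool]

# Once the 3s are well practised, the 4s can be introduced alongside them:
questions = interleaved_questions(families=[3, 4])
```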

In conclusion, I am not saying that we shouldn’t explain why certain facts are the way they are! This can certainly be done as motivation for the problem, and mixed throughout as needed; however, it should not be the focus of the learning, because doing so increases extraneous load and not all students will successfully move the facts into the long-term memory store this way.

In my last blog post, I briefly summarized element interactivity. When elements must be processed in working memory simultaneously due to them being logically connected, we say the elements have high element interactivity. By supporting schemata development in our pedagogical practices, we can combat the strain on working memory that element interactivity causes. There are also two other ideas to keep in mind when reflecting on our pedagogy: intrinsic and extraneous cognitive load. I would like the topic of this post to be dedicated to summarizing and exploring these topics.

**Intrinsic Cognitive Load**

Working memory load that is imposed by the intrinsic nature of the information we are trying to process is known as intrinsic cognitive load. Perhaps this can be explained nicely through the use of an example.

First, let’s think about solving for *?* in the addition statement *3 + 5 = ?*. We have seen that, for novice learners, there are many elements to process here, leading to high element interactivity. Novice learners may have to process all of these elements separately, perhaps first counting to three, then counting up again to eight. In this instance, the high element interactivity causes intrinsic cognitive load. It would be a significant challenge to process anything else in working memory since all of the processing power is dedicated to making sense of the symbols and using the counting-up strategy.

For those who know the fact that *3 + 5 = 8*, this whole element can enter working memory, freeing up processing space. For expert learners with well-built schemata, this problem has low intrinsic cognitive load since they are able to interpret all the symbols in *3+5 = ?* as one unit, and come up with a solution to their interpretation quickly.

In summary, information can have either high or low element interactivity. High element interactivity necessarily leads to high intrinsic cognitive load due to the complex nature of the information. This is especially evident in novice learners. However, as schemata develop in these areas, learners are able to process the interactions of the elements more efficiently, decreasing intrinsic cognitive load.

**Extraneous Cognitive Load**

Working memory load imposed by instructional design is called extraneous cognitive load. For example, open-ended problem solving is a challenge for novice learners since they may be unsure of where to focus their attention. So much working memory capacity is used up navigating the structure of the task itself that little to no information can be learned. Based on the *Borrowing & Reorganizing Principle*, as well as the *Narrow Limits of Change Principle*, direct instruction through studying worked examples provides one of the best practices for learning novel information. In general, studying worked examples with expert instruction has low extraneous load. In novel situations with new information, instruction with little to no structure leads to high extraneous cognitive load.

Of course, this comes with some caveats, as worked examples can be structured poorly. The way the instructor approaches examples can also lead to high extraneous load. For instance, when working on related rates problems in calculus, most instructors will read the entire question, then proceed to work through the problem. Due to the high element interactivity and intrinsic load present in these types of problems, solving the problem using the typical approach causes high extraneous load in novice learners. A better approach comes through understanding the *Split-Attention Effect*: interweaving solution steps with information from the problem decreases extraneous load.

**Instructional Implications**

We have seen that when there are many interacting elements in a given problem, intrinsic load is necessarily high for novice learners. As instructors, our primary focus should be on schemata formation, as this leads to decreased intrinsic load. Well-built schemata also enter working memory as single elements, freeing up more processing space for other novel information.

When information is presented in a way that causes the learner to focus on aspects unrelated to the problem, this creates unnecessary extraneous cognitive load, leading to decreased working memory capacity. To combat this, we can present novel information through the use of direct instruction & studying worked examples. This will free up working memory by decreasing extraneous cognitive load. As our learners move from novice to expert learners, it becomes easier to vary our teaching pedagogy, as well-built schemata help to decrease intrinsic load in instances when extraneous load is high.

**Elements & Schemata**

In one of my earlier posts, I discussed biologically primary and secondary knowledge. In short, primary knowledge is knowledge that we are biologically programmed to learn, such as how to communicate with others within our culture. Secondary knowledge, however, is knowledge that we are not biologically programmed to learn.

To keep things simple within the framework in which I want to discuss understanding, let’s assume that facts and procedures can be divided into two classes: elements and schemata. Elements are single pieces of information that can be processed within our working memory, such as knowing that the number 3 corresponds to the numerical amount three. Once known, elements can be placed together to begin forming schemata. For instance, a schema for “3” may include knowing that 3 can be mapped to the word “three” or to three objects (cardinal), is the whole number after 2 and before 4 (ordinal), or that the number 3 may be used on your football jersey (nominal).

Schemata, once well-known, can be linked. For instance, a schema about prime numbers may include knowing that 2, 3 and 5 are the first three prime numbers. In addition to this, elements can form sub-schemata. Our reference to the ordinal, cardinal and nominal interpretations for the number three might all be considered sub-schemata of the overall schema we have for three. As we know, the beauty of schemata is that, once well-formed, they can enter working memory as a single element, freeing up working memory space for other information.

**Element Interactivity**

Element interactivity occurs when two or more elements must be processed simultaneously in working memory because they are logically related. Think about the multiplication fact *3 x 4 = 12*. There are actually five symbols that must all be interpreted at once due to them being logically connected. There are three numerals: 3, 4 and 12. There is the multiplication operation, which could be interpreted in a couple of different ways (as an array, as repeated addition, as a multivariable function that returns the product). Finally, there is the equal sign, a symbol referring to the idea of 12 being equivalent in some way to the product of 3 and 4. As a novice learner, all five symbols must be processed individually in the working memory; whereas an expert learner has a well-built schema that allows them to bypass having to process all of the symbols every time they see a multiplication fact. In essence, an expert processes one element; whereas a novice may have to process all five elements.

**Instructional Implications**

As mathematics instructors, we need to be mindful of how the elements of our problems are interacting within the context of teaching our students. High element interactivity necessarily causes more working memory capacity to be used, increasing cognitive load. One potential way to combat curricular competencies involving high element interactivity is to revisit pre-existing topics, ensuring our students have the well-formed schemata required to ease some of this cognitive load. Think about how challenging linear equations are for our students: they involve complex understanding of integers and fractions, as well as comprehension of how to manipulate all four of the main numerical operations. Before introducing equations, it would seem logical to review operations with integers and fractions so that students can consolidate their knowledge in these areas. By helping to create well-formed schemata in these topics, students can apply more working memory capacity to the new procedures that are intrinsic to linear equations, without applying too much working memory capacity to previous curricular topics. If consolidation does not happen, it is no surprise that the student struggles with linear equations, as the element interactivity is high and too much working memory is being allocated to topics that are not the focus of the lesson.

In my next blog post, I will explore two more interesting topics: intrinsic and extraneous cognitive load. We will see the interplay of element interactivity with these two topics and discuss instructional implications.

I was asked by a colleague last week to prove an identity involving radicals. The two expressions arise when considering cosine of the angle pi/12. Normally, one would apply a sum or difference formula, such as *cos(pi/12) = cos(pi/3 – pi/4) = cos(pi/3)cos(pi/4) + sin(pi/3)sin(pi/4)*,

and this would simplify to

*cos(pi/12) = (sqrt(6) + sqrt(2))/4*  (1)

However, when one of his students used a calculator, the calculator returned back an unusual expression:

*cos(pi/12) = sqrt(2 + sqrt(3))/2*  (2)

He and the student were able to verify that these expressions evaluated to the same decimal expansion, and so must be equivalent. But then his student asked him how to prove the equivalence of expressions like this. He tried for a bit, unsuccessfully – then he tormented me with this problem all Easter weekend. Eventually, I was able to show the equivalence using an old right triangle trick I saw a few years back.

Attach two right triangles together in such a way that the right leg of the second and the bottom leg of the first meet at a right angle. On the hypotenuses of the smaller triangles, write root 6 and root 2, respectively. This is done so that the hypotenuse of the larger right triangle is root 6 + root 2 – matching up with the numerator of expression (1).

Our goal is to apply the Pythagorean Theorem on the large right triangle, so we need to determine the legs of the larger triangle. To do this, we will determine the legs of the smaller right triangles. For the root 2 triangle, we have the obvious choice of making the legs (1, 1). For the root 6 triangle, we could make the legs (root 2, root 4), (root 3, root 3) or (root 1, root 5). Notice that in expression (2), we have a root 3. This suggests we might want to try the (root 3, root 3) combination for the root 6 triangle. This shows us that the legs of the larger right triangle are both root 3 + 1.

Now we can apply the Pythagorean Theorem on the large right triangle: *(sqrt(6) + sqrt(2))^2 = (sqrt(3) + 1)^2 + (sqrt(3) + 1)^2 = 8 + 4·sqrt(3) = 4(2 + sqrt(3))*.

Taking the square root of both sides gives *sqrt(6) + sqrt(2) = 2·sqrt(2 + sqrt(3))*.

And finally, dividing both sides of the equation by 4 yields the desired result: *(sqrt(6) + sqrt(2))/4 = sqrt(2 + sqrt(3))/2*.
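For anyone who would like to confirm the equivalence numerically, here is a quick Python check (expression (1) is *(sqrt(6) + sqrt(2))/4* and expression (2) is *sqrt(2 + sqrt(3))/2*, as pinned down by the derivation):

```python
import math

expr1 = (math.sqrt(6) + math.sqrt(2)) / 4   # expression (1)
expr2 = math.sqrt(2 + math.sqrt(3)) / 2     # expression (2)

assert math.isclose(expr1, expr2)                   # the two radicals agree
assert math.isclose(expr1, math.cos(math.pi / 12))  # and both equal cos(pi/12)
```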

I suppose that the moral of the story here, besides seeing some really interesting mathematics, is that I never would have solved this problem unless I had seen a previous problem involving something similar. In general, I believe it is safe to state that in order to be successful at solving problems, one should be exposed to many different types of problems (ever wonder how those Math Olympiad contestants get so “smart”?). From a cognitive science perspective this makes sense – it allows us to create problem archetypes (schemata) that we can draw upon to help solve future problems. And the more well-connected these schemata become, the easier it becomes to solve problems.

#1) *Cumulative Review — Why Isn’t Everyone Doing It?*

I have recently read *Accessible Mathematics* by Steve Leinwand, in which he outlines 10 instructional shifts to help raise student achievement. One of those shifts is toward giving ongoing cumulative practice at the beginning of your math lessons. It does not have to be terribly extensive – perhaps just four or five short recall-type questions to ensure that students are not forgetting past concepts. It seems obvious that we should be doing this – but many of us are not!

Why should we be doing it? Well, this was somewhat tied to the presentation that Yana and I gave at researchED. It seems that interleaved and spaced practice are highly effective strategies to increase long-term learning in our students. For instance, I saw a 10% increase in the discrimination of problem type when I used interleaved practice in my integral calculus class last year. However, there are some things that we don’t know about interleaving that warrant future studies – like how many problem types we should include, or how interleaving affects attention in our students.

Why are we not doing it? Efrat discussed some of the practical limitations of using interleaved and spaced practice at researchED New York. Teachers typically list time investment, lack of support, or an incompatible system as reasons for not utilizing spaced practice. What might change their minds? It seems that teachers are interested in ongoing professional development in cognitive science, and time to work with colleagues in order to help ease them into implementation of such tasks. As this is an area of interest to me, please contact me if you or your school is interested in ongoing professional development in cognitive science – I would be happy to help!

#2) *Depressing — Why Aren’t We Collaborating?*

Continuing on the conversation, we could ask why *aren’t* we collaborating more as a community? Let’s take a look at an example from my life. I had a student come into my calculus class with a TI calculator stating that his teachers at high school said they would absolutely require a TI calculator for college calculus. *Literally, what?!* With tools like Desmos at our fingertips, why is there a need to drag around a $200 brick? In addition to this, my department doesn’t allow graphical display calculators on major tests anyway. So it looks like I will need to reach out to the local community and try to spread the Desmos love. Why? Let’s look at it from the alternate viewpoint: If I teach Desmos to my students this year, but when they move on, the next teacher doesn’t know how (or doesn’t want to know how) to use Desmos, these students are now potentially disadvantaged. In essence, a teaching tool is greater when we share it with others in the profession and we develop long-term learning goals using similar tools.

#3) *Planning — Using Space, Not Time*

In a presentation by Nat Banting and Ilona Vashchyshyn, we were asked to consider planning a lesson using quadrants labelled “Teaching Actions”, “Teaching Spaces”, “Anticipation”, and “Improvisation.” In other words, when it comes to planning, we need to consider our space (the room, manipulatives, desk arrangement) and our actions (modelling, watching, telling). Nat and Ilona see our actions and spaces situated on a continuum between anticipation and improvisation. In fact, there has to be some improvisation within our classrooms, since it would be impossible for us to plan for all of the divergences that may happen in any given lesson.

Of interest to me was their belief that false dichotomies arise when we believe an individual spends all their time within one of the half-planes. For instance, if we believe an educator continuously anticipates and does not improvise in the class, then they are defined as a traditional teacher. On the other hand, those who are thought to improvise all the time are branded as reform or progressive teachers.

This also works for the horizontal half-planes. If an educator is too focused on the teaching spaces, the lesson might be branded a differentiated instruction type of lesson; and if an educator is too focused on the teaching actions, the lesson might be branded an inquiry type of lesson. There is probably more to this conversation, but I am still thinking through these two particular diagrams.

#4) *Synthesis — Finding Your Balance*

In Saskatoon I tried to synthesize some reading that I have been doing as of late. The first idea concerns non-routine cognitive tasks, which I first heard about from Dan Meyer at OAME 2017. The main premise is that a mathematical task can either have a real-world context or not. In addition, a mathematical task can involve “real work” or “fake work.” There are certain verb choices we make in a math class that lead to real work (question, predict, analyze, debate) or to fake work (evaluate, simplify). Finally, doing fake work in a real-world context is overrated; that is, dressing up a routine task with the air of real-worldness is overused in math education. However, pushing students to do real work outside of a real-world context is underrated; that is, we often fall short of allowing students to use meaningful verbs like question, predict, or analyze outside of real-world contexts. Think “Calculate when the phone will be charged given the model.” (routine, plug ‘n’ chug, dressed up in real-world clothing) versus “Predict the y-value given the data.” (non-routine, analyzing data to predict, non-dressed-up mathy question).

In addition to Dan’s thoughts on non-routine tasks, I embedded Steve Leinwand’s idea to lead lessons with data. My thinking was that if we are interested in moving toward real work, data can help drive questioning, noticing, and predicting. Provided things go well with the lesson, we can follow up with verbs that allow us to extend, such as generalize or debate. If you are interested in seeing a bit more, my slides from the conference can be found here.

Realistically, I think it would be quite the challenge to create every lesson as a non-routine cognitive task. To me, it feels unrealistic. Also, I firmly believe that the verbs recall, calculate, and simplify have a place in mathematics classes and that they should be respected. For instance, John Mighton of JUMP Math consistently reminds me that cognitive load is important – that is, our students require some skill in order to begin a rich task such as data analysis. This skill comes with practice, which can easily be acquired via spaced practice involving recalling facts. On the other hand, Bjork reminds me of desirable difficulties. Could non-routine cognitive tasks be shaped in such a way as to support learning and long-term retention?

As I continue to navigate the large divide of what feels like a *fake world* of mathematics and a *real world* of mathematics education, I can’t help but wonder how we might all be able to help shift the collective from *fake work* to *real work*.

Take a moment to read the phrase: “The hungry caterpillar ate the juicy leaf.”

Now quickly complete the word by filling in a missing letter: SO_P.

Out of curiosity, did you complete the word using the letter U to make SOUP? According to Kahneman, author of *Thinking Fast and Slow*, after processing the words HUNGRY and ATE in a sentence, we are *primed* to select the letter U in the word above since SOUP is associated with the words HUNGRY and ATE. Let’s explore this a little bit, and see if and how we might think about using this idea in our math classrooms.

**What is the Priming Effect?**

An idea in our memory is associated with many other ideas. These associations may be categorical, such as connecting the words FRUIT and APPLE, or property-based, such as connecting ADDITION or MULTIPLICATION to COMMUTATIVITY. Ideas may also be associated through effects, like how we may connect ALCOHOL to DRUNK, or CIGARETTE to CANCER. When primed with one of the links in an association, our mind has the ability to bring the other familiar, associated words into our working memory.
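To make the idea concrete, here is a toy sketch in Python. The association lists and the one-step “activation” rule are invented purely for illustration – real semantic networks are far richer – but the sketch captures why food-related cues tip SO_P toward SOUP rather than SOAP:

```python
# Toy association network: each concept links to related concepts.
# All links below are invented for illustration only.
ASSOCIATIONS = {
    "HUNGRY": {"EAT", "FOOD", "SOUP"},
    "ATE":    {"FOOD", "SOUP", "MEAL"},
    "WASH":   {"SOAP", "CLEAN", "WATER"},
}

def primed_completion(cue_words, fragment_options):
    """Pick the fragment completion most activated by the cues:
    a crude, one-step model of spreading activation."""
    activated = set()
    for cue in cue_words:
        activated |= ASSOCIATIONS.get(cue, set())
    # Prefer a completion that the cues have already activated.
    return max(fragment_options, key=lambda word: word in activated)

# "SO_P" can complete to SOUP or SOAP; food-related cues favor SOUP.
print(primed_completion(["HUNGRY", "ATE"], ["SOAP", "SOUP"]))  # SOUP
print(primed_completion(["WASH"], ["SOAP", "SOUP"]))           # SOAP
```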

**What Does Priming Look Like?**

When priming occurs, it is subconscious, and Kahneman argues that we are unlikely to believe it is occurring because of the way our brain functions (our brain allows us to believe that we are in full control). He mentions several studies in his book, but I will touch on only two to give you a sense of how priming works. In the first, participants were primed with images of money. The group primed with money images became more individualistic – less likely to help others and less likely to ask for help – on the tasks that followed.

The second study showed that actions can also be primed. In it, participants read sentences involving words associated with the elderly, such as FORGETFUL, BALD, GRAY, and WRINKLE. None of the sentences explicitly mentioned the elderly. When the participants were then asked to walk down a hallway, they did so at a much slower pace than normal. The reverse association held as well: participants who were asked to walk slowly for a period of time were more apt to recognize words associated with old age.

**Can We Use Priming in Mathematics Class?**

I wonder if mathematics teachers have been using this idea already. In most classes and assessments, we tend to be explicit with word choice when asking students to perform a task. For example, if I want my students to think in a linear way, I could use an associated word like SLOPE or a similar word like STRAIGHT to help them recall ideas around linear functions. Using certain cues to aid in recall is most likely beneficial, since we know that recall of facts helps with both storage and retrieval strength. I could also see priming allowing students to access previous knowledge, which may be an appropriate action during the set-up of a teaching task.

On the other hand, we do have to be aware that priming may occur without our knowledge at any given time. That is, if we utilize unnecessary pictures or words to aid in a mathematical task, our students may be thinking about what we don’t want them to think about!

In closing, the priming effect is an interesting process to be aware of in our classrooms. However, Kahneman notes that the effect doesn’t work with all individuals, so we do not have to worry about students becoming zombies to priming effects. In addition to this, it seems that the priming effect has been under scrutiny for robustness, including replicability of certain findings. Perhaps we will have to wait to see what color the first coat is before delving deeper into this theory in our classrooms.

Friday – May 12th:

Any of you who have attended both days at OAME and are out tonight, I am not sure how you are doing it. Here I am, nestled in a blanket at 9:02PM with a glass of Moscato, contemplating writing a reflection because my internal battery is at 5%, wondering how long it will take to fully charge if I know that at 9:22 I will be at 19%. Hint: the relationship is surprisingly non-linear (despite a correlation coefficient close to 1)!

Surprise is not a Surprise with Desmos

If you did not get the reference above, then I am not mad… just disappointed that I didn’t see you in Dan’s presentation this morning regarding the functionality of Desmos. I feel like I have grown so much over the past year through using the graphing calculator and the activity builder in my calculus classes, but I am still learning more as the months progress. I have started to realize that with Desmos, I am consistently amazed, but never surprised (anymore). Two awesome features of Desmos we got a peek at today were the geometry beta and Desmos for the visually impaired. This was one of the few times in my life I got to *hear* what a graph sounded like (the other being when I studied Fourier Analysis).

By the way, if you want the animation from the cell phone 3-Act, Dan tweeted it out to us. Play away.

Discussing Discussion

One of my afternoon sessions looked at the five practices for facilitating effective discussion in classrooms. This was a great connection to Deborah Ball’s Domains of Mathematical Knowledge for Teaching. Regarding a lesson on the perimeter of connected hexagons, we observed student work and strategies at two points in time – near the beginning of their pondering, and much later in the process. We had the opportunity to ask questions of the students to elicit how they were thinking about the problem, connecting them and us to Common Content Knowledge (generalized math knowledge not specific to teaching) and Specialized Content Knowledge (knowledge specific to teaching) present in the problem.

One interesting aspect of this presentation that was not present in others was time near the beginning to discuss possible misconceptions and strategies of students as the problem progressed. Thinking about student misconceptions would fall under what Ball calls Knowledge of Content and Students, and thinking about different strategies to tackle the same problem falls under Knowledge of Content and Teaching. At the end of this questioning period, we had the opportunity to decide which three student solutions we would present to the class and in what order. We opted to choose a visual strategy first, followed by a tabular strategy, then finally a solution containing both a table and a picture. Interestingly, no students opted for a graphical strategy, although I argued that perhaps it was unnecessary for getting the information required by this task. Understanding the progression from visual to table to graphical would be an instance of Horizon Content Knowledge, or knowing how one idea/topic connects to another. All in all, a very well-laid-out execution of the pedagogy of classroom discussion!

Alternate Assessments

In the alternate assessment session, we saw non-typical ways to assess students. I thought these assignments were beneficial to have ready to go for when needed (for students on vacation or away sick, for example). I was fond of the interview, mostly because I have used it before with my elementary teachers. Putting the onus on the student to develop and justify their mark is awesome. However, a good interview with prompts does take a lot of time to prepare properly, especially if you want to mark it objectively.

One thing that caught my eye was the use of an alternate test format. Students could choose the alternate test, which was more open in the sense that a typical question involved *elaboration* (explaining all they could on a particular topic), a well-known strategy for learning. Of interest was that the students didn’t necessarily see a decrease in stress/anxiety levels when comparing a typical test to a non-standard test. That is, students writing the more elaborative test reported roughly the same nervousness as students writing a standard test.

Igniting my Heart

Fermi Problem: How many objects can Matthew throw on the stage during his Ignite?

Hilarious.

Jimmy and Jon reminded us that we need to be our own teacher. We are not, and should not be, 40% Dan, 30% Marian, and 30% Cathy with a dash of basil and sauce. We need to be thoughtful about which strategies and philosophies work for us and our students. This was a refreshing thing to hear, especially after the Twitter conversations I had had earlier in the day. Bouncing off of this, Kyle brought up the idea that the debate is not about automaticity, but about how we get students toward automaticity. Often those arguing on Twitter forget that automaticity is definitely an end goal of many teachers attending OAME, and to state otherwise is rude and uncalled for. I was reminded of this while having dinner with the teachers who inspired me last year to develop my interleaved project at Okanagan College. And to me, that’s exactly what OAME is: a place to gather with, and learn from, math educators of varying walks. Not everything that you see will resonate with you, and it does not have to.

As I finalize this post the morning of Saturday the 13th, I look back and realize how lucky I am to work with, and be friends with, such amazing educators. Here is to an amazing 2016-17 school year, and to many, many more together.
