PROCEEDINGS OF THE 15TH ANNUAL CONFERENCE ON RESEARCH IN UNDERGRADUATE MATHEMATICS EDUCATION. Editors: Stacy Brown, Sean Larsen, Karen Marrongelle.
This report focuses on three sections of an introductory mathematical proofs course taught using IBL. The structure of the course required students to present various assigned problems, which the class would then discuss together to encourage further student collaboration.
The 70 students also each completed a portfolio consisting of all assigned problems, some of which were also turned in as homework.
For the purposes of this research, we evaluated ten problems from each student using a coding scheme developed from previous work on mathematical proof. We chose two similar problems from each content area covered in class – one presented and one not. Our coding scheme consisted of two levels for evaluating the selected student work. The first level is adapted from work by Harel and Sowder (1998), and the second level is adapted from Andrew (2009).
The first level of coding categorized student proof attempts as analytical or empirical (Harel & Sowder, 1998); proofs that did not fit either category were coded as other. Coded problems were deemed analytical more than 95% of the time. The second level of coding was applied to these analytical proof attempts. Using codes developed by Andrew (2009) to address the results of Moore (1994), the second level focused on the structure of student proofs and identified errors with implications and steps in the proof. We were also interested in students' use of definitions and assumptions, since Moore (1994) suggested these were significant issues in a lecture-based introductory proofs course. Therefore, the second level of coding also recorded the number of assumptions and definitions used in each proof and tallied those that were incorrect.
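The two-level scheme described above can be sketched as a small data structure. This is an illustrative reconstruction, not the authors' actual instrument; all function and field names here are hypothetical.

```python
from collections import Counter

# Level 1 (Harel & Sowder, 1998): classify each proof attempt.
LEVEL1 = {"analytical", "empirical", "other"}

# Level 2 (Andrew, 2009): structure (S) and understanding (U) codes,
# applied only to attempts classified as analytical.
S_CODES = {"S3", "S4", "S5", "S8"}
U_CODES = {"U4", "U5", "U6", "U7"}

def code_proof(level1, level2_codes, n_defs, n_bad_defs, n_assume, n_bad_assume):
    """Record one coded proof attempt as a plain dict."""
    assert level1 in LEVEL1
    if level1 != "analytical":
        level2_codes = []  # level 2 applies only to analytical attempts
    assert all(c in S_CODES | U_CODES for c in level2_codes)
    return {
        "level1": level1,
        "level2": Counter(level2_codes),
        "definitions": (n_defs, n_bad_defs),
        "assumptions": (n_assume, n_bad_assume),
    }

def pct_incorrect(proofs, field):
    """Percent of incorrect definitions or assumptions across all proofs."""
    total = sum(p[field][0] for p in proofs)
    bad = sum(p[field][1] for p in proofs)
    return 100.0 * bad / total if total else 0.0
```

The tallies produced this way are what the percentage tables below summarize.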
In direct contrast to Moore's (1994) observations, we found that students consistently used definitions and assumptions appropriately. The table below shows the percentages of incorrect definitions and assumptions across all problems coded.
Total Problems    % Incorrect Definitions    % Incorrect Assumptions
473               2.574                      3.525

The low percentages suggest that the students' understanding of definitions and assumptions will be an asset in their future work with mathematical proof. This raises the question of whether the markedly different teaching method made a difference in students' conceptual understanding and use of definitions and assumptions.
The second level of coding also utilized codes developed by Andrew (2009) related to the structure (S) and understanding (U) of proof. The table below describes the codes.
The table below shows the per problem average, as well as averages on presented (P) versus not presented (NP) problems, for structure (S) and understanding (U) codes.
U4 and U6 have the highest averages, implying that students had some difficulty addressing all components necessary to prove a statement. Though S8 was the most common structure code, the low occurrence of S3 codes suggests that students are relatively competent at organizing their thoughts, even if they may struggle to express them in standard notation. Overall, the S category received fewer than half as many recorded codes as the U category, meaning that the bulk of student error lay not with proof structure but with understanding proof techniques.
Each problem presented in class received on average approximately one more U code than those that were not presented. In fact, every code in the Presented category, excluding S8, has a higher average than the same code in the Not Presented category. This may seem unexpected, since one might assume that students would commit fewer errors on problems that were discussed in class. However, given the teaching method used in the class, it is possible that students merely copied down the problems they saw presented without fully understanding what they were writing. Another explanation could be that, because students took part in correcting the presented problems, they understood the common errors and learned how to avoid them when attempting similar problems on their own, resulting in a lower frequency of error. Also, for the presented problems, students received on average almost three times as many U codes as S codes. Although this ratio is lower for the non-presented problems, there are still almost twice as many U codes as S codes. This again suggests that students struggled more with understanding implications within proofs than with language and notation.
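The presented-versus-not-presented comparison above amounts to grouping coded problems and averaging their U and S code counts per group. A minimal sketch, with purely illustrative records and field names (not the study's data):

```python
def avg_codes_by_group(problems):
    """Average number of U codes and S codes per problem, split by whether
    the problem was presented (P) or not presented (NP) in class."""
    totals = {"P": [0, 0, 0], "NP": [0, 0, 0]}  # [n_problems, U_codes, S_codes]
    for p in problems:
        g = totals["P" if p["presented"] else "NP"]
        g[0] += 1
        g[1] += sum(1 for c in p["codes"] if c.startswith("U"))
        g[2] += sum(1 for c in p["codes"] if c.startswith("S"))
    return {k: {"avg_U": u / n, "avg_S": s / n}
            for k, (n, u, s) in totals.items() if n}
```

Applied to the full data set, this yields per-group averages such as the roughly 3:1 (Presented) and 2:1 (Not Presented) U-to-S ratios reported above.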
Using the previously described coding scheme, we extended our research to code the part of the final exam that required proving theorems related to previously unseen definitions.
Students were asked to use new definitions and hence make assumptions related to ideas that they had not previously been exposed to.
The table below shows the percentages of incorrect definitions and assumptions for all coded final exam problems.
Total Problems    % Incorrect Definitions    % Incorrect Assumptions
95                13.3                       2.96

Compared with the coursework, students had an incorrect-definition rate more than four times higher on the final exam. Though this seems like an extreme difference, 86.7% of the definitions used on the final were still used appropriately. Given the circumstances of a final exam (including time constraints and stress), this rate may be considered commendable.
Moreover, students consistently used assumptions correctly, as shown by the slight difference (0.565 percentage points) between the coursework and final-exam rates of incorrect assumptions. It is therefore reasonable to conclude that the class successfully prepared students to use assumptions and definitions properly.
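The comparisons in the last two paragraphs can be checked directly from the two tables; this snippet (a consistency check added here, not part of the paper) reproduces each quoted figure:

```python
# Reported figures from the coursework and final-exam tables above.
course = {"bad_defs_pct": 2.574, "bad_assume_pct": 3.525, "n": 473}
final = {"bad_defs_pct": 13.3, "bad_assume_pct": 2.96, "n": 95}

# "more than four times as many incorrect definitions" on the final:
def_ratio = final["bad_defs_pct"] / course["bad_defs_pct"]  # about 5.17

# share of final-exam definitions used appropriately:
final_ok_defs = 100 - final["bad_defs_pct"]  # 86.7%

# the "slight variation" in assumption-use rates:
assume_gap = course["bad_assume_pct"] - final["bad_assume_pct"]  # 0.565 points
```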
The table below shows the average structure (S) and understanding (U) codes for all final exam problems.
Total Problems    S3      S4      S5      S8      Avg S    U4      U5      U6      U7      Avg U
95                .083    .307    0       .561    .951     .702    .547    .311    .063    1.623

Students had limited issues with language and notation and regularly stated conclusions for their proofs. As with the course statistics, the highest average codes on the final exam problems came from S8 and U4; U5 also had a high average. Thus, once again it appears that students struggled with fully understanding what was needed to prove all aspects of a problem. Overall, there is little difference between the final exam statistics and those of the course problems.
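Reading "Avg S" and "Avg U" as the per-problem totals of the individual code averages, the final-exam figures are internally consistent; this short check (added here, not from the paper) verifies the sums:

```python
# Per-problem averages for each code on the 95 final-exam problems,
# as reported in the final-exam table above.
S = {"S3": 0.083, "S4": 0.307, "S5": 0.0, "S8": 0.561}
U = {"U4": 0.702, "U5": 0.547, "U6": 0.311, "U7": 0.063}

avg_S = round(sum(S.values()), 3)  # 0.951, the reported "Avg S"
avg_U = round(sum(U.values()), 3)  # 1.623, the reported "Avg U"
```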
The high percentage of analytical proofs implies that this course provided students with a foundational understanding of formal proof development. Students used definitions and assumptions correctly over 95% of the time, which suggests that this particular IBL classroom environment gave students a firm grounding in how to use definitions and assumptions correctly. The lower error rate on non-presented problems than on the comparable problems discussed in class further supports the claim that class collaboration prepared students to competently complete proofs on their own. An average of almost two implication errors (U codes) per problem suggests that, at this point in their mathematical careers, the observed students still struggle somewhat with understanding how one step leads to the next. An average of less than one language and notation error (S code) per problem is evidence that this IBL class taught students how to convey their thoughts in an efficient and logical manner.
1. What categorizations of proof are most interesting to investigate in this context?
2. What analysis of the remaining final exam problems would be most beneficial?
3. Which of the U and S codes is most meaningful to focus on?
4. Is there another analysis of definitions and assumptions that would be meaningful in relation to this data?
Andrew, L. (2009). Creating a proof error evaluation tool for use in the grading of student-generated "proofs." PRIMUS, 19(5), 447-462.
Boaler, J. (1998). Open and closed mathematics: Student experiences and understandings. Journal for Research in Mathematics Education, 29(1), 41-62.
Harel, G., & Sowder, L. (1998). Students' proof schemes: Results from exploratory studies. CBMS Issues in Mathematics Education, 7, 234-283.
Moore, R.C. (1994). Making the transition to formal proof. Educational Studies in Mathematics, 27, 249-266.
Rasmussen, C., & Kwon, O.N. (2007). An inquiry-oriented approach to undergraduate mathematics. Journal of Mathematical Behavior, 26, 189-194.
Schinck, A.G. (2011). Why use IBL? The road to present day inquiry-based learning. Web. Retrieved 28 July 2011.
Weber, K., & Mejia-Ramos, J.P. (2009). An alternative framework to evaluate proof productions: A reply to. Journal of Mathematical Behavior, 28, 212-216.
Weber, K. (2010). Mathematics majors' perceptions of conviction, validity, and proof. Mathematical Thinking and Learning, 12(4), 306-336.
Weber, K. (2005). Problem-solving, proving, and learning: The relationship between. Journal of Mathematical Behavior, 24, 351-360.
May Hamdan
Lebanese American University
firstname.lastname@example.org

Abstract: In this paper I present a theoretical analysis (genetic decomposition), in the sense of APOS theory, of the cognitive constructions for the concept of infinite Riemann sums and for the Fundamental Theorem of Calculus as a link between the derivative and the integral, following Piaget's model of epistemology. This genetic decomposition is based primarily on my own mathematical knowledge and on my continual observations of students in the process of studying integration. I also present empirical data in the form of informal interviews with students at different stages of learning; the analysis of those interviews will later suggest a revision of the initial genetic decomposition. Based on this analysis I also suggest instructional procedures that motivate the mental activities described in the proposed genetic decomposition. This study sheds new light on the concept and clarifies the connections between two key concepts in calculus.
Keywords: genetic decomposition, APOS theory, Calculus, integration, interviews, observations, Piaget
Introduction: Motives for the Research
Riemann sums and areas are generally taught in isolation from the antiderivative per se. This study was triggered by my dissatisfaction, as a teacher, with textbooks' general tendency to overlook the role of Riemann sums as a bridge between derivatives and integrals. The reason is that Riemann sums are hard to teach as such: a fine understanding of Riemann sums shows that the area under the curve of a positive function can be interiorized into a continuous function, and applying the Mean Value Theorem to this function yields the Fundamental Theorem of Calculus, hence connecting the definite integral to the indefinite integral. Connecting the two faces of the integral is a lot of work; it is no wonder the two types of integrals (definite and indefinite) are usually presented in isolation, as if one carried a geometric meaning and the other a detached analytical meaning. In this paper I discuss ways of promoting this connection and a genetic decomposition of the Riemann sums and of the process as a whole. Calculus instructors tend to avoid this connection because it involves heavy work, such as the transformation of the definite integral into a function G(x) = ∫_a^x f(t) dt, which requires the cognitive operations of interiorizing the action of evaluating the area under a curve into a process, encapsulating the process into a function, and later differentiating that function and executing other actions on it. In general, this segment of the course is covered just about as lightly as the notorious delta-epsilon definition of a limit.
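The chain of constructions just described corresponds to the standard derivation of the Fundamental Theorem of Calculus, sketched here for concreteness (assuming f continuous on [a, b]):

```latex
% Interiorize the area process into a function of the upper limit:
G(x) = \int_a^x f(t)\,dt .
% For h > 0, the added strip of area is itself an integral, and by the
% Mean Value Theorem for integrals there is some c \in [x, x+h] with
G(x+h) - G(x) = \int_x^{x+h} f(t)\,dt = f(c)\,h .
% Letting h \to 0 forces c \to x, so by continuity of f,
G'(x) = \lim_{h \to 0} \frac{G(x+h) - G(x)}{h} = \lim_{h \to 0} f(c) = f(x),
% i.e. G is an antiderivative of f, linking the definite integral to the
% indefinite integral.
```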
Framework for Research

In my study I adopt as a framework for research an interpretation of constructivism and of Piaget's ideas on reflective abstraction (Dubinsky, 1991). This paradigm has been applied to diverse topics including functions, mathematical induction, calculus, quantification, abstract algebra, and equivalence classes and partitions (Hamdan, 2006), and has led to major curriculum changes.
Preliminary Genetic Decomposition
I assume that any successful instruction of mathematical constructions would take into consideration the cognitive structures, as well as the mechanism (reflective abstraction) on which these constructions are built. The preceding epistemological analysis serves as a guideline for planning
IV. (Challenging) Activities promoting construction of the definition of an integral by reversing that of the derivative:
Students are challenged to look back at both the formal definition of the derivative and that of the definite integral (as an infinite sum) and deduce how the two definitions are, in retrospect, inverses of one another.
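For concreteness, the two formal definitions students are asked to compare are (in standard notation, not the course's own wording):

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},
\qquad
\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} f\!\left(x_i^{*}\right)\Delta x,
\quad \Delta x = \frac{b-a}{n}.
% If F' = f, then each term satisfies
% f(x_i^{*})\,\Delta x \approx F(x_i) - F(x_{i-1}),
% so the Riemann sum telescopes to F(b) - F(a): the limit of difference
% quotients and the limit of sums of products undo one another.
```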
Note: I have already conducted informal interviews with students at various stages of learning these topics, and I am in the process of analyzing them.