PGCILT reflection

Having already had five years of lecturing experience before starting at LSHTM, and as many years running tutorials, my goals in enrolling in PGCILT were to identify gaps in my knowledge of teaching philosophies, to improve my ability to design learning activities, and to get feedback on my own teaching style. Over the course of the PGCILT training I have learned more about the various approaches to teaching, in particular gaining an appreciation of the flaws of each approach and how later pedagogical frameworks aim to address them.

One of the most valuable exercises in PGCILT was the module creation. It pushed me to think about content delivery that is remote and asynchronous. With the COVID-19 pandemic leading to a shift to online content delivery in tertiary education (Sandars et al. 2020), the traditional student-classroom-teacher setup cannot be relied on for education at LSHTM. The use of learning networks shifts here from establishing a sense of community within the student cohort (Jorgensen 2003) to being the only form of contact between students. In a teaching environment that is distributed over both time and space, discussion forums are a valuable way of seeking clarification from peers rather than relying on email contact with the teaching staff. This has the additional effect of making students more responsible for their learning rather than relying solely on the expertise of the teacher to assist them. Taking this further by introducing flipped classroom methods, with readings and/or tasks to be completed prior to an occasional synchronous, yet remote, learning activity, such as a moderated review of formative assessment, helps foster a sense of ownership of one’s own learning while providing external deadlines for engagement (Sandars et al. 2020).

The use of formative peer assessment here via code review (ROpenSci et al. 2020), moderated by teaching staff, is not something that I have yet implemented in my teaching, but I plan on doing so in the MSc in Health Data Science “Data Challenge” module in order to reinforce good coding practice as one of the learning outcomes around teamwork. Code review thus satisfies many of the criteria of effective assessment, as it provides feedback intended to assist, rather than measure, students’ learning (Brown and Race 2013). In this module, students will be tasked with undertaking a real-world analysis for a partner organisation as part of a work-integrated learning approach to project work (Cooper, Orrell, and Bowden 2010). The problem posed by the partner will have an intended goal but neither a correct solution nor an obvious method; these are key to the creation of an effective constructivist approach to learning design (Kay and Kibble 2016). Students are therefore required, for their major piece of assessment for the module, to scope the project in discussion with the partner, identify appropriate data sources and data science methods to solve the problem, collaboratively solve the problem, and write up a report and provide their code.

As students progress through the module they will learn the teamwork skills (both interpersonal and technical) required for effective project management. Code review will be conducted as formative assessment during the semester. The aim of code review is not merely to have peers determine whether or not the task is done “correctly”, as an exercise in peer marking, but whether the proposed solution adequately addresses the problem and does so in a way which is understandable. In this sense, the peer assessment is itself a constructivist learning activity, as students must engage critically with the written solution in reference to their own attempt at solving the problem. Summative assessment in the module comprises the major project report and a presentation to the partner and teaching team, two forms of authentic assessment which demonstrate mastery of the skills matching the learning outcomes and are relevant to professional practice (Race 2014). A self- and peer-assessment will be used to allow students to reflect on their own contribution to the multiple tasks and team performance required to complete the entire project, and to give team members an opportunity to alert the teaching team to team members who did not contribute meaningfully to the completion of the report. Such an assessment piece, paired with an attitudes survey, has been used successfully in a first year undergraduate mathematics unit, both for reflection on the learning process and to refine the design of the major project to promote effective teamwork (Czaplinski et al. 2016).
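To make the intended style of feedback concrete, the sketch below shows the kind of change a peer code review might suggest; the data, names and numbers are invented purely for illustration and are not drawn from the module.

```r
# Hypothetical illustration (invented names and data) of a change a peer
# code review might suggest: replacing copy-pasted logic with a documented,
# reusable function.
cases <- data.frame(year = c(2018, 2018, 2019), count = c(10, 12, 15))
populations <- c("2018" = 50000, "2019" = 51000)

# Before review: duplicated logic, easy to mistype a year
rate_2018 <- sum(cases$count[cases$year == 2018]) / populations[["2018"]]
rate_2019 <- sum(cases$count[cases$year == 2019]) / populations[["2019"]]

# After review: a single function whose intent is clear and testable
case_rate <- function(cases, populations, year) {
  sum(cases$count[cases$year == year]) / populations[[as.character(year)]]
}
case_rate(cases, populations, 2019)
```

A review framed this way asks not “is the answer right?” but “is the code understandable and maintainable?”, which is precisely the shift from peer marking to peer feedback described above.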

The observation activities, both with my partner, Assistant Prof. Thomas Cowling, and with Assistant Prof. Steven Cranfield, were a good chance to review my own teaching skills. I’ve long aimed for a less hierarchical teaching style, using skills developed in improv comedy to build a rapport with, and take some pressure off, students in my attempts to engage them in the delivery of material (McCarron and Savin‐Baden 2008). The two class environments were different in terms of the cohort being taught and the amount of control I had over the material. The first observation, with Thomas Cowling, was in the Introduction to Spatial Analysis in R short course (ISAIR, Sep 2019), which I had helped develop. The second, with Steven Cranfield, was in Statistics for Epidemiology and Population Health (STEPH, Dec 2019), an MSc module over which I have no control.

In the ISAIR course, my sessions focused on spatial statistical concepts, extending attendees’ understanding of Generalised Linear Models (GLMs) to Generalised Linear Mixed Models and Generalised Additive Models. After two days of introduction to spatial data types and the computational tools required to process and summarise spatial data, a refresher of linear models and GLMs was given. The aim here was to remind attendees of key concepts and terminology which they may not have engaged with formally in some years, before introducing structured and unstructured random effects, Gaussian Markov Random Fields, and other spatial statistical concepts. Due to the nature of the material, much of the early exposure to a new topic was through the lens of a social cognitive approach. The teaching in this short course was a mix of lecturing from slides and guided tutorial work. Students were introduced to a topic conceptually, shown its computational implementation, and then asked to complete the implementation from a partial solution, with an answer guide provided by the teaching team. This approach, rather than a more constructivist one, was considered necessary given the introductory nature of the course, the high level of the content, and attendees’ presumed lack of background exposure to spatial analysis concepts.
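As an indication of how that progression from GLM to GAM might look in code, here is a minimal sketch with simulated data (not the actual course material) assuming the mgcv package:

```r
# Minimal sketch only; not the ISAIR course code. Shows the progression
# from a GLM to a GAM with a smooth term over coordinates, using mgcv.
library(mgcv)

set.seed(42)
n <- 500
dat <- data.frame(x = runif(n), y = runif(n))
# Simulated counts with a smooth, nonlinear spatial trend in the log-mean
dat$counts <- rpois(n, lambda = exp(1 + sin(2 * pi * dat$x) + dat$y^2))

# GLM: log-linear in the coordinates, no spatial smoothing
fit_glm <- glm(counts ~ x + y, family = poisson, data = dat)

# GAM: replace the linear terms with a two-dimensional smooth
fit_gam <- gam(counts ~ s(x, y), family = poisson, data = dat)

AIC(fit_glm, fit_gam)  # the smooth term should capture the nonlinearity
```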

The nature of short course education, both inside and outside LSHTM, is that attendees have a wide variety of academic backgrounds, life experiences, professional histories and capabilities. Short course attendees at LSHTM may be new PhD students, mid-career researchers looking to improve their skills for an upcoming project, or established researchers or practitioners in industry looking to update decades-old knowledge. It became clear on the afternoon of day 3 of ISAIR, when introducing certain spatial statistical concepts, that some students did not have the recommended background knowledge of GLMs. Subsequent discussion with Module Organisers led us to conclude that we need to be very explicit about the statistical nature of the course. This was reflected in course feedback, and we intend to provide additional prior information to prospective participants to allow them to judge whether or not they have the appropriate level of background knowledge to access the concepts in the course.

The above experience prompted more explicit communication with applicants for the Feb 2020 Modern Techniques in Modelling of Infectious Disease Dynamics short course (MTMIDD, for which I was a Module Organiser), ensuring that they had the relevant background in either infectious disease epidemiology or proficiency in R (preferably both) to be able to learn the material delivered. All material developed for this module was subject to internal peer review by another member of the teaching team to ensure that it was at an appropriate level, that the code examples were easy to follow, and that the practical exercises reinforced the material in the slides. This “beta testing”, organised by Module Coordinator Assistant Professor Rosalind Eggo, was a valuable part of the initial delivery of the short course. It enabled me to identify activities where the leap from example to activity was perhaps a little large, and where I was attempting to include too many new ideas in a single session. Material in both ISAIR and MTMIDD should also be revised in the lead-up to redelivery in 2020/2021 to identify the threshold concepts (Cousin 2006; Meyer and Land 2006) that are pivotal in learners’ progression through the multiple days of the courses. Restructuring the learning activities around these threshold concepts can help reduce the cognitive load spent on learning which extends participants’ knowledge and skills but is not strictly necessary for engaging in later sessions. Such material can instead be given as extension exercises, making use of the ideas of Differentiated Instruction to provide opportunities for students who have a firm grasp of the material to further reinforce or extend their own learning either inside or outside the classroom (Tomlinson 2005; Subban 2006; Konstantinou-Katzi et al. 2013).

For future offerings of ISAIR and MTMIDD, I am considering the inclusion of a pre-enrolment diagnostic test (Wilson and MacGillivray 2007) to ensure that participants have an appropriate understanding of the assumed knowledge before they attend (or even enrol). The aim here is not to gatekeep the course but to make clear what students are expected to know and what they will be exposed to, and to provide additional resources that may help them prepare for the module (done in an ad hoc manner prior to MTMIDD) as well as support during the module.

Some of the material presented in MTMIDD has been adapted for a guest lecture in the Introduction to Statistical Computation MSc module (Stat. Comp.) and for the LSHTM R User Group meetings. In each of these instances, I needed to pay particular attention to the fact that the participants in each setting differed in terms of their interests, experience and capabilities. I received very positive feedback from students and the Module Coordinator (Prof. Antonio Gasparrini) about my teaching style and about making the content accessible to non-experts. The practical activity in the Stat. Comp. module required students to critically appraise a provided graph (with the code used to generate it) of some data on life expectancy and gross domestic product across a number of years in various countries (Bryan and MacDonald 2015). Students were given a lecture session on the principles of graphical excellence (Tufte 1983) and then invited to discuss, in small groups, what was good and bad about the graph. I spoke to each group individually during the activity to ensure that they knew they would be contributing their ideas in a group discussion with the entire class. After hearing the class’s opinions about what makes a graph good or bad, each group of students was tasked with creating their best version of the graph (improving the bad things) and the worst version of the graph (breaking the good things, while only using the data provided), and then presenting and discussing their modified graphs.
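For a sense of what this activity involves, the sketch below uses the gapminder package to build the kind of cluttered baseline plot students might critique, and one possible “improved” version; the exact graph and code provided in class may well have differed.

```r
# Illustrative sketch only; the graph actually provided in class may have
# differed. Uses the gapminder data (Bryan and MacDonald 2015) and ggplot2.
library(gapminder)
library(ggplot2)

# A cluttered baseline plot of the kind students might critique:
# every year overplotted, raw GDP scale, default axis labels
ggplot(gapminder, aes(x = gdpPercap, y = lifeExp, colour = continent)) +
  geom_point(alpha = 0.4)

# One possible "best version": log scale, a few years faceted,
# informative labels
ggplot(subset(gapminder, year %in% c(1952, 1982, 2007)),
       aes(x = gdpPercap, y = lifeExp, colour = continent)) +
  geom_point(alpha = 0.6) +
  scale_x_log10() +
  facet_wrap(~ year) +
  labs(x = "GDP per capita (US$, log scale)",
       y = "Life expectancy (years)",
       colour = "Continent")
```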

This social constructivist activity encourages students to draw on their experience of scientific and news media data visualisations, exposes them to ideas that others had which they may not have considered themselves, and requires that they investigate the use of the software to determine how to achieve their aims, rather than being given an exhaustive list in the lecture of all the options available to them. In this way, we are making use of the theory of the Zone of Proximal Development (Vygotsky 1978) rather than strictly behaviourist “do what I just did” approaches, requiring students to extend what they have just learned while completing a task. Students reported enjoying the activity, though they found it difficult, as it provided them with a sense of play (breaking the graph) and pushed them to find their own solution to implement their collective plan.

The Stat. Comp. module has gradually been shifting away from the proprietary Stata software (also used in STEPH and a number of other modules at LSHTM) towards R, which is free to use, has an active online community, and is supported by ventures such as RStudio Education (https://education.rstudio.com/). As a scripting language with over 16,000 additional packages (as of 26 August 2020), R offers many ways to implement an idea, and many ideas contained in the MSc and short course modules at LSHTM are easily implementable in it, giving users the flexibility to pursue their own analysis workflows rather than being restricted to a set of point-and-click menus or pre-built routines. Integration of RStudio with tools such as git (for collaborative coding with distributed version control) provides students with a powerful, integrated suite for conducting analysis that fits easily into workflows of best practice (Wilson et al. 2017). The use of R, specifically RStudio, was recommended in our earlier module design assessment on the grounds of equity, minimal hardware requirements and extensibility. I (with the other Module Organisers) have written the ISAIR and MTMIDD short courses to use R and RStudio, and I support its adoption at LSHTM.
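As a small illustration of that RStudio-git integration, here is a minimal sketch assuming the usethis package; the project name is invented.

```r
# Minimal sketch, assuming the usethis package; "analysis-example" is an
# invented project name. Sets up a version-controlled analysis project
# from within R/RStudio.
library(usethis)

create_project("~/analysis-example")  # new RStudio project scaffold
use_git()                             # initialise a local git repository
use_readme_md()                       # README to document the workflow
```

From there, RStudio’s Git pane handles staging, committing and pushing, so students can adopt version control without leaving the environment in which they analyse data.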

The comments from the observations of both the ISAIR and STEPH sessions, and experience in other teaching settings at LSHTM, have reinforced, to me, the importance of: introducing the point of the session; keeping a close eye on students who may need more attention during the session; and closing the session in a way that prompts learners to discuss their interpretation of the solutions and learned material. While my aim in class is to provide an environment in which learners are respectful of each other and can “just ask me for help”, cultural and individual differences in the class may mean that some students are more reluctant to ask for help but will willingly accept it if offered. This was highlighted in the feedback on the STEPH session, where I was observed to assist every student who raised their hand and asked for help, but may have missed providing assistance to a group towards the front of the room who may have needed more help but were not vocal about it. Feedback from Steven Cranfield introduced me to Heron’s Six Categories of Intervention (Heron 1976) and how my teaching varies between the authoritative and facilitative approaches depending on the stage of the lesson and the needs of the students in completing the activity to a point where they gain an understanding. I encouraged students to work in small groups and confer with each other, rather than working individually and waiting for me to go through the answer, in order to encourage them to talk their solutions through. Peer tutoring, specifically the explanation of a newly learned concept by one student to another, can reinforce one’s own learning (Evans, Flower, and Holton 2001) and ensures that students are able to seek feedback on their solution to a problem without waiting for the availability of teaching staff. Further, it can help foster confidence with explaining new concepts, facilitating contribution to the end-of-class discussion.

Students’ perception that mathematics and statistics problems have a single correct answer is at odds with mathematical and statistical modelling as practised, i.e. problem solving that requires creativity and a critical eye to develop a defensible model from well-stated assumptions (Boaler 2016). As mathematics and statistics underpin, in some fashion, any serious engagement with a STEM topic, there is scope for including constructivist approaches to model development and data analysis in many MSc modules taught at LSHTM. Where a particular statistical method is prescribed, as is often the case in STEPH, activities should be written to focus less on the generation of the correct p value for the hypothesis test, or on the mechanics of deriving the test statistic, and more on engaging with the whole cycle of data analysis (Hardin et al. 2015) and the doing of statistics (Nolan and Speed 1999).

Race’s discussion of learning outcomes (Race n.d.) argues that successful learning is linked to five factors: wanting to learn; taking ownership of the need to learn; doing; feedback on evidence of achievement; and making sense of what is being learned. Undertaking the PGCILT training has thus reinforced my belief that these factors can be achieved with the use of problem-based learning and flipped classroom/blended learning, that technology-enhanced learning is best achieved with free software with an active online community around it, and that student ownership of learning is best achieved by designing learning around cognitive learning theory and constructivist pedagogy, with different learning contexts requiring different pedagogies. All students encountered in the tertiary sector are adults who have voluntarily enrolled in a course. They have chosen to be there, and we, as educators, must respect that they have a variety of life pressures competing for attention. While they are ultimately responsible for their own learning, it is not reasonable for us to create and deliver material in a way which suits only our own needs and transfers the entire burden to the student. We should, therefore, provide a curriculum which is flexible in delivery (making use of technology-enhanced learning, blended learning and asynchronous activities (Gordon 2014; Czaplinski et al. 2016)), which contains learning activities that extend students’ experience of a topic from previous study (typically secondary education or a previous undergraduate degree) by giving them experiences rooted in the exploration, implementation and diagnosis of ideas, and which offers support during and outside class time so that students have the opportunity to achieve a level of performance that meets the learning objectives (Biggs 1999).

References

Biggs, John. 1999. “What the Student Does: Teaching for Enhanced Learning.” Higher Education Research & Development 18 (1):57–75. https://doi.org/10.1080/0729436990180105.

Boaler, Jo. 2016. Mathematical Mindsets: Unleashing Students’ Potential Through Creative Math, Inspiring Messages, and Innovative Teaching. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints.

Brown, Sally, and Phil Race. 2013. “Using Effective Assessment to Promote Learning.” In University Teaching in Focus: A Learning-Centred Approach, 74–91.

Bryan, Jennifer (Jenny), and Andrew MacDonald. 2015. “Gapminder: Turned on Integration with Zenodo.” Zenodo. https://doi.org/10.5281/ZENODO.594018.

Cooper, Lesley, Janice Orrell, and Margaret Bowden. 2010. Work Integrated Learning: A Guide to Effective Practice. http://site.ebrary.com/lib/alltitles/docDetail.action?docID=10382481.

Cousin, Glynis. 2006. “An Introduction to Threshold Concepts.” Planet 17 (1):4–5. https://doi.org/10.11120/plan.2006.00170004.

Czaplinski, Iwona, Sam Clifford, Ruth Luscombe, and Brett Fyfield. 2016. “A Blended Learning Model for First Year Science Student Engagement with Mathematics and Statistics.” https://eprints.qut.edu.au/96972/.

Evans, Warwick, Jean Flower, and Derek Holton. 2001. “Peer Tutoring in First-Year Undergraduate Mathematics.” International Journal of Mathematical Education in Science and Technology 32 (2):161–73. https://doi.org/10.1080/002073901300037609.

Gordon, Neil. 2014. “Flexible Pedagogies: Technology-Enhanced Learning.” The Higher Education Academy, 1–24.

Hardin, J., R. Hoerl, Nicholas J. Horton, D. Nolan, B. Baumer, O. Hall-Holt, P. Murrell, et al. 2015. “Data Science in Statistics Curricula: Preparing Students to ‘Think with Data’.” The American Statistician 69 (4):343–53. https://doi.org/10.1080/00031305.2015.1077729.

Heron, John. 1976. “A Six-Category Intervention Analysis.” British Journal of Guidance & Counselling 4 (2):143–55. https://doi.org/10.1080/03069887608256308.

Jorgensen, Daphne. 2003. “The Challenges and Benefits of Asynchronous Learning Networks.” The Reference Librarian 37 (77):3–16. https://doi.org/10.1300/J120v37n77_02.

Kay, Denise, and Jonathan Kibble. 2016. “Learning Theories 101: Application to Everyday Teaching and Scholarship.” Advances in Physiology Education 40 (1):17–25. https://doi.org/10.1152/advan.00132.2015.

Konstantinou-Katzi, Panagiota, Eleni Tsolaki, Maria Meletiou-Mavrotheris, and Mary Koutselini. 2013. “Differentiation of Teaching and Learning Mathematics: An Action Research Study in Tertiary Education.” International Journal of Mathematical Education in Science and Technology 44 (3):332–49. https://doi.org/10.1080/0020739X.2012.714491.

McCarron, Kevin, and Maggi Savin‐Baden. 2008. “Compering and Comparing: Stand‐up Comedy and Pedagogy.” Innovations in Education and Teaching International 45 (4):355–63. https://doi.org/10.1080/14703290802377158.

Meyer, Jan, and Ray Land. 2006. Overcoming Barriers to Student Understanding: Threshold Concepts and Troublesome Knowledge. Routledge.

Nolan, D., and T. P. Speed. 1999. “Teaching Statistics Theory Through Applications.” The American Statistician 53 (4):370–75. https://doi.org/10.1080/00031305.1999.10474492.

Race, Phil. 2014. Making Learning Happen: A Guide for Post-Compulsory Education. Sage.

———. n.d. “Making Learning Outcomes Work.” Assessment, Learning and Teaching in Higher Education. Accessed August 27, 2020. https://phil-race.co.uk/download/5626/.

ROpenSci, Brooke Anderson, Scott Chamberlain, Anna Krystalli, Lincoln Mullen, Karthik Ram, Noam Ross, Maëlle Salmon, and Melina Vidoni. 2020. “Ropensci/Dev_guide: Fourth Release.” Zenodo. https://doi.org/10.5281/ZENODO.3749013.

Sandars, John, Raquel Correia, Mary Dankbaar, Peter de Jong, Poh Sun Goh, Inga Hege, Ken Masters, et al. 2020. “Twelve Tips for Rapidly Migrating to Online Learning During the COVID-19 Pandemic.” MedEdPublish 9 (1). https://doi.org/10.15694/mep.2020.000082.1.

Subban, Pearl. 2006. “Differentiated Instruction: A Research Basis.” International Education Journal 7 (7):935–47.

Tomlinson, Carol Ann. 2005. “Grading and Differentiation: Paradox or Good Practice?” Theory into Practice 44 (3):262–69. https://doi.org/10.1207/s15430421tip4403_11.

Tufte, Edward R. 1983. The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Vygotsky, Lev S. 1978. Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Wilson, Greg, Jennifer Bryan, Karen Cranston, Justin Kitzes, Lex Nederbragt, and Tracy K. Teal. 2017. “Good Enough Practices in Scientific Computing.” PLOS Computational Biology 13 (6):e1005510. https://doi.org/10.1371/journal.pcbi.1005510.

Wilson, T. M., and H. L. MacGillivray. 2007. “Counting on the Basics: Mathematical Skills Among Tertiary Entrants.” International Journal of Mathematical Education in Science and Technology 38 (1):19–41. https://doi.org/10.1080/00207390600819029.
