At work, my team and I are often engaged in the task of evaluating educational technologies – for their technical functionality, information security, digital accessibility, and (the focus of this post) pedagogical value. The results of these evaluations inform the advice and support we give to colleagues as well as various decisions we make internally, and as such, they play a pivotal role in our work. Moreover, the implications of these evaluations are even greater with widely adopted technologies. So you can imagine that evaluating a technology such as the institution’s Learning Management System (LMS) would be pretty important. Indeed, it is. And I say that based on personal experience: Our current LMS evaluation project has been a major focus of my attention lately.
For instance, today, I was formulating an argument for how a given LMS could be used to demonstrably enhance teaching and learning. I started by listing some ways one could use the LMS to implement strategies that research has shown promote greater learning gains. Below are a few examples, labeled by the instructional strategy.
- Use quizzing to promote learning. Here, the LMS could be used to administer frequent, low-stakes tests such that (a) students get regular practice and feedback on key skills and (b) the teacher gets automatic scoring and analysis of student responses.
- Clearly communicate expectations for student work. For this strategy, the LMS could be used to link rubrics directly to the corresponding assignments, so that students receive descriptions of the criteria by which their work will be evaluated at the same time as they receive each assignment. In addition, the teacher can use the online rubric to grade student work efficiently, provide feedback in terms of the rubric criteria, and then obtain a quick summary of student performance on each criterion.
- Prompt students to ask and answer “deep” questions that probe the relationships between key concepts. In this case, the LMS could be used to create an online discussion forum in which students contribute their reflections on the material, in response to targeted prompts and each other’s comments.
These use cases are quite encouraging illustrations of how the LMS could add value to teaching and learning. But, two points are worth noting here. First, each of these use cases illustrates a strategy that could also be implemented without an LMS or even without much technology at all (e.g., administer in-class quizzes on paper, include the relevant rubric within each assignment’s description, moderate in-class discussions or assign reflection exercises as homework). The main benefit that the LMS confers here is increased efficiency – through automation and/or coordination of particular steps in each strategy.
Second, even within these “best-case scenarios,” the use of technology does not guarantee students’ learning outcomes will improve. For example, putting a quiz online does not magically motivate students to take it seriously. Similarly, posting a rubric or discussion question online will not necessarily lead students to engage in higher-level thinking (especially if the rubric is oriented toward “bean counting” low-level components or if the discussion question is aimed at regurgitating facts). In other words, enacting these instructional strategies via technology does not obviate the need for solid instructional design.
Taken together, these two points strongly argue that it’s not the LMS per se – or any edtech tool, for that matter – that is really driving the gains we might see from technology-enhanced learning: The tech is neither necessary (point 1) nor sufficient (point 2). Herbert A. Simon said it well: “Learning results from what the student does and thinks and only from what the student does and thinks. The teacher [Ed: or the technology] can advance learning only by influencing what the student does and thinks.”
That said, it’s still worth articulating criteria we can use to evaluate whether an educational technology can be valuable for teaching and learning. Here are a few evaluation criteria I’ve been using lately.
- The technology increases the efficiency of activities that are known to be productive for learning. In other words, you can document use cases where the technology makes an evidence-based instructional strategy more time-efficient and/or scalable. In our LMS example, administering quizzes to promote learning becomes more time-efficient thanks to a reduction in paper shuffling and the possibility of automatic scoring and feedback. Similarly, getting students to ask and answer “deep” questions becomes scalable because the LMS creates a forum in which all students can (or must) respond, regardless of class size. When instructional activities such as these are made more time- and cost-effective, they become more feasible and hence more likely to be implemented. The technology thus enhances learning by fostering more effective instructional strategies.
- The technology makes it harder to employ teaching and learning strategies that are known to (or likely to) impede learning. This criterion is essentially the double-negative of the first one, but evaluating a technology against it is a distinct exercise and can lead to different results. For example, you could have an educational technology that has multiple positive affordances à la criterion 1 but also has multiple shortcomings à la criterion 2. We shouldn’t let a technology’s potential for good blind us to its possible negative consequences… or vice versa. The tricky part then comes in judging the relative weight of the two and estimating the technology’s net utility or overall expected value. Some natural considerations here are: the likelihood that each strategy would be implemented as expected (which could depend on the tool’s interface, the users’ characteristics, and more), the magnitude of the strategy’s positive/negative impact, and the uncertainty in all of these predictions.
- The technology has a reasonable learning curve. If an edtech tool is too difficult to learn and/or use, it will lose its power to support learning. Consider an educational technology where the interface is so poor that students must devote much of their cognitive capacity to getting it to work right. They will have little capacity left to learn the target material. Similarly, if the technology is too difficult for teachers to use, it will not end up with a high enough utility relative to criteria 1 and 2. All that said, it’s worth noting that I phrased this criterion carefully – in terms of “a reasonable learning curve” – because we should not rule out a tool just because it takes some time to learn to use… as long as there is a noticeable return on the investment in learning how to use it. As in so many issues related to teaching and learning, this is where context of use is strikingly important: The exact threshold for how much of a learning curve to tolerate should depend on how intensively and frequently the tool will be used.
- The technology collects and analyzes learning data and then produces *meaningful results that teachers can use to adjust their instruction appropriately for their students’ needs*. Note that the italicized part is critical here: A technology will not achieve a positive evaluation on this criterion merely by collecting, analyzing, and/or displaying data. It must also collect data that are valid and reliable indicators of learning, display results in a way that teachers can reasonably interpret and translate into action, and much more… (Come to think of it, that should be the subject of another post!) When a technology does achieve all that, it can benefit teaching and learning at two different time scales: (1) via a tight feedback loop in which teachers are able to review “actionable” results before the next class meeting and then tailor their lesson plan accordingly (e.g., reducing time spent on what the data show students already know in order to provide extra instruction where students are struggling) and/or (2) via a longer-term feedback loop whereby teachers (or instructional designers) use the results to guide their refinements to the instructional materials for the next cohort of students (e.g., adding/substituting practice exercises or explanations that target areas where the data showed students were having difficulty). In this way, the technology facilitates teaching effectively based on data rather than just by intuition.
- The technology invites reflection and innovation. I don’t want or expect any educational technology to substitute for a teacher. Rather, an educational technology is at its best when it complements and/or empowers the teacher to do something that wasn’t possible before. This criterion is a way of capturing the degree to which the technology still keeps the teacher’s (or the student’s, for that matter) head in the game – a desirable feature for learning at the “system level”. I would argue that a technology is more likely to invite reflection and innovation when it is flexible or adjustable, has a smooth and engaging interface, and leaves key decisions in the hands of its user. Even though this criterion is the least well defined of the five, I believe it is no less important because it captures the technology’s ultimate impact on teaching and learning via its capacity to unleash our own perspectives on what is and what can be.
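To make the weighing described in the second criterion a bit more concrete, one could sketch the net-utility estimate as a probability-weighted sum over a tool’s positive affordances and negative shortcomings. This is only a minimal illustration – the probabilities and impact scores below are hypothetical placeholders, not measured values:

```python
# Hypothetical sketch: estimate a tool's net expected utility by weighting
# each affordance (positive impact) and shortcoming (negative impact) by the
# likelihood that it actually plays out in practice. All numbers are
# illustrative, not empirical.

def net_utility(effects):
    """effects: list of (probability, impact) pairs; impact may be negative.
    Returns the probability-weighted sum of impacts."""
    return sum(p * impact for p, impact in effects)

# Illustrative effects for a hypothetical LMS:
lms_effects = [
    (0.8, +2.0),   # frequent low-stakes quizzing, likely used as intended
    (0.5, +1.0),   # rubric-linked assignments, adopted by about half of teachers
    (0.6, -1.5),   # clunky discussion interface discourages deep exchanges
]

print(round(net_utility(lms_effects), 2))  # 1.2
```

A positive total suggests the affordances outweigh the shortcomings under these assumptions; the interesting work, of course, lies in justifying the probabilities and impacts, and in testing how sensitive the conclusion is to the uncertainty in each.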
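The “tight feedback loop” in the fourth criterion can likewise be sketched in a few lines: aggregate quiz responses by topic and flag any topic whose proportion of correct answers falls below a mastery threshold, so the teacher can reallocate class time before the next meeting. The data, topic names, and threshold here are all illustrative assumptions:

```python
# Hypothetical sketch of the tight feedback loop: aggregate quiz responses by
# topic and flag topics below a mastery threshold. Data and threshold are
# illustrative only.

from collections import defaultdict

def topics_needing_review(responses, threshold=0.7):
    """responses: list of (topic, correct) pairs, where correct is a bool.
    Returns a sorted list of topics whose proportion correct < threshold."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [num_correct, num_attempts]
    for topic, correct in responses:
        totals[topic][0] += int(correct)
        totals[topic][1] += 1
    return sorted(t for t, (c, n) in totals.items() if c / n < threshold)

# Illustrative quiz results for two topics:
quiz = [("loops", True), ("loops", True), ("loops", True), ("loops", False),
        ("recursion", False), ("recursion", False), ("recursion", True)]

print(topics_needing_review(quiz))  # ['recursion']
```

Here “loops” scores 3/4 (above the 0.7 threshold) while “recursion” scores 1/3 (below it), so only recursion is flagged for extra instruction – exactly the kind of “actionable” result the criterion calls for, provided the underlying quiz items are valid indicators of learning in the first place.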
In education as in other fields, technology has the power to help us do more and better, but it also has a tendency to be oversold. By recognizing that educational technology is not a panacea and by carefully assessing individual tools that come down the pike, we not only become better informed and more effective users of these technologies, we can shape their future.