How do you design, select, construct, implement, and evaluate an assessment task or activity?

This page walks through the actual design and implementation of assessment tasks and activities, covering the five key steps involved. In essence, you are attempting to address the question: “How do you know if your students have learnt what you have taught?”

Introduction

How does an assessment task ultimately come together? How does one move from the general principles underlying assessment tasks to the actual formulation of the assessments? This section aims to provide a step-by-step guide to developing assessments that provide useful evidence of students’ learning and teaching effectiveness.

Designing Assessments

The first step in developing an assessment activity is identifying the purpose of your assessment. Knowing clearly what the purposes of your assessment are will determine the kind of assessment to deploy, which may be either formative (low-stakes, practice opportunities) or summative (high-stakes judgements of what the student has learnt). Both types of assessment allow teachers to reflect on student performance as well as teacher performance.

Student Performance

Formative assessment helps students develop their knowledge and skills in a low-stakes context, and identify areas of improvement.

Summative assessment produces a high-stakes judgement on students’ knowledge and skills at the conclusion of a process.

Teacher Performance

Formative assessment uses low-stakes activities that influence the teaching direction based on student performance.

Summative assessment involves high-stakes outcomes to indicate if the learning outcomes of the course have been met.

Selecting Assessment Methods

Once you have determined the purpose of your assessment, the next step is to choose an appropriate assessment format. An assessment serves as a measure of learning; knowing what you would like to measure will allow you to select the format that will measure it best.

Before you select the type of assessment, be very clear about which learning outcomes you are measuring. Begin by specifying clearly and exactly the kind of knowledge (declarative, procedural, conditional) and content (general/discipline-specific) for which you wish to see evidence. Write this in the form of intended learning outcomes.

How do I craft effective student learning outcomes?

Here are some tips on writing good learning outcomes for your assessment:

  • Write with the student (who will be reading these outcomes) in mind, using language that is clear and easy to understand;
  • be precise in your language, using clear action words, such as ‘evaluate’ or ‘analyse’ that can be evidenced, as opposed to verbs like ‘understand’ or ‘appreciate’; and
  • focus your assessment on a few key areas to avoid overwhelming the student.
  • For more information, check out the University of Wisconsin-Madison’s guide on “Writing Student Learning Outcomes”.

Related to learning outcomes is the operational or practical question: what would you like to see your students demonstrate as evidence of their learning? Here are some considerations to help you decide how to check for understanding:

  • Am I assessing performance through direct or indirect methods? Direct methods assess observable aspects of a student’s performance, while indirect methods measure an opinion, belief or attitude. Using a rubric to assess students’ presentation skills would be an example of a direct method; surveying students on their confidence in presenting would be an indirect one.
  • Am I looking for an objective or subjective assessment format? Objective formats, such as MCQs and fill-in-the-blank questions, are usually easier to mark but require skill and experience to set higher-order questions; subjective formats (essay questions, etc.) are much more difficult to mark but allow for more open-ended questions.
  • Am I looking for learning progression (change or learning gain over time) or performance at a particular point in time (end-of-unit mastery)? Measuring learning over time, at two or more time points, requires a different approach from measuring one instance in the learning process. In the former, you would design a pre-test at the start of learning and one or more post-tests at the end. This allows you to compare your students’ ‘baseline’ knowledge/skills prior to learning with what they can do after going through the instructional process. In the latter, the assessment takes the form of an end-of-topic test or a final examination.
  • What level of thinking skills does this assessment format measure? Your assessment may test varying levels of cognitive complexity — from lower-order thinking skills like memory recall and understanding, to higher-order ones, such as application, analysis, and evaluation of knowledge. (You may refer to the Revised Bloom’s Taxonomy for more information.)
  • What kind of software/tools does this assessment format require? Consider also the grading effort your format demands: objective measures can often be marked with little human attention, while subjective measures cannot.
  • Might this assessment format require accommodations for some students? Based on the needs of your students, you may need to make changes to how learning is measured.
  • What are the requirements or ‘common assessment practices’ of my department? While we would like to design assessments to meet our own students’ needs, there are wider implications for how assessments are set, marked, graded and reported. As a rule of thumb, it is always good to find out from your department/faculty/school what the current assessment practices are, i.e., the ‘dos’ and ‘don’ts’.

Paying attention to these questions can help you determine which method best fits the purpose of the assessment you are setting.
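The pre-test/post-test comparison mentioned above can be quantified with a simple learning-gain calculation. Below is a minimal sketch in Python, assuming scores are recorded as percentages and using the commonly cited normalized gain, g = (post − pre) / (100 − pre); the score lists are purely hypothetical:

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Normalized learning gain: the fraction of the possible improvement
    actually achieved between the pre-test and the post-test."""
    if pre >= max_score:
        return 0.0  # no room left to improve
    return (post - pre) / (max_score - pre)

# Class-level view: compute the gain for each student, then average them.
pre_scores = [40, 55, 70]    # hypothetical pre-test percentages
post_scores = [70, 75, 85]   # hypothetical post-test percentages

gains = [normalized_gain(pre, post) for pre, post in zip(pre_scores, post_scores)]
average_gain = sum(gains) / len(gains)
```

Keeping the per-student gains (rather than only the class average) also supports the kind of disaggregated analysis discussed under “Evaluating” below, such as identifying which learners made the smallest gains.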

Constructing Assessments

Now that you know the purpose and learning outcomes of your assessment, and how you want to measure those outcomes, the next step is to construct, adopt, revise or create your assessment task(s). Depending on the assessment format you have selected, you may want to consider one (or more) of the following:

How to prepare a Table of Specifications?

A ‘Table of Specifications’ allows the teacher to construct an assessment which focuses on the key areas and weights those different areas based on their importance. It provides the teacher with evidence that a test has content validity, that it covers what should be covered (as well as what was taught). In other words, it serves as a guide or planning tool for the teacher to ensure that the assessment is ‘fit-for-purpose’, by assessing what is desired in terms of the learning outcomes.

The table usually takes the form of a simple matrix (see Figure below), which compares learning outcomes (see the first column on the left) with types/methods of assessment (see the top row). For each assessment type, the teacher needs to decide on the weightage of that assessment task in relation to the rest of the assessments for the course.
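As a concrete sketch of the matrix, the grid below drafts a Table of Specifications in code, with learning outcomes as rows, assessment types as columns, and percentage weightings as cell values that should sum to 100%. The outcomes, assessment types and weightings are all hypothetical, purely for illustration:

```python
# Hypothetical Table of Specifications: rows are intended learning outcomes,
# columns are assessment types, and cell values are percentage weightings.
table_of_specs = {
    "Explain core concepts":        {"MCQ quiz": 10, "Essay": 10, "Project": 0},
    "Analyse case studies":         {"MCQ quiz": 5,  "Essay": 20, "Project": 15},
    "Evaluate competing solutions": {"MCQ quiz": 5,  "Essay": 10, "Project": 25},
}

def total_weight(table):
    """Sum every cell; a complete table should total 100%."""
    return sum(w for row in table.values() for w in row.values())

def outcome_weight(table, outcome):
    """Total weighting assigned to one learning outcome across all tasks."""
    return sum(table[outcome].values())
```

Checking that the grand total is 100% and that each outcome’s row total reflects its relative importance is a quick way to confirm the assessment is ‘fit-for-purpose’ before writing any items.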

What criteria do you set for each level of performance to accurately describe the quality of students’ work?

For assessment to collect useful evidence of students’ learning, there must be a set of criteria to guide teachers (assessors) on what constitutes quality student work and how best to make evaluative judgements about how well students have performed in relation to the criteria. This often takes the form of a rubric.

Criterion (noun) – a distinguishing property or characteristic of anything, by which its quality can be judged or estimated, or by which a decision or classification may be made.

Besides the rubric, do ensure that you have prepared a marking scheme beforehand (best done while formulating the assessment questions). The use of explicit marking criteria (or marking schemes) is a vital step towards effective marking, enhancing accuracy, consistency, fairness, transparency, timeliness and feedback to students. A marking scheme may consist of the following:

  • general procedures for marking
  • questions/items/tasks
  • overall allocation/distribution of marks for the test
  • answer key, may include acceptable/unacceptable responses
  • assessment criteria, grade descriptors and mark range
  • sample marked scripts/items

See this detailed guide on the University of Sussex website:

https://staff.sussex.ac.uk/teaching/enhancement/support/assessment-design/marking-and-moderation

How to write good assessment items or formulate good questions?

Refer to the ‘Types and uses of assessment’ [How are different assessments used?]. More details will be provided in the next level.

Implementing Assessments

Once constructed, your assessment questions or items may be posted in our Learning Management System (Canvas), given out during lectures and tutorials as handouts, inserted into slides as PollEverywhere quizzes, or mounted on our ExamSoft online e-assessment platform. Students should be given sufficient practice or training in using particular methods or equipment as part of the learning process, before being asked to demonstrate their knowledge or skill in using the tool during a test or exam.

What is important when implementing your assessment is to ensure that all students have equal access to the resources necessary to take or sit for the test/exam in a fair and unbiased way.

Students should be clear about the expectations and requirements for the assessment tasks and, at the same time, be cognisant of the rules and regulations regarding sitting for tests or examinations, including the importance of avoiding plagiarism (i.e., promoting honest behaviour).

Evaluating

There are three key aspects of evaluation when thinking about assessment and its effectiveness.
First, we need to consider the usefulness of the information collected – how do we make sense of the evidence in order to guide us in making decisions about what happens next in our teaching and learning?

  • In most cases, we start with a set of results (e.g. test scores) and ask, “Did they fully master the ideas, concepts, principles and theories required for the completion of the course?”
  • If the pre- and post-test data are available, we can ask “How well did my students do now, compared to what they were capable of doing at the start of the course?” “Are there any learning gains?”
  • If we drill down into the dataset (disaggregate the data, shifting from an overview to a more detailed and granular view), we can further ask, “Which learners made significant gains between the initial assessment and the post-assessment? Which learners did not?”
  • For each of these interpretations and inferences, we can formulate feedback for our students and inform our own instructional decisions.

Second, we should evaluate how well our assessment questions and items are set and designed for measuring the learning outcomes. We should consider whether we are measuring what we set out to measure and whether those measurements are stable and consistent — or, in other words, the inferences based on the evidence collected should be both valid (truthful) and reliable (consistent).
We propose that, in your ‘marker’s comments’ or report, you provide an analysis of the questions or items by asking the following:

  • Are the questions aligned with the learning outcomes? [See Table of Specification]
  • Are the questions pitched at the level for which the course is designed?
  • Are the criteria (e.g. rubric or marking scheme) clearly communicated to all markers (if any)?
  • Are there any discrepancies in the questions/answers during the marking process? Can the marking consistency be improved further?
  • Are there any errors (e.g. wrong answers provided, poor phrasing of question, missing information, etc)?
  • For MCQs, what do the item analysis reports say?
  • For performance tasks, does the rubric work well to differentiate the levels of performance?
  • How can feedback be effectively communicated to learners?

If possible, you may also wish to get feedback from the students themselves.
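For the MCQ item-analysis reports mentioned above, two of the most common statistics are the difficulty index (the proportion of students answering an item correctly) and the discrimination index (how well the item separates high and low scorers). The sketch below is a minimal illustration, assuming responses are recorded as 1 (correct) / 0 (incorrect) and using the simple upper-group/lower-group method:

```python
def difficulty_index(item_responses):
    """Proportion of students who answered the item correctly (0.0 to 1.0).
    Values near 1.0 mean the item is easy; values near 0.0 mean very hard."""
    return sum(item_responses) / len(item_responses)

def discrimination_index(item_responses, total_scores, fraction=0.27):
    """Upper-lower discrimination: the item's difficulty among the top
    scorers minus its difficulty among the bottom scorers (taking the
    top/bottom `fraction` of the class by total test score). Positive
    values mean stronger students get the item right more often."""
    ranked = sorted(zip(total_scores, item_responses), reverse=True)
    n = max(1, round(len(ranked) * fraction))
    upper = [resp for _, resp in ranked[:n]]
    lower = [resp for _, resp in ranked[-n:]]
    return difficulty_index(upper) - difficulty_index(lower)
```

Items with a discrimination index near zero (or negative) are worth reviewing: they may be ambiguous, mis-keyed, or testing something unrelated to the learning outcomes.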

Third, all evaluations should be followed by timely and specific feedback to students as well as teachers. This helps to close the ‘feedback loop’, promote a critical and reflective approach to teaching and, more importantly, improve your assessment practices.
