
Pain Education Assessment and Evaluation Strategies

Published 9 July 2021


Curriculum assessment and evaluation are important steps in instructional design, whether the educational experience is a 30-minute single-topic in-service program or a four-year multifaceted sequence of courses. Assessment (a formative process to measure curriculum effectiveness) and evaluation (a summative process to judge curriculum quality) may involve the following steps (Note: although not synonymous, the terms “assessment” and “evaluation” may be used interchangeably in this document):

  1. Identify specific purposes, outcomes, or goals of the assessment method
  2. Establish a standard against which results of the assessment will be weighed
  3. Select feasible methods to conduct the assessment
  4. Conduct the assessment
  5. Analyze the results 
  6. Use the results to improve curricula
  7. Evaluate the assessment process

The following are basic principles (1,2,5,6,8):

  • No single method or tool is perfect; all have pros and cons.
  • Be clear about your learning objectives and what you really want to measure (outcomes).
  • Use explicit, appropriate learning theories to articulate your learning objectives, design your educational experiences, and guide your evaluation strategies.
  • A mixed-method assessment approach is likely to give you more information.
  • Qualitative methods may include interviews, focus groups, case studies, text analysis/written narratives, and observations.
  • Quantitative methods such as surveys are numbers-based and generate a score, performance metric, or rating.
  • Surveys are low effort but yield less impact; qualitative/observational tools are high effort but have the potential for major impact.
  • Be cautious when implementing a pretest/post-test methodology; apparent gains may reflect natural maturation or learning from the pretest itself rather than the curriculum.
  • Establish an intervention group and a control group; without a control, a significant educational impact identified in a single intervention group indicates only that learning can occur, not that the curriculum caused it (see the sketch after this list).
  • The stakes of an assessment are proportional to the number of data points required: a high-stakes assessment (e.g., competency in performing a life-threatening procedure) demands more evaluation than a low-stakes one (e.g., knowledge of pain-assessment techniques).
  • If using an instrument to measure outcomes, choose or create the appropriate assessment tool for the study population. The outcomes and study population dictate the tool, not vice versa.
  • Recognize “question fatigue”; short evaluations with mostly quantitative items and one or two open-ended questions are well tolerated.
  • The response rate will be higher if evaluations are completed immediately, though change in behaviors, attitudes, and progress may need to be assessed over time.
  • Assessment can drive learning.
  • Reject reproduction and memorization in favor of executive (application-oriented) tests that generate functional, interesting, and meaningful learning.
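
To make the pretest/post-test caution above concrete, the following sketch (plain Python; all scores are invented, purely illustrative) compares the gain in a hypothetical intervention group against the gain in a control group, a simple difference-in-differences, so that improvement due to natural maturation or pretest learning is not misattributed to the curriculum.

    # Minimal sketch: why a control group matters in a pretest/post-test design.
    # All scores below are hypothetical, invented for illustration only.

    def mean(xs):
        return sum(xs) / len(xs)

    # Knowledge-test scores (0-100) before and after a curriculum.
    intervention_pre = [52, 48, 61, 55, 50]
    intervention_post = [74, 70, 80, 77, 72]

    # The control group takes the same tests but receives no curriculum.
    control_pre = [53, 49, 60, 56, 51]
    control_post = [60, 57, 66, 63, 58]

    # The gain in the intervention group alone confounds the curriculum's
    # effect with natural maturation and learning from the pretest itself.
    naive_gain = mean(intervention_post) - mean(intervention_pre)

    # Subtracting the control group's gain (a difference-in-differences)
    # removes improvement that would have occurred without the curriculum.
    control_gain = mean(control_post) - mean(control_pre)
    curriculum_effect = naive_gain - control_gain

    print(f"Naive pre/post gain:         {naive_gain:.1f} points")
    print(f"Control group gain:          {control_gain:.1f} points")
    print(f"Estimated curriculum effect: {curriculum_effect:.1f} points")

In this invented example, the naive gain (21.4 points) overstates the estimated curriculum effect (14.4 points) because the control group also improved by 7.0 points without any intervention.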

Competency Assessment

An important aim of clinical education is improved competence, meaning the learner gains the capacity to carry out duties successfully in the real world (3). Competency involves self-reflection, with personal assessment of practice allowing the individual to identify and seek learning opportunities to promote continued competence, change in behaviors, and professional growth (4).

The complexity of assessing competencies initially led to a reductionist approach wherein educators broke them down into smaller fragments of behavior that could be directly observed and assessed with a checklist. However, this method has limitations. For example, does proficiency in inserting an epidural catheter in a cadaver mean that the same person will perform at a similar level when deciding whether to provide epidural analgesia and executing the procedure in the operating room on a patient with a life-threatening disease?

As stated by Schuwirth and Ash (7), competency assessment should: 

  • Support development of an integrated competence
  • Be organized around content domains, rather than test formats
  • Value all forms of information, quantitative and qualitative
  • Combine summative and formative functions to inform and guide student learning
  • Be equitable through a balance of standardized and individually tailored assessments, and by focusing on improvement of competence rather than solely on detecting incompetence

Clinical competence can be assessed using mixed methods, including but not limited to:

  • Patient management, case-based problem solving
  • Written tests (e.g., multiple choice)
  • Oral tests
  • Standardized patient interactions (e.g., Objective Structured Clinical Examinations [OSCEs])
  • Computer-based clinical performance assessments
  • Medical simulation

Summary

  • Studying the impact of a pain education initiative is hard work.
  • Multidimensional assessment (quantitative and qualitative methods) may yield the best outcomes.
  • Building rigor into the assessment strategy is challenging but necessary.
  • Competence is contextual, constructed, and changeable, and is in part subjective and collective.
  • Education research presents rich faculty development opportunities.
  • New/modified assessment tools and creative strategies are needed.

Resources

National Center for Interprofessional Practice and Education

 

REFERENCES

  1. Bordage G, Dawson B. Experimental study design and grant writing in eight steps and 28 questions. Med Educ 2003;37(4):376-85.
  2. Brashers T, Owen J. Brief Primer on IPE Evaluation for University of Washington. http://depts.washington.edu/uwhsa/wordpress/wp-content/uploads/2013/02/Brief-Primer-on-IPE-Evaluation-for-UW-2.pdf (accessed September 4, 2017).
  3. Fishman SM, Young HM, Arwood E, Chou R, Herr K, Murinson BB, Watt-Watson J, Carr DB, Gordon DB, Stevens BJ, Bakerjian D, Ballantyne JC, Courtenay M, Djukic M, Koebner IJ, Mongoven JM, Paice JA, Prasad R, Singh N, Sluka KA, St Marie B, Strassels SA. Core competencies for pain management: results of an interprofessional consensus summit. Pain Med 2013;14(7):971-81.
  4. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR; International CBME Collaborators. The role of assessment in competency-based medical education. Med Teach 2010;32(8):676-82.
  5. Oates M, Davidson M. A critical appraisal of instruments to measure outcomes of interprofessional education. Med Educ 2015;49(4):386-98.
  6. Ringsted C, Hodges B, Scherpbier A. ‘The research compass’: an introduction to research in medical education: AMEE Guide no. 56. Med Teach 2011;33(9):695-709.
  7. Schuwirth L, Ash J. Assessing tomorrow’s learners: in competency-based education only a radically different holistic method of assessment will work. Six things we could forget. Med Teach 2013;35:555-9.
  8. van der Vleuten CP, Schuwirth LW, Driessen EW, Dijkstra J, Tigelaar D, Baartman LK, van Tartwijk J. A model for programmatic assessment fit for purpose. Med Teach 2012;34(3):205-14.

 

AUTHORS

Antje M. Barreveld, MD
Assistant Professor of Anesthesiology, Tufts University School of Medicine
Co-Principal Investigator, HSDM-BWH NIH Pain Consortium Center of Excellence in Pain Education
Medical Director, Pain Management Center
Director, Substance Use Services
Anesthesiologist, Commonwealth Anesthesia Associates
Newton-Wellesley Hospital
Newton, Mass., USA 

Deb Gordon, RN, DNP, FAAN
Anesthesiology & Pain Medicine
Co-Director Harborview Integrated Pain Care Program
University of Washington
Seattle, Wash., USA 

 

REVIEWERS

Mary Suma Cardosa, MBBS, MMED, FANZCA, FFPMANZCA
Consultant Pain Specialist
Hospital Selayang
Malaysia

Chris Herndon, PharmD
Professor, School of Pharmacy
Southern Illinois University, Edwardsville
Edwardsville, Ill., USA
