Choose Measures & Collect Data

There are many ways to assess student learning. Choosing the best measures for your course or program depends on the purpose of your assessment and, most importantly, on the SLOs that you have defined. Your SLOs should direct your choice of assessment measures. What specific skills or abilities should your students demonstrate? In what context or under what conditions should they demonstrate them? The choice of measure should account for all of these factors.

Broad Approaches to Assessment

There are several broad approaches to assessment that might direct your choice of assessment measures; each is described in the pages that follow.

  • Direct and Indirect Assessment
  • Formative and Summative Assessment
  • Embedded and Add-on Assessment
  • Quantitative and Qualitative Assessment

 

Direct and Indirect Assessment

Direct measures require students to demonstrate their learning and yield tangible, visible, self-explanatory evidence of that learning. Examples include:

  • Cumulative experiences such as research projects, presentations, theses, dissertations, oral defenses, exhibitions, and performances
  • Other significant course work (written work, performances, presentations, etc.)
  • Portfolios of student work
  • Scores and pass rates on appropriate licensure or certification exams
  • Scores on locally designed multiple-choice or essay tests such as final examinations in key courses, qualifying examinations, and comprehensive examinations
  • Score gains between entry and exit on published or local tests or writing samples
  • Ratings of student skills by their field experience supervisors or employers
  • Observations of student behavior, undertaken systematically
  • Student reflections on their values, attitudes, and beliefs

Indirect measures capture students’ attitudes, perceptions, or opinions about their learning and yield signs that students are probably learning. Examples include:

  • Student ratings of their knowledge and skills and reflections on what they have learned in the course or program
  • Questions on end-of-course student evaluations that ask about the course rather than the instructor or learning outcomes
  • Placement rates of graduates (employment or graduate programs)
  • Course grades and grade distributions
  • Quality and reputation of program (rankings)
  • Student, alumni, and employer satisfaction with learning, collected through surveys, exit interviews, or focus groups
  • Honors, awards, and scholarships earned by students and alumni
  • Student participation rates in faculty research, publications, and conference presentations

 

Formative and Summative Assessment

Formative measures focus on the learning processes taking place while a student is learning. Information from formative assessment can be used for immediate changes to curricular activities. Examples include:

  • Clicker quizzes on material studied for class
  • Midterm surveys about what is going well or poorly in the class
  • Journals or logs maintained by students
  • Counts of meetings between students or between students and faculty
  • Counts of time spent studying or preparing for class

Summative measures focus on learning outcomes, or the knowledge, skills, or attitudes students take with them after a course or program of study is complete. Examples include:

  • Completed student work (paper, project, homework, etc.)
  • Final exam grades that are linked to learning outcomes
  • Course or program portfolio
  • Course or program evaluation

 

Embedded and Add-on Assessment

Embedded measures are those already in use as course or program work that also provide information for program or institutional goals. Examples include:

  • Course presentations
  • Research papers and projects in key courses
  • Student work from an experiential learning opportunity (internship, service-learning, study abroad)
  • Student performance on key assignments
  • End-of-course student evaluations tied to course outcomes rather than instructor qualities

Add-on measures go beyond course requirements, and perhaps beyond program requirements, and may elicit information not easily obtained through embedded measures. Examples include:

  • Program portfolio
  • Published test (standardized or other)
  • Licensure exam

 

Quantitative and Qualitative Assessment

Quantitative measures are counts of occurrences or structured, predetermined response options that can be summarized into meaningful numbers and analyzed statistically. Examples include:

  • Test scores
  • Rubric scores
  • Average response on scaled survey items
  • Job placement rates
  • Counts of student presentations and publications

Qualitative measures are flexible, naturalistic methods and are usually analyzed by looking for recurring patterns and themes. Examples include:

  • Reflective writing
  • Patterns observed in student behaviors
  • Notes from interviews or focus groups
  • Class discussion threads

 

The Benefits of a Multiple Methods Approach

Best practices recommend a multiple methods approach to academic program-level assessment, since a single method can restrict the interpretation of student learning. The limitations of one method may also prompt the selection of complementary methods. Together, multiple methods provide a more accurate frame for assessing student learning, and combining quantitative and qualitative methods adds reliability and makes the assessment more comprehensive. Using a multiple methods approach to academic program-level assessment has several advantages:

  • Minimizes the limitations of data collection and analysis inherent in any single method
  • Gives students alternative ways to demonstrate learning outcomes that a single method may not capture
  • Provides a more complete understanding and interpretation of student achievement
  • Values the diversity of ways in which students learn

 

Tips for a Balanced Approach to Assessment

  • Study the course or curriculum for existing, embedded assessments. Which ones might be used for program assessment data?
  • Use direct measures! No assessment of knowledge or skills should consist of indirect evidence alone.
  • Use both formative and summative assessment. Look for multiple points throughout a program to collect data on student learning that can be used for immediate improvements.

 

Determining a Timeline for Assessment

It is not necessary to assess every SLO every year. While some SLOs may be easily assessed each year (e.g., indirect evidence collected through a program satisfaction survey), others may be better suited to intermittent assessment (e.g., a paper from a research methods course offered every other year).

The table below provides an example of how a program might manage the administration of its assessment measures. This example demonstrates a three-year cycle of assessment:

Outcomes & Assessment Measures        Year 1   Year 2   Year 3

Outcome #1: Exit Interviews              x        x        x
Outcome #2: Course papers/projects                x
Outcome #3: Program Portfolio            x        x        x
Outcome #4: Internship Evaluation                          x