Lexicon


Definitions of Frequently Used Terms for the Open Pathways Project

 

  1. Goal: A goal is what one, one's colleagues, one's college, or one's department aims to achieve (Suskie, 2009, p. 116).

 

  2. Outcome: An outcome "refers to the destination rather than the path taken to get there; the end rather than the means; the outcome rather than the process, e.g. the outcome is not that students write a term paper, but that they write effectively in the discipline" (Suskie, 2009, pp. 116-117).

 

  3. Learning Goals or Outcomes: Goals or outcomes are statements of what students will do to demonstrate their learning. Please note that, for purposes of the HLC Pathways Project, we will use the term learning outcomes.

 

  4. Objectives: According to Suskie (2009, p. 117), objectives "describe detailed aspects of goals." We are not asking for objectives as part of the HLC Pathways Project.

 

  5. Competencies or Proficiencies: These terms are sometimes used to describe the skills represented in learning outcomes (Suskie, 2009, p. 117). For the HLC Pathways Project, we will use the term learning outcomes rather than competencies or proficiencies.

 

  6. Standard/Benchmarks: These are synonymous terms that specify "targets against which we gauge success in achieving an outcome, e.g. 95% of students will perform at a specified level on a rubric" (Suskie, 2009, p. 117).

 

  7. Performance Indicators/Results: These are quantitative indicators of overall student performance, e.g. the percentage of students who actually performed at a specified level on a rubric (Suskie, 2009, p. 117).
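To make the relationship between a benchmark and a performance indicator concrete, here is a minimal Python sketch. The rubric scores, the specified level, and the 95% target are invented for illustration and are not drawn from Suskie (2009).

  # Hypothetical example: comparing a performance indicator against a benchmark.
  rubric_scores = [4, 3, 4, 2, 4, 3, 4, 4]   # one rubric score per student in the cohort (invented)
  specified_level = 3                        # the rubric level named in the benchmark
  benchmark = 0.95                           # target: 95% of students at or above that level

  # Performance indicator/result: the percentage of students who actually met the level.
  students_meeting_level = sum(1 for score in rubric_scores if score >= specified_level)
  performance_indicator = students_meeting_level / len(rubric_scores)

  print(f"Performance indicator: {performance_indicator:.0%}")
  print("Benchmark met." if performance_indicator >= benchmark else "Benchmark not met.")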

 

  8. Rubric: A rubric is a scoring guide that specifies the criteria that will be used to evaluate student work. Suskie (2009) describes several types of rubrics, including:
  • Rating Scales: These rubrics typically list traits that are important for outcomes/assignments and allow the evaluator to assign subjective scores, e.g. outstanding, very good, adequate, inadequate, etc. However, the definitions of these levels of evaluation are not given.
  • Descriptive Rubrics: These rubrics are similar to rating scale rubrics, except that each level (e.g. outstanding, adequate, etc.) is clearly defined. For the HLC Pathways Project, programs should develop descriptive rubrics for each learning outcome identified for their program, as sketched below.
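As a purely illustrative Python sketch (the trait names, levels, and descriptions are invented, not taken from Suskie, 2009), a descriptive rubric pairs each trait and performance level with an explicit definition:

  # Hypothetical descriptive rubric: unlike a rating scale, every level of every
  # trait carries an explicit definition of what performance at that level looks like.
  descriptive_rubric = {
      "Thesis clarity": {
          "Outstanding": "States a precise, arguable thesis that frames the entire paper.",
          "Adequate": "States a recognizable thesis, though it may be broad or partly unfocused.",
          "Inadequate": "Offers no identifiable thesis, or the thesis does not match the paper.",
      },
      "Use of evidence": {
          "Outstanding": "Integrates well-chosen disciplinary sources that support every claim.",
          "Adequate": "Uses relevant sources, but some claims lack support or analysis.",
          "Inadequate": "Uses few or no sources, or the sources do not support the claims.",
      },
  }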

 

  9. Metacognition: Suskie (2009, p. 123) defines metacognition as "learning how to learn and how to manage one's own learning by understanding how one learns, thereby preparing for a lifetime of learning." According to Suskie (p. 124), traits of metacognition include:
  • Using efficient learning techniques.
  • Discussing and evaluating one’s own problem-solving strategies.
  • Critically examining and evaluating the bases for one’s arguments.
  • Correcting or revising one’s reasoning or arguments when self-examination so warrants.
  • Forming efficient plans for completing work.
  • Evaluating the effectiveness of one’s actions.

 

  10. Synthesis: Synthesis is the "ability to pull together what one has learned and see the big picture" (Suskie, 2009, p. 185).

 

  11. Assessment of Student Learning: According to Angelo (1995, cited in Suskie, 2009, p. 4), assessment of student learning is an ongoing process involving the following steps:
  • Establishing clear, measurable, expected outcomes of student learning.
  • Ensuring that students have sufficient opportunities to achieve those outcomes.
  • Systematically gathering, analyzing, and interpreting evidence to determine how well student learning matches our expectations.
  • Using the resulting information to understand and improve student learning.

 

  12. Assessment and Grading: What Is the Difference? (Suskie, 2009, pp. 10-11)
  • Grades focus on individual students; assessment focuses on entire cohorts of students.
  • Grades are holistic; they don’t allow us to identify student performance on specific outcomes.
  • Grades often include aspects other than student learning, e.g. class attendance, participation, timeliness of assignment submission, etc.
  • Grading standards may be vague or inconsistent.

 

  13. Direct Assessment of Student Learning: Direct assessment is any method that examines actual student work (Walvoord, 2004, p. 3).

 

  14. Indirect Assessment of Student Learning: Indirect assessment gauges student learning indirectly, for example by asking students to respond to surveys in which they rate the extent to which they have mastered the learning outcomes (Walvoord, 2004, p. 3).

 

  15. Formative Assessment: Assessment that produces results used to improve student learning.

 

  16. Summative Assessment: Assessment that produces results used for purposes of accountability or, in the case of individual students, a final grade.

 

  17. Embedded Assessment: Assessments embedded within courses in the program that provide information about student achievement of program goals (Suskie, 2009).

 

  18. Quantitative Assessment: According to Suskie (2009, p. 32), quantitative assessments "use structured, predetermined response options that can be summarized into meaningful numbers and analyzed statistically. Examples are test scores, rubric scores, and survey ratings."
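As a brief, hypothetical Python illustration of "summarized into meaningful numbers" (the scores below are invented), structured rubric scores can be reduced to simple summary statistics:

  # Hypothetical quantitative assessment data: rubric scores summarized numerically.
  import statistics

  rubric_scores = [4, 3, 2, 4, 3, 4]  # invented scores on a 4-point rubric
  print("Mean:", statistics.mean(rubric_scores))
  print("Median:", statistics.median(rubric_scores))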

 

  19. Qualitative Assessment: According to Suskie (2009, p. 32), qualitative assessments "use flexible, naturalistic methods and are usually analyzed by looking for recurring patterns and themes. Examples are reflective writing, online class discussion threads, notes from interviews, focus groups, and observations." To be considered qualitative assessment, these methods must be systematic and structured.

 

  20. Accountability: Accountability refers to "demonstrating the effectiveness of teaching and learning, programs, and services to a variety of audiences" (Suskie, 2009, p. 61).

 

  21. Continuous Improvement: The intentional use of data to continuously improve current practices.

 

  22. Program Learning Outcomes/Curriculum Mapping/Alignment: The analysis and mapping of where in the program's curriculum each learning outcome is addressed. This is often specified more precisely by indicating where the outcomes are introduced, reinforced, and assessed.
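For illustration only (the outcomes and course numbers below are hypothetical), a curriculum map can be recorded as a simple Python mapping from each program learning outcome to the courses where it is Introduced (I), Reinforced (R), and Assessed (A):

  # Hypothetical curriculum map: where each program learning outcome is
  # Introduced (I), Reinforced (R), and Assessed (A) across the curriculum.
  curriculum_map = {
      "Writes effectively in the discipline": {
          "ENG 101": "I",
          "BIO 210": "R",
          "BIO 490": "A",  # e.g. a capstone course where the outcome is assessed
      },
      "Evaluates evidence using disciplinary methods": {
          "BIO 110": "I",
          "BIO 310": "R",
          "BIO 490": "A",
      },
  }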

 

References

 

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.

 

Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass.

Definitions of Frequently Used Terms for Activity 3 of the Open Pathways Project

  1. Performance Levels: Descriptions of student performance at varying levels on each rubric trait. We recommend that performance levels be described using measurable verbs, such as those recommended in Bloom's Taxonomy as revised by Anderson and Krathwohl (2001). As students progress to higher performance levels, they should demonstrate skills that require increasingly sophisticated cognitive ability.
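As one hypothetical Python illustration (the trait wording is invented, and the verbs are only loosely aligned with the revised taxonomy), performance-level descriptors might move from lower-order to higher-order measurable verbs like this:

  # Hypothetical performance-level descriptors for a single rubric trait, using
  # measurable verbs that grow in cognitive sophistication at higher levels.
  performance_levels = {
      "Introductory": "Identifies and summarizes the main claims in assigned sources.",
      "Milestone": "Applies disciplinary methods to analyze evidence from multiple sources.",
      "Capstone": "Evaluates competing interpretations and constructs an original, well-supported argument.",
  }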

 

  2. Standard/Benchmarks: These are synonymous terms that specify "targets against which we gauge success in achieving an outcome" (Suskie, 2009). For our assessment purposes we recommend the following qualitative benchmarks for each assessment point.
  • Associate Degree Programs
    • First Assessment Point = Introductory Benchmark Performance Level
    • Exiting Assessment Point = Milestone Benchmark Performance Level
  • Baccalaureate Degree Programs
    • First Assessment Point = Milestone Benchmark Performance Level
    • Exiting Assessment Point = Capstone Benchmark Performance Level
  • Master's Degree Programs
    • First Assessment Point = Capstone Benchmark Performance Level
    • Exiting Assessment Point = Advanced Benchmark Performance Level

 

  3. Learning Outcomes: Goals or outcomes are statements of what students will do to demonstrate their learning. Each program learning outcome should reflect the level of cognitive sophistication described by the exiting assessment point's benchmark performance level.

References

 

Anderson, L.W., & Krathwohl, D.R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Addison Wesley Longman, Inc.

 

Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.