Middle States Commission on Higher Education


Published Tests and Assessments in Higher Education

Linda Suskie

MSCHE Vice President

Published tests and assessments can have an important role in understanding and improving student learning in colleges and universities by adding dimensions and perspectives not available through locally developed tests, rubrics, and surveys. Published tests give colleges a sense of how their students compare against their peers, and some published tests provide detailed feedback that lets colleges easily identify relative strengths and weaknesses in their students’ performance. In addition, because published tests and assessments are typically developed by testing professionals, the quality of test questions and problems may be superior to what faculty and staff at individual colleges can develop. Published tests and assessments are not a panacea, however, for several reasons.


One of the great strengths of American higher education is its diversity. Our country offers community colleges, art schools, engineering schools, theological seminaries, nursing schools, liberal arts colleges, technical institutes, and research universities, to name just a few. Each type of higher education institution appropriately aims to instill a distinct set of knowledge, skills, and competencies in order to prepare its students for successful careers and service to society.


For example, while writing and critical thinking skills are important outcomes of any college education, the kinds of writing and critical thinking skills that students need vary according to their aspirations. Students at a culinary school need to learn to write succinctly, while those at a research university need to learn to write extensive, in-depth analyses. Art students need to learn critical thinking skills that emphasize creativity, while business students might find logical reasoning skills more important.


Because America’s college students have diverse needs and goals, there can be no one test that is appropriate for every college and every program. Published tests and assessments reflect this diversity; the examples below illustrate the variety of writing and critical thinking skills assessed by three higher education instruments.


ETS Measure of Academic Proficiency and Progress (MAPP)

Examples of tested writing skills:
  • Discriminate between appropriate and inappropriate use of parallelism.
  • Recognize redundancy.

Examples of tested critical thinking skills:
  • Evaluate competing causal explanations.
  • Determine the relevance of information for evaluating an argument or conclusion.

ACT Collegiate Assessment of Academic Proficiency (CAAP)

Examples of tested writing skills:
  • Formulate an assertion about a given issue.
  • Organize and connect major ideas.

Examples of tested critical thinking skills:
  • Generalize and apply information beyond the immediate context.
  • Make appropriate comparisons.

Council for Aid to Education Collegiate Learning Assessment (CLA)

Examples of tested writing skills:
  • Support ideas with relevant reasons and examples.
  • Sustain a coherent discussion.

Examples of tested critical thinking skills:
  • Deal with inadequate, ambiguous, and/or conflicting information.
  • Spot deceptions and holes in the arguments made by others.


Some other tests and assessments, meanwhile, are not intended to measure college-level learning and are thus inappropriate for use at this level. The National Assessment of Adult Literacy, for example, tests the abilities to understand materials such as job applications, transportation schedules, and food labels and to perform arithmetic computations such as balancing a checkbook or figuring a tip—all important skills that are typically (and appropriately) taught at the basic (through grade 12) rather than higher education level.


Another concern with published tests and assessments is that, unless students have compelling incentives to give them their best effort, the results will not accurately reflect what students have truly learned. While students have a clear incentive to do their best on certification and licensure examinations, it can be difficult to motivate them to do their best on other published tests. Developing compelling yet ethical incentives is a challenge: raising the stakes of a test would certainly motivate students, but because all tests and assessments have inherent imperfections, it is inappropriate to make any single test a “gatekeeper” on which a certain score must be earned in order to, say, pass a course or earn a degree.


Yet another concern with published tests and assessments available for higher education is that they often have more limited evidence of their quality than published tests used in basic education. While validation studies at the K-12 level can involve tens of thousands of students, studies of higher education tests often involve far smaller numbers of students from institutions that may not be a representative sample of all colleges and universities. While test publishers continue to work diligently to research and document the validity and reliability of their tests, at this time we cannot have the same level of confidence in higher education test results that we have at the K-12 level.


A final concern with some published tests is that they do not yield enough useful feedback to help colleges identify specific shortcomings and make necessary improvements. The Collegiate Learning Assessment, for example, yields only one global score reflecting a plethora of skills such as understanding data in tables and figures, marshaling evidence from different sources, and distinguishing rational from emotional arguments. Without specific feedback on student performance on each of these skills, colleges whose students perform poorly on the CLA have no idea which skills their students lack and cannot address deficiencies without further research. Indeed, the brochure “CLA In Context 2004-2005” notes, “To use the CLA in a diagnostic manner, you will need to combine the CLA results with other data you collect.” Many colleges may not wish to risk diminishing the quality of their students’ education by diverting scarce resources from the essential business of teaching and learning to costly research studies.


Published tests and assessments can yield valuable insight into student learning at the higher education level, but only if:


  • they correspond to the college’s goals for student learning,
  • they yield useful feedback that will help the college identify areas that need improvement,
  • they have convincing evidence of their quality (validity and reliability), and
  • students have compelling incentives to give the tests their best effort.

Because there is no one perfect instrument, published tests and assessments should be used only in combination with other evidence of student learning, including locally developed measures, job placement rates, and the like, in order to draw a more accurate overall picture of student learning.


All who are concerned with the future of American higher education can take steps to ensure that students graduate with appropriate knowledge, skills, and competencies. First, we can continue to support the American system of accreditation, which requires all accredited colleges to provide clear, compelling, and appropriate evidence of rigorous student achievement. Second, we can continue to value the rich diversity of American higher education and acknowledge that no one test can adequately evaluate the knowledge, skills, and competencies expected of all of America’s college students. Finally, we can encourage the development and use of assessment tools appropriate to each field of study and each sector of American higher education, so that all students graduate fully prepared for successful careers and productive service to society.


Version: 4/20/06


© 2017 Middle States Commission on Higher Education