Assessment Selection Criteria
Selecting an employability skills assessment requires careful consideration of practical, technical, and content issues related to assessment design and implementation. The assessment selection criteria below can be used when reviewing potential assessments. Not all will apply to every situation; consider those most relevant to your state or local program. You can review the criteria on this page or use the interactive exercise to create a customized Assessment Comparison Worksheet.
The assessment you select should clearly describe content that aligns with the skills being taught and should use methods that allow students to apply those skills in appropriate contexts. It should also fit the purpose and intended use you have identified for the assessment.
As the Employability Skills Framework indicates, employability encompasses a wide variety of knowledge and skills that are both cognitive and non-cognitive. To select an appropriate assessment, users need to clearly identify the knowledge and skills being taught (and those not currently being assessed) and find an assessment that measures the content for which students are receiving instruction. As noted by the Board on Testing and Assessment, clearly defining employability skills and knowledge can be challenging since they must be defined at a level of specificity that recognizes specific subject matter or related applications (National Research Council 2011). Given the wide variety of skills, states or local programs may need multiple assessments to provide a realistic measure of employability. Further, users may want to consider which aspects of employability skills are most important to assess, ensuring that the selected assessment measures that content.
Context, or the situation in which skills and knowledge are demonstrated, is an important consideration for selecting an employability skills assessment. If the student is expected to apply a skill in a particular situation or within a discipline, then the assessment should provide an opportunity to demonstrate that application. In other words, the assessment should reflect the intended context in which skills will be used, such as the workplace or an academic setting. For example, for a workplace context, an assessment should ask students to respond to work-based scenarios and demonstrate how they might apply their skills at work.
Determining the appropriate assessment method involves reviewing how an assessment is administered, the item types (i.e., question, scenario, or simulation), and the response mode (i.e., select an answer, write an essay, assemble a portfolio, demonstrate a skill in a specific context) to ensure that the assessment effectively measures the knowledge, skills, or behaviors intended. For example, a multiple-choice question might be appropriate for assessing specific knowledge, but not for interpersonal skills or other complex skills and behaviors, which might be assessed more effectively by observation, simulation, or other types of constructed-response modes.
Determining the appropriate type of assessment, whether formative or summative, requires a clear understanding of the audience and purpose of the assessment and the intended use of the results. Summative assessments are administered at particular points in the instructional period, usually at the beginning and end, and measure skill mastery. Results often are used as an accountability tool to report on student achievement and program effectiveness and to make programmatic policy decisions. Formative assessments are integrated into the teaching and learning process as learning activities to measure the effectiveness of instruction and quality of student learning. Results are incorporated into ongoing feedback and used to inform and motivate student achievement and improve instruction based on student learning needs.
The ultimate goal of assessing employability skills is to determine how well an individual is prepared for work. Assessment results, therefore, should hold predictive value for employers, meaning that an individual’s score accurately predicts career readiness. Determining the value of assessment results to employers involves finding out if employers were involved in validating the skills measured by the assessment and reviewing data from assessment publishers on the labor market outcomes associated with different levels of results.
The purpose of the assessment and intended use of results will determine the technical standards that should be applied to assessments. For high-stakes purposes, such as certification, hiring, or program evaluation, assessments should meet rigorous technical standards.
An important part of assessment selection involves obtaining and understanding technical documentation, which may be available on the assessment publisher’s website or by contacting the publisher directly. Technical documentation will help determine the quality of the assessment and how well it meets its intended purpose. For example, it should describe the test’s purpose and target audience, the skills and knowledge being measured, how assessment items were constructed, evidence of how the test was evaluated to meet common psychometric standards, and the limitations and uses of results.
Validity (i.e., content, construct, and criterion-related validity) refers to how well the assessment measures what it is intended to measure, including skills and knowledge, the underlying theoretical construct, and the relationship between test scores and external measures of success (e.g., job readiness or college success). In other words, how trustworthy are assessment scores in representing student proficiency in the skills and knowledge being assessed? Validity information helps users understand how results can be interpreted and for what purposes; it is often reported as a correlation coefficient (ranging from .00 to 1.00). The higher the validity coefficient, the stronger the relationship between assessment scores and the intended performance.
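As an illustration of how a criterion-related validity coefficient is computed, the sketch below calculates the Pearson correlation between assessment scores and an external criterion. The data, the supervisor-rating criterion, and the function names are hypothetical; actual vendors document their own validation methodology.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: employability assessment scores and supervisor
# job-readiness ratings for the same ten students.
assessment_scores = [62, 70, 74, 78, 81, 85, 88, 90, 93, 97]
supervisor_ratings = [2.1, 2.8, 2.5, 3.0, 3.4, 3.2, 3.9, 3.7, 4.2, 4.5]

validity = pearson(assessment_scores, supervisor_ratings)
print(f"criterion-related validity coefficient: {validity:.2f}")
```

A coefficient near 1.00 on data like this would suggest the assessment's scores track the external criterion closely; in practice, validity studies use much larger samples and control for range restriction and other artifacts.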
An assessment with high reliability will produce consistent results over time, each time it is given. Simply put, a reliability value indicates the degree of confidence a user should have in assessment results. Reliability is typically reported as a correlation coefficient (ranging from .00 to 1.00). The higher the value of the reliability coefficient, the more reliable the scores. Several types of reliability coefficients can be calculated, providing different types of information about the scores, so it is important to determine not just whether an assessment is reliable, but also what reliability means for that particular assessment.
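One common reliability coefficient is Cronbach's alpha, which measures internal consistency across items intended to tap the same construct. The sketch below uses hypothetical item scores; it is a simplified illustration, not any publisher's scoring procedure.

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability.

    item_scores is a list of items, each a list of scores
    (one per test-taker) for that item.
    """
    k = len(item_scores)
    totals = [sum(person) for person in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical data: five test-takers' scores on four items that are
# meant to measure the same employability construct.
items = [
    [3, 4, 4, 5, 2],
    [3, 5, 4, 4, 2],
    [2, 4, 5, 5, 3],
    [3, 4, 4, 5, 2],
]
print(f"reliability (Cronbach's alpha): {cronbach_alpha(items):.2f}")
```

Alpha is only one of several reliability coefficients (others include test-retest and inter-rater reliability), which is why technical documentation should state which type is reported and what it means for that assessment.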
Fairness means that an assessment is free from bias, ensuring that test-takers are able to demonstrate their degree of proficiency without the interference of unrelated external factors that may affect their performance. A fair assessment is not biased toward or against a particular population, nor does it employ regional or other stereotypes. Fairness factors include age, culture, socioeconomic status, race, and gender, as well as whether the assessment attempts to measure skills that students were not taught. Fairness is a significant concern in assessing employability skills because, for interpersonal skills in particular, eliminating cultural and social biases can be difficult. Users should consider whether the test is fair and appropriate for the target population and aligned with the skills and knowledge intended for assessment.
An appropriate assessment should adequately address both the skills you intend to measure and the practical needs of your state or local program. Assessments may meet some, but not all, needs of states or local programs, so weighing these considerations will help you select the right assessment.
Administering an assessment involves human and technology resources and time. Key considerations include how the assessment will be administered (i.e., orally, on paper, or by computer), how long an assessment takes to administer, where it will be administered, how often it will be administered, and who will administer it. Other factors include the availability, clarity, and detail of materials for administration and scoring procedures and the level of training or certification required for assessment administrators.
Scoring and reporting considerations include scoring methodology (whether scoring is done by a person or a machine), technology needs, scoring guidelines, training and certification for scorers, materials or resources to help interpret results, and credentialing. Users should consider the availability of supporting documents and information from the assessment developer. Assessment results should be readily available, in a usable format, easily understandable, and useful to key audiences. Some assessment vendors offer a credential for demonstrating mastery of employability skills, usually those associated with requirements for entry-level or intermediate jobs, on an assessment or series of assessments. The portability or applicability of the credential depends on a number of factors, including its recognition in other geographic locations; its value to employers, unions, or industries; and employment hiring requirements.
Security and data storage requirements vary by assessment type and across specific assessment instruments. Generally, security and confidentiality procedures require that assessment materials and student data, including personal information and assessment results, are secured (e.g., in a locked cabinet or locked storage room, or stored on a secure computer or server) to prevent them from being compromised. Procedures also can include keeping questions confidential before an assessment is administered and limiting access to assessment materials to key staff. Similarly, student assessment data should be kept secure and released only to the student and other authorized individuals. Storage of and access to credential data often involve added considerations. For example, a separate secure database may be required to issue and track a regional or state-certified credential, provide verification of credentials to employers, and report out to stakeholders. These procedures may require additional resources from a state or program to ensure security and confidentiality. Users should consider the availability of supporting documentation and information from the assessment developer about its security procedures and how data will be stored.
An assessment should be accessible to everyone seeking to take it. It is important, then, to choose an assessment with appropriate modified forms or clear requirements for accommodations for students with disabilities and other special needs. Users should also understand which accommodations are required by law or regulation.
Assessment costs include expenses for administration, training, and scoring. Specifically, these are costs related to the facilities, materials, and equipment needed to administer and score the assessment, training for assessment administrators, resources for protecting the security of assessment materials and results, and other human and material resources. Commercially developed assessments may require users to enter into a licensing agreement with the assessment publisher. Fees may be charged each time the test is administered or for a specific duration of time.