Glossary of Terms
Unless otherwise noted, these definitions are taken from the NSF’s 2002 User-Friendly Handbook for Project Evaluation [pdf]. REU-specific adaptations have been incorporated by the CISE REU Assessment Work Group.
- Accuracy: The extent to which an evaluation is truthful or valid in what it says about a program, project, or material.
- Achievement: Performance as determined by some type of assessment or testing.
- Activities: The distinct educational exercises or functions that take place throughout the REU experience, e.g. training workshops, lectures, and social events, designed to complement and facilitate research projects [CISE REU Assessment Work Group].
- Affective: Pertaining to emotions, feelings, and attitudes.
- Anonymity (provision for): Evaluator action to ensure that the identity of subjects cannot be ascertained during the course of a study, in study reports, or in any other way.
- Assessment: Often used as a synonym for evaluation. The term is sometimes recommended for restriction to processes that are focused on quantitative and/or testing approaches. See “The Distinction between Assessment and Evaluation.”
- Assumptions: The beliefs we have about the program, the people involved, the context, and the way we think the program will work.
- Attitude: A person’s opinion about another person, thing, or state.
- Attrition: Loss of subjects from the defined sample during the course of data collection, particularly in longitudinal evaluation of REU students.
- Audience(s): Consumers of the evaluation; those who will or should read or hear of the evaluation, either during or at the end of the evaluation process. Includes those persons who will be guided by the evaluation in making decisions and all others who have a stake in the evaluation (see stakeholders).
- Background: Information that describes the project, including its goals, objectives, context, and stakeholders.
- Baseline: Facts about the condition or performance of subjects prior to treatment or intervention (i.e. the REU experience).
- Behavioral objectives: Measurable changes in behavior that are targeted by a project (e.g. publications, poster sessions, conference attendance).
- Bias: A point of view that inhibits objectivity.
- Case study: An intensive, detailed description and analysis of a single project, program, or instructional material in the context of its environment.
- Categorical scale: A scale that distinguishes among individuals by putting them into a limited number of groups or categories (e.g. undergraduate vs. graduate student).
- Checklist approach: The principal instrument for practical evaluation, especially for investigating the thoroughness of implementation.
- Coding: Translating a given set of data or items into descriptive or analytic categories to be used for data labeling and retrieval. For example, categorical data (e.g. class standing) may be coded as numbers (e.g. freshman=1, sophomore=2) for analysis, or open-ended survey responses may be grouped into themes (e.g. initial interest in research); see the sketch following this glossary.
- Cohort: A term used to designate one group among many in a study. For example, “the first cohort” may be the first group to have participated in the REU program at a given institution.
- Component: A physically or temporally discrete part of a whole. It is any segment that can be combined with others to make a whole, such as a component of the evaluation.
- Conceptual scheme: A set of concepts that generate hypotheses and simplify description, through the classification and categorization of phenomena, and the identification of relationships among them.
- Conclusions (of an evaluation): Final judgments and recommendations.
- Content analysis: A process using a parsimonious classification system to determine the characteristics of a body of material or practices, such as student journals.
- Context (of an evaluation): The combination of factors accompanying the study that may have influenced its results, including geographic location, timing, political and social climate, economic conditions, and other relevant professional activities in progress at the same time.
- Continuous scale: A scale containing a large, perhaps infinite, number of intervals. Units on a continuous scale do not have a minimum size but rather can be broken down into smaller and smaller parts. For example, grade point average (GPA) is measured on a continuous scale: a student can have a GPA of 3, 3.5, 3.51, etc. (See categorical scale.)
- Criterion, criteria: A criterion (variable) is whatever is used to measure a successful or unsuccessful outcome, e.g., grade point average.
- Criterion-referenced test: Test whose scores are interpreted by referral to well-defined domains of content or behaviors, rather than by referral to the performance of some comparable group of people.
- Cross-case analysis: Grouping answers from different persons to common questions, or analyzing different perspectives on the issues under study. (See also Qualitative evaluation.)
- Cross-sectional study: A cross-section is a random sample of a population, and a cross-sectional study examines this sample at one point in time. Successive cross-sectional studies can be used as a substitute for a longitudinal study. For example, examining today’s first year students and today’s graduating seniors may enable the evaluator to infer that the college experience has produced or can be expected to accompany the difference between them. The cross-sectional study substitutes today’s seniors for a population that cannot be studied until 4 years later.
- Demographic Indicators: Common demographic indicators for REU are gender (male, female) and ethnicity (African American, Hispanic/Latino, Native American/American Indian, Pacific Islander).
- Descriptive data: Information and findings expressed in words, unlike statistical data, which are expressed in numbers. (See Qualitative evaluation.)
- Design: The process of stipulating the investigatory procedures to be followed in doing a specific evaluation.
- Dissemination: The process of communicating information to specific audiences for the purpose of extending knowledge and, in some cases, with a view to modifying policies and practices.
- Document: Any written or recorded material not specifically prepared for the evaluation.
- Effectiveness: Refers to the worth of a project in achieving formative or summative objectives. “Success” is its rough equivalent.
- Elite interviewers: Well-qualified and especially trained persons who can successfully interact with high-level interviewees and are knowledgeable about the issues included in the evaluation.
- Ethnography: Descriptive anthropology. Ethnographic program evaluation methods often focus on a program’s culture.
- Evaluation: The systematic determination of merit, worth, and significance of something or someone. See “The Distinction between Assessment and Evaluation.” (Source: Wikipedia, accessed online April 24, 2006 at [link].)
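To make the “Coding” entry above concrete, the following is a minimal sketch of coding survey data for analysis, assuming Python with the pandas library. The column names, numeric codes, and theme keywords are hypothetical illustrations, not part of the handbook or the CISE REU materials.

```python
# Hypothetical sketch: coding REU survey data for analysis.
# Column names, code values, and theme keywords are illustrative only.
import pandas as pd

# Raw responses as they might arrive from a pre-program survey
responses = pd.DataFrame({
    "class_standing": ["freshman", "sophomore", "junior", "senior"],
    "why_research": [
        "I want to go to grad school",
        "my professor encouraged me",
        "curious about AI research",
        "a friend recommended the program",
    ],
})

# Code the categorical variable as numbers (freshman=1, sophomore=2, ...)
standing_codes = {"freshman": 1, "sophomore": 2, "junior": 3, "senior": 4}
responses["standing_code"] = responses["class_standing"].map(standing_codes)

# Code open-ended responses into themes via simple keyword matching
def theme(text: str) -> str:
    text = text.lower()
    if "grad" in text:
        return "graduate school plans"
    if "professor" in text or "friend" in text:
        return "personal encouragement"
    return "intrinsic interest"

responses["interest_theme"] = responses["why_research"].apply(theme)
print(responses[["class_standing", "standing_code", "interest_theme"]])
```

In practice, theme coding of open-ended responses is usually done by human raters against an agreed codebook; the keyword matching above only stands in for that step.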