The next stage in this process is to analyze the data that has been collected. This step involves several major components.
Record and Organize the Data
Analyzing Essays, Projects, and Other Open-ended Artifacts
These types of data sources often require a rubric and a panel of reviewers, so their analysis calls for more collaboration than the other forms discussed in this segment. It is therefore important to form a panel of two or more experts in the field to review the data using a common rubric. These experts can include field-related faculty members or instructors, alumni, employers, and other professionals.
Suggestions for Successful Open-ended Artifact Analysis:
- Hold a calibration session in which the panel meets prior to beginning the data evaluation. This allows members to agree on a common method of rating artifacts against a shared rubric or scale.
- Determine whether to hold a Simultaneous Rating Session or an Independent Rating Session:
- Simultaneous Rating Session – All panel members rate the artifacts at the same time and in the same location. The advantage of this format is that all rating is completed at once and panel members can ask questions as they arise.
- Independent Rating Session – Each panel member rates artifacts on their own time and submits ratings to a predetermined person who tallies the scores (a sketch of such a tally appears after this list). This format works best when panel members find it difficult to meet, or when artifacts are too lengthy and complex to evaluate in a group session. However, opportunities for discussion and collaboration are limited, and the ratings may show more variance.
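The tally step in an Independent Rating Session amounts to collecting each rater's scores and averaging them per artifact. The sketch below assumes a hypothetical 3-point rubric with made-up artifact IDs, rater names, and scores; flagging artifacts where raters disagree widely is one way to compensate for the reduced discussion.

```python
# A minimal tally sketch for an Independent Rating Session, assuming a
# 3-point rubric scale. Artifact IDs, rater names, and scores are
# hypothetical.
from statistics import mean

ratings = {
    "essay_01": {"rater_a": 3, "rater_b": 2, "rater_c": 3},
    "essay_02": {"rater_a": 2, "rater_b": 2, "rater_c": 1},
    "essay_03": {"rater_a": 3, "rater_b": 1, "rater_c": 3},
}

for artifact, scores in ratings.items():
    values = list(scores.values())
    spread = max(values) - min(values)
    # A large spread flags artifacts the panel may want to revisit, since
    # independent ratings tend to show more variance than group ratings.
    flag = " <- discuss" if spread >= 2 else ""
    print(f"{artifact}: mean = {mean(values):.2f}, spread = {spread}{flag}")
```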
Analyzing Exams, Embedded Questions, and Other Close-ended Artifacts
Analyzing data for these types of artifacts can be relatively quick, since the central step is to determine whether the student or stakeholder answered each question correctly.
Keep in mind, however, that the questions used should pertain exclusively to the learning outcomes being assessed. If they do not, it is recommended that the subtest sections or embedded questions within the exam be scored separately according to the learning outcome each addresses; a sketch of this kind of per-outcome scoring follows.
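As a minimal illustration, the sketch below (with a hypothetical answer key, outcome mapping, and one student's responses) scores each embedded question and rolls the results up into one subscore per learning outcome.

```python
# A minimal sketch of splitting embedded exam questions into separate
# subscores by learning outcome. The answer key, outcome mapping, and
# student responses below are hypothetical.
answer_key = {"q1": "B", "q2": "D", "q3": "A", "q4": "C"}
outcome_map = {"q1": "Outcome 1", "q2": "Outcome 1",
               "q3": "Outcome 2", "q4": "Outcome 2"}
student_responses = {"q1": "B", "q2": "A", "q3": "A", "q4": "C"}

subscores = {}
for question, correct in answer_key.items():
    outcome = outcome_map[question]
    # Score 1 for a correct answer, 0 otherwise.
    subscores.setdefault(outcome, []).append(
        int(student_responses.get(question) == correct))

for outcome, marks in subscores.items():
    print(f"{outcome}: {sum(marks)} of {len(marks)} questions correct")
```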
Report the Data
Reporting data involves describing the data that has been analyzed. The main questions answered are:
- Were the criteria met?
- How many students or artifacts met the criteria set in the methods section?
- How many students or artifacts were included in the final assessment?
- What were the results of the assessment?
Minimal Requirements
As part of your planning, you should have set a minimum criterion for success for the students being assessed or for your unit's outcome. Your results section now needs to report against this minimum requirement.
Reporting on the minimum requirement includes basic information about the number and/or percentage of artifacts or students meeting the criterion.
Example 1: 80% (8 out of 10) of the papers assessed met the minimum criterion for success. The average score on the rubric was 2.5 (while the minimum criterion was 2.0).
Example 2: The 29 students assessed answered an average of 25 of 40 questions correctly. The minimum criterion of answering an average of 30 questions correctly was not met.
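The arithmetic behind such a summary is simple enough to show directly. The sketch below uses hypothetical individual paper scores chosen so the output reproduces the figures in Example 1.

```python
# A sketch of the arithmetic behind Example 1. The individual paper scores
# are hypothetical, chosen so the summary matches the example (3-point
# rubric, minimum criterion of 2.0).
scores = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 2.5, 2.5, 1.0, 1.0]
criterion = 2.0

met = sum(score >= criterion for score in scores)
average = sum(scores) / len(scores)
print(f"{met / len(scores):.0%} ({met} out of {len(scores)}) "
      f"of the papers met the minimum criterion")
print(f"Average rubric score: {average:.1f} (minimum criterion: {criterion})")
```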
Disaggregating Results
Disaggregating the results of the exam/quiz questions or the rubric is a more comprehensive way of reporting data. Disaggregating data means looking at test scores or results by specific subgroups of students, questions, or outcomes. This can be done in several ways:
One approach is to report the average score for each rubric indicator. Example: 80% (8 out of 10) of the papers assessed met the minimum criterion for success. The overall average was 2.2 and the minimum criterion was 2.0. The rubric used for the assessment addressed four major competencies. For each indicator, the average number of points across students is shown below (the maximum number of points was 3):
- Knowledge of Topic: 2.5
- Contrasting Different Perspectives: 2.0
- Using Outside Sources to Expand Argument: 2.7
- Appropriate Suggestions for Policy Change: 1.6
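Computationally, disaggregating by indicator is just averaging each rubric column instead of each student's total. The sketch below uses hypothetical per-student scores on the four competencies named above; the printed averages are illustrative, not the example's exact figures.

```python
# A sketch of disaggregating rubric results by indicator. Each row is one
# student's scores on the four competencies from the example above; the
# scores themselves are hypothetical (maximum 3 points each).
rubric_scores = [
    # (topic, perspectives, sources, policy)
    (3, 2, 3, 2),
    (2, 2, 3, 1),
    (3, 2, 2, 2),
    (2, 2, 3, 1),
]
indicators = [
    "Knowledge of Topic",
    "Contrasting Different Perspectives",
    "Using Outside Sources to Expand Argument",
    "Appropriate Suggestions for Policy Change",
]

# zip(*rubric_scores) transposes rows of students into columns of indicators.
for name, column in zip(indicators, zip(*rubric_scores)):
    print(f"{name}: {sum(column) / len(column):.1f}")
```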
Another approach is to report results (e.g., mean scores, percentages) for individual questions or subsets of questions in an exam, quiz, or other close-ended artifact.
Reporting on individual questions is most effective when there are only a few questions, each addressing a different knowledge area or skill related to the outcome. When there are many questions, they can instead be grouped into subcategories according to the major sets of knowledge and/or skills addressed, as in the sketch after the example below.
Example: The 29 students assessed answered an average of 25 of 40 (63%) questions correctly. The minimum criterion of answering an average of 30 (75%) questions correctly was not met. The questions addressed four major components of cell biology. Below are the results as disaggregated by major components:
- Knowledge of Cell Mitosis: Overall average = 40% of questions correct
- Identifying Cell Anatomy: Overall average = 80% of questions correct
- Understanding Functions of Cells: Overall average = 50% of questions correct
- Identifying Different Kinds of Cells: Overall average = 80% of questions correct
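A sketch of this grouping follows, using a hypothetical question-to-category map and made-up per-question correct counts for the 29 students; the printed percentages approximate, but do not exactly reproduce, the example's figures.

```python
# A sketch of disaggregating close-ended results by subcategory. The
# question-to-category map and per-question counts of correct answers
# are hypothetical stand-ins for 29 scored exams.
n_students = 29
question_categories = {
    "q1": "Knowledge of Cell Mitosis", "q2": "Knowledge of Cell Mitosis",
    "q3": "Identifying Cell Anatomy", "q4": "Identifying Cell Anatomy",
    "q5": "Understanding Functions of Cells", "q6": "Understanding Functions of Cells",
}
# Number of students (out of 29) who answered each question correctly.
correct_counts = {"q1": 11, "q2": 12, "q3": 23, "q4": 23, "q5": 15, "q6": 14}

totals = {}
for question, category in question_categories.items():
    totals.setdefault(category, []).append(correct_counts[question])

for category, counts in totals.items():
    # Fraction of all answers in this category that were correct.
    pct = sum(counts) / (len(counts) * n_students)
    print(f"{category}: overall average = {pct:.0%} of questions correct")
```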