Key points:
- Maintaining academic rigor and revising assessments in response to generative AI
- Stay up to date on AI in education
- New group focuses on AI skills in education and workforce
- For more news on AI in education, visit eSN's digital learning hub
Earlier this school year, faculty members had a conversation about pre-service teachers pulling all-nighters to complete their lesson plans. Most lamented their own experiences of spending far too much time developing high-quality lesson plans.
One faculty member spoke up and asked why we don't teach pre-service teachers strong prompting for lesson plan development and encourage them to use a generative AI tool to create lesson plans. The key is how the lesson plan is implemented to address the needs of the students in the classroom, not the creation of the lesson plan itself. Alternatively, student teachers could be asked to critique the AI-generated lesson plan, demonstrating their ability to analyze it. Several faculty members pushed back, saying it is essential for future teachers to write those lesson plans themselves. However, the majority may be wrong.
Educators have long been encouraged to focus on higher-level thinking skills. Two key tools that educators have used for decades are Bloom's taxonomy and Costa's levels of questioning. Now, more than ever, educators should focus on the top four levels of Bloom's 1956 taxonomy (with evaluation at the top) and the processing and application levels of Costa's model.
In a world rich in generative AI, we must rethink how we view evaluation. Now that generative AI is capable of handling lower-level cognitive tasks, such as remembering and understanding, assessments must challenge students to engage in higher-order thinking. This includes analyzing data, evaluating scenarios, and creating new solutions that AI cannot easily replicate.
It is time for educators to ensure that all assessments beyond the most basic formative checks focus on the top four of Bloom's six levels. The basic knowledge level (Level 1) of the original version of Bloom's taxonomy (1956) can now be generated by AI. For example, creating a state report that covers the capital and basic history would be easy for a generative AI tool such as Claude.ai.
Educators should return to the original version of Bloom's taxonomy, where evaluation, synthesis, analysis, and application are the top four levels. With the rise of an earlier generation of technology tools, Bloom's taxonomy was revised to emphasize creation, placing creation at the top of the taxonomy. However, new materials can now be developed easily with generative AI. Effectively and efficiently applying those creations, and analyzing, integrating, and evaluating them within existing systems or thought processes, should be where educators focus in the future. As technology has changed the landscape again, it is time for evaluation to return to the top of the taxonomy.
This is not to say that students should never be asked to perform tasks that align with Costa's lower levels of questioning, or to work at Bloom's levels of knowledge and understanding. However, assessments, particularly quizzes, tests, and assignments, need to be developed to focus on the higher levels of Bloom's taxonomy. When teachers want to assess the lower levels of Bloom, they should consider returning to oral evaluations.
The rise of generative AI in educational contexts requires a strategic review of assessment methodologies to maintain the integrity and relevance of classroom instruction. By shifting the focus toward higher-order thinking skills such as analysis, synthesis, and evaluation, educators can ensure that assessments challenge students to engage deeply with content, encouraging originality and critical thinking.
Emphasizing the application of knowledge in diverse contexts helps students develop practical skills that transcend academic settings and prepare them for real-world challenges. Additionally, by integrating assignments that require unique, thoughtful, and personalized responses, educators will cultivate digital literacy among their students. These are essential competencies in an increasingly AI-integrated world. This shift combats the potential for academic dishonesty and should improve educational outcomes by promoting essential 21st century skills.
Ultimately, revising assessments in response to generative AI technologies is about maintaining academic rigor and preparing students to be thoughtful, innovative, and ethical contributors to a technology-rich society.