In the rapidly evolving AI landscape, education is at the forefront. New artificial intelligence tools for educators and students emerge daily, from AI tutoring programs in schools to tools for curriculum creators, and the educational AI market is booming.
However, the long-term impact of AI use on students remains unknown. As educational research on AI attempts to keep pace with AI development, questions remain about how AI use affects student motivation and overall learning. These questions are particularly important for students of color, who consistently encounter more systemic barriers than their white peers (Frausto et al., 2024).
AI tools, which surged into classrooms in the wake of the COVID-19 pandemic and the related decline in student learning and motivation, encompass a wide range of technologies, including tools like ChatGPT, that use vast repositories of data to make decisions and solve problems. Because these tools can help with tasks such as generating essays from prompts, students quickly brought them into the classroom. Educators and administrators were slower to adopt these technologies but have begun to use AI as well, in part to manage unregulated student use.
A recent rapid research review concluded that students' motivation is shaped by their experiences both inside and outside the classroom. The review highlights that student motivation is determined by more than individual attitudes, behaviors, beliefs, and traits, but it does not comprehensively address the effects of AI on student motivation (Frausto et al., 2024).
To understand how AI can affect the motivation and learning of students of color, we must examine the nature of AI itself. AI learns and develops from pre-existing data sets, which often reflect social prejudices and racism. This reliance on biased data can lead to biased and potentially harmful results. For example, AI-generated images are prone to perpetuating stereotypes and clichés, such as exclusively depicting leaders as white men in suits. Similarly, if we were to use AI to generate a leadership curriculum, it would be prone to creating content that aligns with this stereotype. Not only does this further reinforce the stereotype and subject students to it, but it can also produce content that is not relatable, leading students of color to disengage from learning and lose motivation in the course entirely (Frausto et al., 2024).
This is not to say that AI is the only potential detractor. Discrimination is a persistent real-world factor that shapes students' motivation and learning experiences, and similar bias has previously been observed in non-AI learning and motivation tools that were built on research focused predominantly on white, middle-class students (Frausto et al., 2024). In many ways, AI simply reflects the biases that exist in education and in the wider world; AI learns from real data, and the biases it perpetuates mirror social trends. AI biases are not mystical; they are largely a mirror of our own. Teachers, for example, have been shown to demonstrate comparable levels of bias toward the world around them.
When we consider the current use of AI in education, these deep-rooted biases are already cause for concern. On the student side, AI systems have demonstrated subtle racism in the form of dialect bias: students who use African American Vernacular English (AAVE) may find that the AI they communicate with offers them less favorable recommendations than their peers receive. On the teacher side, a similar bias can affect the grades that AI-powered programs assign, favoring the phrasing and cultural perspectives found in white students' essays over those of students of color. These are only some examples of the biases present in the current use of AI in education, but they are already setting off alarm bells. Similar instances of human-to-human discrimination, such as by teachers and peers, have been linked to lower motivation and learning among students of color (Frausto et al., 2024). In this way, AI and its biases may become yet another obstacle for students of color: AI learning tools and supports that were designed and tested on white students, to positive effect, may negatively impact students of color because of built-in biases.
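One way schools and vendors can check for the kind of dialect bias described above is to audit an AI grader directly. The sketch below is a minimal, hypothetical example: `score_essay` stands in for whatever grading model is being reviewed, the paired essays are assumed to convey the same content in different dialects, and `toy_grader` is an intentionally simplistic stand-in used only to show the audit running end to end.

```python
# Hypothetical sketch: auditing an AI essay grader for dialect bias.
# Nothing here refers to a real product; score_essay is whatever model is under review.
from statistics import mean


def audit_dialect_gap(score_essay, paired_essays):
    """Compare scores for content-matched essay pairs.

    paired_essays: list of (standard_english_text, aave_text) tuples.
    Returns the average score gap; a positive value means the grader
    systematically favors the standard-English version.
    """
    gaps = []
    for standard_text, aave_text in paired_essays:
        gaps.append(score_essay(standard_text) - score_essay(aave_text))
    return mean(gaps)


if __name__ == "__main__":
    # Toy grader that (unfairly) rewards a particular phrasing choice,
    # standing in for a black-box AI grading tool.
    def toy_grader(text):
        return 4.0 if "therefore" in text.lower() else 3.0

    pairs = [
        ("The data shows improvement; therefore, the program worked.",
         "The data be showing improvement, so the program worked."),
    ]
    print("Average score gap:", audit_dialect_gap(toy_grader, pairs))
```

An audit like this does not fix the underlying bias, but a consistently positive gap across many content-matched pairs is a concrete signal that a grading tool should not be used as-is.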
For humans, we recommend anti-bias practices to counter these perceptions. With AI, we may still have the opportunity to incorporate similar anti-discrimination and bias-awareness practices. This type of AI training has been a prominent part of the conversation around creating and using AI responsibly for several years, with companies such as Google publishing responsible-AI guidelines that emphasize addressing bias when developing AI systems. Addressing AI bias with intentionality can help circumvent discriminatory outcomes, for example by deliberately selecting large, diverse data sets to train AI and rigorously testing systems with diverse populations to ensure equitable results. Even after these efforts, however, AI systems may remain biased toward certain cultures and contexts, and even good intentions to support student learning and motivation with AI can lead to undesirable outcomes for underrepresented groups.
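The "rigorously testing with diverse populations" step often comes down to disaggregated evaluation: reporting a metric separately for each group instead of a single average that can hide gaps. The sketch below is a minimal illustration under that assumption; `accuracy_by_group` is a hypothetical helper, and the toy predictions and group labels exist only to show the idea.

```python
# Minimal sketch of disaggregated evaluation: compute a metric per demographic
# group so that gaps between groups are visible rather than averaged away.
from collections import defaultdict


def accuracy_by_group(predictions, labels, groups):
    """Return accuracy broken down by group label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}


if __name__ == "__main__":
    # Toy data; a real audit would use a held-out test set that deliberately
    # represents the student populations the tool is meant to serve.
    preds = [1, 0, 1, 1, 0, 1]
    labels = [1, 0, 0, 1, 1, 1]
    groups = ["A", "A", "A", "B", "B", "B"]
    print(accuracy_by_group(preds, labels, groups))
```

Reporting results this way makes it harder for a tool that works well "on average" to mask the fact that it performs worse for particular groups of students.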
While the integration of AI into education is already happening rapidly, there is still an opportunity to understand and address the potential for bias and discrimination early on. Although we cannot be certain of AI's impact on the educational and motivational outcomes of students of color, the research sets a precedent for bias as a detractor. By approaching the implementation of AI in education with intentionality, with the inclusion of diverse perspectives, and with awareness of potential harm, we can work to avoid these harms and instead create an AI-powered learning environment that improves the learning experience for all students.