Since the arrival of generative AI tools in learning environments, educators have sought guidance on the safe and ethical use of AI in education. The recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, published on October 30, 2023, provides much-needed guidance to ensure the responsible and safe development, deployment, and use of artificial intelligence (AI) systems and tools.
In preK-12 educational settings, educators and school leaders continue to work to clearly and consistently identify opportunities and challenges for AI systems and tools so that students and educators benefit from the recommendations and actions outlined in the Executive Order (EO). The EO is guided by eight principles and priorities (items (a) through (h) in the EO), two of which we focus on here:
- AI policies must be consistent with advancing equity and civil rights.
- The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
These priorities are particularly important in the preK-12 learning environment, where our students are often first introduced to AI systems and tools, learn to use them, and potentially become AI creators. As we know, education is a civil right, and closing the digital equity gap requires powerful technology-driven learning opportunities, including a nuanced understanding of how and when to use AI-enabled products.
The danger of equating AI with humans
Ensuring the protection of our communities requires us to consider the biases inherent in AI systems, and much has already been written and shared on this topic. The EO highlights something that is crucial for educational leaders to remember: AI generates content based on what already exists. Artificial intelligence systems use input from both machines and humans to make predictions and generate content. The data these systems draw on is historical and carries the biases inherent in that data. Moreover, recent studies show that human users can unconsciously absorb automated biases from AI systems and tools, and that those biases can persist even after they stop using the AI program. This means that even limited interactions with AI tools can have long-lasting effects. It is essential that educational leaders prioritize AI literacy for educators, students, and their communities, in addition to considering the biases, data privacy, and age restrictions they already weigh when adopting AI tools.
However, our concerns go beyond bias; we also want to warn against the anthropomorphization of AI. AI is not human, and we should not use human-related terms to refer to these systems and tools, because doing so can lead to misconceptions that harm not only our students but also our communities. It is important to remember that AI systems are just computers, and they make mistakes. As such, we believe the term "hallucination" should be replaced with "mistake." Furthermore, the definition of generative AI (gen AI) in the EO describes the outputs of these models as synthetic content. This is important because it reinforces the idea that AI is a tool that uses existing data to make predictions or generate content. In other words, it produces an approximation, or guess, based on what humans have already created.
AI does not create new ideas. Understanding and accepting this helps protect against the anthropomorphization of AI systems and tools, which can be common in educational settings. It is important for students and educators to remember that AI is a tool and should be referred to as "it," never "she," "he," or "they." Educators should remember that AI can sometimes display human-like characteristics, such as having a voice or the ability to answer questions, but these systems are simply tools that humans have created and use to complement their uniquely human abilities.
AI to complement uniquely human skills
Still, AI is a sophisticated tool that can complement human skills when used properly. The idea of using technological systems to complement or improve human capabilities is not new; it has been called intelligence augmentation (IA). IA focuses on the importance of keeping humans in the loop as these tools are developed and used in learning environments. For example, many educators are excited about the possibility that artificial intelligence tools could enable and expand individualized and differentiated learning for all students. A well-designed AI tool can increase a teacher's ability to provide relevant and timely feedback to students, which can strengthen meaningful connections between teachers and their students. By centering educators' abilities to care for their students as people, we can advance equity and civil rights.
We believe that technologies can be most valuable for teaching and learning when they complement human capabilities by placing the professional judgment of educators and the voices of students at the center. As much as AI systems and tools can support teaching and learning, it is essential to remember that human judgment will always be necessary. Human judgment is what will allow us to follow the EO's guidance, including the need for schools to ensure that AI policies are consistent with advancing equity and civil rights and that Americans' interests are protected.
Note: This post was written by human authors and not generated by AI.