Institutions across sectors are discussing how to address the continued growth of artificial intelligence (AI) tools in their environments. Educators are no different, as AI tools have already changed the fabric of the classroom and the teachers' office.
The current educational technology environment is as fluid as one could imagine, with new artificial intelligence tools emerging daily. Since ChatGPT, the poster child for generative AI, was introduced about 14 months ago, some educators have attempted to ban its use. For example, [New York City Public Schools originally banned AI](https://edition.cnn.com/2023/01/05/tech/chatgpt-nyc-school-ban/index.html); that didn't work well, and [the policy was rescinded after only a few months](https://www.edweek.org/technology/180-degree-turn-nyc-schools-goes-from-banning-chatgpt-to-exploring-ais-potential/2023/10).
The Biden administration addressed AI guidance for schools in its October 2023 Executive Order, asking the U.S. Department of Education to provide guidance for classrooms and considerations on equity and privacy issues, and recommending that AI tools include watermarks to identify AI-generated content. The U.S. Department of Education released guidance in May 2023, addressing the need to ensure human decision-making within automated processes and to ensure fairness and the use of quality data to train AI tools.
At this stage, it is important to ensure that existing institutional policies address the issues raised by the use of AI tools. TeachAI.org provides a set of sample AI recommendations for educators that can be used to inform policy development. It is better to review existing policies in light of AI than to create a standalone AI policy that may or may not fully align with other policies.
## Five key AI policy considerations
- Do not simply prohibit the use of AI in completing assignments. Tools like Microsoft Office and Grammarly have built-in AI, so a blanket ban would unnecessarily prohibit the use of many common tools.
- Ensure all AI tools comply with FERPA and ADA regulations. When using AI tools to develop student-specific items, such as personalized learning plans or IEPs, do not include personally identifiable information.
- Require instructors to be clear about when and how students can and cannot use AI tools. For example, an instructor might allow students to use AI to develop an outline, but not to write the narrative. At the course level, instructors should be clear about what level of usage they would like to see within the course, or assignment by assignment, if applicable. This is especially important if the institution does not have updated policies addressing the use of AI. Joel Gladd offers some sample syllabus language for educators to consider at various levels of AI integration.
- Ensure there is a human decision-making step in any automated AI process. There is clear evidence that AI detectors tend to discriminate against non-native English speakers, often flagging their work as AI-generated. Ensure that the use of AI does not undermine an equitable environment for all.
- Consider how to make the use of AI throughout the institution as transparent as possible. Additionally, consider identifying AI-generated materials outside the classroom as well.
As educational institutions navigate the rapidly evolving AI landscape, a thoughtful approach is required to leverage the benefits while mitigating potential risks, and to do so in a transparent and responsible manner. The path to integrating AI into education is complex, but with careful policy development and attention to key factors, it can lead to a more efficient, equitable, and innovative learning environment for all.