After President Joe Biden recently issued his sweeping executive order on AI, much was made of how quickly the order came together. AI technology is accelerating “at dizzying speed,” Biden said before signing the order. “Biden wants to move fast on AI safeguards,” read <a href="https://abcnews.go.com/technology/wireStory/biden-move-fast-ai-safeguards-sign-executive-order-104471329" target="_blank">one headline</a> about the announcement.
As an educator who is on the front lines of AI’s impact in the classroom, nothing about the national or local response to AI has felt rapid.
ChatGPT debuted a year ago this month. Even before that, the power of GPT and similar technologies had become clear. However, over the past 12 months, the response to AI across education has been fragmented and unclear. Overall, district and higher education leaders still seem unsure how to handle this new technology. In the absence of clear institutional guidance, many professors have had to craft their own AI policies on a class-by-class and student-by-student basis.
Not surprisingly, this hasn’t gone particularly well. On the one hand, we have seen overzealous instructors unfairly penalize students; on the other, it is naïve to think that a “do nothing” approach to AI is working.
Biden’s executive order is not designed to address gaps in AI policies in schools, nor do we want elected officials to dictate education policy. However, it draws attention both to the opportunities for AI in schools and to some of the challenges and concerns surrounding it.
This alone is a big step in the right direction. Teachers need specific guidelines on AI teaching best practices and cheating-prevention tools, as well as safety procedures for students interacting with this technology. AI tutors, AI detectors, and many other aspects of the technology need to be rigorously tested and studied in educational settings. Biden’s executive order marks the first small step toward achieving all that.
<h2 id="recommending-watermarks-ai">Recommending AI watermarks</h2>
Biden’s order calls on the Commerce Department to develop guidelines for clearly labeling AI content using an integrated watermark on all AI-generated content, including videos, images, and text. The idea is that these watermarks will make it easier for people to differentiate between AI-created content and human-created work.
On paper, this looks like it will solve many of the problems AI currently causes in classrooms by providing an easy way to identify AI-generated content. In practice, however, it is more complicated.
Biden does not require companies to add watermarks; he only recommends it, so it is quite possible that students will still be able to use AI to generate content without watermarks. More importantly, watermarking technology is far from perfect. As with existing AI detection tools, studies show that watermark detectors can be fooled and can produce large numbers of false positives and negatives.
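To make the false-positive and false-negative problem concrete, here is a toy sketch of how statistical text watermark detection works in principle. This is purely hypothetical code, loosely modeled on published “green list” watermarking research; it is not the Commerce Department’s guidelines or any vendor’s actual implementation, and the word lists, threshold, and function names are all illustrative assumptions.

```python
# Toy sketch of statistical text watermarking (hypothetical, illustrative only).
# Premise: a watermarking generator would favor words from a pseudorandom
# "green list"; a detector then checks whether a text's green-word fraction
# is far above the ~50% expected of ordinary human writing.
import hashlib

def is_green(word: str) -> bool:
    """Pseudorandomly assign each word to the green list (~half of all words)."""
    digest = hashlib.sha256(word.lower().encode("utf-8")).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of a text's words that fall on the green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    """Flag text whose green fraction is well above the 50% baseline.
    The weaknesses the studies describe are visible even in this toy:
    paraphrasing swaps green words for non-green ones (false negatives),
    and short human passages can exceed the threshold by chance
    (false positives)."""
    return green_fraction(text) > threshold
```

Because the statistic is probabilistic, detection confidence shrinks with short passages and with any rewording, which is why even far more sophisticated real detectors remain fallible.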
Still, I think it’s encouraging that the White House is dedicating resources to studying the best methods to effectively identify work generated by humans and machines. This feels vital in education and beyond.
<h2 id="ai-classroom-guidance">Guidance on AI in the classroom</h2>
The executive order requires the Secretary of Education to create resources and policy guidance on AI.
“These resources will address safe, responsible, and non-discriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities,” the executive order says. “They will also include the development of an ‘AI toolkit’ for educational leaders.”
The order also recognizes the potential benefits AI holds for education and calls for the creation of resources to help educators use AI tools, including personalized AI tutors.
I am excited about this aspect of the order and would like to see more national attention and resources devoted to developing and testing AI tutors. In theory, such tutors seem incredibly useful for time-poor educators, but we need to see how well they perform in the real world.
<h2 id="privacy-and-protection">Privacy and protection</h2>
The executive order emphasizes the safety of AI in general and its safe use in schools in particular. It also requires AI developers to share safety test results with the government.
A draft guidance document provided to federal agencies shortly after the order was signed identifies many uses of AI that pose risks to civil rights and student safety. These include tools that detect student cheating, monitor online activity, predict academic outcomes, or make disciplinary recommendations, as well as online or in-person surveillance tools.
In his remarks before signing the executive order, Biden highlighted some of the ways AI could harm students. “In some cases, AI is making life worse,” he said. “For example, by using teens’ personal data to figure out what will keep them glued to their device, AI is making social media more addictive. It is causing what our Surgeon General calls a ‘profound risk of harm’ to their mental health and well-being.”