Key points:
Teachers are still struggling with the use of generative AI, including establishing policies and training around this technology, according to a new research survey from the Center for Democracy and Technology (CDT).
The survey reveals that educators have yet to resolve many questions about students' responsible and safe use of AI on their own, leading them to rely heavily on ineffective generative AI content detection tools, which in turn contributes to increased disciplinary action against students and cultivates persistent distrust of them.
New CDT survey research reveals that:
Fifty-nine percent of teachers report that they are sure one or more of their students have used generative AI for school purposes, and 83 percent of teachers say they themselves have used ChatGPT or another generative AI tool for personal or school use.
Although 80 percent of teachers report receiving formal training on policies and procedures for using generative AI, and 72 percent say their school has asked them for information on student-facing policies and procedures, only 28 percent say they have received guidance on how to respond if they suspect a student has used generative AI in unpermitted ways, such as plagiarism.
Sixty-eight percent of teachers report regularly using an AI content detection tool, despite known effectiveness issues that disproportionately affect students who are protected by civil rights laws.
Sixty-four percent of teachers say students at their school have gotten in trouble or experienced negative consequences for using, or being accused of using, generative AI on a school assignment, an increase of 16 percentage points from the last school year; and,
Fifty-two percent of teachers agree that generative AI has made them more distrustful of whether their students' work is really theirs, and this number is even higher in schools that ban the technology.
“In our research last year, we saw schools struggling to adopt policies related to the use of generative AI, and we are encouraged to see great strides since then,” said CDT President and CEO Alexandra Reeve Givens. “But the biggest risks of using this technology in schools are not being addressed, due to gaps in training and guidance for educators on the responsible use of generative AI and related detection tools. As a result, teachers continue to distrust students, and more and more students are getting into trouble.”
“Generative AI tools in the education sector are not going away, and schools are responsible for providing detailed guidance to educators not only on the benefits of the technology but, more importantly, on its shortcomings and how to mitigate them effectively and responsibly,” Givens added.
“Since generative AI took the education sector by storm last year, there have been many think pieces about whether its use is good or bad. What we know for sure is that generative AI and the tools used to detect it require schools to provide better training and guidance to educators,” said Elizabeth Laird, director of CDT's Equity in Civic Technology Project.
Laird added: “Educational leaders should not limit themselves to addressing generative AI, as this is just one example of AI use in schools at this time. We hope that the US Department of Education will follow through on its commitment to publish guidance on safe, responsible, and non-discriminatory uses of AI in education, as detailed in the Biden administration's executive order. In the meantime, schools must continue to prepare teachers to make everyday decisions that support the safe and responsible use of all AI-powered tools.”
Protected classes of students remain at disproportionate risk of disciplinary action due to the use of generative AI. One of those groups is students with disabilities: 76 percent of licensed special education teachers regularly use an AI content detection tool, compared to 62 percent of teachers without a special education license. As previously reported by CDT, students with an IEP and/or a 504 plan use generative AI more than their peers, potentially placing them at greater risk of negative consequences due to the use of generative AI.
This press release originally appeared online.