As Director of Quantitative Research and Data Science, as well as Data Privacy Officer at Digital Promise, my goal is to demystify the complex world of data privacy, particularly in the realm of education and artificial intelligence tools. Having begun my journey as an Institutional Review Board (IRB) committee member during my graduate school years, I have committed to upholding ethical principles in the use of data, such as those outlined in The Belmont Report. Collaborating with researchers to ensure their work aligns with these principles has been a rewarding part of my career. For the past decade, I have grappled with the nuances of anonymous and de-identified data, a challenge shared by many in this field. In a time when student data is being captured and used more prolifically than we know, understanding how privacy is maintained is crucial to protecting our students.
Anonymous versus de-identified
The U.S. Department of Education defines de-identified data as information from which personally identifiable details have been sufficiently removed or obscured so that an individual cannot be re-identified. However, de-identified data may still contain a unique identifier that could potentially be used to re-identify the data.
Similarly, the General Data Protection Regulation (GDPR) characterizes anonymous data as information that does not relate to any identified or identifiable individual, or data that has been anonymized to the extent that the data subject can no longer be identified.
These definitions, although seemingly similar, often lack clarity and consistency in the literature and research. A review of medical publications revealed that less than half of the articles discussing de-identification or anonymization provided clear definitions, and when definitions were provided, they often contradicted each other. Some hold that de-identified data may be considered anonymized if enough potentially identifiable information is removed, as suggested in the HIPAA data de-identification methods. Others, by contrast, maintain that anonymous data is data from which identifiers were never collected in the first place, which implies that de-identified data can never be truly anonymous.
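To make the distinction concrete, here is a minimal, purely illustrative sketch (not a compliant implementation of HIPAA, FERPA, or GDPR rules, and the record fields are invented for the example): de-identification strips direct identifiers but may keep a persistent token that could later re-link the data, while anonymization keeps no identifier at all.

```python
# Illustrative only: shows why a de-identified record is not
# necessarily anonymous. Field names are hypothetical.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email"}

def deidentify(rec):
    """Remove direct identifiers but keep a stable pseudonymous token.
    Because the token is consistent across datasets, re-identification
    is still possible if the token can be linked back to the student."""
    out = {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}
    out["student_token"] = hashlib.sha256(rec["email"].encode()).hexdigest()[:12]
    return out

def anonymize(rec):
    """Remove direct identifiers and retain no identifier of any kind,
    so the record can no longer be tied to an individual."""
    return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jordan Smith",
    "email": "jsmith@example.edu",
    "grade_level": 7,
    "quiz_score": 88,
}

print(deidentify(record))  # still carries a re-linkable student_token
print(anonymize(record))   # only grade_level and quiz_score remain
```

Note that even the "anonymized" record here could be re-identifiable in practice if the remaining fields are distinctive enough, which is exactly the nuance the conflicting definitions above are wrestling with.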
Simplifying data privacy: Three key strategies for educators
As AI tools become prolific in classrooms, it’s easy to feel overwhelmed by the nuances of these terms. Additionally, our news feeds are flooded with conversations about student privacy: parents are concerned about data privacy, teachers reportedly don’t know enough about student privacy, and most school districts still lack data privacy personnel.
In a time when the difference between anonymity and de-identification could be very important, what should educators do about the data collected by the artificial intelligence tools they might use? I offer three oversimplified strategies.
1. Ask questions.
In 2020, Visual Capitalist published a visualization of the fine-print length of 14 popular apps and reported that the average American would need to spend nearly 250 hours reading all the digital contracts they agree to while using online services.
If you don’t want to spend hours researching whether the company collects and uses anonymous or de-identified data and how it defines it, you can always ask. Some examples of these questions include:
- What data will you collect?
- Can that data be connected to the students themselves?
- How will the data be used?
- Can a student or parent/guardian request that their data be deleted (If you live in California, the answer is usually Yes!), and how would they go about doing it?
2. Give students options.
The Belmont Report states that to uphold the principle of Respect for Persons, people should be given the opportunity to choose what will and will not happen to them and, by extension, their data. Providing students with the opportunity to choose whether they want to use an AI tool that makes use of their data, whenever possible, supports this important ethical standard and gives students autonomy as they navigate this technology-rich world.
3. Allow parents to give consent.
A deeper look at the principle of Respect for Persons shows that people with diminished autonomy have a right to protection. The Common Rule, the set of federal regulations outlining the processes for ethical research in the United States, states that children, as people who have not yet reached the legal age of consent, are one of many groups entitled to this protection. In practical application, this means that parental or guardian permission is needed for participation, in addition to the child’s assent.
To the greatest extent possible, parents should also be given the opportunity to understand and agree to the collection and use of their child’s data.
Let’s explore the nuances together
As someone who has been thinking about how to better protect student data since before you could wear your iPhone on your wrist, I regularly rely on these three strategies to better uphold the ethical principles that have guided my career. I ask questions when I don’t understand, I strive to give people autonomy over their choices and their data, and I seek consent when additional protection is needed. While these three practices won’t allay every fear one may have about using AI in classrooms, they will allow you to gather the information you need to make better decisions for your students, and I’m confident we can address the nuances together.