A publication of the Association of California School Administrators
The deepfake dilemma
Navigating the risks and responsibilities in K-12 education
The rapid advancement and proliferation of generative artificial intelligence tools, particularly over the last year, have introduced significant opportunities and challenges for K-12 education. One of the most pressing concerns has been the emergence of “deepfakes” — hyper-realistic video or audio clips that can depict individuals in a way that appears incredibly real, even if the individual never actually engaged in the depicted activity. For school administrators and educators, understanding the risks associated with deepfakes is critical to maintaining a safe and supportive environment for students and staff.
What are deepfakes and why do they matter?
Deepfakes leverage AI technology to synthesize artificial video or audio of an individual that depicts them as saying or doing things that they never actually said or did. Unfortunately, this technology is quite accessible to anyone who wishes to create a deepfake. While the underlying technology can be used for benign purposes, it has alarming potential for misuse, including the spread of misinformation, reputational harm and cyberbullying. For school administrators, this creates new challenges that require immediate attention and strategic planning.
Impact of deepfakes on educators and students
Recent incidents have highlighted the damaging impact that deepfakes can have on school communities. In January, the principal of a Baltimore-area high school was the subject of a deepfake audio clip that falsely attributed racist and anti-Semitic remarks to him. The clip immediately went viral, causing significant reputational damage to the principal and to the school district, which placed him on administrative leave pending an investigation. After several months, it was determined that the audio clip was artificially generated, exonerating the principal and leading to the arrest of a teacher who allegedly created and circulated the clip in retaliation against him. By then, the damage to the principal’s reputation had already been done.
School districts risk reputational harm to their employees who may become victims of similar deepfakes. Due to their positions of authority, administrators and teachers are particularly vulnerable. If school districts are not aware of this technology and how to immediately respond to it, members of the school community may be inclined to believe the evidence without question. In addition, disciplining employees in response to deepfakes without a full evaluation of whether they are legitimate depictions of the individuals in question may expose school districts to liability for defamation, among other things. Ultimately, school districts are better served by becoming educated about deepfakes and coming up with a plan for how to swiftly address them should their employees become future victims.
Deepfakes have also been used to target students, particularly female students. In one incident from last October, male students at a New Jersey high school created pornographic deepfakes of their female classmates, leading to widespread outrage and demands for accountability from parents and from the school community. Although the school district immediately opened an investigation, parents admonished the district for its perceived silence and alleged that it had not done enough to publicly address the deepfakes or update school policies to combat improper uses of generative AI.
To be sure, schools are required to report the distribution of pornographic AI-generated photos and videos of minors to law enforcement. During the fall 2023 semester, a student at a Seattle-area high school created and circulated deepfake images of his female classmates. The high school failed to report the incident to law enforcement. In a later statement, the district noted that its legal team had advised that it was not required to report “fake” images to the police, but acknowledged that if a similar situation arose in the future, it would do so. In general, if the generation or distribution of exploitative images occurs on school grounds or could have been prevented by policy safeguards, it is possible that courts may find reason to hold districts civilly responsible.
Discipline for misuse of deepfake technology
The ability to create explicit deepfakes has opened new avenues for cyberbullying and harassment. Schools must take proactive measures to educate students about the dangers of generative AI and its potential for misuse. A recent report by the Center for Democracy and Technology revealed that a significant percentage of students feel ill-equipped to identify AI-generated content, highlighting the need for educational programs on responsible AI use. Additionally, schools should ensure that their policies clearly outline the consequences for creating and distributing harmful deepfakes, particularly those involving minors.
Legal and ethical dimensions
As schools adapt to AI’s proliferation, there is a question about the extent to which schools have authority to discipline students for its misuse. In Tinker v. Des Moines School Dist., 393 U.S. 503 (1969), the Supreme Court held that although students do not shed their constitutional rights to freedom of speech while on school grounds, school districts may place certain restrictions on these rights. These limitations historically could not be enforced once students were off campus. However, as the internet proliferated, “off-campus” speech increasingly affected schools, forcing courts to determine how far a school’s authority to restrict student speech extends. For instance, a school may be allowed to punish a student’s cyberbullying, even though this “speech” took place entirely within his or her own home and on a private device, if it can demonstrate the speech substantially or materially interfered with school operations. (Kowalski v. Berkeley Cty. Sch., 652 F.3d 565 (4th Cir. 2011).)
Social media connects students to each other off campus in ways that Tinker, decided more than 50 years ago, could not have foreseen, testing the limits of school districts’ authority to discipline students for off-campus activities. In Mahanoy Area School District v. B.L. (2021), the Supreme Court held that a Pennsylvania high school violated a student’s First Amendment rights by punishing her for a profanity-laden Snapchat post she made off campus. While the Court declined to hold that the special characteristics giving schools additional license to regulate student speech always disappear when that speech takes place off campus, it observed that a student’s off-campus speech will generally be the parents’ responsibility, and that if schools could regulate such speech, that authority would cover essentially everything a student says outside of school.
Student misuse of AI-generated content likely raises similar First Amendment concerns. A school seeking to punish a student for off-campus misuse of AI would need to show that the misuse substantially impacted the school. (See Tinker, 393 U.S. 503 at 509 [holding that to justify suppressing student speech that is otherwise covered by the First Amendment, school officials must demonstrate that the speech materially and substantially interferes with the operation of the school].) In instances such as the above-referenced events in Baltimore, deepfakes of school administrators can clearly cause a substantial disruption to students, parents and the school district. However, not all instances of deepfake misuse will be so clear cut. A court may not find that a school has authority to punish AI misuse that occurred predominantly off campus and was not directed at the school. As many push for schools to develop comprehensive AI discipline policies, the reach of those policies could be found unconstitutional.
Over the last two years, federal and state legislators have introduced legislation that criminalizes the creation and distribution of deepfakes. On Sept. 19, 2024, Gov. Gavin Newsom signed two bills into law, effective Jan. 1, 2025, which expressly criminalize the creation, possession and distribution of sexually explicit images of children, even if they are digitally altered or AI-generated. Similar legislation has also been introduced at the federal level.
Moving forward: Strategies for school administrators
To effectively confront the challenges posed by deepfakes, school administrators should consider the following strategies:
- Education and training: Implement training programs for students and staff regarding the responsible use of generative AI tools. Providing guidance on the ethical implications of AI-generated content can empower individuals to act responsibly.
- Policy review: Review existing disciplinary policies to incorporate clear guidelines regarding the misuse of deepfake technology, and ensure that these policies address the potential legal ramifications of deepfake misuse and outline procedures for reporting incidents.
- Communication with stakeholders: Maintain open lines of communication with parents, students and faculty regarding the risks associated with deepfakes, and regularly update the school community on any incidents and the steps being taken to address them.
- Collaboration with law enforcement: Establish relationships with local law enforcement to ensure proper reporting and response to incidents involving deepfakes.
- Promoting a safe environment: Foster an environment where students feel comfortable reporting incidents of cyberbullying and harassment.
Alex Lozada is senior counsel for Atkinson, Andelson, Loya, Ruud & Romo at the law firm’s Fresno/Pleasanton offices.