A publication of the Association of California School Administrators
Ethics and AI
Concerns about AI use in K-12 education go beyond just ‘cheating’
Integrating artificial intelligence in K-12 education presents ethical considerations beyond cheating. As AI awareness expands in our schools, K-12 administrators are called to navigate the complexities of its use and to understand its broader ethical implications. This article explores those considerations, extending the dialogue beyond this year’s most visible educator concern, cheating, to include data privacy, bias, equity and cyber safety: issues that are pivotal to address when using AI for educational purposes. It is also important to remember that AI has the potential to revolutionize education, providing personalized learning experiences and enhancing student engagement.
Opening up the ethical dialogue
During the 2023-24 school year, discussions surrounding AI primarily focused on issues of academic integrity and potential cheating within educational contexts. However, as the deployment of AI technologies broadens across sectors, there is an increasing call for ethical dialogue that extends beyond the notion of students cheating. According to Project Tomorrow (2024), as of June 2024, 47 percent of high school students nationwide already use AI, compared to 7 percent of educators. This statistic highlights the pivotal role of educators in leading conversations about the responsible use of AI, addressing key concerns such as data privacy, social impacts, algorithmic transparency, equitable access and the prevention of bias. These issues are foundational to upholding fairness and equity wherever AI is used. And with such a high percentage of students already using AI, educators are in a prime position to build awareness and start having these crucial conversations with students.
For a broader, global perspective beyond education, consider that major tech companies like Meta and Amazon have faced significant scrutiny over the ethical implications of their AI systems. According to an article in The New York Times (2024), these tech giants have been harvesting vast amounts of personal data to train their AI models, raising concerns about privacy and consent. The companies were scraping publicly available information from websites, social media platforms and even personal blogs without explicit permission from users. This practice has sparked debate about the ethical boundaries of data collection and usage in AI development and education, and it underscores why we need to be critical users of these technologies.
Data privacy and protection
AI systems typically operate on large amounts of data, including sensitive student information. Administrators must ensure that these technologies adhere to stringent data protection standards, such as those outlined by the Family Educational Rights and Privacy Act (FERPA) and Children’s Online Privacy Protection Rule (COPPA) in the United States. Guidelines should be clear, transparent and rigorously implemented to protect student data from misuse or unauthorized access.
A thorough and systematic vetting process is essential when evaluating and selecting student-facing AI tools and platforms for adoption within school districts. A well-structured approach ensures that the technology enhances educational outcomes, aligns with district guidelines and meets high privacy and safety standards.
Leveraging resources like Common Sense Media, especially its section on AI tool product reviews (Common Sense Media, 2023), is a reliable starting point. Common Sense Media provides in-depth reviews and ratings of various digital products, highlighting aspects crucial for educational settings, such as educational value, ease of use, safety and privacy. These reviews offer a preliminary assessment that can help narrow the choices based on expert and user insights.
Furthermore, to enhance these protections, educational administrators can also look to promising practices recommended by organizations like the Student Data Privacy Consortium, an initiative led by California IT in Education (CITE). The SDPC advocates for responsible stewardship of student data, offering resources and tools to help schools implement effective data privacy agreements and practices.
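To make this vetting work concrete, here is a minimal sketch of how a district review team might record and weight criteria in code. The criteria, weights and scores below are illustrative assumptions, not an official rubric from Common Sense Media or the SDPC; a real process would align them with local policy and FERPA/COPPA requirements.

```python
# Hypothetical vetting rubric for student-facing AI tools.
# Criteria and weights are illustrative only; adapt to district policy.
from dataclasses import dataclass, field

CRITERIA = {
    "educational_value": 0.30,
    "data_privacy": 0.30,  # FERPA/COPPA alignment, data-sharing terms
    "safety": 0.20,        # content filtering, age-appropriateness
    "ease_of_use": 0.20,
}

@dataclass
class AIToolReview:
    name: str
    scores: dict = field(default_factory=dict)  # criterion -> rating, 1-5

def weighted_score(review: AIToolReview) -> float:
    """Weighted 1-5 score; an unscored criterion counts as the minimum."""
    return sum(weight * review.scores.get(criterion, 1)
               for criterion, weight in CRITERIA.items())

review = AIToolReview("Example Tutor App", {
    "educational_value": 4, "data_privacy": 5, "safety": 4, "ease_of_use": 3,
})
print(f"{review.name}: {weighted_score(review):.2f} / 5")  # prints 4.10 / 5
```

A simple score like this never replaces reading a tool’s privacy policy or executing a signed data privacy agreement; it just makes the team’s comparisons consistent and auditable.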
Building AI literacy skills
AI’s role in K-12 education is diverse, encompassing administrative automation, personalized learning and data analytics for enhancing school operations. Last year, San Bernardino County Superintendent of Schools hosted a series of AI readiness learning opportunities led by Designing Schools founder Sabba Quidwai and SBCSS’s Technology Services team. The three-part engagement introduced over 300 educational leaders in San Bernardino County to foundational AI literacy concepts using the SPARK framework, hands-on AI activities, and policy and practice collaboration sessions. Following the SPARK series, SBCSS’s Digital Learning team has led learning opportunities across the county’s 33 school districts for educational partners to collaborate on leveraging generative AI for productivity and efficiency. A notable example came when SBCSS’s digital learning services partnered with the county’s Special Education Local Plan Area administrators and Educational Support Services administrators to explore ways to use AI to personalize learning for students with disabilities and multilingual learners. Many of the resulting ideas took hold, such as prompt engineering techniques that rework existing lesson plans to make them more inclusive. Here is an example of a lesson plan prompt using the PREP framework (Fitzpatrick et al., 2023):
Prompt: Design a detailed lesson plan where students create a digital story that illustrates a personal experience. Use the 5E lesson design experience to make your lesson. Students will later use this digital story to write a personal narrative.
Role: I am a fifth-grade teacher working on a personal narrative unit.
Explicit information: Include scaffolds for English language learners who speak Spanish as their first language. Also include scaffolds for the students in my class who have neurodivergent behaviors, including students with autism and ADHD. Design this lesson in a way that invites an asset-based thinking approach. Include a summative assessment with a single-point rubric for this project.
Parameters: The lesson should last over two hours. The story should include multimedia elements such as text, images, audio, and video, and be accessible to all learners.
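For educators or coaches who want to reuse PREP-style prompts at scale, the following is a minimal sketch of assembling the four components and sending them to a large language model. It assumes the openai Python package (v1 SDK) and an OPENAI_API_KEY environment variable; the model name is an illustrative placeholder, not a district endorsement.

```python
# Minimal sketch: assemble a PREP-framework prompt and request a lesson plan.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

def build_prep_prompt(prompt: str, role: str, explicit: str, parameters: str) -> str:
    """Combine the four PREP components into one structured request."""
    return (f"Prompt: {prompt}\n\n"
            f"Role: {role}\n\n"
            f"Explicit information: {explicit}\n\n"
            f"Parameters: {parameters}")

prep = build_prep_prompt(
    prompt=("Design a detailed lesson plan where students create a digital "
            "story about a personal experience, using the 5E lesson design."),
    role="I am a fifth-grade teacher working on a personal narrative unit.",
    explicit=("Include scaffolds for English language learners whose first "
              "language is Spanish and for neurodivergent students, with an "
              "asset-based approach and a single-point rubric."),
    parameters=("The lesson should last over two hours and include text, "
                "images, audio and video, accessible to all learners."),
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prep}],
)
print(response.choices[0].message.content)
```

As the workshops stressed, any generated lesson plan still needs human review for accuracy, inclusivity and fit before it reaches students.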
This type of scaffolded experience was warmly welcomed by educators, primarily because it provided an in-depth lesson plan precisely personalized to meet the needs of all students. It kick-started their use of generative AI. Workshops throughout San Bernardino County included prompt engineering and design aspects, with prompts carefully crafted to yield strong outputs. These workshops encouraged educators to utilize the Universal Design for Learning framework (CAST) and incorporate culturally sustaining pedagogies (Paris & Alim, 2017) as they reworked or created new lesson plans using generative AI tools.
While we continue to find ways to leverage AI tools positively, we, as experts in our fields, must also continually critique their inaccuracies. Human analysis and evaluation of what goes into and comes out of AI tools is critical, because AI results do not always accurately represent what is needed. Digital Promise released an AI literacy framework for understanding, evaluating and using emerging technology (Digital Promise, 2024). The framework calls on educators to develop a robust understanding of AI literacy: grasping how AI technologies function and evaluating their impact on educational practices and student well-being. It also stresses the critical need for inclusive AI education for all educational partners, including students, educators and community members, fostering the broad-based understanding and capacity essential to using AI responsibly and effectively. The framework’s goal is to empower everyone within the educational ecosystem to participate actively in discussions and decisions about how AI is implemented and used, thereby promoting more informed and responsible use of these technologies in schools.
Integrating lessons on AI ethics, bias detection and fact-checking into the curriculum encourages students to question AI-generated content and develop responsible AI usage skills (CDE, 2023). Various AI literacy tools and initiatives can be employed to support this effort. For example, the Day of AI program created by the RAISE Initiative at MIT provides hands-on learning experiences for students, helping them understand the principles of AI and its applications in real-world scenarios. These tools are designed to be accessible and engaging, making it easier for students to grasp complex AI concepts and see their relevance to everyday life. By integrating such programs, schools can ensure that all students have the opportunity to develop essential AI literacy skills, preparing them for a future where AI plays a significant role. This collective understanding will empower the school community to engage with AI tools thoughtfully.
Algorithmic bias and fairness
Algorithmic bias is a significant concern in AI applications. If not adequately addressed, AI can perpetuate existing social inequalities (Buolamwini, 2023). For instance, if AI-driven educational tools are trained on biased historical data, they might lower expectations or offer less challenging material to specific demographic groups, reinforcing stereotypes and hindering academic progress. To counteract this, administrators should insist on transparency from AI vendors and prioritize tools that have been audited for bias and demonstrate fairness in their outputs. In addition, as we examine data generated by adaptive technologies, we should encourage educators to take an asset-based approach to meeting students’ identified learning needs, using ideas generated by AI tools like ChatGPT or Microsoft Copilot. All of this points to the need for robust professional learning: educators must understand AI technologies, recognize embedded biases and grasp their impact on student outcomes.
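To illustrate what “audited for bias” can mean in practice, here is a hedged sketch that compares an AI tool’s recommendation rates across student groups and flags large gaps using the four-fifths rule of thumb borrowed from employment law. The data, group labels and threshold are hypothetical; a real audit would use the vendor’s actual logs and the district’s own fairness criteria.

```python
# Illustrative bias audit: compare an AI tool's recommendation rates
# across student groups with a simple demographic-parity check.
# All data below is fabricated for demonstration only.
from collections import defaultdict

# Hypothetical log entries: (student_group, was_recommended_advanced_material)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
for group, recommended in outcomes:
    counts[group][0] += int(recommended)
    counts[group][1] += 1

rates = {group: rec / total for group, (rec, total) in counts.items()}
highest = max(rates.values())

# Rule of thumb: flag any group whose rate falls below 80 percent of the
# highest group's rate (the "four-fifths rule").
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Potential disparity for {group}: {rate:.0%} vs {highest:.0%}")
```

Even a simple check like this gives administrators a concrete question to put to vendors: what are your tool’s outcome rates across student groups, and how do you monitor them?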
The Seasons of CS program, part of the California Department of Education’s Educator Workforce Investment grant facilitated by SBCSS, the Sacramento County Office of Education and the University of California, Los Angeles, offered a robust professional development opportunity during CSPDWeek, July 22-26, 2024. The event provided almost 300 educators in grades 6-12 with computer science workshops that built AI literacy and emphasized equity-minded learning. For example, the Everyday AI program by MIT and the Equity Minded Instruction in Computer Science workshop by SBCSS both provided resources and fostered discussions on technological biases, drawing on the work of researchers like Ruha Benjamin, Joy Buolamwini and Safiya Noble.
Equitable access to AI technologies
Equity in access to AI tools is a critical area that school leaders must address. All students should have equitable opportunities to benefit from AI technologies regardless of socioeconomic status, race or geographic location. This involves providing necessary infrastructural support, such as high-speed internet and modern devices, and ensuring that AI-enhanced educational tools are inclusive and accessible to all students.
Furthermore, as highlighted by Copur-Gencturk and Noguera (2024) in an EdSource article, the integration of AI in education should be approached cautiously. They argue that while AI tools can provide many benefits, such as helping teachers develop lesson plans and offering individualized support to students, more explicit guidance on their use is needed. This gap could lead to new challenges beyond concerns about cheating, plagiarism and data privacy, potentially affecting long-term learning outcomes and exacerbating existing inequities in education (EdSource, 2024).
AI’s implications for digital citizenship and cyber safety
Like any readily available and rapidly improving technology, AI tools pose a real risk of enabling and accelerating severe abuse in the form of harassment and cyberbullying. Schools are responsible for modifying their digital citizenship and anti-bullying efforts to instill an ethical framework that promotes responsible use and discourages harmful and potentially illegal uses of AI. Students must be educated on the ethical and legal obligations of using AI technology, understanding that it should not be a tool for bullying, harassment or intimidation.
Unfortunately, there have been several instances where students have abused AI to target their classmates. In 2022, a case in New Jersey involved students using deepfake technology to create malicious content, altering images of a fellow student to embarrass and defame them online (Tenbarge, 2024). Similarly, in 2021, students in California exploited AI chatbots to send abusive messages, impersonating others to spread false information and harassing individuals (Ryan-Mosley, 2023). Another incident occurred in Texas in 2023, when students used AI-generated voice technology to mimic a teacher’s voice and leave threatening voicemails to classmates (Ryan-Mosley, 2023). These incidents highlight the urgent need for comprehensive education on the ethical use of AI, ensuring students understand the potential impacts of their actions and the importance of maintaining a respectful and safe digital environment.
Essential elements of digital citizenship and AI cyber safety lessons include:
- Digital citizenship.
- Cyber safety and security.
- Real-life examples and case studies.
- Empathy and impact.
- Legal and ethical responsibilities.
- Collaboration and communication.
- Continuous learning and adaptation.
These components ensure a comprehensive, scalable curriculum that educates students on the responsible use of AI and the importance of ethical behavior in the digital world. They also foster critical thinking, empathy and a proactive attitude toward cyber safety and digital citizenship.
Exemplar case study: Implementing AI ethically in school districts
Consider a school district introducing an AI system to help personalize learning for students with disabilities. The system analyzes individual student performance and suggests customized learning interventions. To implement this system ethically, the district took several steps:
- Educational partner engagement: The district involved teachers, parents and students in selecting and implementing the AI system.
- Training and professional development: Educators were trained not only on how to use the AI system but also on understanding its ethical implications.
- Continuous monitoring and evaluation: The district established a committee to regularly review the AI system’s effectiveness and adherence to ethical standards, adjusting as needed.
Beyond this case study, school leaders can take broader steps to implement AI ethically:
- Develop AI guidelines: Schools should craft clear guidelines that address the use of AI in education, focusing on ethical considerations such as privacy, bias and transparency.
- Enhance professional development: Offer ongoing training for educators on AI’s technical and ethical aspects, enhancing the AI literacy dialogue.
- Engage the community: Regularly involve parents and the broader community in discussions about AI, providing clear information on how AI technologies are being used in the educational context.
- Monitor and adapt: Continuously assess the impact of AI implementations and be ready to make changes to address any emerging ethical concerns or unintended consequences.