Supervising AI in Education
by Taylor Weems
Artificial intelligence (AI) increasingly shapes academic performance among the youth. Without guidance on navigating AI, children tend to inadvertently jeopardize their privacy and cognitive development. Students are prone to misusing AI: abandoning originality, copying information, and even spreading misinformation through deepfake pornography. In turn, the relationship between peers and teachers splinters over comfort and safety in expressing oneself in the classroom, breeding doubt in both parties. AI is constantly advancing, but it should not be the only system in progress. The education system must also be willing to understand this technology, both to relate to its students and to protect their development. Therefore, schools must implement AI learning in their curricula, particularly across modern North America. A collection of surveys and studies suggests several methods for explaining AI to children with distinct learning styles: oral, kinesthetic, reading/writing, and visual approaches. Tailoring how students process the consequences of AI can help them use the tool effectively, while educators can focus on turning AI to their students' advantage. Overall, including AI in schools helps the youth maintain their authenticity and mental capacity amidst competitive technology in the outside world.
Keywords: AI in education, exploitative deepfakes, AI lowers creativity, technology and brain functioning, teaching AI
Introduction
In an ever-changing world, it is not only people who are evolving but artificial intelligence (AI) as well. Yet while the dynamic between mankind and manbot unfolds, the difficulties AI poses continue to receive too little attention in North America. According to Pew Research, 97% of Americans own a smartphone, a common device that relies on artificial intelligence (“Mobile Fact Sheet”). As exposure to AI grows across all demographics, the group that raises the most concern is the youth. Unfamiliar with AI, adolescents relinquish their privacy and cognitive abilities to an ever-present device. A survey from the nonprofit research organization Common Sense Media reported that 70% of high schoolers in the U.S. use generative AI tools (Knibbs). This power comes with responsibility, however, which young people may overlook amidst the allure of social media and the pressure to conform to societal trends. Middle school is when many children receive their first smartphones, making them major creators and consumers of AI on social media (Ali et al. 1). With their mental skills not yet fully developed, these young users are vulnerable to AI's impact on their sociability, empathy, and morals. As teachers integrate artificial intelligence into education to personalize learning styles and expressive assignments, there is worry about how to use this tool most effectively. Because no clear standard declares the rules for AI, students and teachers remain unaware of its risks to wellbeing (Knibbs). Even so, the negative consequences of this sophisticated technology are preventable if it is used under supervision and explained in a classroom.
Literature Review
Teaching AI in the classroom enables teachers to engage with students across age groups while deepening their own understanding of AI in an environment susceptible to its abuse. Yet comprehending why students welcome AI without suspicion is essential for practical AI supervision. Among the 70% of teens in the Common Sense Media survey, older teens used it mostly to help with schoolwork (Knibbs). In a setting that should encourage questions and answers, the youth remain confused in school, leading them to seek a new source of help: AI.
Many teachers are unaware that students resort to AI, which can open a communication gap between the two groups (Knibbs). Such a widening rift raises concerns in the academic community because it erodes the relationship between learners and educators. Adolescents, for instance, may feel uncomfortable asking for assistance or take no pride in work produced with insufficient support from educators. This lack of communication creates an issue of trust, which is associated with low self-reliance (Khalil and Er 3). AI can appear as a safety net for children's raw thinking, whereas a classroom can pressure them to reshape their ideas to meet teachers' expectations. In response, students find comfort in distancing themselves from their classes and expressing their genuine thoughts through a computer system.
Adolescents also turn to chatbots to explain topics that fall outside the school curriculum but are nonetheless part of growing up. Another survey of teenagers found that AI gives them advice on any subject they ask, appropriate or not (Nagelhout). Teens encounter puberty, sex, trauma, and controversy during their schooling years but may find their inquiries restricted by the board of education. Because AI lacks a mind of its own, it voices no personal judgment or boundaries, which can make it seem more approachable than an adult.
Another reason teens turn to AI is boredom or the need to brainstorm (Knibbs). Reliance on AI can grow from frequent use, even when imagination is precisely what teachers want from students. Though AI offers resourceful input, simply reiterating it diminishes a person's individuality. Mental capabilities such as intuitive analysis, critical thinking, and creative problem-solving distinguish humans from AI (Ahmad et al. 4). Adolescents must learn to nurture these faculties as they mature, before AI interferes with their cognition, and teachers can help demonstrate this value.
The decline in ingenuity among young people also corresponds to the increasing prevalence of plagiarism. By repeatedly falling back on AI, children edge toward self-handicapping behaviors (SHD), strategies individuals adopt to avoid productive effort (Kangas-Olson 17). When students settle into these habits, the shift in their morals is mirrored in the education system.
A prominent issue between students and technology, for example, is the rise of cyberbullying. Beyond using AI to complete assignments, the intentions behind the tool become more troubling as some students use its capabilities to create deepfakes. Deepfakes are produced by digital tools that alter a person's image or voice so that someone appears to be another person spreading false information (Hancock and Bailenson 2). A school should be a respectable institution, a safe place, and a foundation of learning, yet students are learning how to victimize others. Consider the case of 15-year-old Francesca Mani, one of more than 30 girls at Westfield High School in New Jersey whose faces were attached to pornographic deepfakes (Ryan-Mosley). There is no age limit for becoming prey among peers who are predators: if it can happen to a child in high school, it can happen to a child in elementary school.
Hence, one study sought to teach students in grades 5-9 what to consider when using social media, such as knowing that uploading personal data could make them a target of fake media (Ali et al. 3). The education system bears a major challenge in helping students use AI safely and take precautions against its problems. If students are to succeed, teachers must become skillful with AI rather than banning the technology from classrooms.
Analysis
Nonetheless, plagiarism was a concern in the classroom even before the advent of AI and remains a persistent issue today. Plagiarism is presenting someone else's work as one's own without crediting it (Khalil and Er 4). Even at the risk of failing grades for dishonesty, students are willing to gamble their reputations on AI. ChatGPT, moreover, is beginning to master natural language processing (Khalil and Er 2). AI-generated texts are becoming human-like, potentially damaging students' standing outside of school. If they continue passing off others' ideas as their own, they could further lose ingenuity in the real world. In one study, forty out of fifty AI-produced essays registered high originality scores when inspected with the plagiarism detection tool Turnitin (Khalil and Er 10). The more AI learns to mimic human writing without detection, the more covertly students can apply it.
Likewise, a national survey found that even though teens maintain AI can be used for fun, artificial intelligence disrupts students' own creativity (Nagelhout). Teachers may assign more artistic work to keep children from merely echoing facts, but many fall short. Despite initiatives to prevent students from falling back on AI, such efforts can drive the youth to find stealthier ways to employ it (Khalil and Er 5). Detecting plagiarism can be handled by AI algorithms, whereas assessing character requires teachers to understand and value each student.
A rift between teachers and students undermines the confidence and responsibilities shared by both parties. Instructors struggle to show interest in young aspirations, while the youth find it difficult to participate in classes that do not inspire their passions (Khalil and Er 3). This dynamic continues to reshape the classroom, deepening students' withdrawal from authorities and peers in education.
Young people should be brimming with passion, since the rules and professionalism of the world outside school do not yet limit their imagination. Nevertheless, relying on AI in childhood can condition them to censor themselves to meet others' expectations.
Depending on AI to generate that flair on projects diminishes children's thinking ability, bringing them closer to an artificial mind (Ahmad et al. 4). AI undoubtedly relieves the effort of brainstorming projects, but it also drives children to become impatient and lazy (Ahmad et al. 4). Adolescents who use AI for assignments requiring mental effort are avoiding challenges meant to inspire critical thinking. Reliance on AI obstructs reasoning and can shield children from experiencing the benefits of productive stress (Ahmad et al. 5). Idle children often struggle with self-actualization and develop procrastination habits, a form of self-handicapping that can also lead to plagiarism. This behavior may cause them to lose their natural writing style and depend on AI for tasks (Kangas-Olson 17).
AI can mimic natural language not only grammatically but orally, too. What begins as students copying and pasting text evolves into copying and splicing images into videos. From top computer science laboratories to cheap software, deepfakes deliver convincing misinformation through verbal deception (Hancock and Bailenson 1-2). Being the victim of a deepfake can defame a person's entire character and humiliate their self-esteem, even more so when the material is pornographic. Yet 15-year-old Francesca Mani kept her voice, calling for state and federal legislation imposing civil and criminal penalties on deepfake pornography (Ryan-Mosley). Even with every reason to retreat from the world, Mani did not succumb to despair. The boys at her school exploited her photos without permission, yet her determination to reclaim her true self serves as a voice for many. In an interview, Mani agrees: “People should realize that when they start posting stuff on Instagram or any type of social media that it can happen to you” (Ryan-Mosley). AI, then, is deeply woven into children's lifestyles, making it essential to guide effective usage.
In addition, artificial intelligence extends beyond teens and is becoming prevalent among preschool children. The American Academy of Pediatrics recommends that parents limit screen time to an hour for children aged 2 years or younger because the brain at that age is highly malleable (Small et al. 2). If toddlers spend less time interacting with their peers, their interpersonal skills can be impeded just as they are beginning to practice them. It is essential for parents and teachers alike to be aware of how simple activities such as watching videos can affect children's cognitive growth. Increased screen time leads to poorer language use and executive functioning, especially in very young children (Small et al. 3). Since early exposure to AI can instill habits that are harder to change later, regulating it is vital. One study found that 8- to 12-year-olds with more screen time showed decreased brain connectivity between the regions controlling word recognition and cognitive control (Small et al. 4).
In the end, AI itself is not bad. Whether it assists with homework or inspires creativity, it is how one uses AI that requires limitations. Problems arise when its uses lead to plagiarism, diminished cognitive function, or the circulation of misinformation. The challenge of supervising AI, then, lies in teaching the technology in a classroom setting.
Discussion
Connecting with the youth about AI is not a one-step process; it means respecting how audiences comprehend concepts differently. A group of researchers examined how far analogies and context could go in teaching AI to grades 5-9 (Ali et al. 2). When presenting such a sophisticated phenomenon, it is important to incorporate varied strategies, such as oral, visual, reading, or hands-on activities, so that all youths register the information. Middle schoolers, for example, can grasp abstractions, so using video games to explain the material can make abstract ideas concrete (Ali et al. 2). Further, the Board of Education could approve software that trains students to distinguish deepfakes from original photos. A favorable starting point for comprehending the impact of deepfakes is immersive virtual reality, where one can build three-dimensional doppelgangers (Hancock and Bailenson 1). This lets individuals experience the consequences of falling for such falsity and may help offending students develop empathy for victims. Schools should also reflect on the case of Francesca Mani, who created a website called AI Help to advocate for targets of AI abuse like herself (Ryan-Mosley). To convey the seriousness of deepfakes, scholarly institutions need to consider more means of supporting their students, since deepfakes are becoming a root of cyberbullying.
As for those who prefer reading and writing, they could retain more from sources discussing AI privacy practices, with a follow-up questionnaire to assess their knowledge. Verbal learners, in contrast, may enjoy speaking about AI in the classroom. According to MIT, involving students in decisions about using AI in teaching provides feedback on tools beforehand and reinforces academic integrity (MIT Sloan Teaching & Learning Technologies). Talking through how to include AI in homework without mere copying can help children grasp the tool's limitations and applications. Still, many students do not know what qualifies as plagiarism, and their ignorance suggests that an educator cannot simply hand them a definition but must let them voice their opinions and discoveries. A discussion forum, for instance, can help students feel less pressure and stress about paraphrasing (Kangas-Olson 21). Structuring a class-led discussion can also be engaging and interactive rather than a question-and-answer ordeal; it can even take the form of a game to appeal to younger and older audiences alike. Encouraging students to play with the idea of cheating in the classroom allows them to explore a moral compass as a group (Kangas-Olson 30, 31). Inviting students to define rules and logic in school can strengthen the relationship between educators and learners, further boosting academic honesty. As teachers listen and appeal to students' values, their foundation of trust is re-established. Thus, the youth may be more willing to learn the principles behind AI from authority figures than through personal risks.
Alternatively, hands-on high schoolers report that creative projects keep them from brainstorming with AI because such work gives them the freedom to develop their own ideas (Kangas-Olson 21, 23). Although teachers uphold classroom standards for what students may research, the youth should not be confined to the limits of their studies when education leaves little room for imagination. Assignments that go beyond the basics foster ingenuity and keep children thinking critically, improving their competencies (Khalil and Er 12). If the education system integrates AI into the school curriculum, students can feel comfortable being original rather than machine-driven. Even for toddlers who encounter highly advanced technology as early as age 2, there is still time to develop their cognitive abilities before reaching the real world. In sum, teaching and learning about AI in institutions can bolster the authenticity, communication, and ethics missing in today's youth.
Conclusion
Artificial intelligence (AI) presents unique opportunities and challenges for young people as they navigate their transition into the real world. By understanding and engaging with AI, adolescents can enhance their individuality and critical thinking skills rather than relying solely on AI. While admiration for AI is growing, it is essential for students to recognize the importance of their own abilities and perspectives, ensuring they avoid pitfalls that could harm themselves and others. To enable this understanding, children must develop both AI literacy and privacy skills. These competencies will empower them to navigate the complexities of technology and discern the nuances of AI-generated content, which often combines reliable and unreliable elements (MIT Sloan Teaching & Learning Technologies). Ultimately, teachers can foster a constructive environment where students learn to critically assess AI. By encouraging open dialogue and mutual learning, educators and students together can safeguard the mental well-being and safety of young individuals in an ever-evolving media landscape. This collaborative approach can help cultivate a generation of informed and independent thinkers, ready to harness the benefits of technology responsibly.
Works Cited
Ahmad, Sayed Fayaz, et al. “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education.” Humanities and Social Sciences Communications, vol. 10, no. 1, June 2023, https://doi.org/10.1057/s41599-023-01787-8.
Ali, Safinah, et al. “Exploring Generative Models with Middle School Students.” Association for Computing Machinery Digital Library, vol. 31, no. 678, May 2021, pp. 1–13. https://doi.org/10.1145/3411764.3445226.
Hancock, Jeffrey T., and Jeremy N. Bailenson. “The Social Impact of Deepfakes.” Cyberpsychology Behavior and Social Networking, vol. 24, no. 3, Mar. 2021, pp. 149–52. https://doi.org/10.1089/cyber.2021.29208.jth.
Kangas-Olson, Grayci. “Managing Plagiarism and Artificial Intelligence in a High School Classroom.” Hamline University Bush Memorial Library, 2023, https://digitalcommons.hamline.edu/cgi/viewcontent.cgi?article=1976&context=hse_cp. Accessed 8 Oct. 2024.
Khalil, Mohammad, and Erkan Er. “Will ChatGPT Get You Caught? Rethinking of Plagiarism Detection.” Lecture Notes in Computer Science, 2023, pp. 475–87. https://doi.org/10.1007/978-3-031-34411-4_32.
Knibbs, Kate. “Most US Teens Use Generative AI. Most of Their Parents Don’t Know.” WIRED, 18 Sept. 2024, www.wired.com/story/teens-generative-ai-use-schools-parents.
MIT Sloan Teaching & Learning Technologies. “Practical Strategies for Teaching With AI.” MIT Sloan Teaching & Learning Technologies, 7 May 2024, https://mitsloanedtech.mit.edu/ai/teach/practical-strategies-for-teaching-with-ai/.
“Mobile Fact Sheet.” Pew Research Center, 25 Apr. 2024, www.pewresearch.org/internet/fact-sheet/mobile.
Nagelhout, Ryan. “Students Are Using AI Already. Here’s What They Think Adults Should Know.” Harvard Graduate School of Education, 10 Sept. 2024, https://www.gse.harvard.edu/ideas/usable-knowledge/24/09/students-are-using-ai-already-heres-what-they-think-adults-should-know. Accessed 15 Oct. 2024.
Ryan-Mosley, Tate. “Meet the 15-year-old Deepfake Victim Pushing Congress Into Action.” MIT Technology Review, 1 Dec. 2023, www.technologyreview.com/2023/12/04/1084271/meet-the-15-year-old-deepfake-porn-victim-pushing-congress/.
Small, Gary W., et al. “Brain Health Consequences of Digital Technology Use.” Dialogues in Clinical Neuroscience, vol. 22, no. 2, June 2020, pp. 179–87. https://doi.org/10.31887/dcns.2020.22.2/gsmall.
Acknowledgements: My deepest gratitude goes to my English instructor for providing several resources and encouragement to build my research piece by piece. Soon-to-be Dr. Joann Yu remained patient throughout my writing process and was never too tired to offer feedback, despite my lengthy report. Rather, Ms. Yu inspired me to share my research with a larger audience, and I found comfort in the warmth of her smile. Additionally, I would be remiss in not mentioning my mother, Keisha Mouzon Carter. My mom helped me fall in love with writing. Beginning the first sentence of a paper can be exciting and challenging, but my mother’s courage always inspires me. She motivates me to choose my words carefully, to let my phrases come together intentionally, and to express my voice in all of my work.
Citation Style: MLA 9