AI glasses: another potential power that poses threats to schools
By Alexander J. Schorr
Photo: screenshot of Meta’s Ray Ban AI Glasses analysis by Surfshark
February 25, 2026 (San Diego) — AI in the United States holds the potential for great power and change. It remains very much a new “wild west”: with no significant guardrails in place, it poses real dangers to people’s privacy and identity. New AI glasses, for example, are a potential threat to school safety and academic honesty.
Back in January 2026, Meta announced plans to potentially double production of its AI smart glasses by the end of the year. While these glasses offer potential for accessibility and language translation, they are increasingly being banned or regulated due to severe privacy and academic honesty concerns.
Many schools and universities are updating their electronic device policies to include smart glasses: Bloomfield Hills School prohibits AI glasses during instructional time, and institutions like HSHP require students to wear "traditional prescription frames.” The College Board has banned smart glasses during the SAT starting in March 2026 to prevent cheating via integrated cameras and AI assistants. Some universities have issued safety warnings; the University of San Francisco (USF), for example, warned its campus community after a man was reported using Meta AI glasses to record others.
As Meta’s AI-powered smart glasses grow in popularity, students and teachers are reporting rising concerns about privacy and harassment on school campuses. The glasses can secretly capture photos and videos; women have reportedly been approached and recorded without consent, and educators also worry about academic dishonesty.
This technology is also raising fears among immigrant and first-generation students about possible surveillance by law enforcement. With future models expected to become even more discreet, the debate over their appropriate use in educational settings is ongoing. Because the cameras on the glasses are so inconspicuous, it is difficult to know when someone is being recorded, fueling fears of harassment in private areas like locker rooms. AI assistants can also whisper answers into a student’s ear or scan test questions to provide solutions.
The deployment of Meta's Ray-Ban glasses by law enforcement raises further concerns about the built-in cameras and potential facial recognition, which could allow Customs and Border Protection (CBP) or Immigration and Customs Enforcement (ICE) agents to identify and detain individuals in real time.
As Meta continues to push for wider adoption of AI glasses, schools and universities will likely need to update their policies and enforcement measures to protect academic integrity and the right to privacy. Lawmakers may need to consider new regulations on wearable recording devices in sensitive public spaces such as educational institutions.
AI in Classrooms
Photo, right: screenshot of AI presentation by Panworld Education
AI is rapidly transitioning from a futuristic concept to a daily routine in classrooms, with 61% of teachers reporting the use of AI in their work as of early 2026. Beyond the obvious fears of cheating, academia is asking larger questions about how these tools can support learning and reduce administrative burdens.
Platforms like DreamBox and Khan Academy’s Khanmigo analyze student performance in real time to adjust lesson difficulty and offer instant hints for improvement.
Educators can use AI to draft lesson plans, create rubrics, and automate grading for objective assessments, which reportedly saves 7 to 10 hours per week.
These AI tools can also provide real-time translation for multilingual learners and speech-to-text or text-to-speech services for students with disabilities. They can likewise be used for early-stage brainstorming, code debugging, and music and art composition to stimulate originality.
The US Department of Education has emphasized that AI must support, not replace, human teachers. Some educators have advocated a "70/30 Rule" balancing student participation with AI usage: students spend 70% of class time in active, human-led practice and 30% in direct instruction or AI-assisted tools.
Misuses and Abuses
AI misuse occurs when people or organizations use AI with malicious intent. This is distinct from bias, which is a direct consequence of the biased data used to create AI products such as videos and opinion pieces. Misuse, by contrast, results from people purposefully using AI for unethical and harmful ends, such as violating privacy, censoring users on online networks, and illegally harvesting data from the internet.
One harmful example of AI misuse is deepfakes, a term combining “deep learning” and “fake media” that describes AI models that create false or misleading images, audio, and video. In many malicious instances, deepfakes have targeted individuals and groups: fabricated pornographic videos of celebrities, fake audio used for blackmail, and schemes to steal money from companies and individuals, all carried out by people who know how to manipulate AI technology for negative purposes.
Additionally, since January 2026, Elon Musk’s AI chatbot Grok has been engulfed in global controversy over its role in generating non-consensual sexualized imagery and offensive content, most notoriously the digital undressing of minors. With Grok’s image-editing feature enabling a “mass digital undressing spree,” the chatbot is estimated to have produced roughly 1.8 million to 3 million sexualized images since late December 2025.
Grok has also generated content that violates standard safety norms, including racist material, Holocaust denial, and unchecked misinformation. Musk and xAI have defended the platform as a bastion of free speech, characterizing regulation as “censorship.”
Under mounting pressure, however, X began using geoblocking in mid-January 2026 to restrict explicit image generation in certain legal jurisdictions. The platform now limits image generation to paid subscribers and has implemented technical measures to prevent editing real people into revealing clothing or nudity.
What can be done about this? Regulation would only be a start.
The European Union aims to regulate many activities considered AI misuse by imposing heavy fines on companies that fail to comply with proposed regulations. Activists are also protesting and trying to draw lawmakers’ attention to the dangers of deepfakes, especially in the context of “revenge porn.”
So far, the United States has positioned itself against the overregulation of AI, and strict regulations tackling misuse and abuse have yet to be seen.
Privacy and Ownership Concerns
Bad actors can and will exploit AI in cyberattacks. They manipulate AI software to clone voices, generate fake identities, and craft convincing phishing emails and messages, all with the intent to scam, hack, and steal a person’s identity or compromise their privacy. Beyond cyberattacks, malicious actors exploit AI technologies to spread misinformation and disinformation, influencing and manipulating people’s decisions and actions.
While organizations are taking advantage of technological advances such as generative AI, only 24% of generative AI initiatives are secured. This lack of consistent security threatens to expose data and AI models to hacks and breaches, with the global average cost of a data breach reaching USD 4.88 million in 2024.
AI also relies on energy-intensive networks with a significant carbon footprint. Training algorithms and running complex models require vast amounts of energy and contribute to carbon emissions. One study estimates that training a single natural language processing model emits over 600,000 pounds of carbon dioxide, about five times the lifetime emissions of an average car.
Water consumption is a concern as well: many AI applications run on servers in data centers, which generate significant heat and require large volumes of water for cooling. One study found that training GPT-3 models in Microsoft’s US data centers consumed millions of liters of water. Moreover, handling 10 to 50 prompts can use roughly 500 milliliters, the equivalent of a standard bottle of water.
Though generative AI can be an avenue for creativity, including images that capture an artist’s style, a major question remains: who owns the copyright to AI-created content, whether fully generated by AI or created with its assistance? Intellectual property (IP) issues surrounding AI-generated works still create ambiguity over ownership, as well as challenges for businesses.
One of the more uncertain aspects and risks of AI is its lack of accountability. Who is responsible when an AI system goes awry? Who is held liable for the damage an AI tool causes?
AI does hold promise for benevolent creation and security, but it comes with unknown risks and potential dangers. Proactive steps to understand and regulate AI can minimize harm while encouraging competition, but there is currently no limit or control over its presence in the US.
