Vermont Releases First AI Guidance for Schools Amid Fast-Moving Tech Landscape
The Vermont Agency of Education (AOE) released its Artificial Intelligence Guidance for Education on January 27, 2026, providing the state’s first comprehensive framework for how schools should approach generative AI tools like ChatGPT.
The document offers practical advice for educators, school leaders, and communities exploring these technologies while attempting to preserve the personal interactions that define Vermont’s educational approach.
But the guidance arrives amid a fundamental tension: technology companies are releasing new AI capabilities weekly, while state policy cycles traditionally move on a timeline of months or years.
A Framework Built on Vermont Values
The guidance reflects Vermont’s longstanding commitment to personalized, proficiency-based learning, emphasizing that AI should serve as a tool for deepening education rather than replacing teacher judgment or student critical thinking. Secretary of Education Zoie Saunders described the document as a “first step” in sharing best practices across districts.
The framework centers on what officials call a “human-centered” approach. This means educators lead implementation decisions, student well-being remains paramount, and AI assists rather than replaces human instruction. The guidance directly links AI capabilities to Vermont’s Act 77 personalized learning goals, suggesting the technology could scale individualized support and reduce administrative burdens for teachers.
Vermont’s approach aligns with federal guidance issued by the U.S. Department of Education in July 2025, which emphasized high-impact tutoring and career advising while maintaining educators’ central role. The federal guidance also clarified that formula and discretionary grant funds may support AI tools if they improve outcomes for learners.
The Professional Development Gap
The guidance’s ambitious goals face a significant obstacle: most Vermont teachers haven’t received training on these tools. Education Week research from late 2025 shows that fewer than half of teachers (48%) have participated in any school-provided AI training, and only 29% have received guidance on how to use these tools effectively.
This gap between policy aspirations and classroom reality means many educators are navigating AI adoption largely on their own, despite the guidance’s emphasis on “empowering” teachers in their instructional roles.
Understanding the Technology’s Limitations
The guidance acknowledges concerns around academic integrity and the need for students to critically evaluate AI-generated information. What it doesn’t fully explain is why these concerns exist at a technical level.
Large language models like ChatGPT operate on probabilistic patterns, predicting the most likely next word in a sequence rather than verifying facts against a database.
Refined prompting can reduce, but not eliminate, the risk this creates: the architecture means AI can produce convincing but entirely fictitious information—a phenomenon technical researchers call “hallucination.” UNESCO’s guidance on generative AI describes these systems as “frequently unreliable sources of information” that tend to repeat standard opinions while potentially undermining minority viewpoints.
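To make that mechanism concrete, here is a toy sketch of next-word selection. The three-word vocabulary and its probabilities are invented for illustration, not taken from any real model, but the structure is the point: the code samples from a probability distribution and never consults a source of truth.

```python
import random

# Toy next-word model: maps a context to a probability distribution
# over possible next words. The numbers are invented for illustration;
# a real LLM learns billions of such patterns from text.
NEXT_WORD_PROBS = {
    "the capital of vermont is": {
        "montpelier": 0.80,  # probable, and happens to be correct
        "burlington": 0.15,  # plausible-sounding but wrong
        "middlebury": 0.05,  # also wrong
    },
}

def next_word(context: str) -> str:
    """Sample the next word from the distribution for this context.

    Note what is absent: no database lookup, no fact check. A
    wrong-but-probable word can be emitted with full fluency,
    which is how a "hallucination" arises.
    """
    probs = NEXT_WORD_PROBS[context]
    return random.choices(list(probs), weights=list(probs.values()))[0]

if __name__ == "__main__":
    for _ in range(5):
        print("the capital of vermont is", next_word("the capital of vermont is"))
```

Roughly one run in five will name the wrong capital with complete fluency, which is precisely the behavior the guidance asks students to evaluate critically.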
Modern systems increasingly integrate retrieval mechanisms that check claims against external databases, significantly improving accuracy for fact-checking tasks.
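The retrieval idea can be sketched just as briefly. In the toy version below, the two-sentence corpus and keyword-overlap scoring are invented stand-ins for the vector search real systems use, but the shape is the same: fetch relevant text first, then answer only from what was fetched.

```python
# Minimal sketch of retrieval before answering. The two-sentence
# corpus and word-overlap scoring are invented for illustration;
# production systems use vector search over large document indexes.
CORPUS = [
    "Montpelier is the capital of Vermont.",
    "Burlington is the largest city in Vermont.",
]

def retrieve(question: str) -> str | None:
    """Return the corpus sentence sharing the most words with the
    question, or None when nothing overlaps at all."""
    q_words = set(question.lower().replace("?", "").split())
    best, best_score = None, 0
    for doc in CORPUS:
        doc_words = set(doc.lower().rstrip(".").split())
        score = len(q_words & doc_words)
        if score > best_score:
            best, best_score = doc, score
    return best

def answer(question: str) -> str:
    # Ground the reply in retrieved text instead of free generation,
    # and refuse rather than guess when no source is found.
    doc = retrieve(question)
    if doc is None:
        return "No source found; declining to answer."
    return f"According to the retrieved source: {doc}"

print(answer("What is the capital of Vermont?"))
```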
This technical reality reinforces a key point: the primary challenge isn’t just adopting a new tool, but developing professional skills to manage it effectively.
Student Connection Concerns
Beyond technical limitations, AI adoption carries social implications. The same Education Week research found that 50% of students feel less connected to teachers when AI is integrated into classrooms, and 47% of teachers worry about decreased peer-to-peer connections.
Students also frequently use AI for purposes beyond academic work, including relationship advice and mental health support—applications for which these tools weren’t designed or validated.
Vermont’s Mandatory Privacy Law Coming in 2027
While the AOE guidance is voluntary, Vermont schools face mandatory requirements under the Vermont Age-Appropriate Design Code (Act 63), signed into law June 12, 2025, and taking effect January 1, 2027.
This law—one of the strictest children’s privacy laws in the United States—applies to online services “reasonably likely to be accessed” by anyone under 18. It requires businesses to ensure their designs don’t result in “reasonably foreseeable” emotional distress, compulsive use, or discrimination. Services must default to the highest privacy level, minimize data collection, and prohibit using minors’ data to recommend or prioritize content without explicit request.
The law prohibits push notifications between midnight and 6 a.m. and generally restricts features that encourage addictive usage patterns. Any AI tool procured by Vermont schools must meet these design standards, regardless of educational utility.
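A few of the statute’s requirements are concrete enough to express in code. Below is a minimal sketch of the quiet-hours rule; the function and parameter names are invented, since the law specifies outcomes rather than any particular API.

```python
from datetime import time

# The Vermont Kids Code's quiet window: no push notifications to
# minors between midnight and 6 a.m. Function and parameter names
# here are invented; the statute specifies outcomes, not an API.
QUIET_START, QUIET_END = time(0, 0), time(6, 0)

def notification_allowed(send_at: time, user_is_minor: bool) -> bool:
    """Block pushes to minors during the statutory quiet hours."""
    if user_is_minor and QUIET_START <= send_at < QUIET_END:
        return False
    return True

assert not notification_allowed(time(2, 30), user_is_minor=True)
assert notification_allowed(time(2, 30), user_is_minor=False)
assert notification_allowed(time(7, 0), user_is_minor=True)
```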
Districts must also navigate existing federal and state requirements including FERPA, which protects educational records; COPPA, requiring parental consent for data collection from children under 13; and Vermont’s SOPIPA, ensuring ed-tech providers use student data only for educational purposes.
The University of Vermont explicitly warns that entering confidential data into generative AI tools may violate law or policy, as these tools “make no guarantees of protecting the data.”
Federal Funding and Local Implementation
The AOE recommends that supervisory unions and school districts create “Digital Learning Plans” focused on local needs. These plans serve as living documents to guide technology integration and align with state computer science standards.
Federal education funds can support AI-based instructional materials, high-impact tutoring, college and career pathway exploration, and professional development—provided these uses support improved outcomes and don’t replace teachers’ critical roles.
The Management Skills Imperative
UNESCO’s guidance emphasizes that teachers need high-quality training in “prompt engineering”—the ability to structure and critically evaluate instructions given to AI tools to achieve desired, ethical, and accurate results.
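What such a structured prompt might look like varies by tool. The sketch below uses invented template wording, not language from UNESCO or the AOE, to show one common pattern: separate the role, the task, the constraints, and a request for checkable claims.

```python
def build_lesson_prompt(topic: str, grade: str) -> str:
    """Assemble a structured prompt. The template wording is
    illustrative, not drawn from the AOE guidance or UNESCO."""
    return "\n".join([
        # Role: frame who the model should act as.
        f"You are a curriculum assistant for a {grade} classroom.",
        # Task: state precisely what is wanted.
        f"Draft three discussion questions about {topic}.",
        # Constraints: bound the output so it is easy to review.
        "Keep each question under 25 words and age-appropriate.",
        # Verification hook: force claims into checkable form.
        "Flag any factual claim so the teacher can verify it.",
    ])

print(build_lesson_prompt("the water cycle", "5th-grade"))
```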
Experts increasingly describe a “Human-AI-Human” approach, where every interaction begins with human inquiry and ends with human reflection and verification. This ensures the tool provides support without users surrendering intellectual agency or critical oversight.
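One way to picture that loop is a thin wrapper that refuses to release an AI draft until a person has signed off. The sketch below is schematic, with an invented yes/no check standing in for real verification.

```python
def human_ai_human(question: str, ai_draft: str) -> str:
    """Sketch of the loop: a human asks (step 1), the AI drafts
    (step 2, passed in here), and nothing is released until a
    human verifies it (step 3). The y/n prompt is a placeholder;
    a classroom version might use a rubric or a source check."""
    print(f"Question: {question}")
    print(f"AI draft: {ai_draft}")
    verdict = input("Verified against a trusted source? (y/n) ")
    if verdict.strip().lower() != "y":
        return "Draft withheld: verify or revise before use."
    return ai_draft  # released only after human sign-off
```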
As Vermont State University’s teaching guidelines note, AI is designed for efficiency, but learning is a “slow” process requiring effort. Shortcuts can inadvertently undermine the development of critical thinking and research skills.
What Happens Next
Vermont school districts face decisions that are no longer one-time procurement choices but ongoing management responsibilities. As AI capabilities evolve weekly, static policies quickly become outdated.
The AOE’s Educational Technology program will continue supporting districts through federal funding programs and digital learning plan development. Districts should prepare for enforcement of the Vermont Kids Code (Act 63) beginning January 1, 2027, ensuring any AI tools meet mandatory design standards for youth protection.
The guidance itself may be updated as technology evolves and districts share implementation experiences. Schools that treat AI as a “sensitive instrument requiring constant oversight” rather than a finished product will likely navigate this transition most successfully.
For Vermont educators and families, the message is clear: AI literacy—understanding both the technology’s capabilities and limitations—will matter more than any single policy document in determining how effectively these tools serve students.