AI, Education, and Fixing Incentives
You can’t throw more money at a broken system and expect it to improve. However, this has been the norm in K–12 education for decades. Since the 1990s, per-pupil spending has risen roughly 50 percent while test scores have remained stagnant. Now, as AI enters classrooms, with 60 percent of K–12 teachers already using it and the market projected to grow from $6 billion to $112 billion in the next decade, we risk making the same mistake with newer, shinier tools.
AI is not Flex Seal. You cannot simply slap it on a broken system and expect it to get better. Business executives are learning this the hard way. A 2025 MIT report found that despite $30–40 billion in enterprise AI investments, 95 percent of companies have seen zero measurable return on investment. The companies that succeeded shared a common approach: they identified a specific problem, worked closely with their vendors, and measured results. The ones that failed just threw money at the name “AI.” The difference was accountability, which is exactly what’s missing in how we spend on education today.
The best results don’t come from government mandates but from free markets, where builders race to solve problems and get rewarded when they do. That requires two things education spending currently lacks: transparency in what is spent and transparency in what it produces. Where both exist and are acted on, improvement follows. The studies with the strongest outcomes show the potential AI has to transform education.
In 2014, Harvard economist Roland Fryer studied what happens when you inject the best practices of high-performing charter schools into struggling traditional public schools in Houston. He identified five key practices: more instruction time, high expectations, frequent teacher feedback, data-driven instruction, and high-dosage tutoring. Elementary students gained roughly an extra half-year of math progress annually. Middle and high school students closed about half the achievement gap over three years. When Fryer isolated high-dosage tutoring specifically, tutored students improved roughly three times as much as their non-tutored peers.
This is precisely where AI has its greatest opportunity. The core advantage of generative AI in education is extreme personalization at scale. AI can pinpoint the exact concepts a student is struggling with and respond with targeted tutoring, custom lesson plans, real-time feedback, and data on gaps that even attentive teachers might miss. More screen time is not a requirement. AI-enabled curricula and grading tools work behind the scenes, helping teachers personalize instruction across a full class and giving even first-year teachers a clear picture of where each student stands from day one. The benefits are even more pronounced for students with special needs, where personalization is not just helpful but essential. AI tools are already helping nonverbal students communicate, adapting reading material for dyslexic students, and giving students with physical limitations new ways to complete their work.
Early studies are promising. A Harvard study found that students using a carefully designed AI tutor learned more than twice as much as those in active-learning classrooms, in less time, while reporting higher engagement. Stanford researchers built an open-source tool called Tutor CoPilot that, in a randomized trial of 900 tutors and 1,800 students, improved math proficiency by up to 9 percentage points at a cost of roughly $20 per tutor per year.
However, these results come with a caveat. The Harvard study involved fewer than 200 undergraduates at an elite university in a controlled experiment. Stanford’s tool was developed in partnership with a specific tutoring company in a structured environment. Both are highly curated settings with strong built-in incentives to perform. The question for policymakers is how to replicate those incentive structures at scale in public education.
Without a mechanism to measure outcomes and attribute them to specific products, school districts risk spending billions on AI products without realizing any benefit. Transparency is the solution.
Districts should be required to publicly report spending data, including AI-related expenditures, alongside student outcome data. Contracts with vendors should include performance benchmarks and sunset clauses: if a product doesn’t move the needle within a defined period, the contract ends. Efficacy data from schools that successfully implement AI should be aggregated and shared so that best practices can spread and failed experiments don’t get repeated elsewhere. This is how functioning markets work: buyers make informed choices, successful products scale, and ineffective ones get replaced. Education technology should be no different.
Accountability must also include safety. AI tools used by children need robust data privacy protections and safeguards against harmful content. Parents and educators won’t trust these systems to be effective if they can’t first trust them to be safe. AI can transform education. AI tutoring has the potential to deliver the kind of personalized, high-dosage instruction that Fryer’s research proved can close achievement gaps, and to deliver it at massive scale. Done right, it could be the highest-return investment in a generation, equipping every student with the tools to build a thriving, purposeful life that includes meaningful work. Without the right mechanisms in place, however, we risk wasting billions and leaving students no better off.