
At the AI Engineer World’s Fair in San Francisco this June, cofounder Benjamin Dunphy announced the new AI Education Summit. Introducing the free online event, he sketched a future of human-computer interaction in which even the most “mediocre” of teachers could become “world-class” with the help of AI. The idea, he said, was sparked by a previous AI Engineer talk given by Stefania Druga, a pioneer in AI education research, who will also co-organize the summit. Putting aside his rather condescending portrayal of educators, Dunphy raised concerns about the lack of preparedness of children, parents, and educators to “navigate this new reality [of AI] effectively and ethically.” The goal of the event, he stated, is to “foster a global community dedicated to AI education.” While certainly not the first conference on AI and education, its practical, industry-focused angle may bring together new players in the field, expanding beyond academia and research.
As a guest on a recent episode of O’Reilly’s Generative AI in the Real World podcast, “Designing for the Next Generation,” Stefania Druga shared insights from her academic and industry research on how children will interact with, build, and learn from AI. She’s a proponent of the Socratic method of teaching and learning. In the episode, she discusses how her work on Cognimates, a tool built for children to learn coding, revealed the creative and sometimes unexpected ways children interact with AI, exploring things like unusual hairlines or backhanded compliments. Her work has also shown that, unlike many adults who want AI, or AI agents more specifically, to do their work for them, many members of the younger generations prefer to adjust the level of autonomy given to AI. Based on her findings, Stefania imagines the need for something like a “knob for agency to control” the AI; Andrej Karpathy expressed a similar idea with the “autonomy slider” in his recent talk at the AI Startup School event in San Francisco. This observation runs counter to the dominant industry discourse, which emphasizes productivity gains, most often measured in speed of execution or costs saved. Given the option, perhaps kids (and even adults) are looking for a learning companion rather than a way to cheat or offload their work. The challenge calls on AI innovators to build in collaboration with young learners, keeping their needs in mind. Cognimates, as Stefania describes it, is a “copilot [that] doesn’t do the coding. It asks them questions.” She and her coauthor, Amy Ko, advocate for developing “design guidelines for AI coding assistants that prioritize youth agency and critical interaction alongside supportive scaffolding” (Druga and Ko 2025).
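Neither Cognimates’ internals nor Karpathy’s slider exists as a public spec, but the design idea is easy to sketch. Below is a minimal, hypothetical illustration of such an “agency knob” in Python: a single learner-controlled setting that shifts an assistant’s behavior from pure Socratic questioning to full code generation. The Autonomy levels and prompt wording here are assumptions made for illustration, not Cognimates’ actual design.

```python
from enum import Enum

class Autonomy(Enum):
    """Hypothetical positions for a learner-controlled 'agency knob'."""
    SOCRATIC = 0   # assistant only asks guiding questions
    HINTS = 1      # assistant points at the problem but never fixes it
    PAIR = 2       # assistant proposes code and explains every choice
    AUTOPILOT = 3  # assistant writes the code outright

def build_system_prompt(level: Autonomy) -> str:
    """Map a knob position to instructions for an LLM-backed copilot.

    A sketch of the design idea only; the levels and prompt wording
    are assumptions, not Cognimates' actual implementation.
    """
    prompts = {
        Autonomy.SOCRATIC: (
            "Never write code for the learner. Respond only with "
            "questions that help them reason about their own program."
        ),
        Autonomy.HINTS: (
            "Do not write code. Name the concept or the line the "
            "learner should look at, and let them make the change."
        ),
        Autonomy.PAIR: (
            "You may propose small code changes, but explain the "
            "reasoning behind each one and ask the learner to confirm."
        ),
        Autonomy.AUTOPILOT: (
            "Complete the task directly, then summarize what you did."
        ),
    }
    return prompts[level]

# A learner (or teacher) dials the knob down for a practice session:
print(build_system_prompt(Autonomy.SOCRATIC))
```

The design point is that the learner, not the vendor, holds the knob: dialing it down turns the copilot into a tutor that asks rather than answers, which is the kind of scaffolding Druga and Ko’s guidelines call for.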
The question of how we can better use AI tools to assist learning in a way that also lets us leverage the experience, knowledge, and skills we already have, keeping them human centered, is an important one for the field of education. When we create AI-powered learning environments that foster human-focused learning and open new avenues for creativity and innovation, rather than merely showcasing technological advancement through improved AI outputs, we’re much more likely to get children and adult learners excited about AI’s prospects. At a time when many people feel less and less control over their environment and futures, a vision of a collaborative future with AI may be more appealing than one that cedes even more control to external forces.
The importance of bringing the education community up to speed and empowering it has not been lost on governments either. In April this year, China announced plans to integrate AI into curricula across different levels of education. Soon after, in the US, the president signed an executive order, Advancing Artificial Intelligence Education for American Youth, that aims to bring AI competency to both students and educators throughout the country. The order envisions public-private partnerships as central to achieving its aims, stating that “educators, industry leaders, and employers who rely on an AI-skilled workforce should partner to create educational programs that equip students with essential AI skills and competencies across all learning pathways.” The order spells out concrete outcomes, some of which were envisioned to take place within 90 to 120 days; those outcomes should perhaps soon be apparent as the AI Action Plan takes shape. Aside from establishing an Artificial Intelligence Task Force, led by the director of the Office of Science and Technology Policy, the order stipulates plans of action for several government agencies. A Presidential Artificial Intelligence Challenge was to be established within 90 days, though it appears still to be in development. The order instructs the secretary of education, the director of the NSF, and the secretary of agriculture to prioritize research and funding for teacher training in fundamental computer science and AI skills, for educating students in those skills, and to “effectively integrate AI-based tools and modalities in classrooms.” Finally, it seeks to “increase participation in AI-related Registered Apprenticeships” across different industries.
Educators, policymakers, and industry leaders support such initiatives to foster AI literacy and the critical thinking skills necessary for leveraging AI technologies. The AI4K12 Steering Committee, for example, aims to establish “national guidelines for AI education for K-12.” However, some in the education community have also expressed concerns about this administration’s ability to implement such an order. Though courts continue to block, at least in part, some of the administration’s plans to scale back funding, personnel, and resources, several of the institutions tasked with implementing the order face severe cuts. The National Science Foundation is potentially facing its lowest funding in decades, including cuts of hundreds of millions of dollars for STEM education, and the administration has called for eliminating the Department of Education. Those who see the order as a potential step in the right direction question whether it’s feasible under such conditions. “Critically, achieving widespread AI literacy may be even harder than building digital and media literacy, so getting there will require serious investment—not cuts—to education and research,” write Daniel Schiff, Arne Bewersdorff, and Marie Hornberger in The Conversation. The administration’s attempts to scale back resources for these institutions and for research, whether fully successful or only partly so, are a poor strategy for keeping up with the technological advances and the needs of our children, educators, and society at large.
Whether institutional support for the president’s order will continue remains to be seen, though many private organizations have pledged to contribute. If the critics’ concerns are valid and implementation falters for lack of funding and resources, educators will be left to their own devices or forced to seek alternative sources of support. For some, that alternative appears to be turning to the very tech giants that created the AI tools now transforming the classroom.
Microsoft and OpenAI are preparing to announce a partnership with the American Federation of Teachers to establish a National Academy of AI Instruction to “help teachers better understand fast-changing AI technologies and evolve their curriculum to prepare students for a world in which the tools are core to many jobs.” Other large AI organizations, such as Anthropic, which has its own take on AI fluency and educational efforts, may be involved as well. This sort of public-private partnership could be fruitful if no other paths to resources are available. To put it another way: If the government provides little support and funding for educators and researchers to understand these technologies and how best to leverage them for education, they are left with few options, given the ubiquity and already apparent impact of these tools.
There are no neutral players or positions in this context. The administration has its political agenda, for example, “Removing Red Tape and Onerous Regulation” and “Ensur[ing] that Frontier AI Protects Free Speech and American Values.” As Trump stated in a press conference announcing his AI Action Plan, “Once and for all, we are getting rid of ‘woke’—is that OK?” (a point further emphasized through an executive order). The industry’s agenda is also becoming increasingly apparent: it is invested in controlling how its technologies are regulated and what information it is forced to share about them. Anthropic’s stated public position, for example, is somewhat nuanced. “We share the Administration’s concern about overly-prescriptive regulatory approaches creating an inconsistent and burdensome patchwork of laws,” the company said, but added, “We continue to oppose proposals aimed at preventing states from enacting measures to protect their citizens from potential harms caused by powerful AI systems, if the federal government fails to act.” Industry, of course, also has a fundamental market incentive to increase usage of its technologies among younger generations. Finally, educators and researchers must understand the impact of these technologies on students, society, and their own work, not only by learning how to leverage the technologies industry provides but also by learning how AI is built and how to build it themselves.
AI literacy and competency can mean many things, but they do not just mean learning techniques for generating classroom materials faster or using an agent to schedule parent-teacher meetings or draft emails in order to become more productive workers. AI competency also means understanding how these technologies affect the learning experience and critical thinking skills; it goes beyond simply recognizing when AI is hallucinating to understanding how these systems contribute to the spread of misinformation and carry bias. As with any producer or curator of knowledge, AI isn’t neutral but rather a product of its creators: humans decide what data these systems are trained on, and humans judge the accuracy of their outputs. To understand those impacts, educators and researchers need access to resources that give them the academic freedom to examine these technologies. They need to be able to measure potential harmful effects as well as potential positive outcomes without outside influence. More than anything, they need the knowledge to navigate a technology that cannot be put back in its box. Support for their efforts to understand and leverage AI should not be tied to fealty to either an administration or a tech industry strongly invested in positive outcomes for its investors.