We are excited to announce the inaugural Tufts AI Safety x Tech@Fletcher AI Policy Hackathon, taking place February 7-8 at Eaton Hall on the Tufts campus, with a $3,000 prize pool. Apply here! Applications close January 28th (rolling review).
Below, we outline our Theory of Change: the rationale behind this event, its design, and the specific impact we aim to achieve.
The Tufts AI Safety x Tech@Fletcher AI Policy Hackathon is a two-day sprint designed to bridge the gap between technical limitations and the political realities of AI governance. We are inviting students to form interdisciplinary teams and tackle one of two critical tracks: International AI Governance or AI x Cyber/Chem/Bio Risks. Participants will craft policy briefs under the guidance of experts from the Fletcher School and the Boston AI policy community, producing work that can withstand scrutiny across disciplines.
While technical AI development accelerates, governance mechanisms lag dangerously behind. We observe that talented students at elite institutions frequently undervalue policymaking, gravitating toward purely technical roles. This leaves a deficit of future leaders capable of bridging the gap between technical constraints and geopolitical realities. Furthermore, effective AI governance requires policymakers with a keen awareness of both institutional and political mechanisms and the underlying technical challenges. Few students currently possess this combination of skills. The result is a shortage of future talent and a thinner pool of ideas, yielding policy proposals that are either technically infeasible or diplomatically naive.
Our hackathon integrates distinct intellectual spaces through competition. We have structured the event so that high-quality policy proposals must be inherently interdisciplinary to survive scrutiny. This rigor is enforced by our panel of judges, carefully selected from diverse technical and policy backgrounds. By applying criteria that demand both technical feasibility and diplomatic viability, they create an environment where domain isolation is a failure mode. In addition, to maximize talent density, we allow individual registration to lower the barrier to entry. However, we actively advise and support team formation, as interdisciplinary groups are far more likely to succeed. This design ensures that teams cannot rely solely on their existing specializations but must instead synthesize across fields.
University students cannot solve complex governance problems in a single weekend. Meaningful progress is still possible, however, because we address the talent gap as much as the policy gap. We aim for two specific results. First, tangible outputs: policy briefs that have survived expert scrutiny, serving both as a stress test for current ideas and as an accelerant for student learning. Second, and more importantly, long-term outcomes: altered career trajectories. Direct feedback from judges validates policy as a high-impact path, equipping students to lead at the intersection of diplomacy, security, and technology.
We are aware of the failure modes inherent to short-format policy events. We have identified four specific risks and designed corresponding mitigations:
Risk 1: Lack of Technical Feasibility. Teams may produce ideas that are theoretically attractive but technically impossible to implement given current capabilities or frameworks.
Risk 2: Scope Creep. Teams may attempt to solve broad, systemic issues, resulting in shallow or generic proposals. This often compounds Risk 3, since an overly broad scope frequently leads to vague enforcement mechanisms.
Risk 3: Lack of Enforcement Mechanisms. The International Governance track risks generating high-level abstractions that lack binding power or political leverage.