3 big questions to prepare leaders for the age of artificial intelligence
Daniel Shin Un Kang is an MPP graduate of the Blavatnik School of Government, author of The Super Upside Factor (Wiley, 2025), and a former Y Combinator founder and SoftBank investor. He draws on his experience at the intersection of technology and policy to reflect on how universities can prepare leaders for the age of artificial intelligence.

Students entering universities today may graduate into a world of Artificial Super Intelligence (ASI), defined as “a system that is much better than the best human at every cognitive task”. AI experts from OpenAI, Anthropic and DeepMind predict its arrival within 2–10 years. Universities have a short window to prepare future leaders for an AI-transformed world, and the Blavatnik School of Government at Oxford is no exception.
During my recent visit to the Blavatnik School, Dean Ngaire Woods posed the urgent version of this challenge over coffee: “What is the role of education in the world of AI?”
My perspective as a former technology investor and founder is that we are living in a “horseless carriage” era – a period of rapid transformation where old methods are replaced with new technology before we fully understand its implications.
Rather than seeking risk-free solutions, institutions need rapid experimentation that budgets for survivable failure and positions for transformational outcomes, so they can develop leaders for a world where intelligence itself becomes abundant.
What roles should humans play when AI exceeds our capabilities?
“What can humans do that machines cannot?” is an old question that’s fading. By definition, ASI will surpass human reasoning and judgment. So the more appropriate question is: what roles must remain human, regardless of capability?
Accountability is one clear answer. Even the most optimistic technologists will agree that a ‘human-in-the-loop’ model will likely endure, because people ultimately bear the consequences of decisions and can therefore be held responsible in ways algorithms cannot. Human involvement also provides retributive legitimacy, as responsibility can be assigned when things go wrong, while cultural norms still resist the idea of a world fully run by technology.
That may sound anthropocentric, but given that the explicit purpose of almost all large language model (LLM) developers globally is the betterment of humanity, I think this is a reasonable assumption.
Precedent suggests accountability requires human intervention. The Sarbanes-Oxley Act requires CEOs and CFOs of publicly traded companies to personally certify the accuracy of financial statements, with personal liability for false information. We see the same with aviation safety inspectors personally signing off on the airworthiness of aircraft, or engineers applying their personal seals to attest to the accuracy of major building designs. These examples reinforce a simple rule: we cannot sign off on decisions we do not understand.
For universities, the primary purpose hasn’t drifted far from its origins. The central question is what kind of leaders and governance structures we want to cultivate to ensure human accountability, even if AI exceeds human capability.
Will generalist education survive the AI revolution?
Traditionally, being a generalist rather than a specialist involved trade-offs, but it still offered a path to leadership. That balance may be shifting. AI is producing a “barbell effect” in human capability.
At one end, ordinary performance is being raised: non-engineers can now build software, first-time users can design polished graphics, and anyone can analyse data like a professional. At the other end, experts gain unprecedented leverage. Where an average person might get a 90% lift, an expert may get a 100x lift from AI through effective prompting, workflow design and system orchestration grounded in deep, specific knowledge.
The result is polarisation: many now operate above the level of the previous top experts, while a handful of people leverage AI to perform several-fold better than the average person.
We see this effect in many fields, including writing, as Paul Graham argues in his essay “Writes and Write-Nots”. We see the stark contrast in software engineering as well, with waves of layoffs of entry-level engineers, while top engineers are rumoured to receive $100m compensation packages. The power law is at play, with outsized value created and accrued by the top 0.1%.
Does this mean generalist education will vanish? Not entirely. A new type of “meta-generalist” may emerge – someone who can master domain vocabulary and systems and fluently design AI-enabled workflows across disciplines.
But the traditional value of surveying broad but static knowledge may erode as ASI itself generates cross-disciplinary insights. For universities, the imperative is to cultivate depth in leaders – to be among the 0.1% who can extract a 100x lift from AI – in areas where human oversight is essential.
At the same time, universities must cultivate adaptive learners who can coordinate across humans and AI alike. This raises urgent questions: can our current structures deliver depth, agility and AI fluency, or do we require new methods of education?
How do we preserve essential human virtues?
Even hardcore AI evangelists will agree that we still need good humans. We don’t want AI to amplify weak or corrupt people – especially not among leaders responsible for material decisions. Two qualities become particularly critical in an AI-abundant world: resilience and integrity.
Resilience, described by psychologist Angela Duckworth as “grit”, is the passion and perseverance for long-term goals, built from interests, practice, purpose and hope. From a different angle, Ann Masten emphasises the “ordinary protective systems” of caregiving, peer support and cultural narratives, with the greatest threats being those that compromise these systems. Both perspectives point to productive friction: like muscle fibres, humans strengthen through strain. We can still build resilience with AI by designing for it.
One avenue is through high-altitude problems. If AI is a rocket, raise the objective. Universities can permit the use of artificial superintelligence but keep the uncertainty, hard trade-offs, time pressure and multiple stakeholders in place. A live crisis with overwhelming data, for instance, can test not only workflow orchestration but also judgment under pressure and AI literacy.
Another route is to focus on foundational resilience – the abilities that strengthen without tools. Asking students to write a two-page policy brief from scratch, then defend it in a Socratic exchange, exercises recall, structure, live communication, frustration tolerance, and the habit of finishing.
Resilience can also be cultivated beyond the classroom. Multi-year cohort projects with reputational risk, alumni participation, and responsibility handoffs can develop conflict resolution, incentive management, culture building, accountability and safety networks.
Building resilience is crucial for developing leaders who can maintain independent judgment in a world of biased models with sycophantic tendencies, and who have the courage to share independent thought in a world of instant, AI-enabled scrutiny.
Integrity is also under pressure as AI magnifies moral complexity and temptation. Beyond well-known societal risks – misinformation and deepfakes, privacy leaks, environmental costs – day-to-day grey zones are expanding in education and work. Some students ignore explicit bans and use AI for assignments or applications, normalising shortcuts.
On campuses, faulty justifications – “everyone does it,” or “employers can use AI so applicants should too” – are rising. Even if tomorrow’s norms become more permissive, that doesn’t excuse violations – the goal is to develop leaders capable of navigating high-stakes ethical decisions.
Expectations around ethics and standards of education must also apply to faculty and staff. AI has raised the bar, giving students the ability to fact-check and rebut lecturers in real time during class.
The response cannot simply be stricter policing. Setting clear AI guidelines and reinforcing them can be one way to help students practise upholding integrity. Debating and leading change to imperfect guidelines – rather than violating them – could be another way students can practise leading in the world.
The underlying challenge is cultural: shifting integrity from rule-compliance to a leadership competency embedded in how learning and assessment are designed. If accountability becomes a largely human role that depends on deep knowledge, the question becomes: how do we build a culture where integrity endures when AI makes cheating easy?
Embracing the horseless carriage era
Early automobiles were literally “horseless carriages” – designers replaced horses with engines without rethinking from first principles. Only later did obvious improvements emerge: better suspension for higher speeds, steering wheels instead of tillers, enclosed cabins for weather protection. We are in a similar phase with AI integration in education.
The way forward is iteration. There needs to be a shift in how we think about implementing guidelines, policies, and experimentation in education. We must become comfortable with being wrong. Instead of relying on deeply researched approaches optimised to be risk-free, a highly iterative approach of testing and improvement may be more effective.
High-probability strategies work well when the past is a reasonably good predictor of the future. But when the past is not a great predictor, an iterative process optimised for learning performs better. Focusing on speed of iterations and learning, while budgeting for survivable failure and positioning experiments for massive outcomes, may be the key to developing future leaders for the AI age.
For the Blavatnik School, the mission of preparing leaders for complex global challenges has never been more relevant – or more urgent.