GenAI in Education: Notes from the LSE-KCL-PKU Conference

Higher education knows AI changes learning. The question of what, concretely, to do about it is still open.

field notes
higher education
assessments
What I heard at the LSE-KCL-PKU conference on AI in education, what it confirmed, and the question nobody could answer.
Author

Jon Cardoso-Silva

First Draft

29 April 2026

Opening address at the LSE, KCL + PKU conference “Global Approaches to GenAI in Education” on 28 April 2026 at LSE. From left to right: Graham Wynn (KCL), Samantha Smidt (KCL), Sun Hua (PKU), Claire Gordon (Eden Centre, LSE).

Yesterday, I attended the first day of the Global Approaches to Generative AI in Education conference at LSE, a joint event with King’s College London (KCL) - my alma mater - and Peking University (PKU), now in its second edition. I presented a poster about the GENIAL framework and the data I am now beginning to analyse.

We all seem to agree that we have to adapt to the reality that students can now produce polished work with AI that tells us little about what they actually learned. I myself have been thinking and writing about this too. I left the conference with the sense that we still don’t know how, precisely, to implement the adaptations we need in Higher Education, and that the answer might need to be contextual, shaped by the reality of each university and each discipline.

Here is a short summary of my notes from the day, along with my reflections on what I heard.

What is the university for?

The day opened with a dialogue between Professors Emma McCoy (Vice President and Pro-Vice Chancellor (Education), LSE), Shitij Kapur (Vice-Chancellor & President, KCL), and Jiang Guohua (Acting Dean of Graduate School, Peking University).

Emma McCoy shared how she cannot remember much about her first year as an undergraduate. She wouldn’t be able to solve a contour analysis problem from her first year nowadays, but as she put it, “it was never about that”. Indeed. That point was reiterated throughout the day by other speakers: the university is a place to learn how to learn and to learn how to think for ourselves.

For his turn, Shitij Kapur delivered five compelling points about what he sees as the role and future of universities in the face of AI. To the best of my note-taking abilities, they were:

  1. To redefine the essence of learning. When you complete a degree, you do forget all the essays/problem sets you’ve worked on and the diploma may just sit on a shelf, but the cognitive development you went through stays with you.
  2. To respond to a changing world of work. Just as pilots had to be retrained to fly planes with autopilot, we need to retrain our students to work with AI as a tool, and to do so in a way that preserves the cognitive work that matters. Even then, pilots still need to know how to fly manually.
  3. To protect the dignity of scholarly work. Universities have been great protectors of deep, sustained attention, interpretation, and creativity, all of which arise only in deep dialogue with fellow human beings. AI is the antithesis of that, and the university will need to guard against the immediacy that AI provides.
  4. To critically evaluate AI’s commercial drivers. The real impetus of the developers of AI tools is entirely commercial. They have an “inexorable need for return” and profit. Universities have an important societal duty: to be the place where we critically evaluate AI and its impact on society.
  5. To consider what AI means for human development itself. Shitij thinks AI is not just a new industrial revolution but a new stage in human development. Our own sense of self and identity is tied to our cognitive capabilities, and AI is changing that. We need to understand what that means for us as human beings and how we can adapt to it.

It may not sound like much in bullet-point form, but his speech was delivered eloquently and felt thoroughly thought out and precisely on point. It stirred in me a strong desire to see education redesigned to be more human, despite the technological advances pushing us the other way.

But then he told the room he had not written any of it. It was all AI-generated. This has become a common trick in speeches since ChatGPT came out, but it was his provocation afterwards that got everyone thinking: “Does it matter if this was written by AI or not?” The room fell completely silent. (The whole thing reminded me of a brilliant BBC spy show I’ve just finished watching: The Capture.) In the end, as he was leaving the stage, Shitij Kapur seemed to walk it back and admit he had written the speech himself, but I could still hear the “does it matter?” question resounding around me.

A feeling of longing

The widespread adoption of, and access to, Generative AI among our students forces us to change how we think about our teaching, because these tools change how they (and we) do things in our disciplines. I share the experience Emma McCoy mentioned: she has not written a line of code herself since Claude Code came out (in my case, I rotate between Cursor, GitHub Copilot, and other AI tools, and I do feel a lot more productive because of it). AI coding tools are fantastic, and if you are a coder, it’s likely that when you first coded with an agentic tool you had a moment of “Wow, this is amazing, I can do so much more now!”.

Lately, though, I’ve started to long for the meditative flow I used to experience when coding “offline”, without any of the tools. It’s possible that others feel that way too. Unrelated to coding, my colleague Prof. Simon Walker made a similar point at the end of the day when he mentioned how he questions his own daily use of AI. Are we losing something when we use AI for a task? Are we all atrophying some of our own capabilities by offloading them to AI? Cognitive surrender certainly seems to be a real risk, but I also worry we’re stripping out the joy of discovery and learning and replacing it with the amazement of producing ever more polished stuff with AI.

I fear our students will miss out on that feeling of joy and discovery that comes from struggling with a problem and eventually solving it on their own. I fear I am missing out on that too sometimes. But is it worth bringing that back into the practice of coding? Or are we better off orchestrating agents to do that for us?

The answer to those questions will depend on the discipline. Content and disciplinary knowledge are still at the centre of what we teach. Emma mentioned a recent Wonkhe article (“AI literacy is not enough – universities must teach through disciplinary standards”) to make the point that when training people on AI or incorporating AI into the curriculum, we must be careful to do so in a way that is relevant to the standards and practices of each discipline. I think this applies even to the question of when to step back from AI: the answer will look different for a statistician, a lawyer, and a programmer.

What the LSE AI and Education fellows are building

LSE has a pioneering fellowship in AI and Education that is funding ten projects across the School between 2025 and 2027 to pilot and experiment with AI-enhanced approaches to teaching, learning, and assessment. The programme has an ambitious underlying goal: to see how AI can transform the educational experience in our disciplines. I am one of the fellows and I intend to write about the full scope of my project in a future post, but you can already see a part of it in a separate post.

The afternoon opened with a panel featuring three fellowship projects: Dario Krpan from Behavioural Science, Antonio Mele from Economics, and Jillian Terry and Chris Blunt from the LSE 100 team. A colleague from the Department of International Development, Diana Weinhold, also joined the panel to reflect on what AI means for the graduates universities are producing.

  • Dario showed us a full n8n workflow he has built to support students in the Behavioural Science programme(s), which seemed to be really well received by the students. He also shared some very good and fun games (built with Claude Artifacts) that his students have produced to simulate scenarios that help them understand the concepts they are learning in class.
  • Antonio ran two experiments in the Economics department.
    • In the first, 450 of 799 first-year students chose to use an AI tutor configured to maintain a Socratic stance across a 45-minute problem set session: it asked questions rather than giving answers, and held that stance in 93% of exchanges (a minimal sketch of what such a configuration might look like follows this list). Students who used it scored six marks higher on the exam than those who did not.
    • In the second, 31 MSc students compared human and AI feedback on referee reports without knowing the source. None preferred the AI for critical reasoning. 84% preferred the human feedback, which tended to challenge the student’s own thinking rather than polish the surface.
  • Chris Blunt and Jillian Terry described a conversation-based assessment in LSE 100, the compulsory interdisciplinary course all LSE undergraduates take.
    • Students had to write a 1,000-word commentary integrating multiple disciplinary perspectives, and could choose to prepare by having a structured conversation with either a peer from a different degree programme or with Claude AI. Only 15% chose to chat with the AI, and it is possible they only did so because they had run out of time to find a peer to chat with.
    • Of the 123 AI dialogues they coded, they found five categories of conversational quality: genuine dialogue (19%), question-driven (30%), drift toward extraction (21%), one-directional questioning (21%), and off-task (9%).
    • The AI tended to occupy progressively more of the conversation ‘space’ over time, and the most effective dialogues came from students who adapted the prompt template to their own questions rather than simply copying the default.
  • Diana Weinhold’s contribution was different in kind from the other presentations. Diana expressed how AI enables her to produce things she would not otherwise be able to create: frontend JavaScript code, certain mathematical formulations, etc. But she also stepped out of the classroom and got us to think about what role graduates will fill. She sees the most viable path as a bifurcation into two cohorts:
    • a smaller one trained as deep researchers who hold fundamental knowledge in human form, and
    • a larger one trained to direct AI toward useful ends.
    Each requires a different education strategy, and institutions that want to produce the first kind will need to find ways to resist market pressure, because market incentives reward the second.
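
Back to Antonio’s Socratic tutor for a moment. He did not go into implementation details and I have no idea how his tutor was actually built, but for the coders reading this, here is a minimal sketch of the general idea. It assumes the OpenAI Python SDK; the model name, prompt wording, and function are all my own illustrative choices, not his setup.

```python
# Minimal sketch of a Socratic tutor configuration. This is my illustration
# of the general idea, NOT Antonio Mele's actual setup. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The Socratic stance lives almost entirely in the system prompt: the tutor
# is told to probe with questions instead of handing over solutions.
SOCRATIC_SYSTEM_PROMPT = """\
You are a tutor helping a first-year student through an economics problem set.
Never give the final answer or complete a derivation for the student.
Respond with one or two guiding questions that help them take the next step.
If they are stuck, hint by asking about a concept they already know."""

def socratic_reply(history: list[dict], student_message: str) -> str:
    """Return the tutor's question-led reply, given the conversation so far."""
    messages = [{"role": "system", "content": SOCRATIC_SYSTEM_PROMPT}]
    messages += history  # prior {"role": ..., "content": ...} turns
    messages.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model would do
        messages=messages,
    )
    return response.choices[0].message.content
```

The interesting part of Antonio’s experiment was not the prompt itself, of course, but the measurement: checking across the transcripts how often the model actually held that stance (93% of exchanges, in his case).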

Ideas from further afield

The afternoon and closing panels brought in speakers from outside LSE as well as students. Most of them tried to propose answers to the questions the morning left open.

Closing panel “The future of learning in an AI-transformed world” at the LSE, KCL + PKU conference “Global Approaches to GenAI in Education” on 28 April 2026 at LSE. From left to right: Christopher Leung (student at University of Cambridge), Georg Graepel (student at LSE), Oguz Acar (KCL), Simon Walker (Southampton/LSE/UCL), Tamas Makany (SMU)

  • Oguz Acar, who leads Generative AI work at King’s Business School, has been focusing his attention on how we (and our students) learn with AI. He noted that several meta-analyses suggest the evidence on AI and learning is positive but with huge variance, and he finds that this spread across studies is what makes the question interesting. The meta-analysis I know best, Deng et al.’s (2025) review of 69 ChatGPT studies, illustrates this: the pooled effect is large, but as later pointed out in a commentary in Nature Reviews Psychology by Yan et al. (2025), it is hard to know what the effect size is actually measuring, and we might just be looking at how students perform better with AI rather than whether they actually learn better with it. (I have not surveyed every meta-analysis in the field, so Oguz may have had others in mind too.) Oguz also talked about his framework called PAIR (Problem, AI, Interaction, Reflection), which keeps human judgement in the loop at each stage of working with AI.

  • Tamas Makany, the Associate Provost (Teaching and Learning Innovation) at Singapore Management University, walked through how SMU has translated Singapore’s national “4 Learns” framework into a programme-level progression: students demonstrate core competencies without AI in Year 1, work with AI in teams in the middle years, and run very experiential capstones by Year 4. This is the closest I got to a concrete plan for how to re-engineer our education systems to keep them resilient against an “AI disruption”. I intend to learn more about their approach.

  • Georg Graepel and Christopher Leung, both students (Georg at LSE, Chris studying Natural Sciences at Cambridge), challenged the assumption that AI should be a “constant companion” in education. They argued that manual exploration exposes students to a wider and less predictable range of ideas than delegating to a language model, and that this unpredictability is where some of the most productive learning happens. They are reading and adapting from Schmidhuber’s formal theory of creativity, something I wasn’t aware of, which models a learning agent as one that seeks out states that are surprising but learnable (I sketch the idea just after this list).

  • Julia Chen, who directs the Educational Development Centre at Hong Kong Polytechnic University, called for educators to engineer ambiguity and non-linearity into their assessments. This resonated deeply with me. I wrote about Bjork’s concept of desirable difficulties in “The Wrong Test”. A desirable difficulty is a condition that makes learning feel deliberately harder during practice but leads to better long-term retention. Julia was arguing for building that kind of productive friction into assessment design so that students cannot shortcut their way through with AI. I would have loved to hear more about the specifics: what does an ambiguous, non-linear assessment actually look like in practice, and how do students respond to it? Julia might be receiving e-mails from me soon 🙃.
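
On Schmidhuber: I went to look up the theory after the conference. In rough terms (my own paraphrase and notation, so treat this as a sketch rather than his exact formulation): the agent maintains a predictor/compressor $p_t$ of its history $h(\le t)$, and its intrinsic “curiosity” reward at time $t$ is the compression progress, i.e. how much the predictor has just improved:

$$
r_{\text{int}}(t) = C\big(p_{t-1},\, h(\le t)\big) - C\big(p_{t},\, h(\le t)\big),
$$

where $C(p, h)$ is the cost (in bits, say) of encoding the history $h$ with predictor $p$. Fully predictable experiences yield no progress (they are boring), and pure noise yields none either (it is unlearnable); the reward is largest precisely for the surprising-but-learnable states Georg and Chris were describing.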

Staff & student manifestos: how we want to work with AI

(a) King’s Student AI Manifesto - KCLSU January 2026
(b) A Student Manifesto for Assessment in the Age of AI - LSESU 2024/2025
(c) Manifesto for the essay in the age of AI - Eden Centre (LSE)/King’s Academy (KCL) September 2024
Figure 1: Manifestos produced by students and staff at LSE and KCL between 2024 and 2026 about their vision for how to work with AI in education.

During a break I spoke with a group of KCL students who had brought a poster of the KCLSU AI Manifesto (January 2026), led by the postgraduate VP through a student partnership programme. It sets out five principles, from prioritising deep learning (of the human kind) over tool replacement to staff-student collaboration on implementation. They told me the LSE Student Manifesto for Assessment in the Age of AI had inspired their process. The LSE manifesto (2024/25), co-authored by twelve students and the Students’ Union education officer Wajiha Umar (a former student of mine!), was presented at the first iteration of this conference last year. The KCLSU manifesto includes a direct request that teachers not default to exams simply because they are the easiest option. Interestingly, this topic came up at the end of the day too, when a student on the panel voiced the opposite stance: they felt exams were the only way to ensure students were not using AI to do their work for them.

In the same vein, there is another manifesto I had a small part in: the Manifesto for the Essay in the Age of AI (September 2024) was written by 34 academics from KCL and LSE after a workshop in June 2024. Where the student manifestos argue for what assessment should do for learners, the essay manifesto is more of a defence of written coursework and of the shared belief we had (still have?) that written assignments should still exist as a form of intellectual practice despite the rise of AI tools that can produce polished writing.

The question I left with

The people in that room may be reading different literature, running their own experiments, and talking to people whose disciplinary and cultural backgrounds shape their perspectives on education, yet we are all aligned on a similar conclusion. The performance-learning gap (or ‘paradox’, as Simon framed it during his panel in the afternoon) is now shared vocabulary among researchers, institutional leaders, and students thinking about AI in education. We all know that because students can produce polished work (or polished-looking work) with AI, we cannot trust their outputs alone as evidence of their learning.

What nobody claimed to have solved is the practical question: how, precisely, to redesign our systems so that the credentials universities grant still mean something, and so that those systems reward learning rather than polished output. The ideas I heard (Socratic tutors, structured frameworks like PAIR, programme-level progression, engineering friction into assessment, process-based assessment) are all plausible directions, but I suspect we all still need to accumulate enough evidence and shared experience to know which ones are most effective and how to implement them in different contexts.

That is part of what I am working on. The GENIAL data from my DS105W course is one attempt to gather some more concrete evidence of how students adopt an AI suggestion or override it when doing their work, and what the traces of that use look like when read carefully. If the patterns hold up, they should help us design better.
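
To give a flavour of what “reading the traces” can mean in code (a toy sketch only, not the actual GENIAL pipeline; every name and threshold here is made up for illustration): given a log of AI suggestions paired with the code a student finally kept, you can label each event by textual similarity.

```python
# Toy sketch: label AI-suggestion events as adopted/edited/overridden by
# comparing each suggestion with the code the student finally kept.
# Purely illustrative; NOT the actual GENIAL analysis pipeline.
from difflib import SequenceMatcher

def classify_event(suggestion: str, final_code: str,
                   adopt_threshold: float = 0.9,
                   override_threshold: float = 0.3) -> str:
    """Label one (suggestion, final_code) pair by textual similarity."""
    similarity = SequenceMatcher(None, suggestion, final_code).ratio()
    if similarity >= adopt_threshold:
        return "adopted"      # kept the suggestion (almost) verbatim
    if similarity <= override_threshold:
        return "overridden"   # discarded it and wrote something else
    return "edited"           # kept parts of it but reworked the rest

# A made-up trace: one dict per AI suggestion event.
events = [
    {"suggestion": "df.groupby('country').size()",
     "final_code": "df.groupby('country').size()"},
    {"suggestion": "df.dropna()",
     "final_code": "df[df['year'].notna()].copy()"},
]
print([classify_event(e["suggestion"], e["final_code"]) for e in events])
# e.g. ['adopted', 'edited'] -- where the labels land depends on the
# (arbitrary) thresholds; the real work is in validating them against
# a careful manual reading of the traces.
```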

References

Deng, R., Jiang, M., Yu, X., Lu, Y., & Liu, S. (2025). Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Computers & Education, 227, 105224. https://doi.org/10.1016/j.compedu.2024.105224
Yan, L., Greiff, S., Lodge, J. M., & Gašević, D. (2025). Distinguishing performance gains from learning when using generative AI. Nature Reviews Psychology, 4, 435–436. https://doi.org/10.1038/s44159-025-00467-5