
Algorithms of Education: Data and its role in education policy
Summary
How do educational policy studies need to shift to remain adequate to the emergence of powerful forms of technology? In ALGORITHMS OF EDUCATION, Kalervo N. Gulson, Sam Sellar, and P. Taylor Webb explore how, for policy makers, big data creates the illusion of greater control over educational futures. In this episode, Gulson and Sellar discuss new strategies for, and a new politics of, education. At the heart of the whole project is an interest in the way that new kinds of machinic cognition, artificial intelligence, are emerging within education policy spaces.
Kalervo N. Gulson:What we are focused on is not what policy makers think about how artificial intelligence is changing policy, though we did do some of that work. We're also interested in how people build things that end up being used in governance. Oh, hello, everyone. I'm Kalervo Gulson, professor of education policy at the University of Sydney in Australia. It's great to be here on the University of Minnesota Press podcast with my coauthor, Sam.
Sam Sellar:Hi, Cal. Hi, everyone. My name is Sam Sellar, and I'm a professor of education policy at the University of South Australia.
Kalervo N. Gulson:And, unfortunately, we can't be joined by our other co-author, Taylor Webb, who's an associate professor of education policy at the University of British Columbia in Canada. So we're here super excited to talk about our new book with the University of Minnesota Press, called Algorithms of Education: How Datafication and Artificial Intelligence Shape Policy. This book was written over a long time, disrupted, like everything else, by COVID. We weren't able to work together as closely as we would have liked. Sam, for much of this book, was in Manchester.
Kalervo N. Gulson:I was in Australia. Taylor was in Canada. And the book draws together work that we've been doing in different configurations, and now it's finished. We thought it might be really interesting to look back and see what the book does, to ask ourselves about it from our different perspectives, and to see actually if, as co-authors, we have different perspectives on what it's about.
Sam Sellar:So, Cal, do you wanna kick us off with a brief overview of what we aim to do in the book?
Kalervo N. Gulson:Yeah. Absolutely. So our book's an exploration of what lies behind the use of data in contemporary education policy. While science fiction tales of artificial intelligence eclipsing humanity are still very much fantasies, in the book we aim to tell real stories, and to do more than speculate, about data and algorithms and machines and how these are transforming education governance today. We explore how, for policy makers, today's ever growing amount of digital data creates this illusion of greater control over the educational futures of students and the work of school leaders and teachers.
Kalervo N. Gulson:But for us, the increased datafication of education, and by that we mean this increased use of data in governance, actually offers less and less control, as algorithms and artificial intelligence further abstract the educational experience and distance policy makers from teaching and learning. And so what we're doing in the book is suggesting that schools and governments are increasingly turning to synthetic governance, and this is our central concept that we'll talk about a bit more. This is a governance where the line between human and machine becomes less clear, as a strategy for optimizing education. In the book, we have a number of empirical case studies looking at things like data infrastructures, facial recognition, and the growing use of data science in education. And we conclude the book by saying that we wanna go beyond debates about separating humans and machines, to actually try to think about these things together and to develop a new critical politics of education.
Kalervo N. Gulson:So, Sam, let's get into it. I wanna start with the ways into the book. Like all books, you know, there are multiple ways we could get into things. But I think one of the most interesting is connected to our interest in speculation and science fiction.
Kalervo N. Gulson:And we open the book with a scene from Alex Garland's film, Ex Machina, which is about artificial general intelligence. So much of the impetus for the book comes from science fiction, this novelty and excitement in thinking about AI. What role do science fiction and speculative thinking play in the framing of the book?
Sam Sellar:Yeah, I think it's really important. And in many ways, it's where the book had its original genesis, in some of our conversations about, I guess, science fiction perspectives on AI, on what we would call artificial general intelligence, the kind of AI that we see in films like Terminator, for example. But this isn't a book about artificial general intelligence or AGI. What we do try to do, I think, is to take seriously the possibility that AGI might emerge, and to think about how the emergence of this version of artificial intelligence could quite dramatically transform education and education politics and policy. So if this were to occur, how would we need to be thinking now in order to keep pace with these kinds of developments that we often hear talked about and promised by a range of commentators on AI? This kind of science fiction perspective is really the key entry point into the problem that we try to address in this book, and it's been really helpful for us. I think back to a number of conversations that we've had over time, as part of various projects that we've been involved in together, where we've been really excited by what we're hearing from research participants about really quite amazing technical developments.
Sam Sellar:I think back to a conversation that we had a few years back in Vancouver about quantum computing, for example, and the transformative possibilities of quantum computing. And I also think about our discussions about technologies like DeepMind's AlphaGo, and I'm sure we'll come back to that later in our conversation as well. So I think these have been really important provocations to thought, and they set the scene for the book in many ways.
Kalervo N. Gulson:Yeah. I think that provocation to thought is something I was wanting to get into a little bit, because we are talking in the book about speculative ideas, like the possibility that things will happen. But we are also tying that into things that are already occurring. And so do you think our thinking with science fiction helps us to be a little bit more comfortable with that kind of speculation, working in this kind of speculative register?
Sam Sellar:I think it certainly helps us, and, you know, readers will see, scattered throughout the book, particularly at the opening of each of the chapters, references to science fiction texts that have been particularly helpful for us. And so I think it does give us license, or at least we use it as an opportunity, to think in a more speculative register throughout the book. In my experience, talking about the ideas in the book and the various projects and pieces of research that inform it, I found that it bothers other people a little bit. I think people perhaps struggle a little bit with this speculative approach that we take. You know, we're talking about what might happen, not what we're seeing happening right at this moment.
Sam Sellar:So we are taking a bit of a departure from an empirical basis for the argument, to suggest that we're not quite at this point yet where AI is really having a dramatically transformative impact on education and society. But our gamble is that we're not too far off that. And if we want to be in a position where we can actually keep pace with those developments, then we need to be undertaking new kinds of theoretical and methodological developments now, so that we can actually make sense of these changes as they occur.
Kalervo N. Gulson:Do you think that some of the discomfort with what we're talking about, when we present it, is connected to teachers', school leaders', and other academics' experiences in research on datafication? And if so, is the kind of datafication we're talking about in the book something different?
Sam Sellar:Yeah. I mean, I think that's partly responsible for some of the questions and concerns that people have raised about the approach that we take here. I think there is a general view amongst many educators and critical scholars of education that numbers and digital data have a potentially negative effect on the educational project, that they can be quite limiting. They promote an instrumental form of rationality, certain ways of measuring performance and holding people to account for that performance. And I think there is a general view that these are negative developments.
Sam Sellar:And I think us starting to think about datafication and digital data and AI in, I wouldn't say more positive senses, but with a more open mind to the possibilities of what these developments might produce, sits a little bit disjunctively with that critical view of numbers and the role that these kinds of data have played in education up until this point.
Kalervo N. Gulson:Yeah. I think, you know, for me too, it's this role of data, and then this role of machines, and then this role of humans, and how these things fit together. And I can remember sitting in the bar of my hotel in Manchester when we were working on the book in early 2020. Seems an age ago, doesn't it?
Kalervo N. Gulson:I probably haven't kept track of the years here, but what I can remember very distinctly is when we had this stroke of inspiration. We'd been working quite solidly for a week or so at that stage, thinking about what holds this book together. And I can remember it was quite late in the day. I was sitting in the hotel bar, and you came in, and we just had this stroke of inspiration about the central idea, what we came to call synthetic governance, that really holds the book together. So do you wanna talk a little bit about this idea?
Sam Sellar:Yeah. I think that's important given its prominence throughout the book. So, I mean, I guess at the heart of the whole project is an interest in the way that new kinds of machinic cognition, artificial intelligence, are emerging within education policy spaces. And we're interested not only in the AI, but in the way that that gets taken up by human actors in those policy spaces. So we're interested in the combination of the human and the machine.
Sam Sellar:And in emphasizing the distinction between machines and humans, you know, we don't want to do that in a way that's too artificial, because humans and machines have an incredibly long history of coevolution, and we'll talk more about that. I think the concept of synthetic governance really helped us to think about the synthesis of the two, the bringing together of two forms of thinking and action within education policy. So we define synthetic governance as an amalgamation of, on the one hand, human classifications, human rationalities, values, calculative practices, and, on the other hand, the rise of algorithms, data infrastructures, and AI. And we argue that this synthesis creates new potential for thought and action in education. So synthetic governance is not human or machine governance, but human and machine governance.
Sam Sellar:And as I just noted, I don't think there's anything particularly revolutionary about saying humans and machines interact in very fundamental ways, but it's that interaction that's really at the heart of what we try to explore here. So we argue that synthetic governance arises from conjunctive syntheses, and here we're borrowing an idea from the work of Gilles Deleuze and Felix Guattari, their notion of a conjunctive synthesis that brings together and integrates data-driven human rationalities and computational rationalities. And it gives rise to a form of cognition that traverses both machines and human bodies. And so we argue that this development involves performance and administrative data being increasingly generated, collected, and analyzed in different configurations in order to govern synthetically. So what we're arguing is, you know, there's a long history of human policy actors working with data of various kinds to make decisions about schools and education systems, and we're seeing the rise of new computational capacities to do that.
Sam Sellar:What happens when you bring the two together? I think what I've probably emphasized here is the synthetic side of synthetic governance, and I think it's important to come to the governance side of the ledger as well. We've both come to this book, Cal, as policy scholars, and our research is located in Education Policy Studies, yours perhaps more so in geographies of education policy. I tend to see myself as a sociologist of education who's been interested in assessment data, digital performance data, and now the rise of AI and big data analytics. That's sort of the trajectory of my research interests over recent years.
Sam Sellar:And so we're both interested in AI, I think, but not so much from a technical standpoint. You know, we're not interested in how we create new forms of AI, or even particularly in how they shape teaching and learning in the classroom, but we are interested in how the introduction of AI will have an impact on the governance of education. And there's a lot of governance in this book. We talk about digital governance, network governance, and, of course, synthetic governance. So what do we mean by this term governance?
Kalervo N. Gulson:Well, the first thing we don't mean is government. Governance, for us, is not government. And we do talk about both governance and policy interchangeably in the book, and I think that reflects an ongoing part of our field where those things are becoming synonymous. And what that does is reflect the turn to governance, the governance turn, a move away from a state focus to systems levels of governance.
Kalervo N. Gulson:We're interested in these ideas of multilateral and networked formulations of governance, and the important part for us is that these formulations attempt to anticipate and predict and frame problems. Those terms really come up a lot for us in the book. And I think this is very important, because what we're suggesting is that earlier approaches to education policy emphasized ideas of deliberation, decision making, and rational choice. And we're trying to set up a little bit of a distinction from those particular kinds of rationalities. And we're looking at types of governance that operate through networks of public and private and voluntary actors, and that's very much located in a lot of the work around network governance in education.
Kalervo N. Gulson:But what we're also interested in is what happens when there's the human and the nonhuman in this governance. What are nonhuman policy rationalities? Can we even conceive of those things? And I think we're very interested in the book to actually play that through, and to see where we might end up in thinking about those kinds of arguments.
Sam Sellar:And so the governance turn that you've just spoken about has occurred, I guess, within the last two to three decades, and it's involved that broadening of the set of actors who are involved in governing societies, going beyond the state to include companies, non-government organizations, philanthropic actors, and others, broadening out who is involved in these networks of governing. In many ways, I think we're talking about broadening that out even further to include the nonhuman in those networks. So that's a shift, I suppose, that's happened over the past few decades, but you're also interested in a longer history of the policy sciences, and some of the aspects of policy science that persist today, that have been with us for a much longer period of time.
Kalervo N. Gulson:Absolutely. I think this is what Taylor and I have been doing pretty much over the last decade, which is trying to look at the kind of continuities and disruptions of the rationalities in policy. And there's this idea that the policy sciences, as something that emerges with systems thinking in the nineteen fifties, also seem predicated on notions of being able to somehow develop a form of certainty about decision making. If we then jump forward to when the three of us started to enter the field of policy studies in education, at that stage it was around critical policy studies and the idea of locating policy as something that's ad hoc, uncertain, difficult to really have any kind of prediction around.
Kalervo N. Gulson:And then we end up where we are now, where there's a return to the policy sciences. You know, we see quite a lot of work emerging around this idea of new kinds of certainty, and more discussions of causality in policy making. And so I think what holds our focus in the book together is this notion that there's a political rationality of prediction that tends to be rehearsed and recuperated across the decades from the fifties. Some of this focuses on prediction, on devising technical approaches to the provision and administration of schools and systems.
Kalervo N. Gulson:Some of this is around connecting to a formal techno-rationality of governance. And what we see now is approaches like education data science, which puts together different disciplinary backgrounds, from computer science to psychology to neuroscience, coming together and saying there's a particular way of thinking that's occurring now around policy. Much of what I've just described you could probably say, to a greater or lesser extent, of any time from the fifties to now. But what seems to be new in the present is AI as a particular modality of thought. And I think what we're trying to spend a small amount of time in the book actually understanding and theorizing is what machine thinking, machine thought, about policy would be.
Kalervo N. Gulson:And so if that's where we've located governance, I think it's worth digging into this idea of thought a bit more. And I think what's so interesting, Sam, and it would have been so good to have Taylor here in the room as well, is that across so much of our time working together, we've had so many conversations about concepts and concept work in education, and a sense of unease about whether we have the right concepts in critical policy studies for looking at cutting-edge developments like artificial intelligence and big data. And so in the book, I think we really try to get to our interest in this by answering the question of, you know, how do we begin to grasp the nonhuman quality of contemporary education governance?
Kalervo N. Gulson:And so I was wondering, from what you're seeing, what do you think are the most important concepts for us to do that in the book?
Sam Sellar:Yeah. I mean, it's a book full of concepts, I would say that. There's a lot of concepts in this book. But I think, for my money, two of the most important ones, and I guess two concepts that have provided a really important provocation for the whole project, are concepts that we draw from Katherine Hayles on the one hand and the work of Luciana Parisi on the other.
Sam Sellar:And I think it's these two concepts that really underpin our argument about the role that AI might come to play in education policy and governance. So I'll just say a bit about each of them in turn. First, I suppose we lay the foundation for the theoretical framework of our book by drawing on Katherine Hayles' work on cognition, and particularly her concept of nonconscious cognition. So Hayles argues that cognition is a material informational process that doesn't necessarily involve consciousness, or what we might call thinking, to distinguish thinking from cognition that is nonconscious. And so some obvious examples of nonconscious cognition would be certain kinds of animal cognition, insect cognition, for example, but also our own cognition.
Sam Sellar:Not everything that we cognize comes to presence in our own consciousness, so we don't actively think about everything that we might be cognitively involved in processing. So we have conscious thought and nonconscious cognition within ourselves as well. And this concept of nonconscious cognition, I think, helps us to think about machinic cognition, which is a central issue for us. And then, on the other hand, I think Luciana Parisi's work is really helpful, because she offers us a perspective on nonconscious cognition that is quite novel. So I think for many people, AI still appears to be a limited form of nonconscious, machinic cognition, a quite instrumental rationality.
Sam Sellar:I think many people would take the view that, yes, AI might involve a kind of thinking, but it's a very limited kind of thinking, and it can't really go beyond what humans program the AI to think and do. And what Luciana Parisi argues is no. With some of the developments that we've seen in AI over the past few decades, we might be getting to the point where AI can actually think in creative and novel ways. So she points particularly to the shift from symbolic, rule-based approaches to AI, what we could call good old-fashioned AI, to the rise of machine learning, and then specific subsets of machine learning, such as deep learning and reinforcement learning, which involve training layers of algorithms in artificial neural networks on very large datasets. And in the process, she argues, these algorithms learn not only from the data on which they're trained, but also from the other algorithms in the network.
Sam Sellar:There's a kind of meta-learning that occurs. Algorithms are learning from the data and from the other algorithms. And in the process, perhaps something new can emerge that goes beyond what the programmers of these systems initially envisaged. So I think it's this combination of nonconscious cognition taking us beyond human thinking, and then Luciana Parisi's notion of a sort of automated thinking, that gets us to the point where we can start to speculate about the creative potential of AI in education policy.
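[Editor's note: to make the idea of training layers of algorithms concrete, here is a minimal sketch of a two-layer neural network learning the XOR function, written in Python with NumPy. It is an illustration of the general technique only, not code from the book or from any system the authors studied; the architecture, learning rate, and data are arbitrary choices for the example.]

```python
# A minimal two-layer neural network trained on XOR with plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer 1: input -> hidden; Layer 2: hidden -> output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the hidden layer's update is computed from an error
    # signal routed back through the output layer's weights, so the
    # layers shape each other's learning.
    err_out = (out - y) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_hid
    b1 -= 0.5 * err_hid.sum(axis=0)

print(np.round(out, 2))  # converges towards [[0], [1], [1], [0]]
```

The backward pass is the point of contact with the argument as Sellar summarizes it: each layer's update flows through the other layer's weights, so no layer learns in isolation.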
Kalervo N. Gulson:Yeah. I think it does get us there, and I think it also pushes us right up against the issue of where we locate thinking in AI, and how we actually start to think about that. I think we ended up with this idea of synthetic thought. To locate what we're trying to do, we ended up moving from governance to these theories of AI, to thinking that we really have to try and figure out where this sits in relation to theories of technology and, of course, ideas of thought.
Sam Sellar:Yeah. I think, throughout the book, we engage with different perspectives on philosophy of technology, and, you know, underpinning the notion of the synthesis, of synthetic governance, there's a whole tradition of philosophy of technology. I'm thinking particularly of the work of thinkers like Bernard Stiegler, who have pointed to the continual coevolution of humans and technology. And Hayles points to this as well. She refers to the notion of technogenesis, the idea that humans and technics continually coevolve.
Sam Sellar:And so I think the synthesis that we're talking about in this book is most definitely not new. I think its history goes back as far as human history and human culture and the emergence of human technology. But what I do think is interesting about the present moment is the possibility of a step change. And I guess this is the gamble of the book.
Sam Sellar:You know, if we do see the emergence of artificial general intelligence, really powerful new forms of technology, then perhaps that will be a development matched only in significance by the emergence of writing thousands of years ago. You know, we might be on the cusp of a really important shift in our technical development, and in the book, we really try to grapple with that prospect and think about how education policy studies might need to shift in order to remain adequate to the impact of that change on our field. So the concepts that we use in the book, you know, this speculative approach that we've been talking about and some of these concepts that we've just talked through, they don't simply emerge from the empirical studies that we cover across the different chapters, particularly the later chapters of the book. We were kind of doing that empirical work, weren't we?
Sam Sellar:And at the same time, searching around for concepts that seemed to match up with the new directions that we were seeing emerge in different places. And so we've always been doing that conceptual work in parallel with the empirical fieldwork. And I know it's a particular interest of Taylor's, but also an interest for both of us, and that's Bourdieu's notion of fieldwork in philosophy, that you take a philosophical approach to empirical studies. And I think that's certainly an approach that has underpinned our experience across the studies that we draw on and talk about in this book.
Sam Sellar:Probably what we do here is undertake a bit of a thought experiment with our own work. You know, we draw on studies that we had started before the idea of this book fully emerged, and we've used the book as an opportunity to go back and think about those approaches, and to think about our own methods and the limitations of those methods. So it's not just a book about new concepts. It's also a book about new methods, or at least the need for new methods. So I'm just wondering, Cal, what are your thoughts on how our methods, and also our concepts, can change the way we think about the problems that we address in our own research, but also the problems that really exercise the minds of education policy makers?
Kalervo N. Gulson:The challenge that's posed in what you said is whether or not we've come to the limits of our methods. You know, we're taking a position that methods don't just investigate; they intervene in the phenomenon, they act as a kind of catalyst, and they help us to reconfigure the problems that we pose about phenomena. But I think we're all acutely aware of this problem that, you know, we are using social science methods, human-based methods, to try to investigate nonhuman decision making.
Kalervo N. Gulson:And I think we really grapple with that quite a lot in the book, and we were acutely aware of work that had been emerging saying that social science, you know, didn't have a role anymore in big data, and in understanding how data and artificial intelligence were changing the social world. So I think where we end up is that there is a role for things like ethnography in understanding these worlds, in understanding artificial intelligence. And so, you know, we attempt in the book to think about how undertaking things like interviews with technical people gives us a different way into governance. Because what we are focused on is not what policy makers think about how artificial intelligence is changing policy, though we did do some of that work.
Kalervo N. Gulson:But we're also interested in how people build things that end up being used in governance. So in some ways, the question hanging over that is: is it really new methods, or is it a new targeting of methods at a different kind of site of investigation? Developers, not users, and so forth. So I think we end up there. Whether or not we end up with new methods for investigating education governance is perhaps not the case.
Kalervo N. Gulson:But I think we end up with a new focal point for investigating education governance. I think that's what's kind of interesting.
Sam Sellar:Yeah. I mean, I think that's right. I think we were involved in a number of different studies in which we were drawing on quite well established qualitative research methods: ethnography, interviews with policymakers and software developers and business intelligence managers in different organizations. But those methods were providing a window onto developments that were quite new, I think. And so we did really have to grapple with the fact that we were getting told about these things, but our methods weren't really getting at
Kalervo N. Gulson:No.
Sam Sellar:No. The technical developments themselves. And so I think we did wrestle a little bit with that disjunction between what we were able to capture empirically
Kalervo N. Gulson:Yep.
Sam Sellar:And what we were able to then say, perhaps in a more speculative way, about the potential impact of what was happening on the technical side.
Kalervo N. Gulson:I think that unsettling takes us into the kind of reason for problematization. You know, Taylor's done a lot of work around this in policy, and so, drawing on Foucault and Isabelle Stengers, I think we've grappled with what can help us not just recognize things that are unsettling, but also figure out a way in which we can look at the possibilities of that concept. And so we see here that problematizations focus on specific situations in which there's contestation and provisional settlement around the authority and acceptance of different forms of governance. So we're watching these simultaneous forms of governance happening, I think, in the book: this analog form of governance, and then this new form that we're calling synthetic governance. And problematizations help us to see the ways in which there's a simultaneity in the way that problems and solutions are created in policy.
Kalervo N. Gulson:I think the reason why it works so well for us in the book, and it took us a while to get to this, is that, you know, if we look at Stengers' idea of problematization, she talks about it as always involving experimentation and possibility. So it's not just mapping something, mapping contestation; it's experimentation in order to understand changes in thought and to unsettle common sense. And I think we were seeing that in the work that we were doing ourselves and in what we were looking at. But also, really importantly, and I think this is the part where, when we're talking with people about this, it's sometimes a struggle to really impart it, we're really saying that there's no outside of algorithms and AI in education governance. It's not something coming from the outside that we can see as discrete.
Kalervo N. Gulson:It's now completely implicated. And so problematization as experimentation becomes necessary, because it actually helps us to see what's possible when machines and humans are conjoined in synthetic governance. And for Stengers, she takes up problematization as a possibility of transformation. So it has a political role. And so we see that experimentation can happen alongside debate about the educational and political implications of AI and datafication in policy.
Kalervo N. Gulson:And so we have these lines for looking at the implications for the education field. And I think what became so interesting once we got there is to look at what actually allows that to happen. Like, what do you have to build in order for these experiments and these changing developments to happen? And so, years ago, we became quite interested in the idea of infrastructure as a way into this, though it didn't appear immediately obvious.
Kalervo N. Gulson:Both of us, and Taylor, were part of a large mapping of data infrastructure in the US, Canada, Japan, and Australia. And so some of that work sits in this book. I'm really kind of interested because you had done a lot of thinking about how data infrastructures work together, and then we got into Keller Easterling quite a bit. So what do you think it is about Easterling's concept of infrastructure?
Kalervo N. Gulson:How do you think that shapes our thinking, actually, in the book?
Sam Sellar:Yeah. I mean, I think in many ways it's an interest in data infrastructure that is where we begin the empirical part of this work, because the three of us, you know, were working together with other good colleagues on a project investigating data infrastructure. And, you know, my interest in data infrastructure was most certainly prompted by reading Keller Easterling's book, Extrastatecraft. There's been some really interesting writing about data infrastructure in the field of, let's call it, big data studies. But what's particularly helpful for me in the work of Keller Easterling is the very broad definition of infrastructure that she offers.
Sam Sellar:So I think when people talk about infrastructure, what commonly comes to mind is a set of material things: roads, rails, pipes, cables, the material infrastructure of societies and urban spaces. And that's certainly a part of what Keller Easterling describes as infrastructure, but she goes beyond that to point to a range of less tangible things that also make up part of the urban infrastructure. So standards, regulations, things that others in infrastructure studies have been highlighting as important elements of infrastructure for a long time too. And then beyond that, she talks about code and software being part of our contemporary social infrastructure. And we can also think about, you know, radio waves, Wi-Fi, mobile technologies, the Internet of Things.
Sam Sellar:So for Easterling, infrastructure isn't just, you know, material stuff; it's an active form. It's the material and the immaterial operating together to shape the way that we inhabit urban spaces, and to shape the affordances and limitations of those spaces. And I think she draws quite heavily on, you know, the work of Foucault again, particularly his concept of dispositif,
Kalervo N. Gulson:to think about
Sam Sellar:infrastructure as an arrangement, an assemblage, a dispositif that configures contemporary life. And clearly now data play a very significant role in that social infrastructure. And so, linking back to what we've just been discussing in relation to Hayles' work on nonconscious cognition, I think we could see data infrastructure as shaping the cognitive ecology of contemporary life, perhaps most significantly through new kinds of digital platforms. I'm thinking of Netflix, Amazon, platforms that create a very different form of infrastructure today. And so it's that, I think, that is most interesting about her work, and that certainly underpins the way that we think about infrastructure in this book.
Sam Sellar:You know, just thinking a bit further about the concept of infrastructure, really it's what makes synthetic governance possible. It's the kind of bedrock on which that form of governance begins to emerge. And in the book, we also focus on what gets done when machines begin to become involved in education governance on the basis of that infrastructure. You know, we've seen over time a move from the analog collection of data in schools, I'm thinking of taking attendance records, grading paper-and-pencil tests, to the collection and analysis of that data in digital formats.
Sam Sellar:Then on that basis, you get very early forms of artificial intelligence as people start to train algorithms on that digital data, and that's bringing us up to where we are today. So it's the data infrastructure that makes the AI possible, and machinic thought in education possible. And in the book, we look at one particular example of that, which is the use of AI for facial recognition technologies. And I know that this is a particular interest of yours. So why do you think facial recognition is such a good example of what might happen as machines begin to get involved with education governance?
Kalervo N. Gulson:The important part about it is that it's something that's so controversial. Facial recognition seems to strike at the heart of people's concerns about privacy and being always visible. On the one hand, in education, it's really not that prevalent. But the point of it being in the book is that it could so easily be incorporated into existing data infrastructures. And it is, in some of the examples that we talk about in the book.
Kalervo N. Gulson:It could so easily be just another part of student information systems. In some of the examples, it was already part of attendance taking. It is more controversial in the public imaginary, yet completely benign and mundane in the way in which it's actually applied in education. So there's that part. It's such a good example because it's so easily incorporated.
Kalervo N. Gulson:It's part of the, you know, computer vision field. And so it uses machine learning, creates facial signatures, and identifies a face. It's such a good example because it's this non-domain-specific technology. The same technology that's used in education is the same as that used in policing and surveillance systems. In fact, some of the same companies that have education products with facial recognition also provide products for police and security services.
Kalervo N. Gulson:So it's a really good example in one way, because it talks to the abstraction of these technologies and their ability to come into governance. But the other key part, really, is that it can really reshape what's understood about governing and learning in schools. And one way to think about this is to think about not just the identification of faces that attracts so much interest, so not just locating a face in a classroom, for example, and saying this student is here, but the claims about what meaning is ultimately being made when faces are captured and analyzed. And so one of the examples of how this can be used in the classroom is to create a score of student engagement.
Kalervo N. Gulson:And one of the ways that the system can do that is to take photos of the classroom every second, run the stream through neural networks, and create a score for learning, attention, engagement. But how it connects engagement to the facial recognition system is that if your eyes are closed, then you're not engaged. And so it shapes what we understand about engagement. And then when that information is used in a school as an administrative device, it's mostly narrowing what we understand as engagement. So it looks like real-time data about student engagement, but the assumption is that you can somehow or other get inside the head to understand that.
Kalervo N. Gulson:It completely precludes the idea that you might close your eyes to think deeply. It ascribes a certain meaning to the face, and it converts that into a set of scores. So while facial recognition is definitely not prevalent, it's a super interesting example, I think, of the ways in which this can start to shape practices.
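[Editor's note: a minimal sketch, in Python, of the kind of engagement-scoring pipeline Gulson describes. Everything here is hypothetical: the Frame type and its eyes_open field stand in for the output of a real computer vision model, and no actual vendor's system is being described.]

```python
# Hypothetical engagement scoring: a once-per-second eyes-open signal
# is averaged into a 0-1 "engagement" score per student.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Frame:
    student_id: str
    eyes_open: bool  # stand-in for a vision model's per-frame output

def engagement_score(frames: Iterable[Frame]) -> dict[str, float]:
    """Average the eyes-open signal into a score per student. Note what
    this precludes: closing your eyes to think deeply is simply scored
    as disengagement."""
    seen: dict[str, list[bool]] = {}
    for f in frames:
        seen.setdefault(f.student_id, []).append(f.eyes_open)
    return {sid: sum(obs) / len(obs) for sid, obs in seen.items()}

# One simulated minute of frames for a student who closes their eyes
# for 15 seconds to concentrate: the score drops to 0.75 regardless of
# what was actually going on in their head.
frames = [Frame("s1", eyes_open=(t < 45)) for t in range(60)]
print(engagement_score(frames))  # {'s1': 0.75}
```

The point of the toy example is the narrowing Gulson identifies: whatever "engagement" means, the system reduces it to the proxy it can measure.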
Kalervo N. Gulson:The potential that's there. And I think that kind of potential, even in emergent use, is something that we've really been trying to trace through the book. And I was thinking, preparing for this discussion, about your long-term interest in AlphaGo.
Kalervo N. Gulson:DeepMind's AlphaGo. We've talked a lot about this, and you've talked about having a kind of ongoing thought experiment with yourself about AlphaGo. And I'd be interested to hear you talk a little bit more about the resonance for some of the work that we do on emerging data science in education, and its relationship to Go. So tell us a bit more about it.
Sam Sellar:Yeah. I mean, absolutely. I'm always happy to talk about AlphaGo. For me, DeepMind's AlphaGo just feels like one of the best examples we have of the kind of creative potential of AI that I find most interesting. So I think what I've been trying to do for a number of years now is to think about AlphaGo, and now AlphaZero, and what those AI technologies can do, and then in parallel to think about what the implications might be if we were to see that capacity introduced into educational spaces.
Sam Sellar:So AlphaGo isn't really an educational example. I mean, it is insofar as these algorithms are machine learners, and we're talking about machine learning. But they haven't seen much application in education yet, and I don't think we're at the point where we're seeing the application of AI as developed as AlphaGo in education. So, again, this is another one of those situations where we are speculating rather than looking at empirical developments in education. But what I think is most interesting about AlphaGo is its capacity to change the way that we think about the game of Go.
Sam Sellar:So I really love, there's a terrific documentary about AlphaGo that was released in 2017. And at the heart of the documentary is the thirty-seventh move in the second game that was played in a flagship series of games between a Go master, Lee Sedol, and AlphaGo. So move 37 in the second game occurred when Lee Sedol, the human player, had gone outside to take a cigarette break. He was, you know, pretty stressed playing against AlphaGo. He'd already lost the first game in this five-game series, and it was a really high-profile series of games set up by DeepMind to really show off, you know, its developments in this area.
Sam Sellar:I think Lee Sedol was feeling the pressure of, you know, playing for human Go players against this artificial intelligence. So he goes outside, has a cigarette break, and comes back in to see the thirty-seventh move in the game had been played by the computer. There was a human player that moved the stone, but he was following the directions of the AI. And Go, for people who aren't familiar with the game, is, I guess, similar in some ways to chess. It's a board game.
Sam Sellar:You place black and white stones on the board, and you try and gain control of as much territory as possible. But where it differs from chess is its complexity. So chess has a finite number of possible moves that you can make at any point in a game, and so does Go, but the number of possibilities is massive. And so we can't produce AI that plays the game of Go in the same way that previous forms of AI, like Deep Blue, have played chess. They were rule based.
Sam Sellar:So if you think about Deep Blue, it could look at a chessboard and calculate the absolute best possible move at any point in time. Whereas AlphaGo can't do that. It has to form a hypothesis about what might be the best move and then test that out. So coming back to move 37, this was a situation in which the computer made a move that no one had ever considered previously. So move 37 took everyone by surprise.
Sam Sellar:The people watching the game, you could almost hear the air being drawn out of the room. People thought it was a bad move. They didn't really understand why the computer had played this move. And when Lee Sedol came back in and saw the move on the board, you could see the shock on his face too. He'd been taking maybe two minutes to play each of his previous moves, and he spent twelve minutes looking at move 37 trying to decide how he could respond just because it was so novel.
Sam Sellar:No one had seen this move before. Anyway, he went on to lose the game. He went on to lose the series, and we've now come to view that move as actually opening up a new way of understanding the game of Go. So, you know, we've been playing this game for thousands of years. These Go masters thought they had it all pretty much under control, and yet AlphaGo has taught us how to play the game differently, how to value different kinds of moves, and it's now training human Go players to become better at playing Go, and we've seen them rise up the world rankings as a result.
Sam Sellar:So it's this possibility that AI could actually radically change the way we do something that we've been doing as humans for thousands of years, and open up new perspectives on something that we thought we knew very, very well, that really interests me. And I wanna take that possibility and transpose it into education, to say, well, what if we brought AI to bear on the education problems that we thought were enduring, that were really difficult to solve, that have been with us for a long time? Could AI open up a very new perspective on those and then help us to think differently?
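[Editor's note: a toy sketch, in Python, of the contrast Sellar draws between rule-based exhaustive evaluation, Deep Blue style, and hypothesis-and-test search based on sampled playouts, crudely echoing the Monte Carlo idea behind AlphaGo-style systems. All functions and values here are invented for illustration.]

```python
# Toy contrast between two search styles over a one-shot "game".
import random

MOVES = list(range(10))  # toy game: choose one of ten moves

def fixed_evaluation(move: int) -> float:
    # Deep Blue style assumption: a hand-crafted rule scores any move.
    return -abs(move - 7)  # the rule says move 7 is best

def playout_result(move: int, rng: random.Random) -> float:
    # Hidden "true game": noisy outcomes the rules don't capture.
    true_value = {3: 1.0}.get(move, 0.2)  # move 3 is secretly strong
    return true_value + rng.gauss(0, 0.5)

def rule_based_choice() -> int:
    # Exhaustive: score every move with the fixed rule, take the max.
    return max(MOVES, key=fixed_evaluation)

def sampling_choice(n_playouts: int = 200, seed: int = 0) -> int:
    # Hypothesize-and-test: estimate each move's value from playouts.
    rng = random.Random(seed)
    estimates = {
        m: sum(playout_result(m, rng) for _ in range(n_playouts)) / n_playouts
        for m in MOVES
    }
    return max(estimates, key=estimates.get)

print(rule_based_choice())  # 7: the move the hand-written rule prefers
print(sampling_choice())    # 3: a strong move the rules never valued
```

In this miniature, the sampling player discovers a strong move that the hand-written evaluation never valued, which is the shape of the move 37 story.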
Kalervo N. Gulson:Yeah. I think something that I've talked about a lot is that it isn't just about creating new thinking, or AI helping us think differently; there's no guarantee that we're gonna like that new thinking. There's definitely a guarantee that, for many populations, the way in which education is currently done doesn't work out very well for them either. The work in ethnic and racial studies in education shows that very clearly. I guess we've also been quite cognizant, though, that we are in a field where critique is both valued and seen as not enough.
Kalervo N. Gulson:That has been, I think, a pretty enduring part of debates in critical policy studies. So we get to this point where we're proposing what we think is required to understand something that's changing. It's coming, whether you like it or not. We think it's pretty important to actually embrace that.
Kalervo N. Gulson:And by embracing, we just mean: how can we start to think about this? What can we propose about what's occurring? We've been speculating about some of the opportunities and some of the risks that are there. We've been trying to think about the role of humans, and what it means to think about humans and machines. But we get to this point, and I think we got to this point in our conversations and in the writing of the book, where it's like, okay.
Kalervo N. Gulson:What would a critical politics of education actually look like if we accept our argument to this point? And we have this idea that a critical politics of education looks like a synthetic politics. So for you, what does that look like? And how do you think that relates to how the book finishes?
Sam Sellar:Yeah. I mean, I think the question of politics is obviously an important one, and one that we grappled with throughout the writing of the book, and that we were pushed to grapple with by critical readers along the way, and by audiences where we tried out some of these ideas. I think, you know, on the one hand, what we've done in this book is think with developments in technology; again, I'm not sure if positive is the right word, but we've kind of lent our thinking to those developments and gone with them, rather than rushing to judgment about their potential impact and whether they're good or bad for education, educators, young people, and so forth. So we've reserved that judgment to try and think with the possibilities of the technological changes that we're seeing today. And I guess in the process, we haven't necessarily spoken about politics in the way that people are becoming more accustomed to hearing in discussions about technology and its impact on societies today.
Sam Sellar:Towards the end of the book, we do try and map that out a little bit, and I think in writing that final chapter, I was quite influenced by the work of Shoshana Zuboff, her writings on surveillance capitalism. And so we've drawn on that and some other ideas to map out, I guess, some different possibilities for a critical politics of technology, and a critical politics of education today. And there are at least three possibilities that we highlight. So the first option is to work towards supporting the use of these sorts of technologies and to argue that they can help us to solve wicked problems in education. This would be the position that is most commonly associated with tech companies, big tech, and other promoters, I suppose, of the potential benefits of technology, the kind of Silicon Valley position on the rise of AI, I would say.
Sam Sellar:I guess another possible response to these changes is what we call appropriation: highlighting the need to engage critically with these developments, but also recognizing the possibility that, with the right politics, the right kinds of regulation, the right ethical frameworks, we might be able to harness the benefits of AI and new forms of data analytics and ward off some of the potential negative effects. So we could use AI for good. And, you know, this notion of developing AI for good is widespread, and underpins the work of a range of organizations that are trying to actually deliver on the social benefits of technological change. So that's what we call appropriation: recognizing that AI is here to stay, but also recognizing the need to engage with it critically and manage the way that we put it to use. We then talk about a third possibility, which we call acceptance. This position differs from the previous two because it doesn't necessarily take up a supportive or a critical position.
Sam Sellar:It simply involves recognizing the inevitability of the rise of AI, and, perhaps where one feels that it is having negative effects, just seeking to avoid those. So this is what Zuboff talks about as hiding: finding ways to disengage from these forms of technological change rather than actively promoting them or seeking to critique and control them. But, you know, if you try and hide from surveillance capitalism, if you try and hide from facial recognition technologies, there's still a kind of acceptance that these technologies are here and we have to live with them. The person who chooses to hide seeks to coexist with them in a particular kind of way. So we see those as three possible responses to the question of how we respond to the rise of synthetic governance, but we don't really settle on any of them.
Sam Sellar:We come back to problematization, don't we? And that's where we really finish the book. And so I think this notion of problematization is, for us, not only methodologically important, you know, as you mentioned earlier; it's the way in which we seek to move beyond thinking about existing education problems and their potential solutions, to think more openly about the possibilities of technology and how it reframes the problems that seem important to us today. But we also come back to it at the end of the book, I guess, in response to this question of, you know, what kind of politics we need if we are to engage with new synthetic forms of governance. I think we briefly discussed this towards the end of the book, and, Cal, you're just gonna read a short paragraph where we try and define what we mean by synthetic politics, and I suppose also by problematization.
Kalervo N. Gulson:Yeah. So this is where the book finishes, so I think it's a good spot for us to finish our discussion. So it's a quote. We think it is vitally important that we develop a critical synthetic politics that responds not so much to fears that technology will get away from us by becoming a singularity, as to the politics of networks that become so diffuse as to resist meaningful intervention.
Kalervo N. Gulson:A synthetic politics begins from the premise that there is no outside of algorithmic decision making and automated thinking. We must think with and through our implication with other modes of cognition, as a kind of co-learning with automated systems. A particular rationality is needed to be open to the coadaptation of humans and machines, recognizing that machine learning is the latest iteration in a longer history of thought that has never been limited to the human. Education is a site where we can embrace synthetic thought with a careful and articulated view of the risks, rather than reacting against it or embracing it uncritically. Education is a site in which we can remain open to the uncertainties, risks, and possibilities of synthetic politics.
Kalervo N. Gulson:So we look forward to seeing what people think. It's been great talking with you.
Sam Sellar:It's been great talking to you, Cal, as well.