“Thinking, Fast & Slow” and What It Means for AI and Teachers
Levi Belnap: Welcome to episode one of Unsupervised Learning, a show where we bring members of the Merlyn Mind team together for quick conversations about the latest, greatest, and craziest happenings in artificial intelligence, technology, and education. We're here to learn what else is going on in the world, to think out loud about why it matters, and to see how it can guide us in the important work we're doing to bring artificial intelligence to education. If you're an educator or an AI technologist, we hope you find these conversations useful. Let's go. OK, Aditya, I'm so excited for us to start this conversation together. For everyone listening crosstalk-
Aditya: Yes.
Levi Belnap: Aditya, why don't you introduce yourself first?
Aditya: Hey. Hi, guys. This is Aditya. I'm a research scientist at Merlyn Mind with a background in AI, machine learning, and all of the awesome words.
Levi Belnap: Yes. We got one of the smartest people on the team to join us here today. My name's Levi Belnap. I lead strategy for Merlyn Mind. If you don't know, Merlyn Mind is an artificial intelligence company that is bringing AI to education in a very different way, trying to assist teachers while they teach. They have a digital assistant that can help them orchestrate learning and use voice control over the classroom technology they use every day. We're really trying to ask, how can we take the best of AI and bring it into classrooms to help teachers do what they do best, which is to teach and connect with students? In the process of trying to build this world-changing technology, we have a lot of really incredible people inside the company thinking about what's happening everywhere else in the world. What is AI useful for in other industries? How could that influence what we do, and potentially influence how teachers think about AI? This was Aditya's idea, bringing this conversation together so we could share with the world some of what we're learning and how it could be useful for other people. Aditya, I don't know if you want to give a little bit of a summary of that.
Aditya: That's exactly the idea. The big, grand goal that Levi talked about is that we are trying to help teachers. We're trying to give them the appropriate tools and support so that they can do what they do wonderfully well, which is teach. We keep hearing about so many awesome things happening in AI, and we should be aware of those. We wanted to give a peek behind the scenes at how we, at Merlyn Mind, try to develop these solutions: what the process is to develop the appropriate solutions, how we look at what's happening in AI, how it's relevant to us, and how we can use it to create the appropriate support tools for teachers. We're just trying to have a brainstorm and a conversation around how we do that.
Levi Belnap: Fantastic. This will evolve over time, but we're going to start today by highlighting a few articles, looking at AI happenings and evolutions, and what's happening in a few different industries. Aditya is going to teach me about those. I'm going to ask him some questions as a layman, who doesn't understand AI as well as he does. Hopefully, together we can learn some stuff. Aditya, why don't you just start on the first topic?
Aditya: Yeah. I just wanted to highlight, I think this is an interesting article that I've found. Do you know about this theory of thinking fast and slow by Daniel Kahneman?
Levi Belnap: Yeah. I read Daniel Kahneman's book a long time ago, "Thinking, Fast and Slow." It actually had a huge impact on my life and the way that I think about crosstalk-
Aditya: Same here.
Levi Belnap: ... a lot of things, really. This is not the piece of the theory we're going to talk about today. But there was a thing I read in that book years ago, this idea of the peak-end rule: that people remember the highest or the lowest point of an experience, and then the last thing. I'm constantly thinking about how I can engineer a situation with my kids so that we end well. Because if we end poorly, they only remember how bad the trip was, or the vacation, or the road trip.
Aditya: They only remember the end, yeah. That's an interesting thing. The one idea we're going to talk about is this notion of system one and system two that he talks about in the book, the idea that the human mind actually has a system one and a system two. System one is something that does things fast, thinks fast, uses heuristics to make decisions, whereas system two is a slow thinker. It analyzes a bit more and understands a bit more. That idea was something the researchers crosstalk-
Levi Belnap: Before we jump too far, the word heuristics I think is a hard word. Even when I hear it, I rarely know what heuristics means. It's just what does it mean? System one, how is it different from system two? I'm a human walking through my day and suddenly there's a moment where I have to make a decision. Let's say it's a car is driving past me. I'm about to walk in front of it, which system stops me from walking in front of the car?
Aditya: System one, because you react. It's a reactive thing. You don't analyze. You don't say, "Oh, the car is coming. I should move out of the way." There is no slow thinking happening here. You're not analyzing the situation. You're reacting from your experience. Your heuristics pull up your past experience immediately and you react. Only afterward do you realize what happened: you act first and think later, and that's your system one.
Levi Belnap: Okay. System one is fast reactions, solving problems where you really don't have time to really think about them. Then system two crosstalk-
Aditya: Also, things that are part of your experience. For example, if somebody asks, "What is one plus one?" you will say two. You don't actually try to do the math in your head. Even if math is something you're potentially scared of, you will never think twice before saying one plus one is two. On the other hand, if somebody gives you a huge problem, a multi-digit number multiplied by another multi-digit number, you can't do that, unless you are a superhuman computer. Otherwise, you have to go through the whole process. That's your system two trying to solve the problem. It needs more resources to solve it.
Levi Belnap: OK. What's another example of system two that folks could understand, outside of a math problem? Everyday life I'm walking through, what happens to me where I'm like, "Wow, I got to stop and think about this?"
Aditya: When you're making a big decision. When you're trying to make decisions of, "Oh, I need to make a decision on whether I have to proceed with this project or not." You actually think through it. You don't make a heuristic decision.
Levi Belnap: I got to buy the car now. I'm not worried about getting run over by a car, but I got to buy a car. I don't know if I want a big car, a small car, a red car, a white car. I got to put all these different pieces together.
Aditya: You evaluate, you take the pros and cons, you weigh them, and then reach a decision. You don't go by heuristics. A heuristic is basically suboptimal. It's not perfect. It's not something where you analyze all the details. You're just going by your intuition. Intuition is a very good example of a heuristic.
Levi Belnap: OK.
Aditya: When you say, "Oh, I took the decision intuitively," that means your system one kicked in. It was not system two that was doing things for you.
Levi Belnap: OK. Yeah, teach us more about what does AI have to do with system one and system two in Daniel Kahneman's theory?
Aditya: Exactly. Researchers have always had this issue with AI, that AI is solving narrow AI problems, which means you can develop systems that solve one particular problem. It can do image recognition really well. It can do speech recognition really well. But for each of those tasks, you need to build a separate system, with a separate architecture, a separate solution. If you think about how people talk about human intelligence, we don't have separate systems. You don't have a separate human for each thing, where one human does speech recognition and another human does image recognition, no. We all do all those things very well. So there has been this whole problem of general AI versus narrow AI. The researchers are trying to see if the route toward general AI is this architecture of fast and slow. They call it SOFAI, slow and fast AI. It's an architecture with two solvers, as they call them: solver one and solver two. Solver one is, again, fast, quick, heuristic-based. If it's an already-seen problem, you can compress it into a model that enables you to solve the problem quickly. If it's an unseen problem, you go to solver two, which uses its metacognition (again, something we could talk about more), trying to learn from and develop a model for what it's observing. So over time, things from solver two become part of solver one, which happens with us as well. If you do crosstalk-
Levi Belnap: OK. Humans do that well. Today, if I have a narrow AI application that doesn't have this type of system one and system two solvers, what happens when the AI can't solve a problem that it was built to solve? Is there some other path or it just breaks? It doesn't solve that.
Aditya: In most cases, it just breaks; it doesn't solve it. Or it ends up giving a solution which could have drastic consequences, because it's not trained for it. You're putting something into a task it doesn't know how to do, a task it has never seen. That could have really bad consequences. One of the challenges has always been artificial general intelligence, developing an AGI, as they call it: how do we build an AI that has general intelligence? It's like crosstalk-
Levi Belnap: For people who don't know what AGI means, artificial general intelligence. Just talk about what does that mean in theory, if you had AGI.
Aditya: It's basically crosstalk-
Levi Belnap: As smart as a human, it can do whatever a human does?
Aditya: Let's not go there. Basically, you can never actually get that. That's my personal thinking as of now, because human intelligence is beyond just solving tasks. We do hundreds of other things which cannot really be mathematized or put into a model, like empathy.
Levi Belnap: When you get to the heart of Merlyn Mind, this company was started because we don't believe AI can replace humans. We need AI to help teachers. There's no possible way for a computer to teach with empathy, emotion, and connection with humans. We have a very strong position internally at Merlyn Mind on what AI can and can't do, but back to AGI so keep going.
Aditya: Right. It's about all these different tasks. Again, coming back to intelligence, there are several aspects of human intelligence. The fact that you can transfer one learning to another, that you can learn from just a few examples. That's what humans are good at: the one-shot learning or two-shot learning that people talk about. You show two examples of a cat to a toddler, and he or she will be able to recognize and identify a cat. As soon as they see a cat on the street, they go, "That's a cat."
Levi Belnap: No matter what color, how big, how small, how long.
Aditya: Exactly. They don't need many examples to generalize. That's something that was missing in traditional AI. Things have changed now. This whole one-shot, two-shot learning is evolving much better, in how we can do things and transfer learnings from one domain to the other.
Levi Belnap: OK.
Aditya: Which, again, humans are really good at. AGI, again, moves it further. It's not just from one domain to the other, but from one task to the other, keeping it general. It should be able to do tasks at a general level. You're not training it to do each task independently; it should learn to do tasks with a general intelligence. That's what AGI is about. This research project that these researchers have done, the SOFAI one, tried to take it one step further. The proposal and the belief is that this will lead us closer to AGI, by having a decision maker who can decide whether you go with solver one or solver two and things of that sort. The interesting part, though, which I really like: if you read Dan Brown's book, Origin, at the end of the book, the big AI that solves the problem of origin and everything actually had a similar structure. You wouldn't believe that. I don't know whether it was intentional or not, but he actually talks about the eventual AI being something that is part system one and part system two. Maybe there's something to it.
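[Editor's note: the two-solver idea described above can be sketched in a few lines of Python. This is a toy illustration, not the actual SOFAI implementation: the class name, the arithmetic "problems," and the rule for promoting results into solver one are all illustrative assumptions.]

```python
class FastSlowSolver:
    """Toy sketch of a fast/slow architecture: route problems to a fast
    heuristic solver (cached experience) or a slow deliberate solver, and
    promote slow results into the fast cache over time."""

    def __init__(self):
        # "Solver one": instant lookup over already-seen problems.
        self.experience = {}

    def solve(self, problem):
        # Metacognitive step: have we seen this problem before?
        if problem in self.experience:
            return self.experience[problem]   # solver one: fast, heuristic
        answer = self._deliberate(problem)    # solver two: slow, analytic
        self.experience[problem] = answer     # learning: system two -> system one
        return answer

    def _deliberate(self, problem):
        # Stand-in for slow, resource-intensive reasoning: here, just
        # evaluating an (operand, operator, operand) tuple.
        a, op, b = problem
        return a * b if op == "*" else a + b


solver = FastSlowSolver()
print(solver.solve((123, "*", 456)))  # first call takes the slow path
print(solver.solve((123, "*", 456)))  # second call is instant, from "system one"
```

The design choice mirrors the conversation: the deliberate path is only taken for unseen problems, and each deliberate solution becomes fast experience, just as practiced system two skills become system one habits.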
Levi Belnap: Yeah. That's cool.
Aditya: That's an interesting thing. Yeah.
Levi Belnap: Why does it matter to ... I think we understand why it matters to AI. I think what I just heard you say is AI has had a very difficult time replicating the way the human brain works. It's just really complicated. Looking at what Daniel Kahneman learned, maybe the human brain has this kind of system one, system two component to it that makes it very efficient at instant tasks. And then very efficient at complex tasks because it uses different processes. So if AI can figure out how to structure a system similarly, maybe it can be very good at the instant, repetitive tasks that it knows very well. And have a way to also deal with things that are more complex that weren't exactly what it was trained for. Why does it matter to us, to education, to the way you think about AI applied to education?
Aditya: Yes. So several things. We talked about a general human doing so many things, now think about a superhuman like a teacher. She does thousands of things.
Levi Belnap: I think this idea that a teacher is superhuman is actually core to why we exist. Maybe dig into that a little bit. What does that mean? Why do we think of them as superhumans?
Aditya: We were actually talking to a few teachers. One of the teachers, we were just looking at her workflows in a classroom.
Levi Belnap: Yeah.
Aditya: You can't imagine. I think you were there, you might remember, but there's this video where the teacher was actually teaching. At a particular instant, we paused her lesson to see what she was going through. Oh my God, she actually had five or 10 things running in parallel at that instant. First, of course, she was concerned about what she was teaching. She was doing hybrid learning, so she had to care about the people on Zoom, whether they were asking her questions, whether they were active, what they were doing at home. Then one of the students on Zoom was asking her, "Miss, my Zoom is down. What do I do?" So she's trying to solve a tech problem. One of the students in the classroom said, "Miss, can I go to the restroom?" She has so many things to keep track of. Then, of course, she's tackling the whole technology problem, trying to connect multiple wires and everything. That is superhuman, man. And that's just one moment in one room. What else is crosstalk-
Levi Belnap: Teachers are incredible.
Aditya: In one instant, doing hundreds of things. That's not possible. That's humanly not possible, and hence, she's superhuman.
Levi Belnap: Yep.
Aditya: We talked about humans having this structure of system one, system two and whatnot, and all the things they do. What about a superhuman? They do hundreds of things. Any support that you can give is going to make a difference. That's the whole idea behind building Merlyn and everything we've done. Now, if we can aim toward an assistant that has a system one and a system two, it enables this meta understanding of, "What are the things the assistant can take off the teacher's hands? That's all the system one stuff." The teacher has trained the assistant, Merlyn, to help her out with a few of these tasks. Then for the things where Merlyn thinks, "I can't do this because it's not part of my system one, it's system two, and my system two is my teacher," it can push it back to the teacher. It's the augmented side of it that we always talk about.
Levi Belnap: Yeah. That's so interesting.
Aditya: It's assisted intelligence. Your system one is Merlyn and your system two is your teacher, in an educational environment. You can envision something like that, so that you can offload a bunch of these heuristic-related tasks that don't really need the teacher's attention, and actually let her enjoy what she really enjoys doing: teaching the kids, empathizing with the kids, being human with the kids. We don't expect AI to be doing that anytime soon. If we actually build that experience, if we think in that direction, it's going to change the way we teach and learn in classrooms. Yeah.
Levi Belnap: That's really exciting. One question. Is system one or system two better? Is there one way to think better than the other?
Aditya: I wouldn't say better. I would say crosstalk-
Levi Belnap: I remember Kahneman said no. It's not that one is better.
Aditya: No. It's more about which is more complex, is what I would say.
Levi Belnap: Right.
Aditya: There's no better. Well, both are needed.
Levi Belnap: Yeah.
Aditya: You can't survive without the other. Right now, a single superhuman is doing hundreds of system ones and hundreds of system twos. All we're saying is maybe we can help by taking over a few of those system one tasks, so that you're freed up for the other things.
Levi Belnap: Well, one may or may not be better than the other. At times one is necessary in order to solve the problem and the other system can't solve it, right?
Aditya: Exactly.
Levi Belnap: System two takes time by definition, right? You have to have time to devote a different level of thinking, attention, and focus. In a classroom, with all of the things teachers have going on, any problem that requires system two, like, "I've got to focus, pay attention, think about this problem," they may not be able to solve, for lack of resources and time. I think maybe that's one of the promises, I guess, of trying to build that crosstalk-
Aditya: That's the hope.
Levi Belnap: ... collaborative solution.
Aditya: Exactly. That's the hope. We are going to do this only by working with teachers, of course. We are going to have conversations with them about what they think is the right direction to go in, but this is an idea. This is something we could actually think about as a way of helping out these superhumans in what they do. We're still learning.
Levi Belnap: Well, Aditya, this is a fantastic conversation about system one and system two, and AI, and how it applies to education. I hope those listening have enjoyed this. I hope it's helpful. We look forward to feedback. Any questions, any thoughts on other topics we should cover, we look forward to the conversation. Thank you for joining us for this episode of Unsupervised Learning. We hope you'll join us for the next one. Until then, keep on learning.
DESCRIPTION
What can AI researchers learn from the different ways the human brain processes information? What does this mean for AI generally, and what does it mean for education and applications of AI in education? Aditya Vempaty and Levi Belnap from the Merlyn Mind team explore these questions as they discuss Daniel Kahneman's concept of System 1 and System 2 from his book "Thinking, Fast and Slow" and explore recent work from AI researchers applying this approach to artificial intelligence.