May 3, 2026

The Human Edge: Smarter Decisions in the Age of AI ft. Cheryl Strauss Einhorn | My EdTech Life 362


What happens to your judgment when AI starts answering before you finish the question?

In this episode, Dr. Fonz sits down with Cheryl Strauss Einhorn, founder of Decisive, decision-sciences educator at Cornell and the University of Miami, and author of the new book The Human Edge: Smarter Decisions in the Age of AI (Cornell Publishing, May 15). Cheryl spent over a decade as an investigative journalist at Barron's, where her bearish company stories halted stock trading, shut down companies, and once helped send a CEO to ten years in prison. That work taught her something every educator and student needs to hear right now: the hardest part of any decision isn't the information. It's the judgment.

Cheryl walks us through her AREA Method (Absolute, Relative, Exploration & Exploitation, Analysis), a system for complex problem solving built to check our cognitive biases and protect our thinking when we work with AI. She unpacks why "AI as a time-saver" is a myth doing real damage in classrooms, why students still feel pressure to run their assignments through AI to "polish them up," and why the eight moments of human judgment in any complex decision are the things educators should be teaching first.

We also get into the cautious advocate's question: what mindset shift do tenured professors need to feel their expertise still matters? How do we lower the barrier for the educators who haven't started using these tools yet? And what does learning look like five years from now if we get this right?

Whether you're a K-12 teacher, a professor, a district leader, or a student wondering how to keep your own voice in your own work, this conversation will give you a framework to stop outsourcing your thinking and start leading the machine.

Chapters
00:00 Introduction to AI and Decision Making
04:59 Cheryl's Journey and the AREA Method
09:57 Understanding the Human Edge in AI
14:59 Reclaiming Agency in Decision Making
20:10 The AREA Method Explained
24:48 Challenges and Insights from Teaching AI
31:06 The Role of Educators in AI
38:37 Debunking AI Myths

πŸ“š Connect with Cheryl:
Website: areamethod.com
Book: The Human Edge: Smarter Decisions in the Age of AI (May 8)
LinkedIn: https://www.linkedin.com/in/cherylstrausseinhorn/

πŸ™Œ A huge thank you to our sponsors who keep this mission going:
β˜• Comeback Coffee for keeping us caffeinated and ready to podcast
πŸ“š Book Creator for empowering student voice in classrooms everywhere
🏫 Peel Back Education for helping teachers do more with less
πŸ€– Eduaide.AI for putting thoughtful AI tools in educators' hands

We do what we do for you and because of you. Thank you for listening, sharing, and engaging with the show.

Stay Techie ✌🏼

Peel Back Education exists to uncover, share, and amplify powerful, authentic stories from inside classrooms and beyond, helping educators, learners, and the wider community connect meaningfully with the people and ideas shaping education today.

Authentic engagement, inclusion, and learning across the curriculum for ALL your students. Teachers love Book Creator.

Support the show

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

πŸŽ™οΈ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

00:00 - Welcome And Sponsor Thanks

02:07 - From Investigative Journalism To Decisions

06:01 - How Her Books Led Here

09:53 - Defining The Human Edge

11:57 - Using AI Without Cognitive Offload

17:17 - Reclaiming Agency And Exercising Judgment

20:50 - The AREA Method Explained

27:12 - Student Mistakes And The Perfection Trap

32:19 - Helping Educators Adopt AI Safely

38:33 - Decision Making In Five To Ten Years

41:03 - Debunking AI Efficiency Myths

45:19 - Directing AI Research Without Overload

50:21 - Lightning Round And Final Wrap

Welcome And Sponsor Thanks

Dr. Alfonso Mendoza

Hello, everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day, and wherever it is that you're joining us from around the world, thank you as always for all the likes, the shares, and all the support. We appreciate you listening to the show, sharing the content, and engaging with it. And of course, we would love to thank our amazing sponsors: Comeback Coffee for keeping us caffeinated and giving us energy to continue to podcast, and Book Creator and Peel Back Education, along with Eduaide.AI. We really appreciate all of your support, because we do what we do for you and because of you. And today I am so excited, because we are going to speak to a wonderful author. We're going to be talking about AI, but in a different way than we've usually been talking about it here on the show. So I'm just really excited to welcome the author of The Human Edge, Cheryl Strauss Einhorn. Cheryl, how are you doing this evening?

Cheryl Strauss Einhorn

Good, thank you. Thanks so much for having me.

From Investigative Journalism To Decisions

Dr. Alfonso Mendoza

Absolutely, Cheryl. Thank you so much, and I'm so thankful that you reached out. I was really impressed with you reaching out, all the information, seeing your work, and I think that this is such a great conversation because, like I said, the way that we've been talking about AI here on the podcast lately has mainly been the integration, the ethics, the privacy, and so on. This kind of lines up with that, but with a little bit of a twist, something a little bit different, which is talking about decision making. But before we get into the meat of the matter, Cheryl, I would love for you to introduce yourself a little bit to our audience. Can you give us a little bit of background and what your context is within the scope of education?

Cheryl Strauss Einhorn

Sure, absolutely. So I work currently in decision making, and I got there by a really unexpected and circuitous route. My background is in investigative journalism. I spent over a decade at the business magazine Barron's, and while I was there, I ended up specializing in what you would call the bearish company story: stories that take a skeptical look at a company's finances or strategy. When those stories came out, there was often a very outsized reaction, because I was writing primarily about publicly traded companies. At times the stock exchange would halt shares from trading, because there were too many buy and sell orders after a story came out and they needed orderly order flow to resume trading. About half a dozen companies actually went out of business after I raised questions about the way they were operating. And for one company, the CEO ended up being sentenced to 10 years in jail after a series of investigative pieces. This really led me to think about how I know that I'm telling stories that are true, that should be told, that I'm marshalling the right evidence, and that I'm really bringing a critical eye to what we now know are our cognitive biases: these mental shortcuts that we all have that lead us to assumptions and judgments that are largely based on the past and that really need to be checked and challenged when we are solving complex problems. As I started to think about how I could ensure an ethical decision-making process for these stories, I recognized there was no system out there that actually focused on these mental mistakes, these assumptions and judgments that we make. So I ended up developing for myself a system that I now call my AREA Method, which is really meant to challenge these cognitive biases and think about how we can expand our knowledge while improving our judgment. And that led me to decision making. I am now the founder of a decision sciences company called Decisive.
We offer coaching and training and workshops to individuals and companies, and we also teach at a variety of higher education institutions. Right now we have a course in complex problem solving at Cornell that we've been teaching for years, and a course on artificial intelligence and decision making that we're just finishing in about two weeks at the University of Miami. Those students are using the new book you mentioned, The Human Edge: Smarter Decisions in the Age of AI, that Cornell is publishing and that we're going to be talking about today.

Dr. Alfonso Mendoza

That is fantastic. Now, Cheryl, this isn't your first book; you have written several books in the past, and I would love for our audience to know about them. Can you walk us through that arc? I want my audience, especially the teachers and professors listening, to understand how one book kind of led to the next and pretty much laid the groundwork for The Human Edge. I feel that this is definitely a necessary chapter, especially with so much AI being used not only in the K-12 space but also in the higher ed space, and obviously now we know that has great implications within industry as well. So tell us a little bit about that.

How Her Books Led Here

Cheryl Strauss Einhorn

Thank you so much for asking. So when I left Barron's, I began teaching at Columbia University, both at the business school and at the graduate school of journalism. I was teaching about my AREA Method, and I realized that I love books and that my students shouldn't have to get it the first time. When they hear a lecture, they should also have something physical that they can go home and go back to. And so I wrote my first book, Problem Solved, which is about how you make complex decisions, either personal or professional, and follow a system for high-stakes decision making. Then, as I continued teaching at Columbia and expanded what I was doing in the business school, I wrote my second book, Investing in Financial Research: A Decision-Making System for Better Results. That book is also about the AREA Method, but specifically applies it to how we actually make a financial, investment, or business decision using a system for high-stakes decision making. And then, based on what I learned in my first two books, I wrote my third book, Problem Solver, which is about what I call problem solver profiles and the psychology of decision making. By studying how people used the AREA Method, both from Problem Solved and Investing in Financial Research, I was able to identify five dominant ways that people approach decisions, and I gave them each archetypes. That way we could have a lexicon to think about the key underlying factors that drive what we're optimizing for in our decisions.
And so Problem Solver goes through these five different decision-making profiles, helps anybody who would like to identify their own problem solver profile, and then teaches how to work better with others to make truly collaborative decisions, no matter whether somebody aligns with the way you solve problems or, as we know happens much more frequently, makes decisions in a totally different way than we do. How do we actually find a strong way to better understand each other's decision-making strengths and make better decisions together?

Defining The Human Edge

Dr. Alfonso Mendoza

Excellent. Well, talking more now about the human edge: the title suggests that humans retain something that AI can't replicate. Obviously there are many things that we may offload, tasks we may use AI to help us with, maybe outlining or completing something. But I want to know, again for our audience in K-12 and higher ed listening right now who feel that AI is really closing in on every single task they've been able to assign and do before: what exactly is the human edge, and why did you feel this book should be released now, at such an important time?

Cheryl Strauss Einhorn

Thank you for that question. With all of my background in decision making, what I realized is that what AI doesn't have is our judgment. And our judgment is what makes our decisions uniquely ours. When we're working with AI, it's giving us other people's answers; it can tell us what other people have done. So it is asking us to actually make visible something that until now has been invisible: What is our own secret sauce? What are our values? What are we optimizing for? What do we care about in our decisions? Who do we care about including? All of the moments that make us the chief decider in our own life, AI doesn't have. And so what I realized is we need a way to help people recognize where human judgment is in decision making, how to identify it, and then how to use it with AI so that it is actually strengthening our decisions and working in a way that's tailored to our needs.

Dr. Alfonso Mendoza

Excellent. Now, can you unpack that a little bit? Because, let's say, teachers right now, and maybe some professors, we know and we hear in the news that students are using this technology. As a matter of fact, there was an article coming out of Penn State talking about how there's so much cognitive offload, because Penn State said, hey, we're going to be forward thinkers and we're going to go ahead and allow students to use AI, and now they're slowly seeing some of the consequences of that offloading. So talk us through that process. What should professors, students, teachers, or anybody in a professional field pause and take in so that it's still our voice, our decisions, and, like you mentioned, our secret sauce that AI doesn't have?

Cheryl Strauss Einhorn

Yeah. So within complex problem solving, there are eight moments where human judgment is really called for, and I'm happy to go through those. But if you want a couple of big takeaways for the listeners: we all recognize that we can let AI draft something for us, but we need to own the final version. When we're working with AI, I think one of the things that's truly been a disservice is that we often have heard it spoken about as a time saver; it speeds you up. My way of thinking is that it speeds up one piece of the work, which means you should reinvest the time that you've saved to completely review anything you're doing with AI, to make sure it's what you intended, it's tailored for your specific needs, and it's accurate. You should feel comfortable telling somebody that you've actually used AI in this piece of your work, because you'd be willing to handle the consequences. So one thing that I would say is: you can use it to draft, but the final draft always has to be yours. The second thing is that AI looks for patterns; humans are motivated by purpose. We need to always be checking and ensuring that we are the ones telling AI what our incentives and our motives are. A third thing is that when the hammer falls, it falls on you, not on AI. So how are you actually checking? AI doesn't care about consequences; we do. We know that a reputation can take a lifetime to build and can be lost so very quickly. So how are you, not only for yourself but also as a model for your students, checking the sources it gives you? Especially because when you think that you are familiar with somebody's work, it can often suggest something it knows you are already open to, and you can accept that with confirmation bias and not necessarily recognize that it was entirely fabricated. I had this experience just recently.
I looked for a citation, and it gave me something by a colleague whom I respect, whose work I'm familiar with. At the last minute, before I took the citation, I thought, I should just check that she actually wrote this, and it didn't exist at all. So those are just a couple of things that I think people can take away. And I actually want to mention to your listeners that I wrote an article for Inspiring Minds, which is put out by Harvard Education, and this article goes through some of these high-level takeaways that sum up pieces of The Human Edge that could be of use to the audience.

Reclaiming Agency And Exercising Judgment

Dr. Alfonso Mendoza

Excellent. Yes, and if you don't mind, Cheryl, after the show, if you can share that link, we can definitely put it in the show notes for our audience members to learn a little bit more from that article as well. Now I want to unpack a little bit of what you're saying. We talk about how AI may save us time, and sometimes we'll say, let's reallocate some of that time, like you mentioned, to go through the outputs, make sure we comb through them, make sure, like the issue you just mentioned with the citation, that they're accurate. We know that the LLMs' outputs may not be 100% accurate, so we definitely need to take a pause on that, as opposed to simply saying, hey, let's just move fast with this, copy, paste, and we're good to go. But talking about reclaiming agency and holding on to it: professors and teachers have been worried since 2022, because obviously students can easily use these chatbots to get answers very quickly. So I want to ask, as far as the call to reclaim agency goes: is it something that can actually hold, or is it something we tell ourselves just to reassure ourselves while we are still quietly handing over more and more of our decision making to AI?

Cheryl Strauss Einhorn

I honestly believe that we can maintain our agency. The brain is a muscle, right? And it needs exercise, just like our bodies need exercise. So you really want to always be asking yourself first. Whenever you're thinking about using AI, first ask yourself what you're expecting to see. Make sure that you are training yourself to rely on yourself first, because just that act of asking yourself first means that when AI gives you an answer, you are much better prepared to engage with it, to engage in a skeptical conversation, to challenge it. And you can get better results from AI by doing this, too. One of the things that I have found to be true is that when one of these AI tools becomes your favorite, and people do pick favorites, and the tool begins to know you, people seem to think that will make AI more effective and efficient. The truth is it's the opposite, because it actually will have some selection bias. It will say, gee, I know that you're particularly interested in this, but I've also really picked up that you're pretty risk averse; I may not suggest these things to you, because they don't seem to be something we've ever had in our conversations. So you need to be asking yourself, and you need to be continuously checking and challenging the outputs that AI gives you, to make sure that you're not narrowing and that you are always exercising your own thinking.

The AREA Method Explained

Dr. Alfonso Mendoza

Excellent. I love that you mentioned that the brain is a muscle: we need to exercise it, exercise discernment, and take our time with those outputs. It's very interesting, like you said. Many times we have become so familiar with these tools and so immersed in them: some out of curiosity, some using them daily, and some pushed to use them daily, as I hear a lot in industry, especially NVIDIA stating you have to use at least half of your salary's worth of tokens, or else. So they're pushing you to use it, but then, of course, not taking into account that the more we use it, the more we may rely on it, and, like you said, we may stop exercising that muscle when we need to strengthen it even more. So now I want to ask you about your AREA Method. Imagine right now I'm an educator in deep South Texas; I mean literally, there's actually a sign that says the South Pole of Texas. So for me in the South Texas community, or for any college professor: can you walk us through the AREA Method and what it would look like in practice for a professor or a student sitting in front of a computer, using Claude, ChatGPT, Copilot, or whichever LLM of their choice, trying to write an essay, solve a problem, or do an outline?

Cheryl Strauss Einhorn

Sure. So the AREA Method is a system for complex problem solving that is built on a collaborative backbone and is really meant to help us check and challenge these cognitive biases that we talked about, to expand our knowledge and improve our judgment. The first A in AREA is Absolute: information from up close on the target of your decision. The R in AREA is Relative information; that's the ecosystem. Put that primary-source information from the subject of the decision into the broader context. What do experts say about it? What are best practices? What mistakes can you avoid by learning from others without having to make them yourself? Then the E in AREA is Exploration and Exploitation; I call them the twin engines of creativity. Exploration is getting beyond document-based sources to identify good prospects and ask them great questions. That's interviewing. What can you actually learn about the difference between the map and the terrain? Document-based research will tell you maybe how things should work, but people can actually tell you how they exist in the real world, and that can ensure that your decision is actually built to operate in the context and environment that you're in. Exploitation is a new part of problem solving that I've inserted, where you have a couple of creative exercises to specifically check your assumptions against evidence. This really thinks about the diagnosticity of the data. Sometimes a piece of data confirms a lot of hypotheses but has one insurmountable hurdle, and that is not going to work for you. So how do you actually prevent that? Another example would be something like a fever: it tells you something is wrong without telling you anything about what is wrong or where to look in the patient. So that step is new, and it's really meant to help you think about which part of the data is most meaningful to the decision that you're making.
And the final A, Analysis, puts the pieces of the process back together to help you come to a conviction that you have confidence in, one that will succeed for you. So that's how you move through AREA. It takes the different parts of thinking that we normally do all at once and separates them out into individual silos, so that you don't skip anything and you have an opportunity to look at each part of the process independently. By the time you've gotten to the end, you see the decision much more clearly.

Student Mistakes And The Perfection Trap

Dr. Alfonso Mendoza

Being able to take that pause matters. In the very beginning, in 2022, I thought, okay, all of this is correct. But the more I researched, and it wasn't until March 2023, when I wrote a small research paper for a class at my university, that I thought, oh my gosh, there are knowledge cutoff dates that we need to take into account. There are so many variables and so many little things in the way that the data comes through. But because those aren't talked about often, many users may not know about them, and they feel that whatever they put in, the output is going to be 1,000% correct, and they'll just use it as is. And it's very interesting that now there's that anthropomorphization of AI, where we just say, oh, I'll just ask Chat and Chat will tell me; oh, I'll just ask Claude and Claude will tell me. It's interesting, calling them by their first names and being like, oh, I'll just ask Claude, and Claude's got the answers, don't worry about it; I'll just build it with Claude. It just seems like it's become immersed not only in our day-to-day use but in our language. And with that anthropomorphization, I just feel that many times we put too much into it, thinking that this is something that's going to be a hundred percent right a hundred percent of the time, and it really isn't.
So I love the way that you described the AREA Method, because if you are going to be using these tools, it's a great way to stop, check, acknowledge, almost take a deep breath, go through it, and not just assume that it's going to be 100% correct. That is wonderful, and it's definitely going to be very useful. For our audience members listening, I hope this is going to be a great clip that we'll be releasing so you can check it out and learn a little bit more about the AREA Method. Now, we've talked a lot about the examples you have given: professors, students, and so on. But to this day, in your experience working at Cornell, working at Columbia, working with students and with professors as well, I want to ask you, Cheryl, and I don't know how honest you want to be: what are some of the things that you're still seeing, some of the mistakes that might be being made by our university students, or now as professors are trying the tools a little bit more? What are some of those things that you see?

Cheryl Strauss Einhorn

Well, I'll tell you one thing that is promising, and then I'll tell you one thing that I think is not that surprising but that we have to work on as educators. The thing that's promising is that when I started teaching the AI class this semester at the University of Miami business school and asked the students what they're worried about with AI, they were aware of the concerns. They know that they should protect their thinking, and they know that they don't want to lose their ability to make decisions. So that's very promising: the message is getting out that you want to stay in control of the machine. The thing I think we have to work on as a society is that the students want to be close to perfect. What that means is that when they're getting ready to submit an assignment, even if they've been guided that this assignment is for you, for your thinking, not for AI, they're very tempted to run it through AI to polish it up, so to speak. And that's exactly what we as educators don't want them to do, because we're here to help them build those critical thinking skills, which means we need their authenticity, their individuality, to come through. So that was a bit of a hurdle at the beginning of the class. Here we have this class, the human edge in AI, all about how you lead the machine and where and how to exercise human judgment, and yet we were getting some papers that clearly seemed to have terms that would not have been written in a regular sentence and that didn't seem to come from undergraduates. When we really let them know we want to hear you, we're here to work with the human who's going to lead the machine, it took a bit of repetition. But I think that's something that we all really want to stress: we don't want them to be perfect; we're not perfect. That's the human struggle, and that's the common experience.
But we all can learn to be better if we're willing to actually submit work that's ours.

Dr. Alfonso Mendoza

Excellent. Now, that really does sound like there has to be a mindset shift, and it's very important to really change that as the technology changes. At least for my part, my professors during my doctoral program were very pro and really started experimenting. Obviously, my doctorate was in educational technology, so of course they were already going to be experimenting with it; they even brought me in to share a little bit with some of my classmates as well. But that mindset shift is not very easy for a lot of people. I'm in a bubble, the education bubble, and you really see two polarizing sides: the let's-move-fast-and-break-things side, let's use it and experiment, go, go, go; and then the ones that are holding out, and maybe still to this day holding out. I myself like to consider myself a cautious advocate, somewhere in the middle, trying to bring in both sides and both conversations to see where the happy medium might be, if there is one, but at least have the conversation on both sides. So I want to ask you: for professors that are tenured, that maybe have been in that position for a very, very long time, what kind of mindset shift has to occur for them to feel like their expertise still matters and that their work has not been devalued? I think that's where the struggle is right now, and because you are in higher ed, I would love to hear your perspective on that.

Helping Educators Adopt AI Safely

Cheryl Strauss Einhorn

Yeah, teachers are the experts in their classrooms, right? They're not just content experts; they're experts in their students. They know what the unique needs are, where students need greater depth, where they need more nuance. AI doesn't have any of these things. And so using an AI tool is not going to diminish the quality of the educational experience if you are the person actually in charge and using the tool as a support system. That's one thing I would say. The other thing I would say is there's a recent Pew Research study showing that the vast majority of Americans are not using it. And so I would say to that audience: if you haven't really started to engage, how can you lower that barrier to entry? Maybe you pick any of the tools; they're free to sign up for and easy to play around with. And pick something really low stakes. Ask it to help you plan a vacation and see what it does, or say, I want a five-minute lesson in classical music. For people who are not using it, who feel there is a challenge or a confrontation in using this tool, simply lower the barrier to try it out. And I think that is also a great service for anybody who recognizes that we all want our students to have the best educational experience possible. We want to prepare them to be critical thinkers. We want to prepare them for jobs we know of and jobs we don't know of. And so using this tool in some capacity, in terms of how we are going to work with the students, is also meeting the students where they need to be, knowing that they need to be prepared for a life outside of school.

Decision Making In Five To Ten Years

Dr. Alfonso Mendoza

Excellent. That's very well said, because I've been thinking about that a lot. I see some of my friends that are also within the K-12 education space; we are very early adopters of a lot of the technology because we're always trying to find ways and resources to help us as teachers, like we talked a little bit about, to help us reallocate some time, because it always seems like time is against us. So if we can find something that might be able to help us do some of those tasks, I can take that extra time I may have left and reallocate it to maybe working with a small group, working with my tier two, working with some students that really need that additional support, or just getting ready to plan for the next day. That is something that is very useful. But then on social media, we'll see comments from people outside of education, people that are not in the industry, and this just happened this week. These could be moms, dads, people that are small business owners. There are some people that are really just tired, and the comments in the comment section are just like, I'm so tired of AI, I'm so tired of hearing about AI, and they get really frustrated. Meanwhile, in education, we're really just pushing it and pushing it and pushing it because, like I said, we have seen what it might be able to do and the potential that it has. And we are seeing a lot of platforms out there. As a matter of fact, it just seems like there's a new platform that comes out every day.
But the ones that are doing the work, being very cautious, and taking the time to move a little bit slower now and be more intentional are slowly separating themselves from other applications out there that are just in it for a big money grab: let's go, let's bail, and so on. So it's very interesting that we do talk a lot about that. Now I want to ask you about the next five to ten years, as AI becomes more and more integrated within our daily lives, as it has already. It's been around since 2022, and now we're in 2026. Like you mentioned, we're using it for things like, hey, can you help me? What can I cook with this picture of my refrigerator? Or maybe a short itinerary, something where you can outline and put your thoughts together. So I want to ask you: how will decision making evolve in the next five to ten years as these tools become more and more part of our daily lives?

Cheryl Strauss Einhorn

I think it's gonna be even more important to know how to access our metacognition, right? Our thinking about our thinking. Because we are gonna need to check and challenge where the tool is making assumptions and judgments. We're gonna need to hold it accountable. Its data can be biased, can be outdated, can be incomplete. And we need to be able to understand not only the assumptions that the algorithm has baked into it, but we also need to make sure that it is working for our specific, unique, and human purposes. We were talking about how, when the hammer falls, it falls on you, not on AI. So if AI makes a poor financial decision, you can't blame the tool. You've got to be ready to take ownership. So I think we all really need to learn where these uniquely human moments are. Problem definition is a big one. Motivation, giving AI the specifics of our context, directing the research, understanding the type of analysis that it's doing, like I said, challenging the biases, and also thinking about stakeholders, right? What is the clarity, access, and equity of the information that AI is distributing? All of these things we need to stay in touch with and ahead of when it comes to working with AI tools.

Dr. Alfonso Mendoza

Excellent. So now, as we start wrapping up, let me ask you a few questions to end with, starting with AI myths. Maybe there's one you can share with us that you would love to debunk. As we know, there is no shortage of AI myths circulating. You always hear AI is going to replace teachers. You hear that AI is neutral, that AI is making everything more productive. So which of these myths about AI and decision making would you love to debunk?

Cheryl Strauss Einhorn

Well, I liked all three of what you said. I don't think that it makes us more efficient, because if we're really using it well, we're reinvesting that time. We're taking time beforehand to identify what it is that we actually want to be working on with the tool and why, and sharing that information with AI. And then after its output, we are again checking and challenging the output it's given. I think if we can debunk the myth about efficiency, we can all use it better. And that actually will make our work better for us.

Dr. Alfonso Mendoza

Nice. Excellent. That's a great answer. All right, my next question for you, looking forward: if all our audience members, and not just our audience members but the whole world, teachers, professors, anybody who gets their hands on your book, The Human Edge, takes just one thing away from it, what would that one thing be? And maybe we'll make it a two-parter: in your eyes, five years from now, if readers stick to what is written in the book, what does learning look like in those five years, whether it's K-12 or higher ed?

Cheryl Strauss Einhorn

I think the single most important thing is that you need to remain the chief decider in your own life. You need to continue to exercise those problem-solving and decision-making skills that make you uniquely you. And in five years, if education is really succeeding, we will have incorporated into the curriculum the pedagogy of what it looks like to truly train people to develop and hone their problem-solving and decision-making skills.

Directing AI Research Without Overload

Dr. Alfonso Mendoza

Excellent. Yeah, and you know what? That's something that does come up a lot: the pedagogy. It's the art of teaching. As much technology as is out there, we've heard in the news a lot of bad rap about technology, whether it's the screen time or the types of programs being used for student learning, where students just sit for 30 minutes per day, per subject, having to answer so many questions, because this is supposed to be additional support, a supplement for teaching. But sometimes that supplement takes center stage, and a lot of teachers just use it as their tier one instruction. So I love that you bring up pedagogy and really training teachers to use the technology intentionally, so they can be very effective with their students and also work on the decision-making process. And I think the critical thinking skills, and the AREA Method you're talking about, are going to be very useful even in the K-12 space. So for our audience members listening right now, please rewind a little bit and go back to the AREA Method, or you can also purchase Cheryl's book, The Human Edge: Smarter Decisions in the Age of AI, to learn a little bit more about that. But Cheryl, I want to ask you somewhat of a closing question, because I always love to end the show with my last three questions, which I hope you'll be ready for. Today's conversation has been very enlightening for me, especially with the AREA Method, and especially hearing your perspective on what is happening in higher ed and how that can affect the K-12 space, which is where I'm most familiar and have done most of my work.
But I want to ask you, Cheryl, what is one question that I should have asked you today that I didn't, and what would that answer be to that question?

Cheryl Strauss Einhorn

Well, I think one of the questions is probably: where do most people think that AI is gonna help and support them best, and what is the human judgment moment that would let us use AI in that way most effectively? I would answer that by saying most people think that AI's superpower is research, right? It has just a vast wealth of information that used to take us so very long to put together, and even then it was always going to be more incomplete than what an AI tool has vacuumed into its program. And what I would say is the human edge moment there is that we don't want to just gather information. Now with AI, the problem is analysis paralysis, because it can just give us too much. And so what we really want to know, and the human judgment moment, is how you actually direct the research, so that you are guiding AI on what specifically you want to learn about, and what the constraints are that put information outside the boundaries, so that you're not getting everything. We don't want everything. We don't want to take the time to search through it all and worry that the answer must be in there somewhere. What we want to do is direct AI, because we know enough about the problem that we're solving to ensure it's giving us answers that are actually going to be specific, applicable, and move the process forward.

Dr. Alfonso Mendoza

Yes. Excellent. Man, I wish I would have asked that question, but I'm glad you asked it and I'm glad you answered it. That was such a wonderful answer. Cheryl, this has been an amazing conversation, and I just thank you so much for taking a little bit of time out of your day to come on the show and speak with not only me. The way I do this, I always say this is my personal professional development: for 45 minutes to an hour I get to sit with an expert, ask questions, and then share that with the world. So thank you so much for enlightening me, because this to me is very important. As I told you, I always love to be in the middle, that cautious advocate, and really help bring both sides of the table a little closer to have that dialogue, that discourse, and see what the good things are, and obviously some of the bad, but let's work together towards a solution. Let's not just be all "let's move fast and break things," or "no, I don't want to do this, let's ban it, let's get rid of it." Let's just have some dialogue and see where we can go and how we can grow together as a community. So, Cheryl, thank you so much for your time. Now, before we go to these last three questions that I always ask my guests, can you tell my audience where they can go to connect with you if they would love to contact you? I know the book is coming out May 15th, so audience members, make sure that you look out for that. And we'll make sure we put the link to your website in the show notes. But let our audience members know where else they might be able to connect with you.

Cheryl Strauss Einhorn

Yeah, so in addition to the website, areamethod.com, I hope that people will connect with me on LinkedIn, look for me on Instagram, and I look forward to hearing feedback about the book. This has been such an interesting conversation. Thank you so much for your great questions and for prompting me to share with your audience.

Lightning Round And Final Wrap

Dr. Alfonso Mendoza

Yes. No, thank you. Just having an amazing guest like you, and it's great that you were able to reach out, we connected, and I got to learn more about the great work that is being done and that you're doing. We definitely need that within our education space to continue to grow together. But before we wrap up, Cheryl, I always love to end the show with the last three questions. I always send them in the invites, so I hope you got to see them. If you didn't, it's okay; we'll take care of you. So here we go. As we know, every superhero has a pain point or weakness. For Superman, kryptonite was what weakened him. So what I want to ask you, Cheryl, is, in the current state of education, whichever you'd like to pick, K-12 or higher ed or all of education, what would you say is your current edu-kryptonite?

Cheryl Strauss Einhorn

Oh, I can really go down a rabbit hole. When I find something interesting, I need to take my own advice about the research, because I can really go after a lot of details.

Dr. Alfonso Mendoza

Excellent. All right, great answer. Now, question number two, Cheryl, is if you could have a billboard with anything on it, what would it be and why?

Cheryl Strauss Einhorn

So it would be: let's help people get along better. And the reason for that is, for too long we've told people that decision making is a solo activity. And that's just not true, right? At some point, other stakeholders are involved, and we need other people to guide us, to expand our perspective, and also to help carry out our decisions and make them successful in the world. And so we all need a way to just get along better.

Dr. Alfonso Mendoza

Excellent. And my final question to you, Cheryl, is if you could trade places with anyone for a single day, who would that be and why?

Cheryl Strauss Einhorn

So I would suggest Marian Croak. I don't know if you know who she is. She actually wrote the foreword to The Human Edge, and she is an unbelievable woman. She's in the National Inventors Hall of Fame, she has over 200 patents to her name, and she was one of the key people behind Voice over Internet Protocol. She talks about how we can all build curiosity by looking at the world with "why": we can take any small moment and ask, why is it that way? And that can lead you to, well, what else could it be? So I think it would be really fascinating to have an opportunity to live in such a curious mind and see how she sees the world for a day.

Dr. Alfonso Mendoza

Excellent. Well, Cheryl, thank you so much again for your time. It has been an honor to have you on my show, and thank you for your great responses to these questions. Audience members, don't forget the book, The Human Edge: Smarter Decisions in the Age of AI, coming to your bookstore, or anywhere books are sold, May 15th. Please make sure that you check it out. We'll link the website, Cheryl's LinkedIn, and her Instagram in the show notes so you can connect with her, reach out, ask questions, or just see more of the work she's doing. So, Cheryl, thank you so much for today. And for our audience members, please make sure to visit our website at myedtech.life, where you can check out this amazing episode and the other 361 wonderful episodes, where I promise you will find some knowledge nuggets that you can sprinkle onto what you are already doing great. And again, big shout out to our sponsors, Comeback Coffee, Peel Back Education, Book Creator, and Eduaid. Thank you again for believing in our mission and supporting our show so we can bring amazing guests and amazing conversations into our education space, so we can continue to grow professionally and personally. And until next time, my friends, don't forget: stay techie!

Cheryl Strauss Einhorn Profile Photo

Founder, Decisive

Cheryl Strauss Einhorn is the creator of the AREA Method, a decision-making system for individuals, companies, and nonprofits to solve complex problems. Cheryl is the founder of the decision-sciences company Decisive, offering leadership training, curriculum, coaching, and professional development services, and is an adjunct professor at Cornell University. She is the author of the award-winning books Problem Solved, about personal and professional decision-making; Investing in Financial Research, about financial and investment decisions; and Problem Solver, about the psychology of decision-making and Problem Solver Profiles. For more information, check out Cheryl's TED talk and visit areamethod.com.