Safe AI for Kids: How Chatperone Protects Your Child ft. Caleb Hurd | My EdTech Life 356
In this episode, Dr. Alfonso Mendoza sits down with Caleb Hurd, founder of Chatperone, an AI chat platform built specifically to keep kids safe online. As a father of two and a 20-year tech industry veteran, Caleb built Chatperone not for investors or school district contracts, but for his own children first. What started as a personal solution has become a mission to give parents and educators peace of mind in a world where AI chatbots are already in the hands of our kids.
From the Character AI controversy to COPPA compliance gaps, Caleb and Dr. Fonz unpack the real dangers lurking in unguarded AI platforms and why Big Tech is getting it wrong. They also explore what it truly means to put the child at the center of EdTech design, how parents and teachers can work together to guide healthy AI interactions, and why data ownership matters more than ever in 2026.
Chapters
00:00 Introduction and Context Setting
06:04 The Birth of Chatperone
12:01 Navigating Parental Concerns
17:48 Building Trust with Parents
23:52 Challenges with Big Tech
29:52 The Future of Child Safety in EdTech
30:47 Understanding AI in Education
35:46 Building Solutions with Children in Mind
40:39 The Philosophy Behind Chatperone
42:44 Navigating the Current Educational Landscape
47:46 Reflections on Parenting and Technology
Sponsor Shoutout
Thank you to our sponsors: Book Creator, Eduaide.AI, and Peel Back Education for supporting My EdTech Life.
Get 3 Months of Book Creator Premium Access Free!
Use Code: MyEdTechLife
Stay Techie ✌️
Peel Back Education exists to uncover, share, and amplify powerful, authentic stories from inside classrooms and beyond, helping educators, learners, and the wider community connect meaningfully with the people and ideas shaping education today.
Authentic engagement, inclusion, and learning across the curriculum for ALL your students. Teachers love Book Creator.
Thank you for watching or listening to our show!
Until Next Time, Stay Techie!
-Fonz
🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨
00:00 - Welcome & Sponsor Thanks
00:54 - Meet Caleb Hurd And Chatperone
02:24 - Why A Parent Built A Safer Chatbot
05:01 - From Home Use To Wider Demand
07:37 - Parents And Schools: Shared Responsibility
10:42 - Guardrails, Custom Filters, And Alerts
13:55 - Data Ownership And Trust
17:07 - What Big Tech Gets Wrong
21:42 - Bringing Parents Into School AI Plans
24:45 - Student Fears And Workforce Futures
28:32 - Designing For Children First
31:34 - Product Philosophy: Peace Of Mind
34:06 - What Must Change Now
37:36 - Closing Reflections And Quick Fire
Dr. Alfonso Mendoza
Hello everybody and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day. And wherever it is you're joining us from around the world, thank you as always for all of your support. We appreciate all the likes, the shares, the follows. Thank you so much for engaging with our content, resharing our content, and reaching out to us because of the content. And again, all of this wouldn't be possible if it wasn't for our amazing sponsors. Thank you so much to Book Creator, Eduaide.AI, and Peel Back Education for believing in our mission to bring these amazing conversations and amazing guests to talk to us about the work that they're doing within the ed tech space. And today I am really excited for our guest, Caleb Hurd. Caleb, how are you doing today?
Caleb Hurd
I'm doing great. Thank you so much for having me on.
Dr. Alfonso Mendoza
Excellent. Well, Caleb, thank you so much for reaching out via email. And I know that we've connected on LinkedIn. So I'm just really excited to talk a little bit more about Chatperone. This is something that is very exciting, and as I dove in and read some of the blogs and the info that you have on your website, it really got me excited. So I'm thankful that you are here. Thank you for accepting the invite, and I'm just really excited to get going on our conversation today. So, Caleb, for audience members who may not be familiar with your work just yet, can you give us a brief introduction to your context within the ed tech space?
Caleb Hurd
Sure, absolutely. It's good to meet all your audience members. My name's Caleb Hurd. I've been working on and launched a product called Chatperone. It's like a chaperone, but for chatting. So, Chatperone. I'll go ahead and give you the story that formed it, because that will explain what it does as well. I have two very bright kids. I've been working in the tech space for 20 years, and I have an eight-year-old and a 12-year-old. They're very curious and very tech savvy, a little more tech savvy than me, which is embarrassing because it's what I do for a living. But I started paying attention to how they were interacting with ChatGPT and other AI chatbots, and I started reading news articles about some interactions that, you know, had bad outcomes, and I didn't want my kids to go down that path. So I actually built a tool just for myself. It wasn't intended to be something that I launched to the public. I found that there were no really good parental controls in place for chatbots. There were some really basic ones, but there weren't any controls that would bring the chatbot down to the level of the child talking to it. They really aren't, you know, following COPPA compliance, where they're getting consent from the parents for certain interactions and data, et cetera. And, I think most crucially, there weren't any alerts letting parents know: hey, your child just had this conversation. They had a tough time with some kids at school, maybe you should talk to them about that. So parents are kind of left in the dark. It's sort of like the internet launching, you know, where it's new and exciting to some, it's scary and frustrating to others.
But kids just sort of slip through the cracks, even though they're a primary consumer of these new services. So I built this for my kids and I let them play with it, and they loved it. They had independence and access to a tool where, you know, I even made a homework mode so that it would not give them the answers; it would walk them through how to solve problems and kind of teach them along the way. And as I built this for myself and my own kids, I started talking to other parents and realizing that this is something many parents wanted access to. So that's what Chatperone was born out of. I'm currently working on an education version as well, because there are very similar challenges in the educational space, the teacher space. I'm really laser focused on safety for children. It's very easy in this kind of burgeoning space to lose your focus, and I'm really focused on making sure that these interactions are safe and educational and useful, and that the authorities in kids' lives, such as teachers and parents, can appropriately get involved in guiding them through it and making it a useful technology. Because this is a technology that, whether we like it or not, is part of the workforce already. It's already kind of arrived. And our responsibility is to prepare these children to be successful in the workplace. They're going to be using these tools, and I want them to have a good, positive experience where the tool doesn't dumb them down by, you know, giving them the answers and making them not exercise their minds in arriving at the answer. I also don't want them to create unhealthy attachments or get exposed to information that they really shouldn't be at their age level. So that's the space I'm really focused on. I know that's a very long intro, but I would just end it by saying, you know, it's just me. I built it from scratch.
There are no VCs, there are no investors involved. I don't have to worry about, you know, how are we going to monetize children's information? I really answer strictly to the parents that use the tool and, in the future, the schools and teachers that use the tool. And I really like it that way, because I don't have to worry about feeding their data back or trying to figure out how to maximize profits. I can just focus on what I'm concerned about, which is helping children.
Dr. Alfonso Mendoza
That is fantastic. There's definitely a lot to unpack there within that answer. You're a founder in one of the most emotionally charged spaces in tech; we're talking about child safety. And even from last year, coming in October, I think of the very famous cases happening with Character AI and many more, where we heard about a lot of students going through some rough times, building those parasocial relationships and falling into depression and so many other things due to the chatbots, the answers that they're getting, the relationships that are being formed. So talk about an emotionally charged space here. And, you know, many times founders really are just chasing those enterprise contracts. They want those school district contracts, but you flipped it. You did it in reverse, the way that you're explaining this to me and to our audience who's listening: you went straight to the parents. So I want to ask you, being a founder, and of course this being an ed tech podcast where we talk about a lot of things that tie into schools and technology and so on, but you flipping it and going straight to the parents, what was the first reaction from many people around you, since you've been in the tech space for 20 years? What were their initial reactions to your vision? And was there anybody maybe questioning it, or trying to talk you out of it, saying: no, no, we've got to go to the schools first?
Caleb Hurd
Yeah, it's a really great question. I don't think it's an exclusive question, right? I think this is something that should be solved both at schools and in the household simultaneously, because the reality of the matter is kids are using this technology already. It's already being used to answer homework questions, write papers. It's already being used in the education space, just sort of hidden behind the scenes. And it's happening at home. So it's that classic problem where parents and teachers really have to come together and align on how they're going to manage exposing children to this technology and managing it. You really can't solve it, in my opinion, in one space or the other. It really has to be solved in both. And if you have a platform that starts off focused on parents, and I chose that area primarily because I am a parent, because I built it to solve my own problems, if you start there, you can work your way backwards towards the tools that teachers need in order to deal with this sort of technology. There are a lot of great tools in this space already that are focused on schools. I'm trying not to, you know, build a bot that will educate kids directly or do any of those things. Nothing wrong with those solutions; I think they're phenomenal, and I think that space is growing with some really great players in it. But I really wanted something where, whether you're a teacher or a parent, if you are exposing a child, and I use the word child, youth is probably the better word, if you're exposing a teenager or a young kid to this technology, I want you to use a platform that you know they're going to be safe with, that is not going to hallucinate and forget that it's talking to a child.
Not to get too geeky, but every 20th message, I'm sending a reinforcement reminder to the system: the age of the child, how to conduct itself, not to touch on certain topics. There are customizations in the tool as well, so that if you're a school that wants to focus on a particular worldview, or you want to avoid a certain topic because it's sensitive to your school and you want to teach it in the classroom, you can go in and add that custom filter, and the chatbot will avoid that topic or approach it the way that you prefer. So my point is that there's a peace of mind: a child is interacting with a tool, working on homework. Is the child going to be safe? Is it going to be a healthy conversation? Is it going to be in the spirit and the voice of that particular educator or that school? And back to your original question about the two separate spaces, the parents and the teachers: I think they both interlock, but I also think they've got to be solved independently as well.
Dr. Alfonso Mendoza
Excellent. Yeah, for sure, definitely. There's a lot to unpack there too. As we started talking about, most important is that human interaction with the technology. And, you know, it's already 2026; I can hardly believe it's almost March already. This year is going by real quick. And since 2022, there have been so many things that have occurred, not only in the AI space in general, generative AI in the business enterprise, but also in education. And obviously with these devices, you know, cell phones and iPads and so on, being made available to our youth, and then them having access to platforms that are definitely not guardrailed or protected for child use, like we were talking about a little earlier with some of those cases that we heard about last year. So I want to ask you, as someone who's been studying this space myself, as a matter of fact, my whole dissertation was based on interviews from 2022 and learning so much from users all around the world, many times the ed tech industry has a history of making big promises to families but underdelivering. So I want to ask you, as a parent as well, coming up with a platform like this: how do you go about building that trust with your users, the parents that are concerned about their students accessing questionable AI platforms?
Caleb Hurd
Yeah, it's such a great question. I think the short answer, or maybe the medium answer, is that I equip parents with the same tools I need to manage my own children. I don't have time to read every single conversation that they have. I don't have time to police everything that's happening. And even if I did have time, there's no sense of independence for the child, or the youth, in interacting with the system. So, especially for my 12-year-old as an example, I give her the most sense of independence by monitoring the conversations and having the system alert me just on the things that really matter, the things I need to dig into and understand. And there are different levels of alerts too. There are the moderate alerts that are really kind of informational: hey, you might want to talk to your kid, they seemed a little bit sad about this particular topic. And then there are ones that are more severe: you really should have a conversation with your kid. By giving parents those tools, it's less about them trusting the platform, although that's important too. It's more about giving them the tools to trust themselves to parent their kids and be involved in those conversations. And I try to give as much support in that process as possible. So not only am I sending, you know, a message saying, hey, your child had a conversation that might be bullying related and it seems kind of serious; here's some supporting information that you can read about how to navigate a conversation with your child about bullying. So, you know, I guess the first thing is: I'm a parent. I would only build a platform in a way I'd trust with my own children, and I mean, they're power users, right?
So from a safety perspective, I have confidence from that side. But on the other side, I'm really giving the parents tools to read every conversation that they want, and to monitor the ones that matter. The confidence really comes from the parent being equipped to ultimately be the one to raise that child, with all the information that they need available to do that.
Dr. Alfonso Mendoza
That is fantastic. And, you know, I think that's one thing that really stands out, from what I'm hearing, especially you coming on the show as a parent who built this. As we know, there's a version of child safety that turns into surveillance and control, but it's disguised as protection. And I understand, especially with many platforms that are out there being used, but I really love where you're coming from on this: this is something like, you know, I'm giving to my own children. The way that you are supporting them, and also the way that the platform is helping you to see if there are any concerns, any alarms, any types of specific conversations that shouldn't be had, and having the platform alert you so that as a parent, you can still have those crucial conversations with your children, letting them know and showing them what is right, what is wrong, how to interact with the platforms and in general with generative AI altogether. I think that's something that is very much needed, and like you said, it should be a partnership: you have the home, you have the district as well, and you build that up. So I guess, also, seeing how big tech, we know the Googles, we know the Microsofts, we know that they have the resources to build in child safety, but they just keep getting it wrong, because they may be focusing on the wrong thing. So I want to ask you: what are some of the things that they may be missing that are helping you get things right? Because from what it seems, and what I'm seeing on the website itself, you really thought this through.
So what do you see, since you've been in the tech space for 20 years? What is it that they're missing out on that you are getting right?
Caleb Hurd
Yeah, it's a great question. So, random fact: you were talking about big tech; I'll talk about tiny tech here for a moment. I was actually Elf on the Shelf's first software engineer hire. I was the first technologist that was hired by Elf on the Shelf, if you're familiar with that brand. And so I spent a lot of time learning about COPPA, the Children's Online Privacy Protection Act. I know that there's FERPA for the education system, which is very similar when it comes to protecting kids' information. And more than anything, at Elf on the Shelf I got exposed to a family-focused, family-first company. Even though it was a for-profit, their goal really was family moments and creating those memories. And I realized, one, I picked up the technical skills needed to handle children's information correctly, sort of from the ground up. And two, I learned that you can be a for-profit but also put the group that you're serving before your profits. So when I launched this company, I wanted to capture that same heart that I got exposed to at that company, and use the skill sets that I picked up in that very niche, particular space. As far as what large companies are doing wrong that is helping me fill this area: I would love for my company to fail spectacularly because the large entities out there get it right. It would be delightful for me to wake up one day and realize that OpenAI or Microsoft went to my website, saw all my controls, ripped them right off and put them in there. I would shut my company down with a smile on my face and go about my business. Because the unfortunate truth is that, and Character AI made this very clear last year, they are treating under 18-year-olds as future customers.
Everybody's saying, oh, you've got to be 18 to use the platform, but they all know that the kid is just going to click through, and two seconds later they're accessing a system that's not designed for them. And this isn't an accident, in my opinion. I don't sit in those meetings, I don't know for sure, but I can't imagine that it's an accident that this is the way these massive multi-billion dollar companies have designed this. And that's because if they can get your 15-year-old to use their product exclusively today, then when that 15-year-old becomes an adult and a paying customer, they're hooked already on your product, and maybe not in a healthy way. So it feels to me like it's intentional. Unfortunately, these things take time to sort through. And, like I said, I hope that they correct the lack of parental controls, alerting, and thoughtfulness in the process of involving parents when children interact with this technology. But the reality of the matter is, when you're owned by investors and you're answering to a board of directors, safety for users often gets put down on the priority list. And to give you an idea of the intentionality behind building this tool: I don't actually own any of the data. Very intentionally, in the contract, and you can go on chatperone.com and read the contract during the signup process, I make sure that all the data is owned by the users and not by me. So even if I do sell the company in the future, or something happens, you know, I win the lottery and move on with life or whatever, and I'm just using that as an example rather than the proverbial bus, my successor still can't take that data and do anything that would be harmful to the child. Because at the end of the day, I've intentionally made sure that the platform is built, legally and technically, so that it protects the child in that situation.
And I've been talking about parents a lot, but educators as well. You don't want to be in the news because you told your students to go use an AI bot and either there was a bad experience with it, or that company decided to resell your information or use it for training, or market to the kid in some way, or sell that data to marketers to market to the child. So I keep talking about parents, but these problems also apply to schools in almost an identical way. And having a safe platform that all this can happen on, in a way that you feel comfortable, where you own the data and can access the data and see everything that's there, I think is absolutely important. Do I think any of these large corporations are going to plug these holes anytime soon? I doubt it. Because, you know, unfortunately, money is king; profits are king. So, yeah, I don't know if that was an answer to your question.
Dr. Alfonso Mendoza
No, that's fantastic. But I was going to add, too: I always say now that data is the new currency, so data is king now too. That's right. And that's one of the things, going through my studies, probably in March of 2023, that's when I really started getting into researching AI and so on. And then of course I fell into the term data rentiership. When I started analyzing that and thinking to myself, as a digital learning coordinator at a school district, about the amount of platforms that are free to use, I'm thinking, well, they've got to be making their money somehow. And then, of course, there we go: we start seeing how the data itself becomes that currency, and they start using that data. So I really do appreciate the transparency, as far as what you said with Chatperone, that the user is the owner of that data. Because even now, with so much technology and so many platforms coming into school districts, I want to say usually on a monthly basis and then on a yearly basis, and then of course more apps start coming along, for the most part a lot of CTOs are not informed of that. Yeah, they've got the initials, like COPPA, FERPA, and all of that good stuff. But in the fine print, on some of these platforms, the data is not even stored in the US; it's stored elsewhere. So those are the things that I always caution CTOs and directors about: please make sure you read the fine print, especially on that data component, because as we know, he who controls the data can control so many things as well. And of course, we're talking about profit, we're talking about data and how data can be the new currency now.
So thank you so much for sharing that, and thank you as well for being very transparent about the way that you have set up your platform, Chatperone. Like you mentioned, the day that you do win the lottery and walk away, everything is going to be okay and the data is still going to belong to the user. That's fantastic. So I want to talk a little bit more about schools now. For myself, being in education now going on 20 years, one of the main things is, when we talk about learning communities within a school district, oftentimes we think superintendents, mid-level directors, coordinators, teachers, and then students. But one piece that we often forget about is the parents. And like I mentioned at the very beginning, this needs to be kind of a partnership, where we need to include parents in these important conversations and decisions that are being made at districts as well. So I want to ask you, based on me talking a lot to teachers and parents, and how they wish they had more of a partnership in that sense: from where you sit, what have the conversations been like, maybe at some schools that you've visited or that you're currently working with, with parents about AI and what's going on with it?
Caleb Hurd
Yeah, that's a really great question. I joined the AI governance board at my school, and I'm going to use a conversation that happened in that context to answer, because I thought it was extremely revealing. I had one reaction to it when the conversation happened six or so months ago, and now I have a different reaction to it. We were talking about AI, and the first thing I realized very quickly, in a room that had teachers and parents and students all as part of this advisory board, was that the parents, who were mostly tech workers, knew what AI was and how it worked from an industry perspective. The teachers in the room, some of them had never used it, and there seemed to be hesitation in that room. I wouldn't say hostility in that particular conversation, but I've picked up on that in other conversations, where there's a lot of resistance to it. And I actually agree with a lot of the resistance, by the way. I think that, much like when iPads first inundated the education system, there were a lot of unintended consequences. There were teachers saying, hey, this is not going to be a great outcome, and in retrospect, they were right. I think in a lot of ways, some of the same concerns they expressed then are now being expressed about AI, and I think there's a lot of legitimacy in those concerns, which is why I think solutions that are limited, tailored, and intentional in the school system are so critical. But also, the first thing I did was work with some of the other parents, and we actually presented: this is kind of what AI is, this is how it works. It's basically autocomplete on your phone, but instead of just suggesting the next word, it's suggesting the next paragraph.
And instead of being trained on millions of single messages, it's trained on the internet. That's sort of it in a nutshell. And by educating the teachers on what it was, they started to understand what its weaknesses are. You know, it's a sycophant; it will validate you no matter what. And the reason for that is because it's being trained by billions of users interacting with it, and they love to be validated rather than given the correct information. So it has turned AI into a sycophant in a lot of cases. And it will lead you down a road where it's confidently wrong, and you'll just be along for the journey unless you are an expert, or have the critical thinking to pick at it and ask: are you sure you're right? And, you know, hallucinations and other things that are weaknesses of the system. But I think the thing that surprised me the most in that conversation didn't come from the educators in the room. What surprised me most was that the students in the room were actually afraid of AI. They were actively afraid of the uncertainty of their future career prospects, because they had questions like: Am I even going to have a job? What's going to happen to junior-level roles once I get into the real world? Are they going to exist anymore? What is this technology going to do to the student community as a whole? And it actually led me down a path of looking at the last hundred years of newspaper headlines saying, you know, people are going to be automated away by machinery, going back to the 1900s. This has been an ongoing headline for a long, long time in a lot of different categories. And to the people that say, oh, but this is different, AI is different, it's going to impact many industries simultaneously: the reality of the matter is that it is going to impact industries, but industries also are going to arise.
And the people who do succeed in the workforce of the future are the same kind of people who learned how to use computers when the PC hit. They're the same group of people who learned how to use the internet when the internet became popularized. It's just another tool in a long line of tools that extends humanity and what we're capable of doing. So even though I understand the trepidation that I sense from students, from teachers, from the education world, I think there is a middle ground where we learn how to use this technology safely so that children aren't being hurt, and we use it safely so that kids continue to be educated rather than robbed of their ability to think. And I think those solutions, and Chatperone is just one of them, are emerging and will continue to emerge. At the end of the day, the child should be the center of a solution; then you move out to the next authority figures around that child, the parents and the teachers, and then you move into the features that administration and others want for managing it. A lot of companies are doing it the opposite way. They're sitting down with the people who write the checks. There's nothing wrong with them; they're phenomenal people. I'm sure a lot of them are listening now. We love you. But when you design a piece of software with that person first, you're going to get a lot of administrative functions that fit the school system. You won't necessarily get what's best for the person on the very last rung, the child, the youth interacting with that technology. And I think those solutions should be built upside down.
Dr. Alfonso Mendoza
Excellent. Yes, I love everything that you said there. It really falls in line with the very beginning of our conversation and with your introduction, and the way that you see things. I think your perspective, your lens, has been essential to building what you have built through Chatperone, and of course your mission. So I want to ask you a little bit about the product philosophy and the mission, because one of the things that catches my eye is the following. I go to chatperone.com, and we'll make sure we link that in the show notes so everybody can check it out, but right on the front page it says AI chat for kids, comma, and then it says peace of mind for parents. Now, as we know, most products in this space often lead with fear-based marketing. They talk about the predators, the dangers, the worst-case scenarios. But you lead with peace of mind and empowerment for parents. So I want to ask you, was that a deliberate strategic choice? And what does that say about you as well? Because, like you just told us, as a parent you built something like this because you saw the need for your kids. So tell me a little bit about that, the philosophy behind your product.
Closing Reflections And Quick Fire
Caleb Hurd
Yeah, I think it ties in with my previous answer, which is that just like those previous technologies, the internet, the PC, social media, and other technologies that have been introduced, there is this healthy middle ground. Maybe not with social media, I'm gonna put that one aside, but with these other technologies there's a middle ground. On one extreme are all of the promises big tech is making: hey, it's gonna do your laundry in a year, whatever their big promises are. And they sort of have blinders on about the negative impacts on people, the negative impacts on children. And on the other extreme are people who haven't yet adopted the technology or played with it or understood it, who kind of don't want things to change. I think there's going to be a middle ground where this technology evolves in a way that is useful and safe and hopefully doesn't damage the people using it. If that's the reality of the future, if that's where we're headed, then we should find positive and engaging ways to make it safe, right? If the internet is coming, let's find ways to regulate it and manage it and put guardrails around it in a way that is safe for the users who interact with it. So, peace of mind for parents. Again, if I were venture capital backed and I said things like, the users are gonna own their own data, we're never gonna take advantage of that data and try to make money off of it, or we're gonna focus not on fear-based marketing but on positive messaging, positive marketing, peace of mind for parents, peace of mind for teachers, they would not be happy, because the alternatives are very effective ways to make a lot of money. And I didn't build this to make a lot of money. I really built it to be impactful and effective.
And so I intentionally chose that positive messaging because I truly believe that with this technology we can thread the needle and do it right. I truly believe that. The messaging that flows out of that is really just an outcome of that worldview.
Dr. Alfonso Mendoza
I love it. That is great. Thank you so much for sharing that, and for sharing so much insight today, Caleb. I'm just really excited that there are wonderful people such as yourself, founders and creators, who really have that child-first approach in shaping their products and making those connections, and especially, for me, peace of mind for parents. I'm not a parent myself, but I know from conversations with various parents, not only from my school district but other school districts around, and in social media spaces, how important this topic is for them. Many of them find themselves at a loss for words, because they're not sure what to do; they just feel lost. And of course, with so much tech out there, so much information already, for them to navigate that space and find something that is good and solid and really has a great heart for the mission, I think something like Chatperone would be fantastic for them to look into and learn more about. But now that we're kind of wrapping up our conversation, I have two final questions I want to ask you that might be a little more deep, maybe one not so deep. Based on everything you were talking about, I want to ask you: what is the single most important thing that every parent, every educator, and every policymaker needs to understand about this current moment that we're living in right now, before it's too late to get it right?
You know, what can we do now to make sure we're on the right track, making sure we're taking care of our youth during this time when so many platforms can definitely cause harm for them, through social media, through deepfakes, and just being used unethically and irresponsibly? What can we do?
Caleb Hurd
Yeah, again, you come with some phenomenal questions. I love them. I'm gonna give a two-part answer: one focused on the regulatory side of it, and one focused on the teacher and parent experience. So on the regulatory side, I think the biggest thing is that these companies are trying to make an end run, as they always do, to maximize profits. Really the only check and balance that can slow that progress, and it's good progress, it doesn't necessarily need to be stopped, but slowed down so that we don't hurt people along the way, is our regulatory bodies. So on the governmental and business side of it, they need to catch rules such as COPPA and FERPA up to the latest technologies and tools that children are interacting with. And they need to protect children from outcomes like what happened with Character AI, where there were unintended consequences. I mean, if you sat the developers of Character AI down and asked, did you intend for this to happen? Of course they're gonna say no, of course not. These are humans; they have kids of their own, I'm sure; they're good people. But there was nobody who slowed them down and said, hey, let's think through the implications of what we're building and some of the potential outcomes that could happen from it. So from a regulatory perspective, I think it's an ideal time. Everybody says Congress is dysfunctional. I would like to think that children and their safety may be one of the only topics that brings Congress together to agree that we should protect this particular group. I think anybody on the political spectrum would agree with that. So that's my hope on the regulatory side.
On the parent and teacher side, there's always this period of time when a technology arises, this sort of period of uncertainty, right? As it's growing, we don't understand it, we don't know what it's going to look like in the end, et cetera. And we're in that period of uncertainty now, where unfortunately for these parents and teachers it's difficult. It's like whiplash. You wake up every day and something new is launching, something new has changed, and how do you keep up with it all? I would encourage them, and this is probably an unrealistic encouragement, to engage in meaningful, open, non-judgmental conversations with their students, with their children, and ask them: how are you using these tools? Teach them to me, right? Learn the tools yourself, see how the kids are interacting with them. By investing just a little bit of time in a way that makes the child or the youth feel like a peer educating their parent or their teacher on what's happening, you can cut right through the noise, right through all the headlines and all the change. If the student sits down with you and says, let me show you what's happening, and you're not judging them, you're not jumping right in with, oh my gosh, I can't believe you had it write that homework assignment for you, then you're learning, and you're educating them. You're saying, hey, that's really great. Also, you know, when I wrote papers, one of the challenges of writing papers was making mistakes and doing it wrong. How are you learning that part of it? I think connecting with the students, connecting with the technology, learning it yourself, using it yourself: go in and pretend you're a kid and try to write a homework assignment for yourself.
If you lay out a lesson plan as a teacher, go sign up for ChatGPT, pay the 20 bucks to get the latest version of it, pretend you're Sally Sue, and try to write your own paper with it and see how it functions. A lot of this uncertainty can be cleared up by learning it yourself and spending time with students and your children directly. Because at the end of the day, that's where the source of information is, and that's where the trust needs to be built. It doesn't really matter if it's Chatperone or regulation or whatever; the relationship between those three groups, the child, the parent, and the teacher, is where all the magic happens. And so I would encourage everybody to focus on that.
Dr. Alfonso Mendoza
Hopefully. Great answer, great answer. And then the last one, Caleb. Maybe you'll look back at this episode years later and be able to share this; this might be one of those proud dad moments. So I wanted to ask you: when your kids grow up and they look back at this moment and say, my dad built Chatperone, built this wonderful app, during this critical time, and I know it's been four years, since 2022, going into 2026, since ChatGPT came out, what do you want them to say, or what do you think they'll say, about the choice their dad made when the world was still trying to figure out AI and children?
Caleb Hurd
Yeah, it's a great question. I think if Chatperone is able to in any way nudge this conversation and this industry towards, first of all, letting youth learn rather than having the answers given to them directly, so that their critical thinking abilities don't atrophy during this time, that would matter. It's a critical time. It doesn't take long to pick out the cohort of youth that is affected by this right now, and it doesn't take long for that cohort to be impacted for life, because they had a period of time where everything was being served up to them on a silver platter. If Chatperone can even just nudge that conversation forward, then I think I would be a proud dad, years from now, that I was able to accelerate us towards having safe AI interactions that are educational as well.
Dr. Alfonso Mendoza
Excellent. Caleb, thank you so much. I really appreciate all of your insight. Thank you for sharing not just your platform, but your heart. Throughout the whole conversation, just picking up on your mission and your vision, I really truly felt that it comes from the heart, especially, like you said, because of your own children and taking care of them. And you said, you know what, I want to make sure that other children are taken care of too. I think that speaks volumes about not only you, but the work that you're doing and the mission of Chatperone, and obviously, peace of mind for parents. That is something huge. So thank you so much for joining us today for this conversation. But before we wrap up, I always love to end the show with these last three quick-fire questions. So hopefully you're ready to go. As we know, every superhero has a pain point or weakness. For Superman, it was kryptonite that weakened him. So I want to ask you: in the current state of technology, what would you say is your current tech kryptonite?
Caleb Hurd
Yeah, it's a great question. I would say I'm impatient, right? I see the solution, and in this case I built it. I want others to build similar tools as well. And I just want to make an end run to getting this into the hands of the right people: educators' hands, children's hands, parents' hands. So I would say my biggest weakness right now, at this particular point in time, is that I'm just impatient to get this to the next level, and not just Chatperone, but really this problem as a whole. I'm just impatient for us to sort it out as a society.
Dr. Alfonso Mendoza
Excellent. All right, great answer. Question number two: if you could have a billboard with anything on it, what would it be and why?
Caleb Hurd
I would say: Chatperone, safe AI for children.
Dr. Alfonso Mendoza
Perfect. Excellent. Simple and to the point. I love it. All right, and the last question, Caleb: if you could trade places with anyone for a single day, who would that be and why?
Caleb Hurd
Uh you mind if I give you two answers?
Dr. Alfonso Mendoza
Sure, not a problem.
Caleb Hurd
All right, so my first answer would be Fred Rogers, because Mr. Rogers saw the same kind of inflection point with television. It could be used as a tool for bad or for good, and he decided that that new technology, that new tool, could be used for good, to help children feel better about themselves, accept themselves, and learn. And honestly, I'd want to be him for a day because I don't think I've ever seen a human being happier than Fred Rogers. It just sounds like a phenomenal 24 hours, and I think I would take away a lot of philosophical lessons from that experience. The second answer is I would love to be my eight-year-old son, or potentially my 12-year-old daughter, just for a day, just to see what they really think of this technology, what they really are afraid of or excited about. I talk to them and I listen, and I assume that I understand, but I would love to just be them for a day and really get my head wrapped around this problem, and a lot of other technologies that are coming out, a lot of other things that are changing in their lifetime, and understand how it all looks from their perspective.
Dr. Alfonso Mendoza
Excellent. Well, great answers, Caleb. Thank you so much. I really appreciate the time we spent together here. This was a wonderful conversation. And again, thank you for all the work that you're doing through the platform, and for really seeing things in a different light. I'm all about that, and I'm just thankful that I got to be here today and learn more about your work and Chatperone and what it can do for students. For all our audience members listening to this episode, please make sure you check out the show notes, where we will link Chatperone so you can go check it out. We will also make sure to link the best place to connect with Caleb. And actually, Caleb, let us know: if our audience members would love to contact you, what would be the best way to reach you?
Caleb Hurd
Sure. caleb at chatperone.com. And that's C-A-L-E-B.
Dr. Alfonso Mendoza
Perfect. And we'll make sure we link that in the show notes as well. So thank you, I appreciate it. And for our audience members, please make sure you visit our website at myedtech.life, where you can check out this amazing episode and the other 355 episodes. Thank you as always for all of your support. I promise you, in those episodes you're definitely going to find some knowledge nuggets that you can sprinkle onto what you are already doing great. So please stop by the website and check out those episodes. I also want to give a big shout-out to our wonderful sponsors. Thank you so much to Book Creator, Eduaide.AI, and Peel Back Education for allowing us to bring these amazing conversations into our education space, so that we may continue to grow and learn professionally and personally as well. So thank you all for your support. And my friends, until next time, don't forget: stay techie.

Founder
Caleb Hurd is a technologist, dad of two, and the founder of Chatperone: a safe AI chat app built specifically for youth with full parental visibility.
With 20 years of experience across companies like Chick-fil-A, Citrix, and Cox Inc., Caleb has spent his career at the intersection of consumer technology and how it directly impacts people. He was one of the first software engineers at Elf on the Shelf's digital platform, where he built COPPA-compliant systems for millions of children - giving him a firsthand understanding of what "safe by design" actually means in practice.
The idea for Chatperone started at home. Like most parents, Caleb couldn't find an AI tool he trusted to interact safely with his own kids. Youth are an afterthought in today's growing landscape of AI chatbots. He wanted one that wouldn't do their homework for them, wouldn't have unsupervised conversations he'd never see, and wouldn't expose them to content that a parent or educator should be involved in. So he built it himself.
Caleb is currently completing a master's degree in AI and is building Chatperone independently: no VC funding, no PR firm, just a parent solving a problem he lived.