July 10, 2025

Episode 328: Job Christiansen

AI in Schools: Innovation or Illusion? | Ep. 328 with Job Christiansen

Is AI making classrooms smarter—or just noisier?

In this episode of My EdTech Life, I sit down with Job Christiansen, a fellow cautious advocate, educator, and instructional technology specialist, to ask the hard questions about AI’s role in education. From privacy concerns to the illusion of “safe use,” we unpack what most educators, tech leaders, and decision-makers aren’t being told about the tools flooding our schools.

Job doesn’t just read privacy policies, he tests them, creating teacher and student accounts to see what’s really happening behind the interface. This episode dives into why most AI tools may still be stuck in the substitution phase and what it’s going to take to truly shift toward responsible, innovative, human-centered use.

🎧 Whether you're in K–12 or Higher Ed, this one will challenge your thinking.

 TIMESTAMPS:
 00:00 Welcome and Guest Intro
 02:00 Job’s Journey Into EdTech
 06:00 First Reactions to ChatGPT in 2022
 10:00 Early Adoption vs. Caution in Schools
 13:30 AI's Substitution Trap & SAMR Concerns
 19:00 Redefining “Safe” in AI Tools
 24:30 What Job Found Testing Student-Facing AI Apps
 30:00 Historical Accuracy and the AI “George Washington” Problem
 36:00 The Concept of AI Pollution and Knowledge Dilution
 43:00 Transparency, Trust, and Teacher Responsibility
 47:00 Final Takeaways and Reflective Advice
 50:00 Where to Connect with Job
 54:00 EdTech Kryptonite, Billboards, and Historical Curiosity

🙏 Big thanks to our amazing sponsors:
 🔹 Book Creator
🔹 Eduaide.AI
🔹 Yellowdig

💬 Visit www.myedtech.life to explore more episodes and support the show.

👋 Stay curious. Stay critical. And as always—Stay Techie.

Authentic engagement, inclusion, and learning across the curriculum for ALL your students. Teachers love Book Creator.

Yellowdig is transforming higher education by building online communities that drive engagement and collaboration. My EdTech Life is proud to partner with Yellowdig to amplify its mission.

See how Yellowdig can revolutionize your campus—visit Yellowdig.co today!

Support the show

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 

Fonz Mendoza: 

Hello everybody, and welcome to another great episode of My EdTech Life. Thank you so much for joining me on this wonderful day and, as always, thank you for your support. We appreciate all the likes, the shares, the follows, and your feedback is always welcome too. As you know, we do what we do for you, to bring you some amazing conversations that will help us continue to grow within our education space. Today I have a wonderful guest, somebody that I follow on LinkedIn who shares a lot of great posts and a lot of great insight about AI in education, so I would love to welcome to the show Job Christiansen. Job, how are you doing today?

Job Christiansen: 

I'm doing great. Thanks for having me on the show, Fonz.

Fonz Mendoza: 

Excellent. Well, I'm excited to talk to you, Job. After a couple of posts on LinkedIn, I started seeing that we do have a couple of mutual friends and that we were commenting on the same posts, and I was like, I really like your insights, I really like what you have to share. And again, the reason that I reached out to you was just because I would really love to amplify your voice and hear a little bit more about your perspective and experience within the education space, mainly with AI in education. So, before we dive into the conversation, can you give us a brief bio and what your context is within the education space?

Job Christiansen: 

Absolutely. So, I'm a relative newcomer to the education space. My background is actually that I went to school and studied history; I got a bachelor's and a master's in history, and then I never really could get a strong career started with those degrees. So I did a couple of different things. I worked for some nonprofits, and then three years ago I saw a job opening at a school, basically for basic tech support. At one of the nonprofits I'd worked at, I had been hired basically because I had website skills on my resume. It's always those hard skills that they were looking for at that time. This was 10 years ago.

Job Christiansen: 

So I worked for this nonprofit for three years, and over the course of those three years they just continually figured out that, oh, this is broken, maybe Job can fix it, instead of going to the contracted IT guys that they had. It was a really small nonprofit, like 12 of us, so anytime you can cut costs, it was just Job fixing it. So I would just tackle things, approach things, start playing with all this different stuff, like the Salesforce database and the VoIP phone system, all these different tools I just kind of cut my teeth on. I'm not formally trained in technology; I just jumped in and learned it by using it and playing with it. So anyway, I applied to this school, and I think they really liked that attitude, that you can just learn by doing and you have that go-get-them heart. And so I was basic tech support for this school.

Job Christiansen: 

So that was three years ago, and it's a K-12 private Christian school, just to give you some of that background too, because that plays into just how I think of things and approach things, and I think that having that humanities mindset from those history degrees has given me a unique perspective. So now, where I am at the school: after the first year, the tech director who hired me stepped aside and a new tech director came in. And then this last year, instead of just being tech support, this new tech director saw that I worked really well with teachers, and so he kind of moved me over to being what basically is some sort of tech coach.

Job Christiansen: 

I'm an instructional technology specialist now, so I work with the teachers to help them use the tools more effectively in their classrooms. But through this whole process, I really found that, even though I was hired for technology, and that's what a lot of people called me up for, like, hey, my computer's not turning on, Job, and I was there in five minutes, I was the tech support you really wanted, that wasn't enough to keep me passionate and going.

Job Christiansen: 

So I've now been pivoting, and even some of the stuff that I write about, it's not so much the technology that excites me. I really just view technology as a vehicle. What I really have gotten excited about over the last few years is the learning process, and especially, how do we just get better at learning? And so I see that at times technology can help that and sometimes it can't. So that's kind of my approach, and my brief synopsis of all that.

Fonz Mendoza: 

Excellent. Well, that's good to know, and that's good to hear your background too, as I think that will definitely lend itself to this conversation very well. Especially talking a little bit about your coaching experience, which you saw in education and especially with AI in education, and being that you are in a private school as well, it's very interesting just to get those perspectives, because one of the things is, you know, in the public school sector, depending on the size of your school, usually the bigger you are, the more platforms you get to have, as opposed to a smaller school, where, due to funding, you have to be a little bit more tight with your money and budgets and so on. So getting that experience and, of course, that perspective from a private school, as well as how teachers and students are interacting with a lot of platforms, I would definitely love to hear that. So I want to ask you, Job, just on your own, before getting into education, I would love to hear what your thoughts were.

Fonz Mendoza: 

November 2022, taking it way back, as soon as ChatGPT was out: what were your initial thoughts? Were you an early adopter? Were you kind of a wait-and-see kind of guy, or did you just really kind of wait it out until you said, okay, let me see what this is all about? I would just love to hear your experience through that.

Job Christiansen: 

Yeah, so I had started at the school that previous August. So November was like three months in, right? So I'm three months in. I was aware it came out. I went and made an account at OpenAI for ChatGPT. I typed one prompt in, not knowing how to prompt, right, and it was something like, create guidelines or a policy for our school.

Job Christiansen: 

I saw it, and then I was like, kind of cool, and then I didn't touch it for a long time. I'm kind of a slow adopter in general with technology in my life. I just kind of grew up that way; I was kind of behind the curve. So I want to be aware of things, but I don't always use the things. And at the time I was just apprehensive, because I didn't really know what AI was, and I was worried it was kind of like the way that it's portrayed in media, that it had a life of its own. So I was skeptical, maybe optimistically skeptical, but I'm not someone who's just going to jump in and use it right out of the gate.

Fonz Mendoza: 

Yeah, that for me was something very similar. I'm kind of an early adopter, same thing: I just kind of went in, and then I just kind of backed off a little bit, seeing how things were moving, especially within education, and learning more. And it kind of goes to a post that you put up recently, where I honestly thought, oh my gosh, this is really magical, you know. I was like, oh my goodness, this is great, and this is going to do a lot of transformation. Then, seeing the way things were going and understanding a little bit more about how LLMs really work, I started following people from both sides. Obviously I want to hear from the, I guess, pro-AI crowd, or the early adopters; then, of course, we've got the cautious advocates that are kind of in the middle, seeing things through; and then you've got some of us that may just kind of hang back a little bit more. But it was very interesting, where I just kind of said, you know what, let me slow things down and understand that not everything has to be AI. But the way the perception was, it's like, oh my gosh, this is going to save me time, this is going to save me from burnout, this is going to save me from this situation, and so on.

Fonz Mendoza: 

Just, I guess, creating work, or having something ready, lesson plans, of course, or getting rid of the Sunday scaries; a lot of platforms kind of prey on those things in how they sell. And I'm going to go back to Micah Shippee, who was on the show a couple of months back, saying, you know, fear, uncertainty and doubt are what sells, and that's really what they kind of do. And coming back from ISTE, there are many platforms out there, and you're starting to see kind of the top five that are really getting out in the forefront and being, I guess you would say, the educator favorites. But I want to get your thoughts on that. When you started seeing it with your teachers, were your teachers early adopters too? And as you were going through and helping them out, were you seeing some of the things that they were working on, and what were your thoughts on that?

Job Christiansen: 

I'd say we had a handful of early adopters, but in general, it's just a lot of caution. And so I think where I've really started to see it creep in, or just appear, is not with the tools that are built as AI-specific, but in the tools we're already using. I started to notice there'd just be AI features starting to appear, and that's when I started to become a lot more conscientious that this is going to be in here whether or not we are actively choosing to sign on. We're using AI. It's just appearing in the tools we're already using, and so, unless we're just going to get rid of all the tools, we need to figure out how to use it.

Fonz Mendoza: 

Okay, excellent. So now your teachers, as far as being able to use it: were they using it? How were they using it? Was it mainly just for, you know, worksheet creation? What were the initial ways that they started adopting the technology?

Job Christiansen: 

Yeah, I think, especially that first year, what we kind of put out was basically a ban, I think, especially for students: no students, but if teachers want to, they can. And then the second year it was more, okay, here are some rough guidelines.

Job Christiansen: 

I don't know, it's kind of foggy in my mind. I would say, of the teachers that use it, I know the one that was a really early adopter, and she's now kind of one of the main people in the building I go to if I want to have a discussion about AI. She generally, I think, uses it to help create lesson plans and things. She's not someone who uses worksheets, so that doesn't appeal to her. It's all about lesson plans, rubrics, helping to revise or come up with new projects. Think of it more like using AI as a thought partner; that's how that teacher especially is using it. I'm trying to think about the other teachers, but generally those teachers are a lot more tech savvy, and so I don't actually get a lot of interaction with them, because they don't need me. They don't need to ask me questions or even to necessarily fix their equipment. They can just pick up something and run with it, and so they were given the freedom to do that on their own.

Fonz Mendoza: 

Excellent. So, with the progression of AI from 2022 and its initial stages, like you said, thinking of it as a thought partner, and of course we have so many names for it too, it's about letting people know, this is not going to be, or should not be, your tool to just offload everything; it's supposed to be that tool to help supplement or improve the learning process, or maybe the lesson plans and so on.

Fonz Mendoza: 

So I want to ask you, now that you've seen this kind of shift, and now that you're more familiar with a lot of the platforms out there: what are your thoughts now? It's basically maybe about five platforms that are really garnering everybody's attention, as I saw at ISTE. Usually it's about those five that are kind of weeding themselves out and coming up to the forefront. What has changed as far as your perception of the use of AI in education?

Job Christiansen: 

I think, widespread, I guess it's like you said: educators are recognizing that it can help save time. It's hard for me to kind of tell, though. This is partly why I'm on LinkedIn a lot and interacting with people, because I want to get that outside perspective, because otherwise I'm going to end up in my own little private school bubble, right, and I don't want to just stay there. I want to know what's happening at other places and where this field of education is shifting. It's hard to get that sense, though, on LinkedIn and even in other places, because it feels like it's all over the place. I feel like there are people that are creating all their lesson plans and using it as a thought partner and using it in innovative ways with students, and then there are people way over here that are just like, we're not even touching it yet, right? Like, it's not even allowed in our classrooms or schools. So it's all over the place. The people that are picking it up and running with it, my impression, I guess, is that they are, I don't want to say overly optimistic, but in general, I think they see the benefits of it. But where I get a little cautious is on the safety and the data privacy side. So I think that's where that shift happened, and especially, I don't even know if most people recognize this.

Job Christiansen: 

Where I'm less enthusiastic is with the big tools that are well-known now. You keep mentioning the big five, of which I probably could name at least three, and they're just wrappers for the LLMs, just prepackaged prompts for educators to use. And once I put two and two together and figured that out, I was like, oh, this is less exciting. I thought they actually built something brand new. They call them tools, right? You go into one of the big five, and there's like 150 tools specifically for educators, and they call them tools. And I'm just like, the tool is just a pre-programmed prompt that's talking over an API back to the LLM. So if you know what you're doing, which some of the teachers I interact with do, you don't even go to the big tools; you just go to the LLM, because you can actually get what you want faster than trying to work through a prepackaged prompt. And I think that's where the shift has kind of come.

Job Christiansen: 

Educators, a lot of them, don't have the time. I refuse to believe they don't have the skill; I think we are capable of learning and picking up new things. But a lot of them just don't have the time to go to ChatGPT and learn how to prompt to get what they want. So they pick up one of the big five tools, and someone like Charlotte shares with them, hey, I can get you a rubric and a lesson plan in 10 minutes. And they think to themselves, oh wow, I only had 10 minutes today and I couldn't get done what I wanted to do; I'm going to try this. And so that's where that shift is happening. But I still think, if you're using one of those big five tools instead of an LLM, or instead of actually learning all the background, you're stopping yourself short of actually using it in a creative and innovative way, and it just becomes another tool. And the big thing is that it's a substitution.

Job Christiansen: 

I think I hinted at this in one of my recent posts: I was talking about how, with AI, we're still at the level of substitution.

Job Christiansen: 

I have not seen much that's at the level of modification and redefinition, and what I'm talking about is the SAMR model for using ed tech in schools. This year I started a newsletter at my school, and I've started actually educating the people in my district about the SAMR model, because I don't know how many teachers actually know about it, right? And everything that I'm seeing from most of these big tools is, like you said, you can just make a worksheet that's the same as what they had before. We're just making worksheets faster. So we're just doing the same thing we did before, but we're doing it faster. But are we actually doing it better? What I care about, my guiding light, is now: how do we learn better? So if worksheets were working before, why do we actually need AI? We can just keep doing what we were doing before. You know what I mean?
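
To make Job's point concrete: a "rubric tool" in one of these platforms is often little more than a prompt template that gets filled in and forwarded to an LLM provider's API. A minimal sketch, with all names and template text hypothetical rather than taken from any real product:

```python
# Hypothetical sketch of an edtech "tool" as a prepackaged prompt.
# The template text and function name are illustrative, not any
# vendor's actual code.

RUBRIC_TOOL_TEMPLATE = (
    "You are an experienced {grade_level} teacher. "
    "Write a rubric for this assignment: {assignment}. "
    "Use {num_criteria} criteria, each scored 1-4."
)

def build_tool_prompt(grade_level: str, assignment: str, num_criteria: int = 4) -> str:
    """Fill the prepackaged template. In a real 'tool', this string
    would then be sent over an API to the underlying LLM; the tool
    itself adds little beyond this wrapping step."""
    return RUBRIC_TOOL_TEMPLATE.format(
        grade_level=grade_level,
        assignment=assignment,
        num_criteria=num_criteria,
    )

prompt = build_tool_prompt("5th-grade", "a persuasive essay on recycling")
print(prompt)
```

A teacher prompting the LLM directly can type the same request in their own words, which is why, as Job notes, the more tech-savvy teachers often skip the wrapper entirely.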

Fonz Mendoza: 

Yeah, no, and that makes perfect sense. There's a lot that I want to unpack there. One of the things we'll come back to is the safety issue, because I know you mentioned it during this answer, and I know that you've pointed out that "safe" isn't really a neutral term in that sense.

Fonz Mendoza: 

So I want to ask you, in your opinion, and I know you did post about this on LinkedIn: how should school leaders, with your experience and what you have seen, define and communicate what "safe use" actually means, in practical and developmental terms?

Job Christiansen: 

This is a really big question. So, partly this year, just to give you some historical context, because, like I said, I come from a history background, so I approach everything from context, figure out the context first: we put together an AI task force, and I was on it, to come up with a formal policy. So we were discussing safety, and some of the things we did were look at policies from other schools, and what we kind of landed on is that safe use really relates to ethical use. The way that we're thinking about AI tools is kind of the same way we've been approaching other edtech tools, and so safe use has to do with not using it to inflict harm on others.

Job Christiansen: 

I don't want to just use "not" phrases, but this is where it gets tricky with AI; it is a lot more convoluted than previous edtech tools. So I guess I want to approach it from, okay, let's talk about just the teacher side. What would safe use for teachers look like? Teachers are still responsible for any output, anything that they create with it. But, as anyone who's used AI knows, AI is not necessarily always going to give you verifiable or accurate output; sometimes the output is just going to be wrong. And so safe use is actually putting on your thinking cap and vetting what you're getting. Where I see some really big red flags, and some shocking stories from educators, is when they put information into an AI tool that contains private information about students, because on the back end we don't know where that data is going. So that is an unsafe use, right? That's like buying a billboard and just posting that student's information on there. You don't know who's driving by that billboard and taking down that student's information. So on the teacher side, that's definitely unsafe use. On the student side, it's all kind of similar things, like, don't put your private information in there. But I would actually almost think of "safe use" as a misnomer. I don't know that there is a genuine, real safe use, in the sense that if you think something's safe, you can use it freely without any harm coming to you or to others. I'm hesitant to say AI has actually reached that level, especially with the ways that these companies are trying to put protections and guardrails around the package and the wrapper that they're pulling from the big LLMs. I haven't really seen good guardrails, and so in that sense I'm hesitant to actually say it's safe.

If we want to actually jump into what I think some of the issues are: when I test out an AI tool, I don't just read their privacy policy.

Job Christiansen: 

I create a teacher account, and then, especially if it has a student-facing side, I have a dummy student account, and so I'll make assignments from the teacher side, then send them to my student account and interact with them as a student would. And where I really don't see these tools as being safe is in the things that I test; sometimes it puts me in a dark mood. From my background working in tech support, I would sometimes actually see what students type into Google, right? So if you take how a student uses Google, that's how they might approach using an AI tool; they're probably going to start using it the same way they used Google. And so students might start typing in things that are giving social-emotional cues, things like, I don't feel safe, I might be depressed. I've tested this. I've literally typed into an AI tool, I didn't explicitly say I was running away, but I was typing in questions as if, like, how

Job Christiansen: 

do I buy a plane ticket to meet up with my friend, right? And I wanted to see if the AI tool would pick up on it: does it actually know that the student wants to run away? Because an adult would pick up on that, and they would intervene. Some of the tools did okay. But where I get hesitant is, it just says something like, I sense you're in distress, please talk to an adult, and then it might just move on, and the student can just keep typing, and it moves on as if nothing happened, whereas an actual adult would recognize that and intervene and need to have a conversation, like, something's going on with this student. And so, in that sense, none of the AI tools that I've used and played with, and now I have a list of over 20, 25 of them, I think, have sufficient guardrails to actually say, these are safe for use, you can trust your kid using it without any intervention, right? Because to me, that's what safe means: they can just go on there, no one ever needs to read the logs, and it's going to alert us if something happens.

Job Christiansen: 

None of the tools properly alert. I don't know what other schools are doing; I just know what our school does.

Job Christiansen: 

But if there are signs that a student might be in emotional distress or is thinking of harm to themselves or others, for other things, like Google search, we have tools to help alert us to that, and none of the AI tools that I've played with are actually alerting us in that way.

Job Christiansen: 

So that's why I'm kind of flabbergasted that people are excited to put AI in the hands of students, when a student can just type in there, I'm depressed, and no adult at the school is going to be alerted, and we don't know what the AI tool is even going to give them, advice-wise. Now, I've played out that scenario; in none of the scenarios does the AI tool suggest the student continue with that train of thought. They do recommend talking to someone. Sometimes they even might provide some names or phone numbers; I'm not sure if they actually gave contact info, because that's regionally specific, right? But I just felt like they're not doing their due diligence, especially when you get into the situation that teachers and adults at schools are mandated reporters. None of the AI tools are really taking the place of that mandated reporting; they don't have the proper mechanisms. I mean, there's a legal issue there. And so that's where I land on safe and unsafe with students.

Fonz Mendoza: 

That was a great answer. I loved it, everything that you described and covered there. Oftentimes, like I said, as educators, sometimes we get overtaken by the excitement of getting shiny stickers or fluorescent shirts, or getting invited to a party for a particular app, and we're just there and we think, oh, okay, they're pretty cool people. And sometimes we tend to overlook the question of, once I use this app, is it doing what it should do? Does it have the proper guardrails for student safety, and is there anything that is going to warn or alert a teacher?

Fonz Mendoza: 

If there is an issue, like you talked about, and you're absolutely right in thinking about that. Usually I'll go look in the terms of service too, and I'm just there looking at the details very closely, and for the most part it always says, you know, no, we don't keep your information. But as you keep going and keep going, then they'll say, oh yeah, by the way, we will use your information for third parties, and so on. So it's almost like, we're going to give you what you like here on this first page, because we know you're not going to continue reading, but on this next page, that's where we're going to put in that, should anything happen, you can't come up against us. I think it said something like, if you are pursuing something, there's only a small fee that they would have to pay on their part. Or actually, no, they said they're not responsible for anything at all, and you would have to go against their third parties, should there be any litigation.

Fonz Mendoza: 

And I'm always thinking to myself, something happens in a school district, and you can't come after the application; now you have to go to that third party, which, like you said, for the most part, through their APIs, is either OpenAI or Claude or anything else that is out there. So now a school district can't go up against somebody that big, a big entity. It's almost like, oh wow. So it goes with accountability too, and I know that's something that you stress also in some of the posts that I saw, as far as AI having no accountability and no consequences for errors or hallucinations. So I want to get your thought process on that in this question, being that you have a history background.

Fonz Mendoza: 

One of my biggest concerns is always applications where they say, hey, you can go ahead and talk to George Washington, you can go ahead and talk to Martin Luther King, or Amelia Earhart, or anybody else. And to me, like Rob Nelson said a couple of episodes back, it's like that digital necromancy. What is the history that they're getting? Because, since these applications are scraping everything, whose history are they getting? Is it in line with what that particular state is seeking, and what answers are they getting? So those are my concerns there. Now, for yourself, having that history background, what are your thoughts on applications that are student-facing, where they can go ahead and talk to a historical figure?

Job Christiansen: 

It's interesting in that regard, because initially, even though I'm a little bullish on AI for students, I did think that if you're going to use AI with students, doing some sort of interactive chat like this would actually be really beneficial. In my experience with some of the ones I tested out, the best ones are the ones that aren't just relying on how the model was trained, basically.

Job Christiansen: 

So if you're going to do that as a teacher, you should have, not a script, but I think you can attach files or be very clear about how you want that historical figure to respond, so you're not just trusting AI to come up with, oh, the historical figure acted in this way and had this background and this history. If I was going to do that, I would say, okay, I want to create a chat activity about George Washington, like your example. As the teacher, I should then go in and provide the AI with: these are the characteristics of George Washington, these are some of the famous historical events I want you to touch on and reinforce for the student's learning. Then you can tie that learning back to the standards. You tell it, okay, we want to make sure the student understands these main points. That's how the educator can stay in control of that learning process. It sounds fun and flashy to just whip it up; from not having an account to making this activity for students, you can do that in less than a minute. You can make an account, create that activity with just a plain George Washington, and share it with your students without any extra information. But is that actually what's going to help the learning process and outcomes best for students?

Job Christiansen: 

Again, that's what I keep coming back to. No, you need to provide more as the teacher, and sometimes that might be: hey, I pulled these historical web links and put them in as links, so now that AI chat tool is going to be pulling from there. It's almost like the student is interacting with, what if you could type to and ask this historical article about George Washington? That's a little more appropriate, I would think. That's how I feel about it.

Job Christiansen: 

In the exercises, the tests that I did, my go-to, because my background is in ancient history and especially archaeology, and I did archaeological work for a couple of years, was activities where a student is interacting with a generic archaeologist, and I want to reinforce these points. I think you can make a much stronger case for a generic person. There are a lot of weird ethical scenarios surrounding, hey, you're going to be interacting with this hypothetical historical figure, George Washington.

Job Christiansen: 

But you're right: how does AI actually know what George Washington was like, when historians have been studying him for 200 years and we're still discussing it?

Job Christiansen: 

It's a big gray area. And then that ties to the whole safety thing: if a student's interaction with this George Washington chat activity is their only experience, and that's how they're learning about George Washington, how is that going to inform and shape their view of history when it wasn't shaped by the human, the educator, in the process, or by actual historians? Because I don't know how much AI tools are scraping. My understanding is there's a new model put together every six months or so, and they feed a bunch of training data in, but what they're feeding it isn't everything. And I know some of these LLMs are going out to the internet and pulling live information, but they never pull everything; they're pulling a few sources. So that's where it's really important, as the educator, when you're interacting with something with the students, to make sure you're keeping your critical thinking in the loop, and especially historical context.

Fonz Mendoza: 

Yes, I love it, and this is actually, Job, a nice segue to this next question, about a post you put up a couple of weeks ago on AI pollution. Of course, we're talking about information and the way LLMs work, and something very interesting came up: you mentioned that there's a higher value on knowledge and information from prior to the AI detonation of 2022, as you put it. It makes sense. So tell us a little more about your thoughts on that, as far as AI pollution possibly diluting untainted human knowledge.

Job Christiansen: 

Yeah, so I don't remember the original source, but that's where the terminology came from.

Job Christiansen: 

I saw it in an article. So, just to break it down, and you did a pretty good summary: basically, they did research and found that almost everything created since the release of the LLMs back in 2022 now has bits and pieces of AI-generated or AI-influenced information. So now they're calling that AI pollution. And if everything made since 2022 has that polluted or tainted material in it, then when they're training new models, they're not just taking everything prior to 2022 and feeding it in, because that's what they used to train the model in 2022. You want to keep updating it, so you're going to feed it new information, but the new information is tainted. So the new models, theoretically, are getting progressively more and more tainted as this goes on. Unless, as I was pointing out, there would now, theoretically, be a higher value on what hasn't used AI in its creation.

Job Christiansen: 

I'm trying to think how much more to unpack it, but my thoughts are that this actually puts a premium on original thought and on those who can think and communicate and create without using AI tools. So I wonder if this is going to create a new hierarchy, almost a ruling class. I don't remember if I posted this, but I was thinking, if this had come out hundreds of years ago, would Newton and Da Vinci have used AI tools or not? And is there a premium on their information? Are there certain people and creators operating at such a level that they can put out super high-quality work that doesn't need it, that's not tainted at all? Or maybe they are using AI, but in such a way that, with their human critical thinking, what they put out doesn't have that taint, right? There aren't hallucinations or false data in there.

Job Christiansen: 

So now everything these people do is at a premium, and what they produce, their thinking, becomes a product they can sell to the big AI models. OpenAI will probably want to buy these philosophers' writing and thinking, theoretically, because they can use it to train their data and have a purer AI thinking algorithm, versus the rest of us, who can't think at that level and whose output is going to have tainted stuff in it. Now our stuff has less value. Are we going to ascribe different value and worth to thoughts? This is getting really murky and philosophical.

Job Christiansen: 

But I think we need to be talking about it, because it's nice and fun to jump into an AI tool and create something, I do it all the time with AI images, but we're not really thinking about the long-term consequences and what future we're actually creating. We think we're creating a future where teachers can create all their lesson plans for the next several weeks in one hour, because they have this AI assistant, and now they have so much more time. But you can't actually create all that with tainted information. How is that subverting the education process and the learning process? And what's going to happen 20 years from now, when students have grown up learning only from AI-tainted and polluted information? They can't ever produce pure information either, because everything they have in their head is already tainted, right? So this is getting really murky, and that's where I'm kind of like, let's put the brake pedal on a little bit and think about this.

Fonz Mendoza: 

Yes, I love that, and I'll share my reflection from being at ISTE and moderating an AI panel. It seemed like there's been a switch. Before, I think, we just went at it, using it and implementing it in everything. But now, it's July 2025, and the conversation is, okay, let's reel it back, pause, and start really focusing on that teacher aspect, where before it was just go, go, go, figure it out on your own. There were no rules.

Fonz Mendoza: 

Many districts didn't have anything in place, but now those conversations are slowly coming back: okay, we need to be very cautious and have that responsibility in how we talk to people in our district, and not just district members, we're talking about parents too. Some of the questions that came up were: what if there is a parent who chooses to opt out of using one of those platforms? Or have you, as a district, informed parents of the platforms being used in the classroom? Because sometimes we know that individual teachers go to a conference, come back, and start using a tool that may not be allowed in the district.

Fonz Mendoza: 

And now, have you told your parents that this is being used, how that information is being used, and how you're inputting some of the student information? So those are some of the conversations that are now coming to fruition and slowing things down a little bit. You're seeing those big five, for the most part, and then you're also starting to see a lot of smaller apps coming out and trying to really get into this space. I know there's a lot of money backing this space, a lot of investment going into these applications, so they're moving forward, doing their thing. And my question was, how long will this last? Though I think it's going to continue to grow year after year, because everybody's really pushing it, and it's already in most of our classrooms and being used.

Fonz Mendoza: 

But just to wrap up, I want to ask you, for our listeners out there hearing your thoughts and experience for the first time through this conversation: what is one thing you would hope they carry forward, one practice they might use themselves or share with their district? What is one key takeaway you would love to share with our listeners?

Job Christiansen: 

Hmm, I'd say the biggest thing is: ask questions. I actually didn't anticipate, a year ago, that I'd be talking about AI like this. I jumped in less than a year ago and just tried to learn as much as I could about AI, because I was asking those questions. So that's one of the biggest things, whether you're a parent or an educator. If you haven't heard a policy from your school, I would start asking those questions. And at some level, the questions you should be asking are, like you said: what tools are being used? What is the policy, or what's the perspective towards AI use? What's the vision and long-term plan? Because sometimes it might be, well, we're just trying to fill this gap for three months, without the foresight of, are we still going to be doing the same thing three years from now? And we don't know what's going to happen three years from now.

Job Christiansen: 

But if you're just putting a band-aid on it... I'll be honest, mid-year I had some teachers asking me, not about AI specifically, but how do I change reading levels? They wanted to differentiate reading texts for their students, and I saw this as an opportunity. It took some time, but I went in and showed them a couple of AI tools where they could put in their text and change the reading levels to help students. I didn't have a long conversation about what AI is; I just knew, right now, mid-year, they need this one little thing. I don't even think they necessarily use it that much. All they knew how to do was change reading levels. That's a really quick, small thing, and it isn't necessarily something that I think needs to be communicated to the parents, because there's not really that much risk of hallucination there.

Job Christiansen: 

Right, the information's already there.

Job Christiansen: 

It's just changing it to the appropriate reading level. Where parents especially need to be asking questions, though, is when it gets into the gray areas: are teachers using AI to help give feedback and grades?

Job Christiansen: 

That's where I've seen some lawsuits around the country, where teachers have been doing that and providing comments without being transparent. So that goes in tandem with asking questions. On the other side, if you're actually a user of AI, the big thing, and this is what we're reinforcing in the policy we're going to be rolling out, is transparency. You never really see this on my posts on LinkedIn, because I don't use AI to write my posts, but if I used AI to help write text, I'd put in a little disclaimer: I used AI, I used this model, you say which one it was. That's part of what I would consider good standard operating practice now with AI. Be transparent that you used it and what you used it for. Otherwise, you're going to start to mislead people about who you are and what your thoughts actually are.

Fonz Mendoza: 

I love it. Well, thank you so much, Job, I really appreciate it. This has been such a great conversation, and I really want to say thank you for meeting with me here, having this talk, and sharing your perspective. I definitely took away a lot of value, a lot of valuable gems, I should say, that I want to dive into, and you had a lot of great soundbites that I can't wait to share. But thank you so much. Now, before we wrap up with the last three questions I always ask all my guests, I'd love to give you a little bit of time. For our audience members who are listening, especially if they're on LinkedIn or other social media platforms, can you please let them know how they might be able to connect with you?

Job Christiansen: 

Yeah, so I'm most active on LinkedIn. You can look me up by just my name, Job Christiansen; I don't think there's anyone else out there with that name. I use other social media, but it's all personal. Also, if you like longer-form stuff, I have a blog website called Seek Grow Align. Some of it is personal blog stuff, but that's also where I put book reviews. If I read a book, mostly education-focused books, but sometimes not, I'll write my reflection and book review on it, and it'll be pages and pages, stuff that won't fit in LinkedIn's character count, so I post it there. That actually helps with my learning process. So if you want deeper thoughts, it's all on that separate site.

Fonz Mendoza: 

Excellent. And what was the site? One more time.

Job Christiansen: 

It's SeekGrowAlign.

Fonz Mendoza: 

Okay.

Job Christiansen: 

I can send it to you.

Fonz Mendoza: 

Yeah, is it SeekGrowAlign.com?

Job Christiansen: 

Yes, SeekGrowAlign.com.

Fonz Mendoza: 

Perfect, excellent, we'll definitely make sure to link that in the show notes as well. All right, but before we wrap up, the last three questions. So, Job, I hope you're ready to answer. Here we go. Question number one: as we know, every superhero has a weakness or a pain point. For Superman, we know that kryptonite weakened him. So I want to ask you, Job, in the current state of AI in education, what is your edu-kryptonite?

Job Christiansen: 

My edu-kryptonite, given what I spend a lot of time talking about, has to be the whole safety issue, particularly the fact that we can't hold AI accountable for its output. That lack of accountability is like a thorn constantly jabbing in my side. I'm like, yeah, this is cool, I'm using a new tool, and then, wait, we can't have accountability yet. It's just constantly there, and I don't know how to resolve it. There's just tension when I'm playing with tools.

Fonz Mendoza: 

Excellent, all right. Great share, great answer, I love it. All right, question number two, Job: if you could have a billboard with anything on it, what would it be and why?

Job Christiansen: 

For a billboard, I'd say something along the lines of "let's get better," or "keep learning," but that "let's get better" phrase. It came to my mind a few weeks ago, and I don't remember what sparked it.

Job Christiansen: 

But did you ever see the old 90s show Frasier? Yeah? So in one episode, Frasier gets sick, and his brother Niles comes on and ends up running Frasier's show, doing the radio psychiatry thing, and Niles comes up with this catchphrase: "let's get better." In my own world, "let's get better" relates to, let's just keep learning and growing. I have a growth mindset; I'm a lifelong learner. My mantra now is just, let's get better. You're never really going to be done learning, and you're never going to reach some perfect stage. So I just have that in my head now, and I keep thinking about it.

Fonz Mendoza: 

Well, hey, that works. That's definitely a great message to share, because it fits into so many categories in life. Like you mentioned, you never stop learning, and for anybody who's going to get better at anything, it's just continuing to pursue that knowledge, and practicing, and repetition, to get to that point. So I really like that: let's get better. It's so simple yet so powerful. You've got me really thinking on that, and I'm like, yes, let's get better. All right, and my last question for you, Job: if there is one person you could trade places with for a single day, who would that be and why?

Job Christiansen: 

Man, I know you gave me time to think about this, but do they have to be living people?

Fonz Mendoza: 

No, no, it could be anybody, anybody.

Job Christiansen: 

Um, I'm sorry. Sometimes it takes me a while to think of things.

Fonz Mendoza: 

Oh, don't worry about it, Job, it's all good. Don't worry, we can edit that part. But just anybody.

Job Christiansen: 

It can be anybody. I'm trying to think about the things that really drive me, that I want to know about, like the mysteries that keep me up at night.

Job Christiansen: 

Okay, so this is a guy that probably very few people have heard of. His name was Howard Butler, and this goes back to my archaeology days. Howard Butler worked at Princeton, and Princeton sent out this expedition in, I want to say, 1905. It started in Jerusalem, and they went up through what is now Israel, Palestine, Jordan, and Syria, cataloging and mapping a lot of ancient historical sites as they went. These are some of the earliest records we have of Western documentation of these ancient sites.

Job Christiansen: 

For the site I worked at in Jordan, this is the first documentation we have, and we reference all this material. But the thing is, Howard Butler did some really good documentation, and yet I know there's stuff that's missing. The site is called Umm el-Jimal, in Jordan, and I wish I could have been him for the day he first came to that archaeological site, to see it in the state it was in 120 years ago.

Job Christiansen: 

I know what it's looked like over the last 10 years, but a lot of it has collapsed and fallen down; a lot has happened in between. Even though he has really good documentation, the photographs are not good, and there's stuff missing from the documentation, and I just wish I could see and experience the wonder of what it was like in that state. I just love stuff preserved in time, and it would have given me a lot of perspective on the people and what happened there.

Job Christiansen: 

It's basically lost, right? We're never going to know certain things that Howard Butler saw.

Fonz Mendoza: 

Excellent, that is a great answer. I love that. I think you're the very first guest who goes back and chooses a historical figure in that sense. That's very interesting; I'd never thought about that, a moment caught in time, being able to go back to the way it was when it was first discovered. Because now we only get to see what we know now, and depending on when you make that trip, like you said, you went over there and saw it in a very different state. So yeah, that's very interesting, and it's another thing that makes me pause and think: really capture those moments. So yeah, love it.

Fonz Mendoza: 

Well, joe, thank you so much. I really appreciate it. Again, thank you from the bottom of my heart for being a guest and, you know, sharing your experience and, of course, sharing your thoughts of AI and education. Like I said, I know a lot of audience members are definitely going to take some gems that they can sprinkle onto what they're already doing. Great, so thank you for that and for our audience members, please make sure you visit our website at myedtechlife myedtechlife where you can check out this amazing episode and the other 327 episodes where, I promise you, you will find a little bit of something for you to continue to grow and to continue to learn, as always. Thank you so much to all our sponsors, thank you so much to Book Creator, thank you so much EduAid, thank you so much Yellowdig, and if you are interested in being a sponsor of our show, please don't hesitate to reach out to me. We would love to collaborate and work together with you. But, as always, guys, from the bottom of my heart, until next time, don't forget, stay techie. 


Job Christiansen

Eclectic Inquisitive / Lifelong Learner / Dad

Job Christiansen is an Instructional Technology Specialist at Lake Center Christian School, a K–12 private school in Ohio, USA. In that role, Job partners with educators to bridge pedagogy and practice with technology tools in learning spaces and transform the learning process. He is a lifelong learner, passionate about driving change through a growth mindset and developing learning communities that raise up 21st-century citizens equipped to handle modern challenges.
With a background in the humanities fields of history, archaeology, and music, Job brings an eclectic and interdisciplinary approach to technology in education. A relative newcomer to K–12 education, Job's experience in the non-profit and security sectors positions him with unique insights into the issues surrounding data privacy, ethical use of AI, and the degradation of digital literacy in schools around the world. In addition to exploring inquiry and PBL in education, you can find Job training for marathons, building LEGOs, and living through curiosity and play wherever he ends up in the world!