July 10, 2025

Episode 328: Job Christiansen


AI in Schools: Innovation or Illusion? | Ep. 328 with Job Christiansen

Is AI making classrooms smarter—or just noisier?

In this episode of My EdTech Life, I sit down with Job Christiansen, a fellow cautious advocate, educator, and instructional technology specialist, to ask the hard questions about AI’s role in education. From privacy concerns to the illusion of “safe use,” we unpack what most educators, tech leaders, and decision-makers aren’t being told about the tools flooding our schools.

Job doesn’t just read privacy policies, he tests them, creating teacher and student accounts to see what’s really happening behind the interface. This episode dives into why most AI tools may still be stuck in the substitution phase and what it’s going to take to truly shift toward responsible, innovative, human-centered use.

🎧 Whether you're in K–12 or Higher Ed, this one will challenge your thinking.

 TIMESTAMPS:
 00:00 Welcome and Guest Intro
 02:00 Job’s Journey Into EdTech
 06:00 First Reactions to ChatGPT in 2022
 10:00 Early Adoption vs. Caution in Schools
 13:30 AI's Substitution Trap & SAMR Concerns
 19:00 Redefining “Safe” in AI Tools
 24:30 What Job Found Testing Student-Facing AI Apps
 30:00 Historical Accuracy and the AI “George Washington” Problem
 36:00 The Concept of AI Pollution and Knowledge Dilution
 43:00 Transparency, Trust, and Teacher Responsibility
 47:00 Final Takeaways and Reflective Advice
 50:00 Where to Connect with Job
 54:00 EdTech Kryptonite, Billboards, and Historical Curiosity

🙏 Big thanks to our amazing sponsors:
 🔹 Book Creator
🔹 Eduaide.AI
🔹 Yellowdig

💬 Visit www.myedtech.life to explore more episodes and support the show.

👋 Stay curious. Stay critical. And as always—Stay Techie.

Support the show

Thank you for watching or listening to our show! 

Until Next Time, Stay Techie!

-Fonz

🎙️ Love our content? Sponsor MyEdTechLife Podcast and connect with our passionate edtech audience! Reach out to me at myedtechlife@gmail.com. ✨

 


WEBVTT

00:00:30.115 --> 00:00:33.438
Hello everybody and welcome to another great episode of my EdTech Life.

00:00:33.438 --> 00:00:40.390
Thank you so much for joining me on this wonderful day and, as always, thank you for your support.

00:00:40.390 --> 00:00:43.100
We appreciate all the likes, the shares, the follows.

00:00:43.100 --> 00:00:48.323
Thank you so much for your feedback too, as that is always welcome.

00:00:48.323 --> 00:00:57.131
As you know, we do what we do for you to bring you some amazing conversations that will help us continue to grow within our education space.

00:00:57.131 --> 00:01:13.283
A wonderful guest, somebody that I follow on LinkedIn and somebody that shares a lot of great posts and a lot of great insight about AI in education. So I would love to welcome to the show Job Christiansen.

00:01:13.283 --> 00:01:15.087
Job, how are you doing today?

00:01:16.209 --> 00:01:16.590
I'm doing.

00:01:16.590 --> 00:01:18.540
Great! Thanks for having me on the show, Fonz.

00:01:18.941 --> 00:01:19.323
Excellent.

00:01:19.323 --> 00:01:21.030
Well, I'm excited to talk to you, Job.

00:01:21.030 --> 00:01:25.028
I know after a couple of posts on LinkedIn, I kind of started seeing you know that we do.

00:01:25.028 --> 00:01:50.629
have a couple of mutual friends and kind of posting within the same post, and I was like, I really like your insights, I really like what you have to share. And, again, the reason that I reached out to you was just because I would really love to amplify your voice and hear a little bit more about your perspective and experience within the education space, and mainly with AI in education.

00:01:50.629 --> 00:01:58.353
So, before we dive into the conversation, can you give us a little brief description of, you know...

00:01:58.353 --> 00:02:04.914
Excuse me, can you give us a little brief bio and what your context is within the education space?

00:02:06.457 --> 00:02:07.140
absolutely so.

00:02:07.140 --> 00:02:11.266
I'm a relative newcomer to the education space.

00:02:11.266 --> 00:02:26.574
My background actually is that I went to school and studied history. I got a bachelor's and a master's in history, and then I never really could get a strong career started with those degrees.

00:02:26.574 --> 00:02:27.945
So I did a couple different things.

00:02:27.945 --> 00:02:48.412
I worked for some nonprofits, and then, three years ago, I actually saw a job opening at a school, basically for basic tech support. Especially at one of the nonprofits I'd worked at, I had been hired basically because I had, like, website skills on my resume.

00:02:48.412 --> 00:02:51.566
Right, it's always those like hard skills that they were looking for at that time.

00:02:51.566 --> 00:02:52.449
This was 10 years ago.

00:02:53.032 --> 00:03:08.352
So I worked for this nonprofit for three years and over the course of those three years they just continually found, like they figured out, that like, oh, like this is broken, maybe Job can fix it, instead of going to like the contracted IT guys that they had.

00:03:08.352 --> 00:03:15.038
Right, it was a really small nonprofit, like 12 of us, so anytime you can cut costs with just like Job fixing it.

00:03:15.038 --> 00:03:25.621
So I just would tackle things, approach things, start playing with all these different stuff, like it was like Salesforce database and then like the VoIP phone system, just all these different tools.

00:03:25.621 --> 00:03:28.913
I just kind of cut my teeth on them, and I began...

00:03:28.913 --> 00:03:43.944
I'm not formally trained in technology; I just jumped in and learned it by using it and playing with it. So anyway, I applied to this school, and I think they really liked that attitude of, like, you can just learn by doing and you have that go-get-'em heart.

00:03:43.944 --> 00:03:49.093
And so I was basic tech support for this school.

00:03:49.941 --> 00:03:55.872
So that was three years ago and it's a K-12 private Christian school.

00:03:55.872 --> 00:04:01.693
So just to give you some of that background too, because that plays into just how I think of things and approach things.

00:04:01.693 --> 00:04:11.968
And I think that having that humanities mindset of those history degrees has given me, like, a unique perspective.

00:04:11.968 --> 00:04:35.512
So now where I am at the school, after the first year the tech director who hired me stepped aside and a new tech director came in, and then this last year, instead of just being tech support, this new tech director saw that I worked really well with teachers and so he kind of moved me over to being what basically is some sort of like a tech coach.

00:04:35.720 --> 00:04:42.172
I'm an instructional technology specialist now, so I work with the teachers to help them use the tools more effectively in their classrooms.

00:04:47.404 --> 00:04:56.322
But through this whole process, I really found that, even though I was hired for, like, technology, and that's what a lot of people knew me for, you know, like, I'm the guy everyone called up, like, hey, my computer's not turning on, Job, and I was there in five minutes.

00:04:56.322 --> 00:05:03.966
I was, you know, I was like the tech support you really wanted, but that wasn't enough to keep me passionate and going.

00:05:04.646 --> 00:05:18.168
So I've now been pivoting, and some of the stuff that I write about is, even, that it's not so much the technology that excites me; it's really just that I view technology as a vehicle.

00:05:18.168 --> 00:05:27.641
But what I really have gotten excited about over the last few years is the learning process, and especially how do we just get better at learning?

00:05:27.641 --> 00:05:37.668
And so I see at times technology can help that and sometimes it can't, and so that's kind of my approach and my brief synopsis of all that.

00:05:38.189 --> 00:05:38.589
Excellent.

00:05:38.589 --> 00:05:47.125
Well, that's good to know, and that's good to hear your background too, as I think that will definitely, you know, lend itself to this conversation very well.

00:05:47.125 --> 00:06:21.427
Especially, you know, talking a little bit about that coaching, with your coaching experience, what you saw in education and especially with AI in education, and being that you are in a private school, it's very interesting just to get those perspectives as well. Because one of the things is, you know, in the public school sector, it depends on the size of your school: usually the bigger you are, the more platforms you get to have, as opposed to a smaller school, where, due to funding, you have to be a little bit more tight with your money and budgets and so on.

00:06:21.427 --> 00:06:33.201
So, you know, getting that experience and, of course, that perspective from private school too, as well as teachers and students, how they are interacting with a lot of platforms.

00:06:33.201 --> 00:06:34.644
I would definitely love to hear that.

00:06:34.644 --> 00:06:43.790
So I want to ask you, Job, just on your own, before getting into education, I would love to hear what your thoughts were.

00:06:44.050 --> 00:06:46.495
November 2022, taking it way back.

00:06:46.495 --> 00:06:51.675
And as soon as you know ChatGPT was out, what were your initial thoughts?

00:06:51.675 --> 00:06:53.502
Were you an early adopter?

00:06:53.502 --> 00:06:54.505
Were you did you?

00:06:54.505 --> 00:07:02.656
Were you kind of a wait and see kind of guy, or did you just really kind of wait it out until you said, okay, let me see what this is all about?

00:07:02.656 --> 00:07:05.225
So I would just love to hear your experience through that.

00:07:05.908 --> 00:07:09.971
Yeah, so I had started at the school that previous August.

00:07:09.971 --> 00:07:11.346
So I've been there I don't know.

00:07:11.346 --> 00:07:13.107
So November was like three months in, right.

00:07:13.107 --> 00:07:14.485
So I'm three months in.

00:07:14.485 --> 00:07:17.127
I was aware it came out.

00:07:17.127 --> 00:07:20.449
I went and made an account at OpenAI ChatGPT.

00:07:20.449 --> 00:07:31.245
I typed one prompt in, like, not knowing how to prompt, right, and it was something about, like, create guidelines or a policy for our school.

00:07:32.187 --> 00:07:36.982
I saw it and then I was like kind of cool and then I didn't touch it for a long time.

00:07:36.982 --> 00:07:47.115
My thought was just this was I'm kind of a slow adopter in general with technology in my life.

00:07:47.115 --> 00:07:49.083
I just kind of grew up that way.

00:07:49.083 --> 00:07:56.244
I was kind of behind the curve, so I want to be aware of things, but I don't always use the things.

00:07:56.244 --> 00:08:05.088
And so at the time I was just apprehensive because I didn't know really what AI was and I was worried.

00:08:05.088 --> 00:08:15.673
It was kind of like the way that it's portrayed in media, that it has, like, a life of its own, and so I was kind of, like, skeptical but maybe optimistic.

00:08:15.673 --> 00:08:23.283
Skeptical, but I'm not someone who's just going to, like, jump in and use it right out of the gate. Excellent.

00:08:23.644 --> 00:08:39.251
Yeah, that for me was something very similar; like, I'm kind of an early adopter, same thing. I just kind of went in and then backed off a little bit, just seeing how things were moving, especially within education, and seeing and learning more.

00:08:39.251 --> 00:09:14.101
Because before, and it kind of goes to a post that you put up recently, I honestly thought, oh my gosh, like, this is really magical, you know, and I was like, oh my goodness, this is great and this is going to do a lot of transformation. And so then I was kind of seeing the way things were going and understanding a little bit more about how LLMs really work and so on, and following other people from both sides. Obviously I want to hear, you know, from the, I guess, pro-AI crowd or, you know, the early adopters.

00:09:14.101 --> 00:09:26.327
And then, of course, we've got sort of the cautious advocates that are kind of in the middle just kind of seeing things through, and then you've got, you know, some of us that may just kind of hang back a little bit more, but it was very interesting where I just kind of said you know what?

00:09:26.327 --> 00:09:29.533
Let me kind of slow things down and understand more that.

00:09:29.533 --> 00:09:45.089
You know, not everything has to be AI, but the way that the perception was is like, oh my gosh, this is going to save me time, this is going to save me from burnout, this is going to save me from, you know, this situation and so on.

00:09:45.149 --> 00:09:58.721
Just, I guess, creating work, or having something ready, lesson plans, of course, or getting rid of the Sunday scaries, as a lot of platforms, you know, kind of prey on those things, saying, like, oh, this is the way we're going to sell.

00:09:58.721 --> 00:10:10.041
And I'm going to go back to Micah Shippey, who was on the show a couple of months back saying, you know, fear, uncertainty and doubt are what sells, and that's really what they kind of do, you know.

00:10:10.041 --> 00:10:22.653
And coming back from ISTE, there are many platforms that are out there, and you're kind of starting to see, kind of, the top five that are really getting out in the forefront and being, I guess you would say, the educator favorites.

00:10:22.653 --> 00:10:35.416
But I want to get your thoughts on that as far as when you started seeing it, you know, with your teachers or you know, were your teachers early adopters too, and as you were kind of going through and helping them out.

00:10:35.416 --> 00:10:39.450
Were you seeing some of the things that they were working on and what were your thoughts on that?

00:10:39.961 --> 00:11:06.735
I'd say we had a handful of early adopters, but in general, it's just a lot of, like, caution. And so I think actually where I've really started to see it, like, creep in or just appear is not with the tools that are built as AI-specific, but in the tools we're already using.

00:11:06.735 --> 00:11:24.690
I started to notice there'd just be AI features starting to appear, and that's when I started to become a lot more conscious that this is going to be in here whether or not we are actively choosing to sign on.

00:11:24.690 --> 00:11:25.451
We're using AI.

00:11:25.451 --> 00:11:31.985
It's just appearing in the tools we're already using, and so, unless we're just going to get rid of all the tools, we need to figure out how to use it.

00:11:33.860 --> 00:11:34.123
Okay.

00:11:34.443 --> 00:11:34.785
Excellent.

00:11:35.360 --> 00:11:41.826
So now your teachers, as far as being able to use it, and were they using it?

00:11:41.826 --> 00:11:43.725
How were they using it?

00:11:43.725 --> 00:11:47.323
Was it mainly just for, like you know, worksheet creation, was it?

00:11:47.323 --> 00:11:59.793
You know, what were the initial, you know, ways that they started adopting the technology? Yeah, I think, I think especially that first year we kind of put out a, like...

00:11:59.953 --> 00:12:01.042
It was kind of like banned.

00:12:01.042 --> 00:12:06.760
It was basically, I think, especially, like, no students, but if teachers want to, they can.

00:12:06.760 --> 00:12:10.168
And then the second year it was more okay.

00:12:10.168 --> 00:12:13.441
Here's kind of like some rough guidelines.

00:12:13.500 --> 00:12:26.869
I don't know, it's kind of foggy in my mind. I would say, of the teachers that use it, I know the one that was a really early adopter and now is kind of like one of the main people in the building I go to if I want to have a discussion about AI.

00:12:26.869 --> 00:12:32.030
She generally, I think, uses it to help create like lesson plans and things.

00:12:32.030 --> 00:12:36.048
She's not someone who uses worksheets, so it's not like that doesn't appeal to her.

00:12:36.048 --> 00:12:42.447
It's all about like lesson plans, rubrics, helping to revise or come up with new projects.

00:12:42.447 --> 00:12:48.296
Think of it more like using AI as a thought partner; that is how that teacher especially is using it.

00:12:48.296 --> 00:13:02.590
I'm trying to think about the other teacher, but generally those teachers are specifically a lot more tech savvy, and so I don't actually get a lot of interaction with them because they don't need me.

00:13:02.590 --> 00:13:08.028
They don't need to ask me for questions or even to necessarily fix their equipment.

00:13:08.028 --> 00:13:17.705
They just can pick up something and run with it, and so they just were given the freedom to do that on their own. Excellent.

00:13:18.166 --> 00:13:49.528
So, with the kind of, you know, progression of AI from 2022 and its initial stages, like you said, thinking of it as a thought partner, and, of course, we have so many names for it too as well, that, you know, kind of letting people know, like, this is not going to, or should not be, your tool to just offload everything, but it's just supposed to be that tool to help supplement or help improve, you know, the learning process, or maybe within the lesson plans and so on.

00:13:49.929 --> 00:14:00.331
So I want to ask you, you know, now that you've seen this kind of shift and now that you're more familiar with a lot of the platforms out there, what are your initial thoughts now that you're seeing, you know?

00:14:00.331 --> 00:14:08.341
Now, it's basically maybe about five platforms that are really, you know, kind of garnering everybody's attention and everything, as I saw at ISTE.

00:14:08.341 --> 00:14:15.379
Usually it's about those five that are there, that are kind of, you know, weeding themselves out and coming up to the forefront.

00:14:15.379 --> 00:14:22.879
What has changed as far as your perception of the use of AI in education?

00:14:57.766 --> 00:15:06.649
I think, like widespread there's.

00:15:06.649 --> 00:15:20.047
I guess it's like you said: educators are recognizing that it can help save time.

00:15:20.047 --> 00:15:21.029
It's hard for me to kind of.

00:15:21.029 --> 00:15:30.295
This is partly why I'm on LinkedIn a lot and I'm interacting with people, because I want to get that outside perspective, because otherwise I'm going to end up in my own little private school bubble, right, and I don't want to just stay there.

00:15:30.295 --> 00:15:35.551
I want to know what's happening at other places and where this field of education is shifting.

00:15:35.551 --> 00:15:44.370
It's hard to get that sense, though, on LinkedIn and even in other places, because it's all.

00:15:44.370 --> 00:15:45.614
It feels like it's all over the place.

00:15:45.614 --> 00:16:05.037
I feel like there are people that are creating all their lesson plans and using it as thought partners and using it in innovative ways with students, and then there are people way over here that are just like we're not even touching it yet, right, like it's not even allowed in our classrooms or schools, and so it's all over the place.

00:16:05.037 --> 00:16:26.366
So the people that are picking it up and running with it, my impression, I guess I would say, is that they are a little I don't want to say like overly optimistic, but in general, I think they see the benefits of it.

00:16:26.366 --> 00:16:31.641
But where I get a little cautious is on, like, the safety and the data privacy side.

00:16:31.641 --> 00:16:45.332
So I think that's where that shift happened. And, I don't even know if most people recognize this, but where I get a little...

00:16:46.456 --> 00:17:06.463
I'm less enthusiastic about the big tools that are well-known now, like, you keep mentioning the big five, which I probably could name at least three of, and they're just wrappers for the LLMs, and so it's just, like, prepackaged prompts for educators to use.

00:17:06.463 --> 00:17:11.179
And once I put two and two together and figured that out, I'm like, oh, this is less exciting.

00:17:11.179 --> 00:17:13.937
I thought they actually built a brand new.

00:17:13.937 --> 00:17:15.662
They call it tools, right.

00:17:15.662 --> 00:17:22.610
You go into one of the big five, right, and there's like 150 tools specifically for educators, right, and they call them tools.

00:17:22.610 --> 00:17:30.443
And I'm just like the tool is just a pre-programmed prompt that's talking an API back to the LLM.

00:17:30.443 --> 00:17:48.991
So if you know what you're doing, which some of the teachers I interact with do, they don't even go to the big tools; they just go to the LLM, because they can actually get what they want faster than trying to work through a prepackaged prompt. If you don't know what you're doing, and I think that's where the shift has kind of come...

00:17:49.092 --> 00:17:51.834
Educators, a lot of them, don't have the time.

00:17:51.834 --> 00:17:54.977
I refuse to believe they don't have the skill.

00:17:54.977 --> 00:18:05.767
I think we are capable of learning and picking up new things, but I think a lot of them just don't have the time to go to ChatGPT and learn how to prompt the way they can get what they want.

00:18:05.767 --> 00:18:17.001
So they pick up one of the big five tools and someone like Charlotte shares with them hey, I can get you like a rubric and a lesson plan in 10 minutes.

00:18:17.001 --> 00:18:23.405
And they think to themselves, oh wow, like, I only had 10 minutes today and I couldn't get done what I wanted to do; I'm going to try this. And so that's where that shift is happening.

00:18:23.405 --> 00:18:48.279
But I still think it's like if you're using one of those big five tools instead of like an LLM or actually learning all the background, I feel like you're stopping yourself short of actually using it in a creative and innovative way and it just becomes another tool.

00:18:48.279 --> 00:18:54.442
And the big thing is that it's a substitution.

00:18:54.589 --> 00:18:58.817
I think I hinted at this in one of my recent posts where I was talking about, like, AI.

00:18:58.817 --> 00:19:02.530
We're still at the level of substitution, and we're not.

00:19:03.393 --> 00:19:15.436
I have not seen much that's at the level of modification and reinvention. What I'm talking about is the SAMR model for using ed tech in schools, and this year I've started.

00:19:15.436 --> 00:19:37.420
I have a newsletter at my school that I started, and I've started actually educating the people in my district about, like, the SAMR model, 'cause I don't know how many teachers actually know about it, right. And so everything that I'm seeing from most of these big tools is, like you said, like, you can just make a worksheet.

00:19:37.420 --> 00:19:39.836
That's the same as what they had before.

00:19:39.836 --> 00:19:41.477
We're just making worksheets faster.

00:19:41.477 --> 00:19:46.475
So we're just doing the same thing we did before, but we're doing it faster.

00:19:46.475 --> 00:19:50.398
But are we actually doing it better?

00:19:50.398 --> 00:19:56.317
And what I care about, my guiding light is now how do we learn better?

00:19:56.317 --> 00:20:00.720
So if worksheets were working before, why do we actually need AI?

00:20:00.720 --> 00:20:03.674
We can just keep doing what we were doing before.

00:20:03.674 --> 00:20:04.897
You know what I mean.

00:20:06.721 --> 00:20:08.715
Yeah, no, and that makes perfect sense.

00:20:08.715 --> 00:20:11.153
You know, and there's a lot of things that I want to unpack there.

00:20:11.153 --> 00:20:30.263
So one of the things that we'll kind of go back to is the safety issue, because I know you mentioned it right now, you know during this answer, and I know that you've pointed out that safe isn't really a neutral term in that sense.

00:20:30.283 --> 00:20:56.240
So I want to ask you, you know, in your opinion, and I know you did post about this on LinkedIn: how should school leaders, you know, with your experience and what you have seen, define and communicate what safe use means, in practical and developmental terms?

00:20:56.240 --> 00:20:57.820
This is a really big question.

00:20:57.820 --> 00:21:03.946
So, partly this year, just to give you some historical context because, like I said, I come from a history background, so I approach everything from context.

00:21:03.946 --> 00:21:08.444
Figure out the context.

00:21:08.444 --> 00:21:16.587
First, we put together an AI task force and I was on that to come up with a formal policy.

00:21:16.587 --> 00:21:27.794
So we were discussing, like, safety for a formal policy. Some of the things: we looked at other policies from other schools, and where we kind of ended up is that safe use really kind of relates to, like, ethical use.

00:21:27.794 --> 00:21:53.150
So the way that we're thinking about AI tools is kind of the same way we've been approaching other edtech tools, and so safe use has to do with not using it to inflict harm on others.

00:21:53.230 --> 00:22:18.869
I don't want to just use 'not' phrases, but especially where it gets tricky with AI, it is a lot more convoluted than other or previous ed tech tools. So I guess I want to approach it from, like, okay, let's talk about just the teacher side.

00:22:18.869 --> 00:22:22.381
So what would safe use for teachers look like?

00:22:22.381 --> 00:22:46.959
Teachers are still responsible for any output, anything that they create with it. But, as anyone who's used AI knows, AI is not necessarily always going to give you verifiable or authentic, no, not authentic, the word is escaping me, but sometimes the output's just going to be wrong.

00:22:46.959 --> 00:22:55.019
And so safe use is actually putting on your thinking cap and actually vetting what you're getting.

00:22:55.019 --> 00:23:10.878
Where I see some really big flags and some shocking stories from educators is when they put information into an AI tool that contains private information about students.

00:23:10.878 --> 00:23:22.006
And so, with these tools, on the back end we don't know where that data is going.

00:23:22.006 --> 00:23:24.728
So that is an unsafe use, right?

00:23:24.728 --> 00:23:28.699
That's like buying a billboard and just posting that student's information on there.

00:23:28.699 --> 00:23:33.456
You don't know who's driving by that billboard now and taking down that student's information.

00:23:33.456 --> 00:23:37.940
So on the teacher side, that's definitely like unsafe use.

00:23:37.940 --> 00:23:52.002
On the student side, it's all kind of the similar things, like don't put your private information in there. But I would actually almost think of 'safe use' as a misnomer.

00:23:52.002 --> 00:24:05.531
I don't know that there is a genuine, real safe use in the sense that if you think something's safe, you can use it freely without any harm coming to you or to others.

00:24:05.531 --> 00:24:24.113
And I don't know if AI has actually, I'm hesitant to say AI has actually reached that level, especially even with the ways that these companies are trying to put these protections and guardrails around the package and the wrapper that they're pulling from the big LLMs.

00:24:24.113 --> 00:24:32.700
I'm not sure; I haven't really seen good guardrails, and so in that sense I'm hesitant to actually say it's safe.

00:24:32.700 --> 00:24:43.462
If we want to actually jump into what do I think some of the issues are, I actually, when I test out an AI tool, I don't just read their privacy policy.

00:24:43.970 --> 00:24:57.118
I create a teacher account and then, especially if it has like a student facing side, I have a dummy student account and so I'll make assignments from the teacher and then I'll send it to my student account and interact with it as a student would.

00:24:57.118 --> 00:25:15.438
And so where I really don't see these tools as being safe are the things that I test and sometimes it puts me in a dark mood, but I, from my background working in tech support, I would actually see sometimes what students type in Google, right?

00:25:15.438 --> 00:25:39.532
So if you just take how a student uses Google, how they might approach using an AI tool, they're probably going to start using it the same way as they used Google and so students might start typing in things that are giving social, emotional cues, things like I don't feel safe, I might be depressed.

00:25:39.532 --> 00:25:40.694
I've tested.

00:25:40.694 --> 00:25:48.698
I've literally typed into an AI tool like I didn't explicitly say I was running away, but I was typing in questions as if like how?

00:25:48.718 --> 00:25:51.352
do I buy a plane ticket to meet up with my friend, right?

00:25:51.352 --> 00:25:54.076
And I wanted to see if the AI tool would pick up on it.

00:25:54.076 --> 00:25:57.844
Does it actually know that the student wants to run away?

00:25:57.844 --> 00:26:06.221
Because an adult would pick up on that and they would intervene, and so some of the tools did, okay.

00:26:06.221 --> 00:26:32.634
And then where I get hesitant, though, is it just says, like, I sense you're in distress, please talk to an adult, but then it might just move on, and the student can just keep typing and it would just move on as if nothing happened, whereas an actual adult would recognize that and intervene and need to have a conversation, like, something's going on with the student.

00:26:32.634 --> 00:26:48.940
In that sense, none of the AI tools that I've used and played with, and I now have a list of over 20 or 25 of them, have sufficient guardrails to actually say these are safe for use.

00:26:48.940 --> 00:26:50.506
You can trust your kid using it without any intervention, right?

00:26:50.506 --> 00:26:51.789
Because to me, that's what safe means:

00:26:51.789 --> 00:26:54.737
They can just go on there.

00:26:54.737 --> 00:26:56.842
No one ever needs to read the logs.

00:26:56.842 --> 00:26:59.358
It's going to alert us if something happens.

00:27:00.211 --> 00:27:05.096
None of the tools properly alert. I don't know what other schools are doing; I just know what our school does.

00:27:05.770 --> 00:27:24.259
But if there are signs that a student might be in emotional distress or is thinking of harming themselves or others, then for other things, like Google searches, we have tools that alert us to that, and none of the AI tools that I've played with are actually alerting us in that way.

00:27:25.451 --> 00:27:46.094
So that's why I'm kind of flabbergasted that people are excited to put AI in the hands of students, when a student can just type in there that they're depressed and no adult at the school is going to be alerted. And we don't even know what the AI tool is going to give them, advice-wise.

00:27:46.094 --> 00:27:47.637
Now I've played out that scenario.

00:27:47.637 --> 00:27:53.541
In none of the scenarios does the AI tool suggest the student continue with that train of thought.

00:27:53.541 --> 00:27:56.094
They do recommend talking to someone.

00:27:56.094 --> 00:28:00.362
Sometimes they might even provide names or phone numbers.

00:28:00.362 --> 00:28:18.955
I'm not sure if they actually gave contact info, because that's regionally specific, right? But I just felt like they're not doing their due diligence, especially when you get into the fact that teachers and adults at schools are mandated reporters.

00:28:18.955 --> 00:28:24.539
None of the AI tools are really taking the place of that mandated reporting.

00:28:24.539 --> 00:28:26.800
They don't have the proper mechanisms in place.

00:28:26.800 --> 00:28:37.387
I mean, there's a legal issue there, and so that's where I land on safe and unsafe with students.

00:28:38.590 --> 00:28:39.615
That was a great answer.

00:28:39.615 --> 00:28:40.156
I loved it.

00:28:40.156 --> 00:28:44.074
I mean, everything that you described and covered there.

00:28:44.074 --> 00:29:03.893
You know, oftentimes, like I said, as educators we sometimes get overtaken by the excitement of getting shiny stickers or fluorescent shirts, or getting invited to a party for a particular app, and we're just there thinking, oh, okay, they're pretty cool people.

00:29:03.893 --> 00:29:11.513
And sometimes we tend to overlook the question of: once I use this app, is it doing what it should do?

00:29:11.513 --> 00:29:19.615
Does it have the proper guardrails for student safety, and is there anything that is going to warn or alert a teacher

00:29:20.559 --> 00:29:36.375
if there is an issue like you talked about? And you're absolutely right in thinking about that. Usually I'll go look at the terms of service as well, reading the details very closely, and for the most part it always says something like:

00:29:36.375 --> 00:29:38.078
We don't keep your information.

00:29:38.078 --> 00:29:44.521
But as you keep going and going, then they'll say, oh, by the way, we will use your information for third parties, and so on.

00:29:44.521 --> 00:30:01.160
So it's almost like: we're going to give you what you want to hear on this first page, because we know you're not going to continue reading; and then on the next page, that's where we're going to put in that, should anything happen, you can't come after us.

00:30:01.160 --> 00:30:08.972
I think it said something like, if you pursue something, there's only a small fee that they would have to pay on their part. Or actually,

00:30:08.972 --> 00:30:14.395
no, they said they're not responsible for anything at all, and that you would have to go up against their third parties,

00:30:14.395 --> 00:30:16.559
you know, should there be any litigation.

00:30:16.619 --> 00:30:20.897
And I'm always thinking to myself: say something happens in a school district.

00:30:20.897 --> 00:30:34.103
You can't come after the application; now you have to go after that third party, which, like you said, for the most part means the APIs they're connecting to, either OpenAI or Claude or whatever else is out there.

00:30:34.103 --> 00:30:39.617
So now a school district can't go up against an entity that big.

00:30:39.617 --> 00:30:41.304
It's almost like oh wow.

00:30:41.304 --> 00:30:55.238
So it ties into accountability too, and I know that's something you stress in some of the posts that I saw: that AI has no accountability and no consequences for errors or hallucinations.

00:30:55.238 --> 00:31:02.882
So I want to get your thought process on that in this question, given that you have a history background.

00:31:03.423 --> 00:31:50.404
One of my biggest concerns is always applications where they say, hey, you can go talk to George Washington, or Martin Luther King, or Amelia Earhart, or anybody else. To me, like Rob Nelson said a couple of episodes back, it's that digital necromancy. What is the history they're getting? Since these applications are scraping everything, whose history are they getting, is it in line with what that particular state's standards call for, and so what answers are students getting?

00:31:50.404 --> 00:31:51.608
So those are my concerns.

00:31:51.608 --> 00:32:04.125
There Now for yourself, having that history background, what are your thoughts on applications that are student-facing, where they can go ahead and talk to, you know, a historical figure?

00:32:19.950 --> 00:32:27.393
regard, because initially, even though I'm a little bullish on AI for students, I did think that if you're going to use AI with students, some sort of interactive chat like this would actually be really beneficial.

00:32:27.393 --> 00:32:42.619
So, in my experience with some of the ones I tested out, the best ones are the ones that aren't just relying on how the model was trained.

00:32:43.250 --> 00:33:07.618
So if you're going to do that as a teacher, you should have some sort of, not a script, but I think you can attach files, or be very clear in how you want that historical figure to respond, so you're not just trusting AI to come up with: oh, the historical figure acted this way and had this background and this history.

00:33:07.618 --> 00:33:14.817
If I was going to do that, I would say, okay, I want to create a chat activity about George Washington, like your example.

00:33:14.817 --> 00:33:22.439
Then, as the teacher, I should go in and provide the AI with: these are the characteristics of George Washington.

00:33:22.439 --> 00:33:31.392
These are some of the famous historical events I want you to touch on and reinforce for the student's learning. And then you can tie that learning back to the standards; you tell it:

00:33:31.392 --> 00:33:34.690
Okay, we want to make sure the student understands.

00:33:34.690 --> 00:33:36.855
These are the main points I want them to get across.

00:33:36.855 --> 00:33:41.044
That's how the educator can stay in control of that learning process.
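To make that concrete, here's a rough sketch, in plain Python, of the kind of teacher-supplied grounding being described; the function name, the traits, and the standards code are all made up for illustration, not taken from any particular product:

```python
# Illustrative sketch: the teacher, not the model's training data, supplies the
# persona's characteristics, the events to reinforce, and the standards to tie
# back to. All names and the standards code below are hypothetical.

def build_persona_prompt(figure, traits, events, standards):
    """Assemble a system prompt that keeps the teacher in control of the persona."""
    lines = [
        f"You are role-playing {figure} for a classroom chat activity.",
        "Stay in character, but only use the facts supplied below.",
        "If asked about anything not covered below, say you are not sure",
        "and suggest the student check a primary source.",
        "",
        "Characteristics: " + "; ".join(traits),
        "Historical events to touch on and reinforce: " + "; ".join(events),
        "Tie the discussion back to these standards: " + "; ".join(standards),
    ]
    return "\n".join(lines)

prompt = build_persona_prompt(
    figure="George Washington",
    traits=["reserved and formal", "surveyor before soldier"],
    events=["crossing the Delaware, 1776", "Farewell Address, 1796"],
    standards=["US1.6: causes and events of the American Revolution"],  # hypothetical code
)
print(prompt)
```

The point of the design is that every fact the persona is allowed to use comes from the teacher, rather than from whatever the model happens to have absorbed in training.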

00:33:41.044 --> 00:33:44.297
It sounds fun and flashy to just whip it up.

00:33:44.297 --> 00:33:54.029
You can literally go from not having an account to making this activity for your students in less than a minute.

00:33:54.029 --> 00:34:01.677
You can make an account, create that activity of just a plain George Washington and share that with your students without any extra information.

00:34:01.677 --> 00:34:06.479
But is that actually what's going to best help the learning process and outcomes for students?

00:34:06.589 --> 00:34:07.915
Again, that's what I keep coming back to.

00:34:07.915 --> 00:34:12.635
No. You need to provide more as the teacher, and sometimes that might be:

00:34:12.635 --> 00:34:17.141
Hey, I pulled these historical web links and I put them in as links.

00:34:17.141 --> 00:34:22.142
So now that AI chat tool is going to be pulling from there.

00:34:22.142 --> 00:34:25.454
So it's almost like the student is interacting with the source itself.

00:34:25.454 --> 00:34:31.425
Like, what if you could type questions and ask this historical article about George Washington?

00:34:31.425 --> 00:34:34.759
That's a little bit more appropriate, I would think.

00:34:34.759 --> 00:34:40.852
That's how I feel about it.

00:34:40.872 --> 00:34:44.559
I haven't necessarily done that myself; in the exercises, the tests that I did,

00:34:44.559 --> 00:34:50.140
my go-to, because my background is in ancient history and especially archaeology.

00:34:50.140 --> 00:35:04.491
I did some archaeological work for a couple of years, and so what I've done is make some activities where a student interacts with a generic archaeologist, and I want to reinforce these points.

00:35:04.491 --> 00:35:11.754
So I think you can make a much stronger case for just a generic person.

00:35:11.754 --> 00:35:17.146
There are a lot of weird ethical scenarios surrounding:

00:35:17.146 --> 00:35:23.122
hey, you're going to be interacting with this hypothetical historical figure, George Washington.

00:35:23.743 --> 00:35:36.600
But you're right: how does AI actually know what George Washington was like, when historians have been studying him for 200 years and we're still discussing it?

00:35:37.762 --> 00:35:39.024
It's a big gray area.

00:35:39.024 --> 00:36:17.540
And then that ties into the whole safety thing: if a student's interaction with this George Washington chat activity is their only experience, and that's how they're learning about George Washington, how is that going to inform and shape their view of history, when it wasn't shaped by a human, by the educator in the process, or by actual historians?

00:36:17.540 --> 00:36:22.262
Because I don't know how much AI tools are scraping.

00:36:22.262 --> 00:36:35.900
My understanding is that a new model is put together roughly every six months, and they feed a bunch of training data into it, but what they're feeding it isn't everything.

00:36:35.900 --> 00:36:45.171
And I know some of these LLMs are going out to the internet and pulling live information, but they never pull everything.

00:36:45.171 --> 00:36:46.895
They're pulling a few sources.

00:36:46.895 --> 00:37:01.384
And so that's where it's really important, as the educator, when you're using something with students, to make sure that you're keeping your critical thinking in the loop, especially for historical context.

00:37:02.225 --> 00:37:04.413
Yes, I love it, and this, Job, is actually

00:37:04.413 --> 00:37:15.504
kind of a nice segue to this next question, about a post that you put up about two weeks ago on AI pollution.

00:37:15.504 --> 00:37:32.778
You know, of course, we're talking about information and the way that LLMs work, and something very interesting that came up is how you mentioned there's a higher value on knowledge and information from prior to the AI detonation of 2022, as you put it.

00:37:32.778 --> 00:37:36.426
So we're just talking, but I mean, it makes sense.

00:37:36.426 --> 00:37:40.676
So tell us a little bit more about your thoughts on that,

00:37:40.676 --> 00:37:50.178
as far as AI pollution possibly diluting untainted human knowledge? Yeah, so, I'm trying,

00:37:50.198 --> 00:37:53.891
I don't remember the original source, but that's where the terminology came in.

00:37:53.931 --> 00:38:22.355
I saw an article, and so, just to break it down, I mean, you did a pretty good summary, but basically: they did research and found that almost everything that's been created since the release of the LLMs back in 2022 now has bits and pieces of AI-generated or AI-influenced information.

00:38:22.355 --> 00:38:24.720
So now they're calling that AI pollution.

00:38:24.720 --> 00:38:39.940
And so now, if everything that's been made since 2022 has that polluted or tainted material in it, then when they're training new models, they're not just taking everything prior to 2022 and feeding it in, because that's what they used to train the models back in 2022.

00:38:39.940 --> 00:39:12.745
You want to keep updating the model, so you're going to feed it new information, but the new information is tainted, so the new models, theoretically, are getting progressively more and more tainted as this goes on. Unless, as I was pointing out, now, theoretically, there would be a higher value on anything that wasn't created using AI.
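As a back-of-the-envelope illustration of that feedback loop (my own toy numbers, not the article's research), you can sketch how the tainted share of a training mix grows when each new round of data is partly AI-generated:

```python
# Toy model of the "pollution" loop: each training round keeps the pre-2022
# clean corpus but adds fresh text, a fixed share of which is AI-generated,
# so the tainted fraction of the overall mix creeps upward. The parameters
# are illustrative assumptions, not measured values.

def tainted_fraction(rounds, new_per_round=0.2, ai_share_of_new=0.5):
    """Fraction of the training mix that is AI-influenced after each round.

    Starts from a fully clean (pre-2022) corpus of size 1.0; every round adds
    `new_per_round` worth of new text, of which `ai_share_of_new` is AI-made.
    """
    clean, tainted = 1.0, 0.0
    history = []
    for _ in range(rounds):
        tainted += new_per_round * ai_share_of_new
        clean += new_per_round * (1 - ai_share_of_new)
        history.append(tainted / (clean + tainted))
    return history

fractions = tainted_fraction(rounds=6)
# Under these assumptions the tainted share only ever grows.
assert all(b > a for a, b in zip(fractions, fractions[1:]))
```

The simplification leaves out filtering and curation, which is exactly what would have to improve to break the loop.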

00:39:12.765 --> 00:39:34.126
I'm trying to think how much more to unpack there, but my thoughts are that this actually puts a premium on original thought, and on those who can think and communicate and create without using AI tools.

00:39:34.126 --> 00:39:42.153
So I wonder if this is going to create a new hierarchy, almost a ruling class, of certain people.

00:39:42.153 --> 00:40:27.311
Like, if this came out hundreds of years ago, and I don't remember if I posted this, but would Newton and Da Vinci have used AI tools or not?

00:40:27.311 --> 00:40:28.293
And is there a premium on their information?

00:40:28.293 --> 00:40:31.659
Or are there certain people and creators operating at a level where they can put out super-high-quality stuff that doesn't need AI and isn't tainted at all?

00:40:31.659 --> 00:40:35.065
Or maybe they are using AI, but it's in such a way that, with their human critical thinking, what they're putting out doesn't have that taint, right?

00:40:35.065 --> 00:40:38.190
There's not like hallucinations or false data in there.

00:40:38.770 --> 00:41:15.799
So for these people, everything they do is now at a premium. Not their information, but what they produce, their product; their thinking now becomes the product that they can sell to the big AI companies. OpenAI will probably want to buy these philosophers' writing and thinking, theoretically, because they can use it to train their models and have a purer AI thinking algorithm, versus the rest of us who can't think at that level, whose output is going to have tainted stuff in it.

00:41:15.799 --> 00:41:18.701
Now our stuff has less value.

00:41:18.701 --> 00:41:22.621
Are we going to ascribe different value and worth to thoughts?

00:41:22.621 --> 00:41:27.001
And this is getting really murky and philosophical.

00:41:27.692 --> 00:41:48.171
But I think we need to be talking about it, because it's nice and fun to jump into an AI tool and create something, I do it all the time with AI images, but are we really thinking about the long-term consequences and what future we're actually creating?

00:41:48.171 --> 00:42:09.444
We think we're creating a future where teachers can create all their lesson plans for the next several weeks in one hour, because they have this AI assistant, and now they have so much more time. But you can't actually create all that with tainted information.

00:42:09.444 --> 00:42:59.075
Now, how is that subverting the education process and the learning process?

00:42:59.075 --> 00:43:10.541
And what's going to happen 20 years from now, when students have grown up learning only from AI-tainted and polluted information?

00:43:10.541 --> 00:43:18.414
Now, they can't ever produce pure information either, because everything they have in their head is tainted already, right?

00:43:18.414 --> 00:43:26.838
So this is getting really murky, and that's where I'm kind of like: let's ease off the gas pedal, or press the brake pedal a little, and think about this.

00:43:30.101 --> 00:43:36.686
Yes, I love that, and I'll share my own reflection from being at ISTE and moderating an AI panel.

00:43:36.686 --> 00:43:56.262
What it seemed like is that there's been a switch now. Before, I think, we just went at it; in other words, we were using it and implementing it in everything. But now it's July 2025.

00:43:56.262 --> 00:44:11.704
And now the conversation is: okay, let's reel it back, make a pause, and start really focusing on the teacher aspect, where before it was just go, go, go, whatever you can figure out on your own.

00:44:11.704 --> 00:44:12.969
There were no rules.

00:44:13.148 --> 00:44:20.483
Maybe many districts didn't have anything in place, but now those conversations are slowly coming back, and it's becoming: okay.

00:44:20.483 --> 00:44:37.211
Now we need to be very cautious and take responsibility for how we're going to talk to people in our district, and not just district members; we're talking about parents too, because some of those questions came up:

00:44:37.211 --> 00:44:42.911
Well, what if there is a parent that chooses to opt out of using one of those platforms?

00:44:42.911 --> 00:44:59.561
Or, have you, as a district, informed parents of the platforms that are being used in the classroom? Because sometimes we know that individual teachers go to a conference, come back, and start using a tool that may not be allowed in the district.

00:44:59.983 --> 00:45:11.213
And now, have you told your parents that this is being used, how that information is being used, and how you're inputting some of the student information?

00:45:11.213 --> 00:45:18.634
So those are some of the conversations that are now coming to fruition, and things are slowing down a little bit.

00:45:18.634 --> 00:45:31.740
But you're seeing those big five, for the most part, and then you're also starting to see a lot of smaller apps coming out, trying to really get into this space.

00:45:31.740 --> 00:45:42.347
And I know there's a lot of money backing this space, a lot of investment going into a lot of these applications, and so they're moving forward, they're doing their thing.

00:45:42.347 --> 00:45:59.901
And my thing was: how long will this last? Though I think it's something that's going to continue to grow year after year, because everybody's really pushing it, and it's already in most of our classrooms and being used.

00:46:00.001 --> 00:46:12.827
But to wrap up, I want to ask you, for our listeners out there for whom this is the first time they get to hear your thoughts and your experience:

00:46:12.827 --> 00:46:25.052
What is one thing from this conversation that you would hope they carry forward, one of those practices that they either use themselves or share with their district?

00:46:25.052 --> 00:46:31.574
So what is one key takeaway that you would just love to share with our listeners?

00:46:37.423 --> 00:46:39.469
Hmm. I'd say the biggest thing is: ask questions.

00:46:39.469 --> 00:46:45.804
I actually didn't anticipate, a year ago, that I'd be talking about AI like this.

00:46:45.804 --> 00:47:07.128
I pretty much jumped in less than a year ago and just tried to learn as much as I could about AI, because I was asking those questions. And that's one of the biggest things, whether you're a parent or an educator.

00:47:07.128 --> 00:47:16.376
If you haven't heard a policy from your school, I would start asking those questions.

00:47:16.376 --> 00:47:25.521
And I think, at some level, the types of questions you should be asking are, like you said: what tools are being used?

00:47:25.521 --> 00:47:32.543
What is the policy, or what's the perspective toward AI use? What's the vision and long-term plan?

00:47:32.543 --> 00:47:46.523
Because sometimes the answer might be, well, we're just trying to fill this gap for three months, without the foresight of: are we still going to be doing the same thing three years from now? And we don't know what's going to happen three years from now.

00:47:46.603 --> 00:47:56.211
But if you're just putting a band-aid on it... I'll be honest, I shared this mid-year: I had some teachers asking me.

00:47:56.211 --> 00:48:02.782
They weren't asking about AI specifically, but I had some teachers asking, how do I change reading levels?

00:48:02.782 --> 00:48:07.204
They wanted to differentiate reading text for their students, and so I saw this as an opportunity.

00:48:07.204 --> 00:48:31.590
It took some time, and I went in and showed them a couple of AI tools where they could actually put in their text and change the reading level to help students. I didn't have a long conversation about what AI is; I just knew, right now, we're mid-year, they just need this little thing. And I don't even think they necessarily used it that much.

00:48:31.590 --> 00:48:34.769
All they knew how to do was: I can change reading levels.

00:48:34.769 --> 00:48:49.072
Right, that's a really quick, small thing, and that isn't necessarily something that I think needs to be communicated to parents, because there's not really that much risk of hallucination there.

00:48:49.293 --> 00:48:50.945
Right, the information's already there.
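For contrast, a leveling feature of the kind described here can be sketched as little more than a constrained rewrite instruction; the function and wording below are illustrative, not any specific tool's implementation:

```python
# Illustrative sketch of the prompt behind a "change the reading level"
# feature: the source text is passed through unchanged and the model is told
# not to add facts, which is part of why this use is comparatively low-risk.

def leveling_prompt(text, grade_level):
    """Build an instruction that rewrites `text` at the requested grade level."""
    return (
        f"Rewrite the passage below for a grade {grade_level} reading level.\n"
        "Keep every fact exactly as stated; do not add new information.\n\n"
        f"Passage:\n{text}"
    )

p = leveling_prompt("Photosynthesis converts light energy into chemical energy.", 3)
print(p)
```

Because the facts come from the teacher's passage rather than from the model, the output can be spot-checked against the original in seconds.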

00:48:51.039 --> 00:49:07.621
It's just changing to the appropriate reading level. Where parents especially need to be asking questions, though, is when it gets into the gray areas of: are teachers using AI to help give feedback and grades?

00:49:07.820 --> 00:49:31.251
And that's where I've seen some lawsuits around the country, where teachers have been doing that and providing comments without being transparent. So that goes in tandem with asking questions. On the other side, if you're actually a user of AI, the big thing, and this is what we're reinforcing in the policy that we're going to be rolling out, is transparency.

00:49:31.251 --> 00:49:44.150
So when I use AI... I mean, you never really see this in my posts on LinkedIn, because I don't use AI to write my posts, but if I used AI to help write text, I'd put a little disclaimer.

00:49:44.150 --> 00:49:56.032
I used AI, I used this model, you say which one it was. That's part of what I would consider good standard operating practice.

00:49:56.032 --> 00:50:00.030
Now, with AI: be transparent that you used it and what you used it for.

00:50:00.030 --> 00:50:07.010
Otherwise, you're going to start to mislead people about who you are and what your thoughts actually are.

00:50:08.914 --> 00:50:09.434
I love it.

00:50:09.434 --> 00:50:10.945
Well, thank you so much, job.

00:50:10.945 --> 00:50:11.945
I really appreciate it.

00:50:11.945 --> 00:50:23.829
This was such a great conversation, and I really want to say thank you for meeting with me, having this talk, and sharing your perspective. I definitely took away a lot of value, a lot of valuable gems,

00:50:23.829 --> 00:50:29.896
I should say, that I definitely want to dive into, and you had a lot of great soundbites that I can't wait to share.

00:50:29.896 --> 00:50:31.577
But thank you so much.

00:50:31.577 --> 00:50:38.709
Now, before we kind of wrap up with our last three questions that I always ask all my guests, I would love to give you a little bit of time.

00:50:38.709 --> 00:50:49.548
For our audience members that are listening, especially if they're on LinkedIn or on other social media platforms:

00:50:49.548 --> 00:50:53.885
Can you please let our audience members know how it is that they might be able to connect with you?

00:50:56.389 --> 00:50:59.833
Yeah, so I'm most active on LinkedIn.

00:50:59.833 --> 00:51:03.363
You can look me up with just my name, Job Christiansen.

00:51:03.363 --> 00:51:06.550
I don't think there's anyone else out there with that name.

00:51:06.550 --> 00:51:08.673
I'm not.

00:51:08.673 --> 00:51:12.566
I do use other social media, but it's all personal.

00:51:12.566 --> 00:51:19.923
Also, if you like longer-form stuff, I do have a personal site.

00:51:19.923 --> 00:51:31.409
I have a blog website called Seek Grow Align, and some of it is personal blog stuff, but it's also where I put book reviews.

00:51:31.409 --> 00:51:52.802
So if I read a book, and mostly I'll do this with education-focused books, but sometimes non-education, I'll write my reflection and book review on it, and it'll just be pages and pages, stuff whose character count doesn't fit on LinkedIn, so I post it there. And that actually helps with my learning process.

00:51:52.802 --> 00:51:56.791
So if you want to know like deeper thoughts, it's all on that separate site.

00:51:57.972 --> 00:51:58.394
Excellent.

00:51:58.394 --> 00:51:59.143
And what was the site?

00:51:59.143 --> 00:51:59.726
One more time.

00:52:00.981 --> 00:52:02.266
It's SeekGrowAlign.

00:52:02.967 --> 00:52:03.168
Okay.

00:52:03.880 --> 00:52:05.025
I can send it to you.

00:52:05.447 --> 00:52:07.046
Yeah, is it SeekGrowAlign.com?

00:52:08.220 --> 00:52:09.786
Yes, SeekGrowAlign.com.

00:52:10.208 --> 00:52:14.461
Perfect, excellent. We'll definitely make sure we link that in the show notes as well.

00:52:14.461 --> 00:52:18.853
All right, but before we wrap up: again, the last three questions.

00:52:18.853 --> 00:52:21.922
So, Job, I hope you're ready to answer.

00:52:21.922 --> 00:52:23.244
And here we go.

00:52:23.244 --> 00:52:28.579
Question number one as we know, every superhero has a weakness or a pain point.

00:52:28.579 --> 00:52:32.206
For Superman, we know that Kryptonite weakened him.

00:52:32.206 --> 00:52:42.590
So I want to ask you, Job: in the current state of AI in education, I would love to know what your edu-kryptonite is.

00:52:46.383 --> 00:52:50.693
My edu-kryptonite, the thing I spend a lot of time talking about?

00:52:50.693 --> 00:53:01.614
It has to be the whole safety issue, particularly with the fact that we can't hold AI accountable for its output.

00:53:01.614 --> 00:53:07.112
So I would say that lack of accountability is just like that thorn that's constantly jabbing in my side.

00:53:07.112 --> 00:53:09.206
I'm like, yeah, this is cool.

00:53:09.206 --> 00:53:19.186
I use a new tool, and then I'm like, wait, oh, we can't have accountability yet, and so it's just constantly there and I don't know how to resolve that.

00:53:19.186 --> 00:53:22.826
There's just tension when I'm playing with tools.

00:53:24.108 --> 00:53:24.930
Excellent, all right.

00:53:24.930 --> 00:53:26.313
Great share, great answer.

00:53:26.313 --> 00:53:27.474
I love it, all right.

00:53:27.474 --> 00:53:36.115
Question number two, Job, is: if you could have a billboard with anything on it, what would it be, and why?

00:53:40.260 --> 00:53:53.788
For a billboard, I would probably say something along the lines of "let's get better," or "keep learning." That "let's get better" phrase.

00:53:53.788 --> 00:53:59.407
It came to my mind a few weeks ago, and I don't remember what sparked it.

00:53:59.599 --> 00:54:05.472
But did you ever see the old '90s show Frasier? Yeah, yeah.

00:54:05.472 --> 00:54:12.173
So in one episode, Frasier's brother, Niles, comes on and actually ends up running Frasier's show.

00:54:12.173 --> 00:54:13.842
Frasier gets sick,

00:54:13.842 --> 00:54:20.322
so Niles does the radio psychiatry thing, and Niles comes up with this catchphrase: let's get better.

00:54:20.322 --> 00:54:27.902
And so in my own world, let's get better relates to let's just keep learning and growing.

00:54:27.902 --> 00:54:29.306
I have a growth mindset.

00:54:29.306 --> 00:54:30.488
I'm a lifelong learner.

00:54:30.488 --> 00:54:33.896
My mantra now is just like let's get better.

00:54:33.896 --> 00:54:41.052
You're never really going to be done learning and you're never going to reach this like perfect stage.

00:54:41.052 --> 00:54:45.651
And so I just have that like in my head now and I keep thinking about it.

00:54:47.001 --> 00:54:48.023
Well, hey, that works.

00:54:48.023 --> 00:54:55.007
That's definitely a great message to share, because it can fit into so many categories in life.

00:54:55.007 --> 00:54:58.090
Like you mentioned, you never stop learning.

00:54:58.090 --> 00:55:09.494
And for anybody that's going to get better at anything, it's about continuing to pursue that knowledge and just practicing; it's repetition and things of that sort that get you to that point.

00:55:09.494 --> 00:55:10.336
So I really like that.

00:55:10.336 --> 00:55:11.159
Let's get better.

00:55:11.159 --> 00:55:14.170
And it's so simple yet so powerful right now.

00:55:14.170 --> 00:55:31.686
You got me really thinking on that, and I'm like, yes, let's get better. All right, and my last question for you, Job, would be: if there is one person that you could trade places with for a single day, who would that be, and why?

00:55:37.700 --> 00:55:44.916
Man, I know you gave me time to think about this, but do they have to be living people?

00:55:45.738 --> 00:55:47.880
No, no, it could be anybody, anybody.

00:55:50.342 --> 00:56:13.954
Um, I'm sorry.

00:56:13.954 --> 00:56:17.135
Sometimes it takes me a while to think of things.

00:56:17.934 --> 00:56:19.436
Oh, don't worry about it, Job, it's all good.

00:56:19.436 --> 00:56:23.697
Don't worry, we can edit that part, but just anybody.

00:56:32.494 --> 00:56:32.994
It can be anybody.

00:56:32.994 --> 00:56:57.273
I'm trying to think about the things that really drive me and that I want to know about, like the mysteries that keep me up at night.

00:56:57.354 --> 00:57:02.077
Okay, so this is a guy that very few people have probably heard of.

00:57:02.077 --> 00:57:07.568
His name was Howard Butler, and this goes back to my archaeology days.

00:57:07.568 --> 00:57:14.139
Howard Butler worked at Princeton and so Princeton sent this expedition in, I want to say, 1905.

00:57:14.139 --> 00:57:20.773
It started in Jerusalem, and they went up through what is now Israel, Palestine, Jordan, and Syria.

00:57:20.773 --> 00:57:31.581
Right. Then they were cataloging and mapping a lot of ancient historical sites as they went, and these are some of the earliest records we have of Western documentation of these ancient sites.

00:57:31.601 --> 00:57:39.500
So the site that I worked at in Jordan, this is like the first stuff we have, and so we reference all this stuff.

00:57:39.500 --> 00:57:44.329
So, but the thing is, like, Howard Butler didn't.

00:57:44.329 --> 00:57:53.905
He did some really good documentation, but I know there's stuff that's missing and I wish I could have been him for like the day he first came to this archaeological site that I worked at.

00:57:53.905 --> 00:58:03.273
It's called Umm el-Jimal, in Jordan, and I wish I could have been him for that day, to see it in the state it was in 120 years ago.

00:58:03.860 --> 00:58:08.992
I know what it's looked like the last 10 years, but a lot of it's like collapsed, fallen down.

00:58:08.992 --> 00:58:26.067
There's a lot of stuff that's happened in between, and even though he has really good documentation, the photographs are not good and there's stuff missing from documentation and I just wish I could like see and experience that wonder of what it was like in that state.

00:58:26.067 --> 00:58:29.159
Right, that's just... I love...

00:58:29.159 --> 00:58:42.733
I just love, like, stuff preserved in time, and so it just would have given me a lot of perspective on the people and what happened there.

00:58:43.099 --> 00:58:43.581
That will.

00:58:43.581 --> 00:58:48.809
It's basically lost, right? I'm never going to, we're never going to know certain things that Howard Butler saw.

00:58:50.172 --> 00:58:51.932
Excellent, that is a great answer.

00:58:51.932 --> 00:58:52.539
I love that.

00:58:52.539 --> 00:59:02.447
I think that's the very first person, or you're the very first guest, that goes back and, you know, chooses a historical figure in that sense, you know, and that's very interesting.

00:59:02.447 --> 00:59:04.112
You know, like I said, I never thought about that.

00:59:04.112 --> 00:59:22.548
You know, a moment caught in time, you know, and being able to go back to the way it was when it was first discovered, because now we only get to see what we may know now and, depending on when you get to make that trip, like you said, you know you went over there and you saw it in a very different state.

00:59:22.548 --> 00:59:24.653
So, yeah, that's very interesting.

00:59:24.653 --> 00:59:25.945
I had never thought about that.

00:59:25.945 --> 00:59:35.628
But that's another thing that causes me to kind of pause and, you know, think about those things, like, you know, really capturing those moments. And so, yeah, love it.

00:59:36.110 --> 00:59:37.532
Well, Job, thank you so much.

00:59:37.532 --> 00:59:38.673
I really appreciate it.

00:59:38.673 --> 00:59:47.133
Again, thank you from the bottom of my heart for being a guest and, you know, sharing your experience and, of course, sharing your thoughts of AI and education.

00:59:47.133 --> 00:59:52.967
Like I said, I know a lot of audience members are definitely going to take some gems that they can sprinkle onto what they're already doing.

00:59:52.967 --> 01:00:12.548
Great, so thank you for that. And for our audience members, please make sure you visit our website at myedtechlife.com, where you can check out this amazing episode and the other 327 episodes where, I promise you, you will find a little bit of something for you to continue to grow and to continue to learn, as always.

01:00:12.548 --> 01:00:25.206
Thank you so much to all our sponsors: thank you so much to Book Creator, thank you so much to Eduaide.Ai, thank you so much to Yellowdig. And if you are interested in being a sponsor of our show, please don't hesitate to reach out to me.

01:00:25.206 --> 01:00:28.730
We would love to collaborate and work together with you.

01:00:28.730 --> 01:00:35.840
But, as always, guys, from the bottom of my heart, until next time, don't forget, stay techie.

01:00:35.840 --> 01:01:05.811
Thank you.

Eclectic Inquisitive / Lifelong Learner / Dad

Job Christiansen is an Instructional Technology Specialist for Lake Center Christian School, a K12 private school in Ohio, USA. As an Instructional Technology Specialist, Job partners with educators to bridge pedagogy and practice with technology tools in learning spaces to transform the learning process. He is a lifelong learner, passionate about driving change through a growth mindset and about developing learning communities that raise up 21st-century citizens equipped to handle modern challenges.
With a background in the humanities fields of history, archaeology, and music, Job brings an eclectic and interdisciplinary approach to technology in education. A relative newcomer to K12 education, Job's experience in the non-profit and security sectors positions him with unique insights into the issues surrounding data privacy, the ethical use of AI, and the degradation of digital literacy in schools around the world. In addition to exploring inquiry and PBL in education, you can find Job training for marathons, building LEGOs, and living through curiosity and play wherever he ends up in the world!