WEBVTT
00:00:30.115 --> 00:00:33.438
Hello everybody and welcome to another great episode of my EdTech Life.
00:00:33.438 --> 00:00:41.249
Thank you so much for joining us on this wonderful day and, wherever it is that you're joining us from around the world, thank you, as always, for all of your support.
00:00:41.249 --> 00:00:43.808
We appreciate all the likes, the shares, the follows.
00:00:43.808 --> 00:00:46.600
Thank you so much for interacting with our content.
00:00:46.600 --> 00:00:56.990
As you know, we do what we do for you to bring you some amazing conversations so our education space can continue to grow and we continue to amplify many voices and many perspectives.
00:00:56.990 --> 00:01:00.651
So I wanna give a big shout out to our sponsors.
00:01:00.651 --> 00:01:02.767
I wanna give a shout out to Book Creator.
00:01:02.767 --> 00:01:13.587
Thank you so much for your support, Eduaide and Yellowdig as well, for believing in our mission of bringing you these conversations week in and week out.
00:01:13.587 --> 00:01:14.450
So thank you for all that you do.
00:01:14.471 --> 00:01:17.820
And today I'm very excited to have a two-time guest.
00:01:17.820 --> 00:01:22.810
And you may be saying well, Fonz, you already had a four-time guest, you've got two-time guests, and so on.
00:01:22.810 --> 00:01:26.403
Well, it's because that's the way the show works.
00:01:26.403 --> 00:01:45.027
It's like sometimes, you know, I want to catch up with my previous guests, and especially now in the age of AI in education, and I want to get their perspectives so they can bring in their expertise and, just you know, amplify their voices and give them a platform to share their knowledge and their practices and their perspectives.
00:01:45.027 --> 00:01:47.993
So I would love to welcome to the show Jen Manly.
00:01:47.993 --> 00:01:50.706
Thank you, Jen, for joining me here today.
00:01:50.706 --> 00:01:51.868
How are you this evening?
00:01:51.868 --> 00:01:53.010
I'm great.
00:01:53.090 --> 00:01:53.793
I'm excited.
00:01:53.793 --> 00:01:58.865
I know you said two-time guest, but we're having a really different conversation, so I think it's going to be great.
00:01:59.248 --> 00:02:02.503
Yes, absolutely, and a very different conversation, for sure.
00:02:02.503 --> 00:02:19.320
But, Jen, for our guests that are watching right now, or listeners that may not be familiar with the first show that we did as far as that topic is concerned, can you give us a little brief introduction of what your context is in the education space? Totally.
00:02:19.401 --> 00:02:19.681
Yeah.
00:02:19.681 --> 00:02:20.842
So my name is Jen Manly.
00:02:20.842 --> 00:02:27.514
I have been in education for I don't know 13 years now, something like that.
00:02:27.514 --> 00:02:29.625
It feels like it has not been that long.
00:02:29.625 --> 00:02:32.793
I started as a middle school computer science teacher.
00:02:32.793 --> 00:02:38.902
I taught high school computer science and now I teach a course at the University of Maryland every semester.
00:02:38.902 --> 00:02:44.391
The current course I'm teaching is gender, race and computing, so it's a really interesting class.
00:02:44.812 --> 00:02:52.117
My context in education I create content to help teachers work less without sacrificing their effectiveness.
00:02:52.117 --> 00:03:10.175
I believe in keeping great teachers in the profession, and the way that I kind of attack that problem is by thinking about how we can apply productivity science to the work that we do, and then also, like, setting boundaries around our time and viewing, you know, teachers as professionals.
00:03:10.175 --> 00:03:18.848
But my context for this episode and what we're going to be talking about is I have been teaching about the ethics of AI as a computer science teacher.
00:03:18.848 --> 00:03:23.171
I've taught it to middle, high school and now college students for the last eight years.
00:03:23.171 --> 00:03:25.585
Eight years, seven years, 2018.
00:03:25.585 --> 00:03:30.395
So about four years before ChatGPT was released to the general public.
00:03:30.395 --> 00:03:51.211
And something I'm really passionate about is helping educators use AI and view AI from a, I would say like ethical lens, but really more being critical about when we're using it, understanding that it is a tool, but also that it's not a net neutral tool.
00:03:51.211 --> 00:03:53.239
So I'm excited to talk with you about it today.
00:03:53.740 --> 00:04:23.627
Yeah, and I'm really excited about it too as well, because one of the reasons that we're talking about it just prior to recording the show is really a particular post that really stuck out and I know that we'll get into it because, just like you mentioned right now, you definitely share a lot of great content as how to be very critical of AI and when to use it and when it has its place, and maybe some you know when you just could do a basic Google search to you know to do something like that as far as research or finding something out, and so we'll get into that.
00:04:23.627 --> 00:04:40.199
But I want to ask you you know, I know that you've been doing this since 2018, as far as teaching computer science, the ethics of AI, working with middle school, high school and then, of course, higher ed, I want to ask you let's go back to November of 2022, when the news broke out.
00:04:40.199 --> 00:04:42.944
It's like hey, ChatGPT is available.
00:04:42.944 --> 00:04:47.713
What was your initial reaction and your initial thoughts as you heard the news?
00:04:47.713 --> 00:04:49.507
And, of course, this is being released.
00:04:50.699 --> 00:04:53.329
Yeah, so I would say my initial thoughts.
00:04:53.329 --> 00:05:03.403
Well, I guess I can't, like, pinpoint exactly when it was released, but looking at, like, the first two months right after, I was really concerned.
00:05:03.403 --> 00:05:26.879
And the reason that I was really concerned was because it came out and there was this mass acceptance of it, especially in the education space, without any context or consideration of a lot of the critical components of AI that we in the computer science space have been talking about for years.
00:05:26.879 --> 00:05:34.122
And I think back to, you know, maybe like December, January so December of 2022, January of 2023.
00:05:34.122 --> 00:05:43.031
And I'm watching ed tech experts people who have been in ed tech for a very long time, you know, starting to come out and publish books on using AI in education.
00:05:43.031 --> 00:05:53.704
And I just remember feeling like I have been teaching this for four years at this point and I would not consider myself an expert you know, I'm not somebody who is at.
00:05:53.704 --> 00:06:06.591
You know, I now feel pretty confident in my own prompt engineering, right, but at that time I was not using it and programming it in the way that, you know,
00:06:06.591 --> 00:06:14.752
All of these experts at Google, Amazon, like people in the tech space, had been using AI for much longer than that.
00:06:15.262 --> 00:06:31.586
Lots of people had been very critical, and so it was surprising and also concerning, when ChatGPT was released publicly, that in the education space particularly, it came off as a mass acceptance.
00:06:31.586 --> 00:06:39.507
And to me it was surprising because I don't feel like that is the energy that we have for most things in education.
00:06:39.507 --> 00:06:45.903
Like most things in education, it takes us time to fully accept, right Like I think about.
00:06:45.903 --> 00:06:47.848
I was on a national curriculum writing team.
00:06:47.848 --> 00:06:50.108
We wrote that curriculum for a year.
00:06:50.108 --> 00:07:25.168
We then piloted it for a year with teachers in a classroom before we then released an updated version of that curriculum, you know, for the masses. And so I think it's a mix of concern and surprise, and then also understanding that this is something our students immediately had access to, and so what are you doing to make sure that students are still understanding how to use it responsibly and that it's not detracting from their overall education experience?
00:07:25.168 --> 00:07:26.644
I know that was a lot.
00:07:27.146 --> 00:07:33.682
No, no, no, it was actually perfect, because those are actually very similar views, and I don't know.
00:07:33.682 --> 00:07:42.310
It's very weird, and I had Rob Nelson on the show a couple of episodes back and we're talking a little bit about the AI Fight Club, where you've got really two sides.
00:07:42.310 --> 00:07:48.476
You've got those that are all in and gung-ho, and then some that are a little bit more, you know, cautious.
00:07:48.476 --> 00:07:54.923
Well, maybe I consider myself more kind of trying to be in the middle, obviously, because I love to bring both sides of the conversation to the table.
00:07:54.923 --> 00:08:05.982
But then there's also the other side where it's like no, no, no, like we're going too fast which is something that I truly believe in too as well that this whole move fast and break things doesn't really work.
00:08:06.423 --> 00:08:09.550
And you know, we've seen a lot of things that have kind of failed.
00:08:09.630 --> 00:08:15.762
We've seen some things that you know show some promise, but at the same time, it's like who's really to decide?
00:08:15.762 --> 00:08:22.105
Like, yes, this is going to be very effective and this is going to be the most, this is going to be the solution to education's problems.
00:08:22.105 --> 00:08:27.254
Because I always go back and I always say, well, you know, there really isn't anything new under the sun.
00:08:27.254 --> 00:08:30.148
I remember people feeling the same way about the Internet.
00:08:30.148 --> 00:08:52.375
You know people feeling the same way about iPads in the classroom and then Chromebooks, and these are going to be the things that are going to, you know, drive up test scores and are going to revolutionize education, and it just seems like, yes, at the very beginning there was a huge acceptance, which was very scary because everybody just started jumping in and diving in and not being very cautious to the other side.
00:08:52.375 --> 00:09:03.885
As far as that, I always focus on the data privacy side. I always focus on: do parents know?
00:09:03.885 --> 00:09:09.129
I know that parents may be familiar with this, but do parents know that this is being used in schools?
00:09:09.369 --> 00:09:33.081
and this needs to go past just the tech form that you sign at the beginning of the year. And at a time when we're, you know, dealing with burnout, dealing with poor teacher retention, then everybody jumps on the boat saying, yep, this is going to be it.
00:09:33.081 --> 00:09:43.340
This is going to personalize the learning, this is going to figure exactly what is wrong with each student and give them exactly what they need to be able to, you know, succeed.
00:09:43.340 --> 00:09:48.625
And that's really what teachers, we want our students to succeed, but at what cost?
00:09:48.625 --> 00:09:51.432
And is it really going to be effective?
00:09:51.432 --> 00:09:59.142
And so I know in the last conversation that we had, we were talking about how teachers can, you know, fight burnout.
00:09:59.142 --> 00:10:00.703
And you talk about agile.
00:10:00.703 --> 00:10:03.489
You know, what is it?
00:10:03.489 --> 00:10:04.090
Agile?
00:10:04.090 --> 00:10:05.292
What's the word?
00:10:05.292 --> 00:10:06.554
It's okay because we can cut this.
00:10:06.554 --> 00:10:07.662
Yeah, what is it?
00:10:08.143 --> 00:10:10.669
It's called Scrum, but it's like project management.
00:10:10.669 --> 00:10:12.303
Yeah, okay, perfect, let me go back into that.
00:10:12.767 --> 00:10:19.846
I know in our previous episode we talked about you know project management, like kind of like Scrum Masters, you know using agile to do that.
00:10:19.846 --> 00:10:34.011
And so what I see here is people that are really saying like, hey, we can just easily put a student on a computer, it's going to figure out exactly what they need and we can all take it from there and that's it.
00:10:34.011 --> 00:10:38.251
And then, of course, now we've got the other side, like, no, no, we need that human connection.
00:10:38.251 --> 00:10:46.863
We need all of this human connectivity as far as making sure that the teacher is still present, they're active, they're engaging students.
00:10:46.863 --> 00:10:50.071
So we're seeing so many different perspectives.
00:10:50.150 --> 00:11:06.224
But I want to ask you because I know the previous show we talked about you know project management and it just seems like, hey, this product it's going to help you just manage your classroom and it's going to manage your workload, and you really just come in and you need a worksheet.
00:11:06.224 --> 00:11:10.360
Hey, just prompt it and I can make you a thousand worksheets in 30 seconds.
00:11:10.360 --> 00:11:12.509
So I want to ask you about that.
00:11:12.509 --> 00:11:26.913
Now, in your perspective and in your classroom experience, and maybe what you have seen, you know both at the middle school, high school and even higher ed level what is it that you're seeing now as far as teachers' acceptance of this?
00:11:26.913 --> 00:11:29.306
Is it really making their lives a lot easier?
00:11:29.306 --> 00:11:31.312
Is it really making them productive?
00:12:02.429 --> 00:12:22.302
Yeah, I think it's a really good question, and I actually spoke about this at iTech Iowa in October of last year, and one of the things that I think, like, a good first question for teachers is: is this actually going to make you faster?
00:12:22.302 --> 00:12:38.913
Right, like there are certain tasks that teachers are outsourcing to AI, that it is faster to outsource it to AI and the product that AI is giving you is something that is going to be, you know, usable in your classroom.
00:12:38.913 --> 00:12:40.157
That's going to make sense for your students.
00:12:40.157 --> 00:12:43.475
So a good example of this is like creating a rubric.
00:12:43.475 --> 00:12:45.599
Right, you have created an assignment.
00:12:45.599 --> 00:12:49.612
You want it to create a rubric for you that you're going to use to assess students.
00:12:49.612 --> 00:12:57.184
That is a very easy task for a Claude or ChatGPT, right, and we're not dealing with data privacy.
00:12:57.184 --> 00:13:02.265
So that might be a situation where a teacher you know I think about.
00:13:02.727 --> 00:13:06.557
When I was teaching middle school, I had four preps, and none of them
00:13:06.557 --> 00:13:08.562
had anybody else teaching them, right?
00:13:08.562 --> 00:13:15.277
So I had 230 students, four unique preps, two of them didn't have curriculum and I was a first and second year teacher.
00:13:15.277 --> 00:13:30.489
That was all on my own, because there were only three other teachers in the entire district that were teaching any of the classes that I was teaching, right. And I think back to where I was then: being able to outsource creating the rubric would have actually been extremely helpful.
00:13:30.489 --> 00:13:43.110
Right, being able to outsource potentially a worksheet right, like I want it to make me a note catcher, that is something that is very helpful and is much quicker than how I could do it for myself.
00:13:43.591 --> 00:13:55.641
The challenge is that a lot of teachers I guess I shouldn't say a lot, some teachers are looking at AI and they're saying, well, if it can help me with this thing, then I want to use it for everything.
00:13:55.641 --> 00:14:07.346
And that is not like A, and this is, I think, the post that you saw from me right, where I was saying don't use AI for things that you can Google, and there's lots of reasons for that.
00:14:07.346 --> 00:14:10.600
But to me, the number one reason is it's actually less efficient.
00:14:10.600 --> 00:14:20.903
Like, if you're trying to save time, but maybe you're not great at prompt engineering, you're going to go back and forth with ChatGPT or Claude or whatever you know system you're going to use.
00:14:20.903 --> 00:14:29.255
Let's actually be critical of how much time using this tool is taking us, and maybe there are things and certainly for teachers.
00:14:29.295 --> 00:14:59.205
There are a lot of things that you can do faster yourself, right, you can do faster because you already have a template that you've used for other units and you're just going to reuse it, right, and so I think, like that's number one is there are absolutely things that AI can help teachers be more productive, help teachers be more efficient, do it more quickly, and then there are other things that it's less efficient to use AI.
00:14:59.205 --> 00:15:01.779
It's not giving you the output that you want.
00:15:01.779 --> 00:15:32.189
It's not creating output that's friendly for kids, or, like you talked about, and this is actually a big concern of mine, it doesn't actually consider the fact that we're, you know, over-relying on it.
00:15:32.235 --> 00:15:37.721
You mentioned something like you know, if I can use it for this, well then that means I can use it for this and this and this.
00:15:37.721 --> 00:15:49.721
And you know, essentially the way that I saw AI and playing around with it, even before the release of 2022, with, you know, there was Writer, there was Jarvis, there was you know, other programs and so on.
00:15:49.721 --> 00:16:00.763
To me they seemed more like they were just, like you mentioned, productivity tools and tools that can help you kind of do some like copy, you know, make it a little bit faster, and so on, especially if you're doing marketing.
00:16:00.763 --> 00:16:19.549
But it just seems like there's always something in education where we kind of try and just put those, you know, square pegs in the round holes of education and just make it fit in. Like, we've just got to do it because it's going to help out, because if it helps out on the outside, it's definitely going to help me out with everything here.
00:16:19.549 --> 00:16:29.285
And I do agree with you that there are some things that it will definitely make things a lot easier, possibly like some things that you may need to translate, some things that you would just need to.
00:16:29.285 --> 00:16:36.999
You know, especially with the translation portion.
00:16:36.999 --> 00:16:38.003
I believe that that is definitely very helpful.
00:16:38.003 --> 00:16:41.615
Or I know I had Paul Matthews here on the show stating just different reading levels or Lexile levels just with the same passages and things of that sort.
00:16:41.615 --> 00:16:42.718
So those things.
00:16:42.718 --> 00:16:54.362
But to just want to do everything with it can be a lot more of a waste of time, especially, like you mentioned, if you are not putting in the right input to give you that output that you want.
00:16:54.362 --> 00:16:59.259
You can spend too many minutes or too many hours just trying to get certain things.
00:16:59.259 --> 00:17:17.081
But then I know that there are platforms that already pre-prompt for you, per se, and they have a plethora of tools that are there that you just click and say, hey, I need a rubric, and I just tell it, like, this is what it's going to do, and then it's going to go ahead and produce it for me, and of course it's not going to be for free.
00:17:17.081 --> 00:17:21.381
Most of these things they have the freemium, and so those are some of the things too.
00:17:21.481 --> 00:17:43.761
But talking about the adoption within many districts is something that, to me, also kind of worries me and concerns me, because of mass adoption and not really knowing where the industry may be going, or that uncertainty that, as ChatGPT continues to change, to grow, it definitely wants to be more profitable.
00:17:43.903 --> 00:17:50.363
So then, for those platforms that connect to those APIs, those prices start going up too as well.
00:17:50.363 --> 00:18:01.703
And then, of course, accessibility to teachers, to districts, to schools you know now you've got, you know upper echelon districts that have access to this, and then, of course, smaller districts may be out of luck.
00:18:01.703 --> 00:18:25.077
But also the changes in that information, you know, as far as the knowledge cutoff date, like we were talking about. So going to Google and maybe finding something that is, you know, right on par with, you know, 2025, the news, or something that's most recent, makes more sense than to just say, hey, type it into Claude and type it into ChatGPT and tell me what it gives you.
00:18:25.077 --> 00:18:31.624
And even some of these platforms, in their privacy or terms of service, it'll say knowledge cutoff date 2023.
00:18:31.624 --> 00:18:34.820
And I believe it was July 2023 for a lot of them.
00:18:34.820 --> 00:18:39.557
So we're not actually getting you know, or is it when they say they can search the web?
00:18:39.557 --> 00:18:41.381
But is it really?
00:18:41.501 --> 00:18:42.644
searching the web.
00:18:42.663 --> 00:18:51.940
Yeah, so I want to ask you now on that, because I know you talk a little bit about the ethics and the data privacy with your students.
00:18:52.861 --> 00:19:01.232
So I want to ask you as far as some of these pitfalls you know, how is it that you address this with your students as you're teaching them about AI?
00:19:02.374 --> 00:19:15.740
Yeah, totally, so, okay, so I guess I'll go through how I actually teach this with students.
00:19:15.740 --> 00:19:19.515
So I'm really fortunate that I now teach at a university that allows us to have these types of conversations, right, these critical tech conversations.
00:19:19.515 --> 00:19:46.608
So the number one way, the place I start from, is that AI is not a net neutral tool, right, like I think sometimes we look at it especially through the lens of, you know, well, it's a computer that's making decisions, and so it's neutral, because computers can't be biased and it's like, well, the people who program them have implicit biases that they may not even be aware of.
00:19:46.608 --> 00:19:55.529
Right, and so no tool that is programmed, like AI, is net neutral or not biased.
00:19:55.529 --> 00:20:08.795
That's the starting point, and then the next piece of this is understanding all of the different ways that AI is non-neutral, right.
00:20:09.115 --> 00:20:11.636
So a couple of places that we can talk about this.
00:20:11.676 --> 00:20:15.858
The first is understanding the environmental impact of AI usage.
00:20:15.858 --> 00:20:17.200
Right, huge demand on water resources.
00:20:17.200 --> 00:20:55.898
So, ultimately, when we think about AI and the environmental impacts of AI, we're talking about the data centers that are needed in order to run, you know, all of these different searches that we want to have, and when you think about it as one search right or one prompt, it's not that much.
00:20:55.898 --> 00:21:00.836
But the problem with generative AI is that it's not just you using it right.
00:21:00.836 --> 00:21:15.740
We've done this mass adoption of generative AI and so you are part of this bigger usage that is contributing to increased water usage, which is a problem because lots of people lack access to clean water.
00:21:15.740 --> 00:21:21.297
That's contributing to lots of energy use, the burning of fossil fuels.
00:21:21.297 --> 00:21:28.701
These are not neutral things, and if you don't care about that, we can also talk about the hidden labor of AI, right.
00:21:28.701 --> 00:21:32.180
So there was this study that came out right after ChatGPT, well, not a study.
00:21:32.180 --> 00:21:43.442
The article that came out from Forbes right after ChatGPT was released to the general public, about the way that ChatGPT trained out racism, misogyny, sexism, right.
00:21:43.442 --> 00:21:50.394
All of these things that exist, because ChatGPT's knowledge base is the entire internet and the internet is problematic.
00:21:50.394 --> 00:22:01.174
They paid African workers $2 an hour to be exposed to these extremely traumatic and problematic things, and it's hidden labor, right?
00:22:01.215 --> 00:22:09.757
We think, well, AI is a computer, it's a robot, but it takes people to be able to do that work, and so you know, we can think about the environmental impacts.
00:22:09.757 --> 00:22:15.201
We can think about the hidden labor we can also think about.
00:22:15.201 --> 00:22:22.465
You know what groups are further marginalized by AI usage, right?
00:22:22.465 --> 00:22:28.009
So this is something that I talk about with students: using AI for grading.
00:22:28.009 --> 00:22:36.243
I think using AI for grading is incredibly problematic because, number one, your K-12 students can't consent to their data being used.
00:22:36.243 --> 00:22:50.882
But also, when we think about biases and how they manifest, people are like, well, AI can't tell that this is coming from you know, a Black student or a Brown student or a female student.
00:22:52.305 --> 00:22:58.048
But there are ways that AI is biased against certain groups that are not explicit.
00:22:58.048 --> 00:23:04.486
So, for example, one of the stories I talk about is Amazon used to have a hiring algorithm that was secret.
00:23:04.486 --> 00:23:08.891
They didn't tell anybody they were using it until they decided not to use it anymore.
00:23:08.891 --> 00:23:15.835
And the reason they decided not to use it anymore is because they found that the hiring algorithm was discriminating against female candidates.
00:23:15.835 --> 00:23:23.903
Because the knowledge base for that hiring algorithm was successful Amazon engineers, who were predominantly men, right?
00:23:23.903 --> 00:24:20.719
So certain characteristics that were coming up in female resumes were not being accepted or seen as qualified simply because the knowledge base consisted mostly of men.
00:24:20.719 --> 00:24:47.021
And so there's all of these ways that AI is non-neutral, but we don't talk about it, right? And I think that's the first piece: understanding that you can make an informed decision about when you're going to use AI personally, and if you really think about it, it's probably not as often as you're currently using it.
00:24:48.346 --> 00:25:05.007
And I want to highlight a couple of things like you're talking about, especially that energy usage, because over the weekend I saw somebody post, and it was part of a thread, you know, we're in 2025 now, this was actually April 2025.
00:25:05.007 --> 00:25:17.057
They posted like, oh my gosh, I just found out about how much energy is being used and so on. And of course, everybody doing the Ghibli trends and the action figure trends and all of that, they don't think about those things.
00:25:17.057 --> 00:25:23.120
You know, it's just like hey, I want to fit in, I want to do what everybody's doing and we just follow suit and follow along.
00:25:23.120 --> 00:25:43.642
So I followed this post on this thread, and this is somebody that is very well known, but this was what they posted, and it says: here's one thing that might help, though it uses massive amounts of energy, but compared to the energy we use on meat production, it is very small.
00:25:43.642 --> 00:25:47.685
But it actually has the ability to solve this problem, which gives me hope.
00:25:47.685 --> 00:25:54.630
Eating one less hamburger a day would have a far bigger impact than using AI less.
00:25:55.590 --> 00:25:56.811
And I was thinking to myself.
00:25:56.811 --> 00:26:07.404
I was like, okay, very interesting. And that sparked a huge conversation on LinkedIn, and everybody was just, like, posting and everything, talking about this.
00:26:07.404 --> 00:26:42.004
And you know, obviously it's coming more to the forefront as far as that is concerned, but there's still so much hype around it, where you just had, you know, companies doing their big annual conference and showing like, hey, here's our new AI library, and now we've got this and we've got that, and really all that hype covers up all of what we talked about, because I remember seeing that 60 Minutes interview, too, about the data workers and getting paid that very, very small wage of $2 a day and the horrific things that they were seeing.
00:26:42.004 --> 00:26:43.408
It just really blew my mind.
00:26:43.408 --> 00:26:51.016
The other thing that I wanted to talk about, too, is the use of AI that we were talking a little bit about.
00:26:51.016 --> 00:26:53.019
You talked about grading.
00:26:53.019 --> 00:27:00.258
Obviously, we'll talk about some plagiarism detectors, as we know that they don't work.
00:27:00.298 --> 00:27:18.500
But I saw a recent post too as well, on TikTok, by somebody that I follow, stating that, for example, Turnitin, which we all know is a plagiarism detection platform, is now kind of relabeling or rebranding in a way, because I guess there's so much hype that these detectors don't work.
00:27:18.500 --> 00:27:25.824
Now they're saying, oh, we are an integrity checker, so now it's kind of like, well, let's flip it around.
00:27:25.824 --> 00:27:28.695
And to me I'm thinking you're still doing the exact same thing.
00:27:28.695 --> 00:27:34.321
Now you're just relabeling for profit, and that's really what it is, and that's the way that I see it.
00:27:34.321 --> 00:27:43.119
So what are your thoughts on plagiarism detectors?
00:27:43.119 --> 00:27:54.589
And maybe now that you hear a little bit about it, because I've not just heard it from Turnitin, but there are some other articles that have come up from higher ed stating, oh, it's the integrity of education, so we can still use AI, but we want to show them how to use it with integrity.
00:28:03.755 --> 00:28:04.418
So I know it's a two-part question.
00:28:04.418 --> 00:28:05.342
There might be a lot there, but go for it.
00:28:05.342 --> 00:28:05.702
Yeah, let's do it.
00:28:05.702 --> 00:28:07.371
So you know, let's talk about original Turnitin, right, the original Turnitin.
00:28:07.371 --> 00:28:09.697
That's not doing AI detection, that's just doing plagiarism detection.
00:28:12.301 --> 00:28:21.075
I think it would be an irresponsible use of Turnitin to just base your interpretation of plagiarism just by looking at the percent right.
00:28:21.075 --> 00:28:23.960
Like the percent is really like.
00:28:23.960 --> 00:28:32.625
If Turnitin flags it, that means you go in and look at what's flagged, right; that's really an indicator to you as an educator.
00:28:32.625 --> 00:28:42.940
Something about this is a little bit off, and maybe it's that the student didn't cite their sources correctly, maybe it's that they pulled entire quotes and like that's not really great, but they did cite it right.
00:28:42.940 --> 00:28:49.000
Like it's really a flag for you to actually then go in and look deeply at every paper.
00:28:49.000 --> 00:28:53.459
It's a helper tool, right? Like I said, when I taught middle school I had 230 students.
00:28:53.459 --> 00:29:05.279
Now teaching college, I often have 120, 130 students a class. Like, I don't have the ability to read every single paper that a student turns in through the lens of: is this plagiarism or not?
00:29:05.279 --> 00:29:07.668
So Turnitin for plagiarism.
00:29:07.668 --> 00:29:18.850
We'll talk about AI separately, but for plagiarism it is a helpful tool insofar as this is a flag for me to then be able to say, this paper looks questionable.
00:29:18.850 --> 00:29:20.842
Let me look at this a little bit more deeply, right?
00:29:20.842 --> 00:29:27.522
I still think it would be irresponsible to accuse a student of plagiarism without doing that deep work yourself, right?
00:29:27.522 --> 00:29:49.829
And so the problem with AI plagiarism detectors is that since their advent there's been that disclaimer that, like, we're not 100% accurate at detecting AI, and what researchers have found in studying a lot of these AI detectors is that they tend to be biased.
00:29:49.829 --> 00:29:59.817
They tend to ping incorrectly more regularly for neurodivergent students and for students whose first language is not English, right?
00:29:59.817 --> 00:30:04.864
There's a big thing that came out recently that was talking about how the em dash is an indicator of AI.
00:30:04.864 --> 00:30:26.846
I've always used the em dash, I love the em dash, it's one of my favorite pieces of punctuation, and so for me again, as somebody who's understood AI, I would never use an AI detector as an indicator of anything, because they say on the front end this actually is not accurate.
00:30:26.846 --> 00:30:31.205
So we know that, and we know again that it's biased against certain student groups.
00:30:32.638 --> 00:30:41.416
What I think is interesting for educators is that when you start receiving student work that is written with AI,
00:30:41.416 --> 00:30:51.410
you can see it. Like, there are certain qualities that I pick up on, where this doesn't necessarily sound like something a student already turned in that they wrote in class.
00:30:51.410 --> 00:30:56.792
There's an excessive use of bullet points, and all of the bullet points are formatted exactly the same way, right, with a few different words switched out.
00:30:59.902 --> 00:31:06.014
We were talking before we started recording about how sources, right, like, a good place to check,
00:31:06.014 --> 00:31:23.908
if you're like, I kind of think that this might be AI, is checking to see if the sources exist, because a lot of times they don't, right, they're made-up sources, made-up resources, and so that's like a good first step.
00:31:23.928 --> 00:31:26.234
But I think, you know, ultimately, especially as these tools get better, they get more human.
00:31:26.234 --> 00:31:29.342
Students learn how to prompt, right?
00:31:29.342 --> 00:31:50.861
There's no guarantee that it's not written by AI if you give them an assignment and they can go do it at home or they can do it on a computer where they have access to those tools, and so for me, like even at the college level I look at having conversations around AI usage with students as just that, as starts of conversations, right?
00:31:50.861 --> 00:32:04.433
So if I suspect that a student used AI, I'll say that. I'll say, hey, this doesn't sound like you, potentially, right? These cards sound like they were written by AI.
00:32:04.433 --> 00:32:13.055
Right, and give them an opportunity to be upfront about it and to be honest like that's how I approach a lot of conversations about plagiarism.
00:32:13.256 --> 00:32:18.295
Anyway, right, from this place of: what caused the student to need to cheat?
00:32:18.856 --> 00:32:32.256
Right, because a lot of times students don't want to cheat, but they're short on time or they don't understand what they're doing, and they make a decision to plagiarize or to use AI in ways that are not necessarily acceptable.
00:32:32.256 --> 00:32:44.729
But I think, like, as we're navigating this, we really have to stop looking at it as if we're trying to find when students are using it, like it's a gotcha moment.
00:32:44.729 --> 00:32:50.548
It's really opening a conversation, especially if you are an educator that is using AI.
00:32:50.548 --> 00:33:00.438
Right, if you're using AI in certain components of your classroom, but you expect your students not to be using it at all, it's a little bit of a disconnect.
00:33:00.438 --> 00:33:24.815
So, and then I think the last thing I'll say, and I would love to get into it, I think this is a bigger conversation, right, but like, I think it forces us to be creative about how we design assignments and assessments so that it's either not possible to use AI or it's disadvantageous for students to use AI, that it actually is better for them to not use it at all.
00:33:25.185 --> 00:33:27.371
Yeah, and you hit on a lot of great things there.
00:33:27.371 --> 00:33:34.297
The one that I really want to highlight too, like you mentioned, is the original intent of, like, Turnitin, and, like you said, it's a flag.
00:33:34.297 --> 00:33:47.406
It's for me, as a teacher, to go and say, okay, I'm using this as a tool to help me give proper feedback, but I think it just goes back now to that over-reliance of, well, this is what it gave me.
00:33:47.406 --> 00:33:49.131
You cheated, you know, that's it.
00:33:49.131 --> 00:33:56.875
I'm just going to give you the zero, because this is all AI and it's just amazing how quickly that just turned into that.
00:33:57.538 --> 00:34:11.096
And now one of the things that I see, too, is just a lot of platforms stating, hey, let's work on your students' writing, so as they input their own writing, in their own words, this platform is going to immediately give them the feedback that they need.
00:34:11.096 --> 00:34:11.945
And I'm thinking, okay.
00:34:11.945 --> 00:34:30.619
So I know that, as a teacher, you know, it can be difficult to give feedback to maybe up to 30 kids, and maybe more in some schools, like, you know, just depending on the class size, but there still has to be that human component of at least checking it and saying, okay, this is the feedback that it's giving.
00:34:30.619 --> 00:34:33.364
But now let me go and do a once through.
00:34:33.364 --> 00:34:49.192
And that over-reliance, that is what scares me, and I've said this from the very beginning too, because as soon as this came in, teachers were using it, and then all of a sudden it's like, hey, this output, like the very first output, it's like, oh, this is truth, this is gospel.
00:34:49.192 --> 00:34:51.266
Here you go, guys, here's your handout.