WEBVTT
00:00:30.115 --> 00:00:33.777
Hello everybody, and welcome to another great episode of My EdTech Life.
00:00:33.777 --> 00:00:39.173
Thank you so much for joining us on this wonderful day, wherever it is that you're joining us from around the world.
00:00:39.173 --> 00:00:43.491
Thank you, as always, for all of your likes, your shares, your follows.
00:00:45.351 --> 00:00:47.524
Thank you so much for just interacting with our content.
00:00:47.524 --> 00:00:49.453
Thank you so much for all the great feedback.
00:00:49.453 --> 00:00:57.984
We really appreciate it because, as you know, we do what we do for you to bring you some amazing conversations that'll help nurture our education space.
00:00:58.085 --> 00:01:05.328
And today I am really excited to have on a wonderful guest that I have followed and then connected with on LinkedIn.
00:01:05.328 --> 00:01:20.504
She is putting out some amazing things, things that will really get you, you know, at least stopping and thinking and kind of meditating, because sometimes, you know, we move too fast, we break things, and then we're like, oh, if I had just thought about that a little bit more.
00:01:20.504 --> 00:01:26.370
So today I would love to welcome to the show Angeline Corvaglia, who is joining us today.
00:01:26.370 --> 00:01:28.284
Angeline, how are you doing today?
00:01:29.085 --> 00:01:30.227
I'm doing great, thank you.
00:01:30.227 --> 00:01:30.728
How are you?
00:01:31.411 --> 00:01:34.825
I am doing excellent and thank you so much for connecting with me.
00:01:34.825 --> 00:01:51.673
Like I mentioned earlier, I know Saturday mornings are very precious, but I do appreciate you taking the time out of your day to just share your passion, share your world with us, and to really share what you have been amplifying on LinkedIn and through,
00:01:51.673 --> 00:01:58.685
you know, all your various posts. And we're going to be talking about an amazing conference that you will be speaking at as well.
00:01:58.685 --> 00:02:00.087
So there's a lot to cover.
00:02:00.087 --> 00:02:18.050
But before we get into the meat of things, Angeline, if you can give us a little background introduction, and just what your context is within the digital safety space, education space, and even parent space, so that way our audience members can get to know you a little bit more.
00:02:19.080 --> 00:02:20.646
Okay, thank you so much for having me.
00:02:20.646 --> 00:02:29.909
There are some things that are always worth giving our time for, and spreading messages, your own and others', is definitely worth the time.
00:02:29.909 --> 00:02:32.818
So I have a very eclectic background.
00:02:32.818 --> 00:02:34.823
It would take me a while to tell the whole thing.
00:02:34.823 --> 00:02:38.693
I'll just start with the whole online safety space.
00:02:38.693 --> 00:02:50.968
It actually found me, because when my daughter was born (she's nine now), I was a CFO at a financial institution, and I was working there for 15 years total.
00:02:50.968 --> 00:03:01.358
After that, you know, I never wanted that to be my future, so I quit and I went to work for a software provider.
00:03:01.358 --> 00:03:02.219
They put me in sales.
00:03:02.219 --> 00:03:18.149
That wasn't the best decision, but I did that for around three years, and then I said, I have to do my own thing. And my intention was actually to do digital transformation, and like a little bit of help for the kids on the side, because that was the part of being a CFO that I liked.
00:03:18.149 --> 00:03:20.257
But then I noticed people on LinkedIn.
00:03:20.257 --> 00:03:32.211
They were posting things for kids and I felt like the medium wasn't something that kids were going to listen to, or it was actually long articles, for example.
00:03:32.211 --> 00:03:35.689
So I'm thinking, oh, I'll just have some fun and create a video.
00:03:35.689 --> 00:03:39.526
So I started asking these experts, can I create the videos?
00:03:39.526 --> 00:03:40.838
And they're like, yeah, sure.
00:03:40.838 --> 00:03:45.532
And I got a very big positive reaction to these videos.
00:03:46.841 --> 00:03:54.687
And then, and this was about a year and a quarter, a year and a half ago, I realized, you know, there's so much that needs to be done.
00:03:54.687 --> 00:04:07.584
You know, after spending around three months mainly focused on that, I realized I have to do this, I have to do this full time, I have to help with that.
00:04:07.584 --> 00:04:17.410
And AI is one thing I'm especially interested in, and the reason I first got interested in it is actually from my time at the software provider.
00:04:17.410 --> 00:04:22.706
They were doing work for customers and AI was the thing for the developers.
00:04:22.706 --> 00:04:26.254
It was the thing for the tech experts, and I wasn't a tech expert.
00:04:26.319 --> 00:04:40.165
I was the bridge between tech and business, and I felt, you know, this was when ChatGPT had just come out, that the world was changing, and people who weren't in tech didn't feel like they were part of it.
00:04:40.165 --> 00:04:44.812
So I was like I need to help these people understand.
00:04:44.812 --> 00:04:45.904
So that's how it started.
00:04:45.904 --> 00:04:56.564
And then I had two characters. My activity is called Data Girl and Friends, and I had Data Girl, and then someone suggested, why don't you have AI?
00:04:56.564 --> 00:05:07.552
So I have Isla AI Girl too, and Isla talks about AI and Data Girl talks about online safety and all the other privacy aspects.
00:05:08.901 --> 00:05:38.766
That is wonderful. And what I love the most about this, about amplifying people's voices and their work, is just to hear the background that they're coming from, what they're seeing, and how they either saw something and tried to find a solution to it or, in this case, worked along with a company. Like you say, you were making videos, and then all of a sudden you're seeing, hey, there's a need for this now, and this is, I think, fantastic.
00:05:38.766 --> 00:05:54.230
And Data Girl and Friends is something that I know I would love to share with my parents, and so that's why I'm thankful that you are willing to be here today, so we can learn a little bit more about that, because, as part of my job, I do get to work with parents on a monthly basis.
00:05:54.230 --> 00:05:58.987
We talk, and we have these conversations about data and data privacy.
00:05:58.987 --> 00:06:02.721
The most recent conversation that I did with them, I posted on LinkedIn.
00:06:02.721 --> 00:06:31.762
We were doing one on sharenting, where parents are just oversharing, you know, pictures and so on, and then I talked to them also about, you know, these AI platforms that can now take some of those pictures and, you know, basically undress those pictures, and then there's the extortion aspect of it. And so we go deep into those conversations, because I know that, although it's a tough topic, you know, we're informing parents and letting them know the dangers, and also talking to them about AI and chatbots.
00:06:31.762 --> 00:06:37.793
So, kind of taking that information into, I guess, that
00:06:37.860 --> 00:06:39.403
path now into the conversation.
00:06:39.403 --> 00:06:46.074
I know that you have spoken very much about AI and you're very vocal about it.
00:06:46.074 --> 00:07:07.485
But I want to ask you, you know, as far as AI literacy is concerned: I know that AI is moving at a very rapid pace, and it just seems like every day or every second there's a new app, a new company, something new, and all these models are coming out that are reasoning and all that good stuff.
00:07:07.485 --> 00:07:19.533
But I just want to ask you: with AI moving as fast as it is, do we need to focus more on that AI literacy side, or do we need to focus more on implementing more robust AI safety regulations?
00:07:21.021 --> 00:07:29.935
I'm going to say the AI literacy, because I no longer trust government or industry to solve it.
00:07:29.935 --> 00:07:37.471
I don't like to make political statements, so I don't want to get political, because that distracts often from the message.
00:07:37.471 --> 00:07:45.446
But any government, any government. We do need regulation, we need responsible industry.
00:07:45.446 --> 00:07:52.050
But you said it exactly: it's moving very fast, and if we wait, then we're just going to keep waiting.
00:07:52.050 --> 00:07:55.451
There's enough examples of parents who have been fighting.
00:07:55.451 --> 00:08:03.742
I know one parent, Jesper Graugaard, in Denmark, who's been fighting for the privacy of his kids for five years.
00:08:03.742 --> 00:08:10.735
He's clearly in the right but he can't really move very fast.
00:08:10.735 --> 00:08:23.000
And there are other countries, like Norway, where they managed to actually get change from the government, but it took a year and a half for the government to actually make rules about privacy for kids in schools.
00:08:23.000 --> 00:08:25.730
In a year and a half, how much has happened?
00:08:25.730 --> 00:08:28.209
So I think the literacy is most important.
00:08:28.209 --> 00:08:30.206
Long answer to a short question.
00:08:30.206 --> 00:08:37.653
And the issue with this literacy is exactly that it's moving so fast.
00:08:39.140 --> 00:08:42.671
And just I was thinking when you're talking about working with parents.
00:08:42.671 --> 00:08:51.032
One whole aspect of my concept, my way of working, is that parents need to understand that they're not going to know.
00:08:51.032 --> 00:08:54.024
Their kids are going to know; they're not going to know.
00:08:54.024 --> 00:09:05.724
So the way to keep kids safe, what I try to bring about with the short videos, is that parents and kids should watch them together and talk about them together and teach each other.
00:09:05.724 --> 00:09:10.722
Because there needs to be trust, because the kids are going to know; they're going to know the parental controls.
00:09:10.722 --> 00:09:18.048
There's always a way around the parental controls. Their friends are going to tell them how they're using AI, and they're going to try it out.
00:09:18.048 --> 00:09:21.604
So there needs to be a trust and the parents just aren't going to figure it out.
00:09:21.604 --> 00:09:26.172
So that's kind of my way of seeing it.
00:09:27.033 --> 00:09:44.649
Yeah, and you know what, I love that you brought it back to the parent aspect of it, because I know, with my work with parents, and I'm coming into education actually from business and marketing, that there's a term that is used quite often.
00:09:44.649 --> 00:09:54.431
This is, oh yeah, you know, our learning community, our learning community, and sometimes what I feel is that we don't include the parents as much in that learning community.
00:09:54.490 --> 00:10:01.133
It just seems like it's the upper management and then, of course, the mid-level and then the teachers and then students.
00:10:01.133 --> 00:10:07.628
But I love that you touched on the fact that parents need to know the students are already using it.
00:10:07.628 --> 00:10:14.715
The students already are, obviously, because of their friends, and they see things on, you know, social media and things of that sort.
00:10:14.715 --> 00:10:29.667
They are already familiar with a lot of the apps, but the parents aren't, and so I love the way that you bring that together and saying these short videos are for parents and their, you know children to watch together and have those conversations.
00:10:29.667 --> 00:10:41.606
And that's really the job that I get to do with our parent liaison specialist, or our parent engagement specialist I should say: the goal is that we tell them, we're having these conversations,
00:10:41.606 --> 00:10:49.374
but I'm also giving you these resources, both in English and Spanish, because those are the two predominant languages here, where I live along the border.
00:10:49.374 --> 00:11:11.263
But these are resources to have those conversations with your son or daughter, just at least to get them to think for 0.5 seconds, you know, before they click send, or whatever it is that they're going to do or share, because of the long-term consequences that might happen later on, and also even talking to parents about that too.
00:11:11.263 --> 00:11:16.308
Like, hey, when you're posting something about your child, is this something that you would like posted about yourself?
00:11:16.811 --> 00:11:28.100
Because later on, you know, with students and with AI, like I said, there's even more of a danger now, I think, or at least it's heightened because of what can be done with these apps.
00:11:28.100 --> 00:11:38.471
So I love that, that the work that you're doing in bridging that gap between parent and student or child in this case, and bringing that together.
00:11:38.471 --> 00:11:51.192
So let's talk a little bit about you know more on that parent side, because I would love to pick your brain and learn more and see how I may also share what you're doing with parents as well.
00:11:51.192 --> 00:11:57.410
So I know you've spoken about AI-powered predators and chatbots and the automating of child grooming.
00:11:57.410 --> 00:12:05.010
Can you walk us through, like, an example of some of the flags or some of the things to look for when this might be happening?
00:12:46.529 --> 00:12:59.864
Well, obviously it's about change in behavior, right. And just before I go into more detail, there's one thing that I really want to mention in terms of how, in my view, what I'm trying to achieve needs to be different: parents need to be more able to admit that they don't know things.
00:12:59.864 --> 00:13:01.213
They don't have the answers.
00:13:01.213 --> 00:13:04.896
It's the same, it's a societal thing, right?
00:13:04.896 --> 00:13:10.722
If you're in a meeting at work, who's going to be the one to say I don't understand what you're talking about?
00:13:10.722 --> 00:13:12.713
Can you please explain it in a simpler way?
00:13:12.713 --> 00:13:25.427
It's hard because, in general, in all the societies that I've lived in (I've lived in six different countries), it was always like you're supposed to know, and asking questions, admitting you don't know, is hard.
00:13:25.427 --> 00:13:35.039
But with the tech world and kids and parents, we have to admit we don't know, because part of the problem is that kids think they know better, especially in terms of privacy.
00:13:35.039 --> 00:13:46.580
So, yeah, I just wanted to say that before the signs. Even with the signs, I would say the first sign is openness.
00:13:46.600 --> 00:14:07.885
I've just recently been speaking a lot with Megan Garcia, who recently lost her son, Sewell, and she's going to speak at the conference we're going to talk about later. We've been talking about her experience, and one of the things that she noticed was a change in behavior, in the sense that he was talking less to her, and was less honest, less open.
00:14:07.885 --> 00:14:11.500
It's a first sign, you know, that something is wrong.
00:14:11.500 --> 00:14:18.143
And another thing is just if they want to be alone with their device.
00:14:18.143 --> 00:14:28.553
You know, it's tempting to let them be in the bedroom or be alone, but I've heard a lot of experts say the worst things happen in the bedroom, even on,
00:14:28.553 --> 00:14:29.817
you know, all these platforms.
00:14:29.817 --> 00:14:33.994
I talk a lot about the online world, but I don't spend much time on these platforms myself,
00:14:33.994 --> 00:14:38.678
like Discord and things where the kids can watch,
00:14:38.818 --> 00:14:53.398
you know, Roblox, where they can have the games, and you think it's, you know, not dangerous, but actually it can be. And it's good, especially if they're younger kids, to have them always in the room with you.
00:14:53.398 --> 00:14:57.254
Yeah, so those are signs, basically just change in behavior.
00:14:59.138 --> 00:15:15.472
And you know, that's very interesting, because that's something that does come up in the talks that I have with parents: many times they may think, like, well, you know, it's just puberty, it's just, you know, the age, and, you know, they're in that awkward stage and they start isolating themselves.
00:15:17.694 --> 00:15:27.061
And I always just say, like, you know, if there's a sudden change, that is something that should be noted, and you should start asking and just doing,
00:15:27.061 --> 00:15:31.934
you know, the parent thing, which is just observing: is everybody okay, are you okay?
00:15:32.054 --> 00:16:09.624
You know, noticing some of those behaviors. And, like you mentioned, you mentioned Megan Garcia, and that's something that I did bring up with our parents when we had our meeting this past year, I think it was the November meeting, talking about how easy it is to access, you know, these chatbots on your devices, on computers, and how easy it is to even open up an account. And so I played a clip from when Megan was getting interviewed, where she mentioned that move fast and break things should not be something that is done when it deals with students and especially the lives of a child.
00:16:09.624 --> 00:16:16.754
So going into that, you know, through your work and what you've been doing, if you could say, hey, this is what needs to change,
00:16:16.754 --> 00:16:41.000
what would be some of those things that you would ask to be changed?
00:16:42.290 --> 00:17:03.143
I would ask them to have their products looked at and created together with experts like psychologists and psychiatrists, behavioral experts, even teachers, because they're largely left out of the discussion, and this would already be a big step forward, right?
00:17:03.143 --> 00:17:07.338
I mean, I recently learned about the existence of grief bots.
00:17:07.338 --> 00:17:13.321
When I found out about this, I was speechless for 20 minutes.
00:17:13.321 --> 00:17:24.823
For people who don't know, these are AI chatbots that are actually created as a copy of a person who's passed away, and they are apparently for grief.
00:17:24.823 --> 00:17:38.277
But when I asked psychologists that I know, they're like, obviously we weren't involved in this, because this is extremely dangerous and risky, right, the way it's being done, especially towards kids.
00:17:38.277 --> 00:17:49.105
So this is what I would ask: can you just get non-technical experts to assess your product for whether it's safe or risky?
00:17:51.672 --> 00:18:36.423
This just needs to be done more across different industries and expertise levels. ... mentioned that it was so important for her that you have that co-creation of these applications, with not just, I guess, your end-all goal in mind of obviously just getting people on the app and keeping people on the app at any age level, but also, if it's something that's supposed to be used by young adults, or children for that matter, that they do get that feedback.
00:18:36.423 --> 00:18:42.482
And so, for me, what I see many times is there is the influencer market.
00:18:42.482 --> 00:18:45.941
You know, you get people that, you know, have a heavy following.
00:18:45.941 --> 00:18:51.863
They get used, and companies say, hey, we'll give you our product, or we'll pay you this much to promote our product.
00:18:51.863 --> 00:18:54.979
And really sometimes it's like, well, are you even?
00:18:54.979 --> 00:19:01.063
Are they even, you know, taking into account the privacy, the data, the dangers that might occur?
00:19:01.063 --> 00:19:03.576
Or is this just simply a paycheck for them?
00:19:03.576 --> 00:19:10.509
And they just put it out there, you know, without any regard to, you know, their own personal beliefs or views or anything.
00:19:10.509 --> 00:19:13.355
It's just like, hey, this is what I do, I'll just share it out there.
00:19:13.355 --> 00:19:36.693
But I do believe that there is something that's very important, and that's, you know, making sure that everybody is at the table, because it kind of brings it back to the ethics of it, as far as ethical use of AI and, you know, the different biases in the outputs and the uncertainty of those things. I mean, just to get more people involved in giving that feedback.
00:19:36.693 --> 00:19:39.019
I think that's something that's fantastic.
00:19:39.119 --> 00:19:41.732
And obviously we talk a lot about guardrails.
00:19:41.732 --> 00:19:48.493
Now, my big viewpoint has always been: how can you put a guardrail on something that you don't own?
00:19:48.493 --> 00:19:56.199
Because a lot of these applications are plugging into a system that's, you know, that large language model.
00:19:56.199 --> 00:19:58.237
They're pulling that data from there.
00:19:58.237 --> 00:20:06.874
So, if you don't own that... Many companies say, oh well, we're putting in guardrails and these safety rails, and I'll hear it from all the education platforms: well, we've got guardrails in place.
00:20:06.874 --> 00:20:20.000
I was like, but how, if you don't own this? Is it just somebody putting in code that says, if this, then don't do this, and that's your guardrail? And I don't think that's very safe at all whatsoever, or ethical.
00:20:20.000 --> 00:20:26.179
On that, what are your thoughts on just AI ethics and, you know,
00:20:26.179 --> 00:20:31.935
in this case, for these companies, what could they do better to improve that?
00:20:33.278 --> 00:20:59.006
Well, I think that, exactly as you said, these companies overestimate their ability to control things. And giving them the benefit of the doubt, giving them the benefit of the doubt that they honestly believe that what they're putting out there can be controlled, then they need to trust,
00:20:59.006 --> 00:21:17.797
you know, that there are people on the other side. And I think part of the problem is actually that, obviously, the industry, the AI industry, the creators, are a lot in a little clique, and I sometimes feel, and I'm probably... I don't know them,
00:21:17.797 --> 00:21:25.520
so I'm just saying, they probably live in their own little world in San Francisco or something and honestly have no idea.
00:21:25.520 --> 00:21:31.121
I have no idea what, um, you know, their kind of distorted reality is like.
00:21:32.688 --> 00:21:57.538
You know, I hear them talking about creating new beings or some strange things or religions. And so, yeah, I would tell them: talk to normal people, see normal people, spend some time out of Silicon Valley. And I do believe, going back to something you said before, that in the end, and I don't know how long this end is going to be, and sometimes it's hard to keep believing this,
00:21:57.538 --> 00:22:07.273
but in the end, the winner will be the one that brings the most people to the table, because their AI is going to be the most intelligent.
00:22:07.273 --> 00:22:10.321
The more information it has, the more useful it's going to be.
00:22:10.910 --> 00:22:17.551
I work with a lot of people from Africa, and I have yet to see an AI system,
00:22:17.551 --> 00:22:24.044
and I would love someone to show me one, that can produce a non-biased image of an African.
00:22:24.044 --> 00:22:31.782
I mean, you know, it's gone even so far that I had to ask my African partner,
00:22:31.782 --> 00:22:34.474
I'm like, can you just send me pictures of Africans?
00:22:34.474 --> 00:23:00.119
Because I can't trust that any system I use is not biased, for students, for example, who need to learn about Africa, if it hasn't been fed with proper information about the continent, the countries in the continent.
00:23:00.119 --> 00:23:03.115
So the winner is going to be the one that figures out:
00:23:03.115 --> 00:23:12.013
I have to bring the most people to the table, so my system is really fair and useful for more people.
00:23:12.934 --> 00:23:14.355
And I agree with that.
00:23:14.796 --> 00:23:48.030
What you said just really... yeah, she's an advocate of AI, and she's out there also spreading the word, but we did a presentation together, because here in the state of Texas they are slowly rolling out the use of AI for grading constructed responses, or short little essays, as opposed to using manpower to read through these essays.
00:23:48.030 --> 00:23:57.798
Obviously, it would take a lot of time to do that if you're doing it in person with more people, but now they're just saying, okay, we're going to do a small percentage, just to kind of test it out.
00:23:58.580 --> 00:24:18.883
And going back to what you were saying, so, for example, an AI model being used in Africa and an AI model being used here: I know that even today, when I've gotten into some of the image generators and you put in, you know, show me, like, just a janitor, you get a certain look, you know.
00:24:19.042 --> 00:24:26.955
Then, for doctors, you get a certain look, for you know a lot of things, and I'm like, wait a minute, like this is very unusual, this is very weird.
00:24:32.404 --> 00:24:36.173
You know, by countries even, you know,
00:24:36.173 --> 00:24:38.182
now it's like, how are they perceiving us? Like, if they put there, like, an American,
00:24:38.182 --> 00:24:39.508
you know, what does that look like to them too as well?
00:24:39.508 --> 00:24:44.660
So going back to that, it's that information: is it accurate information?
00:24:44.660 --> 00:25:11.161
And that's kind of very scary too, because even when you use an image generator, where there'll be, like, hey, you know, put yourself in here, or you put in a prompt and I describe myself, and I'll put there, you know, Hispanic male: every single output that I get for Hispanic male, it always gives me a beard or a mustache, and it makes me look, well, I mean, it makes me look a little bit bigger, more filled out.
00:25:11.201 --> 00:25:14.503
Oh really? Yes, yes, a little bit more, you know, filled out,
00:25:14.503 --> 00:25:16.913
a little bit more robust, I should say.
00:25:16.913 --> 00:25:19.280
And so I'm like this is very interesting.
00:25:19.280 --> 00:25:40.038
You know, as you're putting in these prompts, you know there still needs to be a lot of work being done with this, but you know the fact that people around the world, educators especially, are like oh my gosh, this is the greatest thing in the world, because now we can do this quickly, now I'm able to do this in 20 seconds.
00:25:40.038 --> 00:25:49.075
But my biggest concern is: yes, it can do it in 20 seconds, but how accurate is it if it's just statistically predicting the next word?
00:25:49.115 --> 00:26:07.242
The other thing is that the knowledge cutoff date is something that we brought up at that conference too, because there are a lot of applications that teachers are using and that they're purchasing for their teachers, and in the terms of service it'll tell you the knowledge cutoff date is 2023.
00:26:07.242 --> 00:26:10.961
We are already well into 2025.
00:26:18.713 --> 00:26:24.151
So how accurate is this going to be if the data there stops at 2023, and now the state standards have, you know, been updated for a lot of our content areas, here in Texas at least?
00:26:24.633 --> 00:26:42.693
So those are a lot of the things that I know many people don't look into, and maybe they just want to turn a blind eye, because they're like, oh, the magic whistle, this is the shiny object that's going to, you know, create my lesson for me and I'm done. And that's what really concerns me too.
00:26:43.214 --> 00:26:53.932
So, kind of going on and touching on that a little bit, you know, I know that you've talked about, like what we were talking about a little earlier, those that bring more people to the table.
00:26:53.932 --> 00:27:02.061
So it's almost like we're comparing it to an AI race, and it's definitely a competition, you know, without anybody...
00:27:02.061 --> 00:27:50.766
Just really, it's just like all hands on deck, everybody just go, go, go.
00:27:50.766 --> 00:28:06.756
From your perception, and in your experience, and from the lens of the world that you live in, which is, you know, Data Girl and Friends and all the amazing people that you're connected with in your network, you know, how do you envision,
00:28:06.756 --> 00:28:19.035
you know, AI as a force for good? Do you envision it as a force for good, like, maybe 10 years from now, or are there many more pitfalls coming that we should be worried about?
00:28:21.459 --> 00:28:22.922
I try to be positive.
00:28:22.922 --> 00:28:31.061
I need to be positive, I need to believe that it's possible, that AI can be a force for good.
00:28:31.061 --> 00:28:31.743
It can.
00:28:31.743 --> 00:28:36.000
It can be used well.
00:28:36.000 --> 00:28:51.616
It doesn't look like it's necessarily going in that direction right now, because of exactly the massive problems, you know, we were discussing before with the image generators, the ones that create pornography.
00:28:51.616 --> 00:28:56.152
Kids are obviously, you know, interested in this, so they use it, they create it.
00:28:56.152 --> 00:29:00.351
They don't understand the weight of what they're doing.
00:29:00.351 --> 00:29:11.342
Um, so all sorts of things. And also these AI relationship chatbots, they're all completely, you know, overwhelming and influencing, especially if you give them to young kids.
00:29:11.743 --> 00:29:25.666
I was talking to, uh, I think it was Megan, who said that she met, you know, someone whose daughter had had her first relationship, with an abusive AI chatbot boyfriend, at 13.
00:29:25.666 --> 00:29:29.240
So this is a person whose first relationship was that.
00:29:29.240 --> 00:29:33.170
I mean, this is the influence on their whole world going forward.
00:29:33.170 --> 00:29:36.974
So there's a lot of reason to be negative about it, right.
00:29:36.974 --> 00:29:49.798
But, on the other hand, the world I'm trying to create is one where all of the tech connects us all over the whole world in a way that we've never been connected.
00:29:49.798 --> 00:29:50.612
They figured it out:
00:29:50.612 --> 00:29:53.901
they make one product and it's sold in the entire world.
00:29:53.901 --> 00:29:58.307
What we haven't figured out is the other side of it.
00:29:58.307 --> 00:30:07.137
Right, so we can take this connection that tech gives us and push together for a responsible tech.
00:30:07.437 --> 00:30:09.879
Right, because individuals... and I mean, AI can help in really a lot of ways.
00:30:09.879 --> 00:30:27.182
It can help us to be very efficient and it can help us to be more creative.
00:30:27.182 --> 00:30:40.264
It can help us to know each other better, because, and I need to call out Bill Schmarzo because he's the first person I heard say this, AI can help us to be more human. And some people hate that statement.
00:30:40.264 --> 00:30:41.813
Some people like that statement.
00:30:41.813 --> 00:30:59.038
I like it because, I think, there are things that we can do as humans that AI probably, and I don't want to say probably, won't be able to do: be sentient, understand emotions, understand context.
00:30:59.038 --> 00:31:02.305
All of this like real context, life experience.
00:31:04.331 --> 00:31:10.363
And if you have AI, if you use AI, then you understand which parts of you are uniquely you,
00:31:10.363 --> 00:31:13.085
and kids can learn that from a younger age.
00:31:13.085 --> 00:31:14.975
Right, they actually have to.
00:31:14.975 --> 00:31:35.041
They should understand: who am I, what makes me unique, what kind of person am I? Because if you're using AI, and they are, and you don't know who you are, then you can more easily be influenced. And this is something that kids can then learn earlier, and then you're actually going stronger into the world, because you know yourself better.
00:31:35.041 --> 00:31:37.419
So that can be a positive output of AI.
00:31:37.419 --> 00:31:54.961
But we have to be more intentional with it, and we have to kind of force that use, because, as you say, the tech companies, obviously, they have billions in funding that they have to get a return on, so they're going to go for the money first. Yeah, no, absolutely, absolutely.
00:31:55.523 --> 00:32:02.362
So I want to kind of just, uh, turn the conversation over now to talking about the Global Online Safety Conference.
00:32:02.362 --> 00:32:06.922
So this is something that I did see recently that was posted on LinkedIn.
00:32:06.922 --> 00:32:13.902
I have already signed up for it as well, and just looking at the list of speakers, this is going to be an interesting conference.
00:32:13.902 --> 00:32:17.070
So can you tell us a little bit more about this conference?
00:32:17.070 --> 00:32:21.942
Well, first of all, if you can give us some background: how did this conference idea come about?
00:32:21.942 --> 00:32:27.942
And then tell us a little bit about what the goal of this conference is and why people should sign up for it?
00:32:29.250 --> 00:32:30.634
So the idea came about
00:32:30.634 --> 00:32:41.566
just after a year of being in this space. I met some amazing people, a lot of amazing people. Like, this online safety,
00:32:41.566 --> 00:32:55.835
this, you know, AI, responsible AI community that somehow I have built on LinkedIn is so amazing, and it's full of, I call them, like, individual warriors, really passionate people.
00:32:55.835 --> 00:33:06.246
A lot of them are individuals or small companies, small organizations fighting to survive, making a real difference, and I was thinking,
00:33:06.246 --> 00:33:16.098
these people could actually achieve a lot more if they were working together, if they knew each other more.
00:33:16.098 --> 00:33:17.058
So I said let's do a conference.
00:33:17.058 --> 00:33:20.806
And I talked to a few nonprofits that I work with.
00:33:20.806 --> 00:33:23.339
Will you support me to do this conference?
00:33:23.339 --> 00:33:26.718
It was in November, and I felt that,
00:33:26.718 --> 00:33:28.402
it's urgent.
00:33:28.402 --> 00:33:29.997
So I said I'm going to do it in three months.
00:33:29.997 --> 00:33:42.485
I said, in three months we're going to do this conference. And we talked about it with the partners, and also one person, Andy Briarcliff, who's been a lot of support as well.
00:33:42.485 --> 00:33:45.038
He's been in the space for a lot longer.
00:33:45.550 --> 00:33:50.137
You know, how are we going to define it? So we'll just be very general.
00:33:50.137 --> 00:33:52.519
We're going to call it an online safety conference.
00:33:52.519 --> 00:34:01.458
It has to be global, because, as I said before, we have to work together more. And we just put it out there to see what comes back.
00:34:01.458 --> 00:34:03.816
What are people interested in?
00:34:03.816 --> 00:34:11.757
You know, who wants to talk? And we got this massive response: just so many people, so much energy came back.
00:34:11.757 --> 00:34:15.958
I was just putting out the messaging: we're stronger together, stronger together.
00:34:15.958 --> 00:34:38.000
We have to know each other. And it was like, every day something would come in, and I said, I can't believe this person is speaking, I can't believe this person wants to speak. Like, ever since I heard of the existence of AI data labelers, I always wanted to meet an AI data labeler or a content moderator.
00:34:38.000 --> 00:35:03.382
And there was a Facebook content moderator from South Africa who contacted us and wanted to speak, and I'm like, yes, that's exactly it. So all sorts of people, from 16 countries, different ages, contacted us, and all across the spectrum of different topics and experiences, they are going to talk.
00:35:03.543 --> 00:35:12.411
And what's important is that we did not go for any influencers, like you said; we intentionally did not.
00:35:12.411 --> 00:35:35.519
We don't have, you know, the keynote speaker who is going to bring in the audience. Like, no, we want to hear from the people who need to be heard, um, and it's quite unique. And we also made the conference, like, 12 hours a day, so that people from all over the world can speak in their time zone.
00:35:35.519 --> 00:35:45.335
And we made it free, and online, fully online, because then the barriers to actually attending are gone.