Shifting from Fear to Curiosity — GPTZero conversation with a Catholic School Board Director

Tom D’Amico, the Director of Education at the Ottawa Catholic School Board (OCSB), recently shared on CBC News the balanced approach the OCSB is taking to integrating AI into its educational fabric. While we’ve chatted with many schools, we found D’Amico’s approach to be an especially forward-thinking one, and we were thrilled to sit down with him to discuss it.

The OCSB, guided by D’Amico, takes an exploratory rather than a reactionary stance toward the potential of AI. D’Amico describes a philosophy grounded in learning and conducting pilot projects before implementing any policy. The OCSB relies on several key stakeholders, from students to educators to parents, to fully shape its AI guidelines, and representation around AI adoption is a top priority. Additionally, ethics are at the forefront of the OCSB’s AI strategy: the board wants to ensure that technology enhances rather than replaces the human element in education.

Please find the full transcript of the interview below. It has been edited for brevity and clarity. 

GPTZero: We've chatted with quite a few schools, and some have blocked AI across the board. Some leave it up to educators. Our team at GPTZero was excited when we saw your unique and forward-thinking view on AI, as it very much aligned with ours.

Most students, teachers, and school boards don't have guidelines on how to use AI. You've presented not only guidelines for the implementation of AI, but also toolkits for staff and a website for parents. How did you go about designing the guidelines, toolkits, and resources? And how did you determine there was a need? 

D’Amico: After ChatGPT came out in November 2022, we were hearing from educators saying, “We need to block this. Every student's going to cheat!”

One of our strategic commitments is “be innovative.”  So blocking is not something that we normally would do. But we did recognize that this was a pretty instrumental change. It wasn't just a digital tool. It really was something that could be transformative. So we dedicated a school year to learning about generative AI, doing different pilot projects, and getting ready for implementation for the following year. 

“We didn't want to write a policy because we didn't have the knowledge to write a policy, and we needed to find out what it was all about.”

So we started by taking a stance: “We're not gonna block this.” But we also recognized the concerns. So we spent the year learning. We partnered with another learning organization that spent some time in our district, helping us with the work and creating a roadmap. And we really listened to students, to educators, and to parents.

One of the key developments, which we just released in May, was our set of guiding principles. That is our foundation. Those principles are:

  1. Humane and ethical use: Ensuring AI is used responsibly to benefit humanity.
  2. Educational focus: Leveraging AI to enhance learning and teaching experiences.
  3. Championing equity and justice: Addressing and overcoming biases in AI tools to promote inclusivity.
  4. Transparency: Being open about AI usage, age limitations, and data protection measures.
  5. Safeguarding privacy and data security: Protecting student data with robust privacy practices.

GPTZero: Very well said. It's still so new. There's so much to learn internally before you can share your learnings externally. And that foundational guidance is critical. Tell us about the AI toolkits you’ve developed.

D’Amico: The toolkits are for our staff and students. They comprise things we learned throughout the year. There's just so much out there on AI, and we knew that would be overwhelming to our staff.

For example, if you search for guidance around AI prompting, you’ll find all sorts of frameworks, and prompting is currently so important with AI. So we created our own OCSB Prompting Guide, which we will provide to all of our staff just as a suggested tool. They can still use other prompt frameworks.

When we get into assessment and evaluation, a lot of it is about proper citation. So we will provide resources to our staff, so they know how to cite AI properly, and so they can show their students how to cite AI.

For any tools that we put on our board portal for our students or our staff, we commit to having already done a privacy impact assessment for that tool.

We will also provide staff with a framework for assessing tools, to make sure that students’ data and privacy are addressed and safeguarded. We’ve also updated our “Acceptable Use” guidelines — we now call them “Responsible Use” guidelines — to include AI technology and language.

We know that our staff and our principals are going to be inundated by parents with questions. We want to support them because they have a very tough job, and that’s all very time-consuming.

So we've been getting input from parents across the system about common things they need to know, and we will be putting that on our website in June. There'll be a section for parents with some very basic things. They want to know how we're protecting data, so that will be on the website. Parents also want to know what tools and resources we recommend that they use with their children at home, so that will be another area. Those are some of the key areas in our toolkit.

GPTZero: What do you see, on the other hand, in terms of the challenges in implementing AI and continuing to refine your approach? 

D’Amico: Well, one of our motivations for doing this work is equity. We learned so much during the pandemic. Our district serves 50,000 students. We had many without Internet access, without computers or devices. And we tried to address that digital divide.

“So we need to make sure that now that there are these new tools that can help students with learning, we're not increasing that divide.” 

Were we to ban AI, some students would still use it at home; they'd have access to data and have parents who will buy them the best tools. Others would be disadvantaged, and that digital divide would increase. So that's a challenge and a concern we have to address.

Costs are a big concern. There are lots of free tools that don't address the safety and privacy that we need, so we have to be cautious with the free tools. There are some fabulous paid tools, but we need to consider budget constraints.

We know that there's bias in the data sets and some of the images that are created. We saw what happened with Google around images that were not reflective of different races. So we need to teach that.

We’ve seen some of the power of deepfakes. When students misuse that power, there will be discipline. But we don't want to ban the technology; we want to teach them why they need to use it responsibly. And we know AI can be used to cheat, so that's another area where we have to teach the appropriate use of AI. Again, it comes down to AI literacy. That is a really key part of how we plan to address some of those concerns moving forward.

GPTZero: What are the specific AI tools that OCSB is excited about implementing? 

D’Amico: We are using Magic School right now.

We are piloting a feature set from SchoolAI with all our new teachers. It’s a great online mentor for classroom management and field trip forms. They’ll still have an in-person mentor at school, but they’ll also have this agent available to them. 

We’re also using Brisk because of our Google ecosystem. 

For students, we have been using Kahoot and a chatbot called Milo for math. 

I’m personally very excited about the potential for streamlining language. There’s so much potential for newcomers. For example, this year we had a student in Grade 2 from Ukraine who was sitting in the hallway crying. The principal walked by, but didn’t have the language to help that student. But now, with seamless translation, every principal has a phone they can speak into and have their words translated automatically into Ukrainian. The student can speak Ukrainian back, and the principal can find out what the issue is.

So I'm really excited about not viewing that as a deficit, but viewing the language students do speak as an asset. It really flips it around and makes students proud of their culture and their languages. So that's an area that I'm excited about, certainly along with personalized learning and some of the tutoring and supports that students can get.

GPTZero: What steps is OCSB taking to ensure that the usage of AI in the classroom aligns with educational goals and even ethical considerations? 

D’Amico: A big principle for us is ethics, so that’s an underlying point in all the documentation we put out. 

We also want our staff and our students to know that we don't want them to rely too heavily on AI. So we have a major investment in outdoor learning. For example, an educator might take a lesson plan and have AI adapt it for outdoor use.

“We don't want to replace teachers or educators with technology. We want to enhance the work they already do. And the same thing for our students.” 

We also want to address the concerns of bias and of protecting data. If I'm a school principal doing a presentation, and it's one hundred percent AI-generated, I would have a concern with that. It might be 80% generated, but the other 20% I’ll need to adapt to deal with the nuances of my own school.

GPTZero: And how are you involving students and teachers as you go about this decision-making process? 

D’Amico: We have a student Senate representing our 17 high schools; the student Senate comprises the two co-presidents from each of the high schools.

We asked them a series of questions about how they’re using AI, what their concerns were, etc. We ran an input session with them and they gave us good insights. 

We also had a staff book club this year on John Spencer’s The AI Roadmap. We also invited students to come and speak to educators about it. The student voice on their fears about AI and their thoughts about it was really important. 

We're doing a series of focus groups with students across the schools, and it’s not just for co-presidents; we’ll include students across all different areas of school life to get their feedback and viewpoints. This past Saturday, we had 250 staff show up for an AI summit that we ran, and I would say the highlight of the day was the student panel. One of the last statements was from a student who said, “My message to teachers for next year: don't over-rely on AI. I don't want to lose the relationship I have with my teacher.” That’s a powerful message.

For our parents, we have school councils at every one of our schools. So we are preparing presentations for their principals to deliver to the parents on the school councils. 

We had a day focused on AI where any parent in the system could come to give us input and tell us where they want answers, where they’re struggling, and so on. And then we have a Parent Involvement Committee. They know they always have an avenue to me as the director with any concerns that come up.

We just have to be open to hearing when things aren't working, and when we’re over-relying on AI. We know we'll make mistakes, and we've made some mistakes. But we'll address those mistakes moving forward.

GPTZero: Would you say the general consensus from teachers, parents, and students has been more optimistic around where AI is going?

D’Amico:  I'd say there's been a shift. The first response was fear. Now we’re shifting from fear to curiosity.

Now, we're curious about what these tools can do in the hands of talented educators, talented administrators, and students. We’re curious about how they can improve their work, or be the next inventor or the next person to solve a big problem, thanks to their brilliance and the use of AI tools. So that's where we've shifted.

We’ve found there are three types of staff: those who are enthused and ready to go all-in; those who see AI as a fad that will go away; and the largest group, those who say it looks interesting but who are very overwhelmed as educators and don’t have time to learn new things or take on more work.

So those are three perspectives, and they're all valid and all ones we have to address. 

GPTZero: Absolutely. One of the things you mentioned in your previous interview was how AI evens the playing field where students used to get help from parents. How do you think about student equity? 

D’Amico: Why do we assume everyone is cheating? And if they are cheating, why are they cheating? So we did some research in those different areas. We shared the Stanford study that said that students aren’t cheating more than they were before AI. 

For those that are, they’re doing it because they’re overwhelmed. They don’t have proper timelines or skill sets for time management. They don't see the possibility of success, so that gets into the pedagogy of what we can do differently to support them.

Looking at equity: if I go home and I have two parents at home, one of whom can help me with my homework every night and show me my spelling mistakes and my grammar errors, I take that feedback and do more polished work. I hand it in. No one saw that as cheating; everyone saw that as parents involved in the child's education.

Many kids in our system, for all kinds of reasons, don't have a parent or an older sibling who can help them at home. Their parents may need to be working an evening shift to pay the bills. If that student has access to AI, they can use it to improve: to find grammar mistakes, and to find other points of view in their academic work. They enhance their work. So that's how we're trying to shift the narrative: to say these are tools to enhance work and to level the playing field, not to increase the digital divide.

GPTZero: One last question for you. What do you envision as the future of AI and education, both within OCSB and more broadly?

D’Amico: I think that in the near future, the guardrails will be put in place. So we will be better at protecting student data. I also think language tools will become seamless, which we've been waiting for for many years. I think staff and students will all see that they now have an assistant to help them in whatever way they need help.

I see our assessment practices evolving to have more focus on process and less on product, which is a good thing. I also see our assessment practices evolving to look at more modalities instead of just always written text. 

I see us moving forward with students being more creative, and their creativity being unleashed because they thought they couldn't draw or they couldn't sing or couldn't make music, but now they can. I also definitely see a big impact on accessibility for students with special needs.

And I see us saving time for our staff and our administrators, freeing up time for them to prioritize other areas.

Hopefully, we’ll get better at predictive analysis. Now that all staff have a potential data analyst with them 24/7, we can find remedies where before, there were challenges.

And the last area for the future is, hopefully, a move toward more appropriate use of tools to improve mental health and wellbeing. While we're focused on academics, our student services department is looking at how we can use chatbots responsibly to help students deal with the stress and pressure they face every day. If that can lead to improved relations and improved mental health, that's a real win for everyone.