
Setting Ground Rules Around Original Writing and ChatGPT

Generative AI tools like ChatGPT have the power to revolutionize education, but educators must first wrestle with weighty ethical and practical concerns.

October 6, 2023


Michelle Zimmerman can't predict the future. But a few years ago, when researching her 2018 book on artificial intelligence in education, she met a handful of people who could. Speaking with artificial intelligence experts, some of whom had been in the field since the 1960s, she learned in hushed whispers about a conversational AI chatbot being developed to respond to queries with remarkable speed and fluidity. Ask a question, get a succinct and polished answer on demand. Request a five-paragraph essay on To Kill a Mockingbird and read it in seconds, thesis statement and all.

Zimmerman realized that such a tool, whenever it appeared, would represent a quantum leap for education. So she got to work. Without a name or even a particularly clear timeline, she began imagining a world where AI had totally upended teaching and assessment as we know it. Since she couldn't create effective lesson plans or test the writing capabilities of a piece of software she'd never seen, Zimmerman began wrestling with big questions like these: What does it mean to create something original and unexpected when AI is a contributor? When is it ethical to ask AI to assist with an assignment like writing an essay or submitting a science report? And when, to put it bluntly, is it just cheating?

To figure it out, she convened a focus group of high school students at Renton Prep, the private school outside Seattle where she serves as executive director. If nothing else, it would get her students thinking about the big ethical conundrums around writing and AI awaiting them in college and beyond. "I figure it does not do much good if you're an adult saying, 'Oh, we won't accept that assignment because it's plagiarism,' if you don't discuss it with students," she says.

Late last year, Zimmerman's planning was put to the test when the world was introduced to ChatGPT, the generative AI chatbot she'd heard about years earlier, developed by OpenAI, a nonprofit founded in 2015. Released to both rapturous and apocalyptic reviews, ChatGPT was initially heralded in the press as an existential threat to the student-penned essay and a ready-or-not educational revolution. By February, it had become the fastest-growing consumer application of all time. By May, one Common Sense Media poll found that more than half of kids over the age of 12 had tried it.

As schools enter their first full year in a post-AI world, many are grappling with the same types of concerns that Zimmerman and her students have been working through. Namely, how do you set ground rules that acknowledge AI while spelling out parameters for how it can, and cannot, be used in schoolwork?

A FIRST TAKE AT DRAWING BOUNDARIES

The same immense processing power that makes ChatGPT such a useful tool for learning also makes it a particularly tempting vehicle for cheating, mainly through passing off blocks of generated text as original work without attribution. That's left districts and schools scrambling to create comprehensive academic integrity policies that spell out how (or if) students can use ChatGPT responsibly.

As part of its guidance on AI, Carnegie Mellon's Eberly Center, which provides teaching support for faculty, shared a handful of sample course policies touching on several schools of thought. Instructors might choose to ban generative AI tools outright, with violators facing consequences akin to those for plagiarism of any form. But they might also create policies that fully permit the use of generative AI, as long as it's acknowledged and cited like any other source. A third option is more nuanced: neither a free-for-all nor a knee-jerk ban, it lets teachers permit AI use for certain activities, such as brainstorming and outlining, or special assignments, such as ungraded ones, but forbid it in all other contexts.

Given how fast AI is evolving, developing a comprehensive policy around safely using AI is challenging, though not impossible. 

After researching existing guidance from all over the world, Leon Furze, a British-Australian educator pursuing a doctorate in AI and writing instruction, recently penned a set of AI policy guidelines written specifically for secondary schools. One of the first of its kind, Furze's document provides a framework for how educators can think about the bright red lines that must be drawn around AI use. Its sections run the gamut from data privacy, access and equity, and academic integrity to assessment and even professional development, proposing lines of inquiry that schools can explore to create their own unique policies. Take the section on citations and references, for instance, which asks schools to consider three key questions:

  • How can AI-generated material be appropriately cited and referenced in research and writing?
  • What guidelines will be provided to staff and students regarding the appropriate citation and referencing of AI-generated material?
  • What tools and resources will be made available to support appropriate citation and referencing of AI-generated material?

If you're looking for a copy-and-paste formula for how to deal with plagiarism or other topics, you won't find it here; you might be better served asking ChatGPT directly. As Furze explains in an introduction, "The suggestions here should form part of a wider discussion around updating your existing cyber/digital policies, and should involve members of your school community including parents and students."

Since students will be most impacted by the new rules, it may be worth broaching the subject with them directly. This year, Kelly Gibson, a high school English teacher in rural Rogue River, Oregon, best known for her thoughtful education takes on social media, is speaking plainly with her students about using AI responsibly. While her district is still ironing out its own guidance, she plans to explain some commonsense ground rules: students must always receive permission before using AI, and they should know the consequences of being caught cheating. Over time, as students gain more experience with AI tools, she hopes they'll realize for themselves why its impersonal tone and track record of distorting or inventing facts make it unsuitable for generating long-form writing.

"There are frequent errors because it's a word predictor," she says. "If all a student is going to do is put in the prompt the teacher gives them, there is a high probability that they're going to get a very simplistic paper."
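Gibson's "word predictor" framing can be made concrete with a toy sketch: a model that has only counted which words follow which in its training text simply emits the most frequent continuation, with no notion of truth or argument. (The miniature "training text" below is invented purely for illustration; real chatbots predict tokens with neural networks at vastly larger scale, but the underlying objective is the same.)

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (hypothetical, for illustration only).
text = "the cat sat on the mat . the cat ate the fish ."

# Count how often each word follows each other word (bigram counts).
followers = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequently observed next word: pure statistics, no understanding."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```

The point of the sketch is Gibson's: the model picks whatever is statistically likely, which is why generic prompts yield generic (and sometimes factually wrong) prose.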

BRAVE NEW WRITING

In response to concerns that schools were losing the battle to keep tabs on student originality, this April, four months after the release of ChatGPT, the plagiarism detection company Turnitin released an AI writing detection feature. For decades, the company's standard offering has checked student writing against enormous databases looking for what the company describes as "similarity," which may or may not amount to actual plagiarism, depending on context like quotation marks and proper attribution.

With the new update, customers still receive the same similarity rating for a submitted paper but now also receive a "Level of AI" score that examines each sentence and indicates how much of the text the software believes was generated by AI. The technology is still in its infancy and far from an exact science. The company claims its false positive rate is less than 1 percent, but some independent checks on early versions of the software found a much higher rate of errors, particularly for English learners, leading some researchers to question whether the detectors are biased against them.

So do AI checkers work? "In short, no," reads one university's published guidance, clarifying that no tool has yet been able to "reliably distinguish" between human- and machine-generated text. To that end, a number of colleges, including Vanderbilt, the University of Pittsburgh, and Northwestern, aren't using them at all. Still, Turnitin says it has analyzed a massive 65 million papers since April of this year, flagging 3.3 percent for containing at least 80 percent AI writing; around 10 percent of the papers it's processed featured over 20 percent AI writing (though the software's accuracy may decline the less AI writing it detects and as AI writing itself becomes more human sounding).
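It's worth pausing on the scale those percentages imply. A quick back-of-the-envelope calculation, using only the figures Turnitin quotes above, translates them into raw counts:

```python
# Turnitin's reported figures (as quoted in the text above).
papers_analyzed = 65_000_000

# 3.3% flagged as containing at least 80% AI writing.
mostly_ai = round(papers_analyzed * 0.033)
# ~10% contained over 20% AI writing.
partly_ai = round(papers_analyzed * 0.10)

print(f"{mostly_ai:,} papers flagged as mostly AI-written")   # 2,145,000
print(f"{partly_ai:,} papers with substantial AI writing")    # 6,500,000
```

Even with the detector's accuracy caveats, millions of submissions are involved, which helps explain the "awkward position" teachers describe next.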

Taken together, these early figures indicate that students are already using AI tools in their work, though probably not overwhelmingly. That puts educators in an awkward position. "I don't want to spend my entire year hunting for examples of AI writing and looking for cheating," says Marcus Luther, a high school English teacher in Keizer, Oregon. "One, I don't trust myself to be successful at that, and two, I don't trust the tools. And most importantly, I don't want to take that mindset into how I read student work. I want to set expectations, but I also want to be affirmative in how I look at students' writing."

WHAT WOULD SHAKESPEARE SAY?

Beyond black-and-white issues like plagiarism, it will be difficult to create a blanket set of rules at the start of the year, simply because the technology is changing so quickly. Google is currently beta testing a generative AI feature, called "Help me write," that integrates its Bard AI technology directly into Google Docs. With a few keystrokes, students will be able to generate a few paragraphs' worth of material inside the word processor they're already using. The new feature has the potential to change how we approach writing, normalizing AI output as a starting point. The blank page, once the bane of even mature writers, may soon seem as quaint as the slide rule.

Dialogue may already be as important as policy. "I'm very much unsure of what the process looks like in terms of them forming their own original writing," says Luther, "so I think it's really appropriate to have conversations with students about how they feel about AI." Now, he plans to ask his students to consider the murky ethics of AI and what choices they would make in his shoes. As teachers, when and how would they let students use AI? Would they consider a poem or novel created using a generative AI tool to be wholly original? And what is being lost if we use AI in place of thinking for ourselves? "I want to, as much as possible, be transparent in bringing the philosophical issues into the classroom with humility," he says. "I don't want to pretend like I have answers that I don't."

Recently, Zimmerman conducted a similar thought experiment with the students in her focus group. Following a conversation on Shakespeare, she asked them to use ChatGPT to play around with generating love letters, an intimate subject to most teenagers. As they were having fun injecting humor and emotion into their letters, she dropped a sly question: What if you got a letter from someone you liked and began to question whether it was from the heart or generated by AI?

"There was this little gasp that came across the kids, and they looked at each other, because it's one thing if you talk about content that wasn't original to them, and it's an assignment that they turn in," she says. "But when it's very personal and it's something that they want to know is real and unique, it hits them in a different way."

THE HUMAN TOUCH

For Gibson, the high school English teacher, in-class AI discussions will have to wait a few weeks while she reviews the fundamentals of critically analyzing a text and forming a strong argument. "What I've found with thesis creation is that very often kids have an idea of what they want to talk about, but they don't know how to write it as a thesis statement," she says.

Gibson envisions letting students use a tool like ChatGPT to refine, but not create, their arguments. Typically, she asks students to complete a custom graphic organizer in class to deconstruct the parts of an essay and build their argument before writing the final version at home. "You could potentially look at the final essay and not worry about whether ChatGPT was involved because you saw what students were able to put into the graphic organizer from the get-go," she says. She often loads her organizers with detailed and specific parameters that require students to interact with the assignment in meaningful ways. "For anybody to get anything above a D, they're going to have to do a lot of interacting with whatever ChatGPT spits out."

Once students master the basics of argumentation, they rarely need such scaffolds. Then the goal becomes turning them into more competent, even joyful, writers by making them care about the work they're producing, explains Katy Wischow, a staff developer at the Reading and Writing Project at Columbia University's Teachers College. "When there's an authentic purpose to writing ... it doesn't feel like busy work," she says.

That tracks with a philosophy that Zimmerman has been trying to impress on her students for years: namely, that exploring their lived experiences, cultural backgrounds, and views of the world is crucial to their education. Their stories are something AI can never replicate, but the technology might help sharpen the finished product. Recently, a student who is half Indian and half Pakistani used ChatGPT to brainstorm and refine questions to ask her parents about decades-old ethno-national tensions that are typically never spoken about. In the process, she learned about generational trauma, which sparked several meaningful prompts she can explore in her writing.

To some of Zimmerman's students, this is the true opportunity in AI: not an instant-gratification homework machine, but a resource they can tap to help them create the kind of deeply personal and expertly polished work that matters to those around them. Not long ago, Zimmerman asked another student, "What is it you wish AI will accomplish?" She found herself unprepared for his answer and more than a little crushed. "He said, 'I hope AI will help our teachers actually want to know us better.'"

Provided teachers develop this intimate knowledge of their students as writers, and AI is welcomed into the process as a subordinate partner, perhaps we won't be talking about counterfeit work as much as we think.
