Is it immoral to get ChatGPT to write our materials?

“The dawn of a new era.” That’s what the run-of-the-mill EdTech enthusiast will shout at the top of their lungs when talking about AI, especially AI in Education. Credit where credit is due: yes, AI and, more specifically, generative AI is quite disruptive. Among its perks are, notably, generating pictures of dogs getting increasingly happier and producing racist images. Of course, there are many more useful applications of AI, such as generating texts and exercises and, ultimately, writing coursebooks and other educational materials.

This raises the question: is it moral to get ChatGPT to write our coursebooks and materials?

Well, it depends on who you ask and on their definition of morality. From a consequentialist perspective, for instance, using GPT to write our coursebooks would be moral if it brought about better education and greater happiness for more people than not using it would. It is, however, odd to talk about letting generative AI write our coursebooks when the choir has been singing tunes about authenticity, real-world content, student protagonism, and inclusion. All AI does is precisely the opposite. It generates inauthentic text and steals artists’ work (and jobs) to create images; it dehumanizes Education by allowing a machine to decide what goes in a coursebook; and it washes inclusion down the drain when it takes the jobs of so many DEI specialists and fills the book with bland, sanitized texts. When AI writes a coursebook, writers, editors, artists, and designers either lose their jobs or get paid less “because they’re using AI”, even though it was the client’s or publisher’s demand. When AI writes a coursebook, students get material made by a machine, based on other materials already out there, meaning the book will be a generic color-by-numbers collection of dull exercises to be plowed through in class. Everybody loses, except the ones making money out of it.

If you take a principled perspective to analyze the morality of having AI write a coursebook, you could look at it through Kant’s categorical imperative, which states that, when deciding whether or not a course of action is moral, one should imagine oneself as a universal legislator, meaning that this action would become a universal law to be followed by everyone. In this case, imagine a world where every publishing house, packager, and school uses AI to generate their materials. In the long term, AI-generated materials would feed on AI-generated materials which fed on AI-generated materials, and so on and so forth, up to the point where coursebooks and other educational resources would be so artificial they’d probably lose their value completely, causing the ‘universal law’ to collapse on itself. Also according to Kant, if the goal is to act morally, we should never treat other people merely as a means to an end, but always also as an end in themselves. For example, by forcing authors, artists, and designers to use generative AI, whether by direct orders or by market pressure, we would be robbing these people of the professions they worked at so diligently in order to realize their authentic selves in the world by transforming education. These professionals, once an end in themselves, would become a mere means to correctly prompting AI to do the work they were once responsible for.

However, there are those, of course, who would disagree. Niccolò Machiavelli, when describing what a prince should do, suggests that anything that results in more power for the doer should be considered a moral action. In that sense, if all we really care about is putting out more and more coursebooks on the shelves to be bought by schools and students, and making a lot of money in the process so we can put out even more books, in a vicious cycle of instrumentalizing Education and our professions, well, so be it then. I believe this will be a fad-ish vicious cycle that will violently collapse on itself when schools and learners find out that nothing beats a human teacher, a human writer, a human artist, and a human designer, because Education is essentially, and exclusively, a human capacity.

One response to “Is it immoral to get ChatGPT to write our materials?”

    Anonymous

    As someone who loves Philosophy, this post was a pleasure to read. Oh, and let’s not forget the damage answering ChatGPT prompts does to the environment – it gulps down 500 ml of water for every 5-50 prompts it answers. And from a Kantian categorical imperative perspective, this action would also be harmful: our planet, or at least parts of it, would no longer exist.

    Leo Gomez


