Considerations for using AI text and image generation tools
Since the launch of ChatGPT in late 2022, there has been an onslaught of machine learning-based text and image generators, accompanied by a flood of opinions about the impacts these tools could have on productivity, how we think about work, and even the nature of humanity. Many public and private organisations are incorporating generative AI tools into their workflow, or thinking about how they can get started in 2024.
At Antistatic, we’re glad to see lots of folks thinking about the ethical and political implications of these tools, in addition to the purported transformative benefits. To add our two cents to the conversation, we have developed a set of considerations we think will be useful for anyone thinking about how they might incorporate text and image generation AI tools into their work.
Writing is about ideas just as much as it is about words. Reading, drafting and editing are not skippable steps on the way to a final writing product — they are processes through which we develop and hone the ideas we are trying to communicate. Generative AI claims to provide a shortcut around these laborious tasks by jumping straight to the output. What might you be missing if you don’t do the reading or the thinking-through-writing?
The writing process has many parts. Be clear on which bits you are outsourcing. Discussions about using generative AI for writing often focus on the particular type of document or output — letters, marketing emails, reports, policy papers, submissions analysis. We think it’s much more useful to consider what part of the process the tools are being applied to and why — and to be specific when talking about it with colleagues, clients, or the public. There is a big difference between using text generators to come up with ideas or undertake analysis and using them to finesse the wording once the central ideas have been agreed on.
Words and text don’t appear out of nowhere; they are developed and built on over time. We think it’s vital to understand and acknowledge the history and genealogy of the ideas we’re developing or working on (our thinking on this topic is informed by feminist citational practices and Indigenous data sovereignty). Many current generative AI tools are anti-citational, cutting ideas off from links to their source material. Until generative AI tools can cite correctly and broadly, we think there are significant ethical and practical limitations to their use.
When images are circulated they will be divorced from their context. As soon as an image is published or shared — and especially when posted online — it is no longer under the control of its creator. Regardless of the copyright law in your country or how robust the accompanying metadata is, text and images can be circulated quickly and easily, divorced from their original context. A caption stating an image is AI-generated won’t stop it being circulated or understood as a “real” photo.
If you’d feel uncomfortable disclosing how you’re using generative AI tools, think about why. People don’t just read books or look at paintings because they are quality content. Art brings us into relationship with the maker, and lets us imagine and appreciate the process and skill of creation. If you circulate a generative image or piece of text, especially without sharing how it was made, would it undermine the reciprocal nature of the relationship between yourself and the viewer or reader? Would it go against the purpose of the work, or even undermine your authority? If so, maybe think twice before ploughing on.
In late 2023, we published AI and other stories, a compendium that brings together essays and articles we’ve written over the past few years that may be relevant to anyone grappling with generative AI and how these tools might affect our lives now and in the future.
—
Cover image: Philipp Schmitt & AT&T Laboratories Cambridge, Data flock (faces). CC-BY 4.0. This image was sourced from Better Images of AI, a non-profit collaboration dedicated to sourcing and sharing images that support wider public comprehension of AI technologies, applications and governance.