Beware of AI

I frequently search the Internet to gather information on which I base parts of my journal entries. I will have a topic in mind, be unsure of some detail, and use a search engine to learn more. Those searches often turn up Wikipedia entries. Because Wikipedia is compiled from various sources and can be edited by users, it is prone to errors. The site relies on human editors to identify and correct errors and to flag other problems, such as the overuse of technical language or speculation, but mistakes slip through. The result is that I have occasionally passed on misinformation in my journal entries.

I do, however, actually type each word of my journal myself. Except for quotations set off by quotation marks, I do not use my computer's copy and paste function to prepare my entries. In that regard, I intend to own the mistakes that appear here. Specifically, I do not use a digital writing assistant or AI content generator to produce content. However, the tools I use for writing do employ large language models. Large language models are computer programs trained on text from many sources, including much of the Internet; they analyze word order and content for patterns and then apply those patterns to generate new text. These models are commonly called artificial intelligence, or AI.
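A real large language model is vastly more complex, but the core idea of learning which words tend to follow which, and then reusing those patterns to produce new text, can be sketched in a few lines of toy code. The miniature "corpus" below is made up for illustration; this is a simple bigram chain, not how modern models actually work internally.

```python
import random

# A tiny made-up corpus standing in for the vast text an LLM trains on.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Learn the pattern: for each word, record the words seen to follow it.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly picking a word that followed
    the previous word somewhere in the training corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation; stop
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every sentence this toy produces is stitched together from word pairs it has actually seen, which is why such systems can sound fluent while having no notion of whether what they say is true.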

I use a program from Apple called Pages to write my journal entries. I also have Microsoft Word installed on my computer and use it for many writing tasks. I have been using Word for thirty years, since version 1.0, but it now has some features that I do not like. One of those features is a text generator called Copilot. When the feature is turned on, the user enters text into a pop-up window, and the program offers to expand it. I don’t want my computer to do my writing for me, and I have used Copilot only a couple of times, to figure out how it works. I have deleted the content from those experiments and don’t want it saved.

The auto-correct functions of the program are sometimes helpful and sometimes annoying. I’ve had times when the spelling or grammar corrections suggested by the computer were incorrect or changed the meaning of what I intended to write. These functions, however, can be turned off. I frequently turn off both the spelling and grammar checkers when writing poetry.

I have learned to use a grammar-checking program called Grammarly. You run it after you have written an article or block of text, and it suggests corrections to grammar and spelling errors. It is especially helpful to me with punctuation. Human editors frequently add or delete commas from my work, and the program does decrease their need to do so. I also use the program because it has a feature that checks for plagiarism. I do not intentionally copy the work of other writers, but the style, and sometimes the words, of writers who influence me show up in my writing. So far, the program has not detected plagiarism in any of my writing, but I run it on text I intend to submit for publication as an added layer of scrutiny.

I am concerned about the possibility of machine-generated errors finding their way into my writing. The Google search engine now frequently gives me AI-generated content as its first response to a search. For example, when I searched for “large language models” this morning, the first item at the top of my search window was labeled “AI Overview.” After a summary paragraph, there was a pull-down link titled “show more.” Scrolling past the AI overview, the next result was the Wikipedia entry. Scrolling further yielded more articles, many from familiar sources that I trust. I have clicked the “show more” link just to see what the system generates.

Friends who are college professors say that they have received entire student papers generated by AI tools such as ChatGPT, GPT-3, and others. Some university professors have embraced the tools and taught students how they work, even offering tips for using them to generate a starting point when faced with writer’s block. They see such programs as tools to enhance human learning when correctly employed.

Even though there are days when I struggle to start a journal entry or other writing task, I am not ready to use any of those programs. NPR, CBC, Axios, and The Guardian have all run stories this week about a summer reading list printed in the Chicago Sun-Times. The article listed fifteen titles along with their authors. The majority of the books do not exist. The authors were familiar names who have written books, but the titles were not of any published books. The tenth book on the list, “Salt and Honey,” has a title that has been used before, but the story’s description does not match any published book by that name. Another book, credited to Maggie O’Farrell, describes a book by Charlotte McConaghy. The author of the article and the Sun-Times have acknowledged that an AI generator was used to create it.

That is concerning enough, since many of us have trusted newspapers to provide accurate information. All writers and editors bear responsibility for fact-checking. My teacher friends clearly state that students who use ChatGPT or similar programs must fact-check their papers before submitting them. In the case of the reading list, fact-checking would have been easy, and if the author or an editor had done it, the mistakes could have been caught before publication. The Sun-Times and many other newspapers have recently reduced their staffs. In earlier years, an editor would have caught the writer misusing the tool. Human editors, however, are becoming scarce in journalism.

There is plenty of misinformation and disinformation on the Internet, and I’m trying not to add to the problem. My entries may still include spelling, punctuation, and grammar errors. Accept them as a sign that an actual human is creating my content.
