Recent News

The Early Scoop on AI Tools Like ChatGPT

March 2, 2023
Categories: academicIntegrity, plagiarism

ChatGPT: What Is It?

ChatGPT stands for “Chat Generative Pre-trained Transformer” and is an artificial intelligence (AI) tool that interacts with users in a conversational way. At the moment, it can be accessed for free at: https://chat.openai.com

Note that if you try to sign up for an account to test the tool, the system is frequently at full capacity, so it may take a while to create an account and try the bot. However, the page linked above shows an example of text generated by ChatGPT, and you can reload the page to see a variety of examples.

ChatGPT is essentially a chatbot, and chatbots have been around for years; other examples include Alexa, Siri, and the customer service bots many of us have encountered. ChatGPT arrived on the scene in late November 2022 and has already made huge waves in both the academic and private sectors. Since then, numerous other tools have been released to compete with it.

What Makes ChatGPT and These Other Tools Different?

Like other chatbots, ChatGPT can:

  • Chat with users
  • Summarize texts
  • Write code
  • Solve math problems (an illustrative example of the last two items follows this list)
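
To illustrate the last two capabilities, the snippet below shows the kind of short program a prompt like “write a Python function that solves a quadratic equation” might produce. This is an illustration written for this post, not actual ChatGPT output, and the function name is made up:

```python
# Illustrative only: the sort of code a "solve a quadratic equation" prompt might yield.
import cmath  # cmath handles complex roots when the discriminant is negative

def solve_quadratic(a: float, b: float, c: float):
    """Return both roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("'a' must be nonzero for a quadratic equation")
    root_disc = cmath.sqrt(b ** 2 - 4 * a * c)
    return (-b + root_disc) / (2 * a), (-b - root_disc) / (2 * a)

print(solve_quadratic(1, -3, 2))  # x**2 - 3x + 2 = 0 has roots 2 and 1
```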

ChatGPT uses natural language to perform tasks such as revising writing samples and writing essays and poems. It can also serve as a debate partner. ChatGPT began to create waves in the academic world almost immediately because it earned passing scores on the evidence and torts portion of the bar exam. ChatGPT has generated academic papers good enough to be accepted by journals. Many academic journals are moving to ban the use of AI generators, but some researchers have already listed the chatbot as a co-author on articles. Here is an example taken from the ChatGPT website of a meditation written by ChatGPT:

[Image: a meditation written by ChatGPT]

ChatGPT will also challenge falsehoods. For example, when asked, “Tell me about when Christopher Columbus came to the U.S. in 2015,” ChatGPT responded “This question is a bit tricky because Christopher Columbus died in 1506, so he could not have come to the U.S. in 2015, but let’s pretend for a moment that he did! If Columbus arrived in the US in 2015 … he might be surprised to find out that many people don’t view him as a hero anymore; in fact, some people argue that he was a brutal conqueror who enslaved and killed native people.” (source)

OpenAI, the company behind ChatGPT, uses reinforcement learning from human feedback: human AI trainers interact with the model, compare ChatGPT’s responses with human-written replies, and rank their quality, and those rankings are used to reinforce human-like conversational behavior. (source)
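
To make that idea concrete, here is a minimal, illustrative sketch in Python (hypothetical data and function names, not OpenAI’s actual code) of how pairwise comparisons from human trainers can be turned into “chosen vs. rejected” training pairs for a reward model, which then guides the reinforcement learning step:

```python
# A minimal, illustrative sketch of the human-feedback comparison step.
# All names and example data here are hypothetical, not OpenAI's pipeline.

from dataclasses import dataclass

@dataclass
class Comparison:
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b", chosen by a human AI trainer

# Trainers read two candidate responses to the same prompt and pick the better one.
comparisons = [
    Comparison(
        prompt="Explain photosynthesis to a 9-year-old.",
        response_a="Plants use sunlight to turn water and air into food they can grow with.",
        response_b="Photosynthesis is a biochemical pathway mediated by chlorophyll pigments.",
        preferred="a",
    ),
]

def to_preference_pairs(comparisons):
    """Convert pairwise human judgments into (chosen, rejected) pairs.

    A separate reward model is trained to score each 'chosen' response higher
    than its 'rejected' counterpart; that reward signal is then used to
    fine-tune the chatbot with reinforcement learning."""
    pairs = []
    for c in comparisons:
        if c.preferred == "a":
            chosen, rejected = c.response_a, c.response_b
        else:
            chosen, rejected = c.response_b, c.response_a
        pairs.append({"prompt": c.prompt, "chosen": chosen, "rejected": rejected})
    return pairs

print(to_preference_pairs(comparisons))
```

The key point is that the model is never told what the single “right” answer is; it only learns which of two candidate answers a human preferred.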

Google has announced its own version of this AI technology, called Bard. According to a CNN article, “Google offered the example of a user asking Bard to explain new discoveries made by NASA’s James Webb Space Telescope in a way that a 9-year-old might find interesting. Bard responds with conversational bullet-points. The first one reads: ‘In 2023, The JWST spotted a number of galaxies nicknamed ‘green peas.’ They were given this name because they are small, round, and green, like peas.’”

An AI tool called Jasper AI provides writing templates such as:

  • Documents: Write and edit your document, similar to Google Docs, with the help of Jasper’s writing assistant
  • Jasper Art: Generate images from text prompts
  • Blog Post Outline: Come up with ideas and outlines for your how-to and listicle posts
  • Content Improver: Rewrite content to make it better
  • Content Summarizer: Get the key bullet points from a piece of content
  • Creative Story: Generate stories to engage readers
  • Explain It to a Child: Rephrase text to make it easier to read and understand
  • Email Subject Lines: Write compelling email subject lines
  • Engaging Questions: Create forms with questions to ask your audience
  • FAQ Generator: Generate FAQs for your articles and blog posts
  • Paragraph Generator: Generate well-written paragraphs
  • Sentence Expander: Expand a short sentence of a few words into multiple sentences
  • Text Summarizer: Generate key ideas from a piece of text
  • Poll Questions & Multiple Choice Answers: Create questions with multiple-choice answers
  • Tweet Machine: Generate engaging tweets
  • TikTok Video Captions: Generate captions for TikTok videos
  • Video Script Outline: Create script outlines for YouTube videos
  • Video Topic Ideas: Brainstorm video topics for YouTube

Potential Benefits

  • Tutoring: ChatGPT does a good job of defining and explaining concepts in an understandable way.
  • Organization and management of code and data.
  • Content creation (when not used in a way that violates academic integrity and/or copyright).

What Are the Limitations?

  • ChatGPT may provide plausible-sounding but inaccurate responses, and because the bot does not provide citations, its responses can be difficult to evaluate.
  • Limited training data: more work is needed to eliminate bias.
  • ChatGPT is sensitive to slight tweaks in the phrasing of a question or request.
  • Although it is trained to refuse inappropriate requests and detect false statements, the technology is not always accurate, in part because it may not be up to date on the most recent news and developments.
  • The product is expensive to maintain, so it may not remain free.

Academic Integrity and the Workforce

Although it is still early days for ChatGPT and similar AI tools, technology grows at an exponential rate, and it is important that the academic and professional worlds stay current. In academia, if you are monitoring for plagiarism, it is important to know that Turnitin cannot currently detect submissions generated by AI. However, the company plans to launch this kind of detection capability sometime in 2023, and you can get a sneak preview HERE. In the meantime, Edward Tian, a 22-year-old senior at Princeton University, has built an app (GPTZero) to detect whether text was written by ChatGPT. I have not tried the app but have read that the results are not consistent. As of January 31st, OpenAI, the creator of ChatGPT, has released a software tool to identify text generated by artificial intelligence, but by their own admission the tool is not yet very accurate (see also).

AI-generated content has the potential to significantly impact the workforce as well. Microsoft is reportedly considering adding OpenAI’s ChatGPT technology to Microsoft 365, which is used extensively in both the academic and private sectors. (Read more)

Although AI can be useful in the private sector for tasks like data interpretation and coding, businesses and employers must also worry about the legal implications of using AI, inaccurate information generated by AI, and the risk of confidential information accidentally being shared. ChatGPT could be an effective aid to research, but it could also easily be abused, and research integrity is already a real problem (see RetractionWatch).

Ethics is also a concern when using AI, which sometimes reinforces stereotypes. Keeping this in mind, consider that “a judge in Colombia used ChatGPT to make a court ruling, in what is apparently the first time a legal decision has been made with the help of an AI text generator—or at least, the first time we know about it.” (source)

Are AI Image Generators Violating Copyright Laws?

AI image generators scrape the web for existing digital images and artwork to create their own images. There was also a recent controversy when Lensa, an AI avatar generator, created avatars that not only looked very similar to existing artists’ works but also produced highly sexualized versions of the female avatars and, in some cases, lightened the skin compared to the images uploaded. One reporter tried Lensa and reported: “Lensa’s terms of service instruct users to submit only appropriate content containing ‘no nudes’ and ‘no kids, adults only.’ And yet, many users—primarily women—have noticed that even when they upload modest photos, the app not only generates nudes but also ascribes cartoonishly sexualized features, like sultry poses and gigantic breasts, to their images. I, for example, received several fully nude results despite uploading only headshots. The sexualization was also often racialized: Nearly a dozen women of color told me that Lensa whitened their skin and anglicized their features.” The same reporter also submitted childhood photos and received sexualized AI images in return. Read more.

I also think we should be concerned about a spike in AI-generated images and videos being circulated and falsely represented as reality. This is already a problem: technology has allowed us to falsify images and video for a while by doing things like swapping faces. However, making a convincing fake used to require some skill with photo and video editing tools. AI tools are readily available to anyone, and as these tools improve, it will likely become increasingly difficult to determine what is real and what has been generated.

I haven’t heard of any tools to detect when images have been generated by AI. However, this article provides four ways to help identify AI-generated images:

  1. Check the image title, description, and comments section
  2. Look for a watermark
  3. Search for anomalies in the image
  4. Use a GAN detector (a rough code sketch of this approach follows the list)
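
For the last approach, a GAN detector is essentially an image classifier trained to distinguish generated images from photographs. Below is a rough sketch of what that could look like in Python, assuming the Hugging Face transformers library; the model identifier is a placeholder, and a real AI-image detector model would need to be substituted:

```python
# A rough sketch of point 4 above: running an image through a classifier
# trained to flag AI-generated images. The model name below is a placeholder;
# substitute a real GAN/AI-image detector from a model hub.

from transformers import pipeline  # pip install transformers pillow torch

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # hypothetical model identifier
)

results = detector("suspect_image.jpg")  # path or URL of the image to check

# Each result is a dict like {"label": "artificial", "score": 0.97};
# the labels depend entirely on how the chosen detector model was trained.
for r in results:
    print(f"{r['label']}: {r['score']:.2f}")
```

Whatever detector is used, its scores should be treated as one signal among several rather than proof, since these classifiers are themselves imperfect.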

What Can Instructors Do Now?

Once you start diving into the world of AI, it is easy to become overwhelmed and perhaps even feel hopeless about addressing it. The good news, however, is that as quickly as AI evolves, there are people just as committed to making sure we always have the tools to detect it. In truth, my concerns are less about the ability to detect AI-generated content and more about whether those outside the academic world will take the time to determine whether the content they encounter has been generated by AI. In addition to using detection tools, there are some things we can do now to address AI usage both in and out of the classroom:

  • Stay informed about AI capabilities and limitations.
  • Communicate with students about when it is appropriate to use AI as an aid and when it is not, as well as potential consequences (both academic and in the learning process).
  • As instructors, we can watch for inconsistencies in student submissions. For example, is the quality of a paper submission significantly different from that of discussion posts? Remember that if you have concerns, you can meet with the student and ask pointed questions about the assignment content.
  • Consider how AI is used in your field and the implications for your course(s). Are there times in your course when student learning can benefit from using the tool?
  • Lean into authentic, personalized assessment whenever possible. 
  • When the course allows, spend some time discussing AI with your students, how they can detect AI-generated content themselves, and most importantly why it is essential that they take the time to do so rather than just assume that if the content looks authentic, it must be.
  • This Inside Higher Ed article provides a number of useful suggestions from other instructors.

Additional Resources

AI tools such as ChatGPT raise many questions about the future of higher education, the arts, copyright, and job responsibilities. It is also safe to assume that there will be more AI tools like ChatGPT and that they will continue to improve. Short of going back to in-person, handwritten assignments only, it is unlikely that we will be able to completely prevent students from using AI tools. Moreover, as educators, it may be our responsibility to address AI head-on and educate students about both the benefits and the perils of using it. Here are some resources to help you gain a better understanding of ChatGPT and other AI tools: