AI detectors, also called AI writing detectors or AI content detectors, identify whether a text was partially or fully written by artificial intelligence tools such as ChatGPT.
Detecting likely AI-written text is useful in several ways:
- Authenticating student work. Educators can use it to validate the authenticity of students’ original assignments and writing projects.
- Countering fake product reviews. Moderators can employ it to identify and address counterfeit product reviews that aim to manipulate consumer perception.
- Tackling spam content. It aids in detecting and removing various forms of spammy content that could distort online platforms’ quality and credibility.
These tools are still new and experimental, so their reliability is not yet well established. In the sections below, we explain how they work, assess how far they can be trusted, and explore their practical applications.
|Educational institutions, including universities, are in the process of formulating their positions regarding the appropriate utilization of ChatGPT and similar tools. It’s essential to prioritize your institution’s guidelines over any advice you come across online.
How do AI detectors operate?
AI detectors usually rely on language models similar to those used in the AI writing tools they aim to detect. In essence, the language model examines the input and asks, “Does this look like something I might have produced?” If so, the detector concludes that the text was probably AI-generated.
In particular, these models search for two characteristics within a text: “perplexity” and “burstiness.” When these two aspects are lower, there’s a higher likelihood that the text was generated by AI.
But what exactly do these terms mean?
Perplexity is a key metric for evaluating language models: it measures how well a model predicts the next word in a sequence.
AI language models aim to produce text with low perplexity, which makes it coherent, smooth-flowing, and predictable. Human writing tends to have higher perplexity: people make more creative word choices, but also more typos.
Language models work by predicting what word would naturally come next in a sentence and inserting it. You can see an example below.
- “I couldn’t finish the project last night.” (Low: probably the most likely continuation)
- “I couldn’t finish the project last time I didn’t drink coffee in the evening.” (Low to medium: less likely, but grammatically and logically sound)
- “I couldn’t finish the project last semester many times because of how unmotivated I was.” (Medium: coherent, but unusually structured and long-winded)
- “I couldn’t finish the project last pleased to meet you.” (High: grammatically incorrect and illogical)
Low perplexity is taken as evidence that a text is AI-generated.
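To make perplexity concrete, here is a minimal sketch using a toy add-alpha-smoothed bigram model. The corpus, sentences, and function names are invented for illustration; real detectors use large neural language models rather than anything this simple.

```python
import math
from collections import Counter

def train_bigram(corpus_words):
    """Count unigram and bigram frequencies in a list of words."""
    unigrams = Counter(corpus_words)
    bigrams = Counter(zip(corpus_words, corpus_words[1:]))
    return unigrams, bigrams

def perplexity(words, unigrams, bigrams, vocab_size, alpha=1.0):
    """Perplexity = exp of the average negative log-probability the
    model assigns to each next word, with add-alpha smoothing."""
    log_prob = 0.0
    for prev, cur in zip(words, words[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(words) - 1))

corpus = "the cat sat on the mat and the dog sat on the rug".split()
uni, bi = train_bigram(corpus)
vocab = len(uni)

seen = "the cat sat on the mat".split()        # word pairs the model has seen
unseen = "the mat sat cat dog the on".split()  # scrambled, surprising pairs

print(perplexity(seen, uni, bi, vocab))    # lower: the model expected this
print(perplexity(unseen, uni, bi, vocab))  # higher: the model is "perplexed"
```

The same principle scales up: a detector scores a text with its own language model and treats a low score as a hint, not proof, of AI authorship.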
“Burstiness” measures variation in sentence structure and length. It is similar to perplexity, but applied to whole sentences rather than individual words.
A text whose sentences are similar in structure and length has low burstiness and reads more evenly. A text whose sentences vary widely in structure and length has high burstiness and feels more varied and less uniform.
AI-generated text tends to be less variable in its sentence patterns compared to human-written text. As language models guess the word that’s probably next, they usually make sentences that are about 10 to 20 words long and follow regular patterns. This is why AI writing can sometimes seem monotonous.
Low burstiness indicates that a text is likely to be AI-generated.
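A very rough proxy for burstiness is the spread of sentence lengths. The `burstiness` helper and sample texts below are invented for this sketch; real detectors measure structural variation in more sophisticated ways.

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths (in words): a crude proxy
    for how much sentence structure varies across a text."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = ("The model writes a sentence. The model writes another sentence. "
           "The model writes a third sentence.")
varied = ("Short. Then a much longer sentence follows, winding through several "
          "clauses before it finally ends. Why? Because people write that way.")

print(burstiness(uniform))  # low: every sentence is about the same length
print(burstiness(varied))   # higher: lengths swing from one word to many
```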
Another option to consider: watermarks
OpenAI, the creator of ChatGPT, is reportedly developing a method called “watermarking.” This system involves adding an unseen mark to the text produced by the tool, which can later be identified by another system to confirm the AI origin of the text.
However, this system is still being developed, and the exact details of how it will work are not yet revealed. Moreover, it’s unclear if any suggested watermarks will remain intact when edits are made to the generated text.
|While the idea of using this concept to detect AI in the future looks hopeful, it’s important to note that definitive details and confirmations about putting it into practice are still pending.
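Since OpenAI has not published its scheme, the sketch below only illustrates the general idea behind statistical text watermarking as described in public research: bias generation toward a pseudorandom “green list” of words derived from the previous word, then test how often a text’s words land on that list. Every name, parameter, and vocabulary item here is invented for the illustration.

```python
import hashlib
import random

def green_list(prev_word, vocab, fraction=0.5):
    """Pseudorandomly pick a 'green' half of the vocabulary, seeded by the
    previous word, so generator and detector can recompute the same split."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(words, vocab):
    """Fraction of words that fall on their green list. Watermarked text
    should score far above the roughly 0.5 expected by chance."""
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_list(prev, vocab))
    return hits / (len(words) - 1)

vocab = [f"w{i}" for i in range(50)]
rng = random.Random(0)

# "Watermarked" text: each next word is drawn from the green list.
marked = ["w0"]
for _ in range(40):
    marked.append(rng.choice(sorted(green_list(marked[-1], vocab))))

# Unwatermarked text: words drawn uniformly from the whole vocabulary.
plain = ["w0"] + [rng.choice(vocab) for _ in range(40)]

print(green_fraction(marked, vocab))  # exactly 1.0 by construction
print(green_fraction(plain, vocab))   # near the 0.5 chance level
```

A real detector would turn the green fraction into a statistical test; edits to the text would gradually erode the signal, which is one reason the robustness of such watermarks remains an open question.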
How reliable are AI detectors?
- AI detectors generally perform well, especially on longer texts, but they can struggle when the AI-generated text was deliberately made less predictable or was edited after generation.
- AI detectors can mistakenly flag human-written text as AI-generated, especially when it happens to have low perplexity and low burstiness.
- Research on AI detectors indicates that no tool achieves complete accuracy; the best premium tool reached 84% accuracy, and the best free tool 68%.
- These tools offer useful insight into how likely a text is to be AI-generated, but we recommend against relying on them as sole evidence. As language models keep improving, detection tools will have to work harder to keep up.
- Reputable providers typically acknowledge that their tools cannot serve as conclusive proof that a text is AI-generated.
- For now, universities do not place strong trust in these tools.
|Trying to hide AI-generated writing can actually make the text seem very strange or not right for its intended use.
For instance, intentionally introducing spelling errors or employing illogical word choices in the text may reduce the chances of it being identified by an AI detector. However, a text filled with these errors and strange choices probably won’t be seen as good academic writing.
What are AI detectors used for?
AI detectors are aimed at anyone who wants to check whether a text might have been generated by AI. Potential users include:
- Educators and teachers. Verifying the authenticity of students’ work and preventing plagiarism.
- Students. Checking that their own writing doesn’t unintentionally resemble AI-generated text.
- Publishers and editors. Ensuring that they publish only human-written content.
- Researchers. Detecting potentially AI-generated research papers or articles.
- Bloggers and writers. Wishing to publish AI-generated content but worried it might rank lower in search engines if identified as AI writing.
- Content moderators. Identifying AI-generated spam, fake reviews, or inappropriate content.
- Businesses. Verifying that marketing material won’t be mistaken for AI-generated text, maintaining brand credibility.
|Due to concerns about their reliability, many users are hesitant to rely fully on AI detectors for now. Even so, the detectors are increasingly used as one indicator that a text might be AI-generated, especially when the user already has doubts.
Manually detecting AI-generated text
Besides using AI detectors, you can also learn to identify the unique traits of AI writing by yourself. It’s not always easy to do this reliably—human writing can sometimes sound robotic, and AI writing is becoming more convincingly human—but with practice, you can develop a good sense for it.
The specific rules that AI detectors follow, like low perplexity and burstiness, can seem complicated. However, you can try to find these traits yourself by looking at the text for certain signs:
- Text that reads monotonously, with little variation in sentence structure or length
- Predictable, generic word choices with few unexpected elements
You can also watch for signals that AI detectors can’t use:
- Overly polite, formal language. Chatbots such as ChatGPT are designed to be helpful assistants, so they tend toward polite, formal phrasing that can sound out of place in casual contexts.
- Inconsistency in voice. If you’re familiar with how someone usually writes (a student, for example), you can often tell when a text departs sharply from their usual style.
- Overuse of hedging and a lack of original ideas. Look out for a shortage of strong, fresh ideas combined with heavy reliance on stock phrases: “It’s important to note that …,” “X is widely regarded as …,” “X is considered …,” “Certain people might argue that …”
- Unsourced or incorrectly cited claims. Academic writing requires citing your sources, but AI writing tools often omit citations or get them wrong (for example, citing sources that don’t exist or aren’t relevant).
- Logical errors. Even as AI writing becomes more fluent, its ideas don’t always fit together. Watch for claims that contradict each other, sound implausible, or don’t connect smoothly.
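As a toy illustration of the “overused hedging phrases” signal, you could count stock phrases per 100 words. The phrase list and `hedge_density` helper are invented for this sketch and are not part of any real detector.

```python
HEDGES = ["it's important to note that", "is widely regarded as",
          "is considered", "might argue that"]

def hedge_density(text):
    """Occurrences of stock hedging phrases per 100 words."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in HEDGES)
    n_words = len(text.split())
    return 100.0 * hits / n_words if n_words else 0.0

sample = ("It's important to note that coffee is widely regarded as popular. "
          "Some might argue that tea is considered better.")
print(hedge_density(sample))  # 4 hits in 19 words: about 21 per 100 words
```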
|Overall, experimenting with various AI writing tools, watching the types of texts they can produce, and becoming familiar with how they write can help you get better at spotting text that might have been created by AI.
Detectors for AI images and videos
AI image and video generators, including popular tools like DALL-E and Synthesia, can create realistic and manipulated visuals. This makes it crucial to identify “deepfakes” and other AI-made images and videos to prevent the spread of misinformation.
Currently, many signs can reveal AI-generated images and videos, such as:
- Hands with too many fingers
- Strange movements
- Nonsensical text in the image
- Unrealistic facial features
Yet, spotting these signs might get harder as AI gets better.
There are tools designed to detect these AI-generated visuals, including:
- Intel’s FakeCatcher
It’s still unclear how effective and reliable these tools are, so more testing is needed.
The constant evolution of AI image and video generation and detection creates an ongoing need to develop more solid and accurate detection methods to address the potential risks associated with deepfakes and AI-generated visuals.
|AI detectors help identify texts generated by tools like ChatGPT. They mainly look for “perplexity” and “burstiness” to spot AI-created content. Their accuracy remains a concern, with even the best ones showing errors. As AI technology advances, differentiating humans from AI-produced content, including images and videos, gets harder, highlighting the need to stay careful online.
Commonly asked questions
|1. What is the difference between AI detectors and plagiarism checkers?
A: Both AI detectors and plagiarism checkers are used by universities to discourage academic dishonesty, but they differ in method and objective:
• AI detectors aim to identify text resembling output from AI writing tools. This involves analyzing text traits like perplexity and burstiness, rather than comparing them to a database.
• Plagiarism checkers aim to detect copied text from other sources. They achieve this by comparing the text with an extensive database of previously published content and student theses, identifying similarities—without relying on analyzing specific text traits.
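The database-matching idea can be sketched with overlapping word sequences (“shingles”). The `shingles` and `overlap_score` helpers and the one-document database below are invented for illustration; real checkers index billions of documents and use fuzzier matching.

```python
def shingles(text, k=5):
    """Set of overlapping k-word sequences ('shingles') in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_score(text, database):
    """Fraction of the text's shingles that also appear in any database
    document: the database-matching idea behind plagiarism checkers."""
    doc_shingles = shingles(text)
    if not doc_shingles:
        return 0.0
    known = set().union(*(shingles(d) for d in database))
    return len(doc_shingles & known) / len(doc_shingles)

database = ["the quick brown fox jumps over the lazy dog near the river bank"]
copied = "a student wrote the quick brown fox jumps over the lazy dog"
original = "an entirely different sentence about detecting copied homework text"

print(overlap_score(copied, database))    # most shingles match the database
print(overlap_score(original, database))  # no shingles match
```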
2. How can I use ChatGPT?
A: To use ChatGPT, simply create a free account:
• Go to the ChatGPT website.
• Select “Sign up” and provide the required information (or sign up with your Google account). Signing up and using the tool are free of charge.
• Type a prompt into the chat box to get started!
An iOS version of the ChatGPT app is currently accessible, and there are plans for an Android app in the pipeline. The app functions similarly to the website, and you can use the same account to log in on both platforms.
3. Until when will ChatGPT remain free?
A: The future availability of ChatGPT for free remains uncertain, with no specific timeline announced. The tool was initially introduced in November 2022 as a “research preview” to be tested by a wide user base at no cost.
The term “preview” suggests potential future charges, but no official confirmation of ending free access exists. An enhanced option, ChatGPT Plus, costs $20/month and includes advanced features like GPT-4. It’s unclear if this premium version will replace the free one or if the latter will continue. Factors like server expenses could influence this decision. The future course remains uncertain.
4. Is it okay to include ChatGPT in my citations?
A: In certain contexts, it’s appropriate to reference ChatGPT in your work, particularly when it serves as a significant source for studying AI language models. Certain universities may require citation or acknowledgment if ChatGPT aided your research or writing process, such as in developing research questions; it’s advisable to consult your institution’s guidelines. However, due to ChatGPT’s varying reliability and lack of credibility as a source, it’s best not to cite it for factual information.
In APA Style, you can treat a ChatGPT response as personal communication since its answers aren’t accessible to others. In-text, cite it as follows: (ChatGPT, personal communication, February 11, 2023).