How to Build an AI Text Detector Using Python


Hey, teachers! Are you worried about students using AI generators to write their essays, answer their homework questions, and generally avoid learning subjects themselves? Well, you’re not alone. In fact, with the wide availability of Large Language Models (LLMs) – and especially ChatGPT – there are now tons of new AI-powered tools that can produce entire pieces of content from just a few prompts.

There have been many confirmed instances of students using AI models to complete homework and assignments, prompting considerable debate about whether these tools should be allowed at all. While submitting AI-generated work may not technically count as plagiarism, it is increasingly viewed with skepticism and has been put to plenty of unethical uses. OpenAI has even published a page that expands on these considerations.

In this article, we will build an AI text detector using Python that assesses text snippets and paragraphs, and then predicts how likely they are to be AI-generated. You can use this tool to help evaluate whether a piece of content was produced honestly or was most likely machine-generated.

Python Script to Build an AI Text Detector

The goal is to write a simple Python script that will:

  • Accept text as input
  • Return an object containing a percentage score that reflects how likely it is that the text was generated by an AI.

Here is what the script will look like in its final form:

# app.py
content = """
Essay content here
"""

### Config
config = {}

detector = AITextDetector(config)
response = detector.detect(content)
print(response)

### Output
{
    "output": "The classifier considers the text to be possibly AI-generated.",
    "confidence (%)": 95.13123123
}

To accomplish our goal, we will:

  1. Create an instance of the AITextDetector class passing a config object
  2. Populate this config with the parameters needed to interact with an AI detection backend (such as OpenAI’s text classifier API or GLTR).
  3. Call the detect method of this instance, passing the content to analyze. The program will do its magic and return a response.
  4. We’ll use the confidence percentage score as an indicator to assess whether or not the text was generated by AI.

Let’s get started!

AI Text Detector Project Setup

To follow along with the code in this article, you can download and install our pre-built AI-Generated-Text-Detector environment, which contains:

  • A version of Python 3.10.
  • All the dependencies used in this post in a pre-built environment for Windows, Mac and Linux:
    • Requests, which you will need to perform the API requests.
    • Python-dotenv, which you’ll need to load the environment variables from a .env file.

In order to download this ready-to-use Python project, you will need to create a free ActiveState Platform account. Just use your GitHub credentials or your email address to register. Signing up is easy and it unlocks the ActiveState Platform’s many other dependency management benefits.

Windows users can install the AI-Generated-Text-Detector runtime into a virtual environment by downloading and double-clicking on the installer. You can then activate the project by running the following command at a Command Prompt:


state activate --default Pizza-Team/AI-Generated-Text-Detector

For Mac and Linux users, run the following to automatically download, install and activate the AI-Generated-Text-Detector runtime in a virtual environment:

sh <(curl -q https://platform.activestate.com/dl/cli/_pdli01/install.sh) -c'state activate --default Pizza-Team/AI-Generated-Text-Detector'

Finally, you’ll need to create the basic Python script that we will implement:

$ touch app.py


AI Text Detection Using OpenAI Text Classifier

To build an AI text detector that works, we will use OpenAI’s Completions API to query a special detection model. Using the response data, we will calculate the probability that the text was generated by AI.

OpenAI API Key Config

First things first: create an API key from the OpenAI API keys page, and then store the key in a local .env file like this:

# .env
OPENAI_API_KEY=<GENERATED_OPENAI_KEY>

You can then load it into Python using python-dotenv and the os.environ helper:

# app.py
import os

from dotenv import load_dotenv

load_dotenv()
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

Define the Main Method

Next, we’ll create the basic interaction in the __main__ block, as we initially specified:

if __name__ == "__main__":
    content = """
    """
    detector = AITextDetector(OPENAI_API_KEY)
    response = detector.detect(content)

Now, you need to add the text content that you want to check between the triple quotes. It needs to be at least 1,000 characters long to be valid. Feel free to use whatever content you wish to check.
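If you want to guard against inputs that are too short, a minimal length check before calling detect might look like the following sketch (the MIN_LENGTH name is hypothetical; the 1,000-character minimum mirrors the error message the detect method returns later):

# A minimal sketch of an input-length guard (MIN_LENGTH is a hypothetical name)
MIN_LENGTH = 1000  # characters, mirroring the detect() error message

if len(content.strip()) < MIN_LENGTH:
    raise ValueError("Content should be at least 1,000 characters long")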

Next, we will take a closer look at how the AITextDetector class works.

How AITextDetector Class Works

The main AITextDetector class is responsible for sending the request to OpenAI’s Completions API endpoint, and calculating the probability of the text being generated by an AI.

The constructor of this class takes the OPENAI_API_KEY token and creates the request headers:

# app.py
import math

import requests


class AITextDetector:
    def __init__(self, token):
        self.header = {
            "Content-Type": "application/json",
            "Authorization": "Bearer {0}".format(token),
        }

It creates the Content-Type and the Authorization header for the request. Next comes the most important method, which is detect. Let’s take a look at the first part of this code:

    def detect(self, text):
        data = {
            "prompt": text + ".\n<|disc_score|>",
            "max_tokens": 1,
            "temperature": 1,
            "top_p": 1,
            "n": 1,
            "logprobs": 5,
            "stop": "\n",
            "stream": False,
            "model": "model-detect-v2",
        }
        response = requests.post(
            "https://api.openai.com/v1/completions", headers=self.header, json=data
        )

Here, it creates the request payload, which consists of the actual text with an added marker string (<|disc_score|>) and some parameters required for the API. It uses a special model-detect-v2 model that will calculate the top probabilities of the text being generated by AI. Then it sends the payload to the https://api.openai.com/v1/completions API.

On the response side, it needs to parse the top log probabilities and create the assessment based on the top score. The code for this part is:

        if response.status_code == 200:
            choices = response.json()["choices"][0]
            key_prob = choices["logprobs"]["top_logprobs"][0].get("!", -10)
            prob = math.exp(key_prob)
            e = 100 * (1 - (prob or 0))
            label = None
            for item in self.assessments:
                if e <= item.get("max_score"):
                    label = item.get("assessment")
                    break
            if label is None:
                label = self.assessments[-1].get("assessment")
            top_prob = {
                "Verdict": "The classifier considers the text to be {0}{1}{2} AI-generated.".format(
                    "\033[1m", label, "\033[0m"
                ),
                "AI-Generated Probability": e,
            }
            return top_prob
        return "Check your input, the length of content should be more than 1,000 characters"

Let’s go through and explain what it does here in detail:

  1. First, it checks to see if the response was successful (code 200).
  2. Then, it extracts the choices array from the response (since it contains the calculated log probabilities). For example, it might respond with:
{
    "id": "cmpl-76MILwsHQTtUba4tErbIuR25M0Tjn",
    "object": "text_completion",
    "created": 1681750025,
    "model": "model-detect-v2",
    "choices": [
        {
            "text": "\"",
            "index": 0,
            "logprobs": {
                "tokens": [
                    "\""
                ],
                "token_logprobs": [
                    -0.02926684
                ],
                "top_logprobs": [
                    {
                        "!": -3.5892034,
                        "\"": -0.02926684,
                        " \"": -9.217239,
                        "'": -10.437006,
                        "!\"": -8.85389
                    }
                ],
                "text_offset": [
                    2804
                ]
            },
            "finish_reason": "length"
        }
    ]
}
  3. From the choices field, it then extracts the probability indexed by top_logprobs[0]["!"]. Since this is a log probability, we exponentiate it to obtain the probability p:
    e^-3.5892034 = 0.02762032401
  4. The confidence score is calculated using the formula 100 * (1 - p) = 100 * (1 - 0.0276) ≈ 97. This number is used to return the assessment based on certain limits. The assessment table, which the code references as self.assessments, is defined as:
assessments = [
    {"max_score": 10, "assessment": "very unlikely"},
    {"max_score": 45, "assessment": "unlikely"},
    {"max_score": 90, "assessment": "unclear if it is"},
    {"max_score": 98, "assessment": "possibly"},
    {"max_score": 99, "assessment": "likely"},
]

In this case, since the score of roughly 97 is greater than 90 but no more than 98, the assessment would be possibly.
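To make the arithmetic concrete, here is a small, self-contained sketch that reproduces the calculation above, using the example log probability from the sample response (it is an illustration only, not part of the final script):

# score_example.py - reproduces the confidence calculation from the example above
import math

assessments = [
    {"max_score": 10, "assessment": "very unlikely"},
    {"max_score": 45, "assessment": "unlikely"},
    {"max_score": 90, "assessment": "unclear if it is"},
    {"max_score": 98, "assessment": "possibly"},
    {"max_score": 99, "assessment": "likely"},
]

key_prob = -3.5892034          # top_logprobs[0]["!"] from the example response
prob = math.exp(key_prob)      # ~0.0276
score = 100 * (1 - prob)       # ~97.24

label = None
for item in assessments:
    if score <= item["max_score"]:
        label = item["assessment"]
        break
if label is None:
    label = assessments[-1]["assessment"]

print(score, label)            # 97.23..., "possibly"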

Printing OpenAI’s Completions API Results

When running the script, you can print the verdict using the following print statements:

response = detector.detect(content)
print("Verdict: {0}".format(response.get('Verdict')))
print("AI-Generated Probability(%): {0}".format(response.get('AI-Generated Probability')))

For example:

$ python3 app.py
Verdict: The classifier considers the text to be possibly AI-generated.
AI-Generated Probability(%): 94.25936296438395
$ python3 app.py
Verdict: The classifier considers the text to be very unlikely AI-generated.
AI-Generated Probability(%): 3.6466497435988066

So there you have it, a quick and easy probabilistic tool to check whether any text was generated by an AI.

Special Case: Giant Language Model Test Room (GLTR)

The previous example used an API to detect if the content was AI-generated. If you want to do this locally, you can check out the GLTR project, which relies on pre-trained models loaded via the transformers package. It works a bit differently, so we will explain the basic steps.

The main idea is to mark words using a color scheme: green or yellow means the word was easy for the model to predict, so the text is more likely to be AI-generated, while red or purple indicates a low-probability word. It considers several factors when calculating this score, including word occurrence, positioning, frequency, and style.

So, if most of the words in the text are green or yellow, that is a strong indication that the text was generated by an AI.

You can run the GLTR project locally, and even try to incorporate it into the CLI tool that we’ll build later. GLTR requires torch, transformers and numpy, which are already included in the AI-Generated-Text-Detector environment.

To get started, create a script to preload the models:

# preload.py
from transformers import GPT2LMHeadModel, GPT2Tokenizer


def run(model_name_or_path):
    GPT2Tokenizer.from_pretrained(model_name_or_path)
    GPT2LMHeadModel.from_pretrained(model_name_or_path)
    print("Loaded GPT-2 model!")


if __name__ == '__main__':
    run("gpt2")

This will download and save the pre-trained models inside the host’s cache folder for later use. (You only have to do this once, and the models are about 550MB each.)
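Run the preload script once from the activated environment; when the download finishes it should print the confirmation message from the script above:

$ python3 preload.py
Loaded GPT-2 model!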

Next, calculate the probabilities. The main implementation of the algorithm is located within the check_probabilities method. You can copy the LM class (together with its AbstractLanguageChecker parent) and integrate it into the script you already have so that you can use it from the command line. Just copy all of the code for the LM class without the BERTLM part. Then, use the following main block:

# gltr.py
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class AbstractLanguageChecker:
    …


if __name__ == "__main__":
    content = """All children, except one, grow up."""
    detector = LM()
    response = detector.check_probabilities(content, 10)
    print(response['bpe_strings'])
    print(response['real_topk'])
    print(response['pred_topk'])

When you run the above example, you will get three print statements for the input string. The first is bpe_strings, the list of detected tokens, which we won’t use here.

The second is real_topk, which describes the fraction of probability for the actual word divided by the maximum probability of any word at this position. For example:

[(66, 0.00191), (47, 0.00198), (7, 0.03022), (10, 0.01205), (13, 0.00615), (0, 0.53441), (30, 0.00208), (0, 0.92513), (45, 0.00171)]

Here, the word “all” takes position 66 with a probability of 0.00191, so it’s not in the top 10. Generally, the higher the position (and the lower the probability), the more likely it is that the word was written by a human. Conversely, if the position is very low (less than 10) and the probability is high (typically more than 0.8), the word is more likely to have been generated by an AI, since the model predicted it accurately.

The last print is pred_topk, which contains, for each word in the text, the list of the top predicted (word, prob) tuples. Since we passed topk as 10 to the check_probabilities method, it will print a list of 10 tuples for each word present in the text. This is used to calculate the entropy histogram for each word, following the formula in the GLTR code.
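For reference, here is a rough sketch of how a per-word entropy could be computed from pred_topk; it is only an illustration of the idea, not necessarily the exact formula used in the GLTR code, and the topk_entropy helper is a hypothetical name:

# entropy_sketch.py - illustrative only; not the exact GLTR implementation
import math

def topk_entropy(pred_topk):
    entropies = []
    for predictions in pred_topk:  # one list of (word, prob) tuples per position
        probs = [prob for _word, prob in predictions]
        entropies.append(-sum(p * math.log(p) for p in probs if p > 0))
    return entropies

entropies = topk_entropy(response['pred_topk'])
print(entropies)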

Since you want an adequate amount of entropy, it’s important to feed the model a significant sample of text and use a relatively high topk value (more than 50) so that you can consider the estimate reliable.

Finally, there’s the classification step. You can decide how to classify the results depending on the thresholds you put in place. A good place to start is to use the official project threshold limits. For example, use top10 for green, top100 for yellow, and top1000 for red. We will leave it up to you to come up with a final prediction score based on those criteria.
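As a starting point, here is a rough sketch of how you might bucket the real_topk positions into the GLTR color bands and derive a simple score. Only the top-10/top-100/top-1,000 thresholds come from the project; the classify_ranks helper and the green-fraction heuristic are illustrative assumptions:

# classify_sketch.py - illustrative bucketing of real_topk positions into color bands
def classify_ranks(real_topk):
    buckets = {"green": 0, "yellow": 0, "red": 0, "purple": 0}
    for rank, _prob in real_topk:
        if rank < 10:
            buckets["green"] += 1
        elif rank < 100:
            buckets["yellow"] += 1
        elif rank < 1000:
            buckets["red"] += 1
        else:
            buckets["purple"] += 1
    total = sum(buckets.values()) or 1
    # Fraction of words in the top-10 band; a higher fraction suggests more
    # predictable, AI-like text.
    return buckets, buckets["green"] / total

buckets, green_fraction = classify_ranks(response['real_topk'])
print(buckets, green_fraction)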

Conclusions – How to Detect AI-Generated Text

AI-generated text is popping up everywhere, from Marketing communications to Sales offers to Tech Support incident responses. While most of these industries (and their customers) are fine with the use of AI text, there are some industries where the growing prevalence of AI-generated material is becoming a concern:

  • Scientific Publishing – the number of scientific papers featuring fake, plagiarized or AI-generated information is increasing at an alarming rate.
  • Social Media – bots have always been a problem on social media sites, but now it’s even easier to create AI-generated posts and spam/overwhelm an audience.
  • Advertising – AI-generated/manipulated video has the potential to mislead viewers since famous personalities can be scripted to say things they never would in real life.

Having a reliable way to detect AI-generated text is becoming imperative. While there has been some discussion around embedding watermarks in AI-generated content, they have yet to be implemented.

The two example detectors worked through in this post represent only the tip of the iceberg in terms of AI-generated text detection. Since LLMs are heavily used for generating text, there is a strong need for tools that can identify their output reliably. If you are interested in the relevant research, there is a growing body of papers on detecting AI-generated text that is worth exploring.

Next Steps:

Create a free ActiveState account and see how easy it is to generate a chain of custody for your software supply chain
