Detection of AI-created content


As AI has progressed at generating code, writing, music, and other language-based work through LLMs, there has been a parallel growth in detecting AI-generated content.

Like everything in AI, detection is a probability game: estimate how likely a given combination of words/tokens is, and compare that against what a reference model would produce. Using a standard model, such as one from OpenAI, makes the comparison easier. However, many different methods are used.
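The comparison can be sketched as follows. This is a minimal toy, where an invented bigram probability table stands in for the reference model (`REF_MODEL`, `UNSEEN_PROB`, and every probability are made up for illustration, not taken from any real model):

```python
import math

# Toy reference "model": bigram probabilities P(next word | current word).
# All values are invented for illustration.
REF_MODEL = {
    ("the", "cat"): 0.20, ("cat", "sat"): 0.30, ("sat", "on"): 0.50,
    ("on", "the"): 0.40, ("the", "mat"): 0.10,
}
UNSEEN_PROB = 1e-4  # floor probability for bigrams the model has never seen

def avg_log_likelihood(tokens):
    """Mean log-probability of each token given its predecessor."""
    logps = [
        math.log(REF_MODEL.get((prev, cur), UNSEEN_PROB))
        for prev, cur in zip(tokens, tokens[1:])
    ]
    return sum(logps) / len(logps)

predictable = "the cat sat on the mat".split()
unusual = "mat the on sat cat the".split()

# Text that matches the model's expectations scores higher (less negative).
assert avg_log_likelihood(predictable) > avg_log_likelihood(unusual)
```

A real detector would use a large language model's token probabilities instead of a hand-written table, but the principle is the same: score how well each token fits the model's expectations.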

An AI detector can draw on several signals: frequency patterns, attention patterns, and the variability or entropy across a document. A human writer tends to drift in style, while AI-written text follows a model's distribution more closely. Note, however, that the detection process itself usually needs to be trained as another model: collect texts written by AI, compare them with human-written texts, and train a classifier on the difference. That trained model can then be used to flag AI-written text.
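That training step can be sketched as a toy classifier. The feature choices here (sentence-length variability, a stand-in for "drift", and type-token ratio), the tiny one-sample corpora, and the nearest-centroid rule are all illustrative assumptions; real detectors use far richer features and models:

```python
import math
import statistics

def features(text):
    """Two toy stylometric features: sentence-length variability
    ('burstiness') and vocabulary diversity (type-token ratio)."""
    sentences = [s.split() for s in text.split(".") if s.strip()]
    lengths = [len(s) for s in sentences]
    words = [w.lower() for s in sentences for w in s]
    return (statistics.pstdev(lengths), len(set(words)) / len(words))

def centroid(samples):
    """Average feature vector of a labeled set of texts."""
    feats = [features(t) for t in samples]
    return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))

def classify(text, human_c, ai_c):
    """Label a text by its nearest centroid in feature space."""
    f = features(text)
    return "human" if math.dist(f, human_c) < math.dist(f, ai_c) else "ai"

# Invented one-sample "training corpora": varied vs. uniform sentences.
human_sample = ("I ran. Then yesterday we wandered across the old bridge "
                "near town. Rain.")
ai_sample = "The system works well. The model runs fast now. The output looks fine here."

human_c = centroid([human_sample])
ai_c = centroid([ai_sample])

probe = "The report reads well. The data looks clean now. The chart shows trends here."
label = classify(probe, human_c, ai_c)  # uniform sentences land near the AI centroid
```

The design is deliberately simple: once labeled examples exist, any classifier (logistic regression, gradient boosting, a fine-tuned transformer) can be trained on the same human-vs-AI split.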

This is necessarily a continuous progression, since the models are changing fast, and so is the output they produce; they also keep acquiring new techniques. For example, a feature often used for detection was perplexity: lower perplexity usually meant the text was more "expected" and hence more likely AI-generated.
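Perplexity itself is just the exponential of the average negative log-probability a model assigns to each token. A minimal sketch, with invented per-token log-probabilities standing in for a real model's output:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability.
    Lower values mean the model found the text more 'expected'."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Invented log-probabilities for two texts, to show the relationship:
# the model assigns higher probabilities to the 'expected' text.
expected_text = [math.log(p) for p in (0.5, 0.4, 0.6, 0.5)]
surprising_text = [math.log(p) for p in (0.05, 0.01, 0.2, 0.02)]

assert perplexity(expected_text) < perplexity(surprising_text)
```

A text whose every token had probability 0.5 would have a perplexity of exactly 2: the model was, on average, choosing between two equally likely options at each step.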

I believe any piece fully composed with AI using a standard model will remain relatively easy to detect. But as models become more advanced and adopt more "human" techniques, it will become progressively harder to detect small segments of AI embedded within a larger piece of human creativity.

If you would like to try a detector on the web, use GPTZero (gptzero.me).

Similar Posts

  • Open source protein models

    A company called Profluent (profluent.bio) has been developing protein models that can be used for designing new proteins (https://www.nature.com/articles/s41587-022-01618-2), modeling new CRISPR-Cas sequences (https://www.nature.com/articles/s41586-025-09298-z), and developing LLMs for protein generation (https://www.biorxiv.org/content/10.1101/2025.11.12.688125v1.article-info). What is amazing is that they have open-sourced all their models, and Profluent-E1 is available on GitHub to download and use (https://github.com/Profluent-AI/E1)…

  • DeepSpot

    Kalin Klonchev, the winner of a 2024 competition for AI-based data analysis from the Broad, also created a tool called DeepSpot. It is worth a look for spot analysis of H&E sections: it converts full H&E slide images into "spots", which are then analyzed. Some good links: DeepSpot paper: https://www.medrxiv.org/content/10.1101/2025.02.09.25321567v1 DeepSpot GitHub repository: https://github.com/ratschlab/DeepSpot…

  • Error codes

    There are no error codes in the answers provided in response to AI prompts. The model returns the answer that best fits the prompt or question, but it does not tell you the probability that the answer is incorrect or low-confidence. The conversational AI will…

  • AI Automations

    AI automations have only increased. One interesting tool has been receiving publicity; check it out: https://knowledgework.ai It takes notes while a person works and becomes a second brain. Privacy and access may be of concern, but the capability is available with AI tools.

  • Virtual cell

    Hani Goodarzi of the Arc Institute has been working on the virtual cell. Drugs fail because experimental models overfit; you need to screen drugs against better models of human biology. Geneva is a platform that brings tumor models into a perturbation model – a transcriptomics assay that deconvolves into effect. Take multiple cell lines and then treat…