Turnitin’s AI Writing Detection Capabilities
The AI writing indicator that has been added to the Similarity Report shows the overall percentage of the document that may have been AI-generated. We make this determination with 98% confidence based on data that was collected and verified in our AI Innovation Lab. To open the new AI writing report, select the AI writing indicator. Please note that only instructors and (depending on license) account administrators are able to see the indicator and the AI writing report at this time.
The AI writing report shows the overall percentage of prose sentences, within long-form writing in the submitted document, that Turnitin’s model determines were generated by AI. These sentences are highlighted in blue on the submission text in the AI writing report.
The percentage, generated by Turnitin’s AI writing detection model, is different and independent from the similarity score, and the AI writing highlights are not visible in the Similarity Report.
Turnitin’s AI writing detection model only highlights text that is highly likely to be AI-generated. This is to help ensure that students are treated fairly whilst safeguarding the institution’s academic integrity standards. We must stress that the percentage is interpretive and should not be used as a definitive measure of misconduct or as a punitive tool. Instructors should use this indicative percentage to help them decide how best to handle work that may have been produced or partially produced by AI writing tools.
Frequently Asked Questions
1. How does it work?
When a paper is submitted to Turnitin, the submission is first broken into segments of text that are roughly a few hundred words (about five to ten sentences). Those segments are then overlapped with each other to capture each sentence in context.
The segments are run through our AI detection model, which gives each sentence a score between 0 and 1 indicating whether it was written by a human or by AI. If our model determines that a sentence was not generated by AI, it receives a score of 0. If it determines the entirety of the sentence was generated by AI, it receives a score of 1.
Using the average scores of all the segments within the document, the model then generates an overall prediction of how much text (with 98% confidence based on data that was collected and verified in our AI innovation lab) in the submission we believe has been generated by AI. For example, when we say that 40% of the overall text has been AI-generated, we’re 98% confident that is the case.
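The pipeline described above (overlapping segments, per-sentence 0–1 scores, document-level averaging) can be sketched as follows. This is an illustrative sketch only: the segment size, overlap step, and scoring function are hypothetical assumptions, not Turnitin’s actual parameters or model.

```python
# Illustrative sketch of an overlapping-segment scoring pipeline.
# Segment size, overlap step, and the scoring function are
# hypothetical; they are NOT Turnitin's actual implementation.

def split_into_segments(sentences, size=8, step=4):
    """Break a list of sentences into overlapping segments of
    roughly 'size' sentences, advancing by 'step' so each
    sentence is seen in more than one context."""
    segments = []
    for start in range(0, len(sentences), step):
        segment = sentences[start:start + size]
        if segment:
            segments.append(segment)
    return segments

def document_ai_percentage(sentences, score_sentence):
    """Average per-sentence scores (0 = human, 1 = AI) across all
    overlapping segments to estimate the share of AI-generated text."""
    scores = []
    for segment in split_into_segments(sentences):
        for sentence in segment:
            scores.append(score_sentence(sentence))
    if not scores:
        return 0.0
    return 100 * sum(scores) / len(scores)

# Demo with a stand-in scorer that flags sentences containing "AI":
demo = ["Human sentence one.", "Generated by AI.",
        "Human sentence two.", "Also AI text."]
print(document_ai_percentage(demo, lambda s: 1.0 if "AI" in s else 0.0))  # → 50.0
```

Because segments overlap, each sentence is scored in more than one context; very short documents produce a single segment, which is why short submissions tend toward "all or nothing" predictions (see question 5).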
Currently, Turnitin’s AI writing detection model is trained to detect content from the GPT-3 and GPT-3.5 language models, which includes ChatGPT. Because the writing characteristics of GPT-4 are consistent with earlier model versions, our detector is able to detect content from GPT-4 (ChatGPT Plus) most of the time. We are actively working on expanding our model to enable us to better detect content from other AI language models.
2. What does the percentage mean? What do I do with it?
The percentage shown in the AI writing detection indicator and in the AI writing report is the amount of qualifying text within the submission that Turnitin’s AI writing detection model determines was generated by AI (with 98% confidence based on data that was collected and verified in a controlled lab environment). This qualifying text includes only prose sentences, meaning that we only analyze blocks of text that are written in standard grammatical sentences and do not include other types of writing such as lists, bullet points, or other non-sentence structures.
The percentage shown in the AI writing detection indicator and AI writing report does not take into account any text within the document that is not considered long-form prose.
However, the final decision on whether any misconduct has occurred rests with the reviewer/instructor. They should use the indicator as a means to start a formative conversation with their student and/or use it to examine the submitted assignment in greater detail according to their school's policies.
3. The percentage shown sometimes doesn’t match the amount of text highlighted. Why is that?
Unlike our Similarity Report, the AI writing percentage does not necessarily correlate to the amount of text in the submission. Turnitin’s AI writing detection model only looks for prose sentences contained in long-form writing, meaning individual sentences contained in paragraphs that make up a longer piece of written work, such as an essay, a dissertation, or an article. The model does not reliably detect AI-generated text in the form of non-prose, such as poetry, scripts, or code, nor does it detect short-form or unconventional writing such as bullet points, tables, or annotated bibliographies.
This means that a document containing several different writing types would result in a disparity between the percentage and the highlights.
4. What do the different indicators mean?
Upon opening the Similarity Report, after a short period of processing, the AI writing detection indicator will show one of the following:
- Blue with a percentage between 0 and 100: The submission has processed successfully. The displayed percentage indicates the amount of qualifying text within the submission that Turnitin’s AI writing detection model determines (with 98% confidence based on data collected and verified in a controlled lab environment) was generated by AI.
As noted previously, this percentage is not necessarily the percentage of the entire submission. If text within the submission was not considered long-form prose text, it will not be included. To explore the results of the AI writing detection capabilities, select the indicator to open the AI writing report.
The AI writing report opens in a new tab of the window used to launch the Similarity Report. If you have a pop-up blocker installed, ensure it allows Turnitin pop-ups.
- Gray with no percentage displayed (- -): The AI writing detection indicator is unable to process this submission. This can be due to one, or several, of the following reasons:
- The submission was made before the release of Turnitin’s AI writing detection capabilities. The only way to see the AI writing detection indicator/report on historical submissions is to resubmit them.
- The submission does not meet the file requirements needed to successfully process it for AI writing detection. In order for a submission to generate an AI writing report and percentage, the submission needs to meet the following requirements:
- File size must be less than 100 MB
- File must have at least 150 words of prose text in a long-form writing format
- Files must not exceed 15,000 words
- File must be written in English
- Accepted file types: .docx, .pdf, .txt, .rtf
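The requirements above can be pre-checked before submitting. The sketch below is a hypothetical validator, not part of Turnitin’s product: the word count is a naive whitespace split (which only approximates how a real system counts prose words), and the English-language requirement is left out because reliable language detection is beyond a short example.

```python
import os

# Hypothetical pre-check of a file against the requirements listed
# above. Word counting here is a naive whitespace split; the
# English-language requirement is not checked.

ACCEPTED_EXTENSIONS = {".docx", ".pdf", ".txt", ".rtf"}
MAX_FILE_SIZE = 100 * 1024 * 1024  # 100 MB

def check_submission(path, text):
    """Return a list of requirement violations (empty list = OK)."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ACCEPTED_EXTENSIONS:
        problems.append(f"unsupported file type: {ext or 'none'}")
    if os.path.getsize(path) >= MAX_FILE_SIZE:
        problems.append("file is 100 MB or larger")
    words = len(text.split())
    if words < 150:
        problems.append(f"only {words} words; at least 150 required")
    if words > 15000:
        problems.append(f"{words} words exceeds the 15,000-word limit")
    return problems
```

Running such a check locally would explain most gray-indicator cases before a file is ever uploaded.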
- Error ( ! ): This error means that Turnitin has failed to process the submission. Turnitin is constantly working to improve its service, but unfortunately, events like this can occur. Please try again later. If the file meets all the file requirements stated above, and this error state still shows, please get in touch through our support center so we can investigate for you.
5. What can I do if I feel that the AI indicator is incorrect? How does Turnitin’s indicator address false positives?
If you find AI written documents that we've missed, or notice authentic student work that we've predicted as AI-generated, please let us know! Your feedback is crucial in enabling us to improve our technology further. You can provide feedback via the ‘feedback’ button found in the AI writing report.
False positives (human-written text incorrectly flagged as AI-generated) sometimes occur with lists that lack structural variation, text that repeats itself verbatim, or text that has been paraphrased without developing new ideas. If our indicator shows a higher amount of AI writing in such text, we advise you to take that into consideration when looking at the percentage indicated.
In a longer document with a mix of authentic writing and AI-generated text, it can be difficult to determine exactly where the AI writing begins and the original writing ends, but our model should give you a reliable guide to start conversations with the submitting student.
In shorter documents where there are only a few hundred words, the prediction will be mostly "all or nothing" because we're predicting on a single segment without the opportunity to overlap. This means that some text that is a mix of AI-generated and original content could be flagged as entirely AI-generated.
Please consider these points as you are reviewing the data and following up with students or others.
6. What is the availability of Turnitin’s AI writing detection indicator?
In this first iteration, Turnitin’s AI writing detection indicator is available to non-student users using Turnitin Feedback Studio (TFS), TFS with Originality, Turnitin Originality, Turnitin Similarity, Simcheck, Originality Check, and Originality Check+. It is available for customers using the web-based versions of these platforms or via an integration with an LMS or with Turnitin’s Core API.
Turnitin understands the concern many academics have over AI-generated writing and the effects it will have on the academic world. For this reason, we’ve made this initial preview available to institutions for no extra charge to their current license. Beginning January 1, 2024, only customers licensing Originality or TFS with Originality will have access to the full AI writing detection experience. This information is subject to change.
7. Will the capabilities work for submissions in a language other than English?
No. Our current model will only process English language submissions. Turnitin is exploring non-English support but it is not available at this time.
8. Will administrators be able to decide whether their users see the AI writing detection indicator?
No. Administrators will not be able to turn off this new capability from the account settings. It will be available for all non-student users.
Student users will not be able to see the AI writing detection indicator.
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. It may not always be accurate (it may misidentify both human-written and AI-generated text), so it should not be used as the sole basis for adverse actions against a student. Further scrutiny and human judgment, applied in conjunction with an organization's specific academic policies, are required to determine whether any academic misconduct has occurred.