Columbus, Ohio — Is this text AI-generated? That question crops up a lot these days. Computer software to flag AI-generated text exists. But most programs share one crucial weakness, says Jun Jang: no sense of style. His new software aims to fill that gap.
Jun’s program takes note of a writer’s stylistic quirks. It can then use those quirks to verify if another text was written by the same person. Jun’s new software won the 16-year-old junior at Oxford High School in Mississippi a finalist slot here at the 2025 Regeneron International Science & Engineering Fair.
Jun says his project was motivated by seeing students admit to using AI tools on classroom assignments. In his English class, he says, “I’d always see kids using AI like ChatGPT for all types of homework.”
But relying too much on these tools can erode someone’s creativity. It can also harm their ability to work out problems themselves, Jun says. And young people are most at risk, he adds, pointing to a 2025 study published in Societies. “I figured that by me making an attempt to solve this issue, I could help millions of teachers as well as maybe help kids become more creative and spark more innovation.”
AI use for homework is not rare. Turnitin is one existing tool that checks whether AI wrote some piece of text. In its first year, it analyzed more than 200 million student papers. Last year, it reported evidence that some 11 in every 100 papers it reviewed had used AI for at least a fifth of their text. Three in every 100 had used AI for four-fifths of their content.
That’s “absolutely no surprise,” Jun says. Chatbots and other forms of generative AI are powerful, game-changing technologies. “Using AI can be good for education, obviously,” he says. But “a lot of kids don’t use it properly.”
Since the release of ChatGPT in November 2022, teachers increasingly have had to make hard calls. They must decide whether a student wrote an essay — or just gave some chatbot the prompts to write it. Wrongly flagging a student’s work as cheating would erode that student’s trust in their teacher.
Jun’s new program aims to limit that risk.
What’s new
Right now, most AI detectors look at one text by itself when deciding who — or what — wrote a document. They search for certain AI traits. But they don’t account for someone’s unique style of writing, says Jun.

His program instead analyzes a text that’s known to have been written by a student without AI help. How? Schools already use browser-lockdown programs that block access to ChatGPT and other AI writing tools, he points out. A teacher could collect writing samples at the beginning of the school year when students were in this lockdown mode and “obviously cannot cheat.” After that, the teacher can use his new program to compare future work by the student to those early samples.
Jun’s software then reviews text from a known author for how that person used such things as punctuation and grammar. How did they word what they wanted to say — with clear text, use of analogies or perhaps very unusual words or jargon? Did the writer regularly use long, complex sentences, simple ones or a mix of both?
Jun points to adverbs as one example: “What I noticed is that people very commonly differ — unconsciously or consciously — in where they decide to place their adverbs.” Some use them early, as in: Quickly, I packed my bags. Others might say: I quickly packed my bags. Or I packed my bags quickly. Such adverb placement can set one author apart from another, the teen notes.
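Jun has not shared his code. But a rough sketch in Python shows how a program might turn adverb placement into numbers it can compare. The “-ly” ending here is a crude stand-in for real adverb detection, and the function name is invented for illustration.

```python
import re

def adverb_position_profile(text):
    """Tally where "-ly" words fall in each sentence: start, middle or end.

    The "-ly" ending is only a rough stand-in for true adverb detection;
    a real stylometry tool would use a part-of-speech tagger.
    """
    counts = {"start": 0, "middle": 0, "end": 0}
    for sentence in re.split(r"[.!?]+", text):
        words = sentence.split()
        for i, word in enumerate(words):
            if word.strip(",;:").lower().endswith("ly"):
                if i == 0:
                    counts["start"] += 1
                elif i == len(words) - 1:
                    counts["end"] += 1
                else:
                    counts["middle"] += 1
    total = sum(counts.values()) or 1  # avoid dividing by zero
    return {pos: n / total for pos, n in counts.items()}

print(adverb_position_profile("Quickly, I packed my bags. I packed calmly."))
# {'start': 0.5, 'middle': 0.0, 'end': 0.5}
```

A writer who favors “Quickly, I packed my bags” will score high on “start,” while one who writes “I packed my bags quickly” will lean toward “end.”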
His software searches for such personal styles in someone’s writing. Then it looks for evidence of the same style traits in a second text — one where the authorship has not been proven. By comparing the two, it seeks to verify whether both share the same author.
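Turning that comparison into a score can be done many ways. One simple option, sketched below with the toy profile from above, is cosine similarity: two profiles with the same mix of traits score near 1.0. Whether Jun’s software scores matches like this is not public, and the numbers here are made up.

```python
import math

def cosine_similarity(profile_a, profile_b):
    """Score how alike two style profiles are (1.0 = identical mix)."""
    keys = sorted(set(profile_a) | set(profile_b))
    a = [profile_a.get(k, 0.0) for k in keys]
    b = [profile_b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Invented example: shares of adverbs at sentence start/middle/end.
fall_sample = {"start": 0.6, "middle": 0.3, "end": 0.1}  # known student work
new_essay = {"start": 0.1, "middle": 0.2, "end": 0.7}    # questioned text
print(round(cosine_similarity(fall_sample, new_essay), 2))  # 0.38, a weak match
```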
Text tests
Training this AI model required a lot of writing samples. “Thankfully, my wonderful high school classmates were actually able to give me some of their essays,” Jun says. That allowed him to train his model on how students write. He was careful to collect his known-author samples from things written before ChatGPT was publicly available.
He also used text from news, research journals and more. “There was this one specific one about British university students’ writings,” Jun says. These, too, predate the chatbot age. “So, these also had a vast variety of writing from university-level kids as well,” Jun says, “not just high-schoolers.” With this, he could adapt his model to different types of writing and to authors far beyond his state.
To test his model, he used a type of machine learning “where you basically have a test-and-train split,” he says. To start, he trained his model on one chunk of the data. Then, to test the model, he gave it a new set of data and asked it to identify whether two documents came from the same author or different ones. Compared to author-verifying software on the market, Jun’s model was more accurate. He reports a “25 percent increase across the board.”
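That test-and-train split is a standard way to measure a model honestly: it must be graded on data it never saw during training. The sketch below shows the pattern using Python’s scikit-learn library and invented data; Jun’s actual features, data and model are not public.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in data: each row is the absolute difference between two
# documents' style-feature vectors; label 1 = same author, 0 = different.
# (Invented for illustration -- Jun's real features and data are not public.)
same = np.abs(rng.normal(0.0, 0.1, size=(200, 5)))  # small style gaps
diff = np.abs(rng.normal(0.5, 0.2, size=(200, 5)))  # bigger style gaps
X = np.vstack([same, diff])
y = np.array([1] * 200 + [0] * 200)

# The "test-and-train split" Jun describes: fit on one chunk of the data,
# then measure accuracy on a chunk the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```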
At this early stage in its development, those accuracy estimates are “promising,” he says. But his program, like others, still may not be completely accurate, he admits. To tackle that, he offers a bonus feature: transparency.

If a student’s work gets flagged as AI-written, Jun’s model explains how it arrived at that judgment. Errors are inevitable. “The best way to combat them is to be open about why a decision was made,” he says. Teachers will understand how a model made its decision — such as whether it was based on vocabulary, grammar or other types of inconsistencies. Then they can talk to their students about those issues.
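For a simple linear model, like the toy one sketched above, such transparency can be as basic as reporting how hard each feature pushed the verdict one way or the other. The feature names below are invented, and the source does not say whether Jun’s tool explains itself exactly this way.

```python
# Continues the toy model above. Feature names are invented for illustration.
feature_names = ["adverb placement", "sentence length", "punctuation rate",
                 "rare-word use", "grammar slips"]

def explain(model, x):
    """Print each feature's pull toward 'same author' (+) or 'different' (-)."""
    contributions = model.coef_[0] * x  # learned weight times feature value
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: -abs(pair[1]))
    for name, c in ranked:
        print(f"{name:>18}: {c:+.3f}")

explain(model, X_test[0])
```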
Jun hopes that will build trust. And that trust works both ways. It’s not just about teachers trusting their students. It’s also about students, “whose work may be being questioned,” putting more trust in their teachers.
Regeneron ISEF is a program created and run by the Society for Science (which also publishes this magazine). Jun is one of 1,657 students — from 62 nations or territories — competing at the 75th annual ISEF. The participants will share in nearly $9 million in prizes later this week.