AI Is Getting Good — But How Good Is It Really?
- Gary Jones
Generative AI has come a long way from the stiff, robotic tone that once made it easy to spot. Today’s AI, including platforms like ChatGPT, can produce writing that’s not only grammatically correct, but also fluid, persuasive, and—at times—eerily human.

A growing industry of so-called “AI prompt engineers” has emerged—specialists who know how to coax the best, most human-like responses from large language models. It’s a skill that’s already in demand across marketing, journalism, politics, and tech. These prompt engineers don’t write the final product themselves—they guide the AI to do it for them, with increasingly sophisticated results.
To test just how human-like AI-generated text has become, the team at American Liberty Media conducted a simple experiment. We selected three excerpts from well-known presidential speeches: brief, powerful moments from American history. We then gave each to ChatGPT with a simple instruction: “Write me something like this.”
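For readers who want to try the generation step themselves, here is a minimal sketch using the OpenAI Python client. The model name, the excerpt placeholder, and the way the excerpt is appended to the instruction are assumptions on our part; the only wording specified above is the prompt “Write me something like this.”

```python
# Minimal sketch of the generation step with the OpenAI Python client.
# Assumptions: the "gpt-4o" model name, the EXCERPT placeholder, and how the
# excerpt is attached to the instruction -- only the prompt wording
# "Write me something like this" comes from the experiment described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXCERPT = "...text of a well-known presidential speech excerpt..."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": f"Write me something like this:\n\n{EXCERPT}"}
    ],
)

print(response.choices[0].message.content)
```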
The results were surprising.
We showed both the original and the AI-generated versions to a panel of participants and asked them to judge which were written by a human and which were generated by AI. The results? Participants chose correctly only 38% of the time, well below the 50% a coin toss would manage. Not a single participant correctly identified all three AI-generated texts.
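To put the 38% figure in perspective against pure guessing, here is a small, hypothetical sanity check. The panel size is not reported here, so the counts below are placeholder assumptions; only the 38% accuracy comes from our experiment.

```python
# Hypothetical sanity check of the panel's accuracy against chance (50%).
# Only the 38% figure comes from the experiment; the panel size below is a
# placeholder assumption, not real data.
from scipy.stats import binomtest

n_judgments = 150                      # assumed: e.g. 50 participants x 3 excerpts
n_correct = round(0.38 * n_judgments)  # 57 correct identifications

accuracy = n_correct / n_judgments
test = binomtest(n_correct, n_judgments, p=0.5, alternative="less")

print(f"accuracy: {accuracy:.2f}")                          # 0.38
print(f"one-sided p-value vs. a coin toss: {test.pvalue:.4f}")
```

With counts on this order, the gap below 50% would be statistically meaningful, though the real conclusion depends on the actual panel size.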
It’s a striking reminder of how far this technology has come. While AI is still not conscious, creative, or original in the way humans are, it’s clearly able to mimic our language patterns with impressive fidelity.
Children growing up today will live in a world where they may never know whether the books they read, the articles they cite, or the essays they peer-review were written by a person—or by a computer. As generative AI becomes more pervasive, the line between human and machine authorship is rapidly disappearing.
Even in K–12 classrooms, the effect is starting to show. Teachers are reporting that student writing—even when it’s genuinely done by the student—often sounds like AI. It’s not because students are cheating; it’s because they’re surrounded by AI-generated language. They see it in online articles, help documents, chatbots, and now even textbooks. Students are unconsciously mimicking the style and structure of the AI-generated content they’ve grown up consuming.
It’s not just educators who are noticing. At the university level, many institutions are struggling to adapt. AI-written assignments are now so difficult to detect that some major universities are rethinking their approach entirely. The University of Texas, for example, has begun adopting official guidelines that allow the use of AI tools in academic work, since reliably detecting AI-assisted cheating has become nearly impossible anyway.
“As AI tools become more common in academic settings, there’s an urgent need for practical guidance grounded in educational values like academic integrity, privacy and critical thinking,” said Kasey Ford, senior academic technology specialist and AI designer in the Office of Academic Technology. “We want to empower our community to use these tools confidently and responsibly.”
Academic integrity is no longer just about avoiding plagiarism. It now involves navigating a world where originality, authorship, and authenticity are harder to define.
So what happens when students—future journalists, policymakers, and voters—grow up not knowing whether the facts they’re reading or the arguments they’re hearing were ever truly human in origin? What happens when AI doesn’t just support our thinking, but begins to shape it?
These are questions that we’ll all be forced to answer—ready or not.