How AI Killed a 133-Year-Old Princeton Tradition

· The Atlantic

In 1876, an editorial in Princeton’s newly founded campus newspaper, The Princetonian, argued against the use of proctors to monitor exams. Proctoring was “a means of bad moral education,” the author wrote. Treat students as presumptively dishonest, and some would become so; treat them as honorable, and they would learn to behave honorably. And so the editorial board suggested a different approach: “Let every man write at the end of his paper a pledge that he has neither given nor received help, and let professors and tutors address themselves to some better business than watching for fraud.”

That proposal was eventually embodied in Princeton’s famous Honor Code, adopted in 1893 and modified only lightly in the ensuing 133 years. When students take their final exams, professors leave the room. Students write down a pledge not to cheat. They are expected to report anyone who does. Any student accused of impropriety comes before a jury of their peers.

The Honor Code had a good run. F. Scott Fitzgerald (who enrolled at Princeton in 1913 but did not graduate) once wrote that violating it “simply doesn’t occur to you, any more than it would occur to you to rifle your roommate’s pocketbook.” The code lasted through two world wars, the upheaval of the 1960s, the disillusionment of Watergate, and even the rise of search engines and SparkNotes. It finally met its match in generative AI. Yesterday, after the rise of AI-facilitated cheating became too obvious to ignore, Princeton’s faculty voted to begin proctoring exams again. Technically, the Honor Code is still in place. Students will still sign a pledge that they didn’t cheat. But now professors will be watching to make sure they’re telling the truth. The Honor Code can’t run on the honor system anymore.

[Rose Horowitch: What an Ivy League education really gets you]

Even at Princeton, obviously, some students have always cheated. Fitzgerald himself was scandalized when, during a campus visit a decade after his time at the university, a member of the football team told him that his roommate knew of unreported Honor Code violations. (Shortly thereafter, a fellow alumnus shared the same suspicion with the famous novelist.) “The implication was that these were many,” Fitzgerald wrote to the dean. Back then, however, academic dishonesty was constrained not only by codes of conduct but by the amount of effort it required. A student who wanted to cheat had to go to the trouble of finding someone who would let them copy their answers.

The internet and the shift to doing work on computers rather than by hand dramatically lowered the barriers to cheating. A study of thousands of students at Rutgers University found that, in 2017, a majority copied their homework answers from the internet. AI has taken that dynamic to new extremes. It can mimic any writing style, produce a unique essay, and even add typos to make its output appear human-authored. The available detectors are not foolproof, and studies have consistently found that teachers are worse than they think at detecting AI usage. “It’s a temptation,” Anthony Grafton, a longtime Princeton history professor who retired last year, told me. “I can imagine the student with the devil over his or her left shoulder and the angel over his or her right shoulder.”

Since generative AI became widely available, in fall 2022, Princeton has seen rising academic dishonesty. The Committee on Discipline, which has jurisdiction over take-home assignments, found 82 students responsible for academic violations in the 2024–25 academic year, compared with 50 students in 2021–22. Those are just the students who got caught; the real numbers are undoubtedly much higher. In the school newspaper’s survey of graduating seniors, which 501 students responded to, 30 percent said that they had cheated, 28 percent said that they had used ChatGPT on an assignment when it was not allowed, and 45 percent said that they knew of cheating by a peer and chose not to report it. Michael Laffan, a Princeton history professor, told me that he has sat in coffee shops near campus and watched as students copied responses from ChatGPT and passed them off as their own.

The ease of AI-enabled cheating seems to be imparting a “bad moral education” of its own. Cheating has become more visible, Nadia Makuc, a senior at Princeton and former chair of the Honor Committee, told me. Students post about violating the Honor Code on Fizz, the campus’s anonymous social-media app. That makes students who play by the rules feel like suckers. “There’s an air of people cheating on take-homes and people just using ChatGPT,” Makuc said. “As long as people think there is more cheating, it encourages more cheating.”

[Ian Bogost: College students have already changed forever]

Princeton’s professors are finally trying to reset the system. Proctors are just one component. In the past year, the number of take-home exams at Princeton has declined by more than two-thirds. Next year, the economics department will require its majors to do an oral defense of their research projects, Smita Brunnermeier, the director of undergraduate studies, told me. David Bell, a history professor, has also added oral exams and switched from short take-home papers to in-class writing in blue books. One of his colleagues in the history department requires students to write their papers in Google Docs so that he can review the stages of their composition.

In short, what the 1876 editorial called a “system of suspicion and surveillance” is making a comeback. “It does change something about the student-faculty relationship,” William Aepli, a graduating senior and the former chair of the group that represents students accused of violating the Honor Code, told me. “It’s one thing to have proctoring from the very beginning. It’s another thing to have this tradition of self-proctoring exams and trust that students abide by the Honor Code, and then to take that away.”

Bell told me that AI has made him more wary of his students, and that they can tell. When he changes his assignments to keep them from cheating, they understand that he doesn’t trust them. “Inevitably, all the solutions involve a greater degree of surveillance—that’s the one thing in common,” he said. “Maybe we’ll just have to get used to this new kind of police state of instruction. But I’m not eager to see where this leads.”

Much of higher education’s value rests on the assumption that cheating is an exception, not the rule. A diploma is meaningless if employers and graduate programs can’t trust that graduates learned something in college. Prospective students and their families must believe that their tuition dollars will purchase a good education. And taxpayers need to trust that students at public universities are getting something from their four years of subsidized education. Rampant AI use breaks down these signals. “It is bad policy to suspect a man of being a rogue in order to be sure that he is a scholar,” The Princetonian warned in 1876. Perhaps so. But the alternative is even worse.