There was another viral essay about AI and academic research this week, “Academics Need to Wake Up on AI.” The thesis of the piece is that much of academia is in denial about how good Claude Code has gotten at producing our main unit of intellectual production: the 6,000-8,000-word journal article.

I took a snarky swipe at it on Bluesky, and Miranda Yaver wrote an excellent response piece, “In Defense of Good Writing.” There are three points I’d like to expand on a bit further, though.

First, let’s start here: the author is not entirely wrong. It seems very likely to me that the peer-reviewed journal article as a primary unit of academic production is indeed dead. And, as an institutional phenomenon, that’s a massive change, one that we ought to take seriously.

For decades, we have pressured junior scholars to pump out publishable articles in a race to attain scarce tenure-track lines. The result hasn’t been good for the production of actual knowledge. It also hasn’t been a stable equilibrium; this was all bound to crumble under its own weight regardless.

The way these institutional incentives work is that you learn to write papers that are barely publishable, then submit that work to journals and hope you pull a trio of peer reviewers who will say “sure, this is fine.” If you get rejected, you resubmit to another journal and hope you draw a more favorable set of reviewers. It’s a crapshoot.

There are two major career-advancement hurdles in professional academia. You need to (1) land the (increasingly scarce) tenure-track job, and then (2) publish enough work within six years to earn tenure. Universities are large, bureaucratic institutions, and there is simply too much research being published for everyone to read across disciplines. So the emphasis gravitates toward counting stats. How many articles has this individual published? How highly rated are the journals they published in?

Journal articles aren’t the only game in town; some researchers <raises hand> go the book route. But the job market is bad, and getting worse. I very nearly washed out of academia at step 1. If Rutgers University hadn’t been making a big cluster hire in 2010, I simply wouldn’t be in this career today. And, just to be clear, not all journal articles are crap. There are still plenty of manifestly excellent ones.

But the purpose of a system is what it does. And our status quo ante system of academic production incentivizes article quantity over article quality, amidst conditions of extreme resource scarcity where people either follow their incentives or find a different line of work.

Peer review was already stressed to the breaking point. It DOES NOT SURVIVE when a young researcher can have Claude Code produce a lit review, gather data, conduct a regression analysis, and slap on a passable discussion and conclusion section. Of course we will be flooded by AI-written/researcher-lightly-reviewed articles. Of course peer reviewers will either opt out of the (voluntary, thankless) labor of offering genuine feedback, or will have Claudebot heavily “assist” them in reviewing.

And this is a serious problem for Hiring Committees and Promotion & Tenure Committees. Universities are slow, lumbering bureaucracies. This is an appropriate time for them to freak out and start adjusting to the “death of the journal article.” They’re measuring the wrong thing. They will have to start measuring something else.

That brings me to my second point, though: good riddance!

I’ve been reading C. Thi Nguyen’s new book this week, The Score: How to Stop Playing Someone Else’s Game. (It’s excellent. You should read it.) One of the main insights of the book is that metrics affect how we play games. We ought to be mindful in how we approach games (And gamification. And bureaucracy. There’s a lot there.). We ought to ask ourselves if this is the game we actually want to be playing.

Journal articles aren’t social science. They’re just a unit of measurement. They’re how we keep score. Producing journal articles isn’t what we are actually meant to be doing — we’re supposed to be learning meaningful things about people, power, and society.

If you think Claude Code is a better social scientist than you, then you’re probably right. But that means, at some point, you stopped trying to answer interesting/puzzling questions and started trying to win the publish-the-most-articles race. That isn’t a game worth saving. It was a subversion of actual knowledge-production all along.

That’s not to say that we should put a blanket ban on the use of this stuff. I can imagine plenty of productive ways to use agentic AI as a research tool.

My first empirical project, way back in 2007, was the Blogosphere Authority Index. It was a ranking system for tracking the top U.S. political blogs on the left and the right. I created it after meeting with my advisor and telling him about the ways that DailyKos and other top leftwing blogs were structurally different from their rightwing counterparts. They were behaving more like political organizations than solo-authors-writing-online. My advisor asked me how common this was. I said to him “oh, that will be easy to answer.” And then I figured out that it was just-a-little hard to answer, and also that no other researchers had even attempted to answer it.

So I wrote up a system for counting things (literally on the back of a napkin). And then I hand-coded the results. And then I showed them to my dad (who used to build databases for a living), and he said “y’know, I could automate this for you.” We kept that system running for a couple of years. Every month, we would go over the automated results, identifying weird errors and talking them through. We abandoned it in 2010 because it had become more trouble than it was worth.
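For flavor, here is a minimal sketch of what that kind of counting system might look like in code. Everything in it (the metrics, the numbers, the averaging of per-metric ranks) is a hypothetical illustration, not the actual Blogosphere Authority Index methodology:

```python
# Toy composite blog-ranking sketch. The metrics and values below are
# hypothetical illustrations, not the actual Blogosphere Authority Index.

# Hypothetical monthly snapshot: blog -> (inbound links, daily visitors, comments per post)
SNAPSHOT = {
    "blog_a": (1200, 85_000, 310),
    "blog_b": (950, 120_000, 45),
    "blog_c": (400, 30_000, 520),
}

def rank_on(metric_index: int) -> dict[str, int]:
    """Rank every blog on one metric; 1 = highest value."""
    ordered = sorted(SNAPSHOT, key=lambda b: SNAPSHOT[b][metric_index], reverse=True)
    return {blog: position + 1 for position, blog in enumerate(ordered)}

def composite_ranking() -> list[tuple[str, float]]:
    """Average each blog's per-metric ranks into one score (lower = more authority)."""
    per_metric = [rank_on(i) for i in range(3)]
    scores = {b: sum(ranks[b] for ranks in per_metric) / len(per_metric) for b in SNAPSHOT}
    return sorted(scores.items(), key=lambda item: item[1])

if __name__ == "__main__":
    for blog, score in composite_ranking():
        print(f"{blog}: average rank {score:.2f}")
```

The point of the sketch is just that “a system for counting things” can be small. The hard part, then and now, is staying close enough to the data to catch the weird errors.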

If I were a grad student today, and I faced that same question from my advisor, I would probably use Claudebot to vibe-code the automated portions. I would still make sure I stayed close enough to the data to know what it was telling me, and to identify the weird errors. It would be a lost opportunity for a nice bonding activity with my dad (that project was really fun, actually), but a net positive for my actual engagement in the act of social scientific research.

One of my big insights back in 2012, when I was still writing about things like social science research methods, was that an awful lot of interesting data is simply never captured, and therefore can never be analyzed. Agentic AI offers a real opportunity to build, maintain, and update datasets that otherwise would fall into disrepair. I may be wearing my Butlerian Jihad Now t-shirt this morning, but I don’t actually believe we need to destroy all the AI systems for all purposes.
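To make that dataset point concrete, here is a minimal sketch of the “keep a dataset from falling into disrepair” idea: a small job that captures a timestamped snapshot of some source on a schedule. The URL and the recorded fields are placeholders I made up for illustration; a real pipeline would add parsing, validation, and the kind of monthly human review described above:

```python
# Minimal dataset-maintenance sketch: append one timestamped snapshot per run.
# SOURCE_URL and the recorded fields are hypothetical placeholders.
import csv
import datetime
import urllib.request

SOURCE_URL = "https://example.com/data"  # placeholder source
DATASET_PATH = "snapshots.csv"

def capture_snapshot() -> None:
    """Fetch the source and append a timestamped row to the running dataset."""
    with urllib.request.urlopen(SOURCE_URL, timeout=30) as response:
        body = response.read()
    row = [
        datetime.datetime.now(datetime.timezone.utc).isoformat(),
        len(body),  # crude stand-in for whatever you'd actually parse out
    ]
    with open(DATASET_PATH, "a", newline="") as f:
        csv.writer(f).writerow(row)

if __name__ == "__main__":
    capture_snapshot()  # schedule it daily (cron, an agent, whatever)
```

None of that is hard. What agentic tools change is the cost of keeping such a system patched and running for years, which is exactly where these projects usually die.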

That brings me to the third thing, though. Academia is facing two crises right now. The AI story is, by far, the lesser of the two.

I wrote a piece last August titled “What It’s like being a political communication professor right now.” I’m going to quote a bit from it, so I can expand on the theme:

“All told, it remains the best job a person like me could have. My old friend Chris Anderson once described academia as ‘the only profession where people are paid to think slowly.’ That has always stuck with me. It is such a tremendous gift.

It is even more of a gift because jobs like mine have become vanishingly rare.

My peers and I didn’t exactly pull up the ladder behind us or anything. It’s more like we ran across a crumbling bridge. We arrived at the tail end of American government treating higher education as if it were a priority worth investing in. Tenure-line professorships aren’t cheap. Once you start treating academia as a business and trying to identify cost savings, you’ll eventually decide to try to educate those student-consumers cheaper-and-worse via a more precarious workforce. (Mark this down as another example for the ‘we can have nice things, but they either won’t maximize profit or they won’t last’ file.)

And the job itself is, well, fraying.

Most of the headlines I see about higher ed these days focus on the AI problem. For me, the much bigger issue is the authoritarianism problem.”

It matters quite a bit that Claude Code got good at 6,000-8,000-word journal articles just as higher education is crumbling under an extended assault. It shapes the dynamics and makes everything so much worse.

Imagine adding these new tools to the research community at a time when resources were plentiful. (Universities hiring more tenure-track lines, research budgets expanding, etc.) Those are circumstances where we, collectively, have the time and space to treat Claude Code as an exciting tool that (a) saves time on some existing projects, (b) opens up new avenues for research in other projects, and (c) is a complete dead end in still others. Cool. Fascinating. What an actually-fun time to be working in the field. Let’s all try things out, see how it goes, and then meet at a series of conferences to talk it through.

We are instead adding these tools after decades of shredding the university system, along with a year-plus of full-frontal assault. That is basically the worst possible backdrop for the introduction of this particular new technology! It’s also, just generally, the worst context for conducting social science research. Researchers studying online mis- and disinformation have faced years of harassment (from Members of Congress!) just because they studied things that powerful people would prefer were not studied. Universities are in a defensive crouch, trying to figure out which people, activities, and entire departments they can get rid of to keep the government from making an example of them.

This is all very bad! And there isn’t much that individual researchers can do about it. (Join AAUP. Take collective action. We stand together or we all fall apart.) But what a decidedly inopportune time to introduce a new tool that can do a lot of things, some of which will help you answer interesting research questions, and others of which will help you game the journal metrics so that hopefully you’re the last-remaining-winner-of-the-game-we-ended-up-playing.

The best thing a researcher can do right now is focus on “What questions do I actually want to answer?” Then do the work of ANSWERING THEM. Scratch that intellectual itch. Engage in the pleasure of finding things out. If Claudebot is a useful tool, then tinker with it. But don’t focus on chasing the journal metrics. They’re collapsing anyway.

And, for the discipline as a whole, let’s remember that the metrics aren’t the thing we were meant to pursue. We are indeed in crisis, but not because of Claudebot. The whole system of knowledge-production has been defunded, attacked, and sent into a spiraling decline. That isn’t an AI story. Let’s not fool ourselves by pretending otherwise.
