Weaponized Malfunction
How am I able to write so many papers, across so broad a range of subjects, in such a short time?
The answer is surprising. I have depression. Not the “I feel sad sometimes” kind. The “I literally cannot find the will to get out of bed,” diagnosed, medication-treated kind. If you don’t have it, you don’t understand it. I recommend William Styron’s Darkness Visible and Andrew Solomon’s The Noonday Demon to acquaint yourself with my black dog.
This depression does several things.
First, it provides long stretches of time in bed, with nothing for my mind to do but anguish over darker moments, lost opportunities, and imagined future failures—and also churn through the more interesting questions I have picked up along the way. The mind does not stop. It cannot. So it works.
Second, it gives me a powerful incentive to stay busy. When I get bored and listless, I fall into depression. Been there, done that. No thanks. So I try to outrun my frustrations.
Third—and this is perhaps my superpower in disguise—it drives what I call intellectual cross-training. When I am working on something, and I get stuck, I first try to power through, as the saying goes: “Force doesn’t work—use more force.”1 This lodges the problem firmly in my recurring thought process, whether I want it there or not. When that fails, and since I am trying to outrun frustration anyway, I switch to a different topic entirely. Rinse and repeat. Often, a problem from one discipline is solved by an insight from a completely disconnected realm. The black dog turns out to be an excellent research assistant, even if the working conditions are terrible.
This is why the output looks the way it does. It is not a production line. It is a mind that will not stop, pointed at a set of problems it finds genuinely interesting, running on the fuel of not wanting to be still.
Paradoxically, being at a less-than-famous college means I am not constrained by funding and promotion incentives. So I can actually research whatever the hell I want to.
Tom Waits, Please Call Me, Baby — Heart of Saturday Night (1974) · Gustave Doré, Paradise Lost, Book I (1866)
The Lonely Scholar Problem
The greatest danger for an independent researcher is mistaking internal consistency for external validity. Without the friction of a laboratory group, a departmental seminar, or a collaborator who will tell you when you are wrong, it is easy to construct elegant proofs for fundamentally malformed questions.
I have done this. Once. I produced six independent proofs of a temperature transformation law that the field had already recognized as the wrong question to ask.2 The paper was built on the scalar vocabulary of a century-old debate. Israel’s covariant formalism—the actual established foundation of relativistic thermodynamics—uses a different language entirely. I missed it. An arXiv rejection3 forced the literature search that should have come first.
The lesson was not “be more careful,” even supposing I could be. The lesson was structural: the process needed a stage that does not currently exist in most solo research workflows. I added it.
The Pre-Proof Ontology Check
My process starts with a conversation, usually a fairly benign one. At some point, something will pique my curiosity, and the dangerous “hmmm… I wonder…” will surface. That’s where the idea comes from. Most often, I’ll sit with it for a couple of days, mulling it over with my other personalities, the black dog, and my wife.
However, before any serious drafting, every new idea must pass a conceptual audit. Five questions, answered explicitly, before a single theorem is attempted:
- Can the claim be stated in the field’s established terminology? If not, stop and do the literature search first.
- What are the 3–10 foundational papers that represent the field’s natural home for this question?
- How does this formulation differ from existing work? Is the difference a genuine contribution or a reframing of a solved problem?
- What would the harshest critic say is malformed about the premise, even before reaching the argument?
- What evidence would prove this is a category error rather than a bold insight?
If question five cannot be answered, the project halts. A claim that cannot in principle be falsified is not physics. It is not philosophy either. It is noise.
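To make the gate concrete, here is a minimal sketch of the audit as code. Everything in it is an assumption for illustration: the `OntologyCheck` structure, the field names, the idea of recording the answers as data. It is not tooling I ship; the point is only that question five is a hard stop, not a suggestion.

```python
"""Minimal sketch of the pre-proof ontology check as a hard gate.

All names here are illustrative, not part of any published tooling.
"""
from dataclasses import dataclass
from typing import Optional


@dataclass
class OntologyCheck:
    """Explicit answers to the five audit questions for one idea."""
    native_statement: Optional[str]           # Q1: claim in the field's terminology
    foundational_papers: list[str]            # Q2: 3-10 papers anchoring the question
    novelty_delta: Optional[str]              # Q3: how this differs from existing work
    harshest_premise_critique: Optional[str]  # Q4: what a critic calls malformed
    falsification_evidence: Optional[str]     # Q5: what would prove a category error


def gate(check: OntologyCheck) -> None:
    """Halt the project unless every question has an explicit answer.

    Question 5 is the hard stop: no falsification criterion, no project.
    """
    if check.native_statement is None:
        raise RuntimeError("Q1 unanswered: do the literature search first.")
    if not 3 <= len(check.foundational_papers) <= 10:
        raise RuntimeError("Q2 unanswered: find the field's natural home.")
    if check.novelty_delta is None:
        raise RuntimeError("Q3 unanswered: contribution or reframing?")
    if check.harshest_premise_critique is None:
        raise RuntimeError("Q4 unanswered: audit the premise, not the argument.")
    if check.falsification_evidence is None:
        raise RuntimeError("Q5 unanswered: unfalsifiable claims are noise. Halt.")
```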
This stage exists because of the EM Thermality incident.
Adversarial Review
I use AI extensively: for draft editing, for literature search, for LaTeX, and for hostile reads. The drafting agent and the crusher are explicitly partitioned. The crusher is given one instruction: find the fatal flaw. Not “improve the paper.” Find the flaw. If it cannot, the paper proceeds.
This is not a substitute for human peer review. It is a filter that ensures the paper arriving at peer review has already survived a serious attempt to kill it.
When a hostile read finds a real error, the paper is fixed. When a hostile read finds nothing, the original argument is defended. The distinction matters. More than once I have been told to dilute a conclusion that turned out to be correct. I did not dilute it. Of course, running several “crush reviews” also carries the risk of over-editing, but that’s a different issue.
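A minimal sketch of the partition, assuming some model API behind a placeholder `call_llm` (not a real endpoint, and not my actual prompts): the drafter and the crusher never share instructions, and a draft proceeds only after the crusher comes up empty.

```python
"""Sketch of the drafter/crusher partition. `call_llm` is a stand-in for
whatever model API you use; nothing here names a real endpoint."""

DRAFTER_PROMPT = "Improve clarity and structure. Do not weaken claims."
CRUSHER_PROMPT = (
    "You are a hostile referee. Your only job is to find the fatal flaw "
    "in the argument. Do not suggest improvements. If no fatal flaw "
    "exists, say exactly: NO FATAL FLAW FOUND."
)


def call_llm(system_prompt: str, text: str) -> str:
    """Placeholder for a model call; assumed, not a real API."""
    raise NotImplementedError


def adversarial_pass(draft: str, max_rounds: int = 3) -> str:
    """Alternate drafting and hostile reads until the crusher finds nothing.

    The two roles never share a prompt: the drafter polishes, the crusher
    only attacks. A paper proceeds only after surviving a kill attempt.
    """
    for _ in range(max_rounds):
        verdict = call_llm(CRUSHER_PROMPT, draft)
        if "NO FATAL FLAW FOUND" in verdict:
            return draft  # survived a serious attempt to kill it
        # A real flaw was found: fix it, then face the crusher again.
        draft = call_llm(DRAFTER_PROMPT, f"Fix this flaw:\n{verdict}\n\n{draft}")
    raise RuntimeError("Still finding flaws after max_rounds: escalate to a human.")
```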
Forensic Scholarship
High output from a single author at a non-elite institution triggers heuristics. I cannot change the institution. I can change the evidentiary trail.
Every citation is validated against primary sources. Every field-specific claim uses that field’s native terminology. The mathematical derivations are cross-verified against canonical models. Where code or data underlies the results, it is publicly available—on PyPI, on GitHub, with commit histories that predate the submission, sometimes by years.
This is not performative. It is the difference between a paper that looks credible and a paper that is credible. The distinction matters more when the other signals are ambiguous.
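One of those checks is mechanical enough to sketch. Assuming a local clone and a known submission date, plain `git log` suffices to confirm that a repository’s history predates the paper; the function names below are illustrative, not a tool I publish.

```python
"""Sketch of one forensic check: does a repo's commit history predate
a submission date? Uses plain `git log`; paths and dates are inputs."""
import subprocess
from datetime import datetime, timezone


def first_commit_date(repo_path: str) -> datetime:
    """Return the author date of the oldest commit in the repository."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--reverse", "--format=%aI"],
        capture_output=True, text=True, check=True,
    ).stdout
    # --reverse prints oldest first; %aI is the strict ISO 8601 author date.
    return datetime.fromisoformat(out.splitlines()[0])


def predates_submission(repo_path: str, submitted: datetime) -> bool:
    """True if the code's history is older than the submission date."""
    return first_commit_date(repo_path) < submitted


# Example (illustrative path; the submission date must be timezone-aware):
# predates_submission("./mylib", datetime(2024, 1, 1, tzinfo=timezone.utc))
```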
What This System Does Not Prevent
It does not prevent me from being wrong. For instance, a recent post of mine credits the double-blind process with removing prestige ratchets at the desk-editor level. That is not correct, and I probably need to set the record straight. The pre-proof ontology check catches malformed questions, not incorrect answers to well-formed ones. The crush reviews catch logical errors, not empirical errors that require an experiment to detect. The reference validation catches missing citations, not missing concepts.
The system is designed to fail early and fail visibly. A paper that fails at Stage 1 costs an afternoon. A paper that fails after submission costs months and leaves a record. Early failure is the goal.
The system is also not complete. It will be updated when it fails again. The EM Thermality incident is in the incident log. So is the prestige ratchets case. The next one will be too.
Why This Matters
The academic system is not well-suited to solo researchers working across fields at speed. The heuristics are calibrated for large groups, slow output, and narrow specialization. I am none of those things.
The correct response for me is to build a support system that strips away noise, weaponizes self-doubt, leverages ruminations, and ruthlessly distills insight.
Will it work? I don’t know. But the alternative—doing nothing and sinking into the black mire of depression—is worse. So it’s either this or better drugs. And I’m not sure I have enough money for better drugs.
The cross-training described above is not a metaphor. Below is the full research provenance tree: every paper, seed idea, and dependency, traced from 2022 to the present. Purple nodes are ancestors; green nodes are published. The edges show where an idea from one field solved a problem in another.
View Research Provenance Tree →