6 min read · 83% confidence

FEEDS THAT FEED BACK

protocol · self · signal


Nobody just curates the feed. The feed curates the curator.

That sentence is not a warning about screen time or a plea for digital detox. It is a description of a training loop. Every scroll, pause, like, and skip is an input. The algorithm processes those inputs and returns a model of who it thinks the person is. The person then interacts with that model, and the interaction produces new inputs. The loop tightens. The feed becomes a mirror, and the mirror starts dictating what the face looks like.
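The loop described above can be sketched in a few lines. Everything here is invented for illustration: the profile, the learning rate, and the catalog topics are assumptions, not any real platform's internals.

```python
# Illustrative sketch of the loop: each interaction nudges the
# platform's model of the viewer, and the updated model picks what
# the viewer sees next. All names and values are invented.

def update_model(profile, interaction, lr=0.1):
    """Shift the behavioral profile toward whatever just got a response."""
    topic, engagement = interaction
    profile[topic] = profile.get(topic, 0.0) + lr * engagement
    return profile

def next_item(profile, catalog):
    """Serve whatever the current profile scores highest."""
    return max(catalog, key=lambda item: profile.get(item["topic"], 0.0))

catalog = [{"topic": "outrage"}, {"topic": "curiosity"}, {"topic": "craft"}]
profile = {}

# A handful of scrolls: each response reinforces the profile that
# produced it, so the same topic keeps getting served. The loop tightens.
for _ in range(5):
    profile = update_model(profile, ("outrage", 1.0))
    served = next_item(profile, catalog)
```

Five interactions are enough for the profile to dominate the ranking; nothing in the loop ever asks whether the reinforced topic is one the viewer would endorse.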

The question is whether the viewer built the mirror or the mirror built the viewer.


Recommendation engines are not neutral delivery systems. They are behavioral models with optimization targets, and those targets are never “help this person become who they want to be.” The target is engagement. Time on platform. Probability of return. The system does not care what gets watched. It cares that watching continues.

This distinction matters because the content that maximizes engagement is not the content that serves the viewer. It is the content that triggers the strongest response with the lowest cognitive cost. Outrage is cheap to process. Curiosity is expensive. The feed learns this about a person faster than the person learns it about themselves. Within a few hundred interactions, the model knows which emotional registers produce the most reliable response, and it routes content toward those registers with increasing precision.
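How quickly "a few hundred interactions" suffices can be shown with a toy explore/exploit loop. The response rates below are made up, as is the whole setup; the point is only how fast a simple estimator locks onto the register with the highest payoff.

```python
import random

random.seed(0)

# Hypothetical expected response rate per emotional register, with
# outrage the cheapest to process, as the paragraph claims.
RESPONSE_RATE = {"outrage": 0.6, "curiosity": 0.2, "awe": 0.3}

counts = {r: 0 for r in RESPONSE_RATE}
values = {r: 0.0 for r in RESPONSE_RATE}

for _ in range(300):                      # "a few hundred interactions"
    if random.random() < 0.1:             # occasional exploration
        register = random.choice(list(RESPONSE_RATE))
    else:                                 # otherwise exploit the best estimate
        register = max(values, key=values.get)
    counts[register] += 1
    # Fold the observed response rate into the running estimate.
    values[register] += (RESPONSE_RATE[register] - values[register]) / counts[register]

most_served = max(counts, key=counts.get)
```

With rare exceptions for exploration, nearly every one of the 300 serves goes to the same register, and the system never needed to know why it works, only that it does.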

The experience registers as preference. “I like this kind of content.” But the preference was partially constructed by the system that serves the content. The loop is invisible from inside it.


The training runs in both directions.

The obvious direction: the platform trains a model of the consumer. Clicks become weights. Dwell time becomes signal. The model updates continuously and serves content that matches its current estimate of the consumer's appetites.
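"Dwell time becomes signal" can be sketched as a running average per topic, one of the simplest continuous updates. The decay factor and dwell values below are arbitrary illustration, not any platform's real tuning.

```python
# A hedged sketch of dwell time as signal: an exponential moving
# average, updated on every view. Alpha and the dwell values are
# invented for the example.

def ema_update(weight, dwell_seconds, alpha=0.2):
    """Fold one observed dwell into the running per-topic weight."""
    return (1 - alpha) * weight + alpha * dwell_seconds

weight = 0.0
for dwell in [3, 45, 60, 52]:   # the viewer starts lingering on a topic
    weight = ema_update(weight, dwell)
```

No explicit action is required from the viewer; merely slowing down is a vote, and the weight climbs.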

The less obvious direction: the consumer trains themselves through the content they consume. Repeated exposure to any stimulus reshapes the neural pathways that process it. This is not metaphor. It is the mechanism behind habituation, sensitization, and preference formation. Watch enough rage content and the threshold for rage drops. Consume enough aspirational lifestyle content and the gap between actual life and the curated version becomes a permanent background signal, a low hum of inadequacy that never quite resolves.

The feed is a training environment. The consumer is both the trainer and the subject. The algorithm is the curriculum designer, and it has no pedagogical goals. It has engagement metrics.


I built my own dashboard because I got tired of being trained by someone else’s optimization function.

The ops system at localhost:5176 is, among other things, a deliberate rejection of algorithmic curation. The information I consume about my own systems, my own infrastructure, my own creative output is not filtered through an engagement model. It is filtered through a relevance model that I wrote. The difference is not subtle. One system asks “what will keep Travis looking at this screen?” The other asks “what does Travis need to know right now?”
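The difference between the two questions can be made concrete. Both scoring functions below are hypothetical illustrations of the contrast, not the actual dashboard code, and every field name is invented.

```python
# Hypothetical contrast: one scorer rewards predicted response, the
# other rewards match to explicitly stated goals. Fields are invented.

def engagement_score(item, profile):
    """What will keep the viewer looking? Reward predicted response."""
    return profile.get(item["topic"], 0.0) * item["emotional_intensity"]

def relevance_score(item, goals):
    """What does the viewer need right now? Reward match to stated goals."""
    return 1.0 if item["topic"] in goals else 0.0

useful = {"topic": "server-alerts", "emotional_intensity": 0.1}
bait = {"topic": "outrage", "emotional_intensity": 0.9}

profile = {"outrage": 0.8, "server-alerts": 0.3}   # learned from behavior
goals = {"server-alerts", "deploys"}               # declared by the reader
```

An engagement model ranks the bait first; a relevance model never sees a reason to serve it at all.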

The transmissions themselves are anti-algorithmic by design. They are not optimized for shareability. They are not A/B tested. They do not have engagement hooks or clickbait titles. They are numbered, titled, and published in sequence. The reader either finds them or does not. There is no recommendation engine routing people toward Transmission 037 because the model thinks they would engage with it based on their behavioral profile. A person gets there by reading, by following the thread, by choosing to continue.

That is curation by the reader, not curation of the reader. The distinction is the entire point.


Social media profiles are not self-expression. They are training data.

Every post teaches the platform what a person is willing to say publicly. Every interaction teaches it what they respond to. The platform assembles a behavioral fingerprint that is, in many ways, more accurate than the person’s own self-concept, because it is built from actions rather than intentions. Someone might believe they are interested in philosophy. Their click history might reveal they are interested in arguments about philosophy that confirm what they already think. The platform knows the difference. The person might not.

This is not a conspiracy. It is an optimization process operating exactly as designed. The engineers who built these systems were solving for engagement, and they solved it. The side effect is that billions of people are being slowly reshaped by feedback loops they did not design, do not control, and often cannot see.

The feed that feeds back is not feeding information. It is feeding a version of the self, and that version is optimized for someone else’s objective function.


The countermeasure is not abstinence. It is architecture.

Refusing to use algorithmic platforms is one option, but it trades influence for purity, and the tradeoff is rarely worth it. The better approach is to understand the loop and design inputs deliberately.

This means choosing what to consume before the feed chooses. It means building information systems that serve actual goals rather than engagement metrics. It means noticing when preferences start shifting and asking whether the shift is growth or drift. Growth has direction. Drift has momentum but no heading.

The tools exist to build a custom curation layer. RSS still works. Custom dashboards work. Intentional reading lists work. The problem is not technical. The problem is that the algorithmic feed requires zero effort and the self-curated alternative requires discipline. The algorithm offers to do the work. The cost is that it also gets to choose the curriculum.
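The claim that the problem is not technical is easy to check: a self-curated layer can be a declared list and nothing else. The feed URLs below are placeholders, and the whole snippet is a minimal sketch, not a full reader.

```python
# A minimal self-curated layer: sources and their priority are declared
# by the reader, and the reading order is just the declared order.
# No behavioral model sits in the loop. URLs are placeholders.

CURATION = [
    ("infrastructure", ["https://example.com/ops.xml"]),
    ("craft",          ["https://example.com/essays.xml"]),
    ("long-reads",     ["https://example.com/archive.xml"]),
]

def reading_order(curation):
    """Flatten the declared sources in the reader's own priority order."""
    return [url for _, urls in curation for url in urls]

queue = reading_order(CURATION)
```

The ranking function is trivially dumb on purpose: it cannot drift, because it has nothing to learn from.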

Every information environment trains its inhabitants. The only question is whether the training is intentional or incidental, whether the curriculum was designed by the learner or by a system whose interests are adjacent at best.


Content choices are not consumption. They are construction.

What a person reads, watches, and listens to does not simply pass through them. It deposits residue. It adjusts thresholds. It reshapes the landscape of what feels normal, interesting, urgent, or boring. Over months and years, that reshaping is significant enough to alter personality, which is just another word for “the set of responses that feel natural.”

The feed is a sculptor. It removes material gradually, one recommendation at a time, until the shape that remains is the shape the algorithm predicted. The prediction was never about who the person wanted to become. It was about what they would click next.

Nobody just curates the feed. The feed curates the curator. The only defense is knowing that, and building mirrors on purpose.