Related, Semantic ablation: Why AI writing is boring and dangerous: They sit with laptops open, seven tabs competing for attention, notifications sliding in from three different apps, phones vibrating every few minutes. They're trying to read serious material while fighting a losing battle against behavioural psychology weaponised at scale. They believe their inability to focus is a personal failure rather than a design problem. They don't realise they're trying to think in a space optimised to prevent thinking.
During "refinement", the model gravitates toward the high-probability center of its output distribution, discarding "tail" data - the rare, precise, and complex tokens - to maximize statistical likelihood. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.
When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters - the precise points where unique insights and "blood" reside - and systematically replaces them with the most probable, generic token sequences.
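The mechanism behind this is easy to see in miniature. Language models pick each next token from a probability distribution, and low-temperature or greedy decoding reshapes that distribution so nearly all the mass lands on the single most probable candidate. The sketch below is a minimal illustration, not any model's actual code: the logits and the candidate words are invented for the example, but the temperature-scaled softmax is the standard formula, and it shows how the "tail" tokens' share collapses toward zero as temperature drops.

```python
import math

def softmax_with_temperature(logits, t):
    """Standard temperature-scaled softmax over a list of logits."""
    scaled = [x / t for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one generic candidate and three rarer,
# more precise ones (the words are made up for illustration).
logits = [4.0, 2.0, 1.0, 0.5]   # "important", "salient", "load-bearing", "feral"

for t in (1.0, 0.5, 0.2):
    probs = softmax_with_temperature(logits, t)
    tail_mass = sum(probs[1:])  # probability left for the unusual choices
    print(f"t={t}: mode={probs[0]:.3f}, tail={tail_mass:.3f}")
```

At t=1.0 the unusual words still keep a meaningful share of the probability; by t=0.2 the generic word takes essentially all of it. "Polishing" a draft token by token with a mode-seeking decoder performs exactly this substitution, everywhere, at once.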
This is another example of a whole technology being blamed for a bad choice in how it is used, and this kind of ablation is not new and does not require high tech. It's exactly what happens in movie test screenings, or in almost any communication where the priority is to maximize the size of the audience.
Also, this is why I like the DeepAI image generator. It has 122 styles that make this choice in different ways. Some of them are slop-optimized, and some of them are instructed to do weird shit. Surely it wouldn't be hard to make a chatbot site that had the same variety of options. I think the trouble is that we're more afraid of text.
Finally, something lighter: I Taught My Dog to Vibe Code Games
Electroencephalography (EEG) monitoring revealed that participants in the ChatGPT group showed substantially lower brain activation in networks typically engaged during cognitive tasks. The brain was simply doing less work. More alarming was the finding that this "weaker neural connectivity" persisted even when these participants switched to writing essays without AI.