This fits into a wider pattern.
When the creators & consumers have aligned worldviews & interests, allowing creators to perform cognitive labour on behalf of consumers makes sense. When their interests are not aligned — when the media or technical landscape is adversarial to the users — then any simplifying assumptions made by creators are at best ideologically suspect.
Many of the major growing pains related to the internet (and particularly the web) essentially come down to an “Eternal September” moment, where a set of technologies designed for hobbyist use in a community with relatively aligned interests gets inserted into a commercial context where multiple adversarial parties are involved. (Spam, clickbait, fake news, intrusive advertising, all manner of security problems ranging from social engineering to SQL injection, ‘dark UX patterns’, 419 scams, and trolling can all essentially be blamed on giving a system designed for good-faith cooperation to a bunch of people who would rather con each other to gain small advantages.)
Societies have an array of tools for limiting the damage done by bad-faith actors. Unfortunately, the cruel optimism of the people who design online communities either undercuts these mechanisms (in reality, power structures are very conditional, since subordinates who lack trust in their superiors’ judgement will ignore orders; in computer systems, power structures are treated as much more cut and dried, compounding mistakes made by the powerful) or expands their power beyond what is reasonable (as with the human flesh search engine & other mechanisms that pile on shame out of proportion with the original failure).
Recently I’ve seen what looks like an upswell in the general understanding that the world is complex & can’t be easily understood or modeled, which makes me a little more hopeful for the future. However, easy solutions (even if they are wrong) can be very lucrative. Any system we design should be mindful of how it presents information & how that presentation affects society.