Wired, O'Reilly, and others have been publishing articles about the rising potential for an RSS reemergence. RSS, the relatively obscure, techie-centric publishing technology that lets users aggregate their own feed of web content, is heralded as a refreshing alternative to contemporary news curation. Today, in an age when a few large internet companies control news curation (and therefore control what people read), these writers (and many others) argue that we need to regain control of our own reading habits by decoupling them from social media; instead, we should curate our own RSS feed subscriptions. By doing this, they argue, the semi-algorithmic middleman that brings articles to readers will be cut out. Readers will directly (and exclusively) access the content they want to see without being tracked, analyzed, or otherwise judged.
But the web today is not set up for an RSS revival. RSS itself is an imperfect technology that can be abused by publishers and was abandoned by readers for good reasons. The central problem RSS hopes to solve—over-curation of news—certainly still exists, but RSS is not the right technology to solve it.
Limits of RSS guarantee a poor reading experience
Publishers have complete control over what goes in their RSS feeds, which degrades the usefulness of RSS aggregators and makes for a confusing, dissonant, and inelegant user experience. Without any standardization, users spend more time parsing headlines and clicking on links to seek out what they want to read.
Drinking from the Firehose
RSS is a technology optimized for readers who subscribe to infrequently-published feeds, every article of which might be worth their attention. Using RSS to read Daring Fireball, for example, makes sense: John Gruber publishes only a few times each day, and each post on his blog is—at least for me—worth taking a look at. When I browse his RSS feed, I see a list of content he has published since I last looked at his feed, I open up the articles that pique my interest, and I read them. This is, by all accounts, a pleasurable RSS experience. I see new stuff I want to read, and I read it. But the Daring Fireballs and personal blogs of the world don’t fully encapsulate what I want to read on the internet.
Major publications have a much harder time wrangling RSS feeds. Ars Technica offers 19 different feeds you can subscribe to, or really 38, since Ars subscribers get access to versions that don't truncate content. Subscribers to the main feed got fifteen new articles in their RSS readers on April 23, which I argue is too many to be useful. Larger sites are even worse: the New York Times, the Washington Post, CNN, and Reuters each offer dozens of RSS feeds, usually one that publishes everything new and several more for each section of the paper. Any one of these feeds puts out enough content to drown readers and, since every article in an RSS feed carries the same visual weight in a reader application, offers no indication of what's actually important. Readers get lost in dozens of unimportant headlines and miss the ones that are significant.
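To make the "equal visual weight" point concrete, here is a minimal sketch of what an aggregator actually has to work with per entry, assuming Python's third-party feedparser library; the Ars Technica feed URL is shown purely for illustration. Every entry boils down to a title, a link, a date, and a summary, with no field that signals relative importance.

```python
# A minimal sketch of what an RSS aggregator sees per entry.
# Assumes the third-party `feedparser` package (pip install feedparser);
# the feed URL is illustrative.
import feedparser

feed = feedparser.parse("https://feeds.arstechnica.com/arstechnica/index")

for entry in feed.entries:
    # Each entry exposes roughly the same fields: title, link,
    # publication date, and a blob of summary HTML. There is no field
    # for popularity, editorial priority, or relevance, so a reader
    # application has little choice but to render every item with the
    # same visual weight.
    print(entry.get("published", "unknown date"), "-", entry.title)
    print("  ", entry.link)
```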
It’s hard to assign blame when reading through one’s RSS aggregator feels like drinking from a firehose. In frustration, I’ve unsubscribed from every feed that publishes more articles I skip over than articles I click on. But these major news publications don’t offer a more curated way of browsing their content, so I feel lost. Publishers could offer more granular feeds, letting me subscribe to, say, only the most popular U.S. political news that affects technological and educational infrastructure and innovation, but that would be cumbersome for readers and a heavy lift for publishers. Until news organizations take on that level of curation, Facebook, Twitter, and other services that track my interests and interactions will always be better at recognizing that those are the articles I’m going to engage with.
Loss of discoverability
One of the reasons social media has become a viable news curation service is inter-network sharing: on social media, I read not only the content publishers put out (if I subscribe to a publication’s Facebook or Twitter feed) but also the content my connections are interacting with. I learn about new blogs, new websites, and new publications because they appear in my social media feed relatively unprompted, and this is what lets me feel sufficiently informed.
RSS has no such discoverability: the feeds I’m subscribed to are the feeds I get. The decentralized nature of RSS means I have no way of seeing what’s popular, what my friends and coworkers will be talking about the next day, or which coverage of a given event went deeper. In this hypothetical decentralized future internet, discoverability would be the responsibility of individual readers, who might outsource it to individuals (like Gruber or Jason Kottke) or services (like Reddit, Digg, Hacker News, or Lobsters). That turns discovery into an involved, multi-step process, and one that individual readers are unlikely to endure.
UI Gripes
Finally, RSS as a protocol is simultaneously too limiting and too unconstrained. Articles are sent to feed aggregators as bits of HTML, which each aggregator can format however it wants. Even assuming a well-designed feed aggregator (and many are not), the restrictiveness of plain HTML means that publishers resort to strange conventions to format articles (like using subheading tags for various organizational purposes) while aggregators try to find a style that suits every feed equally. Those familiar with discussions around semantic HTML will notice a similarity: an <h2> tag means different things from website to website, and will therefore mean different things from feed to feed. There is no defined way to style RSS content, so publishers and aggregators have to find an awkward middle ground between design and customizability, which ultimately leads to a mediocre experience for everyone.
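A toy illustration of the problem, again assuming the feedparser library (the feed XML here is invented, not any particular publisher's): the item body arrives as arbitrary HTML, and everything about how that markup should look is left to the reader application.

```python
# Invented example feed showing that an RSS item body is just raw HTML.
import feedparser

RSS_ITEM = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>Example</title>
  <item>
    <title>Some article</title>
    <link>https://example.com/article</link>
    <description><![CDATA[
      <h2>Is this a section break, a pull quote, or a subtitle?</h2>
      <p>The protocol does not say; each aggregator styles it differently.</p>
    ]]></description>
  </item>
</channel></rss>"""

entry = feedparser.parse(RSS_ITEM).entries[0]
# feedparser hands the aggregator the markup as-is; every styling
# decision (fonts, heading hierarchy, spacing) is the reader's problem.
print(entry.summary)
```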
RSS feed aggregators will become the next social networks
RSS aggregators will have to solve the problems of curation, discoverability, and design to make RSS a viable replacement for social media news curation, since these are the problems keeping RSS from becoming popular. But any aggregator that does so will ultimately fall into the same traps social media sites have found themselves in. Curation can be either aggregator-led or user-led: user-led curation is often too cumbersome for average users (see also: iTunes Smart Playlists), and aggregator-led curation leads to accusations that the aggregator itself is biased. Discoverability, too, is a slippery slope: what will an aggregator deem acceptable and worth highlighting?
Finally, if an aggregator starts defining more advanced design paradigms, it will do so either from a place of customization (custom elements in feed entries that define styling which only appears in that specific application) or from a place of dominance (if everyone uses Feedly, for example, Feedly’s styling becomes the de facto default that everyone else designs for). Neither solution is ideal, since both fly in the face of the original appeal: the decentralization of media content. If the design of RSS is to be improved without compromising the openness of the protocol, every publisher and every aggregator will have to agree on a new set of standards.
Does RSS have a future?
RSS has a place in the publishing world, especially in the space occupied by the Grubers, the Kottkes, and the personal blogs of the world. But for bigger news stories dominated by larger publications, there needs to be some level of hierarchy, more advanced design, and better discoverability. I’m not giving up on social media yet: for all their flaws, the platforms work as surprisingly capable news aggregation services. I have, however, been using RSS as a supplement, for the articles I don’t want to miss from the publications and writers I want to support.