Can't view rss feed hosted on local computer (localhost=127.0.0.1)

Simply put, fetching on the client side would be a heavy operation. You’re not merely grabbing the data; you’re parsing it and transforming it into something the reader can use for display. If you have 100+ feeds, making 100+ HTTP requests and then parsing all of that in the browser, on top of the heavy interactivity the RSS reader already has, would make for a really poor user experience. On weaker machines, the page could simply hang or die, and the JS changes required to make sure it doesn’t would mean the processing takes a really long time.

It’s not feasible, so the server does all of that: you can throw as much processing power at it as you need, and the reader is responsible for simply (or not-so-simply) rendering the UI.

Plus, as Brett mentioned above, if a single server (or set of servers) has information about all of the feeds together, you can optimize things there in ways that you wouldn’t be able to do in the browser, e.g. if a dozen subscribers need cnn.com/rss, it only needs to be fetched once and can be used by all of them, in addition to optimizations about how often to fetch and what/how much data to cache.
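To make the dedup point concrete, here's a rough sketch of the idea in Python. This is purely illustrative, not NewsBlur's actual code: the function names, the cache shape, and the 5-minute freshness window are all assumptions.

```python
import time

FRESHNESS_SECONDS = 300  # assume: refetch any given feed at most every 5 minutes

_cache = {}  # feed_url -> (fetched_at, parsed_stories)

def fetch_and_parse(feed_url):
    # Stand-in for a real HTTP fetch plus RSS parse.
    return [f"story from {feed_url}"]

def get_feed(feed_url, now=None):
    """Return parsed stories, fetching at most once per freshness window."""
    now = time.time() if now is None else now
    cached = _cache.get(feed_url)
    if cached and now - cached[0] < FRESHNESS_SECONDS:
        return cached[1]  # every other subscriber reuses the same fetch
    stories = fetch_and_parse(feed_url)
    _cache[feed_url] = (now, stories)
    return stories
```

With something like this, a dozen subscribers asking for cnn.com/rss inside the same window trigger only one outbound fetch; the browser-per-user model has no place to put that shared cache.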

It just makes a lot more sense to put that in a server instead of the browser.

A desktop application has access to the computer’s hardware in a way that a web page doesn’t. You just can’t do that kind of heavy lifting in the browser (yet?).

@James, I agree more with your “yet?” than “It’s not feasible”…

From what you say, it’s feasible, but the UI may not be as good on slow machines.

If the server provides functionality that isn’t otherwise possible, e.g. searching the content of feeds that you are _not_ subscribed to (or similar background usage for some purpose), then I agree that a server is a must.

But so far I’m not convinced. Maybe there is something like that; I’m not familiar with the server and how it’s implemented.

But as an end user, who has “64 feeds”, I really don’t care if the server is saving 200 requests from other users to cnn.com/rss because it fetches it only once,
and until there are about 100,000 users on the server, cnn.com will not care either.

From what you say, I understand the end user receives the same product
(slowness is arguable).
But for reasons unknown (yet), the manufacturer of the product willingly pays $$$ a month to manufacture the same product.

That leads me to the conclusion that something is missing, because it doesn’t make sense to me (yet).

These are two different models of RSS fetching. The NewsBlur model provides features that range from difficult and impractical to impossible to accomplish with a strictly-client model.

If you have a client RSS reader, like say NetNewsWire, it works exactly as you describe. Each install will go to cnn.com/rss, download the feed, parse out the stories, and show them to you. Some (like NetNewsWire) include a sync capability, so the app on your phone knows when you’ve already read a story on your laptop, etc.

What NewsBlur adds are things like:

  • more frequent feed fetching; NewsBlur can check the feed every few minutes, then push those updates out to all NewsBlur users. In your case, you could do the same for the 64 feeds, and make 64 requests every few minutes on a client. Or you can make one request every few minutes to NewsBlur, and it will tell you what updated. This becomes more important for big power users; I have a mere 117 feeds, I know some who follow thousands.
  • statistics on feeds; it’s not possible for a client RSS reader to know how many people are subscribed to a feed, just knowing the feed address. NewsBlur can track number of subscribers, how many people thumbs-up or thumbs-down a particular author, tag, or phrase in a feed, when a feed is available, when it changes, what story changes are made between fetches, and likely more that I cannot recall right now.
  • native social features; most clients can take advantage of OS features to share data via email, text, tweet, facebook, etc. NewsBlur includes blurblogs, which is a bit like a mini social network just for NewsBlur users.
  • feed history; if you subscribe to cnn.com/rss from a client RSS reader, you will only get the last X stories in the feed (usually something like 10, 20, 50). As long as someone else subscribed to the feed before you, NewsBlur will know all the stories that appeared in the feed going back some number of days. I think it used to be 30, and now is 90 days?
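The feed-history point is easy to sketch: a feed only exposes its last N stories, but a server that fetches regularly can retain every story it has ever seen, keyed by the story's guid. This is a hypothetical illustration of the idea, not NewsBlur's implementation; the names and data shapes are made up.

```python
history = {}  # guid -> story dict; the server's long-term store

def merge_fetch(fetched_stories):
    """Fold one fetch's worth of stories into the long-term history."""
    for story in fetched_stories:
        # Keep the first copy seen; later fetches of the same guid are no-ops.
        history.setdefault(story["guid"], story)

# The feed itself only ever shows the 2 newest stories per fetch...
merge_fetch([{"guid": "a", "title": "first"}, {"guid": "b", "title": "second"}])
merge_fetch([{"guid": "b", "title": "second"}, {"guid": "c", "title": "third"}])
# ...but the server still knows about all 3, including the one that
# has already dropped off the feed.
```

A client reader that subscribes today starts from an empty `history` and can only ever see what's currently in the feed; the server's head start is what makes the back-catalog possible.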

@John The slowness isn’t “arguable”; it’s the exact reason it’s not done that way. Samuel himself said above he attempted to do it that way, but it was a “nonstarter.” It’s not that it’s impossible, per se, but that you can’t get as much power out of doing it in the browser and thus can’t include many of the features NewsBlur currently has.