Let the user download the list of his saved stories

What I’d like to do is be able to save them in another reader as a backup. Feedbin can import a JSON file of starred items; I’d just like to get them out of NewsBlur in the same format I brought them in from Google Reader, as a JSON file.

Ok, I can understand why there is zero motivation by Samuel to implement this, as it would “allow” users to transition their own data to another service. Please, if this is something you will not work on, just say so. My motivation is not to take my data to another service but to parse out articles for different threads of my research. Sheesh. Since I am not expecting a reply, can someone provide some actual sample API code?

Using the API to get your starred stories does not require any programming ability, nor any code at all. Simply visit http://www.newsblur.com/reader/starre… in any browser and save that file.

This is definitely not meant to discourage people from taking their data elsewhere. NewsBlur puts a “Download OPML” button right on everyone’s preferences page specifically to let you pack up your data and leave any time you want. (That doesn’t include starred stories since there’s no standard way of listing them for RSS readers, but that’s enough to get up and running on any other mainstream product.)

Rather, the reason programming skills are even in the discussion is that once you’ve downloaded your starred stories you’ll presumably want to do something with them, which will likely involve some programming. If you’re using a service that already knows how to read the JSON file and do interesting things with it, then downloading it from the API at the URL above is really all you ever need to do.
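For anyone who does want to script that step, here’s a minimal Python sketch of fetching and skimming the saved-stories JSON. It assumes you pass your logged-in NewsBlur session cookie (the cookie name shown is an assumption) and that the response is an object with a “stories” list whose items carry fields like “story_title” and “story_permalink”; check the actual response in your browser first and adjust accordingly.

```python
import json
import requests

# Minimal sketch: fetch one page of saved (starred) stories and list them.
# Assumes you have copied your session cookie from a logged-in browser
# session; the cookie name and the story field names are assumptions.
SESSION_COOKIE = "paste-your-session-cookie-here"

resp = requests.get(
    "https://www.newsblur.com/reader/starred_stories",
    cookies={"newsblur_sessionid": SESSION_COOKIE},
)
resp.raise_for_status()
data = resp.json()

# Save the raw JSON for later processing...
with open("starred_page_1.json", "w") as f:
    json.dump(data, f, indent=2)

# ...and print a quick summary of what came back.
for story in data.get("stories", []):
    print(story.get("story_title"), "-", story.get("story_permalink"))
```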


Well, same issue as above: this limits you to just 10 starred posts. I want to be clear about what is being asked. I don’t want to beat a dead horse here, but is there a way to download ALL of your SAVED stories to JSON/OPML/CSV/whatever (see the very first post)? I can manipulate those resulting formats to my needs.

I know you can already download the OPML of your subscribed feeds.

Bump? After seeing requests a whole day old get a new feature in the app, here’s to hoping a feature request that is 3 years old gets integrated.

Ok, since the owner apparently has absolutely no desire to implement this feature, I am willing to PAY anyone to build it and submit it through whatever process the sole owner would approve. Takers?

Whoever has the time to take this on, it wouldn’t take very long. Just auth the user, and grab their /reader/starred_stories until it returns 0 stories. Possibly provide it in RSS format, but something else might work as well.
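For anyone taking that up, here’s a rough Python sketch of exactly that loop. It’s an illustration only: the POST /api/login call, the “page” parameter on /reader/starred_stories, and the “stories” key in the response are assumptions based on the public API docs, so verify them against what the endpoint actually returns.

```python
import json
import requests

# Rough sketch of the loop described above: log in, then page through
# /reader/starred_stories until a page comes back with zero stories.
# The login endpoint, the "page" parameter, and the response shape are
# assumptions; verify them before relying on this.
USERNAME = "your-username"
PASSWORD = "your-password"

session = requests.Session()
login = session.post(
    "https://www.newsblur.com/api/login",
    data={"username": USERNAME, "password": PASSWORD},
)
login.raise_for_status()

all_stories = []
page = 1
while True:
    resp = session.get(
        "https://www.newsblur.com/reader/starred_stories",
        params={"page": page},
    )
    resp.raise_for_status()
    stories = resp.json().get("stories", [])
    if not stories:
        break  # an empty page means we have everything
    all_stories.extend(stories)
    page += 1

# Write everything out as a single JSON file.
with open("starred_stories.json", "w") as f:
    json.dump({"stories": all_stories}, f, indent=2)

print(f"Saved {len(all_stories)} starred stories")
```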

“it wouldn’t take very long.”

I am pretty insulted you provided this reply, Samuel, and so should be the other folks who have been requesting this feature for going on 3 years.

Ben, I’d rather you didn’t use that tone, but I understand where you’re coming from. I’ll try to hook up some sort of downloader tomorrow. It’ll be in a format you can’t use anywhere else, but at least it’ll be a convenient way to store your saved stories.

Well, it’s easy to be frustrated when you earlier stated “it’s a great idea” and “it wouldn’t take very long”.

Regarding “a format you can’t use anywhere else”: I don’t believe you recall my earlier comment, “My motivation is not to take my data to another service but to parse out articles for different threads of my research.” Nobody else in the thread is asking for an unusable backup - I assume you are doing backups of all of our unusable data.

Earlier you stated “what format would you want? The API gives you JSON. Do you want XML?”, and yesterday you mentioned “RSS format”, yet today it’s “a format you can’t use anywhere else”. I can’t seem to reconcile all these conflicting statements.

My vote is for any of them, or all of them - anything that lets me do something with the stories for my research and share them with students and other faculty.

So I’m not able to get to this today, but it’s on the list.

I hope it’s still on the list.

Bump again this week.

Bump again this month.

How about something like this? https://gist.github.com/jmorahan/56665898413fbb45212f


Is there a way to integrate that into the NewsBlur website?

Getting errors. Trying to troubleshoot. Here’s the code I used (thanks to adept) to get it to “work”:
http://pastebin.com/gN1P9UQa

So, this grabs each page (same issue as listed above, with just 10 stories per page), headers and all, and puts each one into its own JSON file. I now have over 100 JSON files and can’t concatenate them because of all the headers. Is there a way to put them all into one file with a single header? Maybe someone who knows jshon a bit better than I do?
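As a stopgap, here’s a hedged Python sketch for stitching those per-page files back together: assuming each file is a JSON object whose “stories” key holds that page’s stories, you can concatenate just the “stories” arrays and keep a single top-level header. The filename pattern is only an example; adjust it to however your files are named.

```python
import glob
import json

# Sketch: merge many per-page JSON dumps into one file by concatenating
# their "stories" arrays and dropping the duplicated per-page headers.
# Assumes each file is a JSON object with a "stories" list; the filename
# pattern below is only an example.
paths = sorted(glob.glob("starred_page_*.json"))
merged = []
for path in paths:
    with open(path) as f:
        merged.extend(json.load(f).get("stories", []))

with open("starred_stories_merged.json", "w") as f:
    json.dump({"stories": merged}, f, indent=2)

print(f"Merged {len(merged)} stories from {len(paths)} files")
```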

Or, maybe Samuel can just do this, it’s his code…

So I started trying to improve it and ended up rewriting it in PHP…

https://github.com/jmorahan/newsblur-export/releases


Thanks, John,
I was able to use this to obtain my saved stories. However, I really need this incorporated into the NewsBlur site itself; for an everyday user it’s just not a great process to go through every time. I sure would like to know where this is on the “feature” list - the “next few days” seems to be taking months.

Bump again hoping to see this commitment completed.

This is a terrific idea. I’m not a terribly technical user, I’m afraid, so the API route isn’t as accessible for me as it might be to much of the user base here. Is this still on the to-do list?