Let the user download the list of his saved stories

It’s the lack of any kind of response, even a “no”, to simple requests like this that made it easy for me to decide to drop my premium subscription when it expired in May. That, and a simple “mark all as read” button at the end of the article list. That was promised to be “looked at on Monday”; it’s now been about a year since then, and nothing but crickets…

I know this isn’t technically what you were asking for, but I find the Shift+A hotkey very useful for the “mark all as read” functionality, just in case that makes your experience any better :slight_smile:

Bump again this month/year.

Ok, I added a ticket for this so it won’t get overlooked. I won’t be able to get to it immediately, but it’s now quite high on the list and will actually be done. 

https://github.com/samuelclay/NewsBlur/issues/740

Hey Geoff, I’m sorry to hear you’ve given up on your premium sub, but know that I am still committed to building this. And that “mark read” button at the bottom of the story titles is a great idea. I added a ticket for that as well: https://github.com/samuelclay/NewsBlur/issues/741


Sam, thank you. I just re-subscribed, so Shiloh has a few days of food coming.

I’ve been trying to keep an eye on support and have done a good job cleaning up over the past month. But this particular thread is two pages deep, and that’s when it becomes hard to read each and every reply. If this doesn’t get done soon enough, please feel free to start a new thread for the mark-read issue, or bump one if it already exists.

Here’s to hoping for 2016!

Before summer? I understand “immediately” can mean some time, but it’s now been seven months?

Well Samuel, 2016 is slipping away and I still have no way to share articles with my research students and colleagues. I am sure glad you created a ticket. /s

I created a simple Python script that takes all of the NewsBlur saved stories and pushes them to a markdown document. I’m not sure where the best place to share this is, so I figured I would add it to this thread.

https://github.com/shmcgrath/newsBlurSavedStories


Let’s try for 2017?

How about 2018?

Since it’s been a while, let me confirm: do you want a downloadable archive that’s a bunch of .html files, or just a bunch of .json files for easy parsing by a machine or another service? What are you going to do with this archive?

.json, .csv, anything, really, that contains all my saved stories in a single file I can convert to my needs. As I stated earlier on page 1: “My vote is for any of them, or all of them - anything I can do something with the stories for my research and sharing with students and other faculty.” I do research, and I have all these saved stories that just sit there. I can do my own parsing for stories on “broadband” or “taxes”, for example; I just need a readily accessible archive to run my sorting and queries against.
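(For what it’s worth, if such an export existed as a single JSON file, the topic filtering described above only takes a few lines. This is a minimal sketch: the field names `story_title` and `story_tags` mirror NewsBlur’s story objects, but the export file and its exact format are assumptions.)

```python
import json  # used to load the exported file in real use

def filter_stories(stories, keyword):
    """Return stories whose title or tags mention the keyword (case-insensitive)."""
    keyword = keyword.lower()
    return [
        s for s in stories
        if keyword in s.get("story_title", "").lower()
        or any(keyword in tag.lower() for tag in s.get("story_tags", []))
    ]

# In real use, something like:
#   with open("saved_stories.json") as f:
#       stories = json.load(f)
# Here, a tiny in-memory sample stands in for the export:
sample = [
    {"story_title": "Broadband access in rural areas", "story_tags": ["policy"]},
    {"story_title": "City budget report", "story_tags": ["taxes", "finance"]},
    {"story_title": "Weekend recipes", "story_tags": ["food"]},
]

print([s["story_title"] for s in filter_stories(sample, "broadband")])
# ['Broadband access in rural areas']
```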

If you’re doing research, why not just use the API, like the other posters in this thread? It’s a pretty easy API to use, and you can call it from Python using NewsBlur’s own Python API wrapper: https://github.com/samuelclay/NewsBlur/tree/master/api/newsblur.py

I don’t do Python programming or server-side work. I am not sure what your definition of research is, but my methods don’t include your tools. Please, please, please take a minute to read through the first page of comments regarding the API being inaccessible to non-programmers. What I want is this: an option, either via the webpage or the app, to download all my saved stories in one file, in some sort of accessible file format. That’s it.

FYI, I found a file from a few years ago that someone sent me. They could only pull 10 stories at a time (I bet I have 2-3k now), but it showed me all the column headers that would be included. I’d rather not be dependent upon a good Samaritan to obtain them. I would love to have most of these in there:

story_authors, intelligence/feed, intelligence/tags, intelligence/author, intelligence/title, shared_by_friends/0, story_permalink, reply_count, story_timestamp, share_user_ids/0, user_id, story_hash, id, comment_count, story_title, guid_hash, starred_timestamp, share_count, story_date, share_count_public, friend_user_ids/0, shared_comments, short_parsed_date, share_count_friends

… (lots in here)

public_comments/2/id, commented_by_public/0, commented_by_public/1, commented_by_public/2, story_tags/12, story_tags/13, story_tags/14, story_tags/15, story_tags/16, story_tags/17, story_tags/18, story_tags/19, story_tags/20, story_tags/21, story_tags/22, story_tags/23, story_tags/24, story_tags/25

*cough*

Samuel, any update before this gets lost again?