Benefits of owning a self-hosted NewsBlur?

docker ps -n 10
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                          PORTS                                                                                                                 NAMES
94b8e7d47632   haproxy:latest                                         "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes                    0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:1936->1936/tcp, :::1936->1936/tcp   haproxy
398f31863c97   nginx:1.19.6                                           "/docker-entrypoint.…"   5 minutes ago   Up 5 minutes                    80/tcp, 0.0.0.0:81->81/tcp, :::81->81/tcp                                                                             nginx
fcece3e613bb   newsblur/newsblur_python3:latest                       "/bin/sh -c newsblur…"   6 minutes ago   Up 5 minutes                    0.0.0.0:8000->8000/tcp, :::8000->8000/tcp                                                                             newsblur_web
d83220c98b3b   newsblur/newsblur_node:latest                          "docker-entrypoint.s…"   6 minutes ago   Up 5 minutes                    0.0.0.0:8008->8008/tcp, :::8008->8008/tcp                                                                             node
4422ef5f38b3   postgres:13.1                                          "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes                    0.0.0.0:5434->5432/tcp, :::5434->5432/tcp                                                                             db_postgres
fcbc9133f0b8   redis:latest                                           "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes                    6379/tcp, 0.0.0.0:6579->6579/tcp, :::6579->6579/tcp                                                                   db_redis
b18e4a549a9c   ghcr.io/willnorris/imageproxy:latest                   "/app/imageproxy -ad…"   6 minutes ago   Restarting (1) 44 seconds ago                                                                                                                         imageproxy
3bf47ff0a2b7   newsblur/newsblur_python3                              "celery worker -A ne…"   6 minutes ago   Up 6 minutes                                                                                                                                          task_celery
dbf42eed04dd   docker.elastic.co/elasticsearch/elasticsearch:7.16.2   "/bin/tini -- /usr/l…"   6 minutes ago   Up 6 minutes                    0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp                                  db_elasticsearch
3f84cf896850   mongo:3.6                                              "docker-entrypoint.s…"   6 minutes ago   Up 6 minutes                    27017/tcp, 0.0.0.0:29019->29019/tcp, :::29019->29019/tcp                                                              db_mongo

Just noticed imageproxy is restarting… and docker logs show:
standard_init_linux.go:228: exec user process caused: exec format error
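An "exec format error" at container start almost always means an architecture mismatch: the image was built for a different CPU architecture than the host (e.g. an amd64-only image on an ARM board). A quick way to compare the two is sketched below; the docker command is guarded so the snippet also runs on machines without docker installed.

```shell
# Host architecture, e.g. aarch64 or x86_64:
uname -m
# Architecture the pulled image was built for; if this says amd64
# while the host says aarch64, "exec format error" is expected:
docker image inspect ghcr.io/willnorris/imageproxy:latest \
  --format '{{.Architecture}}' 2>/dev/null \
  || echo "docker not available in this shell"
```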

What version of docker compose are you using? It seems the containers can’t talk to each other.

I just upgraded from 1.21 to 1.28.4, and they seem to be able to talk to each other now. localhost redirects to the web page, but the page looks frozen.

imageproxy still keeps restarting:
standard_init_linux.go:228: exec user process caused: exec format error

I’m guessing imageproxy does not support ARM…

I requested arm support in imageproxy here:
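If the stock image really is amd64-only, one workaround is to pin an ARM-capable build (such as the basecamp/imageproxy port mentioned later in this thread) via a compose override file, so the main docker-compose.yml stays untouched. This is a hypothetical sketch: the service name `imageproxy` is assumed from the container name in the `docker ps` listing above.

```shell
# Write an override that swaps only the imageproxy image; compose
# merges this file with docker-compose.yml automatically.
cat > docker-compose.override.yml <<'EOF'
services:
  imageproxy:
    image: basecamp/imageproxy:latest
EOF
# then: docker compose up -d imageproxy
```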

Glad upgrading fixed the issue! I haven’t seen a frozen page but I’d love to know if that’s because of anything not loading. Does the web inspector console show anything?

This morning I attempted to use the ARM port of imageproxy (basecamp/imageproxy). The imageproxy is working now, but the page still looks as shown. I can’t click Sign In.

But I can click on the bar below now (e.g. iOS), and I can scroll the page.

And here are the haproxy logs now:

[NOTICE]   (1) : path to executable is /usr/local/sbin/haproxy
[WARNING]  (1) : config : parsing [/usr/local/etc/haproxy/haproxy.cfg:49] : a 'monitor fail' rule placed after a 'redirect' rule will still be processed before.
[WARNING]  (1) : config : parsing [/usr/local/etc/haproxy/haproxy.cfg:50] : a 'monitor fail' rule placed after a 'redirect' rule will still be processed before.
[WARNING]  (1) : config : backend 'node_socket' uses http-check rules without 'option httpchk', so the rules are ignored.
[WARNING]  (1) : config : backend 'node_favicon' uses http-check rules without 'option httpchk', so the rules are ignored.
[WARNING]  (1) : config : backend 'nginx' uses http-check rules without 'option httpchk', so the rules are ignored.
[NOTICE]   (1) : New worker (8) forked
[NOTICE]   (1) : Loading success.
[WARNING]  (8) : Health check for server node_socket/node_socket failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 3ms, status: 0/2 DOWN.
[WARNING]  (8) : Server node_socket/node_socket is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'node_socket' has no server available!
[WARNING]  (8) : Health check for server node_favicon/node_favicon failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms, status: 0/2 DOWN.
[WARNING]  (8) : Server node_favicon/node_favicon is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'node_favicon' has no server available!
[WARNING]  (8) : Health check for server node_text/node_text failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 2ms, status: 0/2 DOWN.
[WARNING]  (8) : Server node_text/node_text is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'node_text' has no server available!
[WARNING]  (8) : Health check for server node_page/node_page failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms, status: 0/2 DOWN.
[WARNING]  (8) : Server node_page/node_page is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'node_page' has no server available!
[WARNING]  (8) : Health check for server postgres/db_postgres succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server mongo/db_mongo succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server gunicorn/app_django failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms, status: 0/2 DOWN.
[WARNING]  (8) : Health check for server redis/db_redis succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 UP.
[WARNING]  (8) : Server gunicorn/app_django is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'gunicorn' has no server available!
[WARNING]  (8) : Health check for server redis/db_redis_pubsub succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server redis_story/db_redis_story succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server redis_sessions/db_redis_sessions succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server elasticsearch/db_elasticsearch failed, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms, status: 0/2 DOWN.
[WARNING]  (8) : Server elasticsearch/db_elasticsearch is DOWN. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT]    (8) : backend 'elasticsearch' has no server available!
[WARNING]  (8) : Health check for server node_images/node_images succeeded, reason: Layer7 check passed, code: 200, check duration: 3318ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server node_socket/node_socket succeeded, reason: Layer4 check passed, check duration: 0ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server nginx/nginx succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 UP.
[WARNING]  (8) : Health check for server node_favicon/node_favicon succeeded, reason: Layer4 check passed, check duration: 0ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server node_text/node_text succeeded, reason: Layer7 check passed, code: 200, check duration: 368ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server node_page/node_page succeeded, reason: Layer7 check passed, code: 200, check duration: 163ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server node_socket/node_socket succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Server node_socket/node_socket is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (8) : Health check for server node_favicon/node_favicon succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Server node_favicon/node_favicon is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (8) : Health check for server node_text/node_text succeeded, reason: Layer7 check passed, code: 200, check duration: 5ms, status: 3/3 UP.
[WARNING]  (8) : Server node_text/node_text is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (8) : Health check for server node_page/node_page succeeded, reason: Layer7 check passed, code: 200, check duration: 6ms, status: 3/3 UP.
[WARNING]  (8) : Server node_page/node_page is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (8) : Health check for server gunicorn/app_django succeeded, reason: Layer4 check passed, check duration: 0ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server gunicorn/app_django succeeded, reason: Layer4 check passed, check duration: 1ms, status: 3/3 UP.
[WARNING]  (8) : Server gunicorn/app_django is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
[WARNING]  (8) : Health check for server elasticsearch/db_elasticsearch succeeded, reason: Layer4 check passed, check duration: 1ms, status: 1/2 DOWN.
[WARNING]  (8) : Health check for server elasticsearch/db_elasticsearch succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING]  (8) : Server elasticsearch/db_elasticsearch is UP. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
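For what it’s worth, the initial DOWN / "Connection refused" entries in a log like this are usually just startup ordering: haproxy begins its Layer4 checks before the app containers finish booting, and the later UP lines show them recovering. The checks can be repeated by hand against the published ports from the `docker ps` listing (8000 for newsblur_web, 8008 for node; ports assumed from that output) — the snippet prints a status either way, so it’s safe to run before the stack is up.

```shell
# Probe the app ports haproxy is health-checking.
for port in 8000 8008; do
  if curl -sf "http://localhost:$port/" > /dev/null 2>&1; then
    echo "port $port: answering"
  else
    echo "port $port: not answering yet"
  fi
done
```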

That first screenshot is showing a broken NewsBlur missing a CSS file. What do you see in the web inspector when you look for broken links or CSS? Is it a 404 to a CSS file? What is that URL? Do you have DEBUG_ASSETS turned on or off in docker_local_settings.py or your own local_settings?

It is a 403, and the URL is https://cloud.typography.com/6565292/711824/css/fonts.css

DEBUG_ASSETS is TRUE in docker_local_settings.py

Then everything else is working fine. Yeah, I haven’t bothered to fix the fonts thing on my own installation either. It’s a bit hard with localhosts and not distributing unlimited keys.

So I’m not sure what’s broken now. Feel free to poke around the logs of different docker containers and restarting things to see if anything changes. I’ll need a bit more to go on, since it looks like everything is humming along.

It is the most difficult Docker project I’ve encountered, I must say…

I found I can actually sign up (I found DB entries) and log in. But after login the page is still rendered as shown, with the login request returning a 302 and a broken image (https://localhost/media/img/reader/default_profile_photo.png)

Would fixing these two elements get me a working instance? I checked all the docker logs and everything runs fine…

And the left pane is gone after I try to log in at /account/login. It sounds like some UI elements are not loading somehow. Is it the newsblur_web container that controls the UI?
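Roughly, newsblur_web (Django behind gunicorn) renders the pages while nginx serves the static JS/CSS that builds the left pane, so a missing pane usually points at an asset that failed to load rather than at the web container itself. One way to probe the one URL known to be broken here (a sketch; assumes the instance is reachable on localhost and guarded so it degrades gracefully when it isn’t):

```shell
# Print just the HTTP status for the broken profile image; -k skips
# cert verification for the self-signed localhost certificate.
curl -sk -o /dev/null -w '%{http_code}\n' \
  https://localhost/media/img/reader/default_profile_photo.png \
  2>/dev/null || echo "no local instance reachable"
```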

Did you clone the GitHub repo and then run make nb to get it working, or is there a way to configure it in a docker-compose.yml and just bring things up like most regular Docker containers? I have an existing nginx proxy manager setup I was hoping to integrate this into, to at least try it out extensively for six months. The free version doesn’t support enough feeds for me to just load the OPML in from Inoreader (338 feeds and counting), and I’m paid up there until 2024, so I’m not exactly averse to paying for this service; I just don’t want to start paying for something I’m already paying for elsewhere before I’ve determined whether it’s a big improvement. The two projects are strikingly different in what seem like opinionated ways, but this is also the only other reader I’ve looked at so far that’s interesting enough to put in the effort to try out, and maybe start paying for if self-hosting goes well.

I followed the GitHub instructions, which are to clone and make nb, with some tweaks in some of the Docker files to suit my environment. It is hard to make it work, at least for now. I may come back to play with it again later when I have time.

Did you have to edit the Makefile to change some “docker compose” entries to “docker-compose” to get through the install? That was extremely weird to me, because there are some correct docker-compose entries in there as well; I have no idea how that could happen in the code other than maliciously. My instance is now broken at the same place yours appears to be (weird website, unable to sign up). I spun up a new LXC to install it in, so I didn’t mess with anything other than the very obvious docker compose errors in the Makefile.
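For reference, the v1 Compose CLI is the hyphenated `docker-compose` binary while v2 is the `docker compose` plugin, so mixed spellings in a Makefile break on hosts that only have one of them. A sed pass normalizes it either way; the demo below runs on a sample file rather than the real NewsBlur Makefile, so nothing here is modified by accident.

```shell
# Demo Makefile fragment using the v2 spelling (sample file, not the
# real NewsBlur Makefile):
printf 'nb:\n\tdocker compose up -d\n' > Makefile.example
# Rewrite to the v1 spelling for hosts that only have docker-compose:
sed -i 's/docker compose/docker-compose/g' Makefile.example
cat Makefile.example
```

On a real checkout you would point the same sed command at the Makefile itself.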

Yes, I did that replacement. I think this is the reason my installation did not work while others’ did.

Are you running an older version of Docker? I had to change all the docker-compose entries to docker compose because it would complain otherwise.

Docker version 20.10.12 currently, on OpenMediaVault 5
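Checking which Compose flavor a host actually has is quick; Docker 20.10.x installs commonly predate the bundled v2 plugin, which would explain needing the hyphenated form. A sketch using standard Docker CLI invocations:

```shell
# Compose v1: standalone docker-compose binary on the PATH.
if command -v docker-compose >/dev/null 2>&1; then
  echo "v1 binary present"
fi
# Compose v2: `compose` subcommand plugin of the docker CLI.
if docker compose version >/dev/null 2>&1; then
  echo "v2 plugin present"
fi
```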

Let’s move the discussion to a new ticket. If this ticket doesn’t describe the problem, then please make a new one, either here or on github: