Web Feeds: Turn any website into an RSS feed

Not every website has an RSS feed. Some never did. Some had one years ago and quietly removed it. And some sites have content that updates regularly but was never structured as a feed in the first place: job boards, product listings, event calendars, changelog pages. Until now, if a site didn’t offer RSS, you were out of luck.

Web Feeds is a new feature that creates RSS feeds from any website. Point it at a URL, and NewsBlur analyzes the page structure, identifies the repeating content patterns, and generates extraction rules that turn the page into a live feed. It works on news sites, blogs, job boards, product pages, or really anything with a list of items that changes over time.

This is a huge feature, and it has been requested for years. I’m thrilled to finally be able to offer it in a way I feel comfortable with. Other solutions included having you select story titles on a re-hosted version of the page, but that approach was clumsy and error-prone. This way, we use LLMs to figure out what the story titles are likely to be, present the variations to you, and then let you decide what’s right. So much better!

How it works

Open the Add + Discover Sites page and click the Web Feed tab. Paste a URL and click Analyze. NewsBlur fetches the page, strips out navigation and boilerplate, and analyzes the HTML structure. Within a few seconds, you’ll see multiple extraction variants, each representing a different content pattern found on the page.

Progress updates stream in real time while the analysis runs. NewsBlur typically finds 3-5 different extraction patterns on a page. The first variant is usually the main content (article list, blog posts, product grid), but sometimes the page has multiple distinct sections worth subscribing to. Each variant shows a label, a description of what it captures, and a preview of 3 extracted stories so you can see exactly what you’d get.
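
Each variant can be pictured as a small record. Here’s a hedged sketch of that shape; the field names, labels, and preview stories are made up for illustration and are not NewsBlur’s actual API:

```python
# Hypothetical shape of the extraction variants returned by an analysis.
# Everything below is illustrative, not NewsBlur's real data model.
variants = [
    {
        "label": "Article list",
        "description": "Headlines from the main article grid",
        "preview": [  # first 3 extracted stories
            {"title": "First headline", "link": "/articles/1"},
            {"title": "Second headline", "link": "/articles/2"},
            {"title": "Third headline", "link": "/articles/3"},
        ],
    },
    {
        "label": "Sidebar links",
        "description": "Related links from the page sidebar",
        "preview": [],
    },
]

# The first variant is usually the main content.
best = variants[0]
```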

Select the variant that matches what you want to follow, pick a folder, and subscribe. NewsBlur will re-fetch and re-extract the page on a regular schedule, just like any other feed.

Story hints

Sometimes the initial best guess isn’t what you’re looking for. Maybe the page has a blog section and a job listings section, and you want the jobs. Click the Refine button and type a hint like “I’m looking for the job postings.” NewsBlur re-analyzes the page with your hint in mind and reorders the variants to prioritize what you described.
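
As a toy sketch of that re-ranking idea, imagine scoring each variant by keyword overlap between its label/description and your hint. This is purely illustrative; NewsBlur’s actual refinement re-runs the LLM analysis with your hint rather than doing simple keyword matching:

```python
import re

def rerank_variants(variants, hint):
    """Boost variants whose label/description share words with the hint.
    A keyword-overlap toy, not NewsBlur's actual refinement logic."""
    hint_words = set(re.findall(r"[a-z0-9]+", hint.lower()))

    def score(variant):
        text = re.findall(
            r"[a-z0-9]+", (variant["label"] + " " + variant["description"]).lower()
        )
        return sum(1 for word in text if word in hint_words)

    return sorted(variants, key=score, reverse=True)

# Hypothetical variants for a page with both a blog and a careers section.
variants = [
    {"label": "Blog posts", "description": "Company blog entries"},
    {"label": "Job postings", "description": "Open roles on the careers page"},
]
reranked = rerank_variants(variants, "I'm looking for the job postings")
```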

What gets extracted

For each story, NewsBlur extracts whatever it can find: title, link, content snippet, image, author, and date. Not every field will be available on every site, and that’s fine. At minimum you’ll get titles and links. The extraction uses XPath expressions, which means it’s precise and consistent across page refreshes as long as the site’s HTML structure stays the same.
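
To make the XPath idea concrete, here’s a minimal sketch using Python’s standard library. ElementTree only supports a limited XPath subset and requires well-formed markup; a real extractor would likely use a full XPath engine that tolerates messy HTML. The sample page and class names below are invented:

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed stand-in for a fetched page. Note the second
# story has no date, illustrating that optional fields may be missing.
html = """<div>
  <div class="story"><a href="/post-1">First post</a><span class="date">2026-03-10</span></div>
  <div class="story"><a href="/post-2">Second post</a></div>
</div>"""

root = ET.fromstring(html)
stories = []
for node in root.findall('.//div[@class="story"]'):
    link = node.find("a")
    date = node.find('span[@class="date"]')
    stories.append({
        "title": link.text,                               # minimum: title
        "link": link.get("href"),                         # minimum: link
        "date": date.text if date is not None else None,  # optional field
    })
```

Because the rules address elements by structure rather than by text, the same expressions keep working across refreshes as long as the page layout doesn’t change.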

When things change

Websites redesign. HTML structures shift. When NewsBlur detects that the extraction rules have stopped working (after 3 consecutive failures), the feed is flagged as needing re-analysis. You’ll see a feed exception indicator, and you can re-analyze the page with one click to generate updated extraction rules.
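
The recovery rule above can be sketched as a simple counter. This is a hypothetical illustration of the 3-failure threshold, not NewsBlur’s actual code:

```python
MAX_CONSECUTIVE_FAILURES = 3  # threshold described above

class WebFeedHealth:
    """Track extraction health and flag the feed for re-analysis
    after consecutive empty fetches (illustrative sketch)."""

    def __init__(self):
        self.consecutive_failures = 0
        self.needs_reanalysis = False

    def record_fetch(self, extracted_story_count):
        if extracted_story_count > 0:
            # Any successful extraction resets the counter.
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= MAX_CONSECUTIVE_FAILURES:
                self.needs_reanalysis = True
```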

Use cases

Some examples of sites that work well with Web Feeds:

  • Company blogs without RSS — Many corporate blogs dropped their RSS feeds years ago. Web Feeds brings them back.
  • Job boards — Track new postings on a company’s careers page.
  • Government sites — Follow press releases, meeting agendas, or public notices.
  • Changelog pages — Monitor when a tool or service ships updates.
  • Event listings — Keep tabs on upcoming concerts, conferences, or local events.
  • Product pages — Watch for new arrivals or restocks on stores that don’t offer feeds.

Availability

Web Feeds are available to Premium Archive and Premium Pro subscribers. The ongoing feed fetching and extraction runs on NewsBlur’s servers like any other feed.

If you have feedback or ideas for improvements, please share them on the NewsBlur forum.


This is a companion discussion topic for the original entry at https://blog.newsblur.com/2026/03/13/web-feeds/

Huge! Really psyched to start using this. Congrats on the launch!


Awesome! Ty! (It was a key feature I asked for.) I was able to get about 1/5 of the job board sites I use working with it.

I’m not exactly sure how best to subscribe to sites with no current job postings. (Greenhouse seems to work, as it flags where it would put them.) Maybe just watch for custom text changing, like “No jobs available” disappearing?

Random selection of sites that didn’t work in other ways:

Awesome. The single most important new feature since newsletters. Love it!


I just upgraded my account to use this feature. It’s fantastic, thanks!


How frequently does a “web feed” update? Same rules and criteria as an RSS feed?

Yep, it depends on your premium tier. Premium accounts should expect it to update every hour if it changes at least once a day. If it changes less often, it’ll be fetched less often. Premium Pro guarantees a fetch every 5 minutes. And you can always check how often it’s fetched by right-clicking on the feed title and opening Statistics.
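
A rough sketch of that policy as stated; the tier names and back-off rule here are guesses for illustration, not NewsBlur’s actual scheduler:

```python
def next_fetch_minutes(tier, minutes_between_changes):
    """Minutes to wait before the next fetch, per the rule of thumb above:
    Pro fetches every 5 minutes; other premium tiers fetch hourly for
    feeds that change at least daily, and back off for quieter feeds.
    Illustrative sketch only."""
    if tier == "premium_pro":
        return 5
    if minutes_between_changes <= 24 * 60:  # changes at least once a day
        return 60
    # Back off for quiet feeds, but never slower than once a day.
    return min(minutes_between_changes, 24 * 60)
```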

Awesome feature. Thanks. I’ll try it out.


Hi Samuel, thanks for the Web Feed feature. It worked for most pages I tried, but not with the one below. For some reason, the cards on the grid are not being recognized as items, even when I type the title of an individual entry as a hint. If you could check it, that would be great! Thanks!

Thanks for reporting this. I investigated, and the issue is that fapemig.br is a JavaScript single-page application (built on Nuxt.js/Vue). When NewsBlur fetches the page, the server returns an empty content grid with no actual card data in the HTML. All the chamadas (the site’s calls for proposals) are loaded dynamically via JavaScript from a backend API after the page loads in your browser.

Since Web Feeds work by analyzing the server-rendered HTML to find repeating patterns, there’s nothing for it to find on this page. The grid is literally empty in the source HTML.
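
One way to picture the problem: once scripts and tags are stripped from the server-returned HTML, an SPA shell has almost no visible text left to analyze. The heuristic below is a made-up sketch, not NewsBlur’s actual check:

```python
import re

def likely_client_rendered(html, min_visible_chars=200):
    """Heuristic sketch: if the server HTML has almost no visible text
    once <script> blocks and tags are removed, the content is probably
    filled in by JavaScript after the page loads in the browser."""
    no_scripts = re.sub(r"<script\b.*?</script>", " ", html, flags=re.S | re.I)
    visible = re.sub(r"<[^>]+>", " ", no_scripts)
    return len("".join(visible.split())) < min_visible_chars

# An SPA shell: an empty mount point plus a script bundle.
spa_shell = '<html><body><div id="app"></div><script src="/bundle.js"></script></body></html>'

# A server-rendered page with real content in the cards.
rendered = "<html><body>" + "".join(
    f'<div class="card"><h2>Item {i}: a server-rendered card with real text</h2></div>'
    for i in range(10)
) + "</body></html>"
```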

The good news is that the backend API is actually a WordPress site that has its own RSS feed. You can subscribe to https://api.site.fapemig.br/feed/ directly as a regular feed, though it may not have the exact same filtered content as the chamadas page.

I’m looking into adding JavaScript rendering support for SPA sites like this in a future update.


Thanks for trying! Unfortunately, the WP feed has almost nothing in it, but I’ll explore the backend to see if I can find the chamadas posts/pages. Cheers!