When saving an article, it would be helpful to have the option to save the AI response (if the AI summary was used). I'm thinking of either appending it to the notes, or displaying it at the bottom of the article, just as it appears when interacting with the AI.
Ok I've given this a lot of thought and I just don't see it as a worthwhile tradeoff. It's a fair bit of code complication to be able to serve cached AI responses. Plus, it doesn't get used a whole lot in the first place. So if you want to hold on to an Ask AI response, you'll have to re-generate it.
In the interest of time, if anybody else really wants this, let me know. Here's the Claude plan for how to build it, which I spent a while refining, in case we do want to build it. This also works if a self-hoster wants to build it by feeding this plan to Claude or Codex:
Plan: Persist Ask AI Conversations
Context
Forum request (save-ai-response/13479): Users want Ask AI conversations saved so they appear when re-opening a
story. Currently, conversations are ephemeral -- they vanish on navigation.
Design: The Celery task saves conversations automatically after each AI response. Conversations are returned inline
in existing story payload endpoints (same pattern as user_tags, user_notes, highlights). No new API endpoints.
- Regular stories: 30-day TTL via MongoDB TTL index (refreshed on each exchange)
- Starred stories: permanent (expires_at=None, TTL index ignores null)
- One conversation per user per story -- latest thread wins. Re-ask with a different model overwrites the saved
conversation. The comparison view is a within-session tool, not persisted.
---
1. New Model: MAskAIConversation
File: apps/ask_ai/models.py
import datetime

import mongoengine as mongo


class MAskAIConversation(mongo.Document):
    user_id = mongo.IntField(required=True)
    story_hash = mongo.StringField(max_length=32, required=True)
    question_id = mongo.StringField(max_length=64)
    custom_question = mongo.StringField()
    model = mongo.StringField(max_length=32)  # Model used for latest response
    # [{role, content, model?, question_id?}] as compressed JSON
    conversation_z = mongo.BinaryField()
    is_permanent = mongo.BooleanField(default=False)
    expires_at = mongo.DateTimeField()  # null = permanent
    created_at = mongo.DateTimeField(default=datetime.datetime.utcnow)
    updated_at = mongo.DateTimeField(default=datetime.datetime.utcnow)

    meta = {
        "collection": "ask_ai_conversations",
        "indexes": [
            {"fields": ["user_id", "story_hash"], "unique": True},
            {"fields": ["expires_at"], "expireAfterSeconds": 0},
        ],
        "allow_inheritance": False,
    }
Message format in conversation_z:
[
    {"role": "user", "content": "Summarize in one sentence", "question_id": "sentence"},
    {"role": "assistant", "content": "The article discusses...", "model": "opus"},
    {"role": "user", "content": "What about the economic impact?"},
    {"role": "assistant", "content": "The economic impact...", "model": "opus"}
]
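For reference, a minimal sketch of the round-trip into conversation_z, assuming the same zlib-compressed JSON approach as the MAskAIResponse.response_text pattern (the helper names here are illustrative, not from the codebase):

```python
import json
import zlib


def compress_conversation(messages):
    # Serialize the message list to JSON, then zlib-compress for storage
    return zlib.compress(json.dumps(messages).encode("utf-8"))


def decompress_conversation(conversation_z):
    # Reverse the above: decompress, then parse back into message dicts
    return json.loads(zlib.decompress(conversation_z).decode("utf-8"))


messages = [
    {"role": "user", "content": "Summarize in one sentence", "question_id": "sentence"},
    {"role": "assistant", "content": "The article discusses...", "model": "opus"},
]
assert decompress_conversation(compress_conversation(messages)) == messages
```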
Class methods:
- save_conversation(user_id, story_hash, messages, question_id, custom_question, model) -- upserts. Checks
MStarredStory.objects(user_id=user_id, story_hash=story_hash).count() to decide is_permanent/expires_at. Always
refreshes expires_at = now+30d for non-permanent. Enforces max 20 messages (if exceeded, drops oldest pair but keeps
first user message for question context). Skips save if compressed size > 100KB.
- append_exchange(user_id, story_hash, user_content, assistant_content, model) -- loads existing doc, appends {role:
"user", content} and {role: "assistant", content, model}, saves. Creates new doc if none exists (edge case:
follow-up on story whose conversation was TTL-expired). Same size/count limits.
- get_conversations_for_stories(user_id, story_hashes) -- bulk fetch. Returns a {story_hash: {question_id, custom_question, model, messages}} dict. The query uses the (user_id, story_hash) unique index, fetches only matching documents, and decompresses conversation_z.
- set_permanent(user_id, story_hash) -- update(set__is_permanent=True, unset__expires_at=True). No-op if no doc.
- set_expiring(user_id, story_hash) -- update(set__is_permanent=False, set__expires_at=now+30d). No-op if no doc.
Properties: conversation property with getter (decompress + JSON parse) and setter (JSON serialize + compress),
following MAskAIResponse.response_text pattern.
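To illustrate the return shape of get_conversations_for_stories, here is a sketch that stands in plain dicts for the Mongo documents (in the real method the filtering happens in the Mongo query itself; everything here is illustrative):

```python
import json
import zlib


def get_conversations_for_stories(docs, user_id, story_hashes):
    # docs stands in for the result of querying MAskAIConversation on the
    # (user_id, story_hash) unique index
    result = {}
    for doc in docs:
        if doc["user_id"] != user_id or doc["story_hash"] not in story_hashes:
            continue
        result[doc["story_hash"]] = {
            "question_id": doc.get("question_id"),
            "custom_question": doc.get("custom_question"),
            "model": doc.get("model"),
            # Unpack conversation_z back into the message list
            "messages": json.loads(zlib.decompress(doc["conversation_z"]).decode("utf-8")),
        }
    return result
```

A story with no saved conversation simply has no key in the returned dict, which keeps the per-story loop in the view a plain membership check.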
---
2. Celery Task Saves Automatically
File: apps/ask_ai/tasks.py
Two save points, both before publish_event("complete") to ensure persistence precedes the UI signal. Wrapped in
try/except so a Mongo failure doesn't prevent the user from seeing their response.
Cached path (lines 94-116)
Insert before publish_event("complete") at line 112:
# Save conversation (cached response)
try:
    from .prompts import get_prompt

    prompt = get_prompt(question_id)
    question_text = prompt.short_text if prompt else question_id
    messages = [
        {"role": "user", "content": question_text, "question_id": question_id},
        {"role": "assistant", "content": response_text, "model": cache_model_key},
    ]
    MAskAIConversation.save_conversation(
        user_id=user_id,
        story_hash=story_hash,
        messages=messages,
        question_id=question_id,
        custom_question=None,
        model=cache_model_key,
    )
except Exception:
    logging.user(user, "~BB~FGAsk AI: ~FRFailed to save conversation~FG (cached)")

publish_event("complete")
Live path (lines 189-243)
Insert before publish_event("complete") at line 214:
# Save conversation
try:
    if conversation_history:
        # Follow-up: append to existing conversation
        MAskAIConversation.append_exchange(
            user_id=user_id,
            story_hash=story_hash,
            user_content=custom_question or conversation_history[-1].get("content", ""),
            assistant_content=full_response_text,
            model=cache_model_key,
        )
    else:
        # Initial question: create/overwrite conversation
        from .prompts import get_prompt

        if custom_question:
            question_text = custom_question
        else:
            prompt = get_prompt(question_id)
            question_text = prompt.short_text if prompt else question_id
        messages = [
            {"role": "user", "content": question_text, "question_id": question_id},
            {"role": "assistant", "content": full_response_text, "model": cache_model_key},
        ]
        MAskAIConversation.save_conversation(
            user_id=user_id,
            story_hash=story_hash,
            messages=messages,
            question_id=question_id,
            custom_question=custom_question,
            model=cache_model_key,
        )
except Exception:
    logging.user(user, "~BB~FGAsk AI: ~FRFailed to save conversation~FG (live)")

publish_event("complete")
Re-ask behavior: A re-ask from the frontend sends a new request with the same question_id but a different model, and no conversation_history. The backend treats it as an initial question and overwrites the saved conversation. This is intentional -- comparison is a session tool; only the latest response persists.
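The latest-thread-wins behavior amounts to an upsert keyed on (user_id, story_hash). A toy in-memory version of the overwrite semantics (purely illustrative, using a dict in place of the Mongo collection):

```python
store = {}  # stands in for the ask_ai_conversations collection


def save_conversation(user_id, story_hash, messages, model):
    # Upsert: the (user_id, story_hash) key is unique, so a re-ask with a
    # different model simply replaces the previously saved conversation
    store[(user_id, story_hash)] = {"messages": messages, "model": model}


save_conversation(1, "abc", [{"role": "user", "content": "Summarize"}], model="opus")
save_conversation(1, "abc", [{"role": "user", "content": "Summarize"}], model="sonnet")
assert store[(1, "abc")]["model"] == "sonnet"  # latest response wins
assert len(store) == 1  # still one conversation per user per story
```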
---
3. Story Payloads Include Conversation Data
File: apps/reader/views.py
Add bulk conversation fetch alongside existing starred_stories fetch in each story loader. The conversation data is
returned as ask_ai_conversation on each story dict.
load_single_feed() -- after line 1260 (starred_stories dict comprehension)
from apps.ask_ai.models import MAskAIConversation
ask_ai_convos = MAskAIConversation.get_conversations_for_stories(user.pk, story_hashes)
Then in per-story loop, after line 1343 (highlights):
if story["story_hash"] in ask_ai_convos:
    story["ask_ai_conversation"] = ask_ai_convos[story["story_hash"]]
load_river_stories__redis() -- after line 2414 (starred_stories dict)
Same bulk fetch. Then in per-story loop, after line 2529 (highlights):
if story["story_hash"] in ask_ai_convos:
    story["ask_ai_conversation"] = ask_ai_convos[story["story_hash"]]
load_starred_stories() -- after line 1709 (shared_stories dict)
Same pattern. Then in per-story loop, after line 1747:
if story["story_hash"] in ask_ai_convos:
    story["ask_ai_conversation"] = ask_ai_convos[story["story_hash"]]
load_read_stories() -- after line 2188 (starred_stories dict)
Same pattern. Then after line 2213:
if story["story_hash"] in ask_ai_convos:
    story["ask_ai_conversation"] = ask_ai_convos[story["story_hash"]]
---
4. Star/Unstar Integration
File: apps/reader/views.py
On starring -- _mark_story_as_starred()
After line 4609 (for new) and after line 4620 (for existing):
from apps.ask_ai.models import MAskAIConversation
MAskAIConversation.set_permanent(user_id=request.user.pk, story_hash=story_hash)
On unstarring -- _mark_story_as_unstarred()
After line 4722 (after save/delete):
from apps.ask_ai.models import MAskAIConversation
MAskAIConversation.set_expiring(user_id=request.user.pk, story_hash=story_hash)
---
5. Frontend: Auto-Show on Story Open
File: media/js/newsblur/views/story_detail_view.js
After story content renders, check if the story model has ask_ai_conversation. If so, create a restored
StoryAskAiView inline:
// In render or post-render hook:
var conversation = this.model.get('ask_ai_conversation');
if (conversation && conversation.messages && conversation.messages.length) {
    var ask_ai_pane = new NEWSBLUR.Views.StoryAskAiView({
        story: this.model,
        question_id: conversation.question_id || 'custom',
        custom_question: conversation.custom_question,
        model: conversation.model,
        inline: true,
        restored_conversation: conversation
    });
    var $wrapper = this.$('.NB-story-content-positioning-wrapper');
    $wrapper.append(ask_ai_pane.render().$el);
}
When the user clicks the Ask AI button (show_ask_ai_menu(), line 1867): show the dropdown menu normally. If a restored pane already exists in the DOM, the new pane from the menu selection appends after it (the existing open_ask_ai_pane() logic handles this via .NB-story-ask-ai-inline.last().after()). No special guard needed.
---
6. Frontend: Restored Mode in StoryAskAiView
File: media/js/newsblur/views/story_ask_ai_view.js
initialize() -- before auto-send logic (lines 56-59)
Add a restored_conversation check that runs before the existing auto-send. An early return from initialize() would skip event binding, so branch instead:

// At line 53, BEFORE the auto-send block:
this.is_restored = false;
if (this.options.restored_conversation) {
    // Restored mode: rebuild state from the saved conversation,
    // and do NOT fall through to send_question()
    this.restore_from_conversation(this.options.restored_conversation);
    this.is_restored = true;
} else if (this.transcription_error) {
    // existing transcription-error handling
} else if (this.question_id !== 'custom' || this.custom_question) {
    this.send_question(this.custom_question);
}
restore_from_conversation(conversation) method
restore_from_conversation: function (conversation) {
    var messages = conversation.messages || [];
    this.conversation_history = [];
    this.section_models = [];
    this.response_text = '';
    for (var i = 0; i < messages.length; i++) {
        var msg = messages[i];
        if (msg.role === 'user') {
            this.conversation_history.push({ role: 'user', content: msg.content });
            if (i === 0) {
                // First user message -- set question_text for header
                this.question_text = msg.content;
            } else {
                // Follow-up -- add visual separator
                this.response_text += '\n\n---\n\n**You:** ' + msg.content + '\n\n';
            }
        } else if (msg.role === 'assistant') {
            this.conversation_history.push({ role: 'assistant', content: msg.content });
            this.response_text += msg.content;
            this.section_models.push(msg.model || this.model);
        }
    }
    this.original_question_id = conversation.question_id || this.question_id;
    this.original_custom_question = conversation.custom_question || '';
    this.model = conversation.model || this.model;
    this.response_model = this.model;
    this.streaming_started = true;
    this.current_response_text = messages.length > 0 ?
        messages[messages.length - 1].content : '';
}
render() -- restored path
// In render(), after the template is inserted:
if (this.is_restored) {
    // Render the saved conversation directly
    var $answer = this.$('.NB-story-ask-ai-answer');
    var html = this.markdown_to_html(this.response_text);
    $answer.html(html).show();
    this.$el.removeClass('NB-thinking');
    // Show follow-up input
    this.$('.NB-story-ask-ai-followup-wrapper').show();
    this.$('.NB-story-ask-ai-followup-input').prop('disabled', false);
    this.$('.NB-story-ask-ai-reask-menu').show();
    this.$('.NB-story-ask-ai-send-menu').hide();
    this.update_model_dropdown_selection();
    this.update_thinking_toggle_selection();
    return this;
}
---
7. Frontend: Update Story Model After Live Completion
File: media/js/newsblur/views/story_ask_ai_view.js
In complete_response() (line 564), after conversation_history.push() (lines 577-580), update the story model so the indicator and auto-show work in the current session without a reload:
// Build messages array matching backend format
var messages = [];
for (var i = 0; i < this.conversation_history.length; i++) {
    var entry = this.conversation_history[i];
    var msg = { role: entry.role, content: entry.content };
    if (entry.role === 'assistant' && this.section_models.length > 0) {
        // Map section_models to assistant messages
        var assistant_index = messages.filter(function (m) { return m.role === 'assistant'; }).length;
        msg.model = this.section_models[assistant_index] || this.model;
    }
    if (i === 0 && entry.role === 'user') {
        msg.question_id = this.original_question_id;
    }
    messages.push(msg);
}
this.story.set('ask_ai_conversation', {
    question_id: this.original_question_id,
    custom_question: this.original_custom_question || null,
    model: this.model,
    messages: messages
});
This ensures:
- The story title indicator updates immediately
- If the story detail re-renders, the restored conversation is available
---
8. Frontend: Indicator in Story Title View
File: media/js/newsblur/views/story_title_view.js
In all four templates (split, list, grid, magazine), add an indicator near the star/share divs:
<% if (story.get('ask_ai_conversation')) { %>
    <div class="NB-storytitles-ask-ai-indicator"></div>
<% } %>
Place after <div class="NB-storytitles-share"></div> (lines 88, 137, 183, 227 respectively).
CSS: Style .NB-storytitles-ask-ai-indicator as a small sparkle/AI icon, similar to the star/share indicators. Use an
existing icon from media/img/icons/nouns/ (like the Ask AI icon).
File: media/js/newsblur/views/story_detail_view.js
In the story header template (line 472), add NB-has-conversation class on the Ask AI sidebar button when
ask_ai_conversation is present:
<div class="NB-sideoption NB-feed-story-ask-ai
     <% if (show_sideoption_ask_ai && story.get('ask_ai_conversation')) { %>NB-has-conversation<% } %>"
     role="button">
---
9. TTL and Limits
- TTL refresh: Every save_conversation() and append_exchange() sets updated_at=now and, for non-permanent
conversations, expires_at=now+30d. Actively used conversations don't expire.
- Max messages: 20 (10 user + 10 assistant turns). On overflow, drop the 2nd and 3rd messages (oldest user+assistant
pair), preserving the first user message (original question context).
- Max size: Skip save if len(conversation_z) > 100KB. Log warning.
- TTL index: MongoDB's built-in TTL runner checks every 60s. Docs with expires_at=None (null) are ignored.
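A sketch of the overflow rule under these assumptions (keep the first user message, drop the oldest following user+assistant pair until under the cap; the helper name is illustrative):

```python
MAX_MESSAGES = 20


def trim_messages(messages):
    # Preserve the first user message (original question context); while over
    # the cap, drop the 2nd and 3rd messages -- the oldest pair after it
    trimmed = list(messages)
    while len(trimmed) > MAX_MESSAGES:
        del trimmed[1:3]
    return trimmed


# 11 user/assistant turns = 22 messages, two over the cap
convo = [{"role": "user", "content": "Q0"}]
for i in range(21):
    role = "assistant" if i % 2 == 0 else "user"
    convo.append({"role": role, "content": "m%d" % i})

trimmed = trim_messages(convo)
assert len(trimmed) == MAX_MESSAGES
assert trimmed[0]["content"] == "Q0"  # original question preserved
```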
---
Files Modified
| File | Change |
| --- | --- |
| apps/ask_ai/models.py | Add MAskAIConversation model (~100 lines) |
| apps/ask_ai/tasks.py | Save conversation in cached path (before line 112) and live path (before line 214) (~35 lines) |
| apps/reader/views.py | Bulk conversation fetch in 4 story loaders + star/unstar hooks (~30 lines across 6 spots) |
| media/js/newsblur/views/story_ask_ai_view.js | restore_from_conversation(), restored render path, story model update in complete_response() (~80 lines) |
| media/js/newsblur/views/story_detail_view.js | Auto-show restored conversation on render, NB-has-conversation class (~20 lines) |
| media/js/newsblur/views/story_title_view.js | Add .NB-storytitles-ask-ai-indicator in 4 templates (~8 lines) |
| CSS stylesheet | .NB-storytitles-ask-ai-indicator and .NB-has-conversation styles (~10 lines) |
---
10. Tests
File: apps/ask_ai/tests.py (new or append)
| Test | What it verifies |
| --- | --- |
| test_save_initial_conversation | save_conversation() creates doc with correct messages, compressed |
| test_save_overwrites_on_reask | Second save_conversation() for same user+story replaces first |
| test_append_exchange | append_exchange() adds user+assistant messages to existing doc |
| test_append_creates_if_missing | append_exchange() on non-existent doc creates one |
| test_cached_response_saves_conversation | Cached code path also calls save_conversation() |
| test_star_sets_permanent | set_permanent() sets is_permanent=True, expires_at=None |
| test_unstar_sets_expiring | set_expiring() sets is_permanent=False, expires_at approx. now+30d |
| test_get_conversations_for_stories | Bulk fetch returns correct dict for matching hashes, empty for non-matching |
| test_max_messages_limit | Conversation capped at 20 messages, oldest pair dropped |
| test_expires_at_refreshes | Each save updates expires_at to new now+30d |
---
Verification (Manual)
1. Ask AI on a story → check db.ask_ai_conversations.find({user_id: X}) has a document
2. Navigate away → re-open story → conversation auto-shows at bottom
3. Ask follow-up → MongoDB doc updated with new messages
4. Re-ask with a different model → MongoDB doc replaced with latest response only
5. Star the story → is_permanent=True, expires_at=null
6. Unstar → is_permanent=False, expires_at set ~30d out
7. Reload feed → ask_ai_conversation present in story payload
8. Story list → indicator icon visible on stories with conversations
9. make test SCOPE=apps.ask_ai passes
Thanks Sam; I'd agree with that logic.