Recent improvements and issue resolutions in Storytell
YouTube Upload Feature Temporarily Disabled
We’ve temporarily disabled the YouTube upload functionality in Storytell to ensure system stability while we resolve underlying technical issues. Users will no longer see the YouTube upload option in file upload modals and SmartChats™. This preventive measure ensures reliable service while our engineering team works on a permanent solution.
This temporary fix is for all Storytell users who rely on consistent and stable file upload functionality. This particularly benefits users who were experiencing failed uploads or system errors when attempting to process YouTube content.
The YouTube content scraping functionality was experiencing critical failures in production, causing system panics and unreliable service. Rather than leaving users with a broken feature that could fail unpredictably, we proactively disabled it to maintain overall system stability and user trust.
Think of it like temporarily closing one lane on a highway because of construction - we’ve shut down the YouTube upload feature because it wasn’t working properly and was causing problems. Instead of letting people get stuck or frustrated with a broken feature, we’ve removed it completely until we can fix it properly. All your other ways of uploading content (like documents, PDFs, or web links) still work perfectly fine.
The implementation involved commenting out YouTube-related UI components in the React codebase. Specifically, we modified PersistentPrompt.tsx and ChatFileUpload.tsx to conditionally hide the YouTube upload option while maintaining the existing component structure. The changes ensure that the tab selection logic and modal functionality remain intact for other upload types. This approach allows for easy restoration of the feature once the underlying YouTube scraping issues are resolved.
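As a rough illustration of that approach, here is a hypothetical sketch of hiding one upload tab behind a flag while leaving the rest of the tab structure intact. The names (`UploadTab`, `YOUTUBE_UPLOADS_ENABLED`, `visibleUploadTabs`) are illustrative, not the actual Storytell component API.

```typescript
// Hypothetical sketch: hide an upload option behind a flag so it can be
// restored later without structural changes.
type UploadTab = { id: string; label: string };

const YOUTUBE_UPLOADS_ENABLED = false; // flipped back on once scraping is fixed

function visibleUploadTabs(allTabs: UploadTab[]): UploadTab[] {
  // Filter the tab out instead of deleting its definition, so tab-selection
  // logic and the modal structure stay intact for the remaining upload types.
  return allTabs.filter(
    (tab) => tab.id !== "youtube" || YOUTUBE_UPLOADS_ENABLED
  );
}
```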
XLS to CSV Conversion System Overhaul
We’ve completely rebuilt Storytell’s Excel file processing system to eliminate crashes and improve reliability. The new system uses a robust Python-based conversion engine that handles complex Excel files with better error handling and data fidelity. Users will experience more reliable Excel file uploads and processing without system interruptions.
We built this for business users, analysts, and data professionals who regularly upload Excel files containing complex datasets, formulas, and multiple sheets. This particularly benefits users working with large Excel files from enterprise sources like Nielsen data or financial reports.
The previous XLS conversion system used a third-party Go package called ‘Excelize’ that frequently crashed when processing certain Excel files, causing system panics and failed uploads. This unreliable behavior meant users couldn’t trust that their Excel files would process successfully, leading to frustration and workflow interruptions.
Imagine you had a broken can opener that sometimes worked but often got stuck or broke entirely when trying to open certain types of cans. We’ve replaced it with a much better, more reliable can opener that can handle all types of cans smoothly. When you upload Excel files to Storytell now, our new system can handle even the most complex spreadsheets with formulas, multiple sheets, and tricky formatting without breaking or getting confused.
The system now uses a dedicated Python script (xls_to_csv.py) that leverages openpyxl and pandas libraries for conversion. The script is integrated into the extractor service via HTTP endpoints (/v1/convert/xls). The conversion process includes intelligent data range detection, handles multiple sheets by creating separate CSV files for each, and implements proper UTF-8 encoding with CSV escaping. The new architecture separates conversion concerns from core business logic and includes comprehensive cleanup of temporary files stored in organization-specific directories. All conversion results include metadata about processed sheets and maintain data integrity while stripping formatting that could cause downstream processing issues.
Improved Prompt No Longer Disappears When Clicking Edit
We fixed an issue where the auto-improved version of a prompt would vanish if you clicked the “Edit” button. Now, when you auto-improve a prompt and then decide to make further adjustments by clicking “Edit”, the improved text will remain in the editor. This allows for a smoother workflow when refining prompts before sending them to a SmartChat™.
This fix is for users like Abby, who utilize the auto-improve prompt feature and need to subsequently edit the improved version.
Previously, if a user auto-improved a prompt and then clicked the “Edit” button, the improved text would disappear. This interrupted the user’s workflow, as they would lose the improved prompt and have to start over or re-initiate the improvement.
Imagine you used a cool tool to help you rephrase a sentence to make it sound better. But, if you then tried to make a small tweak to that new sentence, the whole thing would just vanish. That’s what was happening with our “improve prompt” feature. We’ve fixed it so now, after Storytell helps you improve your prompt, you can click “Edit” and the improved version stays put, ready for you to polish it further without losing any work.
The bug was resolved by ensuring that the improvedPrompt state is not reset when the editor updates, especially if the content of the editor hasn’t actually changed. The onUpdate function in PromptAutoImprove.tsx was modified; specifically, the condition if (content === oldContent) now prevents resetting alreadyImproved and clearing attempts if the text content remains the same. The debouncedImproving function will also exit early if alreadyImproved() is true. When onEditImprovement is called, isPromptBeingTyped is set to true to prevent onUpdate from clearing the prompt, and oldContent is updated to the improvement, ensuring the editor retains the improved text for further editing.
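The core of the guard described above can be sketched as a small pure function. This is a simplified illustration, not the real PromptAutoImprove.tsx: an editor update whose text is unchanged must not discard the improved-prompt state, while a real edit starts a fresh cycle.

```typescript
// Minimal sketch of the onUpdate guard: unchanged content keeps the
// improvement; changed content resets the improve cycle.
type ImproveState = { alreadyImproved: boolean; attempts: number };

function onUpdateGuard(
  state: ImproveState,
  oldContent: string,
  content: string
): ImproveState {
  // If the text is unchanged, keep the improved-prompt state instead of
  // resetting alreadyImproved and clearing attempts.
  if (content === oldContent) return state;
  // A genuine edit starts over.
  return { alreadyImproved: false, attempts: 0 };
}
```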
Prompt Library Creation, Editing, and Saving
We’ve addressed several issues in the Prompt Library to make creating, editing, and saving prompts a smoother experience. Key fixes include making the description field optional, ensuring formatting is preserved, keeping the action button in a fixed position for long prompts, resolving save button visibility issues at different browser zoom levels, and improving the reliability of pasting content.
This set of fixes is for users who frequently work with the Prompt Library, such as Jonah Larkin, who manages and creates numerous prompts and encountered various usability issues.
Users were encountering several frustrating bugs when using the Prompt Library. These included: being forced to add a description even if it wasn’t needed, losing text formatting upon saving and reopening, the action button scrolling out of view with long prompts, the save button disappearing at certain zoom levels, and inconsistent pasting behavior. These issues made managing prompts inefficient and cumbersome.
Imagine your favorite app for saving notes had a bunch of small, annoying glitches. Like, it wouldn’t let you save a note unless you added a description, even if you didn’t want one. Or, if you made a really long note, the “Save” button would scroll off the screen. Sometimes, if you zoomed in on your screen, the “Save” button would just disappear. And copying and pasting text into it was a bit of a gamble. We’ve fixed all those little annoyances in the Prompt Library, so now it’s much easier and more predictable to create, edit, and save your prompts.
Several technical changes were implemented. To make the description optional, SaveToPromptLibraryModal.tsx was updated to treat the description form data as optional, defaulting to an empty string if not provided. To preserve formatting and fix pasting, a new utility, parseStringIntoJSONContent.ts, was introduced. It takes a string, splits it by newlines, filters out whitespace-only lines, and constructs a Tiptap/ProseMirror-compatible JSON object in which each part becomes a paragraph. This utility is used when setting content in SaveToPromptLibraryModal.tsx and was also added to PromptContext.tsx, though the commit messages don't fully specify the paste-fix details there. The fix for the action button moving with long prompts likely involved CSS adjustments to give the button container a fixed or sticky position, though the specific CSS changes are not detailed in the commit messages; save-button visibility at different zoom levels would similarly be a UI/CSS adjustment to ensure responsive, consistent rendering. The commit messages for ENG-3950 in the Storytell-ai changelog confirm these areas were addressed.
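Based on the description above, a minimal sketch of the parsing utility might look like this. The exact JSON shape in parseStringIntoJSONContent.ts may differ; this shows the split/filter/paragraph-building idea.

```typescript
// Sketch: turn a plain string into a Tiptap/ProseMirror-style document
// where each non-blank line becomes a paragraph node.
type JSONContent = {
  type: string;
  content?: JSONContent[];
  text?: string;
};

function parseStringIntoJSONContent(input: string): JSONContent {
  const paragraphs = input
    .split("\n")
    .filter((line) => line.trim().length > 0) // drop whitespace-only lines
    .map((line) => ({
      type: "paragraph",
      content: [{ type: "text", text: line }],
    }));
  return { type: "doc", content: paragraphs };
}
```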
Removed duplicated Reasoning tags
Missing Closing Tags in Prompt Processing Messages
Race Condition in 'No Search Results' Messaging
Resolved a race condition that prevented “no search results” messages from reliably reaching the UI. This fix ensures users consistently receive feedback when their queries return no matches from their knowledge base, rather than being left with a non-responsive interface.
This fix serves all Storytell users who work with knowledge base scoping in their prompts, ensuring they receive consistent feedback regardless of timing issues in the backend processing.
A race condition in the system sometimes prevented the “no search results” message from being sent to the UI. This meant that users would be left without any feedback when their search returned no results, creating confusion and a poor user experience. The issue was particularly problematic because it occurred inconsistently, making troubleshooting difficult for users.
Imagine two people trying to deliver a message - one has the important news that “no results were found” but sometimes they get stuck in traffic, so the recipient never hears anything. Meanwhile, the second messenger is supposed to wait for the first one, but sometimes goes ahead and says nothing at all. We fixed the traffic pattern so that the important message always gets through, ensuring you’ll know when your search doesn’t find anything instead of just getting silence.
The issue was resolved by modifying the prompt builder in services/controlplane/domains/curator/prompt_builder_v2.go to handle the error case more consistently. The fix involved bubbling up the “no search results” error from the lower levels of the system and ensuring that the prompt builder always sends the final message with the appropriate progress information. The implementation now properly captures the error condition and explicitly sends a response containing the informative message to the UI channel, eliminating the race condition where the message could be dropped if processing terminated too quickly.
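The shape of the fix can be sketched in a few lines. The real change is in Go (prompt_builder_v2.go); this hedged TypeScript sketch just shows the pattern: "no results" is modeled as an explicit outcome that always produces a final UI message, rather than a condition that can be silently dropped.

```typescript
// Sketch of the "always send a final message" pattern described above.
type SearchOutcome =
  | { kind: "results"; chunks: string[] }
  | { kind: "noResults" };

function finalUIMessage(outcome: SearchOutcome): string {
  if (outcome.kind === "noResults") {
    // The error condition is bubbled up and turned into an explicit
    // message, so the UI channel always receives something.
    return "No search results matched your query.";
  }
  return `Found ${outcome.chunks.length} matching chunks.`;
}
```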
Asset Display in Citations Panel
Fixed an issue where assets weren’t properly displayed when viewing citations that referenced them. Previously, users would see an error message stating “Asset status is indeterminate” when clicking on citations, requiring them to refresh the page to view the asset details. This fix ensures seamless access to the source materials referenced in SmartChat™ responses.
This fix was created specifically for Sara, who relies on citations to verify information sources and needs seamless access to the referenced assets.
When users clicked on citations in their SmartChat™ responses to view the source material, the system would fail to display the referenced asset. Instead of showing the expected document or file, users would see an error message stating “Asset status is indeterminate.” The only workaround was to refresh the entire page, disrupting the user’s workflow and creating friction in the verification process.
Imagine you’re reading a research paper that has footnotes, and when you click on a footnote to see the source, the page that should show you the original book just displays “Book not found” instead. The only way to fix it was to restart your entire reading session. We fixed that broken link - now when you click on a citation in Storytell, you’ll actually see the document that information came from, without having to refresh the page or interrupt your work.
The issue occurred due to a data overwrite problem when navigating to a SmartChat™ from a Collection page. When initializing the thread via a Collection, the asset reference data was being incorrectly overwritten with empty snapshot data.
The fix was implemented in the TextUnitV1.tsx component, which now includes improved validation to prevent setting empty asset references. The code was modified to check not only if the snapshotAssetRef exists but also whether it contains actual data (Object.keys(snapshotAssetRef).length > 0) before attempting to set it as the asset references map. This prevents the scenario where valid asset reference data gets replaced with empty data, ensuring that the citation drawer can properly display the referenced assets without requiring a page refresh.
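The validation described above boils down to a small predicate, sketched here for illustration (the real check lives inside TextUnitV1.tsx):

```typescript
// Only apply a snapshot asset reference if it exists AND actually
// contains data, so valid references are never overwritten with an
// empty object.
type AssetRefMap = Record<string, unknown>;

function shouldApplySnapshot(
  snapshotAssetRef: AssetRefMap | null | undefined
): boolean {
  return !!snapshotAssetRef && Object.keys(snapshotAssetRef).length > 0;
}
```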
Empty Response Handling in Gemini Integration
Fixed an issue where empty parts in responses from the Gemini AI model were causing processing errors. This fix improves the reliability of AI responses by properly handling cases where the model returns empty content segments.
This fix benefits all Storytell users who interact with content processed by the Gemini AI model, ensuring more consistent and reliable responses.
The Gemini AI model occasionally returns responses with empty content segments as part of its streaming output. Previously, the system would attempt to process these empty segments, which could lead to errors or interruptions in the response stream. This created an inconsistent user experience where some AI responses might be incomplete or fail to display properly.
Imagine having a conversation where occasionally the other person moves their mouth but no sound comes out. Instead of trying to figure out what they “said” during those silent moments (which would be confusing and might lead to misunderstandings), we now simply ignore those silent gaps and focus only on the actual words. This makes conversations with Storytell’s AI more reliable because it no longer gets confused by these empty responses that sometimes come from the AI model.
The fix was implemented in the pkg/go/domains/ai/gemini/streamer.go file by adding a condition to check for empty content before attempting to process it. Specifically, the code now includes a check if content == "" that causes the system to continue to the next iteration of the processing loop when empty content is detected, rather than attempting to create and return a response for that empty segment. This simple but effective change prevents the system from creating empty response objects that could cause downstream processing issues, making the response streaming more robust and reliable.
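The actual guard is a one-line `if content == "" { continue }` in Go; this TypeScript sketch shows the same idea applied to a stream of model chunks, with empty segments skipped rather than emitted downstream.

```typescript
// Sketch: filter empty content segments out of a streamed response so no
// empty response objects are created for downstream processing.
function* nonEmptyChunks(chunks: Iterable<string>): Generator<string> {
  for (const content of chunks) {
    if (content === "") continue; // skip empty segments from the model
    yield content;
  }
}
```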
Fixed Mobile Menu Navigation for Logged-In Users
Faster Scaling of File Processing Workers
The autoscaling system for file processing workers now triggers faster when queues start to build up, significantly reducing the average time messages spend waiting to be processed.
Key updates:
This fix helps all users uploading or processing files via Storytell, especially those handling large uploads or batches that previously experienced slowdowns.
Users experienced delays in file processing due to the slow reaction of the system’s autoscaler. The threshold for scaling up more workers was too high, meaning queues could get long before additional resources kicked in.
Lower Queue Threshold for Autoscaling: Reduced the metric threshold that triggers new processing workers, so file queues are addressed more proactively.
Configuration Update: Updated scaling configuration in the backend deployment YAMLs to reflect the new, lower average value required to trigger a scale-up.
Faster Worker Activation: By responding to demand sooner, files are processed and available in Storytell more quickly after upload.
Imagine you’re checking out at a busy grocery store. If new cashiers only open up after the line has gotten really long, checkout is slow. With this fix, new cashiers step in much sooner, so you get through the line faster and wait less.
Citation Buttons Readable in Light Theme
Improved Audio Processing Reliability
We’ve enhanced Storytell’s audio processing capabilities by adding a critical validation step when processing audio files through Deepgram. This update ensures that the system properly identifies when audio transcription fails (such as with unsupported languages like Arabic) and provides appropriate error messages rather than proceeding with empty transcription data. This improvement increases reliability when working with diverse audio content and prevents processing issues from propagating through the system.
Key updates:
This improvement was requested by Nada and built for all Storytell users who work with audio files, particularly those working with international or multilingual content. It’s especially valuable for teams dealing with content in languages that might not be fully supported by our transcription services, ensuring they receive proper notifications rather than silent failures.
Previously, when Deepgram failed to process an audio file (for instance, Arabic language files that aren’t natively supported), Storytell would continue the ingest process with no utterances or transcription data. This created a confusing user experience where users would see their file was “processed” but contained no actual content. Now, users receive clear failure notifications when audio files can’t be properly transcribed, allowing them to seek alternative processing methods.
Imagine you’re sending a voice message in a different language to a friend, but their phone can’t understand that language. Before this update, their phone would just say “Message received!” but show nothing. Now, their phone will tell them “Sorry, I couldn’t understand this language” instead of pretending everything worked fine. This helps everyone know exactly what happened and can look for another way to communicate.
The fix implements a defensive check in the Deepgram function within the ingest pipeline. After receiving a response from the Deepgram API, the code now verifies that the data.Utterances array contains elements before proceeding. If no utterances are found (indicating Deepgram failed to process the file), the system marks the ingest as failed and surfaces a clear transcription-failure notification instead of continuing with empty transcription data.
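The defensive check can be sketched as follows. The real code is Go inside the ingest pipeline; the `DeepgramResult` shape here is a simplified stand-in.

```typescript
// Sketch: treat an empty utterances array as a transcription failure
// instead of an empty-but-"successful" ingest.
type DeepgramResult = { Utterances: string[] };

function validateTranscription(
  data: DeepgramResult
): { ok: boolean; error?: string } {
  if (data.Utterances.length === 0) {
    // Deepgram returned no transcription (e.g. unsupported language):
    // fail loudly so the user gets a clear notification.
    return { ok: false, error: "audio could not be transcribed" };
  }
  return { ok: true };
}
```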
Fixed Homepage Unresponsiveness After Upload for New Users
A recent update caused an issue on the Storytell homepage screen, preventing users from progressing past the initial screen, especially when uploading assets or trying to interact with the SmartChat™ input. This fix ensures that, after uploading assets to a Collection, users will now correctly see their assets displayed in a table on the getting started screen. The asset table is now visible as soon as assets are present, resulting in a more seamless onboarding and upload experience.
Key updates:
This fix is built for new and returning users onboarding onto Storytell, especially those using SmartChat™ or managing files within Collections. It’s especially valuable for users trying to upload assets for the first time, as well as content creators, researchers, or teams who regularly add files and expect to immediately see and interact with their uploaded assets.
Previously, after uploading assets to Storytell Collections or entering the Getting Started screen, users experienced a broken or unresponsive interface; uploaded assets did not appear, and the user could not proceed with their tasks. This created confusion and impeded onboarding, making it unclear if the upload was successful and preventing further interaction with Storytell.
Think of Storytell like a new digital workspace where you can upload your important files and chat about them. Before this fix, if you tried to add a file right when starting out, it was like putting your book on a shelf and then seeing an empty room — there was no sign that your book was there, and you couldn’t do anything else. Now, right after you upload a file, you instantly see it neatly listed on your homepage, so you always know what’s in your Collection and what you can work on next.
The asset table is now rendered inside a conditional control block using the assets query result. Changes span pkg/ts/core/src/components/GettingStartedUpload.tsx, CollectionGettingStartedScreen.tsx, and CollectionScreen.tsx, ensuring assets-related components now only render the table when assets are present, and feedback UI is current and concise.

Improved Getting Started Experience and App Stability During Loading
Storytell has rolled out updates to improve the experience for new users and resolve a bug where the app could crash when actions were attempted during loading. The Getting Started state now works more reliably and only appears when appropriate, and SmartChats™ and Collections are handled more clearly during initial onboarding. This fix also addresses app instability when network requests are made before loading completes. Users will now experience a smoother, less confusing entry to Storytell, and will no longer be returned to the main page unexpectedly during “connecting”.
Key updates:
This fix was designed for new users joining Storytell for the first time or those with very few SmartChats™ and no Collections set up. It’s also for anyone who might encounter connectivity delays—ensuring that interacting with the product during loading is safe and doesn’t cause data loss or confusion.
Previously, if a user tried to use story creation or SmartChats™ while Storytell was still “connecting” (for example, after a login or a slow network), the application could crash and bounce back to the main screen. Also, the onboarding (“Getting Started”) hints sometimes appeared incorrectly or at the wrong times, which could confuse users about what to do next. This fix makes these behaviors predictable and error-free.
Robust Getting Started Detection:
The logic for displaying “Getting Started” onboarding is now accurate, only showing when there are truly no Collections and less than two SmartChats™—and only after the data is fully loaded.
Safer Loading State Management:
All actions triggered while Storytell is “connecting” will now be safely disabled until loading is complete, preventing application crashes or accidental reloads.
Sidebar and Layout Improvements:
The sidebar’s appearance and onboarding content now respond correctly to the user’s state, providing clear navigation/hints only when needed.
Technical Consistency:
All references to SmartChats™ and Collections have been capitalized for clarity, and references are consistent across the Storytell interface.
Imagine you’re setting up a new phone. Before everything is loaded, it’s confusing if you try to take a photo or send a text and the screen suddenly resets to the start. Storytell users faced a similar issue: if you tried to do something before the app had finished loading, the screen would crash back to the home page. Plus, some “getting started” hints would pop up at the wrong times, making things more confusing. With this fix, Storytell now makes sure everything is ready before letting you act, and onboarding help only appears right when it’s needed. It’s like a phone that waits to let you make calls or take selfies until it’s actually finished turning on.
The getting-started check (useIsUserGettingStarted) now verifies that both the Collections tree and SmartChats™ resource have finished loading, and is resilient to the timing of data updates. Components guard against the isConnecting state by early return patterns, and all UI that could trigger requests or crashes is hidden or disabled during loading.

Prompt Editing Now Correctly Updates Existing Prompts
We added an UpdateStoredPrompt method to handle updates to existing prompts.

Imagine you’re editing a document. You wouldn’t want your changes to create a new document instead of updating the one you’re working on. This fix makes sure that when you edit a prompt, your changes are saved to the same prompt, just like how you’d expect when editing a document.

The modal in pkg/ts/core/src/screens/modals/SaveToPromptLibraryModal.tsx now checks if (props.existingPrompt) to determine if the prompt being edited already exists. If so, the UpdateStoredPrompt method is called with the updated details; otherwise, the CreateStoredPrompt method is used to create a new prompt.

client.controlplane.UpdateStoredPrompt: Handles updates to existing prompts.
client.controlplane.CreateStoredPrompt: Creates new prompts when no existing prompt is being edited.

The update request includes:
id: The ID of the prompt being updated.
name: The new name of the prompt.
description: The new description of the prompt.
prompt: The updated text of the prompt.
collectionId: The ID of the collection the prompt belongs to.
organizationContext: Includes organization ID and tenant ID for access control.

Optimized Streaming Data Processing to Prevent Renderer Overload
We’ve improved the stability and performance of our streaming data processing to ensure smoother rendering. This update introduces a 150ms processing interval to handle data more efficiently and prevent overwhelming the renderer.
Key Updates:
This improvement benefits developers and end-users working with real-time data streaming in Storytell. It’s particularly valuable for applications requiring stable, high-performance rendering.
The previous implementation processed streaming data as it arrived, which could overwhelm the renderer and cause performance issues. This fix ensures data is processed at optimal intervals, maintaining smooth performance without sacrificing real-time capabilities.
150ms Buffer Threshold:
Introduced a 150ms interval to buffer and process streaming data, preventing the renderer from being overwhelmed.
Controlled Read Loop:
Implemented a read loop that processes data in chunks, respecting the buffer threshold to maintain performance.
Final Data Cleanup:
Added a forced processing step at the end of the stream to ensure all remaining data is handled correctly.
Imagine trying to pour water into a small cup too quickly—it overflows. Our fix acts like a dam, controlling the water flow to match the cup’s capacity. Now, data is processed in manageable chunks every 150ms, keeping everything running smoothly.
Buffer Threshold Implementation:
The BUFFER_TIME_THRESHOLD constant (150ms) determines the interval at which data is processed. This prevents excessive rendering updates.
Read Loop Mechanism:
The readLoop function processes data chunks asynchronously. If the time since the last processing is less than the threshold, it skips processing until the interval passes.
Final Data Handling:
After the stream ends, a final readLoop call with forceProcessing: true ensures any remaining data is processed, avoiding data loss.
Fixed Inconsistent Collection State When Submitting Prompts
We’ve resolved an issue where submitting a prompt could cause inconsistent behavior with Collections, leading to unexpected switches in the default Personal Collection. This fix ensures a more stable and predictable experience when creating SmartChats.
This fix is for all users who create SmartChats and rely on Collections to organize their content. It’s particularly important for users who work with multiple Collections and need consistent behavior when submitting prompts.
The problem we solved was a race condition that could occur when creating a new SmartChat. In some cases, the backend wouldn’t save data in time, causing the UI to refresh and switch to the default personal Collection. This inconsistency could lead to confusion and a poor user experience.
updateThreadParents Function: Modified the function to handle race conditions more gracefully by checking existing data before updating Collections.

Imagine you’re writing a message, and just as you hit send, your app suddenly switches to a different folder. That’s what was happening here. We fixed it so the app stays in the right place, keeping your SmartChats organized as you expect.

The updateThreadParents function now checks if existing data is present before applying new Collections, preventing unintended UI switches.

Alphabetical Sorting of Collections
We’ve fixed an issue where collections in the sidebar weren’t sorted alphabetically, making it harder to find specific collections quickly. With this update, collections are now ordered logically, enhancing your overall experience on Storytell.
This feature was built for users who manage many collections and need to locate specific ones quickly in the sidebar.
The unsorted nature of collections in the sidebar made it difficult for users to locate specific collections, especially as the number of collections grew. This led to frustration and wasted time searching through disorganized lists. By implementing alphabetical sorting, we’ve streamlined navigation, making it easier for users to find what they need without hassle.
Alphabetical Sorting of Collections:
Collections in the sidebar are now sorted using localeCompare with case-insensitive comparison.
Improved Code Structure:
Refactored the filter method for better readability and maintainability.

Imagine you’re at a library and all the books are scattered randomly on the shelves. It would take forever to find what you’re looking for. Now, with this update, Storytell’s collections are like neatly organized bookshelves—everything is in order, so you can find what you need quickly and easily.

Sorting Implementation:
Sorting uses the localeCompare method on the collection labels, with sensitivity: "base" for case-insensitive comparison and ignorePunctuation: true to disregard punctuation marks.
Code Changes:
Updated the children array retrieval to include sorting after filtering, and simplified the filter method chain for a cleaner implementation.
Impact:
Collections now appear in a predictable alphabetical order, making them quicker to locate in the sidebar.
Resolved Full Screen Loading Indicator Flickering
The full screen loading indicator was flashing multiple times during certain operations, causing a disruptive user experience. This update resolves the issue by optimizing the loading state management, ensuring the loading indicator behaves as expected without flickering.
This fix benefits all users of the platform, particularly those who frequently interact with features that trigger loading states, such as form submissions or data fetching.
The repeated flashing of the full screen loading indicator was disruptive and could have led users to perceive the platform as unreliable or sluggish. By addressing this issue, we enhance the overall user experience and maintain user trust.
Imagine using a phone where apps load smoothly without any screen flickering. This fix ensures that the loading indicator on our platform behaves like a well-optimized phone, loading content without disrupting the user’s experience.
Enhanced Sidebar Functionality for New Users
Improved Mobile Collection Switching
We’ve fixed issues with switching between Collections on mobile devices, ensuring a smoother and more intuitive user experience. This update addresses navigation inconsistencies and improves how Collections are accessed on smaller screens.
Replaced onClick handlers with on:click handlers to better support mobile interactions and prevent event bubbling. Used useUIState to manage the left drawer state, ensuring it closes appropriately when switching Collections on mobile. Added e.preventDefault() and e.stopPropagation() to prevent unintended behavior during Collection switching.

Imagine you’re organizing files on your phone. Before, switching between folders was clunky and sometimes didn’t work as expected. Now, it’s smooth and reliable, making it easier to navigate and find what you need.

Updated the SidebarCollection component to use on:click for better mobile compatibility. Integrated useUIState to control the left drawer, ensuring it closes when switching Collections. Added preventDefault and stopPropagation to manage event flow effectively.

Fixed Browser Crashes Caused by Collections Sidebar Performance Issues
Improved File Extension Handling
We’ve enhanced Storytell’s file handling to make it more reliable and consistent. This update ensures that file extensions are determined based on the content type rather than the filename, reducing errors and improving handling of files with missing or incorrect extensions.
- File extensions are now determined from the content type using `mime.DefaultExtension`.
- Content types with multiple extensions (e.g. `.md`, `.markdown`) are handled consistently.
This improvement is primarily for:
The previous method of determining file extensions relied on the filename, which could be unreliable due to missing, incomplete, or variant extensions. This led to errors during file ingestion and inconsistent handling of certain file types. By switching to content-type-based determination, we’ve eliminated dependency on potentially unreliable user-provided filenames.
New Method for File Extension Determination: File extensions are now derived from the content type using `mime.DefaultExtension`.
Enhanced Handling of Multiple Extensions: Content types with multiple valid extensions (e.g. `.md`, `.markdown`, `.mdown`) are now handled more consistently, ensuring the correct extension is always used.
Fallback Mechanism for Unavailable Content Types
Improved Error Handling
Imagine you’re uploading a document to Storytell, but the filename doesn’t have an extension or has an unusual one (like `.mdx` instead of `.md`). Previously, Storytell might struggle to recognize the file type, leading to errors. Now, Storytell looks at the actual content of the file to figure out what type it is, making the process smoother and more reliable. This is like having a librarian who doesn’t just look at the cover but reads the first few pages to correctly categorize the book.
Content-Type-Based Extension Determination: Storytell now uses `mime.DefaultExtension` to derive the file extension from the content type. By using the `mime` package, Storytell ensures that the file extension is always consistent with the actual content type, regardless of the filename.
Fallback to Filename-Based Detection: When the content type is not provided, Storytell falls back to `mime.GetContentTypeFromExtension` to determine it, ensuring robustness.
Elimination of Filename Dependency: The code no longer relies on `path.Ext` for determining file extensions. Previously, `path.Ext` was used to extract the extension from the filename, an approach that was error-prone for files with missing or incorrect extensions. The new method eliminates this dependency, improving overall reliability.
Enhanced Error Handling
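The content-type-first resolution described above can be sketched as follows. This is a TypeScript illustration with a hypothetical mapping table; the real implementation uses Go’s `mime` package, so none of these names are the actual API:

```typescript
// Hypothetical content-type <-> extension tables for illustration only.
const defaultExtensions: Record<string, string> = {
  "text/markdown": ".md",
  "text/csv": ".csv",
  "application/pdf": ".pdf",
};

const extensionToContentType: Record<string, string> = {
  ".md": "text/markdown",
  ".markdown": "text/markdown",
  ".mdown": "text/markdown",
  ".csv": "text/csv",
  ".pdf": "application/pdf",
};

// Resolve the extension from the content type first; fall back to the
// filename only when no content type is available.
function resolveExtension(contentType: string | undefined, filename: string): string {
  if (!contentType) {
    const dot = filename.lastIndexOf(".");
    const ext = dot >= 0 ? filename.slice(dot).toLowerCase() : "";
    contentType = extensionToContentType[ext];
  }
  return (contentType && defaultExtensions[contentType]) ?? "";
}
```

Note how `.markdown` and `.mdown` both normalize to `.md`, since the extension is derived from the content type rather than echoed from the filename.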
Auto-Improve Prompt Reliability
We’ve fixed an issue where the Auto-Improve prompt feature wasn’t always triggering when it should. This reactivity bug occasionally prevented the auto-improve functionality from appearing at the appropriate times. Users can now rely on this feature to consistently help enhance their prompts without unexpected interruptions.
This fix benefits all Storytell users who rely on the Auto-Improve prompt feature to refine and enhance their prompts, particularly those who frequently create and edit prompts for optimal AI interactions.
The Auto-Improve prompt feature wasn’t consistently detecting when it should appear, leading to unpredictable behavior. This inconsistency meant users couldn’t rely on the assistant to help them enhance their prompts at the expected moments. The fix ensures that the prompt improvement suggestions appear reliably when needed, creating a more consistent and dependable user experience.
Imagine you have a helpful writing assistant that’s supposed to offer suggestions while you’re working on a document. But sometimes, this assistant wouldn’t show up when you expected it to - it was a bit unpredictable. It’s like having a spell-checker that only works some of the time.
We fixed this so now the assistant shows up reliably when you need it. The issue was that the system sometimes couldn’t tell when your document was ready for suggestions. We’ve fixed that detection system, so now it consistently recognizes when you’re working and offers help at the right moments.
The bug was related to a reactivity issue in the PromptAutoImprove component. The core problem was that the component wasn’t properly tracking when the editor was mounted, leading to inconsistent behavior with the Show component.
The fix implemented several changes:
Email Sharing Reliability
We’ve addressed an issue causing inconsistent email delivery when sharing Collections. This fix ensures that users reliably receive email notifications when Collections are shared with them.
Key updates:
This fix benefits all Storytell users who share Collections via email. It ensures that recipients are reliably notified when a Collection is shared with them.
Previously, email sharing was unreliable, with some users not receiving notifications when Collections were shared with them. This issue hindered collaboration and could lead to missed information.
Email Sharing Functionality Fix: Addressed issues with email sharing to ensure reliable delivery of share notifications.
This fix ensures that users receive email notifications when Collections are shared with them.
Imagine you share a document with a friend via email, but they never get the notification. This fix makes sure that when you share something from Storytell via email, your friend always gets the message.
Collection Permission Fixes
Fixed an issue where Collection permissions were not correctly applied. This fix streamlines access management and improves the user experience.
Previously, Collection permissions could be incorrectly applied, leading to unauthorized access or preventing authorized users from accessing Collections. Additionally, some content was not displaying as expected, causing confusion and hindering productivity. These fixes address those issues by improving Collection access control and content visibility.
Imagine you have a folder on your computer with important files. You want to make sure that only certain people can access this folder. Previously, there was a glitch where the permissions for this folder weren’t working correctly, and sometimes people who shouldn’t have access could get in, or people who should have access couldn’t. We fixed that glitch, so now the permissions work as expected. Also, imagine that one of your important files wasn’t showing up in the folder. We fixed that too, so now you can see all your files.
The `createFetchCollectionAccessResource` function in pkg/ts/core/src/domains/collections/collections.store.ts has been modified to correctly override the root access list with the active Collection’s access list. This ensures that the correct permissions are applied when fetching Collection access. The `Limit` parameter in the `client.controlplane.CollectionAccess` API calls has been increased to 1000 to prevent pagination issues.
Enhanced @Mention System for Collections and Assets
Fixed multiple issues with @mentions across Storytell where references to Collections and files would disappear or display incorrectly. The fix ensures consistent behavior when using the “Improve Prompt” feature, pasting content, or selecting previous prompts.
This fix benefits users who actively use @mentions to reference their Collections and files in SmartChats™, particularly those who frequently use the “Improve Prompt” feature or need to reference multiple Collections in their workflows.
Users were experiencing several frustrating issues with @mentions:
- References to Collections and files would disappear when using “Improve Prompt”
- Mentions weren’t displaying correctly in various contexts
- Asset references would break when pasting prompts
- File names weren’t showing up when selecting previous prompts
Mention Persistence Fix: Resolved issues where @mentions would disappear when using the “Improve Prompt” feature, ensuring all references to Collections and files remain intact throughout the interaction.
Display Formatting Corrections: Fixed inconsistent display of @mentions across different contexts, including proper rendering in prompt history and when pasting content.
Asset Reference Handling: Corrected the display of asset references in the improve prompt dialog, ensuring proper icon display and name formatting.
Mention Parser Improvements: Implemented a more robust parsing system to handle @mentions consistently across all contexts, preventing formatting issues and reference breaks.
Imagine you’re writing a paper and using sticky notes to mark important reference books. Previously, some of these sticky notes would mysteriously fall off or show the wrong book title when you tried to improve your writing or copy parts of it. We fixed this so your references now stay exactly where you put them, showing the correct information no matter what you do with your text.
The fix involved several technical improvements:
Implemented a new parsePromptTextMentions utility function using the regex pattern /@\[(asset|collection)_([a-z0-9]+)\]"([^"]+)"/g for consistent mention parsing
Added a CollectionMentionParser extension to the TipTap editor with priority 102, handling mentions in the format @[collection_id]"Collection Name"
Standardized the mention rendering format across all contexts
Updated TextPromptUnitV1 component with new parsing utility for consistent thread history display
Enhanced ImprovePromptModal to properly maintain and display mentions, including asset references
Modified editor extension configuration to ensure proper mention parsing priority
Added safeguards in the mention suggestion system to prevent duplicate or malformed mentions
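A sketch of what a parsePromptTextMentions-style utility could look like under the regex above; the actual signature and return shape in Storytell’s codebase are not shown in this changelog, so the `Mention` type here is an assumption:

```typescript
// Hypothetical return shape for parsed mentions.
type Mention = { kind: "asset" | "collection"; id: string; name: string };

// Pattern from the changelog, with brackets escaped and the id quantified.
const MENTION_RE = /@\[(asset|collection)_([a-z0-9]+)\]"([^"]+)"/g;

function parsePromptTextMentions(text: string): Mention[] {
  const mentions: Mention[] = [];
  for (const m of text.matchAll(MENTION_RE)) {
    // m[1] = kind, m[2] = id, m[3] = display name
    mentions.push({ kind: m[1] as "asset" | "collection", id: m[2], name: m[3] });
  }
  return mentions;
}
```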
Fixed Sharing and Access Control
This update introduces improvements to sharing Collections and addresses sign-in issues within Storytell. These changes enhance the user experience by ensuring seamless access and collaboration.
This update benefits all Storytell users, especially those who collaborate frequently and share Collections with others. Specifically, it helps users who experienced difficulty signing in or sharing Collections.
We addressed two critical issues: the limited visibility of the shareable link tab and sign-in problems. The shareable link tab was not visible to all users, restricting their ability to easily share Collections. Sign-in issues prevented some users from accessing Storytell altogether.
Shareable Link Tab Visibility: The shareable link tab is now consistently visible to all users, enabling them to easily share Collections with others. This ensures everyone can leverage the sharing functionality.
Sign-In Issue Resolution: Addressed and resolved issues that prevented users from signing in to Storytell. This enhancement ensures reliable access to the platform.
Imagine you want to share a cool playlist (Collection) with your friends. Before, the “share” button (shareable link tab) wasn’t always visible. Now, it’s always there, so you can easily share your playlist with anyone. Also, imagine you had trouble getting into your music app (Storytell). That problem is now fixed, so you can always access your music.
Shareable Link Tab: The fix for the shareable link tab involved modifying the `PermissionsDrawerScreen.tsx` file within the pkg/ts/core/src/screens/drawer/ directory. This change ensures that the shareable link option is displayed correctly for all users, regardless of their account type or permissions.
Sign-In Issue: The fix for the sign-in issues involved changes to the `authBrowserMethods.ts` file located in pkg/ts/core/src/domains/auth/implementations/. The `useAuthBrowserMethods` function was updated to improve the handling of authentication state changes using `onAuthStateChanged`.
Enhanced Debugging for MapReduce Streamer Configuration
This update enhances the debugging capabilities for Storytell’s map/reduce streamer configuration. Previously, the streamer configuration used in the final map/reduce step was not visible in the debug logs, hindering troubleshooting. This fix ensures that the final streamer configuration is now visible, enabling more effective debugging.
This update is primarily for engineers and developers who are responsible for configuring and debugging map/reduce processes within Storytell.
The lack of visibility into the final streamer configuration in the debug logs made it difficult to diagnose issues in map/reduce processes. This could lead to increased debugging time and difficulty in identifying the root cause of problems. By exposing this configuration, developers can now more easily identify misconfigurations and resolve issues.
Streamer Configuration Visibility: The streamer configuration used for the final reduction step in map/reduce is now displayed in the debug logs. This allows developers to inspect the exact configuration being used by the language model.
Map/Reduce History Check: A check was implemented to prevent the use of map/reduce when history is attached to the prompt. This change ensures that map/reduce is only used in appropriate contexts.
Imagine you’re building with Lego bricks, and you have a set of instructions to follow. Sometimes, those instructions might have a mistake, and you need to figure out where you went wrong. This update is like giving you a clear picture of the last step in the instructions so you can see exactly what you did and spot any errors more easily.
The fix involved modifying the reduce function within the pkg/go/domains/prompts/prompt.go
file. Specifically, the code was changed to ensure that p.streamerCfg
is correctly set to ai.NewStreamerConfig(buff.String())
before the streamer is generated. This ensures that the correct configuration is used and subsequently displayed in the debug logs. Additionally, a check was added to prevent map/reduce from being used if there’s history attached to the prompt.
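Loosely, the ordering fix and the history check can be sketched as follows. This is a TypeScript illustration of the Go-side logic; the names mirror the changelog’s description but are not the real API:

```typescript
// Illustrative config shape; the real ai.NewStreamerConfig is Go code.
type StreamerConfig = { prompt: string };

// Stand-in for the debug log that now captures the final configuration.
const debugLog: StreamerConfig[] = [];

function reduce(mapOutputs: string[], hasHistory: boolean): StreamerConfig | null {
  // The added check: map/reduce is skipped when history is attached.
  if (hasHistory) return null;
  // Build the config BEFORE generating the streamer, so the final
  // configuration is visible in the debug logs.
  const cfg: StreamerConfig = { prompt: mapOutputs.join("\n") };
  debugLog.push(cfg);
  return cfg;
}
```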
LLM Model Selection
Fixed an issue where Storytell wasn’t remembering your selected AI model between sessions. Previously, when choosing a specific AI model (like GPT-4, Claude, or Gemini) for your SmartChats™, your selection would reset to the default DynamicLLM when you returned to Storytell. With this fix, your model preference is now properly saved until you explicitly change it.
This fix addresses a pain point for power users and specialists who consistently use a specific AI model for their work. This includes users who require:
Users were frustrated by having to repeatedly select their preferred model each time they started a new session in Storytell. This created unnecessary friction in workflows where consistency between model responses was important. By properly saving the model selection, we’ve eliminated this repetitive task, allowing users to maintain their preferred AI experience without additional configuration steps each time they return to the platform.
Persistent Model Selection - Your chosen AI model now properly remains selected across browser sessions until you explicitly change it. This preference is stored securely in your browser and automatically applied whenever you return to Storytell.
Improved Selection Interface - The model selection dropdown in the chat interface now correctly indicates which model is currently active, with proper handling of the default “DynamicLLM” option and all alternative models.
Session-independent AI Experience - Users can now enjoy a consistent AI experience across different work sessions without needing to reconfigure their preferences, creating more predictable and efficient workflows.
Imagine if your phone forgot which keyboard app you like to use every time you turned it off. You’d have to switch back to your preferred keyboard every single time you wanted to send a message, which would get annoying fast.
Storytell had a similar problem with AI models. If you preferred using a specific AI “brain” (like GPT-4 or Claude) for your conversations, you had to select it every time you came back to Storytell. We fixed this bug, so now Storytell remembers your choice, just like your phone remembers your keyboard preference.
So if you find that one particular AI model works best for your specific needs, you can set it once and forget it. Storytell will keep using your preferred AI model until you decide to change it.
This bug fix implements browser-based persistence for LLM model selection using cookie storage. The implementation involved several key changes to the codebase:
We created a new persistent state variable llmModelOverride in the UIState context using the makePersisted function from Solid.js primitives. This state is stored in a cookie named "pb_llm_model:v1" with a 100-year expiration date.
The model selection logic was refactored to use this persistent state rather than the previous in-memory state:
The model selection UI was updated to reflect the persistent state, correctly highlighting the active model and handling the default “DynamicLLM” case when no specific model is selected.
The implementation ensures that prompt requests include the selected model ID when sending requests to the backend, maintaining consistency between the UI state and actual API calls.
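A minimal framework-free sketch of the cookie-backed persistence, assuming the behavior described above. The real code uses Solid.js’s makePersisted; the in-memory cookie jar here stands in for `document.cookie`, and the function names are illustrative:

```typescript
const COOKIE_NAME = "pb_llm_model:v1";

// Tiny stand-in for document.cookie so the sketch is self-contained.
const cookieJar = new Map<string, string>();

function saveModelOverride(modelId: string | null): void {
  if (modelId === null) {
    cookieJar.delete(COOKIE_NAME); // no override: back to the DynamicLLM default
  } else {
    cookieJar.set(COOKIE_NAME, modelId); // persists across sessions
  }
}

function loadModelOverride(): string | null {
  return cookieJar.get(COOKIE_NAME) ?? null; // null means DynamicLLM
}
```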
Collection Permissions Fix
Fixed an issue where invitations to Collections disappeared when viewing child Collections. Users can now consistently see all access permissions regardless of whether they’re viewing a parent or child Collection. This improves the reliability of sharing and permission management across nested Collection structures.
This fix was built for organization administrators and users who manage shared Collections, particularly those who work with nested Collection structures and need to manage access permissions across multiple levels of the hierarchy.
When users invited collaborators to a child Collection, the invitations would disappear from view when accessing that Collection, though the permissions were still correctly applied in the database. This created confusion and duplicate invitation attempts when users couldn’t see that permissions had already been granted. The fix ensures that all permissions are consistently visible regardless of how users navigate to a Collection.
Consistent Permission Visibility: Modified the permission display logic to ensure that invitations and user access rights remain visible regardless of how a user navigates to a Collection.
Improved Permission Fetching: Enhanced the API to fetch both current Collection permissions and parent Collection permissions, ensuring complete visibility across the Collection hierarchy.
Imagine you have a filing cabinet (parent Collection) with several folders inside it (child Collections). You’ve written down a list of friends who can look at each folder on the folder’s cover. The problem was that sometimes when you opened a folder, the list of who could access it would disappear, even though your friends could still open it.
We fixed this so that no matter how you access a folder - whether by opening the cabinet first or going directly to the folder - you’ll always see the complete list of people who have permission to view it. This makes it easier to keep track of who has access to your information.
The root cause of this issue was in the UI’s handling of Collection access permissions. When viewing a child Collection’s permissions, the UI was fetching access data for the parent Collection instead of the current Collection, causing invitations specific to the child Collection to not appear in the interface.
The fix involved:
Modifying the UI to fetch permissions data specifically for the Collection ID in the URL.
Adding logic to fetch and merge permission data from both the current Collection and its root Collection.
The API now performs two distinct queries - one for the current Collection permissions and one for the parent Collection permissions - and combines the results.
This approach accommodates pagination properly, allowing the UI to display complete permission information even when the number of users with access exceeds the default pagination limit of 100 records.
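The merge of the two permission queries might look like this sketch; the types and field names are assumptions, and the deduplication rule (current Collection wins over the root) is an illustrative choice:

```typescript
type AccessEntry = { userId: string; role: string };

// Combine root-Collection and current-Collection access lists, keeping one
// entry per user.
function mergeAccess(current: AccessEntry[], root: AccessEntry[]): AccessEntry[] {
  const byUser = new Map<string, AccessEntry>();
  // Root permissions go in first, so entries specific to the current
  // Collection overwrite them.
  for (const entry of [...root, ...current]) byUser.set(entry.userId, entry);
  return [...byUser.values()];
}
```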
SmartChat™ Message Completion Indicator
We’ve fixed an issue where SmartChat™ wasn’t properly marking the end of AI-generated responses. This update ensures reliable message completion indicators, improving the user experience and enabling more consistent interactions with Storytell. The fix ensures that both the user interface and the underlying system can accurately determine when a response is complete.
This fix benefits all Storytell users engaging with SmartChats™, particularly those who rely on knowing when a response is complete to continue their workflow. It’s especially important for users who need reliable cues about message status for efficient conversation flow.
SmartChat™ messages weren’t being marked as complete with the “isDone” flag set to true. This meant users and the system couldn’t reliably determine when a response was finished, potentially causing UI uncertainty and downstream processing issues.
It’s like having a conversation where the other person doesn’t clearly signal when they’re done speaking - you might not know when to start talking again. Our AI responses were missing their “I’m done speaking” signal. We’ve fixed this so both you and the system clearly know when a response is complete, making conversations smoother and more natural.
The issue was resolved by ensuring the isDone field is properly set to true in the final message packet sent over the websocket connection. This required modifying the SmartChat™ response handler to track message completion state and explicitly mark the final fragment with this flag. The fix ensures proper thread state management and enables UI components to accurately reflect completion status, which is critical for features like message suggestion rendering and proper history recording.
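The completion-flag logic can be illustrated with a small sketch; the fragment shape is hypothetical, not Storytell’s actual websocket wire format:

```typescript
// Hypothetical streamed-message fragment.
type MessageFragment = { text: string; isDone: boolean };

// Only the final fragment carries isDone: true, signaling completion to the
// UI and to downstream processing.
function buildFragments(chunks: string[]): MessageFragment[] {
  return chunks.map((text, i) => ({
    text,
    isDone: i === chunks.length - 1,
  }));
}
```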
Fixed 'Improve It' Button
Fixed Right Drawer Overlay Index
This update corrects the z-index of the right drawer overlay component, ensuring that it displays properly above other UI elements. The fix addresses issues where the drawer might have appeared underneath menus or overlays due to incorrect index values in the CSS module.
Prior to this update, the right drawer overlay was not displaying as intended because its z-index was misconfigured. This could lead to parts of the overlay being hidden behind other UI elements, causing confusion. The fix ensures that the overlay is consistently and correctly positioned, thereby enhancing the overall navigational flow and accessibility.
CSS Module Update in RightDrawer.module.css:
The CSS file was updated to adjust the z-index properties. This ensures that the right drawer overlay now properly stacks above other overlays, improving its visibility.
Refactoring of CSS Overlap Configurations:
Minor modifications were introduced to keep the styling consistent with other overlay components. This involved correcting the z-index variables (such as `$zindex--overlay` and `$zindex--menu-overlay`) so that the proper hierarchy is maintained.
UI Consistency Verification:
Visual tests and commit verifications were performed to confirm that the right drawer now displays correctly across multiple screen sizes and conditions, ensuring a smoother and more predictable user interface.
Think of layers like stickers on a notebook. If one sticker is supposed to be on top, but instead is underneath, you can’t see it clearly. This fix moves the sticker (the right drawer) so that it’s on top of the others and easy to see.
The issue was resolved by modifying the z-index in the RightDrawer.module.css
file. The adjustment ensures that the right drawer overlay’s z-index is properly set to ensure it displays over menu overlays. This required reordering CSS rules to adhere to the correct stacking context within the application. The changes follow our standard theming and layout guidelines, ensuring consistency across components while isolating this fix to the specific module. Verification was done via automated and manual UI tests to confirm the corrected overlay behavior.
Fixed `Undefined` issue when downloading Markdown Tables
This update fixes the handling of formatted text within Markdown tables. Storytell now correctly parses and renders formatted elements in table cells, improving readability when users embed styling inside Markdown tables. The fix ensures consistency across different text formats.
We built this for Storytell users who create and view Markdown documentation, particularly those relying on styled tables to display information clearly. Our focus is on ensuring content creators and reviewers have a seamless viewing experience.
Previously, formatted text within Markdown tables did not display as intended, leading to inconsistent styling and confusion. This mattered because clear presentation in tables is crucial for comprehension, especially when precision matters in technical documentation.
Markdown Table Parsing Improvement: We fixed the extraction and concatenation logic for formatted text within table cells to ensure that Markdown tables render correctly.
Bug Fix Implementation: Adjustments were made in the Markdown rendering component to handle nested elements, ensuring that table structures maintain their clarity even when complex formatting is applied.
Imagine you have a picture book where some pictures have extra colors and effects. Previously, the book mixed up these colors, making the page look messy. Now, each picture shows its true colors clearly, just like a well-organized album.
The changes are implemented in the MarkdownRenderer component. We refined the text extraction function to map over the children nodes and join formatted text content correctly. This refactor ensures that text nodes and element nodes within table cells are processed separately, then recombined with the correct formatting attributes intact. This bug fix minimizes rendering irregularities and improves overall stability of Markdown table displays in Storytell.
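The corrected extraction logic can be sketched as follows; the node shape is a simplified assumption, not the MarkdownRenderer’s actual AST:

```typescript
// Simplified Markdown AST node: either a text string or an element with
// children (e.g. bold/italic spans inside a table cell).
type MdNode = string | { children: MdNode[] };

// Recursively extract and join text, so element nodes contribute their
// children's text instead of serializing to "undefined".
function extractText(node: MdNode): string {
  if (typeof node === "string") return node;
  return node.children.map(extractText).join("");
}
```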
Added GDPR Badge to Landing Page
This update corrects the hyperlink for the GDPR badge in Storytell, replacing the erroneous web.storytell.ai URL with the proper trust.storytell.ai link. This change ensures that users are directed to the accurate compliance page when they click the badge. The update enhances trust by providing clear regulatory information.
We built this for all Storytell users concerned with data privacy and regulatory compliance. It also targets compliance officers and legal teams who review Storytell’s trust signals to confirm the platform meets regulatory standards.
Users faced a broken link when clicking the GDPR badge, leading to confusion about Storytell’s compliance status. Ensuring the URL directs users to the correct trust page is essential for transparency and trust in our platform.
Think of it as fixing a road sign so that when you follow it, you actually end up at the correct destination—like fixing a wrong turn sign so everyone gets home safely.
The code update modifies the URL in the component responsible for rendering the GDPR badge, likely within the footer or a marketing module. This change replaces the string “web.storytell.ai/trust” with “trust.storytell.ai”. The update has been tested to ensure that all instances where the badge appears now use the correct link without side effects.
Fixed Close Button Overlap on Collection Invite Panel
Fixed broken Storytell Response Header Rendering
This fix removes any unparsed custom tags from rendered text in Storytell. By cleaning stray tags that were not processed, the update ensures that users see clear, uncluttered text output. This makes content consumption on Storytell smoother and more professional.
This update is for content creators and readers on Storytell who utilize custom tags within their text. It solves issues for users who expect automated formatting and a clean reading interface without extraneous artifacts.
Previously, any custom tags that failed to parse would remain in the displayed text, leading to confusion and a messy appearance. Removing these unparsed tags was essential to deliver a polished, professional experience.
Imagine you write a note with secret codes, but if the code isn’t turned into a picture, it just looks like random scribbles. Now, those random scribbles are removed so you see only the neat, finished message.
The update modifies the text rendering pipeline in Storytell’s content processor. When rendering text, the parser now scans for custom tags that remain unparsed and automatically strips them out before the final output. This change likely affects functions within the custom text processing modules and improves overall text sanitation, making the output more predictable and visually appealing.
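A minimal sketch of stripping unparsed tags; the tag syntax and regex here are assumptions for illustration, not Storytell’s actual custom-tag grammar:

```typescript
// Remove any leftover <tag>…</tag> or <tag/> markers that the renderer did
// not transform, leaving only the surrounding text.
function stripUnparsedTags(text: string): string {
  return text.replace(/<\/?[a-zA-Z][\w-]*[^>]*>/g, "");
}
```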
Breadcrumbs Always Showing
System Role and Other References
Improved the handling of system instructions for various AI models and refined how Storytell references assets. This update ensures better compatibility with Anthropic and Gemini models, centralizes instruction management, and clarifies the role of references for Language Learning Models (LLMs).
- System instructions are now sent using the `system` role.
- Replaced the `Gotchas` tag with the `Unclear` tag.
This update benefits users working with Anthropic and Gemini models, as well as those who need clear and consistent instructions for LLMs.
Previously, system instructions were not handled consistently across different AI models, leading to issues with Anthropic and Gemini. The terminology used to describe assets was also unclear for LLMs. This update addresses these inconsistencies and clarifies the role of references.
- System instructions are now handled via the `system` role.
- Renamed the `Gotchas` tag to the `Unclear` tag.
Think of this as teaching Storytell to speak different AI languages more fluently. We’ve made sure Storytell knows how to give instructions in the right way for each AI, and we’ve clarified what “stuff” Storytell can use to answer questions.
The changes include:
- Updated the `PrepareUserQuestion` function in pkg/go/domains/prompts/main_transformer.go to favor system instructions.
- Modified services/controlplane/domains/curator/llm_anthropic.go and services/controlplane/domains/curator/llm_gemini.go to inject system instructions directly.
Updated UI Terminology for Clarity
Updated the UI to improve clarity and consistency in terminology. This change replaces "Unclear" with "Limitations" and changes "Consider" to "To Consider".
This update is for all users of Storytell, as it improves the clarity and understandability of the user interface.
The previous terminology was potentially confusing or ambiguous for users. This update provides clearer and more descriptive labels for these sections.
- Replaced the "Unclear" tag in the UI with "Limitations".
- Changed the "Consider" tag to "To Consider" in the UI.
We’ve renamed a couple of labels in Storytell to make them easier to understand. Instead of saying something is “Unclear”, we now say it has “Limitations”. And instead of saying “Consider”, we now say “To Consider”.
This change primarily involves updating the UI components in apps/webconsole/src/domains/threads/components/units/MarkdownRenderer.tsx
to reflect the new terminology.
Improved XLS processing
We’ve resolved an issue that prevented certain XLS files from being processed correctly in Storytell. This fix ensures accurate data extraction from XLS files, improving the reliability of structured processing. The failures were specifically caused by malformed `__` sequences in the header row.
This fix is for all users who upload XLS files to Storytell, particularly those working with structured data from various sources.
Some XLS files failed to process due to a bug in the header row matching process. This prevented users from extracting data and utilizing it within Storytell.
The header-cleansing process now removes repeating `_` characters.
Think of it like fixing a broken zipper on a jacket. Sometimes, a little snag prevents the zipper from working. This fix removes that snag so Storytell can correctly read and process your data.
The issue stemmed from duplicate `__` sequences in the header row, which were created during the “cleansing” process. The fix involves modifying the `cleanseFieldForMatching` function in pkg/go/domains/assets/tabular/csv.go to remove repeating `_` characters from the cleansed output. A new test case was added in pkg/go/domains/assets/tabular/csv_test.go to cover this specific scenario.
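Since the actual fix lives in Go, here is an illustrative TypeScript sketch of the same cleansing rule; the exact normalization steps are assumptions beyond the underscore-collapsing fix the changelog describes:

```typescript
// Sketch of a cleanseFieldForMatching-style normalizer for header fields.
function cleanseFieldForMatching(field: string): string {
  return field
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "_") // non-alphanumeric runs become underscores
    .replace(/_+/g, "_")         // collapse repeated underscores (the fix)
    .replace(/^_|_$/g, "");      // trim leading/trailing underscores
}
```

Without the collapsing step, a header like `Total__Sales` would cleanse to a field containing `__` and fail to match its expected column name.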
Off-Screen Mentions Pop-up
We have fixed an issue where the Mentions pop-up was appearing offscreen by replacing Popper.js with Floating-UI. This change ensures that the pop-up now positions itself correctly within the viewport, providing a better visual experience. Key updates include improved pop-up positioning logic, enhanced responsiveness, and a streamlined integration with Floating-UI. The update addresses feedback from users encountering display issues during interactions.
This bug fix is aimed at users who routinely interact with Mentions on Storytell. In particular, it benefits those who have experienced interruptions or visual glitches in the Mentions feature during their daily communications.
Previously, the Mentions pop-up would sometimes render partially or completely offscreen, leading to a confusing user experience. Resolving this issue improves accessibility and ensures that all interactive elements are fully visible, which is essential for efficient user engagement and productivity.
Imagine you are writing a note on a sticky note, but sometimes the note slips off your desk and becomes hard to read. We fixed the problem causing the note to fall by changing how it sticks, so now it always stays where you can see it. This ensures that whenever you check your Mentions on Storytell, the pop-up is always right where it should be.
In this update, we replaced the older Popper.js library with Floating-UI for calculating and applying dynamic pop-up positions. The VirtualElementPopup component now imports Placement from Floating-UI, and corresponding hooks in popper.ts have been modified to utilize Floating-UI’s computePosition along with its middleware (shift, flip, offset). The changes include updates to event handling through autoUpdate, ensuring real-time repositioning based on viewport changes. These technical refinements ensure seamless interoperability with Storytell’s UI components while enhancing the accuracy of pop-up display dynamics.
Unable to search for Collections with whitespace
This update fixes an issue where searching for Collections with spaces did not work correctly. The bug was resolved by modifying the filtering logic to remove spaces from both the Collection labels and the search query. Key updates include ensuring that multi-word Collections are now accurately matched and improving overall search responsiveness.
This fix is intended for Collection managers, administrators, and power users who rely on efficient and accurate search functionality to navigate large datasets. It addresses the need for reliable real-time filtering when using Collections with multi-word titles.
Before this update, users encountered difficulties when searching for Collections that contained spaces. This issue impaired navigation efficiency and reduced the usability of the search function within Storytell. Correcting this bug ensures that search results are comprehensive and reliable, enhancing user productivity.
Imagine trying to find a book in a library where some titles have spaces between words. Before, if you searched for a book with a two-word title, the search would miss it because it couldn’t recognize the space. With this update in Storytell, the search now ignores spaces, making it easier to find the book (or Collection) you need, much like correctly matching pieces of a puzzle.
Technical details: The update was implemented in the searchAssetsAndCollections function. The new logic involves replacing all spaces in both the Collection labels and the input query with a blank string before performing a case-insensitive comparison. This ensures that Collections with multi-word names are correctly filtered and returned. The code also maintains real-time performance by efficiently processing API calls and DOM updates, ensuring that users experience minimal delay when searching.
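The actual fix lives in the TypeScript searchAssetsAndCollections function; as a language-neutral illustration of the normalization, here is the same idea sketched in Go (the function name and shape are hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesQuery sketches the fixed comparison: strip all spaces from both
// the Collection label and the query, then compare case-insensitively,
// so multi-word Collection names are matched regardless of spacing.
func matchesQuery(label, query string) bool {
	norm := func(s string) string {
		return strings.ToLower(strings.ReplaceAll(s, " ", ""))
	}
	return strings.Contains(norm(label), norm(query))
}

func main() {
	fmt.Println(matchesQuery("Quarterly Reports", "quarterly re")) // true
	fmt.Println(matchesQuery("Notes", "reports"))                  // false
}
```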
Improved Collection Sharing & Access Management
We’ve revamped how Collections are shared and accessed within Storytell, focusing on security and user experience. This update introduces automatic invitation claiming during registration, prevents sharing of personal Collections, and improves the invitation flow with better error handling and user feedback.
We addressed several friction points in the Collection sharing workflow. Users faced confusion when dealing with invitations, especially during registration. Personal Collections could be accidentally shared, and error messages weren’t clear enough when invitation tokens were invalid or already claimed.
Imagine you have a digital filing cabinet where you keep all your important documents. Sometimes you want to share certain folders with your classmates for group projects. We’ve made this sharing process smoother - now when you invite someone new, they automatically get access when they sign up, like getting a key to the cabinet along with their school ID. We’ve also made sure you can’t accidentally share your private folders, and if something goes wrong, you get clear messages explaining what happened.
Other updates in this release: the PrettyTree implementation now uses . for indentation instead of whitespace, and sub-Collections are now created with the child kind.
Prevent Deletion of Root Collections
Imagine you have a big toy box that holds all your best toys and a few smaller boxes inside that you can play with. We made a rule so you can only empty the small boxes, not the big one that holds everything special. This way, you keep your favorite toys safe while still being able to clean up some spaces.
This feature strengthens our API by enforcing protection rules. It ensures only child Collections (for example, favorites, personal, or organization children) can be deleted while preventing the accidental or unauthorized deletion of root Collections.
The API endpoint has been updated from DeleteCollections to DeleteCollection to accurately represent its functionality.
The new function, CanCollectionBeDeleted, inspects the type of Collection using switch-case logic to differentiate between deletable types (child Collections) and non-deletable types (root Collections).
In the DeleteCollection method, the check is executed early. If the validation fails (i.e., when the Collection is not a child), the delete operation is halted with an error message stating, “this kind of Collection cannot be deleted.”
This update assures that deletion requests are processed safely, preserving the integrity of our data and reducing the risk of unintentional deletion of essential Collections.
Renamed DeleteCollections to DeleteCollection: We renamed the endpoint to reflect its actual responsibility accurately. This change clarifies that only a single Collection deletion operation is exposed via the API and helps avoid any confusion regarding bulk deletions.
Introduced CanCollectionBeDeleted Function: We added a helper function, CanCollectionBeDeleted, to validate whether a Collection type is eligible for deletion. It checks the nature of the Collection and allows deletion only if the Collection is a child (such as favorites, personal, or organization children). Root Collections, or any non-deletable type, are automatically protected.
Enhanced DeleteCollection with Validation Checks: The DeleteCollection method now contains enhanced validation logic. Before proceeding with deletion, it verifies the Collection’s type. If the Collection isn’t permissible for deletion (for example, if it’s a root Collection), the API returns a clear error message: “this kind of Collection cannot be deleted”. This ensures our API behaves predictably and safeguards critical data.
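A minimal sketch of the guard described above, with hypothetical kind names (the real enum values in the codebase may differ):

```go
package main

import (
	"errors"
	"fmt"
)

// CollectionKind is a hypothetical stand-in for the real kind enum.
type CollectionKind string

const (
	KindRoot              CollectionKind = "root"
	KindFavorites         CollectionKind = "favorites"
	KindPersonal          CollectionKind = "personal"
	KindOrganizationChild CollectionKind = "org_child"
)

var ErrNotDeletable = errors.New("this kind of Collection cannot be deleted")

// CanCollectionBeDeleted sketches the switch-case guard: only child
// Collections may be deleted; anything else (including roots) is protected.
func CanCollectionBeDeleted(kind CollectionKind) error {
	switch kind {
	case KindFavorites, KindPersonal, KindOrganizationChild:
		return nil
	default:
		return ErrNotDeletable
	}
}

func main() {
	fmt.Println(CanCollectionBeDeleted(KindPersonal)) // <nil>
	fmt.Println(CanCollectionBeDeleted(KindRoot))     // this kind of Collection cannot be deleted
}
```

Running this check early in DeleteCollection means the delete is refused before any data is touched.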
Administrators and API users: Those managing Collection and needing to protect critical system data.
Developers and internal teams: Users who require robust data integrity during delete operations.
We built this feature to solve the issue of accidental or undesired deletion of root Collections. Root Collections are critical for organizing assets, threads, and overall system data. Allowing their deletion could lead to severe data loss or system instability. Restricting deletion to only child Collections protects data integrity, preserves system organization, and prevents unintended deletion errors.
“Improve it” Button not working on Home Page
Long Prompts Cut Off on Collection Screen
Think of it like a magic sticky note that gets bigger as you write more words. Before, part of your writing was hidden, but now the sticky note grows so that you can see every single word without anything being cut off.
This bug fix addresses an issue on the Collection screen where long prompts were being truncated from the top. With this fix, the prompt bar now adjusts dynamically to ensure that the entire prompt is visible, improving readability and user interaction.
The solution required modifying the layout and CSS properties of the prompt bar within the CollectionScreen component. The prompt bar was adjusted to be statically positioned so that as the content grows, it pushes down the other elements rather than overlapping or cutting off the text. Additionally, horizontal centering was implemented to enhance visual alignment and ensure consistency across the interface.
Previously, long prompts were truncated, meaning that important portions of the text were not visible. This was especially problematic for those needing full context for editing and review. The fix guarantees complete visibility of the prompt content, thereby improving usability.
Collaborators not seeing who else has access in shared Collections
Better processing when handling structured CSVs
We made it so that if you give the computer a neat table, it reads it the right way instead of getting confused.
This feature ensures that when a user uploads a structured CSV, it is correctly identified and processed as structured rather than semi-structured. The core change fixes the boolean logic in the CSV processor so that CSVs like Alex’s NBA stats are embedded accurately.
Previously, our CSV processing method misinterpreted structured CSV files as semi-structured. This led to incorrect embedding and data inaccuracies. By addressing the boolean operation error in the CSV processor, structured CSVs are now correctly handled, resulting in better data accuracy and reliability.
The modification was made in the useSemiStructured() method within pkg/go/domains/assets/tabular/csv.go. The fix ensures that when no override is provided, the method returns the status of classification.IsStructured correctly, avoiding a mistaken inversion of the flag. This change prevents structured CSVs from being processed as semi-structured, which could lead to errors in generating embeddings.
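A hedged sketch of the corrected flag logic (the real method signature and classification type in csv.go are simplified here):

```go
package main

import "fmt"

// classification is a simplified stand-in for the real classifier output.
type classification struct {
	IsStructured bool
}

// useSemiStructured sketches the corrected logic: honor an explicit
// override if present; otherwise a CSV is semi-structured exactly when
// the classifier says it is NOT structured. The bug inverted this flag,
// so structured CSVs were routed down the semi-structured path.
func useSemiStructured(c classification, override *bool) bool {
	if override != nil {
		return *override
	}
	return !c.IsStructured
}

func main() {
	structured := classification{IsStructured: true}
	fmt.Println(useSemiStructured(structured, nil)) // false: process as structured
}
```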
Known issue when inviting existing users to a Shared Collection
Improved answers when mentioning Collections
This update fixes the bug where mentions of Collections or assets were ignored when determining the answer strategy. Now, if a user queries with a focus on world knowledge but also includes a reference to a Collection or asset, the system automatically blends world knowledge with private Collection data to provide a relevant answer.
Previously, when users mentioned Collections or assets in their queries—even while focusing on world knowledge—the answer was generated using only global information. This meant that important contextual details were lost, leading to less accurate or relevant responses. Fixing this ensures that both types of knowledge are appropriately blended in the output.
Answer Strategy Adjustment
Our system now checks if a query includes mentions of Collections or assets. If it does, even when the user’s scope is focused on world knowledge, the answer strategy is adjusted to include private knowledge from those Collections.
Explain it to me like I’m five:
Imagine you ask a teacher a question and mention a favorite book; the teacher now uses both what they know about the world and what’s in your favorite book to answer you.
Technical details:
Changes in the thread processor code update the logic to check for nonempty mentions fields (Collections and assets) even when the world knowledge flag is active. This was done to blend answers from our private Collections with global answers.
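The check can be sketched like this (the types and strategy names are illustrative, not the actual thread-processor code):

```go
package main

import "fmt"

// Query is a simplified stand-in for the thread processor's input.
type Query struct {
	WorldKnowledge bool     // user scoped the question to world knowledge
	Collections    []string // mentioned Collections
	Assets         []string // mentioned assets
}

// chooseStrategy sketches the updated logic: even when the world-knowledge
// flag is active, nonempty mention fields switch the strategy to blended.
func chooseStrategy(q Query) string {
	hasMentions := len(q.Collections) > 0 || len(q.Assets) > 0
	switch {
	case q.WorldKnowledge && hasMentions:
		return "blended" // world knowledge + private Collection data
	case q.WorldKnowledge:
		return "world"
	default:
		return "private"
	}
}

func main() {
	q := Query{WorldKnowledge: true, Collections: []string{"NBA Stats"}}
	fmt.Println(chooseStrategy(q)) // blended
}
```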
SmartChats™ from a revoked Collection still show up for the user
This fix ensures that threads stored in Collections a user no longer has access to are not shown in their recent threads list. It prevents accidental exposure of chat threads from revoked Collections.
Before this fix, even after a user’s access to a Collection was revoked, threads from that Collection would still appear in their recent chat list. This bug could cause confusion or expose outdated information. Now, only threads from Collections to which the user still has access are visible.
Access Check Improvements
Introduced stricter authorization checks in the recent threads query to ensure threads from revoked Collections are not returned.
Explain it to me like I’m five:
It’s like having a locked toy box; if you lose the key, you no longer see what’s inside the box when you look around.
Technical details:
The authorization logic now verifies that for each thread returned, the user’s access based on Collection membership is current. The query was updated to join the threads with the current accessible Collections list from get_user_accessible_collections.
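In spirit, the filter works like the sketch below; note the real implementation performs this as a query join against get_user_accessible_collections rather than in-memory filtering, and these type names are illustrative:

```go
package main

import "fmt"

// Thread is a simplified stand-in for a chat thread row.
type Thread struct {
	ID           string
	CollectionID string
}

// recentThreads keeps only threads whose Collection is still in the
// user's currently accessible set, so revoked Collections drop out.
func recentThreads(all []Thread, accessible map[string]bool) []Thread {
	var out []Thread
	for _, t := range all {
		if accessible[t.CollectionID] {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	threads := []Thread{{"t1", "c1"}, {"t2", "c2"}}
	access := map[string]bool{"c1": true} // access to c2 was revoked
	fmt.Println(len(recentThreads(threads, access))) // 1
}
```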
Generic Guest profile avatar
This feature addresses an issue where users signing in with Magic Link or non-Google shared accounts see a generic “Guest” avatar. Currently, if a user logs into our staging environment with either a Storytell account or a personal email, their profile is rendered as “Guest”, even when different accounts and assets are in use. The solution sets the stage for a more intuitive experience by distinguishing between unedited guest profiles and those that have confirmed email data. The feature also lays the groundwork for allowing users to edit their profile name during onboarding or via an Edit Profile button.
This implementation modifies the avatar rendering algorithm. When a user signs in via Magic Link, which bypasses Google’s sharing of profile data, the system now checks for a confirmed email address. If an email is present but no explicit name has been set, the fallback “Guest” label is used as a temporary placeholder. A quick fix adjusts textual cues based on email verification; however, the long-term solution is to permit users to edit their profile names, thereby replacing the default “Guest” with personalized identifiers. Diagnostic endpoints have been integrated (see diagnostic JSON link) to assist in tracking and resolving any discrepancies between different account sessions.
Imagine you log in to an app and see a profile picture with the letter “G”, which simply means “Guest” because the system hasn’t learned your name yet. This is because, when signing in through a special link (Magic Link), the usual details from your Google account aren’t brought in. In simple terms, the app is saying “I don’t know who you are” until you tell it your name. In the future, we plan to let you set your name, so every time you log in, you see your personal details instead of just “Guest.”
Previously, our system showed every signed-in user as “Guest” when they used Magic Link. This caused confusion because different accounts, with distinct profiles (and even different email addresses), were not immediately distinguishable. Users and testers experienced ambiguity when identifying which account was active, which not only eroded user confidence but also complicated troubleshooting. By addressing this, we ensure that users are informed about their account status and the need to personalize their profiles—setting the stage for a better experience.
This feature was primarily built for our staging server users, which include internal testers and early adopters who rely on Magic Link authentication. It’s especially relevant for:
By solving this issue, we are improving clarity for both technical engineers and non-technical stakeholders who rely on accurate visual cues from their profile information.
XLS Upload Progress in Thread/Chat
This bug fix addresses the issue where users were unable to see the upload progress for XLS files in thread or chat environments. Previously, when users uploaded XLS files, there was no indication of progress, leading to confusion and uncertainty about whether the upload was successful. This fix introduces a progress indicator, enhancing the user experience by providing real-time feedback during the upload process.
To implement this fix, we integrated a task and watcher system specifically for XLS file uploads: the upload runs as a tracked task, and a watcher reports its progress back to the thread or chat UI.
Imagine you’re building a LEGO tower, and you want to know how much you’ve built so far. Before, you couldn’t tell how tall your tower was until you finished. Now, with this fix, you have a ruler next to your tower that shows you how tall it is as you build. This way, you always know how much more you need to build to finish your tower.
The main problem was the lack of feedback during XLS file uploads in threads or chats. Users were left guessing whether their files were uploading correctly, leading to frustration and potential errors. By adding a progress indicator, we provide clear feedback, improving user confidence and satisfaction.
This fix was primarily built for users who frequently upload XLS files in chat or thread environments. These users rely on timely and accurate feedback to ensure their files are uploaded successfully, which is crucial for maintaining smooth communication and workflow.
Fixed an issue where .mp4 files were not being processed
We addressed a critical issue where MP4 files were not functioning correctly in the production environment of Storytell. This malfunction was caused by a recent refactor that moved processing to a job-based system without properly outputting sanitized HTML. The fix ensures that MP4 files are now processed seamlessly, restoring their functionality across all environments.
The root cause of the MP4 failure was identified during the transition to a job-based processing system. In the refactor, the process responsible for handling MP4 files did not include the necessary step to output sanitized HTML, leading to failures in production. To resolve this, we reinstated the sanitized HTML output step in the job-based pipeline.
Imagine you’re trying to show a video to your friends, but every time you try, nothing appears. We found out that when we changed how we prepare the videos, we forgot to include a special step that makes them show up correctly. Now, we’ve added that step back, so your videos play without any problems, just like they should.
We built this feature to address the critical issue of MP4 files failing to work in the production environment of Storytell. This problem prevented users from uploading and viewing MP4 content, disrupting their experience and the platform’s functionality. By resolving this, we ensure that media-rich content can be shared and enjoyed seamlessly, maintaining user satisfaction and platform reliability.
This fix was specifically designed for Storytell users who rely on uploading and sharing MP4 videos as part of their storytelling experience. Whether it’s content creators sharing educational videos or users uploading media for personal projects, ensuring MP4 functionality is essential for their use cases. By resolving this issue, we support a smooth and uninterrupted experience for all users engaging with video content on our platform.
Disable YouTube Scraping Temporarily
The latest fix temporarily disables the YouTube scraping functionality while we debug, to ensure the overall stability and performance of our system. This measure is in place to prevent any potential issues arising from incorrect data retrieval or processing during the debugging phase. By disabling this feature, we can focus on identifying and resolving critical bugs without the added complexity of YouTube data scraping.
This fix is specifically aimed at developers and QA engineers within our team who are involved in the debugging process. By temporarily disabling YouTube scraping, we provide these users with a clearer environment, eliminating potential discrepancies caused by YouTube data during testing. This allows for a more straightforward debugging process, enhancing our ability to resolve issues effectively.
Remove fallback behavior from CSV processing
We’ve removed the “fallback” behavior from the CSV processing workflow. Previously, if embeddings couldn’t be generated for a CSV, the job would continue using a “best effort” approach. With this fix, if embeddings generation fails, the entire job will fail, clearly signaling to the user that something went wrong.
In the previous implementation, the CSV row processing included a fallback mechanism that attempted to proceed even when embeddings couldn’t be generated. This approach was intended to monitor real-world performance and necessitated sending additional announcements when failures occurred. However, due to the inherent unpredictability of using a Large Language Model (LLM) for template generation, we encountered a consistent failure rate of approximately 12% in production.
By removing the fallback behavior, the system now enforces a strict policy where the job fails if embeddings aren’t successfully generated. This change not only ensures immediate feedback to users about processing issues but also allows the embedding process to be retried a defined number of times (N attempts). This retry logic enhances reliability without compromising on clear communication of failures.
Imagine you’re building a LEGO model, and sometimes some pieces are missing. Before, if a piece was missing, you’d just keep building and hope for the best. Now, if a piece is missing, the whole project stops so you know there’s a problem. This way, you can fix the issue right away instead of ending up with an incomplete model.
Removal of Fallback Mechanism: Eliminated the “best effort” approach in CSV processing to ensure that jobs fail when embeddings cannot be generated.
Job Failure on Embedding Generation Failure: Implemented a system where the entire job fails if embeddings aren’t successfully created, providing immediate feedback to users.
Retry Logic for Embedding Process: Added functionality to attempt the embedding process multiple times (N retries) before ultimately failing, increasing the chances of successful embeddings without hiding failures.
Previously, the CSV processing system would continue running even when embeddings generation failed, which led to incomplete or inaccurate data processing. This “best effort” approach masked underlying issues and did not effectively inform users about failures, resulting in a 12% failure rate in production. By enforcing a job failure when embeddings can’t be generated, we ensure that users are immediately aware of problems, allowing for quicker troubleshooting and maintaining the integrity of the data processing workflow.
We built this fix for data engineers and analysts who depend on reliable embeddings generation for processing CSV files. Use cases include preparing data for machine learning models, where accurate embeddings are crucial for model performance. By ensuring that embedding failures are promptly reported, we help these users maintain high data quality and streamline their workflow by addressing issues as they arise.
Fix CSV Sampling
The “Fix CSV Sampling” update addresses an issue where the sample CSV file was incorrectly generating a single row that combined both the header and the actual data rows. This fix ensures that the CSV file is properly formatted with distinct headers and multiple data rows, allowing for accurate and efficient data handling.
The core issue was identified in the CSV sampling module, where the function responsible for generating the sample was inadvertently appending data rows directly to the header row, resulting in a malformed single-row CSV file. To resolve this, the function has been refactored to separate the header generation from the data row accumulation. Specifically, the header is now initialized independently, and each subsequent data row is appended as a new distinct row in the CSV structure. Additionally, error handling has been enhanced to ensure that any discrepancies in data formatting are caught and addressed before file generation. This ensures that the resulting CSV adheres to standard formatting conventions, facilitating seamless integration with data processing tools and workflows.
Imagine you’re making a list of your favorite toys. First, you write the titles like “Toy Name” and “Color” at the top. Then, you add each toy’s details on different lines below. Before this fix, sometimes all the toys and the titles got mixed up on one line, making the list hard to read. Now, each toy has its own line under the correct titles, making the list neat and easy to understand.
Separated Header and Data Rows: The CSV generator now clearly distinguishes between the header row and the data rows, ensuring that each section is properly formatted and organized.
Enhanced Error Handling: Improved mechanisms are in place to detect and handle any formatting issues during CSV creation, reducing the likelihood of errors in the generated files.
Optimized Data Processing: Adjustments to the data processing logic allow for more efficient handling of large datasets, resulting in quicker and more reliable CSV file generation.
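Using Go's standard encoding/csv package, the corrected header/data separation looks roughly like this (the function name is illustrative, not the actual sampler code):

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// writeSample sketches the corrected sampler: the header is written as
// its own record first, and each data row is appended as a distinct
// record, instead of being concatenated onto the header row.
func writeSample(header []string, rows [][]string) (string, error) {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	if err := w.Write(header); err != nil {
		return "", err
	}
	for _, row := range rows {
		if len(row) != len(header) {
			return "", fmt.Errorf("row has %d fields, header has %d",
				len(row), len(header))
		}
		if err := w.Write(row); err != nil {
			return "", err
		}
	}
	w.Flush()
	return buf.String(), w.Error()
}

func main() {
	out, _ := writeSample([]string{"Toy Name", "Color"},
		[][]string{{"Robot", "Red"}, {"Bear", "Brown"}})
	fmt.Print(out)
}
```

The field-count check mirrors the enhanced error handling: a malformed row is rejected before the file is generated.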
Previously, the CSV sampling feature was producing files where the header and data were merged into a single row. This made the CSV files difficult to use with standard data tools, as they expected distinct headers and multiple data rows. By correcting this structure, we ensure that users can seamlessly import and manipulate their data without encountering formatting issues, thereby improving overall data reliability and usability.
We developed this fix for data analysts and developers who rely on accurate CSV files for data manipulation and reporting. By ensuring properly formatted CSV samples, these users can efficiently import data into their preferred tools without encountering structural issues, thereby streamlining their workflow and enhancing productivity.
Making CSV Header Row Matching More Lenient
This fix introduces a more lenient approach to matching header rows when processing CSV files. The previous implementation required a strict mapping of every column, which did not accommodate cases where columns might be omitted by the LLM (Large Language Model) due to lack of value. The goal is to adapt our processing to better align with LLM behavior, specifically allowing it to choose relevant headers while ignoring unnecessary ones.
The previous methodology enforced a rule whereby all columns in the incoming CSV file had to be accounted for during processing. This rigidity was problematic, especially when working with responses from LLMs like Claude, which often discard columns deemed irrelevant.
With this fix, the processing logic has been modified to accept LLM responses without requiring strict adherence to our original column mapping rules: headers the LLM returns are matched against the original columns, and columns the LLM omits are simply skipped rather than treated as errors.
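A simplified sketch of the lenient matching (the names are hypothetical and the real matcher is more involved):

```go
package main

import "fmt"

// matchHeaders sketches the lenient policy: every header the LLM returns
// must exist in the original columns, but original columns the LLM
// omitted are simply skipped instead of causing a processing failure.
func matchHeaders(original, fromLLM []string) ([]string, error) {
	known := make(map[string]bool, len(original))
	for _, h := range original {
		known[h] = true
	}
	var matched []string
	for _, h := range fromLLM {
		if !known[h] {
			return nil, fmt.Errorf("unknown header %q in LLM response", h)
		}
		matched = append(matched, h)
	}
	return matched, nil
}

func main() {
	orig := []string{"player", "team", "pts", "notes"}
	llm := []string{"player", "pts"} // "team" and "notes" dropped — allowed now
	m, err := matchHeaders(orig, llm)
	fmt.Println(m, err) // [player pts] <nil>
}
```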
Imagine you have a box of crayons, but sometimes some of your crayons don’t work or are just plain silly colors and you don’t want to use them. This fix helps our program listen to a friend (the computer) who says, “Hey, these crayons are the best ones. Let’s just use these!” Even if you have a set of rules about which crayons you must use, it’s better to listen to your friend and pick the pretty ones instead of sticking to old rules that don’t make sense. This way, we end up with a nicer picture!
The initial CSV processing system was too strict, leading to failures or unnecessary complexity in handling files that LLMs might process differently. This rigidity was a barrier for effective data manipulation and integration, causing delays and frustration. By adopting a more flexible approach, we are now able to better serve our users and allow the system to function more intuitively in line with LLM capabilities.
We primarily built this feature for users who handle CSV data extensively, such as data engineers and analysts who process files with LLM assistance.
Establish Maximum Retry Limit for Updating Asset Status
We’ve implemented a fix that sets a maximum number of attempts for updating the status of an asset. This enhancement ensures that the system doesn’t get stuck trying indefinitely to update an asset’s status, which was causing errors and job failures previously. By limiting the number of attempts, we improve the reliability and stability of the asset processing workflow.
The fix involves modifying the fileproc module to include a max_attempts parameter when updating an asset’s status. Previously, the system would continuously attempt to update the asset status without a defined limit, leading to repeated failures and resource exhaustion.
In the codebase, we’ve introduced a retry mechanism that caps the number of update attempts to a predefined maximum. If the asset status update fails, the system will retry up to the maximum number of attempts before logging the failure and moving on. The implementation includes comprehensive error handling to ensure that failed attempts are properly logged and do not interfere with other operations. Local testing with affected assets confirmed that jobs now fail gracefully after reaching the maximum attempt threshold, preventing endless retry loops and improving overall system performance.
Imagine you’re trying to put together a puzzle, and sometimes a piece just won’t fit. Instead of trying forever, you decide to try a few times and then ask for help. We did something similar with our system: when it tries to update the status of a file and it doesn’t work, it will only try a certain number of times before stopping. This makes everything run smoother and prevents getting stuck.
Max Attempts Parameter Added: Introduced a max_attempts setting to limit the number of times the system tries to update an asset’s status. This prevents endless retries and conserves system resources.
Retry Mechanism Implemented: Developed a retry mechanism that initiates a new attempt to update the status only if the previous attempt fails, up to the defined maximum attempts.
Enhanced Error Logging: Improved error logging to capture detailed information about each failed attempt, aiding in debugging and monitoring system performance.
Graceful Failure Handling: Configured the system to handle failed update attempts gracefully by logging the error and preventing the job from being stuck in an infinite loop.
Before this fix, the system would continuously attempt to update the status of an asset without any limit, leading to repeated errors and failed jobs. This behavior not only caused disruptions in the asset processing workflow but also consumed unnecessary system resources, affecting overall performance and reliability. By setting a maximum number of attempts, we prevent these endless retry loops, ensuring that failures are handled efficiently and do not impact other operations.
This fix is designed for our engineering team and system administrators who manage asset processing workflows. Specifically, it addresses the needs of teams dealing with large volumes of asset status updates, ensuring that the system remains stable and efficient even when encountering issues. By implementing a capped retry mechanism, we provide a more reliable and maintainable solution for handling asset status updates, reducing downtime and improving user satisfaction.
Highlight Issue on Knowledge Preference Buttons on Homepage
Fixed a bug where the whole page shows an error while chatting