Bug Fixes
Recent improvements and issue resolutions in Storytell
Summary
Fixed multiple issues with @mentions across Storytell where references to Collections and files would disappear or display incorrectly. The fix ensures consistent behavior when using the “Improve Prompt” feature, pasting content, or selecting previous prompts.
Who We Built This For
This fix benefits users who actively use @mentions to reference their Collections and files in SmartChats™, particularly those who frequently use the “Improve Prompt” feature or need to reference multiple Collections in their workflows.
The Problem We Solved With This Feature:
Users were experiencing several frustrating issues with @mentions:
- References to Collections and files would disappear when using “Improve Prompt”
- Mentions weren’t displaying correctly in various contexts
- Asset references would break when pasting prompts
- File names weren’t showing up when selecting previous prompts
Specific Items Shipped:
- Mention Persistence Fix: Resolved issues where @mentions would disappear when using the “Improve Prompt” feature, ensuring all references to Collections and files remain intact throughout the interaction.
- Display Formatting Corrections: Fixed inconsistent display of @mentions across different contexts, including proper rendering in prompt history and when pasting content.
- Asset Reference Handling: Corrected the display of asset references in the improve prompt dialog, ensuring proper icon display and name formatting.
- Mention Parser Improvements: Implemented a more robust parsing system to handle @mentions consistently across all contexts, preventing formatting issues and reference breaks.
Explained Simply:
Imagine you’re writing a paper and using sticky notes to mark important reference books. Previously, some of these sticky notes would mysteriously fall off or show the wrong book title when you tried to improve your writing or copy parts of it. We fixed this so your references now stay exactly where you put them, showing the correct information no matter what you do with your text.
Technical Details:
The fix involved several technical improvements:
- Implemented a new `parsePromptTextMentions` utility function using the regex pattern `/@\[(asset|collection)_([a-z0-9]+)\]"([^"]+)"/g` for consistent mention parsing
- Added a `CollectionMentionParser` extension to the TipTap editor with priority 102, handling mentions in the format `@[collection_id]"Collection Name"`
- Standardized the mention rendering format to `@[id]"name"` across all contexts
- Updated the `TextPromptUnitV1` component with the new parsing utility for consistent thread history display
- Enhanced `ImprovePromptModal` to properly maintain and display mentions, including asset references
- Modified the editor extension configuration to ensure proper mention parsing priority
- Added safeguards in the mention suggestion system to prevent duplicate or malformed mentions
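To make the parsing approach concrete, here is a minimal sketch of what a `parsePromptTextMentions`-style utility could look like. The `Mention` shape and function body are assumptions for illustration; only the regex and the `@[collection_id]"Collection Name"` format come from the description above.

```typescript
// Hypothetical sketch of a parsePromptTextMentions-style utility.
// The Mention shape is an assumption; the regex mirrors the
// @[collection_id]"Collection Name" mention format described above.
type Mention = { kind: "asset" | "collection"; id: string; name: string };

const MENTION_RE = /@\[(asset|collection)_([a-z0-9]+)\]"([^"]+)"/g;

function parsePromptTextMentions(text: string): Mention[] {
  const mentions: Mention[] = [];
  // matchAll yields every mention with its capture groups intact.
  for (const match of text.matchAll(MENTION_RE)) {
    mentions.push({
      kind: match[1] as Mention["kind"],
      id: match[2],
      name: match[3],
    });
  }
  return mentions;
}
```

Centralizing one regex like this is what lets the thread history, paste handling, and the improve-prompt dialog all agree on what a mention looks like.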
Summary
This update introduces improvements to sharing Collections and addressing sign-in issues within Storytell. These changes enhance the user experience by ensuring seamless access and collaboration.
- Shareable link tab is now visible to all users, enabling wider content distribution.
- Sign-in issues have been resolved, ensuring smoother access to the platform.
Who We Built This For
This update benefits all Storytell users, especially those who collaborate frequently and share Collections with others. Specifically, it helps users who experienced difficulty signing in or sharing Collections.
The Problem We Solved With This Feature
We addressed two critical issues: the limited visibility of the shareable link tab and sign-in problems. The shareable link tab was not visible to all users, restricting their ability to easily share Collections. Sign-in issues prevented some users from accessing Storytell altogether.
Specific Items Shipped:
- Shareable Link Tab Visibility: The shareable link tab is now consistently visible to all users, enabling them to easily share Collections with others. This ensures everyone can leverage the sharing functionality.
- Sign-In Issue Resolution: Addressed and resolved issues that prevented users from signing in to Storytell. This enhancement ensures reliable access to the platform.
Explained Simply:
Imagine you want to share a cool playlist (Collection) with your friends. Before, the “share” button (shareable link tab) wasn’t always visible. Now, it’s always there, so you can easily share your playlist with anyone. Also, imagine you had trouble getting into your music app (Storytell). That problem is now fixed, so you can always access your music.
Technical Details:
- Shareable Link Tab: The fix for the shareable link tab involved modifying the `PermissionsDrawerScreen.tsx` file within the `pkg/ts/core/src/screens/drawer/` directory. This change ensures that the shareable link option is displayed correctly for all users, regardless of their account type or permissions.
- Sign-In Issue: The fix for the sign-in issues involved changes to the `authBrowserMethods.ts` file located in `pkg/ts/core/src/domains/auth/implementations/`. The `useAuthBrowserMethods` function was updated to improve the handling of authentication state changes using `onAuthStateChanged`.
Summary
This update enhances the debugging capabilities for Storytell’s map/reduce streamer configuration. Previously, the streamer configuration used in the final map/reduce step was not visible in the debug logs, hindering troubleshooting. This fix ensures that the final streamer configuration is now visible, enabling more effective debugging.
- Improves visibility of streamer configuration in debug logs.
- Prevents map/reduce usage when history is attached to the prompt.
Who We Built This For
This update is primarily for engineers and developers who are responsible for configuring and debugging map/reduce processes within Storytell.
The Problem We Solved With This Feature
The lack of visibility into the final streamer configuration in the debug logs made it difficult to diagnose issues in map/reduce processes. This could lead to increased debugging time and difficulty in identifying the root cause of problems. By exposing this configuration, developers can now more easily identify misconfigurations and resolve issues.
Specific Items Shipped:
-
Streamer Configuration Visibility: The streamer configuration used for the final reduction step in map/reduce is now displayed in the debug logs. This allows developers to inspect the exact configuration being used by the language model.
-
Map/Reduce History Check: A check was implemented to prevent the use of map/reduce when history is attached to the prompt. This change ensures that map/reduce is only used in appropriate contexts.
Explained Simply:
Imagine you’re building with Lego bricks, and you have a set of instructions to follow. Sometimes, those instructions might have a mistake, and you need to figure out where you went wrong. This update is like giving you a clear picture of the last step in the instructions so you can see exactly what you did and spot any errors more easily.
Technical Details:
The fix involved modifying the reduce function within the `pkg/go/domains/prompts/prompt.go` file. Specifically, the code was changed to ensure that `p.streamerCfg` is correctly set to `ai.NewStreamerConfig(buff.String())` before the streamer is generated. This ensures that the correct configuration is used and subsequently displayed in the debug logs. Additionally, a check was added to prevent map/reduce from being used if there’s history attached to the prompt.
Summary
Fixed an issue where Storytell wasn’t remembering your selected AI model between sessions. Previously, when choosing a specific AI model (like GPT-4, Claude, or Gemini) for your SmartChats™, your selection would reset to the default DynamicLLM when you returned to Storytell. With this fix, your model preference is now properly saved until you explicitly change it.
Who We Built This For
This fix addresses a pain point for power users and specialists who consistently use a specific AI model for their work. This includes users who require:
- Consistent output formatting across multiple sessions
- Model-specific capabilities for specialized tasks
- Predictable performance characteristics for sensitive workflows
The Problem We Solved With This Feature:
Users were frustrated by having to repeatedly select their preferred model each time they started a new session in Storytell. This created unnecessary friction in workflows where consistency between model responses was important. By properly saving the model selection, we’ve eliminated this repetitive task, allowing users to maintain their preferred AI experience without additional configuration steps each time they return to the platform.
Specific Items Shipped:
- Persistent Model Selection: Your chosen AI model now properly remains selected across browser sessions until you explicitly change it. This preference is stored securely in your browser and automatically applied whenever you return to Storytell.
- Improved Selection Interface: The model selection dropdown in the chat interface now correctly indicates which model is currently active, with proper handling of the default “DynamicLLM” option and all alternative models.
- Session-Independent AI Experience: Users can now enjoy a consistent AI experience across different work sessions without needing to reconfigure their preferences, creating more predictable and efficient workflows.
Explained Simply:
Imagine if your phone forgot which keyboard app you like to use every time you turned it off. You’d have to switch back to your preferred keyboard every single time you wanted to send a message, which would get annoying fast.
Storytell had a similar problem with AI models. If you preferred using a specific AI “brain” (like GPT-4 or Claude) for your conversations, you had to select it every time you came back to Storytell. We fixed this bug, so now Storytell remembers your choice, just like your phone remembers your keyboard preference.
So if you find that one particular AI model works best for your specific needs, you can set it once and forget it. Storytell will keep using your preferred AI model until you decide to change it.
Technical Details:
This bug fix implements browser-based persistence for LLM model selection using cookie storage. The implementation involved several key changes to the codebase:
We created a new persistent state variable `llmModelOverride` in the UIState context using the `makePersisted` function from Solid.js primitives. This state is stored in a cookie named `pb_llm_model:v1` with a 100-year expiration date.
The model selection logic was refactored to use this persistent state rather than the previous in-memory state:
- Removed the temporary model signal from `PromptContext`
- Updated the `ChatBottomBar` component to consume the model choice from `UIState` instead
- Modified comparison logic to handle the empty string (`""`) as the default value instead of `undefined`
- When a user selects a model, the choice is now stored in browser cookies, allowing it to persist across sessions. The implementation uses the proper cookie storage APIs to ensure compatibility across browsers.
The model selection UI was updated to reflect the persistent state, correctly highlighting the active model and handling the default “DynamicLLM” case when no specific model is selected.
The implementation ensures that prompt requests include the selected model ID when sending requests to the backend, maintaining consistency between the UI state and actual API calls.
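As a rough illustration of the persistence pattern, the sketch below reads and writes a model override through a pluggable key-value store (standing in for cookie storage). The `KeyValueStore` interface and helper names are hypothetical; the key name and the empty-string default come from the description above.

```typescript
// Minimal sketch of persisting an LLM model override. The store
// abstraction stands in for cookie storage; helper names are
// illustrative, not Storytell's actual code.
interface KeyValueStore {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

const LLM_MODEL_KEY = "pb_llm_model:v1";

function loadModelOverride(store: KeyValueStore): string {
  // Empty string ("") means "no override": fall back to DynamicLLM.
  return store.get(LLM_MODEL_KEY) ?? "";
}

function saveModelOverride(store: KeyValueStore, modelId: string): void {
  store.set(LLM_MODEL_KEY, modelId);
}
```

Treating `""` rather than `undefined` as the default keeps the persisted value round-trippable, since cookie values are always strings.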
Summary
Fixed an issue where invitations to Collections disappeared when viewing child Collections. Users can now consistently see all access permissions regardless of whether they’re viewing a parent or child Collection. This improves the reliability of sharing and permission management across nested Collection structures.
Who We Built This For
This fix was built for organization administrators and users who manage shared Collections, particularly those who work with nested Collection structures and need to manage access permissions across multiple levels of the hierarchy.
The Problem We Solved With This Feature:
When users invited collaborators to a child Collection, the invitations would disappear from view when accessing that Collection, though the permissions were still correctly applied in the database. This created confusion and duplicate invitation attempts when users couldn’t see that permissions had already been granted. The fix ensures that all permissions are consistently visible regardless of how users navigate to a Collection.
Specific Items Shipped:
- Consistent Permission Visibility: Modified the permission display logic to ensure that invitations and user access rights remain visible regardless of how a user navigates to a Collection.
- Improved Permission Fetching: Enhanced the API to fetch both current Collection permissions and parent Collection permissions, ensuring complete visibility across the Collection hierarchy.
Explained Simply:
Imagine you have a filing cabinet (parent Collection) with several folders inside it (child Collections). You’ve written down a list of friends who can look at each folder on the folder’s cover. The problem was that sometimes when you opened a folder, the list of who could access it would disappear, even though your friends could still open it.
We fixed this so that no matter how you access a folder - whether by opening the cabinet first or going directly to the folder - you’ll always see the complete list of people who have permission to view it. This makes it easier to keep track of who has access to your information.
Technical Details:
The root cause of this issue was in the UI’s handling of Collection access permissions. When viewing a child Collection’s permissions, the UI was fetching access data for the parent Collection instead of the current Collection, causing invitations specific to the child Collection to not appear in the interface.
The fix involved:
- Modifying the UI to fetch permissions data specifically for the Collection ID in the URL.
- Adding logic to fetch and merge permission data from both the current Collection and its root Collection.
- Updating the API to perform two distinct queries, one for the current Collection's permissions and one for the parent Collection's, and combine the results.
- Handling pagination properly, allowing the UI to display complete permission information even when the number of users with access exceeds the default pagination limit of 100 records.
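A merge of the two query results might look like the sketch below. The `PermissionEntry` shape and precedence rule (an entry on the current Collection overriding the parent's) are assumptions for illustration, not the actual Storytell implementation.

```typescript
// Illustrative sketch of combining permission results from the current
// Collection and its parent. The PermissionEntry shape is assumed.
type PermissionEntry = { subjectId: string; role: string };

function mergePermissions(
  current: PermissionEntry[],
  parent: PermissionEntry[],
): PermissionEntry[] {
  const merged = new Map<string, PermissionEntry>();
  // Insert parent entries first so a more specific entry on the
  // current Collection overwrites the inherited one.
  for (const entry of [...parent, ...current]) {
    merged.set(entry.subjectId, entry);
  }
  return [...merged.values()];
}
```

De-duplicating by subject also keeps the merged list stable when both queries return the same user, which matters once pagination is involved.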
Summary
We’ve fixed an issue where SmartChat™ wasn’t properly marking the end of AI-generated responses. This update ensures reliable message completion indicators, improving the user experience and enabling more consistent interactions with Storytell. The fix ensures that both the user interface and the underlying system can accurately determine when a response is complete.
Who We Built This For
This fix benefits all Storytell users engaging with SmartChats™, particularly those who rely on knowing when a response is complete to continue their workflow. It’s especially important for users who need reliable cues about message status for efficient conversation flow.
The Problem We Solved With This Feature
SmartChat™ messages weren’t being marked as complete with the “isDone” flag set to true. This meant users and the system couldn’t reliably determine when a response was finished, potentially causing UI uncertainty and downstream processing issues.
Specific Items Shipped:
- Enhanced message completion signaling: Fixed the message completion flag to properly mark when an AI response is fully delivered, improving reliability of the messaging interface.
Explained Simply:
It’s like having a conversation where the other person doesn’t clearly signal when they’re done speaking - you might not know when to start talking again. Our AI responses were missing their “I’m done speaking” signal. We’ve fixed this so both you and the system clearly know when a response is complete, making conversations smoother and more natural.
Technical Details:
The issue was resolved by ensuring the isDone field is properly set to true in the final message packet sent over the websocket connection. This required modifying the SmartChat™ response handler to track message completion state and explicitly mark the final fragment with this flag. The fix ensures proper thread state management and enables UI components to accurately reflect completion status, which is critical for features like message suggestion rendering and proper history recording.
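The completion-flag logic can be sketched as follows; the `MessageFragment` shape is a hypothetical stand-in for the actual websocket packet type, with only the `isDone` field taken from the description above.

```typescript
// Sketch of explicitly flagging the final fragment of a streamed
// response. The MessageFragment shape is an assumption; only isDone
// comes from the fix described above.
type MessageFragment = { text: string; isDone: boolean };

function finalizeFragments(fragments: MessageFragment[]): MessageFragment[] {
  return fragments.map((fragment, i) => ({
    ...fragment,
    // Only the last packet carries the completion signal.
    isDone: i === fragments.length - 1,
  }));
}
```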
Summary
We’ve fixed an issue where the “Improve It” button was malfunctioning due to JSON formatting inconsistencies. This repair restores the ability to refine and enhance AI-generated content with a single click. Additionally, we’ve addressed potential resource leaks in the Gemini implementation, improving overall system stability and performance.
Who We Built This For
This fix benefits all users who rely on content refinement capabilities, particularly content creators, editors, and anyone who iteratively improves AI-generated responses. It’s especially valuable for workflows that depend on quick content enhancement.
The Problem We Solved With This Feature
The “Improve It” button stopped working due to changes in the underlying LLM implementation. Specifically, when the system was updated to use Gemini 2.0 Flash, that model returned JSON in a non-standard format that broke the button’s functionality. Additionally, there were resources that weren’t being properly released after response generation.
Specific Items Shipped:
- Enhanced JSON handling: Improved the system’s ability to process varying JSON formats from different AI models, making features like “Improve It” more resilient to backend changes.
- Resource management improvements: Addressed potential memory leaks in the Gemini implementation, improving system stability and performance.
Explained Simply:
Imagine you have a universal translator that suddenly stops working with a specific language because that language uses slightly different grammar rules. The “Improve It” button was like this translator - it stopped working because one of our AI models (Gemini) started formatting its answers in a way our button couldn’t understand. We’ve upgraded the translator to understand this different grammar, so the button works again no matter which AI model is generating the content.
Technical Details:
The issue was resolved by adding a new function in the `pkg/go/domains/ai` package that correctly handles JSON responses from Gemini 2.0 Flash. The model was returning responses wrapped in a Markdown code fence labeled `json` rather than as strict JSON, which caused parsing failures. The fix includes a more robust JSON detection and extraction mechanism that can handle both standard JSON and the fenced format used by Gemini. Additionally, resource cleanup was improved in the Gemini implementation of `ai.Streamer` to prevent goroutine leaks, ensuring that system resources are properly released after response generation.
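The tolerant-extraction technique is easy to show in isolation. The actual fix lives in Go; the TypeScript sketch below only illustrates the idea of accepting both raw JSON and JSON wrapped in a Markdown fence, and its regex is an assumption rather than the shipped code.

```typescript
// Sketch of tolerant JSON extraction: accepts raw JSON or JSON wrapped
// in a Markdown code fence (with or without a "json" label).
function extractJson(response: string): string {
  const trimmed = response.trim();
  // Strip a surrounding ```json ... ``` fence if one is present.
  const fenced = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return fenced ? fenced[1] : trimmed;
}
```

The extracted string can then be handed to a normal JSON parser, so callers no longer need to know which model produced the response.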
Summary
This update corrects the z-index of the right drawer overlay component, ensuring that it displays properly above other UI elements. The fix addresses issues where the drawer might have appeared underneath menus or overlays due to incorrect index values in the CSS module.
Who we built this for
- End-Users and Designers: Users interacting with the right-hand drawer will now see the correct overlay, improving visibility and user experience.
- QA Teams and Frontend Developers: This fix simplifies visual debugging and ensures that layered UI components render in the expected order.
The problem we solved with this feature
Prior to this update, the right drawer overlay was not displaying as intended because its z-index was misconfigured. This could lead to parts of the overlay being hidden behind other UI elements, causing confusion. The fix ensures that the overlay is consistently and correctly positioned, thereby enhancing the overall navigational flow and accessibility.
Specific items shipped:
- CSS Module Update in `RightDrawer.module.css`: The CSS file was updated to adjust the z-index properties. This ensures that the right drawer overlay now properly stacks above other overlays, improving its visibility.
- Refactoring of CSS Overlap Configurations: Minor modifications were introduced to keep the styling consistent with other overlay components. This involved correcting the z-index variables (such as `$zindex--overlay` and `$zindex--menu-overlay`) so that the proper hierarchy is maintained.
- UI Consistency Verification: Visual tests and commit verifications were performed to confirm that the right drawer now displays correctly across multiple screen sizes and conditions, ensuring a smoother and more predictable user interface.
Explain it to me like I’m five
Think of layers like stickers on a notebook. If one sticker is supposed to be on top, but instead is underneath, you can’t see it clearly. This fix moves the sticker (the right drawer) so that it’s on top of the others and easy to see.
Technical details
The issue was resolved by modifying the z-index in the `RightDrawer.module.css` file. The adjustment sets the right drawer overlay's z-index so that it displays over menu overlays. This required reordering CSS rules to adhere to the correct stacking context within the application. The changes follow our standard theming and layout guidelines, ensuring consistency across components while isolating this fix to the specific module. Verification was done via automated and manual UI tests to confirm the corrected overlay behavior.
Summary
This update fixes the handling of formatted text within Markdown tables. Storytell now correctly parses and renders formatted elements in table cells, improving readability when users embed styling inside Markdown tables. The fix ensures consistency across different text formats.
Who We Built This For
We built this for Storytell users who create and view Markdown documentation, particularly those relying on styled tables to display information clearly. Our focus is on ensuring content creators and reviewers have a seamless viewing experience.
The Problem We Solved With This Feature
Previously, formatted text within Markdown tables did not display as intended, leading to inconsistent styling and confusion. This mattered because clear presentation in tables is crucial for comprehension, especially when precision matters in technical documentation.
Specific Items Shipped:
-
Markdown Table Parsing Improvement: We fixed the extraction and concatenation logic for formatted text within table cells to ensure that Markdown tables render correctly.
-
Bug Fix Implementation: Adjustments were made in the Markdown rendering component to handle nested elements, ensuring that table structures maintain their clarity even when complex formatting is applied.
Explained Simply:
Imagine you have a picture book where some pictures have extra colors and effects. Previously, the book mixed up these colors, making the page look messy. Now, each picture shows its true colors clearly, just like a well-organized album.
Technical Details:
The changes are implemented in the MarkdownRenderer component. We refined the text extraction function to map over the children nodes and join formatted text content correctly. This refactor ensures that text nodes and element nodes within table cells are processed separately, then recombined with the correct formatting attributes intact. This bug fix minimizes rendering irregularities and improves overall stability of Markdown table displays in Storytell.
Summary
This update corrects the hyperlink for the GDPR badge in Storytell, replacing the erroneous web.storytell.ai URL with the proper trust.storytell.ai link. This change ensures that users are directed to the accurate compliance page when they click the badge. The update enhances trust by providing clear regulatory information.
Who We Built This For
We built this for all Storytell users concerned with data privacy and regulatory compliance. It also targets compliance officers and legal teams who review Storytell’s trust signals to confirm the platform meets regulatory standards.
The Problem We Solved With This Feature
Users faced a broken link when clicking the GDPR badge, leading to confusion about Storytell’s compliance status. Ensuring the URL directs users to the correct trust page is essential for transparency and trust in our platform.
Specific Items Shipped:
- URL Correction for GDPR Badge: The hyperlink for the GDPR badge has been updated to point to https://trust.storytell.ai, removing the confusion caused by the previous URL.
Explained Simply:
Think of it as fixing a road sign so that when you follow it, you actually end up at the correct destination—like fixing a wrong turn sign so everyone gets home safely.
Technical Details:
The code update modifies the URL in the component responsible for rendering the GDPR badge, likely within the footer or a marketing module. This change replaces the string “web.storytell.ai/trust” with “trust.storytell.ai”. The update has been tested to ensure that all instances where the badge appears now use the correct link without side effects.
Summary
This bug fix addresses an overlapping issue in the right drawer component where the close button interfered with the title. Storytell users will now experience a clean UI where the close button and title are rendered separately with proper spacing. This enhances both usability and visual clarity.
Who We Built This For
This fix is aimed at Storytell users who interact with the right drawer for various in-app tasks. It is particularly beneficial for users relying on accessible and clear navigation buttons in the user interface.
The Problem We Solved With This Feature
The close button was previously overlapping the title, making the interface cluttered and less accessible. This overlap could lead to misclicks and a confusing user experience. The update resolves these issues, providing a cleaner, more intuitive interaction.
Specific Items Shipped:
- UI Layout Adjustment: Changes were applied in the RightDrawer component to correct spacing and positioning, ensuring that the close button and the title do not overlap.
Explained Simply:
Think of it like rearranging furniture in a room so that nothing gets in the way of important signs. Now the button to close the panel stands apart clearly from the title text, so you know exactly where everything is.
Technical Details:
The fix involved edits to the RightDrawer.tsx file. Developers adjusted the layout spacing and refined the tabIndex properties to ensure that interactive elements do not interfere with each other. These changes optimize the component’s render flow and improve accessibility by clearly delineating clickable areas.
Summary
This fix removes any unparsed custom tags from rendered text in Storytell. By cleaning stray tags that were not processed, the update ensures that users see clear, uncluttered text output. This makes content consumption on Storytell smoother and more professional.
Who We Built This For
This update is for content creators and readers on Storytell who utilize custom tags within their text. It solves issues for users who expect automated formatting and a clean reading interface without extraneous artifacts.
The Problem We Solved With This Feature
Previously, any custom tags that failed to parse would remain in the displayed text, leading to confusion and a messy appearance. Removing these unparsed tags was essential to deliver a polished, professional experience.
Specific Items Shipped:
- Custom Tag Cleanup: Implemented logic to detect and remove any custom tags that were not successfully parsed during text rendering, ensuring that only intended content is displayed.
Explained Simply:
Imagine you write a note with secret codes, but if the code isn’t turned into a picture, it just looks like random scribbles. Now, those random scribbles are removed so you see only the neat, finished message.
Technical Details:
The update modifies the text rendering pipeline in Storytell’s content processor. When rendering text, the parser now scans for custom tags that remain unparsed and automatically strips them out before the final output. This change likely affects functions within the custom text processing modules and improves overall text sanitation, making the output more predictable and visually appealing.
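The stripping step can be illustrated with a small sanitizer. The `<st-...>` tag naming below is purely hypothetical (the source does not state Storytell's actual custom-tag syntax); the sketch only demonstrates the technique of unwrapping paired tags and deleting stray ones.

```typescript
// Illustrative sketch: strip leftover custom tags that the renderer
// did not parse. The <st-...> naming is a hypothetical example, not
// Storytell's actual tag syntax.
function stripUnparsedTags(text: string): string {
  return text
    // Remove paired custom tags but keep their inner text...
    .replace(/<st-([a-z-]+)[^>]*>([\s\S]*?)<\/st-\1>/g, "$2")
    // ...then drop any stray self-closing or unmatched tags.
    .replace(/<\/?st-[a-z-]+[^>]*>/g, "");
}
```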
Summary
Fixed an issue where breadcrumbs in Collections were not displaying correctly for users without full access to the entire path. This update ensures that users can always see the breadcrumb trail for Collections they have access to, even if they lack permission to view parent Collections. This improves navigation and provides better context for users working with shared Collections.
- Ensures breadcrumbs are visible for accessible Collections.
- Hides parent Collections from the breadcrumb trail if the user lacks access.
Who We Built This For
This update is for users who collaborate on shared Collections, especially those who may not have full access to all parent Collections in a hierarchy.
The Problem We Solved With This Feature
Previously, breadcrumbs would not display correctly for users without full access to the entire Collection path. This made it difficult for users to navigate shared Collections and understand their location within the hierarchy. This fix ensures that users always have a clear sense of context, regardless of their access level.
Specific Items Shipped
- Breadcrumbs Display Fix: The breadcrumb component now correctly displays the path for accessible Collections, even if the user lacks permission to view parent Collections.
- Access Control Implementation: The system now verifies user access for each Collection in the breadcrumb trail and hides any parent Collections the user cannot access.
Explained Simply
Imagine you’re walking in a building with multiple floors, but you only have access to certain floors. This feature is like a map that only shows you the floors you can access, so you don’t get confused about where you are.
Technical Details
The fix involves modifying the breadcrumb component to check user permissions for each Collection in the path. The `getBreadcrumbs` function in `apps/webconsole/src/domains/collections/collections.service.ts` is updated to filter out Collections the user does not have access to.
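The filtering step described above can be sketched like this. The `Collection` shape and the `hasAccess` callback are illustrative assumptions, not the actual service code.

```typescript
// Sketch of the access-aware breadcrumb filter described above.
// The Collection shape and hasAccess callback are assumptions.
type Collection = { id: string; label: string };

function getVisibleBreadcrumbs(
  path: Collection[],
  hasAccess: (id: string) => boolean,
): Collection[] {
  // Keep only the Collections the user can actually open;
  // inaccessible parents are hidden rather than shown as dead links.
  return path.filter((node) => hasAccess(node.id));
}
```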
Summary
Improved the handling of system instructions for various AI models and refined how Storytell references assets. This update ensures better compatibility with Anthropic and Gemini models, centralizes instruction management, and clarifies the role of references for large language models (LLMs).
- Improved system instruction handling for Anthropic and Gemini models.
- Instructions are now set with the `system` role.
- Clarified that assets are “collections or available references” for LLMs.
- Renamed the `Gotchas` tag to the `Unclear` tag.
Who We Built This For
This update benefits users working with Anthropic and Gemini models, as well as those who need clear and consistent instructions for LLMs.
The Problem We Solved With This Feature
Previously, system instructions were not handled consistently across different AI models, leading to issues with Anthropic and Gemini. The terminology used to describe assets was also unclear for LLMs. This update addresses these inconsistencies and clarifies the role of references.
Specific Items Shipped
- System Role Implementation: Instructions are now consistently set with the `system` role.
- Anthropic and Gemini Compatibility: System instructions are now injected directly into the API for Anthropic and Gemini models, separate from the conversation history.
- Asset Reference Clarification: Assets are now specifically referenced as “collections or available references” to improve LLM understanding.
- UI Rename: Renamed the `Gotchas` tag to the `Unclear` tag.
Explained Simply
Think of this as teaching Storytell to speak different AI languages more fluently. We’ve made sure Storytell knows how to give instructions in the right way for each AI, and we’ve clarified what “stuff” Storytell can use to answer questions.
Technical Details
The changes include:
- Modifying the `PrepareUserQuestion` function in `pkg/go/domains/prompts/main_transformer.go` to favor system instructions.
- Updating the API calls for Anthropic and Gemini models in `services/controlplane/domains/curator/llm_anthropic.go` and `services/controlplane/domains/curator/llm_gemini.go` to inject system instructions directly.
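As background for this change: Anthropic- and Gemini-style APIs take system instructions as a dedicated field rather than as a `system`-role message in the conversation history, so those messages have to be lifted out before the call. The following TypeScript sketch shows that general pattern; the names (`ChatMessage`, `toProviderRequest`) are illustrative, not Storytell's internals.

```typescript
// Illustrative only: separate system instructions from the conversation
// history, as Anthropic/Gemini-style APIs expect.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function toProviderRequest(messages: ChatMessage[]) {
  // collect all system-role content into one dedicated instruction string
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  // everything else stays in the message history
  const history = messages.filter((m) => m.role !== "system");
  return { system, messages: history };
}
```

The benefit is that the same transcript can be replayed against providers with either convention without duplicating instruction text.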
Summary
Updated the UI to improve clarity and consistency in terminology. This change replaces “Unclear” with “Limitations” and changes “Consider” to “To Consider”.
Who We Built This For
This update is for all users of Storytell, as it improves the clarity and understandability of the user interface.
The Problem We Solved With This Feature
The previous terminology was potentially confusing or ambiguous for users. This update provides clearer and more descriptive labels for these sections.
Specific Items Shipped
- “Unclear” to “Limitations”: Replaced the “Unclear” tag in the UI with “Limitations”.
- “Consider” to “To Consider”: Changed the “Consider” tag to “To Consider” in the UI.
Explained Simply
We’ve renamed a couple of labels in Storytell to make them easier to understand. Instead of saying something is “Unclear”, we now say it has “Limitations”. And instead of saying “Consider”, we now say “To Consider”.
Technical Details
This change primarily involves updating the UI components in `apps/webconsole/src/domains/threads/components/units/MarkdownRenderer.tsx` to reflect the new terminology.
Summary
We’ve resolved an issue that prevented certain XLS files from being processed correctly in Storytell. This fix ensures accurate data extraction from XLS files, improving the reliability of structured processing. The failure was specifically caused by malformed `__` character runs in the header row.
- Fixes processing errors with XLS files.
- Improves data extraction accuracy.
Who We Built This For
This fix is for all users who upload XLS files to Storytell, particularly those working with structured data from various sources.
The Problem We Solved
Some XLS files failed to process due to a bug in the header row matching process. This prevented users from extracting data and utilizing it within Storytell.
Specific Items Shipped
- Duplicate Character Removal: The “cleansing” process for header row field names now removes repeating `_` characters.
- Regression Test: A specific case file and guidance were added to the unit tests to prevent future regressions of this issue.
Explained Simply
Think of it like fixing a broken zipper on a jacket. Sometimes, a little snag prevents the zipper from working. This fix removes that snag so Storytell can correctly read and process your data.
Technical Details
The issue stemmed from duplicate `__` characters in the header row, which were created during the “cleansing” process. The fix involves modifying the `cleanseFieldForMatching` function in `pkg/go/domains/assets/tabular/csv.go` to remove repeating `_` characters from the cleansed output. A new test case was added in `pkg/go/domains/assets/tabular/csv_test.go` to cover this specific scenario.
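The actual fix lives in Go; as a hedged illustration of the same idea, here is a TypeScript sketch of a cleansing step that first substitutes unsafe characters (which is how `__` runs can appear) and then collapses them. The exact character rules are assumptions for illustration.

```typescript
// Illustrative cleansing of a header field name. Substituting character by
// character can produce runs like "__", so a collapse pass follows.
function cleanseFieldForMatching(field: string): string {
  return field
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]/g, "_") // each unsafe char becomes "_" (can create "__")
    .replace(/_+/g, "_")        // the fix: collapse repeated underscores
    .replace(/^_|_$/g, "");     // tidy leading/trailing underscores
}
```

Without the collapse pass, a header like “NBA %Stats” cleanses to `nba__stats` and fails to match its expected field name.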
Summary
We have fixed an issue where the Mentions pop-up was appearing offscreen by replacing Popper.js with Floating-UI. This change ensures that the pop-up now positions itself correctly within the viewport, providing a better visual experience. Key updates include improved pop-up positioning logic, enhanced responsiveness, and a streamlined integration with Floating-UI. The update addresses feedback from users encountering display issues during interactions.
Who we built this for
This bug fix is aimed at users who routinely interact with Mentions on Storytell. In particular, it benefits those who have experienced interruptions or visual glitches in the Mentions feature during their daily communications.
The problem we solved with this feature
Previously, the Mentions pop-up would sometimes render partially or completely offscreen, leading to a confusing user experience. Resolving this issue improves accessibility and ensures that all interactive elements are fully visible, which is essential for efficient user engagement and productivity.
Specific items shipped:
- Floating-UI Integration: The Mentions pop-up now leverages Floating-UI instead of Popper.js, resulting in more reliable offscreen handling. This change modernizes the underlying display logic and ensures better adaptability across devices and display sizes.
- Codebase Refactor: Adjustments have been made in components such as `VirtualElementPopup` and `popper.ts`. These modifications ensure that the new library’s API is well integrated, maintaining code clarity and performance.
- Layout and Positioning Enhancements: Updates in configuration and middleware (including offset and flip functionalities) improve the dynamic positioning of the pop-up. This guarantees that even when screen dimensions change, the pop-up remains in a visible and interactive location.
Explained Simply
Imagine you are writing a note on a sticky note, but sometimes the note slips off your desk and becomes hard to read. We fixed the problem causing the note to fall by changing how it sticks, so now it always stays where you can see it. This ensures that whenever you check your Mentions on Storytell, the pop-up is always right where it should be.
Technical details
In this update, we replaced the older Popper.js library with Floating-UI for calculating and applying dynamic pop-up positions. The `VirtualElementPopup` component now imports `Placement` from Floating-UI, and corresponding hooks in `popper.ts` have been modified to use Floating-UI’s `computePosition` along with its middleware (`shift`, `flip`, `offset`). The changes include updates to event handling through `autoUpdate`, ensuring real-time repositioning based on viewport changes. These refinements ensure seamless interoperability with Storytell’s UI components while improving the accuracy of pop-up positioning.
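To make the “flip” behavior concrete, here is a toy TypeScript sketch of what that middleware conceptually does. This is not the Floating-UI API itself, just an illustration of the placement decision: prefer placing the pop-up below its anchor, and flip it above when it would overflow the viewport.

```typescript
// Toy model of "flip" placement logic (not the Floating-UI API).
type Rect = { x: number; y: number; width: number; height: number };

function placePopup(
  anchor: Rect,
  popup: { width: number; height: number },
  viewportHeight: number,
) {
  const below = anchor.y + anchor.height;              // preferred position
  const overflowsBelow = below + popup.height > viewportHeight;
  // flip above the anchor when the preferred position would go offscreen
  const y = overflowsBelow ? anchor.y - popup.height : below;
  return { x: anchor.x, y, placement: overflowsBelow ? "top" : "bottom" };
}
```

Floating-UI additionally handles shifting along the other axis, scroll containers, and resize events (via `autoUpdate`), which is why the library replaced the hand-rolled Popper.js wiring.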
Summary
This update fixes an issue where searching for Collections with spaces did not work correctly. The bug was resolved by modifying the filtering logic to remove spaces from both the Collection labels and the search query. Key updates include ensuring that multi-word Collections are now accurately matched and improving overall search responsiveness.
Who We Built This For
This fix is intended for Collection managers, administrators, and power users who rely on efficient and accurate search functionality to navigate large datasets. It addresses the need for reliable real-time filtering when using Collections with multi-word titles.
The Problem We Solved With This Feature
Before this update, users encountered difficulties when searching for Collections that contained spaces. This issue impaired navigation efficiency and reduced the usability of the search function within Storytell. Correcting this bug ensures that search results are comprehensive and reliable, enhancing user productivity.
Specific Items Shipped
- Whitespace Handling Update: The filtering logic in the `searchAssetsAndCollections` function has been enhanced to strip all spaces from Collection labels and the search query. This ensures that searches for Collections with spaces now yield the correct results.
- Enhanced Search Results: The algorithm now performs case-insensitive comparisons after removing spaces, allowing for accurate matching of multi-word Collections.
- Performance Optimization: By refining the filtering process, the search functionality now operates more smoothly, providing real-time updates without additional latency.
Explained Simply
Imagine trying to find a book in a library where some titles have spaces between words. Before, if you searched for a book with a two-word title, the search would miss it because it couldn’t recognize the space. With this update in Storytell, the search now ignores spaces, making it easier to find the book (or Collection) you need, much like correctly matching pieces of a puzzle.
Technical Details
The update was implemented in the `searchAssetsAndCollections` function. The new logic replaces all spaces in both the Collection labels and the input query with an empty string before performing a case-insensitive comparison. This ensures that Collections with multi-word names are correctly filtered and returned. The code also maintains real-time performance by efficiently processing API calls and DOM updates, so users experience minimal delay when searching.
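The matching rule described above can be sketched as a small TypeScript predicate. The helper name `matchesQuery` is illustrative; only `searchAssetsAndCollections` is named in the changelog.

```typescript
// Whitespace-insensitive, case-insensitive substring match, as described
// in the fix: spaces are stripped from both sides before comparing.
function matchesQuery(label: string, query: string): boolean {
  const normalize = (s: string) => s.replace(/\s+/g, "").toLowerCase();
  return normalize(label).includes(normalize(query));
}
```

With this rule, a Collection labeled “Q4 Sales Report” matches both “q4sales” and “q4 sales”, which is exactly the multi-word case that previously failed.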
Summary
We’ve revamped how Collections are shared and accessed within Storytell, focusing on security and user experience. This update introduces automatic invitation claiming during registration, prevents sharing of personal Collections, and improves the invitation flow with better error handling and user feedback.
Who We Built This For:
- Enterprise Teams managing shared Collections
- New users joining existing workspaces
- Collection administrators managing access permissions
The Problem We Solved:
We addressed several friction points in the Collection sharing workflow. Users faced confusion when dealing with invitations, especially during registration. Personal Collections could be accidentally shared, and error messages weren’t clear enough when invitation tokens were invalid or already claimed.
Specific Items Shipped:
- Security & Access Control Enhancement: Added enforcement to prevent sharing of personal Collections and implemented automatic claiming of pending invitations during user registration.
- Collection Tree Improvements: Collection Trees now include descriptions for each Collection and use improved indentation with dots for better readability.
- Invitation Flow Updates: Added automatic handling of invites during signup and improved redirect behavior for already-accepted invites.
- Error Handling Improvements: Enhanced messaging for invalid invite tokens, showing specific messages for claimed or revoked tokens.
Explained Simply:
Imagine you have a digital filing cabinet where you keep all your important documents. Sometimes you want to share certain folders with your classmates for group projects. We’ve made this sharing process smoother - now when you invite someone new, they automatically get access when they sign up, like getting a key to the cabinet along with their school ID. We’ve also made sure you can’t accidentally share your private folders, and if something goes wrong, you get clear messages explaining what happened.
Technical Details:
- Modified Collection sharing validation to enforce strict checking of Collection types
- Implemented automatic token claiming during user registration when email matches invitation
- Enhanced Collection tree structure to include Collection descriptions
- Updated the `PrettyTree` implementation to use `.` for indentation instead of whitespace
- Added validation for the `child` kind when creating sub-Collections
- Implemented automatic claiming of pending invitations during user registration
Explain it to me like I’m five
Imagine you have a big toy box that holds all your best toys and a few smaller boxes inside that you can play with. We made a rule so you can only empty the small boxes, not the big one that holds everything special. This way, you keep your favorite toys safe while still being able to clean up some spaces.
Summary
This feature strengthens our API by enforcing protection rules. It ensures only child Collections (for example, favorites, personal, or organization children) can be deleted while preventing the accidental or unauthorized deletion of root Collections.
Technical Details
- The API endpoint has been renamed from `DeleteCollections` to `DeleteCollection` to accurately represent its functionality.
- The new function, `CanCollectionBeDeleted`, inspects the Collection type using switch-case logic to differentiate between deletable child Collections and non-deletable root Collections.
- In the `DeleteCollection` method, the check is executed early. If validation fails (i.e., the Collection is not a child), the delete operation is halted with the error message “this kind of Collection cannot be deleted.”
- This update ensures that deletion requests are processed safely, preserving the integrity of our data and reducing the risk of unintentional deletion of essential Collections.
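The production check is written in Go; the following TypeScript sketch mirrors the switch-style rule. The kind names here are illustrative stand-ins for whatever enum the service actually uses, chosen to match the “favorites, personal, or organization children” description above.

```typescript
// Illustrative mirror of CanCollectionBeDeleted: only child Collections
// are deletable; root Collections are always protected.
type CollectionKind = "root" | "favorites" | "personal-child" | "org-child";

function canCollectionBeDeleted(kind: CollectionKind): boolean {
  switch (kind) {
    case "favorites":
    case "personal-child":
    case "org-child":
      return true;  // child Collections may be deleted
    default:
      return false; // root (and any unknown kind) is protected
  }
}
```

Defaulting to `false` for unrecognized kinds is the safer posture: a newly introduced Collection type stays protected until someone explicitly allows its deletion.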
Specific items shipped:
- Renamed `DeleteCollections` to `DeleteCollection`: We renamed the endpoint to reflect its actual responsibility accurately. This change clarifies that only a single Collection deletion operation is exposed via the API and helps avoid any confusion regarding bulk deletions.
- Introduced the `CanCollectionBeDeleted` function: We added a helper function to validate whether a Collection type is eligible for deletion. It allows deletion only if the Collection is a child (such as favorites, personal, or organization children). Root Collections, or any non-deletable type, are automatically protected.
- Enhanced `DeleteCollection` with validation checks: The `DeleteCollection` method now verifies the Collection’s type before proceeding. If the Collection isn’t permissible for deletion (for example, if it’s a root Collection), the API returns a clear error message: “this kind of Collection cannot be deleted”. This ensures our API behaves predictably and safeguards critical data.
Who we built this for
- Administrators and API users: Those managing Collections and needing to protect critical system data.
- Developers and internal teams: Users who require robust data integrity during delete operations.
The problem we solved with this feature
We built this feature to solve the issue of accidental or undesired deletion of root Collections. Root Collections are critical for organizing assets, threads, and overall system data. Allowing their deletion could lead to severe data loss or system instability. Restricting deletion to only child Collections protects data integrity, preserves system organization, and prevents unintended deletion errors.
Explain it to me like I’m five
Imagine you have a magic button that’s supposed to show you a helpful box with ideas. Before, when you pressed it, nothing happened. Now, when you click the button, the magic box pops up just like it’s supposed to, so you can get help improving your words.
Summary
This bug fix ensures that when users click the Improve it button on the homepage, the modal now opens as expected. Previously, clicking the button did nothing, leaving the prompt improvement interface inaccessible.
Technical details
The fix involved updates to the `ChatBottomBar.tsx` and `ChatPromptImproving.tsx` components. Specifically, the UI state management was corrected by properly setting the “modal open” flag and binding the modal contents. When a user clicks the “Improve it” button, the handler now correctly sets the state to open the `ImprovePromptModal` component, ensuring that the modal appears with the proper content.
Specific items shipped
- Modal Activation Bug Fix: The code was updated to correctly handle the event when “Improve it” is clicked. This involves setting the modal open state and assigning the correct modal content in both `ChatBottomBar.tsx` and `ChatPromptImproving.tsx`. The fix ensures that clicking the button reliably brings up the `ImprovePromptModal`.
The problem we solved with these fixes
Improve it Modal on Home Page:
Before this fix, users experienced a broken interaction where clicking the “Improve it” button did nothing. This prevented them from accessing a crucial prompt improvement tool. The fix restores the intended functionality, ensuring users can refine their inputs effectively.
Long Prompts Cut Off on Collection Screen:
Previously, long prompts were truncated, meaning that important portions of the text were not visible. This was especially problematic for those needing full context for editing and review. The fix guarantees complete visibility of the prompt content, thereby improving usability.
Who we built these fixes for
Improve it Modal on Home Page:
- Content creators and users who interact with the homepage prompt functionality, relying on the ability to enhance their prompts quickly.
Explain it to me like I’m five
Think of it like a magic sticky note that gets bigger as you write more words. Before, part of your writing was hidden, but now the sticky note grows so that you can see every single word without anything being cut off.
Summary
This bug fix addresses an issue on the Collection screen where long prompts were being truncated from the top. With this fix, the prompt bar now adjusts dynamically to ensure that the entire prompt is visible, improving readability and user interaction.
Technical details
The solution required modifying the layout and CSS properties of the prompt bar within the CollectionScreen component. The prompt bar was adjusted to be statically positioned so that as the content grows, it pushes down the other elements rather than overlapping or cutting off the text. Additionally, horizontal centering was implemented to enhance visual alignment and ensure consistency across the interface.
Specific items shipped
- Prompt Bar Positioning Adjustment:
The update involved changes in the CSS and layout configuration of the CollectionScreen component. The prompt bar is now statically positioned; as more text is added, the container adjusts its size, ensuring no content is hidden. The horizontal centering improves the overall presentation of the prompt, making it easier to read.
The problem we solved with these fixes
Long Prompts Cut Off on Collection Screen:
Previously, long prompts were truncated, meaning that important portions of the text were not visible. This was especially problematic for those needing full context for editing and review. The fix guarantees complete visibility of the prompt content, thereby improving usability.
Explain it to me like I’m five
Imagine you have a big box with lots of smaller boxes inside. Before, when you looked for the very first or main box that started it all, you sometimes ended up at the wrong box. Now, we’ve fixed it so that you always get to the right big box, and everyone sharing the boxes can see the same one.
Summary
This feature addresses an issue where collaborators in shared Collections were not being displayed correctly because the system was not fetching the proper root Collection for shared Collections. The update refines the traversal of the Collection hierarchy in the CollectionsStore, ensuring that the correct root is determined and that all collaborators have proper access.
Who we built this for
- End users who collaborate using shared Collections
- Teams and project contributors who rely on accurate shared data access
This feature was specifically built to ensure that when collaborators work together on shared projects, they see the correct Collections view, thus improving their workflow and collaborative experience.
The problem we solved with this feature
We identified that collaborators were not seeing the appropriate shared Collection because the wrong root was being fetched during the hierarchy traversal. This malfunction disrupted the collaboration process by omitting some critical shared items. By addressing this, we have ensured that the Collection hierarchy is traversed correctly and that the proper root Collection is used, thereby maintaining data integrity and improving overall user experience.
Specific items shipped:
- Correct Root Collection Identification: The system now iteratively traverses the Collection hierarchy using a stack-based approach. It pops the current Collection, checks whether it is a true Collection root, and, if not, continues to fetch its parent. This fix ensures that the proper root Collection is identified for any given shared Collection.
- Optimized Hierarchy Traversal in CollectionsStore: The code in `collections.store.ts` has been updated not only to fetch the right Collection but also to handle cases where a parent Collection might be missing or improperly cached. This safeguards against edge cases that previously led to incomplete collaborator views.

Technical details
The implementation leverages a loop that uses a stack to perform a depth-first search through the Collection’s parent IDs. Each Collection is examined with the `isCollectionRoot` method. If the currently checked Collection is the root, the feature returns its ID. The code also includes logic to exit gracefully if a parent is not found, returning the last valid Collection ID and ensuring stability even in complex nested Collection scenarios.
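The traversal above can be sketched in a few lines of TypeScript. This is a simplified model, not the actual `collections.store.ts` code: the `Collection` shape and the `Map` lookup stand in for the store's cache.

```typescript
// Simplified root lookup: walk up parent IDs until a root is found,
// bailing out gracefully (returning the last valid ID) if a parent is missing.
type Collection = { id: string; parentId: string | null; isRoot: boolean };

function findRootId(start: string, byId: Map<string, Collection>): string | null {
  let currentId: string | null = start;
  let lastValid: string | null = null;
  while (currentId !== null) {
    const node = byId.get(currentId);
    if (!node) return lastValid;        // parent missing from cache: exit gracefully
    lastValid = node.id;
    if (node.isRoot) return node.id;    // found the true root
    currentId = node.parentId;          // keep climbing
  }
  return lastValid;
}
```

The graceful-exit branch is what protects collaborators from a broken view when a parent Collection has not been fetched or cached yet.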
Explain it to me like I’m five:
We made it so that if you give the computer a neat table, it reads it the right way instead of getting confused.
Summary
This feature ensures that when a user uploads a structured CSV, it is correctly identified and processed as structured rather than semi‐structured. The core change fixes the boolean logic in the CSV processor so that CSVs like Alex’s NBA stats are embedded accurately.
Who we built this for
- Data analysts and content managers: Users who routinely upload structured CSV files.
- Operational users: Those who rely on accurate data import and processing to drive downstream analytics.
The problem we solved with this feature
Previously, our CSV processing method misinterpreted structured CSV files as semi‐structured. This led to incorrect embedding and data inaccuracies. By addressing the boolean operation error in the CSV processor, structured CSVs are now correctly handled, resulting in better data accuracy and reliability.
Specific items shipped:
- Boolean Logic Correction in CSVProcessor
We fixed an error where a boolean condition erroneously overrode the CSV processing logic. When no explicit override is provided, the system now uses the classification’s `IsStructured` flag directly, ensuring proper handling of structured CSV files.
Technical details:
The modification was made in the `useSemiStructured()` method within `pkg/go/domains/assets/tabular/csv.go`. The fix ensures that when no override is provided, the method reflects `classification.IsStructured` correctly, avoiding a mistaken inversion of the flag. This change prevents structured CSVs from being processed as semi-structured, which could lead to errors in generating embeddings.
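As a hedged illustration (the real method is Go, and its exact signature is not shown here), the corrected decision can be modeled like this: an explicit override wins, and otherwise the classification flag alone decides whether to treat the file as semi-structured.

```typescript
// Illustrative model of the corrected useSemiStructured decision.
// Returns true when the CSV should be processed as semi-structured.
function useSemiStructured(isStructured: boolean, override?: boolean): boolean {
  if (override !== undefined) return override; // explicit override takes precedence
  return !isStructured;                        // structured CSVs stay structured
}
```

The bug was the equivalent of losing the `!` relationship between the flag and the result, so structured CSVs (like the NBA-stats example) were routed down the semi-structured path.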
Explain it to me like I’m five
Think of it like making sure your friend gets invited to your party even if they were already on the guest list.
Summary
This update improves our invitation system by addressing a bug that prevented existing users from being added to a Collection due to a primary key conflict. With this fix, the system now handles such conflicts gracefully and ensures existing users are correctly added to Collections. Future improvements will automate the acceptance of pending invitations when users sign up with their invited email.
Who we built this for
- Collection administrators and team leads: Those who manage and share Collections with colleagues.
- Existing users: To ensure a seamless experience when they are added to Collections, eliminating errors caused by primary key conflicts.
The problem we solved with this feature
A bug in the invitation system caused an error when attempting to add an existing user to a Collection, due to a primary key conflict. This prevented smooth collaboration and sharing. By resolving the conflict, we now ensure that existing users can be invited without issue, streamlining the process and enhancing the user experience.
Specific items shipped:
- Conflict Resolution in User Invitations
We refined the invitation workflow to address primary key conflicts that occurred when an existing user was added to a Collection. The system now recognizes the conflict and processes the invitation, ensuring that users are correctly added.
Technical details:
The solution involved updating functions in `curator_core_share_collection.go`, especially the `shareCollectionToExistingUsers` routine. The new logic uses an SQL upsert pattern (`ON CONFLICT DO NOTHING`) to safely update or maintain the user’s Collection permissions without error. Integration and UI tests confirmed that this fix handles the conflict scenario, ensuring that existing users are granted the correct permissions without duplicate-key errors.
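The real fix uses SQL’s `ON CONFLICT DO NOTHING`; this in-memory TypeScript sketch shows the equivalent idempotent behavior the routine now has. The `Map`-based permission store and the function name are illustrative only.

```typescript
// In-memory analogue of INSERT ... ON CONFLICT DO NOTHING:
// re-adding a user who already has access is a no-op, not an error.
function grantAccess(
  permissions: Map<string, string>, // userId -> role (stand-in for the DB table)
  userId: string,
  role: string,
): void {
  if (!permissions.has(userId)) {
    permissions.set(userId, role); // insert only when no row exists
  }
  // conflict case: keep the existing grant, raise no error
}
```

Before the fix, the second insert attempt was the moment the primary-key conflict surfaced as a user-visible error; with upsert semantics it simply leaves the existing grant in place.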
Summary
This update fixes a bug where mentions of Collections or assets were ignored when determining the answer strategy. Now, if a user queries with a focus on world knowledge but also includes a reference to a Collection or asset, the system automatically blends world knowledge with private Collection data to provide a relevant answer.
Who we built this for
- Users who ask questions that span multiple Collections (e.g., feedback received from drodio)
- Users needing more context-aware answers that combine public and private knowledge
The problem we solved with this feature
Previously, when users mentioned Collections or assets in their queries—even while focusing on world knowledge—the answer was generated using only global information. This meant that important contextual details were lost, leading to less accurate or relevant responses. Fixing this ensures that both types of knowledge are appropriately blended in the output.
Specific items shipped
- Answer Strategy Adjustment: Our system now checks whether a query includes mentions of Collections or assets. If it does, even when the user’s scope is focused on world knowledge, the answer strategy is adjusted to include private knowledge from those Collections.

Explain it to me like I’m five:
Imagine you ask a teacher a question and mention a favorite book; the teacher now uses both what they know about the world and what’s in your favorite book to answer you.

Technical details:
Changes in the thread processor code update the logic to check for non-empty mentions fields (Collections and assets) even when the world knowledge flag is active. This blends answers from our private Collections with global answers.
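The strategy check can be sketched as a small TypeScript function. The field and strategy names here are hypothetical stand-ins for the thread processor's actual types.

```typescript
// Illustrative strategy selection: non-empty mentions pull private
// Collection knowledge in, even under a world-knowledge scope.
type Query = {
  worldKnowledge: boolean;
  mentionedCollections: string[];
  mentionedAssets: string[];
};

function answerStrategy(q: Query): "world" | "private" | "blended" {
  const hasMentions =
    q.mentionedCollections.length > 0 || q.mentionedAssets.length > 0;
  if (q.worldKnowledge && hasMentions) return "blended"; // the fixed path
  return q.worldKnowledge ? "world" : "private";
}
```

The bug was effectively the absence of the `hasMentions` branch: with the world-knowledge flag set, mentioned Collections never influenced the answer.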
Summary
This fix ensures that threads stored in Collections a user no longer has access to are not shown in their recent threads list. It prevents accidental exposure of chat threads from revoked Collections.
Who we built this for
- Users handling shared Collections, where access might be revoked over time
- Administrators and users who want accurate chat visibility based on current permissions
The problem we solved with this feature
Before this fix, even after a user’s access to a Collection was revoked, threads from that Collection would still appear in their recent chat list. This bug could cause confusion or expose outdated information. Now, only threads from Collections to which the user still has access are visible.
Specific items shipped
- Access Check Improvements: Introduced stricter authorization checks in the recent threads query to ensure threads from revoked Collections are not returned.

Explain it to me like I’m five:
It’s like having a locked toy box; if you lose the key, you no longer see what’s inside the box when you look around.

Technical details:
The authorization logic now verifies that, for each thread returned, the user’s access based on Collection membership is current. The query was updated to join the threads with the current accessible Collections list from `get_user_accessible_collections`.
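Conceptually, the join behaves like the following TypeScript filter (a simplified model; the real check happens in the database query, not application code).

```typescript
// Simplified model of the access check: only threads whose Collection is
// still in the user's accessible set are returned.
type Thread = { id: string; collectionId: string };

function visibleRecentThreads(
  threads: Thread[],
  accessibleCollections: Set<string>, // result of get_user_accessible_collections
): Thread[] {
  return threads.filter((t) => accessibleCollections.has(t.collectionId));
}
```

Doing this as a join inside the query (rather than filtering after the fact) means revoked threads never leave the database layer at all.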
Summary
This feature addresses an issue where users signing in with Magic Link or non-Google shared accounts see a generic “Guest” avatar. Currently, if a user logs into our staging environment with either a Storytell account or a personal email, their profile is rendered as “Guest”, even when different accounts and assets are in use. The solution sets the stage for a more intuitive experience by distinguishing between unedited guest profiles and those that have confirmed email data. The feature also lays the groundwork for allowing users to edit their profile name during onboarding or via an Edit Profile button.
Technical details
This implementation modifies the avatar rendering algorithm. When a user signs in via Magic Link, which bypasses Google’s sharing of profile data, the system now checks for a confirmed email address. If an email is present but no explicit name has been set, the fallback “Guest” label is used as a temporary placeholder. A quick fix adjusts textual cues based on email verification; however, the long-term solution is to permit users to edit their profile names, thereby replacing the default “Guest” with personalized identifiers. Diagnostic endpoints have been integrated (see diagnostic JSON link) to assist in tracking and resolving any discrepancies between different account sessions.
Explain it to me like I’m five
Imagine you log in to an app and see a profile picture with the letter “G”, which simply means “Guest” because the system hasn’t learned your name yet. This is because, when signing in through a special link (Magic Link), the usual details from your Google account aren’t brought in. In simple terms, the app is saying “I don’t know who you are” until you tell it your name. In the future, we plan to let you set your name, so every time you log in, you see your personal details instead of just “Guest.”
Specific items shipped
- Avatar Fallback Logic: The system now checks if a signed-in account has a confirmed email but no set name. In such cases, the default “Guest” label is displayed to ensure consistency.
- Magic Link Handling: For users signing in via Magic Link, which circumvents Google’s profile data sharing, the feature ensures that data is handled predictably, using diagnostics to verify that the display is correct.
- Diagnostic Integration: A linked diagnostic file is generated for each session, designed for engineering review. This JSON file helps track any discrepancies between account data and the avatar’s display, facilitating quicker debugging.
- Future Edit Profile Flow Preparation: The current fix acknowledges the need to allow users to edit their profiles. Though this update uses label adjustments as a quick fix, it points toward an eventual dedicated workflow for profile customization.
The problem we solved with this feature
Previously, our system showed every signed-in user as “Guest” when they used Magic Link. This caused confusion because different accounts, with distinct profiles (and even different email addresses), were not immediately distinguishable. Users and testers experienced ambiguity when identifying which account was active, which not only eroded user confidence but also complicated troubleshooting. By addressing this, we ensure that users are informed about their account status and the need to personalize their profiles—setting the stage for a better experience.
Who we built this for
This feature was primarily built for our staging server users, which include internal testers and early adopters who rely on Magic Link authentication. It’s especially relevant for:
- Testers ensuring account integrity and debugging potential mix-ups.
- Early users who need a clear indication of their account status while we prepare for enhanced profile customization.
By solving this issue, we are improving clarity for both technical engineers and non-technical stakeholders who rely on accurate visual cues from their profile information.
Summary
This bug fix addresses the issue where users were unable to see the upload progress for XLS files in thread or chat environments. Previously, when users uploaded XLS files, there was no indication of progress, leading to confusion and uncertainty about whether the upload was successful. This fix introduces a progress indicator, enhancing the user experience by providing real-time feedback during the upload process.
Technical Details
To implement this fix, we integrated a task and watcher system specifically for XLS file uploads. This involves:
- Task Initialization: A task is created when an XLS file upload is initiated. This task is responsible for managing the upload process and tracking its progress.
- Watcher Integration: A watcher is set up to monitor the task’s status. It periodically checks the progress of the upload and updates the user interface accordingly.
- Validation Steps: We conducted thorough validation on different environments:
- Development Environment: Initial testing to ensure the task and watcher are functioning as expected.
- Staging Environment: Further testing to simulate real-world conditions and ensure stability before deployment.
Explain it to Me Like I’m Five
Imagine you’re building a LEGO tower, and you want to know how much you’ve built so far. Before, you couldn’t tell how tall your tower was until you finished. Now, with this fix, you have a ruler next to your tower that shows you how tall it is as you build. This way, you always know how much more you need to build to finish your tower.
Specific Items Shipped
- Task for XLS Uploads: We created a new task that starts whenever an XLS file is uploaded. This task is like a manager that keeps track of everything happening during the upload.
- Watcher for Progress Monitoring: A watcher was added to keep an eye on the upload task. It checks how far along the upload is and updates the screen to show this progress.
The Problem We Solved with This Fix
The main problem was the lack of feedback during XLS file uploads in threads or chats. Users were left guessing whether their files were uploading correctly, leading to frustration and potential errors. By adding a progress indicator, we provide clear feedback, improving user confidence and satisfaction.
Who We Built This For
This fix was primarily built for users who frequently upload XLS files in chat or thread environments. These users rely on timely and accurate feedback to ensure their files are uploaded successfully, which is crucial for maintaining smooth communication and workflow.
Summary
We addressed a critical issue where MP4 files were not functioning correctly in the production environment of Storytell. This malfunction was caused by a recent refactor that moved processing to a job-based system without properly outputting sanitized HTML. The fix ensures that MP4 files are now processed seamlessly, restoring their functionality across all environments.
Technical Details
The root cause of the MP4 failure was identified during the transition to a job-based processing system. In the refactor, the process responsible for handling MP4 files did not include the necessary step to output sanitized HTML, leading to failures in production. To resolve this, we:
- Reintroduced Sanitized HTML Output: Ensured that the job responsible for processing MP4 files correctly outputs sanitized HTML, preventing any security vulnerabilities and ensuring proper functionality.
- Updated Job Configuration: Modified the job settings in the Storytell AI Platform repository to handle MP4 files appropriately.
- Enhanced Validation Processes: Implemented thorough validation checks across development, staging, and production environments to ensure that MP4 processing works flawlessly and to catch any potential issues early in the deployment pipeline.
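The missing step was the sanitized-HTML output of the processing job. A minimal sketch of what such a step looks like is below; the function names and markup are assumptions for illustration, not the actual job code.

```typescript
// Hypothetical sketch of the output step the refactor had dropped:
// escape untrusted values before embedding them in the player markup.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Builds the sanitized markup a processed MP4 is rendered with.
function renderVideoHtml(src: string, title: string): string {
  return `<video src="${escapeHtml(src)}" title="${escapeHtml(title)}" controls></video>`;
}
```

Without the escaping step, the job's output was either rejected or rendered incorrectly downstream, which is why MP4s appeared broken only after the refactor.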
Explain It to Me Like I’m Five
Imagine you’re trying to show a video to your friends, but every time you try, nothing appears. We found out that when we changed how we prepare the videos, we forgot to include a special step that makes them show up correctly. Now, we’ve added that step back, so your videos play without any problems, just like they should.
Specific Items Shipped
- Sanitized HTML Output Implementation: Added the necessary code to ensure that sanitized HTML is correctly generated during MP4 processing jobs.
- Job Refactor Fix: Updated the job structure to handle MP4 files without causing failures in production.
- Comprehensive Validation: Conducted extensive testing in development, staging, and production environments to confirm the effectiveness of the fix and ensure no residual issues remain.
The Problem We Solved with This Feature
We built this feature to address the critical issue of MP4 files failing to work in the production environment of Storytell. This problem prevented users from uploading and viewing MP4 content, disrupting their experience and the platform’s functionality. By resolving this, we ensure that media-rich content can be shared and enjoyed seamlessly, maintaining user satisfaction and platform reliability.
Who We Built This For
This fix was specifically designed for Storytell users who rely on uploading and sharing MP4 videos as part of their storytelling experience. Whether it’s content creators sharing educational videos or users uploading media for personal projects, ensuring MP4 functionality is essential for their use cases. By resolving this issue, we support a smooth and uninterrupted experience for all users engaging with video content on our platform.
Summary
The latest fix disables the YouTube scraping functionality temporarily while we undergo debugging to ensure the overall stability and performance of our system. This measure is in place to prevent any potential issues arising from incorrect data retrieval or processing during the debugging phase. By disabling this feature, we can focus on identifying and resolving critical bugs without the added complexity of YouTube data scraping.
Who we built this for
This fix is specifically aimed at developers and QA engineers within our team who are involved in the debugging process. By temporarily disabling YouTube scraping, we provide these users with a clearer environment, eliminating potential discrepancies caused by YouTube data during testing. This allows for a more straightforward debugging process, enhancing our ability to resolve issues effectively.
Summary
We’ve removed the “fallback” behavior from the CSV processing workflow. Previously, if embeddings couldn’t be generated for a CSV, the job would continue using a “best effort” approach. With this fix, if embeddings generation fails, the entire job will fail, clearly signaling to the user that something went wrong.
Technical Details
In the previous implementation, the CSV row processing included a fallback mechanism that attempted to proceed even when embeddings couldn’t be generated. This approach was intended to let us monitor real-world performance, and it required sending additional notifications whenever failures occurred. However, due to the inherent unpredictability of using a Large Language Model (LLM) for template generation, we encountered a consistent failure rate of approximately 12% in production.
By removing the fallback behavior, the system now enforces a strict policy where the job fails if embeddings aren’t successfully generated. This change not only ensures immediate feedback to users about processing issues but also allows the embedding process to be retried a defined number of times (N attempts). This retry logic enhances reliability without compromising on clear communication of failures.
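The retry-then-fail policy can be sketched as follows. This is a simplified, synchronous illustration with hypothetical names; the real job runner and embedding call are not shown.

```typescript
// Sketch of the fail-fast retry policy: try up to maxAttempts times,
// then surface the error instead of continuing "best effort".
function generateWithRetries<T>(attempt: () => T, maxAttempts: number): T {
  let lastError: unknown;
  for (let i = 1; i <= maxAttempts; i++) {
    try {
      return attempt(); // success: stop retrying
    } catch (err) {
      lastError = err; // retry; no silent fallback any more
    }
  }
  // Exhausted retries: fail the whole job so the user sees the error.
  throw lastError;
}
```

The key design choice is that a transient failure still gets N chances to succeed, but a persistent one fails the job loudly rather than producing incomplete output.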
Explain it to me like I’m five
Imagine you’re building a LEGO model, and sometimes some pieces are missing. Before, if a piece was missing, you’d just keep building and hope for the best. Now, if a piece is missing, the whole project stops so you know there’s a problem. This way, you can fix the issue right away instead of ending up with an incomplete model.
Specific Items Shipped
- Removal of Fallback Mechanism: Eliminated the “best effort” approach in CSV processing to ensure that jobs fail when embeddings cannot be generated.
- Job Failure on Embedding Generation Failure: Implemented a system where the entire job fails if embeddings aren’t successfully created, providing immediate feedback to users.
- Retry Logic for Embedding Process: Added functionality to attempt the embedding process multiple times (N retries) before ultimately failing, increasing the chances of successful embeddings without hiding failures.
The Problem We Solved with This Fix
Previously, the CSV processing system would continue running even when embeddings generation failed, which led to incomplete or inaccurate data processing. This “best effort” approach masked underlying issues and did not effectively inform users about failures, resulting in a 12% failure rate in production. By enforcing a job failure when embeddings can’t be generated, we ensure that users are immediately aware of problems, allowing for quicker troubleshooting and maintaining the integrity of the data processing workflow.
Who We Built This For
We built this fix for data engineers and analysts who depend on reliable embeddings generation for processing CSV files. Use cases include preparing data for machine learning models, where accurate embeddings are crucial for model performance. By ensuring that embedding failures are promptly reported, we help these users maintain high data quality and streamline their workflow by addressing issues as they arise.
Summary
The “Fix CSV Sampling” update addresses an issue where the sample CSV file was incorrectly generating a single row that combined both the header and the actual data rows. This fix ensures that the CSV file is properly formatted with distinct headers and multiple data rows, allowing for accurate and efficient data handling.
Technical details
The core issue was identified in the CSV sampling module, where the function responsible for generating the sample was inadvertently appending data rows directly to the header row, resulting in a malformed single-row CSV file. To resolve this, the function has been refactored to separate the header generation from the data row accumulation. Specifically, the header is now initialized independently, and each subsequent data row is appended as a new distinct row in the CSV structure. Additionally, error handling has been enhanced to ensure that any discrepancies in data formatting are caught and addressed before file generation. This ensures that the resulting CSV adheres to standard formatting conventions, facilitating seamless integration with data processing tools and workflows.
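The corrected shape of the sample generator can be sketched like this. It is a minimal illustration (no quoting or escaping of embedded commas), with hypothetical names rather than the actual module's API.

```typescript
// Sketch of the fixed sampling logic: the header is initialized as its
// own row, and each record is appended as a distinct row, instead of
// everything collapsing onto one line.
function buildSampleCsv(headers: string[], rows: string[][]): string {
  const lines: string[] = [headers.join(",")]; // header row first
  for (const row of rows) {
    lines.push(row.join(",")); // each record on its own line
  }
  return lines.join("\n");
}
```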
Explain it to me like I’m five
Imagine you’re making a list of your favorite toys. First, you write the titles like “Toy Name” and “Color” at the top. Then, you add each toy’s details on different lines below. Before this fix, sometimes all the toys and the titles got mixed up on one line, making the list hard to read. Now, each toy has its own line under the correct titles, making the list neat and easy to understand.
Specific items shipped:
- Separated Header and Data Rows: The CSV generator now clearly distinguishes between the header row and the data rows, ensuring that each section is properly formatted and organized.
- Enhanced Error Handling: Improved mechanisms are in place to detect and handle any formatting issues during CSV creation, reducing the likelihood of errors in the generated files.
- Optimized Data Processing: Adjustments to the data processing logic allow for more efficient handling of large datasets, resulting in quicker and more reliable CSV file generation.
The problem we solved with this fix
Previously, the CSV sampling feature was producing files where the header and data were merged into a single row. This made the CSV files difficult to use with standard data tools, as they expected distinct headers and multiple data rows. By correcting this structure, we ensure that users can seamlessly import and manipulate their data without encountering formatting issues, thereby improving overall data reliability and usability.
Who we built this for
We developed this fix for data analysts and developers who rely on accurate CSV files for data manipulation and reporting. By ensuring properly formatted CSV samples, these users can efficiently import data into their preferred tools without encountering structural issues, thereby streamlining their workflow and enhancing productivity.
Summary
This fix introduces a more lenient approach to matching header rows when processing CSV files. The previous implementation required a strict mapping of every column, which did not accommodate cases where columns might be omitted by the LLM (Large Language Model) due to lack of value. The goal is to adapt our processing to better align with LLM behavior, specifically allowing it to choose relevant headers while ignoring unnecessary ones.
Technical details
The previous methodology enforced a rule whereby all columns in the incoming CSV file had to be accounted for during processing. This rigidity was problematic, especially when working with responses from LLMs like Claude, which often discard columns deemed irrelevant.
With this fix, the processing logic has been modified to accept LLM responses without requiring strict adherence to our original column mapping rules. Here’s how it works:
- We now evaluate the header rows with greater leniency and allow the LLM to determine which fields to include for processing.
- The system captures headers that contain actual data and ignores those like “Rk” (row number), which Claude frequently drops, improving both performance and usability.
- This shift not only aligns our software more closely with LLM behavior but also streamlines data handling, resulting in fewer errors during the CSV upload process.
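The lenient matching step can be sketched as follows. The function name and shape are assumptions for illustration; the real mapping logic is more involved.

```typescript
// Sketch of lenient header matching: keep the subset of expected
// columns the LLM actually returned, instead of erroring when any
// column is missing.
function matchHeaders(expected: string[], returned: string[]): string[] {
  const present = new Set(returned);
  // Columns the LLM dropped (e.g. a row-number column like "Rk")
  // simply fall out of the result rather than failing the upload.
  return expected.filter((col) => present.has(col));
}
```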
Explain it to me like I’m five
Imagine you have a box of crayons, but sometimes some of your crayons don’t work or are just plain silly colors and you don’t want to use them. This fix helps our program listen to a friend (the computer) who says, “Hey, these crayons are the best ones. Let’s just use these!” Even if you have a set of rules about which crayons you must use, it’s better to listen to your friend and pick the pretty ones instead of sticking to old rules that don’t make sense. This way, we end up with a nicer picture!
Specific items shipped:
- Lenient Column Matching: The system now allows missing columns, which means if Claude decides certain columns aren’t needed, the process continues smoothly without errors.
- Support for Irrelevant Headers: Headers like “Rk” will be ignored if they yield no meaningful data. This optimizes the CSV processing by focusing only on important information.
- Enhanced Validation Process: New validations ensure that users’ CSVs work seamlessly under the new rules.
The problem we solved with this fix
The initial CSV processing system was too strict, leading to failures or unnecessary complexity in handling files that LLMs might process differently. This rigidity was a barrier for effective data manipulation and integration, causing delays and frustration. By adopting a more flexible approach, we are now able to better serve our users and allow the system to function more intuitively in line with LLM capabilities.
Who we built this for
We primarily built this feature for users who handle CSV data extensively, such as:
- Data Analysts: They require efficient data imports without errors to analyze and draw insights.
- Developers: Those interfacing with LLMs are often faced with CSV data that doesn’t conform strictly to preconceived rules, and this fix helps solidify that interaction.
- End Users: Users who upload various data records and need reliable processing without inflexibility.
Summary
We’ve implemented a fix that sets a maximum number of attempts for updating the status of an asset. This enhancement ensures that the system doesn’t get stuck trying indefinitely to update an asset’s status, which was causing errors and job failures previously. By limiting the number of attempts, we improve the reliability and stability of the asset processing workflow.
Technical details
The fix involves modifying the `fileproc` module to include a `max_attempts` parameter when updating an asset’s status. Previously, the system would continuously attempt to update the asset status without a defined limit, leading to repeated failures and resource exhaustion.
In the codebase, we’ve introduced a retry mechanism that caps the number of update attempts to a predefined maximum. If the asset status update fails, the system will retry up to the maximum number of attempts before logging the failure and moving on. The implementation includes comprehensive error handling to ensure that failed attempts are properly logged and do not interfere with other operations. Local testing with affected assets confirmed that jobs now fail gracefully after reaching the maximum attempt threshold, preventing endless retry loops and improving overall system performance.
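The capped loop with logging and graceful failure can be sketched like this. The names are hypothetical; the real `fileproc` module and its API are not shown.

```typescript
// Illustrative sketch of the capped status-update loop: retry up to
// maxAttempts, log each failure, and give up gracefully instead of
// looping forever.
function updateStatusWithCap(
  tryUpdate: () => boolean,
  maxAttempts: number,
  log: (msg: string) => void,
): boolean {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (tryUpdate()) return true; // status updated; stop retrying
    log(`status update attempt ${attempt}/${maxAttempts} failed`);
  }
  // Cap reached: record the failure and move on so other jobs proceed.
  log("giving up after max attempts");
  return false;
}
```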
Explain it to me like I’m five
Imagine you’re trying to put together a puzzle, and sometimes a piece just won’t fit. Instead of trying forever, you decide to try a few times and then ask for help. We did something similar with our system: when it tries to update the status of a file and it doesn’t work, it will only try a certain number of times before stopping. This makes everything run smoother and prevents getting stuck.
Specific items shipped
- Max Attempts Parameter Added: Introduced a `max_attempts` setting to limit the number of times the system tries to update an asset’s status. This prevents endless retries and conserves system resources.
- Retry Mechanism Implemented: Developed a retry mechanism that initiates a new attempt to update the status only if the previous attempt fails, up to the defined maximum attempts.
- Enhanced Error Logging: Improved error logging to capture detailed information about each failed attempt, aiding in debugging and monitoring system performance.
- Graceful Failure Handling: Configured the system to handle failed update attempts gracefully by logging the error and preventing the job from being stuck in an infinite loop.
The problem we solved with this fix
Before this fix, the system would continuously attempt to update the status of an asset without any limit, leading to repeated errors and failed jobs. This behavior not only caused disruptions in the asset processing workflow but also consumed unnecessary system resources, affecting overall performance and reliability. By setting a maximum number of attempts, we prevent these endless retry loops, ensuring that failures are handled efficiently and do not impact other operations.
Who we built this for
This fix is designed for our engineering team and system administrators who manage asset processing workflows. Specifically, it addresses the needs of teams dealing with large volumes of asset status updates, ensuring that the system remains stable and efficient even when encountering issues. By implementing a capped retry mechanism, we provide a more reliable and maintainable solution for handling asset status updates, reducing downtime and improving user satisfaction.
Summary
We’ve addressed and resolved the highlight malfunction on the knowledge preference buttons located on the homepage of Storytell. Previously, when users first visited the site, the highlight did not display correctly unless the prompt box was clicked on and off. Additionally, there was an unintended automatic redirection to storytell.ai. With this update, the highlight now centers properly upon the initial visit, and the redirection issue has been eliminated, ensuring a smoother and more intuitive user experience.
Technical Details
To tackle the highlight misalignment, we revisited the CSS styling associated with the knowledge preference buttons. The primary issue was that the highlight indicator wasn’t correctly centered due to conflicting CSS rules that applied margin and padding inconsistently across different states of the button (e.g., active, hover). We refactored the CSS by:
- Ensuring consistent use of flexbox properties to center the highlight both vertically and horizontally.
- Removing redundant margin and padding declarations that caused the offset.
- Implementing responsive design principles to maintain alignment across various device viewports.
Additionally, the automatic redirection to storytell.ai was traced back to a faulty event listener that triggered navigation upon the initial page load. We corrected this by:
- Reviewing and updating the JavaScript event handlers to ensure that redirection only occurs upon explicit user interaction.
- Adding conditional checks to prevent unintended navigation during the initial rendering phase.
These changes were rigorously tested across multiple browsers and devices to confirm the stability and reliability of the fix.
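The guard added to the navigation handler can be sketched as follows. The event names and handler shape are illustrative assumptions, not the actual homepage code.

```typescript
// Sketch of the redirect guard: navigate only on an explicit user
// click, never for events fired during the initial rendering phase.
function handlePreferenceEvent(
  trigger: "initial-load" | "user-click",
  navigate: (url: string) => void,
): void {
  if (trigger !== "user-click") {
    return; // ignore events fired while the page is first rendering
  }
  navigate("https://storytell.ai");
}
```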
Explain it to Me Like I’m Five
Imagine you have some brightly colored buttons on a website that light up when you click them, making it easier to see which one you chose. Before, these lights weren’t always showing up right the first time you opened the website—they were a bit shaky and sometimes took a little extra clicking to work. We fixed the lights so they shine perfectly every time you visit, without any extra fuss. Plus, we stopped the website from jumping to another page all by itself, so everything stays where it should be when you’re clicking around.
Specific Items Shipped
- Centered Highlight Indicator: Adjusted the CSS to ensure the highlight around knowledge preference buttons is perfectly centered when the homepage loads, providing immediate visual feedback to users.
- Eliminated Unwanted Redirection: Fixed the JavaScript event listener that was causing the site to automatically redirect to storytell.ai upon the first visit, ensuring users remain on the intended page unless they choose to navigate elsewhere.
- Responsive Design Enhancements: Improved the flexibility of button layouts to maintain proper alignment and highlight functionality across various screen sizes and devices.
- Cross-Browser Compatibility Fixes: Ensured that the highlight issue is resolved consistently across different web browsers, including Chrome, Firefox, Safari, and Edge.
- Robust Testing Procedures: Conducted extensive testing scenarios to verify that the highlight and redirection fixes work seamlessly under multiple user interactions and conditions.
The problem we solved with this fix
Users visiting Storytell’s homepage were experiencing issues with the knowledge preference buttons not highlighting correctly upon their initial visit. This malfunction required users to click on and off the prompt box to see the highlight, leading to a confusing and less intuitive user experience. Additionally, there was an unintended automatic redirection to storytell.ai, disrupting the user’s navigation flow. These issues hindered the usability and accessibility of the homepage, potentially causing frustration and reducing user engagement.
Who We Built This for
This fix is for our primary users who rely on the knowledge preference buttons to customize their homepage experience. Specifically, it caters to:
- New Visitors: Ensuring first-time visitors have a seamless and clear interaction with the preference buttons without encountering technical glitches.
- Returning Users: Providing consistent and reliable functionality each time they visit, enhancing overall satisfaction and ease of use.
- Productivity-Focused Users: Users who depend on quick and accurate customization of their homepage to efficiently access desired content without unnecessary navigation hurdles.
Fixed a bug where the whole page showed an error while chatting.