Bug Fixes
Recent improvements and issue resolutions in Storytell
Summary
This update fixes the bug where mentions of Collections or assets were ignored when determining the answer strategy. Now, if a user queries with a focus on world knowledge but also includes a reference to a Collection or asset, the system automatically blends world knowledge with private Collection data to provide a relevant answer.
Who we built this for
- Users who ask questions that span multiple Collections (e.g., feedback received from drodio)
- Users needing more context-aware answers that combine public and private knowledge
The problem we solved with this feature
Previously, when users mentioned Collections or assets in their queries—even while focusing on world knowledge—the answer was generated using only global information. This meant that important contextual details were lost, leading to less accurate or relevant responses. Fixing this ensures that both types of knowledge are appropriately blended in the output.
Specific items shipped
- Answer Strategy Adjustment
Our system now checks whether a query mentions Collections or assets. If it does, even when the user’s scope is focused on world knowledge, the answer strategy is adjusted to include private knowledge from those Collections.
Explain it to me like I’m five:
Imagine you ask a teacher a question and mention a favorite book; the teacher now uses both what they know about the world and what’s in your favorite book to answer you.
Technical details:
Changes in the thread processor code update the logic to check for non-empty mentions fields (Collections and assets) even when the world knowledge flag is active. This was done to blend answers from our private Collections with global answers.
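The mention check described above can be sketched roughly as follows; the names here (`Query`, `choose_answer_strategy`) are illustrative stand-ins, not Storytell’s actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Query:
    scope: str                                    # e.g. "world_knowledge" or "private"
    mentioned_collections: list = field(default_factory=list)
    mentioned_assets: list = field(default_factory=list)

def choose_answer_strategy(query: Query) -> str:
    """Blend private Collection data in whenever mentions are present,
    even if the user's scope is focused on world knowledge."""
    has_mentions = bool(query.mentioned_collections or query.mentioned_assets)
    if query.scope == "world_knowledge" and has_mentions:
        return "blended"        # world knowledge + private Collections
    if query.scope == "world_knowledge":
        return "world_only"
    return "private_only"
```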
Summary
This fix ensures that threads stored in Collections a user no longer has access to are not shown in their recent threads list. It prevents accidental exposure of chat threads from revoked Collections.
Who we built this for
- Users handling shared Collections, where access might be revoked over time
- Administrators and users who want accurate chat visibility based on current permissions
The problem we solved with this feature
Before this fix, even after a user’s access to a Collection was revoked, threads from that Collection would still appear in their recent chat list. This bug could cause confusion or expose outdated information. Now, only threads from Collections to which the user still has access are visible.
Specific items shipped
- Access Check Improvements
Introduced stricter authorization checks in the recent threads query to ensure threads from revoked Collections are not returned.
Explain it to me like I’m five:
It’s like having a locked toy box; if you lose the key, you no longer see what’s inside the box when you look around.
Technical details:
The authorization logic now verifies that, for each thread returned, the user’s access based on Collection membership is current. The query was updated to join the threads with the current accessible Collections list from `get_user_accessible_collections`.
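In simplified Python, the filter works like this; `get_user_accessible_collections` is the helper named above, but its return shape and the thread dictionaries are assumptions made for illustration:

```python
def get_user_accessible_collections(user_id):
    # Stand-in for the real lookup of Collections the user can still access.
    return {"c1", "c3"}

def recent_threads(user_id, all_threads):
    """Return only threads whose Collection the user can currently access."""
    accessible = get_user_accessible_collections(user_id)
    return [t for t in all_threads if t["collection_id"] in accessible]
```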
Summary
This update adds the functionality to send emails in batches using the Resend API. Instead of sending one email at a time, the system now creates separate email requests for each recipient and processes them as a batch.
Who we built this for
- Teams ensuring high-volume transactional emails are delivered efficiently
- Users who rely on consistent communication platforms where reliability and speed are critical
The problem we solved with this feature
Previously, emails were sent individually, which was not optimal for scenarios involving many recipients. With batch sending, the system now ensures that each recipient receives their own tailored email quickly and reliably, improving overall efficiency.
Specific items shipped
- Batch Request Creation for Emails
Modified email client code to generate separate Resend request objects per recipient, preserving core email fields (From, Subject, Text, etc.).
Explain it to me like I’m five:
If you want to send party invitations to all your friends, instead of mailing one invitation that everyone shares, you now send each friend a personal invitation so they feel special.
Technical details:
The update involves cloning an email object and recreating the Resend request for each recipient using `newResendRequestBatch`. This change is implemented in the client logic and covered by new tests to ensure each request contains a single recipient in the To field and that the batch payload is correctly formatted.
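A minimal sketch of the per-recipient cloning, assuming a simplified email shape (the real `newResendRequestBatch` helper’s language and signature aren’t shown in this changelog; this Python rendering is illustrative only):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EmailRequest:
    from_addr: str
    subject: str
    text: str
    to: tuple = ()          # exactly one recipient per request in a batch

def new_resend_request_batch(base: EmailRequest, recipients):
    """Clone the base email once per recipient, preserving core fields
    (From, Subject, Text) while giving each request a single To address."""
    return [replace(base, to=(r,)) for r in recipients]
```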
Summary
This feature addresses an issue where users signing in with Magic Link or non-Google shared accounts see a generic “Guest” avatar. Currently, if a user logs into our staging environment with either a Storytell account or a personal email, their profile is rendered as “Guest”, even when different accounts and assets are in use. The solution sets the stage for a more intuitive experience by distinguishing between unedited guest profiles and those that have confirmed email data. The feature also lays the groundwork for allowing users to edit their profile name during onboarding or via an Edit Profile button.
Technical details
This implementation modifies the avatar rendering algorithm. When a user signs in via Magic Link, which bypasses Google’s sharing of profile data, the system now checks for a confirmed email address. If an email is present but no explicit name has been set, the fallback “Guest” label is used as a temporary placeholder. A quick fix adjusts textual cues based on email verification; however, the long-term solution is to permit users to edit their profile names, thereby replacing the default “Guest” with personalized identifiers. Diagnostic endpoints have been integrated (see diagnostic JSON link) to assist in tracking and resolving any discrepancies between different account sessions.
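The fallback decision reduces to something like this sketch; the profile field names are assumptions, not the actual rendering code:

```python
def avatar_label(profile: dict) -> str:
    """Prefer an explicit name; otherwise fall back to "Guest".
    Magic Link sign-ins bypass Google's profile sharing, so a confirmed
    email may be present with no name set, and "Guest" is used as a
    temporary placeholder until profile editing ships."""
    name = profile.get("name")
    if name:
        return name
    return "Guest"
```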
Explain it to me like I’m five
Imagine you log in to an app and see a profile picture with the letter “G”, which simply means “Guest” because the system hasn’t learned your name yet. This is because, when signing in through a special link (Magic Link), the usual details from your Google account aren’t brought in. In simple terms, the app is saying “I don’t know who you are” until you tell it your name. In the future, we plan to let you set your name, so every time you log in, you see your personal details instead of just “Guest.”
Specific items shipped
- Avatar Fallback Logic:
The system now checks if a signed-in account has a confirmed email but no set name. In such cases, the default “Guest” label is displayed to ensure consistency.
- Magic Link Handling:
For users signing in via Magic Link, which circumvents Google’s profile data sharing, the feature ensures that data is handled predictably, using diagnostics to verify that the display is correct.
- Diagnostic Integration:
A linked diagnostic file is generated for each session, designed for engineering review. This JSON file helps track any discrepancies between account data and the avatar’s display, facilitating quicker debugging.
- Future Edit Profile Flow Preparation:
The current fix acknowledges the need to allow users to edit their profiles. Though this update uses label adjustments as a quick fix, it points toward an eventual dedicated workflow for profile customization.
The problem we solved with this feature
Previously, our system showed every signed-in user as “Guest” when they used Magic Link. This caused confusion because different accounts, with distinct profiles (and even different email addresses), were not immediately distinguishable. Users and testers experienced ambiguity when identifying which account was active, which not only eroded user confidence but also complicated troubleshooting. By addressing this, we ensure that users are informed about their account status and the need to personalize their profiles—setting the stage for a better experience.
Who we built this for
This feature was primarily built for our staging server users, which include internal testers and early adopters who rely on Magic Link authentication. It’s especially relevant for:
- Testers ensuring account integrity and debugging potential mix-ups.
- Early users who need a clear indication of their account status while we prepare for enhanced profile customization.
By solving this issue, we are improving clarity for both technical engineers and non-technical stakeholders who rely on accurate visual cues from their profile information.
Summary
This bug fix addresses the issue where users were unable to see the upload progress for XLS files in thread or chat environments. Previously, when users uploaded XLS files, there was no indication of progress, leading to confusion and uncertainty about whether the upload was successful. This fix introduces a progress indicator, enhancing the user experience by providing real-time feedback during the upload process.
Technical Details
To implement this fix, we integrated a task and watcher system specifically for XLS file uploads. This involves:
- Task Initialization: A task is created when an XLS file upload is initiated. This task is responsible for managing the upload process and tracking its progress.
- Watcher Integration: A watcher is set up to monitor the task’s status. It periodically checks the progress of the upload and updates the user interface accordingly.
- Validation Steps: We conducted thorough validation on different environments:
- Development Environment: Initial testing to ensure the task and watcher are functioning as expected.
- Staging Environment: Further testing to simulate real-world conditions and ensure stability before deployment.
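Conceptually, the task/watcher pairing looks like this simplified sketch (the real implementation is asynchronous and drives the UI; these names are illustrative):

```python
class UploadTask:
    """Tracks an XLS upload's progress as bytes are sent."""
    def __init__(self, total_bytes: int):
        self.total = total_bytes
        self.sent = 0

    def advance(self, n: int) -> None:
        self.sent = min(self.sent + n, self.total)

    @property
    def progress(self) -> float:
        return self.sent / self.total

def watch(task: UploadTask) -> str:
    """The watcher polls the task and reports progress for the UI."""
    return f"{task.progress:.0%}"
```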
Explain it to Me Like I’m Five
Imagine you’re building a LEGO tower, and you want to know how much you’ve built so far. Before, you couldn’t tell how tall your tower was until you finished. Now, with this fix, you have a ruler next to your tower that shows you how tall it is as you build. This way, you always know how much more you need to build to finish your tower.
Specific Items Shipped
- Task for XLS Uploads: We created a new task that starts whenever an XLS file is uploaded. This task is like a manager that keeps track of everything happening during the upload.
- Watcher for Progress Monitoring: A watcher was added to keep an eye on the upload task. It checks how far along the upload is and updates the screen to show this progress.
The Problem We Solved with This Fix
The main problem was the lack of feedback during XLS file uploads in threads or chats. Users were left guessing whether their files were uploading correctly, leading to frustration and potential errors. By adding a progress indicator, we provide clear feedback, improving user confidence and satisfaction.
Who We Built This For
This fix was primarily built for users who frequently upload XLS files in chat or thread environments. These users rely on timely and accurate feedback to ensure their files are uploaded successfully, which is crucial for maintaining smooth communication and workflow.
Summary
We addressed a critical issue where MP4 files were not functioning correctly in the production environment of Storytell. This malfunction was caused by a recent refactor that moved processing to a job-based system without properly outputting sanitized HTML. The fix ensures that MP4 files are now processed seamlessly, restoring their functionality across all environments.
Technical Details
The root cause of the MP4 failure was identified during the transition to a job-based processing system. In the refactor, the process responsible for handling MP4 files did not include the necessary step to output sanitized HTML, leading to failures in production. To resolve this, we:
- Reintroduced Sanitized HTML Output: Ensured that the job responsible for processing MP4 files correctly outputs sanitized HTML, preventing any security vulnerabilities and ensuring proper functionality.
- Updated Job Configuration: Modified the job settings in the Storytell AI Platform repository to handle MP4 files appropriately.
- Enhanced Validation Processes: Implemented thorough validation checks across development, staging, and production environments to ensure that MP4 processing works flawlessly and to catch any potential issues early in the deployment pipeline.
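As an illustration of the restored step, escape-based sanitization below stands in for whatever sanitizer the real job uses; the function name and markup are assumptions:

```python
import html

def render_mp4_embed(src_url: str) -> str:
    """Output sanitized HTML for an MP4 asset, escaping the URL so the
    job's output is safe to inject into the page."""
    safe = html.escape(src_url, quote=True)
    return f'<video controls src="{safe}"></video>'
```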
Explain It to Me Like I’m Five
Imagine you’re trying to show a video to your friends, but every time you try, nothing appears. We found out that when we changed how we prepare the videos, we forgot to include a special step that makes them show up correctly. Now, we’ve added that step back, so your videos play without any problems, just like they should.
Specific Items Shipped
- Sanitized HTML Output Implementation: Added the necessary code to ensure that sanitized HTML is correctly generated during MP4 processing jobs.
- Job Refactor Fix: Updated the job structure to handle MP4 files without causing failures in production.
- Comprehensive Validation: Conducted extensive testing in development, staging, and production environments to confirm the effectiveness of the fix and ensure no residual issues remain.
The Problem We Solved with This Feature
We built this feature to address the critical issue of MP4 files failing to work in the production environment of Storytell. This problem prevented users from uploading and viewing MP4 content, disrupting their experience and the platform’s functionality. By resolving this, we ensure that media-rich content can be shared and enjoyed seamlessly, maintaining user satisfaction and platform reliability.
Who We Built This For
This fix was specifically designed for Storytell users who rely on uploading and sharing MP4 videos as part of their storytelling experience. Whether it’s content creators sharing educational videos or users uploading media for personal projects, ensuring MP4 functionality is essential for their use cases. By resolving this issue, we support a smooth and uninterrupted experience for all users engaging with video content on our platform.
Summary
We have addressed the “Too Many Tokens” error that was occurring in our production environment. This fix ensures that our tokenization strategy aligns with the models in use, preventing token overflow issues. Additionally, several optimizations and improvements have been implemented to enhance overall system performance.
Technical details
The error was caused by a mismatch in tokenization strategies between our platform and the deployed models. We were using the Cl100kBase tokenization strategy, while the 4o models utilize O200kBase. This discrepancy led to an excessive number of tokens being processed, resulting in errors in production. By updating our tokenization strategy to O200kBase, we ensure compatibility with the 4o models, thereby preventing the “Too Many Tokens” error. Furthermore, during the troubleshooting process, additional optimizations were made to improve system efficiency and reduce the likelihood of similar issues in the future.
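The alignment amounts to mapping each model family to the right encoding. The encoding names below are tiktoken’s real identifiers (cl100k_base for GPT-4/3.5, o200k_base for the 4o models), while the lookup helper itself is an illustrative sketch:

```python
# Map model families to their tokenizer encodings; mismatches here were
# the root cause of the "Too Many Tokens" error.
ENCODING_FOR_MODEL = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
    "gpt-4o": "o200k_base",
    "gpt-4o-mini": "o200k_base",
}

def encoding_for(model: str) -> str:
    try:
        return ENCODING_FOR_MODEL[model]
    except KeyError:
        raise ValueError(f"no tokenization strategy registered for {model}")
```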
Explain it to me like I’m five
Think of our system like a backpack that can only hold a certain number of books. We realized that we were trying to put too many books in the backpack, causing it to break. We fixed this by adjusting how many books we put inside, making sure the backpack stays strong and works well. We also made some other small improvements to keep everything running smoothly!
Specific items shipped
- Aligned Tokenization Strategy: Changed the tokenization method from Cl100kBase to O200kBase to match the 4o models, preventing token overflow errors.
- Performance Optimizations: Implemented various optimizations to enhance system performance and ensure smoother operations.
- Bug Fix Deployment: Successfully deployed the fix to the production environment, resolving the “Too Many Tokens” error.
The problem we solved with this fix
We built this fix to resolve the “Too Many Tokens” error that was disrupting our production environment. This error was caused by using an incompatible tokenization strategy, leading to an excessive number of tokens being processed and causing system failures. Addressing this issue is crucial to maintain the reliability and stability of our platform, ensuring a seamless experience for our users without interruptions or errors.
Who we built this for
This fix is designed for our engineering team and our end-users who depend on the reliability of our platform. By eliminating the token overflow issue, developers can continue to build and deploy applications without encountering the “Too Many Tokens” error, while end-users benefit from a more stable and efficient system experience.
Summary
The latest fix temporarily disables the YouTube scraping functionality while we debug, to ensure the overall stability and performance of our system. This measure prevents potential issues arising from incorrect data retrieval or processing during the debugging phase. By disabling this feature, we can focus on identifying and resolving critical bugs without the added complexity of YouTube data scraping.
Who we built this for
This fix is specifically aimed at developers and QA engineers within our team who are involved in the debugging process. By temporarily disabling YouTube scraping, we provide these users with a clearer environment, eliminating potential discrepancies caused by YouTube data during testing. This allows for a more straightforward debugging process, enhancing our ability to resolve issues effectively.
Summary
We’ve removed the “fallback” behavior from the CSV processing workflow. Previously, if embeddings couldn’t be generated for a CSV, the job would continue using a “best effort” approach. With this fix, if embeddings generation fails, the entire job will fail, clearly signaling to the user that something went wrong.
Technical Details
In the previous implementation, the CSV row processing included a fallback mechanism that attempted to proceed even when embeddings couldn’t be generated. This approach was intended to monitor real-world performance and necessitated sending additional announcements when failures occurred. However, due to the inherent unpredictability of using a Large Language Model (LLM) for template generation, we encountered a consistent failure rate of approximately 12% in production.
By removing the fallback behavior, the system now enforces a strict policy where the job fails if embeddings aren’t successfully generated. This change not only ensures immediate feedback to users about processing issues but also allows the embedding process to be retried a defined number of times (N attempts). This retry logic enhances reliability without compromising on clear communication of failures.
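A minimal sketch of the fail-fast retry policy, with illustrative names (the actual job code and exception types aren’t shown in this changelog):

```python
class EmbeddingError(RuntimeError):
    """Raised when embeddings cannot be generated, failing the whole job."""

def embed_with_retries(row, embed_fn, max_attempts=3):
    """Try embed_fn up to max_attempts times, then fail the job instead
    of silently falling back to a best-effort result."""
    last_err = None
    for _ in range(max_attempts):
        try:
            return embed_fn(row)
        except Exception as err:
            last_err = err
    raise EmbeddingError(
        f"embeddings failed after {max_attempts} attempts"
    ) from last_err
```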
Explain it to me like I’m five
Imagine you’re building a LEGO model, and sometimes some pieces are missing. Before, if a piece was missing, you’d just keep building and hope for the best. Now, if a piece is missing, the whole project stops so you know there’s a problem. This way, you can fix the issue right away instead of ending up with an incomplete model.
Specific Items Shipped
- Removal of Fallback Mechanism: Eliminated the “best effort” approach in CSV processing to ensure that jobs fail when embeddings cannot be generated.
- Job Failure on Embedding Generation Failure: Implemented a system where the entire job fails if embeddings aren’t successfully created, providing immediate feedback to users.
- Retry Logic for Embedding Process: Added functionality to attempt the embedding process multiple times (N retries) before ultimately failing, increasing the chances of successful embeddings without hiding failures.
The Problem We Solved with This Fix
Previously, the CSV processing system would continue running even when embeddings generation failed, which led to incomplete or inaccurate data processing. This “best effort” approach masked underlying issues and did not effectively inform users about failures, resulting in a 12% failure rate in production. By enforcing a job failure when embeddings can’t be generated, we ensure that users are immediately aware of problems, allowing for quicker troubleshooting and maintaining the integrity of the data processing workflow.
Who We Built This For
We built this fix for data engineers and analysts who depend on reliable embeddings generation for processing CSV files. Use cases include preparing data for machine learning models, where accurate embeddings are crucial for model performance. By ensuring that embedding failures are promptly reported, we help these users maintain high data quality and streamline their workflow by addressing issues as they arise.
Summary
The “Fix CSV Sampling” update addresses an issue where the sample CSV file was incorrectly generating a single row that combined both the header and the actual data rows. This fix ensures that the CSV file is properly formatted with distinct headers and multiple data rows, allowing for accurate and efficient data handling.
Technical details
The core issue was identified in the CSV sampling module, where the function responsible for generating the sample was inadvertently appending data rows directly to the header row, resulting in a malformed single-row CSV file. To resolve this, the function has been refactored to separate the header generation from the data row accumulation. Specifically, the header is now initialized independently, and each subsequent data row is appended as a new distinct row in the CSV structure. Additionally, error handling has been enhanced to ensure that any discrepancies in data formatting are caught and addressed before file generation. This ensures that the resulting CSV adheres to standard formatting conventions, facilitating seamless integration with data processing tools and workflows.
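Using Python’s standard csv module, the corrected behavior looks roughly like this sketch (the real sampling module’s API isn’t shown in this changelog):

```python
import csv
import io

def write_sample(header, rows):
    """Emit the header as its own row, then each data row on its own line,
    instead of fusing everything into a single malformed row."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(header)      # header initialized independently
    writer.writerows(rows)       # each data row appended as a distinct row
    return buf.getvalue()
```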
Explain it to me like I’m five
Imagine you’re making a list of your favorite toys. First, you write the titles like “Toy Name” and “Color” at the top. Then, you add each toy’s details on different lines below. Before this fix, sometimes all the toys and the titles got mixed up on one line, making the list hard to read. Now, each toy has its own line under the correct titles, making the list neat and easy to understand.
Specific items shipped
- Separated Header and Data Rows: The CSV generator now clearly distinguishes between the header row and the data rows, ensuring that each section is properly formatted and organized.
- Enhanced Error Handling: Improved mechanisms are in place to detect and handle any formatting issues during CSV creation, reducing the likelihood of errors in the generated files.
- Optimized Data Processing: Adjustments to the data processing logic allow for more efficient handling of large datasets, resulting in quicker and more reliable CSV file generation.
The problem we solved with this fix
Previously, the CSV sampling feature was producing files where the header and data were merged into a single row. This made the CSV files difficult to use with standard data tools, as they expected distinct headers and multiple data rows. By correcting this structure, we ensure that users can seamlessly import and manipulate their data without encountering formatting issues, thereby improving overall data reliability and usability.
Who we built this for
We developed this fix for data analysts and developers who rely on accurate CSV files for data manipulation and reporting. By ensuring properly formatted CSV samples, these users can efficiently import data into their preferred tools without encountering structural issues, thereby streamlining their workflow and enhancing productivity.
Summary
This fix introduces a more lenient approach to matching header rows when processing CSV files. The previous implementation required a strict mapping of every column, which did not accommodate cases where columns might be omitted by the LLM (Large Language Model) due to lack of value. The goal is to adapt our processing to better align with LLM behavior, specifically allowing it to choose relevant headers while ignoring unnecessary ones.
Technical details
The previous methodology enforced a rule whereby all columns in the incoming CSV file had to be accounted for during processing. This rigidity was problematic, especially when working with responses from LLMs like Claude, which often discard columns deemed irrelevant.
With this fix, the processing logic has been modified to accept LLM responses without requiring strict adherence to our original column mapping rules. Here’s how it works:
- We now evaluate the header rows with greater leniency and allow the LLM to determine which fields to include for processing.
- The system captures headers that contain actual data and ignores those like “Rk” (row number), which Claude frequently drops, improving both performance and usability.
- This shift not only aligns our software more closely with LLM behavior but also streamlines data handling, resulting in fewer errors during the CSV upload process.
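A rough sketch of the lenient matching, with illustrative names; only headers the LLM actually returned are mapped, and dropped columns (like “Rk”) no longer cause a hard failure:

```python
def map_headers(expected, returned):
    """Map only the headers the LLM actually returned, tracking rather
    than rejecting any expected columns it chose to drop."""
    returned_set = set(returned)
    mapping = {h: h for h in expected if h in returned_set}
    dropped = [h for h in expected if h not in returned_set]
    return mapping, dropped
```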
Explain it to me like I’m five
Imagine you have a box of crayons, but sometimes some of your crayons don’t work or are just plain silly colors and you don’t want to use them. This fix helps our program listen to a friend (the computer) who says, “Hey, these crayons are the best ones. Let’s just use these!” Even if you have a set of rules about which crayons you must use, it’s better to listen to your friend and pick the pretty ones instead of sticking to old rules that don’t make sense. This way, we end up with a nicer picture!
Specific items shipped
- Lenient Column Matching: The system now allows missing columns, which means if Claude decides certain columns aren’t needed, the process continues smoothly without errors.
- Support for Irrelevant Headers: Headers like “Rk” will be ignored if they yield no meaningful data. This optimizes the CSV processing by focusing only on important information.
- Enhanced Validation Process: New validations ensure that users’ CSVs work seamlessly under the new rules.
The problem we solved with this fix
The initial CSV processing system was too strict, leading to failures or unnecessary complexity in handling files that LLMs might process differently. This rigidity was a barrier for effective data manipulation and integration, causing delays and frustration. By adopting a more flexible approach, we are now able to better serve our users and allow the system to function more intuitively in line with LLM capabilities.
Who we built this for
We primarily built this feature for users who handle CSV data extensively, such as:
- Data Analysts: They require efficient data imports without errors to analyze and draw insights.
- Developers: Those interfacing with LLMs are often faced with CSV data that doesn’t conform strictly to preconceived rules, and this fix helps solidify that interaction.
- End Users: Users who upload various data records and need reliable processing without inflexibility.
Summary
We’ve implemented a fix that sets a maximum number of attempts for updating the status of an asset. This enhancement ensures that the system doesn’t get stuck trying indefinitely to update an asset’s status, which was causing errors and job failures previously. By limiting the number of attempts, we improve the reliability and stability of the asset processing workflow.
Technical details
The fix involves modifying the `fileproc` module to include a `max_attempts` parameter when updating an asset’s status. Previously, the system would continuously attempt to update the asset status without a defined limit, leading to repeated failures and resource exhaustion.
In the codebase, we’ve introduced a retry mechanism that caps the number of update attempts to a predefined maximum. If the asset status update fails, the system will retry up to the maximum number of attempts before logging the failure and moving on. The implementation includes comprehensive error handling to ensure that failed attempts are properly logged and do not interfere with other operations. Local testing with affected assets confirmed that jobs now fail gracefully after reaching the maximum attempt threshold, preventing endless retry loops and improving overall system performance.
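In the spirit of that change, a capped update loop might look like this sketch; `update_asset_status` and its wiring are stand-ins, not the actual `fileproc` code:

```python
import logging

log = logging.getLogger("fileproc")

def update_asset_status(asset_id, status, update_fn, max_attempts=5):
    """Attempt the status update at most max_attempts times, logging each
    failure, then give up gracefully instead of retrying forever."""
    for attempt in range(1, max_attempts + 1):
        try:
            update_fn(asset_id, status)
            return True
        except Exception as err:
            log.warning("attempt %d/%d failed for %s: %s",
                        attempt, max_attempts, asset_id, err)
    return False
```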
Explain it to me like I’m five
Imagine you’re trying to put together a puzzle, and sometimes a piece just won’t fit. Instead of trying forever, you decide to try a few times and then ask for help. We did something similar with our system: when it tries to update the status of a file and it doesn’t work, it will only try a certain number of times before stopping. This makes everything run smoother and prevents getting stuck.
Specific items shipped
- Max Attempts Parameter Added
Introduced a `max_attempts` setting to limit the number of times the system tries to update an asset’s status. This prevents endless retries and conserves system resources.
- Retry Mechanism Implemented
Developed a retry mechanism that initiates a new attempt to update the status only if the previous attempt fails, up to the defined maximum attempts.
- Enhanced Error Logging
Improved error logging to capture detailed information about each failed attempt, aiding in debugging and monitoring system performance.
- Graceful Failure Handling
Configured the system to handle failed update attempts gracefully by logging the error and preventing the job from being stuck in an infinite loop.
The problem we solved with this fix
Before this fix, the system would continuously attempt to update the status of an asset without any limit, leading to repeated errors and failed jobs. This behavior not only caused disruptions in the asset processing workflow but also consumed unnecessary system resources, affecting overall performance and reliability. By setting a maximum number of attempts, we prevent these endless retry loops, ensuring that failures are handled efficiently and do not impact other operations.
Who we built this for
This fix is designed for our engineering team and system administrators who manage asset processing workflows. Specifically, it addresses the needs of teams dealing with large volumes of asset status updates, ensuring that the system remains stable and efficient even when encountering issues. By implementing a capped retry mechanism, we provide a more reliable and maintainable solution for handling asset status updates, reducing downtime and improving user satisfaction.
Summary
We’ve addressed and resolved the highlight malfunction on the knowledge preference buttons located on the homepage of Storytell. Previously, when users first visited the site, the highlight did not display correctly unless the prompt box was clicked on and off. Additionally, there was an unintended automatic redirection to storytell.ai. With this update, the highlight now centers properly upon the initial visit, and the redirection issue has been eliminated, ensuring a smoother and more intuitive user experience.
Technical Details
To tackle the highlight misalignment, we revisited the CSS styling associated with the knowledge preference buttons. The primary issue was that the highlight indicator wasn’t correctly centered due to conflicting CSS rules that applied margin and padding inconsistently across different states of the button (e.g., active, hover). We refactored the CSS by:
- Ensuring consistent use of flexbox properties to center the highlight both vertically and horizontally.
- Removing redundant margin and padding declarations that caused the offset.
- Implementing responsive design principles to maintain alignment across various device viewports.
Additionally, the automatic redirection to storytell.ai was traced back to a faulty event listener that triggered navigation upon the initial page load. We corrected this by:
- Reviewing and updating the JavaScript event handlers to ensure that redirection only occurs upon explicit user interaction.
- Adding conditional checks to prevent unintended navigation during the initial rendering phase.
These changes were rigorously tested across multiple browsers and devices to confirm the stability and reliability of the fix.
Explain it to Me Like I’m Five
Imagine you have some brightly colored buttons on a website that light up when you click them, making it easier to see which one you chose. Before, these lights weren’t always showing up right the first time you opened the website—they were a bit shaky and sometimes took a little extra clicking to work. We fixed the lights so they shine perfectly every time you visit, without any extra fuss. Plus, we stopped the website from jumping to another page all by itself, so everything stays where it should be when you’re clicking around.
Specific Items Shipped
- Centered Highlight Indicator: Adjusted the CSS to ensure the highlight around knowledge preference buttons is perfectly centered when the homepage loads, providing immediate visual feedback to users.
- Eliminated Unwanted Redirection: Fixed the JavaScript event listener that was causing the site to automatically redirect to storytell.ai upon the first visit, ensuring users remain on the intended page unless they choose to navigate elsewhere.
- Responsive Design Enhancements: Improved the flexibility of button layouts to maintain proper alignment and highlight functionality across various screen sizes and devices.
- Cross-Browser Compatibility Fixes: Ensured that the highlight issue is resolved consistently across different web browsers, including Chrome, Firefox, Safari, and Edge.
- Robust Testing Procedures: Conducted extensive testing scenarios to verify that the highlight and redirection fixes work seamlessly under multiple user interactions and conditions.
The problem we solved with this fix
Users visiting Storytell’s homepage were experiencing issues with the knowledge preference buttons not highlighting correctly upon their initial visit. This malfunction required users to click on and off the prompt box to see the highlight, leading to a confusing and less intuitive user experience. Additionally, there was an unintended automatic redirection to storytell.ai, disrupting the user’s navigation flow. These issues hindered the usability and accessibility of the homepage, potentially causing frustration and reducing user engagement.
Who We Built This for
This fix is for our primary users who rely on the knowledge preference buttons to customize their homepage experience. Specifically, it caters to:
- New Visitors: Ensuring first-time visitors have a seamless and clear interaction with the preference buttons without encountering technical glitches.
- Returning Users: Providing consistent and reliable functionality each time they visit, enhancing overall satisfaction and ease of use.
- Productivity-Focused Users: Users who depend on quick and accurate customization of their homepage to efficiently access desired content without unnecessary navigation hurdles.
Fixed a bug where the whole page showed an error while chatting.