Updates & Releases
The latest from Storytell
Summary
Storytell now features a direct link to the “Why Storytell” page in the navigation bar, making it easier for users to quickly access information about Storytell’s value proposition and how it can benefit their organization. Potential users can now find this key information efficiently when evaluating Storytell’s unique benefits.
Who We Built This For
This update is primarily beneficial for potential enterprise clients and users who are evaluating Storytell. It helps them understand the platform’s capabilities and value proposition more easily.
The Problem We Solved With This Feature
Previously, users had to navigate through multiple pages or rely on external search to find information about “Why Storytell.” This created friction and potentially hindered the evaluation process. Adding a direct link improves discoverability and simplifies access to this essential information.
Specific Items Shipped
- “Why Storytell” Link in Navbar: A new link labeled “Why Storytell” has been added to the main navigation bar, both on desktop and mobile views. This link directs users to the dedicated “Why Storytell Enterprise AI” page, highlighting the benefits and use cases of Storytell.
Explained Simply
Imagine you’re shopping online for a new gadget, and you want to know why you should buy this one instead of the others. Usually, you’d have to hunt around the website for the “Why This Gadget?” section. With Storytell, we’ve put a “Why Storytell?” button right at the top, so anyone can easily see what makes Storytell special without having to search around.
Technical Details
The change involves modifying the `LandingPageNavbar.tsx` file within the `pkg/ts/core/src/marketing/components` directory. Specifically, a new `<li>` element containing an `<a>` tag was added to both the `NavbarMobile` and `LandingPageNavbar` components. This `<a>` tag points to `https://web.storytell.ai/why-storytell-enterprise-ai` and includes the text “Why Storytell” with appropriate styling. The commit was verified with GitHub’s verified signature, using GPG key ID B5690EEEBB952194.
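As a rough illustration of the change, a navbar link like this is often modeled as an entry in a link list that both the desktop and mobile components render. The shape below is an assumption for illustration, not the actual internals of `LandingPageNavbar.tsx`:

```typescript
// Hypothetical nav-link model; only the "Why Storytell" URL comes from the release notes.
interface NavLink {
  label: string;
  href: string;
  external?: boolean;
}

const navLinks: NavLink[] = [
  { label: "Pricing", href: "/pricing" }, // illustrative existing entry
  {
    label: "Why Storytell", // the new entry shipped in this update
    href: "https://web.storytell.ai/why-storytell-enterprise-ai",
    external: true,
  },
];

// Both NavbarMobile and LandingPageNavbar could render each entry as
// <li><a href={link.href}>{link.label}</a></li>.
const whyLink = navLinks.find((l) => l.label === "Why Storytell");
```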
Summary
Storytell now supports Chinese Simplified (zh_CN) and Chinese Traditional (zh_TW) languages. This update broadens Storytell’s accessibility, allowing a larger global audience to engage with the platform in their native language.
Who We Built This For
This update is specifically designed for Chinese-speaking users, enabling them to interact with Storytell in their preferred language. The feature benefits users who prefer to use Storytell in Chinese for improved comprehension and engagement. Colin from GMI may be one such user.
The Problem We Solved With This Feature
The addition of Chinese language support addresses the need for inclusivity and accessibility. Many users find it easier and more efficient to interact with digital platforms in their native language. By offering Chinese Simplified and Traditional, Storytell eliminates language barriers and improves the overall user experience for a significant portion of the global population.
Specific Items Shipped
- Chinese Simplified Language Support: Storytell now supports Chinese Simplified (zh_CN) as a language option. This includes translation of the user interface, prompts, and other relevant content.
- Chinese Traditional Language Support: In addition to Simplified Chinese, Storytell also offers support for Chinese Traditional (zh_TW). This ensures that users who prefer the Traditional script can effectively use the platform.
Technical Details
The update adds Chinese Simplified and Traditional to the supported languages in the `pkg/ts/core/src/lib/data/lanugagesByPopularity.ts` file. This involves adding the appropriate language codes (“zh_CN”, “zh_TW”) and names (“Chinese - Simplified”, “Chinese - Traditional”) to the language configuration.
Explained Simply
Imagine Storytell is like a store that only speaks English. Now, we’ve added translators who can speak Chinese, both Simplified and Traditional. This means that more people can now easily use Storytell, no matter which version of Chinese they prefer. It’s like making sure everyone feels welcome and can understand what’s going on.
Summary
The Link Sharing for Collections feature allows users to share Collections with others via a unique, generated link. Anyone with the link can join the Collection, even if they don’t have an existing account. Upon clicking the link, users are prompted to register or sign in. Once authenticated, they gain access to the shared Collection. This simplifies the process of inviting new users to collaborate on Collections.
Who we built this for
This feature is designed for:
- Existing Storytell users: To easily share Collections with external collaborators or new users without needing to manually invite each person.
- New Storytell users: To seamlessly join shared Collections through a simple registration process, encouraging platform adoption and collaboration.
The problem we solved with this feature
Previously, inviting new users to a Collection required manually adding them, which could be cumbersome and time-consuming, especially for large groups or external collaborators. This feature streamlines the onboarding process by enabling Collection owners to simply share a link, making it easier to grow their community and collaborate effectively. This reduces friction for new users and simplifies sharing Collections for existing users.
Specific items shipped:
- Secret Link Generation:
  - Summary: A unique, secure link is generated when sharing a Collection.
  - Detail: When a user shares a Collection and chooses the “secret link” option, the system generates a unique token (a `share_public_token`, which is a `char(68)`) and stores it in the `controlplane.dat_collection_link_shares` table. This token is then used to construct a shareable URL.
- Access Control and Permissions:
  - Summary: The feature ensures that only users with the link and valid permissions can access the Collection.
  - Detail: The `controlplane.create_collection_secret_share` function checks whether the acting user has the required permission on the Collection before creating the share record. Access is granted through the `AcceptCollectionSecretLinkToken` function in `curator_core_accept_collection_secret_link_token.go` after validating the token and creating the appropriate Collection grants.
- Database Migrations:
  - Summary: The database schema was updated to support the new secret link sharing feature.
  - Detail: The `services/controlplane/migrations/50_collection_secret_link_sharing.up.sql` migration creates the `controlplane.dat_collection_link_shares` table to store the secret link details. It also creates the `controlplane.create_collection_secret_share` function to create the new Collection share record.
- API Endpoints:
  - Summary: New API endpoints were created to accept and delete Collection secret links.
  - Detail: The `AcceptCollectionsSecretLink` endpoint (`endpoint_collection_secret_link_accept.go`) allows users to join a Collection using a secret link. The `DeleteCollectionsSecretLink` endpoint (`endpoint_collection_secret_link_delete.go`) allows Collection owners to revoke a secret link, invalidating it for future use.
- User Redirection and Registration:
  - Summary: New users are redirected to register or sign in before gaining access.
  - Detail: When a user clicks a secret link, the system checks whether they are authenticated. If not, they are redirected to the registration or sign-in page. After successful authentication, they are granted access to the Collection.
- Access Redaction:
  - Summary: When a secret link is used, email addresses are redacted.
  - Detail: The `GetCollectionAccess` function in `curator_core_get_collection_access.go` redacts the email address to show only the domain.
Explain it to me like I’m five
Imagine you have a clubhouse (a Collection). Instead of inviting each friend one by one, you can now create a special “secret link” that acts like a magic ticket. You give this link to anyone you want to join your clubhouse. When they click the link, they can sign up (if they’re new) or sign in (if they already have an account), and BAM! They’re automatically members of your clubhouse. It’s like a super-easy way to invite lots of friends without having to do it individually.
Technical details
The implementation involves several key components:
- Database Schema: The `controlplane.dat_collection_link_shares` table stores information about the shared link, including the token, Collection ID, sharing user, and permissions.
- API Endpoints:
  - `AcceptCollectionsSecretLink` (POST `/v1/collection-secret-token/:publicToken`): Handles acceptance of a secret link token. It verifies the token, retrieves the associated Collection share record, and grants the user access to the Collection.
  - `DeleteCollectionsSecretLink` (DELETE `/v1/collection-secret-token/:tokenID`): Revokes a secret link by deleting the corresponding record from the `controlplane.dat_collection_link_shares` table.
- Curator Core Functions:
  - `AcceptCollectionSecretLinkToken` (`curator_core_accept_collection_secret_link_token.go`): Validates the secret link token, retrieves the Collection share record, builds Collection grants, and grants access to the user.
  - `DeleteCollectionSecretLinkToken` (`curator_core_delete_collection_secret_link_token.go`): Deletes a secret link token and its associated grants.
  - `GetCollectionAccess` (`curator_core_get_collection_access.go`): Checks whether the current session user has access to the Collection via a secret link and redacts the email.
- Security: The `VerifyShareToken` function ensures that the token is well-formed before performing any expensive operations.
- Integration Tests: The `services/controlplane/test/integration/collection_share_test.go` file contains integration tests that verify the functionality of the secret link sharing feature, including creating, accepting, and deleting secret links.
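To make the flow concrete, here is a small sketch of the client-side pieces described above: a cheap structural check on the token (stored as a `char(68)`), building the accept URL from the endpoint route, and the domain-only email redaction. The allowed token alphabet and the redaction placeholder are assumptions, not the exact server implementation:

```typescript
// share_public_token is a char(68) in the database.
const TOKEN_LENGTH = 68;

// Mirrors the intent of VerifyShareToken: reject malformed tokens before
// any expensive database lookup. The character set is an assumption.
function isWellFormedShareToken(token: string): boolean {
  return token.length === TOKEN_LENGTH && /^[A-Za-z0-9_-]+$/.test(token);
}

// Builds the accept URL from the documented route:
// POST /v1/collection-secret-token/:publicToken
function buildShareUrl(baseUrl: string, token: string): string {
  return `${baseUrl}/v1/collection-secret-token/${token}`;
}

// GetCollectionAccess redacts an email address to show only its domain;
// the "***" placeholder is illustrative.
function redactEmail(email: string): string {
  const at = email.indexOf("@");
  return at === -1 ? "***" : `***@${email.slice(at + 1)}`;
}
```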
Summary
Storytell now supports Map-Reduce functionality for Large Language Models, enabling more effective processing of lengthy content and complex queries. This architectural improvement provides better handling of large datasets by breaking them into manageable chunks, processing them independently, and then combining results. Users will experience improved responses when working with extensive documents or complex queries that require analyzing multiple references.
Who We Built This For
This feature was built for users who work with large knowledge bases, extensive documentation sets, or need to analyze multiple documents simultaneously. It’s particularly valuable for research teams, content strategists, and knowledge workers who require comprehensive analysis across multiple information sources.
The Problem We Solved With This Feature:
Previously, when dealing with large volumes of information, LLMs would either hit context limitations or provide incomplete answers that failed to incorporate all relevant information. This led to inconsistent results when processing extensive documentation or when answers required synthesizing information from multiple sources. Map-Reduce solves this by enabling systematic processing of large information sets while maintaining coherent, comprehensive responses.
Specific Items Shipped:
- Map-Reduce Architecture: Implemented a distributed processing approach that breaks large datasets into manageable chunks, processes each independently, and then combines the results into a coherent response.
- Enhanced Debugging Capabilities: Added comprehensive debugging tools that provide visibility into the execution process, making it easier to troubleshoot and optimize performance.
- Improved Resource Management: Implemented structured accounting for token usage based on actual LLM reports rather than internal estimates, providing more accurate usage metrics.
- Standardized LLM Integration: Refactored how prompt building and execution occur, creating a consistent interface for all LLM interactions regardless of where execution happens.
Explained Simply:
Imagine you’re researching a complex topic for a school project that requires reading 20 books. Without help, you’d need to read all the books cover-to-cover, remember everything, and then write your report. That’s exhausting and you’d probably miss important details.
Map-Reduce is like having a study group where each person reads a different book, takes notes on the important parts, and then everyone comes together to combine their findings into one comprehensive report. This way, all the important information gets included, and the final report is more complete than if any one person tried to read everything.
Storytell now works similarly when processing large amounts of information. It breaks up big tasks into smaller pieces, handles each piece separately, and then combines the results into one coherent answer - giving you better, more comprehensive responses when working with lots of information.
Technical Details:
The Map-Reduce implementation consists of several key components:
The system introduces a core `ai.Streamer` interface in `pkg/go/domains/ai`, with specific implementations for each LLM vendor (e.g., OpenAI, Gemini).
The execution flow:
- Input content is divided into chunks that fit within model context windows
- A “mapper” prompt processes each chunk independently
- A “reducer” prompt combines and synthesizes the outputs from all mappers
- The final reduced output is returned as a cohesive response
A new debugging framework in `pkg/go/domains/debug` provides structured capture of execution details, including token usage, response fragments, and timing metrics. Debug output is stored in a standardized JSON format at `{{bucket}}/{{organization}}/debugged/prompt/{{messageId}}.json`.
Token accounting now leverages the `Accountant` interface, which records actual token usage as reported by the LLMs rather than relying on internal estimates.
Citations are limited to a maximum of 200 entries to prevent overwhelming the response with excessive references.
Prompt execution has been decoupled from the control plane, allowing the same execution patterns to be used across different services.
The implementation includes safeguards against excessive parallel requests and proper resource cleanup to prevent goroutine leaks.
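The execution flow above can be sketched in a few lines, with a toy stand-in for real model calls. Chunk sizes and prompt wording are illustrative, and production code would run the map step in parallel with a concurrency cap:

```typescript
// A stand-in for an LLM call; real implementations are async and vendor-specific.
type Llm = (prompt: string) => string;

// Step 1: divide input into chunks that fit a context window.
function chunk(text: string, maxLen: number): string[] {
  const parts: string[] = [];
  for (let i = 0; i < text.length; i += maxLen) {
    parts.push(text.slice(i, i + maxLen));
  }
  return parts;
}

// Steps 2-4: map each chunk independently, then reduce the partial outputs
// into one cohesive response.
function mapReduce(llm: Llm, content: string, maxLen: number): string {
  const mapped = chunk(content, maxLen).map((c) => llm(`Summarize: ${c}`));
  return llm(`Combine: ${mapped.join(" | ")}`);
}
```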
Summary
Storytell now integrates Gemini 2.0 Pro, replacing Gemini 1.5 Pro. This upgrade brings Google’s latest large language model capabilities to Storytell, offering improved reasoning, understanding, and response quality. As part of this update, we’ve also streamlined our model offerings by removing several less effective models to optimize the experience.
Who We Built This For
This feature was built for all Storytell users who rely on high-quality AI assistance. It’s particularly valuable for users working on complex tasks that require sophisticated reasoning, document analysis, or nuanced responses.
The Problem We Solved With This Feature:
Google announced that Gemini 1.5 models will be deprecated in the coming months, which would have eventually caused disruptions for users. Additionally, there was an opportunity to improve overall response quality by upgrading to Google’s latest model. This update ensures continued support while providing access to Google’s most advanced AI capabilities.
Specific Items Shipped:
- Gemini 2.0 Pro Model: Integrated Google’s latest Gemini 2.0 Pro model, offering improved reasoning capabilities and response quality compared to the previous 1.5 version.
- Model Cleanup: Removed several underperforming models (AnthropicClaude3Haiku, GoogleGemini15Flash, and MetaLlama323bPreview) to streamline options and focus on the highest-performing models.
- Ranking Updates: Removed the Artificial Analysis ranking system, which was discontinued by the provider, ensuring that model selection remains based on current, relevant metrics.
Explained Simply:
Think of large language models like different chess engines that help you play better chess. We’ve just upgraded from a good chess engine (Gemini 1.5 Pro) to an even better one (Gemini 2.0 Pro) that sees more moves ahead and makes smarter decisions.
At the same time, we removed some older chess engines that weren’t as helpful, keeping only the strongest options in the lineup.
Summary
This feature introduces a retry button for file uploads that fail to write correctly to a GCP storage bucket. When a file upload returns a “0” response (indicating a failure due to permission issues, incorrect bucket names, network problems, or exceeding quotas), the user is presented with a retry button to manually initiate another upload attempt.
Who we built this for
- Content Uploaders and End Users: Users relying on our file upload mechanism can now recover from transient GCP errors without having to restart the entire process.
- Developers and Support Teams: Enhances debuggability and reduces user friction when encountering uploads that return an error.
The problem we solved with this feature
Many users encountered upload failures due to various backend issues with GCP. Without a way to retry, any interruption would force users to re-initiate file uploads from scratch, leading to frustration and potential data loss. By implementing a retry mechanism, we allow users to quickly recover from temporary failures, thus enhancing the user experience and overall reliability of our upload process.
Specific items shipped:
- Retry Button UI Element: A new button component has been added to the upload progress UI. When an upload fails (i.e., returns a “0” response), this button becomes visible, enabling the user to attempt the upload again. This improves usability and provides clear feedback on the upload status.
- GCP File Upload Retry Logic: The underlying logic checks the response from GCP during a file upload. If a failure is detected, it triggers a state change in the UI to display the retry button. This logic helps to isolate transient errors from permanent upload issues.
- Error State Management in FileUploadProgress Component: Updates to the FileUploadProgress component now accommodate error checking and prompt for retries. This ensures that the UI accurately reflects the file’s upload status and interacts with the retry mechanism seamlessly.
Explain it to me like I’m five
Imagine you are trying to send a drawing to a friend, but sometimes the mail van doesn’t pick it up. Now, you have a special button that says “Try Again” so you can ask the mail van to try picking up your drawing one more time. This way, you don’t have to start over and you eventually send your drawing successfully.
Technical details
The FileUploadProgress component has been updated to include error detection. When a PUT request to GCP returns a “0” response, the UI state is set to an error mode, causing the retry button (styled using the `StButton` component, with a refresh icon from `TbRefresh`) to render. The retry logic is implemented as an event handler on the button, which re-initiates the upload process. This feature integrates with the existing file uploader architecture and allows for state-based UI updates using SolidJS’s reactive components.
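The state transition at the heart of this is simple enough to sketch. The state names below are illustrative, not the component's actual internals; the key fact from the release notes is that a “0” status signals a failure that should surface the retry button:

```typescript
// Illustrative upload states; the real component's state machine may differ.
type UploadState = "uploading" | "done" | "error";

function nextState(httpStatus: number): UploadState {
  if (httpStatus === 0) return "error"; // network-level failure from GCP → offer retry
  if (httpStatus >= 200 && httpStatus < 300) return "done";
  return "error"; // other non-2xx responses also surface the retry button
}

function shouldShowRetryButton(state: UploadState): boolean {
  return state === "error";
}
```

The retry button's click handler would then simply re-issue the PUT request and feed the new status back through `nextState`.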
Summary
This feature redefines how Storytell extracts and processes YouTube content. By shifting to a ScrapingBee-based solution, we now reliably retrieve video page HTML and transcripts. The integration leverages premium proxy settings and advanced header forwarding techniques to improve rendering and data accuracy.
Who we built this for
- Content Curators & Data Analysts: Users who need accurate metadata and transcript data from YouTube videos to populate content repositories, enhance search functionality, and power data-driven insights.
- Developers: Teams that rely on seamless integrations to fetch, process, and analyze video content without dealing with unstable scraping techniques.
The problem we solved with this feature
We encountered challenges with the legacy method of extracting YouTube content using Firecrawl, which often resulted in incomplete transcript data and unreliable HTML scraping. This issue directly affected users who depend on accurate video metadata for content analysis and discovery. By introducing the ScrapingBee-based approach, we solved these reliability and performance issues, ensuring that our clients can confidently ingest and use YouTube video data.
Specific items shipped
- YouTube Transcript Extraction Overhaul: We restructured the function in `crawl_strategy_youtube.go` to utilize ScrapingBee for retrieving video page content. This not only provides a more reliable method to fetch transcripts but also ensures a robust fallback in case captions are missing. The enhanced workflow minimizes errors and improves data completeness.
- ScrapingBee Integration Module: A new module, encapsulated in `pkg/go/domains/assets/ingest/web/scrapingBee.go`, was introduced. This module handles HTTP requests to the ScrapingBee API, forwarding necessary headers and managing proxy configurations. It abstracts the complexity of the scraping process to provide a simple interface for content extraction.
- Configuration and Dependency Updates: With the transition from Firecrawl to ScrapingBee, relevant dependencies have been updated in `go.mod` and unused dependencies removed from `go.sum`. Configuration files now require a ScrapingBee API key, ensuring that the setup is streamlined for the new integration.
- Codebase Streamlining: Legacy scraping functionality was removed and replaced with concise, maintainable code. This cleanup not only reduces technical debt but also makes future updates to our scraping strategies easier to implement.
Explain it to me like I’m five
Imagine you have a magic net that catches butterflies (YouTube videos) and shows you pretty pictures (video webpages) and tells you the stories they share (transcripts). Our old net sometimes missed parts of the stories. We built a better net that catches more details, so everyone can enjoy the full story every time!
Technical details
At its core, the new feature uses the ScrapingBee API to perform HTTP GET requests. It constructs requests by encoding the target YouTube URL and sets parameters such as `RenderJS` and `PremiumProxy` to ensure complete rendering of dynamic content. The integration allows header forwarding, meaning custom HTTP headers can be added to mimic browser behavior, which is essential for bypassing common scraping pitfalls. After receiving the response, the module processes the HTML to extract essential metadata such as the title and description. Additionally, the transcript extraction logic uses XML parsing to decode caption data. This shift not only replaces the older Firecrawl implementation (which has now been fully removed) but also makes the system more robust by combining effective error handling, improved dependency management, and a simplified code architecture.
Next Steps & Considerations:
- Investigate further API performance metrics to benchmark the reliability of ScrapingBee over the previous Firecrawl method.
- Develop comprehensive documentation and unit tests for the new scraping module to ensure future maintainability.
- Explore potential error logging improvements and configuration diagnostics to quickly detect changes in YouTube’s page structure that might affect transcript extraction.
Summary
Storytell now gives you direct control over which AI model processes your prompts with our new LLM Selection feature. Users can now override the default dynamic model selection and specifically choose models like O3 Mini, GPT-4o, Claude, or Gemini for their particular needs. This feature puts more power in your hands to optimize your AI interactions for specific types of queries. The UI offers an intuitive dropdown menu that makes selecting your preferred AI model simple and straightforward.
Who We Built This For
We built this feature specifically for users like Jeremy who need greater control over which AI models handle their prompts. This feature serves professionals who work with specialized content where certain AI models perform better than others for specific tasks.
The Problem We Solved With This Feature:
Previously, Storytell would automatically select the best AI model for each prompt using our dynamic LLM routing system. While this works well in most cases, power users sometimes need to specify a particular model because they know it performs better for certain types of queries or content. Without manual selection capability, users couldn’t leverage their knowledge of model strengths or preferences, limiting the control they had over their AI interactions.
Specific Items Shipped:
- LLM Selection Dropdown: Added an intuitive dropdown menu in the chat interface that allows users to select specific AI models instead of relying on automatic model selection. The dropdown includes options for various models from different providers, including OpenAI, Anthropic, Google, and open-source options.
- Model Categorization: Organized available models by vendor (OpenAI, Anthropic, Google, etc.) to make finding specific models more intuitive. This structure helps users quickly navigate to their preferred model family.
- Default “DynamicLLM” Option: Maintained the original automatic model selection as the default option, ensuring that casual users still benefit from Storytell’s intelligent routing while giving power users the ability to override when needed.
- Visual Indicators for Selected Models: Implemented visual feedback that clearly shows which model is currently selected, making the system state obvious to users at all times.
Explained Simply:
Imagine you have a toolbox with different types of screwdrivers. Before this update, Storytell would automatically pick what it thought was the best screwdriver for each job. That works great most of the time, but sometimes you know exactly which screwdriver you need because you’ve done similar work before.
Now, with this new feature, you can reach into the toolbox and grab exactly the screwdriver you want. Some screwdrivers (AI models) are better at creative writing, others at analyzing data, and some at producing concise answers. If you’re working on a project where you know a specific AI model works best, you can now select it directly instead of letting Storytell choose for you.
It’s like being able to pick between different expert assistants based on what you know about their strengths and how they match your specific needs at that moment.
Technical Details:
The LLM Selection feature introduces a new UI component in the chat interface that integrates with the existing prompt context system. The implementation uses a dropdown menu created with the StDropdown component, which renders available model options organized by vendor.
The feature maintains a model selection state that’s initialized as undefined (which triggers the default dynamic model selection). When a user selects a specific model, the state updates to store the exact model ID (e.g., “o3-mini”, “gpt-4o”, “claude-3-7-sonnet-latest”).
The backend already supported model specification in requests, accepting both vendor names (like “openai” or “anthropic”) and specific model IDs. The frontend implementation now passes these specific model IDs when selected by the user.
The UI builds the dropdown hierarchically, with main categories for:
- DynamicLLM (default)
- OpenAI models (GPT-4o, O1, O1 Mini, O3 Mini)
- Anthropic models (Claude variants)
- Google models (Gemini variants)
- Open Source models (DeepSeek, Llama)
Each model selection is visually highlighted when active, and the current selection is displayed as the dropdown trigger label. The implementation includes integration with the prompt submission pipeline so that the selected model is applied when the user sends their prompt.
When no specific model is selected, the system falls back to the original dynamic LLM routing logic, ensuring backward compatibility and preserving the intelligent model selection for users who don’t need manual control.
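The fallback described above reduces to a small piece of logic: an `undefined` selection keeps dynamic routing, and a concrete model ID overrides it. The router function here is a stand-in, not Storytell's actual routing system:

```typescript
// A model id such as "o3-mini", "gpt-4o", or "claude-3-7-sonnet-latest".
type ModelId = string;

// undefined selection → fall back to dynamic LLM routing (the default);
// a concrete id → use exactly that model.
function resolveModel(
  selected: ModelId | undefined,
  dynamicRouter: () => ModelId, // stand-in for the intelligent routing logic
): ModelId {
  return selected ?? dynamicRouter();
}
```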
Summary
This feature enables each chat thread to carry one or more attachments. In practice, a user can now include extra files or pieces of text (such as bits of content extracted from web pages) with any text prompt. The attachments are stored only as part of the thread’s local state and are not indexed for search or embedded into the knowledge base. This design supports use cases where a user wants to have a one‑off conversation or “chat session” enriched with extra details or references without cluttering the overall knowledge base.
Who We Built This For
- Chrome Extension Users and Chat-focused Customers: We built this feature for users who interact with our platform via our Chrome Extension or who prefer a simpler chat experience. It especially benefits those who want to quickly include supplemental data or files without having to configure a full-scale knowledge base.
The Problem We Solved with This Feature
Previously, adding reference material to chats required it to be absorbed into the global knowledge base, forcing users to manage a large persistent store—even when the references were for a single use. This feature solves that problem by allowing ephemeral attachments:
- It streamlines one-off or temporary chats.
- It prevents unnecessary clutter within the knowledge base.
- It improves performance and privacy by not processing attachments for search or indexing beyond their relevant chat thread.
Specific Items Shipped
- ThreadState now Stores Attachments: The internal data model for a chat thread (`ThreadState`) has been extended to include an “attachments” field. This addition means that any files or content attached to a chat are stored directly with the conversation context, ensuring that only relevant and immediate attachments are preserved.
- Option to Add Attachments via Prompt: A new field has been added to the Prompt struct along with a `WithAttachments` option function. With this change, developers can now include attachments when sending a text message prompt. The process is as simple as adding an “attachments” parameter to your API call, making the integration more flexible.
- Attachment Processing within the Token Budget: We introduced an `addAttachments` method that processes attachments alongside other prompt data. This method ensures that attachment content is properly prioritized and accounted for within the overall token budget, so the response generation process remains efficient even when supplemental data is included.
- Modified Prompt Building Flow: The workflow for building a prompt has been updated in the functions responsible for handling knowledge base references (i.e., the `knowledgeBase`, `includeTrainingData`, and `onlyTrainingData` methods). They now factor in attachments when constructing the final prompt. This ensures that if attachments are provided, they are integrated optimally without surpassing budget limits.
- Bug Fix in the Knowledge Base Writer: A minor bug previously allowed headers and instructions to be written even when there was no available token budget. This has now been corrected, so unnecessary data is no longer appended, which improves prompt efficiency and clarity.
Explain It to Me Like I’m Five
Imagine you’re drawing a picture and you want to add a sticker just for fun. Now, you can stick that sticker on your picture, but it only stays on that picture and doesn’t go into the big book of all your drawings. This way, you can have extra fun with your picture without worrying about cluttering the big book.
Technical Details
Under the hood, the chat thread model has been enhanced by extending its state structure to include an array of attachment objects. Each attachment is defined with the following properties:
- ID: Generated using our `stid` package to uniquely identify each attachment.
- Title: Can represent the original filename, a document title, or a custom label.
- Source: A string that may denote a filename, URL, or other origin identifier.
- ExtractedContent: The content extracted from the attachment, ideally processed into markdown.
- Size and ContentType: The attachment size (with a per-file limit of 1MB and a total limit of 3MB for a thread) and its MIME type.
To support the feature:
- A new field called `attachments` was added to the `Prompt` struct.
- The `WithAttachments` option function allows developers to easily add one or more attachment objects during the creation of a prompt.
- The `addAttachments` method is responsible for processing attachment data. It ensures that the content of each attachment is trimmed if necessary (using a preset token budget), validates that required fields (like title, source, and extracted content) are provided and not empty, and automatically rejects or truncates attachments that exceed predefined size limitations.
- The methods responsible for constructing the final prompt (which incorporate knowledge base, training data, or a combination thereof) have been modified to “reserve” space within the token budget for the incoming attachment content. This integration guarantees that attachments receive priority when the model constructs the full input for generating a response.
- Finally, any unnecessary addition of header information to the buffer during knowledge base writing (if insufficient budget exists) has been addressed to ensure that the final prompt strictly fits within system constraints.
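The validation, size-limit, and token-budget rules above can be sketched as follows. This is a minimal TypeScript illustration of the described behavior, not the production code (which lives in the Go backend); all names here are illustrative.

```typescript
// Hypothetical sketch of the attachment-processing rules described above.
interface Attachment {
  id: string;
  title: string;
  source: string;
  extractedContent: string;
  size: number;       // bytes
  contentType: string;
}

const MAX_FILE_BYTES = 1 * 1024 * 1024;   // 1MB per attachment
const MAX_THREAD_BYTES = 3 * 1024 * 1024; // 3MB per thread

// Validate required fields, enforce size limits, and trim content to a token budget.
function addAttachments(
  attachments: Attachment[],
  tokenBudget: number,
  approxCharsPerToken = 4,
): Attachment[] {
  let totalBytes = 0;
  let remainingChars = tokenBudget * approxCharsPerToken;
  const accepted: Attachment[] = [];
  for (const a of attachments) {
    // Reject attachments with missing required fields.
    if (!a.title || !a.source || !a.extractedContent) continue;
    // Reject attachments that exceed per-file or per-thread size limits.
    if (a.size > MAX_FILE_BYTES || totalBytes + a.size > MAX_THREAD_BYTES) continue;
    // Truncate content that exceeds the remaining token budget.
    const content = a.extractedContent.slice(0, remainingChars);
    if (content.length === 0) break;
    remainingChars -= content.length;
    totalBytes += a.size;
    accepted.push({ ...a, extractedContent: content });
  }
  return accepted;
}
```

The same "validate, then reserve budget, then truncate" ordering is what keeps attachment content from crowding out the knowledge-base and training-data sections of the final prompt.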
Summary
This feature enhances our system’s ability to generate explanation prompts tailored to different audience levels. Originally offering only “explain like I’m five” responses, we have now added options that provide responses in a style suited for middle school and high school audiences. In practice, when a user selects a prompt, our platform uses specialized configurations to generate an explanation at the requested level of complexity. This helps non-technical users, educators, and others quickly obtain explanations that match their understanding.
Who we built this for
- Educators, Instructional Designers, and Customer Support Teams: Those who need to communicate complex ideas in simple language.
- Non-technical Users: Individuals, including early learners, who benefit from explanations tailored to their age group.
- Internal Product Teams: Team members (such as Andi Rosca and DROdio) seeking clarity in how the platform adapts explanations based on user requirements.
The problem we solved with this feature
Before this update, our platform only offered one style of explanation (“explain like I’m five”) which did not suit the needs of users requiring different levels of detail. This feature addresses the need for customizable explanation levels, making complex concepts accessible to a broader audience. By tailoring responses to age-specific requirements, we improve overall user experience and ensure that our explanations resonate with the intended audience.
Specific items shipped
- StDropdown Component Update
  - Detail: The dropdown component now sets its stacking order (z-index) higher (from 111 to 400) to ensure it is prominently displayed when needed. This minimizes UI conflicts and ensures that pop-up explanations remain visible above other elements.
  - Explain it to me like I’m five: Imagine you have a box of toys. We decided to put this special box on the top shelf so that you can always see it when you need it.
  - Technical details: We updated the CSS class string in the StDropdown component (and its item subcomponent) to modify the z-index value. By changing the index from 111 to 400, we ensure that the dropdown view will overlay other elements without interference during animations and visibility toggling.
- TextSelectionMenu Prompt Enhancements
  - Detail: The prompt text in the TextSelectionMenu has been revised. The original message “Explain this to me like I’m five” was updated to “Explain this to me like I’m five years old” for clarity. Additionally, new options have been integrated for middle school (“Explain this to me like I’m in middle school”) and high school (“Explain this to me like I’m in high school”) explanations.
  - Explain it to me like I’m five: Think of it like having different storybooks. Now, if you’re a bit older or need a different kind of story, you have books made just for your age.
  - Technical details: In the TextSelectionMenu component, the onSelect event now uses a configuration from `responsePromptsData` to set the correct prompt based on the user’s selection. Several new cases were added, each mapping to a different explanation style. The event handler calls the appropriate function (`explainLikeImTwelve` or `explainLikeImSixteen`) after retrieving the text input, ensuring that the response reflects a middle school or high school level of explanation.
- TextUnitV1ActionsBar Enhancements
  - Detail: Similar changes were implemented in the TextUnitV1ActionsBar component. Interactive elements now trigger age-specific explanation prompts. Users can choose to have the text explained in a simplified manner for young children or in more advanced terms for older students.
  - Explain it to me like I’m five: Imagine if you could press a button on your computer and get a simple answer or a little more detailed one based on how old you are. It’s like choosing which version of a story you want!
  - Technical details: Updates were made to the event handling logic in TextUnitV1ActionsBar. New calls to `submitPrompt` have been added that leverage the enhanced responses from `responsePromptsData`. The component uses conditional logic to direct the input text through either the `explainLikeImTwelve` or `explainLikeImSixteen` transformation functions, ensuring that the text is correctly formatted for the target audience.
- Response Prompts Data Configuration Updates
  - Detail: We extended the `responsePromptsData` configuration to support multiple age-based explanation templates. In addition to the ELI5 template, prompt templates for middle school (`explainLikeImTwelve`) and high school (`explainLikeImSixteen`) have been added. These templates include specific instructions to ensure that every generated explanation is tailored to its intended audience.
  - Explain it to me like I’m five: It’s like writing different versions of a recipe for making a cake – one for little kids, a more detailed one for older kids, and an even fancier one for teenagers. Everyone gets a recipe that makes sense for them!
  - Technical details: The `responsePromptsData` file now includes new functions that wrap the input text in detailed instructions. These functions provide layered explanation prompts that guide the response generator on how to elaborate the explanation based on the selected age group. The templates use string interpolation and clearly defined language guidelines to ensure consistency in tone and detail for each audience.
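The wrapper functions described above might look like the following sketch. The function names come from the text; the exact instruction wording inside each template is illustrative, not the shipped copy.

```typescript
// Illustrative age-based prompt templates; the real instruction text in
// responsePromptsData may differ.
const explainLikeImFive = (text: string): string =>
  `Explain the following to a five-year-old, using short sentences and simple words:\n\n${text}`;

const explainLikeImTwelve = (text: string): string =>
  `Explain the following at a middle-school level, defining any technical terms:\n\n${text}`;

const explainLikeImSixteen = (text: string): string =>
  `Explain the following at a high-school level, keeping key terminology intact:\n\n${text}`;

// Map a menu selection key to its template, as an onSelect handler might do.
// The selection keys here are hypothetical.
const responsePromptsData: Record<string, (t: string) => string> = {
  eli5: explainLikeImFive,
  middleSchool: explainLikeImTwelve,
  highSchool: explainLikeImSixteen,
};
```

Keeping each audience level as a small pure function makes it straightforward for both TextSelectionMenu and TextUnitV1ActionsBar to share the same configuration.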
Summary
This feature revises the homepage experience to improve clarity in showcasing our platform’s use cases. The homepage “hero” section now presents a reordered set of cards that better align with target user roles (e.g. Analysts, Product Managers, Sales, Finance, and Venture Capitalists). In addition, a dedicated “De‑silo Teams” hover link has been added in the navigation area. The “De‑silo Teams” link now features a custom SVG icon (sourced from Webflow) that visually matches the rest of our design system. Together, these changes streamline user access to key product functionalities while enhancing visual consistency.
Who we built this for
- End Users by Role:
- Analysts: Receiving a focused “Market Landscape Analysis” card.
- Customer Success Professionals: To help identify churn risks and retention opportunities.
- Product Managers: With prioritized tools like the “AI-Driven PRDs” card.
- Sales and Marketing Teams: With improved visibility of use cases such as “Sales Battlecards” and “Executive Updates”.
- Navigation Users: Those needing a dedicated entry point to the “De‑silo Teams” feature.
The problem we solved with this feature
Before this update, our homepage card order did not effectively communicate the platform’s key use cases, making it difficult for users to quickly grasp which functionalities applied to their roles. Additionally, the navigation did not include a dedicated entry point for the “De‑silo Teams” feature. These issues reduced clarity and user engagement. By reordering the cards and adding a clear, visually consistent “De‑silo Teams” link, we now offer an improved and intuitive user experience.
Specific items shipped
- Navigation Enhancement – New “De‑silo Teams” Hover Link
  - Details: A new `HoverLink` component was added to the header navigation. This element directs users to the “De‑silo Teams” section and uses a custom SVG icon imported directly from our Webflow design assets. The SVG, with a viewBox set to “0 0 48 48” and fill set to “currentColor,” guarantees that the icon scales properly and matches the established aesthetics.
  - Explain it to me like I’m five: Imagine your favorite storybook—now we put a shiny new button on the cover that shows a pretty picture so you can easily find that special chapter about how friends work together.
  - Technical details: In the `LandingPageNavbar.tsx` file, we inserted a `<HoverLink>` with the following properties: `href="https://web.storytell.ai/de-silo-teams"`, `icon={icons.deSiloTeams}`, and `label="De-silo Teams"`. The `icons.deSiloTeams` property now contains a new SVG element using the provided SVG path data. The icon’s attributes (such as `width`, `height`, and `viewBox`) ensure it is fully responsive and inherits the parent element’s color via `fill="currentColor"`.
- Homepage Use-Case Cards – Reordering & Content Updates
  - Details: The `HeroUseCases` component in `NewHomePage.tsx` was updated by modifying the underlying data array that drives the display of use-case cards. Several cards have been revised:
    - The “Analyze a startup pitch deck” card has been replaced with a “Market Landscape Analysis” card for Analysts.
    - The churn-risk and retention card now better addresses the needs of Customer Success teams.
    - Cards targeting Product Managers, Marketing, Sales, and Finance have been reordered and their titles and descriptions updated to clearly express value. These changes allow the homepage to immediately speak to each user group with a prioritized and contextual message.
  - Explain it to me like I’m five: Think of it like sorting your crayons. Instead of having them all jumbled up, we arranged them in a neat order so that your favorite colors are right on top and easy to pick out.
  - Technical details: The reordering was accomplished by editing the array of configuration objects defined in the `HeroUseCases` function within the `NewHomePage.tsx` file. Each card object includes keys such as `href`, `image`, `useCase`, `title`, and `description`. For example:
    - The initial entry was updated from using the `/use-case/analyze-startup-pitch-deck` route to `/use-case/market-landscape-analysis` with the `useCase` property set to “Analyst.”
    - Similar modifications were made for other objects to adjust the links, images, and messaging. These changes directly affect how the React components render, ensuring that the homepage accurately represents our strategic priorities and addresses the needs of our target roles.
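A sketch of what one entry in that configuration array could look like. The field names (`href`, `image`, `useCase`, `title`, `description`) and the new route are from the text; the image path and description copy are hypothetical.

```typescript
// Shape of a homepage use-case card, per the keys listed above.
interface UseCaseCard {
  href: string;
  image: string;      // asset path — illustrative value below
  useCase: string;
  title: string;
  description: string;
}

// The updated first entry, as described in the change.
const heroUseCases: UseCaseCard[] = [
  {
    href: "/use-case/market-landscape-analysis",
    image: "/images/use-cases/market-landscape.png", // hypothetical path
    useCase: "Analyst",
    title: "Market Landscape Analysis",
    description: "Map competitors and trends across your market.", // illustrative copy
  },
  // ...remaining cards for Product Managers, Sales, Marketing, and Finance.
];
```

Because the cards are plain data, reordering or retargeting them is a matter of editing this array rather than touching the rendering component.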
Summary
This feature increases the limit on the number of sheets (tabs) that can be processed from an Excel file. Prior to this change, the system was limited to processing 20 sheets. With this update, users can now upload and process XLS files with up to 30 sheets, addressing a common file upload failure issue reported by users.
Who we built this for
- Data analysts and spreadsheet users: Users working with complex Excel files that include multiple sheets benefit by having an increased limit for processing their data.
The problem we solved with this feature
Prior to this update, XLS file uploads that contained more than 20 sheets would fail, causing significant workflow disruptions. By increasing the tab limit to 30, we have resolved the upload issue, ensuring smoother processing of large, multi-sheet Excel files. This matters because it enhances user experience by reducing errors and supporting more complex data files.
Specific items shipped
- Increase Tab Limit: The maximum number of sheets that the system can process from an Excel file has been increased from 20 to 30. This improvement was implemented by updating the `XlsMaxSheets` value in the source code, accommodating larger and more complex Excel files.
- Enhanced File Upload Stability: Alongside the increased sheet limit, diagnostic feedback indicated that while every sheet may have over 1,000 rows, the processing now completes successfully for each sheet. This ensures that even for hefty datasets, the upload process remains robust.
Explain it to me like I’m five
Imagine you have a coloring book with many pages. Before, our program could only look at 20 pages of your coloring book at a time. Now, it can look at 30 pages! This means when you give the program a big coloring book, it can see more pages without getting confused, even though some pages have a lot of pictures.
Technical details
The feature was implemented in the Go file located at `pkg/go/domains/assets/tabular/xls.go`. The key change was updating the `XlsMaxSheets` constant (initially set to 20) to allow up to 30 sheets. The change directly addresses feedback from DROdio, ensuring that XLS files with many tabs are processed without upload failures, although caution is advised regarding very large sheets (each exceeding 1,000 rows), which could introduce challenges in the summarization stage.
Summary
We have shipped a feature that now allows spaces when using the “@” trigger to mention assets or Collections in Storytell. This update improves the mention search functionality by correctly handling file names and Collection names that include spaces. Users will experience a more intuitive and error-free mention system, ensuring all relevant assets and Collections appear in search results.
Who We Built This For
This update is built for content creators and platform administrators who routinely mention assets or Collections in Storytell. It is particularly beneficial for teams and users that work with files or Collection names containing spaces, ensuring a seamless experience when referencing these items.
The Problem We Solved With This Feature:
Previously, the mention functionality would break when assets or Collections had spaces in their names, leading to incomplete search results and user frustration. By allowing spaces, we resolved this issue, ensuring that all assets and Collections are reliably mentioned and accessible, thereby enhancing overall workflow efficiency.
Specific Items Shipped:
- **Allow Spaces in Mentions:** Enabled support for spaces when using “@” to tag assets or Collections, ensuring that file names with spaces are properly recognized.
- **Enhanced Query Matching:** Modified the underlying regex patterns in the MentionsExtension to correctly process multi-word queries, which results in more comprehensive search results.
- **Refined UI Components:** Updated components such as the SearchBar and Mentions rendering to seamlessly integrate the new logic, providing a consistent and user-friendly display across Storytell.
Explained Simply:
Imagine trying to call someone by their full name that contains a space. Previously, Storytell only recognized one part of the name, leading to confusion. Now, it’s like being able to say the whole name correctly so that Storytell can immediately find the right asset or Collection without error.
Technical Details:
The MentionsExtension in the `Mentions.extension.ts` file was updated to set the “allowSpaces” flag to true, altering the regex used to capture queries. This change ensures that the search algorithm can accommodate spaces in file names and Collection names. Additionally, adjustments were made in related components like the SearchBar and the Mentions rendering logic to maintain consistent UI behavior. This integrated solution leverages the editor’s reactive owner along with the useWire pattern to deliver accurate and efficient search results within Storytell.
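The effect of the `allowSpaces` flag can be sketched with a simplified version of the trigger matching. The real editor extension's pattern is more involved; the two regexes below are illustrative only.

```typescript
// Simplified mention-query matchers; the production extension's regex is more complex.
const withoutSpaces = /@([\w-]+)$/;    // old behavior: the query stops at a space
const withSpaces = /@([\w][\w -]*)$/;  // allowSpaces: spaces allowed inside the query

// Extract the in-progress mention query from the text before the cursor, if any.
function mentionQuery(text: string, allowSpaces: boolean): string | null {
  const m = text.match(allowSpaces ? withSpaces : withoutSpaces);
  return m ? m[1] : null;
}
```

With the old pattern, typing `@Quarterly Report` would stop matching at the space, so multi-word assets never appeared in results; with spaces allowed, the whole name reaches the search.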
Summary
This feature removes the relevancy filtering from Storytell. The change simplifies the UI by eliminating the differentiation based on content relevancy, resulting in a more uniform and predictable presentation of information.
Who We Built This For
- Content creators and end users: Those who need a clear and unbiased view of their content, without variability introduced by relevancy filters.
- Platform administrators and stakeholders: Users who identified that the relevancy feature was creating confusion and inconsistencies, as highlighted in internal feedback.
The Problem We Solved With This Feature
The relevancy filter was inadvertently obscuring important information by altering the display order and style of content blocks. This led to difficulties in content review and a less consistent user experience. By removing the relevancy aspect, the platform now presents all content uniformly, which aids in clarity and reduces user error.
Specific Items Shipped
- Relevancy Logic Removal: The code that evaluated relevancy metrics was removed, eliminating the logic that caused content to be displayed unevenly.
- UI Component Refactoring: Components in the UI were updated to remove any styling or ordering based on relevancy markers, resulting in a cleaner appearance.
- Backend Integration Update: Adjustments were made to the API and data handling processes, ensuring that relevancy attributes are no longer included in the responses sent to the frontend.
Explain It to Me Like I’m Five
Imagine if your teacher said some words were extra important while others weren’t, and it made the page look messy. Now, every word looks the same, so it’s easier to read without any distractions.
Technical Details
The removal of relevancy required a comprehensive refactoring of both frontend components and backend data handling. Conditional checks based on relevancy were eliminated from UI components, and the data models were updated to exclude relevancy attributes. This not only simplified the codebase but also improved performance and reduced potential bugs related to mismatched display logic. The changes have been verified with a series of tests to ensure consistent data presentation and UX continuity.
Summary
This feature improves the visual clarity of text in the Magic Link email interface when viewed on dark mode. By updating color contrasts and style rules, the UI provides a better and more accessible reading experience for users who rely on magic links to log in.
Who We Built This For
- Users who prefer dark mode: Particularly those who use dark mode in their applications and value an optimized email interface.
- Accessibility-focused users: Individuals who require high-contrast text for easier reading and better usability.
The Problem We Solved With This Feature
Previously, users struggled to read email content in dark mode due to insufficient text contrast. This made the login process confusing and could lead to misinterpretations of important information. Enhancing text visibility ensures that the login experience is smooth and accessible, thereby increasing overall user satisfaction.
Specific Items Shipped
- Contrast Adjustment: The text color and background were updated to have a higher contrast ratio. This enhancement makes the reading content easily legible on dark screens.
- Style Overrides: CSS modifications, including tweaks to autofill focus styles and input fields, were implemented to ensure a consistent visual experience across browsers.
- Accessibility Enhancements: Minor adjustments aligned with modern accessibility standards were introduced to guarantee that all UI elements comply with color contrast requirements.
Explain It to Me Like I’m Five
Imagine you have a coloring book where the words are written in light colors on a very dark page, making them hard to see. We changed the colors so that the words are now bright and clear, making them easy to read.
Technical Details
The update involved modifying CSS rules and autofill focus styles in the Magic Link email interface. By increasing the contrast ratios and ensuring consistent application of these styles across different browsers, the feature meets contemporary accessibility standards. The process included cross-browser testing and adjustments for specific autofill behaviors, ensuring robust performance in dark mode environments.
Summary
This update improves the Storytell interface by introducing a resizable vertical sidebar, increased capacity for displaying recent SmartChats (20 instead of 5), and chat timestamps for better timeline tracking. These improvements enhance user interface flexibility, navigation efficiency, and provide essential context for active conversations.
Who We Built This For
This feature was designed for active Storytell users managing multiple Collections and engaging in continuous SmartChat conversations. It is particularly useful for users who need a customizable interface and a clearer overview of their conversation history.
The Problem We Solved With This Feature:
Previously, the vertical sidebar was static and non-customizable, and users could only view a limited number of recent SmartChats without timestamps. This made it challenging to navigate through active conversations and manage communication efficiently. By addressing these issues, the update significantly improves user usability and the ability to track conversation timelines.
Specific Items Shipped:
- **Resizable Vertical Sidebar:** The vertical sidebar now allows users to adjust its width, offering a personalized layout for easier navigation through Collections and menus. This flexibility caters to different user preferences and screen sizes.
- **Increased Recent SmartChats Count:** The system now retrieves 20 recent SmartChats instead of the previous 5. This change provides a broader context of current conversations, ensuring users have quick access to more dialogue history.
- **Chat Timestamp Display:** Each SmartChat now features a timestamp that shows when the conversation took place. This addition enhances the user’s ability to track timeline events and manage their chats more effectively.
Explained Simply:
Think of your sidebar as a window that you can resize to let in more of the view you prefer. Not only can you adjust this window, but you now see more of your recent conversations, each marked with the time they occurred so you know exactly when things were happening. This makes finding and following ongoing discussions much easier.
Technical Details:
The implementation utilizes the corvu/resizable library to enable dynamic resizing of the sidebar component in Storytell. Modifications were made to the API call parameters to increase the limit of fetched SmartChats from 5 to 20. Additionally, the SmartChat rendering component has been updated to incorporate the dayjs library for proper timestamp formatting, integrating seamlessly into the existing frontend architecture.
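The timestamp display described above relies on dayjs in the shipped component; an equivalent relative-time rule in plain TypeScript (so the behavior is visible without the library) might look like this. The thresholds are illustrative.

```typescript
// Relative chat timestamps, similar in spirit to dayjs's fromNow();
// the production component uses the dayjs library instead.
function chatTimestamp(then: Date, now: Date = new Date()): string {
  const mins = Math.floor((now.getTime() - then.getTime()) / 60_000);
  if (mins < 1) return "just now";
  if (mins < 60) return `${mins}m ago`;
  const hours = Math.floor(mins / 60);
  if (hours < 24) return `${hours}h ago`;
  const days = Math.floor(hours / 24);
  // Older than a week: fall back to an absolute date.
  return days < 7 ? `${days}d ago` : then.toLocaleDateString();
}
```

Each of the 20 fetched SmartChats would render this label next to its title, giving the timeline context the update describes.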
Summary
This new feature automatically scrolls a Collection into view when a user clicks its breadcrumb in Storytell. It enhances navigation by ensuring that the selected Collection is clearly visible upon interaction. The update leverages event dispatching for a smooth and responsive UI experience.
Who We Built This For
We built this for Storytell users who interact with Collections daily. It is particularly beneficial for users managing large sets of content and who need quick navigation between Collections without manual scrolling.
The Problem We Solved With This Feature
Users were previously experiencing difficulties finding Collections after clicking on breadcrumb links; the target was not always visible, causing confusion and reducing efficiency. With the auto-scroll functionality, users can navigate quickly to the content they need, increasing productivity.
Specific Items Shipped:
- **Auto Scroll Implementation:** Integrated functionality that triggers scrolling behavior upon a breadcrumb click. This ensures that the target Collection slides into view smoothly on the user interface.
- **Event Dispatch Mechanism:** Enhanced the CustomEvents system by adding and dispatching a dedicated scroll event (`scrollCollectionIntoView`) linked to the UI components. This event effectively communicates between the CollectionBreadcrumbs and SidebarCollections.
Explained Simply:
Imagine you have a very long list of items on a shelf, and you click on one item in a small list at the top. Instead of searching through the shelf, the shelf moves, bringing your chosen item right to the front. That’s exactly what this scroll feature does.
Technical Details:
The implementation affects the CollectionBreadcrumbs and SidebarCollections components. Upon a breadcrumb click, the CustomEvents.scrollCollectionIntoView event is dispatched with the specific Collection ID. An event listener in SidebarCollections then captures this event to calculate the proper scroll offset, using smooth scrolling parameters. The refactoring ensures that the new function integrates seamlessly with existing UI state management and enhances the responsiveness of the Collection navigation system.
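The dispatch/listen flow between the two components can be sketched as a minimal event bus. The real code dispatches a DOM `CustomEvent` and scrolls via the DOM (shown in comments); the bus, handler names, and the sample Collection ID here are illustrative.

```typescript
// Minimal event-bus sketch of the scrollCollectionIntoView flow:
// CollectionBreadcrumbs dispatches a Collection ID; SidebarCollections listens.
type Handler = (collectionId: string) => void;

const listeners = new Map<string, Handler[]>();

function listen(event: string, handler: Handler): void {
  listeners.set(event, [...(listeners.get(event) ?? []), handler]);
}

function dispatch(event: string, collectionId: string): void {
  for (const h of listeners.get(event) ?? []) h(collectionId);
}

// SidebarCollections side: react to the event by scrolling the target into view.
let lastScrolledTo: string | null = null;
listen("scrollCollectionIntoView", (id) => {
  // In the real component, this is roughly:
  //   document.querySelector(`[data-collection-id="${id}"]`)
  //     ?.scrollIntoView({ behavior: "smooth", block: "nearest" });
  lastScrolledTo = id;
});

// CollectionBreadcrumbs side: fire the event when a breadcrumb is clicked.
dispatch("scrollCollectionIntoView", "col_123"); // "col_123" is a hypothetical ID
```

Decoupling the two components through an event keeps the breadcrumb unaware of the sidebar's internal scroll state, which is what makes the integration with existing UI state management clean.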
Summary
This update enhances the “Not Found” screen messaging on Storytell by guiding users towards signing in or signing up. The new message more clearly communicates that the page is restricted due to authentication, helping reduce user confusion. The change is aimed at improving the onboarding experience for new users.
Who We Built This For
We built this for newly invited users and all visitors who encounter the Not Found screen. Particularly, it serves users who mistakenly land on restricted pages, clarifying the need to authenticate before accessing content.
The Problem We Solved With This Feature
Previously, users encountered a generic “Not Found” message without clear instructions, leading to confusion about access rights. This update remedies that by explicitly indicating that signing in or signing up is the required next step.
Specific Items Shipped:
- Guidance Message Update:
The Not Found screen now includes a clear call to action, prompting users to log in or sign up to access the page.
Explained Simply:
Imagine you try to open a locked door without a key. Instead of just seeing a locked door, there’s now a sign that tells you exactly where to get the key. That’s what this update does by guiding users towards signing in.
Technical Details:
The update modifies the NotFoundScreen component in Storytell. The textual content has been revised to provide explicit instructions, and spacing/layout adjustments have been made to make the message more prominent. These improvements prevent misinterpretation and align the screen’s messaging with Storytell’s overall user experience guidelines.
Summary
This update introduces Claude 3.7 to Storytell’s LLM router, enhancing the system’s AI capabilities. This integration ensures that Storytell can leverage the latest advancements in language model technology. The new model will provide users with more accurate and contextually relevant responses.
- Improved AI Capabilities
- Enhanced response accuracy
Who We Built This For
This feature is beneficial for all Storytell users who rely on accurate and contextually relevant AI-driven insights, particularly content creators and analysts.
The Problem We Solved With This Feature:
This integration addresses the need for Storytell to stay current with the latest advancements in LLM technology. By adding Claude 3.7 to the LLM router, Storytell ensures that users benefit from the most up-to-date AI capabilities, improving response accuracy and overall performance. This is vital for maintaining a competitive edge and delivering the best possible user experience.
Specific Items Shipped:
- Claude 3.7 Integration: Claude 3.7 has been successfully integrated into Storytell’s LLM router, enabling the system to utilize the new model for generating responses.
Explained Simply:
Think of Claude 3.7 as an upgraded brain for Storytell. By integrating this new model, Storytell becomes smarter and more capable of understanding and responding to your requests accurately. It’s like giving Storytell a boost in intelligence, allowing it to provide better and more relevant information.
Technical Details:
Claude 3.7 has been added to the LLM router. The integration involved fixing bugs that occurred around instructions generated by the new model when the trained context was used. The ai packages were updated to include the `AnthropicClaude37Sonnet` model. The model specification (`AnthropicClaude37SonnetSpec`) includes details such as ranking, performance, cost, context window (180,000 tokens), and maximum output tokens (8,192). The `TexterConfig` and `PromptBuilder` components were adjusted to accommodate the new model.
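A sketch of what such a model registration might look like. Only the context window and maximum output tokens are stated in the text; the field names and everything else below are placeholders, not the actual Go spec.

```typescript
// Hypothetical shape of the new model's spec entry; only the two numeric
// limits come from the release notes.
const AnthropicClaude37SonnetSpec = {
  model: "AnthropicClaude37Sonnet",
  contextWindow: 180_000,   // tokens, per the release notes
  maxOutputTokens: 8_192,   // per the release notes
  // ranking, performance, and cost fields omitted — values not specified here
};
```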
Summary
We shipped an update in Storytell that enables users to export markdown tables as CSV files. With an inline “Download as CSV” button added just below each markdown table, users can now quickly convert table data into a CSV file. This update simplifies data extraction, supports seamless integration with spreadsheet tools, and enhances overall data management within Storytell.
Who We Built This For
This feature was built for content creators, analysts, and marketers who frequently use markdown tables. It addresses the need for an efficient method to extract and reuse table data in external applications such as spreadsheets.
The Problem We Solved With This Feature:
Previously, users had to manually copy and convert markdown table content before using it in spreadsheets. By automating the CSV conversion directly within Storytell, we eliminate manual errors and save valuable time for users who need to manage table data effectively.
Specific Items Shipped:
- CSV Export Button Integration: A new inline “Download as CSV” button has been added right below each markdown table. This provides a direct and convenient way for users to trigger the CSV export process without leaving the current context.
- Markdown Table Parsing Logic: The MarkdownRenderer component now accurately identifies the `<thead>` and `<tbody>` sections of markdown tables. It parses header and row data into a structured array, ensuring that all text is properly escaped to conform to CSV standards.
- Seamless Download Mechanism: The CSV content is generated and encapsulated in a Blob object. A temporary download link is programmatically created and clicked, initiating an automatic download of the CSV file, ensuring a smooth and integrated user experience.
Explained Simply:
Imagine you have a table on your document that you want to use in a spreadsheet, but copying it manually is messy. This update in Storytell adds a button below the table that, when clicked, automatically turns the table into a neatly formatted CSV file. It is like having a tool that effortlessly transforms a printed table into an editable digital version.
Technical Details:
The feature leverages updates in the MarkdownRenderer component by adding a function that locates the table’s `<thead>` and `<tbody>` elements. It then extracts header cells and row data, mapping them into a `string[][]` format. Special characters are properly escaped using CSV conventions. The resulting CSV string is wrapped into a Blob with MIME type “text/csv;charset=utf-8”, and a temporary anchor element is generated to programmatically trigger a download, providing a seamless export experience within Storytell.
Summary
The Collections sidebar now automatically scrolls to bring the active Collection into view. This enhancement ensures users can easily locate and access the Collection they are currently working on without manually scrolling through the sidebar.
Who We Built This For
This feature benefits all Storytell users, particularly those who work with a large number of Collections. It simplifies navigation and improves the overall user experience by ensuring that the active Collection is always readily accessible.
The Problem We Solved With This Feature:
Users previously struggled to locate the active Collection in the sidebar, especially when working with numerous Collections, which led to wasted time and frustration. By automatically scrolling the sidebar to the active Collection, we streamline the user workflow and make it easier to stay focused on the task at hand.
Specific Items Shipped:
- Scrolling Logic Implementation: The application now automatically scrolls the sidebar to ensure the active Collection is visible, simplifying navigation so users can easily locate the Collection they are working on.
- Enhanced User Experience: Users no longer need to manually search for the active Collection in the sidebar, which is especially beneficial for those who work with many Collections. The active Collection is brought directly into view, streamlining workflow.
Explained Simply:
Imagine you have a lot of apps on your phone, and you’re currently using one that’s way down the list. Instead of scrolling through all your apps to find the one you’re using, Storytell now automatically brings that app to the top of the list. This makes it much faster and easier to switch between Collections, especially when you have a lot of them.
Technical Details:
The auto-scroll functionality is implemented in `apps/webconsole/src/components/SidebarCollections.tsx`. A `createEffect` hook observes changes to the `collectionId`. When the `collectionId` changes, the code queries the DOM for the corresponding sidebar accordion trigger element using `document.querySelector`, then calculates the scroll offset needed to bring that element into view within the `#left-drawer > nav` element. Before scrolling, the code uses `getBoundingClientRect` to determine the element’s position relative to the viewport and checks whether it is already in view, preventing unnecessary scrolling.
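The in-view check described above can be factored into pure functions, sketched here under the assumption that only vertical bounds matter. Names like `isInView` are illustrative, not the component’s actual internals; the real code feeds in values from `getBoundingClientRect()` on the trigger element and the container.

```typescript
// Minimal vertical-bounds shape, matching the top/bottom fields of
// getBoundingClientRect() results.
interface Rect {
  top: number;
  bottom: number;
}

// An element is in view when it lies fully inside the container's
// vertical bounds; scrolling is only needed when it is not.
function isInView(el: Rect, container: Rect): boolean {
  return el.top >= container.top && el.bottom <= container.bottom;
}

// New scrollTop value that aligns the element's top with the
// container's top.
function targetScrollTop(el: Rect, container: Rect, currentScrollTop: number): number {
  return currentScrollTop + (el.top - container.top);
}
```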
Summary
This feature introduces automatic Collection sharing with users based on their email domain. This update simplifies Collection access management by allowing you to share Collections with all users from specified email domains.
Key updates:
- Share Collections with all users from specified email domains
- Support for multiple domains per Collection
- Combined sharing with both domains and individual users
- Domain invitation management (creation and deletion)
- Optional bulk access revocation for domain-based users
Who We Built This For:
This feature is for organizations that want to streamline Collection access management for their users. It simplifies sharing Collections with entire teams or departments, reducing the need to individually invite users.
The Problem We Solved With This Feature:
The previous process of sharing Collections required individual user invitations, which was time-consuming and difficult to manage, especially for large organizations. This feature solves this by enabling administrators to share Collections with entire domains, ensuring that all users within that domain automatically gain access. This simplifies user onboarding and reduces administrative overhead.
Specific Items Shipped:
- Domain-based access: Collections can now be shared using domain-based access (`shareDirective: "domainInvite"`), allowing all users with a specified email domain to access the Collection.
- Cron job for domain population: A `cron` job has been established to populate the `domains` table and link `ss_users` to it, automatically adding users on demand as they sign in or sync their auth.
- Automatic access restoration: Every time a user syncs their auth, Storytell scans and grants access if they have the correct domain but are missing access, including restoring access if someone revoked it.
- API update: The API now includes `shareDirective` in access responses to distinguish between Creator access, domain-based access, and direct user invites. Access records also include `openDomainInvitations`, showing active domain sharing configurations with metadata such as creation timestamp, creator information, organization context, and permitted actions.
Explained Simply:
Imagine you’re a teacher and you want all students in your class to have access to a specific study guide. Instead of handing out individual copies to each student, you can now just say that everyone in the “class.school.edu” domain has access. So, whenever a new student joins the class and signs in to Storytell, they automatically get access to the study guide. This makes it easier for teachers to manage resources and for students to get the materials they need.
Technical Details:
Collections can now be shared using three methods: direct user invites (`shareDirective: "directInvite"`), domain-based access (`shareDirective: "domainInvite"`), and Creator access (`shareDirective: "creator"`). A `cron` job has been established to populate the `domains` table and link `ss_users` to it. Every time a user syncs their auth, Storytell scans and grants access if they have the correct domain but are missing access. The API now includes `shareDirective` in access responses to distinguish between the three access types, and access records include `openDomainInvitations`, showing active domain sharing configurations with metadata.
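A rough sketch of the access-resolution rule described above, assuming a simplified shape for access records and open domain invitations. The real logic runs server-side; this TypeScript version, including the helper name, is purely illustrative.

```typescript
type ShareDirective = "creator" | "directInvite" | "domainInvite";

interface AccessRecord {
  userEmail: string;
  shareDirective: ShareDirective;
}

// On auth sync: grant domain-based access when the user's email domain
// matches an open domain invitation and the user is missing access.
function shouldGrantDomainAccess(
  userEmail: string,
  openDomainInvitations: string[],
  existing: AccessRecord[],
): boolean {
  const domain = userEmail.split("@")[1]?.toLowerCase() ?? "";
  const invited = openDomainInvitations.some((d) => d.toLowerCase() === domain);
  const hasAccess = existing.some((r) => r.userEmail === userEmail);
  return invited && !hasAccess;
}
```

Because the rule only checks for a missing access record, it also covers the restoration case: a user whose access was revoked regains it on their next auth sync.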
Summary
Added breadcrumbs to the search bar to show the location of Collections in search results. This feature helps users quickly identify the context and hierarchy of Collections listed in the search results, making it easier to differentiate between Collections with similar names. For example, `BeHeard > User Feedback` now shows under the Collection name.
Who We Built This For
This update is designed for users who frequently use the search bar to find Collections, especially those working with complex Collection hierarchies or Collections with similar names.
The Problem We Solved With This Feature
Previously, it was difficult to distinguish between Collections with similar names in the search results. Users had to rely on other information or navigate to the Collection to determine its context. This feature provides a clear and immediate way to understand the location of Collections in the search results.
Specific Items Shipped
- Breadcrumb Display: The search bar now displays breadcrumbs under Collection names in the search results.
- Hierarchy Indication: Breadcrumbs show the parent Collections of the search result, providing context and hierarchy information.
Explained Simply
When you search for something, you might see a few results with the same name. This feature is like adding addresses to those results, so you know exactly which one you’re looking for. Just as you see file paths for files, you now see breadcrumbs under Collections in the search results, such as `BeHeard > User Feedback`, so you can tell Collections apart by their location.
Technical Details
The changes involve modifying the search bar component in `apps/webconsole/src/components/SearchBar.tsx` to include breadcrumbs for Collection search results. The `CollectionBreadcrumbs` component is used to display the breadcrumb trail.
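Conceptually, a breadcrumb trail like `BeHeard > User Feedback` can be derived from a Collection’s ancestor chain. A minimal sketch with an assumed node shape follows; the production UI uses the `CollectionBreadcrumbs` component rather than this hypothetical helper.

```typescript
interface CollectionNode {
  name: string;
  parent?: CollectionNode;
}

// Walk up the parent chain and join names root-first with " > ".
function breadcrumbTrail(node: CollectionNode): string {
  const names: string[] = [];
  for (let cur: CollectionNode | undefined = node; cur; cur = cur.parent) {
    names.unshift(cur.name);
  }
  return names.join(" > ");
}
```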
Summary
Improved the performance of listing assets within Collections and added a new endpoint for counting assets. This update significantly reduces the execution time for asset listing queries, especially for large Collections, and introduces a new endpoint that returns only the count of assets to drive that functionality in the UI.
- Optimized the query to list assets within a Collection.
- Added an option to list assets within sub-collections.
- Introduced a new endpoint to return the count of assets.
Who We Built This For
This update is for users who frequently work with Collections containing a large number of assets, as well as UI developers needing to efficiently display asset counts.
The Problem We Solved With This Feature
Previously, listing assets within Collections was slow, especially for large Collections or when including sub-collections. This update addresses this performance bottleneck and provides a more efficient way to retrieve asset counts.
Specific Items Shipped
- Query Optimization: The query to list assets within a Collection has been revised to improve performance.
- Sub-collection Listing: Added an optional behavior to list assets within sub-collections.
- Asset Count Endpoint: Introduced a new endpoint that only returns the count of assets.
Explained Simply
Imagine you have a giant filing cabinet with lots of folders and papers. This feature is like hiring a faster worker to find and count the files you need, so you don’t have to wait as long.
Technical Details
- The query in `services/controlplane/domains/curator/collectionsSQL/asset_col_queries.sqlsqlc.generated.go` has been optimized.
- A new endpoint, `GetCountAssetsByCollectionID`, has been added in `services/controlplane/endpoint_collection_get_count_assets_by_collection_id.go`.
- The `GetAssetsByCollectionID` function in `services/controlplane/domains/curator/curator_core_get_assoc_assets.go` was modified to include sub-collections based on the `IncludeSubCollections` parameter.
- Execution time has decreased roughly 3x (from 2.734ms to 0.895ms) with the new query plan.
- Planning time has increased (14.674ms vs 8.021ms), but this is negligible since planning happens only once.
Summary
Adding multiple websites to your Collections just got easier. The new web scraper enhancement lets you input several URLs at once, streamlining your content ingestion process. This update saves time and simplifies the initial setup of Collections with web-based data.
- Add up to 10 URLs in a single screen.
- Simplified workflow for ingesting web content.
Who We Built This For
This feature is designed for content curators, researchers, and knowledge managers who regularly incorporate web-based information into their Collections.
The Problem We Solved
Previously, adding multiple websites was a time-consuming process. Users had to add each URL individually. This feature improves efficiency.
Specific Items Shipped
- “+ Add More” Button: A new button allows users to input multiple URLs on a single screen, up to a limit of 10.
- Batch URL Input: Users can now paste multiple URLs into the input field, separated by spaces, commas, or semicolons.
Explained Simply
Imagine you’re making a playlist, but instead of adding one song at a time, you can add a whole bunch at once. This feature lets you add multiple website links to your Storytell Collections. Instead of adding websites one by one, this feature lets you add up to 10 at once, saving you a lot of time and effort.
Technical Details
The implementation is currently FE-only, with plans to move it to the BE for background processing and status updates. The FE implementation avoids complex DB transactions and asset uploads. The `WebContentUpload.tsx` component has been modified to allow multiple URL inputs, and the `CrawlURL` function in the client API is used to initiate the scraping process for each URL.
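The batch input handling can be sketched as a small parsing helper, assuming the separators and 10-URL cap described above. The helper name is hypothetical; the shipped logic lives in `WebContentUpload.tsx`.

```typescript
const MAX_URLS = 10;

// Split pasted input on spaces, commas, or semicolons, drop empty
// fragments, and enforce the 10-URL limit.
function parseUrlBatch(input: string): string[] {
  return input
    .split(/[\s,;]+/)
    .filter((u) => u.length > 0)
    .slice(0, MAX_URLS);
}
```

Each URL in the returned batch would then be passed to `CrawlURL` individually.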
Summary
A new search functionality has shipped in Storytell. This update allows users to quickly launch a search bar using ⌘ + / (Mac) or Ctrl + / (Windows), dynamically fetches Collections and Assets, and automatically highlights selected assets for better visibility. Users can now navigate directly to a Collection or asset with minimal effort.
- Keyboard shortcut ⌘ + / (Mac) or Ctrl + / (Windows) to instantly open the search bar
- Real-time filtering and organization of Collections and Assets
- Automatic asset highlighting upon selection
Who We Built This For
This feature is designed for Collection managers, administrators, and power users who need rapid access to specific Collections and Assets. It addresses the needs of users handling large datasets who require efficient navigation and content retrieval.
The Problem We Solved With This Feature
Before this update, users had to manually scroll through lengthy lists of Collections and Assets. This update eliminates time-consuming navigation, reducing user effort while enhancing workflow efficiency.
Specific Items Shipped
- Keyboard Shortcut Activation: The search bar can now be opened instantly using ⌘ + / (Mac) or Ctrl + / (Windows), providing immediate access to search functionality.
- Dynamic Search Results: Collections and Assets are filtered dynamically as the user types, ensuring that results are relevant and updated in real time.
- Asset Highlighting & Navigation: Once an asset is selected from the search results, the interface scrolls to it and highlights it for easy identification, improving visual tracking.
Explained Simply
Imagine needing to find a book in a huge library. With this update, you simply press a shortcut, type in what you’re looking for, and the system immediately shows you the exact shelf and book you need. It saves time and makes your search as effortless as using a smartphone’s search feature.
Technical Details
This feature leverages a newly developed `SearchBar` component built with SolidJS. It listens for specific keyboard events to trigger the search interface. The search functionality is powered by real-time API calls (via the `searchAssetsAndCollections` function), which dynamically retrieve and filter the data, while smooth scrolling and DOM manipulation ensure that selected assets are visually highlighted.
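The shortcut detection boils down to one predicate over keyboard events. A sketch using a minimal event shape so the logic is testable outside the browser; the real component listens to `keydown` events on the window, and the predicate name is illustrative.

```typescript
// Minimal shape mirroring the KeyboardEvent fields we need: ⌘ maps to
// metaKey on Mac, Ctrl to ctrlKey on Windows.
interface KeyEventLike {
  key: string;
  metaKey: boolean;
  ctrlKey: boolean;
}

// True when the user pressed ⌘ + / (Mac) or Ctrl + / (Windows).
function isSearchShortcut(e: KeyEventLike): boolean {
  return e.key === "/" && (e.metaKey || e.ctrlKey);
}
```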
Summary
This feature enhances Storytell by exposing the model’s internal reasoning behind its answers. It introduces a dedicated section that presents the reasoning in a clearly formatted display, making it easier to understand how responses are derived. Key updates include new visual cues for the reasoning section and the ability to collapse/expand the reasoning details. The change is user-friendly, providing transparency in an elegant and accessible format.
Who We Built This For:
This feature is built for advanced users and power users who want to gain deeper insights into the answer generation process. Users who need to validate the reasoning behind responses for research or debugging purposes will particularly benefit from this update.
The Problem We Solved with This Feature:
Many users found it challenging to understand how Storytell arrives at its answers. By revealing the hidden reasoning steps, we address the opacity of the process and offer greater transparency. This improvement empowers users with a better understanding of the logic behind responses, making the technology more trustworthy and debuggable.
Specific Items Shipped:
- Reasoning Section UI Update: Introduced a new UI component that displays the model’s internal reasoning with a clear, collapsible section. This section is formatted in lighter grey for easy distinction from the final answer.
- Copy Button Functionality: Added functionality allowing users to copy both the question and the full response (reasoning plus answer) or just the answer, enhancing usability.
- Enhanced Formatting: Ensured the reasoning is displayed in a structured format (using designated section tags) so that it’s consistently readable and accessible across different devices.
Explained Simply:
Imagine reading a detailed explanation of how a math problem is solved – this feature gives you that insight for every answer in Storytell. It shows you the thought process step-by-step, just like seeing the scratch work behind a finished homework problem. This makes it easier to trust and understand the results provided.
Technical Details:
Technical improvements include a new set of front-end components that parse and render the reasoning sections from model responses. The system detects dedicated markers (e.g., `[[[Reasoning]]]`) and applies specific styling rules to display them in a collapsible format. This feature relies on efficient client-side processing to toggle visibility without impacting overall performance.
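A sketch of how such markers might be parsed client-side. Only the opening `[[[Reasoning]]]` marker appears above; the closing `[[[/Reasoning]]]` delimiter and the function name are assumptions made for illustration.

```typescript
interface ParsedResponse {
  reasoning: string | null; // null when the response carries no reasoning block
  answer: string;
}

// Split a raw model response into its reasoning section and the final
// answer, falling back to the whole text when no markers are present.
function splitReasoning(raw: string): ParsedResponse {
  const match = raw.match(/\[\[\[Reasoning\]\]\]([\s\S]*?)\[\[\[\/Reasoning\]\]\]/);
  if (!match) {
    return { reasoning: null, answer: raw.trim() };
  }
  const answer = raw.replace(match[0], "").trim();
  return { reasoning: match[1].trim(), answer };
}
```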
Summary
This feature offers an intuitive interface within Storytell for deleting SmartChats™ from a Collection. It provides a clearly labeled delete option directly on the SmartChat™ list, simplifying the process. Key updates include a responsive delete action, confirmation modals, and enhanced accessibility features. The update empowers users with efficient management of their SmartChats™.
Who We Built This For:
This update is designed for users managing Collections who need to organize or remove obsolete or unwanted SmartChats™ quickly. It is particularly useful for power users and administrators keen on maintaining a clean workspace and managing content effectively.
The Problem We Solved with This Feature:
Previously, users had difficulty in managing and deleting outdated SmartChats™, which often cluttered their Collections. By introducing a dedicated UI for deletion, Storytell addresses this issue, reducing friction and preventing accidental removals. It improves the user workflow by offering a clear, easily accessible deletion option.
Specific Items Shipped:
- Delete SmartChat™ Button: Implemented a clearly visible delete button next to each SmartChat™, ensuring users can easily identify and access the delete function.
- Confirmation Modal: Developed a confirmation modal to double-check user intent before a SmartChat™ is removed, minimizing accidental deletions.
- Responsive UI Enhancements: Optimized the UI layout for different devices, ensuring that deletion actions are seamless and intuitive across all screen sizes.
Explained Simply:
Think of it like tidying up an email inbox; this feature lets you quickly remove a conversation you no longer need. It adds a trash button to each SmartChat™ in your Collection, and when you click it, a pop-up asks if you’re sure you want to delete it. This makes managing your chats as easy as cleaning up your desk.
Technical Details:
The implementation leverages modern front-end frameworks to bind delete actions directly to the SmartChat™ UI components. When a user initiates a delete request, a function interfaces with Storytell’s back-end to mark the specified SmartChat™ as removed. The process employs asynchronous calls to ensure the UI remains responsive, and state management is updated in real-time to reflect the deletion across the Collection. Advanced accessibility features ensure that confirmation modals are navigable via keyboard and screen readers.
Summary
The Improve It button has been enhanced to provide a more engaging and user-friendly experience. This update includes visual enhancements and a regenerate button for refining prompts.
Who We Built This For:
- Users seeking to optimize their prompts for better results.
- Users who want a more intuitive and visually appealing prompt improvement process.
The Problem We Solved:
We aimed to enhance user engagement with the “Improve It” feature by making it more visually appealing and providing users with more control over prompt refinement.
Specific Items Shipped:
- Light Bulb Icon on the Improve It Modal: Added a light bulb icon to the “Improve It” modal to enhance visual appeal.
- Updated Copy: Updated the copy within the modal to be more user-friendly and informative.
- Light Bulb Icon on the Results Page: Included a light bulb icon on the title of the results page for visual consistency.
- Regenerate Button: Added a regenerate button to allow users to refine prompts and generate new suggestions, with a short bold summary sentence.
Explained Simply:
Imagine you’re writing an essay, and you want to make it even better. The “Improve It” button is like having a helpful friend who gives you suggestions. We’ve made this friend more visually appealing and easier to use. Now, this friend uses a lightbulb icon and shows it on the results page too. We’ve also added a “Regenerate” button so you can ask for even more ideas to make your essay shine.
Technical Details:
- Modified CSS to include a light bulb icon and updated the text in the “Improve It” modal.
- Added a regenerate button that calls the enhance prompt API.
Summary
Added timeout controls for LLM (Large Language Model) calls to prevent hanging responses and improve user experience. This update implements a 60-second timeout for model responses and a 5-second timeout for categorization calls.
Who We Built This For:
- Users interacting with AI models
- Platform administrators managing system reliability
- Enterprise users requiring predictable response times
The Problem We Solved:
Addressed the issue of prompts “hanging” or “freezing” due to long responses that never finish, improving system reliability and user experience by ensuring predictable response times.
Specific Items Shipped:
- Context Timeout Implementation: Added 60-second timeout for LLM context calls
- Categorization Timeout: Implemented 5-second timeout for prompt categorization
- Global Timeout Controls: Added timeout controls across all LLM providers (Anthropic, OpenAI, Gemini, Groq)
Explained Simply:
Think of asking a question to an AI as making a phone call. Sometimes, the AI might take too long to respond - like being put on hold forever. We’ve added a timer that automatically ends the call if it takes too long, so you’re not left waiting indefinitely. It’s like having a reasonable time limit on how long you’ll wait for an answer.
Technical Details:
- Implemented `context.WithTimeout` with a 60-second duration for main LLM calls
- Added a 5-second timeout for categorization and complexity assessment calls
- Applied timeouts across all LLM providers, including Anthropic, OpenAI, Gemini, and Groq
- Implemented proper context cancellation and cleanup using `defer cancel()`
- Added timeout handling in the model router for prompt categorization
Summary
Storytell now integrates both Deepseek R1 and Llama models in an enterprise-safe environment. Hosted on secure US-based infrastructure, these integrations enhance AI reasoning and natural language processing capabilities while ensuring strict data compliance. Users can manually select either model by appending `Use Deepseek` or `Use Llama` to their queries, though neither is yet embedded in the LLM router. Check this page on how to manually override the LLM router.
Who We Built This For
- Enterprise Teams: Organizations requiring cutting-edge AI capabilities while adhering to strict data security and compliance standards.
- Developers & Engineers: Professionals needing advanced AI assistance for coding, debugging, and structured problem-solving.
- Researchers: Users who benefit from logic-driven queries and complex computations.
The Problem We Solved
We addressed the need for a secure, enterprise-grade AI solution that delivers advanced reasoning and computation. Traditional AI tools often lack the necessary security measures for sensitive applications, and open-source models may expose data risks. This integration provides a privacy-safe, compliance-focused alternative.
Specific Items Shipped
- Deepseek R1 Integration: Storytell now supports Deepseek R1, enabling advanced reasoning and problem-solving capabilities.
- Llama Integration: Storytell now supports Llama, providing versatile natural language understanding and generation capabilities.
- Manual Model Selection: Users can explicitly invoke Deepseek R1 by adding `Use Deepseek`, or Llama by adding `Use Llama`, to their prompts.
- Enterprise-Grade Security: Both models are hosted on secure, US-based infrastructure, ensuring compliance with data privacy regulations.
- Current Limitations: Neither Deepseek R1 nor Llama is yet embedded in the LLM router; each must be manually selected.
Explained Simply
Imagine you’re working on a complex math problem or trying to write a detailed report. Deepseek R1 is like a super-smart helper that can handle these tasks more effectively than most AI tools. Llama, on the other hand, is like a versatile assistant that excels at understanding and generating human-like text. Both helpers are hosted in highly secure environments, like safes, so your data stays protected.
Technical Details
- Integration Approach: Both Deepseek R1 and Llama are integrated as standalone models, allowing manual invocation via their respective keywords (`Use Deepseek` and `Use Llama`).
- Hosting: Both models are deployed on secure, US-based servers to meet enterprise compliance requirements.
- Functionality:
- Deepseek R1 excels at reasoning-heavy tasks, advanced computations, and structured problem-solving.
- Llama provides versatile natural language understanding and generation capabilities.
- Limitations:
- Neither model is currently embedded in the LLM router.
- Users must manually select the desired model for each query.
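The manual model selection above can be sketched as a simple prompt check. The returned identifiers are placeholders for illustration, not Storytell’s internal model names, and the matching rule (case-insensitive keyword anywhere in the prompt) is an assumption.

```typescript
// Detect "Use Deepseek" or "Use Llama" in a prompt and return the
// corresponding override, or null to fall through to the LLM router.
function manualModelOverride(prompt: string): "deepseek-r1" | "llama" | null {
  if (/\buse deepseek\b/i.test(prompt)) return "deepseek-r1";
  if (/\buse llama\b/i.test(prompt)) return "llama";
  return null;
}
```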
Summary
This feature enables users to mention Collections and files within Storytell simply by typing “@”, in a manner similar to how Linear handles mentions. When a user types “@”, our interface now displays structured suggestions divided into sections (Collections and Files). This not only streamlines the user experience but also reduces ambiguity when interacting with data references in prompts.
Who We Built This For
We built this feature for users who work heavily with Collections and file references, particularly teams needing to quickly and accurately reference these entities in their workflow. The targeted use cases include:
- Users who need to compose queries like “Tell me what files are in my @collection/abc” or “Compare @collection/abc to @collection/xyz” without any confusion.
- Teams relying on clear textual references to link Collections or files directly within our chat or command context.
The Problem We Solved With This Feature
Previously, users were limited by a lack of an intuitive system for referencing collections and files; all references had to be entered manually, increasing the risk of typos and misinterpretation by our backend processes. This feature addresses the following issues:
- It removes the ambiguity of manually typed references.
- It ensures our backend can detect and process these references reliably via an integrated lexer.
- By structurally integrating references, LLMs receive cleaner prompts, leading to more accurate responses and improved workflow efficiency.
Specific Items Shipped
- @ Mention Parsing
  - Summary: Implemented robust “@ mention” parsing for Collections and files.
  - Details: The interface now detects when users type “@” and presents suggestions for Collections and files, divided into clear sections. This mimics the familiar behavior found in platforms like Linear, enhancing user productivity and reducing input errors.
  - Explain it to me like I’m five: Imagine you’re writing a note about your favorite toys, and when you type “@” it magically shows you a list of all your toys so you can pick the right one to talk about.
  - Technical details: On the frontend, event listeners intercept “@” keystrokes to trigger a dropdown menu with categorized suggestions. The suggestion list dynamically filters potential matches based on the user’s input. This is seamlessly tied into our backend services via a lexer that extracts and normalizes these mentions for further processing.
- UI Enhancements for Mention Display
  - Summary: Upgraded our UI components to support and display mention suggestions effectively.
  - Details: Several UI components have been updated, including asset tables and modals, to incorporate the visual and interactive elements needed for the new mention feature. This ensures users see a seamless experience from both the insertion and display perspectives.
  - Explain it to me like I’m five: It’s like getting a big, colorful sticker next to every toy name so you can tell them apart easily!
  - Technical details: Changes include modifications to file asset renderers (e.g., determining mime types based on file names), updates to component styling for mention elements, and enhanced support for single-click activation of a mention. These refinements keep the feature consistent with existing workflows and user interface behavior.
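A minimal sketch of the lexer pass described above, assuming mentions take the `@collection/abc` form shown in the earlier examples. The token shape, `file` variant, and regex are illustrative, not the production lexer.

```typescript
interface MentionToken {
  kind: "collection" | "file";
  name: string;
}

// Extract and normalize @ mentions (e.g. "@collection/abc") from a
// prompt so the backend can resolve them into structured references.
function extractMentions(text: string): MentionToken[] {
  const tokens: MentionToken[] = [];
  const re = /@(collection|file)\/([\w-]+)/g;
  for (const m of text.matchAll(re)) {
    tokens.push({ kind: m[1] as MentionToken["kind"], name: m[2] });
  }
  return tokens;
}
```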
Explain it to me like I’m five
Imagine you had a favorite game that got a shiny new cover. Now the buttons on the game box point exactly where you need to go to start playing. That’s what we did here – we made it super clear where to click for signing up or logging in, so everyone can start having fun easily.
Summary
We’ve updated the signup and login pages on our website to match the new homepage design and streamline the user authentication process. These pages now feature new CTA links, refined messaging, and reliable event tracking (via PostHog) to guarantee a smooth experience when users sign up or log in.
Who we built this for
- New Users: Making it easier to sign up with clear instructions and a modern layout.
- Existing Users: Providing a consistent, updated login experience.
- Analytics Teams: Ensuring proper event tracking such as sign_up_started, sign_up_successful, and log_in_started.
The problem we solved with this feature
Our old authentication pages were mismatched with the new design language of the website, causing confusion and inconsistent user experience. This update reinforces our brand identity, simplifies navigation (by ensuring links point to the correct pages), and maintains accurate event tracking across the authentication flows.
Specific items shipped:
- Updated Signup Page: We redesigned the signup page to include a clear “Already have an account?” link that now correctly routes users to https://storytell.ai/auth/login. PostHog events such as sign_up_started are integrated to properly monitor user registration.
- Updated Login Page: The login page now displays a refreshed “Don’t have an account?” link directing users to https://storytell.ai/auth/signup. The page supports magic link email functionality, ensuring users have a seamless, password-free login experience while tracking events like log_in_started.
- Comprehensive Testing and Validation: The changes were tested in our dev environment to verify that magic link emails, CTA interactions, and event tracking function as intended, ensuring a smooth transition when the changes go live.
Technical details
The update leverages our existing frontend components, replacing outdated UI elements with updated versions matching the new homepage aesthetics. Routing was adjusted so that the “Already have an account?” CTA on the signup page points to the login route and vice‑versa, ensuring proper navigation. We verified that our PostHog analytics (tracking events such as sign_up_started, sign_up_successful, and log_in_started) are firing correctly. Comprehensive testing (both automated and manual) was conducted to ensure that magic link email functionality remains intact in both dev and production environments.
Explain it to me like I’m five
Think of your school locker where every important thing—your books, your toys, and your drawing pad—has its own special place, all arranged neatly. This update makes sure that whether you want to chat or look at pictures, everything you need is always right there, so you never have to search hard for your things.
Summary
This update introduces a persistent Chat tab and a restructured Collections Dashboard that provides three fixed tabs, including a new chat section. With a prompt bar at the top and an integrated view for both chats and assets, users have instant access to recent conversations and related content.
Who we built this for
- Collection Managers: Users who interact with Collections and need fast access to communications and assets.
- Proactive Users: Those who want a seamless experience navigating between chat conversations and assets.
- Stephen and Similar Users: Users who proposed and will benefit from the persistent navigation structure.
The problem we solved with this feature
Previously, users struggled to quickly access chats and assets within a Collection, leading to a disjointed navigation experience. This update reduces context switching by introducing persistent tabs, ensuring users have quick and continuous access to essential features.
Specific items shipped:
- Persistent Chat Tab: A new “Chat” tab now remains visible across all Collection views. This allows users to immediately access discussion threads related to each Collection without having to navigate away from the page.
- Unified Collections Dashboard: The dashboard has been restructured to feature three persistent tabs – including chat and asset views – with a prompt bar fixed at the top. The interface now includes a recent chats card with filtering options and a “see all” button that opens a dedicated chat drawer.
- UI Enhancements for Consistency: The overall layout now displays assets from sub-folders more cohesively, while the persistent tabs ensure a unified navigation experience across the platform.
Technical details
The dashboard was revised using our SolidJS component architecture to support three persistent tabs. The new Chat tab integrates with our threads API and maintains state through the UI context, ensuring it remains visible when navigating across Collections. A fixed prompt bar allows easy user input, while updated routing ensures the “see all” button correctly opens the chat drawer. Additionally, improved CSS styling and state management updates enhance the responsiveness and layout consistency of asset and chat displays.
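The persistent-tab state described above can be sketched in a framework-agnostic way. The real implementation uses SolidJS context; the chat and asset tab names come from this note, while the third tab label and the class shape below are illustrative assumptions.

```typescript
// Framework-agnostic sketch of persistent tab state. The real code lives in
// SolidJS context; "overview" is a hypothetical name for the third tab.
type CollectionTab = "chat" | "assets" | "overview";

class CollectionViewState {
  private activeTab: CollectionTab = "chat";

  setTab(tab: CollectionTab): void {
    this.activeTab = tab;
  }

  // Because the tab lives in shared state rather than per-route component
  // state, navigating between Collections keeps the selection.
  getTab(): CollectionTab {
    return this.activeTab;
  }
}

const view = new CollectionViewState();
view.setTab("assets");
console.log(view.getTab()); // "assets"
```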
Explain It to Me Like I’m Five
Imagine you have a super-fast, smart robot friend who can answer your questions almost instantly! It listens, thinks, and replies super quickly. But, just like learning a new game, sometimes it makes mistakes. We’re helping it get even better over time.
Summary
We’ve onboarded the Gemini 2.0 Flash model from Google, a newly released AI model designed to enhance AI-driven interactions within Storytell. This integration aims to provide faster, more contextually aware responses, improving overall user experience. However, certain limitations should be considered to ensure optimal performance.
Technical Details
Model Integration
The Gemini 2.0 Flash model has been seamlessly integrated into our infrastructure, enabling real-time interactions via API.
Data Handling
- The model processes incoming data dynamically, ensuring responsiveness to user queries.
- Routing Mechanisms: Since external ranking scores influence response selection, maintaining accuracy requires careful data handling.
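The routing idea above can be sketched as a simple score-based selection. The use of external ranking scores comes from this note; the scoring table, function shape, and fallback behavior below are invented for illustration only.

```typescript
// Hedged sketch of score-based model routing. Externally supplied ranking
// scores decide which model handles a request; everything here is illustrative.
interface ModelScore {
  model: string;
  score: number; // external ranking, higher is better
}

function pickModel(scores: ModelScore[], fallback: string): string {
  if (scores.length === 0) return fallback; // no external data: use the default
  let best = scores[0];
  for (const s of scores.slice(1)) {
    if (s.score > best.score) best = s;
  }
  return best.model;
}

const routed = pickModel(
  [
    { model: "gemini-2.0-flash", score: 0.92 },
    { model: "baseline-model", score: 0.75 },
  ],
  "baseline-model",
);
console.log(routed); // "gemini-2.0-flash"
```

Since the scores come from outside the system, a fallback default (as sketched) guards against missing or stale ranking data, which is the accuracy concern the note raises.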
Testing and Debugging
- Initial testing has been conducted, but due to the model’s novelty, unforeseen issues may arise.
- Further testing is necessary to refine edge cases and optimize performance.
Specific Items Shipped
- Model Interface Established – We created a way for our system to communicate with Gemini 2.0 Flash.
- Routing Control Using External Data – We leverage external ranking mechanisms to enhance response accuracy.
- Initial Testing Conducted – The model has undergone preliminary testing to ensure smooth functionality in most scenarios.
The Problem We Solved With This Feature
The integration of Gemini 2.0 Flash addresses the need for faster, smarter AI-driven interactions within Storytell. This upgrade enhances user experiences by delivering responses that are more natural, insightful, and efficient.
Who We Built This For
This feature is designed for:
- Developers & Businesses using AI for customer support, content creation, or interactive applications.
- Tech Enthusiasts eager to experiment with the latest AI advancements to push the boundaries of AI-driven interactions.
Summary
This feature adds a dynamic Improve It button to the prompt interface, enabling users to instantly generate enhanced versions of their prompts using a large language model (LLM). The button remains inactive (grayed out) until text is entered into the prompt bar. Once clicked, the user’s input is sent to the LLM, which returns multiple improved prompt suggestions. These suggestions populate the prompt bar, allowing the user to select one, edit further, or send the improved prompt directly.
Technical Details
- Button State Management:
  - The button is disabled by default and uses frontend logic to monitor the prompt input field. When the input length exceeds zero, the button is enabled.
  - Implemented via event listeners tied to the input field, triggering UI state updates.
- LLM API Integration:
  - When clicked, the button sends the user’s prompt to a backend service that interfaces with an LLM (e.g., GPT-3.5/4).
  - The API request includes a system prompt instructing the LLM to refine the user’s input for clarity, specificity, and structure. Example transformation logic includes:
    - Rephrasing ambiguous language.
    - Adding context or examples.
    - Breaking down complex requests into step-by-step instructions.
- Response Handling:
  - The LLM returns 3–5 improved prompts. These are parsed and displayed in the prompt bar as selectable options.
  - Frontend components dynamically render suggestions without reloading the page.
- User Interaction Flow:
  - Users can select a suggestion to auto-fill the prompt bar or continue editing manually.
  - Selection triggers a tracking event for analytics.
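Two pieces of the flow above lend themselves to a small sketch: the enable/disable check and the parsing of returned suggestions. The response format (one suggestion per line) and both function names are assumptions, not Storytell’s actual API.

```typescript
// Illustrative sketch of the "Improve It" flow. The one-suggestion-per-line
// response format is an assumption for demonstration purposes.
function isImproveEnabled(promptText: string): boolean {
  return promptText.trim().length > 0; // button stays grayed out until input exists
}

// Parse the LLM's reply into at most 3-5 selectable options.
function parseSuggestions(raw: string, max = 5): string[] {
  return raw
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .slice(0, max);
}

console.log(isImproveEnabled("   ")); // false
console.log(parseSuggestions("Rewrite A\nRewrite B\n\nRewrite C"));
// ["Rewrite A", "Rewrite B", "Rewrite C"]
```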
Explain It to Me Like I’m Five
Imagine you’re asking a robot to help with your homework, but sometimes the robot doesn’t understand your question. The “Improve It” button is like a helper robot that makes your questions clearer and easier to understand. You type something, click the button, and it gives you better ways to ask your question. The button only works after you type something, and you can pick one of its ideas or change them yourself!
Specific Items Shipped
- Improve It Button State Management: Dynamic enable/disable based on input. The button stays grayed out until the user types text into the prompt bar, ensuring it’s only usable when meaningful input exists.
- Prompt Transformation Logic: LLM-driven prompt optimization. The system sends the user’s prompt to an LLM with instructions to improve clarity, add examples, and structure the request effectively.
- Suggestion Handling & UI Integration: Seamless display of LLM suggestions. Improved prompts populate the input field as clickable options, allowing instant selection or further editing.
- Analytics & Tracking: Usage metrics collection. Clicks on the button and selections of suggestions are logged to measure feature adoption and effectiveness.
The Problem We Solved With This Feature
Many users struggle to craft effective prompts, leading to vague or unhelpful LLM responses. This creates frustration and inefficiency, especially for non-technical users. By automating prompt refinement, we reduce the learning curve for interacting with LLMs and help users get better results faster.
Who We Built This For
- Primary Users:
  - Non-technical users unfamiliar with prompt engineering best practices.
  - Educators and students seeking to simplify complex queries.
  - Professionals needing quick, reliable LLM outputs without manual tweaking.
- Use Cases:
  - A student writes a vague essay question and uses the button to transform it into a structured, detailed prompt.
  - A marketer iterates on a social media post idea by selecting from LLM-generated variations.
Summary
The newly shipped feature involves onboarding the o3-mini model from OpenAI, a recently released model designed to enhance AI capabilities on Storytell. This feature allows integration with the latest AI model, potentially improving user interaction with our system. However, there are some caveats that need to be considered to ensure the model’s optimal performance.
Technical Details
The o3-mini model from OpenAI operates as a cutting-edge machine learning model. Integration involves several key technical aspects:
- Model Integration: The o3-mini model has been seamlessly integrated into our existing infrastructure. This involves interfacing with the model’s API to facilitate real-time interactions.
- Data Handling: The model processes incoming data, which necessitates efficient routing mechanisms. Given the reliance on external ranking scores for optimal routing, a lack of control over this data could lead to inaccuracies.
- Testing and Debugging: Thorough testing is required to identify potential bugs. Due to the novelty of the model, it hasn’t been tested across all possible scenarios, and unforeseen issues may arise.
Explain It to Me Like I’m Five
Imagine you got a cool, new toy robot that can talk and think. This robot can learn new things and help you answer questions or tell stories. However, sometimes it might not know everything yet, so it might need more time to learn the right answers. We’re still figuring out how to make it work perfectly, like teaching a puppy how to fetch.
Specific Items Shipped
- Model Interface Established: We created a way for our system to talk to the o3-mini model using its special language tools.
- Routing Control Using External Data: We use external scores to help decide the best way for the model to handle conversations. This helps make sure you get the smartest answers back.
- Initial Testing Conducted: Although all scenarios haven’t been covered, the model has gone through preliminary testing phases to catch initial errors and help it run smoothly in most situations.
The Problem We Solved With This Feature
This feature addresses the need for integrating advanced AI capabilities, thereby enhancing Storytell’s interaction quality. The o3-mini model provides more sophisticated AI responses, improving user experiences by making them more natural and intuitive. It’s crucial as it keeps us competitive in AI-driven technology.
Who We Built This For
This feature has been designed for developers and businesses that leverage AI for customer support, content creation, or any application requiring advanced interaction capabilities. It also serves innovative tech enthusiasts eager to experiment with the latest in AI technology, giving them the tools to explore new use cases and possibilities.
Summary
This feature gives users the ability to update asset display names and summaries and also lets them delete assets when needed. When an asset or an entire Collection is deleted, the deletion is performed in Clickhouse as well to ensure consistency between systems.
Technical details
- Front-end Changes:
  - Updated components such as CollectionAssetTable, EditAssetModal, and DeleteAssetModal provide users with interfaces to edit asset details or delete assets.
  - New modals are triggered by user actions, allowing for confirmation and data input.
- Back-end Enhancements:
  - The back-end service now handles updates and deletions by removing asset records from both primary storage and Clickhouse.
  - Ensures that deletions are propagated to all relevant systems to maintain data integrity.
- UI/UX Improvements:
  - Updated CSS and layout modifications ensure that the new functionalities are intuitive, responsive, and consistent across different devices.
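The dual-store deletion described above can be sketched with both stores modeled as in-memory sets. The real primary storage and Clickhouse services are assumed, not shown; only the propagation pattern is illustrated.

```typescript
// Sketch of dual-store deletion: removing an asset from primary storage and
// from Clickhouse together, so the two systems stay consistent. Both stores
// are in-memory stand-ins for the real services.
class AssetStores {
  primary = new Set<string>();
  clickhouse = new Set<string>();

  deleteAsset(assetId: string): void {
    // Deleting from both stores in one operation is what keeps the systems
    // consistent, per the back-end enhancement above.
    this.primary.delete(assetId);
    this.clickhouse.delete(assetId);
  }

  deleteCollection(assetIds: string[]): void {
    for (const id of assetIds) this.deleteAsset(id);
  }
}

const stores = new AssetStores();
stores.primary.add("a1");
stores.clickhouse.add("a1");
stores.deleteAsset("a1");
console.log(stores.primary.has("a1"), stores.clickhouse.has("a1")); // false false
```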
Explain it to me like I’m five
Imagine you have a scrapbook with lots of pictures. Sometimes you want to change the title on one of your pictures, or maybe you decide you don’t want a picture anymore and tear it out of your scrapbook. This update gives you a simple way to change the picture names or remove them completely, so your scrapbook always looks neat and updated—even if you have a giant backup book keeping all your pictures safe.
Specific items shipped
- Ability to Edit Assets: Users can now click on an asset to update its display name and summary. The interface opens an edit modal where changes are sent to the back-end service, which then confirms success by updating the asset’s records.
- Asset Deletion with Confirmation: A new delete asset option has been added. When a user deletes an asset, it is removed not only from the interface but also from Clickhouse, whether the whole Collection or an individual asset is being deleted. The process confirms deletion via a modal to avoid accidental removal.
- User Interface Enhancements: Changes in components such as CollectionAssetTable and related CSS ensure that the new editing and deletion buttons are well integrated into the existing design, making the feature intuitive and responsive on various devices.
The problem we solved with this feature
Users needed a simple and reliable way to manage their assets—being able to update what is displayed and completely remove assets that are no longer needed. Before this update, there was no straightforward mechanism to change asset display names or summaries, nor to ensure that deletions were correctly reflected in related systems like Clickhouse. This improvement reduces clutter, prevents confusion, and enhances the overall user experience.
Who we built this for
This feature was built for users who manage digital assets—such as content creators, asset managers, and system administrators. It addresses the need for better organization and control of assets by allowing them to easily rename or delete items and ensuring that these changes are synchronized across all systems.
Summary
We’ve successfully launched a feature that allows users to share Collections and chats. Check out our documentation here. This feature introduces a robust information architecture that allows users to manage permissions for individual entities or collections of entities effectively. It empowers Organization Admins and Collaborators to control access to Collections and Threads, enabling a more organized and secure way to handle assets and knowledge within Storytell.
Technical Details
The Collections, Permissions, and Entitlements feature is designed to provide granular control over user access within an organization. Here’s a deep dive into its technical components:
- Collections Control Plane: Establishes the foundational structure for creating, reading, updating, and deleting Collections. It ensures that root Collections are protected from accidental deletions and allows assets and threads to be managed within these Collections.
- Authorization Mechanism: Implements a flexible permission system where Collaborators can grant or revoke access at both the Collection and Thread levels. This includes handling different user types such as Registered Users, Guests, and Pending Users, ensuring that access is correctly managed based on user roles and organization policies.
- API Integrations: Ensures that threads and assets are implicitly added to personal Collections if they are removed from all others. Additionally, it purges threads and assets from Collections upon deletion unless they exist in another Collection, maintaining data integrity.
- Operational Tooling: Provides tools for Storytell Operators to export user Collections, assets, and permissions to external platforms like Google Sheets. This facilitates support and auditing processes by allowing easy access to user-specific data.
- Permission Evaluation: Implements efficient algorithms to evaluate permissions within 100ms, ensuring a responsive user experience. The system prioritizes higher-level permissions in case of conflicts and handles permission inheritance and overrides to maintain consistent access control.
- User and Team Management: Allows Organization Admins to manage user roles and team permissions effectively. This includes creating, updating, and deleting teams, as well as granting or revoking access for teams to specific Collections and SmartChats™.
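The permission-evaluation idea above can be sketched as a walk over a Collection tree. The conflict rule shown (an ancestor's explicit entry wins over a descendant's, matching "prioritizes higher-level permissions") is one plausible reading of the note, not a specification; the types and function are illustrative.

```typescript
// Hedged sketch of permission inheritance over a Collection tree. The
// ancestor-wins conflict rule is an assumption based on the note above.
type Access = "granted" | "denied";

interface CollectionNode {
  id: string;
  parent?: CollectionNode;
  grants: Map<string, Access>; // userId -> explicit access set on this node
}

function resolveAccess(node: CollectionNode, userId: string): Access {
  // Collect the chain from this node up to the root, then scan root-first so
  // higher-level (ancestor) entries win when entries conflict.
  const chain: CollectionNode[] = [];
  for (let n: CollectionNode | undefined = node; n; n = n.parent) chain.push(n);
  for (const n of chain.reverse()) {
    const explicit = n.grants.get(userId);
    if (explicit !== undefined) return explicit;
  }
  return "denied"; // no grant anywhere in the chain: deny by default
}

const root: CollectionNode = { id: "root", grants: new Map<string, Access>([["u1", "granted"]]) };
const child: CollectionNode = { id: "child", parent: root, grants: new Map<string, Access>([["u1", "denied"]]) };
console.log(resolveAccess(child, "u1")); // "granted" - the higher-level grant wins
```

A flat walk like this is also cheap enough to fit the sub-100ms evaluation budget the note mentions, since it touches only the node's ancestor chain.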
Explain it to me like I’m five
Imagine you have a big toy box (Collections) with different compartments (Threads) for your toys. With our new feature, you can decide who gets to play with which toys. If you’re in charge (Organization Admin), you can say, “Johnny can play with the cars,” or “Sally can access the dolls.” This way, everyone knows exactly which toys they can play with, and you keep everything organized and safe.
Specific Items Shipped
- Create, Read, Update, and Delete Collections: Allows users to manage Collections by adding, viewing, modifying, or removing them as needed.
- Prevent Deleting Root Collections: Ensures that primary Collections cannot be accidentally deleted, protecting the core structure.
- Upload Assets into a Collection: Enables users to add individual assets, like documents or files, into specific Collections.
- Add Existing Assets to a Collection: Allows users to organize already existing assets by assigning them to different Collections.
- Create Threads Directly in a Collection: Facilitates the creation of new threads within a selected Collection for better organization.
- Grant or Revoke Collaborator Access: Provides mechanisms for Collaborators to manage user access to Collections and threads, including inviting new users via email or shared links.
- Organization Admin Role Management: Allows admins to assign or revoke the Organization Admin role to ensure proper access control within the organization.
The Problem We Solved with This Feature
Before implementing Collections, Permissions, and Entitlements, managing access to various assets and knowledge within an organization was cumbersome and lacked flexibility. Users struggled with organizing their content effectively, leading to potential security risks and inefficiencies. By introducing a structured permission system, we address the need for precise access control, ensuring that only authorized users can access specific Collections or threads. This enhancement not only improves security but also streamlines the workflow, making it easier for organizations to manage their information architecture comprehensively.
Who We Built This For
We designed the Collections, Permissions, and Entitlements feature for Organization Admins and Collaborators who need to manage and oversee access to various assets and knowledge within their teams. Specific use cases include:
- Enterprise Teams: Managing access to sensitive documents and projects within large organizations.
- Project Managers: Organizing project-specific threads and ensuring that only relevant team members have access.
- Support Teams: Quickly exporting user permissions and Collections to address support queries efficiently.
This feature caters to users who require a high level of control over their content, ensuring that information is both accessible and secure based on organizational roles and responsibilities.
Summary
This feature allows users to download a file directly from Storytell. When a user clicks on a file, they can access the raw content of the file. Instead of displaying this content in a pop-up modal with a markdown version, the implementation now focuses on providing a direct download from a cloud storage URL. Here’s the documentation on how to do this.
Technical details
The implementation of this feature involves creating endpoints for both markdown rendering and signed URLs. When the user clicks on a file, the application will generate a signed URL that grants temporary access to the raw content stored in the cloud. This is achieved by integrating the cloud storage API to facilitate smooth and secure file downloading.
- Endpoint Creation: We set up two endpoints:
  - One for handling markdown rendering.
  - Another for providing signed URLs for direct content access.
- Security Protocol: The signed URL ensures that the files are accessible only for a limited time, providing an additional layer of security and protecting against unauthorized access.
- User Experience: The user simply clicks on the file link, which triggers the application to provide the download link, ensuring a seamless experience without unnecessary pop-ups.
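The temporary-access idea behind signed URLs can be sketched as an HMAC over the file path plus an expiry timestamp. Real cloud providers (GCS, S3) have their own signing formats that the actual implementation uses via their APIs; this sketch only illustrates the concept, and the secret, parameter names, and URL shape are all invented.

```typescript
import { createHmac } from "node:crypto";

// Illustrative signed-URL scheme: HMAC(path + expiry) appended as a query
// parameter. Not a real cloud provider's format - the concept only.
const SECRET = "demo-secret"; // placeholder; a real key lives in a secret store

function signUrl(path: string, expiresAtMs: number): string {
  const sig = createHmac("sha256", SECRET)
    .update(`${path}:${expiresAtMs}`)
    .digest("hex");
  return `${path}?expires=${expiresAtMs}&sig=${sig}`;
}

function verifyUrl(path: string, expiresAtMs: number, sig: string, nowMs: number): boolean {
  if (nowMs > expiresAtMs) return false; // link has expired
  const expected = createHmac("sha256", SECRET)
    .update(`${path}:${expiresAtMs}`)
    .digest("hex");
  return sig === expected; // tampered path or expiry changes the HMAC
}
```

Because the expiry is covered by the HMAC, a client cannot extend a link's lifetime by editing the `expires` parameter; the server recomputes the signature and rejects the mismatch.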
Explain it to me like I’m five
Imagine you have a really cool drawing that you want to share with your friend. Instead of showing it to them on your phone or computer where they can’t touch it, you just let them take it home. When you click on the drawing, it doesn’t go on the screen; instead, it gets sent directly to them, so they can keep it forever. That’s exactly what we’ve done here—now when someone clicks on a file, they get to download it right away!
Specific items shipped:
- Direct File Download: Users can click on a file link to download the actual file instead of viewing it in a pop-up. This streamlines the process and reduces user distractions.
- Signed URL Generation: The system generates a temporary link that allows users to access their files securely. This URL ensures that only authorized users can view or download the files, maintaining data security.
- Endpoint Setup: We’ve put in place specific endpoints to manage file access. One allows users to view markdown content, and the other gives them a quick link to download files directly.
The problem we solved with this feature
Before this feature was implemented, users had to view files in a modal pop-up, which could be an inconvenient experience. Users expressed a need for a simpler way to access and download files directly without the hassle of navigating through additional views. By enabling direct downloads, we improved user experience, allowing for quicker access to important documents and enhancing overall functionality.
Who we built this for
This feature was built primarily for content creators and end-users who frequently need to download files for offline use or sharing. Specifically, it addresses:
- Content Creators: Who need to provide raw data or documents to others easily.
- End-Users: Who prefer quick access to downloadable content rather than having to view it in a pop-up and then navigate away to download it.
Users we especially built file downloads for
- Brandon from Siemens
Summary
We have successfully replicated the Web home page design onto the Storytell.ai web app. This feature includes significant enhancements, such as integrating a prompt bar and an animated graphic that replaces the product screenshot.
Technical Details
The replication process involved several technical steps to ensure the new design elements were effectively integrated into the Storytell.ai web app. The key technical components include:
- Prompt Bar Integration: The prompt bar is a critical feature allowing users to interact directly with the product from the homepage. It was implemented to be persistent, ensuring it remains accessible even when outside the user’s viewport. This required careful consideration of the app’s layout and user interface design to maintain functionality across different screen sizes and devices.
- Animation Replacement: The product screenshot in the “How Storytell turns data into insights” section was replaced with an animation. This animation dynamically demonstrates the product’s capabilities, providing a more engaging and informative user experience. The technical challenge was to ensure smooth animation performance without affecting the app’s load time.
- Responsive Design Adjustments: The new homepage design required adjustments to ensure responsiveness across various devices. This involved using CSS media queries and JavaScript to adapt the layout and functionality based on the user’s device and screen size.
Explain it to Me Like I’m Five
Imagine you have a cool new toy and want to show it to your friends. Instead of just telling them about it, you create a fun and colorful picture that shows exactly how the toy works. That’s what we did with our website! We made it look super cool and added a special button that immediately lets people play with our toy (the app). It’s like having a magic door that takes you straight to the fun part!
Specific Items Shipped
- Prompt Bar Integration: We added a special bar at the top of the page that lets users start using our app right away. This bar stays in place even if you scroll down, so it’s always easy to find.
- Animation Replacement: Instead of a static picture, we now have a moving animation that shows how our app works. This makes it easier for people to understand what we do and why it’s awesome.
- Responsive Design Adjustments: We made sure our website looks great on any device, whether a phone, tablet, or computer. This means everyone can enjoy the new design no matter how they visit our site.
The Problem We Solved with This Feature
The primary problem we addressed with this feature was the need to enhance user engagement and streamline the transition from the homepage to the product experience. By integrating the prompt bar, we provide users with immediate access to the app, reducing friction and improving the likelihood of user interaction. The animation replacement offers a more dynamic and informative way to showcase the product’s capabilities, making it easier for users to understand and appreciate the value of Storytell.ai.
Who We Built This For
This feature was primarily built for potential users visiting the Storytell.ai web app for the first time. The goal was to provide them with an engaging and intuitive experience that quickly demonstrates the app’s value and encourages them to start using it. By making the homepage more interactive and informative, we aim to attract and retain users who are looking for a powerful tool to turn data into insights.
Summary
Our latest feature enables the conversion of XLS files into multiple CSV files, each corresponding to a tab in the original XLS file. This feature is designed to streamline data processing and enhance compatibility with various data analysis tools that prefer CSV format.
Technical Details
The XLS → CSV conversion feature processes each tab within an XLS file independently, converting it into a separate CSV file. The conversion process involves several key steps:
- File Parsing: The XLS file is parsed to identify individual tabs. Each tab is treated as a separate dataset.
- Data Extraction: Data from each tab is extracted, ensuring that both structured and semi-structured data are handled appropriately.
- Classification and Validation: The data is classified to determine its structure. This involves checking for consistent column counts and identifying header rows.
- Conversion: The extracted data is converted into CSV format. Special attention is given to handling non-tabular data and ensuring that data integrity is maintained.
- Error Handling: The system includes mechanisms to handle errors such as varied headers and unstructured data, with fallback strategies to ensure conversion success.
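The conversion step above (step 4) can be sketched as serializing one tab's rows to CSV with proper field quoting, one output file per tab. XLS parsing itself (steps 1-2) is assumed to have already produced the rows; the function names are illustrative.

```typescript
// Sketch of the per-tab conversion step: serialize one tab's rows to CSV,
// quoting fields that contain commas, quotes, or newlines (per RFC 4180).
function toCsv(rows: string[][]): string {
  const escape = (field: string): string =>
    /[",\n]/.test(field) ? `"${field.replace(/"/g, '""')}"` : field;
  return rows.map((row) => row.map(escape).join(",")).join("\n");
}

// One output file per tab, as the feature describes.
function convertWorkbook(tabs: Map<string, string[][]>): Map<string, string> {
  const out = new Map<string, string>();
  for (const [name, rows] of tabs) out.set(`${name}.csv`, toCsv(rows));
  return out;
}

console.log(toCsv([["name", "note"], ["a", 'say "hi", ok']]));
// name,note
// a,"say ""hi"", ok"
```

The quoting rule is what preserves data integrity for cells that themselves contain commas or quotes, which plain string joining would corrupt.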
Explain It to Me Like I’m Five
Imagine you have a big book with lots of pages, and each page has a different story. Our new feature takes that book and turns each page into its own little book. So, if you have a book with five stories, you end up with five smaller books, each with one story. This makes it easier to read and share just the story you want.
Specific Items Shipped
- Multi-Tab Conversion: Converts each tab in an XLS file into a separate CSV file. This allows for easier data management and analysis.
- Structured Data Handling: Ensures that data with consistent columns is accurately converted. This improves reliability for structured datasets.
- Semi-Structured Data Support: Implements fallback strategies for semi-structured data, enhancing flexibility in data conversion.
- Error Mitigation: Includes error handling for varied headers and unstructured data, reducing conversion failures.
- Validation and Testing: Comprehensive testing with various datasets to ensure robustness and accuracy.
The Problem We Solved with This Feature
We developed this feature to address the challenge of working with XLS files that contain multiple tabs. Many data analysis tools and workflows require data in CSV format, which is more universally compatible and easier to process. By converting XLS files into CSVs, we simplify data handling and improve accessibility for users who need to analyze or share data efficiently.
Who We Built This For
This feature is particularly beneficial for data analysts, researchers, and business professionals who frequently work with large datasets in Excel format. It is designed to support use cases such as:
- Data Analysis: Facilitating the import of data into analysis tools that prefer CSV format.
- Data Sharing: Simplifying the process of sharing specific datasets without the need to share entire Excel files.
- Data Management: Enhancing the organization and management of data by separating it into individual, manageable files.
Summary
The recently shipped feature enables users to perform various operations on Collections, specifically Create, Read, Update, and Delete (CRUD). This functionality gives users the flexibility to manage their Collections more effectively. They can create new Collections, rename existing ones, update them with new data, and delete unwanted Collections. Additionally, we’ve implemented a new API endpoint to facilitate the deletion of Collections directly.
Explain it to me like I’m five
Imagine you have a toy box. This feature is like a special way to organize your toys. You can add new toys to the box, change the name of your toy box, take out toys when you don’t want them anymore, or even throw the whole toy box away. Also, there’s a button you can press to make sure you can throw away the toy box whenever you decide!
Specific Items Shipped
-
Create Collection: Users can now create new Collections.
- This allows users to group related items easily, making organization easier.
-
Rename Collection: Existing Collections can be renamed.
- Users can change the name of a Collections to better reflect its contents or purpose.
-
Move Collection: Moving Collections helps users to organize their content more effectively.
- This feature provides flexibility for users to rearrange their Collections, ensuring that their organization systems remain current and intuitive.
-
Delete Collection: Added API endpoint for deleting Collections.
- Users can remove Collections they no longer need, ensuring the interface remains clutter-free and relevant.
The Problem We Solved with This Feature
We built this feature to address the challenges users faced in managing their Collections. Before this, users struggled with limited organization capabilities, leading to confusion as Collections grew. By providing a comprehensive set of CRUD functionalities, we empower users to keep their Collections tidy and easily accessible, which ultimately enhances their overall experience.
Who We Built This For
This feature is designed for users who frequently manage Collections, such as project managers, researchers, and hobbyists. Specifically, it addresses those looking to streamline their organization of items, whether for projects, studies, or personal interests. By enhancing their ability to create, update, and delete Collections effortlessly, we improve their workflow and productivity.
Summary
The newly shipped feature addresses the challenge of processing non-tabular data within CSV files. Unlike traditional CSV files that contain well-structured tabular data with consistent columns and headers, many files we receive lack these characteristics. This feature enables the system to handle such “report” type CSV files by leveraging a large language model (LLM) to generate data “chunks” effectively, ensuring no data points are lost in the process.
Technical Details
The feature utilizes an LLM to parse and process CSV files that do not conform to standard tabular formats. The LLM is prompted to generate data chunks from the CSV content, even in the absence of headers or consistent columns. This involves modifying the prompt to ensure the LLM does not summarize or drop data points, which was a challenge in earlier implementations. Additionally, a “source” identifier is included in each chunk to enhance searchability and traceability of the data.
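The chunk-assembly side of this can be sketched without the LLM: group lines into chunks and attach the "source" identifier described above. In the real pipeline the chunk boundaries come from the LLM's output; the fixed-size grouping here is a stand-in, and the types are illustrative.

```typescript
// Sketch of chunk assembly with a "source" tag, as described above. Fixed-size
// grouping stands in for the LLM-chosen boundaries of the real pipeline.
interface Chunk {
  source: string; // file the chunk came from, for searchability and traceability
  lines: string[];
}

function chunkLines(source: string, lines: string[], size: number): Chunk[] {
  const chunks: Chunk[] = [];
  for (let i = 0; i < lines.length; i += size) {
    chunks.push({ source, lines: lines.slice(i, i + size) });
  }
  return chunks; // every input line lands in exactly one chunk: nothing dropped
}

const chunks = chunkLines("report.csv", ["r1", "r2", "r3"], 2);
console.log(chunks.length, chunks[1].source); // 2 "report.csv"
```

The invariant worth noting is the one the feature fought for: every input line appears in exactly one chunk, so no data points are summarized away or lost.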
Explain it to Me Like I’m Five
Imagine you have a messy box of toys instead of a neat row of blocks. Our new feature is like a special robot that helps organize these toys into groups, even if they don’t fit perfectly into rows. It makes sure every toy is in the right group and doesn’t lose any toys while doing so. Plus, it puts a little tag on each group so we can find them easily later.
Specific Items Shipped
- LLM-Generated Data Chunks: The feature uses an LLM to create data chunks from non-tabular CSV files, ensuring all data points are captured.
- Prompt Modifications: Adjustments were made to the LLM prompts to prevent data summarization and ensure complete data retention.
- Source Inclusion in Chunks: Each data chunk now includes a “source” tag to improve searchability and data traceability.
- Regression Testing: Comprehensive testing was conducted using various datasets to validate the feature’s performance and reliability.
The Problem We Solved with This Feature
The primary problem addressed by this feature is the inability to process non-tabular CSV files effectively. Traditional methods failed due to the lack of headers and inconsistent columns, leading to incomplete data processing. By implementing this feature, we ensure that all data points are captured and organized, even in non-standard formats, which is crucial for accurate data analysis and decision-making.
Who We Built This For
This feature was primarily built for data analysts and engineers who frequently work with CSV files that do not adhere to standard tabular formats. It is particularly useful in organizations where such files are common and accurate data processing is essential for generating insights and reports.
Summary
The newly shipped feature classifies CSV content as either “structured” or “unstructured.” This classification allows for the application of different processing strategies, ensuring that CSV files are handled appropriately based on their content type. Structured CSVs typically have a clear header row followed by data rows, while unstructured CSVs may not follow this format and require different handling.
Technical Details
The feature employs a classification algorithm to analyze the content of CSV files. It inspects the first few rows to determine if they follow a structured format, characterized by a consistent header and data row pattern. If the pattern is not detected, the file is classified as unstructured. This classification is crucial for selecting the appropriate processing prompt, which optimizes data handling and ensures accuracy in data interpretation. The implementation is designed to be efficient, minimizing processing time and resource usage.
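The header-consistency check described above can be sketched as a simple heuristic: if the first row looks like a header whose column count matches the sampled data rows, the file is structured. This is a simplified stand-in, assuming a column-count signal; the production classifier may use additional signals:

```python
import csv
import io


def classify_csv(text: str, sample_rows: int = 5) -> str:
    """Classify CSV content as 'structured' or 'unstructured' (heuristic sketch)."""
    rows = list(csv.reader(io.StringIO(text)))[:sample_rows]
    if len(rows) < 2:
        return "unstructured"  # not enough rows to detect a header/data pattern
    header_width = len(rows[0])
    if header_width < 2:
        return "unstructured"  # a one-column "header" is likely report prose
    # Structured files: every sampled data row matches the header's width.
    if all(len(r) == header_width for r in rows[1:]):
        return "structured"
    return "unstructured"
```

Sampling only the first few rows keeps the check cheap, in line with the goal of minimizing processing time during classification.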
Explain it to Me Like I’m Five
Imagine you have two kinds of boxes. One box has toys neatly organized with labels, and the other box has toys all jumbled up. This feature is like a helper that looks at each box and tells us if the toys are organized or not. If they are organized, we use one way to play with them, and if they are jumbled, we use a different way. This helps us make sure we always know how to handle the toys correctly.
Specific Items Shipped
- Classification Algorithm: We built a smart system that looks at CSV files and decides if they are organized (structured) or not (unstructured).
- Efficient Processing: The feature is designed to quickly and accurately classify files without slowing down the system.
- Local Validation: We tested the feature with real data to make sure it works perfectly and doesn’t cause any problems.
The Problem We Solved with This Feature
In the real world, CSV files can vary significantly in format. Some are neatly structured, while others are not. This inconsistency can lead to processing errors and inefficiencies. By classifying CSV content, we ensure that each file is processed using the most suitable method, improving accuracy and efficiency in data handling.
Who We Built This For
This feature is designed for data analysts and engineers who frequently work with CSV files. It addresses the need for accurate data processing in environments where CSV files may not always follow a standard structure. By automating the classification process, we reduce the manual effort required to handle diverse data formats, allowing users to focus on more critical tasks.
Summary
We have enhanced our LLM (Large Language Model) router by adding a new feature that categorizes “comparative” queries. This improvement allows the router to better understand and process queries that involve comparisons, enhancing the model selection process. The router now returns a structured response to the prompt builder, which includes a list of categories identified by the LLM for the given prompt. This categorization helps in selecting the most appropriate model for processing the query.
Technical details
The LLM router has been updated to include a categorization mechanism specifically for “comparative” queries. When a query is received, the router analyzes the content to determine if it involves any form of comparison. If a comparison is detected, the query is categorized accordingly. The router then returns a structured response to the prompt builder, which includes a PromptCategories list. This list contains the categories identified by the LLM, which are used to guide the model selection process. If the router cannot define a specific category, it assigns a generic category to ensure that the query is still processed effectively.
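The shape of that structured response, and the generic-category fallback, can be sketched as follows. In production the categorization is done by an LLM; here a keyword heuristic stands in for it, and the field and category names are illustrative assumptions:

```python
from dataclasses import dataclass, field

GENERIC_CATEGORY = "general"  # assumed name for the fallback category


@dataclass
class RouterResponse:
    prompt_categories: list[str] = field(default_factory=list)


def categorize(prompt: str) -> RouterResponse:
    """Toy sketch of the router's categorization step (keyword stand-in for an LLM)."""
    categories: list[str] = []
    comparative_markers = (" vs ", "compare", "better than", "bluer than")
    if any(marker in prompt.lower() for marker in comparative_markers):
        categories.append("comparative")
    if not categories:
        # No specific category detected: fall back to the generic one
        # so the query is still processed effectively.
        categories.append(GENERIC_CATEGORY)
    return RouterResponse(prompt_categories=categories)
```

The prompt builder then reads `prompt_categories` off the response to pick the most suitable model.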
Explain it to me like I’m five
Imagine you have a big box of crayons, and you want to pick the right color to draw a picture. Our new feature is like a helper that looks at your picture idea and tells you which crayon to use. If you’re drawing something that needs comparing, like “Is the sky bluer than the ocean?” our helper will know exactly which crayon to pick to make your picture look just right.
Specific items shipped
- Comparative Query Categorization: We have implemented a system that identifies and categorizes queries involving comparisons. This helps in selecting the most suitable model for processing such queries.
- Structured Response to Prompt Builder: The router now returns a structured response that includes a PromptCategories list. This list guides the model selection process by providing identified categories for the prompt.
- Generic Category Assignment: In cases where a specific category cannot be determined, a generic category is assigned to ensure the query is processed effectively.
The problem we solved with this feature
This feature addresses the challenge of accurately processing queries that involve comparisons. Previously, the LLM router might not have effectively identified and categorized comparative queries, leading to suboptimal model selection. By introducing this categorization, we ensure that comparative queries are processed more accurately, improving the overall performance and reliability of our system.
Who we built this for
This feature was primarily developed for users who frequently engage in queries that involve comparisons. It is particularly useful for applications where understanding and processing comparative language is crucial, such as data analysis, research, and decision-making tools. By enhancing the model selection process for these queries, we aim to provide a more accurate and efficient user experience.
Summary
The Web scraper MVP is an initial release of our web scraping tool designed to extract data from websites efficiently. This feature leverages the Firecrawl API to crawl and scrape web pages, currently supporting the downloading of YouTube videos and their transcripts. The scraper is capable of handling multiple URLs, storing each page’s HTML as an asset within its respective Collection. While the current implementation focuses on downloading transcripts to prevent blocking the control plane, future updates will introduce job architecture for more extensive scraping capabilities.
Technical Details
The Web scraper MVP utilizes several Python libraries, including youtube-transcript-api, to download YouTube videos and their transcripts. In its first generation, the scraper integrates the Firecrawl API to perform crawling and scraping tasks. Here’s a deeper look into its technical components:
- Firecrawl Integration: Firecrawl is chosen for its ability to crawl entire websites rather than just single pages. It offers configuration options like maximum pages to crawl and crawl depth, allowing for controlled data extraction.
- Endpoint Management: Currently, the scraper operates under a single endpoint, which handles the downloading of transcripts to ensure that the control plane remains responsive. Future iterations will implement asynchronous processing to manage larger scraping tasks without hindering performance.
- Asset Storage: Each scraped page’s HTML is stored as an asset in the Collection from which it was added. This organized storage facilitates easy retrieval and management of scraped data.
- Scalability Considerations: To support scaling, the system plans to switch to Scrapfly for better credit cost efficiency and to implement a robust job architecture using our existing queueing and job-processing frameworks. Rate limiting and credit management are addressed to ensure sustainable usage of scraping resources.
- Challenges and Future Work: Upcoming enhancements include allowing users to specify multiple URLs for scraping, integrating anti-bot bypass strategies, and implementing proxy rotations to prevent IP blocking. These advancements aim to make the scraper more versatile and resilient against common web scraping obstacles.
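The asset-storage component above can be sketched with a minimal in-memory stand-in. The `Collection` class, the id scheme, and the helper name are illustrative assumptions rather than the actual Storytell data model:

```python
import hashlib


class Collection:
    """Minimal in-memory stand-in for a Storytell Collection (illustrative)."""

    def __init__(self, name: str):
        self.name = name
        self.assets: dict[str, str] = {}  # asset id -> page HTML

    def add_asset(self, url: str, html: str) -> str:
        # Derive a stable asset id from the page URL (assumed scheme).
        asset_id = hashlib.sha256(url.encode()).hexdigest()[:12]
        self.assets[asset_id] = html
        return asset_id


def store_scraped_pages(collection: Collection, pages: dict[str, str]) -> list[str]:
    """Store each scraped page's HTML as an asset in its Collection."""
    return [collection.add_asset(url, html) for url, html in pages.items()]
```

Keying assets by a hash of the URL makes re-scraping the same page idempotent, which matters once crawls are retried under the planned job architecture.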
Explain it to me like I’m five
Imagine you have a magic robot that can visit websites and copy down all the important information for you. The Web scraper MVP is like that robot. Right now, it can grab videos and their captions from YouTube and save the information neatly. It’s built to handle one website at a time, but soon it will be able to visit many websites at once without getting stuck. This helps us collect data quickly and keep everything organized.
Specific Items Shipped
- YouTube Transcript Downloading: Enabled the scraper to download transcripts from YouTube videos using Python libraries. This feature allows users to automatically fetch and store the transcript of any YouTube video, facilitating content analysis and accessibility.
- Firecrawl API Integration: Implemented Firecrawl for efficient website crawling and scraping. By leveraging Firecrawl, the scraper can navigate entire websites, respecting limits on the number of pages and crawl depth to manage resource usage effectively.
- Single Endpoint Operation: Configured the scraper to run under one endpoint to maintain control plane performance. This design choice ensures that fetching transcripts does not interfere with other system operations, maintaining overall stability.
- Asset Storage System: Developed a system to store each scraped page’s HTML as an asset within its Collection. This organized storage method allows for easy access and management of the scraped data, categorizing it based on the source Collection.
- Scalability Framework: Set the foundation for future scalability by planning job architecture and rate limiting. Preparing for growth, this framework will support more extensive scraping tasks and manage usage effectively as the number of users and scraped URLs increases.
The Problem We Solved with This Feature
Before the Web scraper MVP, extracting data from websites, especially large ones, was time-consuming and resource-intensive. Manually downloading videos and transcripts from platforms like YouTube was inefficient and prone to errors. Our scraper automates this process, enabling rapid and accurate data collection. This matters because it saves significant time and resources for users who rely on up-to-date and comprehensive web data for analysis, research, or content creation.
Who We Built This For
We developed the Web scraper MVP for engineers and researchers who need to gather large amounts of web data efficiently. Use cases include:
- Content Aggregators: Automatically collecting and organizing video transcripts for content libraries.
- Data Analysts: Gathering web data for trend analysis and market research.
- Educational Institutions: Compiling resources and transcripts for academic projects and studies.
Storytell is now HIPAA compliant, ensuring that we adhere to the highest standards for protecting the privacy and security of your health information. This certification underscores our commitment to maintaining the confidentiality of sensitive data and meeting regulatory requirements.
For more information about our security practices and policies, please visit our Trust Center.
Summary
We have implemented a feature that stops sending transformed prompts to the Large Language Model (LLM) when creating suggestions. Instead, the system now uses the raw prompts provided by the user. This change simplifies the suggestion process and enhances performance by reducing the complexity and processing time involved in generating suggestions.
Technical details
Previously, when generating suggestions, the system injected additional system prompts into the user’s input, transforming the original prompt before sending it to the LLM. This transformation included predefined instructions and context that were not part of the user’s original request. While this approach provided a standardized context for the LLM, it introduced unnecessary complexity and increased the processing time, leading to slower suggestion generation.
With the new feature, the system bypasses the transformation process and sends the user’s raw prompt directly to the LLM. This change reduces the payload size, minimizes processing overhead, and accelerates the response time for generating suggestions. By eliminating the additional system prompts, the LLM focuses solely on the user’s input, which streamlines the suggestion process and improves overall efficiency.
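The before/after difference can be sketched in a few lines. The preamble text and function names are illustrative assumptions, not the actual system prompts:

```python
SYSTEM_PREAMBLE = (  # illustrative stand-in for the injected system context
    "You are a helpful assistant. Always answer in three bullet points."
)


def build_suggestion_prompt_old(user_prompt: str) -> str:
    """Previous behavior: transform the prompt by injecting system context."""
    return f"{SYSTEM_PREAMBLE}\n\nUser request: {user_prompt}"


def build_suggestion_prompt_new(user_prompt: str) -> str:
    """New behavior: pass the user's raw prompt through to the LLM unchanged."""
    return user_prompt
```

The payload reduction falls out directly: the new path sends exactly the user's text, while the old path always carried the extra preamble.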
Explain it to me like I’m five
Imagine you’re asking a friend for ideas on a drawing. Before, someone was adding extra instructions to your request, which made it take longer for your friend to come up with suggestions. Now, we let you ask your friend directly without any extra steps, so you get faster and simpler ideas for your drawing.
Specific items shipped
- Direct Use of Raw Prompts: The system now sends the exact prompts provided by users to the LLM without any additional transformations or injected system prompts. This ensures that the suggestions are based solely on the user’s original input.
- Enhanced Performance: By removing the transformation layer, the time taken to process and generate suggestions has been significantly reduced, resulting in quicker response times and a more efficient user experience.
- Simplified Suggestion Pipeline: The elimination of system prompt injection streamlines the suggestion generation process, making it easier to maintain and reducing potential points of failure.
- Improved Accuracy of Suggestions: With the LLM receiving only the raw user input, the relevance and accuracy of the suggestions have improved, as there is less noise and fewer instructions that could potentially misguide the model.
The problem we solved with this feature
We identified that the process of transforming user prompts by injecting additional system prompts before sending them to the LLM was causing unnecessary complexity and delays in generating suggestions. This transformation not only slowed down the response time but also introduced potential inaccuracies in the suggestions by adding extraneous information. By switching to using raw prompts, we simplified the suggestion generation process, making it faster and more reliable. This improvement matters because it enhances the user experience by providing quicker and more relevant suggestions, which is crucial for maintaining user engagement and satisfaction.
Who we built this for
We built this feature for our end-users who rely on quick and accurate suggestions to enhance their productivity and creativity. Specifically, users who interact with Storytell for content creation, brainstorming ideas, or seeking assistance in writing will benefit from faster and more relevant suggestions. By optimizing the suggestion generation process, we cater to professionals, writers, and creative individuals who need efficient tools to support their workflow and enhance their output quality.
Summary
The primary objective of this update is to generate responses that are both faster and more user-centric: answers should feel natural and conversational while being processed more swiftly.
Technical Details
For engineers, understanding the mechanics of this feature is crucial. Here’s a deeper dive:
- System Context Settings: Adjustments have been made to the context settings to promote a more natural conversation flow, encouraging direct interaction with the user.
- Prompt Structure: The prompts have been redesigned to leverage reference materials, reducing the reliance on external training data unless necessary. This helps the system remain aligned with the specific context at hand.
- Diagnostics Integration: New diagnostic tools are implemented to track the accuracy and speed of prompt-response cycles, enabling ongoing improvements and ensuring reliability.
Explain It to Me Like I’m Five
Imagine writing a message to a friend. This feature helps the system respond in a way that’s easy to understand and quick, like getting a faster, friendlier text back from a friend. It’s designed to make conversations smoother and answers come quicker without any confusion.
Specific Items Shipped
- More Friendly Responses: Improved Interaction - The new structure of prompts facilitates more conversational and natural interactions, smoothing out the dialogue between users and the system.
- Faster Responses: Enhanced Speed - With a more efficient processing system, prompts are turned into answers faster than before, reducing wait times.
- Diagnostics in Cloud Environment: Increased Reliability - By incorporating diagnostic tools, any discrepancies or issues can be rapidly identified and corrected, boosting the system’s overall dependability.
The Problem We Solved with This Feature
Previously, the system’s responses often seemed robotic and sometimes misaligned with what users were looking for. This feature addresses that by making responses more human-like and natural, enhancing the overall user experience and ensuring that interactions with the platform meet users’ expectations.
Who We Built This For
This improvement benefits all Storytell users who interact through queries. By refining response quality and reducing wait times, users from different sectors, particularly those requiring prompt and clear interactions for their tasks, will experience a more efficient workflow.
Summary
This feature enhances the routing logic by automatically excluding routes when the search results predominantly originate from CSV files. By implementing this, the system ensures that users receive the most relevant and diverse search results, improving overall user experience.
Technical Details
The feature integrates a filtering mechanism within the existing search algorithm. It analyzes the source of search results and identifies the proportion of results derived from CSV files. If the CSV file results exceed a predefined threshold, the routing process bypasses these results, directing the search to alternative sources. This is achieved by modifying the routing API to include a validation step that checks the origin of each result and applies the exclusion criteria accordingly. Additionally, caching strategies are employed to minimize performance overhead during this verification process.
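The threshold check at the heart of this mechanism can be sketched as follows. The `source_type` field name and the 0.5 default are illustrative assumptions, not the shipped configuration values:

```python
def should_exclude_from_routing(results: list[dict], threshold: float = 0.5) -> bool:
    """Return True when the share of CSV-sourced results exceeds the threshold.

    `results` entries are assumed to carry a 'source_type' field identifying
    where each search result came from.
    """
    if not results:
        return False  # nothing to exclude
    csv_count = sum(1 for r in results if r.get("source_type") == "csv")
    return csv_count / len(results) > threshold
```

When this returns True, the routing process would bypass the CSV-dominated results and direct the search to alternative sources; the threshold itself is administrator-configurable per the shipped settings.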
Explain it to Me Like I’m Five
Imagine you’re looking for your favorite toy in a big toy store. Sometimes, most of the toys look the same and come from the same packaging (like CSV files). This feature helps the store staff make sure that when you search for a toy, you get a variety of options from different places, not just the ones that look alike. This way, you have a better chance of finding exactly what you want.
Specific Items Shipped
- Routing Filter Module: Introduced a new module that scans search results to determine the origin of each result. This module evaluates whether the majority of search outcomes are from CSV files. If the condition is met, it triggers the exclusion process to remove these results from the routing path.
- Threshold Configuration Settings: Added configurable settings to define the percentage threshold for CSV file dominance in search results. Administrators can adjust the threshold based on varying needs, allowing flexibility in how strictly the routing exclusion is applied.
- Performance Optimization Techniques: Implemented caching mechanisms to store previously evaluated search results. These optimizations ensure that the additional filtering does not significantly impact search performance, maintaining a seamless user experience.
- Enhanced Logging and Monitoring: Developed comprehensive logging features to track the filtering process and its impact on search routing. This facilitates easier troubleshooting and performance assessment, ensuring the feature operates as intended.
The Problem We Solved with this Feature
Previously, when users performed searches, a significant portion of the results came from CSV files, leading to repetitive and less diverse information being displayed. This repetition could frustrate users and reduce the perceived quality of the search functionality. By implementing the “Remove from Routing if the Search Results Contain Mostly Results from CSV Files” feature, we ensure that search results are more varied and relevant, enhancing user satisfaction and engagement.
Who We Built This For
This feature is designed for users who rely on Storytell to access a wide range of data sources. Specifically:
- Data Analysts: Who require diverse datasets for comprehensive analysis without being confined to CSV file limitations.
- Researchers: Seeking varied sources to support robust and multifaceted research outcomes.
- General Users: Looking for diverse information without redundancy, ensuring a more enriching search experience.
Summary
The Supportability(Search) feature enhances our search system by providing comprehensive visibility into tenant-specific assets and their embeddings. This improvement allows our team to monitor and understand customer issues more effectively by displaying detailed information about the assets and embeddings associated with each tenant.
Technical Details
This feature integrates directly with our existing search and ingest services, enabling a seamless flow of information. A new user interface was designed, which offers real-time metrics reflecting the ingested assets and their corresponding embeddings. This is achieved through the backend’s connection to our databases.
The feature employs optimized API endpoints that minimize performance overhead while ensuring that data retrieval is both efficient and accurate. It allows for queries that provide:
- The total count of ingested assets
- The number of embeddings generated for those assets
- A breakdown of embeddings organized by asset for each tenant
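The three metrics above can be computed with a simple aggregation over embedding records. The record shape (`{"tenant": ..., "asset_id": ...}`) and field names are illustrative assumptions, not the production schema:

```python
from collections import Counter


def tenant_embedding_breakdown(records: list[dict]) -> dict:
    """Aggregate raw embedding records into per-tenant supportability metrics."""
    # Count embeddings per (tenant, asset) pair.
    per_asset = Counter((r["tenant"], r["asset_id"]) for r in records)
    metrics: dict = {}
    for (tenant, asset_id), count in per_asset.items():
        entry = metrics.setdefault(
            tenant, {"asset_count": 0, "embedding_count": 0, "per_asset": {}}
        )
        entry["asset_count"] += 1          # one increment per distinct asset
        entry["embedding_count"] += count  # total embeddings for the tenant
        entry["per_asset"][asset_id] = count
    return metrics
```

In production the equivalent aggregation would run against the databases behind the optimized API endpoints rather than in memory.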
The front end is developed using React to maintain a responsive and user-friendly experience. For visualizing data, D3.js is utilized, allowing the creation of dynamic and interactive charts, which present complex data in an easily digestible format. All these technical advancements work harmoniously to enhance the search system’s supportability.
Explain It to Me Like I’m Five
Imagine you have a big toy box filled with lots of toys, and your parents want to help you play with them. Sometimes it’s hard to know what toys you have and how they’re organized. The Supportability(Search) feature is like putting labels on each toy and organizing them into groups so you can easily see how many toys there are, what kinds they are, and how they are arranged. This way, you can find your favorite toys quickly and enjoy playing without any confusion!
Specific Items Shipped
- Enhanced Dashboard Interface: Introduced a new dashboard within the search service that displays real-time metrics on ingested assets and embeddings. This dashboard provides a clear overview of the number of assets ingested and the embeddings generated for each tenant. Users can navigate to detailed views for specific organizations to see more granular data.
- Organizational Detail Views: Added detailed views for each organization, showing the number of assets and embeddings per asset. These detail views allow engineers to drill down into specific tenants to better understand their data landscape. This granularity aids in troubleshooting and optimizing search performance for individual organizations.
- Optimized API Endpoints: Developed new API endpoints to fetch and serve asset and embedding data efficiently. These optimized endpoints ensure that data retrieval is fast and reliable, minimizing latency and providing up-to-date information for the dashboard and detail views.
- Minimalistic User Interface: Maintained a clean and simple UI design for ease of use and quick navigation between services. The minimalistic approach ensures that users can access the search and ingest services without unnecessary complexity, focusing on the functionality that matters most.
The Problem We Solved with This Feature
Before implementing Supportability(Search), our team lacked the necessary visibility into the search system’s inner workings, which made diagnosing and addressing customer issues related to search functionalities a challenge. This gap hindered our ability to understand which assets and embeddings were present for each tenant, leading to slower response times in troubleshooting and optimizing the search experience.
Who We Built This For
Primarily for our engineering and support teams. These users require detailed visibility into the search system to monitor performance, diagnose issues, and optimize the search experience for our customers. Additionally, product managers benefit from these insights to make informed decisions about feature enhancements and resource allocation based on how different tenants utilize the search capabilities.
Improved the language of the knowledge preference buttons
Story Tiles™ now appear as opened by default
Ability to select Knowledge preference when chatting added to Storytell
- Story Tiles™ were added to SmartChat™
- Documentation is here
First version of code-driven Storytell documentation released
- Intelligent LLM Routing based on Quality vs. Price vs. Speed
- Documentation is here