Date: 2025-10-16 Version: Critical Patch 0.95.7
This update addresses a series of critical, cascading failures in the AI's permanent memory system. The iterative process of teaching an AI nuanced rules is a philosophical and technical challenge. Previous attempts resulted in a system that oscillated between over-eagerly saving irrelevant data and being too paralyzed to save critical facts. This patch represents a stable, architectural solution, finally achieving the reliability and intelligence this feature was designed for.
- Fixed Critical Logic Failure: The most severe issue, where the AI could respond with a blank message and only a memory tag, has been eliminated. The system prompt was fundamentally re-architected to establish an unbreakable rule: a conversational response is always the primary goal, and memory functions are a silent, secondary task.
- Intelligent Fact-Checking Heuristic: The AI is now equipped with a "Litmus Test" (Does this describe WHO the user is, or just WHAT the user asked about?). This simple, powerful heuristic guides the model to correctly distinguish between a valuable user fact (like a name, profession, or stated interest) and a trivial conversational topic, resolving the root cause of irrelevant memories.
- Balanced & Nuanced Instruction: The prompt has been carefully re-calibrated to be less punitive and more descriptive. This fixes the "prompt paralysis" that prevented the AI from saving legitimate user-stated facts, such as their name or personal interests, and ensures the system is neither over-eager nor over-cautious.
- Robust & Consistent Behavior: By addressing these core architectural flaws in the prompt, the memory system is now significantly more robust, predictable, and useful. It can be trusted to build an accurate user profile over time without polluting the memory bank or failing its primary conversational duty.
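To make the rules above concrete, here is a hypothetical excerpt of how the litmus-test instruction might be phrased inside the externalized memory prompt. The exact wording shipped in the patch may differ; this sketch only illustrates the "reply first, save silently" hierarchy and the WHO-vs-WHAT distinction described above.

```python
# Hypothetical excerpt of memory_prompt.txt; the shipped wording may differ.
LITMUS_TEST_RULE = """\
Always reply conversationally first; saving a memory is a silent,
secondary task and must never replace the reply.

Before saving, apply the Litmus Test:
  "Does this describe WHO the user is, or just WHAT the user asked about?"
SAVE: stable facts about the user (name, profession, stated interests).
SKIP: one-off topics the user merely asked about.
"""

def build_memory_prompt(base_rules: str) -> str:
    """Append the litmus-test rule to the base memory instructions."""
    return base_rules.rstrip() + "\n\n" + LITMUS_TEST_RULE
```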
Date: 2025-10-16 Version: Feature Release 0.95.7
This update focuses on two distinct but philosophically linked areas: refining the user's first interaction with the application to be more deliberate and responsible, and re-architecting the AI's core identity to be more transparent and customizable.
True agreement is not passive; it is an active and informed decision. The initial implementation of the EULA has been enhanced to ensure the user is genuinely presented with the full scope of the terms before being allowed to consent.
- Engineered for Compliance: The "Agree & Continue" button is now intelligently disabled until the user has scrolled to the absolute bottom of the EULA text. This is a crucial step in due diligence, ensuring the entirety of the agreement has been made available.
- Contextual Guidance: A disabled control without explanation is poor design. A tooltip has been added to the agreement checkbox, clearly instructing the user that they must scroll to the bottom to proceed. This removes ambiguity and friction from the onboarding process.
- Robust & Intelligent Logic: The system is engineered to handle edge cases gracefully. If the EULA text is short enough to not require scrolling, the system recognizes this and enables the agreement option immediately.
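The gating rule above reduces to a small predicate. This GUI-free sketch mirrors a QScrollBar's value()/maximum() pair (the function and parameter names are illustrative, not the application's):

```python
def acceptance_enabled(scroll_value: int, scroll_maximum: int) -> bool:
    """Return True when the agreement checkbox may be unlocked.

    scroll_value / scroll_maximum mirror a QScrollBar's value() and
    maximum(). A maximum of 0 means the EULA fits without scrolling,
    so acceptance is enabled immediately (the edge case noted above).
    """
    return scroll_maximum == 0 or scroll_value >= scroll_maximum
```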
An application's logic should be distinct from its personality. This update fundamentally re-architects how the AI's core instructions are managed, moving them from being hardcoded within the application's source to residing in external, user-accessible text files. This is a critical step towards a more open, transparent, and customizable platform.
- Decoupled Architecture: The AI's foundational instructions are no longer entangled with the Python code. They now live in two dedicated files: system_prompt.txt for the core identity and safety protocols, and memory_prompt.txt for the specific rules governing the permanent memory feature.
- Empowering User Customization: This change transforms the AI's persona from a static element into a dynamic one. Advanced users can now directly edit these text files to tailor the AI's tone, behavior, and even its operational rules without ever touching a line of code.
- Modular & Maintainable: Separating the core prompt from the memory prompt is a deliberate design choice. It makes the system's architecture cleaner and allows for the modular addition of future AI capabilities, each with its own externalized instruction set.
- Performance-Aware Implementation: This externalization was engineered with performance in mind. The application reads these files only once upon first use and then caches them in memory, ensuring there is zero performance penalty on subsequent AI interactions.
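The read-once-then-cache pattern can be sketched as follows (helper and variable names are assumptions; only the two prompt filenames come from the release notes):

```python
from pathlib import Path

# Module-level cache: each prompt file is read from disk at most once.
_PROMPT_CACHE: dict = {}

def load_prompt(name: str, prompt_dir: Path) -> str:
    """Read a prompt file on first use, then serve it from memory."""
    if name not in _PROMPT_CACHE:
        _PROMPT_CACHE[name] = (prompt_dir / name).read_text(encoding="utf-8")
    return _PROMPT_CACHE[name]
```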
Date: 2025-10-15 Version: Feature Release 0.95.6
This update introduces a critical onboarding step to ensure user awareness and establish clear terms of use. My philosophy is that a responsible tool must be transparent about its nature. This feature formalizes the user's role and liability when interacting with local AI models.
- One-Time Agreement: The first time the application is launched, a User Agreement & Liability dialog is now presented. This is a one-time event; once accepted, it will never appear again.
- Clear Terms of Use: The dialog contains a standard End-User License Agreement (EULA) that outlines the user's responsibilities. It clarifies that the user, not the developer, is liable for the inputs provided to and the content generated by the local AI models they choose to run.
- Mandatory Acceptance: The application will not proceed to the main interface until the user explicitly agrees to the terms by checking a box and clicking "Agree & Continue." This ensures a clear and deliberate acceptance of the terms.
- Seamless & Themed Integration: The dialog is built using the same custom UI components as the rest of the application, ensuring it is fully themed (light/dark) and feels like an integrated part of the onboarding experience, not a generic system pop-up.
- Persistent & Unobtrusive: The user's acceptance is saved permanently in the application's persistent settings (QSettings), guaranteeing a smooth, uninterrupted startup experience on all subsequent launches.
This update is centered on a single, powerful philosophy: a conversation should be a fluid, dynamic process that you, the user, can direct and refine at will. I've engineered two major new features—Regenerate and Fork—that transform the chat from a linear timeline into a branching tree of possibilities. These tools give you unprecedented control over the AI's output and the direction of your inquiry.
You are no longer limited to the first answer the AI provides. With the new Regenerate feature, you can now prompt the model to rethink its last response, offering a new perspective, a different creative take, or a more refined solution. This is an essential tool for iterative creative work, technical problem-solving, and exploring the full potential of the model.
- One-Click Reroll: A subtle "Regenerate" icon now appears below the most recent AI message. A single click will discard the previous response and generate a new one from your original prompt, keeping your workflow fast and intuitive.
- Intelligent State Management: This feature is engineered with precision. The "Regenerate" button is only ever visible on the single, most recent AI response, preventing confusion and ensuring a clean, predictable user experience. The moment you send a new message, the option on the previous response disappears.
- Seamless Integration: The regeneration process uses the same asynchronous, non-blocking architecture as a standard query, providing you with real-time status updates ("Analyzing," "Thinking...") while it formulates the new answer.
A single question can lead to a dozen interesting avenues of thought. The new Fork feature empowers you to explore them all without losing your place. You can now split any point in a conversation into a brand new, independent chat thread, preserving the context up to that moment.
- Branch Your Conversation: A new "Fork" icon now appears below every AI-generated message. Clicking this button instantly creates a new chat, copying the entire history up to that point. This allows you to pursue a tangent or explore an alternative line of questioning in a clean, separate workspace.
- Intelligent & Automatic Titling: Forked chats are named with clarity and precision. The system automatically takes the original chat's title and appends a "Thread" counter (e.g., "Python Scripting" becomes "Python Scripting Thread:2"). If you fork from a thread that is already a fork, it intelligently increments the number ("Python Scripting Thread:2" becomes "Python Scripting Thread:3"), maintaining a clear and logical hierarchy.
- Atomic & Instantaneous Creation: The underlying database logic has been engineered for performance and reliability. The creation of a forked chat is an atomic transaction, meaning the new thread and all its copied messages are saved to the database in a single, instantaneous, and failure-proof operation.
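The titling rule and the atomic copy described above can be sketched together. The "Thread:N" examples come from the release notes; the chats/messages schema and function names are illustrative assumptions:

```python
import re
import sqlite3

def fork_title(title: str) -> str:
    """Derive the forked chat's title, per the examples above:
    "Python Scripting"          -> "Python Scripting Thread:2"
    "Python Scripting Thread:2" -> "Python Scripting Thread:3"
    """
    match = re.fullmatch(r"(.*) Thread:(\d+)", title)
    if match:
        return f"{match.group(1)} Thread:{int(match.group(2)) + 1}"
    return f"{title} Thread:2"

def fork_chat(conn: sqlite3.Connection, source_chat_id: int, new_title: str) -> int:
    """Copy a chat and all its messages atomically.

    `with conn` opens a transaction that commits on success and rolls
    back on any error, so a fork can never be half-created.
    """
    with conn:
        cur = conn.execute("INSERT INTO chats (title) VALUES (?)", (new_title,))
        new_id = cur.lastrowid
        conn.execute(
            "INSERT INTO messages (chat_id, role, content) "
            "SELECT ?, role, content FROM messages WHERE chat_id = ? ORDER BY id",
            (new_id, source_chat_id),
        )
    return new_id
```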
This update introduces a completely re-architected system for displaying code within conversations. My philosophy is that the tools you use should be as well-crafted as the work you create with them, and this feature elevates code from a simple text element to a rich, interactive, and professional component of the UI.
- Dedicated Code Container: Code blocks now live in a dedicated, professionally styled container that is visually distinct from conversational text. This container features a clean header that displays the detected programming language, providing immediate context.
- One-Click Copy Functionality: Each code block features a "Copy" button in its header, allowing you to extract snippets with a single click. The button provides subtle "Copied!" feedback, streamlining your workflow and eliminating the need for manual selection.
- Native, Theme-Aware Highlighting: The rendering pipeline uses a high-performance, Qt-native QSyntaxHighlighter. This ensures that syntax highlighting is not only fast and accurate but also seamlessly adapts to both light and dark themes for perfect readability.
- Intelligent Scrolling & Flawless Geometry: The container is independently scrollable for longer code blocks, preserving a clean chat layout. The widget has been meticulously engineered to display flawless, rounded corners that are pixel-perfect in all themes.
Date: 2025-10-14 Version: Hotfix 0.94.4
This hotfix addresses a critical bug that made the "Copy" and "Copy All" actions for chat messages non-functional.
The "Copy" (for selected text) and "Copy All" actions in the context menu were failing, either doing nothing or copying a blank string to the clipboard. This broke a fundamental user interaction.
The investigation revealed two underlying problems:
- State Loss on Focus Change: When the context menu appeared, it took focus from the message text. This caused the operating system to clear any active text selection before the copy action could read it.
- Incorrect Data Handling: The widget was not reliably storing the original, plain-text version of the message, causing "Copy All" to fail.
The event-handling logic has been re-engineered for stability:
- Proactive Text Capture: The application now captures any selected text the instant a right-click occurs, before the selection is cleared by the menu appearing.
- Robust Data Management: The logic has been corrected to ensure the full, unformatted message is always available for the "Copy All" command.
All clipboard functions within the chat view are now fully restored and reliable. You can copy selected text snippets and full messages without issue.
This update introduces a new, non-intrusive update notification system to keep you informed of the latest features and fixes, alongside a crucial stability enhancement for the update-checking process itself.
Staying up-to-date should be effortless. This release introduces a new, intelligent update notification system designed to be informative without being disruptive. My philosophy is that you should be in control of your workspace, and this feature reflects that.
- Automatic & Asynchronous Checking: When the application starts, it now performs a silent, one-time check in the background to see if a new version is available. This process is fully asynchronous, meaning it will never freeze or slow down your startup experience.
- Non-Intrusive Notification: You won't be interrupted by pop-ups. Instead, if a new version is detected, a clean and clear notification will be waiting for you within the Settings dialog. This allows you to check for updates on your own terms.
- Clear Status, Always: The Settings dialog now provides transparent feedback on the update check's status. You will always know if the application is up-to-date, if an update is available, or if the check encountered a network error.
- Robust Cache-Busting: The web request has been engineered to be highly robust. It sends specific headers that instruct servers and proxies to bypass their caches, ensuring the check always fetches the latest, live version information and never gives a false negative due to stale, cached data.
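A minimal sketch of the cache-defeating request, assuming a standard header set (the exact headers and URL used by the application are not specified in these notes):

```python
import urllib.request

def build_version_request(url: str) -> urllib.request.Request:
    """Attach cache-defeating headers to the update-check request.

    These are the standard HTTP directives for bypassing caches; the
    application's actual header set is an assumption.
    """
    return urllib.request.Request(
        url,
        headers={
            "Cache-Control": "no-cache, no-store, must-revalidate",
            "Pragma": "no-cache",   # honored by legacy HTTP/1.0 proxies
            "Expires": "0",
        },
    )
```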
Date: 2025-10-13 Version: Hotfix 0.94.3
A critical hotfix has been deployed to address a bug where the AI's step-by-step reasoning was not being displayed in the chat interface. The "View Reasoning" button, which reveals the model's thought process, is now fully functional again for all compatible models.
We identified a critical issue reported by our users where the "View Reasoning" button was consistently absent from the AI assistant's chat bubbles. This feature provides transparency into the model's Chain-of-Thought (CoT) process, allowing users to understand how an answer was formulated. Its absence was a significant regression in functionality and user experience.
After a thorough investigation, we determined the root cause was an upstream change in the Ollama API's response structure (specifically in versions 0.12.5 and newer).
In a welcome move to improve clarity, the Ollama API now separates the model's reasoning process into a dedicated thinking field within the response. Previously, this reasoning block was embedded directly inside the main content field.
Our application's data parsing logic was still programmed to look for the reasoning in the old, embedded location. When the new API structure was encountered, our system failed to find the thinking data, and as a result, the UI correctly determined there was no reasoning to display.
The SynthesisAgent, responsible for handling communication with the LLM, has been updated with smarter response-handling logic:
- Primary Parsing Path: The system now correctly looks for and extracts the reasoning from the new, dedicated thinking field provided by modern Ollama versions.
- Fallback Mechanism: To ensure full compatibility, we have retained the old parsing logic as a fallback. If the thinking field is not present in a response, the system will then scan the main content for the inline reasoning block.
This dual approach ensures that the "View Reasoning" feature works seamlessly for users running any version of the Ollama service, providing both forward compatibility with the latest updates and backward compatibility for those on older installations.
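The dual parsing path might look like the following sketch. The `thinking` field is the real key used by newer Ollama responses; the inline `<think>…</think>` tag assumed for the legacy path, and the function name, are illustrative:

```python
import re

def extract_reasoning(message: dict) -> tuple:
    """Split an Ollama chat message into (reasoning, answer).

    Primary path: the dedicated 'thinking' field (Ollama >= 0.12.5).
    Fallback: an inline <think>...</think> block embedded in 'content'
    (the tag name is an assumption about the older format).
    """
    content = message.get("content", "")
    thinking = message.get("thinking")
    if thinking:
        return thinking, content
    match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if match:
        return match.group(1).strip(), content[match.end():].lstrip()
    return "", content
```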
With this fix deployed, the "View Reasoning" functionality is fully restored. You can once again gain valuable insight into the AI's problem-solving process. No action is required on your part.
We extend our sincere thanks to the community member who provided the detailed logs and API outputs that allowed for a swift diagnosis and resolution of this issue. Your feedback is invaluable in helping us maintain the quality and reliability of the application.
This is a landmark update that touches nearly every part of the application. I've focused on rebuilding core systems for performance and reliability, while also introducing powerful new features and quality-of-life improvements based on how I see the app evolving.
I've fundamentally rebuilt how conversational data is stored and managed, transitioning from a system of individual text files to a robust, high-performance SQLite database. This architectural shift moves the application to a professional-grade storage solution, delivering massive improvements in speed, reliability, and scalability.
- Enhanced Performance & Scalability: You will experience a dramatic increase in speed, especially when loading your chat history. What once required scanning multiple files on disk is now an instantaneous, indexed database query. The application will feel significantly more responsive, regardless of whether you have ten chats or ten thousand.
- Rock-Solid Data Integrity: The new database system is transactional, which protects your chat history against corruption from unexpected application crashes or power failures. This ensures your conversations are always saved safely.
- Seamless, Automatic Migration: For existing users, this transition is completely effortless. On its first run after the update, the application will automatically detect your old chat files and migrate them into the new database. All of your history will be preserved with no action required from you.
- Foundation for Future Features: This new architecture isn't just about improving what's here; it's about unlocking what's next. It lays the groundwork for powerful future capabilities, such as a full-text search across all of your past conversations.
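The automatic migration described above could be sketched as a single-transaction import. The on-disk layout of the legacy chat files is not specified in these notes, so this sketch assumes one .txt file per chat with the filename as its title:

```python
import sqlite3
from pathlib import Path

def migrate_chats(conn: sqlite3.Connection, old_dir: Path) -> int:
    """One-shot migration of legacy per-chat text files into SQLite.

    Runs inside a single transaction, so a crash mid-migration cannot
    leave the database half-populated. Schema and file layout are
    illustrative assumptions.
    """
    migrated = 0
    with conn:
        for path in sorted(old_dir.glob("*.txt")):
            cur = conn.execute("INSERT INTO chats (title) VALUES (?)", (path.stem,))
            conn.execute(
                "INSERT INTO messages (chat_id, role, content) VALUES (?, ?, ?)",
                (cur.lastrowid, "user", path.read_text(encoding="utf-8")),
            )
            migrated += 1
    return migrated
```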
Getting started previously required manually finding, downloading, and installing Ollama, followed by using command-line tools to pull AI models. This process could be a significant barrier.
The new Corted Startup utility completely transforms this experience into a simple, guided workflow. My goal is to remove technical barriers and empower every user to get up and running in minutes.
- Guided Ollama Installation: The utility now presents direct download links for Windows and macOS and a one-click copy command for Linux. There's no more need to search for installation instructions.
- Integrated Model Manager: Forget the command line. You can now browse a curated list of available AI models directly within the application. Select the model you want, click "Pull," and monitor the download and installation progress in real-time.
- A Polished, All-in-One Interface: This entire process is wrapped in a clean, modern interface featuring a draggable window and a light/dark theme toggle to match your workspace. It provides a professional and centralized starting point for the entire Corted experience.
This update introduces a powerful new dimension of control over the AI assistant. You can now define a persistent persona and set global behavioral rules through the new System Instructions feature, allowing you to tailor the AI's core identity to your specific needs.
This feature was engineered with extreme care. The underlying AI prompt has been restructured to create an unmistakable hierarchy, ensuring the model perfectly understands its core function, your custom instructions, and the context of your conversation.
- Set a Persistent Persona: Accessed via the Settings menu, the new System Instructions dialog allows you to provide high-level directives that apply to every conversation.
- Precision Control Over AI Behavior: Your custom instructions are given the highest priority, giving you unprecedented influence over the tone, style, and structure of the AI's responses.
- Robust & Unambiguous Prompting: The AI's core prompt has been meticulously re-architected to ensure your instructions are integrated without conflicting with its primary system functions, leading to a more predictable and reliable output.
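The prompt hierarchy described above might be assembled like this. The section labels, ordering, and function name are assumptions; only the layering idea (core function, then user instructions, then context) comes from the notes:

```python
def assemble_prompt(core_prompt: str, user_instructions: str, memory: str) -> str:
    """Compose the final system prompt with an explicit hierarchy.

    Each layer is clearly delimited so user instructions cannot be
    mistaken for chat input or override the core safety protocols.
    """
    sections = [("CORE DIRECTIVES", core_prompt)]
    if user_instructions.strip():
        sections.append(("USER SYSTEM INSTRUCTIONS (HIGH PRIORITY)", user_instructions))
    if memory.strip():
        sections.append(("KNOWN USER FACTS", memory))
    return "\n\n".join(f"### {label} ###\n{text.strip()}" for label, text in sections)
```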
Building upon the foundation of AI customization, this update provides granular control over the core parameters of the language model itself. These advanced settings have been moved to their own dedicated dialog, ensuring a clean, uncluttered interface that gives power users the tools they need to fine-tune the AI's performance.
- Dedicated Control Panel: A new "Advanced Model Settings" dialog provides a focused workspace for adjusting the AI's inner workings without cluttering the main settings panel.
- Temperature Control: An intuitive slider allows you to precisely manage the AI's creativity. Lower the temperature for more deterministic, factual responses, or raise it to encourage more novel and imaginative output.
- Context Window Management: Directly set the size of the model's conversational memory (context window). Increase it for longer-term context retention in complex conversations, or decrease it to optimize performance and memory usage.
- Reproducible Outputs with Seeding: A seed value can now be set to ensure the AI produces the exact same response to the same prompt every time, a crucial feature for testing, development, and content generation. A "Random" button is provided for convenience.
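These three settings map directly onto Ollama's request options. In this sketch, temperature, num_ctx, and seed are real Ollama option keys; the defaults and the helper name are illustrative, not the application's:

```python
def build_model_options(temperature=0.7, num_ctx=4096, seed=None):
    """Assemble the Ollama 'options' payload from the advanced settings."""
    options = {"temperature": temperature, "num_ctx": num_ctx}
    if seed is not None:
        options["seed"] = seed  # same seed + same prompt => same output
    return options
```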
A truly powerful tool should feel like an extension of your own thoughts, minimizing the friction between intent and action. This update introduces a comprehensive suite of keyboard shortcuts, designed to keep your hands on the keyboard and your mind focused on the conversation. My goal is to elevate the application from a simple point-and-click interface to a high-velocity command center for power users.
- New Conversation (Ctrl+N): Instantly start a fresh chat without reaching for the mouse, keeping your workflow seamless.
- Access Settings (Ctrl+,): Quickly open the settings dialog with a standard, universal shortcut to tweak the AI's model or appearance on the fly.
- Focus Input (Ctrl+L): Immediately jump to the chat input field from anywhere in the application. This is a massive time-saver for rapid-fire questioning.
- Close Window (Ctrl+W): A standard, convenient way to close the application window when your session is complete.
- Cross-Platform Native Feel: These shortcuts are intelligently mapped, automatically translating Ctrl to Cmd on macOS to ensure a native and intuitive experience on any operating system.
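Qt performs this translation automatically when shortcuts are declared portably (e.g. QKeySequence("Ctrl+N")); this GUI-free sketch only illustrates the display mapping described above:

```python
def display_shortcut(shortcut: str, platform: str) -> str:
    """Translate a portable 'Ctrl+...' shortcut for per-platform display.

    platform follows sys.platform conventions ("darwin" for macOS).
    """
    if platform == "darwin":
        return shortcut.replace("Ctrl", "Cmd")
    return shortcut
```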
I've re-architected the application's dialog system to create a proper sense of depth and focus. The previous implementation could incorrectly blur parent dialogs, leading to a confusing user experience. This has been resolved with a more intelligent, stack-based management system.
- Correct Visual Hierarchy: The application now correctly tracks the layering of all open windows. When a new dialog appears, only the windows behind it are blurred, ensuring the active window is always sharp and clearly in focus.
- Robust Dialog Stacking: The new system can flawlessly handle any number of nested dialogs (e.g., opening "Manage Memories" from within the "Settings" dialog) while maintaining the correct visual state. This is a crucial fix for UI professionalism and usability.
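The stack-based rule reduces to "blur everything except the topmost window." A minimal, GUI-free sketch (class and method names are illustrative; 'window' can be any hashable handle):

```python
class DialogBlurStack:
    """Track open dialogs so only windows behind the top one are blurred."""

    def __init__(self):
        self._stack = []

    def push(self, window):
        """A new dialog opened on top of the stack."""
        self._stack.append(window)

    def pop(self):
        """The topmost dialog closed."""
        return self._stack.pop()

    def blurred(self):
        """Every window except the topmost should be blurred."""
        return set(self._stack[:-1])
```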
These updates focus on improving the core conversational experience and providing greater control over your data.
- New Feature: Clear All Chat History (Sandstone): You now have the ability to permanently delete your entire chat history. This action performs a "clean sweep" of all recorded conversations, resetting your history to a blank slate. This option is accessible by right-clicking the "+ New Chat" button. A confirmation prompt will appear to prevent accidental data loss.
- Resolved Repetition Bug (Keystone): I've fixed a core logical issue where the AI would sometimes perceive the user's first message as a repeated statement, causing odd conversational artifacts (e.g., greeting you "again"). This is now fully resolved, significantly improving the model's contextual understanding from the very first turn.
A powerful tool should also be a pleasure to use. This update refines the application's user interface by addressing several visual inconsistencies and bugs.
- Instantaneous Theme Switching: All UI elements now update their appearance instantly when switching between light and dark modes.
- Improved UI Clarity: Corrected an issue where the text field in the System Instructions dialog could blend into the background in light mode.
- Stylesheet Stability: Fixed a minor bug that could cause stylesheet parsing errors, leading to more stable and robust rendering of all UI components.