Follow up on the latest improvements and updates.


First, we wanted to address some feedback we have received about file access.
We are aware of the intermittent issues with assistants over the past month, particularly with the new Claude models. We are actively working on our own file and assistant support architecture, which will replace the existing one for all GPT models as well as the new Claude models. This update is expected to ship later this week and will be the first release of our custom cognitive architecture, which will also power persistent memory, tools, and workflows.
🐦 Follow us on Twitter to stay up to date.
What we are launching today:
Quick Access with Space and Model Switchers
  • Easily switch between spaces using the shortcut
  • Swiftly change models with the shortcut
Undo/Redo and Writing Mode
Usability Updates
  1. Interrupting an AI Response
- Use the keyboard shortcut or click the stop button to immediately halt the AI's reply.
- Alternatively, send your next message to interrupt the AI.
  2. Undo/Redo
- Undo your last action with the keyboard shortcut.
- Redo the action with the keyboard shortcut on Windows.
- This applies to message sends, room changes, and other actions; it is a versatile feature that we will build on in coming releases with tree-based history navigation.
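For the curious, tree-based history navigation can be sketched as follows: instead of a linear undo stack, each action becomes a node in a tree, so undoing and then acting creates a new branch rather than discarding the old future. This is an illustrative sketch only; the names (`HistoryTree`, `apply`, `undo`, `redo`) are hypothetical and not Vello's actual implementation.

```typescript
interface HistoryNode<T> {
  state: T;
  parent: HistoryNode<T> | null;
  children: HistoryNode<T>[];
}

class HistoryTree<T> {
  private current: HistoryNode<T>;

  constructor(initial: T) {
    this.current = { state: initial, parent: null, children: [] };
  }

  // Record a new state. If we previously undid some actions, this
  // starts a new branch instead of throwing the old branch away.
  apply(state: T): void {
    const node: HistoryNode<T> = { state, parent: this.current, children: [] };
    this.current.children.push(node);
    this.current = node;
  }

  // Undo moves to the parent node; returns null at the root.
  undo(): T | null {
    if (!this.current.parent) return null;
    this.current = this.current.parent;
    return this.current.state;
  }

  // Redo follows a chosen child branch (default: the most recent one).
  redo(branch = -1): T | null {
    const kids = this.current.children;
    if (kids.length === 0) return null;
    this.current = kids.at(branch)!;
    return this.current.state;
  }

  get state(): T {
    return this.current.state;
  }
}
```

With a structure like this, no edit is ever lost: every undone branch remains reachable, which is what makes tree navigation richer than a plain undo/redo pair.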
Writing Mode
  1. Writing Mode for Even Longer Responses
- Press the keyboard shortcut to open a resizable editor window for structured input.
- Write longer documents in markdown and submit them all at once.
- Press the keyboard shortcut to change the orientation of the editor.
  2. Load Last Query
- Adjust and resend your last query without modifying the chat history using the keyboard shortcut.
- Particularly useful for building on previous questions or clarifying points, especially in combination with Writing Mode.
Performance and Bug Fixes
  • Significant performance improvements for code blocks
  • Overall stability upgrades
  • UI fixes for a smoother user experience
  • Image support for new Claude models
  • Faster inference for quicker responses
*Check out our tips page for more tips and tricks.*
As always, your feedback is invaluable to us, so please don't hesitate to reach out with any questions, concerns, or suggestions.
✋🏻 Vello Team
Spring ushers in a fresh season for Vello. We've added new models from Anthropic and Mistral, better file and document support, faster inference speeds, improved stability, and better team support.
Better Speed and Stability
  • We've improved our infra to produce significant speedups and reduce error rates across all model providers.
New Models
  • Opus models from Anthropic (fresh out of the oven today)
  • Tiny and other models from Mistral
These are all available under the same premium plans you use today. Access and pin these from your settings or via the model switcher.
Better File & Document Support
  • File search and browser for all spaces
  • File Preview when chatting with files
  • Improved note editing with the Note Editor
  • Image understanding via GPT-4 Vision
Beta Feature Signup
  • We have several features currently in closed beta. If you'd like to test any of our upcoming launches, please send us a message. We are currently testing three features:
  1. Native mobile app
  2. Permanent Memory (Personas can learn over time)
  3. Improved team support (better sharing, easier onboarding, channels). Vello is the best AI suite for teams; if your team would like to use Vello, we would love to onboard you, especially small to medium-sized companies. Please reach out to us if you would like to chat.
As always please let us know if you have any questions or feedback, we love to hear it.
We have been busy bees at Vello over the past month, and we are excited to announce a host of improvements launching today.
  • More capable personas with coding, image generation, web search, knowledge, and delegation.
  • Persona publishing and the Vello Persona Network Beta
  • Introducing Vella, our all-purpose, friendly default assistant that can search the web, chat with documents, generate images, and answer any questions you have about how to use Vello. See an example thread here.
  • Dramatically upgraded backend - much better performance overall, better stability and speed.
  • Virtualization on the client, making the UI much snappier for long message threads.
  • Support for more base models, including Mistral MOE and Online and Chat models from Perplexity.
  • Mentions support - @ mention AI models and personas by name in the input box to ask them to respond.
  • Faster space switching, and a dedicated space switcher
  • Fast model switcher - quickly switch models in a space
  • Global notifications - see notifications across all your spaces.
  • Math LaTeX rendering - support for rendering mathematical notation and equations.
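The client virtualization mentioned above boils down to one idea: for a long message thread, only render the rows that are actually on screen. A minimal sketch of the core calculation, assuming fixed row heights for simplicity (the function name and parameters are illustrative, not Vello's code):

```typescript
// Compute which rows of a long list should be rendered, given the
// current scroll position and viewport size. Everything outside this
// range is skipped entirely, which keeps the UI snappy for long threads.
function visibleRange(
  scrollTop: number,      // pixels scrolled from the top
  viewportHeight: number, // visible height of the list, in pixels
  rowHeight: number,      // fixed height of each row, in pixels
  totalRows: number,
  overscan = 2            // render a few extra rows to avoid flicker
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { start, end };
}
```

A thread of 10,000 messages then costs roughly the same to render as one of 20, since the DOM only ever holds the handful of rows in `[start, end)`.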
Thank you so much to all our loyal beta users; we have really enjoyed working with you to iterate on Vello over the past few months to make it the best AI client in the world! It means a lot that you have stuck with us through our growing pains. If you have feedback, always feel free to email us directly or post an idea in our new feedback forum.