We have been busy bees at Vello over the past month, and we are excited to announce a host of improvements launching today.
- More capable personas with coding, image generation, web search, knowledge, and delegation.
- Persona publishing and the Vello Persona Network Beta.
- Introducing Vella, our all-purpose, friendly default assistant that can search the web, chat with documents, generate images, and answer any questions you have about how to use Vello. See an example thread here.
- Dramatically upgraded backend - much better overall performance, stability, and speed.
- Virtualization on the client, making the UI much snappier for long message threads.
- Support for more base models, including Mistral MoE and the Online and Chat models from Perplexity.
- Mentions support - @-mention AI models and personas by name in the input box to ask them to respond.
- Faster space switching and a dedicated space switcher (option+control+s).
- Fast model switcher - quickly switch models within a space (option+m).
- Global notifications - see notifications across all your spaces.
- Math LaTeX rendering - support for rendering mathematical notation and equations; see the example below.
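As a rough illustration (the exact delimiter syntax Vello accepts is an assumption here; standard dollar-sign delimiters are shown), a message containing LaTeX like the following now renders as typeset math:

```latex
% Hypothetical message content: the quadratic formula
The roots of $ax^2 + bx + c = 0$ are given by
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.$$
```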
Thank you so much to all our loyal beta users; we have really enjoyed working with you over the past few months to iterate on Vello and make it the best AI client in the world! It means a lot that you have stuck with us through our growing pains. If you have feedback, feel free to email us directly at hello@vello.ai or post an idea in our new feedback forum.