We've been hard at work on a number of new features and improvements to Vello, and we're excited to share them with you today. Here's a quick overview of what's new:
- **New models**: Updated GPT-4 Turbo, and Llama 3 Large and Small models. The Llama 3 models in particular (served via Groq) are a dramatic improvement over existing models because of their speed.
- **Better file support**: Improved handling of images, PDFs, and other file types. This is the first release enabled by our new in-house architecture, with many more improvements to come, including file access for all models, permanent memory, and integrations.
- **Referral program**: If you love Vello, share it with your friends and get rewarded. See details here.
- **Per-token pricing**: We are moving toward a more open and transparent pricing model. **Vello Flex** is a new option that lets you pay per token above your plan's usage cap. This is a great option if you need to use a model's full context or if you have a high volume of requests. See more details here.
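To make the pay-per-token idea concrete, here is a minimal sketch of how overage billing above a usage cap works. All numbers, and the function name, are hypothetical placeholders — actual Vello Flex rates and caps are in the linked pricing details.

```python
# Hypothetical sketch of cap-plus-overage billing; real Vello Flex
# rates and plan caps are not reproduced here.

def flex_cost(tokens_used: int, plan_cap: int, rate_per_1k: float) -> float:
    """Cost of tokens beyond the plan's usage cap, billed per 1,000 tokens."""
    overage = max(0, tokens_used - plan_cap)
    return overage / 1000 * rate_per_1k

# Example: 1.2M tokens against a hypothetical 1M-token cap at $0.50 per 1k.
print(flex_cost(1_200_000, 1_000_000, 0.50))  # 100.0
```

Usage within the cap costs nothing extra; only tokens beyond the cap are metered.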
As always, your feedback is invaluable to us, so please don't hesitate to reach out with any questions, concerns, or suggestions.
✋🏻 Vello Team