Good Things Come… Glow Up Part 2

The Vera Engineering Blog
3 min read · Mar 22, 2024

Nobody is good at waiting, much less the team at a startup like ours. It’s been a minute since our last release notes, but big things take time, especially when you’re completely redesigning the core experience of your platform. Read on to find out what we’ve been cooking up at Vera Engineering.

New User Experience

With huge thanks to our small but mighty Front End team (hi Allan!), we’ve completely rebuilt our chat UI from the ground up. Check it out:

Actually, wait. Let’s start at the beginning… Here’s a screenshot with some callouts we can look through together. The stars, of course, are just markers for this blog post so we can show off what’s new (besides the obvious), not some sort of crazy new UI framework. What else is new in Vera chat?

  1. No more slash commands! Now you can toggle model routing on and off. AND…
  2. You can also switch between your favorite models right in the UI and pick the one that’s best for the job yourself.
  3. Previous conversations live where they always did, but now in a prettier, more user-friendly way.

You’ll also notice that each model response includes system information and a note about which model you’re using. And when model routing is on, we can provide even more context…

At Vera, we live what we preach, so it’s important for us to disclose as much information as possible about why we sent a prompt to its destination. In this case, we can provide context about why we’re routing a question about life and metaphysics to Cohere: it’s the best model for the job.
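
To give you a flavor of what that disclosure could carry, here’s a sketch of a routed response’s metadata. The field names and values below are just illustrative for this post, not our actual schema:

```python
# A sketch of the kind of routing disclosure that could accompany a response.
# Field names and values are illustrative only, not Vera's actual schema.
routing_disclosure = {
    "routing_enabled": True,
    "model": "cohere-command-r",
    "reason": "best fit for open-ended questions about life and metaphysics",
    "criteria": ["task_fit"],  # other requests might cite cost or latency instead
}
```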

Of course, the reasons to route a prompt vary with the context of the request. Complexity isn’t the only reason we might choose one model over another; the decision could just as easily come down to cost or latency. And all of this can be configured through our admin UI, which we’ve covered in previous posts, to match your own preferences.
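
As a rough sketch of what those preferences might boil down to, think of weighted criteria over an allowed set of models. The shape, names, and numbers below are invented for this post, not the admin UI’s actual configuration format:

```python
# Hypothetical routing preferences; keys, weights, and model names are invented
# for illustration and do not reflect the admin UI's actual format.
routing_preferences = {
    "enabled": True,
    "criteria_weights": {"complexity": 0.5, "cost": 0.3, "latency": 0.2},
    "allowed_models": ["gpt-4", "claude-3", "command-r"],
    "fallback_model": "gpt-3.5-turbo",
}

def pick_model(scores: dict, prefs: dict) -> str:
    """Pick the allowed model with the best weighted score (toy example).

    `scores` maps each model to its per-criterion scores, e.g.
    {"gpt-4": {"complexity": 0.9, "cost": 0.2, "latency": 0.4}, ...}.
    """
    weights = prefs["criteria_weights"]
    candidates = [m for m in scores if m in prefs["allowed_models"]]
    if not candidates:
        return prefs["fallback_model"]
    return max(
        candidates,
        key=lambda m: sum(w * scores[m].get(c, 0.0) for c, w in weights.items()),
    )
```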

In future releases, we’ll give you a tour of the even more granular model controls we’re building right now!

For More Contextual Awareness… It’s Vera RAG!

Time and again we’ve heard from clients that their generative AI applications include use cases for RAG (Retrieval-Augmented Generation). Our goal is to supply a comprehensive toolkit for applying a common policy across a robust set of AI tools, so supporting RAG was an easy choice for us to make.

Now you can apply the same internal threat policies (redact PII, block passwords, etc.) and model response policies (don’t talk about politics, don’t leak training data, etc.) to your RAG-enabled tasks.
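
To make that concrete, here’s a minimal sketch of how the same checks could wrap a RAG call. Everything below, from the toy policy helpers to the retrieval and generation stubs, is a placeholder we wrote for this post rather than Vera’s actual API:

```python
import re

def apply_prompt_policies(prompt: str) -> str:
    """Toy internal-threat policy: redact anything that looks like an email address."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", prompt)

def apply_response_policies(response: str) -> str:
    """Toy model-response policy: block answers that wander into politics."""
    return "Blocked by policy." if "politics" in response.lower() else response

def retrieve_documents(query: str) -> list:
    """Placeholder retrieval step; a real deployment would query a vector store."""
    return ["(retrieved context would go here)"]

def generate_answer(query: str, docs: list) -> str:
    """Placeholder generation step; a real deployment would call a model."""
    return f"Answer to {query!r}, grounded in {len(docs)} document(s)."

def answer_with_rag(question: str) -> str:
    safe_question = apply_prompt_policies(question)  # same prompt checks as non-RAG traffic
    docs = retrieve_documents(safe_question)
    draft = generate_answer(safe_question, docs)
    return apply_response_policies(draft)            # same response checks as non-RAG traffic
```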

See something you like? Got a request for us? We’re listening! Get in touch.
