Alon Gubkin’s Post


CTO at Aporia | Forbes 30 Under 30

Everyone tries to mimic the real-time ChatGPT experience, but there's a much more reliable way to build with LLMs: **select use-cases where users DO NOT expect an immediate response!**

When model outputs aren't used immediately, humans can manually verify them. As confidence in various scenarios grows, you can automate more and verify manually less.

Here are a few examples:
- E-mail bots
- Support assistants with a Zendesk UX instead of a chat UX
- Coding assistants that create GitHub pull requests in the background

Building this way is so powerful because:
1. It's essentially a solution to AI hallucinations
2. You can iterate on your product over time without losing user trust
3. You can add guardrails to flag problematic outputs and redirect them to human review

P.S. With OpenAI's Batch API (https://lnkd.in/dvzuc6Cy) you get a 50% cost reduction for offline LLM inference and higher rate limits :)
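A minimal sketch of this offline pattern using the Batch API mentioned above: instead of streaming responses, you queue requests as a JSONL file and collect the results asynchronously (within 24 hours). The helper function names below are illustrative; the JSONL line format and the `files.create`/`batches.create` calls follow OpenAI's documented Batch API.

```python
import json

def build_batch_requests(prompts, model="gpt-4o-mini"):
    """Build one Batch API request line per prompt.

    Each line targets /v1/chat/completions and carries a custom_id
    so results can be matched back to inputs after the batch completes.
    """
    return [
        {
            "custom_id": f"request-{i}",
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        for i, prompt in enumerate(prompts)
    ]

def write_batch_file(prompts, path="batch_input.jsonl"):
    """Serialize the requests to the JSONL file the Batch API expects."""
    with open(path, "w") as f:
        for line in build_batch_requests(prompts):
            f.write(json.dumps(line) + "\n")

# Submitting the file (requires the `openai` package and an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   batch_file = client.files.create(
#       file=open("batch_input.jsonl", "rb"), purpose="batch"
#   )
#   batch = client.batches.create(
#       input_file_id=batch_file.id,
#       endpoint="/v1/chat/completions",
#       completion_window="24h",  # results arrive asynchronously
#   )
```

Because nothing is shown to the user until the batch returns, every output can pass through a human-review or guardrail step before it is acted on.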

Tom Shapland

CEO Canonical AI. YC Alum. Tule founder.

1mo

My favorite part of your post is its representation in SQL 🤣

Maia Brenner

CEO & Founder - Flipping the Game with Generative AI ✨

1mo

Juan Diego Balbi Yves Fogel check this out! This is what we have been discussing about offline AI agents for back-office tasks.

