At Observe next week, Jonathan Steuck (Innodata Inc.) will speak on a panel about strategies and challenges for GenAI safety and alignment. Join us at SHACK15 on July 11 to explore real-world use cases, challenges in deploying products, and scaling AI across organizations. Hear from AI leaders about industry-specific considerations for navigating the complexities of enterprise AI deployment. Innodata Inc. is an Observe sponsor! A special thanks to Innodata Inc. for its steadfast commitment to advancing AI by solving some of the toughest data engineering challenges and empowering AI builders everywhere. Register: https://lnkd.in/gpbfB_dh
Arize AI
Software Development
Berkeley, CA 10,972 followers
Arize AI is an AI observability and LLM evaluation platform built to enable more successful AI in production.
About us
The AI observability & LLM Evaluation Platform.
- Website: http://www.arize.com
- Industry: Software Development
- Company size: 51-200 employees
- Headquarters: Berkeley, CA
- Type: Privately Held
Locations
- Primary: Berkeley, CA, US
Employees at Arize AI
- Ashu Garg: Enterprise VC-engineer-company builder. Early investor in @databricks, @tubi and 6 other unicorns - @cohesity, @eightfold, @turing, @anyscale…
- Dharmesh Thakker: General Partner at Battery Ventures - Supporting Cloud, DevOps, AI and Security Entrepreneurs
- Ajay Chopra
- Jason Lopatecki: Founder - CEO at Arize AI
Updates
- Presenting at Observe: Facundo Santiago (Microsoft) will share expertise on using the right model for the right job with the Azure AI model catalog. Hands-on sessions at Arize:Observe will provide pragmatic experience in evaluating and measuring the performance and reliability of AI systems using the latest open-source tools available. Thank you to Microsoft for sponsoring Observe, and for its unwavering commitment to advancing AI. We're looking forward to an inspiring and impactful event next week in San Francisco! There are a few tickets left: https://lnkd.in/gpEitQb5
- We're thrilled to have Innodata Inc. as an Arize:Observe sponsor this year! They're helping us assemble major model creators, open-source tool builders, and AI researchers for one day of innovating together. Observe is the year's premier event for building AI-powered applications and improving their quality and performance once in production, and it's happening in person next week at SHACK15 in SF. A special thanks to Innodata Inc. for its steadfast commitment to solving some of the toughest data engineering challenges out there, and empowering AI builders everywhere. Join us: https://lnkd.in/gpEitQb5
- The Evaluator is a collection of top content we've published recently at Arize AI. In this month's edition, we break down types of LLM evals, tracing, and token counting. We also dive into some of the latest and greatest AI research. As always, we conclude with a list of some of our favorite news, papers, and community threads. Read on and dive in...
- Thrilled to have Gabriel P. (CEO, Naologic) joining us at Observe next week to speak on multi-agent RAG. On July 11, we're bringing the AI community together with the help of Cerebral Valley for one day dedicated to evaluating and troubleshooting AI in production. Arize:Observe is the place to meet top minds in the LLM evaluation space and learn about real-world use cases from the people who serve millions of users. Join us for one day of pioneering and learning together in the heart of the action at SHACK15. Register: https://lnkd.in/gpEitQb5
- Arize AI reposted this: The standard for evaluating text has always been human labeling, but human evaluation is much more expensive to set up and maintain. AI engineers are now relying on LLMs to evaluate the performance of their applications. We built the open-source Arize Phoenix LLM Evals library for simple, fast, and accurate LLM-based evaluations. Get a quick taste in this demo – the second in a series – exploring how you can evaluate your LLM app for things like hallucinations using Phoenix.
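For readers who want to try the LLM-as-a-judge workflow described above, here is a minimal sketch using Phoenix's evals module. It is an illustration rather than the exact demo from the post: it assumes the arize-phoenix-evals package and an OpenAI API key are available, and that the column names and parameters shown (input, reference, output, provide_explanation) match your installed Phoenix version.

```python
# Minimal sketch (assumptions: arize-phoenix-evals installed, OPENAI_API_KEY set,
# hallucination template/column names match your Phoenix version).
import pandas as pd

from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# One example row: the user question, the retrieved reference text, and the app's answer.
df = pd.DataFrame(
    {
        "input": ["What is Arize Phoenix?"],
        "reference": [
            "Arize Phoenix is an open-source library for tracing and evaluating LLM applications."
        ],
        "output": ["Phoenix is a closed-source database for storing embeddings."],
    }
)

# Rails constrain the judge model's output to the template's allowed labels
# (e.g. "factual" / "hallucinated").
rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())

results = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o"),  # hypothetical choice of judge model
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=rails,
    provide_explanation=True,  # ask the judge to justify each label
)

print(results[["label", "explanation"]])
```

The same pattern extends to other built-in templates (relevance, toxicity, Q&A correctness) by swapping the template and rails, with the judge model kept separate from the application model being evaluated.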
- How do you navigate the complexities of enterprise AI deployment? At Arize:Observe, Barak Turovsky, VP of AI at Cisco, will share his expertise on a panel about metrics and strategies to ensure AI quality. Join us for this and more on July 11 at SHACK15 in San Francisco. Explore real-world use cases, challenges in deploying products, and scaling AI across organizations. We also have speakers presenting on cutting-edge research, emerging techniques, and theoretical advancements in AI observability and LLM evaluation. See you soon: https://lnkd.in/gpEitQb5
- How do you scale to conquer the million-user milestone? We put together a panel of experts at Observe to tackle this one. Aman Khan will talk to Swapna Kasula (Salesforce), James Emerson (Wayfair), and Hien Luu (DoorDash) about industry-specific considerations for navigating the complexities of enterprise AI deployment at scale. 🚀 Join us July 11 in San Francisco: https://lnkd.in/gpEitQb5
- If you don't have tickets for Observe yet, consider this your sign. 🇺🇸 Join us in San Francisco for one day dedicated to evaluating and troubleshooting AI in production! We'll be at SHACK15. Also joining us are speakers from Microsoft, OpenAI, Anthropic, DoorDash, Mistral AI, Meta, Priceline, Salesforce, LlamaIndex, Google, PromptLayer, Stanford University, University of California, Berkeley, and many, many more. Tickets are going fast (see lineup 👆). Register: https://lnkd.in/gpEitQb5
- Evaluation is key to foundation model quality hill climbing. The challenge with GenAI evals lies in metrics ambiguity, dataset diversity, scaling up evaluation, and more. At Observe this year, Yixin (Bethany) Wang will discuss foundation model evaluation. The Builders' Track at Arize:Observe lets attendees gain pragmatic experience evaluating and measuring the performance and reliability of AI systems using the latest open-source tools available. Join us at SHACK15 on July 11: https://lnkd.in/gpEitQb5