Kamil Litman's Post

๐—ข๐—ฝ๐—ถ๐—ป๐—ถ๐—ผ๐—ป: ๐—š๐—ฃ๐—ง-๐Ÿฐ๐—ผ ๐—ถ๐˜€ ๐—ป๐—ผ๐˜ ๐˜†๐—ฒ๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฑ๐˜† ๐—ณ๐—ผ๐—ฟ ๐—ฏ๐˜‚๐˜€๐—ถ๐—ป๐—ฒ๐˜€๐˜€ ๐˜‚๐˜€๐—ฒ ๐—ฐ๐—ฎ๐˜€๐—ฒ๐˜€. ๐˜ฝ๐™–๐™˜๐™ ๐™œ๐™ง๐™ค๐™ช๐™ฃ๐™™: Over the last couple of weeks I have been experimenting with #gpt4o for various customer projects. Most initiatives required processing files and text (the sweet spot for #llm). I don't yet have a quantifiable framework but Neferdata makes it easy to switch between models, so I could compare results side by side. ๐™‚๐™ค๐™ค๐™™: โšก The model is observably faster. It's also half the price of Turbo. ๐˜ฝ๐™–๐™™: ๐Ÿ› ๏ธ The model required far more deliberate prompting to return results in the expected format. ๐Ÿง The model was far more likely to hallucinate when no answer was available. ๐™‘๐™š๐™ง๐™™๐™ž๐™˜๐™ฉ: Remember, things in #ai change fast, so this may age badly. But as of today, I would recommend using #gpt4turbo for your production workloads. I'd be curious to discuss if you had different experiences. Happy to connect and compare notes.
