Eduardo Ordax's Post


🤖 Generative AI Lead @ AWS ☁️ | Startup Advisor | Public Speaker

๐–๐ก๐ž๐ง ๐ญ๐จ ๐”๐ฌ๐ž ๐‹๐ฅ๐š๐ฆ๐š๐Ÿ‘ ๐Ÿ•๐ŸŽ๐ ๐ฏ๐ฌ ๐Ÿ–๐: ๐€ ๐๐ฎ๐ข๐œ๐ค ๐†๐ฎ๐ข๐๐ž ๐ญ๐จ ๐’๐ฆ๐š๐ฅ๐ฅ ๐‹๐š๐ง๐ ๐ฎ๐š๐ ๐ž ๐Œ๐จ๐๐ž๐ฅ ๐š๐ง๐ ๐‹๐š๐ซ๐ ๐ž ๐‹๐š๐ง๐ ๐ฎ๐š๐ ๐ž ๐Œ๐จ๐๐ž๐ฅ๐ฌ ๐Ÿฆ™ While weโ€™re still waiting for the arrival of Llama3 400B, open source community agrees 70B and 8B versions are powerful models, but each has its ideal use cases. Below some tips ๐Ÿ‘‡ ๐‹๐ฅ๐š๐ฆ๐š๐Ÿ‘ ๐Ÿ•๐ŸŽ๐: ๐–๐ก๐ž๐ง ๐ญ๐จ ๐”๐ฌ๐ž ๐ˆ๐ญ ๐Ÿ”ธComplex Tasks: If your application involves understanding nuanced language, generating detailed content, or performing complex analysis, the 70B modelโ€™s larger capacity can handle it better. ๐Ÿง  ๐Ÿ”ธHigh Accuracy: For tasks where accuracy and precision are paramount, like medical research, legal analysis, or high-stakes decision-making, the 70B model provides a deeper understanding and more reliable outputs. ๐ŸŽฏ ๐Ÿ”ธLarge-Scale Deployments: When you have the computational resources to support it and need to serve large user bases with diverse queries, the 70B model can manage high volumes while maintaining quality. ๐ŸŒ ๐Ÿ”ธAdvanced AI Research: For cutting-edge AI research and development, where pushing the boundaries of whatโ€™s possible is the goal, the 70B model offers more capabilities. ๐Ÿ”ฌ ๐‹๐ฅ๐š๐ฆ๐š๐Ÿ‘ ๐Ÿ–๐: ๐–๐ก๐ž๐ง ๐ญ๐จ ๐”๐ฌ๐ž ๐ˆ๐ญ ๐Ÿ”ธSpeed and Efficiency: If you need quick responses with lower computational costs, the 8B model is more efficient and faster. Ideal for real-time applications like chatbots or interactive systems. ๐Ÿš€ ๐Ÿ”ธResource Constraints: When working with limited hardware or budget, the 8B model is more cost-effective and requires fewer resources. Perfect for startups and smaller projects. ๐Ÿ’ก ๐Ÿ”ธLess Complex Tasks: For straightforward tasks such as simple text generation, summarization, or moderate-level question answering, the 8B model is sufficient and performs well. ๐Ÿ“ ๐Ÿ”ธTesting and Prototyping: When developing new applications and needing to iterate quickly, the 8B model allows for faster experimentation and prototyping without heavy computational demands. โš™๏ธ Choose wisely based on your specific needs, use cases and resources to get the best out of these powerful LLMs. Wondering how wooly 400B will beโ€ฆ. #AI #Llama3 #ML #GenAI #LLM

I didn't even read the post, just laughing at the situation here. Sometimes the video is too funny and the message is lost. 😀

Sorin Gatea

Management Consultant, Executive MBA, Senior Enterprise Architect, Data Science, Digital Strategy and Innovation, Product Management

1w

all waiting for the 1 trillion beast

Muhammad Rizwan Munawar

Computer Vision Engineer @Ultralytics | Solving Real-World Challenges 🔎 | Python | Published Research | Open Source Contributor | GitHub 🌟 | Daily Computer Vision LinkedIn Content 🚀 | Technical Writer VisionAI @Medium 📝

1w

Super cool way to explain the difference 😹🥳


Well, it does the job of being a llama 😂

Ashok kumar

Data Scientist | Python | MySQL | PowerBI | Tableau | NoSQL | Excel | Web Scraping | GCP | Innovate AI

1w

It is "weights" reduced 😂

Shrinivas Sesadri

Student at Shanmugha Arts, Science, Technology & Research Academy (SASTRA), Thanjavur

1w

Love this 😂 🤣

Kyoungsu Park

Solutions Architect at AWS

1w

🤭🤭🤭

Alette Horjus

Manager | Digital Transformation | TechLaw at KPMG Netherlands

3d
Alberto Morgante Medina

Senior Telco Edge Software Engineer

1w

David Fernández Ortega

Nicolás Dagosta

SR Manager {Machine Learning | Data Science | Advanced Analytics | ML Ops | ML Engineering} @Ualá

1w

