Sign in to see Jon Erik's full profile
Gothenburg, Västra Götaland County, Sweden
Contact information
646 followers
More than 500 connections
View mutual connections with Jon Erik
Experience and education
-
Knowit
********** ******* *** **** *********
-
******** ********
**** ********* (**********)
-
*** **** ** **********
**** ********* (**********)
See Jon Erik's full experience
Others also viewed
- Anastasiοs Tzinieris (Gothenburg)
- Edvard Lindelöf (Gothenburg)
- Tom H. (Sweden)
- Mikaela Giegold (Gamlestaden)
- Jacob Ewertzh (Sweden)
- Johan Acharius (Greater Stockholm Metropolitan Area)
- Sara Ingvarsson (Gothenburg)
- Kjetil Åmdal-Sævik (Bærum)
- Jerker Sandsten (Lidköping)
- Tim Johansson (Karlskrona)
Explore more posts
-
Joshua Wöhle
Two days ago, I said OpenAI clearly had something up its sleeve for paid users. Today, they announced interactive tables and charts from live files—and it looks awesome. Data Analyser is one of ChatGPT's lesser-talked-about features because it had a somewhat rocky start. First of all, LLMs (Large Language Models) aren't great at anything related to math. This makes working with data... a problem. OpenAI circumvented this by getting ChatGPT to write a program (in code, which it's great at) and then using the code it wrote to analyse the data. This was a clever workaround, and it significantly improved the accuracy of its solution when dealing with data. But it was still somewhat clunky and sometimes took 2-3 tries to get an analysis right. Over time, GPT-4 improved its reasoning, which in turn improved its analysis, and Data Analyser has now become a tool I use almost daily to interrogate various data sets. Today's announcement, however, takes it over the utility threshold for *most* people. Building charts for presentations, understanding data sets from various angles and presenting stories with data is something many *want* to do, but few get to. That's about to change. For everyone who keeps saying, "LLMs are no longer progressing enough; we've hit a wall," this is another example of how we can make extreme productivity jumps without any underlying LLM improvement. There are 1000 more like this - some will come from OpenAI directly, others from the industry building on top of these models. The next few years are going to be interesting :) #FutureOfWork #ArtificialIntelligence #Productivity
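The write-code-then-run-it workaround the post describes can be sketched in a few lines. `generate_code` below is a hypothetical stand-in for an LLM call, and the canned program it returns is invented for illustration:

```python
import statistics

def generate_code(question: str) -> str:
    # In a real system this string would come from an LLM completion;
    # here it is a canned program of the kind the model might emit.
    return ("result = {'mean': statistics.mean(data), "
            "'median': statistics.median(data)}")

def analyse(question: str, data: list[float]) -> dict:
    code = generate_code(question)
    scope = {"statistics": statistics, "data": data}
    exec(code, scope)           # run the generated program: the code does
    return scope["result"]      # the arithmetic, not the language model

print(analyse("What are the mean and median?", [1, 2, 3, 4, 10]))
```

The division of labor is the point: the model only has to produce correct code, and the interpreter does the math it would otherwise get wrong.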
56
14 comments -
Tecton
New blog post: Discover how Tecton solves the hidden data engineering problems in ML! Say goodbye to endless data engineering tasks and infrastructure headaches holding your ML projects back. In our latest post, we explore the core challenges faced by ML teams and how Tecton revolutionizes the way you work with data, including: 🛠 Automating data pipelines 🤹♂️ Seamless batch, streaming, and real-time processing ⚙ Fully managed compute and storage infrastructure Read the full post to learn how Tecton can unblock your ML projects and help you build smarter models, faster! 🔗 https://lnkd.in/eQKpFE6g #MachineLearning #DataEngineering #FeatureEngineering #Tecton
11
-
Smart Data Warehouse Solutions
Data Engineering: Working with semi-structured data types. In our previous post, we demonstrated how to parse comma-separated data into a structured file format within the Snowflake project that utilizes the medallion architectural framework. In this post, we'll introduce another file format that you'll frequently encounter in your data engineering projects: the JSON file format. JSON (JavaScript Object Notation) is an open standard file format used for sharing data. It employs human-readable text to store and transmit data objects, which consist of attribute-value pairs and arrays. This format is commonly used for transmitting data between web applications and servers, which makes it very popular and well suited for our discussion and demonstration. There are three common JSON shapes: JSON objects, nested JSON objects, and JSON arrays.
JSON object: "User_Name": { "First_Name": "Henry", "Last_Name": "Godson" }
Nested JSON object: "User_Informations": { "Date": "2024-05-21", "Sports": { "Football": "Good", "Swimming": "Excellent" }, "Name": "Henry" }
JSON array (arrays are written inside square brackets): "Employees": [ { "First_Name": "Henry", "Last_Name": "Godson" }, { "First_Name": "Endy", "Last_Name": "Junior" } ]
A JSON array also comes in a simple form, such as "Favourite_Sports": ["Football", "Baseball"]. While certain data migration tools, such as Azure Data Factory, have built-in intelligence for transforming data from one format to another, most data engineering ETL projects require data engineers to perform similar transformations using either SQL or Python. To transform a JSON array under the medallion architectural framework, there are two major approaches. Copy to Bronze, then transform: copy the JSON file into the Bronze layer, then transform the data to a structured format before moving it into the Silver table.
Direct transformation to Bronze: transform the data directly to a structured format and deposit it into the Bronze layer straight from the staging environment. This approach is recommended, and we'll demonstrate how to achieve it in our upcoming post. Stay tuned!
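As a minimal sketch of the kind of SQL-or-Python transformation the post mentions, here is a JSON array of employees flattened into tabular rows with Python's standard library. The field names follow the post's example; nothing here is Snowflake-specific:

```python
import json

# Parse a JSON document and flatten its array into rows suitable for
# loading into a structured (Bronze/Silver) table.
raw = """{"Employees": [
    {"First_Name": "Henry", "Last_Name": "Godson"},
    {"First_Name": "Endy", "Last_Name": "Junior"}
]}"""

doc = json.loads(raw)
rows = [(e["First_Name"], e["Last_Name"]) for e in doc["Employees"]]
print(rows)  # [('Henry', 'Godson'), ('Endy', 'Junior')]
```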
3
-
Recordly
Are you an experienced Data Engineer who digs Databricks? ❤️🧱 Our current Databricks experts have a lot on their plates, and they (and we all ofc) would be very happy to welcome new members to the team! As a Data Engineer at Recordly, you get to work hands-on with data. Didn't see that one coming, huh? 😏 In all seriousness, though, you would be at the core of helping our customers capitalize on the opportunities data offers 🎯 You would also be a key player in _our_ growth journey: Recordly focuses on data architectures and data engineering that enable exceptional user experiences, efficiency gains, and new business models. In simple terms, we tech the heck out of data. Apply 👉 https://lnkd.in/dSURGJri #hiring #dataengineer #businessdata
19
4 comments -
b.telligent
Do you want to unlock the full power of Databricks for your data? Then check out our new "Databricks in a Week" workshop! In this 5-day workshop we will (optionally together with you): ✅ rapidly prototype and validate your data-driven product or solution ✅ showcase the Databricks technology hands-on with your data in your Databricks account ✅ demonstrate feasibility and value to your stakeholders Sounds good? Then head on over to our website and get in touch with our team! 👉 https://lnkd.in/dxHyFgfK What is your biggest challenge with Databricks at the moment? Let us know in the comments and let's discuss! #databricks #data #DataIntelligence #DatabricksInAWeek
36
1 comment -
Datashift
What if you want to leverage the power of a Databricks cluster while enjoying the development experience in your beloved IDE (Integrated Development Environment) instead of the Databricks workspace? Well, for all the IDE lovers, there is something called Databricks Connect. In this Medium blog post, Stefanie Turelinckx will introduce you to Databricks Connect and show you step by step how to set it up! #databricks #IDE #Terraform #DatabricksConnect
23
-
PyData Eindhoven
As this year's PyData Eindhoven is co-organized with JuliaCon 2024, it might be a good idea to brush up your Julia skills 😏 We've got just the talk for that, and even better, Joris K. shows you how you can use Julia as a backend for your Python packages to greatly improve performance 📈 Watch 'Building Fast Packages Faster: Julia as a Backend to Python and R' now: https://hubs.la/Q02s8Yt50
20
-
Diggibyte Technologies Private Limited
Setting up your own Data Intelligence Platform is no simple task, as harnessing the full potential of data can be quite challenging. But imagine this: What if you could get it done in just 25 hours? Yes, you heard that right. We can kickstart your Data Intelligence journey in just 25 hours on the Databricks Platform. Let us guide you on your #DataAI journey with our comprehensive #dataIntelligencePlatform. Databricks in a Box provides the tools and insights you need to excel in today's data-driven world. What sets us apart? Our platform isn’t just another off-the-shelf solution. Our team of #Databricks #SolutionArchitects has meticulously crafted it, bringing unmatched expertise and #innovation. From development to maintenance and ongoing management, you can trust that your Data Intelligence Platform is in the hands of industry #experts. Ready to take the next step? Explore the endless possibilities with us at the upcoming #MachineConGCC Summit. Join us and discover how our platform can revolutionize your #dataanalytics and #AIdriven decision-making. AIM William Rathinasamy Sekhar Reddy Anuj Kumar Sen Lawrance Amburose Brindha Sendhil Praveen Kumar C Rashika S Parthiban Raja
28
-
Solomon Kahn
You might be perfect, but your data, unfortunately, will never be perfect. You can spend an UNLIMITED amount of money and time investing in data quality. And still, you'll have problems. You'll think you know how many people visited your website, but oh wait, lots of people use ad blockers and were invisible to your tracking! Or your tracking library had a race condition in JavaScript, so some unknown percentage of people never got tracked and your data will be incomplete forever. How do you navigate as a data person? Get comfortable making decisions off incomplete information. Too many people have an all-or-nothing view around data quality. But the real world doesn't work like that. Even imperfect data can be extremely helpful! If I take off my glasses, everything looks blurry... But I can still see clearly enough to walk around and know if I'm about to walk into the street. Get comfortable helping business folks make good decisions incorporating your understanding of how reliable and complete the information is. It's messier than everyone would like, but real life is messy! The only people who never need to worry about messy data and incomplete information don't actually make important decisions.
35
3 comments -
Ori
Ready to experience the Snowflake-Arctic-instruct model with Hugging Face? Snowflake recently announced Arctic, their Mixture of Experts (MoE) model which can process 128 tokens and has roughly 479B parameters, making it a very efficient model for large-scale tasks. Arctic was designed to deliver exceptional performance for a variety of applications and use cases, including SQL data co-pilots and predictive analytics. One of the standout features of Arctic is its ability to scale efficiently across multiple hardware configurations, making it particularly well-suited for deployment on high-performance machines like the NVIDIA Cluster/Server with 8xH100s GPUs. By employing FP8 quantization to reduce the precision of floating-point numbers to 8-bit format, the model can run on a single GPU instance. Although this setup is not yet fully optimized, it can achieve a throughput exceeding 70 tokens per second at a batch size of one. We cover: - Configuration - Installing PyTorch - Installing NVIDIA's Fabric Manager - Spinning up a virtual environment There are also how-tos on scaling up transformer architectures with Flash Attention and common troubleshooting. Read the full blog to learn how to start deploying Arctic today! https://lnkd.in/gKDNp-6M
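As a didactic illustration of why low-bit quantization shrinks a model's memory footprint (this is a toy affine int8 round trip, not the FP8 scheme Arctic actually uses), consider:

```python
import numpy as np

def quantize(x: np.ndarray):
    # Map floats onto int8 with a max-abs scale factor: each weight
    # now takes 1 byte instead of 4 (float32) or 2 (float16).
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float values for computation.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
print(q.dtype, np.max(np.abs(w - w_hat)))  # int8 storage, small error
```

The trade-off is the same at any bit width: less memory and bandwidth per weight, at the cost of a bounded reconstruction error.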
37
1 comment -
Advancing Analytics
There have been huge waves recently after #Databricks announced their new open-source fine-grained MoE (mixture of experts) large language model... but what does that actually mean? If you're already happily chatting away with your favorite LLM, why would you care? And how do you even get started with using it? In this video, Simon takes a look through the major points of the DBRX announcement and what the sheer focus on speed tells us about LLM maturity. We then take a dive into the new Databricks AI Playground to test out DBRX Instruct and show you how you can get started with it. https://hubs.la/Q02trdVk0 #dataengineering #datalakehouse #datascience #machinelearning #llm #DBRX #genai
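For readers wondering what "mixture of experts" means mechanically, here is a toy routing sketch with invented random weights (not DBRX's actual architecture): a gate scores the experts for each input, only the top-k experts run, and their outputs are combined with softmax weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 4, 3
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # toy expert layers
gate_w = rng.standard_normal((d, n_experts))                        # toy gating weights

def moe(x: np.ndarray, k: int = 2) -> np.ndarray:
    scores = x @ gate_w                    # one gating score per expert
    top = np.argsort(scores)[-k:]          # route to the k best-scoring experts
    w = np.exp(scores[top]); w /= w.sum()  # softmax over the chosen k only
    # Only k of the n experts do any work; the rest are skipped entirely.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe(rng.standard_normal(d))
print(y.shape)
```

Because only k of n experts execute per token, a MoE model can have far more total parameters than it spends compute on, which is the source of the speed DBRX emphasizes.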
21
1 comment -
Seth Hirsch
Composable CDPs have some clear advantages over out-of-the-box solutions. Think about the example of reverse ETL tools, which take your first party data and push it to other destinations. If a company has had to build everything from scratch, they simply won't have the resources to invest in reverse ETL like a dedicated specialist would. They have so many other components to focus on that connecting with dozens of important destinations won't be possible. That's where the beauty of composable CDPs comes in. Imagine going to a company that specializes solely in reverse ETL and having them tailor their solution to fit with your existing tech stack. The result? A customized solution that meets your unique needs and surpasses anything a single CDP provider could offer. Instead of settling for a one-size-fits-all solution, brands can pick and choose the best-of-breed components, such as reverse ETL, from specialized providers. By connecting these components together, brands can create a purpose-built solution that outshines any one off-the-shelf option.
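The reverse-ETL pattern the post describes (read first-party rows from the warehouse, push them to downstream tools) can be sketched with stubs; `fetch_rows` and `Destination` below are hypothetical stand-ins, not any vendor's API:

```python
def fetch_rows():
    # Stand-in for a warehouse query, e.g. SELECT email, ltv FROM users.
    return [{"email": "a@example.com", "ltv": 120},
            {"email": "b@example.com", "ltv": 80}]

class Destination:
    # Stand-in for a marketing/CRM API client.
    def __init__(self):
        self.received = []

    def upsert(self, record):
        self.received.append(record)

def sync(dest: Destination) -> int:
    # Push every warehouse row to the destination; return count synced.
    for row in fetch_rows():
        dest.upsert(row)
    return len(dest.received)

dest = Destination()
print(sync(dest))  # 2
```

A specialist reverse-ETL vendor's value lies in the parts this sketch omits: incremental diffs, rate limiting, retries, and dozens of destination connectors.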
7
-
R for the Rest of Us
Transforming text data is often a necessary, yet tedious step in your data cleaning process. But the good news is that the {stringr} package can make your text cleaning faster and your life a lot easier. In our newest tutorial, we show you a couple of functions from this package to get you started. You can find the video at https://lnkd.in/gE2dJ-_T Oh and if reading is more your thing, you can find all of the information (including the full code) in our blog post at https://lnkd.in/g6FdZbWA #R #datacleaning #programming
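The tutorial's functions come from R's {stringr}; as a rough Python analogue of the same kind of cleanup (trim, squish, lowercase) using only the standard library:

```python
import re

def clean(text: str) -> str:
    text = text.strip()               # like stringr's str_trim
    text = re.sub(r"\s+", " ", text)  # like str_squish: collapse whitespace runs
    return text.lower()               # like str_to_lower

print(clean("  Data   Cleaning \n in R  "))  # "data cleaning in r"
```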
7
-
Tim Gasper
#SemanticLayer How would you define it? Semantic Layer has been a hot topic for a few years, now even more relevant because of AI. I think I would define a Semantic Layer as an: * abstraction layer on your data, * combined with metadata, * to translate it into the language of your business * and enable understanding by humans and AI. (And knowledge graphs are an ideal way to model the semantic layer and/or the metadata that combines with it. In a way that is both human and machine readable, and can scale to include all concepts and data.) What would be your definition for #SemanticLayer? Jeremy Blaney Juan Sequeda Malcolm Hawker Veronika Durgin ✨Shane Gibson Santona Tuli, Ph.D. Jon Cooke Chris Tabb Joe Reis 🤓
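That definition can be made concrete with a tiny sketch: a mapping from business terms to physical columns plus metadata, resolved into queries. All table and column names below are invented for illustration:

```python
# A hypothetical minimal semantic layer: business vocabulary on top of
# physical tables, with metadata readable by humans and machines alike.
SEMANTIC_LAYER = {
    "revenue": {
        "table": "fct_orders",
        "column": "gross_amount_usd",
        "description": "Gross order value in USD",
    },
    "active customers": {
        "table": "dim_customers",
        "column": "customer_id",
        "description": "Customers with an order in the last 90 days",
    },
}

def to_sql(term: str) -> str:
    # Translate a business term into a query over the physical schema.
    entry = SEMANTIC_LAYER[term]
    return f'SELECT {entry["column"]} FROM {entry["table"]}'

print(to_sql("revenue"))  # SELECT gross_amount_usd FROM fct_orders
```

A knowledge graph generalizes this flat dictionary: terms become nodes, and relationships between them (revenue per active customer, say) become traversable edges.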
48
44 comments -
Dataminded
🚀 New Blog Article Alert! 🚀 Struggling to make your data team effective? Did you know that 79% of data teams face accountability issues due to unclear data ownership? 🔍 Dive deep into the latest blog article “The building blocks of successful Data Teams” by Niels Claeys to uncover: 💠 How data ownership can help your team 💠 The importance of focusing on business outcomes 💠 Which software best practices are relevant for data engineers 💠 Tips for creating a self-service data platform 💠 Why a company-wide data strategy matters Ready to level up your data team's game? 💡 Click the link to read the article and join the discussion! 📈 [Read the full blog here: https://hubs.li/Q02v_7nj0] 📚 #DataTeamSuccess #DataOwnership #BusinessOutcomes #DataMinded
6
-
CKearney Consulting
What's in your generative AI 🎒backpack? You might recognize the coloring page we created using DALL-E of an approachable data nerd with a pretty cool backpack. Are you curious what one might put in there for a journey through a magical data forest? Our latest blog article describes the essentials to keep your work with generative AI secure, ethical and effective. Read it here >> https://bit.ly/4aPiaNk Artwork: colored by CKC administrative assistant Aydan Kearney 🎨 🎉
5
2 comments -
Matt Paige
When it comes to Gen AI, the chatbot is just the tip of the snowflake that just landed on the iceberg... This is starting to become more apparent as multimodal capabilities become mainstream. (see ChatGPT 4o and Google's Astra) The ability to develop agents, and to orchestrate those agents in your current workflows both internally and externally, is going to be an important evolution of Gen AI in the near future. Check out the latest episode of the Built Right Podcast where I sit down with Amber Prause, Digital Product Owner and Conversational Designer, about the evolution Gen AI is driving in the practice of conversation design. Find it on Apple and Spotify 🎧
4