This presentation was made on June 30th, 2020. Recording of the presentation is available here: https://youtu.be/9LajqAL_CU8 As enterprises “make their own AI”, a new set of challenges emerges. Maintaining reproducibility, traceability, and verifiability of machine learning models, as well as recording experiments, tracking insights, and reproducing results, are key. Collaboration between teams is also necessary as “model factories” are created for enterprise-wide data science efforts. Additionally, monitoring of models ensures that drift or performance degradation is addressed with either retraining or model updates. Finally, data and model lineage is necessary for rollbacks and for regulatory compliance. H2O ModelOps delivers centralized cataloging and management, deployment, monitoring, collaboration, and administration of machine learning models. In this webinar, we learn how H2O can assist with operationalizing, scaling, and managing production deployments. Speaker's Bio: Felix is part of the Customer Success team in Asia Pacific at H2O.ai. An engineer and an IIM alumnus, Felix has held prominent positions in the data science industry.
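The drift monitoring mentioned above can take many forms; one common convention (a minimal sketch, not part of H2O ModelOps itself, and the bin proportions and 0.2 cutoff are illustrative assumptions) is the population stability index (PSI) between a baseline and a current score distribution:

```python
import math

# Sketch of a drift check via the population stability index (PSI).
# Inputs are pre-binned score proportions; a higher PSI means the
# production distribution has moved further from the baseline.
# (The 0.2 alert threshold is a common rule of thumb, not a standard.)

def psi(expected, actual, eps=1e-6):
    """PSI over matching bins of two probability distributions."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score bins at training time
current = [0.10, 0.20, 0.30, 0.40]    # score bins in production

drifted = psi(baseline, current) > 0.2
print(drifted)
```

When such a check fires, the retraining or model-update path the abstract describes would be triggered.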
This video was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/TlmaF6zT43Q This talk will walk through a use case for Driverless AI within the manufacturing sector. We will discuss the motivation and tool selection process, then cover the solution development in detail. The solution development coverage will detail how Driverless AI was applied to the problem and how the resulting models are delivered to the customer. Bio: Robert Coop leads the Artificial Intelligence and Machine Learning team within the Digital Accelerator at Stanley Black & Decker. He has been working with machine learning techniques for the past 10 years and has spent the majority of this time practicing data science and leading teams within an enterprise environment. Robert also currently teaches the Georgia Tech Data Science and Analytics Boot Camp as part of the Georgia Tech Professional Education Program. Robert holds a Ph.D. in Computer Engineering with a focus on machine learning, covering neural network architectures, training algorithms, and ensemble techniques.
These slides were presented by Marios Michailidis and John Spooner at Dive into H2O: London on June 17, 2019. Marios's session can be found here: https://youtu.be/GMtgT-3hENY John's session can be found here: https://youtu.be/5t2zw4bVfsw
This presentation was made on June 18, 2020. Video recording of the session can be viewed here: https://youtu.be/YEtDwYSXXJo For many companies, model documentation is a requirement for any model to be used in the business. For other companies, model documentation is part of a data science team’s best practices. Model documentation includes how a model was created, training and test data characteristics, what alternatives were considered, how the model was evaluated, and information on model performance. Collecting and documenting this information can take a data scientist days to complete for each model. The model document needs to be comprehensive and consistent across various projects. The process of creating this documentation is tedious for the data scientist and wasteful for the business because the data scientist could be using that time to build additional models and create more value. Inconsistent or inaccurate model documentation can be an issue for model validation, governance, and regulatory compliance. In this virtual meetup, we will learn how to create comprehensive, high-quality model documentation in minutes, saving time, increasing productivity, and improving model governance. Speaker's Bio: Nikhil Shekhar: Nikhil is a Machine Learning Engineer at H2O.ai. He is currently working on our automatic machine learning platform, Driverless AI. He graduated from the University at Buffalo with a major in Artificial Intelligence and is interested in developing scalable machine learning algorithms.
These slides were presented by Dmitry Baev, Pratap Ramamurthy and Karthik Kannappan at our AWS DevDay in Toronto, Canada on July 17, 2019.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/aXPE6IiKRmI The 2018 Brazilian Presidential Elections represented a tangible demonstration of radical change in the way candidates conduct their campaigns, as the shift from traditional media to social media hit the shore of the largest country in the southern hemisphere. Analyzing the political agenda, the broadcast TV-based debates, and the exchanges on social media networks was an NLP feast that The AI Academy reckoned was too good to pass up. In this panel, we present the work we conducted and show how Driverless AI helped us accelerate our NLP experiments thanks to the recent introduction of advanced text analytics recipes. Bio: Maker/Dreamer/Iconoclast/Chaordic Leader with over 20 years of experience across a number of high-tech industries around the world. Curiosity towards new technologies and the ability to adapt to different cultural and social environments have taken him from a research lab in Italy to a startup in Denmark, to a multinational technology company in Silicon Valley, and ultimately to a leading broadband and video service provider in Brazil. Time and again his career journey has demonstrated his ability to recognize high-potential disruptive ideas at a very early stage and the determination to transform an idea into a real product/service. Over the past seven years, Carmelo cultivated his passion for innovation by leading major technology incubations at a large telecom operator, supporting the Brazilian startup ecosystem as a mentor at a startup accelerator, and continuously extending his business and technology knowledge through a blend of formal learning and hands-on project implementations. His focus over the past few years has been on Data Science and Artificial Intelligence, carrying out in-depth technology investigations, product incubations, and solutions development.
By establishing The AI Academy, Carmelo intends to create and foster a rich environment for the study, research, and application of machine/deep learning techniques to real-life use cases, bridging the AI gap between talent and enterprises, elevating Brazil's "AIQ", and putting São Paulo on the world's AI map.
This talk was recorded in London on October 30th, 2018 and can be viewed here: https://youtu.be/CeOJFynB6BE Real-Time AI: Designing for Low Latency and High Throughput Bio: Dr. Sergei Izrailev is Chief Data Scientist at Beeswax, where he is responsible for data strategy and building AI applications powering the next generation of real-time bidding technology. Before Beeswax, Sergei led data science teams at Integral Ad Science and Collective, where he focused on architecture, development, and scaling of data science-based advertising technology products. Prior to advertising, Sergei was a quant/trader and developed trading strategies and portfolio optimization methodologies. Previously, he worked as a senior scientist at Johnson & Johnson, where he developed intelligent tools for structure-based drug discovery.
This document provides a blueprint for developing a human-centered machine learning framework that combines techniques from AutoML, interpretable models, fairness, and post-hoc explanations to create low-risk models. It outlines steps for data exploration, benchmarking, training interpretable models, performing post-hoc analysis, implementing human review processes, and continually iterating to improve models. Open questions are also discussed around automation levels and implementing human appeals.
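The human review step in such a low-risk framework can be pictured as a confidence-based routing rule: confident predictions proceed automatically, uncertain ones are queued for a person. This is a minimal sketch under that assumption; the function names and the 0.9 threshold are hypothetical, not taken from the blueprint:

```python
# Sketch of a human-review gate: route each prediction either to
# automatic action or to a human reviewer based on model confidence.
# (Names and the 0.9 threshold are illustrative assumptions.)

def route_prediction(label, confidence, threshold=0.9):
    """Return ("auto", label) when confidence clears the threshold,
    otherwise ("human_review", label)."""
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)

decisions = [
    route_prediction("approve", 0.97),   # confident -> automatic
    route_prediction("deny", 0.62),      # uncertain -> human queue
]
print(decisions)
```

A human-appeals process, one of the open questions the document raises, would add a second path by which an affected person can force the "human_review" branch regardless of confidence.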
This document discusses using graphical models and machine learning techniques to improve management processes for 21st century businesses. It argues that current management practices have not evolved significantly and are poorly integrated with digital systems. The document proposes designing management tools and business models based on principles of continuous learning and integration between human and machine systems. It presents examples like the machine learning canvas and Wardley mapping to help conceptualize business problems and solutions in a way that facilitates machine learning. The goal is to develop tools that allow businesses to constantly adapt and improve using data and predictive analytics.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/4a_Y0L7suBc AI is real. Enterprises use it to automate decisions, hyper-personalize customer experiences, streamline operational processes, and much more. However, for most enterprise technology leaders, AI technologies and use cases are still far too mysterious. The field is moving fast. Enterprise leaders must forge a coherent, pragmatic AI strategy that is tied to business outcomes. In this session, guest speaker Forrester Research Vice President & Principal Analyst Mike Gualtieri will demystify enterprise AI, identify use cases most likely to succeed, and, most importantly, provide key advice to enterprise leaders that are charged with moving AI forward in their organization. Bio: Mike's research focuses on software technologies, platforms, and practices that enable technology professionals to deliver digital transformations that lead to prescient digital experiences and breakthrough operational efficiency. His key technology coverage areas are AI, machine learning, deep learning, AI chips and systems, digital decisions, streaming analytics, prescriptive analytics, big data analytical platforms and tools (Hadoop/Spark/Flink; translytical databases), optimization, and emerging technologies that make software faster and smarter. Mike is also a leading expert on the intersection of business strategy, artificial intelligence, and innovation. Mike provides technology vendors with actionable, fine-tuned advisory sessions on strategy, messaging, competitive analysis, buyer-persona analysis, market trends, and product road maps for the areas he directly covers and adjacent areas that wish to launch into new markets or use new technologies. Mike is a recipient of the Forrester Courage Award for making bold calls that inspire leaders and guide great business and technology decisions.
This session was recorded in NYC on October 22nd, 2019 and can be viewed here: https://youtu.be/micyBEIoE0Q Leveraging Data for Successful Ad Campaigns Marketing dollars should be spent to reach real people and make digital campaigns successful. IAS leverages large amounts of data and machine learning software to measure, analyze, and predict on billions of digital advertisements every day. I’ll be discussing how we do this in the context of fraud detection and brand safety, helping to ensure marketing dollars are used to reach the right people. Bio: With a desire for problem-solving and handling messy data, Amitpal Tagore completed a PhD and postdoc in astrophysics. Using the skills gained in academia, he became a data scientist at Vydia, working with rising artists on social media. Currently, Amit is a data scientist in the fraud detection lab at Integral Ad Science.
Seldon provides an open platform for deploying machine learning models at scale. It helps companies bring machine learning to life through its Seldon Core platform, which provides a control plane for managing ML workflows and inference graphs. Seldon Core supports deploying models built with any ML framework or language and integrates with common serving systems. It also includes powerful processing components for routing traffic, transforming data, and monitoring models.
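An inference graph of the kind described chains components such as transformers, predictors, and routers. The following is a framework-agnostic sketch of that idea in plain Python; the component names are illustrative and this is not Seldon Core's actual API:

```python
# Sketch of an inference graph: input flows through a transformer,
# then a router selects one of two predictor models.
# (All names are illustrative; this is not Seldon Core's API.)

def scale(x):                 # transformer: normalize raw inputs
    return [v / 100.0 for v in x]

def model_a(x):               # predictor A: mean of the features
    return sum(x) / len(x)

def model_b(x):               # predictor B: max of the features
    return max(x)

def router(x):                # router: pick a branch from the data
    return model_a if len(x) > 2 else model_b

def inference_graph(x):
    x = scale(x)
    return router(x)(x)

print(inference_graph([50, 100, 150]))   # routed to model_a
```

In a real deployment, each node would be a separately served model or processing component, with the platform's control plane wiring them together and monitoring the traffic between them.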
1. Driverless AI can be used across many industries like banking, healthcare, telecom, and marketing to save time and money through tasks like fraud detection, customer churn prediction, and personalized recommendations.
2. The document highlights new features in Driverless AI 1.7.1, including improved time series recipes, natural language processing features, automatic visualization, and machine learning interpretability tools.
3. Driverless AI provides fully automated machine learning through techniques such as automatic feature engineering, model tuning, standalone scoring pipelines, and massively parallel processing to find optimal solutions.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/LUwMtXM2q88 In the current world of data science, there are many available data sources, big data platforms, and advanced machine learning and AI technologies. It has become easier and easier to derive predictive value in an efficient and streamlined way, and just as easy to lose sight of objectives, especially in the business world. This session will focus on keeping the business context and objective as the navigator for these powerful tools we have at our disposal. Through this discussion, I will review how to use tools like explainable AI and Driverless AI to your advantage rather than letting the tools set the direction. Bio: At Equifax, Tom leads the Data and Analytics consulting practice. Previously, Tom was the US Consumer and Commercial Data Sciences Leader. Tom joined Equifax in July of 2009. He brings several years of analytical experience leading teams on statistical modeling engagements, analysis, and consultation across several verticals, including telecommunications, lending, mortgage, automotive, and marketing. Prior to Equifax, Tom was a data science manager at Experian and a Risk Modeler/Manager at American General Finance (now OneMain Financial). Tom holds a Master of Science in Applied Statistics from Purdue University, and a Bachelor of Science in Mathematics with a concentration in Statistics, also from Purdue University.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/VAW2eDht7JA Bio: Krish Swamy is an experienced professional with deep skills in applying analytics and big data capabilities to challenging business problems and driving customer insights. Krish's analytic experience includes marketing and pricing, credit risk, digital analytics, and most recently, big data analytics and data transformation. His key experiences lie in banking and financial services and the digital customer experience domain, with a background in management consulting. Other key skills include influencing organizational change towards a data- and analytics-driven culture, and building teams of analysts, statisticians, and data scientists. Bio: Balaji Gopalakrishnan has over 15 years of experience in the Machine Learning and Data Science space. Balaji has led cross-functional data science and engineering teams developing cutting-edge machine learning and cognitive computing capabilities for insurance fraud and underwriting, telematics, multi-asset-class risk, scheduling under uncertainty, and others. He is passionate about driving AI adoption in organizations and strongly believes in the power of cross-functional collaboration for this purpose.
In this virtual meetup, we give an introduction to the #1 open-source machine learning platform, H2O-3, and show you how you can use it to develop models that solve different use cases.