In this session, you will learn what to do after you’ve taken an AI transformation baseline. Over the span of this session, we will discuss the next steps toward AI readiness: aligning talent and tools to drive successful adoption and continuous use within an organization. To find additional videos, earn badges, and join AI courses, visit the H2O.ai Learning Center: https://training.h2o.ai/products/ai-foundations-course To find the YouTube video of this presentation: https://youtu.be/K1Cl3x3rd8g Speaker: Chemere Davis (H2O.ai - Senior Data Scientist Training Specialist)
Numerai is an open, crowd-sourced hedge fund powered by predictions from data scientists around the world. In return, participants are rewarded with weekly payouts in crypto. In this talk, Joe will give an overview of the Numerai tournament based on his own experience. He will then explain how he automates time-consuming tasks such as testing different modelling strategies, scoring new datasets, and submitting predictions to Numerai, as well as monitoring model performance with H2O Driverless AI and R.
Presented at #H2OWorld 2017 in Mountain View, CA. Enjoy the video: https://youtu.be/ZrlJQqNaSMI. Learn more about H2O.ai: https://www.h2o.ai/. Follow @h2oai: https://www.twitter.com/h2oai.
Data is the only vertical: machine learning, big data, artificial intelligence - Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai - To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
In this virtual meetup, we give an introduction to the #1 open-source machine learning platform, H2O-3, and show you how you can use it to develop models for solving different use cases.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/aXPE6IiKRmI The 2018 Brazilian Presidential Elections represented a tangible demonstration of radical change in the way candidates conduct their campaigns, as the shift from traditional media to social media hit the shore of the largest country in the southern hemisphere. Analyzing the political agenda, the broadcast TV-based debates, and the exchange on social media networks was an NLP feast that The AI Academy reckoned was too good to pass up. In this panel, we present the work we conducted and show how Driverless AI helped us accelerate our NLP experiments thanks to the recent introduction of advanced text analytics recipes. Bio: Maker/Dreamer/Iconoclast/Chaordic Leader with over 20 years of experience across a number of high-tech industries around the world. Curiosity towards new technologies and the ability to adapt to different cultural and social environments has taken him from a research lab in Italy to a startup in Denmark, to a multinational technology company in Silicon Valley, and ultimately to a leading broadband and video service provider in Brazil. Time and again his career journey has demonstrated his ability to recognize high-potential disruptive ideas at a very early stage and the determination to transform an idea into a real product or service. Over the past seven years, Carmelo cultivated his passion for innovation by leading major technology incubations at a large telecom operator, supporting the Brazilian startup ecosystem as a mentor at a startup accelerator, and continuously extending his business and technology knowledge through a blend of formal learning and hands-on project implementations. His focus over the past few years has been on Data Science and Artificial Intelligence, carrying out in-depth technology investigations, product incubations, and solutions development.
By establishing The AI Academy, Carmelo intends to create and foster a rich environment for the study, research, and application of machine/deep learning techniques to real-life use cases, bridging the AI gap between talent and enterprises - and, furthermore, elevating Brazil's "AIQ" by putting São Paulo on the world's AI map.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/VAW2eDht7JA Bio: Krish Swamy is an experienced professional with deep skills in applying analytics and big data capabilities to challenging business problems and driving customer insights. Krish's analytics experience includes marketing and pricing, credit risk, digital analytics, and most recently, big data analytics and data transformation. His key experience lies in banking and financial services and the digital customer experience domain, with a background in management consulting. Other key skills include influencing organizational change towards a data- and analytics-driven culture, and building teams of analysts, statisticians, and data scientists. Bio: Balaji Gopalakrishnan has over 15 years of experience in the machine learning and data science space. Balaji has led cross-functional data science and engineering teams developing cutting-edge machine learning and cognitive computing capabilities for insurance fraud and underwriting, telematics, multi-asset class risk, scheduling under uncertainty, and others. He is passionate about driving AI adoption in organizations and strongly believes in the power of cross-functional collaboration for this purpose.
The document discusses H2O.ai's Driverless AI product, which aims to automate and simplify the machine learning process. It provides an overview of H2O.ai as a company and its goal of operationalizing data science. Driverless AI uses techniques like automated feature engineering, model tuning and selection, and model ensembling to build accurate models fast. It also allows for interpreting and explaining machine learning models through features like model inspection and reason codes. A demo of Driverless AI predicting credit card default risk is shown to illustrate the system.
This video was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/TlmaF6zT43Q This talk will walk through a use case for Driverless AI within the manufacturing sector. We will discuss the motivation and tool selection process, then cover the solution development in detail. The solution development coverage will detail how Driverless AI was applied to the problem and how the resulting models are delivered to the customer. Bio: Robert Coop leads the Artificial Intelligence and Machine Learning team within the Digital Accelerator at Stanley Black & Decker. He has been working with machine learning techniques for the past 10 years and has spent the majority of this time practicing data science and leading teams within an enterprise environment. Robert also currently teaches the Georgia Tech Data Science and Analytics Boot Camp as part of the Georgia Tech Professional Education Program. Robert holds a Ph.D. in Machine Learning (Computer Engineering), where he focused on neural network architectures, training algorithms, and ensemble techniques.
H2O.ai provides open source machine learning platforms and enterprise AI solutions that help companies implement artificial intelligence. It offers tools for data scientists to build models using Python and R and also provides support services to help customers successfully deploy models in production. H2O.ai aims to democratize AI and help companies become AI-driven by leveraging its experts, community knowledge, and world-class technology.
In this talk we will share the idea of developing a self-guiding application that provides the most engaging user experience possible, using crowd-sourced knowledge on a mobile interface. We will discuss and share how historical usage data can be mined using machine learning to identify application usage patterns and generate probable next actions. #h2ony - Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai - To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
This in-depth training on H2O Driverless AI was given by Wen Phan on June 28th, 2018. He elaborated on the automatic feature engineering, machine learning interpretability, and automatic visualization components of this groundbreaking product.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/cnU6sqd31JU Developing meaningful AI applications requires complete data lifecycle management. Sourcing, harvesting, labelling, and ensuring the conduit to consume data structures and repositories are critical for model accuracy... but this is one of the least talked about subjects. Intel’s optimized technologies enable efficient delivery of complete data samples to develop (and deploy) meaningful outcomes. During this session, we’ll review the considerations and criticality of data lifecycle management for the AI production pipeline. Bio: Meg brings more than 17 years of global product, engineering, and solutions experience. She is presently a Solutions Architect with Intel Corporation specializing in Visual Compute and AAI (Analytics and AI) Architecture. She is passionate about the potential for technology to improve the quality of people’s lives and humanity as a whole.
This session was held by Vladimir Brenner, Partner Account Manager, Disruptors & AI, Intel AI at the Dive into H2O: London training on June 17, 2019. Please find the recording here: https://youtu.be/60o3eyG5OLM
The initial version of a maturity roadmap to help guide businesses when adopting AI technology into their workflow. IBM Watson Studio is referenced as an example of technology that can help in accelerating the adoption process.
This document provides a summary of the state of artificial intelligence (AI) research and developments over the past year. It covers key areas like research breakthroughs, talent, industries utilizing AI, and public policy issues related to AI. The document is produced by two authors in East London as a way to capture the progress of AI and spark discussion about its implications. It includes sections on research breakthroughs in areas like transfer learning, advances in hardware that have enabled progress, and the use of video datasets to help machines understand scenes and actions to gain a level of common sense.
This session was recorded in San Francisco on February 5th, 2019 and can be viewed here: https://youtu.be/4a_Y0L7suBc AI is real. Enterprises use it to automate decisions, hyper-personalize customer experiences, streamline operational processes, and much more. However, for most enterprise technology leaders, AI technologies and use cases are still far too mysterious. The field is moving fast. Enterprise leaders must forge a coherent, pragmatic AI strategy that is tied to business outcomes. In this session, guest speaker Forrester Research Vice President & Principal Analyst Mike Gualtieri will demystify enterprise AI, identify use cases most likely to succeed, and, most importantly, provide key advice to enterprise leaders who are charged with moving AI forward in their organizations. Bio: Mike's research focuses on software technologies, platforms, and practices that enable technology professionals to deliver digital transformations that lead to prescient digital experiences and breakthrough operational efficiency. His key technology coverage areas are AI, machine learning, deep learning, AI chips and systems, digital decisions, streaming analytics, prescriptive analytics, big data analytical platforms and tools (Hadoop/Spark/Flink; translytical databases), optimization, and emerging technologies that make software faster and smarter. Mike is also a leading expert on the intersection of business strategy, artificial intelligence, and innovation. Mike provides technology vendors with actionable, fine-tuned advisory sessions on strategy, messaging, competitive analysis, buyer-persona analysis, market trends, and product road maps for the areas he directly covers and adjacent areas that wish to launch into new markets or use new technologies. Mike is a recipient of the Forrester Courage Award for making bold calls that inspire leaders and guide great business and technology decisions.
Artificial intelligence is becoming a hot topic due to recent advances in hardware capabilities, neural networks research, and technology investments. Deep learning is driving this resurgence by using neural networks with multiple layers to interpret nonlinear relationships in high-dimensional data. Deep learning is delivering improved performance on complex problems and creating value with little domain knowledge required. The presentation provides examples of AI applications in industries like banking, automotive, and healthcare. It also outlines steps to get started with an AI pilot project and developing an AI strategy and roadmap.
About the webinar The Internet is a rich source of data, mainly textual data. But making use of huge quantities of data is a complex and time-consuming task. NLP can help with this problem through the use of Named Entity Recognition (NER) systems. Named entities are terms that refer to names, organizations, locations, values, etc. NER annotates text, marking where named entities occur and what type they are. This step significantly simplifies further use of such data, allowing for easy categorization of documents, sentiment analysis, improved automatically generated summaries, and more. Further, in many industries the vocabulary keeps changing and growing with new research, abbreviations, and long, complex constructions, which makes it difficult to get accurate results or to use rule-based methods. Named Entity Recognition and Classification can help to effectively extract, tag, index, and manage this fast- and ever-growing knowledge. Through this webinar, we will understand how NER can be used to extract key entities from large volumes of text data. What you will learn - How organizations are leveraging Named Entity Recognition across various industries - Live demo: identify & classify complex terms with NERC (Named Entity Recognition & Categorization) - Best practices to automate machine learning models in hours, not months
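To make the idea of NER annotation concrete, here is a minimal, illustrative sketch in Python. It uses a tiny hand-built gazetteer (lookup table) rather than a trained model, so the entity lists and labels are assumptions purely for demonstration; real NER systems learn these patterns from annotated corpora.

```python
import re

# Toy gazetteer-based entity tagger -- purely illustrative, NOT a real NER
# system. The names and labels below are made up for this example.
GAZETTEER = {
    "ORG": {"Numerai", "H2O.ai", "Intel"},
    "LOC": {"San Francisco", "London", "Brazil"},
}

def tag_entities(text):
    """Return (entity_text, label, start_offset) tuples found in `text`,
    sorted by where each entity occurs."""
    found = []
    for label, names in GAZETTEER.items():
        for name in names:
            for match in re.finditer(re.escape(name), text):
                found.append((name, label, match.start()))
    return sorted(found, key=lambda t: t[2])

print(tag_entities("Intel presented in San Francisco."))
```

The output, a list of (text, type, position) annotations, is exactly the kind of markup that downstream tasks such as document categorization or indexing consume; trained NER models produce the same shape of output with learned, far more robust detection.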
Watch the PPT to learn how intelligent automation enables companies to build remote working capabilities, increase productivity, and optimize the workforce.
Businesses are building digital platforms with modern architecture principles like domain-driven design, microservices, and event-driven design. These platforms are becoming ever more modular, flexible, and complex. While they are built on principles like loose coupling, independent scaling, and plug-and-play components, data regulations and security considerations add complexity that leaves many unknowns and grey areas across the architecture. Details of how the different components of this complex architecture interact with each other are lost. Generating insights becomes a multi-team, multi-stage, and hence multi-day activity. Multiple users and stakeholders of the platform want different and timely insights to take both corrective and preventive actions. Business teams want to know how business is doing in every corner of the country in near real time at zipcode granularity. Tech teams want to correlate flow changes with system health, including downstream stability, as it happens. Knowing these details also helps in providing feedback to the platform itself, to make it more efficient, and to the underlying business process. In this talk we share how we made all the business and technical insights of a complicated platform available in real time with limited incremental effort and constant validation of the ideas and slices with business teams. Since the client was a banking client, we will also touch on handling financial data in a secure way while still enabling insights for a large group of stakeholders. We kept the self-service aspect at the center of our solution - to accommodate increasing components in the source platform and evolving requirements, and even to support new platforms altogether. Configurability and scalability were key here; it was important that all the data collected from the source platform was discoverable and presentable.
This also led to evolving the solution along the lines of domain data products, where data is generated and consumed by those who understand it best.
About the webinar It only takes one bad interaction for a customer to abandon a service or product. Businesses are no longer just competing with other companies’ products; they’re competing with a customer’s last service experience. Contact centers worldwide are looking for new and strategic ways to increase operational performance, reduce cost, and still provide high-touch customer experiences that improve customer loyalty and highlight ways to increase revenue and productivity. Through this webinar, we will understand how AI can augment the effort, focus, and problem-solving abilities of human agents so that they can tackle more complex or creative tasks. With an abundance of data from logs, emails, chat, and voice recordings, contact centers can ingest this data to provide contextual customer service at the right time and in the right way, delivering satisfactory customer service and retaining brand value. What you'll learn: - How organizations are leveraging AI & machine learning in customer service - Live demo of AI & ML in customer service - Best practices to automate machine learning models To explore more, visit: https://skyl.ai/form?p=start-trial
This document provides an overview of a proposed "Superdata Solution" or "Command Center" to help various personas within an organization better access and utilize data. It describes current challenges around isolated data solutions and proposes consolidating different data sources onto a centralized data platform to provide self-serve data and insights. Key aspects of the proposed solution include a data lake, data marts, orchestration services, data transformation/ML tools, and serving data through dashboards, APIs and reports to help business users, developers and other teams.
This document provides an overview of how to classify documents automatically using natural language processing (NLP). It begins with introducing NLP text classification and the types of classification that can be performed at the document, paragraph, sentence, and sub-sentence levels. It then discusses several business applications of content classification including legal document discovery, enabling customer support, and online content classification. The document demonstrates a live classification of news articles into categories. It also discusses challenges of implementing AI/ML projects and best practices for data collection, quality, security, labeling, infrastructure, skills, speed and continuous improvement. It promotes the capabilities of Skyl.ai as an ML automation platform to help overcome these challenges.
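The document-level classification described above can be sketched with a minimal keyword-profile scorer. This is a toy illustration under the assumption of two hand-picked categories ("legal" and "support") with made-up keyword weights; production systems like the one in the document learn such weights from labeled training data instead.

```python
from collections import Counter

# Toy document-level classifier: scores a document against hand-built
# keyword profiles. The categories and weights here are invented for
# illustration only -- real classifiers learn them from labeled data.
PROFILES = {
    "legal":   Counter({"contract": 2, "clause": 2, "liability": 1}),
    "support": Counter({"refund": 2, "ticket": 2, "password": 1}),
}

def classify(doc):
    """Return the category whose keyword profile best matches `doc`."""
    tokens = Counter(doc.lower().split())
    scores = {cat: sum(tokens[word] * weight for word, weight in prof.items())
              for cat, prof in PROFILES.items()}
    return max(scores, key=scores.get)

print(classify("Please reset my password and check my ticket status"))
# → support
```

The same scoring idea extends to paragraph- or sentence-level classification by simply changing the unit of text passed to `classify`.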
This is our offering for data science project methodologies. We offer our expertise in transforming your enterprise for the next big data revolution through data science projects.
AI is transforming every aspect of our daily lives, and the data landscape is becoming increasingly open and transparent thanks to the Consumer Data Right, most notably Open Banking. Between high-level academia and low-level algorithms, where should the modern business leader start on their AI journey to harness true value from their data? Let us show you a step-by-step, data-driven approach to enterprise-wide AI adoption.
About the webinar It’s no secret that a well-organized product catalog becomes extremely crucial as consumers look for a richer and more consistent online experience while e-shopping. Often, the task of digitizing the catalog of fast-moving, large-volume products becomes daunting due to insufficient, erroneous, and fragmented data. This leads us to the question: if e-commerce and fashion companies need to be agile and consumer-friendly, then why are so many still using the same product catalog management methods that were devised years ago? Manual product classification and data attribution processes only lead to an increased risk of error and time delays, affecting brand reputation. They also lead to lost sales opportunities due to incomplete or inaccurate product records that don’t reflect the actual product. In this webinar, we will discuss how to efficiently manage machine learning projects without tech headaches by plugging in your data and building your models instantly. What you will learn - How e-commerce companies are using AI to drive more sales and a seamless customer experience - The secret sauce of automating time-intensive, repetitive steps to quickly build models - Demo: a deeper understanding of the end-to-end machine learning workflow for fashion product catalog management using Skyl.ai
The document describes a Driverless ML API that was created to automate machine learning workflows including feature engineering, model validation, tuning, selection, and deployment. The API uses machine learning interpretability techniques to provide visualizations and explanations of models. It aims to help scale data science efforts and enable both expert and junior data scientists to more quickly develop accurate, production-ready models. Key capabilities of the API include automated exploratory data analysis, feature selection and engineering, model selection and hyperparameter tuning using GPUs for faster training, and model interpretability visualizations.