Unlocking the True Power of AI by Turning Conventional ML Wisdom On Its Head


At its core, machine learning is an experimental science. To drive true AI innovation, you must accept that commonly held knowledge — or methods that have worked in the past — may not be your best route to solving new problems. It’s vital to rethink how you approach your training data and how you evaluate performance metrics.

This isn’t always what teams want to hear when developing a new product; however, breakthroughs can be worth the extra days on the timeline. It’s a reminder of why many of us became data scientists, engineers, and innovators in the first place: we’re curious, and we’ll do what it takes to solve even seemingly impossible problems.

I’ve witnessed the success of applying this concept first-hand with my team at Ultraleap, developing diverse machine learning models that meet the demanding hand-tracking needs of businesses and consumers alike, driving the future of virtual interaction. 

How Challenges Can Become Opportunities with Machine Learning (ML) Experimentation

Many businesses and industries have unique challenges with ML deployment that the generic, one-size-fits-all solutions currently on the market don’t address. This can be due to the complexities of their application domains, a lack of budget and available resources, or being in a niche market that might not attract the attention of large tech players. One such domain is developing ML models for defect inspection in vehicle manufacturing. To spot small defects across the large surface area of a car on a moving assembly line, you must work within the constraint of low frame rate but high resolution.

My team and I face the opposite side of the same constraint when applying ML to hand-tracking software – resolution can be low, but frame rate must be high. Hand tracking uses ML to identify human gestures, creating more natural and lifelike user experiences within a virtual setting. The AR/VR headsets we’re developing this software for typically run at the edge with constrained compute, so we cannot deploy massive ML models. They must also respond faster than the speed of human perception. Furthermore, given that it’s a relatively nascent space, there is little industry data available for us to train with.

These challenges force us to be as creative and curious as possible when developing hand-tracking models — reimagining our training methods, questioning data sources, and experimenting not just with different model quantisation approaches but also with compilation and optimisation. We don’t stop at looking at model performance on a given dataset; we iterate on the data itself and experiment with how the models are deployed. While this means that the vast majority of the time we’re learning how not to solve for “x”, it also means that our discoveries are even more valuable. For example, we have created a system that operates with 1/100,000th of the computing power of, say, ChatGPT, while maintaining the imperceptibly low latency that makes your virtual hands precisely track your real hands. Solving these hard problems, whilst a challenge, also gives us a commercial advantage: our tracking runs at 120 Hz compared to the norm of 30 Hz, delivering a better experience in the same power budget. This is not unique to our problems – many businesses face specific challenges in niche application domains that offer the tantalizing prospect of turning ML experimentation into market advantage.
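To make the frame-rate claim concrete: at 120 Hz the entire pipeline has roughly 1000/120 ≈ 8.3 ms per frame, versus about 33 ms at 30 Hz. The sketch below is purely illustrative rather than Ultraleap’s actual pipeline; it assumes a PyTorch workflow, uses a hypothetical stand-in network, and shows one kind of quantisation experiment mentioned above (post-training dynamic quantisation) timed against that budget.

```python
# Illustrative only: compare a float32 model against an int8 dynamically
# quantised copy on the per-frame budget a 120 Hz tracker implies (~8.3 ms).
import time
import torch
import torch.nn as nn

# Stand-in network; a real hand-tracking model would be convolutional.
# 63 outputs = 21 hand keypoints x 3 coordinates (an assumed layout).
model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 63))
model.eval()

# Post-training dynamic quantisation: Linear weights stored as int8.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def mean_latency_ms(m: nn.Module, runs: int = 200) -> float:
    """Average single-input inference latency in milliseconds."""
    x = torch.randn(1, 256)
    with torch.no_grad():
        m(x)  # warm-up pass so one-time setup costs are excluded
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs * 1e3

budget_ms = 1000.0 / 120  # one frame at 120 Hz leaves ~8.3 ms end to end
for name, m in [("float32", model), ("int8-dynamic", quantised)]:
    print(f"{name}: {mean_latency_ms(m):.3f} ms/frame (budget {budget_ms:.1f} ms)")
```

Which combination of quantisation, compilation, and optimisation actually fits the budget is exactly the kind of question that only measurement on the target hardware can settle.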

By nature, machine learning is always evolving. Just as pressure creates diamonds, with enough experimentation, we can create ML breakthroughs. But as with any ML deployment, the very backbone of this experimentation is data.   

Evaluating the Data That Trains ML Models

AI innovation often revolves around the model architectures used, along with annotating, labeling, and cleaning data. However, when solving complex problems — for which previous data can be irrelevant or unreliable — these methods aren’t always enough. In these cases, data teams must innovate on the very data used for training. When assembling training data, it’s critical to evaluate what makes data “good” for a specific use case. If you can’t answer that question properly, you need to approach your datasets differently.

While proxies such as data quality, accuracy, dataset size, and model losses are all useful, there is always an element of the unknown that must be explored experimentally when training an ML model. At Ultraleap, we mix simulated and real data in various ways, iterating on our datasets and sources and evaluating them based on the qualities of the models they produce in the real world – we literally test hands-on. This has expanded our knowledge of how to model a hand for precise tracking regardless of the type of image that comes in and on what device – especially useful for creating software compatible across XR headsets. Many headsets operate with different cameras and layouts, meaning ML models must work with new data sources. As such, having a diverse dataset comes in handy.
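One way to make that simulated-to-real mix an explicit experimental variable is to weight how each source is sampled during training. The sketch below is illustrative rather than Ultraleap’s pipeline: it assumes a PyTorch data loader, uses random tensors as stand-ins for rendered and captured hand images, and introduces a hypothetical sim_fraction knob.

```python
# Illustrative only: make the simulated-to-real data mix a tunable variable
# via weighted sampling. The target here is just a source flag
# (0 = simulated, 1 = real) so the resulting mix is easy to verify.
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

sim = TensorDataset(torch.randn(9000, 3, 64, 64),
                    torch.zeros(9000))   # plentiful rendered frames
real = TensorDataset(torch.randn(1000, 3, 64, 64),
                     torch.ones(1000))   # scarce captured frames
combined = ConcatDataset([sim, real])

sim_fraction = 0.7  # hypothetical knob: share of each batch that is simulated
weights = torch.cat([
    torch.full((len(sim),), sim_fraction / len(sim)),
    torch.full((len(real),), (1 - sim_fraction) / len(real)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(combined),
                                replacement=True)
loader = DataLoader(combined, batch_size=64, sampler=sampler)

images, source = next(iter(loader))
print(f"simulated share of batch: {(1 - source.mean()).item():.2f}")  # ~0.70
```

Sweeping sim_fraction and evaluating each resulting model on real devices turns the data mix itself into a measurable experiment rather than a fixed assumption.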

If you are to explore all parts of the problem and all avenues for solutions, you must be open to the idea that your metrics may also be incomplete, and test your models in the real world. Our latest hand-tracking platform, Hyperion, builds on our approach to data evaluation and experimentation to deliver a variety of hand-tracking models addressing specific needs and use cases rather than a one-size-fits-all approach. By not shying away from any part of the problem space, questioning data, models, metrics, and execution, we have models that are not just responsive and efficient but deliver new capabilities, such as tracking despite objects in the hand, or very small microgestures. Again, the message is that broad and deep experimentation can deliver unique product offerings.

Experimentation (From Every Angle) Is Key

The best discoveries are hard-fought; there’s no substitute for experimentation when it comes to true AI innovation. Don’t rely on what you know: answer questions by experimenting with the real application domain and measuring model performance against your task. This is the most critical way to ensure your ML tasks translate to your specific business needs, broadening the scope of innovation and presenting your organization with a competitive advantage. 

About the Author

Iain Wallace is the Director of Machine Learning and Tracking Research at Ultraleap, a global leader in computer vision and machine learning. He is a computer scientist fascinated by application-focused AI systems research and development. At Ultraleap, Iain leads his hand-tracking research team to enable new interactions in AR, VR, MR, out-of-home, and anywhere else you interact with the digital world. He earned his MEng in Computer Systems & Software Engineering at the University of York and his Ph.D. in Informatics (Artificial Intelligence) from The University of Edinburgh.

