I love watching the rapid evolution of real-time, markerless motion capture. Enough of the cartoons in today's VR - let's see the artistry and imagination drive our avatars and infuse magic into immersive experiences. And shared immersive experiences - a visit with your college friends, with family overseas, with your kids when you travel for business - may become so much more memorable, visceral and present, even with that layer of fantasy captured by your digital alter-ego. Now add full-body haptics into the story, and each interaction becomes a rich, shared experience to remember.
Introducing Move Live, a revolutionary new way for brands and studios to create real-time 3D digital experiences.
📸 Markerless motion capture powered by AI
🎥 Real-Time + Post-Processing
🏃‍♀️ Capture people in any environment
🎮 Unreal Engine Plug-In
Learn more: https://lnkd.in/edzGRGU8
#3DAnimation #MotionCapture #MoveLive
Chief Creative Officer at Ultra V.
Future of Storytelling and Immersive Entertainment Experiences.
From Midjourney to 3D and real-time 3D space generation. The two game devs at ilumine AI are experimenting with AI to make digital environment creation simple, scalable, accessible, and fun.
At the moment, you need to manually set a depth map for your image in order to view its correct 3D version. But soon they will offer instant 3D conversion for any image.
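The depth-map step above is essentially a per-pixel displacement: nearer pixels shift more than distant ones as the viewpoint moves. A minimal NumPy sketch of that parallax idea (the function name and toy data are illustrative, not ilumine AI's actual pipeline):

```python
import numpy as np

def parallax_shift(image: np.ndarray, depth: np.ndarray, shift: float) -> np.ndarray:
    """Displace each pixel horizontally in proportion to its depth value.

    image: (H, W, 3) array; depth: (H, W) array normalized to [0, 1].
    Nearer pixels (higher depth value) move further, creating a simple
    parallax effect when `shift` is animated over time.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        # per-pixel horizontal offset scaled by depth
        dx = (depth[y] * shift).astype(int)
        new_x = np.clip(cols + dx, 0, w - 1)
        out[y, new_x] = image[y, cols]
    return out

# Toy example: 4x4 image whose right half is "near" (depth = 1)
img = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(np.uint8)
dep = np.zeros((4, 4))
dep[:, 2:] = 1.0
shifted = parallax_shift(img, dep, shift=1)
```

Real viewers additionally fill the holes the shift leaves behind (inpainting), which is where the AI-generated depth and instant conversion become valuable.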
Cristian Peñas Laplaceta tweeted with this video: "I've decided to create a highlight reel of my work over the past few months, focusing on AI and its potential use in the video game industry" https://lnkd.in/eiPwwzK8
Links to test it out in the comments.
Also check out SKYBOX AI by Blockade Labs; try the new improved sketch mode here: https://lnkd.in/eRtpJZcV
#midjourneyv5 #gamedev #stablediffusion #genai
While I "HATE" A.I. (like many of my Creative peers), we MUST face facts: A.I. is an amazing "tool" with unlimited potential!
As a Game Developer myself, I remember hard-coding a game-engine from scratch back in the day, before Tools like AGK, Unity, or Unreal. (It was a real headache!!!..Mostly from banging my head into the wall during the process)
And I can't begin to tell you how much easier such tools have made my life!
Remember when everyone thought "Music Videos" would "Kill the Radio Star"?
WELL here we are decades later, & we still have Radio Stars!
And decades later, I'm willing to bet, Game Designers/Artists/Developers (and Programmers) will still be around and thriving!
But let's remember: A.I. is only a TOOL, and nothing more. It belongs in the Artist's "hand"...not in his/her "Chair".
This means Game/Tech Companies should not look to "replace" their creative team, but instead put this "Tool" into their hands. It's always more efficient and profitable to "reinvest" in your Team than to "reduce" your headcount.
Check out Jon Finger’s AI-assisted 3D workflow to create this 3D humanoid using:
✏️ Sketches in Procreate
🚚 Style transfer in Magnific AI
🦾 Converted to 3D in 3DAI Studio
🕺 Motion capture with Move AI
This is a great example of how AI tools can unlock a whole new creative process! 🔥🦾
#3DAnimation #MotionCapture #3DProduction
Lately, been having a blast tinkering with stable diffusion. I'm pretty stoked about AI becoming a rad tool for artists and creators down the road, more like a sidekick than a replacement. Imagine cranking out beefed-up, super-detailed renders – no need to sweat it, AI's got our back. Let's not freak out about this tool; it's all about adding it to our creative toolkit. And man, I can't wait to see it pop up in 3D apps, especially in simulation engines. Exciting times ahead! #stable #ai #art
AI just keeps on impressing, right?
Recently, there's this cool new technology that can transform our stop-motion videos into animations using a stable diffusion technique. Here's a quick overview of how it goes.
Has anyone tried it out yet? Share your results with us!
#FUTR #Data #Design #Technology #StableDiffusion #AI
WELCOME TO SYNTH CITY
Every second, our customers receive an incredible amount of unstructured data that is not annotated. Because the data is neither structured nor annotated, developing the high-value video analytics models that transform video into insights and actions takes far more work.
Therefore, we have developed Synth City!
A virtual city created through the Unreal Engine, an advanced real-time 3D creation tool used for creating photoreal visuals (and immersive experiences) in video games. Here, we generate synthetic data, which maintains the characteristics of real-world data – but doesn’t interfere with the privacy of individuals.
Synthetic data is basically an ‘artificial’ dataset containing computer-generated information instead of real-world records. Right now, the synthetic data project at Milestone is running an entire photorealistic city with an unlimited number of characters and other moving figures, such as vehicles.
By changing a host of elements – such as camera angles or character features – we can create an almost infinite number of datasets. For example, starting with a real image, the engine can create multiple synthetic versions of the image in low-light, snow or bright sunshine.
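The variation idea above can be sketched as a simple parameter sweep: each combination of scene parameters yields a distinct render of the same base scene, multiplying one real scenario into many synthetic datasets. The parameter names and values below are purely illustrative, not Milestone's actual configuration:

```python
import itertools

# Hypothetical scene-variation axes (illustrative values only)
LIGHTING = ["daylight", "dusk", "low-light"]
WEATHER = ["clear", "snow", "bright-sunshine"]
CAMERA_ANGLE_DEG = [0, 30, 60]

def scene_variants(base_scene: str):
    """Enumerate render configurations for one base scene.

    Each yielded dict would drive one synthetic render of the same
    underlying scene under different conditions.
    """
    for light, weather, angle in itertools.product(
        LIGHTING, WEATHER, CAMERA_ANGLE_DEG
    ):
        yield {
            "scene": base_scene,
            "lighting": light,
            "weather": weather,
            "camera_angle_deg": angle,
        }

variants = list(scene_variants("intersection_01"))
# 3 lighting x 3 weather x 3 angles = 27 variants from one base scene
```

Adding character features, vehicle types, or time-of-day axes multiplies the count further, which is why the post can speak of an "almost infinite" number of datasets.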
Read more in our annual report: http://ow.ly/Vt7Q104CtkG
#SyntheticData #AI #VideoTechnology
AI in media is moving from text2image and image2video to creating 3D scenes in platforms like Luma or Midjourney. Soon, we'll be able to navigate and make adjustments within these "digital sets" to capture photos and videos.
Credit: Luma
#innovation #genai #artificialintelligence
1/ Stability AI introduces Stable Video 3D (SV3D), a generative model that can create high-quality, consistent, and controllable 3D content from a single frame.
2/ SV3D uses video diffusion models that provide better generalization and view consistency, and comes in two variants: SV3D_u for orbital video without camera control and SV3D_p for video along predetermined camera paths.
3/ SV3D is available for commercial use through the Stability AI membership, while the model weights are available for download for non-commercial use on Hugging Face.
CEO of TerraCorp, Terrex & MJV Capital
Love this! The virtual and the real continue to converge.