As we all know, stop motion animation is notoriously difficult to pull off, largely because it's a mind-numbingly slow process. Each frame of the final video is a separate photograph, and for each one, the characters and props need to be moved just the right amount so that the end result looks smooth. You don't even want to know how long Ben Wyatt spent on Requiem for a Tuesday, though to be fair, it may well have been finished before the next Avatar.
But [Nick Bild] believes his latest project might improve on this classic technique with a dash of artificial intelligence, courtesy of a Jetson Xavier NX. Basically, the Jetson watches the live feed from the camera and runs a hand pose detection model, waiting until there are no hands in the frame. Once the coast is clear, it snaps a photo, then goes back to waiting for the next hands-free opportunity. With the photos being taken automatically, you're free to focus on moving your characters in a convincing way.
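To make the idea concrete, here's a minimal sketch of that hands-free trigger loop, written for a desktop webcam using OpenCV and MediaPipe's hand detector. The library choice, thresholds, and filenames are our assumptions for illustration, not [Nick]'s actual Jetson implementation:

```python
# Sketch of the "wait until no hands, then shoot" loop.
# Assumes OpenCV and MediaPipe; [Nick]'s Jetson code may differ.
import cv2
import mediapipe as mp

CLEAR_FRAMES_NEEDED = 30  # ~1 second of hands-free video before shooting

cap = cv2.VideoCapture(0)
clear_streak = 0   # consecutive frames with no hands detected
shot_armed = True  # take only one photo per hands-free window
frame_num = 0

with mp.solutions.hands.Hands(max_num_hands=2,
                              min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            clear_streak = 0   # the animator is still in the shot
            shot_armed = True  # re-arm for the next clear window
        else:
            clear_streak += 1
        # coast is clear: grab exactly one still for this window
        if shot_armed and clear_streak >= CLEAR_FRAMES_NEEDED:
            cv2.imwrite(f"frame_{frame_num:05d}.png", frame)
            frame_num += 1
            shot_armed = False
        cv2.imshow("live", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```

Requiring a sustained streak of clear frames, rather than a single one, keeps a hand passing briefly through the edge of the shot from triggering the shutter too early.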
If it still hasn't clicked for you, check out the video below. [Nick] first shows the unedited raw footage, largely made up of him moving around three LEGO minifigures, followed by the final product generated by his system. All the shots of him fiddling with the scene are automatically trimmed out, leaving an animated short of the characters seemingly moving on their own.
Don't be fooled, though; it still takes time. By our count, a full two minutes of moving the minifigs around produced only a few seconds of animation. So while we can say it's faster than doing stop motion the traditional way, it's certainly not fast.
Machine learning isn't the only modern technology that can simplify stop motion production. We've seen some examples of using 3D printed objects in place of manually adjusted figures. The printing still takes a long time, and of course it burns through a lot of filament, but the mechanical precision of the printed scenes makes for a very clean final result.