Moo Cap? Not hats for cows, but motion capture

Unless you’re living under a cow, you’ve probably heard of mocap, but what is it really? People running around in spandex suits with ping-pong balls glued to them? Well, that was my first mental image. I’ve since expanded my knowledge, and I now know that there are also dogs running around in spandex with ping-pong balls glued to them. OK, so maybe there’s a little more to it? Well, mocap is actually a pretty loose term that covers a bunch of quite different systems; what they all have in common is that they try to capture motion as data.

Traditional Mocap, and the rest

Traditionally, what most people associate with mocap is actually an optical mocap system. It relies on a bunch of cameras that track either the visual silhouette of an actor, or the reflections of the baubles attached to their limbs. Optical systems are the most common, but also the most expensive. They require four or more specialist cameras to be set up and calibrated to cover a space. Each camera captures the data from all the baubles it sees, and once three or more cameras see the same bauble, its 3D position can be triangulated and the motion is captured. At this point complicated, high-bandwidth things happen, and the captured data is fed to a computer.
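
To make that triangulation step a little less hand-wavy, here is a minimal toy sketch of the underlying idea (my own simplification, not how any vendor’s software actually works): each camera that sees a bauble defines a ray through space, and the bauble’s 3D position is the point closest to all those rays in a least-squares sense.

```python
import numpy as np

def triangulate_marker(origins, directions):
    """Estimate a marker's 3D position from the rays of several cameras.

    Each camera that sees the marker contributes a ray (origin, direction).
    The point closest to all rays solves the linear system
        sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)          # make the direction unit length
        P = np.eye(3) - np.outer(d, d)     # projects onto plane perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Three cameras looking at a marker at roughly (1, 2, 3)
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 0.0])]
target = np.array([1.0, 2.0, 3.0])
directions = [target - o for o in origins]
print(triangulate_marker(origins, directions))  # ~ [1. 2. 3.]
```

This is also why at least three cameras are needed: with fewer non-parallel rays the system doesn’t pin down a unique point.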

We also have non-optical methods, which use accelerometers, magnetic sensors or other interesting kit to track the location and velocity of the actor’s limbs. There are also optical methods other than the suit with the baubles, but I haven’t looked much into those.
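
As far as I understand it (and bear in mind I haven’t dug deep here), the accelerometer-based suits boil down to integrating acceleration over time. A naive sketch of that idea, assuming perfectly clean sensor data, which real inertial systems definitely don’t have:

```python
import numpy as np

def integrate_imu(accel_samples, dt):
    """Naive dead reckoning: integrate acceleration twice to get position.

    Real inertial suits fuse gyroscope and magnetometer data and correct
    for drift; this shows only the core idea, and hints at why drift is
    such a problem: any small accelerometer bias grows quadratically in
    the position estimate.
    """
    velocity = np.zeros(3)
    position = np.zeros(3)
    for a in accel_samples:
        velocity += np.asarray(a, dtype=float) * dt
        position += velocity * dt
    return position

# One second of free fall sampled at 100 Hz ends up ~4.95 m down,
# close to the analytic g*t^2/2
samples = [(0.0, 0.0, -9.81)] * 100
print(integrate_imu(samples, dt=0.01))
```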

Faces and Hands

Something I had never considered: the detail captured by a mocap system set up for body movements isn’t fine enough to also capture face and hand movements in the same pass. The technology could probably be pushed there, and surely that will come soon, but the current standard is to capture hand movements and face movements in separate passes, similar to how voice actors do their voicing as a separate pass. What I mean by a separate pass is that the data is not recorded all in one go. The voice actors, for example, will have a script, maybe a storyboard, or if they’re lucky they can watch a rough first cut while standing in a sound booth. Similarly, the pass for hands or face happens as a separate take. The setup for both hands and faces requires the cameras to be in a different place, and the spandex bauble suit is replaced by tiny baubles on double-sided tape.

It makes me think of the behind-the-scenes footage for any movie with a heavy reliance on mocapped characters: it must be hard work to be a mocap actor. Most of the time you will be dressed in a ridiculous spandex suit, pretending to be a creature you look very little like, possibly acting completely by yourself in an empty mocap space.

The costs

The costs of mocap can easily become an insurmountable obstacle for someone who would like to use it. The number of cameras needed for a high-quality capture is scary, and the cost of each one even more so: around $200 000 for just one camera, so $800 000 for the smallest possible four-camera setup. On top of this come software costs, suits and baubles, and maintenance. Even renting a studio can cost $10 000 a day, or there may be a per-character, per-second price for the data captured. As an amateur or hobbyist, using an actual mocap studio is basically impossible. Getting to use the mocap equipment on campus certainly feels like a privilege. I look forward to it!

DIY mocap

Realistically though, if I am going to use any form of motion capture in the future, I suspect I would rely on a cheaper DIY solution. One suggestion is to use two Kinect devices. The Kinect uses a similar principle to a professional motion capture system, combining an optical camera with an IR depth system that captures body shape, a bit like sonar, I think. I already have one Kinect at home, so to build a setup I would probably need to spend around $100 on adapters, software and a second Kinect. While it isn’t something I need right now, and therefore also not something I’m going to do for a while yet, it’s certainly a possibility for the future.
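
The point of the second Kinect, as I understand it, is to cover the blind spots of the first. Here is a rough sketch of how the two skeleton streams might be merged; the get_skeleton helper is purely hypothetical, standing in for whatever SDK or driver actually provides the joint data:

```python
import numpy as np

# Hypothetical helper: stands in for whatever Kinect SDK or driver you end
# up using; this is NOT a real API call. Assume it returns joints already
# transformed into a shared coordinate frame (calibration is its own problem).
def get_skeleton(device_id):
    """Return {joint_name: (position_xyz, confidence)} for one sensor."""
    raise NotImplementedError("replace with your actual SDK calls")

def merge_skeletons(skel_a, skel_b):
    """Blend two sensors' joint estimates, weighted by tracking confidence.

    The second sensor mostly earns its keep when the first loses sight of
    a limb: the confident sensor then dominates the weighted average.
    """
    merged = {}
    for joint in skel_a.keys() & skel_b.keys():
        pos_a, conf_a = skel_a[joint]
        pos_b, conf_b = skel_b[joint]
        total = conf_a + conf_b
        if total == 0:
            continue  # neither sensor trusts this joint this frame
        merged[joint] = (conf_a * np.asarray(pos_a)
                         + conf_b * np.asarray(pos_b)) / total
    return merged

# Tiny demo with fake data: sensor A is confident, sensor B barely sees the hand
a = {"hand_left": ((0.0, 1.0, 0.5), 0.9)}
b = {"hand_left": ((0.1, 1.0, 0.5), 0.1)}
print(merge_skeletons(a, b))  # ~ (0.01, 1.0, 0.5), leaning on the confident sensor
```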

Separate Face capture software

Realistically, I would think a Kinect setup mostly captures overall body motion, and while it can certainly be used for some interesting gameplay, I have my doubts about using it for animations. Certainly not for facial animations. Thankfully though, I already have a solution for face mocap: FaceRig. FaceRig uses a webcam and some pretty advanced tracking software to map facial expressions. The result is a real-time render of your face as an animated character. Which is pretty neat. You might think the uses for this are pretty limited in a game, but if you could integrate it into games, so that your in-game avatar mimicked your actual expressions, I think you could get some interesting emergent gameplay.
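
FaceRig’s internals aren’t public, but the general landmark-to-blendshape idea behind this kind of software looks something like the toy sketch below; the landmark indices and calibration values are made up for illustration:

```python
import numpy as np

def mouth_open_weight(landmarks, neutral_gap, max_gap):
    """Toy mapping from tracked face landmarks to one blendshape weight.

    `landmarks` is an (N, 2) array from any webcam face tracker. The lip
    indices below are hypothetical; every tracker numbers its landmarks
    differently, and the neutral/max gaps come from a calibration pose.
    """
    UPPER_LIP, LOWER_LIP = 13, 14  # made-up indices for illustration
    gap = np.linalg.norm(landmarks[LOWER_LIP] - landmarks[UPPER_LIP])
    # Normalise against the calibrated neutral pose and clamp to [0, 1],
    # the usual range a blendshape/morph-target weight expects.
    weight = (gap - neutral_gap) / (max_gap - neutral_gap)
    return float(np.clip(weight, 0.0, 1.0))
```

Drive one such weight per expression (jaw open, brow raise, smile, and so on) and the animated character starts mimicking you.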


THE END, of this post

I think that’s a long enough post about mocap; it hardly even has any pictures in it! I’m going to make another post, though: a little bit about the pipeline that the university studio requires, some about the difference between using motion capture in a completely animated movie versus compositing it into real footage, some about mocap actors and why there are so few famous ones, and lastly about what I plan to do for my assignment. Later, at some stage, there will be a post-assignment-reflection-post.

RELEVANT READING

Mocap Terminology
