Wow, it’s been quiet around here.
There are some absolutely stunning realtime 3D engines available these days, primarily intended for games. A few groups also use these game engines to make short films, or machinima.
While perhaps not quite up to cinematic standards, some of the engines meet or exceed the graphical standards of what you’ll see in TV animation. It’s interesting that no-one seems to have attempted using machinima techniques for television shows. Admittedly, machinima relies on the game engine to handle the minutiae of animation, and uses pre-written scripts for the character keyframes, so as it stands it’s quite limiting from a creative point of view. But does it need to be?
Motion capture can be done pretty cheaply these days with a couple of cameras and some ping pong balls; and some more advanced software might not even need the markers. Given a 3D character that has been properly rigged, it should be possible to map the motion data directly onto the model, without need for prescripted animation sequences. Most of the latest engines support facial animation as well; with a cheap knockoff of James Cameron’s face capture technique from Avatar, it should also be possible to automatically reproduce an actor’s expression on the model as well.
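The core of that mapping step is just retargeting: translating the capture system’s joint data onto the model’s bones each frame. Here’s a minimal sketch of the idea in Python — all the joint and bone names are hypothetical, and a real pipeline (BVH, FBX, commercial mocap software) involves far more, such as bone-length differences and rotation-order conversions.

```python
# Minimal sketch of mocap-to-rig retargeting, assuming the capture
# software already provides per-joint rotations each frame.
# All names here are hypothetical, for illustration only.

# Map from the capture system's joint names to the model rig's bone names.
JOINT_MAP = {
    "Hips": "pelvis",
    "LeftUpLeg": "thigh_L",
    "RightUpLeg": "thigh_R",
    "Spine": "spine_01",
    "Head": "head",
}

def retarget_frame(mocap_frame, joint_map=JOINT_MAP):
    """Translate one frame of capture data onto the rig's bones.

    mocap_frame: dict of joint name -> (x, y, z) Euler rotation in degrees.
    Returns a dict of bone name -> rotation, ready to apply to the model.
    """
    rig_pose = {}
    for joint, rotation in mocap_frame.items():
        bone = joint_map.get(joint)
        if bone is not None:  # silently skip joints the rig doesn't have
            rig_pose[bone] = rotation
    return rig_pose

# One frame from the (hypothetical) capture system:
frame = {"Hips": (0.0, 45.0, 0.0), "Head": (10.0, 0.0, 0.0)}
pose = retarget_frame(frame)
```

Run every frame, this gives the “directly mapped” performance with no prescripted animation in between.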
Usually the camera is controlled by another player in the game engine, but when appropriate, even this could be done via motion capture, as Peter Jackson did with The Lord of the Rings. This would allow for vastly improved camera interaction with characters and scenery. Imagine “filming” a motion capture performance with an iPad, and seeing the final result right there on the screen!
Modelling and Set Building
Character and scene modelling are fairly time-consuming tasks, but probably comparable to costume design & set building for traditional filming; perhaps even faster, and certainly cheaper due to the absence of material costs. The model rigging process for animation is still needed, and can be performed by a competent modeller; but for most purposes there shouldn’t be a need for the more specialised animator skills, as the roles will be played out by actors directly. This would remove another major time sink from the animation process.
Filming and Production
By working with realtime rendering, production times could be cut down considerably. There wouldn’t be a wait to see if each scene had rendered correctly, with associated turnaround times if changes were needed — just reshoot the scene with the actors as you would normally. All the setup and motion data for a scene could be stored, so if the performances were good but the camerawork was not, the digitised performance could be reshot using the existing data. The camera operator could be shown a first-person realtime feed of the performance in the game engine as he “films” it again. This could allow for some innovative camerawork, too — there’s no reason why you’d need to operate at the same scale as the original performance. The entire scene could be scaled down relative to the camera operator; he could walk around the scene doing a sweeping overhead shot that would normally require an expensive crane (or even a helicopter), and do it all by hand.
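That scaled-camera trick is just a multiplication on the tracked position before it reaches the in-engine camera. A rough sketch, with an illustrative 10× factor (not from any real engine):

```python
# Sketch of the "scaled virtual camera" idea: the operator's tracked
# position in the capture volume is multiplied up before being applied
# to the in-engine camera, so a hand-held move becomes a crane move.
# The scale factor and coordinate handling are illustrative only.

SCENE_SCALE = 10.0  # 1 m walked by the operator = 10 m in the scene

def virtual_camera_position(operator_pos, origin=(0.0, 0.0, 0.0),
                            scale=SCENE_SCALE):
    """Map the operator's physical (x, y, z) to an in-scene camera position."""
    return tuple(o + scale * p for o, p in zip(origin, operator_pos))

# Raising the tracked camera 0.2 m lifts the virtual camera 2 m --
# an overhead sweep, done by hand.
pos = virtual_camera_position((0.5, 0.2, 0.0))
```

Rotation would pass through unscaled, which is what makes the result feel like a crane or helicopter move rather than a giant’s handheld shot.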
Typically the production would be edited as normal, then rendered out as a finished video. The final rendering could run slower than realtime and add extra effects and beauty passes if necessary. However, that’s not the only way this process could work…
This setup would allow for truly realtime animation, such that a show could even be broadcast live.
And it wouldn’t need to be broadcast to television, either. Using the game engine’s network code, the performance itself could be broadcast straight to a rendering client, and the viewer could watch a live stream in ultra-high definition with minimal network traffic. If desired for the production, this would even allow the viewer to have control over the camera themselves, and view the performance from wherever they liked in the scene.
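A back-of-envelope calculation shows why the traffic stays minimal: a pose stream’s bitrate depends on rig complexity and frame rate, not output resolution. All the numbers below are assumptions for illustration.

```python
# Back-of-envelope comparison (illustrative numbers, all assumed):
# streaming per-bone pose data for client-side rendering vs streaming
# encoded video. The pose stream's size is independent of resolution.

BONES_PER_CHARACTER = 60   # assumed rig size
FLOATS_PER_BONE = 7        # quaternion rotation + 3D position
BYTES_PER_FLOAT = 4
FRAMES_PER_SECOND = 30
CHARACTERS = 5

pose_bps = (BONES_PER_CHARACTER * FLOATS_PER_BONE * BYTES_PER_FLOAT
            * FRAMES_PER_SECOND * CHARACTERS * 8)  # bits per second

VIDEO_BPS = 8_000_000  # ballpark bitrate for a single 1080p stream
```

Even uncompressed, five fully animated characters come to around 2 Mbit/s — a fraction of one 1080p video stream, and the same whether the client renders at 1080p or far beyond it.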
This would allow for the first truly 3D performances; the entire scene information is available to the viewer, so given the appropriate hardware (for example, head tracking stereo glasses), they could be immersed directly in the scene itself rather than looking through a window, as with current 3D TV approaches.
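With the whole scene on the client, head-tracked stereo reduces to placing two cameras around the tracked head position each frame. A minimal sketch, using an average interpupillary distance; the vector handling is simplified for illustration:

```python
# Sketch of head-tracked stereo: each eye gets its own camera, offset
# half the interpupillary distance (IPD) along the head's right vector.
# Values and the flat-tuple maths are illustrative, not a real renderer.

IPD = 0.064  # average interpupillary distance, in metres

def eye_positions(head_pos, right_vec, ipd=IPD):
    """Return (left_eye, right_eye) camera positions for one frame."""
    half = ipd / 2.0
    left = tuple(h - half * r for h, r in zip(head_pos, right_vec))
    right = tuple(h + half * r for h, r in zip(head_pos, right_vec))
    return left, right

# Viewer standing at the origin, 1.7 m tall, facing down -z:
left_eye, right_eye = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Because the cameras follow the viewer’s actual head, parallax is correct as they move — which is exactly what a fixed “window” stereo broadcast can’t give.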