Wuff

Tuesday, June 2, 2009

software: the world is flat but for my house

Google was at Maker Faire promoting SketchUp, a 3D program.

One of the things it can do is texture the surfaces of a model. Wait, Google Maps has a top-down picture of your house from satellite imagery. So draw boundary lines on the edges of your roof, then extrude vertically, then pull up the roof line, and you have a crude wooden-block house shape with your roof. Next, Google Street View may have a drive-by panorama of your house, assuming an angry Luddite mob didn't block Google's camera car. So grab the street view and paste it on the front of the model. Five minutes later (assuming you've spent months or years mastering the unintuitive mysteries of a 3-D modeling program) you have a passable representation of your house. You can upload this to Google's 3-D Warehouse of SketchUp designs, and you can place it in Google Earth, a more sophisticated version of Google Maps that presents landmarks and other geographic data anywhere and everywhere on earth. When people waltz around your neighborhood in Google Earth, they'll see your dollhouse.[*]
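Mechanically, the modeling step is just polygon extrusion. Here's a toy Python sketch of the trace-extrude-texture idea (this is not SketchUp's real API; the footprint coordinates and image names are invented):

    from dataclasses import dataclass

    @dataclass
    class Face:
        vertices: list        # (x, y, z) corners, counter-clockwise
        texture: str = None   # image pasted onto this face, if any

    def extrude_footprint(footprint, height, front_photo=None):
        """Turn a 2-D roof outline traced from the satellite view into a
        crude wooden-block house: one wall per footprint edge, plus a roof."""
        faces = []
        n = len(footprint)
        for i in range(n):
            (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
            faces.append(Face(
                vertices=[(x1, y1, 0), (x2, y2, 0),
                          (x2, y2, height), (x1, y1, height)],
                # paste the Street View crop on the street-facing wall (edge 0 here)
                texture=front_photo if i == 0 else None,
            ))
        # the top-down satellite imagery becomes the roof texture
        faces.append(Face(vertices=[(x, y, height) for x, y in footprint],
                          texture="satellite_roof.jpg"))
        return faces

    # hypothetical 10 m x 8 m footprint with 4 m walls
    house = extrude_footprint([(0, 0), (10, 0), (10, 8), (0, 8)], height=4,
                              front_photo="street_view_crop.jpg")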
SketchUp house in Google Earth
In the screenshot, the panel below is Google Earth's in-program browser with the house model that Google's 3D ninja whipped up. (Click the screenshot to see more of the Google Earth program).

Yes, my neighbors' houses are all low-rise ranch houses sunk into the earth, and there really is a 7-meter shiny ball parked on the street!

Google is crowd-sourcing the creation of a 3-D model of the world. As builders and planners and amateurs create more 3-D models, the virtual world gets fleshed out until a fly-through in Google Earth is a pretty good approximation of being there. You can see that downtown San Francisco and the Bay Bridge are getting filled in.
view of downtown SF
It's more evidence for my thesis that computer previsualizations of movies will be good enough to replace the filmed movie.

All of these tools and programs are free; I don't know where Google makes money. Google is looking to get 3-D into the browser, so soon you'll get all this in Google Maps; maybe Google will sell billboards in the virtual earth. Or maybe they'll charge to let you socialize in it with other avatars.

[*] If you want to see my house, you've got to ask for the additional 3-D Warehouse layer; it doesn't appear automatically. I guess that provides some protection for Google against complaints from house-proud owners that a griefer uploaded a model that makes their property look ugly, or shows a guy mooning out of a window.

An interesting question is why Google doesn't automate this. They have the overhead picture and they have the front picture; run some AI to glue the two together, and my neighbors' houses would poke out of the ground to form a 3-D canyon.
Road Rash screenshot
I asked Google's modeling ninja, and he said the AI isn't smart enough to do it. Ten years ago MetaCreations released Canoma, which supposedly let you semi-automatically pin photographs onto 3-D shapes and would guess the outlines of the building. Despite all the wonders our network of computers is producing, hard AI remains hard.
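The part that was solved even in Canoma's day is the photo-pinning geometry itself: once a human marks the four corners of a building face in a photo, a homography warps the photo onto that face. Here's a minimal sketch using the standard direct linear transform; the corner coordinates are invented, and guessing the outlines automatically is the hard part that remains unsolved:

    import numpy as np

    def homography_from_4_points(src, dst):
        """Direct linear transform: solve for the 3x3 H mapping four photo
        corners (src) onto four model-face corners (dst)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
        return vt[-1].reshape(3, 3)   # null-space vector = the homography

    def warp(H, point):
        u, v, w = H @ np.array([point[0], point[1], 1.0])
        return (u / w, v / w)

    # invented correspondences: photo pixels -> a 10 x 4 model wall
    H = homography_from_4_points(
        src=[(120, 40), (880, 60), (900, 520), (100, 500)],
        dst=[(0, 4), (10, 4), (10, 0), (0, 0)])
    print(warp(H, (120, 40)))   # ~ (0.0, 4.0), the wall's top-left corner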

Monday, April 27, 2009

movies: the previsualization IS the movie

Previsualization supervisor Steve Yamamoto made Hancock. Not the director, not the actors.

Movies used to be storyboarded: someone would sketch every shot in the movie and pin the drawings to a wall. Sometimes these were turned into an animatic, a movie consisting of simple camera moves and zooms over each sketch, with transitions between them. This was especially true for animated movies, and Pixar still makes 2-D animatics of its movies; the fascinating featurette on The Incredibles DVD shows some.

In effects-laden movies, the crew has to figure out the lighting, the camera moves, and the camera details (field of view, focus, etc.) for every element of a scene, whether it will be filmed in real life or rendered by computer: the actors on a set, the filmed backgrounds, the digitally-modified background elements, and all the computer-generated imagery (explosions, flying glass, monsters, digital hair hiding an actor's balding head, ...). Everything has to match, otherwise the pieces can't be composited into the final shot (a toy sketch of the field-of-view bookkeeping follows the screenshots below). Two-dimensional storyboards are insufficient for this. So someone builds a 3-D world for the scene, puts some 3-D character models in it, animates the models, and then goes nuts moving a virtual camera around to create a computer animation of the sequence of shots. The result is a clunky computer-videogame version of the sequence. The previsualizations made for Hancock resemble Sega's Virtua Cop videogame:
screen cap of Sega Virtua Cop 2
Yet the pre-viz comes scarily close to what the final film looks like.
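Here's the field-of-view bookkeeping promised above, as a minimal Python sketch: the virtual camera and the real lens have to agree on the field of view, or the live plates and the CG won't line up when composited. The 40-degree lens and the OpenGL-style conventions are illustrative assumptions, not anything from Hancock's actual pipeline:

    import math

    def perspective_matrix(fov_y_deg, aspect, near, far):
        """OpenGL-style projection matrix from a vertical field of view.
        The pre-viz camera, the real lens, and the renderer must all use
        the same fov for their images to composite into one shot."""
        f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
        return [
            [f / aspect, 0.0,  0.0,                         0.0],
            [0.0,        f,    0.0,                         0.0],
            [0.0,        0.0,  (far + near) / (near - far), 2 * far * near / (near - far)],
            [0.0,        0.0, -1.0,                         0.0],
        ]

    # a lens choice expressed as field of view; the numbers are made up
    proj = perspective_matrix(fov_y_deg=40.0, aspect=16 / 9, near=0.1, far=1000.0)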

In the Seeing the Future featurette for Hancock, the movie team watches this videogame of their movie months before they start filming. They change camera angles, re-edit cuts, reposition the actors, even try different virtual lenses to improve the scene. They then have to figure out how to film the real portions of the scene, such as Hancock flipping a car upside down, or decide to do it digitally.

gif animation showing sequence

The result is that the actual moviemaking—actors acting, cameramen operating physical cameras, effects houses making special effects—becomes no more than re-implementing what's in the pre-viz. You see Charlize Theron watching the pre-viz on a Mac notebook, studying her 3-D character to learn what she's supposed to do in the shot! Jason Bateman says of the process, "It's been interesting." As in, it must suck. The cameramen, the actors, even the director all watch a movie that already exists and that dictates what they need to do.

The obvious next step is to make the pre-viz good enough that it becomes the movie, and the producer and the previsualization team tell the cast and crew to stay home! There's no reason the backgrounds in the pre-viz have to look like cardboard or the characters have to look blobby. Spend some time refining them, gather better textures and higher-quality models, record the dialog, and then, after you've got the low-quality videogame doing what you want, use a bank of computers to render the movie in high-def with realistic lighting effects. Perhaps it's still cheaper to film an actor covered in sweat/makeup/dirt speaking and emoting than to model and render him, but animation software and computer hardware relentlessly advance. For the first hour of "The Curious Case of Benjamin Button" the aged Brad Pitt face is entirely computer-generated (watch the long, interesting video); it'll only get easier and faster.
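The rendering half of that prediction is the easy half: final-quality rendering is embarrassingly parallel, since every frame can be drawn independently, which is exactly why a bank of computers is the right tool. A back-of-the-envelope sketch (the frame times and node count are my guesses, not studio numbers):

    def assign_frames(total_frames, num_nodes):
        """Deal frames out round-robin; each node renders its share independently."""
        jobs = {node: [] for node in range(num_nodes)}
        for frame in range(total_frames):
            jobs[frame % num_nodes].append(frame)
        return jobs

    # A 100-minute movie at 24 fps is 144,000 frames. At even 10 minutes per
    # frame, 500 nodes grind through it in about 48 hours of wall-clock time.
    jobs = assign_frames(total_frames=144_000, num_nodes=500)
    print(len(jobs[0]), "frames on node 0")   # -> 288 frames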

I wonder if anyone has made a videogame directly out of the pre-viz. At a minimum you should be able to move the camera around yourself to make your own cut of the movie (the free-camera math is a few lines; see the sketch below); add some standard videogame AI programming and you should be able to pause the movie, make the character walk off somewhere else, and fire bullets at the scenery. The great William Gibson saw all this coming; read his 2003 talk to the Directors Guild of America.
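The free-camera sketch mentioned above: an orbit camera is the basic control a pre-viz player would offer, positioning the eye on a sphere around a point of interest. The target point, distance, and angles here are just whatever the viewer dials in:

    import math

    def orbit_camera(target, distance, yaw_deg, pitch_deg):
        """Position a free camera on a sphere around a point of interest,
        the minimal control that lets a viewer re-frame any shot."""
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        x = target[0] + distance * math.cos(pitch) * math.sin(yaw)
        y = target[1] + distance * math.sin(pitch)
        z = target[2] + distance * math.cos(pitch) * math.cos(yaw)
        return (x, y, z)

    # orbit 8 m from a character's head height, 30 degrees around, 15 up
    eye = orbit_camera(target=(0.0, 1.7, 0.0), distance=8.0,
                       yaw_deg=30.0, pitch_deg=15.0)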
