Thursday, August 04, 2005

Stuff I missed...

  • I heard about an interaction trick by Hal Bertram, shown at the RenderMan users' group meeting: RenderMan is kept running in a constant loop so the scene can be modified during rendering.
  • I did not get a chance to check out PFTrack and PFMatch by The Pixel Farm. They look great, feature-wise, esp. PFTrack's integrated pixel flow, plus it runs on Linux.
    Update: Looking at literature from the show, there's a new PFMatch plugin for Shake 4 for $800 which pretty much does everything PFTrack (standalone) does-- very interesting, esp. since compositors would not need to learn a new program (read: SynthEyes).

Wednesday, August 03, 2005

After-the-fact reporting: Electronic Theatre

I saw the Tuesday, 1:30PM Electronic Theatre screening.
  • There was a 15-minute pre-show "performance" of an interactive paint/flyaround system. A slow drifting camera let the artist paint CG forms in the space-- lumpy sticks out of the ground, mountains in the background, lumpy sticks emitting lights that swarmed, stringy things that wiggled, and a "dragon"-sort of thing that flew all over the newly painted "garden." I found it 100% uninteresting-- tortuously long and boring, and oy, the music...
  • I enjoyed the ILM and DD pieces, though I always want more "explode the shot" breakdowns. They exploded wow rh10, the big drive-away with the freeway crashing down.
  • "Cubic Tragedy" was the standout funny, smart, well done short animation, with hilarious references to CG software. Click here to view/download.
  • Blur Studio had two brilliant, adorable short animations-- the first was "Gopher Broke": a gopher trying to get food from trucks going to a farmer's market; the second was "In the Rough": a caveman couple working through some marital issues. They're using the right checklist: story, character, cinematography, and they were both visually stunning. I'd like to see them make a feature-- are they making a feature? Their polish reminded me of Pixar.
  • The presentation was cursed, as usual, by a number of well-produced, visually interesting, but pointless and WAY TOO LONG shorts from France: "La Dernière Minute" (abbreviated version-- thank goodness), "Overtime," "Helium," "Workin' Progress." The last three of those are from Supinfocom, which must be destroyed. A slightly less pointless, but still too long short from Poland was "Fallen Art"-- gross. To people making animation shorts: PACING, PEOPLE!
  • Shane Acker did a piece "9" which seemed interesting until the end-- I didn't get it. I did enjoy the trap which was set for the predator, and the pacing was tight. I get it now-- Brian explained the souls-in-the-metal-box thing to me-- ok, very nice...

After-the-fact reporting: George Lucas Keynote Interview

George's keynote interview Monday was interesting from the moment the doors opened-- people were running to the front of the auditorium like it was a general admission U2 concert. It took a little while for people to start asking Dennis, Rob, and John for photos and autographs, and once people started asking, sheepishly, more and more people went up for photo ops-- it did get a little hectic up there. It looked like Dennis was enjoying it the most.

I hadn't realized how much time it would take to fill everyone in on Siggraph's developments over the last year. Interesting tidbit-- I did not know that Siggraph membership includes on-line programming courses and streaming video of all conference sessions. I will get a membership (I'm currently only an ACM member). The awards took a long time due to the Japanese researcher's speech-- not a minute or two reviewing old work and thanking people, but a full review of all of his research and influences-- it felt like 30 minutes, but was probably 10-15. [SNORE]

George's interview answers included a few gems, for sure. Referring to Ron Fedkiw's water work (Ron received an award before the interview), George said something to the effect of "His water looks so good, it'll make you wet." Later, talking about Star Wars, he mentioned that many shots are "just done on a Mac." I imagine Cliff's eyes went wide at that point.

I'm always impressed by his confidence, even as he admits that he does not understand much of the technology. Answering a question late in the interview about the biggest hurdles to overcome in the vfx industry, he said he could not think of any specific effects hurdle in terms of creating realistic images, but that usability is a large one-- he mentioned ILM's in-progress pre-viz tool and his hope that somebody with more artistic than technical skills could use the toolkit to rough out sequences.

Tuesday, August 02, 2005

Sketch: Cinematography

  • Shutter Efficiency and Temporal Sampling: Talking about how CG camera shutters are not realistic at all. Is he saying shutter or shitter? Reyes implementation examples: his motion blur looks very good. He calls this a "low efficiency shutter" for temporal sampling-- a "tent" filter as opposed to a "box" filter (see the sketch after this list)... terrible speaker, not really describing his innovation well at all.
  • A New Camera Calibration Method Taking Blur Effects Into Account: Measuring the width of the blur in a checkerboard photograph to find the desired point at which to perform a standard camera calibration.
  • Cinematized Reality: Visual hulls to reconstruct geometry, stored camera paths to "apply expertise." It was more complicated than I thought it would be: relationships and actions in the VH reconstruction are noted, and using traditional filmmaking language, a database of a director's shot types is accessed and a sequence is assembled. A large amount of information must be entered manually, and a director's "technique" needs to be captured in the database. It seems like a silly project, but it's interesting in the context of a pre-viz tool: how can a director's technique be generalized? How can traditional cinematography concepts be enforced in software?
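
I tried to reconstruct the box-vs-tent distinction afterward; here's a minimal sketch of temporal filtering as I understood it (all names are mine, nothing from his Reyes implementation):

    import numpy as np

    def shutter_weights(n_samples, kind="box"):
        # Sample times spread evenly across the open-shutter interval [0, 1].
        t = (np.arange(n_samples) + 0.5) / n_samples
        if kind == "box":
            w = np.ones(n_samples)            # every instant weighted equally
        else:                                 # "tent": peak mid-shutter, 0 at edges
            w = 1.0 - np.abs(2.0 * t - 1.0)
        return t, w / w.sum()

    def blurred_position(pos_at, n_samples=16, kind="tent"):
        # Motion blur as a weighted average of positions over the shutter.
        t, w = shutter_weights(n_samples, kind)
        return sum(wi * pos_at(ti) for ti, wi in zip(t, w))

    # A point accelerating along x: the two filters blur it differently.
    path = lambda t: np.array([10.0 * t**2, 0.0])
    print(blurred_position(path, kind="box"))   # ~[3.33, 0.]
    print(blurred_position(path, kind="tent"))  # ~[2.92, 0.], mid-shutter favored
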
I'm leaving at this point to head back to The Standard for my shuttle pickup...

Exhibition

There wasn't much groundbreaking stuff on the floor-- standard mocap, cg shops, 3D CG apps, gpu vendors, etc. I had little time to see everything, but did zig-zag the whole floor, stopping to check out anything relating to matchmoving, 3D scanning, pre-viz, and face modeling.

SynthEyes was the most impressive thing on the floor-- it can be used in fully automatic mode, or geometry can be imported and the user can track points and constrain them to the geometry. Russell Andersson, Ph.D., is the whole company-- 15 years at Bell Labs and "other companies"-- and he has considered just about everything. Single-parameter radial distortion can be solved, and there's a 2D processing mode for distortion. More complicated distortion is not considered.
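
For reference, the usual single-parameter radial model looks something like this-- my own generic sketch, not SynthEyes internals:

    import numpy as np

    def distort(xy, k):
        # Apply radial distortion to normalized image coords (origin at center):
        # r_d = r * (1 + k * r^2), one parameter k.
        r2 = np.sum(xy**2, axis=-1, keepdims=True)
        return xy * (1.0 + k * r2)

    def undistort(xy_d, k, iters=5):
        # Invert the model by fixed-point iteration (no closed form needed).
        xy = xy_d.copy()
        for _ in range(iters):
            r2 = np.sum(xy**2, axis=-1, keepdims=True)
            xy = xy_d / (1.0 + k * r2)
        return xy

    pts = np.array([[0.5, 0.25]])
    print(undistort(distort(pts, k=-0.1), k=-0.1))   # recovers [[0.5, 0.25]]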

The 3Q scanner is now "3dMD, a 3Q Company" and is pretty polished-- three cameras and one projector per pod, and they were showing a two pod rig. I didn't speak with them-- it looks the same.

Brian and I stopped by the Shake booth and he grilled a demo artist on some new tools in Shake 4: basic 3D compositing, flow-based retiming on fileIn nodes, and flow-based smoothing and resizing. The results looked OK, but it was difficult to tell how good their flow solution is-- they showed a 2-second shot extended to 21 seconds, so of course the hair had a lot of artifacts. They would do better with a 10-second shot with a real background (not sky!), speeding it up to 7 seconds, slowing it down to 15 seconds.
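
I have no idea what Shake's flow solver actually does internally; the generic idea behind a flow-based retime looks roughly like this (a sketch using OpenCV's Farneback flow as a stand-in, with a crude inverse-warp approximation):

    import cv2
    import numpy as np

    def inbetween(frame_a, frame_b, t):
        # Synthesize a frame at time t in [0, 1] by warping frame A along a
        # fraction of the A->B flow. Not Shake's algorithm, just the idea.
        ga = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gb = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(ga, gb, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = ga.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Inverse map: sample A from behind the motion so content appears
        # advanced by t. A common approximation-- production retimers do
        # proper forward splatting and occlusion handling.
        map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
        map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
        return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)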

Genmation has a product called GenHead for converting a 2D image to a 3D head. It would be great if the 3D reconstruction wasn't clearly wrong!

Illuminate Labs' Turtle renderer for Maya looks very good-- I'm not familiar with their technology. OpenEXR output-- the industry has adopted the standard pretty well.

Konica-Minolta has two non-contact scanners with a 0.6m to 2.5m range, which seem to be the same scanner in different cases. They looked nice enough, but not applicable for us. Also-- in the literature but not at the booth: the PSC-1 Photogrammetry System. It automatically solves camera positions using "scale bars" and "coded markers." The scale strips just have a few coded markers on them, similar to what Dan Piponi has kicking around his office.

Realviz has turned their matchmoving tools into plugins for Maya, 3ds Max, and I think XSI. They said version 2 of their Storyviz package would be out soon, and I got a spec sheet which is not impressive in the least. The price has been lowered to $1200.

Antics has a $500 pre-viz tool-- their demo video was embarrassingly bad.

Point Grey has a "Ladybug2" spherical camera with improved resolution-- 3800 pixels horizontally. The head unit processes the images into a panorama and can run at 30fps if compressing with JPEG, or 15fps uncompressed. The nodal points seem closer than in the previous version. They also have a "Dragonfly Express" with a 1394b interface allowing 640x480 output at 120fps, or 200fps if "Format 7"-- what is that?

Monday, August 01, 2005

Sketch: Twisted Perspectives

[Posting the sketch title so I can post the body later... I might get in trouble for untaping a socket-- they're being anal about current since they're running lights etc.-- BS: I'm not plugging in a hair dryer! My battery is recovering...]
  • Twisted City: Animation from stills projected onto blocks, flying around between POVs. Inspiration: forced perspective, "hanging miniatures" (?). Proposal to write a MEL script to automatically detect edges, create blocks, intersect blocks, and transition between photos. Neat piece, nothing groundbreaking.
  • Multiperspective Collages: NYT snap of text over image-- most images we consume contain multiple explicit and implicit perspectives. Early paintings lacking "correct" perspective, but effective representation. Renaissance-- use of single camera perspective. Cubism: deliberate multiperspective as a theme. Hockney example. Children and perspective: Toku's 40 classifications of children's drawings-- multiperspective is #20. "Capitalism Likes Multiples"-- interesting theory: contemporary information objects encode multiple perspectives.

    Showing recent works: projecting onto planar geo. Freight elevator installation. Software for creating collages: "Perspect": a nice Mac app-- Java/OpenGL. Proposed nonconventional rendering (NCR), as opposed to standard non-photorealistic rendering (NPR).

  • "ZECTO" Cinematography for Depth-Based Live-Action Imaging: From Russian for "space": "mecto" but with a Z since it's 3D. Uses "Axi-Vision" camera which captures color and depth concurrently, developed by NHK for depth keying, but used in Zecto to manipulate depth of subject. Color and depth are merged in Shake and manipulated with a custom plugin. Examples amplified noise in the range image, but had interesting effects.

[Missed last part-- need to get some food... GWL keynote coming up]

Course 16... Wide baseline VBR: Silhouettes and photo-consistency

Using visual hulls to extract 3D structure. Image-based vs. polyhedral visual hull ("VH").

Good IBVH example from a high ring of cameras. Trick (2000 paper): scanline-based computation builds an image-space data structure for speed. The PVH example shows a rougher depth map.
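
The brute-force version of the visual hull idea, as a note to self-- a minimal voxel-carving sketch of my own, not the presenters' implementation:

    import numpy as np

    def carve_visual_hull(voxels, cameras, silhouettes):
        # voxels: (N, 3) world points; cameras: list of 3x4 projection
        # matrices; silhouettes: list of binary (H, W) masks, one per camera.
        keep = np.ones(len(voxels), dtype=bool)
        homog = np.hstack([voxels, np.ones((len(voxels), 1))])
        for P, sil in zip(cameras, silhouettes):
            proj = homog @ P.T                       # homogeneous pixel coords
            u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
            v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
            h, w = sil.shape
            ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            in_sil = np.zeros(len(voxels), dtype=bool)
            in_sil[ok] = sil[v[ok], u[ok]] > 0
            keep &= in_sil                           # must be inside every view
        return voxels[keep]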

Camera weighting (what to project onto the geo): blend the three nearest views, removing the view with the greatest disparity to ensure smooth transitions.
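
A sketch of how I understand the weighting to work-- angular-proximity weights over the k nearest cameras, with the worst view fading to zero so it drops out smoothly (the falloff and names are my assumptions):

    import numpy as np

    def view_weights(virtual_dir, camera_dirs, k=3):
        # virtual_dir: (3,) viewing direction; camera_dirs: (N, 3) unit vectors.
        v = virtual_dir / np.linalg.norm(virtual_dir)
        cosines = camera_dirs @ v                  # angular proximity per camera
        nearest = np.argsort(-cosines)[:k]         # k best-aligned cameras
        angles = np.arccos(np.clip(cosines[nearest], -1.0, 1.0))
        # Weight reaches zero at the angle of the worst of the k views, so a
        # camera blends out just before it leaves the nearest set.
        cutoff = angles.max() + 1e-6
        w = np.maximum(0.0, 1.0 - angles / cutoff)
        return nearest, w / (w.sum() + 1e-12)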

At this point we have a PowerPoint crash... summarizing without visual aid. Conclusion: the VH approach with camera weighting provides good results with a reasonable number of cameras.

Photo-consistency: plane-sweep, a multi-view stereo implementation. Compare the cameras' reprojections onto planes at different depths-- a region appears sharp as the plane intersects the subject's depth. Fast disparity computation-- sweep the plane through depth, compute disparity, aggregate the found depths. Use a mipmap approach to eliminate multiple found depths per pixel in the 2-camera case. Trivial to extend to more cameras.
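
Compressing the idea into code for my notes-- a minimal two-view disparity sweep, assuming rectified cameras (my own sketch, not the presenters' implementation):

    import cv2
    import numpy as np

    def disparity_sweep(left, right, max_disp, window=5):
        # left, right: rectified grayscale float32 images of equal size.
        h, w = left.shape
        costs = np.full((max_disp, h, w), np.inf, dtype=np.float32)
        for d in range(max_disp):
            # One "plane" of the sweep: shift the right image by disparity d.
            diff = np.abs(left[:, d:] - right[:, : w - d])
            # Aggregate over a window to stabilize the photo-consistency score.
            costs[d, :, d:] = cv2.blur(diff, (window, window))
        return costs.argmin(axis=0)   # per pixel, the sharpest plane wins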

Specular highlight problem: highlights and reflections interfere with matching. Enforcing photo-consistency works for this too-- use the least intense sample for texturing the point.
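
The trick reduces to a min-composite across views, something like (my own trivial restating):

    import numpy as np

    def despeculared_texture(samples):
        # samples: (n_views, H, W) intensities of the same surface seen from
        # each view, already reprojected into a common parameterization.
        # A specular highlight only brightens the views it aligns with, so
        # the darkest sample is the most diffuse one.
        return samples.min(axis=0)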

Course 16... Dynamic Light Field Warping

How to compensate for disparate views: recover depth maps. MS SIGGRAPH 2004 (Zitnick): light field warping, stereo reconstruction methods.

Review of stereo reconstruction... interesting: "combinatorial optimization" (Kolmogorov + Zabih, ECCV 2002)

3D warping over light field rendering: only four proximate images are needed to generate a synthetic image: disparity-compensate the neighboring images and blend the warped images. Project the depth mesh onto the depth map, warp the mesh to the viewpoint, project the images onto the mesh, and weight the images based on angle.
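
A bare-bones version of the warping step as I understand it-- lift pixels through the depth map, reproject into the novel view, splat. Real systems warp a mesh and blend several views by angle; the shared intrinsics and naive splat here are my simplifications:

    import numpy as np

    def warp_to_view(image, depth, K, R, t):
        # image: (H, W) intensities; depth: (H, W) depths in the source view;
        # K: 3x3 intrinsics (assumed shared); R, t: source-to-target pose.
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
        rays = pix @ np.linalg.inv(K).T              # unproject pixels to rays
        pts = rays * depth.reshape(-1, 1)            # 3D points, source frame
        cam2 = pts @ R.T + t                         # move into the target frame
        uvw = cam2 @ K.T                             # reproject
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        out = np.zeros_like(image)
        ok = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        out[v[ok], u[ok]] = image.reshape(-1)[ok]    # naive splat, no z-buffer
        return out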

Food for thought wrt clean/dirty plate locking issues...

Course 16: Video-Based Rendering: Small baseline VBR

View interpolation from closely spaced cameras: 3D TV

Auto-stereoscopic display for multiple users. View angle limitations with lenticular sheets. Views "pop" when going between discrete views. Overlap of discrete views. Trade-offs: limit the system to horizontal parallax only.

System architecture: strip of cameras (since no vertical parallax)...

Display calibration: project checkerboard: compute homographies, intersection, maximum rectangle. Align checkerboards on screen.
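
If I had to guess at the calibration code, it would look something like this-- OpenCV checkerboard detection plus a homography (the grid size and the rest of the pipeline are my assumptions):

    import cv2
    import numpy as np

    PATTERN = (9, 6)   # inner-corner grid of the projected checkerboard (assumed)

    def projector_homography(camera_image):
        # Find the projected checkerboard in a camera image and compute the
        # homography back to the ideal pattern plane.
        gray = cv2.cvtColor(camera_image, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if not found:
            return None
        # Ideal pattern coordinates, one unit per square.
        obj = np.array([[x, y] for y in range(PATTERN[1])
                                for x in range(PATTERN[0])], dtype=np.float32)
        H, _ = cv2.findHomography(corners.reshape(-1, 2), obj)
        return H   # maps camera pixels onto the ideal checkerboard plane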

Lightfield rendering: user moves proxy plane to desired viewing depth, not necessarily the zero-disparity plane.

"Soon, you will probably find this in your household"... I doubt that.

Course 16: Video-Based Rendering: Acquisition...

Camera issues: tradeoffs between analog and digital cameras, resolution vs. bandwidth vs. processing time, color adjustment and calibration, frame vs. field capture (field tear example), 3-CCD vs. 1-CCD cameras (color leak/halo example), zoom lenses and distortion, shutter speed vs. motion blur...

Infrastructure issues: camera placement (wide vs. narrow baseline), bandwidth / time code / sync, diffuse lighting, capture PCs and networking, geo and color calibration.

Example system: the CMU Virtualized Reality lab. First built in 1996, now in its 3rd generation.

  • 1st gen: 3D Dome w/51 b+w cameras, 51 VCRs.
  • 2nd gen: 3D Room, 14 high-quality cameras w/zoom, 25 low-res cameras, video stored on 17 PCs, 6-second capacity. Analog images time-coded before digitization.
  • 3rd gen: 3D Cage, sync+infrastructure from 2nd gen but w/24 PCs, 48 cameras, video written directly to RAID (640x480, 30fps), 2-10 hour capacity.
Capture: plan, clean the set, always get a clean BG, capture multiple times, play back after capture, "Remember to capture camera calibration information" <-- yeah... ;)

Post-capture processing: field-mode correction, geo calibration, silhouetting.

BG subtraction from the clean plate: diff + levels > contract + expand the matte > manual corrections (removing shadows, etc.)

Region-based approach to post-processing: remove shadows by comparing color against the BG-- a shadow's color matches the clean BG color, just darker.
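
Roughly, the pipeline above in code-- clean-plate diff, threshold, contract/expand, then the shadow test from the region-based approach (all thresholds are guesses of mine):

    import cv2
    import numpy as np

    def matte_from_clean_plate(frame, clean, diff_thresh=30, shadow_ratio=0.6):
        diff = cv2.absdiff(frame, clean).max(axis=2)        # per-pixel difference
        matte = (diff > diff_thresh).astype(np.uint8) * 255
        kernel = np.ones((3, 3), np.uint8)
        matte = cv2.erode(matte, kernel)                    # contract: kill specks
        matte = cv2.dilate(matte, kernel, iterations=2)     # expand: heal holes
        # Shadow test: darker than the plate but similar color -> background.
        f = frame.astype(np.float32) + 1.0
        c = clean.astype(np.float32) + 1.0
        ratio = f / c
        darker = (ratio.mean(axis=2) > shadow_ratio) & (ratio.mean(axis=2) < 1.0)
        similar = ratio.std(axis=2) < 0.1                   # color barely shifted
        matte[darker & similar] = 0
        return matte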

Course 16: Video-Based Rendering: Intro

The inspiration for video-based rendering was The Matrix-- to be able to view a scene from an arbitrary POV.

Reconstruct dynamic 3D shape from multiple video streams.

Fast realistic rendering from shape+video.

Acquisition with synched video cameras, processing w/consumer PC, display with consumer PC GPU.