Saturday, December 7, 2024

Entering the world of 3D AI

Dozerfleet Studios now has a double-camera setup for shooting backgrounds and video in 3D. The setup doesn't quite have all its bugs worked out yet, but it produced enough of a double image to allow anaglyph experiments that demonstrate the strengths and drawbacks of using AI to make post-processed conversions to 3D.

For posters that mostly consist of AI-generated assets in the first place, it's very difficult to combine native and post-processed assets into a believable composite. The most practical approach is to add all assets first, flatten the layers, and then use depth map manipulation to drive pixel displacement on the flattened copy of the file in Photoshop.
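To make that displacement step concrete, here is a minimal Python sketch of the same basic idea - an illustration, not the actual Photoshop workflow. It assumes a flattened RGB image and a same-sized grayscale depth map (the file names and maximum shift amount are made up), synthesizes a second eye view by shifting pixels in proportion to depth, and fuses the two views into a red/cyan anaglyph:

```python
# Sketch of depth-map pixel displacement (illustrative, not the exact
# Photoshop steps). Assumes the image and depth map share dimensions;
# file names and max_shift are hypothetical.
import numpy as np
from PIL import Image

def displace_by_depth(image, depth, max_shift=12):
    """Synthesize a second eye view: nearer (brighter) pixels shift farther."""
    img = np.asarray(image, dtype=np.uint8)
    d = np.asarray(depth.convert("L"), dtype=np.float32) / 255.0
    h, w = d.shape
    out = np.empty_like(img)
    cols = np.arange(w)
    for y in range(h):
        # Horizontal displacement per pixel, driven by the depth map.
        shift = (d[y] * max_shift).astype(int)
        src = np.clip(cols + shift, 0, w - 1)
        out[y] = img[y, src]
    return Image.fromarray(out)

left = Image.open("flattened_poster.png").convert("RGB")  # hypothetical file
depth = Image.open("depth_map.png")                       # hypothetical file
right = displace_by_depth(left, depth)

# Red channel from the left eye, green/blue from the synthesized right eye.
l, r = np.asarray(left), np.asarray(right)
Image.fromarray(np.dstack([l[..., 0], r[..., 1], r[..., 2]])).save("poster_anaglyph.png")
```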

This is most evident in the 3D version of the main book cover / official poster for The Tale of Plum Bixie:

However, while post-processing can speed up the editing process and usually produces reasonable results, it has nothing on the power of a wisely-chosen native 3D shooting setup - something not possible with most AI art generators, which are equipped to do only a so-so job of generating 2D assets.

Some 3D glasses are needed to truly see the difference, but here's a sample image of a Christmas village that was post-processed using the app Owl 3D:

By contrast, this is that same village when two different cameras' feeds were overlapped to produce native 3D:

The depth detail is simply amazing, and the depth maps involved with Owl 3D, as good as they are, cannot hope to match the sheer power of proper parallax reconciliation.
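The overlap itself is conceptually simple. Here's a hedged Python sketch of fusing the two cameras' stills into a red/cyan anaglyph - not the actual tool used here, and the file names, common size, and dx parallax nudge are all illustrative assumptions:

```python
# Rough sketch of fusing two native camera shots into a red/cyan
# anaglyph. File names, the common size, and the dx nudge for
# parallax reconciliation are hypothetical.
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, size=(1600, 1200), dx=0):
    l = np.asarray(Image.open(left_path).convert("RGB").resize(size))
    r = np.asarray(Image.open(right_path).convert("RGB").resize(size))
    # Nudge the right eye horizontally to tune where the screen plane sits.
    r = np.roll(r, shift=-dx, axis=1)
    # Left eye feeds the red channel; right eye feeds green and blue.
    return Image.fromarray(np.dstack([l[..., 0], r[..., 1], r[..., 2]]))

make_anaglyph("vivitar_left.jpg", "canon_right.jpg", dx=18).save("village_3d.png")
```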

Granted, achieving this latter image took a little more work. The right eye camera was a Canon PowerShot ELPH 135. The left eye camera was a Vivitar Popsnap VEC S124. Stabilizing either camera without a tripod splitter was an exercise in splitting hairs. To make matters worse, macro photography like this pushed the limits of what both cameras could do. When it came to color recognition, the Canon definitely had better chrominance range than the Vivitar.
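One plausible way to soften that chrominance gap in software - an assumption, not a step taken in the experiment above - is to histogram-match the Vivitar frame against the Canon frame before fusing the eyes, as scikit-image's match_histograms can do (file names are hypothetical):

```python
# Possible color-reconciliation step: match the weaker Vivitar shot's
# color distribution to the Canon's before building the anaglyph.
import numpy as np
from PIL import Image
from skimage.exposure import match_histograms

canon = np.asarray(Image.open("canon_right.jpg").convert("RGB"))
vivitar = np.asarray(Image.open("vivitar_left.jpg").convert("RGB"))

# Match each channel of the Vivitar frame to the Canon's distribution.
matched = match_histograms(vivitar, canon, channel_axis=-1)
Image.fromarray(matched.astype(np.uint8)).save("vivitar_left_matched.jpg")
```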

As a consequence, the resulting composite image is a fraction of what was originally possible. Regardless, the point was made.

This was also demonstrated when The Amazing Spider-Man first hit theaters in RealD 3D, having been shot natively. The result was a far better-looking world than the MCU's, which released The Avengers with post-processed 3D. Native 3D clips of TASM are hard to find now; most YouTube clips feature badly washed-out post-processed 3D that's difficult to watch. Clips of The Amazing Spider-Man 2 suffered in this format, however, as it made it more obvious than ever when Andrew was web-swinging against a greenscreen - a problem similar to the one that plagued The Hobbit: The Desolation of Smaug.

Viewers of The Avengers didn't have that issue, since the 3D made available is the original post-processed result, just switched from RealD 3D to anaglyph per the limitations of most home monitors.

What does this mean for Dozerfleet 3D?

  • Whenever possible, backgrounds and props that can be shot in native 3D should be. With a few tweaks, the Vivitar's current limitations may gradually be overcome. A tripod splitter will further help keep the two cameras aligned at an ideal parallax angle.
  • If The Mutt Mackley Show ever made a comeback, an upgrade to the Vivitar's SD card setup would enable Mackley videos to be shot in 3D. A different puppeteer might be needed, since the director / cameraman would have to manage two cameras at the same time. This would allow future Mutt Mackley entries to be in native 3D.
  • If location doubling is possible, backgrounds for story covers / posters could have 3D equivalents. Some clever blending of post-processed AI-generated foreground assets and native-shot assets would make better-looking 3D possible.
  • Where such location doubling isn't possible, post-processing would be necessary to take full advantage of environments that are AI-generated from whole cloth.
  • For the Blood Over Water remake in 3D, action shots that are complicated enough would be post-processed. Simpler shots could be done in The Sims 4 with carefully-handled trucking of the in-game camera to the right. The front and back covers, however, would have to be post-processed.
  • So far, with regard to the machinomics, only Blood Over Water TS4 is being planned for a 3D release, some time in 2025 or 2026. Readers will be encouraged to have, or else order, pairs of 3D glasses when reading Blood Over Water TS4 3D. All the other remaining machinomics are being planned for 2D only.
  • From there, anaglyph work will be in limited supply, on an as-needed basis. Some anaglyphs are included in the official handbook for The Sims 4: Magic of Movies and Memes Stuff, though these are post-processed.
  • Projects that are deemed passionate enough, or for which there is enough demand, will then use native 3D as much as possible, and post-processed 3D whenever native capture isn't possible. For the sake of time, however, most projects will still be 2D-only releases.
