Sunday, November 27, 2011

Modelling Artifacts in Autodesk 123D Catch Beta

Ground Stone Gouge Rendered in Autodesk 123D Catch Beta
One of the challenges of making a good artifact reproduction is finding good reference material.  A photographic plate in a published report rarely has enough information for me to work from, and it's not always possible to have the actual artifacts on hand while I make a reproduction.

A dirty job
I usually need to see the object firsthand and take a set of photos and measurements, but there are always questions that come up as the work progresses.  When I heard about the free 3D photo editing software called Project Photofly a few weeks ago, I was eager to try it out on artifacts.  In theory, I'd be able to transform a set of 2D photos into a fully rendered, 360-degree computer model that I could then manipulate and view at home.  There needs to be a separation between the dust of the workshop and the cleanliness of the lab, and this software could help bridge it.

The blue wireframes of Project PhotoFly are gone in 123D Catch
Project Photofly is now called 123D Catch Beta and there have been some minor changes in how it works, but it's still free, fast, and powerful.  123D Catch Beta takes a folder of 40 or 50 photos and processes them into a 3D model.  It works with objects, room interiors, building exteriors, and even people.  I want to use it to create free-floating 3D models of artifacts from digital photos.  So far, I can do that, except for the free-floating part.
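Since the render is only as good as the photo set (more on that below), I've started sanity-checking a shoot before uploading it.  Here's a rough sketch of that habit in Python - this is nothing 123D Catch provides, and the folder name and resolution floor are my own assumptions:

```python
# Hypothetical pre-flight check on a folder of photos before
# submitting it to 123D Catch.  The 40-50 photo range comes from
# my own workflow; the folder name and the resolution floor are
# assumptions, not Autodesk requirements.
from pathlib import Path
from PIL import Image

PHOTO_DIR = Path("gouge_shoot")   # assumed folder name
MIN_PHOTOS, MAX_PHOTOS = 40, 50   # the range the software seems happiest with
MIN_PIXELS = 1600 * 1200          # assumed floor for resolving fine detail

photos = sorted(PHOTO_DIR.glob("*.jpg"))
print(f"{len(photos)} photos found")
if not MIN_PHOTOS <= len(photos) <= MAX_PHOTOS:
    print("Warning: outside the 40-50 photo range")

for p in photos:
    with Image.open(p) as im:
        w, h = im.size
        if w * h < MIN_PIXELS:
            print(f"{p.name}: only {w}x{h}, may not resolve fine detail")
```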

Trimming leaves holes
In the models that I've been able to make, the objects stay fixed to the surface they were photographed on.  It's possible to trim around the artifact so that you don't see the table or pedestal that it's sitting on, but that leaves a hole in the object where it made contact with the surface.
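If you can get the trimmed scene out of 123D Catch as a mesh file, patching the contact hole afterwards is an option.  Here's an untested sketch using the open-source trimesh library - the filenames are made up, and fill_holes() only closes simple openings, so a large ragged contact patch may still need hand work in a dedicated mesh editor:

```python
# A rough sketch of patching the contact hole outside 123D Catch,
# assuming the trimmed scene has been exported as an OBJ file.
# "gouge_trimmed.obj" is a made-up filename.
import trimesh

mesh = trimesh.load("gouge_trimmed.obj", force="mesh")
print(f"Watertight before repair: {mesh.is_watertight}")

# Attempts to triangulate any open boundaries in the mesh.
trimesh.repair.fill_holes(mesh)
print(f"Watertight after repair:  {mesh.is_watertight}")

mesh.export("gouge_filled.obj")
```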

Fly-through animations of the model can be exported to YouTube, like this one:


The model shows the camera's position for each reference photo
I thought that I'd be able to make a complete 3D object by moving the artifact around to take photos of it from all sides, but that doesn't work.  The program reads all of the information in each photo while assembling the 3D model and gets very confused if the object in the foreground changes while the background stays the same.  The software is designed around fixed objects, with the photographer moving around to take many images of that stationary scene from multiple angles.  Perhaps a completely neutral background without any shadows could allow you to rotate the object and keep the camera in a fixed position, but I don't have the set-up to attempt that.
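For what it's worth, here is roughly what that neutral-background pre-processing might look like with OpenCV: flatten everything that reads as backdrop to pure white, so there are no background features for the software to lock onto.  I haven't tried this - the filenames and the brightness threshold are guesses that would need tuning per shoot, and it assumes a light, evenly lit backdrop that is clearly brighter than the artifact:

```python
# Untested sketch: paint the backdrop of a photo uniform white
# before feeding it to the photogrammetry software.  Assumes a
# bright, evenly lit backdrop; the threshold of 200 and the
# filenames are placeholders to tune per shoot.
import cv2
import numpy as np

img = cv2.imread("gouge_01.jpg")                # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Pixels brighter than the threshold are treated as backdrop.
_, backdrop = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

# Remove speckle so the mask doesn't eat into the artifact's edges.
kernel = np.ones((5, 5), np.uint8)
backdrop = cv2.morphologyEx(backdrop, cv2.MORPH_OPEN, kernel)

out = img.copy()
out[backdrop == 255] = (255, 255, 255)          # flatten backdrop to white
cv2.imwrite("gouge_01_masked.jpg", out)
```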

I had decent results on the weekend taking photos of a reproduction ground stone gouge in my living room.  I mounted the gouge on a glass candlestick so that I could get photos of its underside.  Making this model taught me how important the set-up and initial photography are.  The software is capable of quickly and accurately rendering a detailed model from good photography, but I found making changes to that model after the initial rendering much slower and more limited.  The models, called "scenes" in 123D Catch, are created on a remote server and then downloaded to your computer.  On my machine, it takes about 10 minutes to go from a folder full of photos to a 3D scene.

Manual stitch gives the illusion of control to the user
The scene can be rotated, zoomed, cropped, and exported as an animation.  Missing or poorly fit images can be manually stitched into the model and re-submitted to the server, and a new model with the new information will be rendered.  The manual stitch option is good to include, but it sucks up time and I found it quicker and easier to just re-shoot the missing photos and build the model from scratch again.  For practical purposes, the only post-rendering scene manipulation that the software offers is cropping.  If the shape of your model doesn't look right after the initial render, it's very difficult to improve it through editing.  And be careful with your cropping - there isn't an undo button.  Seriously, why isn't there an undo button?
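Until there is an undo button, the safest habit is probably a timestamped backup of the scene file before every destructive edit.  A minimal sketch - the filename and extension here are placeholders, so use whatever your scene actually saves as:

```python
# Make a timestamped copy of a scene file before cropping, since
# there is no undo.  The filename and extension are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(scene_path):
    """Copy the scene next to itself with a timestamp suffix."""
    scene = Path(scene_path)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup = scene.with_name(f"{scene.stem}-{stamp}{scene.suffix}")
    shutil.copy2(scene, backup)
    return backup

snapshot("gouge_scene.3dp")   # placeholder name and extension
```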

As good as the photos you feed it
Autodesk 123D Catch Beta is powerful at rendering photos into a 3D scene, but limited in its editing functions.  To make better models you need to set up and take better photos, and not count on being able to manipulate the scene within the program.  If the goal of the Beta version is to prove the concept and create a desire for a fully functional 3D photo editing suite, then I think 123D Catch performs perfectly.

Photo Credits: Tim Rast, screen captures from Autodesk 123D Catch Beta and Project Photofly.

5 comments:

  1. I wonder if any of that new 3D printer technology has been used to "print" reproductions of artifacts. Maybe eventually one will be able to send information from the 123D program to a printer and press "print"!

  2. I know 3D printing is done with fossils, but I'm not sure how common it is with artifacts. But I'm sure you're right, I think 3D printers will become much more accessible in the future. Ten years ago I would have never guessed that I'd be able to create a virtual 3D model in 10 minutes on a Sunday afternoon using free software and a camera.

  3. Hi Tim, very interesting post! Photogrammetry has a great future in archaeology. I was going to attempt to do this with one of your reproductions I was given as a gift but haven't got around to it yet. You might want to take a look at Agisoft PhotoScan as it allows you to mask areas of the image so they won't be used to compute the 3D model.

    This blog post shows an awesome example of some 3D printed rock art: http://palentier.blogspot.com/2011/11/3d-prehistoric-pictograph-printing.html

  4. Thanks for the tips and the link - that's a very cool idea. Rock art is best appreciated in 3D so that's a perfect use of the technology.

    If you do scan the reproduction, please let me know - I'd love to see it.

  5. Hi Tim,
    I was thinking that making a cradle with fishing line to support the artifact might be pretty easy. It would be virtually transparent (with some playing around) and strong enough to hold the artifact. The results might be pretty close to free-floating. Interesting blog. Keep up the good work!
