Photogrammetry tutorial 6: building a model from the photos

EDIT 12 May 2015: Please see this post for an update of some of the methods described below! Sometimes, a specimen is better dealt with without a turntable, even though it theoretically could go on one!

We’ve been through the details on how to take photos for photogrammetry (parts 1, 2, 3, 4, 5), now it is time to talk about the next step: how to actually get a program to build a model for you.

For all those who wonder why there are no pics in this post – patience, young Palaeowan! There will very soon be a paper out that has all the info and pics you need, and that’s all I will say now :)

There are plenty of photogrammetry programs out there (see this overview, somewhat incomplete, on wikipedia): some of them free, some that look like they cost nothing, and some that cost money.

Why do I say some ‘look like they cost nothing’? Because some cloud-based programs actually do not cost money, but want the copyright to your models, and some even to the photographs you upload. Quote from Autodesk 123D Terms of Service: “… you grant to us and our affiliates a world-wide, royalty-free, fully paid-up, perpetual, non-exclusive, transferable, and fully sublicensable (through multiple tiers) right and license (but not the obligation) to reproduce, distribute, redistribute, modify, translate, adapt, prepare derivative works of, display, perform (each publicly or otherwise) and otherwise use all or part of Your Content, by any and all means and through any media and formats now known or hereafter discovered, but solely in connection with the Service and/or our business activities (such as, without limitation, for promoting and marketing the Service) and/or to comply with legal or technical requirements.” Obviously, they need rights to do much of that so they can actually use your raw data to make a model for you, but look at the parts about ‘our business activities’ and ‘promoting and marketing the Service’! Imagine you create a really neat model of something, and Autodesk distributes printed-out copies of it as an advertising gift to millions of people… That’s a minefield you should NOT get into!

Personally, I use Agisoft’s Photoscan Pro. An educational license costs $549, and the program covers 99% of my needs while being extremely user-friendly. Also, I have had excellent experiences with the Agisoft support team via their user forum, so overall I am very happy with my choice. Others tell me that the professional Autodesk program Image Modeler is excellent, too, and it offers a lot more than photogrammetry capabilities. On the other hand, VisualSFM (SFM = Structure from Motion) is completely free for personal, non-profit or academic use, and gives you many options for handling difficult data sets that Photoscan Pro doesn’t have. My esteemed colleague Peter Falkingham is a great fan, and has used it successfully for some very neat science work, including the reconstruction of the Paluxy River trackways from old photos. Peter also has a neat guide on how to use VisualSFM and the free mesh editor Meshlab, so go grab it.

So what you read below is what I do in Photoscan, and if you use a different program you may need to adjust things a bit to the specifics of that program. The general principles, however, hold true for all of them.

Alignment and Model Building

Before I give you details on what to do in-program, it is important to know what kind of raw data set you have, as directions differ a bit.

Immobile specimens

If you’re dealing with an object that you could only photograph using the walk-around method, simply throw all the photos into one Chunk in the program and run Alignment*. If the resulting sparse point cloud (a point cloud consisting of all the points the program used to find the camera positions) and camera positions look fine, the next step is generating a dense point cloud. You now have two options: have the program build it, then clean it up by removing the objects around your specimen; or mask all photographs to remove the objects around your specimen, and then have the dense cloud calculated. Usually, the former is more difficult, while the latter takes longer. If I need to do a lot of pre-alignment masking, or if I want to be very sure that the final model will have perfect edges where the real specimen contacts its support, I opt for masking first. Otherwise, I often choose a middle ground: I mask all photographs, but only where the specimen contacts the ground. That creates a nice gap there to use for cropping the dense point cloud.

* Remember that if there is motion in the background, like people walking through, you will need to mask the moving objects on all photographs they appear on!

So, decide how to proceed, mask as necessary, and align the photos (Menu ‘Workflow’, option ‘Align Photos’). Now adapt the selection box as needed (sometimes, the auto suggestion will cut off part of the specimen), and have a dense cloud built (Menu ‘Workflow’, option ‘Build Dense Cloud’). Crop that dense cloud as needed and have a mesh built (Menu ‘Workflow’, option ‘Build Mesh’).

Turntable specimens

If you used a turntable for the photography, you will likely have to mask the background in all images. And I assume that you took at least two sets of photos, between which you flipped the specimen over so that you can get a model of the underside, too.

In this case, you can proceed as above, with prior masking or with cropping-after-dense-cloud generation if you treat both series separately, and only combine them into one chunk later (what I term the multi-chunk method; see below on how to do that). However, the most elegant way is having all photos in one chunk, and having Photoscan spit out the complete model in one go. This I call the one-chunk method.

For the one-chunk method, you should always mask all images. If you made sure that the background in your images is out of focus or so blank that Photoscan can’t find any points, it may be enough to mask the ground below the specimen near the contact points – but that may lead to colour errors in the final model. Thus, I recommend spending the time to completely mask the background in all images. Luckily, Photoscan has an ‘invert’ option for the masking tool, so you can simply click the masking line all around the specimen, hit the invert icon, and then ‘mask’.
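The ‘invert’ trick amounts to flipping a binary mask: you select the specimen, and inversion turns that selection into a mask of everything else. A toy illustration in plain Python (this is just the logic, not Photoscan’s internals; the mask here is a simple list of rows):

```python
def invert_mask(mask):
    """Flip a binary mask: pixels marked True become False and vice versa."""
    return [[not px for px in row] for row in mask]

# A tiny 3x3 'selection' around a specimen: True = inside the outline.
selection = [
    [False, True, False],
    [True,  True, True],
    [False, True, False],
]

# Inverting turns the selection OF the specimen into a mask of the
# background, which is exactly what the 'invert' icon saves you from
# clicking out by hand.
background_mask = invert_mask(selection)
```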

Now align and see – if your photos are good you should get a perfect alignment! Build the dense cloud, the mesh, scale it (see below) and enjoy! Yes, that simple!

If it is that simple, why would anyone use any other method at all? Well, the one-chunk method has one slight drawback: it depends on you taking really good photographs that are perfectly suited for photogrammetry, and taking them in proportion to the part of the surface they show. If Photoscan has a hard time finding enough points in some images, or if parts of the surface are grossly overrepresented in the images, you will end up with a less-than-perfect alignment. The same goes if you can’t really fill the frame of the image with your specimen, which is not that rare when you digitize small specimens with a normal lens. In really bad cases, Photoscan may even be hard pressed to align the photos from one set. There are ways of helping the program along, mostly using the background for alignment. However, if you do that, the background must be immobile relative to the specimen (i.e., rotate it with the turntable, e.g. by placing a printed text page on the turntable and under the specimen), and then you can’t use the one-chunk method easily.

For the multi-chunk method, which you can also use if you did not use a turntable, you must place each set of images in a chunk of its own. You do NOT want to mask the background that moved with the specimen. However, you should mask a thin strip around the specimen, to create a small gap in the dense cloud that makes cropping the background away easier. Basically, this is the same approach as for immobile specimens. Now, align the photos in each chunk, check the alignment, create the dense clouds, and delete the background points.

Next comes the tricky part: you need to place at least three markers that sit in the exact same spots in both chunks. The best way is to place those markers on the photographs: find a point that you can really nail down to the exact pixel on several shots, right-click on it in one photo, choose ‘Create Marker’, then right-click and ‘Rename’ that marker (e.g., ‘A’, or a descriptive name like ‘large sand grain on left postzygapophysis’, or whatever). If the marker is not perfectly located, you can left-click and drag it.

Now, go to another image from the same set that shows this point, right-click on it, and choose ‘Place Marker’. From the pull-down menu, choose the correct marker. Repeat this procedure until you have three markers placed and visible on the dense cloud, and check that they show up on the cloud where they should be. This is a good check on how good the alignment within each chunk really is. If your markers aren’t in the right place on the cloud, check your markers on the photos. If those are all OK, you can’t use photo-based markers and need to switch over to cloud-based ones. These you create similarly, but it is all a bit more cumbersome, as you can’t right-click a marker to rename it, nor can you shift its position with the mouse. And later it will mean that you can’t merge the two dense point clouds into one, but have to calculate a new one from the merged chunks’ sparse point cloud – ugh! So do try to find images that are well-aligned and create your markers on them.
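Because the chunk alignment pairs markers purely by name, a single typo (‘C’ vs ‘c’) silently drops a correspondence. This is the kind of sanity check worth doing before aligning; a plain-Python sketch with a hypothetical helper (not a Photoscan function):

```python
def check_marker_names(chunk_a_markers, chunk_b_markers):
    """Compare marker name lists from two chunks and report which
    markers are actually usable for marker-based chunk alignment."""
    a, b = set(chunk_a_markers), set(chunk_b_markers)
    shared = sorted(a & b)
    return {
        "shared": shared,
        "only_in_a": sorted(a - b),          # probably typos or forgotten markers
        "only_in_b": sorted(b - a),
        "enough_for_alignment": len(shared) >= 3,  # the three-marker minimum
    }

# 'C' vs 'c' is exactly the kind of mismatch that ruins the alignment:
r = check_marker_names(["A", "B", "C"], ["A", "B", "c"])
```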

Now, place markers in the exact same places in the other chunk, and rename them to the exact(!) same names. Then choose ‘Align Chunks’ from the workflow menu, set the method to ‘Marker based’, make sure the correct chunks are selected, and click ‘OK’. There is an option that is a bit confusingly named ‘Fix Scale’. It does not, as I first assumed, fix any scale differences between the chunks, but rather makes Photoscan NOT scale the chunks to fit each other. DUH!
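One way to picture what ‘Fix Scale’ controls: if the same markers sit at slightly different mutual distances in the two chunks, the program can rescale one chunk so those distances agree. A rough stdlib-only sketch of that scale ratio (the helper and the coordinates are invented for illustration; units are arbitrary chunk units):

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def relative_scale(markers_a, markers_b):
    """Estimate the scale factor between two chunks from the pairwise
    distances of identically named markers.
    markers_a / markers_b: dicts of name -> (x, y, z) in chunk units."""
    shared = sorted(set(markers_a) & set(markers_b))
    ratios = []
    for m, n in combinations(shared, 2):
        ratios.append(dist(markers_b[m], markers_b[n]) /
                      dist(markers_a[m], markers_a[n]))
    return sum(ratios) / len(ratios)

a = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (0, 1, 0)}
b = {"A": (0, 0, 0), "B": (2, 0, 0), "C": (0, 2, 0)}  # chunk b is 2x larger
# relative_scale(a, b) is 2.0 here; with 'Fix Scale' ticked, this factor
# would NOT be applied and each chunk would keep its own scale.
```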

Once aligned, you now need to merge the chunks: Menu ‘Workflow’ –> ‘Merge Chunks’. Logically. As I said, Photoscan is very user friendly ;) It now creates a new chunk, so your previous work is untouched. You will want to merge the models, so tick that box. Do NOT tick the box for merging markers, as un-merged markers are an easy way of spotting problems. Now, check out the merged dense cloud in the new chunk. If the marker pairs are very close together and the cloud looks fine, have a mesh generated and you’re done. If something big is amiss, it is usually caused by erroneous marker placement. Delete the merged chunk, fix the markers, then align the chunks anew and merge them again until satisfied.
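The ‘marker pairs very close together’ check can be made quantitative: measure the gap between each un-merged pair in the merged chunk and compare it against a tolerance you are happy with. A hypothetical sketch (coordinates and the tolerance value are invented):

```python
from math import dist

def alignment_errors(pairs, tolerance):
    """pairs: list of (position_from_chunk_1, position_from_chunk_2)
    for each un-merged marker pair, in the merged chunk's coordinates.
    Returns the per-pair gaps and whether all are within tolerance."""
    gaps = [dist(p, q) for p, q in pairs]
    return gaps, all(g <= tolerance for g in gaps)

# Three marker pairs; each pair should sit nearly on top of itself:
pairs = [
    ((0.00, 0.00, 0.00), (0.001, 0.000, 0.000)),
    ((1.00, 0.50, 0.20), (1.000, 0.501, 0.200)),
    ((0.30, 0.90, 0.10), (0.300, 0.900, 0.102)),
]
gaps, ok = alignment_errors(pairs, tolerance=0.005)
# ok is True here; one large gap would point straight at a misplaced marker.
```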

Scaling the Model

This last step should be easy, provided you followed the protocol and added a scale. All you have to do is mark the ends of your scale in at least two photos. In Photoscan, you right-click on that place on the photo, choose ‘Create Marker’, and then rename it (e.g., 0 cm or 3.7 cm or whatever; it is usually smart to use a descriptive name!). Then, choose another point and create a second marker, and rename it (e.g., 10 cm or 7.2 cm). Try to use the longest distance between markers you can find!

Now, go to another image and find the same points. Right-click, choose ‘Place Marker’, and select the correct marker. One other image per marker will do, and you do not even have to place both markers in the same images. You can create one marker in, say, your 5th image and the other in your 8th, then place the first on image 3 and the second on image 11. It doesn’t matter, as long as the scale object hasn’t moved relative to the specimen between all those photos.

Once you have the two markers, select them both (CTRL and left-click one, then the other), right-click and choose ‘create scale bar’. Switch to the Ground Control tab (at the bottom left!), select the scale bar and enter the length for it. CAUTION: Photoscan is in meters! Now, click on the ‘Update’ icon on the top of the pane (it looks like two blue arrows forming a circle). DONE!
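Under the hood, scaling boils down to one number: the known real-world distance divided by the marker-to-marker distance in the model’s arbitrary units. A minimal stdlib sketch (the helper and coordinates are made up for illustration; note the real length goes in as metres, since that is what Photoscan expects):

```python
from math import dist

def scale_factor(marker_1, marker_2, true_length_m):
    """Return the factor that converts model units to metres, given two
    marker positions and the known real distance between them."""
    model_length = dist(marker_1, marker_2)
    return true_length_m / model_length

# Two markers that are 0.1 m (10 cm) apart in reality,
# but 2.5 arbitrary model units apart in the reconstruction:
f = scale_factor((0.0, 0.0, 0.0), (2.5, 0.0, 0.0), 0.10)
# Multiplying every model coordinate by f (= 0.04) puts the model in metres.
```

The longer the marker-to-marker distance, the smaller the relative effect of a one-pixel placement error, which is why you should use the longest distance you can find.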


About Heinrich Mallison

I'm a dinosaur biomech guy working at the Museum für Naturkunde Berlin.
This entry was posted in Digitizing, photogrammetry.

10 Responses to Photogrammetry tutorial 6: building a model from the photos

  1. Pingback: How to Make Your Own 3D « paleoaerie

  2. Aaron Curtis says:

    Is a palaeowan an ancient padawan?

  3. dmaas says:

    do you even need agisoft pro, Heinrich? I have a license of the basic for ~150 Euro and it’s great for all the meshing and item digitization stuff… pro seems to me to be for geo-scaping needs.
    Thanks for these tips… very helpful . At the agisoft forums, I requested filtering of particles via color. This would aid in quickly isolating misplaced background points. And generally, I’m hoping agisoft goes more “next-gen” … with all the information it has available, it could be generating physically based surfacing maps. Not so important for palaeowans, but good for vfx artists :-D

  4. Steve says:

Pics? It’s been over a year since you posted this – will they be done soon? ;)
