Photogrammetry tutorial 11: How to handle a project in Agisoft Photoscan

Over the course of the last year I have helped a bunch of people with their photogrammetry projects. Usually, they needed help with the photography – understanding how the program works, so that they take the right number of photographs in the right places. Or with camera settings. Or with the set-up for easy and quick models that show the underside of a specimen, too.

Recently, however, I realized that a lot of people also have problems making the most out of their photographs in their photogrammetry software, most of them using Agisoft’s really easy-to-use Photoscan. They’d run through the “Workflow” menu of Photoscan and end up with models that were OK or so-so, based on photographs that should deliver really excellent models.

Below, therefore, I’ll describe how I use Photoscan to create high-quality models. Models like the one in this screenshot:

Screenshot from Photoscan

(Click for larger size. Note that I decreased model size from ca. 45 million polygons to 10 million.)

PLEASE NOTE: this post assumes that you are using Photoscan Pro, not the Standard version! Some steps described below are not available in the Standard version!

Before we get into the details of Heinrich’s Photogrammetry Work Scheme, let me list the basic steps of building a 3D mesh from photographs:

  1. Put photographs into software
  2. Have software align photographs, including building a sparse point cloud (tie points)
  3. Have software build a dense point cloud, which may need manual cleaning
  4. Have software build a polygon mesh
  5. If you want to, have software calculate a texture for the mesh

Somewhere in between, you also need to scale your model. This can be done at any time after step 1 (I usually do it after step 3), and thus I will put the info on it at the end of the post.
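
By the way: Photoscan Pro ships with a built-in Python console, so all of these steps can also be scripted. Here is a minimal sketch of the five steps above, based on my reading of the 1.2-era API – treat the exact call names as assumptions and check Agisoft’s API reference for your version:

    import PhotoScan  # only available inside Photoscan's own Python console

    doc = PhotoScan.app.document
    chunk = doc.addChunk()

    # Step 1: put photographs into the software (hypothetical file names)
    chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])

    # Step 2: align photos, building the sparse (tie) point cloud
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
    chunk.alignCameras()

    # Step 3: build the dense point cloud
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality)

    # Step 4: build the polygon mesh
    chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)

    # Step 5 (optional): calculate a texture for the mesh
    chunk.buildUV()
    chunk.buildTexture()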

So, how do I do these steps?

Placing the photos in the software

First of all, chunks.
Photoscan allows using several separate chunks, on which you can have the software perform alignment, dense point cloud building, meshing etc. separately, then align and possibly combine them later. I do not use separate chunks if I can help it. Rather, I try to make sure that all my photos show my specimen with a totally immobile background, or one that has no features at all, across all sets of photos I take. This means, for example, that if I need to flip a specimen over to photograph the underside, I move it to a different location, so that the two backgrounds are totally different from each other. This way, the only correlating features between the first set of photographs (showing the upper side of the specimen) and the second set (showing the underside) are all on the specimen itself.

And that’s it!🙂 Step 1 completed! Unless you want or need to use masks. A huge topic of their own, which I will address elsewhere. Here, let’s simply assume you do not need to mask anything.

Having the software align the photos

This step is not tricky at all: simply go to the “Workflow” menu and select “Align photos”. You now face a pop-up window with options:

Screenshot from Photoscan

Uh, erh….. em…. what now?

Simply stated, accuracy determines how fine-tuned the camera position determination is. For a quick check you can set it to low, but otherwise keep it on “high”.

Pair selection basically allows the program to spend some time finding out which photo pairs likely do not overlap anyway, and can thus be skipped during later steps, which can save quite some calculation time. If you take a series of photos of a long dinosaur track, for example, there are photos from the beginning and from the end of the track that cannot ever show the same points – and thus any checking for matching points is a waste of time. I typically leave this option on “Disabled”. Sometimes, turning it on can help getting an alignment, so if a project fails, try it again with pair selection turned on.

The “Advanced” section holds three very important settings.
The key point limit basically tells the program how finely it is supposed to sample each photo. Higher values mean more features (points that may be re-recognized on other photos) are to be looked for, and thus there is a higher chance the alignment will work and be of high quality. At the cost of a much longer calculation time. If you have all the computer power in the world, type a 0 into this field (which means “unlimited”). Otherwise, 400000 is a good value for me, so try it out.

Tie point limit sets a limit on the number of points that tie one image to another. Theoretically, 3 is the minimum you need, but the more the better. I usually keep this at 0, too (again, 0 means “unlimited”), but to save time I sometimes limit it to 10000. Although you can get very good models with values as low as 100, and although 1000 is often good enough, the time saved compared to a setting of 10000 is so little that I keep it at 10k – after all, re-running a project because I set too low a limit also costs time.

And then there is that check-box offering the option of constraining features by masks. That’s a key thing if you failed to do things right during photography with regard to the background. See, if there are things visible in the photos’ backgrounds that move in relation to the specimen (for example if you used a turntable and the background is somehow halfway in focus), you need to mask the photos (more on that below). And you need to check the checkbox “Constrain features by masks” if you want Photoscan to heed those masks. Otherwise, that pesky background will ruin the photo alignment.
In previous versions of Photoscan the box was auto-checked if there were any masks defined. No longer – so please remember to expand the Advanced section of the window and check that box if you created any masks!
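
Incidentally, all of these dialogue options have counterparts in the Pro version’s Python API. A sketch with the settings I just described – the keyword names are from my reading of the 1.2-era API and may differ in your version:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk   # the active chunk

    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,        # "Accuracy: High"
                      preselection=PhotoScan.NoPreselection,  # "Pair selection: Disabled"
                      keypoint_limit=400000,                  # 0 would mean "unlimited"
                      tiepoint_limit=10000,                   # 0 would mean "unlimited"
                      filter_mask=False)                      # "Constrain features by masks"
    chunk.alignCameras()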

All set? Then click OK, and enjoy a looooong wait😉 Next up is having the software create a dense point cl…. oh, wait! Not there yet!

After the alignment has run, there is something you need to do before you can have the program create a dense point cloud. Something many people don’t do, something that is at fault for roughly half the failed or bad-quality models I have seen (the other half is caused by bad photography). You now need to

OPTIMIZE TIE POINT CLOUD AND ALIGNMENT

Yes indeed – a key work step that has no entry of its own in Photoscan’s Workflow menu! It is, however, of paramount importance if you want to create really good models. Go to the “Edit” menu and select “Gradual selection“. Actually, before you do this, right-click your current chunk in the Workspace window and select “Duplicate chunk”, then proceed with the copy. That way, if you mess up you can simply go back to the original chunk, make a new duplicate and start over.

Ok, now open the gradual selection dialogue. A pop-up appears, and here you meet one of the few key differences between the current official version 1.1.6 and the pre-release of version 1.2.0: the available options differ a bit. 1.2.0 offers an additional option, and because the algorithms apparently have changed a bit, I use the options that have stayed the same a bit differently. Here, I’ll detail 1.2.0 only, as it cannot take much longer for it to become the official version.

The first option to choose is “Reconstruction uncertainty“. When you do so, the slider will show numbers between 0 (on the right) and a varying number on the left – likely one in the middle to high hundreds. Simply click into the field and type “10“. Your computer will think for a second, and you’ll see a large bunch of points in the sparse cloud selected (turning pink). Click “OK” and delete all the selected points. You can do this by pressing the [delete] key or by clicking on the appropriate icon. If the [delete] key doesn’t work, you need to click once(!) into the model window to activate it, then press [delete] again.

And yes, delete all those points. All of them. Basically, you’re throwing out a bunch of tie points that have a low likelihood of being in the right place.

Now, do it again: open the gradual selection dialogue, select “Reconstruction uncertainty”, enter “10”, click OK and delete the points.

Why twice? Because the slider has discrete steps with values depending on the highest number available (on the left), and even if you enter a specific number it will only jump to the next-higher step available. Thus, if 838 is shown on the left, it will jump to something like 17.3 instead of 10, even if you enter 10. Therefore, repeat the procedure to make sure you get close to 10.
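
(If you script this step via the Python console instead, the slider stepping is no issue, because you pass the threshold directly. A minimal sketch, again assuming the 1.2-era API names are right:)

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    f = PhotoScan.PointCloud.Filter()
    f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReconstructionUncertainty)
    f.selectPoints(10)                        # select everything above the threshold
    chunk.point_cloud.removeSelectedPoints()  # ...and delete it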

If you now check the sparse point cloud again you’ll notice that all of a sudden a lot of the floating nonsense data has disappeared. If you’re lucky, you have a much clearer view of the object you were digitizing. If you are unlucky, you removed so many points that you just killed the model. If that’s the case (and you will notice this soon), it is best to start again by taking new photos. You might want to try with a higher number than 10, but usually that results in a pretty shitty model.

Next, you need to tell Photoscan to optimize the alignment of the cameras based only on the high-quality points you retained. To do this, click the little icon that looks like a wizard’s wand.

Screenshot from Photoscan: the ‘Reference’ pane

Here it is, in the middle.

If you can’t see such an icon, that’s because you can’t see the ‘reference’ pane. It’s the second tab hiding behind the ‘Workspace’ pane – you can simply select it, then drag and drop it so that it is below and not behind the other pane. In any case, go there and select the icon.

If you are using the Standard edition, you won’t find a reference pane and wizard’s wand icon. However, you can still optimize your point cloud: go to the Tools menu, and select the “Optimize cameras” option.😉

Photoscan will now show you a window that lists a lot of parameters you can select. Simply select them all but the last, and click OK. The ensuing calculations can take a few minutes. And Photoscan may inform you that some cameras (photos) have too few tie points left and will be tossed out of the alignment. That’s OK (unless your photos are very bad and you should start over again anyways), so click OK.

In the end, you’ll see your sparse point cloud again, and the quality of the alignment will have improved. On to the next step: select “Gradual Selection” again, and this time check the “Reprojection Error” option. Change the value so that it is not much higher than 1, delete the selected points, and click that wizard’s wand icon again. This time, you don’t need to do things twice, as the highest value will normally not exceed 5, and thus the step closest to 1 will be very close to 1 indeed.

Next, select “Gradual Selection” again (yeah, yeah, it does get old), and this time check the “Projection Accuracy” option. I must admit that I do not really understand too much of how it works, and I have seen very variable numbers shown for this option. However, if there are values much higher than 10, I normally enter 10 and delete the selected points. If the number of points selected is less than ca. 10% of the total, I often go down to a value of 8 or even 7, until I hit 10%. That doesn’t sound like much, but I have seen models improve quite a bit by deleting these points. Anyways, kill the points and hit you-know-which-icon.
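
To tie the whole routine together, here is the same sequence as a console sketch – delete by criterion, then optimize, three times over. The thresholds are my rule-of-thumb values from the text, and the API names are, as before, my assumption for the 1.2-era version:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    Criterion = PhotoScan.PointCloud.Filter

    # (criterion, threshold) pairs, in the order described above
    steps = [(Criterion.ReconstructionUncertainty, 10),
             (Criterion.ReprojectionError, 1),
             (Criterion.ProjectionAccuracy, 10)]

    for criterion, threshold in steps:
        f = PhotoScan.PointCloud.Filter()
        f.init(chunk, criterion=criterion)
        f.selectPoints(threshold)                 # select everything above the threshold
        chunk.point_cloud.removeSelectedPoints()  # kill those points
        chunk.optimizeCameras()                   # the wizard's-wand step; the fit_*
                                                  # keyword arguments mirror the checkboxes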

If you wonder why I recommend you do this all, check this post. It has pictures. No, not of cats.

And finally, this step is DONE! On to the next……

Ha! Not so fast! Before we proceed, you should spend some time turning the sparse cloud around and checking it. Sometimes, you can immediately see if there are problems. A common one is that your model is there twice, with a very slight offset between the two versions. Or you can see a bunch of nonsense points, duplicating part of the object. Usually, that’s bad news and means a lot of work coming your way, but there’s nothing stopping you from trying to improve your odds by manually editing your sparse point cloud. Simply select points you do not like (usually, using the lasso tool is a good idea) and delete them, then hit you-know-which-icon again. Once you’re satisfied, you’re ready for

Having the software build a dense point cloud

This is really easy, as there is an entry in the “Workflow” menu. Haha!

Uhm, not so fast (again). If you have Photoscan build a dense cloud now, it will do so within the selection box – and you should make sure that the object you intend to model is fully within the selection box, and as little else as possible. So, use the icons for rotating and scaling the box to fit. Then select the option “Build Dense Cloud” from the “Workflow” menu. Again, Photoscan opens a little dialogue window offering options:

Options that truly speak for themselves. Low quality means just that, high means high – golly! I normally recommend using “High“, but it all depends on what you want to achieve.

If you expand the hidden options you can see the options for Depth Filtering. Keep this on “Aggressive” unless you have a very good reason to do otherwise (e.g., if an otherwise perfect project was suddenly missing tiny details where you needed them preserved). Click OK and sit back for another lengthy wait…… Next up is the ugly task of cleaning your dense point cloud, and boy can that be a bother! Or not, depending on your photography set-up and whether you used masking or not.
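
Before we get to cleaning: for the record, here is the dense cloud build as a console sketch. The selection box is chunk.region in the API; here I simply rely on whatever you adjusted in the GUI. Keyword names again assume the 1.2-era API:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # Builds only within the selection box (chunk.region), as adjusted in the GUI
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                          filter=PhotoScan.AggressiveFiltering)  # depth filtering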

In order to remove unwanted points from a dense cloud, Photoscan 1.2.0 offers several tools. You can use the various selection tools from the tool bar – I usually use only the lasso tool – to select points and then either crop to the selection or delete it. Or you can use the “Tools” menu, entry “Dense Point Cloud” –> “Select Points by Mask” or “Select Points by Color”. In the former case, you get to choose which mask(s) on one of the images is used to select all points that are covered by it/them. Very helpful if your specimen doesn’t have a complex geometry. If it does, it is possible that you inadvertently select parts of the specimen, so please be careful. Always make a duplicate of the chunk and work on that only. Selecting points by colour is pretty self-explanatory. Play with the various options a bit, and try the “Pick Screen Color” option. How to do this depends a lot on background and specimen colour, and so on, so I can’t really give you detailed recommendations.

And then you’re almost done! The next step is

Having the software build a mesh

This is really easy again, as there is an entry in the “Workflow” menu again. Haha again! Select it, and Photoscan will think for a moment. Then, you get a pop-up window:

Screenshot from Photoscan: the ‘Build Mesh’ dialogue

If you are modelling a near-flat surface, e.g. the surface of the earth from satellite images, you can change the surface type to Height Field. Otherwise leave it on Arbitrary. You can also create a mesh from the sparse (tie) point cloud, kind of like a preview, but you will normally want to use the dense cloud. For face count (the number of polygons to create) you can choose between three suggestions, or select Custom and enter a 0. This will give you the full size that can be created from the dense cloud. If you select one of the other settings, Photoscan will use a 0 and then downsize the mesh to what you selected. You can downsize the mesh afterwards, using the Tools –> Mesh –> Decimate Mesh option, so there really is not too much of a need to decimate it automatically.

I’d normally leave Interpolation on Enabled (default). And as I normally do not classify my point clouds in separate classes, there is no need to select any point class.
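
Scripted, the meshing step with these settings might look like the sketch below. Whether face_count=0 means “full size” in the API the way it does in the dialogue is an assumption on my part, so double-check against the API reference:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    chunk.buildModel(surface=PhotoScan.Arbitrary,              # not a near-flat height field
                     source=PhotoScan.DenseCloudData,          # not the sparse cloud
                     interpolation=PhotoScan.EnabledInterpolation,
                     face_count=0)                             # 0 = full size, as in the dialogue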

Simply click OK now and wait……. a…… while……. Photoscan will initially project a short time for the job, which will grow and grow and grow. I’ve seen initial suggestions of 5 minutes after 10% completion grow to 6 hours!

In the end, you should now have a very nice high-resolution mesh. Congratulations! If instead you get a Photoscan crash, make sure that in the program settings (Tools –> Preferences; then select the Advanced tab) you have de-selected “Switch to model view by default”. That keeps Photoscan from changing the view to a mesh that is too big for your computer’s dinky memory.

Always(!) save your project after meshing and before changing to mesh view. ALWAYS!

You can check the size of the mesh before you change view, too, by clicking on the + sign next to the chunk in the “Workspace” tab. It’ll show you a little triangle icon with the text “3D model”, and give the number of faces. Learn which sizes your computer can display!

If anything is too large to show, either right-click the triangle icon and export the mesh to work on it in another program, or duplicate the chunk and use the “Tools –> Mesh –> Decimate Mesh” option to reduce the mesh to something palatable. I recommend against doing away with the full-size mesh if you’re planning on doing science with your data.
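
The check-then-duplicate-then-decimate route can also be scripted. As far as I know chunk.copy(), decimateModel() and exportModel() all exist in the Pro API, but treat the exact signatures and the file path as assumptions:

    import PhotoScan

    doc = PhotoScan.app.document
    chunk = doc.chunk

    print("faces:", len(chunk.model.faces))      # check the size before switching views

    working = chunk.copy()                       # keep the full-size mesh untouched
    working.decimateModel(face_count=10000000)   # e.g. ca. 45M -> 10M, as in the screenshot
    working.exportModel("specimen_10M.obj")      # hypothetical output path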

OK, that’s all! Have fun🙂 And if you still have any questions or problems, email me or ask for help in the comments here!

Scaling your model

As I mentioned above, you can scale your model at various stages of the process. I typically scale as late as possible, simply because a model may fail for various reasons during the process. Scaling is not a lot of effort, but it is effort, and if a model fails fatally, then that work is wasted. On the other hand, I have had cases where I forgot to scale and ended up using 3D data that was un-scaled. Luckily, I or someone else always noticed in time – e.g. before a 3D print was started and monetary costs ensued. But it may still be a good idea to scale early in the process, always at the same step, just to make sure that you never use un-scaled data by accident.

In order to scale a Photoscan project you need an object of known size in your photos. Obviously, the easiest way to achieve this is placing a good old scale bar next to the specimen during photography. Or, better, several scale bars. If you’re a lazy fuck (like me), there are so-called ‘coded targets’ – scale symbols Photoscan can recognize automatically. More on that below. Let’s begin with the hard-core way of scaling.

Photoscan can scale a model only if there is an in-program scale bar. Such a scale bar can be created from two in-program markers, which in turn need to be identified by you on at least two photographs. That’s easily done: select a photo and open it by double-clicking. Select the “Navigation” icon (arrow – you should be on it by default). Right-click the point on the image you want to place the marker on and select ‘Create Marker’. Now, right-click the marker and choose ‘Rename Marker’.

It doesn’t really matter what name you give the marker, just make sure that it makes sense to you. You can even stick with the default name, but I find it helpful to give ‘speaking’ names. If, for example, you have a scale with numbers, I’d name the markers ‘0 cm’ or ‘2 cm’ or ’10 cm’, depending on where you place them. That way, it is easier to pick markers from a list and immediately know how long the scale bar between them is in reality, and thus how long it should be made in Photoscan.

If you place markers after aligning the photos, then once you’ve placed a marker in one photo, Photoscan will put a line onto the other photos showing where it projects the marker to. In essence, it shows you the trace of the position vector from the first camera position through the marker on all other cameras that look the right way. Whether you aligned the photos already or not, all you now need to do is repeat the above-described process: right-click the image in the correct location (which in the case of a sub-optimal alignment may be off the displayed line), select ‘Place Marker’, and choose the appropriate marker name. If you ran the alignment before, you have now basically given Photoscan two position vectors from two camera positions whose relative alignment is known – and the marker must thus be located where the two vectors intersect! Hurrah!

Next, you simply select the two markers that describe your real-life scale bar and create an in-program scale-bar. Selection is easy: click one in the list in the ‘Reference’ tab, press and hold [CTRL], and select the other. Then, right-click and select ‘Create Scale Bar’. DONE!

Now, the bottom-most part of the ‘Reference’ tab will show a scale bar. You now need to assign a length to it (simply click the correct line in the “Distance” column and enter the correct distance; remember this is in meters!). Do this for all the scale bars you created (more than one is a good idea!), then click the ‘Update’ icon in the ‘Reference’ tab. It shows two arrows going opposite ways and sits next to the you-know-what ‘Optimize Cameras’ icon. Here’s that screenshot from above again, which shows you what the icon looks like. It’s to the right of the wizard’s wand icon.

Screenshot from Photoscan: the ‘Reference’ pane
AND DONE!

Yes, it really just takes a split-second and your model has been scaled. Check out the “Error” column in the ‘Reference’ tab (you may need to move the scroll bar to the right). If you did well on, say, a sauropod bone, you’ll see 4 decimal zeros🙂
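
Scale bars can be set up from the console, too, once the markers exist. A sketch, assuming you named two markers ‘0 cm’ and ‘10 cm’ as suggested above, and that the 1.2-era API names still hold:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    markers = {m.label: m for m in chunk.markers}

    bar = chunk.addScalebar(markers["0 cm"], markers["10 cm"])
    bar.reference.distance = 0.10   # in meters!
    chunk.updateTransform()         # the 'Update' icon
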
Coded targets

In Photoscan you can print out circles with black and white patterns. They all have a little white dot in the center. If these are in your photographs, you can ask Photoscan to go looking for them and automatically assign markers to them. If all goes well, you just saved a lot of marker placing, and only need to create the appropriate scale bars. However, before you can do all this, you need to create real-life scales with two coded targets each, print them out, and place them next to your specimens. My colleague Matteo has done this, and we have been using these sclaes with mixed success. It turns out that the coded targets, nice and large on our scales, are too big to be found by Photoscan if we take close-up shots of bones. In the field, where the photos typically show larger areas and the coded targets are proportionally smaller on the images, they work flawlessly. Email me if you want the know more, or comment below.
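
Detection can likewise be triggered from the console. The target type and tolerance keywords below are my assumption for the 1.2-era API, so verify them before relying on this:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.detectMarkers(type=PhotoScan.TargetType.CircularTarget12bit, tolerance=50)
    # then create scale bars from the detected markers, as described above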


21 Responses to Photogrammetry tutorial 11: How to handle a project in Agisoft Photoscan

  1. Pingback: Photogrammetry tutorial add-on: The consequences of optimizing a sparse point cloud | dinosaurpalaeo

  2. Pingback: PaleoNews #18 (National Fossil Day 2015 Special Edition) | An Odyssey of Time

  3. Thomas says:

    Thank you for the nice tutorial. Can you explain the meaning of the 10 at the point “Reconstruction uncertainty“, please?

    • Thomas, that’s just a value I found to work well! If you choose too low a value (say, 4), you delete too many points. If you leave the value too high, you retain too many bad points.

      As I said, I do not know exactly what the various parameters do. Agisoft doesn’t explain this in detail in their manuals. All I found is this:
      “High reconstruction uncertainty is typical for points, reconstructed from nearby photos with small baseline. Such points can noticably deviate from the object surface, introducing noise in the point cloud. While removal of such points should not affect the accuracy of optimization, it may be useful to remove them before building geometry in Point Cloud mode or for better visual appearence of the point cloud.”

  4. jaka says:

    whoa!
    nice tutorial dood😀
    extra information will definitely come handy
    cheers

  5. Darren Noble says:

    Great Tutorial, Thanks.

    Can you advise when would be the best part of the process to add in Ground Control Points for Aerial work?

    Thanks so much.

  6. Really complete and nice tutorial! thanks for sharing all that wonderful experience with as many details and reasons to understand a little more how the software works.

    Just one question, about the tie points cleaning (thanks so much for talking about it – it’s the first time I read about this process, and yes, it solved many problems in some models I have done). Can you explain how to do it properly in version 1.1.6?

    Thanks!

  7. Matt T says:

    This tutorial is a little nugget of gold – thanks🙂

  8. Bence Vágvölgyi says:

    Thank you for your article, it was great help, especially the part about point cloud and image alignment optimization!
    What I was wondering about was how large the dense point cloud usually gets for you? How many points?
    I myself regularly work with archaeological data with photogrammetry, but mostly with excavation data, which is quite different from what you are working on here. Where you have to capture the fine details of a given object, an excavation trench is too large for such a level of detail (and in most cases doesn’t even need that much detail), and would require too much computing, not to mention the amount of photographs needed. With these differences in mind it will be interesting to see if (and where) the settings and the thresholds you used will need to be changed for such a different application. I’ll make sure to document it.🙂

    • Bence,
      the size of the dense cloud really depends on the circumstances. Anything between 500,000 and 3.5 billion is on😉
      For a large sauropod bone I would aim for some 1.5 million points, but the finished mesh will usually get downsized to ca. 150,000 polygons anyways.

      As for excavations – these are similar to tracksites, which I have done. The same problem applies to both: you need high-res in specific locations that you can’t afford to produce overall. I usually solve this by making a low-res overview model from photos that could provide high-res, then duplicating the chunk over and over again, and making in each chunk a high-res model of the details. These I copy into the overview model after cropping them to the object I really want.

  9. Lawrence Diamond says:

    You mention the first time the sparse cloud is optimized with the gradual selection process that if using the Standard Edition, one can choose “Optimize cameras” instead of the (missing) Wizard’s wand icon. With Standard Edition, does choosing “Optimize cameras” instead of the wand icon work for each of the additional gradual selection steps?

    Thank you for a well written tutorial.

  10. ikram says:

    can you give the dimensions of your scale with coded targets?

  11. Thank you so much for this tutorial. I just have a question about scaling… Is this necessary? I have made a 3D model of a small object that I want to scale up, with 3D printing, so the original size doesn’t really matter.

  12. Pingback: The Basic PhotoScan Process – Step 4 – Public Archaeology in 3D

  13. Bastian says:

    Hi Heinrich,
    this is to let you know that I appreciate all the effort you put into this tutorial. It totally made the difference! I only use the Standard edition but your article made quite a difference. So again, thanks.
    Cheers!
