If you build a photogrammetric model following the workflow I suggest in this post, one of the steps is optimizing the sparse point cloud (tie point cloud) via gradual selection and deletion of points. Here’s why that matters.
Below you see a screenshot of a model of a sauropod bone. The sparse point cloud is shown as produced in-program by aligning the 190 images (one image doesn’t count, as it shows the label, and accordingly wasn’t aligned).
[Screenshot: sparse point cloud after initial alignment (click to enlarge)]
Note that there are close to 200,000 tie points connecting the images.
Now, I used Gradual Selection to select the lower-confidence tie points and deleted them. After re-optimizing the alignment, the point cloud looks like this:
[Screenshot: sparse point cloud after gradual selection and optimization (click to enlarge)]
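Conceptually, gradual selection is just thresholding: each tie point carries a quality metric (e.g. reprojection error), and you select and delete everything above a cutoff before re-optimizing. Here is a minimal plain-Python sketch of that idea; the point data and threshold are made up for illustration and have nothing to do with Photoscan's actual internals or API.

```python
# Hypothetical illustration of the gradual-selection idea:
# keep only tie points whose error metric is below a chosen threshold.
# Points are (name, error) pairs with invented values.
tie_points = [("p1", 0.3), ("p2", 1.8), ("p3", 0.5), ("p4", 2.4), ("p5", 0.9)]

threshold = 1.0  # points with error above this get selected and removed

kept = [p for p in tie_points if p[1] <= threshold]

print(len(tie_points), "->", len(kept))  # 5 -> 3
```

In Photoscan itself you do this via the Gradual Selection dialog, typically iterating: tighten the threshold, delete, optimize, repeat.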
In this case, I deleted nearly 50% of the tie points, leaving just over 100,000. Now compare the errors shown in the two screenshots! The very same two scale bars of 25 cm length that previously differed by 0.000557 m (you need to add the two per-bar errors to get the total discrepancy), slightly over half a millimeter, now differ by only 0.00032 m, i.e. just 57% of the previous error!
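To make the arithmetic explicit, here is the error comparison from the two screenshots worked through in a few lines of Python (the two numbers are the ones quoted above):

```python
# Scale-bar discrepancy before and after gradual selection + optimization,
# using the values quoted in the post.
before = 0.000557  # m, combined error of the two 25 cm scale bars
after = 0.00032    # m, after deleting low-confidence points and optimizing

ratio = after / before
print(f"{ratio:.0%} of the previous error")  # 57% of the previous error
```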
Ok, admittedly this near-halving of the error doesn't make much difference in this case, but imagine you're modelling a specimen with fine details. Sauropod vertebrae with their fine laminae come to mind. Halving the error means that your chances of getting the fine details modelled without too much error and messy edges double!
Let me end this with a view of the derived mesh. Ain't sauropod bones beautiful? (oh, and this is at medium density only! The mesh could be resolved much more finely).
[Screenshot: derived mesh at medium density (click to enlarge)]
Pingback: Photogrammetry tutorial 11: How to handle a project in Agisoft Photoscan | dinosaurpalaeo
Pingback: PaleoNews #18 (National Fossil Day 2015 Special Edition) | An Odyssey of Time
Hi Heinrich,
you might want to give CapturingReality a try.
(https://www.capturingreality.com/)
It’s in beta, not sure what the price will be, but tests (http://www.pi3dscan.com/index.php/instructions/item/agisoft-vs-capturingreality) show promise in quality and performance.
LOL
this is really neat – I am waiting for the day when I find time to sort through my data and pick a really juicy big project, so that the people at edico (who are behind Capturing Reality) can run a test for me, to compare with Photoscan 🙂
They’d certainly love to have a dinosaur (coughRexcough) to flaunt their wares, so … free license? If you bargain hard, you may get a free license for me, too!
we have good contacts with them 😉 that’s all I am going to say now.
Colour me officially impressed – although I must say that the comparison you linked to may be unfair: it is unclear how the Agisoft project was run. If the sparse cloud was not optimized, then it is no surprise that there is a lot of floating nonsense data, and that there are rough edges and seams.
Hello Heinrich,
I am trying to find literature comparing the accuracy of scaling/metrics between Agisoft and NextEngine laser scans. Are you aware of any I should read?
Cheers
That’s a really useful tutorial – thanks!
“One image doesn’t count, as it shows the label, and accordingly wasn’t aligned.”
So it’s harmless to just toss those ones into the pot — it all sorts itself out?
To a certain degree, yes. Your photos of labels will be vastly outnumbered by those of the specimen, and thus won’t matter.
Pingback: Photogrammetry: index to Heinrich Mallison’s tutorials | Sauropod Vertebra Picture of the Week