“Liberation from the Bone Cellar” – a progress report

Here’s a short update on how my digiS 2015 project is coming along. Yes, 2015 is still running, due to a bunch of unforeseen circumstances, among them a huge theropod sticking its ugly skull into my affairs and demanding to be photogrammetrized in a big hurry. digiS 2016 is also running, which means that I have about 50% of the computing power at hand that I need – ugh! Our IT is doing all they can to keep me happy, which is way more than their jobs dictate they should do, but they can’t work miracles. I am enormously grateful, and really hope that my work makes the higher-ups in the museum realize how excellent the support IT gives researchers at the museum is.

Now, where are we with “Liberation from the Bone Cellar”? The project title is not quite as tongue-in-cheek as it may sound: working conditions in the basement really are suboptimal enough to make many research approaches barely feasible that should be easy in an ideal world.

Well, I am glad to report that things are finally chugging along nicely! Both my computer screen and that of my colleague Bernhard Schurian are usually populated with views like this:

[Image: batch process result in PhotoScan Pro – click to embiggen for readability]

What you see here is a batch process file in Agisoft’s awesome PhotoScan Pro. Each chunk contains the photos taken of one bone (both top and bottom side), and we run an overnight batch process for photo alignment across all chunks. Then, we optimize the resulting sparse point clouds, scale the models, and run another batch process for dense point cloud calculation. Finally, the results must be cleaned manually – after all, we do not want all kinds of background data in the files. The screenshot was taken just after cleaning of the dense clouds. As you can see, in this case there are 6 chunks, i.e. 6 bones. The second and third are marked inactive, which means that we had some sort of problem with them. What usually gives us trouble are photo sets that do not align perfectly, typically because we run the models with fairly low sample point limits (max 10,000 per image). Instead of stopping work on the other chunks while we fix these problems, we simply ignore them, finish the rest of the chunks, and then come back and deal with the problems. Usually, that just means re-running the alignment with more sample points (40,000 or unlimited).
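In PhotoScan itself this is all done through the batch dialog, but the “defer the problem chunks, finish the rest, retry later with more points” logic can be sketched in plain Python. Everything here is made up for illustration – the chunk names, the `align()` stand-in, and the per-chunk behaviour – only the point limits (10,000 first pass, 40,000 retry) and the acceptance threshold come from the workflow described above:

```python
# Sketch of the deferred-retry batch workflow. align() is a stand-in for
# PhotoScan's photo alignment step; all names and numbers except the
# key point limits and the 95% quota are invented.

def align(chunk, keypoint_limit):
    """Pretend alignment: returns the fraction of photos that aligned.
    We fake it so that one chunk only aligns well with more key points."""
    needed = {"femur_L": 10_000, "tibia_R": 40_000, "humerus_L": 10_000}
    return 1.0 if keypoint_limit >= needed[chunk] else 0.6

chunks = ["femur_L", "tibia_R", "humerus_L"]
QUOTA = 0.95  # accept a chunk once >= 95% of its photos have aligned

# Pass 1: fast overnight batch with a low key point limit.
problems = []
for chunk in chunks:
    if align(chunk, keypoint_limit=10_000) < QUOTA:
        problems.append(chunk)  # mark inactive, deal with it later

# Pass 2: re-run only the problem chunks with a higher limit.
fixed = [c for c in problems if align(c, keypoint_limit=40_000) >= QUOTA]

print(problems)  # ['tibia_R']
print(fixed)     # ['tibia_R']
```

The point of the two-pass structure is throughput: the cheap first pass keeps the overnight batch short, and only the few stubborn photo sets pay the cost of a high key point limit.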

Each of the six chunks has been aligned, and you can see the number of photos per chunk, the number of resulting points in the sparse cloud, and the number of markers we have already assigned. In the two chunks that have been expanded you can also see how many images aligned: in the first, 169 of 175 – effectively 169 of 173, as the first two photos show the label – which equates to an alignment quota of over 95%; in the second, 163 of 165, again minus 2 label photos, so 163 of 163, a quota of 100%. Considering the free-hand photography at close quarters that we did, this is a pleasing result 🙂
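The quota arithmetic above, spelled out (the helper function is just for this post, not anything in PhotoScan):

```python
# Alignment quota: aligned photos divided by usable photos, where the
# usable count excludes the label shots at the start of each photo set.
# The numbers are the two expanded chunks from the screenshot.

def quota(aligned, total, label_shots):
    usable = total - label_shots
    return aligned / usable

q1 = quota(aligned=169, total=175, label_shots=2)  # 169 / 173
q2 = quota(aligned=163, total=165, label_shots=2)  # 163 / 163

print(f"{q1:.1%}")  # 97.7%
print(f"{q2:.1%}")  # 100.0%
```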

You can also see the setting (Medium quality) and resulting number of points of the dense point clouds: over 7 and 9 million points, respectively. That’s way more than 99% of all science uses of the models will ever need, and in fact way more than 99% of all science uses can handle! I expect to get meshes with around 9 to 13 million polygons, and such big files are a bother to load. Mounting a full skeleton, or even just a girdle + limbs, at this resolution will crash most computers!

The key thing we are proud of can be seen at the bottom left of the image: the average error of our scale bars. For the “small” bone fragment in the open chunk we used four scale bars, one 20 cm long and the other three 25 cm long. The average error across them in the model, which in an ideal model would be zero, is here 1.3 mm, i.e. slightly over half a percent of the average scale bar length – and this is one of the worst models we produced (which is why I show you this one). Most have three zeros after the decimal point, not two! That is amazing accuracy when you consider the far-from-optimal conditions in the Bone Cellar and the speed with which we acquire the data: I usually take less than 7 minutes per bone, including transport!
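For the sceptics, here is where “slightly over half a percent” comes from, using only the numbers given above:

```python
# Relative scale error: mean scale bar error divided by mean bar length.
# Bars: one 20 cm and three 25 cm; reported mean error is 1.3 mm.

bar_lengths_mm = [200, 250, 250, 250]
mean_length = sum(bar_lengths_mm) / len(bar_lengths_mm)  # 237.5 mm

mean_error_mm = 1.3
relative_error = mean_error_mm / mean_length

print(f"{relative_error:.2%}")  # 0.55% – slightly over half a percent
```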

So, overall, I’d call digiS 2015 an overwhelming success – for us, for paleontology as a whole, and for all our colleagues out there who want to quickly capture a lot of data during collection visits. While our photography method is physically exhausting, the results show that digitizing 10 to 20 big bones per day in collections is entirely feasible.


About Heinrich Mallison

I'm a dinosaur biomech guy working at the Museum für Naturkunde Berlin.
This entry was posted in digiS, Digitizing, MfN Berlin, photogrammetry, Tendaguru. Bookmark the permalink.

2 Responses to “Liberation from the Bone Cellar” – a progress report

  1. Sean says:

    Hey i really like your blog! You should totally check mine out

  2. Pingback: Making Mike Taylor gloriously green-eyed | dinosaurpalaeo
