3D digitizing black, shiny bones

[Image: Obi-Wan]

I know you all wait for news on Tristan, the Tyrannosaurus rex that will soon grace the exhibition spaces of the Museum für Naturkunde Berlin. I know, and I understand, and I feel with you. Really, I do. But…. see above: this post has nothing whatsoever to do with Tristan. Rather, it deals with a hypothetical. Let’s say that you need to digitize a large number of fossils, fossils that are of a very dark colour and very shiny, and that have a complex surface shape with many holes and deep depressions. Fossils like…. the skull bones of a very large theropod from a sediment that is rich in organic materials. If – and this is entirely hypothetical – you wish to mount such a skull so that museum visitors can see it up close, and so that researchers have good and easy access, then you can’t mount the real skull with the rest of the skeleton. It would end up several meters above the heads of visitors, laypeople and researchers alike. So what, then, do you mount in front of cervical 1 (the atlas) of the skeleton? A cast? That means making moulds of all the oh-so-fragile skull bones, and if in any way possible the damage that may (and often does) come with moulding and casting is something a curator will wish to avoid.

Quite obviously, you can CT scan the bones and have them rapid prototyped. But if the bones are a bit bigger – say, like those of a megapredator of Late Cretaceous times – then 0.25 mm slices, the best really good large medical CTs can do, just do not cut it. One point per 0.25 mm translates to 4 dpmm (dots per millimeter), i.e. roughly 101.6 dpi – barely finer than the ca. 72 dpi of the screen on which you are reading this text. So we need a data capture method with a much better resolution. Laser scanning – requires a really nice laser scanner. No money, no laser scanner. Structured light scanning – requires a really nice structured light scanner. No money…. you get the drift.
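
For the curious, here is the simple arithmetic behind those dpi figures, as a quick Python sketch (the function name is mine, purely for illustration):

```python
MM_PER_INCH = 25.4

def spacing_to_dpi(spacing_mm):
    """Convert a point (or CT slice) spacing in millimetres to dots per inch."""
    return MM_PER_INCH / spacing_mm

# 0.25 mm medical CT slices: 4 points per millimetre
print(spacing_to_dpi(0.25))   # -> 101.6 dpi
```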

Photogrammetry to the rescue!

Yeah, I know, this is getting kinda old. But then, photogrammetry offers really high-resolution 3D digitizing at relatively low cost, with relatively little effort. Just perfect for the skull bones of Tristan the megapredator.

So, photogrammetry…. if you are using a regular lens, say an 18-135 mm Canon EF-S lens, you can easily get files with a resolution of 1/20th of a millimeter. That’s roughly 500 dpi. And once you have such files, the bones can be laser sintered one by one. Those of you who talked to me at SVP 2014 in Berlin probably remember my poor imitation of a beach watch salesman, pulling some bone replicas out of my jacket pockets. Those were laser sintered, by the very helpful and capable 3D Lab of Technical University Berlin. The folks there, especially Joachim Weinhold and Ben Jastram, have been a joy to work with, and now we take our cooperation a step further, with the skull of Tristan, a large dinosaur. The layer thickness is 0.1 mm, or ca. 254 dpi. THAT’s cool, because it means that at half an arm’s length you can’t see the difference between original and print, and if the prints are properly coloured you can’t even see the difference at a much shorter distance.
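
And here is the same arithmetic for the photogrammetry and print resolutions, plus a rough sanity check of the “can’t see it at half an arm’s length” claim. The ~1 arcminute value for normal visual acuity and the ~30 cm viewing distance are my assumptions for the back-of-the-envelope check, not measurements taken from the prints:

```python
import math

MM_PER_INCH = 25.4

print(MM_PER_INCH / 0.05)   # photogrammetry at 1/20 mm: ~508 dpi
print(MM_PER_INCH / 0.10)   # laser-sintered layers at 0.1 mm: ~254 dpi

# Smallest detail resolvable by normal eyesight (~1 arcminute of visual angle)
# at an assumed half-arm's-length viewing distance of ~300 mm:
viewing_distance_mm = 300.0
acuity_rad = math.radians(1.0 / 60.0)
print(viewing_distance_mm * math.tan(acuity_rad))   # ~0.09 mm, right around the 0.1 mm layer size
```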

[Image: printed bones]

3D laser sintered bones of the stegosaurian dinosaur absolute king bad-ass of the Jurassic of Africa, Kentrosaurus aethiopicus Hennig 1915, at various degrees of scaling. The big femur at the bottom with the hole in it is hollow, with a wall thickness of less than 1/2 mm (< 1/50th of an inch for my ex-British-colonial friends). The walls are so thin they actually flex when pressed, but the models are tough enough to toss (hard) at anything without taking damage!

But before any printing there is digitizing and data editing. And that’s where the problems begin with our entirely hypothetical large theropod skull….. black, shiny bones, it turns out, aren’t giving anything up easily!

In the field our hypothetical black bone often looks like fossil charcoal. I was only able to distinguish wood from bone in the field this summer because there is a layer of white matter around bones, but not around wood, and because a guy who excavated a dinosaur at that site actually told me which is which. Let me show you:

[Image: bone in the field]

Fossil bone eroding out of a hillside at a dig site in Montana. Estwing hammer for scale – you ain’t no true geologist unless you own and use(!) a regular Estwing rock pick (pointed tip) – and no, rock picks with a chisel edge do not count, they are good only for archaeologists and other mud scrapers 😉 Both the leather-covered and the blue grips are acceptable, and the grey colouring and the naked-metal look are interchangeable, too. Just don’t show up with an ACE-brand hammer. Just……. don’t……. EVER!

OK, so the bones are black, and they are shiny. For photogrammetry that normally means that any light on the bones will cause strong reflections, and that I need to expose the photographs so that the black bone is not uniformly black, but actually has colour variation in the images. Looooong exposures, as a consequence, because I can’t just use a strong flash (remember: reflections!), which again means using a tripod. With a good image stabiliser I can handhold a 20th of a second – sometimes – but not hundreds of times in a row.
The next problem is that of lighting – if I shine enough light on the bone to get rid of the usual shadows on the underside of projections and the outside rim, and light up deep recesses sufficiently, I end up creating more highlights…… impasse!
There is obviously no way around putting a lot of light on the specimen. Necessarily lens-parallel, as anything else would lead to deep shadows in recesses. This means a ring light or ring flash. And lots of reflections. Therefore, I need to take so many photos that for each spot on the bone there are either enough images that happen to show no reflection there, or enough photos that show the reflection at the same brightness. In the former case, the software can build the model from the good bits; in the latter I’ll end up with a nearly white model – but I will get a model!
Here’s the set-up I am using:

[Image: the set-up]

The turntable is huge, and in fact that is a nice thing I should have realized long ago: if the turntable is much bigger than a specimen, the photos will show practically no background that doesn’t turn with the specimen. Also note how I am using styrofoam to make the background all-white. Photoscan is so incredibly good at finding points (real or fake) even on totally out-of-focus parts of an image that a structured background that doesn’t rotate with the specimen can really mess up the alignment.

So, with this setup I take a few photos from a steep angle (as in the photo above), then I bring the camera down to a shallow angle and take many photos – one every 5° of rotation or so. And then I bring the camera down all the way, so that it looks perfectly sideways at the specimen, and again take very many photos. If necessary, I add more rings of images, so that the part of the specimen that is currently up is well covered. Notice that there are scale bars on the turntable – obviously, I make sure that they can be seen in many images.
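
If you like checklists, a tiny script can spit out the turntable stops for one specimen position. The ring elevations and the 5° step are just example values in the spirit of the description above, not fixed settings:

```python
def shot_plan(ring_elevations_deg=(60, 25, 0), step_deg=5):
    """Yield (camera elevation, turntable angle) pairs for one specimen position.

    One ring of photos per camera elevation, one photo every step_deg degrees
    of turntable rotation. The whole plan is then repeated with the specimen
    flipped over (and the background changed, as described below).
    """
    for elevation_deg in ring_elevations_deg:
        for angle_deg in range(0, 360, step_deg):
            yield elevation_deg, angle_deg

shots = list(shot_plan())
print(len(shots))     # 3 rings x 72 stops = 216 photos per specimen position
print(shots[:3])      # [(60, 0), (60, 5), (60, 10)]
```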

Then, I take the specimen off the turntable and completely exchange the cover. A blue trash bag that covers the cushioning material serves well. Or a different type of cushioning material. I also take the scale bars off. Now, I put the specimen down again, but upside down. And then I repeat the above photo sequence. Here’s how the alignment looks with one side done (blue rectangles represent photos), and the other side just started (red rectangles).

[Image: alignment example]

In theory you can now toss all images into one chunk. There are no features on the two backgrounds that connect the two image sets – after all, I exchanged the entire background – so any matches between set 1 and set 2 must come from the bone itself. That’s also why I don’t use scale bars for the second round: they might provide features that match between set 1 and set 2. So if all works out I get a very nice alignment of all images in one go.
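
For those who drive Photoscan through its Python console rather than the GUI, the “toss everything into one chunk and align” step looks roughly like this. This is a sketch against the PhotoScan 1.x scripting API (the module and some names have changed in later releases, which ship as Metashape), and the image folder is of course made up:

```python
import glob
import PhotoScan  # PhotoScan 1.x scripting module; newer releases call this Metashape

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Both image sets (both specimen positions, with their different backgrounds)
# go into one and the same chunk.
photos = sorted(glob.glob("/path/to/hypothetical_theropod_bone/*.JPG"))
chunk.addPhotos(photos)

# Detect features, match them, and align all cameras in one go.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()
```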

In reality, this worked very well for many of the ca. 30 bones I have worked on during the last week. The more photos I took, however, the more problems surfaced. For one thing, Photoscan kept being unable to align the vast majority of images. In the example above, only 155 out of 828 photos were initially aligned. I was able to manually align the rest (i.e., select them in the image list and ask Photoscan to align them without running point detection and matching from scratch), but I ended up with several groups of images that were aligned well within each group, but not between groups.

So I decided to help Photoscan along. The obvious big issue was that the bone in question has a thin margin; it is essentially a flat piece. The overlap between the two image sets is therefore a small area with high curvature, prone to produce photos with lots of reflections and few features. I therefore manually aligned all images of one side – 514 in all worked out, and some 12 or so didn’t. Then I optimized that alignment using Gradual Selection (a setting of 10 for Reconstruction Uncertainty and of 1 for Reprojection Error). This led to a very nice sparse point cloud with less than 10% of the initially calculated points, but with a lot of bone detail already recognizable. It also dumped a few photos due to lack of points, so I ended up with 495 still aligned.
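
The Gradual Selection and optimization step can be scripted too. Again, a sketch against the PhotoScan 1.x Python API (class and attribute names differ in newer Metashape releases), mirroring the thresholds mentioned above: select the weak tie points, delete them, then re-optimize the cameras:

```python
import PhotoScan

def gradual_cleanup(chunk, max_uncertainty=10, max_reproj_error=1):
    """Remove weak tie points, then re-optimize the camera alignment."""
    f = PhotoScan.PointCloud.Filter()

    # Gradual Selection: Reconstruction Uncertainty above the threshold
    f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReconstructionUncertainty)
    f.selectPoints(max_uncertainty)
    chunk.point_cloud.removeSelectedPoints()

    # Gradual Selection: Reprojection Error above the threshold
    f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
    f.selectPoints(max_reproj_error)
    chunk.point_cloud.removeSelectedPoints()

    chunk.optimizeCameras()

gradual_cleanup(PhotoScan.app.document.chunk)
```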

Now I chose those images of the second set that were shot with the camera level with the edge of the bone, and manually aligned them one by one. This worked quite well (see the figure above; it’s the red images that got aligned to the whole set of blue ones), because Photoscan now had an excellent set of tie points, including a nice bone rim, that it could match the points of each new photo to. And once the first circle of photos from the second set was aligned, I could align all the rest of the second set to the sparse cloud, as it now also contained all the tie points connecting the photos of that first circle to each other, including features on the background (i.e. on the turntable and the cushioning material).

During this process, the tie point cloud went from a very nice, recognizable bone to ape-shit. In order to recognize whether a newly aligned group was totally out of whack, I intermittently ran Gradual Selection again – and the sparse cloud always popped back to a nice one. With only one or two images tossed out each time, this repeated optimization kept the new alignments tight and nice. After a while, I began to see not only the medial side of the bone, corresponding to the first set of images, but also the lateral side, corresponding to the second set.

I ended up with 769 photos aligned, nice and tight! Not too shabby, given the original number of 155, and the total chaos that erupted when I simply asked Photoscan to align all non-aligned images willy-nilly. The time I spent deciding which images to align when, and optimizing the point cloud in between, turned chaos into a solid alignment.
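
To keep track of how many images are actually aligned after each round (155, then 514, then 769…), a two-liner in the Photoscan console helps; again assuming the PhotoScan 1.x API, in which an unaligned camera simply has no transform:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk
aligned = [cam for cam in chunk.cameras if cam.transform is not None]
print("%d of %d cameras aligned" % (len(aligned), len(chunk.cameras)))
```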

[Image: alignment example]

This is the final result, with the last few aligned images in red. As you can see from the small peek you get at the tie point cloud, the points are not scattered all over the place, but form a tight, well-defined bone. Now I look forward to the dense cloud, which will be computed in a day or two…. *sigh* The punishment for a large number of high-res images.

The take-home lesson

If you digitize dark, glossy stuff,

  • take very many photos
  • make sure that you use lens-parallel light
  • make sure that for each pair of specimen positions you take two near-identical circles of photos of the area connecting them, to maximize your chance of the sets aligning
  • be willing to invest a lot of time into manually helping the alignment along
  • intermittently, use sparse point cloud optimization to keep the alignment on track
  • do not give up too early!

To end this post, here’s a view of the alignment from a different angle, which shows the tie point cloud a bit better.
[Image: alignment example]
Oh, and please remember:

[Image: Obi-Wan]


15 Responses to 3D digitizing black, shiny bones

  1. dmaas says:

    whoah… manual alignment… must… focus…
    Thanks for this detailed description.

  2. ijreid says:

    Oh, yeah, T. rex is definitely not a large megapredator with teeth comparable to a “huge turntable”. Just no. But Kentrosaurus definitely IS the king of all badass reptiles. (Hahahaha) Seriously though, excellent post. Many things I could never have done unless someone told me exactly how to do it (oh wait, isn’t that why you made this post? 😀 ) Nice job.

  3. Kurt says:

    Great post. Any other tips for shooting dark objects? I’m currently studying the morphology of Antarctic ventifacts, which are just black shiny rocks. Tried a bunch of different techniques (including a ton of pre-processing / edge detection in Photoshop), with generally crap results.

  4. Pingback: Tristan 3D printing | dinosaurpalaeo

  5. Pingback: Tristan the T. rex is here! | dinosaurpalaeo

  6. Pingback: “Liberation from the Bone Cellar” – a progress report | dinosaurpalaeo
