Last year I received funding to digitize a lot of big bones of the Tendaguru collection from the Museum für Naturkunde’s Bone Cellar. This year, I was lucky to again secure funding from the digiS programme. This time, it’s for digitizing the mounted skeletons in the Dinosaur Hall, the mounts my esteemed colleagues M&M (Matt and Mike) called “a shedload of awesome”. The reason is fairly straightforward: thanks to digiS 2015 we now have better digital access to the individual bones from Tendaguru than to the better-preserved partial skeletons that form the largest parts of the mounts!
Now, that’s only true of a selection of dinosaurs in the hall. We already have excellent high-resolution models of the original material of Kentrosaurus and Elaphrosaurus, and of their (plaster) skulls. The models were created by David Mackie, then of RCI, who laser-scanned the bones one by one. You’ve seen the Kentrosaurus scans already, for example in my paper on range of motion of the skeleton, about which I should really blog one of these days. The Elaphrosaurus models haven’t been used much yet, so here’s a link to a post with a bunch of photos of the mount.
So, for the digiS project, we’re mainly talking Giraffatitan, Dicraeosaurus, and Diplodocus.
However, the project also aims to produce models of the entire skeletons of all the animals, not just bone-by-bone models. Although such models must necessarily be of lower resolution – after all, each individual bone we scanned in the Bone Cellar yields a model with typically more than 6 million and up to 30 million polygons! – they offer the great advantage of showing the bones in articulation, as mounted. And that is something we do not yet have even for Elaphrosaurus and Kentrosaurus.
Obviously, I previously tried to align the individual Kentrosaurus bone scans into an articulated skeleton; the results not only made it into the above-mentioned paper, but also served as the basis for a 3D volumetric model of the animal. That model was used for both my paper on Jurassic baseball batters from hell (direct link to the paper here) and my paper on the effect that osteoderm distribution has on the position of the center of mass (not much of an influence, it turned out). However, that skeletal pose was not an attempt to re-create the pose of the MfN’s mount, just an attempt to get the bones correctly articulated.
So, how do I plan to get a low-resolution model that is good enough for one-by-one replacement of the low-res bones by the high-res laser scans? Well, in fact, that task has already been done 🙂
This is the dense point cloud of a photogrammetric model, made from 120 photos taken by my very capable colleague Bernhard Schurian. All images aligned with ease. The model has some 8 million points, but that number will shrink as I clean it. Here’s an overview of the camera positions:
The limiting factor for model resolution here is not the number of photos but the resolution of the photos relative to the size of the object in them. Simply put, at best you can expect to distinguish two neighboring pixels as separate points. Therefore, the larger the object appears in your images, the higher the resolution of the model. If you capture the entire dinosaur in each photo, the resolution is much lower than if you photograph only part of it at a time. The higher resolution of the latter approach comes at the cost of having to take many more photos, though.
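To put rough numbers on that trade-off, here’s a back-of-the-envelope sketch. The framings and object widths are hypothetical examples of mine, not measurements from this project – only the pixel count (the long side of a 50 MP Canon 5DS R frame is 8688 pixels) is a real camera spec:

```python
# Best-case point spacing in photogrammetry: two neighboring pixels are
# the finest detail you can hope to separate, so the spacing is roughly
# the real-world width covered by the frame divided by the pixel count
# across that frame.

def best_case_point_spacing(covered_width_m: float, pixels_across: int) -> float:
    """Best-case spacing between reconstructable points, in metres."""
    return covered_width_m / pixels_across

# Hypothetical framings: a whole ~23 m sauropod mount filling one frame
# versus a ~2 m section of it filling the frame.
whole = best_case_point_spacing(23.0, 8688)
partial = best_case_point_spacing(2.0, 8688)

print(f"whole skeleton in frame: ~{whole * 1000:.1f} mm per point")
print(f"2 m section in frame:    ~{partial * 1000:.2f} mm per point")
```

So filling the frame with roughly a tenth of the skeleton buys you roughly ten times the detail – at the price of roughly ten times the photos.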
Conversely, for a given view, the model will be of higher resolution the more resolution your camera offers: a 50 megapixel camera is much better than a 12 megapixel one. In this case, Bernhard used a Canon EOS 5DS R, which has a 50 megapixel sensor. This means that far fewer images are needed than in my previous attempts, but it does not directly translate into shorter calculation times. After all, 2x 25 MP is the same amount of point data as 1x 50 MP.
For scaling we used a number of scale bars scattered all around the skeleton; you can see their digital representations in the images above as yellow lines. Each is 50 cm long, and the final average error in the model between them is stunningly low: 0.000998 m! Yes indeed: that is an average error of less than 1 mm – less than 1/25th of an inch for my US friends. Even if we generously interpret this to mean that each scale bar is around 2 mm off, over the length of the entire dinosaur that gives us a divergence of less than 1 cm. Color me impressed!
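For a quick sanity check of that number, here’s the arithmetic spelled out (the only inputs are the two figures from this post):

```python
# Relative scale error implied by the reported average deviation.
scale_bar_m = 0.5        # each scale bar is 50 cm long
avg_error_m = 0.000998   # average deviation reported by the software

rel_error = avg_error_m / scale_bar_m
print(f"relative error per bar: {rel_error * 100:.2f} %")   # ~0.20 %
print(f"that is ~{rel_error * 1000:.1f} mm per metre of scale bar")
```

As long as those tiny errors are randomly distributed rather than systematic, they largely cancel out across the mount instead of accumulating – which is how a many-metre dinosaur can stay within a centimetre overall.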
Now, one thing such a model is not is perfect! The chance that the insides of bones are captured is virtually zero, as is true for the vertebral centra and – because of the many osteoderms in the way – the dorsal spines, and there is always a lot of floating nonsense data between the bones. The image above shows part of that cleaned up; much more work awaits 😦 However, the external surfaces come out quite nicely in this model, thanks to the diligence of Bernhard, who made sure that all images are excellently exposed and perfectly in focus. Well, no surprise, he is a master photographer 😉
As a direct consequence, the software found features all over with ease: each point is a feature, and each blue one is a feature the software was able to re-recognize in another image. The limit was set to a total of 40,000 points per image, and to 4,000 matches between images.
What’s next? Well, cleaning the model. Then I’ll calculate a polygon mesh, import that into the CAD program of my choice (Rhinoceros NURBS modelling for Windows 5.0) and start aligning the old high-resolution scans. I’ll show you how that’s done once I have the first few bones aligned.