January 18, 2016

3D Modelling

Simon Brown of deep 3D has been working hard perfecting 3D imagery, and we have been using our very own land-based shipwreck, the Dolly Varden, for practice. If this all works out as we would like, we will have a very powerful survey tool which may not be for the purist, but is plenty good enough for our purposes and what we have in mind. So, until we get diving again, Simon is very kindly going to step us through the processes involved.

Image capture and alignment

First up, the images of Dolly – all 2502 of them – need aligning. The software (Agisoft PhotoScan) looks for matching points between frames and works out where the camera was when each image was shot. Get this step right, and the model has a better chance of being accurate. Get it wrong, and we end up wasting processing time building something less than worthy.

Each image is 24MB, shot in RAW format. The volume of data can quickly become unwieldy, but once loaded, PhotoScan sets to work analysing the images.
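To get a feel for the scale of the input, here is a quick back-of-the-envelope calculation using the figures above (2502 RAW frames at roughly 24MB each):

```python
# Rough estimate of the raw data volume for the Dolly survey.
# Figures from the article: 2502 RAW frames at roughly 24 MB each.
num_images = 2502
mb_per_image = 24

total_mb = num_images * mb_per_image
total_gb = total_mb / 1024  # binary gigabytes

print(f"{total_mb} MB of raw imagery (~{total_gb:.1f} GB)")
```

Nearly 60GB of imagery before any processing has even started.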

Three days and 18 hours later, 2,483 of the images are aligned. Each of the blue squares in the screen grab represents a single image, and the picture shows just a small section of the starboard bow. Along with alignment, the result is a sparse cloud built from the matching points. At this stage we can start to see the Dolly take 3D form, but the point cloud lacks the detail we require.
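For a sense of the processing cost, the alignment time works out to roughly two minutes per frame (a quick check on the figures above):

```python
# Alignment cost per image.  Figures from the article:
# 3 days 18 hours of processing for 2483 aligned images.
hours_total = 3 * 24 + 18  # 90 hours
minutes_per_image = hours_total * 60 / 2483

print(f"~{minutes_per_image:.1f} minutes of processing per image")
```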

Using known measurements taken from the hull, the next step is to refine the camera alignment.
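The idea behind using known values is simple: a photogrammetric model has no inherent scale, so a measured real-world distance between two control points fixes it. A minimal sketch of the principle (the marker coordinates here are hypothetical, and this is not PhotoScan's actual API):

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Hypothetical control-point positions in arbitrary model units.
marker_58 = (0.00, 0.00, 0.00)
marker_59 = (0.82, 0.31, 0.05)

# The tape-measured distance between the same markers on the real hull.
real_distance = 1.74  # metres

# Scale factor that maps model units to metres.
scale = real_distance / distance(marker_58, marker_59)

print(f"scale factor: {scale:.3f} metres per model unit")
```

Multiplying every model coordinate by this factor puts the whole model into metres; PhotoScan does the equivalent internally once marker distances are entered.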

[Image: 3d-image-alignment]

Building a cloud

The next step is to build a dense cloud of common points. It takes around 10 hours to process the points into the dense cloud, and in Dolly’s case there are 215 million separate points hanging in space. When viewed from a distance, their density gives the appearance of a solid shape and details such as paint runs, wood grain and small fittings start to appear but there is still fresh (digital) air between them.
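Some idea of why 215 million points strains a desktop machine: even a bare-bones in-memory representation of position plus colour adds up quickly (assuming 32-bit floats and 8-bit RGB here; PhotoScan's actual storage will differ):

```python
# Minimal memory footprint of the dense cloud, per the article's figure.
points = 215_000_000

bytes_position = 3 * 4  # x, y, z as 32-bit floats
bytes_colour = 3 * 1    # 8-bit R, G, B
bytes_per_point = bytes_position + bytes_colour

total_gb = points * bytes_per_point / 1024 ** 3
print(f"~{total_gb:.1f} GB just to hold the dense cloud in memory")
```

And that is before any of the working data the meshing stage needs on top.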

Looking at a cross-section of Dolly (just aft of the bow), the dense cloud still appears solid, but we need to wrap a skin, or mesh, between the points to give Dolly a truly solid feel.

[Image: 3d-image-dense-cloud]

It’s a wrap

Thinking about the maths required to wrap a surface between 215 million points makes my head hurt. We found it pushed desktop computing power to its limits, and even deploying some cloud-based power left us scratching our heads for a bit. The solution was to dice up the dense cloud into separate chunks and process them individually.

That works well, as long as you can figure out how to put the chunks back together again. At this point, the decision to place control points on Dolly before photographing her really paid off – PhotoScan Pro will align chunks based on markers (as well as cameras and points), so the datums for alignment were there.

Six separate chunks were needed to process Dolly, taking another 36 hours to run.
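The divide-and-conquer idea is straightforward: slice the cloud along the hull into chunks, mesh each one separately, then use the shared markers to register the pieces back into a common frame. A toy sketch of the slicing step (synthetic points, not the real Dolly data, and much simpler than PhotoScan's chunking):

```python
def split_into_chunks(points, n_chunks):
    """Partition a point cloud into n_chunks slabs along the x axis."""
    xs = [p[0] for p in points]
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_chunks
    chunks = [[] for _ in range(n_chunks)]
    for p in points:
        # Clamp so the point at x == hi falls into the last slab.
        i = min(int((p[0] - lo) / width), n_chunks - 1)
        chunks[i].append(p)
    return chunks

# Synthetic stand-in for the dense cloud: 100 points along a line.
cloud = [(x / 10.0, 0.0, 0.0) for x in range(100)]
chunks = split_into_chunks(cloud, 6)
print([len(c) for c in chunks])
```

Each slab is then small enough to mesh on its own; the control-point markers shared between neighbouring slabs are what let the finished pieces be stitched back together.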

The result is a solid model. We are now really starting to see Dolly appear on the screen.

[Image: 3d-image-mesh]

But does it measure up to reality?

At this point in the process we have a solid model of the Dolly. But is it accurate?

Armed with a tape measure, Grahame was tasked with running the rule over the Dolly. The numbered control points are still fixed to her, so we can use these points to compare the real measurements to the model :-

[Image: 3d-image-accuracy]

Grahame’s measurements :-
58 to 59 = 1.74 metres
50 to 51 = 1.94 metres
Rudder length = 2.2 metres

So we can now measure between the same control points in the model:-
58 to 59 = 1.738 metres
50 to 51 = 1.93 metres
Rudder length = 2.21 metres
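Putting the tape and model figures side by side gives a feel for the accuracy achieved. A quick percentage-error calculation on the numbers above:

```python
# (name, tape-measured metres, model-measured metres) from the article.
checks = [
    ("58 to 59", 1.74, 1.738),
    ("50 to 51", 1.94, 1.93),
    ("rudder length", 2.2, 2.21),
]

for name, tape, model in checks:
    error_pct = abs(model - tape) / tape * 100
    print(f"{name}: {error_pct:.2f}% error")
```

Every check comes in at well under one per cent of the tape measurement.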

The next step is to add texture to the model, and then the work is complete.

All the data used to build the model – image files, point clouds, surfaces – comes to a total of 72GB of disk space.

The last step is to publish on Sketchfab, add the lighting, etc. Check back tomorrow for the final result.

In conclusion

Simon's thoughts on the finished article, from his perspective as a professional photographer (and a very good one at that, we think!):

As a photographer I would often spend a lot of time planning, shooting and selecting just a single, solitary view of a subject that had a story to tell or something to say. The single image had to define the story, and the photographer is always forced to choose that single shot from the many outtakes. Some would say the ability to do a tight edit from a shoot is a skill in itself.

For a long time I have felt photographs and video had the potential to present more to the viewer than just an edited view of the world. It's worth observing that with photogrammetry, hundreds or thousands of collated images are published in the form of a model, and the viewer can choose their own view of the subject. The edit is still there, but it's applied for different reasons, and the viewer of the model can really explore every single frame in a way that makes sense.

With the finished, scaled model of Dolly now ready, everyone can choose their own view. Enjoy!

Just a few stats :-

Around 5 days of processing time.
2500 images.
72GB of data.
16 million faces.
215 million points.

[Image: 3d-image-final-result]

Click here to view the model on Sketchfab