One of my bigger challenges is finding ways to efficiently perform QA/QC of collected point cloud data. Back when I collected all of my own data it was pretty simple: I maintained strict control over the field methodology, so I was aware of any problem areas before I ever started registering the data.
Later, I did most of the modeling while our field crews collected the data. Occasionally, I would get deep into modeling before I found an error, but I rarely missed one, as nothing makes you go through a point cloud with a fine-tooth comb like modeling. Nowadays, I am in the field less than ever, and we deliver a lot of data that is not modeled at all, leaving my team and me with fewer options and in need of better QA/QC tools.
We can, of course, look at the registration diagnostics, but in reality they only tell us how well the targets aligned. If you have poor target geometry or are using a hybrid of cloud-to-cloud and target constraints, it can be difficult to get a true picture of the accuracy of the registered point cloud. I was thinking about this inability to look at the data without having to actually look at the data when I came across an article in New Scientist titled “Stuff Symphony: Beautiful Music Makes Better Materials.” Unfortunately, you have to have a subscription to read the article, but there is an excerpt available on Slate, and if you want all of the details you can review the original study here.
The first point that struck me was the comparison of musical composition to the arrangement of physical objects. My first academic training was in music, and I often find myself falling back, in other, disparate fields, on methods of pattern creation and recognition that I learned there. I thought that was just a personal quirk, but according to the article, my brain recognizes systems of hierarchical order the same way regardless of the origin of their components.
This led the researchers to substitute components of something we can easily experience (auditory tones) for those of something we are less able to understand (in this case, protein sequencing). In short, by relating the two, the scientists were able to define the types of melodies that produced protein sequences resulting in spider silk versus the melodies that did not produce “stringy” silk.
Their goal is to create new sequences of proteins by applying patterns that are known to us through music, as opposed to blindly trying random patterns to see what works. After all, every (Western) song you’ve ever heard was created using variations in the sequencing and hierarchical arrangement of only 12 unique components, backed up with hundreds of years of works to draw melodies from (not to mention the plethora of music theory that some argue exemplifies the very hierarchical order found in nature).
How does this help us with point cloud QA/QC? Two things come to mind. First, if their theory is correct, and the “harmony” and appreciation of music in humans is related to the similarity between the hierarchical arrangement of auditory tones and the naturally occurring hierarchies in our experience, then does it not stand to reason that this same patterning would be found in recorded measurements of the environment?
Second, what types of data do we collect that could be mapped mathematically to pitch and rhythm? Simply setting the pitch to the elevation or “z” value along a floor would let you listen to the slope (and range noise) of the floor, and the rhythm could be set to correspond to the point density.
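To make that mapping concrete, here is a minimal sketch in Python (rather than the Excel/VBA setup I mention below) of what I have in mind: the z values on a floor drive pitch, local point density drives note length, and the result is written out as a WAV file you can actually listen to. The frequency range, the duration scaling, and the sample “floor” data are all illustrative assumptions, not a calibrated QA/QC tool.

```python
# A rough sonification sketch: z drives pitch, local density drives rhythm.
# All scaling constants and the sample data below are assumptions for illustration.
import wave
import numpy as np

SAMPLE_RATE = 22050

def z_to_pitch(z, z_min, z_max, f_low=220.0, f_high=880.0):
    """Map elevation to frequency: a flat floor plays a steady tone,
    slope shows up as pitch drift, and range noise as jitter."""
    t = 0.0 if z_max == z_min else (z - z_min) / (z_max - z_min)
    return f_low + t * (f_high - f_low)

def density_to_duration(density, d_min, d_max, short=0.05, long=0.25):
    """Map local point density to note length: dense areas tick by quickly,
    sparse areas linger so they are harder to miss."""
    t = 0.0 if d_max == d_min else (density - d_min) / (d_max - d_min)
    return long - t * (long - short)

def render(z_values, densities, path="floor.wav"):
    """Render one sine-wave note per point and write the sequence to a WAV file."""
    z_min, z_max = z_values.min(), z_values.max()
    d_min, d_max = densities.min(), densities.max()
    notes = []
    for z, d in zip(z_values, densities):
        freq = z_to_pitch(z, z_min, z_max)
        dur = density_to_duration(d, d_min, d_max)
        t = np.arange(int(SAMPLE_RATE * dur)) / SAMPLE_RATE
        notes.append(0.3 * np.sin(2 * np.pi * freq * t))
    audio = np.concatenate(notes)
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes((audio * 32767).astype(np.int16).tobytes())

if __name__ == "__main__":
    # Fabricated example: a nominally flat floor with mild slope and range noise.
    rng = np.random.default_rng(0)
    floor_z = np.linspace(0.0, 0.01, 200) + 0.002 * rng.standard_normal(200)
    local_density = rng.uniform(500, 2000, 200)  # points per square meter
    render(floor_z, local_density)
```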
It’s an interesting thought, and I’ve already spent more time than I’d like to admit running coordinate values through VBA-enabled Excel synthesizers. But don’t look for my data to hit the charts anytime soon. The real problem, I found, is that listening takes time.
Perhaps the answer lies in a sound histogram: something that would limit which areas warrant listening to and which are harmonious enough to turn over to the client unheard. Besides, if it works, we can all expense some really great headphones!
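For what it’s worth, here is a rough sketch of how that might look, again in Python: tile the floor, summarize how widely each tile’s z values spread (a crude stand-in for how “noisy” it would sound), build a histogram of those spreads, and only queue the tiles in the tail for an actual listen. The tile size, the 5 mm tolerance, and the fabricated scan are placeholder assumptions.

```python
# A "sound histogram" sketch: flag only the tiles whose z spread suggests
# they would sound rough, and hand the quiet ones over unheard.
# Tile size, tolerance, and the fabricated scan are illustrative assumptions.
from collections import defaultdict
import numpy as np

def tile_spreads(xyz, tile=0.5):
    """Group points into square tiles and return each tile's z spread."""
    groups = defaultdict(list)
    for x, y, z in xyz:
        groups[(int(np.floor(x / tile)), int(np.floor(y / tile)))].append(z)
    return {key: max(zs) - min(zs) for key, zs in groups.items()}

if __name__ == "__main__":
    # Fabricated floor scan: mostly quiet, with one noisy strip to catch.
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(0, 5, (20000, 2)),
                           0.001 * rng.standard_normal(20000)])
    noisy = (pts[:, 0] > 2.0) & (pts[:, 0] < 2.5)
    pts[noisy, 2] += 0.01 * rng.standard_normal(noisy.sum())

    spreads = tile_spreads(pts)
    counts, edges = np.histogram(list(spreads.values()), bins=10)  # the "sound histogram"
    print("spread histogram (counts per bin):", counts)

    TOLERANCE = 0.005  # 5 mm; placeholder for whatever the project spec allows
    for key, spread in sorted(spreads.items()):
        if spread > TOLERANCE:
            print(f"tile {key}: z spread {spread * 1000:.1f} mm - give it a listen")
```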