Session at SPAR International has attendees openly wondering
HOUSTON – Perhaps the best-attended breakout session at SPAR International on the first full day of presentations was “Next Generation Photogrammetry,” which had one attendee openly wondering during the Q&A, “Will there be a SPAR 2013?” as he contemplated the disruptive nature of new-generation photogrammetry technology.
There will be, assured David Boardman, president of URC Ventures, whose software, which can turn tens of thousands of photographs into a two-billion-point cloud overnight, partly elicited the question.
“But they’re going to have to keep changing their focus if they want to last much longer than that,” he predicted.
Boardman was joined by Eugene Liscio, founder of the forensically focused AI2-3D, and Carlos Velazquez, head of Epic Scan, which recently dove into photogrammetry after more than a decade of laser scanning.
Liscio kicked off the session with a brief lesson on photogrammetry’s long history, culminating in the significant 1999 breakthrough by David Lowe at the University of British Columbia: the Scale Invariant Feature Transform (SIFT), a robust algorithm that essentially launched modern-day photogrammetry software, which no longer requires targets in the scene to match features across photos and allow for 3D calculations. It is SIFT, Boardman agreed, that led to software like URC’s, which can rapidly create large point clouds, with high levels of accuracy (“not quite the same as laser scanning, but close,” he said), out of very large photo sets.
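For readers curious what that target-free matching looks like in practice, here is a minimal sketch using the SIFT implementation in OpenCV. The filenames and the ratio-test threshold are illustrative assumptions, not details of URC’s or anyone else’s pipeline.

```python
# Minimal sketch: matching SIFT features between two overlapping photos with
# OpenCV -- an illustration of target-free matching, not a production pipeline.
import cv2

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filenames
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive matches, which is what lets the
# software find correspondences without surveyed targets in the scene.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate correspondences available for 3D triangulation")
```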
How large? URC’s airborne solution produced a gigapixel per second.
It is also driving the likes of Autodesk’s 123D Catch and any number of other photogrammetry solutions, giving 3D data collection professionals more options as they consider how best to collect data, process it, and deliver it in a form their clients can use to make good business decisions.
Velazquez, for example, presented a small case study he conducted comparing laser scanning, GPS, and photogrammetry for a volumetric calculation. He had been called out of the blue by Boardman, who had seen some of Velazquez’s YouTube videos and offered him the chance to work with URC’s software.
“Having been through the headaches of lidar,” Velazquez said, “I saw this as a solution that could really open up the market.”
He set up a simple test, establishing survey control with a Trimble S6 and then determining the volume of a pile of gravel three ways: collecting 311 points with a handheld GPS unit; scanning with a Leica C10; and shooting a set of photos with an iPhone and processing them with URC’s software.
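The panel didn’t walk through the volume math on stage, but a common way to turn any of those three point sets into a volume is to grid the points and sum column heights above a base elevation. The sketch below assumes that general grid-based approach; the one-foot cell size and flat base plane are placeholders, not details of Epic Scan’s or URC’s actual workflow.

```python
# Hedged sketch of a grid-based stockpile volume estimate from any 3D point
# set (GPS shots, scan cloud, or photogrammetry cloud). Cell size and the
# flat base plane are assumptions for illustration only.
import numpy as np

def pile_volume(points, cell=1.0, base_z=None):
    """points: (N, 3) array of x, y, z in feet; returns volume in cubic yards."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    base_z = z.min() if base_z is None else base_z   # assume lowest point is ground
    ix = ((x - x.min()) // cell).astype(int)         # bin each point into a grid cell
    iy = ((y - y.min()) // cell).astype(int)
    heights = {}                                     # heights above base, per cell
    for i, j, h in zip(ix, iy, z - base_z):
        heights.setdefault((i, j), []).append(h)
    # Volume of each column = cell area x mean height; sum over occupied cells.
    cubic_feet = sum(cell * cell * np.mean(hs) for hs in heights.values())
    return cubic_feet / 27.0                         # 27 cubic feet per cubic yard
```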
The results? Using the lidar figure of 20,193 cubic yards as “truth,” the GPS calculation produced an error of 2.71 percent and the photogrammetry an error of just 1.29 percent.
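Converting those percentages into absolute terms (a back-of-the-envelope calculation, not figures quoted in the session), the deviations against the 20,193-cubic-yard lidar baseline work out roughly as follows:

```python
truth = 20_193  # lidar volume in cubic yards, treated as "truth"
for method, pct_error in [("Handheld GPS (311 points)", 2.71),
                          ("iPhone photogrammetry", 1.29)]:
    off_by = truth * pct_error / 100    # percent error -> cubic yards
    print(f"{method}: about {off_by:.0f} cubic yards off ({pct_error}%)")
# Roughly 547 cubic yards for GPS versus 260 for photogrammetry.
```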
Does that mean, one attendee asked, that if this room is 200 feet long and I use photogrammetry I’ll be off by two feet?
All three panelists agreed you could do much better than that, depending on the control used, the skill of the person taking the photographs, the resolution of the camera being used, and any number of other factors. While the error rate of a laser scanner is relatively fixed based on the equipment’s capabilities and the distance of the object from the scanner, the accuracy of photogrammetry depends a great deal on the application.
Just try using photogrammetry in an empty room with all smooth, white walls. You might have some problems.
However, all three agreed, when you factor in the cost of the data acquisition and the speed with which the photos can now be converted to point clouds, there’s a much more interesting calculation that 3D data capture professionals can make about how much accuracy they really need, what the data will be used for, and how easily the data can be collected.