Date of Award

Fall 2017

Project Type


Program or Major

Natural Resources

Degree Name

Master of Science

First Advisor

Russell G Congalton

Second Advisor

Rebecca Rowe

Third Advisor

Stephen Eisenhaure

Abstract

To address the main drivers of global environmental change, such as land use and land cover change, evolving technologies must be adopted to rapidly and accurately capture, process, analyze, and display a multitude of high-resolution spatial variables. Remote sensing technology continues to advance at an ever-increasing rate to meet end-user needs, most recently in the form of unmanned aerial systems (UAS, or drones). UAS bridge the gap in data collection potential left by satellite imagery, aerial photography, and even ground measurements. The platform has already been deployed in many data collection scenarios and is readily modified to the needs of the end user. With modern remote sensing optics and computing technology, thematic mapping of complex communities can draw on a wide variety of classification methods, including both pixel-based and object-based classifiers. One essential step before derived thematic data can inform decision-making is validating its accuracy. Thematic accuracy assessment has advanced considerably over the years, with the site-specific error matrix, a multivariate analysis tool, now the standard evaluation mechanism. Any evaluation of certainty, or correctness, requires comparison to a known standard: the reference data. Methods for collecting reference data for both pixel-based and object-based classification assessments vary widely, and all can become quite limiting because of their cost. This research project set out to evaluate whether the new, low-cost UAS platform could collect reference data for use in thematic mapping accuracy assessments. We also evaluated several collection methods for their efficiency and effectiveness, as the ability of UAS to acquire data in densely vegetated landscapes is still relatively unknown.
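As a minimal sketch of the accuracy assessment mechanism described above: overall accuracy from a site-specific error matrix is the sum of the diagonal (map-reference agreement) divided by the total number of samples. The class names and counts below are hypothetical, not results from this study.

```python
def overall_accuracy(matrix):
    """Overall accuracy = diagonal agreement / total samples.

    `matrix` is a square list of lists: rows are the classified (map)
    labels, columns are the reference labels.
    """
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

# Hypothetical 3-class error matrix (rows: classified; columns: reference)
# classes, in order: deciduous, mixed, coniferous
error_matrix = [
    [20, 3, 1],
    [4, 15, 2],
    [1, 2, 22],
]

print(round(overall_accuracy(error_matrix), 4))  # 57 correct of 70 -> 0.8143
```

Per-class producer's and user's accuracies come from the same matrix (column and row totals, respectively); only the overall figure is shown here for brevity.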
Collected imagery was calibrated and stitched together using structure-from-motion (SfM), with calibration and configuration attempted in both Agisoft PhotoScan and Pix4DMapper Pro to form orthomosaic models. Our results showed that flying heights below 100 m above the focus area surface, while acquiring ultra-high-detail imagery, achieved a maximum of only 62% image calibration when generating spatial models. Flying at our legal maximum height of 120 m above the surface (just below 400 ft), we averaged 97.49% image calibration and a ground sample distance (GSD) of 3.23 cm/pixel over the 398 ha sampled. Using a classification scheme based on the percent coniferous composition of the sampled units, our results during optimal UAS sampling showed maximum overall accuracies of 71.43% and 85.71% for the pixel-based and object-based thematic accuracy assessments, respectively, in direct comparison to ground-sampled locations. Other randomly sampled procedures for each approach achieved slightly lower agreement with the ground data classifications. Despite minor drawbacks brought about by the complexity of the environment, the classification results demonstrated that object-based image analysis (OBIA) can achieve exceptional accuracy in reference data collection. Future expansion of the project across more study areas and larger forest landscapes could uncover increased agreement and efficiency for the UAS platform.
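The trade-off between flying height and image detail reported above follows from the standard nadir GSD relation: GSD scales linearly with altitude for a fixed camera. The sketch below uses hypothetical camera parameters (sensor width, focal length, image width), not the camera from this study, so the computed value only illustrates the relation.

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sample distance (cm/pixel) for a nadir image.

    GSD = (sensor width * flying height) / (focal length * image width),
    with the factor of 100 converting metres to centimetres.
    """
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# Hypothetical small-UAS camera: 6.17 mm sensor width, 5 mm focal
# length, 4000 px image width, flown at the 120 m legal ceiling.
print(round(gsd_cm_per_px(6.17, 5.0, 120, 4000), 2))  # -> 3.7 cm/pixel
```

With these assumed parameters, doubling the altitude doubles the GSD, which is why lower flights yield finer imagery at the cost of the calibration difficulties noted above.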