Conflation of expert and crowd reference data to validate global binary thematic maps

Waldner F., Schucknecht A., Lesiv M., Gallego J., See L., Pérez-Hoyos A., d’Andrimont R., de Maet T., Laso Bayas J.C., Fritz S., Leo O., Kerdiles H., Díez M., Van Tricht K., Gilliams S., Shelestov A., Lavreniuk M., Simões M., Ferraz R., Bellón B., Bégué A., Hazeu G., Stonacek V., Kolomaznik J., Misurec J., Veron S.R., De Abelleyra D., Plotnikov D.E., Mingyong L., Singha M., Patil P., Zhang Y., Defourny P.

Remote Sensing of Environment, 2019, Vol. 221, pp. 235–246.

With the unprecedented availability of satellite data and the rise of global binary maps, the collection of shared reference data sets should be fostered to allow systematic product benchmarking and validation. Authoritative global reference data are generally collected by experts with regional knowledge through photo-interpretation. During the last decade, crowdsourcing has emerged as an attractive alternative for rapid and relatively cheap data collection, beckoning the increasingly relevant question: can these two data sources be combined to validate thematic maps? In this article, we compared expert and crowd data and assessed their relative agreement for cropland identification, a land cover class often reported as difficult to map. Results indicate that observations from experts and volunteers could be partially conflated provided that several consistency checks are performed. We propose that conflation, i.e., replacement and augmentation of expert observations by crowdsourced observations, should be carried out both at the sampling and data analytics levels. The latter makes it possible to evaluate the reliability of crowdsourced observations and to decide whether they should be conflated or discarded. We demonstrate that the standard deviation of crowdsourced contributions is a simple yet robust indicator of reliability which can effectively inform conflation. Following this criterion, we found that 70% of the expert observations could be crowdsourced with little to no effect on accuracy estimates, allowing a strategic reallocation of the spared expert effort to increase the reliability of the remaining 30% at no additional cost. Finally, we provide a collection of evidence-based recommendations for future hybrid reference data collection campaigns.
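The reliability criterion described above can be illustrated with a minimal sketch. The function name, the vote encoding (binary 0/1 cropland labels from several volunteers per sample), and the standard-deviation threshold below are all illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def conflate_labels(crowd_votes, expert_label=None, std_threshold=0.45):
    """Decide whether crowd votes for one sample are reliable enough
    to stand in for an expert observation.

    The threshold value is purely illustrative, not from the paper.
    """
    votes = np.asarray(crowd_votes, dtype=float)  # binary 0/1 votes
    spread = votes.std()  # low spread = high inter-volunteer agreement
    if spread <= std_threshold:
        # Crowd agrees: conflate using the majority vote.
        return int(round(votes.mean())), "crowd"
    # Crowd disagrees too much: keep (or request) the expert label.
    return expert_label, "expert"

# Near-unanimous crowd: conflated as cropland.
print(conflate_labels([1, 1, 1, 1, 0]))
# Split crowd: falls back to the expert observation.
print(conflate_labels([1, 0, 1, 0, 1], expert_label=1))
```

In a campaign, samples routed to the "crowd" branch free up expert effort, which can then be reallocated to the contentious samples on the "expert" branch, mirroring the 70%/30% split reported in the abstract.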

Full text: files/publications/sotrudniki/plotnikov_remote_sening.pdf