Tuesday, July 12, 2016

Of "Eyes" and "Beholders"

[Image: Terrapattern selections (source)]
It seems like every few years we have to learn to "see" with new eyes in earth observation, both in terms of sensors and the process of deriving information from satellite imagery. When images were few and expensive, a single Landsat image cost hundreds of dollars and five cloud-free scenes made a decent data record. Research methods focused on multi-spectral analysis: locating features in bandspace and developing useful indices like NDVI. This reflected a natural inclination, I think, to want to automatically name what we see (as we do cognitively when we view an image), and to link how something appears in an image with how it behaves on the Earth. It's funny, then, that image classification and correlation analysis have become, as Iain Woodhouse writes, "two things that give remote sensing a bad name". I couldn't agree more, and I think our teaching focus needs to evolve beyond these early research methods.
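For readers who haven't met it, NDVI (the Normalized Difference Vegetation Index) is just a normalized ratio of red and near-infrared reflectance. Here's a minimal sketch in Python with NumPy; the formula is the standard one, but the function and variable names are mine:

```python
import numpy as np

def ndvi(red, nir, eps=1e-10):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    red and nir are arrays of reflectance in the red and near-infrared
    bands (for Landsat TM/ETM+, bands 3 and 4 respectively); eps guards
    against division by zero over dark pixels.
    """
    red = np.asarray(red, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in the NIR and absorbs red light,
# so dense canopy pushes NDVI toward +1:
print(ndvi([0.05], [0.40]))  # ~[0.78]
```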

There is so much more remotely sensed data available to us now, partly because of new sensors but also because the data record for longer-lived missions continues to grow. I'm particularly interested in these longer time series because I want to "see" things like fire disturbance and recovery cycles, which play out over decades to centuries. The data record doesn't have to be centuries long to study fire recovery, though: a 20-year record (like Landsat's) or a 30+ year record (like the AVHRR GIMMS dataset) is long enough to observe the most dynamic period of recovery, the decade or two post-fire. A long time series can make analysis more complicated, though, and it's tempting to reduce the size of the dataset. Sometimes ecologists focus only on the peak growing season (e.g., June, July, August) and ignore the "shoulder seasons" in spring and fall. Many studies also concentrate on a few indices (I'm looking at you, NBR) instead of all the bands in a dataset, to cut down the volume of data to be processed. It's hard to know just how to glean the most information from large datasets, and there's a lot of room for exploring the full dimensionality of a longer time series. With so much data, there are likely to be many opportunities to discover new ways of "seeing" various phenomena with remotely sensed imagery. Check out Terrapattern, for example, which isn't multitemporal, but uses spatial patterns and machine learning to identify features on the urban landscape (warning: you might waste lots of time having fun on this website).
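To make both of those data-reduction habits concrete, here's a small sketch that computes NBR (the Normalized Burn Ratio) and subsets a time series to the June-August peak growing season. The NBR formula and the Landsat band assignments are standard; the toy reflectance values and variable names are invented for illustration:

```python
import numpy as np
import pandas as pd

def nbr(nir, swir2, eps=1e-10):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2).

    For Landsat TM/ETM+, NIR is band 4 and SWIR2 is band 7; burned
    surfaces drop in NIR and rise in SWIR, so NBR falls after a fire.
    """
    nir = np.asarray(nir, dtype=np.float64)
    swir2 = np.asarray(swir2, dtype=np.float64)
    return (nir - swir2) / (nir + swir2 + eps)

# A toy one-pixel time series (reflectance values invented) with a
# burn between the 2001 and 2002 observations:
ts = pd.DataFrame(
    {"nir":   [0.35, 0.38, 0.12, 0.18, 0.24],
     "swir2": [0.10, 0.11, 0.30, 0.24, 0.18]},
    index=pd.to_datetime(
        ["2000-07-01", "2001-07-15", "2002-07-01", "2003-07-10", "2004-07-05"]),
)
ts["nbr"] = nbr(ts["nir"], ts["swir2"])

# The common "peak growing season" reduction keeps only June-August,
# throwing away whatever the shoulder seasons might have told us:
jja = ts[ts.index.month.isin([6, 7, 8])]
print(jja["nbr"])
```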

New sensors, too, are coming online that will require new methods for analysing data. The time series for these may not be long, but the spatial resolution is much finer and the revisit times are often shorter. Private companies like Planet Labs are peppering the sky with dozens to hundreds of new satellites that can image the Earth at 5 m spatial resolution in three bands nearly every day.

We're nearly spoiled for choice in terms of imagery (provided money's no object). So how does one decide which sensors to use, what time period to look at, and how fine a temporal or spatial resolution is necessary or optimal? It's easy to fall into the trap of thinking that finer resolution is "better" resolution, but the answer really depends on how variable your features of interest are in time and space. It's a question at least as old as the first Landsat mission, and these guys took a swing at constructing an idealized 'scene model' for spatial resolution back in the day. It would be interesting, I think, to revisit the optimization of temporal, spectral, and radiometric resolution, particularly given the plethora of options available to us now.
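To make "depends on how variable your features are" a bit more concrete on the temporal side, here's a back-of-the-envelope sketch (my framing, not from the scene-model work): a sampling-theory rule of thumb says the revisit interval should be no longer than half the characteristic period of the change you want to resolve.

```python
# A Nyquist-style rule of thumb: to resolve a change with
# characteristic period T, sample at least every T / 2.

def max_revisit_days(process_period_days):
    """Longest revisit interval (days) that still resolves the cycle."""
    return process_period_days / 2.0

# Illustrative numbers only: an annual phenology cycle is comfortably
# resolved by Landsat's 16-day revisit; a ~10-day snowmelt pulse is not.
print(max_revisit_days(365))  # 182.5 -> a 16-day revisit is plenty
print(max_revisit_days(10))   # 5.0   -> needs near-daily imaging
```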

