AoR 17: Jason Karl, Big Data for Big Landscapes — Detecting Change with Remote Sensing

Can satellite data and drones answer questions we're not even asking yet? Jason Karl, Univ. of Idaho researcher, believes that's a good possibility. We may be able to make connections, associations, and observations, and even test hypotheses, using images of whole landscapes in tandem with ground-based measurements to better understand and manage rangelands. Join Jason and Tip as they discuss the limitations and opportunities in remotely sensed data, how to choose good monitoring indicators and measurements, and rancher-ready tools for analyzing landscapes.

Transcript

[ Music ]

>> Welcome to The Art of Range, a podcast focused on rangelands and the people who manage them. I'm your host, Tip Hudson, Range and Livestock Specialist with Washington State University Extension. The goal of this podcast is education and conservation through conversation. Find us online at ArtofRange.com.

[ Music ]

My guest today is Jason Karl. Jason is a Professor of Range at the University of Idaho, just across the border in Moscow, Idaho. Jason, welcome to the show.

>> Thanks, thanks for having me.

>> What was your pathway to becoming a rangeland scientist who's known for working with remote sensing and big data?

>> Yeah, that's, that's a great question because it was not an intentional pathway. I came--.

>> Most of them aren't.

>> Yeah, I came to Idaho in, in 1992 from, from growing up in the Midwest. And I came out here to go to school at University of Idaho, and I was studying wildlife biology, and with all the intention of, of you know doing bird research and studying birds. And so all through my undergrad I, I did, you know, various field jobs with, with wildlife. My master's work, also at the University of Idaho, was a bird research project, and that's kind of where I first started to get into the GIS and remote sensing and the data side of things. But it wasn't really until I started working with the Idaho chapter of the Nature Conservancy in what was that, 2002? That I really kind of made that pivot into, into range systems. So my, my first area that I worked within was in Hells Canyon, working on some invasive species issues there. And yeah, from there it was sort of like more South-Idaho-oriented stuff which was more range, and then just as my career progressed from there it just became more and more range, and yeah. And here we are, right?

>> So how long have you been with U of I? Maybe you said that.

>> Well, I've been back with the University of Idaho for actually it's just, it'll be two years this August, so just shy of two years now.

>> Yep, yeah, well what angle of remote sensing and applications in rangeland are you focused on right now in terms of research and outreach?

>> Yeah, so there, there's a couple of things. Right now I'm, I guess sort of I'm focused kind of on, on sort of two different ends of a, of a spectrum maybe? Or I don't know, this multi-dimensional spectrum, right? So on one hand, I'm, I'm really interested in and doing a lot of work now on the use of drones for collecting really high-resolution imagery for range monitoring. And I like that platform a lot because, well, there's a lot we can do with it. The technology's evolving really rapidly. It's also becoming a lot more accessible, and so I know there's some really cool like $100,000 drones out there and I would love to have one, but I'm really sort of intrigued by what can we do with the technology that's, say, like, you know, what a BLM field office could afford? Or, you know, a conservation district could afford, right? So that kind of puts it in the realm of the, like the thousand-dollar drone, and the neat thing about it with the advances in the software is like you can actually get good quality data from a sensor like that, from a package like that. And so like what are the parameters under which we need to collect the imagery in order to get good data? And then what is it that we can actually get out of those that's useful from a, from a management perspective? So that's kind of like one end of it. And then the other end of it is like the totally opposite end of like, like satellite, you know, sort of moderate to coarse resolution satellite imagery, but really dense time series stuff. So rather than looking at like a satellite image or a couple of satellite images, then what can we get by looking at, you know, every Landsat scene from 1984 until now? Or every MODIS scene from, you know, 2001 until now?

>> For a given location.

>> Yeah, for a given location, and then sort of pull that apart and, and tease out that variability, and how that variability changes. And then what the signal looks like under, under different you know sort of events that we know have happened, right? And try to pull apart and tell the story of the land through that, that time series record. So those are the two sort of like, like things that I think I'm most interested in or most sort of--. I think there's the greatest potential to, to inform management from those, from those two angles.

>> Yeah, the, the drone data gives you more specific information to a project [inaudible] that provides some generic publicly accessible data available for gigantic landscapes potentially that could be analyzed.

>> Yeah, yeah, and, and you know you have to ask the question, right, of like does this sort of--it's like a spatial resolution versus like a time frequency, right? And, and which one is it that's really more important for the question that you're trying to ask?

>> And both may be useful.

>> Oh, of course.

>> You know there's, I have mixed feelings about big data, and of course the term gets used for all kinds of stuff. What we're talking about is maybe something different than the big data that gets used to analyze, you know, say a presidential election or something. But there is a, you know, a big brave new world out there of big data, and it seems like the sky's the limit for the uses of data that we can get from remote sensing applications, you know, whether drone data or satellite data, as the costs have come down on satellite data and the number of applications that provide kind of a simple user interface for people that are not GIS specialists to do something with it has grown. It seems like there are people who I would call average rangeland users like myself who are beginning to be able to do something with it. Do you see that happening? Or is that still not quite there yet?

>> I think it's coming along, but I think we've got a long way to go, and it's, it's sort of like an implementation gap, right? That, that you know we've had remote sensing technologies now for, what, like 30 or 40 years? And it's, it's been promising all sorts of things over that time frame. And I guess I feel like, and, and from what I see, it's just now starting to sort of realize that. And I think some of the tools that are making these data more accessible are certainly helping with that. I guess I feel that at some level, relative to how the remote sensing data are actually like used or useful for management, we're still trying to figure out what that actually looks like, and I think we, we've been hamstrung a little bit in the past by expecting the remote sensing technologies to replicate the things that we would do on the ground. And I, and I guess what I've seen over the last say like five or you know maybe ten years, right? Is that the research has moved in the direction more of okay, well what, what can remote sensing tell us that, that we, that we can't measure on the ground, right? And so this, this idea of larger-scale indicators or longer time frame indicators, right? So yeah, so things maybe that the remote sensing technology could, could do that's maybe better than we can do in the field. And then sort of an example of that would be drone imagery. It gives us maybe an opportunity to do like, like height and structure data better than we might be able to do in the field. Or we actually published a couple of papers when I was with the Jornada on looking at soil erosion with drone imagery, and that's a case where the drone is actually a platform that allows us to measure something a lot easier than we can do in the field. So measuring erosion, actually soil surface change or soil movement, as a field method is really hard, right? You're either doing like, like, you know silt fences or you know something where you're trying to trap sediment, but then you have a hard time quantifying like where that sediment actually came from, or you're doing like erosion bridges or erosion pins, right? But that's a very like point-based estimate of a process that's happening over a larger area. And with the drone, we can, we can look at the change in the soil surface over time with repeated drone flights, and then do that over a much, much larger area. And so, so it opens up sort of new, new possibilities.
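
To make the drone erosion idea concrete: measuring soil-surface change from repeated flights amounts to differencing co-registered elevation models from two dates. Here is a minimal Python sketch with invented numbers; it illustrates the differencing step only, not the published Jornada workflow.

```python
import numpy as np

# Two co-registered digital surface models (elevations in meters) from
# repeat drone surveys of the same plot. Values here are invented.
dsm_year1 = np.array([[100.00, 100.02],
                      [100.05, 100.03]])
dsm_year2 = np.array([[ 99.97, 100.02],
                      [100.04, 100.06]])

# Negative cells indicate soil loss (erosion); positive, deposition.
change = dsm_year2 - dsm_year1

# With, say, a 10 cm ground sample distance, each cell covers 0.01 m^2,
# so summing gives a net volume of soil moved across the plot.
cell_area_m2 = 0.10 * 0.10
net_volume_m3 = change.sum() * cell_area_m2
print(change)
print(net_volume_m3)
```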

>> Yeah, I feel like those two things that you mentioned are major barriers to implementation even for ground-based monitoring, namely the amount of variation across the landscape: the heterogeneity combined with the size, the spatial scale of rangelands means that if I'm measuring something right here, right now, that may not apply, that what I'm measuring may not be representative of even the soil type that's next to me, much less something extrapolated across, you know, say a 15,000-acre ownership. And it, it seems like, you know, we, we've said for years you can't measure everything, and so we have to pick and choose indicators that are, you know, reliable, that are less sensitive to interannual variation of precipitation, that are less sensitive to management and the timing of management input. But, but maybe the idea that we can't measure everything isn't quite true anymore. You know, to me, the idea of being able to actually measure say cover, whatever kind of cover type we want to use, across that 15,000-acre hypothetical ownership seems pretty cool and complementary with ground-based monitoring. You know, so the solution in the past has been to multiply replication, to, to replicate so that you can say with, you know, some defined level of statistical significance that what you're measuring is representative or is not, but if you can measure, if you can analyze the whole thing, it seems like that would be pretty useful.

>> Yeah, except I think there's an important distinction to make, and that's between like we, we still can't measure everything, but now we're in a position where we can estimate everything, okay? And, and, and that's a, that's sort of a critical difference right? And so the sensors themselves in a remote sensing context, the sensors themselves are just measuring like the amount of light that's reflected off of a surface in various wavelengths, and then we have to sort of like put some sort of interpretation to that in terms of like what it, what it means. And so--.

>> [Inaudible] of the metric.

>> Come up with something yeah that's of interest to us or something that means something to us from a, from a management perspective. And so yeah, they, they aren't measurements, right? The measurements actually still have to happen on the ground, and by and large still happen at these sort of site scales or point scales, but then the imagery products or sensing products give us an opportunity to then scale those up, right? And apply those across a larger landscape. But the other sort of I think important thing to keep in mind is that, you know, when, when we would take a sample of points in the field and go out and measure something, then we can, say, like calculate some sort of statistic for that. And then we can put a, like a confidence interval around that statistic, okay? And it's like we, we know sort of intuitively what that means, right? That the actual value is probably somewhere within that range that we, that we calculated, right? When we, when we do a remote sensing model of say like cover, right? Across a large landscape, then the, the meaning of that uncertainty is different, right? And so now we have an observation for everywhere. We have a, a pixel value, right? That we get from the sensor for every spot on the ground. And so the, the uncertainty isn't sort of like, well, the true value is probably somewhere within this range. The uncertainty actually describes like how well those, those reflected light values from that sensor actually correlate to the field data that we collected, right? And so it's a little bit of like, like mental gymnastics, sometimes, to sort of figure out like okay, what does that actually mean relative to my assessment of conditions within, say, like the 15,000-acre ownership that you talked about. So yeah, I think that's the challenge at this point, is in sort of placing these different pieces that we have now, the field observations, the remote sensing observations, right? Into a context or a framework, but I don't really like that term. I think it's really overused, right? Of how, how we use those things together to sort of leverage the strengths of each one, and then account for those known sort of limitations or weaknesses, right? So the field data gives us that, that sort of locked-in perspective. You know we visited that spot on the ground. We know what's going on there, right? But, but that doesn't necessarily relate well to what's in the next pasture over, right? Or the next hill over, and that's the strength of the remote sensing: it gives us that, that sort of wall-to-wall coverage, that synoptic view that allows us to maybe, maybe put the story to the data a little bit more, right? Puts those field observations in that larger context.

>> Prior to us interpreting the kind of imagery that's out there, what, what kinds of information are out there that are generically available from satellite data? You know, for example, I'm thinking of the [inaudible] data that is used in the Rangeland Analysis Platform. You know, what is that based on? I realize it's not some kind of direct, you know, eye-in-the-sky measurement of cover, like you said. What is the, what's the, you know, what's the nature of the actual data that's being pulled in, and how does that get transformed to get to something like a, a cover value? And what other kinds of raw data are out there that are available?

>> Yeah, so, so the, so what's in the RAP, as you called it, is at its sort of core some field observations that have been collected by the BLM and NRCS over, over many, many years, and then those, those values, those cover values from those field data then are correlated to the satellite imagery values. And then they create this statistical model that then gets applied across sort of space and, and time, right? And so, so that's kind of like one style of product that's out, remote sensing product that's out there, right? These are empirical models. So there's the RAP, there's the, you know the, the USGS grass shrub products, right? There's a number of these, of these data sets that are out there and available now. And then there are some other types of remote sensing products that are sort of generally available and used. And so one would maybe, we could maybe call it more of a biophysical model where it's based off of like the known relationships between the, the light, the spectral sort of properties of the imagery, and like photosynthetic activity. And so there are a number of vegetation indices like NDVI, the normalized difference vegetation index, that we know, and that's just a ratio of the near infrared light, right? Which plants reflect almost all of the near infrared light that is incoming, right? And then the red light; they absorb most of the red light, right? For photosynthesis. And so the ratio of those two wavelengths tells you a lot about photosynthetic activity in a, in a plant, right? And so there's been a lot of research over the years done to sort of characterize like what NDVI or these vegetation index values mean, right? And there's a whole host of them out there, and some of them, like this one that was developed at University of Arizona, this soil-adjusted total vegetation index. That one actually, through a little bit of like, like mathematical jujitsu, right? You can actually convert that into estimates of like total foliar cover, right? But, but those are based off of like first-principles sort of understanding of how light reflects off of, you know, photosynthetically active vegetation.

>> Versus soil versus dead vegetation.

>> Yeah, yeah. And so those are, those are, those are different than like what the cover products that, you know the RAP has, right? Which are, which are statistical models. These are more of like a, like a physical model, right? And so yeah, I think those would probably be the two kind of main products that are out there, but, but you know there's a whole host of things that are, that are available now. I think the trick becomes in, yeah, in sort of like packaging and presenting those, and then interpreting what they mean. And then interpreting them relative to like okay, well we know that, you know, NDVI sometimes is challenging to use in like low cover situations because it's not really sensitive to, to changes in, say, like in the Chihuahuan Desert, right? Where you have like very low cover of photosynthetically active vegetation. You could double your amount of cover of photosynthetically active vegetation and NDVI might not pick that up because it's not really sensitive at that low end, right?
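
Since NDVI comes up repeatedly here, the arithmetic is worth seeing once. A minimal Python sketch, assuming red and near-infrared reflectance values have already been pulled from an image; the pixel values are made up for illustration.

```python
import numpy as np

# Illustrative reflectance (0-1 scale) for three pixels:
# dense vegetation, sparse vegetation, mostly bare soil.
red = np.array([0.05, 0.15, 0.30])  # plants absorb most incoming red light
nir = np.array([0.50, 0.30, 0.35])  # plants reflect most near-infrared light

# NDVI = (NIR - Red) / (NIR + Red), bounded between -1 and 1;
# higher values indicate more photosynthetic activity.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))  # [0.82 0.33 0.08]
```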

>> Yeah, interesting. So, for example, if a, for lack of a better term, mapping application like the RAP says that you have this percentage of perennial cover versus X percentage of annual cover, they're getting perennial cover from some, some, from imagery that's showing greenness for longer in the year. Is that right? So like perennial grass is going to be green from March 15 to maybe July 15 or the first of August, whereas an annual grassland or places that have a lot of annual grasses are going to have a very narrow window where they're green, and then they immediately turn color. Is that how that would work?

>> Yeah, I don't, I'm, I'm not sure of the actual specifics of how they, they sort of modeled those. Yeah, I know it was, it was a sort of a time series approach, but I think that would be a great conversation to have with either Brady, you know Allred, or Matt Jones about the specifics of that. That's certainly one, one approach to doing it, you know, is to do this sort of multi-scene selection, or just considering this whole time series of images that keys into those different phenological stages, and the differences between the phenological stages.

>> Right.

>> We did something with a, with a postdoc that I had at the Jornada, John Maynard, who took the approach of using the time series information to, to discriminate or classify, map basically, the different ecological sites on the Jornada. And so it was all based off of just NDVI values over time, but like the, the clay site had a much different sort of like temporal signature, time series signature than the sandy sites did, right? So just using like changes in plant green-up over time and, you know, the seasonality of these different sites, we were able to actually pull these apart and map those really pretty, pretty well at the Jornada. So it's that same, same kind of idea, right? Rather than just looking at a point in time, how does the light reflect off of different types of vegetation, right? We can, we can look at okay, well how do these things change over time, right? And then what do these changes actually tell us?
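
As an illustration of separating sites by their temporal signatures (a toy stand-in for the idea Jason describes, not the actual Jornada method), one could cluster pixels on the shape of their seasonal NDVI curves:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy monthly NDVI curves: an early, sharp green-up versus a later,
# broader one, standing in for two ecological sites. Values invented.
early = np.array([0.15, 0.35, 0.55, 0.40, 0.20, 0.12,
                  0.10, 0.10, 0.10, 0.10, 0.10, 0.12])
late = np.array([0.12, 0.15, 0.25, 0.40, 0.50, 0.48,
                 0.40, 0.30, 0.22, 0.15, 0.12, 0.12])

# Six simulated pixels, three of each type, with a little noise added.
rng = np.random.default_rng(0)
pixels = np.vstack([early + rng.normal(0, 0.02, 12) for _ in range(3)] +
                   [late + rng.normal(0, 0.02, 12) for _ in range(3)])

# Cluster pixels by the shape of their NDVI time series.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
print(labels)  # two groups, matching the two seasonal patterns
```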

>> With some of the major satellite suites that have available data like Landsat or MODIS, what's the time frequency of their flyovers for a given location, assuming there's no cloud cover?

>> Yeah, so Landsat is every 16 days, and it's a, you know it's a one-shot deal, right? It passes over every 16 days, and if it's, if it's cloudy, you're out of luck, right? You know, if it--.

>> Right.

>> Whereas MODIS, the MODIS satellite is actually, it's going over what is it, every day, right? And shooting, and it covers huge swaths of the continent at one time, right? Now it's much coarser resolution, but what they do with MODIS then is that they, they sort of like collect those daily passes, and then they pick the best pixel out of that over, you know, seven or eight days. And that's the, the pixel that actually makes it into their like, like product, right? So they do these eight-day composites, right? Which, which gives you that, that sort of insulation against like cloud cover and stuff like that. So you get more, you get a more consistent sort of time series out of the MODIS than you do out of Landsat, right? Because it's got more data to pick from, and then it picks the best pixels for each one. So for the time series stuff, MODIS is really nice for that.

>> Just a coarser spatial resolution?

>> Yeah, it's a coarser spatial resolution, right? But another paper that my postdoc at the Jornada, John Maynard, and I did, and this is the "Royal We," right? This is mostly John, and his idea was to look at the, the effect of that, that spatial resolution, right? So to basically ask the question, what's more important: to have the finer spatial resolution of the, of the Landsat, or to have the coarser resolution of MODIS but have it more frequently? And we, we basically found that the, at the Jornada in this sort of Chihuahuan Desert area, the variability there, you had like site-scale variability, fine-scale like patch structure, right? You know, mesquite dune land kind of things, right? But once you got above that, it was basically those same patterns just repeated across the landscape. And, and those patterns were fine enough that even Landsat pixels weren't picking them up, and so at that point, it's like, well, you're getting way more information by having the more frequent MODIS data than you were out of having the higher-resolution Landsat data, right? So it was really cool, the approach that John took to sort of figuring out like how to like evaluate those trade-offs, right? Those space and time tradeoffs that you have to deal with in satellite imagery.
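
The eight-day compositing Jason describes can be sketched simply. Real MODIS products use quality flags and view-angle rules; the version below just keeps the highest NDVI among the clear daily passes, which captures the basic idea.

```python
import numpy as np

# Simulated daily NDVI for one pixel over an 8-day window;
# np.nan marks days lost to cloud cover.
daily_ndvi = np.array([0.42, np.nan, 0.45, np.nan,
                       np.nan, 0.48, 0.44, np.nan])

# Maximum-value composite: keep the best clear observation in the window.
composite = np.nanmax(daily_ndvi)
print(composite)  # 0.48
```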

>> Right. I just wanted to mention for listeners who feel like this is a little bit esoteric, we'll eventually dial back to WIIFM, what's in it for me? And I think bring this back down to earth.

>> Yeah, I can geek out on this stuff all day, right?

>> Moving on to monitoring for, for management, there's a trend nationally, I would say, toward creating monitoring systems that have comparable metrics even if they don't have similar measurement methods. And in the 2017 Range textbook chapter on monitoring protocols that you wrote with Jeff Herrick, who we listened to already, and David Pyke, you say that "Robust and interoperable monitoring programs provide a much more useful starting point for addressing known unknowns and unknown unknowns." What did you mean by "robust and interoperable"?

>> Yeah, so, so "robust" to me means that there are like structures or systems in place to support the collection, the sort of like care and feeding of your data, and then the use of that data on the backside. And so, so what I mean by that, right, is like that there are well-described protocols that have been sort of like, like vetted and validated. There's, there's training resources in place for that. There's defined procedures for how you do your data, you know, QA, QC, right? And, and then there's, there's sort of documented ways in which you can analyze those data, and then like, like steps for how they actually feed into the decision-making process. So to me, all of that would sort of wrap together into a robust monitoring protocol. And we actually were, were, that was sort of on display. Last week we did the annual sort of upland monitoring for the university [inaudible] Rock Creek Ranch, down in southern Idaho, and we're using the same monitoring protocols, the same monitoring system there that the BLM and the, the NRCS National, National Research and Inventory uses, right? And you know and the reason we're doing that rather than doing anything on our own is that yeah, those, those protocols that, that were sort of developed and implemented as part of that, that program are the, are the product of like 15 or 20 years of, of sort of like refinement. And so we can take advantage of all of that and implement something, and not have to worry about how to develop our own protocols. And then you know kind of deal with all the headaches of like, well what do we do in this case? Oh, we didn't think about this. Now we have to make a rule on the fly sort of thing.

>> So if you're, if you're doing research and applying management treatments on the Rock Creek Ranch, and then getting a certain set of results, your description of the results would be, could be an apples to apples comparison to data that the BLM might be collecting on their own land?

>> Yeah, and that's the interoperability part, right? And you know these data sets are, are compatible across ownerships, right? And so if I have data on Rock Creek and you have data on either the ranch next door or some other BLM allotments, then they all sort of feed together and, and give us this kind of critical mass at landscape scales that has historically been really, really hard for us to actually achieve, and, and I think we've seen that, that play out in, well, in countless cases, right? I mean the, the whole issue around sage grouse, the sage grouse habitat, and sage grouse populations. You know, that, that sort of discussion and how that played out. This is just my humble opinion, right? But it probably could have been well informed by having some consistent data sets, you know, across ownerships. Which we actually, like, have now. We, you know we, we can actually start to say things.

>> It's like speaking the same language.

>> Yeah, yeah. It all, that sort of interoperability part, you know not, not to be sort of like, like trivial about it, but the interoperability part all comes down to the fact that you and I have to agree on what a rock is, right? And, and if we don't, then we've got a square-one issue, and you know you can't use my data, right? But if we can sort of hash out some of those, some of those sort of fundamental definitions and fundamental concepts around monitoring, then my data become useful to you.

>> Which I think also motivates more people to collect some data because if the only motivation is so I can use my data internally, comparing against my own data historically, that's maybe not as strong a motivation as if what I'm doing can be compared with the guy next door, the BLM, NRCS, Rock Creek Ranch.

>> Yeah, yeah, I think you're right. Although I think that there, there is still a challenge that you know even though we talk about the importance of having you know like, like looking across boundaries and sort of taking a landscape view of it, a lot of management and, and this is not just within agencies, it's within sort of you know private entities as well. A lot of management happens at that project scale, and so there are just these intense forces that, that make us focus on that project scale. And so it's, in sort of implementing these, these consistent, you know maybe standardized approaches to monitoring, it's, it's been you know a bit of a struggle to get people to sort of think a little bit broader, and I think that, that in getting this, getting this setup implemented, right? Getting this concept implemented, there's almost like this altruistic phase that we have to go through, right? Where we need enough people to sort of like buy into the idea and do it for a while to, to sort of build that, that critical mass of data that then starts becoming you know useful to people at, at a number of different levels. And I think to their, to their credit, the BLM has really stepped up to the plate on this and invested in their AIM monitoring program which you know in the, in the course of what was it, 2019 now, right? So in the course of like eight years, they've gone from basically having no AIM data to now having AIM data on, what, you know over 20,000 locations, right? And so that then becomes this, this base where you have a new question now, right? Then this gets to that, to the unknown unknowns, right? And so now a new question comes up, well hey, I've already got a set of data that maybe can start to inform that, not to suggest that those data are going to be sufficient to answer any new question or even any existing question. But at least they're a starting point for it.

>> Yeah that issue about using common metrics and the BLM's stepping up to the plate makes me wonder, one, is the Forest Service at that same plate? And two, are the, are the AIM data or the AIM metrics ones that would be useful on the forest? My own experience with monitoring has been that you know in places where you've got more precipitation like the forest, even a dry forest, different indicators are useful than if you're in you know shrub-steppe in Eastern Washington under eight inches of annual precip. You know I guess one example's canopy cover. You know, at lower, at lower precipitations, canopy cover may be telling. It could tell a number of different things, but at least it's something that is sensitive and measurable. If you're in a higher rainfall site, canopy cover may be 100% all the time regardless of the condition of rangeland. So question number one is, is the, are the same metrics that BLM is using useful to the Forest Service? And two, is the Forest Service using them or planning to?

>> So, so to the first question, I would argue yes, they are, but maybe not necessarily in the same way, right? So, and, and we worked through the process with the BLM to define these, you know, we call them core indicators, right? And, and those are largely consistent with what the NRCS and the National Resources Inventory are using, right? The core indicators were selected because they were applicable in a number of different environments, situations, and we've implemented them, you know, all the way from the, you know, the Mojave, Sonoran, Chihuahuan Deserts in the Southwestern US, all the way up to the North Slope of Alaska, right? On these, on these coastal tundra, you know, systems, right? So, so we're pretty confident that they are applicable. Now to your specific example, about, you know, maybe total canopy cover not being as informative in a more mesic forest as it is in sort of a dry range site, right? You could argue that, well, composition probably is, right? And, and so, you know, composition is a, it's sort of another factor, another piece of those core indicators, right? So up on the, up on the, you know, these coastal tundra systems in Alaska, right? Canopy gap is not useful in the same context that it is down in the Southwestern US, right? But, you know, yeah, so as to sort of where the Forest Service is in terms of this, I'm not actually really sure at this moment since it's been a couple of years since I've engaged there. I know there were a number of people really sort of trying to coordinate efforts between the various agencies, and this is an area where I think like sage grouse is actually proving to be kind of useful in that it's, it's forcing these kinds of conversations to happen about how we actually standardize things, and how we then, you know, get data sets that can inform across these boundaries.

>> Right, which is something that we should be doing anyway, but sometimes it takes a healthy crisis to make it happen?

>> Yeah, yeah, and you know I mean each, each of the agencies, it's not, I don't want to just pick on agencies, right? Because any organization is, is you know prone to this, right? There's this sort of culture and sort of legacy of the organization that, that sort of factors into like what I'm doing today, you know, as a product of what this agency has done in the past.

>> Which is also useful because if you're, to the extent that they are doing any monitoring, there's quite a bit of incentive to continue what they had been doing even if it's not the best thing out there because at least it's comparable if they're collecting the same data in the same way.

>> Yeah, to a point though, but you know my, my analogy that I like to point out, right? Is that we don't still measure temperature with mercury bulb thermometers, right? You know, at some point, you know, we, the, you know, the sort of meteorological community made a shift from mercury bulb thermometers to digital thermometers, right? Which everything's based off of now. Now digital thermometers actually behave slightly differently than mercury bulb thermometers do, right? And so, but that, those properties, right? That difference was sort of really well studied, and you know there was a, there was a plan put in place for how to transition from one to the other. To the point like it's totally transparent to us as just consumers of sort of weather information, right? I think that's a, that's an interesting model to think about, you know. Like yeah, as we look at sort of legacy monitoring efforts, and then how we sort of bring them up to speed with, or into, you know, into correspondence with, with new approaches to monitoring. It's like okay, we need to sort of study these and understand sort of how they relate to each other so that we can actually make a useful transition. I don't think that, you know, we should just willy-nilly quit with, you know, sort of established monitoring programs and then just like, like abandon them because we have some sort of newfangled, you know, technique, right? And--.

>> Temperature's still useful, we just need to update how we collect it.

>> Yeah, and that, that was one of the big reasons, like, like in that book chapter that you referred to, and when we set up the, the monitoring protocols like for the BLM, you know we, we made a, almost like a painful distinction between like an indicator and a method, right? And to me the indicator is like what it is you're actually going to measure; that's the part that needs to be consistent over time, and then the methods should actually be, well, one, picked so that they're consistent with the definition of what the indicator is. But the methods should evolve as our, as the science evolves, right? And, and as our ability to measure things evolves. And so really, like, when it comes down to it, you know, line-point intercept is like our go-to method for measuring cover now, right? Because it gives us like a huge amount of data for a modest sort of amount of effort, right? But in 15 or 20 years, if we're still using line-point intercept to collect cover data, I might be kind of disappointed, right? It's like I would expect there to be actually something maybe better, right? Maybe we're using drones to do it. I don't know, I'm just making all of this up now, right? You know, but, but--.

>> Right, you expect the methodology to mature somewhat over time.

>> Yeah. And then--so the indicator then though is what needs to be consistent in order to carry things forward.
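
For readers unfamiliar with line-point intercept, the indicator (percent cover) reduces to a simple tally of pin hits. A minimal sketch with made-up data:

```python
# Made-up line-point intercept record for one transect: 1 = the pin
# intercepted the cover class of interest (say, perennial grass), 0 = miss.
hits = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0] * 5  # 50 pin drops, 20 hits

percent_cover = 100 * sum(hits) / len(hits)
print(percent_cover)  # 40.0
```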

>> In that chapter, you mention a number of criteria for selecting core indicators, or to evaluate what makes them useful or relevant. And for people who may be considering adopting some sort of monitoring program where they haven't before, particularly people who are not attached to an agency where that's going to be prescribed for them, I think it's useful to take a look at some of those criteria. Again, these are for indicators, not for measurement methods. And you can talk about each one as much or as little as you want to. The first one you mentioned in the chapter is relevance to ecosystem structure or function. That's one that's probably not quite as apparent to the listener at face value as to what it means. What exactly do you mean by "relevance to ecosystem structure or function"?

>> Yeah, well I think that most people expect this. It's like implied, right? If we're going to, if we're going to choose something to measure, it should actually like mean something to the system, right? And so, you know if you're interested in you know sort of like change from, from say like healthy sagebrush systems to like annual grass invaded ecosystems, right? Then you should be measuring the things that relate to that process of invasion by annual grasses. That's what that, that sort of means.

>> They could measure leaf width, but it would have no relevance to--.

>> Yeah, probably not, right?

>> Yeah.

>> But measuring, like, you know, the amount of bare ground, or measuring, you know, sort of establishment of annual grass seedlings or, you know, sort of, yeah, like, like basal cover of perennial grasses, right? All of those things could be sort of functional indicators, right? Because they're related to that process of, of annual grass invasion.

>> Right. The second one you mentioned was usability. Does that mean whether or not it's a measurement or an indicator that somebody could actually measure?

>> Yeah, yeah. I mean there's all sorts of things, right? That we can dream up that we would like to measure that are like super hard to do. And so yeah, pick, picking indicators that people can actually do, and that give data that actually like means, like means something, right? And some of these criteria overlap, right? You know, because there's one in there that talks about interpretability, right? Like what does this actually mean? And there are lots of things too, and we saw this a lot like in the, in the nineties and early 2000s when this kind of landscape ecology concept really exploded, and there was a lot of effort going on in defining landscape indicators, patch indicators, things like that, and there were all sorts of funky things sort of thrown out, like fractal dimensions, and there were all these papers that showed, well, oh, the fractal dimension of these systems is really, you know, strongly correlated with these different properties, right? Well, explain fractal dimension to your grandma, you know. It's like, so I don't know, and this may be just my personal bias, but I think we should be picking sort of the simplest and most straightforward indicators that we can actually like describe and put a meaning to.

>> Your third criterion was cost effectiveness. That's fairly straightforward. What would be an example of, you know say an expensive indicator versus a cheap indicator?

>> Well, I mean expense comes in different forms, right? So it can either be expensive to collect a, an indicator like at a site, right? And I'm, I'm trying to think of an on-the-fly example of that. But I mean it could be something that actually like requires some sort of sensor or instrument to measure, right? And then there's expense in the value, like the, the data that you actually get for each observation is really pretty small, so you have to have lots and lots and lots and lots of observations, right? And so the example of that could actually be like soil erosion where you know I'm going to measure like erosion pins or an erosion bridge over like a meter area, right? Versus flying that with a drone a couple of you know different times with good ground control, and then I can measure soil surface change over, you know 50, 60 acres, right?

>> Right. Right. The fourth criterion was cause and effect.

>> Yeah, and that gets back to this sort of functional indicator thing, right? So you would want to pick an indicator that, that --.

>> Was causal?

>> Yeah, that change in that indicator value is actually directly related to the thing that you're interested in. And so you know it could be change in like, like how connected your bare ground patches are, which like canopy gap intercept would be a method to sort of measure that. Change in that connectedness is actually directly related to wind erosion, right? Or water erosion. That's a cause and effect, something that has a strong sort of cause and effect [inaudible] there.

>> How about signal-to-noise ratio?

>> Yeah, that's a good one, too, and that comes actually in two different flavors as well, right? And, and before, you mentioned that these are sort of mostly defined relative to indicators, but a lot of them can be applied to methods as well. And so when we talk about like signal-to-noise ratio, right? Noise is just sort of like, like undesired variability in our data, right? And, and that can come either because the, the systems we're trying to measure are just sort of naturally heterogeneous, right? There's just variability in the systems. Or it can come, it can be introduced through the methods that we use as well, right? And so, you know, dealing with, with noisy systems, like just variable systems, right? Then there's different approaches to sample design, or there's, you know, sort of different indicators that we can pick. So, like, you know, variability relative to interannual precipitation, right? Some years you're really wet, some years you're really dry, and if you're trying to measure, like, like cover of annual grasses, right, it's just going to be enormously variable between years, right? But some, some other sort of metric like a density measure, well for annuals, that's not a good example, right? But, but for those like perennials, right, a density measure might actually be more stable over time, so it'd have a higher signal-to-noise ratio. Provided that it was actually still meaningful, right, as a functional indicator. But then in terms of methods, you know, like a, a method that has more like observer variability, right? Or is more subjective, is just going to be noisier. So that should just drive us to the methods that are most consistent and are the easiest to apply.
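
One rough way to compare the interannual "noise" of two indicators is the coefficient of variation across years. A sketch with invented five-year records, contrasting a flashy annual-grass cover value against a steadier perennial density:

```python
import numpy as np

# Invented five-year records for the same site.
annual_cover = np.array([5.0, 22.0, 8.0, 30.0, 12.0])      # % cover
perennial_density = np.array([9.0, 10.0, 9.5, 10.5, 9.0])  # plants per m^2

def cv(x):
    """Coefficient of variation: spread relative to the mean."""
    return x.std(ddof=1) / x.mean()

# The annual indicator varies roughly 10x more year to year
# (~0.67 vs ~0.07), i.e., a much lower signal-to-noise ratio for trend.
print(round(cv(annual_cover), 2), round(cv(perennial_density), 2))
```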

>> Both across sites and across observers.

>> Yes, yeah.

>> Quality assurance?

>> Yeah, to me this is just sort of, almost like a checked-box kind of thing, right? You know, like are there rules in place and things that you can do to assure that you're getting high quality data, right? And again, that gets back to this idea of like quantitative measurements and observations versus qualitative just sort of like eyeball assessments and descriptions, right? It's easy for us to, to or easier for us to train and then like, like verify that you and I are taking the same measurement, or you and I are observing the same phenomenon than it is to coordinate our interpretation of that site or our sort of ocular estimate of that, of that area, right?

>> In the list I don't see anything on observer bias. Would observer bias fit under that category? You know, where if I measure this particular indicator using this method and then you do it, and we come up with wildly different results, it seems like that would be a less useful--. Does that fit under quality assurance, or is that more of the signal-to-noise problem?

>> Well, I think you could fit it under both, but you know you could also make an argument that it could stand on its own as a criterion, right? And we're seeing this actually with a project that I have a graduate student on right now, Alex Trainor, evaluating utilization data. We have a project in Southern Idaho that's collecting a lot of utilization data at the same sites using different techniques, and Alex is looking at, okay, what are the effects of the different observer effects? Is there an effect of, like, you know, people who actually have a range background versus people who have a wildlife background trying to implement these? Or is there an effect as you move from, say, like a more productive system to a less productive system? And for some methods, especially the ocular estimation methods like landscape appearance, there seems to be a really strong effect there. So people who move from a, from a productive sort of, you know, ecosystem to a less productive one tend to rate that less productive ecosystem as having higher utilization when it may be the same or actually may be lower utilization, just by virtue of the fact that it's a less productive system, right?

>> Right.

>> And so yeah, that sort of inherent, those inherent biases, right? I think are not really well described a lot of times with the different methods that we use. But probably actually influence our data a lot, right?

>> The next criterion is that an indicator should be anticipatory or have anticipatory value. Does that mean that it's a leading indicator that represents, you know, something that would happen toward the front end of degradation, for example?

>> Yeah, yeah, and yeah, that's pretty much exactly--it's just like a, an academic way of saying, right, that it's a leading indicator. So an example of that could be, you know, if you're looking at sort of, you know, degradation in like a, a perennial grass system, right? Then cover of those perennial grasses could be a leading indicator because, you know, you would expect cover to decrease before you actually lost individuals of that, of that grass, right? But, but then on the flipside of that, if you're looking at like restoration or recovery, cover's probably not a good anticipatory indicator of that, right? Because you need establishment of individuals, and then the cover will come later, right? And so in a, in a recovery context, then maybe, you know, a density measure might actually be more useful.

>> I'm just thinking on the flipside, if, for example, I have some interest in, in whether insect diversity could be used as a, you know, rangeland health indicator. Are things like that, or other sort of, you know, higher-order species, more of a lagging indicator rather than a leading indicator? The canary in the coal mine would be less useful than, you know, something that measures the introduction of gases prior to something dying from the gases.

>> Yeah, right. Yeah, yeah--that's a, yeah. That, that's sort of a good point. I don't know. That would be an interesting one to sort of think through, right? And, and this gets back to sort of that idea of, of having these, these conceptual models, right? For how a system works which you know gets a lot of, gets a lot of play. And actually I think people sort of like, like dismiss that a lot of times as a step in the, in the process. Like oh yeah, we already know how these systems work, right? It was like well, we may know how they work. We may only think we know how they work, but that's sort of like, like the foundation upon which we select these functional indicators, and that is what then could suggest things like well, you know, insect diversity may actually be a good indicator for something, right? Like this thing that we're interested in because it's, it's causally or conceptually tied to these other factors, these other processes that are happening on the ground.

>> So a given indicator might have different anticipatory value depending on what you're wanting to find out?

>> Yeah.

>> What would it mean for an indicator to be retrospective?

>> Well, that, that's--.

>> I mean historical data exists.

>> That's part of it, right? You know, part of it too means that, that indicator is sort of capturing like the indicator itself is sort of capturing the history of what's happening at that site, right? And think about like, like pedestaling in a range site, right? You know, the fact that you have pedestaling means that there is some erosion process that happened in that site over time, right? Or if you have like, like plant bases that are buried under sand or sediment, right? Then, then that is a retrospective indicator because it's capturing the fact that there is some process that happened some, some yeah time before, right?

>> Right.

>> And you know not, not every indicator's going to be able to achieve every one of these, you know, kind of things, right?

>> Right, but the more, the more criteria work for a given indicator, the higher value it has.

>> Yes.

>> One of, one of the benefits that I can see in standardizing approaches to monitoring is trying to value ecosystem goods and services. You know, from an economic perspective, if you talked to ten economists, you'd get 27 different opinions on how to value these things, both intangible and tangible ecosystem goods or services. But to value something, you have to measure it. I mention this because we're going to go into a series of podcast episodes talking to people about, you know, one, how do we ensure that we are creating, you know, ecosystem goods and services? Which I think is one of the things that makes rangeland-based livestock production stand out from, say, corn production, you know? Where you grow corn, it's no longer wildlife habitat, right? But we, we expect clean water, wildlife habitat, open space. These are all things that we expect from rangeland ecosystems, but to value something or to put a value on something, you have to measure something, and I think, I think that, that some of these, some of these satellite data that we can get at larger scales maybe can help to put value on, on these less tangible ecosystem goods and services. Do you have any experience with efforts to try to measure that or quantify those specific things using satellite data?

>> That's a good question. I mean, I think a lot of the things that we have sort of, you know, worked at or worked through defining as indicators have, have ties or linkages back to sort of defined ecosystem services. You know, that, that's not sort of a lens that I've applied directly in a lot of my research. You know, I, I think certainly to the extent that satellite imagery gives us an opportunity to define new indicators that we haven't really looked at in the past, that certainly opens up sort of that, that possibility, right? Of informing ecosystem goods and services. And just a somewhat related example from the work we were doing last week: you know, at the Rock Creek Ranch in southern Idaho, there's some interest in doing some, some stream restoration work down there. And we've been having just some open discussions about, like, well, what are the indicators? What would we actually measure to sort of track the success of these treatments, and then their effect on these sort of watersheds, right? And you know one of the things that was brought up is that, well, the actual extent and change in shape and size of these stream riparian areas, right? Or these, these sort of wetland areas, would potentially be a really useful indicator. And it's like, well, that's something that you can do really easily from imagery that, that, you know, is actually kind of a pain to do in the field, right? So I think that the image products, remote sensing products, do give us an opportunity to sort of measure things that we haven't done well before, right?

>> Yeah, that's a, that makes me think, I have measured sinuosity over time using Google Earth imagery. Just to, I mean, it doesn't take any time at all to roll back imagery, and in the ones where you've got enough resolution, you can actually see the stream channel and get pretty close to what the [inaudible] would be.

>> And there's something else really interesting in that too, in that I think our, our tendency is to jump to really sort of like sophisticated analysis, complicated sort of approaches to do things, but there's a lot of [inaudible] that can come out of these remote sensing products just by, you know, it's almost like the low-tech, high-tech approaches, right? You know, yeah. You have the imagery that's available, and you can actually go in and digitize the, you know, the stream channel over time, and, you know, sort of quantify how that's changed or moved, right? So.
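
Sinuosity itself is just channel length divided by straight-line valley length, so once a channel has been digitized from imagery the calculation is trivial. A sketch with hypothetical digitized coordinates:

```python
import numpy as np

# Hypothetical channel vertices digitized from imagery (x, y in meters).
channel = np.array([[0, 0], [30, 25], [55, 10], [90, 40], [120, 30]])

# Channel length: sum of distances between successive vertices.
channel_length = np.linalg.norm(np.diff(channel, axis=0), axis=1).sum()

# Valley length: straight-line distance from first to last vertex.
valley_length = np.linalg.norm(channel[-1] - channel[0])

# ~1.0 is straight; values above ~1.5 are usually called meandering.
sinuosity = channel_length / valley_length
print(round(sinuosity, 2))  # ~1.18
```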

>> While it's a lagging indicator, it's one that's indicative of a whole lot of things that are playing into that to make that happen.

>> Yeah, yeah. And in one sense it's a lagging indicator, but it's also a persistent indicator, right? So like, like, you know, annual grass cover's a lagging indicator, too, but it's really flashy. It changes from year to year. Changes in sinuosity of a stream don't. Now granted, you could have some sort of really, you know, like, like significant episodic flooding event that changes it like overnight, but you know, yeah. In some cases, just because something is lagging doesn't necessarily mean that it's--.

>> Not useful.

>> Not useful, right.

>> Now you mentioned earlier that there are unknown unknowns that, that we may begin to learn about through access to data that we haven't had before, you know, either at larger spatial scales or at temporal scales where we have a frequency of data that we haven't had before, particularly with ground-based monitoring. The unknown unknowns I think is a quote from Donald Rumsfeld, when he said there are things that we know we don't know, and things that we don't know we don't know, and the things that we don't know we don't know may be the ones that are important or that we should be worried about. It seems like remotely sensed data could be one of the good ways to get at some of these unknown unknowns that could be important.

>> Yeah, yeah, and that's, I love that quote. And Donald Rumsfeld was just skewered for that, for saying that. I find it really interesting that that quote didn't originate with him. It actually came from a NASA administrator who was talking about space exploration. But, but it is sort of a useful sort of like way to think about like how we go about business in monitoring, right? And we're setting up these systems to deal with our known unknowns. I need to know, like, what my, you know, sort of my forage availability is and track that over time, right? But, but yeah, I mean, experience has shown us that, that there are all these unanticipated things that we're going to need to have data for, and sometimes have it really fast, right? And so yeah, I think that remote sensing certainly gives us a platform for doing that, you know? Because it's this sort of continuously operating collection of observations, you know, and there's a phrase we use sort of in range monitoring, not necessarily in a friendly context, right? We call it answers in search of questions, right? And I think we generally would say that, that monitoring just for the sake of monitoring's a bad idea, right? We need to have, you know, we need to have goals and objectives in mind for why we're doing it. But, but there is a lot of value in having some of these answers in search of questions, right? That give us this kind of just base data from which we can sort of start, right? And remote sensing is a great, a great source of those data, and, and I would say too that, you know, these, these sort of consistent, you know, sort of cross-ownership monitoring programs that come from like the core indicators and methods are, are another example of that. You know, a lot of those efforts are designed around answering specific questions, but it's a, it's all apples-to-apples data, and so we can repurpose it, you know, to the extent possible, to, to be one of those sort of base data sets for us.

>> You mentioned earlier that things like satellite data, large quantities of satellite data, don't tell a story without some interpretation based on assumptions. You know, going back to the value of monitoring, some people have said that monitoring is sort of like the dipstick in your car. You know, where we want to know, do we have enough oil that the engine is not going to burn up and leave us on the side of the road? But of course that assumes that we know how much oil is important, that our dipstick is actually accurately measuring how much oil is there. What are the, what are the dangers in, in satellite data, in big data, where that connection maybe isn't all that transparent or clear?

>> Yeah, I think that this sort of ties back to this idea of like land potential, right? And that's the sort of equivalent of you knowing how much oil's supposed to be in your car, right? And, and if you don't know what the, what the potential of the land is, and you don't know like how the, the different parts of that system are supposed to function, then you can, you can come to sort of like, like the wrong conclusion based on the data, right? And so, you know, you could look at like an increase in greenness, right? Over a landscape, and say that that's a good thing, right? But that increase in greenness, that's all the satellite's measuring, right? It's just there's more photosynthetic activity here, and you know we might think that's good. But that may be a result of, you know, we just have like annual grass invasion, and yeah, we've got a lot more photosynthetic activity, but it's like totally not the thing that we want, right?

>> Right.

>> And so you need to know, yeah, what the land's capable of doing, what the potential of that land is. And then what you should be expecting in order to interpret what those values mean, right? Because just on, on their own, those, they're not really much more useful than gee-whiz values, right? Yeah.

>> If I'm in a six-inch precip zone, I shouldn't expect 25% basal cover.

>> Right, yeah.

>> If I get eight percent, that should be considered--.

>> Yeah, yeah. And so there's this sweet spot, this range of values that you would expect to see, and if you go below that or if you go above that, then that's suggestive that something's not right, right? But, but we need to know, and maybe this is sort of an area of research, right? We need to know what those expectations are for these different indicators and these different types of land. And I think that, you know, interestingly, a lot of these data that we're collecting through programs like AIM or through the NRI, or even efforts like, you know, the RAP, right, these remote sensing products, help us to build these profiles of different systems. And so we can, we can sort of start to get that window into how they're responding under different situations.

>> What's your take-home message for ranchers? Are there some, you know, rancher-ready tools that we should promote? We've mentioned a couple that may or may not be totally ready for primetime, like the RAP. But anything else that you feel like people should pay attention to and think about using?

>> Yeah, I mean, you know, there are plenty of tools. The RAP; there's LandPKS, which I think you've sort of talked to Jeff before on that one. You know that's a, that's a great sort of easy way for people to be collecting some data and observations. I guess my, my sort of take-home would be that, you know, really, don't think about it in terms of methods. Think about it in terms of indicators. What, what is it that you're trying to measure, trying to track over time? And then, you know, look for the ways to, to do that that are most consistent with sort of what, like, the larger community of people are doing, right? And the values in that are going to be that, yeah, your data are more sort of useful and informative because there's a larger context in which to interpret the data. But the additional value is that there are also, like, more resources to support you using that, that approach, right? So I think that would be sort of my largest take-home, right? Is to focus on the indicators, and then, you know, let, let the best technology of the day sort of inform us on how we actually measure those indicators, be it a, you know, a field technique or a remote sensing technique, right? And I think we're going to see more and more of that, I was going to say the line blur between those two, right? But I think we're going to see these kind of hybridized systems sort of evolve, right? And you can argue that something like the RAP is already a hybridized system because it's using a lot of those field data to create that product, but I think we're going to start seeing more and more systems where you can actually drop your field observations in, and then it provides you that, that interpretation of it. And that's really sort of what LandPKS is, is aiming for, right? Aiming towards.

>> Yeah. Jason, thank you for your time.

>> Tip, it's been great! Thanks.

>> And we'll put some information, some links in the show notes for some of the resources that we've mentioned: websites and publications. Thanks again.

>> Thank you!

>> Thank you for listening to The Art of Range podcast. You can subscribe to and review the show through iTunes or your favorite podcasting app so you never miss an episode. Just search for "Art of Range." If you have questions or comments for us to address in a future episode, send an email to show@artofrange.com. For articles and links to resources mentioned in the podcast, please see the show notes at ArtofRange.com. Listener feedback is important to the success of our mission, empowering rangeland managers. Please take a moment to fill out a brief survey at artofrange.com. This podcast is produced by CAHNRS Communications in the College of Agricultural, Human, and Natural Resource Sciences at Washington State University. The project is supported by the University of Arizona and funded by the Western Center for Risk Management Education through the USDA National Institute of Food and Agriculture.

Mentioned Resources

Book chapter on monitoring protocols: Options, approaches, implementation, benefits. 
Landscape Toolbox website

We want your input

Future podcasting funding depends on listener feedback. Please take a minute of your time to respond to this short survey.


Taking suggestions

Have a question for us to answer on air, or a topic suggestion for a future episode? Email show@artofrange.com