Jasper Slingsby
Informing decisions requires knowing (or guessing at) something about the future.
We base our expectation on:
This can be represented like so:
The same framework applies if you are approaching the decision quantitatively (i.e. using models and data).
The relationship between effort and reward is nearly 1:1, suggesting that the more effort you invest, the more reward you get. That said, there is scatter in the points around the 1:1 line, suggesting uncertainty.
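The effort-to-reward pattern above can be sketched numerically. This is a minimal illustration, assuming made-up data: reward tracks effort roughly 1:1, with noise, and the residual scatter around the fitted line quantifies the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical effort-reward data: reward tracks effort ~1:1, plus noise
effort = rng.uniform(0, 10, 50)
reward = effort + rng.normal(0, 1.5, 50)  # scatter around the 1:1 line

# Least-squares fit: reward = slope * effort + intercept
slope, intercept = np.polyfit(effort, reward, 1)

# Residual scatter around the fitted line quantifies our uncertainty
residual_sd = np.std(reward - (slope * effort + intercept))

print(f"slope ~ {slope:.2f}, residual SD ~ {residual_sd:.2f}")
```

The fitted slope comes out near 1, but the residual standard deviation reminds us that any single prediction of reward from effort carries real uncertainty.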
Revisiting our Effort to Reward example, what would you do if the decision-maker decided to invest huge effort, but the next few data points looked like this?
The iterative decision making cycle mirrors the scientific method, i.e.:
Observation > Hypothesis > Experiment > Analyse > Interpret > Report > (Repeat)
So iterative decision-making facilitates iterative learning (i.e. scientific progress).
“prediction is the only way to demonstrate scientific understanding” - Houlahan et al. 2017
…if we cannot make reasonably good predictions, we’re missing something.
In ecology, we mostly test qualitative, imprecise hypotheses:
Without testing precise hypotheses and using the results to make testable predictions, we don’t know whether our findings are generalisable beyond our specific data set.
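Testing a precise prediction can be as simple as scoring the forecast against new, independent observations. A minimal sketch, with entirely illustrative numbers (none are from a real study):

```python
import numpy as np

# A precise hypothesis yields a quantitative prediction,
# which we score against new, independent observations.
predicted = np.array([2.0, 3.5, 5.0, 6.5])  # the model's forecast (hypothetical)
observed = np.array([2.3, 3.1, 5.4, 7.0])   # new data (hypothetical)

# Root-mean-square error: smaller = better predictive skill
rmse = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMSE = {rmse:.2f}")
```

A quantitative score like RMSE makes the test repeatable: if the model keeps scoring poorly on new data, we know we’re missing something.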
Seeks to make prediction a central focus in ecology, on a time scale that is useful for decision-makers and that allows us to learn from testing our predictions (i.e. days to decades).
Step 1: Start with your initial conditions (data and knowledge that feed into designing and fitting your model)
From Dietze et al. (2018)
Step 2: Make forecasts (i.e. predictions into the future - in blue) using your model, based on your initial conditions (red).
Step 3: Monitor and collect new observations (green) to compare with your forecasts (blue) and original observations (i.e. initial conditions (red)).
Step 4: Analyse the new observations in the context of your forecasts and original observations, and update the initial conditions for the next iteration of the forecast.
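Steps 1 to 4 above can be sketched as a loop. This is a toy scalar example, not the method of Dietze et al. (2018): the model, numbers, and the precision-weighted (Kalman-style) update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

state_mean, state_var = 10.0, 4.0  # Step 1: initial conditions
obs_var = 1.0                      # assumed observation error

for t in range(5):
    # Step 2: forecast (toy model: persistence plus process noise)
    forecast_mean = state_mean
    forecast_var = state_var + 0.5

    # Step 3: monitor - a new observation arrives (simulated here,
    # drawn around a "true" value of 12)
    observation = 12.0 + rng.normal(0, np.sqrt(obs_var))

    # Step 4: analyse - combine forecast and observation, weighting each
    # by its precision, and carry the updated state into the next cycle
    gain = forecast_var / (forecast_var + obs_var)
    state_mean = forecast_mean + gain * (observation - forecast_mean)
    state_var = (1 - gain) * forecast_var

print(f"final estimate: {state_mean:.2f} +/- {np.sqrt(state_var):.2f}")
```

With each pass through the cycle the estimate moves toward the truth and its uncertainty shrinks, which is the point of iterating: forecasts improve as new observations are assimilated.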
This can also be represented as a cycle, mirroring the scientific method:
The key steps are:
Two things not obvious from this diagram are:
Iterative ecological forecasts are thus aimed at:
So it’s a great way of getting scientists to engage in real-world problems, demonstrating the value of our science, and learning by doing!
This figure from Dietze et al. (2018) provides an expanded representation of these conceptual links between iterative ecological forecasting, the scientific method, and decision making (here in the context of adaptive management, a management paradigm that focuses on learning by doing).
Iterative ecological forecasts need to be founded on highly efficient informatics pipelines that are robust and rapidly updateable.
The emphasis is on near-term forecasts to inform management. If the process of adding new data and updating forecasts is too slow, the value of the forecasts is lost.
The best way to build the ecoinformatics pipeline is to follow reproducible research principles, including good (and rapid) data management.
Adding this to the previous slide highlights what I like to think of as “The Olympian Challenge of data-driven ecological decision making”.