It may seem surprising that agroclimate metrics can be computed from either observed (i.e., ‘real’) or modeled (i.e., computer-generated) weather data. What does it mean to compute something like degree days with modeled data, and why would anyone want to?
For the purposes of computation, the metrics don’t care where the data come from. Give the degree day equation a record of temperature values and it will give you back the metric, whether the data are ‘real’ or ‘made up’. But of course what you do with that metric depends a great deal on where the weather data came from.
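To make that concrete, here is a minimal sketch of a degree day calculation using the simple averaging method, one common formulation (not necessarily the exact equation the author has in mind). The 10 °C base temperature is an assumed example; real applications use a crop- or insect-specific threshold. The point is that the function accepts any list of temperatures, observed or simulated.

```python
from typing import Iterable


def growing_degree_days(tmax: Iterable[float], tmin: Iterable[float],
                        t_base: float = 10.0) -> float:
    """Accumulate growing degree days from daily max/min temperatures (deg C).

    Simple averaging method: each day contributes
    max(0, (tmax + tmin) / 2 - t_base).
    The 10 deg C base is an example threshold, not a universal value.
    """
    total = 0.0
    for hi, lo in zip(tmax, tmin):
        total += max(0.0, (hi + lo) / 2.0 - t_base)
    return total


# The function is indifferent to the data source: these daily values could
# come from a weather station record or from climate model output.
tmax_values = [22.1, 24.5, 19.8]
tmin_values = [11.0, 12.3, 9.5]
print(growing_degree_days(tmax_values, tmin_values))
```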
Metrics computed from actual measurements, and metrics computed from weather models, are used for different purposes. If you’re a farmer, you probably wouldn’t want to schedule your irrigation based on the simulated weather from a climate model. Likewise, if you’re a water control engineer, you probably wouldn’t want to plan the size of your flood infrastructure for the next 50 years based solely on the weather from the past couple of years.
Each type of weather data has appropriate and inappropriate uses. But the good news is that agroclimate metrics are generally based on plant and insect physiology, so they work for all kinds of data - past, present, and future. A metric that predicts nut development rates for a particular cultivar today is still a pretty good guess for those rates in the past, as well as 50 years from now.