I would venture to guess that everyone reading this post owns a computer, if not more than one. In 1976, however, that would have been an absurd claim. That was the year Apple Inc. was founded, and the company's now-famous founders set out to make personal computing a reality. At the time, computers existed primarily (if not exclusively) as mainframes serving the business needs of large organizations. The arrival of personal computers effectively gave individuals the computing capabilities of those large organizations. I wonder, though: did most of those individuals immediately see the value and integrate the new innovation into their work and lives? I have not done the research, but I am willing to venture another guess that the answer is no.

Advancing technology is naturally accompanied by a lag in uptake, and often a need for auxiliary tools or new skills before we realize the full benefit of the advancement. The history of hydrologic forecasting is no different. The sophistication of underlying models, the availability of data to inform them, and the spatio-temporal resolution of forecasts have all come a long way, particularly recently with the rise of machine learning and abundant computing power. But we still have some catching up to do in making that information truly useful.

From few to many, but barriers remain

For years, hydrologic forecasting was performed only by large organizations with the IT and personnel resources necessary to run the complex data collection and modeling systems. Though these systems were robust and significant effort went into the forecast process, computing power and availability of real-time data were limited.  Thus, forecasts were generally issued only at select locations a few times a day (at best). Furthermore, given the limited information to judge the forecasts, interpreting the reliability of forecasts and translating the information into useful guidance required a significant amount of forecaster experience.

In recent years, the sharp increase in computing speed and storage, along with a growing recognition of the value of (and thus greater investment in) natural resource-related data, is creating an opportunity for a wider range of organizations to generate, or gain access to, forecasts at a greater number of locations. Thanks to expanded ground and satellite monitoring networks, meteorological, land, and soil data are now available publicly, in near real-time, and at good quality. The abundance of data and swell of computing power have led to the development of mostly automated national- and global-scale hydrologic forecast systems, with forecasts produced and published at high resolutions and frequencies. For example, in the United States, anyone can now get near real-time streamflow forecasts for almost any river, stream, or creek. The issue of limited accessibility and coverage is diminishing.

Although forecasts are now more readily and widely available, barriers remain that keep many potential users from taking full advantage of this new information. Potential end-users are people or organizations like reservoir operators, emergency managers, hydropower producers, and irrigators: people whose jobs or situations require that they anticipate, and often base decisions on, future river or reservoir conditions. Why do many who could greatly benefit from local, high-resolution streamflow forecasts struggle to make use of them? To appreciate the barriers, we need to step out of our research scientist's shoes for a moment and into the (undoubtedly more stylish) forecast end-user's shoes.

Shifting to a user’s perspective

As users, let’s consider our ultimate objective for using river forecasts in the first place – i.e., to make decisions. To truly help us make decisions, we need information that is 1) actionable and 2) trustworthy, in the context of our decisions and circumstances.

For information to be actionable, we must understand it and be able to relate it directly to an action we can take. For example, streamflow or water level information alone may not mean much to us if we are the general public or first responders. Instead, in these roles, we need to know how a flood will impact us (e.g., will a bridge be under water) or how it compares to prior events (e.g., will more homes be flooded than during the last one). Likewise, if we are reservoir operators, we need to know how much water will enter our reservoir to guide our key decision: how much water to release. Forecasts of flow downstream of our dam (which may include incorrect predictions of our operations) are sometimes less relevant. Therefore, because the content in forecast products is often not actionable, and because translating it into something actionable can be complicated, we end up not using the forecasts.
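To make this concrete, here is a minimal sketch of what "translating a forecast into something actionable" can mean in practice. The site names and stage thresholds are entirely hypothetical, invented for illustration; a real system would draw impact thresholds from local surveys or flood inundation mapping.

```python
# Hypothetical translation of a raw river-stage forecast into actionable guidance.
# Site names and threshold values below are illustrative, not from any real gauge.

FLOOD_IMPACTS = {              # stage (meters) at which each impact begins
    "Main St bridge deck": 4.2,
    "Riverside Park homes": 5.0,
}

def impacts_for_stage(forecast_stage_m):
    """Return the list of impacts a forecast river stage would trigger."""
    return [name for name, threshold in FLOOD_IMPACTS.items()
            if forecast_stage_m >= threshold]

# A forecast of "4.5 m" means little by itself; the impact list is what
# a first responder can act on.
print(impacts_for_stage(4.5))  # ['Main St bridge deck']
```

The point of the sketch is the mapping itself: the decision-relevant output is "which impacts are triggered," not the raw stage value.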

For information to be trustworthy, we need evidence that it is reliable and of sufficient quality to establish confidence. In the realm of predicting the future, trustworthiness is tightly entwined with understanding uncertainty. Hydrologic forecasts are typically produced through statistical methods, physical or conceptual modeling, or machine learning, each of which is affected by many sources of uncertainty. Understanding those sources and their impacts is essential to judge forecast quality (e.g., how good was the forecast in previous events of this type?), which in turn is essential to establish trust. As users, however, we lack tools and approaches both to understand the uncertainty in a given forecast (i.e., at the time decisions need to be made) and to judge how well a forecast generally performs or has performed in the past. That lack of user-accessible tools, and of guidance for using them, prevents us from trusting the forecasts and, consequently, from using them to guide our decisions.

Cultivating utility

Stepping back into our scientist's shoes, let's focus on how we can chip away at these barriers and cultivate greater use of hydrologic forecasts. Here at RTI's Center for Water Resources, we are working to improve the context of forecast information, making it more applicable for users in several ways: applying post-processing to create more relevant information, providing intermediate and more relevant model outputs, and augmenting forecasts with additional, external information. We are also helping users implement custom forecast systems to generate forecasts that better match their needs and resources or to address unique situations.

We are also working to build trust with users by helping them understand the accuracy and uncertainty associated with forecasts. For example, to characterize and convey uncertainty, we can provide forecasts from different sources, or multiple versions of forecasts from the same source ('ensembles'), so that users can see a range of possible future flows or water levels. In parallel, to help users get a sense of the accuracy they can expect, we are developing tools for systematic forecast evaluation, i.e., qualitatively and quantitatively assessing how well forecasts have performed in the past. In some cases, forecast evaluation may also reveal ways to reduce uncertainty in the forecast process. We believe ensemble forecasting and forecast evaluation are key for a wider range of users to see the value, build trust, and make effective use of hydrologic forecasts.
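The two ideas above can be sketched in a few lines. This is not our evaluation tooling, just an illustrative toy with made-up flow numbers: summarizing an ensemble as the probability of exceeding a threshold, and scoring an archive of past forecasts against observations with a simple error metric (real verification would use richer measures such as skill scores over many events).

```python
# Toy sketch of (1) an ensemble summary and (2) a simple forecast evaluation.
# All flow values (m^3/s) are invented for illustration.

def exceedance_probability(ensemble, threshold):
    """Fraction of ensemble members at or above a flow threshold."""
    return sum(member >= threshold for member in ensemble) / len(ensemble)

def mean_absolute_error(forecasts, observations):
    """Average absolute difference between past forecasts and what occurred."""
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(forecasts)

# A 10-member ensemble forecast of flow for one location and lead time:
ensemble = [120, 135, 150, 160, 142, 128, 155, 170, 138, 147]
print(exceedance_probability(ensemble, 150))  # 0.4  (4 of 10 members >= 150)

# Evaluating an archive of past single-valued forecasts against observed flows:
past_forecasts = [110, 140, 90, 160]
past_observed = [100, 150, 100, 150]
print(mean_absolute_error(past_forecasts, past_observed))  # 10.0
```

The exceedance probability turns a cloud of ensemble traces into a single decision-relevant number, and the error statistic gives a user a concrete basis for how much to trust the next forecast.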

Moving forward

The field of hydrologic forecasting has made enormous progress over the last few decades, only a fraction of which we have touched on above (other areas include model improvements, data management and dissemination, reduced latency, data assimilation approaches, etc.). However, as the technology continues to advance, we must not lose sight of the user perspective and leave utility in the dust. If we are aiming to put forecasts into the browser of the individual, we may need to think more like Apple and work backward from the user experience (and keep wearing those stylish user shoes).

Learn more about what our customized forecast solutions can do to transform your organization and enhance data-driven decision making.