Using data as a weapon in wildfire prevention is top of mind at utilities across the world. Read TROVE’s data-gathering recommendations as featured in PG&E’s recent report to the California PUC.
Download the entire report.
TROVE’s expertise lies in helping utilities maximize the value of their data and in delivering analytics that support data-driven, risk-informed decision making. In our view, a successful data approach can be broken into five categories:
- Data access. First and foremost, data access poses a unique challenge for third-party researchers looking to work with utility data. Traditionally, many utilities store their data on private servers spread across multiple locations throughout the organization. By compiling the relevant utility data in one cloud-based location (AWS or similar), utilities can significantly reduce the burden of data access for third-party researchers. A standard platform such as AWS also lets researchers bring their own software and analytics into the secure utility environment and get up to speed quickly on the value-add research, rather than spending significant up-front effort on data transfer and setup.
- Utility context. PG&E and the other California utilities will benefit significantly from active partnerships with third-party analytics and data science companies that have extensive utility experience. Utility data and, more importantly, utility context can be nuanced and challenging; analysis without that context may surface results that are irrelevant or impossible to act on. For data analysis to add value, third-party researchers should balance a “new” approach (new data, new methods, new data science expertise) with a foundational understanding of utility context: What specific business problem are we trying to solve? How might new analyses affect business operations? What contextual challenges shape the desired outcomes?
- Non-utility data. One specific way to balance newness with utility expertise is to find experts who can combine new data sources with traditional utility data. For example, a data expert who understands both the data science of predictive hazard tree identification and the utility-side question of what risk those trees pose to assets strikes the right balance of “new” and “context” to deliver high-value results. Specific areas of expertise (by no means an exhaustive list) might include predictive hazard tree analytics, bird migration and impact prediction, lightning strike risk modeling, and predictive asset failure modeling, all of which can be overlaid with grid and asset data to perform detailed risk assessment.
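As a minimal sketch of that overlay idea: the snippet below blends external, non-utility hazard scores (hazard trees, lightning) with utility asset context (exposure, customers served) into a single per-asset risk ranking. All field names, scores, and weights are illustrative assumptions, not a real utility schema or TROVE’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """Minimal stand-in for a grid asset record (hypothetical fields)."""
    asset_id: str
    span_miles: float      # exposed conductor length (utility data)
    customers_served: int  # downstream customers (utility data)

# External (non-utility) hazard scores keyed by asset, normalized to [0, 1].
# In practice these would come from vegetation and lightning risk models.
hazard_tree_score = {"F101": 0.8, "F102": 0.2}
lightning_score = {"F101": 0.1, "F102": 0.6}

def combined_risk(asset: Asset) -> float:
    """Blend external hazard with utility context: exposure and impact.

    The 0.7 / 0.3 weighting is an arbitrary placeholder.
    """
    hazard = (0.7 * hazard_tree_score.get(asset.asset_id, 0.0)
              + 0.3 * lightning_score.get(asset.asset_id, 0.0))
    exposure = asset.span_miles       # more line miles, more chances to fail
    impact = asset.customers_served   # crude proxy for consequence
    return hazard * exposure * impact

assets = [Asset("F101", 4.0, 1200), Asset("F102", 2.5, 300)]
ranked = sorted(assets, key=combined_risk, reverse=True)
print([a.asset_id for a in ranked])
```

The point of the sketch is the structure, not the numbers: the “new” data sources enter as normalized hazard scores, and the utility context enters as exposure and impact multipliers.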
- Detailed customer data. In addition to grid-related expertise, granular customer data and analytics can be used to understand and optimize communications and engagement strategies alongside grid hardening and traditional risk mitigation. Many utilities still rely on static, demographic-based personas to understand their customers (e.g., “techie millennials,” “family of four,” “retirees”), but best practice is moving toward a “Segment of One” understanding that delivers best-in-class customer engagement. This targeted approach supports the overall wildfire safety effort in several ways: a data-driven view of how best to message customers so they adopt safety practices (maintaining defensible space, understanding the increased presence of PG&E tree trimmers, preparing for PSPS), and, as a minor input on customer impact, informing grid safety investments by answering questions such as: Which customers are most likely to complain about PG&E tree trimming and cause delays? Which feeders serve customers more likely to depend on electric well pumps during a PSPS?
- Metrics. TROVE has done significant work with utilities on using historic and reporting data to determine future risk and inform risk-mitigation strategy. One key lesson from these efforts: if a utility focuses only on eliminating major events (i.e., “a reduction in the risk of catastrophic wildfire”), there aren’t enough data points to tell whether the utility is moving the needle. A better practice is a data-driven approach to minimizing the risk of minor events that have the potential to turn into major events, regardless of whether they actually do; these events occur often enough to form trackable metrics. The important thing is to be proactive, not reactive. The metrics that are best for reporting are not necessarily the best decision-making tools (see our article on why CAIDI/SAIDI/SAIFI shouldn’t be used to decide where to make reliability investments). For reducing wildfire risk specifically, TROVE believes there are two high-value lenses through which risk should be viewed:
- A data-driven risk score of all fire starts (regardless of size) combined with risk of growth (Wildfire Hazard Map) and impact (a combination of the number and types of customers in the vicinity).
- A data-driven risk score of all outages that could have caused fires (e.g., vegetation-related outages, wire down, catastrophic transformer failure). This could be evaluated location-agnostically and/or with the same overlay of growth risk and impact outlined in the first lens above.
Tracking both of these metrics would help provide some consistency from year to year. Eliminating fire starts is the key metric, but if in any given year, fire starts decrease while outages increase, the utility may have simply gotten lucky in that outages didn’t turn into fires at the normal expected rate. The combination of both metrics will provide richer context.
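The paired-metric idea can be sketched in a few lines: compute a risk-weighted total for both lenses (fire starts and fire-capable outages) each year, and read them together rather than separately. The event fields, weights, and toy data below are illustrative assumptions, not TROVE’s actual scoring model.

```python
def risk_weighted_total(events):
    """Sum per-event risk: likelihood of growth x customer impact."""
    return sum(e["growth_risk"] * e["impact"] for e in events)

# Toy year-over-year event logs (hypothetical data).
fire_starts = {
    2022: [{"growth_risk": 0.6, "impact": 500}, {"growth_risk": 0.2, "impact": 50}],
    2023: [{"growth_risk": 0.3, "impact": 100}],
}
fire_capable_outages = {
    2022: [{"growth_risk": 0.4, "impact": 200}] * 5,
    2023: [{"growth_risk": 0.4, "impact": 200}] * 9,
}

for year in (2022, 2023):
    starts = risk_weighted_total(fire_starts[year])
    outages = risk_weighted_total(fire_capable_outages[year])
    # Fire-start risk falling while fire-capable-outage risk rises may mean
    # the utility got lucky, not safer -- the pairing carries the signal.
    print(year, round(starts, 1), round(outages, 1))
```

In this toy data, the fire-start metric improves from 2022 to 2023 while the fire-capable-outage metric worsens: exactly the “got lucky” pattern the paired metrics are meant to expose.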