Solar Performance Gaps: What Benchmark Data Reveals That SCADA Alone Cannot

Written by Cristina Daimiel | May 12, 2026 5:49:06 PM

According to kWh Analytics' 2025 Solar Risk Assessment, PV sites across the US are underperforming their P50 forecasts by 8.6% on average, a figure drawn from more than 34,000 system-months of operating data spanning 2015 to 2023. That average masks a worsening trend: the report shows performance has declined year over year across the period, with 2023 representing some of the weakest results in the dataset.

Behind that gap sits a pattern of accumulated underperformance across components: inverter derating, tracker misalignment, string anomalies and other degraded states that fall below the threshold of traditional alarm-based monitoring and compound quietly over weeks and months. The report identifies DC health issues, alongside curtailment, shading and sub-hourly clipping, as key contributors to that gap.

The more recent kWh Analytics 2026 Solar Risk Assessment can be found here.


Why Alarm-Based Monitoring Leaves Performance Gaps Open

Alarm systems are built to detect binary failures: a component is either working or it is not. Detecting when something is working poorly is a different problem entirely, and one they are largely blind to.

An inverter underperforming due to thermal derating or a firmware condition will not typically trigger an alert. A tracker running with a calibration offset will show generation, just less of it. The same applies across the full range of loss categories that drive avoidable underperformance in solar portfolios:

  • Modules, strings and combiner boxes: blown fuses, loose MC4 connectors, cables damaged by rodents, shaded strings, soiled panels from dust or bird droppings.

  • Inverters: derating due to overheating, efficiency below rated output, reactive power dispatch reducing active power output.

  • MV/HV transformers: forced derating due to overtemperature.

  • Trackers: misalignment due to bad calibration, motors stuck in fixed position.

These are the loss categories that consistently account for the largest share of production losses at utility-scale PV sites, often ahead of weather-related losses that asset managers tend to assume are the primary driver. The distinction matters for how teams allocate their time. If underperformance is being attributed to irradiance variability when it is actually equipment degradation, corrective maintenance gets deprioritized and losses continue.
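The distinction between weather-driven and equipment-driven shortfall can be made concrete with a performance-ratio check: if output is low on a day when irradiance was near normal, weather is not the explanation. The sketch below illustrates that logic only; the fixed 80% system-efficiency assumption, the 0.90 performance-ratio floor and the input format are illustrative assumptions, not Clir's actual models.

```python
from statistics import median

def attribute_gap(days, rated_kw, eff=0.80, pr_floor=0.90):
    """days: list of (energy_kwh, poa_kwh_m2) daily pairs.
    Returns one (performance_ratio, likely_cause) tuple per day."""
    med_poa = median(poa for _, poa in days)
    results = []
    for energy, poa in days:
        # Naive expected-energy model: rated power scaled by insolation
        # and an assumed flat system efficiency.
        expected = rated_kw * poa * eff
        pr = energy / expected if expected else 0.0
        # Low PR on a near-normal irradiance day points at equipment, not weather.
        cause = "equipment" if pr < pr_floor and poa > 0.8 * med_poa else "weather"
        results.append((round(pr, 2), cause))
    return results
```

A real implementation would use a modeled expected-energy baseline rather than a flat efficiency, but the decision structure is the same: condition the shortfall on the resource before blaming the resource.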


The Comparison Problem That Most Monitoring Tools Cannot Solve

Knowing your plant is underperforming versus its P50 forecast is useful. Knowing whether that gap reflects a site-specific issue or a condition common across assets in your region and asset vintage is a different capability.

Without benchmark context, asset managers face two recurring problems. The first is over-escalating: raising issues with O&M contractors or OEMs that turn out to be weather-driven and within expected variation. The second is under-acting: accepting performance gaps that look modest in isolation but are significant outliers against peer assets.

Both are costly. The first wastes engineering time and creates friction with service providers. The second leaves recoverable revenue on the table.

The gap between "my site is down 5% versus budget" and "my site is down 5% versus budget and performing 8% below comparable assets in this climate zone" is the difference between a note in a monthly report and an urgent investigation.
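One minimal way to operationalize that distinction is to score a site's budget deviation against the distribution of the same metric across comparable assets, and only escalate when the site is a statistical outlier among its peers. This is a hypothetical sketch of that triage rule, not Clir's benchmarking method; the 2-sigma threshold is an assumption.

```python
from statistics import mean, stdev

def peer_flag(site_gap_pct, peer_gaps_pct, z_threshold=2.0):
    """site_gap_pct: this site's % deviation from budget (negative = under).
    peer_gaps_pct: same metric for comparable assets (same region, vintage).
    Returns (z_score, escalate): escalate only when the site is an outlier
    against its peers, not merely below its own budget."""
    mu, sigma = mean(peer_gaps_pct), stdev(peer_gaps_pct)
    z = (site_gap_pct - mu) / sigma
    return round(z, 2), z < -z_threshold
```

Under this rule, a site 5% under budget in a month when its peers average 5% under is not escalated; the same 5% gap in a month when peers are flat would be.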


What 350+ GW of Industry Intelligence Enables

Clir's platform, backed by the largest renewable energy dataset globally, enables asset managers to move from absolute performance tracking to relative performance benchmarking across technologies, geographies and asset vintages.

When a PV site shows an availability gap in a given month, Clir can distinguish between a gap that reflects localized equipment degradation and one that is consistent with broader conditions across a region. That distinction is drawn from standardized performance data across more than 350 GW of renewable assets, making it a quantified comparison rather than a qualitative judgment.

Clir's platform runs more than 50 automated detectors covering solar-specific loss categories, from string-level anomalies and inverter derating to tracker misalignment, soiling and curtailment. When an issue is flagged, asset managers can interrogate it directly through the platform's event timeline and analytics tooling.

The 350+ GW dataset provides context that most monitoring tools cannot deliver: whether the issue is isolated to this site or reflects a pattern seen across comparable assets, and how others in the same situation have successfully resolved it.


Why Data Quality Determines Loss Accuracy

Most monitoring platforms connect to SCADA and display the data as received. For older assets in particular, that raw data is rarely clean: sensor spikes, negative generation values, flat-lined readings that mask actual output and coverage gaps that quietly distort availability calculations. Displaying that data without correction produces numbers that are difficult to trust and harder still to use as the basis for a board presentation or an OEM dispute.

Clir cleans and enriches SCADA data before any analysis runs. Erroneous values are identified and corrected, gaps are handled consistently and the resulting dataset reflects what the site was actually doing rather than what a faulty sensor reported.

That cleaning step is also what allows loss accounting to be precise: when the underlying data is reliable, each loss category can be attributed accurately rather than absorbed into an unexplained residual. For asset managers trying to understand why a site is underperforming, the difference between cleaned and raw data is often the difference between a clear answer and an inconclusive one.
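The artifact classes named above (spikes, negative values, flat-lined readings, gaps) each have a simple mechanical signature. The sketch below flags them in a raw power series so downstream loss accounting can treat bad readings as gaps rather than real output. The thresholds (1.2x capacity for spikes, six identical readings for a flatline) are illustrative assumptions, not Clir's actual rules.

```python
def clean_power_series(values, capacity_kw, flatline_len=6):
    """values: raw power readings, one per interval; None marks a missing reading.
    Returns (cleaned, flags): flagged readings are replaced with None."""
    cleaned, flags = [], []
    for v in values:
        flag = None
        if v is None:
            flag = "gap"
        elif v < 0:
            flag = "negative"                  # impossible generation value
        elif v > 1.2 * capacity_kw:
            flag = "spike"                     # physically implausible reading
        cleaned.append(None if flag else v)
        flags.append(flag)
    # Flat-lined stretches: identical nonzero readings repeated too long,
    # typically a stuck sensor masking actual output.
    run_start = 0
    for i in range(1, len(values) + 1):
        if i == len(values) or values[i] != values[run_start]:
            if i - run_start >= flatline_len and values[run_start] not in (None, 0):
                for j in range(run_start, i):
                    cleaned[j], flags[j] = None, "flatline"
            run_start = i
    return cleaned, flags
```

In practice, flagged intervals would then be reconstructed from neighboring sensors or modeled output rather than simply nulled, but the detection step is what keeps faulty readings out of availability and loss calculations.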


From Gap Identification to Board-Level Explanation

One of the less visible costs of solar underperformance is the time spent accounting for it. Asset managers who cannot clearly explain a performance gap face difficult conversations with boards, investors and service providers, often armed with inconsistent data and no external benchmark to reference.

Platform-driven performance monitoring changes that. When underperformance is detected early, quantified accurately and benchmarked against peer assets, asset managers can explain what happened, what it cost and what is being done about it. Analysis that previously took weeks of manual work becomes available in a fraction of the time.

An 8.6% performance gap against P50, sustained across a national fleet and worsening year on year, represents a significant volume of recoverable revenue. Closing it starts with knowing where the losses are, and whether what you are seeing is a problem unique to your assets or one that the rest of the industry already knows how to resolve.

Cristina Daimiel is a Senior Solar Analyst at Clir Renewables, where she focuses on developing algorithms to support Clir's AI platform, turning raw SCADA data into accurate, reliable performance insights for clients.