Performance data errors do more than create operational rework. They damage client trust, invite regulatory scrutiny, and erode a firm’s reputation. Performance results are not just numbers; they represent a firm’s credibility. Portfolio managers, client reporting, marketing, and compliance all rely on performance results being accurate, consistent, and durable. Once firms publish results, stakeholders expect them to persist, and errors or revisions quickly undermine confidence across the organization and with clients.

The Performance Operations team serves as the gatekeeper of performance results and an essential line of defense for identifying upstream data issues. Performance quality audits and oversight controls are essential to ensuring the data delivered is free of material defects. The challenge for Performance Operations is defining enough audit rules to proactively identify true issues while limiting false positives. And once results are published, there must be a well-defined process to revise and republish them.

To address these challenges, firms must focus on post-performance calculation quality controls that balance thoroughness with efficiency. The objective is to identify material errors without overburdening teams, ensuring accurate results flow to downstream consumers.

To help firms navigate this balance, we outline three best practices that directly address common pain points and reduce the risk of inaccurate or unreliable results.

1. Establish Quality Results Using Relative Analysis

Without an appropriate comparison for returns, firms face frequent false positives, inconsistent remediation, and the risk of overlooking material errors.

Audit rules should be grounded in tolerances that are both reasonable and measurable. These checks often uncover accounting data issues such as missing or stale security prices, incorrect FX rates, or misapplied corporate actions that impact market values or cash flow.

Return reasonability can be defined in two ways:

  • Absolute tolerance thresholds, such as asset-class-level checks, which are more efficient and more effective than security-level checks for low-volatility investments such as Cash and possibly Fixed Income
  • Relative returns against an assigned benchmark or peer group, which are required for moderately to highly volatile market segments to find true outliers
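As a rough illustration, the two approaches above can be combined into a single check. The asset classes, tolerance bands, and function names below are assumptions for the sketch, not prescribed values; each firm would calibrate its own thresholds.

```python
from typing import Optional

ABS_TOLERANCE = {          # absolute monthly-return bands by asset class
    "Cash": 0.005,         # flag cash returns beyond +/- 0.5%
    "Fixed Income": 0.03,  # flag fixed income returns beyond +/- 3%
}
REL_TOLERANCE = 0.02       # flag returns more than 2% away from benchmark


def flag_return(asset_class: str, portfolio_return: float,
                benchmark_return: Optional[float] = None) -> bool:
    """True if the return breaches its tolerance and warrants review."""
    if asset_class in ABS_TOLERANCE:
        # Low-volatility segments: a simple absolute band is sufficient.
        return abs(portfolio_return) > ABS_TOLERANCE[asset_class]
    if benchmark_return is None:
        raise ValueError("Volatile segments require a benchmark return")
    # Volatile segments: compare against the assigned benchmark.
    return abs(portfolio_return - benchmark_return) > REL_TOLERANCE
```

A 1% cash return would breach the absolute band and be flagged, while a volatile equity return close to its benchmark would pass without generating noise.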

Ideally, market data is used to reduce the number of false positives by explaining return swings that are driven by market conditions. If market data is not available, peer group analysis can also be very effective, comparing total-level performance against designated peers (peer-to-peer). Comparative analysis quickly and efficiently identifies return anomalies with little overhead.
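The peer-to-peer comparison can be sketched in a few lines. The 2% band and the function name are illustrative assumptions; in practice the band would vary by peer group.

```python
import statistics


def flag_vs_peers(portfolio_return: float,
                  peer_returns: list,
                  band: float = 0.02) -> bool:
    """True if the total return sits more than `band` from the median
    of the designated peer group (peer-to-peer comparison)."""
    peer_median = statistics.median(peer_returns)
    return abs(portfolio_return - peer_median) > band
```

A portfolio returning 5% against peers clustered around 1% would be flagged for review, while one in line with the peer median would pass.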

When quality standards are relative and consistently applied, teams can focus on issues that materially impact results rather than spending time on noise.

2. Define The Right Level of Granularity

Many firms either over-engineer their audits, wasting resources on immaterial checks, or under-scope them, leaving gaps that expose downstream consumers to errors.

Consider the organization as a whole and the breadth of performance results delivered to downstream consumers. What is the volume and complexity of your reporting? For example, a pure equity firm that reports performance for a limited set of product types at the total level will not have the same audit requirements as a multi-asset firm reporting across diverse products and structures.

At an extreme, audit checks may need to validate all the following:

  • Total portfolio, gross and net
  • Security-level returns and contributions
  • Segment or classification groupings
  • Multiple performance start dates
  • Currencies, local returns
  • Attribution factors
  • Ex-post risk statistics
  • Account aggregates
  • GIPS composites or composites for internal oversight

Although the list is extensive, the focus should be on ensuring consumers of performance are confident in the published results. As noted above, asset-class audits can be more effective and efficient than security-level audits, especially if security-level returns are not reported or required for attribution. If a firm is not confident in the integrity of its asset-class structures or data, drilling down to security-level returns is required to trace the source of data issues, but only when necessary.

For security-level validations, Contribution to Return can focus effort on the securities that have a material impact on the total portfolio. In addition, the Z-score (or standard score) measures how many standard deviations a return lies from the mean of the distribution, further isolating outliers in the data set.
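The two filters described above might be sketched as follows. The 10 bps materiality floor, the Z-score threshold, and the helper names are illustrative assumptions; appropriate thresholds depend on the size and dispersion of the data set.

```python
import statistics


def material_securities(contributions: dict, min_ctr: float = 0.0010) -> list:
    """Keep securities whose Contribution to Return is material
    (here, at least 10 bps of total portfolio return, an assumed floor)."""
    return [sec for sec, ctr in contributions.items() if abs(ctr) >= min_ctr]


def zscore_outliers(returns: dict, threshold: float = 2.0) -> list:
    """Securities whose return lies more than `threshold` standard
    deviations from the cross-sectional mean of the distribution."""
    mean = statistics.fmean(returns.values())
    stdev = statistics.pstdev(returns.values())
    if stdev == 0:
        return []
    return [sec for sec, r in returns.items()
            if abs(r - mean) / stdev > threshold]
```

Running the Z-score check first on material securities keeps reviewers focused on outliers that actually move the total portfolio.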

The future of these checks and validations lies in Machine Learning, which continues to evolve and improve over time. To learn more about Machine Learning and its impact on data quality checks, please read Jose Michaelraj, CIPM, CAIA, “Applying Machine Learning Techniques in Investment Performance: Uncovering Heuristics to Decipher Data Quality Checks” on the Meradia website (www.Meradia.com).

3. Implement a Transparent Account Re-Opening Policy

Changes to previously published results erode both efficiency and client trust. A robust re-opening policy provides clarity, consistency, and buy-in across accounting and performance teams.

Key elements include:

  • A well-defined tolerance framework to determine when re-opening is warranted
  • A transparent, methodical process for assessing materiality
  • Agreement across workstreams to ensure efficiency and accuracy

Although accounting can sometimes manage this process, from a performance viewpoint the operations team should monitor activity, isolate single or cumulative data changes, and assess materiality to determine whether a policy threshold has been breached. This ensures revisions are justified, well documented, and do not undermine the integrity of published results.
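A materiality assessment of this kind can be reduced to a simple test: geometrically link the original and revised return series and compare the cumulative impact against the policy threshold. The 5 bps threshold and function names below are assumptions for the sketch; actual thresholds come from the firm’s re-opening policy.

```python
def linked_return(periodic_returns: list) -> float:
    """Geometrically link a series of periodic returns into a
    cumulative return."""
    total = 1.0
    for r in periodic_returns:
        total *= 1.0 + r
    return total - 1.0


def requires_reopening(original: list, revised: list,
                       threshold_bp: float = 5.0) -> bool:
    """True if the cumulative impact of single or accumulated data
    revisions breaches the policy threshold (in basis points)."""
    impact = linked_return(revised) - linked_return(original)
    return abs(impact) * 10_000 > threshold_bp
```

Evaluating cumulative rather than single-period impact matters: several individually immaterial revisions can compound into a breach that warrants re-opening.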

Conclusion

Performance Operations often serves as the final line of defense before results are delivered to internal and external consumers. By framing audit controls around client pain points such as credibility, efficiency, and transparency, firms can preserve their reputation and strengthen downstream trust. Defining the right level of granularity, establishing relative thresholds for quality, and implementing a disciplined re-opening policy are proven practices that reduce risk and build confidence in performance reporting. As AI continues to evolve, its ability to enhance data validation and exception management will further strengthen these practices, empowering teams to focus on insight over investigation.

How Meradia Can Help

Investment management firms compete on both performance and reputation and cannot afford gaps in their data control framework. Re-evaluating audit controls and validation processes is an ongoing priority for future-focused firms. Meradia brings deep front-to-back expertise to help assess, design, and optimize these frameworks. With Meradia, firms gain a partner that understands the complexity of performance operations and how to turn robust controls into a competitive advantage. To learn more about how Meradia can support your firm’s performance operations and data integrity initiatives, please get in touch with us for a consultation.


Mark Pazdyk, CIPM

Mark Pazdyk utilizes his 19 years of experience in performance measurement technology to enhance clients’ data management operations. He is a technical business analyst with a deep understanding of data analysis, data quality, and data scrubbing. Mark has worked on complex and challenging project implementations defining requirements, leading initiatives, and working with teams to communicate optimized process flows. Mark is an effective leader and communicator.