As with the other three principles, the PRA gives sound, common-sense guidance overall but leaves many practical matters untouched and questions unanswered. In relation to model validation, it highlights four overlapping issues:
- scope of validation and review;
- competence and influence;
- treatment of model issues/deficiencies; and
- frequency of validation.
Reading the guidance, there are a number of themes we could pull out; here we focus on one: context.
The PRA highlights that the scope of validation work, and the parties carrying it out, should take into account the use, complexity and materiality of the models. However, it gives no definition of what validation means, no specific ideas on scope, and no thoughts on how to tailor it to context.
Perhaps it is right that organisations work this out for themselves, but some will not know where to start; we therefore offer a simple framework. The table below provides a straightforward way of thinking about model risk management along two dimensions:
- Modelling risk
Modelling risk is the risk intrinsic to a given model, arising from its structure and coding. Indicators of high modelling risk would be:
- Model implementation
- high formula complexity;
- high volume of formulae;
- presence of certain complex functions;
- a high number of interlinking sheets; and/or
- a high number of links to, and interdependencies with, external documents.
- Modelling environment
- a lack of robust model documentation;
- weak access and version controls;
- high frequency of update;
- only one model user / owner; and/or
- no previous reviews (internal, peer or external) have been performed on the model.
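The "model implementation" indicators above lend themselves to automated screening with simple diagnostic tooling. As a minimal sketch in Python, assuming formulae have already been extracted from a workbook as plain strings (e.g. via openpyxl), and with our own illustrative choice of "complex" functions and a parenthesis nesting-depth proxy for formula complexity:

```python
import re

# Illustrative assumptions: the set of "complex" functions, and the use
# of nesting depth as a complexity proxy, are our own choices - they are
# not definitions from the PRA guidance.
COMPLEX_FUNCTIONS = {"INDIRECT", "OFFSET", "VLOOKUP", "SUMPRODUCT", "MMULT"}

def structural_indicators(formulas):
    """Summarise modelling-risk indicators for spreadsheet formulae
    supplied as plain strings."""
    report = {
        "formula_count": len(formulas),
        "max_nesting_depth": 0,      # proxy for formula complexity
        "complex_function_hits": 0,  # uses of INDIRECT, OFFSET, etc.
        "cross_sheet_links": 0,      # references like Inputs!B2
        "external_links": 0,         # references like [Other.xlsx]Sheet1!A1
    }
    for f in formulas:
        depth = current = 0
        for ch in f:
            if ch == "(":
                current += 1
                depth = max(depth, current)
            elif ch == ")":
                current -= 1
        report["max_nesting_depth"] = max(report["max_nesting_depth"], depth)
        # Function names are tokens immediately followed by "("
        names = set(re.findall(r"([A-Z][A-Z0-9.]*)\(", f.upper()))
        report["complex_function_hits"] += len(names & COMPLEX_FUNCTIONS)
        if "!" in f:
            report["cross_sheet_links"] += 1
        if "[" in f:
            report["external_links"] += 1
    return report
```

A high formula count, deep nesting, or many external links flags a workbook for closer attention; the thresholds at which each indicator becomes a concern are a matter of judgement for each organisation.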
- Business risk
Business risk is an inherent feature of the decisions and applications for which financial models are used. It is most prominent in relation to models used in business-critical contexts, e.g.:
- decision making;
- to inform external stakeholders / third parties;
- underpinning a material transaction;
- market guidance / price sensitive information; or
- contributing to financial reporting / forecast results or KPIs.
The table indicates which review approaches we consider appropriate given the business and modelling risk associated with a model. It is not intended to be prescriptive. These approaches include:
- Structural risk assessment: the use of diagnostic tools and procedures to assess whether a model has been built according to generally recognised good practice;
- Analytical review: the review of model inputs and outputs, including the use of sensitivity analysis, KPIs, ratio analysis and charts, to assess whether the results are as expected for a given set of input data;
- Model review: a detailed inspection of the model's logic and formulae, i.e. code inspection; and
- Model audit: a full (and frequently independent) review of the model, typically including all the foregoing review procedures and often validation of tax and accounting bases.
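To illustrate how the two dimensions might drive the choice of approach, here is a minimal Python sketch. The specific mapping below is our own hypothetical encoding (the table itself is deliberately non-prescriptive); the point is the pattern of escalating review effort as risk increases:

```python
# Review approaches in order of increasing depth; each later approach
# typically subsumes the lighter procedures before it.
APPROACHES = [
    "structural risk assessment",
    "analytical review",
    "model review",
    "model audit",
]

def recommended_review(modelling_risk, business_risk):
    """Suggest review approaches for a model given 'low'/'high'
    ratings on each dimension. The mapping is illustrative only."""
    score = (modelling_risk == "high") + (business_risk == "high")
    if score == 0:
        return APPROACHES[:1]          # low/low: lightest touch
    if score == 1:
        # High business risk warrants inspecting the model's logic even
        # when the model itself is structurally simple.
        return APPROACHES[:3] if business_risk == "high" else APPROACHES[:2]
    return APPROACHES                  # high/high: full model audit
```

In practice an organisation would substitute its own ratings and cut-offs, but encoding the matrix, even this crudely, forces the prioritisation logic to be made explicit.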
This framework does two things: it gives a tool for prioritising and escalating models for review, and it indicates the types of review that might be most appropriate and proportionate.
Once you have determined the approach for a given model, the related points we would highlight from the guidance are:
- Engaging the right skills in the review process: critical to this is an understanding that model assurance is a specific skillset in its own right and is not the same as being 'good at Excel' (or similar). Just as importantly, as the PRA highlights, engaging commercial business knowledge is key. In our experience, an independent analytical review of a model's inputs and outputs by someone with an intuitive understanding of the underlying product will typically flush out material and conceptual issues much more quickly than a technical code review.
- Ensuring the review process has teeth: there needs to be business clout behind the review process so that findings and issues are escalated and dealt with on a timely basis.
- Making sure that the review is truly independent: for more complex models it becomes hard for the developer to see the wood for the trees, and in all fields of modelling 'marking your own homework' is a recipe for, at best, cognitive bias and, at worst, fraudulent manipulation. Independent does not have to mean external (although there are plenty of firms such as ours who can provide such a service); structured internal peer review can work well and should be a minimum requirement.