The starting point for SS3/18 is that banks must establish a definition of a model and maintain a model inventory. The inventory is clearly heavily dependent on the definition adopted by an individual institution.
Too loose a definition and the inventory becomes impenetrable and potentially hard to maintain (although automated software solutions are available). Too narrow, and an institution risks missing key models and leaving the risks they carry ungoverned.
The PRA requires banks to establish their own definitions. This is not as easy as it sounds. The PRA provides some guidance on the key factors to consider at P1.1. At face value, this guidance appears to use the purpose or functional output of calculation mechanisms to drive the definition. Looking more closely, however, the third and fourth points hint at the practical complexities and problems involved.
Where qualitative judgement is applied
Qualitative judgement is applied wherever manual intervention or input is needed to adjust results, where matters of configuration or set-up require external decisions, or where judgement is exercised for edge cases. Some of these judgements are made outside the models and can be poorly documented and evidenced, some may not be immediately apparent without a deeper understanding of individual models, and some may rest in secondary systems or models.
We acted as expert witness on a major financial transaction that had failed. The source of the problems was an ill-defined and poorly documented implicit assumption in the transaction model that went straight to the question of value. By implicit, we mean baked into the model’s logic and calculations. The impact was a material overstatement of value. Due to the lack of clear explanation or narrative, it would have been very difficult for any user to identify the issue.
Organisations will need to capture good quality, comprehensive documentation of their calculation mechanisms to inform their assessment against their model definition. This requires robust corporate policies on end user computing, clearly communicated, and backed up by individual discipline in practice. Critically, inputs, assumptions, data and judgements should have nominated owners, a clear audit trail and understandable explanations.
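As a sketch of what such ownership and audit-trail capture could look like in practice, the illustrative Python below models an inventory entry whose assumptions each carry a nominated owner, a plain-language rationale and a review date. All class and field names here are assumptions of this sketch, not structures prescribed by SS3/18:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Assumption:
    """A single judgement or input feeding a model, with owner and rationale."""
    name: str
    owner: str            # nominated individual accountable for the assumption
    rationale: str        # explanation a reviewer can follow without the author
    last_reviewed: date   # supports the audit trail

@dataclass
class InventoryEntry:
    """One record in a model inventory (illustrative fields only)."""
    model_name: str
    model_owner: str
    purpose: str
    assumptions: list[Assumption] = field(default_factory=list)

    def unreviewed_since(self, cutoff: date) -> list[str]:
        """Names of assumptions whose last review predates the cutoff."""
        return [a.name for a in self.assumptions if a.last_reviewed < cutoff]
```

A periodic sweep of `unreviewed_since` across the inventory is one simple way an audit trail of this shape could be kept live rather than left to decay.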
Where outputs of other models are used
This phrase of the PRA (P1.1(d)) hints at the potential scale of the challenge of defining models for these purposes. Many organisations caught by the PRA guidance will have a web of interconnected files, systems and calculation mechanisms: a model ecosystem, if you will.
Identifying which of these represent models within the scope of SS3/18 will require care. Files used simply to transmit data along the reporting chain may not immediately appear to fit the PRA guidelines, but if critical KPI data passes through them they present a single-point-of-failure risk. On the other hand, standalone analytical models may not always be relevant in this context.
Mapping and maintaining an up-to-date picture of the model ecosystem will be an ongoing task. Linked spreadsheets in particular tend to proliferate, and while it is easy to see which files feed a spreadsheet, it is much harder, without tooling or process documentation, to see which files a spreadsheet itself feeds.
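One way to surface that harder "fed-from" direction is to invert a map of known spreadsheet links. The Python sketch below (the file names and link map are invented for illustration) builds a reverse index showing, for each file, which spreadsheets consume it:

```python
from collections import defaultdict

# Illustrative link map: links[consumer] = files that spreadsheet pulls FROM.
# In practice this map would be extracted by a scanning tool.
links = {
    "group_report.xlsx": {"division_a.xlsx", "division_b.xlsx"},
    "division_a.xlsx": {"raw_trades.csv"},
    "stress_summary.xlsx": {"group_report.xlsx"},
}

def downstream_index(links: dict) -> dict:
    """Invert the link map: for each file, which spreadsheets feed from it?"""
    fed_into = defaultdict(set)
    for consumer, sources in links.items():
        for src in sources:
            fed_into[src].add(consumer)
    return fed_into

fed = downstream_index(links)
# fed["group_report.xlsx"] contains "stress_summary.xlsx": a change to the
# group report flows onward, which is exactly the dependency that is easy
# to miss when reading the group report file on its own.
```

A reverse index of this kind also makes single-point-of-failure files visible: any file with many downstream consumers deserves a closer look.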
Understanding and defining your model ecosystem
The following are issues to think about when shaping the definition and assessing potential models for inclusion on the inventory (and for risk generally):
- Subject matter – does the model perform or support stress testing analysis?
- Technical risk/complexity – how complex are the calculations that are being performed?
- Dependency – does the model contain links to or from other files or systems? (remembering that these may not always be obvious)
- Materiality – does (or could) the model have a material impact on stress testing results, either individually or through connected files?
- Reliability – how reliable are the calculations and underlying data? Has the model been independently checked? Is the data statistically robust? Is data provenance understood?
- Subjectivity – to what extent are the results from a model impacted by subjective judgements and how sensitive are they to these?
- Business risk – is the model relied upon for business-critical analysis or decisions?
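The criteria above can be turned into a simple screening score for deciding which candidates warrant inclusion on the inventory. The sketch below is illustrative only: the 0–3 rating scale, equal weighting and threshold are assumptions of this example, not PRA guidance:

```python
# Dimension names mirror the criteria listed above.
CRITERIA = [
    "subject_matter", "complexity", "dependency",
    "materiality", "reliability", "subjectivity", "business_risk",
]

def inventory_score(ratings: dict) -> int:
    """Sum 0-3 ratings per criterion; an unrated criterion counts as 0."""
    return sum(ratings.get(c, 0) for c in CRITERIA)

# A hypothetical candidate: highly material, with some dependency and
# subjectivity concerns, rated 0 on the remaining dimensions.
candidate = {"materiality": 3, "dependency": 2, "subjectivity": 2}
score = inventory_score(candidate)  # 7
# An institution might set a threshold (say, 6 or more) above which a
# candidate is added to the inventory pending fuller assessment.
```

In practice the weights and thresholds would themselves be governed decisions, with materiality and business risk typically dominating.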
The good news is that a range of software tools and techniques is available to help assess model risk and create model inventories, at both a tactical and an enterprise level. But getting the definition right remains the major challenge. Please contact us if we can help with any of the foregoing.