The principles, which cover both qualitative and quantitative measures, address four key areas:
1) The importance of senior management and boards exercising strong governance over a bank’s risk-data aggregation capabilities, risk reporting practices and information-technology (IT) capabilities. This includes:
• The documentation, validation and robustness (in the event of new products and activities, or changes in group structure) of these capabilities and processes; and
• The design, building and maintenance of data architecture and IT infrastructure which fully supports a bank’s risk data aggregation capabilities and risk reporting practices not only in normal times but also during times of stress or crisis.
2) The accuracy, integrity, completeness, timeliness and adaptability of aggregated risk data. This includes:
• The adequacy of the systems and controls that generate the risk data and its aggregation; and
• The capability to adapt rapidly to changes in key risks, decision-making arrangements and regulatory requirements.
3) The accuracy, comprehensiveness, clarity, usefulness, frequency and distribution of risk-management reports, including to senior management and the board. This includes:
• Procedures for monitoring the accuracy of data and the reliability of models;
• Making good use of forward-looking assessments of risk; and
• Reviewing the usefulness of risk management reports to senior management and the board.
4) The need for supervisors to review and evaluate a bank’s compliance with the first three sets of principles listed above, to take remedial action as necessary, and to cooperate across home and host supervisors.
BCBS 239: The bigger picture
BCBS 239 accepts that "expert judgment" may be used where data is not available. If this exemption becomes prevalent, however, there is little incentive to build the processes and systems needed to capture quantitative risk data and produce objective risk information. The data landscape can legitimately contain both actual and "expert" data items, but if the data road map is not based on complete and systematic coverage, the industry is likely to end up with suboptimal solutions at best.
Over the years, management systems in banks and other financial services companies have had to cope with increasing regulatory requirements, new corporate structures, new products and operating models and, not least, the financial crisis. As with other infrastructure, systems for the collection, aggregation and analysis of risk data have typically developed incrementally, with disparate modules, incompatible data and a range of ad-hoc processes. Relevant data is often missing or inadequately analyzed, giving rise to "reconciliation industries" within the organization as data is passed between a multitude of systems across inconsistent integration mechanisms. In many organizations the reporting architecture is a patchwork of data extraction, manual calculation and reporting components focused on individual reports by business area. This rarely allows risks to be calculated or reported across lines of business, for instance by country or by product, and does not easily support drill-down or ad-hoc analysis to understand the underlying trends or issues.
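The contrast between a per-report patchwork and a genuine aggregation capability can be illustrated with a minimal sketch. The record layout, field names and figures below are assumptions made for illustration, not drawn from any particular bank's systems; the point is that once exposures sit in one consistent data model, any cross-cutting view (by country, by product, or both) falls out of the same function.

```python
from collections import defaultdict

# Hypothetical consolidated exposure records; field names and values
# are illustrative assumptions, not from any specific system.
exposures = [
    {"business_line": "rates",  "country": "US", "product": "swap", "exposure": 120.0},
    {"business_line": "rates",  "country": "UK", "product": "swap", "exposure": 80.0},
    {"business_line": "credit", "country": "US", "product": "bond", "exposure": 150.0},
    {"business_line": "credit", "country": "UK", "product": "cds",  "exposure": 40.0},
]

def aggregate(records, *dimensions):
    """Sum exposure over any combination of dimensions, supporting the
    cross-business views and drill-down that report-by-report
    architectures struggle to produce."""
    totals = defaultdict(float)
    for rec in records:
        key = tuple(rec[d] for d in dimensions)
        totals[key] += rec["exposure"]
    return dict(totals)

by_country = aggregate(exposures, "country")
by_country_product = aggregate(exposures, "country", "product")  # drill-down
```

The same records answer the country-level question (`by_country`) and the drill-down question (`by_country_product`) without any re-extraction or manual reconciliation, which is precisely what the patchwork architectures described above cannot do.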
Risk data is frequently provided too late to influence the business, trading and operational decisions that depend on it, even though the operating costs of producing it are still incurred. Beyond "business as usual" requests, the recovery and resolution plan requirements mean that quick and accurate data is critical in a stress situation as well as in normal conditions.
Regulators have become increasingly concerned about how weaknesses in risk-data aggregation systems may compromise financial reporting. Although these shortcomings were exposed at the height of the financial crisis, little progress has been made since. Many institutions are still unable to provide the required data, or find themselves mounting massive manual and ad-hoc interventions to assemble the data demanded by their management teams or by regulators. Major market participants still question whether firms can extract the necessary information quickly enough to understand the location and extent of the risks and exposures contributing to whatever future crisis of confidence the global financial system may face.
The majority of firms have yet to squarely address the underlying question: can we handle another financial crisis?
Firms need to ask themselves whether they have a clear data architecture to support the principles of risk-data aggregation, and whether they can build the data capabilities that will enable them to comply with the BCBS principles by the deadline of January 1, 2016. Implementing an additional reporting capability is straightforward compared with the fundamental changes to the quality and completeness of data across the enterprise that effective compliance with BCBS 239 requires. The biggest challenge is likely to be applying data quality, data governance and data management techniques pragmatically and effectively, so that the quality of the data used for risk reporting can be demonstrated and measured automatically. Compliance with the principles will be compulsory for global systemically important banks (G-SIBs) from January 1, 2016, and national supervisors will need to have translated the principles into detailed regulation by then. The Basel Committee also recommends that banks classified as domestic systemically important banks (D-SIBs) by national regulators be required to comply within three years of such designation.
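What "demonstrating data quality automatically" might look like in practice can be sketched as a set of scripted checks scored against the BCBS 239 dimensions. The metric names follow the principles (completeness, timeliness); the required fields, record layout and one-day staleness threshold are assumptions chosen for the sketch, not prescriptions from the standard.

```python
from datetime import date

# Illustrative required fields for a risk record; real field lists
# would come from a bank's own data dictionary.
REQUIRED_FIELDS = ("counterparty", "exposure", "as_of")

def quality_metrics(records, reporting_date, max_age_days=1):
    """Score a batch of risk records on two BCBS 239 dimensions:
    completeness (all required fields populated) and timeliness
    (data no older than max_age_days at the reporting date)."""
    total = len(records)
    complete = sum(
        1 for r in records
        if all(r.get(f) is not None for f in REQUIRED_FIELDS)
    )
    timely = sum(
        1 for r in records
        if r.get("as_of") is not None
        and (reporting_date - r["as_of"]).days <= max_age_days
    )
    return {
        "completeness": complete / total,
        "timeliness": timely / total,
    }

# Hypothetical batch: one record missing its exposure, one stale.
records = [
    {"counterparty": "A", "exposure": 10.0, "as_of": date(2016, 1, 1)},
    {"counterparty": "B", "exposure": None, "as_of": date(2016, 1, 1)},
    {"counterparty": "C", "exposure": 5.0,  "as_of": date(2015, 12, 20)},
]
metrics = quality_metrics(records, reporting_date=date(2016, 1, 1))
```

Run on every reporting cycle, such scores give the measurable, repeatable evidence of data quality that the principles demand, in place of ad-hoc manual attestation.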
To be continued