Learn how Financial Risk organizations can benefit from both BCBS 239 practices and Data Mesh pillars to create consumable and explainable datasets.
For investment organizations, where high-stakes decisions on risk hedging, capital exposure, or trade execution quality are made daily, the data consumption challenge is rooted in data quality, storage, aggregation, and the execution of coded calculations.
This complication arises primarily because the vast data repositories, often in the form of data lakes, lack practical business usability. The situation is further exacerbated by the prohibitive and unpredictable costs associated with cloud computing and storage, necessary for performing complex, real-time calculations on massive datasets.
Moreover, the critical processes of backtesting, executing 'what-if' scenarios, and providing intraday refreshes are hampered by these limitations.
The crux of the issue lies in the growing need for data consumption against the backdrop of technical debt in data chains, widespread data silos, and the intricate challenge of applying business logic effectively to massive and often unstructured data repositories. This has led to a scenario where accessing, aggregating, and analyzing data in a meaningful and timely manner has become an increasingly burdensome task, both in terms of cost and complexity.
BCBS 239, developed in the wake of the financial crisis, aimed to bolster the banking sector's risk management by improving data practices. It underscored the importance of strong data governance, effective data aggregation, and reliable reporting. However, it emerged at a time when cloud computing and big data were in their infancy.
Data Mesh emerged once big data technologies had matured. It was conceived to address the major bottlenecks created by the gravity of centralized data lakes and of IT organizations structured, more than ever, as a shared service across all business lines.
While BCBS 239 focuses on risk analytics practices in the regulated banking sector, Data Mesh was envisioned with wider application: a shift from an organizational model that layers responsibilities and architectural components in a project-by-project approach, to a collaborative model built on data as a product, legitimate data ownership, stronger systemic governance, and the removal of technological and structural bottlenecks.
This article dives deep into how Financial Risk organizations can benefit from both BCBS 239 practices and Data Mesh pillars, exploring their compatibility and how their influences can inspire a new model of implementation for creating consumable and explainable risk datasets.
BCBS 239, also known as the Basel Committee on Banking Supervision's standard number 239, was established in response to the 2007-2008 global financial crisis. The crisis highlighted significant shortcomings in banks' information technology and data architecture systems, particularly in risk data aggregation and risk reporting practices. These deficiencies impeded the ability of banks to identify and manage risks effectively.
The primary aim of BCBS 239 is to strengthen banks' risk data aggregation capabilities and internal risk reporting practices. This standard is crucial for enhancing the resilience and risk management practices of banks, particularly those classified as globally systemically important banks (G-SIBs). It sets out principles that these banks must adhere to in order to improve their ability to identify, measure, and manage risk comprehensively and accurately.
The principles of BCBS 239 cover four key areas: overarching governance and infrastructure, risk data aggregation capabilities, risk reporting practices, and supervisory review. Together, these principles are geared towards refining banks' capability to aggregate and report on risk data accurately and completely, particularly during periods of financial distress.
Given the intricacies of quantitative risk dimensions like market, credit, counterparty, and liquidity risk, there is a clear need for real-time, scalable, and comprehensive data management solutions. Requirements such as FRTB's Standardized Approach (SA) and Internal Models Approach (IMA), RWA calculations, and liquidity ratios (including the NSFR and LCR) further underscore the necessity for granular data aggregation, simulations, and 'what-if' scenarios.
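To make the liquidity-ratio requirement concrete: the Liquidity Coverage Ratio is defined as the stock of high-quality liquid assets (HQLA) divided by total net cash outflows over a 30-day stress horizon, with a Basel III minimum of 100%. The sketch below illustrates the calculation; the balance-sheet figures are invented for the example, not regulatory values.

```python
# Minimal sketch of a Liquidity Coverage Ratio (LCR) check.
# LCR = HQLA / total net cash outflows over a 30-day stress horizon,
# and Basel III requires the ratio to be at least 100%.
# The figures below are illustrative, not regulatory values.

def lcr(hqla: float, net_outflows_30d: float) -> float:
    """Return the Liquidity Coverage Ratio as a fraction (1.0 == 100%)."""
    if net_outflows_30d <= 0:
        raise ValueError("Net cash outflows must be positive")
    return hqla / net_outflows_30d

# Illustrative balance-sheet figures (in millions)
hqla = 1_250.0          # stock of high-quality liquid assets after haircuts
net_outflows = 1_000.0  # expected outflows minus capped inflows, 30-day stress

ratio = lcr(hqla, net_outflows)
compliant = ratio >= 1.0  # Basel III minimum of 100%
print(f"LCR = {ratio:.0%}, compliant: {compliant}")
```

In practice the inputs themselves are aggregations over granular positions, which is exactly where the data-management demands described above arise.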
Data Mesh shifts the focus from centralized data lakes or warehouses to a more democratized, product-centric approach. It rests on four foundational pillars: domain-oriented data ownership, data as a product, a self-serve data platform, and federated computational governance.
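The "data as a product" pillar can be made tangible with an explicit product contract: the owning domain, an accountable owner, the published schema, and service-level objectives travel with the dataset. The sketch below is illustrative only; the product, fields, and team names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataProductContract:
    """Illustrative contract a domain team publishes with its dataset."""
    name: str
    domain: str             # owning business domain (domain-oriented ownership)
    owner: str              # accountable product owner, not a central IT team
    schema: dict            # column name -> type: the published interface
    freshness_slo: str      # service-level objective for data freshness
    quality_checks: list = field(default_factory=list)

# Hypothetical counterparty-exposure product owned by the credit-risk domain
contract = DataProductContract(
    name="counterparty_exposure",
    domain="credit_risk",
    owner="credit-risk-data-team",
    schema={"counterparty_id": "str", "netting_set": "str", "epe": "float"},
    freshness_slo="intraday, refreshed every 15 minutes",
    quality_checks=["counterparty_id is unique", "epe >= 0"],
)
print(contract.domain, contract.owner)
```

Publishing such a contract alongside the data is one way the federated-governance pillar becomes enforceable rather than aspirational.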
This approach is especially pertinent for entities engaged in trade management and execution. For buy-side institutions, which find themselves under an increasing regulatory spotlight, the ability to quickly pull, process, and analyze data within their domain is invaluable.
In the evolving landscape of risk, treasury, and trade execution analytics, there's a growing need to integrate the rigor of BCBS 239 with the flexibility of Data Mesh principles. This approach involves a sequence of steps, starting from initial workshops to the final self-service implementation for end-users.
Step 1: Workshops for Business Requirements and Technical Feasibility
The first step involves conducting comprehensive workshops focused on understanding business requirements and assessing technical feasibility. These workshops serve as a platform for stakeholders to articulate their needs and for technical teams to identify potential challenges and solutions. The goal is to ensure alignment between business objectives and technical capabilities.
Step 2: Building a Comprehensive Data Model
Next, a comprehensive data model is developed. This model should encompass a wide array of data types and sources, reflecting the multifaceted nature of risk, treasury, and trade execution domains. The data model must be robust, scalable, and flexible, capable of adapting to evolving business needs.
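A small fragment of such a model might represent trades and the risk measures computed on them as related entities, with the aggregation dimensions (desk, currency, scenario) modeled explicitly. The entities and field names below are hypothetical, sketched only to show the shape of the model.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical fragment of a risk data model: trades and the risk
# measures computed on them. Field names are illustrative.

@dataclass
class Trade:
    trade_id: str
    instrument: str      # e.g. "IRS", "FX_FWD", "EQ_OPT"
    notional: float
    currency: str
    trade_date: date
    desk: str            # owning trading desk, a key aggregation dimension

@dataclass
class RiskMeasure:
    trade_id: str
    measure: str         # e.g. "VaR", "DV01", "CS01"
    value: float
    as_of: date
    scenario: str = "base"   # supports 'what-if' scenario variants

trades = [
    Trade("T1", "IRS", 10_000_000, "USD", date(2024, 1, 2), "rates"),
    Trade("T2", "FX_FWD", 5_000_000, "EUR", date(2024, 1, 3), "fx"),
]
measures = [
    RiskMeasure("T1", "DV01", 8_200.0, date(2024, 1, 3)),
    RiskMeasure("T2", "VaR", 120_000.0, date(2024, 1, 3)),
]

# Aggregate notional per desk -- the kind of roll-up BCBS 239 expects
by_desk: dict[str, float] = {}
for t in trades:
    by_desk[t.desk] = by_desk.get(t.desk, 0.0) + t.notional
print(by_desk)
```

Keeping the aggregation dimensions as first-class fields, rather than burying them in report logic, is what lets the same model serve regulatory reporting and ad-hoc analysis alike.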
Step 3: Creating a Semantic Layer for Enhanced Navigability
To facilitate easy access and understanding of the data, a semantic layer is added. This layer acts as a bridge between the complex data model and the end-users, enabling them to navigate through large groups of dimensions effortlessly. It simplifies the user experience by abstracting the underlying data complexity.
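One common way to build such a layer is a mapping from business-friendly dimension and measure names to physical columns and expressions, so that consumers query in business vocabulary. The sketch below assumes hypothetical table and column names; it is one possible shape, not a prescribed design.

```python
# Minimal sketch of a semantic layer: business terms are mapped onto
# physical columns and aggregate expressions. Table and column names
# are hypothetical.

SEMANTIC_MODEL = {
    "dimensions": {
        "Desk": "positions.desk_code",
        "Counterparty": "positions.cpty_id",
        "Currency": "positions.ccy",
    },
    "measures": {
        "Total Notional": "SUM(positions.notional_usd)",
        "Value at Risk": "SUM(risk.var_1d_99)",
    },
}

def build_query(dimensions: list, measure: str) -> str:
    """Translate business terms into a SQL string via the semantic model."""
    cols = [SEMANTIC_MODEL["dimensions"][d] for d in dimensions]
    expr = SEMANTIC_MODEL["measures"][measure]
    select = ", ".join(cols + [expr])
    group_by = ", ".join(cols)
    return (f"SELECT {select} FROM positions "
            f"JOIN risk USING (position_id) GROUP BY {group_by}")

sql = build_query(["Desk", "Currency"], "Value at Risk")
print(sql)
```

The end-user asks for "Value at Risk by Desk and Currency"; the physical joins, column names, and aggregate functions stay hidden behind the model.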
Step 4: Building a Physical Data Store with Embedded Business Logic and Data Quality
A physical data store is then constructed, integrating business logic and data quality controls directly at the data level. This step ensures that the data is not only stored efficiently but is also processed and validated, enhancing reliability and accuracy. The integration of business logic at this stage aligns with the principles of Data Mesh, bringing intelligence closer to the data.
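Embedding quality controls at the data level typically means validating records on ingestion and quarantining failures rather than silently loading them, so downstream consumers can trust what reaches the store. A minimal sketch, with illustrative rules and field names:

```python
# Sketch of data-quality rules enforced at the data level on ingestion:
# records failing any rule are quarantined with the reasons attached.
# Rules and field names are illustrative.

RULES = [
    ("notional must be positive", lambda r: r["notional"] > 0),
    ("currency must be 3 letters", lambda r: len(r["currency"]) == 3),
    ("trade_id must be present",   lambda r: bool(r.get("trade_id"))),
]

def ingest(records):
    """Split incoming records into accepted rows and quarantined failures."""
    accepted, quarantined = [], []
    for rec in records:
        failures = [name for name, check in RULES if not check(rec)]
        if failures:
            quarantined.append((rec, failures))
        else:
            accepted.append(rec)
    return accepted, quarantined

good, bad = ingest([
    {"trade_id": "T1", "notional": 1_000_000, "currency": "USD"},
    {"trade_id": "",   "notional": -5.0,      "currency": "EURO"},
])
print(len(good), "accepted;", len(bad), "quarantined")
```

Because each quarantined record carries the list of rules it violated, data stewards get an audit trail for free, in the spirit of BCBS 239's accuracy and completeness principles.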
Step 5: Enabling True Self-Service for Data Consumers
The final step is to offer true self-service capabilities to data consumers. This can be achieved in two ways.
By following these steps, organizations can implement a robust analytics framework that leverages the strengths of both BCBS 239 and Data Mesh principles. This hybrid model ensures compliance, scalability, and flexibility, catering to the dynamic requirements of risk, treasury, and trade execution analytics. It represents a forward-thinking approach to data management, marrying regulatory rigor with the agility of modern data architectures.
About the author: Emmanuel Richard is a Data and Analytics expert with over 25 years of experience in the technology industry. His extensive background includes leadership roles at industry giants and startups across the US and Europe.