The importance of keeping track of your data
As financial services businesses accrue more and more data, they need to scrutinise their data supply chain, says Martijn Groot, VP of Product Management, Asset Control, not least because good data management leads to better business decisions.
Mention the phrase ‘supply chain’ and most people instinctively think of a manufacturing process involving a sequence of steps in which raw materials are gathered and ultimately turned into a finished product that is then delivered to the end customer. It is a concept that we are increasingly seeing applied to the way that financial services firms manage data. The parallels are striking.
Instead of raw materials, businesses across the financial services space are having to deal with ever-escalating volumes of raw data, much of it generated and captured from non-traditional sources such as web crawling or satellite data. In the subsequent data management phase, that raw data is sourced quickly and cost-efficiently. Next, to extend the manufacturing analogy further, we reach the assembly line stage. In the data world, this entails cross-referencing: integrating and cross-verifying these data sources to create a whole that is greater than the sum of its parts.
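To make that assembly line stage concrete, here is a minimal Python sketch that cross-references a single instrument across two hypothetical vendor feeds keyed by ISIN and cross-verifies their prices. The feed names, field names and the 0.5% tolerance are illustrative assumptions, not a prescribed rule.

```python
# Cross-referencing two hypothetical vendor feeds keyed by ISIN and
# cross-verifying their prices before producing a consolidated record.

TOLERANCE = 0.005  # maximum relative price difference we accept (assumption)

feed_a = {"US0378331005": {"price": 189.95, "currency": "USD"}}
feed_b = {"US0378331005": {"price": 190.10, "currency": "USD"}}

def consolidate(isin: str) -> dict:
    """Build a single consolidated record from both feeds, flagging any
    discrepancy beyond the tolerance for manual review."""
    a, b = feed_a[isin], feed_b[isin]
    diff = abs(a["price"] - b["price"]) / a["price"]
    return {
        "isin": isin,
        "price": (a["price"] + b["price"]) / 2,  # naive consolidation rule
        "currency": a["currency"],
        "needs_review": diff > TOLERANCE or a["currency"] != b["currency"],
        "sources": ["feed_a", "feed_b"],
    }

print(consolidate("US0378331005"))
```

A real integration layer would of course handle many instruments, feeds and field types, but the principle of comparing sources and flagging outliers is the same.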
Of course, whether we are thinking about a manufacturing supply chain or a data supply chain, being able to trace materials or data across the whole process is very important. In the case of the latter, financial services companies need to understand and audit what happens to the data across the process, who has looked at it and how it has been verified, and they also need to keep a full record of any decisions made along the way.
Ultimately, they need to ensure traceability: the ability to track the journey of any piece of data across the supply chain and see both where it has been and where it finally ends up.
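As an illustration of what such traceability can look like in practice, the sketch below keeps an append-only audit trail per data item. The event fields and step names are assumptions made for the example.

```python
# An append-only audit trail: every step a data item passes through is
# recorded with who performed it and what was decided; entries are never
# edited or removed.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    step: str        # e.g. 'sourced', 'verified', 'overridden', 'published'
    actor: str       # the user or system that performed the step
    detail: str      # what happened, or the decision taken
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# One trail per data item, keyed here by ISIN for illustration.
trail: dict[str, list[AuditEvent]] = {}

def record(isin: str, step: str, actor: str, detail: str) -> None:
    """Append an event to the item's trail."""
    trail.setdefault(isin, []).append(AuditEvent(step, actor, detail))

record("US0378331005", "sourced", "feed_a", "received in morning batch")
record("US0378331005", "verified", "jdoe", "price within tolerance of second source")

for event in trail["US0378331005"]:
    print(event.at.isoformat(), event.step, event.actor, event.detail)
```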
For financial services firms, the benefit of reaching the end of this data supply chain is an end product that supports informed opinion, which in turn drives risk, trading and business decisions.
Bringing the data together in this way is important for many financial services firms. After all, the reality is that these businesses, today even more than pre-crisis, typically have many functional silos of data in place, a problem made still worse by the prevalence of mergers and acquisitions across the sector in recent times. Typically today, market risk may have its own database, as may credit risk, finance stress testing and product control. In fact, every business line may have its own data set. Moreover, each of these groups will also have its own take on data quality.
Many financial services firms increasingly appreciate that this situation is no longer sustainable. The end-to-end process outlined above should help to counteract it, but why is the change happening now?
Regulation is certainly a key driver. In recent years, we have seen the advent of the Targeted Review of Internal Models (TRIM) and the Fundamental Review of the Trading Book (FRTB), both of which demand that a consistent data set be in place. It seems likely that the costs and the regulatory repercussions of failing to comply will only go up over time.
Second, it is simply becoming increasingly costly to keep all these different silos alive. Many of them are internally developed systems whose original developers are often no longer with the business or now have a completely different set of priorities, which makes for a very costly infrastructure to maintain.
Finally, there is a growing consensus that if a standard data dictionary and vocabulary of terms and conditions are used within the business, and there is common access to the same data set, this will inevitably help to drive better and more informed decision-making across the business.
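A minimal sketch of what such a shared dictionary might look like follows; the terms, definitions and fields are illustrative assumptions.

```python
# A standard data dictionary: every business line resolves terms against the
# same agreed definitions instead of its own local variant.

DATA_DICTIONARY = {
    "clean_price": {
        "definition": "Bond price excluding accrued interest",
        "type": "decimal",
        "unit": "percent of par",
    },
    "maturity_date": {
        "definition": "Date on which the principal is repaid",
        "type": "date",
        "unit": "ISO 8601",
    },
}

def describe(term: str) -> str:
    """Return the agreed definition, so market risk, credit risk and
    product control all mean the same thing by the same word."""
    entry = DATA_DICTIONARY[term]
    return f"{term}: {entry['definition']} ({entry['type']}, {entry['unit']})"

print(describe("clean_price"))
```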
Finding a way forward
So how can organisations start to address these issues and overcome the data challenges outlined above? They can begin by ensuring they have a 360° view of all the data coming into the organisation. They need to know exactly what data assets exist in the firm: what they already have on the shelf, what they are buying and what they are collecting or creating internally. In other words, they need a comprehensive view of exactly what data enters the organisation, how and when it does so, and in what shape and form.
This is less trivial than it might sound. In large firms in particular, organisational or budgetary fault lines mean the same data feed may have been sourced multiple times, or the same data product, or slight variations of it, may have been brought into the business on several occasions or via different channels.
Firms therefore need to be clearer not only about what data they are collecting internally but also about what they are buying. With a better understanding of this, they can make more conscious decisions about what they need and what is redundant, and cut out a lot of unnecessary noise when it comes to improving their data supply chain.
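One simple way to surface such redundancy is to inventory subscriptions by department and flag anything bought more than once across the firm, as in the sketch below. The department and product names are invented for illustration.

```python
# A data-asset inventory check: list what each department has sourced and
# flag any product purchased more than once across the firm.

from collections import defaultdict

subscriptions = [
    ("market_risk", "VendorX reference data"),
    ("credit_risk", "VendorX reference data"),  # duplicate purchase
    ("finance", "VendorY curves"),
]

buyers = defaultdict(list)
for department, product in subscriptions:
    buyers[product].append(department)

for product, departments in buyers.items():
    if len(departments) > 1:
        print(f"'{product}' sourced {len(departments)} times: {departments}")
```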
They also need to be able to verify the quality of the data, and that effectively means putting in place a data quality framework that encompasses a range of dimensions, from completeness and timeliness to accuracy, consistency and traceability.
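As a rough illustration, such a framework can be expressed as one check per quality dimension, run together over each record. The fields, rules and thresholds below are assumptions, and traceability would lean on an audit trail like the one sketched earlier.

```python
# One check per data quality dimension, run over a record and reported
# together. Rules and thresholds here are illustrative only.

from datetime import date

record = {"isin": "US0378331005", "price": 189.95, "as_of": date(2019, 6, 3)}

AS_OF_TODAY = date(2019, 6, 4)  # fixed reference date standing in for 'today'

checks = {
    "completeness": lambda r: all(r.get(f) is not None
                                  for f in ("isin", "price", "as_of")),
    "timeliness":   lambda r: (AS_OF_TODAY - r["as_of"]).days <= 1,
    "accuracy":     lambda r: 0 < r["price"] < 10_000,   # plausibility bound
    "consistency":  lambda r: len(r["isin"]) == 12,      # an ISIN is 12 chars
}

results = {dimension: check(record) for dimension, check in checks.items()}
print(results)  # e.g. {'completeness': True, 'timeliness': True, ...}
```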
To deal with all these data supply chain issues, businesses need the right governance structure and organisational model in place. Consultants can help here by advising on processes and procedures and ensuring, for example, that the number of individual departments independently sourcing data is reduced and that there is a clear view of what constitutes fit-for-purpose data.
Once the right processes and procedures are in place, alongside a good governance structure, the organisation can start to think about a technological solution.
The role of technology
Technology can play a key role, of course, in helping organisations get a better handle on their data supply chains. For most businesses, a primary requirement is good data sourcing and integration capability: systems that understand not only financial data products but also the different data models and schemas used to identify instruments and issuers, as well as taxonomies and financial product categorisations.
The chosen solutions should also be able to move quickly and easily from one set of identifiers and classification schemes to another. Organisations also need workflow and workflow integration capabilities, so that users can easily interact with the data, whether to include their own data in the integration or to check the results of the various screening rules that affect data quality.
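In practice, moving between identifier schemes comes down to a cross-reference table. The sketch below resolves the same instrument under ISIN, ticker or an internal ID; the identifiers and mappings shown are illustrative assumptions.

```python
# A cross-reference table that translates an instrument identifier from one
# scheme to another (ISIN, ticker or an internal ID).

XREF = [
    {"isin": "US0378331005", "ticker": "AAPL", "internal_id": "EQ-000123"},
    {"isin": "US5949181045", "ticker": "MSFT", "internal_id": "EQ-000456"},
]

def translate(value, from_scheme, to_scheme):
    """Look up an instrument under one scheme and return its identifier
    under another; None if the instrument is unknown."""
    for row in XREF:
        if row.get(from_scheme) == value:
            return row.get(to_scheme)
    return None

print(translate("AAPL", "ticker", "isin"))               # US0378331005
print(translate("US5949181045", "isin", "internal_id"))  # EQ-000456
```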
Businesses also need a data reporting capability. Technology chosen for this role must be able to provide metrics on all the different data sources the organisation has bought: what benefit it has derived from each, what quality they deliver, what gaps remain in the data, and how far along the organisation is in making this data available to business users for ad hoc use.
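By way of illustration, the sketch below computes per-source quality and coverage figures of the kind such reporting might draw on; the source names and underlying counts are invented.

```python
# Per-source supply chain metrics: quality (share of records passing checks),
# coverage (records delivered versus requested) and user demand.

sources = {
    "VendorX reference data": {"records": 120_000, "failed_checks": 340,
                               "requested_but_missing": 150, "queries": 9_800},
    "VendorY curves":         {"records": 4_200, "failed_checks": 12,
                               "requested_but_missing": 0, "queries": 75},
}

for name, s in sources.items():
    quality = 1 - s["failed_checks"] / s["records"]
    coverage = s["records"] / (s["records"] + s["requested_but_missing"])
    print(f"{name}: quality {quality:.1%}, coverage {coverage:.1%}, "
          f"{s['queries']} user queries")
```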
This last capability, making data readily available to business users, is critically important. It is one thing for financial services businesses to follow the lead of the manufacturing sector in understanding and monitoring their supply chains and ensuring that auditing and traceability are in place. But they also need to make certain that data governance and data quality checking are fully implemented. And, like a manufacturer, they need to ensure the end product is put to good use.
After all, financial services businesses will gain only limited value from their data supply chains, however efficient, if they do not make the data itself readily available for users to browse and analyse, and to support the decision-making processes that ultimately drive business advantage and competitive edge.