Huge amounts of money are being spent on bolt-on regulatory technology. Why don’t we embrace the capabilities of our digital infrastructure and allow the data required by regulation to be gathered as part of the core business process? I worry that a lot of the RegTech industry is just barking up the wrong tree.
RegTech is expensive
Bloomberg says that financial institutions spend US$70bn globally on compliance. Alpha FMC says that asset managers spend 21-30% of their change budgets on regulatory projects. Can it be any surprise that, according to KPMG, VC funds invested US$591m across 60 deals in H1 2017 in Europe alone? The FinTech Times says that “2017 will be the year of RegTech”.
No kidding.
When financial institutions spend $70bn on being compliant, that is $70bn of your money not being spent on improving the returns of your pension fund.
RegTech isn’t the solution
When I worked on the City’s bond trading floors, it dismayed me that it was acceptable to view regulatory adherence as a bolt-on module that a financial institution would buy in addition to running its business. I remain a huge believer in the financial services industry, but some things have to change. The ability to help drive that change is one of the reasons why I co-founded FINBOURNE.
For example, let’s take a transaction that needs to be amended. The historical approach was to get the middle office to make the change directly. This means the history is gone: the auditor or Head of Trading cannot see what the trade looked like when it was first executed.
The reaction from the RegTech and indeed the business community has been to add new layers of complexity around approval processes for trade amends, or to regularly copy the data into yet another data warehouse. At FINBOURNE, we believe this is structurally the wrong approach to the problem. A system where every action or event on this trade is kept would mean that you can always reproduce what it looked like at any point in time.
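As an illustration only, here is a minimal, hypothetical sketch in Python (a generic append-only store, not LUSID’s actual API) of what “keeping every event” looks like: an amendment adds a new version of the trade, and the original execution can always be reproduced.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class TradeEvent:
    """One immutable version of a trade, recorded when the change was made."""
    recorded_at: datetime  # when this version entered the system
    data: dict             # the full trade state at that moment


class TradeHistory:
    """Append-only store: amendments add new versions, nothing is ever deleted."""

    def __init__(self) -> None:
        self._events: list[TradeEvent] = []

    def record(self, data: dict) -> None:
        self._events.append(TradeEvent(datetime.now(timezone.utc), dict(data)))

    def as_at(self, when: datetime) -> Optional[dict]:
        """Reproduce the trade exactly as it looked at 'when'."""
        snapshot = None
        for event in self._events:
            if event.recorded_at <= when:
                snapshot = event.data
        return snapshot


# A middle-office amendment becomes a new version; the original execution survives.
trade = TradeHistory()
trade.record({"trade_id": "T-1", "quantity": 1_000_000, "price": 99.50})  # as executed
trade.record({"trade_id": "T-1", "quantity": 1_000_000, "price": 99.45})  # amended later
```

Because every version is retained, “what did this trade look like when it was first executed?” becomes a single lookup rather than a forensic exercise.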
Compliance can’t be a bolt-on
A lot of time and effort is being spent searching for, collating and cleaning up data for regulatory submissions. The process is heavily manual and, by its very nature, reactive. If OTC trade reporting under MiFID II requires 81 data fields on every transaction, why not make it easy to add them at the earliest opportunity rather than doing an expensive refit to bolt these specific fields on later? When “field 82” comes along next year, do you really want yet another expensive system change? Capture your data proactively and efficiently at the moment it becomes available, and keep all of it at the highest fidelity.
And do it as part of your normal business process, using systems that never forget.
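To make that concrete, here is a hypothetical sketch of capturing regulatory fields as open-ended properties at booking time. The helper and field names below are purely illustrative assumptions, not a real reporting schema or any vendor’s API; the point is that a new “field 82” becomes just another key, not a system change.

```python
from datetime import datetime, timezone


def book_transaction(core: dict, regulatory: dict) -> dict:
    """Capture regulatory fields alongside the trade at the moment they are known."""
    return {
        **core,
        "properties": dict(regulatory),             # open-ended: new fields need no refit
        "captured_at": datetime.now(timezone.utc),  # provenance of the capture
    }


# Illustrative field names and instrument identifier only.
txn = book_transaction(
    {"instrument": "XS0000000000", "quantity": 5_000_000, "price": 101.2},
    {"reporting/trading_capacity": "DEAL", "reporting/waiver_indicator": "SIZE"},
)
```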
About three years ago, when I ran Fixed Income Syndicate and Capital Markets for an investment bank, the FCA launched an industry-wide investigation focusing on the allocation procedures for debt and equity issues. They asked all the leading investment banks to provide a list of the deals they had underwritten over the previous seven years. That is a big list even for the smaller banks, which would routinely be doing several deals a day of varying sizes, currencies, structures and investor types. Luckily, in most houses one of the junior analysts would have been keeping a spreadsheet of the deal count (mainly used to impress senior management and potential clients).
But the FCA also wanted to know to whom the bonds were allocated, what type of client, in what size, at what price and, crucially, why. In a “hot” deal, why were certain clients allocated less and others more? (A really interesting question, but a subject for another blog!)
Like most across the Street, we would have had to hire outside consultants and lawyers to go through our records looking for the data. In the end the authorities had to cut back on their demands, as no one had the ability to produce the data with sufficient fidelity at a reasonable cost and within a reasonable time frame. The market had just forgotten.
The one thing we can confidently predict is that the level of transparency demanded of financial services is only going to increase. The only way to be prepared for whatever regulators, clients or business opportunities require is to have the infrastructure to keep accurate and clean data.
Let’s disrupt RegTech!
The RegTech companies that inspire us are concentrating on systems that improve other areas, such as compliance workflow, document tracking, AML, identity and KYC.
However, the many RegTech solutions that merely automate previously manual processes are missing a key point. Modern technology offers far more effective ways of solving the problem, including AI and machine learning. But these depend on having complete and comprehensive data, and that is what the new wave of digital financial infrastructure should be enabling.
Modern digital infrastructure can:
- be the single source of truth for the front- and middle-office
- be immutable, i.e. no data is ever deleted
- be open – data is accessed through open APIs, making integration easy with regulators and even with other vendor and in-house systems
- be fully bi-temporal – combined with immutability, this means it is always possible to go back in time and recreate the data as it was at any point in the past (a minimal sketch of such a query follows below)
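As a rough sketch of what a bi-temporal query means in practice: every record carries two time axes, when the fact applied (“effective at”) and when the system learned of it (“as at”). The code below is a hypothetical, simplified illustration of that idea, not any particular vendor’s API.

```python
from datetime import datetime
from typing import Optional

# Corrections add rows with new 'as_at' timestamps; no rows are ever removed.

def bitemporal_view(records: list, effective_at: datetime, as_at: datetime) -> Optional[dict]:
    """Return the record as it was known at 'as_at', for the moment 'effective_at'."""
    known = [
        r for r in records
        if r["as_at"] <= as_at and r["effective_at"] <= effective_at
    ]
    if not known:
        return None
    # The latest knowledge about the latest applicable fact wins.
    return max(known, key=lambda r: (r["effective_at"], r["as_at"]))
```

Querying with yesterday’s “as at” timestamp shows exactly what you reported yesterday, even if a correction has since been booked.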
FINBOURNE’s LUSID™ technology has embarked on the journey to answer these problems. We keep all the data as it used to look and as it currently looks. But does that make us a RegTech company? I don’t think so. I hope we are part of the wider evolution toward better systemic digital infrastructure that, as a natural by-product, achieves the highest levels of compliance without huge additional cost for the customer.
I’d be interested to hear your experiences of the above so please don’t hesitate to email me.
I referred to the following:
- Alpha FMC report
- KPMG “Pulse of Fintech Q2 2017”
- The FinTech Times, “Tech to turn Reg” July-August 2017
- FCA Investment and Corporate Banking Market Study 2016