How an IBOR should give you better control of your data

15/05/2020

Last week we held a webcast with The Trade where we did something we think is unusual: we defined an Investment Book of Record (IBOR). Not by what it is – that’s a fairly commonplace categorisation – but by what it does. Or rather, what it should do to be a relevant and productive tool in today’s landscape of systemic margin compression, increasingly exigent regulatory bodies and historic market dislocations.

We think our definition sets the bar pretty high; but then we also ran a couple of live surveys with the audience (mostly fund managers and asset servicers), which demonstrated that we are not alone in our intentions.

The results are fascinating for anyone evaluating their current investment data platform: it matters which problems your competitors are minded to solve first, and where that leaves you when they do.

Are you missing any of the following functionality in your investment platform?

Ability to produce shadow NAVs

It’s not surprising that the most common choice for missing functionality is the ability to produce shadow NAVs. We suspect many funds would add ‘at all’ at the end, but even for the ‘flush and fill’ contingent, a static NAV based on last night’s positions and risk is dangerously naïve in thin markets. Without real-time positions you also cannot reconcile against your fund accountant and custodian, or run simulations and scenario analysis. We prefer the safety net of instant reconciliations, on-the-fly position derivation and bi-temporal data, so that historical valuations can always be reproduced.
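
To make the bi-temporal idea concrete, here is a minimal sketch in plain Python (not the LUSID API; all names are illustrative) of how recording every transaction with both an effective date and an as-at timestamp lets you derive positions on the fly, and value a shadow NAV, for any combination of ‘when it happened’ and ‘when we knew about it’:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: a toy bi-temporal transaction store, not the LUSID API.
@dataclass
class Txn:
    instrument: str
    units: float
    effective_at: datetime   # when the trade economically happened
    as_at: datetime          # when we learned about it / recorded it

class BitemporalStore:
    def __init__(self):
        self._txns: list = []

    def add(self, txn: Txn) -> None:
        self._txns.append(txn)

    def positions(self, effective_at: datetime, as_at: datetime) -> dict:
        """Derive holdings on the fly: include only trades that had happened
        by `effective_at` AND that we knew about by `as_at`."""
        holdings: dict = {}
        for t in self._txns:
            if t.effective_at <= effective_at and t.as_at <= as_at:
                holdings[t.instrument] = holdings.get(t.instrument, 0.0) + t.units
        return holdings

    def shadow_nav(self, effective_at, as_at, prices: dict) -> float:
        """A toy shadow NAV: value the derived positions with supplied prices."""
        return sum(units * prices[ins]
                   for ins, units in self.positions(effective_at, as_at).items())

store = BitemporalStore()
store.add(Txn("GB00XYZ", 100, datetime(2020, 5, 11), datetime(2020, 5, 11, 18)))
# A late-booked trade: effective Monday, but only recorded on Wednesday morning.
store.add(Txn("GB00XYZ", 50, datetime(2020, 5, 11), datetime(2020, 5, 13, 9)))

# What did we think Monday's position was on Tuesday vs. after the late booking?
print(store.positions(datetime(2020, 5, 12), datetime(2020, 5, 12)))  # {'GB00XYZ': 100.0}
print(store.positions(datetime(2020, 5, 12), datetime(2020, 5, 14)))  # {'GB00XYZ': 150.0}
```

Because neither view overwrites the other, the valuation you reported on Tuesday remains exactly recoverable even after the late booking lands.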

I can’t connect to third parties, e.g. fund admins or custodians

This really is all about owning your own data. To do that, you need to start with logical partitions within a cloud-native distributed architecture to separate your administrators’ datasets. From there you need client-configurable trade mapping to capture all TPA data formats, and an extensible data model to avoid leaking information. Combining this with an open API interface dramatically reduces integration time with disparate data sources and lets you retain and reconcile all versions of the truth over time, regardless of current state.
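
As a sketch of what client-configurable trade mapping can look like in practice (illustrative Python, not FINBOURNE’s actual implementation; the feed names and columns are invented), each administrator’s feed gets its own declarative field mapping into one canonical transaction shape, so onboarding a new TPA format becomes configuration rather than code:

```python
import csv
import io

# Illustrative: one declarative mapping per administrator's file format.
# Keys are our canonical fields; values are the column names in each TPA feed.
TPA_MAPPINGS = {
    "admin_a": {"trade_id": "TradeRef", "instrument": "ISIN",
                "units": "Quantity", "price": "DealPrice"},
    "admin_b": {"trade_id": "id", "instrument": "security_code",
                "units": "nominal", "price": "px"},
}

def normalise(source: str, raw_csv: str) -> list:
    """Map a TPA's CSV feed into canonical transactions via its configured mapping."""
    mapping = TPA_MAPPINGS[source]
    out = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        txn = {canonical: row[col] for canonical, col in mapping.items()}
        txn["units"] = float(txn["units"])
        txn["price"] = float(txn["price"])
        txn["source"] = source  # keep provenance, so every version of the truth is retained
        out.append(txn)
    return out

feed_a = "TradeRef,ISIN,Quantity,DealPrice\nT1,GB00XYZ,100,1.10\n"
feed_b = "id,security_code,nominal,px\nT1,GB00XYZ,100,1.10\n"

# Two formats, one canonical shape -- ready to reconcile source against source.
print(normalise("admin_a", feed_a))
print(normalise("admin_b", feed_b))
```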

My reporting isn’t flexible

2020 is set to be an intense period of regulatory change for the buyside, with a barrage of changes such as the Uncleared Margin Rules (UMR), the Central Securities Depositories Regulation (CSDR) and G-SIFI systemic assessments, to name just a few. Factor in Brexit adjustments and the Libor transition, and the buyside has a lot of regulatory reporting to handle. This doesn’t come cheaply either: according to KPMG, financial institutions spent $270bn annually on compliance in 2017. Add to this burden the increasing demand for interaction and reporting from end clients, and you can easily see how the need for an internal, dynamic, customisable solution is quickly becoming vital. We believe your data should be available at your fingertips, irrespective of the source. The centralised virtualisation process should create a repeatable protocol for the gathering, reshaping and presentation of all your data, and handle localised entitlements to ensure control is maintained.
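
To illustrate the idea (a hypothetical sketch rather than a real FINBOURNE interface; the sources, roles and fields are invented), a repeatable gather-reshape-present protocol with localised entitlements might look like this:

```python
# Illustrative sketch of a gather -> reshape -> present pipeline, with
# per-role entitlements enforced before anything leaves the platform.

SOURCES = {
    "custodian": [{"fund": "F1", "isin": "GB00XYZ", "units": 100, "cost": 110.0}],
    "fund_admin": [{"fund": "F1", "isin": "GB00XYZ", "units": 100, "nav": 112.0}],
}

# Which fields each role is entitled to see.
ENTITLEMENTS = {
    "client": {"fund", "isin", "units", "nav"},
    "ops":    {"fund", "isin", "units", "cost", "nav"},
}

def gather() -> list:
    """Pull rows from every source into one virtual dataset, tagging provenance."""
    return [dict(row, source=name) for name, rows in SOURCES.items() for row in rows]

def reshape(rows: list) -> list:
    """Merge per-source rows into one record per (fund, isin)."""
    merged: dict = {}
    for row in rows:
        key = (row["fund"], row["isin"])
        merged.setdefault(key, {}).update(
            {k: v for k, v in row.items() if k != "source"})
    return list(merged.values())

def present(rows: list, role: str) -> list:
    """Apply localised entitlements: each role sees only its permitted fields."""
    allowed = ENTITLEMENTS[role]
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

# The same repeatable protocol serves a regulator, an end client or ops;
# only the entitlement set changes.
print(present(reshape(gather()), role="client"))  # cost basis withheld from clients
```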

I can’t calculate price and risk across all instruments

Whichever system you use, historically there has always been an asset class that doesn’t fit. And that’s frustrating. Constrained by technology, the front office resorts to desktop means to value and risk certain instruments. A consolidated view then becomes a challenge and control issues arise. Even if all instruments can be combined, the system is always beholden to one pricing model. There’s no ability to dynamically route pricing to different models, so model testing is a month-long finance process rather than something instantaneous, tailored and repeatable. The control issues around a single pricing model are self-evident, and all results exist for that moment in time alone, with no ability to recover them.

We prefer the ability to run models in parallel and instantly translate between them and the instrument set. We regard the ability to configure this dynamically at an instrument level, and to layer in appropriate market data with its own rulesets, as vital for testing, simulation and effective risk management. And as mentioned before, we prefer the safety net of all data being bi-temporal, such that these calculations are always fully recoverable.
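
A minimal sketch of what instrument-level model routing could look like (hypothetical Python; the models and routing rules are our own invention, not a vendor API):

```python
# Illustrative: route each instrument to a pricing model via configurable
# rules, so alternative models run in parallel on the same book.

def discounted_cashflow(inst: dict, market: dict) -> float:
    r = market["discount_rate"]
    return sum(cf / (1 + r) ** t for t, cf in inst["cashflows"])

def mark_to_market(inst: dict, market: dict) -> float:
    return inst["units"] * market["prices"][inst["id"]]

MODELS = {"dcf": discounted_cashflow, "mtm": mark_to_market}

def price(inst: dict, market: dict, routing: dict) -> float:
    """Per-instrument overrides win; otherwise fall back to the asset-class rule."""
    name = routing["by_instrument"].get(
        inst["id"], routing["by_asset_class"][inst["asset_class"]])
    return MODELS[name](inst, market)

market = {"discount_rate": 0.02, "prices": {"BOND42": 103.0}}
bond = {"id": "BOND42", "asset_class": "bond", "units": 1,
        "cashflows": [(1, 5.0), (2, 105.0)]}

base = {"by_instrument": {}, "by_asset_class": {"bond": "dcf"}}
test = {"by_instrument": {"BOND42": "mtm"}, "by_asset_class": {"bond": "dcf"}}

# Two models, one instrument, compared instantly -- routing is data, not code.
print(price(bond, market, base))  # ~105.82 via discounted cashflows
print(price(bond, market, test))  # 103.00 via the quoted market price
```

Because the routing table is just data, a model test is a second routing table run side by side, not a month-long change project; combined with bi-temporal market data, every result can be regenerated later.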

Let’s take a look at question 2, which reveals the risks of buying a new solution.

A new system takes too long to implement

Typically, the implementation period for a new investment platform is anywhere from ‘several months’ to ‘ongoing’. We have heard horror stories of multi-million-pound, multi-year implementations that still fail to deliver the required functionality or asset classes. But it doesn’t have to be a dramatic ‘big bang’ IT migration. We believe in an incremental approach, where the client describes the problems they suffer in order of importance and the platform removes them at the client’s pace. To demonstrate capability, we believe in two-week proofs of concept (PoCs) that allow full wish lists to be addressed and the client’s dev and ops teams to become independent experts on the platform themselves, doing away with the need for misaligned consultant requirements around implementation.

A new system will underdeliver functionality

We’ve all been in a sales meeting where you’re promised the earth and delivered a tiny patch of land instead. Why shouldn’t you be allowed to try out a platform before committing, just as you would with Spotify or Slack, for example? In addition to our PoC approach, we also encourage people to sign up for a free account where they can test all of LUSID’s capabilities and see for themselves where it can add value.

I’ll need to increase headcount to implement the new system

We fundamentally believe that this doesn’t need to be the case. Our aim is to have an engaged and experienced developer, operations and data science community in our ecosystem that shares reference implementations, data modelling ideas and marketplace tools.

Where possible, the platform itself should be capable of operating with no code expertise necessary; where the constructs become too complex for a user interface, the community or the platform itself steps in.

This means there should be no need to think about support overheads; the aim should be zero. Far from increasing headcount, the ideal platform should allow you to lower it, while retaining a far more engaged, creative and efficient team that can access and create ground-breaking tools and concepts.

If you found these results interesting, you can watch our CEO Thomas McHugh chat about this and more in the webcast with The Trade:

https://www.theiaengine.com/iaengine-events/can-an-ibor-give-you-better-control-over-your-positions/

Gus Sekhon

