Increasingly diversified investment strategies, the closure of mutual funds, the rise of passively managed funds and companies racing to consolidate. It’s easy to say the asset management industry is slow to change, but look at it from the outside and it’s being pummelled by change from all directions.
How do you adapt to external change to get and stay ahead?
Common steps asset managers take:
- Launching a new fund or onboarding different asset classes
- Starting operations in new jurisdictions as you seek expansion overseas
- Understanding and implementing alternative data to give you unique insights beyond earnings and market information
- Adapting to regulatory change
- Providing clients with enhanced reporting
But change costs!
What should be simple changes often turn out to be expensive, months- or even years-long projects. The business has a constant refrain of “how hard can it be to…?”, while data and IT teams reply, with a sharp intake of breath through clenched teeth, “I don’t like the look of that!” as they see the multi-month business transformation process pan out in front of them: yet another system added to the inventory, or a “tactical solution” proposed that they know will outlive many strategic systems.
For the business to move forward without incurring unacceptable delays or costs, your data environment and IT infrastructure need to be flexible - you want to be able to add new systems or data feeds to support whichever direction of travel you’re headed in. Until now, this has been easier said than done.
So why does it take so long to implement what should be straightforward changes to your data and IT infrastructure?
Searching for answers
Given the pressure to avoid reputational and regulatory risks, any change under the bonnet has to be so smooth it’s almost unnoticed. One key area often overlooked in assessing the work required to implement a business change is the need to parallel run the new systems or data changes to ensure a faultless migration.
Parallel running can require complete new environments to be set up and maintained, which is expensive, headcount intensive and time consuming:
- Additional hardware, databases and processes
- Replicated data feeds from vendors, and feeds of activity from the production systems
- Reconciliation of data between production and test environments
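To make that reconciliation step concrete, here’s a minimal sketch in plain Python (the holdings data and function name are invented for illustration; this is not the LUSID API). It compares a snapshot of holdings from production against the same snapshot from a test environment and reports any breaks:

```python
# Minimal reconciliation sketch: compare holdings snapshots keyed by
# (portfolio, instrument) and report any breaks between environments.
# Data shapes here are illustrative, not a real vendor feed format.

def reconcile(prod: dict, test: dict) -> list:
    """Return a list of (key, prod_value, test_value) breaks."""
    breaks = []
    for key in sorted(set(prod) | set(test)):
        p, t = prod.get(key), test.get(key)
        if p != t:
            breaks.append((key, p, t))
    return breaks

prod_holdings = {
    ("EQUITY_FUND", "VOD.L"): 10_000,
    ("EQUITY_FUND", "BP.L"): 5_000,
}
test_holdings = {
    ("EQUITY_FUND", "VOD.L"): 10_000,
    ("EQUITY_FUND", "BP.L"): 4_750,   # a break: manual update missed in test
}

for key, p, t in reconcile(prod_holdings, test_holdings):
    print(f"BREAK {key}: prod={p} test={t}")
```

When the reconciliation has to be run manually against each environment, this kind of comparison is exactly the work that eats time; automating it is what makes real-time checking feasible.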
And as your IT infrastructure grows in complexity, it becomes more difficult to ensure base data from production is homogeneous across your test environments. For example:
- It may not be possible to get an accurate feed of operational activity from the production system. For example, if there are manual updates to production, it’s not always possible to apply these to a test environment in an automated way. What’s more, the people responsible for the manual updates are unlikely to be happy making them multiple times
- Differences in setup or application versions between prod and non-prod systems cause issues that require investigation
- If the reconciliation process is manual, it is often difficult to perform in real time, leading to timing differences across the environments
So what are the options to reduce the time and cost spent on this?
Ideally you want a system that allows you to test the changes in a securely partitioned ‘sandbox’ environment. For a system upgrade you should only need to set up the specific component being changed. Alternatively, for a data change you’ll want the ‘sandbox’ environment to automatically inherit data (e.g. holdings, trades, corporate actions etc.) from the live production environment, but allow you to swap in specific elements, for example an alternative market data feed in preparation for a change of source.
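The inherit-with-overrides idea can be sketched in a few lines of plain Python using the standard library’s `ChainMap` (the data and keys are made up for illustration; this is not how any particular system implements it):

```python
from collections import ChainMap

# Sketch of a 'sandbox' that inherits data from production but lets you
# swap in specific overrides (e.g. an alternative market data feed).
# Names and values are illustrative only.

production = {
    "holdings": {"VOD.L": 10_000},
    "prices": {"VOD.L": 72.50},      # current vendor feed
}

overrides = {
    "prices": {"VOD.L": 72.48},      # candidate replacement feed
}

# ChainMap resolves lookups against the overrides first, falling back
# to production for everything not explicitly swapped in.
sandbox = ChainMap(overrides, production)

print(sandbox["holdings"])  # inherited from production
print(sandbox["prices"])    # taken from the override
```

The key property is that production is never copied or mutated: the sandbox is a view that layers the swapped-in feed over the live data.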
Using LUSID, you can mirror your live investment environment into a different segregated ‘scope’. This lets you continue business as usual while simultaneously building and testing. Let’s say you want to change the way you are booking a particular type of transaction - here’s how you’d do that in LUSID:
- Create your test environment and give the test users read-only access to a set of portfolios from your live environment
- Replace the transactions booked in the legacy way with transactions booked using the new booking model
- Extract the data from the test environment and validate the only differences in the data are the expected changes
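The three steps above can be sketched in plain Python. The scope and transaction structures here are simplified stand-ins invented for illustration, not actual LUSID SDK calls:

```python
import copy

# Simplified stand-ins for scopes and transactions -- not the LUSID SDK.
live_scope = {
    "portfolio_A": [
        {"id": "txn-1", "type": "Buy", "model": "legacy", "units": 100},
        {"id": "txn-2", "type": "Sell", "model": "legacy", "units": 40},
    ]
}

# Step 1: create a test scope that sees the live portfolios
# (modelled here as a deep copy so the live data cannot be mutated).
test_scope = copy.deepcopy(live_scope)

# Step 2: replace legacy-model transactions with the new booking model.
for txn in test_scope["portfolio_A"]:
    if txn["model"] == "legacy":
        txn["model"] = "new"

# Step 3: extract and validate that the only differences are expected ones.
diffs = [
    (live["id"], key)
    for live, test in zip(live_scope["portfolio_A"], test_scope["portfolio_A"])
    for key in live
    if live[key] != test[key]
]
assert all(key == "model" for _, key in diffs)
```

If the final assertion passes, every difference between the environments is the booking-model change you intended, and nothing else has drifted.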
We store data at its most granular level, with ‘as at’ and ‘effective from’ dates for every event, allowing you to recreate the state of your portfolio at any point in time so you can test and roll back changes on demand. To find out what this means in more detail, check out our blog on bitemporal data.
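As a rough illustration of how bitemporal (‘as at’ plus ‘effective from’) querying lets you recreate an earlier view, here’s a toy Python model. The data layout and function are invented for illustration and are not how LUSID stores data internally:

```python
from datetime import date

# Each event carries an 'effective' date (when it applies in the real
# world) and an 'as_at' date (when the system learned about it).
events = [
    {"effective": date(2024, 1, 10), "as_at": date(2024, 1, 10), "units": 100},
    # A correction recorded later, back-dated to the same effective date:
    {"effective": date(2024, 1, 10), "as_at": date(2024, 1, 15), "units": 120},
]

def position(effective: date, as_at: date):
    """Latest-known units for `effective`, using only knowledge up to `as_at`."""
    visible = [
        e for e in events
        if e["effective"] <= effective and e["as_at"] <= as_at
    ]
    return max(visible, key=lambda e: e["as_at"])["units"] if visible else None
```

Querying with an earlier ‘as at’ date reproduces what the system believed at that moment (100 units before the correction landed), which is exactly what makes testing and rolling back changes on demand possible.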
Ultimately, this means you can remove much of the complexity of adding new capability to your IT infrastructure, and do so at a fraction of the time and cost of traditional parallel testing implementations.
If you’re interested and want to take a closer look under the bonnet – visit our GitHub page where we have a notebook with the full code samples so you can try this out with your own data. https://github.com/finbourne/sample-notebooks/blob/master/examples/use-cases/Safely%20and%20efficiently%20test%20changes%20to%20your%20system.ipynb