Investment data management outlook 2023
Introduction
FINBOURNE Technology and Citisoft recently held an industry roundtable in Copenhagen, attended by over a dozen senior operational leaders across Nordic pension, insurance and asset management firms.
All participants recognized the topic of discussion, how to stop infrastructure inefficiency from eating into profit margins in a world of constant change, as the key data challenge they are trying to address today. Increasing volumes and complexity of data, combined with fewer resources and a more cost-conscious mentality, have made it difficult to safeguard data quality and provide fully integrated workflows that enable revenue growth. Tactical ‘sticky plasters,’ manual workarounds, and the need to manage duplicated data held in multiple data stores are hindering firms’ ability to take full advantage of new market opportunities and of initiatives contributing to post-pandemic returns.
Topics discussed
- Key challenges
- System-first to data-driven
- Cloud is the future
- Simplify the complex
- ESG is just the latest data challenge
Key challenges
The roundtable participants all acknowledged that increasing data volumes, types, and complexity are here to stay. A key challenge in 2023 is to continue to provide control of the required data at a reasonable cost without having to completely re-design their existing technical landscape.
There is a lot of data available, but companies often do not have the ability to organize and analyze it to optimize decision making or identify possible opportunities efficiently. Getting the data into place is a good start, but data is ultimately used to tell a story. Not all stories will be the same and there are multiple ways to look at the same data depending on the needs and viewpoints of the individual users, human or application.
How to efficiently translate the data into different viewpoints and outcomes is a key driver for the roundtable participants. Discussions centered around how technological decisions should be made and what kinds of technology could be transformational instead of solving only one specific issue. With sustainable finance an important theme in the financial services industry today, how to approach ESG data was also a big topic of conversation. Although collecting, organizing and processing ESG data is currently not fully developed in most firms, the participants agreed that it is just another data challenge. Managing this new type of data in the most efficient and cost-effective manner possible was key.
System-first to data-driven
Data is one of the most valuable assets for a business today and is the foundation upon which revenue-generating, operational, decision-making and insight applications depend. Traditionally, a system-first approach was taken to fill gaps or resolve problems. This led to many different applications in an organization, each using and managing its own sets of data independently. Companies are grappling with three major issues because of this approach:
- Maintaining data consistency, reliability, and transparency across different data sources
- Blending the data together, which often involves expensive and bespoke development
- Difficulty in providing data in a timely fashion due to the number of data sources
We are now seeing a shift towards a more data-driven approach. Data considerations are now at the head table along with functionality when evaluating the way forward. Just because a system works today, it does not mean that it will work tomorrow. It should therefore be considered replaceable, while the data itself should be looked at as the durable resource.
Making the data durable should not require a re-design of the entire system landscape. Instead, the preferred option is to keep the existing systems and implement an interconnected data ecosystem with a layer that harvests data from these applications, validates it for quality and consistency, and then distributes it.
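As a rough illustration only (the record fields, quality rules and function names below are hypothetical, not a prescription from the roundtable), such a layer can be sketched as a small harvest-validate-distribute pipeline:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Record:
    source: str          # originating application, e.g. a portfolio system
    instrument_id: str
    quantity: float

def validate(record: Record) -> bool:
    """Minimal quality checks; real rules would also cover completeness,
    consistency across sources, and timeliness."""
    return bool(record.instrument_id) and record.quantity >= 0

def data_layer(sources: Iterable[Callable[[], List[Record]]],
               consumers: Iterable[Callable[[List[Record]], None]]) -> None:
    """Harvest data from existing systems, validate it, then distribute it."""
    harvested = [record for fetch in sources for record in fetch()]
    clean = [record for record in harvested if validate(record)]
    for deliver in consumers:
        deliver(clean)
```

The point of the sketch is that the existing applications remain untouched; only the harvesting, validation and distribution responsibilities move into the shared layer.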
Several of the roundtable participants are investigating two emerging data management concepts, data fabric and data mesh, to enable this desired model.
Data fabric
A data fabric is metadata-driven, providing an abstraction layer to simplify data access and facilitate self-service data consumption at scale. It breaks down data silos, integrates different data sources and then shares the data to get it into the hands of the users that need it.
Data mesh
A data mesh is about creating a distributed data system that aligns data sources by business domains or functions, each with their own data owners that are responsible for the integrity of their data. With data ownership decentralisation, data owners can create data products for their respective domains.
The good news is that the two data architecture concepts are complementary. A data fabric provides the capabilities needed to implement and take full advantage of a data mesh by automating many of the tasks required to create data products and manage their lifecycle.
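A loose sketch of how the two concepts can work together follows; the domain, owner and field names are invented for illustration. A decentralised domain publishes a data product (the mesh principle), and a metadata-driven catalogue acting as the fabric layer makes it discoverable to the rest of the organisation:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class DataProduct:
    domain: str                # owning business domain (data mesh principle)
    name: str
    owner: str                 # accountable data owner within that domain
    schema: Dict[str, str]     # field name -> type, used for discovery

@dataclass
class FabricCatalogue:
    """Metadata-driven abstraction layer: one place to discover and
    access data products published by decentralised domains."""
    products: Dict[Tuple[str, str], DataProduct] = field(default_factory=dict)

    def publish(self, product: DataProduct) -> None:
        self.products[(product.domain, product.name)] = product

    def discover(self, domain: str) -> List[DataProduct]:
        return [p for (d, _), p in self.products.items() if d == domain]

# Hypothetical usage: a fixed income domain publishes its holdings product
catalogue = FabricCatalogue()
catalogue.publish(DataProduct(
    domain="fixed_income", name="daily_holdings", owner="fi_data_team",
    schema={"isin": "str", "market_value": "float"}))
```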
Key takeaways
- The traditional system-first approach has led to a proliferation of individual, unconnected applications, creating challenges around data quality, data blending and data distribution.
- There has been a shift towards a more data-driven approach for solutions where data considerations are at the same level of importance as functionality.
- Data should be considered as a long-lasting resource while applications that use the data should be considered replaceable.
- Look to build solutions by integrating existing systems together before buying new applications.
- Data fabric and data mesh management concepts can be used to provide a data architecture that enables an integrated, connected data experience across a distributed and complex data landscape.
Cloud is the future
With concerns about security and compliance addressed, the cloud is now the platform on which companies want to operate, especially in relation to data. The scalability, flexibility and cost of data storage cannot be matched by on-prem solutions. Business contingency and data sovereignty capabilities offered by the major cloud platform providers are mainstream, and efficient integration with existing on-prem systems or private clouds is readily available. With a cloud solution, data volume and storage are no longer a major factor. Going forward, cloud migration decisions will focus more on transitional and operational costs than on functionality.
Moving to the cloud will be a phased journey for more established organisations. All new applications should be cloud-first developments or purchases. Migrating legacy applications will take more time and there is little point moving the same problems to the cloud. If an application is not fit for purpose on-prem, then why spend the time, effort and money moving it to the cloud where it will still not be fit for purpose? Functional and technical gaps need to be resolved first.
Over the past several years, nearly every major vendor or service provider in the financial services space has been marketing their cloud capabilities or their plans to incorporate cloud into their offering. Newer vendor applications are being built as cloud-native to deliver on the full promise of cloud right out of the box – lower fixed and variable costs, greater scalability, flexible compute power, enhanced security, better data accessibility and data loss prevention, improved application interoperability and faster time to market for new features.
But even the more mature, functionally rich incumbents present in most investment management firms today are now either cloud-based or cloud-enabled, for at least some of their functionality, and are working towards increasing their cloud presence. On-prem applications that continue to be fit for purpose will continue to exist even if not all their available functionality is used, like airplanes that still have seat ashtrays. But the future is most definitely in the clouds, and this is the direction in which the roundtable participants are heading.
Key takeaways
- Use of cloud platforms is the future for data collection and storage.
- Nearly every major vendor or service provider is moving functionality to the cloud.
- Resolving data volume and storage issues allows companies to focus additional resources on ensuring the quality of the data.
- The cloud journey will be phased for older firms with existing on-prem technology, but there is little point spending the time, resources and money moving legacy systems to the cloud that are not fit for purpose on-prem.
- Newer cloud-native applications can deliver cloud advantages right out of the box.
Simplify the complex
Data is expensive. Costly to source, to keep and to manage. But the financial services industry is a highly data-intensive business and companies depend on their data to make decisions, perform business activities, communicate with their clients, and satisfy regulators. The sheer amount of data has become a substantial challenge, not only in terms of the volume, but also in terms of ensuring its quality.
The emerging data architectural models and improved technology solutions, like the cloud, will help. But a different aspect was brought up during the roundtable: simplification. That is, only collect, control and consolidate the data you really need to carry out your activities, and do not source extraneous data simply because it is available, just in case it is needed, or because you have always done so. This will reduce both complexity and cost. One of the participants likened it to a living room where you keep adding the latest fashionable piece of furniture because it looks good, without focusing on how you want to use the room. And when you finally step back and decide what you want to use the room for, you are scared to take any of the furniture out just in case you might need it in the future.
It is the same story with data. No one likes to remove data. It is unsettling, especially if you are not sure exactly what the data is used for. Many organisations lack a company-wide understanding of who uses specific types of data, in what systems, for what business purposes, and where the data is sourced from. Performing an organisational data mapping exercise, starting with what data is used by each business unit and for what purposes, then moving to where the data is sourced from and stored, and finally linking that information to the technical data architecture that supports the whole process, will provide the organisation with a detailed data operating model.
This data operating model is the first step in a data simplification exercise because it identifies what data is not used as well as what is. It is a key contributor to the development of a comprehensive business and technical data strategy and is a foundational element to define and design effective data architectures. Also, using the information gathered, data sourcing costs can be rationalised, potential data inconsistency issues identified, and data traceability improved. This will benefit the bottom line and improve reporting to regulators and to clients.
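A minimal sketch of how such a mapping can expose unused data, assuming purely illustrative dataset, vendor and business-unit names:

```python
# Which datasets are sourced, and from which vendor (illustrative names)
sourced_datasets = {
    "benchmark_prices": "VendorA",
    "issuer_ratings":   "VendorB",
    "esg_scores":       "VendorC",
}

# Which business units use which datasets
usage_by_business_unit = {
    "performance_team": ["benchmark_prices"],
    "risk_team":        ["benchmark_prices", "issuer_ratings"],
}

used = {ds for datasets in usage_by_business_unit.values() for ds in datasets}
unused = sorted(set(sourced_datasets) - used)  # candidates for decommissioning

print(f"Sourced but unused datasets: {unused}")  # -> ['esg_scores'] here
```

In practice the mapping would also capture purpose, storage location and lineage, but even this simple inventory view highlights data that is being paid for without being used.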
Key takeaways
- Reducing the amount of data collected, stored and managed to only that which is actually used for business purposes is another way to streamline data management processes and reduce costs.
- Most organisations do not have a complete company-wide understanding of what data is collected, where it is sourced from, which business area uses that data and for what purposes.
- Developing a comprehensive business and technical data operating model for the company is a foundational element to defining a data strategy, improving data consistency and transparency, and identifying potential cost reductions and bottlenecks.
ESG is just the latest data challenge
An example that encompasses many of the data challenges being faced by investment management firms today is Environmental, Social and Governance (ESG). It was a hot topic during the roundtable because of the impact it is having on both the business and technology areas of their companies.
At its heart though, ESG is just another type of data problem that needs to be solved. It may be more problematic due to inconsistency across data providers, continually changing regulations and the need to incorporate semi-structured and unstructured data into the mix. But the same basic data activities are required: identification, sourcing, storage, quality control, management and distribution.
ESG has been around since the 1960s. Previously known as socially responsible investing or ethical investing, it is not a new concept. At its most basic, instrument and issuer restricted lists were used to ensure that companies engaging in certain activities, like tobacco or weapons, were not included in a specific fund. Not so now. With ESG investing’s skyrocketing popularity, it has become a significant consideration for many clients, and therefore for the firms selling to them. Bloomberg Intelligence estimated that global ESG assets may exceed $53 trillion by 2025, representing more than a third of the projected total assets under management (AUM) globally. By 2026, environmental sustainability will be a key criterion in over 60% of data management initiatives, according to Gartner.
With this increase in demand came an increase in ESG regulation and reporting requirements. These regulations are all trying to reduce company greenwashing, improve transparency on inherent sustainability risks in financial products, and provide more certainty to investors looking to position their investments toward more sustainable technologies and businesses.
These regulations help to professionalize the business of creating and selling ESG products, but they have also caused several data challenges that investment managers are still trying to deal with, such as:
- Lack of international standards for reporting, increasing the effort needed to fulfill disclosure requirements.
- Varying interpretations across markets for defining a sustainable investment and managing sustainability risks effectively.
- The lack of quality data surrounding ESG metrics, hindering the reporting process and undermining the credibility of the disclosures made.
- ESG regulations are still being defined and developed around the world, so a global standard set of data points or methods of interpretation to exactly define an ESG-compliant investment is not yet available.
- Determining the best solution for managing and distributing ESG data that was likely obtained from multiple independent and fragmented sources and includes a combination of structured, semi-structured and unstructured data.
- Ensuring that appropriate data quality controls, assurance capabilities and data governance are in place to manage the ESG-related data.
There was a consensus amongst the roundtable participants that ESG factors are no longer just preferential considerations, but key drivers behind investment decisions. To make the best decisions, traceable high quality and consistent data must be available to evaluate a combination of criteria to ascertain whether an investment is ESG compliant. New sustainability indicators and thresholds continue to be created and new data types, sources and systems are being developed to provide the information needed.
Selecting the right data vendor(s) is not straightforward due to differences in taxonomies and methodologies used. There are many ESG data vendors in the market and firms now often purchase multiple data sets and then overlay with their own internal analysis. Knowing the business activities to be performed and understanding what data the firm needs to do them is the first step. Using a methodology like the one discussed above for simplifying data would be useful.
New applications have been developed to consume the data and support various aspects of ESG. Existing, more mature vendors have also been enhancing their systems to support ESG. So there are many alternatives available, which makes choosing the right one complex. Going back to the concept of a more data-driven selection methodology: because the scope of ESG data needed is not yet stable, it is crucial that applications can add new data types or attributes without going through long testing and upgrade paths.
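One way to achieve that flexibility, sketched here with hypothetical field and provider names, is to hold ESG indicators in an open-ended attribute map so that a newly required data point does not force a schema change or a lengthy upgrade cycle:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Any, Dict

@dataclass
class EsgDataPoint:
    issuer_id: str
    provider: str            # vendor feed the value came from (illustrative)
    as_of: date
    # Open-ended attribute map: new indicators (emissions scopes, board
    # diversity, taxonomy alignment, ...) can be added without changing
    # the class or migrating a database schema.
    attributes: Dict[str, Any] = field(default_factory=dict)

point = EsgDataPoint(
    issuer_id="ISSUER123", provider="vendor_x", as_of=date(2023, 3, 31),
    attributes={"carbon_intensity": 142.0, "controversy_flag": False})

# A newly regulated indicator is simply appended as another key
point.attributes["eu_taxonomy_alignment_pct"] = 18.5
```

The trade-off of such a flexible structure is that validation and governance rules must live alongside the data rather than being enforced by a rigid schema, which is where the quality controls discussed earlier come in.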
Key takeaways
- ESG is just another example of a data problem that needs to be solved.
- ESG factors are no longer just preferential considerations, but key drivers behind investment decisions.
- Inconsistent and evolving ESG regulations globally are significantly increasing the complexity of sourcing, maintaining, and using ESG data.
- ESG applications must be able to add new data types or attributes without needing to go through long testing and upgrade paths.
- A flexible data foundation will help firms solve the fundamental problem of understanding and deriving value from their ESG data.
About Citisoft
Since 1986, we’ve solved complex technology and operations challenges for the investment management industry. With a team of over 85 dedicated consultants in North America and the UK, we’re committed to working with asset managers and asset servicers globally on projects of every scope. From guiding complete business transformation programs to on-the-ground delivery, our team is equipped to fulfill any strategic or tactical need.
To learn more about our Advisory and Delivery Services or to leverage the legacy scale in your organization, contact us at insights@citisoft.com or visit us at www.citisoft.com
About FINBOURNE Technology
At FINBOURNE, our Modern Financial Data Stack delivers global investment management, banking and capital markets firms an interoperable approach to data management – we provide a trusted and consolidated view of your financial data across the front, middle and back office.
Our cloud-native solutions are scalable whether you are a large, complex multi-region asset manager, a global bank or a smaller wealth fund. They can be deployed in a modular fashion to ensure a data solution designed for your specific needs.
LUSID is an API-first, SaaS native operational data hub that consolidates, aggregates and distributes financial data throughout your entire organisation, along with your investors, clients, and third parties. LUSID can also provide full front-to-back financial functionality, either natively or through integrations with your existing providers.
Luminesce is a data virtualisation platform that lets you query and combine data from multiple sources and systems, including LUSID, into an integrated view for interrogation. You can add new data sources and access them within minutes. Data is fetched from source systems in real-time without being replicated, so you have full confidence about its fidelity.
FINBOURNE is partnered with some of the world’s leading financial services firms, including Fidelity International, London Stock Exchange Group, Baillie Gifford and Railpen.