Authored by:
Erik te Selle, Senior Manager, Synechron, The Netherlands
Tim Seta, Senior Consultant, Synechron, The Netherlands
Examining the bigger picture

Is your organization struggling to become regulatory compliant, with even the simplest data element change in your Transaction Reporting process proving difficult? A proper review of the overall architecture could be the best way forward. And with a technology like Data Virtualization, there is a solution path to simplify and improve your Transaction Reporting architecture. In this article we focus on the architecture involved in Transaction Reporting.
Organizations need to start thinking about improving their current Transaction Reporting processes. This includes conducting a thorough assessment of the involved architecture -- if not for reasons of efficiency and cost improvement, then for the increased regulatory focus on quality and correctness. In this article, the concepts of Data Lakes and Data Virtualization are introduced and their relevance for Transaction Reporting processes is discussed.
Defining the problem
Over the past several years, organizations chased demanding regulatory deadlines. This drove the implementation of Transaction Reporting into companies' processes and systems and resulted in an architecture that was, or still is, not meeting the demands of the future.
Within a generic Transaction Reporting processes framework, as depicted below, the ingestion of data into the reporting engine is rarely as straightforward as portrayed. It is common to see over a dozen different systems interfaced, with hundreds of data points being bounced back and forth between them. Some key challenges can be identified, which are summarized in our previous article.
The Generic Transaction Reporting Processes Framework
Focusing on the core architecture, there are multiple elements likely at the root of Transaction Reporting issues, like data quality, an abundance of data formats, almost endless point-to-point interfaces, and source system deficiencies. The architecture and systems involved are not often designed or even equipped for the standards and requirements dictated by Transaction Reporting (e.g., near-real-time reporting, data availability requirements, and data transformation requirements).
Additionally, running a complex architecture often results in quite a burden with regards to maintenance and managing changes (e.g., system and data, file formats, and interfaces). Moreover, there are more stringent expectations and regulations on the topic of data ownership and accountability in which a solid architecture plays a vital role.
How technology can help
The aforementioned key challenges can be addressed by introducing a Data Lake to improve the architecture supporting Transaction Reporting processes. Data Lakes will provide tangible benefits which we will investigate here. However, over the past few years Data Lakes have presented some shortcomings which we want to highlight as well.
In recent years, Data Virtualization has evolved into the technology that could potentially be a more future-fit solution. Here, we will introduce the concepts of Data Lakes and Data Virtualization along with their benefits and shortcomings.
Understanding Data Lakes

A Data Lake is a centralized repository that allows organizations to store all structured and unstructured data at any scale.
Although often used interchangeably with a Data Warehouse, a Data Lake is fundamentally different. Unlike a Data Warehouse, a Data Lake retains all data and does not apply any transformation to it. This means that data is stored in the rawest form possible. In essence, (financial) companies export all data from their core systems into a central repository to be used for other processes like specific reporting and Management Information. This early ingestion (and late processing) of data is a key characteristic of Data Lakes and enables organizations to leverage important benefits, including the following:
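To make the "early ingestion" characteristic concrete, here is a minimal, illustrative sketch (not a production design) of how a record could be written to a lake-style store exactly as received, with schema applied only when the data is later read. All names and the payload are hypothetical.

```python
import json
import pathlib
import tempfile
from datetime import datetime, timezone

def ingest_raw(lake_root, source_system, payload):
    """Early ingestion: store the record exactly as received, with no
    transformation, partitioned by source system and ingestion date."""
    now = datetime.now(timezone.utc)
    partition = pathlib.Path(lake_root) / source_system / now.strftime("%Y-%m-%d")
    partition.mkdir(parents=True, exist_ok=True)
    # Wrap the raw payload with minimal metadata; any schema or typing
    # is applied on read ("late processing"), not on write.
    record = {"ingested_at": now.isoformat(), "raw": payload}
    out_file = partition / f"{now.strftime('%H%M%S%f')}.json"
    out_file.write_text(json.dumps(record))
    return out_file

# Hypothetical trade message, stored untouched in the lake.
lake = tempfile.mkdtemp()
path = ingest_raw(lake, "trading_system_a",
                  {"trade_id": "T-001", "notional": "1000000", "ccy": "EUR"})
```

Note that the notional is kept as the string the source system sent it in; deciding its type is deferred to whichever consumer reads the record later.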
When there is a clear business purpose defined (e.g., Transaction Reporting), and the Data Lake is prepared accordingly, data can be further processed to enable the creation of a specific output/report.
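The "late processing" step described above can be sketched as follows: raw records are given a schema and transformed only at the moment a specific report needs them. The field names and records are illustrative assumptions, not a real reporting schema.

```python
# Raw records as they might sit in the lake, untransformed (illustrative).
raw_records = [
    {"raw": {"trade_id": "T-001", "notional": "1000000", "ccy": "EUR"}},
    {"raw": {"trade_id": "T-002", "notional": "250000", "ccy": "USD"}},
]

def build_report_rows(records):
    """Late processing: impose a schema and apply transformations only
    now, for this specific reporting purpose."""
    rows = []
    for rec in records:
        raw = rec["raw"]
        rows.append({
            "TradeID": raw["trade_id"],
            # Types are imposed at read time, not at ingestion.
            "Notional": float(raw["notional"]),
            "Currency": raw["ccy"],
        })
    return rows

report = build_report_rows(raw_records)
```

A different consumer (e.g., Management Information) could read the same raw records and apply an entirely different schema, which is the flexibility the early-ingestion model is meant to buy.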
The concept of merging all data sources into a centralized repository (i.e., Data Lake) also brings some challenges for organizations, in particular:
Data Lakes, then, bring important benefits regarding data storage, analysis and scalability, but the challenges outlined above are cause for serious concern if not properly addressed. An approach that avoids these challenges altogether is Data Virtualization.
Understanding Data Virtualization
Data Virtualization integrates different types of data from multiple sources into a holistic, logical view without moving it physically.
The concept of Data Virtualization is as old as that of Data Lakes, but back in 2010 the available capacity and performance could not meet the requirements needed to make the concept function effectively. However, since the introduction of Microservices around 2010 (with Amazon and Netflix as pioneers), network and processor speeds have increased by orders of magnitude. Additionally, improved communication protocols have been tested and commercialized for large data transfers and long message queues. Essentially, the technology now supports the demanding requirements of Data Virtualization, making it a solution path well worth considering.
With Data Virtualization, users can access data directly from the golden source, across multiple data types, locations, and formats, without any duplication or movement of data. Accessing data directly from the golden source addresses the shortcomings of Data Lakes and yields the following benefits of Data Virtualization:
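The core idea can be illustrated with a small sketch: a logical view that joins two heterogeneous "golden sources" (here a CSV feed and a relational table, both invented stand-ins) at query time, without copying either into a central store. Real Data Virtualization platforms do this at scale with query push-down and caching; this is only the concept in miniature.

```python
import csv
import io
import sqlite3

# Golden source 1: a CSV feed of counterparties (illustrative stand-in).
CSV_TRADES = "trade_id,counterparty\nT-001,BankA\nT-002,BankB\n"

# Golden source 2: a relational system holding notionals (stand-in).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notionals (trade_id TEXT, notional REAL)")
db.executemany("INSERT INTO notionals VALUES (?, ?)",
               [("T-001", 1_000_000.0), ("T-002", 250_000.0)])

def virtual_trade_view():
    """A logical, unified view: each source is queried on demand and the
    results are joined in flight -- nothing is duplicated or persisted."""
    counterparties = {row["trade_id"]: row["counterparty"]
                      for row in csv.DictReader(io.StringIO(CSV_TRADES))}
    for trade_id, notional in db.execute(
            "SELECT trade_id, notional FROM notionals"):
        yield {"trade_id": trade_id,
               "counterparty": counterparties.get(trade_id),
               "notional": notional}

report = list(virtual_trade_view())
```

Each call to `virtual_trade_view` re-reads the sources, so consumers always see the golden-source state rather than a stale copy, which is precisely the contrast with the Data Lake's stored snapshot.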
On the other hand, there are certain challenges with Data Virtualization as well, including:
Side-by-side comparison
Comparing both technologies, conventional Data Lakes are still needed to do some of the heavy lifting of big data operations (e.g., storage and data analysis). However, Data Virtualization directly addresses the shortcomings of Data Lakes. Data Virtualization also leverages key benefits with respect to data sourcing as well as governance and ownership of data, on top of making the day-to-day operational data more accessible with low time-to-market to the business users. These are important considerations for organizations and could improve many business processes, including Transaction Reporting.
Data Virtualization as the powerhouse for Transaction Reporting
When assessing the Transaction Reporting processes framework (see diagram above), its key challenges, and the technology involved, it is clear that a critical look at the architecture could greatly benefit organizations. With the introduction of Data Virtualization specifically, organizations are able to leverage some key benefits:
Transaction Reporting would be able to leverage these key benefits of Data Virtualization. The immediate effect would be the following:
Extended benefits of Data Virtualization
Transaction Reporting is not the only business process that uses the source data, which means Data Virtualization will have side benefits as well:
Conclusion
Data Virtualization has the potential to be a powerhouse in the architecture of Transaction Reporting.
For more information on Transaction Reporting data quality requirements, please read this article published by Synechron:
https://www.synechron.com/en-us/insights/adhering-transaction-reporting-data-quality-requirements
What can Synechron offer in Transaction Reporting?
Synechron has, on multiple occasions, been tasked with setting up and improving Transaction Reporting processes for our financial services industry clients. This provides us with an excellent understanding and vision of the most relevant pain points and improvements needed to ensure a proper Transaction Reporting implementation -- from the regulation all the way to system implementation. This includes control frameworks and creating data hubs. With our industry-focused regulatory domain knowledge, hands-on change specialists and technology leadership, we are uniquely positioned to assist you in any capacity -- from analysis to implementation, and program management to development.
Want further, in-depth information and insight about our capabilities and vision on Transaction Reporting?
Connect with us and let’s talk:
Reach out to:
Erik te Selle, Senior Manager
erik.teselle@synechron.com
Check out our new website: Transaction Reporting
Interested in joining our Digital Transformation journey? At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron’s progressive technologies and optimization strategies span end-to-end Consulting, Design, Cloud, Data and Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Blockchain and Artificial Intelligence to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,000+, and has 40 offices in 17 countries within key global markets.
Check out the current vacancies on our website: Careers