Unearthing pre-trade gold with post-trade analytics

Learn how to leverage the insights of your past trading activities to improve your future execution.

Christophe Rivoire
August 31, 2023

As electronification progressively gains ground across asset classes, and new regulatory reporting obligations come into force, the amount of transparent, available data has grown massively. While markets have yet to reach full transparency, the ongoing initiatives around the development of a European consolidated tape highlight the desire to keep increasing transparency, in the conviction that it is a prerequisite to attracting and generating more liquidity.

This article will not opine on that correlation, as other market participants are taking care of it, but will instead explore the market intelligence this ‘tsunami’ of data offers to traders. Financial institutions are sitting on a gold mine: public data, as well as all the private data market participants receive and generate around their trading activities (RFQs, parent orders, axes, executions…). Because the volumes of data traders can access in real time are both restricted and fragmented, assessing the execution and efficiency of trades can only be done after the battle has been fought. What if you knew you would win before starting?

From reaction to prediction, a simple idea for improved outcomes

The idea is not new, and has always been implicitly used by market participants: leveraging past trading activity and knowledge to inform and improve current trading and execution decisions. If firms were able to instantaneously analyse a wide range of information and understand the variables and decisions that contributed to successful outcomes in the past, then it is entirely reasonable to expect similar results if those variables are known and those decisions are systematised.

Traders can do it the way they have always done it: relying solely on past experience of which banks, brokers or venues offered the most reliable liquidity in a given security. Alternatively, they could make an informed, systematised decision by analysing a broader range of securities they have traded or received interest on, over a configurable number of hours or days depending on the liquidity of the instrument, where a significant number of those activities relate to instruments similar to the security they are looking to trade (in terms of issuer or sector, potential venues, etc.). The idea is to combine all that information with recent axes received from dealers; aggregated, it provides pre-trade market intelligence that significantly enhances prediction, and even prevention, capabilities, and hence maximises the chances of a favourable outcome.

Your decision about how to execute a new order at a given moment becomes more deliberate and more informed, as traders can decide in real time what information they want to see and how they want to use it.

Let’s take a few examples based on an order received by a trading desk to buy USD 10M of ISIN US09659X2Q47, a senior non-preferred financial bond issued by BNP maturing in 2027.

  • Launching an RFQ (Request for Quote) with a large number of dealers is definitely not the right approach.
  • Being able to check, immediately and automatically, all the trades where a few dealers were consistently aggressive in selling this bond, this issuer in USD, or other non-preferred financials in the same bucket (for example, USD BPCE 2026 or USD ABN 2027) is highly valuable.
  • But if, out of these X automatically identified dealers, half shared in the last few days that they were trying to borrow this specific security for short covering, it may change the view…
  • And what about a dealer (not identified above) who sent an axe one or two days ago to sell these three bonds, even if no trade has been done with this specific dealer on this type of security in recent weeks?

These are only examples, but automatically running such simple queries, or more advanced algorithms, identifies the ‘most likely counterparts’ for the request. Simultaneously, it can suggest the ‘best proxy’ bond a trader could purchase instead, with its own list of ‘most likely counterparts’. The example above can obviously be replicated across other asset classes, each with its own specificities.
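The queries above can be sketched in a few lines of code. The following is a minimal, illustrative Python example, not Opensee's actual schema or algorithm: the record fields, dealer names and signal weights are hypothetical, and a production version would draw on far richer data.

```python
from collections import defaultdict

# Hypothetical recent activity; field names and weights are illustrative only.
trades = [
    {"dealer": "Dealer A", "isin": "US09659X2Q47", "side": "sell", "days_ago": 1},
    {"dealer": "Dealer A", "isin": "XS_BPCE_2026", "side": "sell", "days_ago": 3},
    {"dealer": "Dealer B", "isin": "US09659X2Q47", "side": "buy",  "days_ago": 2},
    {"dealer": "Dealer C", "isin": "XS_ABN_2027",  "side": "sell", "days_ago": 4},
]
axes = [
    {"dealer": "Dealer D", "isin": "US09659X2Q47", "side": "sell", "days_ago": 2},
]

def likely_counterparts(target_isins, lookback_days=5):
    """Rank dealers by recent selling activity in the target bond or its proxies."""
    scores = defaultdict(float)
    for t in trades:
        if t["isin"] in target_isins and t["side"] == "sell" and t["days_ago"] <= lookback_days:
            scores[t["dealer"]] += 1.0  # executed trades: strong signal
    for a in axes:
        if a["isin"] in target_isins and a["side"] == "sell" and a["days_ago"] <= lookback_days:
            scores[a["dealer"]] += 0.5  # axes: softer, forward-looking signal
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranked = likely_counterparts({"US09659X2Q47", "XS_BPCE_2026", "XS_ABN_2027"})
# Dealer A leads on past trades; Dealer D appears purely on the strength of an axe.
```

Note how the axe from Dealer D, with whom no trade was done, still surfaces in the ranking, which is precisely the fourth bullet above.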

If only 1+1=1…

Data is key for pre- and post-trade tools to be relevant, as they need to ingest and collect immense volumes of data, as already highlighted in our team’s TradeTech interview takeaways. The more data sources you can input, the better the output will be.

One of the largest obstacles to generating this pre-trade market intelligence is the fragmentation of data sources. The complexity of handling and integrating disparate datasets from various asset classes and trading desks makes it difficult to merge, standardise, and store the data in one place.

The ongoing digitalisation of the industry adds to the complexity through increased data volumes, sources, and formats, especially if firms store their secured financing and cash transaction data separately. Similarly, axes that dealers send to investors tend not to be stored; or if they are, they often sit in different repositories from the trades, while orders and/or RFQs live in yet another dataset.

Storing all this data in a single location would give traders and analysts access to all that information in one place, making it drastically easier to leverage the data and cross-reference information in real time. And the benefits don't stop there: enriched data, such as Transaction Cost Analysis from external providers, can be integrated to facilitate reporting to internal or external stakeholders and ensure a centralised source of information.
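The value of a single location is that every dataset becomes reachable through one key. As a toy sketch (the dataset names and fields are hypothetical, and a real platform would use a proper data store rather than in-memory dictionaries), unifying trades, axes and RFQs under the instrument identifier is enough to cross-reference them in one query:

```python
from collections import defaultdict

def build_unified_view(**datasets):
    """Index every dataset (trades, axes, RFQs…) by ISIN in one structure."""
    view = defaultdict(lambda: defaultdict(list))
    for source, records in datasets.items():
        for rec in records:
            view[rec["isin"]][source].append(rec)
    return view

view = build_unified_view(
    trades=[{"isin": "US09659X2Q47", "dealer": "Dealer A", "side": "sell"}],
    axes=[{"isin": "US09659X2Q47", "dealer": "Dealer D", "side": "sell"}],
    rfqs=[{"isin": "XS_BPCE_2026", "dealer": "Dealer B", "side": "buy"}],
)

# One lookup now returns all activity on a bond, whatever repository it came from.
activity_on_bond = view["US09659X2Q47"]
```

The same pattern extends to enriched data: a TCA dataset from an external provider is just one more keyword argument.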

Centralising the data is only the first of many steps to making it relevant to a user, as multiplying sources typically increases the probability of data quality issues. A recent Refinitiv survey demonstrated the importance of automation in FX markets: 53% of respondents identified delays in data source consolidation, along with input errors and data access, as the most frequently encountered problems. Processes that ensure and monitor correct data ingestion, and automatically identify outliers and errors, are critical to keeping trading information accurate and as clean as possible.

The capacity to easily query and aggregate the data along any axis, with the right algorithms to extract valuable insights and achieve a real ‘best execution’, is critical.

What now?

Opensee’s technology empowers market participants to gain a complete view of their trading activities, not only by producing reports linked to execution, but by unlocking actionable pre-trade intelligence on demand through accurate, flexible and fast analysis of their past activities. Data quality checks and the implementation of machine learning models in Python are natively offered. And in addition to the technology, Opensee brings an understanding of what is relevant in the various asset classes and has built in those specificities.

If you’re interested in a closer look at how Opensee’s Trade Management and Execution Analytics solution can solve these data challenges, don’t wait: read our blog “Taking Trade Best Execution to the Next Level Through Big Data Analytics”.

About the author: Christophe Rivoire joined Opensee in 2019 as Senior Advisor before becoming Head of Strategy and UK Country Manager. Before joining Opensee, he spent more than 25 years in the banking industry, especially in Fixed Income Trading in Europe and the US. He started his career at Louis Dreyfus Finance before joining HSBC in 1998, where he held various responsibilities. Combining risk, trading and technology has always been at the centre of his interests.
