
02 February 2021

Alexander Westphal and Richard Comotto
ICMA

ICMA's Alexander Westphal and Richard Comotto look back at the first year of SFTR reporting, including what the data does and doesn’t tell us about the market, and areas for improvement for both regulators and market participants

Can you explain what the International Capital Market Association (ICMA) is doing in relation to SFTR public data?

Westphal: Since reporting started back in July, we have been publishing the available public data from Securities Financing Transactions Regulation (SFTR) reporting on a weekly basis. The data is based on the summary statistics that the four trade repositories (TRs) are obliged to publish under SFTR, even though these unfortunately cover only a very small subset of the data points captured by SFTR.

We take the data from the TRs, aggregate it and publish it in a format that is hopefully much more accessible than the format prescribed by the European Securities and Markets Authority (ESMA), which the TRs have to use. Alongside the data itself, we have also been publishing trends and charts, which we hope contribute towards improved transparency of the repo market. These are also useful complements to our existing repo publications, including of course our bi-annual European Repo Survey, which has been running since 2001 and remains the most comprehensive source of public data on the size and composition of the European repo market.

Richard, you’ve been vocal in highlighting issues with the data reported under SFTR. In particular, you’ve suggested that the reported collateral numbers are wrong. Can you explain why?

Comotto: Something must be wrong when some items are reported to be collateralised by 12 and a half million per cent. In fact, no items are even close to where they should be. New repo shows an average of 32,000 per cent collateralisation; outstanding repo, 147,000 per cent. Some securities financing transactions (SFTs) even have negative collateral. And it has to be said that some of the loan data also looks dubious.

To an extent, problems were expected with the collateral numbers. They arise largely because the aggregation of collateral data does not seem to have been thought through. ESMA have failed to address the issue of how to add collateral allocated gross to individual repos to collateral allocated net to portfolios of repo. Net collateralisation will make collateral amounts look too small when compared with the size of loans. And as collateral that has been given is a negative number and collateral that has been taken is positive, you cannot simply add everything up. Unfortunately, ESMA failed to provide the necessary detailed methodology to the trade repositories and, on many issues, the TRs have each gone their own way.
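The signage problem described above can be sketched with a toy example. The field names and record layout here are invented for illustration and are not the actual TR report format; the point is only that when both sides of the same repo report collateral with opposite signs, a naive sum cancels the collateral while double-counting the loan.

```python
# Hypothetical two-sided reports of a single EUR 100m repo, collateralised
# at 102 per cent. Field names are invented for illustration only.
reports = [
    {"uti": "UTI-1", "side": "taker", "loan": 100.0, "collateral": 102.0},
    {"uti": "UTI-1", "side": "giver", "loan": 100.0, "collateral": -102.0},
]

# Naive aggregation: add every figure regardless of sign or side.
naive_collateral = sum(r["collateral"] for r in reports)
naive_loan = sum(r["loan"] for r in reports)

# The given (negative) and taken (positive) collateral cancel out, while
# the loan is counted twice, producing a meaningless collateralisation ratio.
print(naive_collateral)               # 0.0
print(naive_loan)                     # 200.0
print(naive_collateral / naive_loan)  # 0.0 -- i.e. 0 per cent collateralised
```

Depending on which legs a TR happens to receive, the same naive summation can just as easily produce wildly inflated or negative collateral figures, which is consistent with the implausible percentages seen in the published data.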

Could you expand on what you think ESMA should do to improve the usability of the SFTR public data?

Comotto: ESMA needs to sit down, analyse the basic challenge of meaningfully representing collateralisation, talk to the TRs and translate solutions into algorithms to be handed out to the TRs.

One possible improvement would be to split the loan and collateral data by gross and net collateralisation. To overcome the arithmetic signage problem, they need to filter collateral by unique transaction identifier (UTI), so that only the collateral for one side of each repo is counted. And they need to do the same for portfolios which are net collateralised, where the collateral reports do not have UTIs (which would require filtering by LEI pairs and master agreement).
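A minimal sketch of the UTI-filtering idea, using the same invented record layout as above (not the actual TR format): each UTI is counted once, and collateral is taken in absolute value so that the given and taken legs agree.

```python
def collateralisation(reports):
    """Aggregate collateralisation, counting each UTI only once and using
    absolute collateral values so the signed given/taken legs agree.
    Illustrative only; real SFTR records are far richer than this."""
    seen = set()
    loan = coll = 0.0
    for r in reports:
        if r["uti"] in seen:
            continue  # skip the other side of a repo already counted
        seen.add(r["uti"])
        loan += r["loan"]
        coll += abs(r["collateral"])
    return 100.0 * coll / loan

reports = [
    {"uti": "UTI-1", "loan": 100.0, "collateral": 102.0},
    {"uti": "UTI-1", "loan": 100.0, "collateral": -102.0},  # other side, same repo
    {"uti": "UTI-2", "loan": 50.0, "collateral": 51.0},
]
print(collateralisation(reports))  # 102.0 per cent
```

Net-collateralised portfolios would need a different key, since those collateral reports carry no UTI; as suggested above, grouping by counterparty LEI pair and master agreement would play the same deduplicating role.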

To be frank, I am not sure the collateral data could tell us anything useful even if they were accurate. What value is there in knowing that all the repos in the EU are collateralised by 102 per cent or 105 per cent? What we really need from ESMA is more data, e.g. SFTs broken down by cash currency, origin of collateral, maturity and so on. This would require ESMA to increase the number of data points to be published by the TRs – hopefully something for the upcoming review of SFTR.

Some aspects of the reporting regime, including the XML schema itself, were known to be sub-optimal prior to go-live. What’s being done to fix these issues?

Westphal: Yes, some of the problems we are now seeing are indeed not new. We have been in a very open and constructive dialogue with ESMA since the beginning and have flagged some of the problems. Others have only emerged after the reporting started.

Based on feedback from members of our SFTR task force, we have put together a list of the most common reporting issues encountered post-go-live, which we have shared with ESMA and national regulators. Of course, the 50-odd issues that we have identified go beyond the problems that we observe with the public data and are in fact only partly due to issues with the official rules. But for those that are, we hope these can be addressed in future iterations of the relevant guidance documents. We know that ESMA is currently reviewing the existing documents, including validation rules and XML schemas, and is looking to publish updated versions relatively soon, but this obviously takes some time to go through the formal process.

On a more general note, it is hardly surprising that there are still issues, given the complexity of the reporting rules and the level of detail required. It was always clear that this would be a learning process for everyone involved: not only ESMA and the trade repositories, but also reporting firms, market infrastructures and vendors. Overall, I think it is fair to say that the SFTR implementation has been a great success, especially compared to similar implementation efforts in the past. The level of cross-industry collaboration on SFTR has really been quite unique, with over 700 active members on our task force alone. This has certainly been a key factor.

Brexit created a schism in SFTR reporting data pools and also forced many entities to double report. How big an issue is this for overall market transparency and data quality and is this a sustainable model for the long term?

Westphal: From a public data perspective this split is indeed unfortunate. As it is not possible to simply aggregate the numbers, we have had to start publishing separate figures for the UK and the EU, which obviously disrupts the data series that we’ve been building up since July. The problem is not so much double reporting by firms, although this does happen particularly for some branches, but rather the overlap in the data.

Take the example of trades between an EU and a UK counterparty. These were previously reconciled and hence only counted once in the data, but now they are reported separately to a UK TR and an EU TR, and therefore of course also counted twice. In the short term this creates some friction, but in the longer term there shouldn’t be any fundamental issue with this setup. Ultimately, SFTR reporting was born from a global initiative by the Financial Stability Board (FSB) and was always meant to be separately implemented in the different jurisdictions. The FSB agreed some standards that should allow the global aggregation of the data and this remains valid.

There are of course other related issues beyond the public data. What firms are mainly worried about is potential divergence between the UK and the EU rules. As mentioned, this is an extremely complex regime and a lot of effort and money has been spent to build the systems to comply with the specific SFTR rules, so anything that would require a significant adjustment on either side could be quite problematic.

Going beyond SFTR, Richard, you also recently highlighted the need for repo platforms and CCPs to stop publishing only term-adjusted and moving-average turnovers, in favour of unadjusted average turnover and end-period positions. How will this help market transparency?

Comotto: That is just a bee in my personal bonnet. Most platforms and central clearing counterparties (CCPs) are very helpful with data. But processed numbers always look disingenuous. There’s a place for term-adjustment and moving averages but there’s nothing as good as raw numbers. So why not also provide unadjusted and daily or weekly or even monthly averages and, if you want to show you’re doing term business, give a maturity distribution? And how about publishing turnover as well as outstanding amounts?

How would raw numbers help market transparency?

Comotto: Well, it would help people like me add them up and compare them. I don’t want to criticise anyone who regularly publishes data, even if they adjust or average it. The real problem is that some infrastructures don’t publish any numbers at all or do so only very occasionally.

In some cases, it’s because they are embarrassed by their low volumes. As for the others, who knows? But infrastructures are at least likely to have a commercial reason for their reticence. The same cannot be said for central banks. It’s inexcusable that the European Central Bank, Bank of England and other central banks do not or no longer publish the money market data they collect from samples of banks and dealers. Time to lead by example.
