In recent years, the accumulation of large data sets by companies, and the ability to process them through rapidly developing computer algorithms and artificial intelligence (together referred to as Big Data for the purposes of this article), have been discussed at great length in the competition law world. Competition authorities across Europe are paying ever-increasing attention to the phenomenon, and there is a growing, but somewhat disparate, body of case law.
Understanding regulatory authorities' current thinking on the topic is essential for tech companies that wish to navigate successfully through a fast-changing commercial landscape. We previously summarised the position set out in the joint French and German competition authority study on the compatibility of the antitrust toolkit with Big Data (see link here). This note rounds up recent developments and maps out what in-house lawyers need to watch out for when handling Big Data.
At its simplest, data is a product, and the same competition law analysis can be applied to it as to any other product. Services founded on data can likewise be analysed under competition law in the same way as any other service.
The difficulty is that data is not finite in the same way as most products; in many circumstances it is replicable. However, large data sets may be very difficult to replicate, first-mover advantage can be significant, and data can reinforce barriers to entry. In addition, the value of data is directly linked to the ability to process it: raw data can be worthless if it cannot be read and analysed to guide commercial decisions. The analysis is therefore significantly more complicated than it might seem at first glance.
a. Anti-competitive agreements involving Big Data
Perhaps the most likely form of anti-competitive agreement involving Big Data would be an agreement for exclusive licensing of, or exclusive access to, an important data set. Just as exclusive licences to other IP rights can restrict competition, it is easy to imagine circumstances in which exclusive licences to data can do so, particularly where the data set cannot easily be replicated.
At the more serious end of the scale, data and privacy could be the subject of hardcore, cartel-type behaviour. Companies compete on many things: price, quality and, increasingly, the strength of their privacy protection. In the context of mobile messaging apps, for example, companies such as WhatsApp proudly guarantee their users end-to-end encryption. Agreements intended to reduce that competition are likely to fall within the category of anti-competitive agreements. It is therefore possible to envisage competitors agreeing to reduce, or limit the extent of, their privacy offering, thereby reducing their internal costs and, in parallel, the level of competition within the market. Such an agreement could easily fall under the far-reaching test of Article 101 TFEU or Chapter I of the UK Competition Act 1998. Alternatively, as seems to be the subject of a current Commission investigation into Polish banks, companies could agree not to give access to data needed to enter a market.
Another potential data-related anti-competitive agreement would be an agreement to price discriminate on the basis of data obtained or provided. An online distributor could, for example, agree with a manufacturer to sell products at a variable price depending on how much individual shoppers are willing to pay, inferred from data about their shopping habits or their location derived from their IP addresses. Access to, and the ability to process quickly, a wide array of data would be key to such an agreement. Interestingly, the Commission is currently investigating an alleged agreement between the largest European tour operators and hotels which may contain clauses discriminating between customers based on information obtained about their nationality or country of residence.
The rapid development of AI tools also gives rise to new, hypothetical yet not unimaginable, types of anti-competitive agreement. If companies were to use AI to determine their prices (many already do), a situation could arise in which separate algorithms, each used by a competing company, decide of their own accord that the best way to maximise their respective companies' profits is to collude on price. There would then be an anti-competitive agreement between the companies, but without any human intervention. Where would liability lie in that instance? Could it sit with the companies that use the AI? How could such cartel behaviour be detected by mere humans? While such an example may seem far-fetched today, competition authorities are already asking themselves whether the current competition law tools are equipped to deal with such behaviour adequately.
b. Abuse of dominance
The accumulation of data is not, in itself, problematic under competition law. However, could owning a significant data set make you a dominant undertaking and therefore subject to added scrutiny? Or can a large data set reinforce a dominant position in another market? Does having a dominant position make you more able to accumulate a large data set and use it to exploit other markets?
The answer to these questions will depend on the type of data in question, the market in which a company operates and, crucially, the company's ability to process the data. The question is not a novel one. For example, key cases such as Microsoft centred on the provision of information (APIs), and the European Commission forced Thomson Reuters to provide access to data feeds as far back as 2012.
However, it seems that the boundaries are being pushed in terms of market definition and categories of abuse. The most likely type of abuse of dominance is perhaps where access to a particular data set is essential to enable competition in a downstream market. That said, as most data sets can be replicated, an abuse along these lines would require very specific facts and is likely to be exceptional.
The German Federal Cartel Office (FCO) investigation into Facebook is testing another type of abuse: whether Facebook abused a dominant position in the market for social networks through its terms of service on the gathering of user data. This would further extend the categories of abuse. In the European Commission's investigation into Polish banking associations referred to above, the Commission is also looking at potential abuse of (joint) dominance, most likely under the 'essential facilities' doctrine. It could certainly be an abuse of a dominant position to refuse to grant access to information where there is an objective necessity for access to a particular input.
c. Merger control
In the context of merger control, the European Commission has analysed in detail the effects on competition of combining two companies' data sets and whether this may lead to the foreclosure of competitors. In recent examples (Google/DoubleClick and Facebook/WhatsApp), the Commission concluded that the mergers would not create a competitive advantage which could not be replicated by competitors. In 2006, the European Court of Justice in Asnef-Equifax stated that issues relating to the sensitivity of personal data are not, as such, a matter for competition law. This was reiterated by the Commission more recently in its assessment of the Facebook/WhatsApp merger.
Notwithstanding this, privacy issues cannot be excluded from consideration under competition law. Indeed, despite the Commission's earlier position in Facebook/WhatsApp, it acknowledged more recently in Microsoft/LinkedIn that privacy is an important parameter of competition, so its policy may be shifting. Decisions taken by an undertaking on the collection and use of personal data can, in parallel, have economic and competitive implications. There may be a close link between a company's dominance, its data collection processes and competition on the relevant markets, which could justify taking privacy policies and regulations into account in competition proceedings.
This is reflected in the questions now being raised about the adequacy of the current merger control thresholds themselves, in the context of a growing digital economy and the increasing number of digital start-ups. Such companies often fail to generate high turnover at first, yet their corporate value may be significant as a result of their degree of innovation, the data sets they have acquired and their market presence in terms of users. The mismatch between high purchase prices and low turnover has resulted in a potential loophole which competition authorities are pro-actively trying to close.
i. Germany
In Germany, an additional, value-based merger control threshold came into force in 2017 which aims to capture digital market transactions with significant potential competition effects and therefore to address the above-mentioned loophole. The FCO will now review mergers where the value of the consideration paid exceeds €400 million and the parties are active in Germany to a significant extent. This new criterion, based on the value of the consideration rather than on turnover, looks at a target company's potential and therefore its competitive relevance.
ii. EU
In response to the debate on the adequacy and effectiveness of purely turnover-based merger notification thresholds in the digital age, the European Commission launched a public consultation on the EU merger control thresholds in October 2016. The consultation and proposed changes aim to simplify the current functioning of the Merger Control Regulation and to modernise the assessment for the digital economy.
The Commission specifically notes that, in the digital sector, some target companies, while having generated little turnover, may play a crucial competitive role because they hold commercially valuable data, yet are not captured by the current merger threshold tests. The current proposals include the introduction of a "size of the transaction" test in similar terms to the new German merger control test.
Interestingly, the EU has published the comments submitted in response to the consultation, and the vast majority of national competition authorities oppose the Commission's proposal. That may be one reason why the Commission does not yet appear to have reached a conclusion.
d. Market investigations
Enabling access to data is increasingly seen by competition authorities as a key way of ensuring competition in a market. For example, following the UK Competition and Markets Authority's (CMA) two high-profile market investigations into the energy market and the current account banking sector, the CMA in both cases opted for forced data sharing as part of its remedy package. Incumbent energy suppliers must disclose their customer lists to other operators, including where a customer's current tariff is coming to an end, to allow those operators to target new customers. Data protection concerns were partly addressed by giving customers the option to opt out. In the banking sector, a similar remedy, now termed "Open Banking", is due to come into force in January 2018, allowing current account customer data to be accessed directly by licensed banking start-ups.
In recent years there has been a lot of noise surrounding competition law's impact on Big Data. The above-mentioned legal developments indicate that the Commission, along with national authorities, is beginning to bite on several fronts, and this trend is likely to continue. It seems likely that we are moving towards a method of competition law enforcement that takes account of privacy and data protection issues.
Dominant market players should watch out for new categories of abuse, which are likely to be road-tested by competition authorities. Tech companies in general should also tread carefully when it comes to exclusive licensing of, or exclusive access to, important data sets. On the merger control front, with the exception of Germany, any changes to the merger filing thresholds are currently at an embryonic stage. When it comes to the substantive assessment, however, the Commission has already begun to take into account companies' ability to accumulate and process data sets.