The retail sector increasingly feels like the stereotypical crowded bazaar as traditional retailers and online-only players struggle for attention from fickle consumers. Even when a consumer has been tempted to view your wares, conversion rates continue to be a concern (and can be as low as 2% for online sales).
This explains why creating a truly personalised shopping experience has become the holy grail for many retailers.
Attempts at personalisation are nothing new, but many tools offer little more than simple segmentation of consumers into demographic groups. By contrast, the advanced personalisation solutions now becoming available allow you to customise the shopping experience in real time in response to each consumer's individual behaviour and shopping history.
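To make that contrast concrete, the sketch below shows, in simplified form, how a ranking driven by individual behaviour differs from one driven by segment membership; the catalogue, categories and ranking rule are all illustrative assumptions rather than any real product.

```python
# A toy illustration (catalogue, categories and ranking rule are hypothetical)
# of behaviour-driven personalisation: the product ranking reacts to each
# shopper's own recent activity, not the demographic segment they fall into.
from collections import Counter

CATALOGUE = {
    "wool coat": "outerwear", "rain jacket": "outerwear",
    "running shoes": "footwear", "leather boots": "footwear",
}

def rank_products(recently_viewed: list[str]) -> list[str]:
    """Rank catalogue items by affinity with the categories this shopper
    has just been browsing."""
    affinity = Counter(CATALOGUE[p] for p in recently_viewed if p in CATALOGUE)
    return sorted(CATALOGUE, key=lambda p: affinity[CATALOGUE[p]], reverse=True)

# Two shoppers in the same demographic segment see different rankings:
print(rank_products(["rain jacket"]))    # outerwear items ranked first
print(rank_products(["running shoes"]))  # footwear items ranked first
```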
The key technologies underpinning personalisation are artificial intelligence and big data (generally powered by the use of cloud computing platforms). The opportunities offered by such tech are impressive. For example:
However, a recent Retail Week survey revealed that 57% of respondents felt they were not particularly knowledgeable about AI or machine learning tech (with 21% saying they knew nothing about it at all).
So, what legal issues do you need to think about when you are procuring AI or big data solutions to boost your drive for personalisation?
Data is frequently described as the "new oil" and is the most critical component of any personalisation solution. As such, it would be entirely natural for the customer of an AI/big data solution to want the contract to include a clear statement that "I own all of the data".
However, under current UK and EU laws, there is no real concept of data ownership. Many people find this surprising, but the existing concepts of copyright, database rights and trade secrets will not adequately protect many types of customer data that will be part of any personalisation solution.
Given this, your contract will need to set out clearly how the relevant data will be used by each party. In particular, the customer will likely want to limit the supplier to using the data only to deliver the personalisation solution itself.
Equally, when data sets from third parties are being used, the contract will need to clearly set out which party is responsible for procuring access to such data (and for ensuring that the rights to use the data are broad enough to cover the proposed personalisation activities).
In contrast, the supplier will often want to use the data for its own purposes, or perhaps even to sell it to third parties (e.g. by offering "market analysis" reports or similar).
Suppliers will often seek to reassure their customers that all consumer data will be aggregated and anonymised before being used in this way. However, this area needs to be approached with caution – anonymisation can be difficult to achieve in practice. For example, a well-known study by a US university concluded that 87% of the US population could be personally identified using just three types of data (ZIP code, gender and date of birth). European data protection regulators have also made it clear that achieving true anonymisation is not an easy task.
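The scale of that risk can be measured directly. The sketch below – assuming a pandas DataFrame of consumer records, with hypothetical column names – counts the proportion of records that are unique on those same three fields; a high proportion suggests the "anonymised" data set remains re-identifiable.

```python
# A minimal k-anonymity-style check, assuming a pandas DataFrame of consumer
# records; the column names are hypothetical.
import pandas as pd

QUASI_IDENTIFIERS = ["zip_code", "gender", "date_of_birth"]

def uniqueness_rate(df: pd.DataFrame, quasi_identifiers=QUASI_IDENTIFIERS) -> float:
    """Fraction of records whose quasi-identifier combination is unique
    within the data set (and hence potentially re-identifiable)."""
    group_sizes = df.groupby(quasi_identifiers).size()
    unique_records = int((group_sizes == 1).sum())  # each size-1 group is one record
    return unique_records / len(df)

# Toy example: every record here is unique on the three fields.
records = pd.DataFrame({
    "zip_code": ["10001", "10002", "10001"],
    "gender": ["F", "M", "F"],
    "date_of_birth": ["1980-01-01", "1975-06-15", "1992-03-30"],
})
print(f"{uniqueness_rate(records):.0%} of records are unique")  # -> 100%
```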
We will be looking at this issue – and other data protection issues raised by personalisation – in a later article.
Allocating risk and liability can be more complex for AI than for other types of technology, as most AI solutions will have a learning functionality.
Take the example of a chatbot being offensive to a customer. Is this due to a defect in the underlying solution, the way in which it was trained, or behaviour that it has learned from its real-world interactions?
A market consensus on how to treat these issues has yet to emerge. One radical approach being discussed is to treat the AI solution as a separate legal person, combined with mandatory insurance. While this approach is more relevant to tech such as fully autonomous vehicles or robots, it is not inconceivable that, at some point in the future, a chatbot could become intelligent enough that it is effectively making its own autonomous decisions.
In the meantime, your contract will need to carefully address how risks and liabilities relating to the AI solution will be allocated between the parties.
A related issue is what contractual warranties are appropriate for an AI solution. For example, the traditional approach of seeking a warranty that services will be provided with reasonable skill and care may not make sense in an AI context (indeed, it is questionable whether an AI solution is even capable of being negligent, or whether negligence remains a distinctly human trait).
Implementation and acceptance testing are a key part of the contract for any major IT project. This is particularly so with AI solutions as there is likely to be an element of "training" the solution so it knows how to respond in real-world customer interactions. This could include:
The contract will need to make clear:
In many cases, the customer will likely want to pilot and/or soft launch the personalisation solution before committing to a full roll-out. This will need to be factored into the agreed pricing model and implementation plan.
Personalisation solutions are likely to be rolled out within tight timeframes. As such, the contract should include some practical remedies for dealing with implementation delays and testing problems (rather than relying on traditional dispute resolution mechanisms, which are likely to be cumbersome and time-consuming). These could include:
As AI solutions are built on ever more complex algorithms and models (and are capable of analysing ever bigger data sets), there is a risk that they effectively become a "black box", where it is difficult – if not impossible – to understand the basis on which a particular recommendation or decision was made.
This obviously becomes a concern where that recommendation or decision is inaccurate or just plain wrong. While this concern is more acute where AI is being used to determine the outcome of mortgage applications or even the length of jail sentences (as in the US), it is still relevant in the retail sector.
There are various approaches being developed to address this concern. A number of these can be thought of as similar to flight recorders – in effect, monitoring what data the AI solution has looked at in making the relevant decision and whether particular data sets dominated this decision-making process (or were discarded by the AI solution).
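As a very rough sketch of that idea – assuming, purely for illustration, a simple linear scoring model with hypothetical feature names – each decision could be logged together with the inputs that dominated it or were discarded:

```python
# A 'flight recorder' sketch: every scoring decision is appended to a log
# together with its inputs, the inputs that drove the score and the inputs
# the model effectively ignored. The linear model, weights and feature names
# are illustrative assumptions.
import json
import time

WEIGHTS = {"recency_days": -0.5, "basket_value": 0.8, "loyalty_tier": 1.2}

def score_and_record(features: dict, log_path: str = "decision_log.jsonl") -> float:
    contributions = {k: WEIGHTS.get(k, 0.0) * v for k, v in features.items()}
    score = sum(contributions.values())
    entry = {
        "timestamp": time.time(),
        "inputs": features,
        "score": score,
        # Inputs ranked by how strongly they drove the score
        "dominant_inputs": sorted(
            contributions, key=lambda k: abs(contributions[k]), reverse=True
        ),
        # Inputs the model had no weight for, i.e. effectively discarded
        "discarded_inputs": [k for k in features if k not in WEIGHTS],
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return score

score_and_record({"recency_days": 3, "basket_value": 42.0, "newsletter_opens": 7})
```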
Any personalisation solution will likely need to interface with a number of other key IT systems, including the main customer website, your CRM system, your loyalty and marketing platforms and your ERP system, to name but a few.
Testing these interfaces will be a key part of the implementation and acceptance testing regime.
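In practice, the contract might require automated interface smoke tests along the lines of the sketch below; the endpoint, identifiers and payload shape are all hypothetical:

```python
# A minimal interface smoke test, assuming a hypothetical REST-style
# personalisation endpoint that is fed customer IDs from the CRM system.
import requests

PERSONALISATION_URL = "https://personalisation.example.com/recommendations"

def test_crm_to_personalisation_interface(customer_id: str = "CUST-0001") -> None:
    """Check that the personalisation solution accepts a CRM customer ID
    and returns a well-formed list of recommendations."""
    response = requests.get(
        PERSONALISATION_URL, params={"customer_id": customer_id}, timeout=5
    )
    assert response.status_code == 200, "interface unreachable or rejecting CRM IDs"
    payload = response.json()
    assert isinstance(payload.get("recommendations"), list), "malformed payload"
    assert payload["recommendations"], "no recommendations for a known customer"
```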
As personalisation solutions become more common, they will become a mission-critical system for retailers. As with any critical IT solution, the customer should ensure that there is a clear and robust SLA in place with the supplier.
Crucially, this should not rely simply on service credits (which are often a blunt tool for dealing with SLA failures). Instead, the remedy regime described above in relation to implementation delays and testing failures can usefully be reused to deal with ongoing SLA issues as well. To support this, it helps to distinguish between minor or one-off SLA failures and persistent or serious SLA failures.
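One way of operationalising that distinction in monitoring terms is sketched below; the availability target, rolling window and escalation threshold are illustrative assumptions rather than standard contractual terms:

```python
# A sketch of minor-versus-persistent SLA failure tracking over a rolling
# window of monthly availability results. All thresholds are hypothetical.
from collections import deque

class SlaMonitor:
    def __init__(self, target=0.995, persistent_after=3, window=6):
        self.target = target                      # e.g. 99.5% monthly availability
        self.persistent_after = persistent_after  # failures within the window
        self.recent = deque(maxlen=window)        # rolling record of pass/fail

    def record_month(self, availability: float) -> str:
        failed = availability < self.target
        self.recent.append(failed)
        if not failed:
            return "SLA met"
        if sum(self.recent) >= self.persistent_after:
            return "persistent failure: escalated remedies apply"
        return "minor failure: service credits apply"

monitor = SlaMonitor()
for availability in (0.999, 0.993, 0.991, 0.990):
    print(monitor.record_month(availability))
```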
It is fairly common for the supplier to demand the right to suspend customer access to its solution in certain circumstances. These can range from the customer failing to comply with "acceptable use" requirements to non-payment.
Given the detrimental effect on customer engagement and sales if the personalisation solution is suspended, any such regime needs to be tightly drawn, focussing on the following issues:
Given the quantity – and type – of data that is likely to be used in any personalisation solution, data protection issues need to be carefully addressed. In particular, retailers need to bear in mind the upcoming implementation of both the GDPR and the ePrivacy Regulation (which will regulate electronic/location marketing and the use of tracking technology).
We will be looking at some of the data protection issues raised by personalisation in a later article.
In terms of cybersecurity generally, your contract should provide for a detailed security regime. This is likely to cover some or all of the following topics:
Online marketplaces may also be caught by the NIS Directive when it is implemented across the EU (which must happen by 9 May 2018). This will impose similar – but separate – obligations to the GDPR in terms of implementation of appropriate security measures and the notification of security incidents.