Report from AI: Decoding IP – Day 1

Written By

Toby Bond

Partner
UK

I'm a partner in our Intellectual Property Group. Having studied physics at university, I'm fascinated by technology and the way in which it is reshaping our world.

Bird & Bird’s Toby Bond attended the first day of the UKIPO and WIPO joint conference on the interaction between AI technology and intellectual property law and practice.

The conference opened with an address from the IP Minister, Chris Skidmore MP, who highlighted that AI technology has the potential to increase the UK's GDP by 10% in the next decade. The Minister also announced the publication of the UKIPO’s report “Artificial Intelligence – a worldwide overview of AI patents”, which reviews trends in the AI sector worldwide with a focus on patenting by the UK AI sector. The report indicates rapid growth of worldwide patenting in AI technologies over the past decade, with an increase of over 400% in the number of published AI patent applications, while the UK AI sector has more than doubled its patenting activity over the same period.

Lord David Kitchin JSC and Francis Gurry, Director General of the World Intellectual Property Organization, followed with insightful introductory remarks highlighting the potential for AI technology to drive economic growth. WIPO research suggests a sharp upswing in AI-related scientific publications since 2003, followed by a surge in AI patent filings from 2013 onwards. Indeed, 53% of all AI-related patents filed since 1950 have been filed since 2013. Both speakers also emphasised the impact of IP laws on the development of AI technology, as AI gives rise to new questions for both patent and copyright law. For example, should inventions arising from the use of AI systems be patentable and, if so, who should be considered the inventor? Will the assessment of inventive step need to be revisited if the state of the art reaches the point where the skilled person would inevitably think to use an AI system? In relation to copyright, how will existing concepts of authorship and originality sit with AI-generated works? Can an AI-generated work ever be an author’s intellectual creation?

Looking towards the horizon, many of the IP challenges surrounding AI systems concern access to and use of data. More training data generally results in better AI decision making, but how do we reconcile this with proprietary rights in data and a push to restrict access to certain categories of data for other policy reasons, such as data privacy and data localisation requirements? This tension between openness and restriction of data sets needs to be resolved at the policy level, and the UK has an important role to play in seeking international consensus on a way forward.

Following the opening remarks, Day 1 of the conference consisted of four panel sessions: “Applications of AI and New Business Models”, “AI & IP – Disrupting the Established”, “Ownership, Entitlement and Liability” and “Ethics & Public Perception”. A lot of ground was covered during the day, but the author’s personal reflections on some of the most interesting themes are summarised below.

Data is at the heart of AI

Many speakers emphasised the importance of data to the development of AI. Getting access to the right data to train a system can be key, but negotiating data-sharing agreements is often a slow process, which can stifle start-up businesses. Although template agreements and open licensing policies can help, some speakers questioned whether IP and other rights which restrict access to data sets need to be re-examined in order to support the free flow of data.

Another challenge the speakers identified is a lack of transparency. With a patchwork of different rights applying to data, it is often hard to understand where the boundaries lie and how data can be used. Some organisations address this by focusing on data sets where the chance of rights being enforced is low: a dataset of emails from the now-defunct energy company Enron, for example, is widely used by AI developers. Another strategy is to locate AI research in jurisdictions which have permissive “fair use” exceptions to copyright infringement, such as the US and Israel. While the EU’s new text and data mining exceptions might go some way towards making Europe a more permissive environment, the potential for rights holders to opt out by indicating that they do not want their works subject to text and data mining could present its own challenges.

Inventive and creative AI

Picking up on the questions posed by Lord Kitchin, Dr Noam Shemtov of Queen Mary University of London presented the findings of the EPO’s recent legal study on AI inventorship. The study concluded that the law in most jurisdictions either expressly or by implication requires named inventors to be humans. It also concluded that there would be little benefit in allowing AI systems to be named as inventors: there is strong evidence that being named as an inventor incentivises humans (e.g. by enhancing their reputation), but little evidence to suggest that the same would apply to AI systems. Opinions differed on whether there is legal difficulty in identifying a human inventor where an AI is used as part of the invention process. Some felt that the collaborative nature of many AI projects, with the potential for multiple parties to be involved in the design, training and implementation of an AI system, necessitated further legislation to clarify the position. Others suggested that in the short to medium term AI systems will only be used as tools in the invention process and can be accommodated within existing ownership laws. We do not, for example, think that the designer of an advanced microscope should own inventions arising from the use of that microscope.

In relation to copyright, the speakers generally agreed that whether copyright currently subsists in works created by AI depends on the nature of the work. International conventions require the protection of films, sound recordings and broadcasts without a threshold requirement of originality and without a requirement for human authorship. Where such works are created using an AI system, they are likely to be protected by copyright. The same is not true for works of authorship (known as literary, dramatic, musical or artistic works in the UK), where copyright protection requires the work to be original. Following the CJEU’s case law (starting with Infopaq), originality requires the intellectual creation of an author, which represents an expression of their personality and which is unconstrained by technical requirements. The majority view appears to be that this requirement will disqualify most works of authorship created by AI from copyright protection. While section 9(3) of the UK’s Copyright, Designs and Patents Act 1988 indicates that the author of a literary, dramatic, musical or artistic work created by a computer (i.e. without a human author) will be the person who made the arrangements necessary for the creation of the work, a number of speakers asked whether this provision is still compatible with the CJEU’s case law on originality.

Aside from the question of whether AI-generated works could qualify for copyright under existing legislation, there was significant debate between the speakers and the audience as to whether such works should ever qualify for protection. One view was that we should not lightly abandon the existing anthropocentrism in copyright law, which arises because a fundamental justification for awarding copyright is to reward (and therefore incentivise) human creativity as a positive social activity. An opposing view is that the availability of works is itself a social good and should be incentivised regardless of how the works are created. What became apparent from the discussion is that this debate strikes at the core of some deep philosophical issues regarding the nature and value of human creativity.

Ethics & public perception

No AI conference would be complete without a discussion of the broader ethical implications of AI technology, and with Dr Christopher Markou of the University of Cambridge and Dr Jon Machtynger, RAEng Visiting Professor for AI & Cloud Innovation at the University of Surrey, we were in excellent hands. What emerged from the discussion was the importance of both the ethical frameworks used during the development of AI systems and the wider law and policy response to the potential impacts of AI on specific parts of society. While discussion of ethical frameworks for AI is very much in vogue at present, it’s important not to forget that regulators also have a duty to predict, monitor and, where necessary, legislate to reduce the potential harm of AI systems. When it comes to high-profile issues such as the use of facial recognition technology, there are signs that legislators are increasingly aware that the public’s concern regarding certain types of AI technology may require them to act. However, in a specialist area such as IP, the IP community will need to play a more active role in identifying possible issues and in guiding regulators through the process of addressing them.

Watch this space for our report of Day 2!
