Companies should be alert to legal issues around data collection and use

Data plays a critical role in AI-driven solutions, directly affecting their accuracy and effectiveness. As organisations increasingly use AI for competitive advantage and innovation, careful consideration of the legal implications of data collection and use, including the reuse of existing data sets, is essential. An effective data governance framework is central to the responsible use of AI: it establishes accountability for data and ensures accuracy, privacy, ethics, security and compliance.

Strict data protection and privacy laws around the world govern the use of identifiable data (e.g., personal data in the EU or personally identifiable information in the U.S.), requiring organisations to address bias, fairness, transparency and accountability when using such data for AI.

Ensuring compliance is all the more important as data protection regulators emerge as key players in the regulation of AI, stepping up their enforcement activities to ensure that fundamental privacy rights are adequately protected in the development and use of AI. Their prominence stems from the extensive involvement of personal data in AI systems and from the significant overlap between AI governance in general and data protection governance for AI systems in particular.

While data protection laws and (generative) AI are generally compatible, in certain situations applying data protection requirements to AI creates difficulties that need to be addressed. This is the case, for example, with inaccurate personal data produced as output by an AI model, where the data subject's right to rectification or erasure may be difficult to enforce in practice because of the black-box nature of AI systems.

The principles of data minimisation and purpose limitation, which are firmly established in privacy laws worldwide, also highlight apparent tensions between privacy requirements and AI. For example, organisations need to consider how to reconcile the need for large training data sets with data minimisation, including through de-identification or anonymisation, as illustrated in the sketch below.
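
By way of illustration only, the following Python sketch shows one possible approach to data minimisation before a data set is used for AI training: dropping fields that are not needed for the stated purpose and pseudonymising the remaining direct identifier. The field names, the allow-list and the salted-hash approach are assumptions made for this example rather than a prescribed method, and pseudonymised data will often still qualify as personal data under laws such as the GDPR.

```python
# Illustrative sketch only: minimising and pseudonymising records before
# they are used for AI training. Field names, the allow-list and the
# salted-hash approach are assumptions for this example, not a standard.
# Note: pseudonymised data generally remains personal data under the GDPR.

import hashlib
import secrets

# Salt kept separately from the data set (e.g., in a key management system).
SALT = secrets.token_hex(16)

# Purpose limitation: keep only fields needed for the stated purpose.
ALLOWED_FIELDS = {"age_band", "region", "product_category", "churned"}


def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymisation,
    not anonymisation: re-identification remains possible with the salt)."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]


def minimise(record: dict) -> dict:
    """Keep only allow-listed fields plus a pseudonymised customer key."""
    reduced = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    reduced["customer_key"] = pseudonymise(record["customer_id"])
    return reduced


if __name__ == "__main__":
    raw = {
        "customer_id": "C-102938",
        "full_name": "Jane Doe",          # dropped: not needed for the purpose
        "email": "jane.doe@example.com",  # dropped: direct identifier
        "age_band": "30-39",
        "region": "EU-West",
        "product_category": "insurance",
        "churned": False,
    }
    print(minimise(raw))
```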

However, organisations may take some reassurance from the fact that privacy regulators around the world are still on a learning curve in understanding AI and its implications. Regulators have begun to address AI and to issue guidance on how existing privacy rules apply to AI technologies. Organisations developing or deploying AI need to monitor these developments carefully.

Moreover, the use of non-identifiable data, such as business-related information containing financial and technical details, strategic expertise and proprietary knowledge, may also be subject to legal restrictions. In addition to being protected by intellectual property rights, such data may be considered confidential information under applicable laws or contractual agreements, making its mishandling potentially subject to civil and criminal penalties.

Organisations operating in multiple regions face the added complexity of complying with wide-ranging international and national data requirements across the countries in which they operate. Compliance with these data regulations not only avoids penalties, but also builds trust with stakeholders and helps prevent reputational damage, loss of business opportunities and legal liability.

For further information, please contact Nils Lölfing