10-22-2018 10:15 AM
By David Hoffman, Associate General Counsel and Global Privacy Officer, and Riccardo Masucci, Global Director of Privacy
People are talking about AI’s enormous potential in life-enhancing and potentially life-saving technologies: precision medicine, disease detection, driving assistance, increased productivity, workplace safety, better access to education and more. At Intel we are developing many of these technologies, and we are focused on integrating artificial intelligence capabilities across the global digital infrastructure. At the same time, we recognize the need to consider the possible implications of AI for individuals and society – particularly when it comes to privacy and data protection.
During the 40th International Conference of Data Protection and Privacy Commissioners, we released a paper on Protecting Individuals’ Privacy and Data in the Artificial Intelligence World. Due to advances in computing power, data collection and analytics, many technologies are able to make autonomous determinations in near-real time – a capability that has implications for privacy. With this in mind, we make five observations:
- Increased automation should not translate to less privacy protection
- Explainability requires accountability: organizations need to demonstrate that they have processes in place to explain how algorithms arrive at specific results
- Ethical data processing is built on privacy
- Privacy protects who we are (how others see us and how we see ourselves)
- Encryption and de-identification help address privacy in AI
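To make the last observation concrete, here is a minimal de-identification sketch – our illustration, not a method described in the paper. It pseudonymizes direct identifiers with salted SHA-256 hashes, so records stay linkable for analysis without exposing raw identities; the `pseudonymize` helper and the field names are hypothetical.

```python
import hashlib

def pseudonymize(record, id_fields, salt):
    """Replace direct identifiers with salted, truncated SHA-256 hashes.

    The same (salt, value) pair always maps to the same pseudonym, so
    records can still be joined for analysis; non-identifying fields
    are left untouched.
    """
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode("utf-8")).hexdigest()
            out[field] = digest[:16]  # truncated pseudonym
    return out

record = {"name": "Alice", "email": "alice@example.com", "age": 34}
safe = pseudonymize(record, ["name", "email"], salt="per-dataset-secret")
```

Note that salted hashing alone is pseudonymization rather than full anonymization: whoever holds the salt can re-link records, so the salt itself must be protected.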
We then offer six policy recommendations for privacy in the age of AI:
- New legislative and regulatory initiatives should be comprehensive, technology neutral, and support the free flow of data: Horizontal legislation can encompass both data uses and technologies that fall outside existing sectoral laws and that are still unforeseen.
- Organizations should embrace risk-based accountability approaches, putting in place technical (privacy-by-design) or organizational measures (product development lifecycles and ethics review boards) to minimize privacy risks in AI.
- Automated decision-making should be fostered while being augmented with safeguards to protect individuals: Legitimate interest should be acknowledged as a legal basis for data processing for AI. Industry and governments should work together on algorithm explainability and risk-based degrees of human oversight to minimize the risk to citizens from automated decision-making.
- Governments should promote access to data, for example, opening up government data, supporting the creation of reliable datasets available to all, fostering incentives for data sharing, investing in the development of voluntary international standards (e.g., for algorithmic explainability) and promoting cultural diversity in datasets.
- Funding research in security areas like homomorphic encryption, which can enable analysis of personal data while it remains encrypted, is essential to protecting privacy.
- It takes data to protect data: Algorithms can help detect unintended discrimination and bias, identity-theft risks, and cyber threats.
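As one illustration of that last point – a sketch of our own, not a technique from the paper – a simple selection-rate comparison can surface potential disparate impact in automated decisions. The code below flags any group whose selection rate falls below 80% of the best-performing group's rate (the "four-fifths" heuristic used in US employment-discrimination analysis); the function names and data are hypothetical.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs.
    Returns each group's selection rate (selected / total)."""
    totals, selected = {}, {}
    for group, sel in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if sel else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest group's rate (the 'four-fifths' heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Hypothetical data: group A selected 8/10 times, group B 4/10 times.
data = ([("A", True)] * 8 + [("A", False)] * 2 +
        [("B", True)] * 4 + [("B", False)] * 6)
flagged = disparate_impact_flags(data)  # group B's rate (0.4) < 0.8 * 0.8
```

A check like this is only a screening signal, not proof of discrimination, but it shows how data about outcomes can itself help protect the individuals those outcomes affect.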
We welcome the opportunity to discuss these issues further as well as to receive feedback on our proposals. Intel stands ready to work closely with all interested stakeholders worldwide to develop viable pathways to pursue AI innovation and privacy together.