By Claire Vishik, Intel Fellow, and Riccardo Masucci, Global Director of Privacy
For any technology area expected to gain considerable importance, one key consideration is the development of international standards to ensure global interoperability and harmonization. Artificial Intelligence (AI) is no exception. A number of technical standardization efforts are already under way in international standards bodies such as ISO/IEC JTC1 (where a dedicated committee for AI, SC42, has been created) and IEEE.
AI is an area that depends on established infrastructure, advances in the foundational components of the ecosystem, and innovative approaches to algorithm development and privacy-preserving techniques. On February 11, 2019, the US government issued an Executive Order on Artificial Intelligence. The EO directed the National Institute of Standards and Technology (NIST) to create a plan for federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies. To this end, NIST published a request for information (RFI) on artificial intelligence in May.
Intel prepared a response to the RFI that reiterated the importance of federal engagement in voluntary, consensus-based international standards and described key technical and research aspects of AI trustworthiness for consideration in the plan.
The current success of machine-learning-based AI is predicated on technological advancements in computing power, network bandwidth, and hardware and software architectures; a wide range of technical standards can be deployed and adapted for AI systems. There are a number of aspects of AI that could benefit from standardization. Intel's response focused on trustworthiness, and specifically on the following areas: foundational technologies and AI standardization; views on technical elements of trustworthiness; understanding attacks on AI environments; mitigations for threats in AI systems hardware; privacy aspects of AI workloads; data-related guidelines and best practices; societal issues and standardization areas; and use cases.
Different approaches can be applied to standardization in the areas identified as important for developing trustworthy AI systems. Some areas require more research, and standardization there is still premature: this is the case for reducing algorithm and data bias through techniques developed to support context discrimination and error control. Other areas can benefit from sense-making: examining a very complex field for cross-cutting characteristics and establishing whether the premises for standardization exist. For instance, exploring machine-readable elements, such as associated metadata, could help achieve privacy objectives like accountability, transparency, and user control; similarly, fostering interoperable formats would improve data access and sharing, which are crucial for AI workloads. In some sub-fields, including homomorphic encryption and federated machine learning, sense-making and standardization are happening simultaneously, pointing to gradual standardization as new technologies mature. In still other cases, standards already exist and need to be adapted to the needs of AI systems, for example in relation to memory encryption and trusted execution environments for data protection.
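To make the federated machine learning idea concrete, here is a minimal sketch of federated averaging in Python with NumPy. Everything in it, including the function names, the linear model, and the learning parameters, is an illustrative assumption rather than part of any standard or of Intel's RFI response: each simulated client trains on its own private data, and only model weights, never the raw data, are sent back for aggregation.

```python
# Minimal federated averaging sketch (hypothetical example):
# clients train locally on private data; only weights are aggregated.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    # One client's local training: linear regression via gradient descent.
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    # Aggregate client updates, weighted by local dataset size;
    # raw data never leaves the clients.
    sizes = [len(y) for _, y in clients]
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Simulate three clients, each holding private data drawn from the same model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(20):  # federated rounds
    w = federated_average(w, clients)
print(w)  # converges toward true_w without centralizing any client's data
```

Even a toy sketch like this surfaces the questions standardization would need to answer, such as how updates are formatted, weighted, and exchanged, which is why sense-making and standardization are proceeding in parallel in this sub-field.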
Identifying diverse and representative AI use cases will support the development of timely and useful standards. Active involvement of federal agencies, together with industry, is necessary to ensure the development and adoption of AI technologies and of technical standards that facilitate interoperability and trustworthiness in the AI space.
On July 2, NIST published a draft of the plan for public comment, highlighting accuracy, reliability, robustness, security, explainability, safety, and privacy as key elements of trustworthiness. Both horizontal (comprehensive) and vertical (sector-specific) standards for AI are needed. These should be complemented by a series of tools such as data standards and datasets in standardized formats, fully documented use cases, testing methodologies, benchmarks, and auditing tools. In a complex and fast-growing field, much still needs to be understood and clarified. NIST proposes to “promote focused research to advance and accelerate broader exploration and understanding of how aspects of trustworthiness can be practically incorporated within standards and standards-related tools.”
There is a lot of work ahead for the technical community, as Intel's June response to the RFI also indicates. Intel is committed to assisting with the development of global systems of AI trustworthiness, and we look forward to continuing to work with NIST on these issues.