Securing Artificial Intelligence (SAI); Traceability of AI Models
ETSI TR 104 032 V1.1.1 (2024-02)
Organization: ETSI - European Telecommunications Standards Institute
Year: 2024
Abstract: The NWI will study the role of traceability in the challenge of Securing AI and explore issues related to sharing and re-using models across tasks and industries. The scope includes threats, and their associated remediations where applicable, to the ownership rights of AI creators, as well as to verification of a model's origin, integrity, or purpose. Mitigations can be non-AI-specific (e.g. Digital Rights Management applied to AI) or AI-specific techniques (e.g. watermarking), covering both the prevention and detection phases, and can be either model-agnostic or model-enhancement techniques. Threats and mitigations specific to the collaborative learning setting, which involves multiple data and model owners, may also be explored.
The NWI will align terminology with existing ETSI ISG SAI documents and studies, and will reference and complement previously studied attacks and remediations (ETSI GR SAI 004, ETSI GR SAI 005). It will also gather industrial and academic feedback on traceability, ownership rights protection, and model verification (including the integrity of model metadata) in the context of AI.
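As one concrete illustration of the watermarking techniques named in the scope, the following is a minimal sketch (not taken from the ETSI report) of a spread-spectrum watermark embedded directly into a model's weight vector: a key-derived pseudo-random pattern is added at embedding time and later detected by correlation. The function names, embedding strength, and detection threshold are all hypothetical choices for this example, and NumPy is assumed to be available.

import numpy as np

def embed_watermark(weights, key, strength=0.05):
    # Hypothetical example: add a key-derived pseudo-random pattern to the weights.
    pattern = np.random.default_rng(key).standard_normal(weights.shape)
    return weights + strength * pattern

def detect_watermark(weights, key, strength=0.05):
    # Least-squares estimate of the embedded strength: correlate the
    # (possibly watermarked) weights with the key-derived pattern.
    pattern = np.random.default_rng(key).standard_normal(weights.shape)
    score = float(np.dot(weights.ravel(), pattern.ravel()) /
                  np.dot(pattern.ravel(), pattern.ravel()))
    return score, score > strength / 2

# Toy usage: random "model" weights, embed with a secret key, then verify ownership.
rng = np.random.default_rng(0)
original = rng.standard_normal(10_000)
marked = embed_watermark(original, key=1234)
print(detect_watermark(marked, key=1234))   # score near the embedding strength: detected
print(detect_watermark(marked, key=9999))   # score near zero with the wrong key: not detected

This kind of white-box watermark is a model-enhancement mitigation in the sense of the abstract, since it modifies the model itself to support later ownership verification.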
Full item record:
contributor author: ETSI - European Telecommunications Standards Institute
date accessioned: 2024-12-18T15:03:19Z
date available: 2024-12-18T15:03:19Z
date copyright: 2024
date issued: 2024
identifier other: tr_104032v010101p.pdf
identifier uri: https://yse.yabesh.ir/std/handle/yse/340182
language: English
publisher: ETSI - European Telecommunications Standards Institute
title: Securing Artificial Intelligence (SAI); Traceability of AI Models
title (number): ETSI TR 104 032 V1.1.1 (2024-02)
type: standard
pages: 29
status: Published
content type: fulltext