Term
Model Extraction (ˈmɒ-dəl ɪkˈstræk-ʃən)
Definition
Model extraction is the practice of replicating a machine learning model's functionality by querying it and analyzing the relationship between the inputs it receives and the outputs it returns. When performed without permission, it constitutes a form of intellectual property theft.
Where you’ll find it
Model extraction becomes possible wherever external users can submit inputs to a model and observe its outputs. This exposure is most common in public APIs and web services that return automated predictions or responses based on user-supplied data.
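As a minimal illustration of that query-and-observe loop, the sketch below probes a hypothetical prediction endpoint and records the input-output pairs an attacker would later learn from. The URL, payload format, and response field are illustrative assumptions, not a real service:

```python
# Minimal sketch: harvesting input-output pairs from a hypothetical
# prediction API. The endpoint URL and response schema are assumed.
import requests
import numpy as np

API_URL = "https://example.com/api/v1/predict"  # hypothetical endpoint

def query_model(features):
    """Send one input to the exposed model and return its prediction."""
    response = requests.post(API_URL, json={"features": features})
    response.raise_for_status()
    return response.json()["prediction"]  # assumed response field

# Probe the model with random inputs and record what it returns.
rng = np.random.default_rng(seed=0)
dataset = []
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, size=4).tolist()
    y = query_model(x)
    dataset.append((x, y))
```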
Common use cases
- Authorized security professionals probing a model to test its resistance to extraction.
- Unauthorized parties replicating a model's functionality for competitive advantage, typically by training a surrogate model on harvested query results (see the sketch after this list).
- Research and development teams studying model behavior by observing its outputs in response to controlled inputs.
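The replication use case above usually ends with fitting a surrogate (or "student") model to the harvested pairs. A minimal sketch, using scikit-learn and a simple stand-in for the victim's predictions so the example runs on its own:

```python
# Minimal sketch: fitting a surrogate model to harvested (input, label) pairs.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import numpy as np

# Stand-in for pairs harvested from the victim model, as in the earlier
# sketch; here the "victim" is a simple sign rule so the example runs alone.
rng = np.random.default_rng(seed=0)
X = rng.uniform(-1.0, 1.0, size=(1000, 4))
y = (X.sum(axis=1) > 0).astype(int)  # victim's observed predictions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The surrogate learns to imitate the victim's decisions, not ground
# truth -- agreement with the victim is what the attacker optimizes.
surrogate = DecisionTreeClassifier(max_depth=8, random_state=0)
surrogate.fit(X_train, y_train)

agreement = accuracy_score(y_test, surrogate.predict(X_test))
print(f"Surrogate matches the victim on {agreement:.1%} of held-out queries")
```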
Things to watch out for
- Unintended information leaks through overly informative outputs, such as returning full probability vectors or confidence scores when a label alone would suffice.
- Insufficient monitoring and access control around exposed models (a simple rate-monitoring sketch follows this list).
- Legal and ethical consequences if proprietary model functionality is stolen or duplicated.
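To make the monitoring gap concrete, here is a sketch of a per-client query counter that flags unusually high request volumes, a common first-line defense against extraction. The window, threshold, and client identifier are illustrative assumptions, not recommended values:

```python
# Minimal sketch: flagging clients whose query volume suggests extraction.
# The window and threshold below are illustrative, not tuned values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600      # look at the last hour of traffic (assumed)
QUERY_THRESHOLD = 5000     # per-client alert level (assumed)

query_log = defaultdict(deque)  # api_key -> timestamps of recent queries

def record_query(api_key: str, now: float | None = None) -> bool:
    """Log one query; return True if the client exceeds the threshold."""
    now = time.time() if now is None else now
    timestamps = query_log[api_key]
    timestamps.append(now)
    # Drop timestamps that have aged out of the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > QUERY_THRESHOLD
```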
Related terms
- Intellectual Property Theft
- API Security
- Machine Learning
- Data Privacy
- Cybersecurity Measures