Machine Learning: TensorFlow introduces experimental privacy testing library

The core developer team behind TensorFlow, a popular machine learning framework, has published a new experimental module for the TensorFlow Privacy library. The privacy testing module ships with tutorials and analysis tools and is meant to help developers assess the privacy properties of their models and support compliance with data protection requirements.

The TensorFlow Privacy library is written in the Python programming language and provides implementations of TensorFlow optimizers for training machine learning models with differential privacy. With the new module, developers can now assess the privacy properties of their classification models. Differential privacy is intended to prevent individual records from being identified, for example records that would allow conclusions to be drawn about specific persons, without noticeably reducing the accuracy of the model.
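
The differentially private optimizers in TensorFlow Privacy can be dropped into an ordinary Keras training setup. The following is a minimal sketch using DPKerasSGDOptimizer; the model architecture, input shapes, and hyperparameter values are placeholders chosen for illustration, and import paths may vary slightly between releases.

    import tensorflow as tf
    from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import DPKerasSGDOptimizer

    # Placeholder classifier; shapes are hypothetical.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(100,)),
        tf.keras.layers.Dense(10),
    ])

    # Differentially private SGD: per-example gradients are clipped and
    # Gaussian noise is added before each update step.
    optimizer = DPKerasSGDOptimizer(
        l2_norm_clip=1.0,        # clipping bound for per-example gradients
        noise_multiplier=1.1,    # noise standard deviation relative to the clip bound
        num_microbatches=32,     # must evenly divide the batch size
        learning_rate=0.1,
    )

    # The loss must stay per-example (no reduction) so the optimizer can
    # clip and noise gradients at the microbatch level.
    loss = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.losses.Reduction.NONE)

    model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
    # model.fit(x_train, y_train, batch_size=32, epochs=5)  # training data not shown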

Privacy is a tricky topic

Privacy is a tricky topic in the field of machine learning, and so far there are no canonical guidelines for building a private model. A growing body of research shows that a machine learning model can leak sensitive information from its training data, creating a privacy risk for the users whose data went into training.

That is why the development team behind TensorFlow introduced TensorFlow Privacy last year, which lets developers train their models according to the principle of differential privacy: by adding statistical noise during training, individual records in the data set can be protected. However, this noise was apparently designed for academic worst-case scenarios and can therefore significantly reduce model accuracy.
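
How much noise corresponds to how strong a formal guarantee can be estimated with the library's privacy analysis tools. The sketch below uses compute_dp_sgd_privacy to report the privacy budget epsilon for a hypothetical training configuration; all numbers are made up for illustration.

    from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy import compute_dp_sgd_privacy

    # Hypothetical training configuration; values are placeholders.
    epsilon, opt_order = compute_dp_sgd_privacy(
        n=60000,               # number of training examples
        batch_size=256,
        noise_multiplier=1.1,  # same value passed to the DP optimizer
        epochs=15,
        delta=1e-5,            # target delta, usually well below 1/n
    )
    print(f"DP-SGD guarantee: eps = {epsilon:.2f} at delta = 1e-5 (order {opt_order})")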

As a result, research has been focusing on the privacy properties of ML models for a few years now. So-called membership inference attacks predict whether a specific data record was used during training.
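
Conceptually, such an attack exploits the fact that a model tends to assign lower loss to examples it was trained on. The following self-contained sketch is not the TensorFlow Privacy API but an illustration of the idea: it scores a simple loss-threshold attack with an AUC metric, using synthetic losses as placeholders.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def loss_threshold_attack_auc(loss_train: np.ndarray, loss_test: np.ndarray) -> float:
        """Score a simple membership inference attack.

        The attacker guesses "member" for examples with low loss. An AUC of
        0.5 means the attack cannot distinguish training from test data;
        values well above 0.5 indicate leakage of membership information.
        """
        # Label 1 = member (training example), 0 = non-member (held-out example).
        labels = np.concatenate([np.ones_like(loss_train), np.zeros_like(loss_test)])
        # Lower loss should indicate membership, so use the negative loss as score.
        scores = -np.concatenate([loss_train, loss_test])
        return roc_auc_score(labels, scores)

    # Synthetic example: the "training" losses are slightly smaller on average.
    rng = np.random.default_rng(0)
    auc = loss_threshold_attack_auc(rng.gamma(2.0, 0.4, 1000), rng.gamma(2.0, 0.6, 1000))
    print(f"Membership inference AUC: {auc:.2f}")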

Differential privacy

In internal tests, the developers evidently found that differential privacy helps reduce these vulnerabilities: even with very little noise, the susceptibility to membership inference decreased. With the new module, external developers can now run membership inference tests themselves to build more trustworthy models, identify architectures that follow privacy-by-design principles, and make better-informed data processing decisions.
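
In practice, such a test compares the model's behavior on training data with its behavior on held-out data. The sketch below outlines how this could look with the experimental membership inference module; the module path, class names, and arguments shown here are assumptions based on the module's description and on recent tensorflow_privacy releases, and the inputs are synthetic placeholders rather than real model outputs.

    import numpy as np
    # Assumed module layout; earlier releases shipped the experimental code under
    # tensorflow_privacy.privacy.membership_inference_attack instead.
    from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack import membership_inference_attack as mia
    from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.data_structures import (
        AttackInputData, AttackType, SlicingSpec)

    # Placeholder per-example losses; in a real test these would come from
    # evaluating the trained classifier on its training and test sets.
    rng = np.random.default_rng(0)
    loss_train = rng.gamma(2.0, 0.4, 1000).astype(np.float32)
    loss_test = rng.gamma(2.0, 0.6, 1000).astype(np.float32)

    attack_input = AttackInputData(loss_train=loss_train, loss_test=loss_test)

    # Run a simple threshold attack over the entire data set.
    results = mia.run_attacks(
        attack_input,
        SlicingSpec(entire_dataset=True),
        attack_types=[AttackType.THRESHOLD_ATTACK])

    print(results.summary())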

The TensorFlow team plans to examine whether the membership inference attacks can be extended beyond classifiers and to develop new tests. Also planned is an investigation into whether the tests can be integrated with TFX into the TensorFlow ecosystem, whose latest release is version 2.2. Further details can be found in the release notes.
