Abstract: Attacks on machine learning systems are usually defined as deliberate manipulations of data at various stages of the machine learning pipeline, designed either to disrupt the normal operation of the attacked system or, conversely, to force a specific behavior that benefits the attacker. Some attacks make it possible to extract non-public data from machine learning models. Model inversion attacks, first described in 2015, aim to reconstruct the data used to train a model. Such attacks query the model according to a special pattern and pose a major threat to machine learning as a service (MLaaS) projects. In this article, we survey off-the-shelf software tools for carrying out model inversion attacks and possible defenses against such attacks.
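To make the attack pattern mentioned above concrete, below is a minimal illustrative sketch (not any specific tool from the survey) of the core idea behind gradient-based model inversion: given query access to a classifier's confidence scores, the attacker optimizes an input until the model assigns high confidence to a chosen class, yielding an input representative of that class's training data. The tiny softmax model, its weights `W`, `b`, and the `invert` routine are all hypothetical stand-ins introduced for illustration.

```python
import numpy as np

# Hypothetical target model: a tiny softmax classifier with fixed,
# "pre-trained" weights standing in for a real MLaaS model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # 4 input features, 3 classes
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    # Confidence scores returned by the attacked model for input x.
    return softmax(x @ W + b)

def invert(target_class, steps=200, lr=0.5):
    # Gradient-ascent model inversion: search for an input that
    # maximizes the model's confidence in target_class, approximating
    # a representative example of that class's training data.
    x = np.zeros(4)
    for _ in range(steps):
        p = predict(x)
        # Gradient of log p[target_class] w.r.t. x for softmax regression.
        grad = W[:, target_class] - W @ p
        x += lr * grad
    return x

x_rec = invert(0)
print(predict(x_rec)[0])  # confidence in class 0 after inversion
```

Real attacks of this family additionally regularize the reconstructed input (e.g., toward plausible images) and must work with only black-box query access, but the optimization loop above captures the "polling in a special pattern" that the abstract refers to.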