Abstract: We have derived conditions under which unintended feedback loops occur in supervised machine learning systems. In this paper, we study the important problem of discovering and measuring hidden feedback loops. Such loops arise in web search, recommender systems, healthcare, predictive policing, and other systems. As a possible cause of echo chambers and filter bubbles, these feedback loops tend to produce concept drifts in user behavior. We study systems in their context of use, because both the learning algorithms and user interactions are important. We then decompose the automation bias arising from the use of the system into users' adherence to predictions and their usage rate, and derive conditions for a feedback loop to occur. We also provide estimates for the size of the concept drift caused by the loop. A series of controlled simulation experiments with real-world and synthetic data supports our findings. This paper builds on our prior results: it elaborates the analytical model of feedback loops, extends the experiments, and provides practical application guidelines.