Abstract: Due to the rapid development of Artificial Neural Network (ANN) models, the number of hyperparameters grows constantly. With so many parameters, automatic tools become necessary for building new models or adapting existing ones to new problems. This drives the growing use of Neural Architecture Search (NAS) methods, which optimise hyperparameters over a vast space of model hyperparameters, a process known as hyperparameter tuning. Since modern NAS techniques are widely used to optimise models in different areas, or to combine many models from previous experience, they require substantial computational power to perform their hyperparameter optimisation routines. Despite the highly parallel nature of many NAS methods, they still need a lot of computational time to converge and to reuse information from generations of previously synthesised models. This creates a demand for parallel implementations that are available in different cluster configurations and utilise as many nodes as possible with high scalability. However, simple approaches, in which a NAS run is performed without considering results from previous launches, lead to inefficient cluster utilisation. In this article, we introduce a new approach to optimising NAS processes: we limit the search space, reducing both the number of search parameters and the dimensionality of the space, using information from previous NAS launches. This decreases the demand for computational power and improves cluster utilisation.
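The core idea of restricting the search space from previous launches can be illustrated with a minimal sketch. This is not the paper's actual method; all function names, the trial format, and the top-k/margin heuristic are illustrative assumptions for a numeric hyperparameter space:

```python
import random

def reduced_space(prev_trials, top_k=3, margin=0.1):
    """Bound each numeric hyperparameter by the range spanned by the
    top-k trials of previous launches, widened by a relative margin.
    (Illustrative heuristic, not the method described in the article.)"""
    best = sorted(prev_trials, key=lambda t: t["score"], reverse=True)[:top_k]
    space = {}
    for name in best[0]["params"]:
        values = [t["params"][name] for t in best]
        lo, hi = min(values), max(values)
        pad = (hi - lo) * margin
        space[name] = (lo - pad, hi + pad)
    return space

def sample(space, rng=random):
    """Draw one candidate configuration from the reduced space."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}

# Fabricated trials standing in for results of earlier NAS launches.
previous = [
    {"params": {"lr": 0.010, "dropout": 0.50}, "score": 0.81},
    {"params": {"lr": 0.003, "dropout": 0.30}, "score": 0.88},
    {"params": {"lr": 0.002, "dropout": 0.25}, "score": 0.90},
    {"params": {"lr": 0.100, "dropout": 0.70}, "score": 0.60},
]
space = reduced_space(previous, top_k=3)
candidate = sample(space)
```

New launches then search only inside `space` instead of the full original ranges, which reduces the effective dimensionality and volume of the search and, in turn, the cluster time a run needs.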