Abstract: Subjective image quality measurement plays a crucial role in the advancement of image-processing technologies. The primary goal of a visual quality metric is to agree with subjective evaluation results. Despite the rapid development of these metrics, their potential vulnerabilities have not been sufficiently explored. This paper addresses this gap by demonstrating how image preprocessing before compression can artificially inflate the scores of widely used metrics such as DISTS, LPIPS, HaarPSI, VIF, STLPIPS, ADISTS, MR-Perceptual, AHIQ, IQT, and CONTRIQUE. We present several CNN-based preprocessing models that significantly increase these metrics' scores when the preprocessed images are JPEG-compressed. However, a subjective assessment of the preprocessed images (with 1027 participants) reveals that their visual quality either decreases or remains unchanged, challenging the universal applicability of these metrics. Detecting metric attacks embedded in image-processing systems has therefore emerged as a significant problem. An attack can be detected by comparing subjective evaluation results with metric scores: if the two are anticorrelated, the metric has likely been attacked. However, the time-consuming nature of subjective evaluations and the need for numerous participants make them impractical for routine attack detection. To address this, we propose using other metrics to determine whether the target metric has been attacked. Our results show that attacking one metric also affects the outputs of other metrics, offering a practical method for attack detection.
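The detection idea described above can be sketched in a few lines: if a target metric's scores move opposite to a trusted reference (subjective scores, or another metric's scores) across a set of images, the anticorrelation flags a likely attack. The function names, the Pearson correlation choice, and the threshold below are illustrative assumptions, not the paper's exact procedure.

```python
from statistics import mean


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def attack_suspected(target_scores, reference_scores, threshold=-0.5):
    """Flag the target metric as possibly attacked when its scores
    anticorrelate with a trusted reference's scores on the same images.
    The -0.5 threshold is a hypothetical choice for illustration."""
    return pearson(target_scores, reference_scores) < threshold


# Target metric rises on preprocessed images while the reference falls:
print(attack_suspected([0.90, 0.92, 0.95, 0.97], [0.80, 0.70, 0.60, 0.50]))
# Both move together, so no attack is suspected:
print(attack_suspected([0.80, 0.85, 0.90, 0.95], [0.78, 0.84, 0.88, 0.96]))
```

In practice the reference would itself be a set of other metrics, since (as the abstract notes) an attack tuned to one metric tends to perturb the others as well.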