The purpose of no-reference image quality assessment (NR-IQA) is to measure perceived image quality in line with subjective judgments; however, because no clean reference image is available, this remains a complicated and unresolved challenge. Massive new IQA datasets have facilitated the development of deep learning-based image quality measures. In this work, we present a unique model to handle the NR-IQA challenge by employing a hybrid strategy that leverages a pre-trained CNN model together with a unified learning mechanism that extracts both local and non-local characteristics from the input patch.
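A minimal sketch of such a hybrid feature extractor is given below. The abstract does not name the backbone or the non-local mechanism, so the ResNet-50 backbone, the self-attention-style non-local block, and all layer sizes are assumptions made purely for illustration (PyTorch).

```python
import torch
import torch.nn as nn
from torchvision import models

class NonLocalBlock(nn.Module):
    """Simple non-local (self-attention) block over spatial positions (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.phi = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.g = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.out = nn.Conv2d(channels // 2, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.phi(x).flatten(2)                      # (b, c/2, hw)
        v = self.g(x).flatten(2).transpose(1, 2)        # (b, hw, c/2)
        attn = torch.softmax(q @ k / (c // 2) ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                          # residual: local + non-local context

class HybridFeatureExtractor(nn.Module):
    """Pre-trained CNN backbone (local features) enriched by a non-local block."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # assumed backbone
        self.local = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature maps
        self.non_local = NonLocalBlock(2048)

    def forward(self, patch):          # patch: (b, 3, H, W) image patch
        f = self.local(patch)          # local CNN features
        return self.non_local(f)       # features with non-local context
```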
A deep analysis of the proposed framework shows that the chosen features and learning mechanism improve the monotonic relationship between objective and subjective ratings. The intermediate objective is mapped to a quality score using a regression architecture, and a deep architecture with an adaptive receptive field is used to extract diverse feature maps.
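The sketch below illustrates one plausible reading of these two components: an adaptive receptive field approximated by parallel dilated convolutions, followed by a regression head that pools the fused maps into a scalar quality score. The dilation rates, channel widths, and head layout are assumptions, not the paper's specified configuration.

```python
import torch
import torch.nn as nn

class AdaptiveReceptiveField(nn.Module):
    """Parallel dilated convolutions as a stand-in for an adaptive receptive field."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):   # assumed dilation rates
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Concatenate branches with different receptive fields, then fuse with a 1x1 conv.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class QualityRegressor(nn.Module):
    """Maps the extracted feature maps to a single predicted quality score."""
    def __init__(self, in_ch=2048, mid_ch=512):
        super().__init__()
        self.arf = AdaptiveReceptiveField(in_ch, mid_ch)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(mid_ch, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),                 # scalar quality prediction per patch
        )

    def forward(self, feats):                  # feats: (b, in_ch, h, w) backbone features
        return self.head(self.arf(feats)).squeeze(-1)
```

In this reading, the regressor would consume the output of the hybrid extractor sketched earlier, so patch-level scores can be averaged to obtain an image-level quality estimate.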
Analyses on the largest NR-IQA benchmark datasets demonstrate that the suggested technique outperforms current state-of-the-art NR-IQA measures.