Blind image quality assessment by learning from multiple annotators

K Ma, X Liu, Y Fang and E P Simoncelli

Published in Proc 26th IEEE Int'l Conf on Image Proc (ICIP), pp. 2344-2347, Sep 2019.

DOI: 10.1109/ICIP.2019.8803390

Download:

  • Reprint (pdf)


Models for image quality assessment (IQA) are generally optimized and tested against human ratings, which are expensive to obtain. Here, we develop a blind IQA (BIQA) model, together with a method for training it without human ratings. We first generate a large number of corrupted image pairs and use a set of existing IQA models to identify which image in each pair has higher quality. We then train a convolutional neural network to estimate perceived image quality along with its uncertainty, optimizing for consistency with these binary labels. The reliability of each IQA annotator is also estimated during training. Experiments demonstrate that our model outperforms state-of-the-art BIQA models in terms of correlation with human ratings on existing databases, as well as in the group maximum differentiation (gMAD) competition.
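
The training procedure summarized in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example (not the authors' released code) of learning from pairwise pseudo-labels: a small CNN predicts a quality mean and an uncertainty for each image, the probability that one image is perceived as better than the other is computed under a Thurstone-style model, and a binary cross-entropy loss over the annotators' votes is weighted by their assumed reliabilities. The network architecture, function names, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch, assuming a Thurstone-style pairwise model: the network
# outputs a quality mean and a log-variance per image, and P(A better than B)
# is a Gaussian CDF of the normalized quality difference.

import torch
import torch.nn as nn


class QualityNet(nn.Module):
    """Toy CNN mapping an image to a quality mean and a log-variance."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # [quality mean, log-variance]

    def forward(self, x):
        mu, log_var = self.head(self.features(x)).unbind(dim=1)
        return mu, log_var


def pairwise_prob(mu_a, var_a, mu_b, var_b, eps=1e-8):
    """P(image A perceived better than B) under a Thurstone-style model."""
    z = (mu_a - mu_b) / torch.sqrt(var_a + var_b + eps)
    return 0.5 * (1.0 + torch.erf(z / (2.0 ** 0.5)))


def training_step(net, img_a, img_b, labels, weights):
    """One loss evaluation for a batch of corrupted image pairs.

    `labels` holds binary votes (A better than B) from several existing IQA
    models, shape (batch, num_annotators); `weights` are annotator
    reliabilities, assumed given here (the paper estimates them jointly
    during training).
    """
    mu_a, log_var_a = net(img_a)
    mu_b, log_var_b = net(img_b)
    p = pairwise_prob(mu_a, log_var_a.exp(), mu_b, log_var_b.exp())
    p = p.unsqueeze(1).clamp(1e-6, 1 - 1e-6)  # broadcast over annotators
    bce = -(labels * p.log() + (1 - labels) * (1 - p).log())
    return (weights * bce).mean()
```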