CN114549402A - Underwater image quality comparison method without reference image - Google Patents

Underwater image quality comparison method without reference image

Info

Publication number
CN114549402A
CN114549402A
Authority
CN
China
Prior art keywords
image
quality
underwater
images
pair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210006948.2A
Other languages
Chinese (zh)
Inventor
杨淼
王海文
董金耐
殷歌
刘春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Ocean University
Original Assignee
Jiangsu Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Ocean University filed Critical Jiangsu Ocean University
Priority to CN202210006948.2A priority Critical patent/CN114549402A/en
Publication of CN114549402A publication Critical patent/CN114549402A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2415 — Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/2431 — Classification techniques relating to the number of classes; multiple classes
    • G06N 3/045 — Neural networks; architectures; combinations of networks
    • G06N 3/047 — Neural networks; probabilistic or stochastic networks
    • G06N 3/08 — Neural networks; learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and image quality analysis, and in particular addresses the situation in which complex mixed distortion exists, a high-quality reference image is lacking, and the perceptual difference between images is difficult to measure. The invention concerns the quality comparison of paired underwater images: it operates on underwater image pairs of arbitrary content and requires no reference image. Specifically, a method for comparing the quality of underwater images without a reference image is provided, in which a ternary classification model describes the comparison result between the qualities of two underwater images. Improved Inception and Reduction modules perceive the global and local quality-feature differences of the two fused images. The pairwise quality comparison results between images are accumulated to establish a quality comparison ranking of the results of various underwater image enhancement methods applied to underwater images with different contents. The method can be used for comparing underwater image quality and is also suitable for other scenes in which image differences are difficult to perceive because of mixed distortion.

Description

Underwater image quality comparison method without reference image
Technical Field
The invention belongs to the technical field of image processing and image quality analysis, and particularly relates to an underwater image quality comparison method without a reference image.
Background
Underwater vision is a key sensing technology that carries a large amount of useful information and is of great value for the autonomous operation of underwater robots and for underwater engineering monitoring. Image and video quality evaluation of underwater images is important for high-quality image screening, comparison of underwater image enhancement/restoration results, underwater image reconstruction, and imaging-system design. Because no reference image is available in the underwater environment, the degree of mixed distortion of underwater images cannot be graded against a reference. Furthermore, the blur and low contrast of images with different contents make it difficult to choose between two underwater images when comparing their quality. Therefore, an effective method that agrees with subjective perception is lacking for comparing the quality of two underwater images when different enhancement methods are compared. The perception of image quality is inseparably related to visual attention. However, little is known about the effect of distortion type and attention region on underwater image quality judgment. Notably, when an observer views two images at the same time, the influence of image quality on visual attention is pronounced.
Disclosure of Invention
The perceived quality difference between underwater images is difficult to measure because of the mixed distortion in underwater images and the lack of high-quality reference images. The uncertainty of the quality difference between two images during underwater image quality comparison poses a challenge for ranking the results of different underwater image enhancement methods. To address these problems, an underwater image quality comparison method without a reference image is provided. The proposed DP-UIQC method does not depend on a reference image and can be generalized to quality judgment between images with different contents.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
In the first step, the underwater images in the dataset are paired, the pairs are divided into three classes, and the classes are marked with the preference labels {+1, -1}, {-1, +1}, and {0, 0}. The label {+1, -1} indicates that the quality of the left/top image is higher than the quality of the right/bottom image, and {-1, +1} is the reverse. The label {0, 0} indicates that the relative quality of the two underwater images is visually indistinguishable to an observer.
Second step, establish the paired underwater image quality comparison model (overview of the DP-UIQC method)
The overall framework of DP-UIQC is shown in FIG. 5, with the convolutional layer parameters, max pooling, and layer-to-layer connectivity detailed in attached Table I.
The two underwater images are input into a pretrained InceptionResNetV2 model to extract the features of both images. The features extracted by InceptionResNetV2 from the two underwater images are concatenated and used as the input of the following CNN-pair module, which learns the quality difference; the improved Inception module perceives the global and local quality differences within the image pair. The Reduction module reduces the multi-scale features, which avoids the bottleneck problem. After a three-layer FC linear mapping, the preference labels are classified by the Softmax function.
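As an illustration of this data flow, the following is a minimal PyTorch sketch, not the patent's implementation: the class names, channel widths, and the idea of obtaining the pretrained backbone from the timm library under the name "inception_resnet_v2" are assumptions, and the two Inception-Reduction blocks of the CNN-pair module are stood in for here by a plain convolutional stack (they are sketched in more detail under the third step below).

```python
# Minimal sketch of the DP-UIQC data flow; names, channel widths and the timm
# backbone call are illustrative assumptions, not the patent's implementation.
import torch
import torch.nn as nn

class CNNPairHead(nn.Module):
    """Learns the quality difference from the concatenated features of an image pair."""
    def __init__(self, in_ch: int, num_classes: int = 3):
        super().__init__()
        # Stand-in for the two Inception-Reduction blocks (see the third step).
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(                      # three-layer FC mapping
            nn.Linear(256, 128), nn.LeakyReLU(0.1),
            nn.Linear(128, 64), nn.LeakyReLU(0.1),
            nn.Linear(64, num_classes),               # logits; Softmax is applied in the loss
        )

    def forward(self, pair_feat):
        x = self.body(pair_feat)
        return self.fc(self.pool(x).flatten(1))

class DPUIQC(nn.Module):
    def __init__(self, backbone, feat_ch: int = 1536):  # 1536 channels assumed for InceptionResNetV2
        super().__init__()
        self.backbone = backbone                      # pretrained feature extractor
        self.head = CNNPairHead(2 * feat_ch)          # features of both images are concatenated

    def forward(self, img_a, img_b):
        fa = self.backbone.forward_features(img_a)    # spatial feature maps of each image
        fb = self.backbone.forward_features(img_b)
        return self.head(torch.cat([fa, fb], dim=1))  # class logits for {+1,-1} / {-1,+1} / {0,0}

# Possible instantiation (assumes the timm package provides the backbone):
# backbone = timm.create_model("inception_resnet_v2", pretrained=True)
# model = DPUIQC(backbone)
```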
Third step, the Inception-Reduction module
The human visual system can distinguish objects from the background, locate objects of interest, and detect motion and direction in the environment. Some neurons of the primary visual cortex have binocular properties. According to Weber's law, when a pair of images is viewed, the two images have a specific effect on the visual perception of each other. Visual perception is affected first by overall quality differences such as brightness, mean saturation, and hue differences. For image regions of similar quality, local detail rendering and edge strength are compared spontaneously. The concatenated features of the image pair are input into two Inception-Reduction modules to obtain quality-difference perception at different scales.
The architecture of the Inception-Reduction module is shown in FIG. 6. The Inception module consists of several convolution kernels of different sizes; the invention uses convolution layers of size 1 × 1, 3 × 3, and 5 × 5 and an AvgPooling operation to extract features from the output of the previous layer. A 1 × 1 convolution is added in the Inception and Reduction modules to reduce the number of channels while increasing the nonlinearity of the model. The convolution and pooling operations in the Reduction module further reduce the size of the feature map, avoid the bottleneck problem, and improve the adaptability of the network to the diversity of underwater image quality. In contrast to the Grid-Reduction module, the max pooling layer is replaced by a 3 × 3 convolution branch to prevent loss of quality-difference features during max pooling.
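A sketch of such a block is given below; the branch widths, the LeakyReLU placement, and the stride of the reduction convolution are assumptions, and only the structure described above (parallel 1×1/3×3/5×5 convolutions plus average pooling, 1×1 channel reduction, and a strided 3×3 convolution in place of max pooling) is taken from the text.

```python
# Sketch of an Inception-Reduction block in the spirit of the description above;
# branch widths and the reduction stride are assumptions, not the patent's values.
import torch
import torch.nn as nn

def conv_lrelu(in_ch, out_ch, k, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, stride=stride, padding=k // 2),
        nn.LeakyReLU(0.1, inplace=True),
    )

class InceptionReduction(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        b = out_ch // 4
        # Inception part: parallel 1x1 / 3x3 / 5x5 convolutions plus average pooling;
        # the 1x1 convolutions reduce the channel count and add nonlinearity.
        self.branch1 = conv_lrelu(in_ch, b, 1)
        self.branch3 = nn.Sequential(conv_lrelu(in_ch, b, 1), conv_lrelu(b, b, 3))
        self.branch5 = nn.Sequential(conv_lrelu(in_ch, b, 1), conv_lrelu(b, b, 5))
        self.branch_pool = nn.Sequential(nn.AvgPool2d(3, stride=1, padding=1),
                                         conv_lrelu(in_ch, b, 1))
        # Reduction part: a strided 3x3 convolution instead of max pooling, so that
        # subtle quality-difference features are not discarded.
        self.reduce = conv_lrelu(4 * b, out_ch, 3, stride=2)

    def forward(self, x):
        x = torch.cat([self.branch1(x), self.branch3(x),
                       self.branch5(x), self.branch_pool(x)], dim=1)
        return self.reduce(x)
```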
The fourth step, training
Back-propagation is used to compute the gradients of the loss function with respect to all model parameters, and the Adam stochastic gradient method is used to update the parameters. All convolutional layers in the network use the Leaky ReLU (LReLU) activation function. The batch size is set to 8 and the initial learning rate to 0.001; the learning rate is halved every 20 epochs and is no longer decreased once it reaches 0.000005. The loss function is the cross-entropy loss between the model output and the true labels of the batch images, with an L2 regularization term. The model finishes training after 120 epochs. In order to compare the quality of different image contents, the input images keep their original size.
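The training recipe above could be realised, for example, as in the following sketch; the weight-decay value, the use of Adam's weight_decay argument to realise the L2 term, and the (img_a, img_b, label) data-loader interface are assumptions.

```python
# Training-loop sketch using the hyper-parameters stated above; the weight-decay
# value and the (img_a, img_b, label) loader interface are assumptions.
import torch
import torch.nn as nn

def train_dp_uiqc(model, loader, epochs=120, lr=1e-3, device="cuda"):
    model.to(device)
    # L2 regularization realised here through Adam's weight_decay (assumed value).
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()               # cross-entropy against the true preference label
    for epoch in range(epochs):
        if epoch > 0 and epoch % 20 == 0:           # halve the learning rate every 20 epochs,
            for g in optimizer.param_groups:        # but never let it fall below 0.000005
                g["lr"] = max(g["lr"] * 0.5, 5e-6)
        for img_a, img_b, label in loader:          # batch size 8 is set in the DataLoader
            img_a, img_b = img_a.to(device), img_b.to(device)
            label = label.to(device)                # class index 0/1/2 for the three preference labels
            optimizer.zero_grad()
            loss = criterion(model(img_a, img_b), label)
            loss.backward()                         # back-propagation of the gradients
            optimizer.step()                        # Adam parameter update
```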
Fifthly, test the quality comparison result of the image pair
Assume that the image dataset to be predicted is P. All images in the dataset are paired pairwise, and for each image pair p_{i,j} the proposed model predicts the quality comparison label. If the output is of the first class, the quality judgment result for the image pair is {+1, -1}, meaning that the quality score of the left/top image is higher than that of the right/bottom image and the difference is visually perceptible; if the output is of the second class, the result is {-1, +1}, meaning that the quality score of the left/top image is lower than that of the right/bottom image and the difference is visually perceptible; if the output is of the third class, the relative quality difference of the two underwater images in the pair is difficult for an observer to distinguish.
For the image dataset P to be predicted, with the predicted label l_{i,j} of every image pair p_{i,j} in P, the cumulative quality comparison score S_i obtained by image i can be calculated as
S_i = Σ_{j:j≠i} l_{i,j},
where l_{i,j} ∈ {+1, 0, -1} is the preference assigned to image i in the pair p_{i,j}.
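As an illustration of this accumulation, a minimal sketch follows; the function names are hypothetical, and `compare` stands in for one forward pass of the trained model returning the first element of the predicted preference label.

```python
# Sketch of accumulating pairwise preference labels into per-image scores S_i and
# a quality ranking; function names are illustrative, and compare(a, b) is assumed
# to return +1, -1 or 0 (the first element of the predicted preference label).
from itertools import combinations

def rank_by_pairwise_preference(images, compare):
    scores = {i: 0 for i in range(len(images))}      # S_i, initialised to zero
    for i, j in combinations(range(len(images)), 2):
        l_ij = compare(images[i], images[j])
        scores[i] += l_ij                            # image i gains what it wins
        scores[j] -= l_ij                            # the preference label is antisymmetric
    ranking = sorted(scores, key=scores.get, reverse=True)
    return ranking, scores                           # higher S_i means more comparisons won
```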
the technical scheme can obtain the following beneficial effects:
the method is different from the quality comparison model depending on the reference image in the past, is independent of the reference image, and does not need to apply distortion contained in a natural image database for pre-training;
the invention treats the quality comparison of image pairs as a ternary classification problem, and the three classifications { +1, -1} represent that the quality of the left/upper image is higher than that of the right/lower image, and the vision can sense the quality difference; { -1, +1} denotes that the quality of the right/lower image is higher than that of the left/upper image, and this quality difference is visually perceived. The 0,0 label indicates that the relative quality difference between the two underwater images is difficult to distinguish by the viewer.
The proposed CNN-pair model is an end-to-end trained network model in which the network weights for the two images are not shared. The CNN-pair model learns the multi-scale joint feature differences of the two images, and the extracted paired features are input into the following Inception-Reduction (IR) modules. The IR modules model the visual perception of the global and local multi-scale quality differences of the two images;
experiments on an underwater image pair quality database show that the proposed scheme can accurately reflect the visually perceptible quality difference of the underwater image pair;
the scheme provided by the invention is applied to the evaluation of the underwater image enhancement method, the quality comparison sequencing of different underwater image enhancement methods applied to different underwater images can be obtained, and the performance of the underwater image enhancement method is disclosed.
Drawings
FIG. 1: some of the underwater images in the dataset;
FIG. 2: histogram distribution of the APL values;
FIG. 3: distributions of ΔS_{i,j} for the three classes;
FIG. 4: sample underwater image pairs under the three classifications;
FIG. 5: overall framework of DP-UIQC;
FIG. 6: the Inception-Reduction module;
FIG. 7: convergence curve of the model;
FIG. 8: confusion matrices of the 9 compared quality indices;
FIG. 9: image pairs and their true labels;
FIG. 10: ranking (and label) results of different enhancement methods applied to the underwater images;
FIG. 11: ranking (and label) results of different enhancement methods applied to the underwater images;
FIG. 12: ranking (and label) results of different enhancement methods applied to the underwater images;
FIG. 13: ranking (and label) results of different enhancement methods applied to the underwater images;
Attached Table I: details of the proposed DP-UIQC model;
Attached Table II: image pair samples and their labels;
Attached Table III: ranking of the enhancement results.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
As shown in FIGS. 1-13 and attached Tables I, II, and III, the method of the present invention is a quality comparison model that simulates the quality comparison judgment made when human eyes observe images. The database and settings used for the comparison experiments are first presented in detail. A series of experiments is then performed to verify the performance of the proposed scheme in underwater image quality comparison, including its accuracy relative to state-of-the-art blind quality evaluation methods for natural images, ranking-based image quality evaluation algorithms, and underwater image evaluation methods. The model is also used to examine the consistency between the performance ranking of underwater image enhancement methods and subjective perception.
The method comprises the following specific steps:
In the first step, the underwater images in the dataset are paired, the pairs are divided into three classes, and the classes are marked with the preference labels {+1, -1}, {-1, +1}, and {0, 0}. The label {+1, -1} indicates that the quality of the left/top image is higher than the quality of the right/bottom image, and {-1, +1} is the reverse. The label {0, 0} indicates that the relative quality of the two underwater images is visually indistinguishable to an observer.
Second step, overview of the DP-UIQC method
The overall framework of DP-UIQC is shown in FIG. 5, with the convolutional layer parameters, max pooling, and layer-to-layer connectivity detailed in attached Table I. The two underwater images are input into a pretrained InceptionResNetV2 model to extract the features of both images. The features extracted by InceptionResNetV2 from the two underwater images are concatenated and used as the input of the following CNN-pair module, which learns the quality difference; the improved Inception module perceives the global and local quality differences within the image pair. The Reduction module reduces the multi-scale features, which avoids the bottleneck problem. After a three-layer FC linear mapping, the preference labels are classified by the Softmax function.
Third step, the Inception-Reduction module
The human visual system can distinguish objects from the background, locate objects of interest, and detect motion and direction in the environment. Some neurons of the primary visual cortex have binocular properties. According to Weber's law, when a pair of images is viewed, the two images have a specific effect on the visual perception of each other. Visual perception is affected first by overall quality differences such as brightness, mean saturation, and hue differences. For image regions of similar quality, local detail rendering and edge strength are compared spontaneously. The concatenated features of the image pair are input into two Inception-Reduction modules to obtain quality-difference perception at different scales.
The architecture of the Inception-Reduction module is shown in FIG. 6. The Inception module consists of several convolution kernels of different sizes; the invention uses convolution layers of size 1 × 1, 3 × 3, and 5 × 5 and an AvgPooling operation to extract features from the output of the previous layer. A 1 × 1 convolution is added in the Inception and Reduction modules to reduce the number of channels while increasing the nonlinearity of the model. The convolution and pooling operations in the Reduction module further reduce the size of the feature map, avoid the bottleneck problem, and improve the adaptability of the network to the diversity of underwater image quality. In contrast to the Grid-Reduction module, the max pooling layer is replaced by a 3 × 3 convolution branch to prevent loss of quality-difference features during max pooling.
The fourth step, training
Back-propagation is used to compute the gradients of the loss function with respect to all model parameters, and the Adam stochastic gradient method is used to update the parameters. All convolutional layers in the network use the Leaky ReLU (LReLU) activation function. The batch size is set to 8 and the initial learning rate to 0.001; the learning rate is halved every 20 epochs and is no longer decreased once it reaches 0.000005. The loss function is the cross-entropy loss between the model output and the true labels of the batch images, with an L2 regularization term. The model finishes training after 120 epochs. In order to compare the quality of different image contents, the input images keep their original size.
Fifth step, test the quality comparison result of the image pair
Assume that the image dataset to be predicted is P. All images in the dataset are paired pairwise, and for each image pair p_{i,j} the proposed model predicts the quality comparison label. If the output is of the first class, the quality judgment result for the image pair is {+1, -1}, meaning that the quality score of the left/top image is higher than that of the right/bottom image and the difference is visually perceptible; if the output is of the second class, the result is {-1, +1}, meaning that the quality score of the left/top image is lower than that of the right/bottom image and the difference is visually perceptible; if the output is of the third class, the relative quality difference of the two underwater images in the pair is difficult for an observer to distinguish.
For the image dataset P to be predicted, with the predicted label l_{i,j} of every image pair p_{i,j} in P, the cumulative quality comparison score S_i obtained by image i can be calculated as
S_i = Σ_{j:j≠i} l_{i,j},
where l_{i,j} ∈ {+1, 0, -1} is the preference assigned to image i in the pair p_{i,j}.
example 1: the underwater image quality comparison model provided by the invention is used for sequencing the performance of the underwater image enhancement method
How to objectively evaluate the performance of an underwater image enhancement algorithm has always been a troublesome problem, and ranking the quality of the enhanced images is an effective way to measure it. Common underwater image quality assessment methods such as UCIQE and UIQM may give false judgments for over-enhanced images. Underwater images with four color casts (blue, green, white, and yellow) are enhanced separately. The underwater image enhancement methods of Galdran et al. (GEnh), Fu et al. (FuEnh), Li et al. (LiEnh), and Yang et al. (YEnh), as well as the deep-learning underwater image enhancement methods DeepSESR, UWCNN, and FUNIE-GAN, are each applied to enhance the underwater images. All 496 (N(N-1)/2) image pairs are compared using the DP-UIQC model proposed in this scheme.
A cumulative label score can be derived for each image, thereby determining a quality ranking of the enhanced results. The enhancement rankings produced by several commonly used image quality evaluation methods are shown in attached Table III, and the rankings of the four groups of images produced by the proposed DP-UIQC method are shown in FIGS. 10-13. From attached Table III and FIGS. 10-13 it can be seen that improved contrast and color recovery tend to give a better subjective perception of image quality. As can be seen from FIG. 10, the images in FIGS. 10(a)-(c) have visually better quality, while the image in FIG. 10(h) is the worst. However, the image shown in FIG. 10(h) is ranked first by the metric UIQM. A similar situation occurs in FIGS. 12 and 13: because the UIQM score of the underwater image enhanced by the FUNIE-GAN method is high, it is rated as having the best image quality, which is clearly inconsistent with the facts. The DP-UIQC method of this scheme outputs a pair of judgment labels for the compared image qualities, which avoids the inaccuracy of traditional methods that output an independent quality value for each image when a subjectively perceived quality comparison is required. One exception is that in FIG. 11 the false color in FIG. 11(c) obtains a high cumulative label score, because it wins most quality comparisons against other underwater images owing to its better color contrast. The ranking results in FIGS. 10-13 show that, compared with other image quality evaluation methods, the proposed DP-UIQC method provides a more effective quality comparison ranking when enhanced underwater images are compared. It is a good indicator of relative quality differences between images.
Example 2: the proposed underwater image quality comparison model is used in a comparison experiment between its quality-difference judgments on underwater image pairs and the judgments of other image quality evaluation methods:
The dataset consists of 1000 underwater images. The underwater images have varied content and exhibit mixed low contrast, non-uniform color degradation, illumination, and blur distortion to different degrees. The resolution of these underwater color images is 512 × 512. An initial ranking of the underwater images is established according to the cumulative quality label value of each image in the dataset. The distribution of the cumulative quality label values (APL) of the 1000 underwater images is shown in FIG. 2. For convenience of calculation, the cumulative quality label values of the 1000 underwater images are mapped to the range 0-100.
There are 1000 underwater images in the dataset, so there are 499500 possible image pairs. The quality preference labels of these image pairs are divided into three classes: {+1, -1} indicates that the quality of the left/top image is higher than the quality of the right/bottom image and that this quality difference is visually perceptible; {-1, +1} indicates that the quality of the right/bottom image is higher than that of the left/top image and that this quality difference is visually perceptible; the {0, 0} label indicates that the relative quality difference between the two underwater images is difficult for the viewer to distinguish.
800 images are randomly selected to construct image pairs for training, with 10% of these image pairs used as the validation dataset. The remaining 200 images are paired to form the test dataset. For convenience of calculation, the set of image pairs with label {0, 0} is denoted S_00; the maximum mapped cumulative quality label score difference between the images of the pairs in S_00 is defined as ΔS_00max and its mean value as ΔS_00mean. The IL-NIQE algorithm is used to compute the mean objective score difference between the images of the pairs in S_00, denoted ΔS_ILmean. Let ΔS_{i,j} be the mapped cumulative quality label score difference for a given underwater image pair p_{i,j}.
[Equation: three-class labeling constraint on ΔS_{i,j}, expressed with the thresholds ΔS_00max, ΔS_00mean, and ΔS_ILmean defined above; the equation image is not reproduced here.]
The distributions of ΔS_{i,j} for the three classes of underwater image pairs obtained with this constraint are shown in FIG. 3. The training dataset contains 41359 image pairs with the {+1, -1} label, 42186 pairs with the {-1, +1} label, and 30289 pairs with the {0, 0} label; the validation dataset contains 4565 {+1, -1} pairs, 4643 {-1, +1} pairs, and 3339 {0, 0} pairs; the test dataset contains 4888 {+1, -1} pairs, 4930 {-1, +1} pairs, and 1578 {0, 0} pairs. Samples of the image pairs are shown in FIG. 4. The convergence curve of the training of the proposed DP-UIQC method is shown in FIG. 7.
The proposed model is compared with blind image quality assessment (BIQA) methods commonly used in previous research, including the BRISQUE, DIIVINE, BLIINDS-II, and CORNIA algorithms, the newer CNN-based no-reference image quality method KonCept512, and the ranking-based image quality evaluation method DipIQ. In addition, DP-UIQC is compared with two existing underwater image quality assessment algorithms, UCIQE and UIQM. For a fair and comprehensive comparison, each method is trained on the above-mentioned underwater image dataset according to the experimental setup described in its original work.
Before the images are paired, the predicted quality value produced by each method for the 200 underwater images in the test dataset is mapped to the true cumulative quality label score through a 5-parameter mapping function; the mapped value is denoted q_i, 1 ≤ i ≤ 200. To ensure that the subsets constructed for the compared image quality evaluation methods contain the true labels, the quality values output by each compared method determine the predicted label of every image pair in the three test subsets as follows:
[Equation: thresholding rule that converts the predicted quality difference ΔQ_{i,j} into one of the three preference labels; the equation image is not reproduced here.]
In the formula, ΔQ_{i,j} = q_i - q_j is the predicted quality difference of each compared method for a given underwater image pair p_{i,j}.
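The text does not spell out the 5-parameter mapping function itself; the sketch below assumes the five-parameter logistic mapping commonly used in image quality assessment studies, fitted with SciPy, and the function names are illustrative.

```python
# Sketch of the 5-parameter mapping step; the logistic form below is the one
# commonly used in IQA studies and is an assumption, since the text does not
# spell the function out.
import numpy as np
from scipy.optimize import curve_fit

def logistic5(q, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def map_to_label_scores(pred, true_apl):
    """Fit the mapping between predicted quality values and true cumulative label scores."""
    pred = np.asarray(pred, dtype=float)
    true_apl = np.asarray(true_apl, dtype=float)
    p0 = [np.max(true_apl), 0.1, np.mean(pred), 0.1, np.mean(true_apl)]  # rough initial guess
    params, _ = curve_fit(logistic5, pred, true_apl, p0=p0, maxfev=20000)
    return logistic5(pred, *params)                                      # mapped q_i values
```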
The confusion matrices for the three-class classification of the underwater image pairs formed from the test set are shown in FIG. 8. It can be seen that most image quality evaluation methods designed for natural images do not perform well in predicting quality differences between pairs of underwater images. In addition, as shown in FIG. 8(g) and (h), the two underwater image quality evaluation methods predict significant image quality differences more easily than they distinguish underwater image pairs of similar quality. Notably, the proposed DP-UIQC method achieves better accuracy than the other image quality evaluation methods and underwater image quality evaluation methods, both for image pairs with obvious quality differences and for pairs of similar quality. The accuracy for the {0, 0} class is lower than for the other two classes. A possible reason is that the uncertainty of image quality is difficult to predict because of fluctuations in subjective perception, which are related not only to the viewing environment but also to the psychophysiology and experience of the observer. Nevertheless, the accuracy on the underwater image dataset reaches 91% for the {0, 0} label, and 94.91% and 93.19% for the {+1, -1} and {-1, +1} labels, respectively.
Some of the compared image pairs and results are shown in FIG. 8, and the corresponding results of the compared methods are given in attached Table II. As can be seen from attached Table II, underwater images of similar quality are difficult to compare, and most image quality evaluation methods cannot distinguish them. With the proposed DP-UIQC method, underwater image pairs with different degrees of quality difference are correctly classified, showing good correlation with visual perception.
The above description is the preferred embodiment of the present invention, and it is within the scope of the appended claims to cover all modifications of the invention which may occur to those skilled in the art without departing from the spirit and scope of the invention.

Claims (6)

1. An underwater image quality comparison method without a reference image, characterized in that the method comprises:
S1, classification labels for the underwater images of the dataset
pairing the underwater images in a dataset, dividing the image pairs into three classes, and marking them with the preference labels {+1, -1}, {-1, +1}, and {0, 0};
S2, establishing the paired underwater image quality comparison model
inputting two underwater images into a pretrained InceptionResNetV2 model to extract the features of the two images, concatenating the extracted features as the input of the following CNN-pair module to learn the quality difference, perceiving the global and local quality differences within the image pair with the improved Inception module in the CNN-pair module, reducing the multi-scale features with the Reduction module, and classifying the preference labels with the Softmax function after a three-layer fully connected linear mapping;
S3, improved Inception-Reduction module
inputting the concatenated features of the image pair into two Inception-Reduction modules to obtain quality-difference perception at different scales, wherein the Inception module consists of several convolution kernels of different sizes and extracts the features output by the previous layer using convolution layers and average pooling (AvgPooling); a 1 × 1 convolution is added in the Inception and Reduction modules to reduce the number of channels while increasing the nonlinearity of the model;
S4, training
calculating the gradients of the loss function with respect to all model parameters by back-propagation and updating the parameters with the Adam stochastic gradient method, wherein the loss function is the cross-entropy loss between the model output and the true labels of the batch images, with an L2 regularization term;
S5, testing the quality comparison result of image pairs
assuming that the image dataset to be predicted is P, all images in the dataset are paired pairwise, and for each image pair p_{i,j} the proposed model predicts the quality comparison result label;
if the output result is of the first class, the quality judgment result for the image pair is {+1, -1}, meaning that the quality score of the left/top image is higher than that of the right/bottom image and the difference is visually perceptible;
if the output result is of the second class, the quality judgment result for the image pair is {-1, +1}, meaning that the quality score of the left/top image is lower than that of the right/bottom image and the difference is visually perceptible;
if the output result is of the third class, the quality judgment result for the image pair is {0, 0}, indicating that the relative quality difference of the two underwater images is difficult for an observer to distinguish.
2. The underwater image quality comparison method without a reference image according to claim 1, wherein: {+1, -1} in S1 indicates that the quality score of the left/top image is higher than the quality score of the right/bottom image, and {-1, +1} indicates that the quality score of the left/top image is lower than the quality score of the right/bottom image; the {0, 0} label indicates that the relative quality of the two underwater images is difficult for the viewer to distinguish when voting.
3. The underwater image quality comparison method without a reference image according to claim 1, wherein: the Inception module in S3 extracts the features output by the previous layer using convolution layers of size 1 × 1, 3 × 3, and 5 × 5 and average pooling (AvgPooling); a 1 × 1 convolution is added in the Inception and Reduction modules to reduce the number of channels and increase the nonlinearity of the model; compared with the Grid-Reduction module, the max pooling layer is replaced by a 3 × 3 convolution branch to prevent loss of quality-difference features during max pooling.
4. The underwater image quality comparison method without a reference image according to claim 1, wherein: all convolutional layers use the Leaky ReLU (LReLU) activation function; the batch size is set to 8, the initial learning rate is set to 0.001, the learning rate is halved every 20 epochs, and it is no longer decreased once it reaches 0.000005.
5. The underwater image quality comparison method without a reference image according to claim 1, wherein: in S5, the labels l_{i,j} of all image pairs p_{i,j} in the dataset P are predicted, and the cumulative quality comparison score S_i obtained by image i is calculated as
S_i = Σ_{j:j≠i} l_{i,j},
where l_{i,j} ∈ {+1, 0, -1} is the preference assigned to image i in the pair p_{i,j}.
6. The underwater image quality comparison method without a reference image according to claim 1, wherein: the dataset consists of underwater images of varied content, exhibiting mixed low contrast, non-uniform color degradation, illumination, and blur distortion to different degrees; the resolution of the underwater color images is 512 × 512; and an initial ranking of the underwater images is established according to the cumulative quality label value of each image in the dataset.
CN202210006948.2A 2022-01-05 2022-01-05 Underwater image quality comparison method without reference image Pending CN114549402A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210006948.2A CN114549402A (en) 2022-01-05 2022-01-05 Underwater image quality comparison method without reference image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210006948.2A CN114549402A (en) 2022-01-05 2022-01-05 Underwater image quality comparison method without reference image

Publications (1)

Publication Number Publication Date
CN114549402A true CN114549402A (en) 2022-05-27

Family

ID=81669372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210006948.2A Pending CN114549402A (en) 2022-01-05 2022-01-05 Underwater image quality comparison method without reference image

Country Status (1)

Country Link
CN (1) CN114549402A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960301A (en) * 2018-06-20 2018-12-07 西南大学 A kind of ancient Yi nationality's text recognition methods based on convolutional neural networks
CN110349134A (en) * 2019-06-27 2019-10-18 广东技术师范大学天河学院 A kind of piping disease image classification method based on multi-tag convolutional neural networks
CN111401257A (en) * 2020-03-17 2020-07-10 天津理工大学 Non-constraint condition face recognition method based on cosine loss
CN111612741A (en) * 2020-04-22 2020-09-01 杭州电子科技大学 Accurate non-reference image quality evaluation method based on distortion recognition
CN113066065A (en) * 2021-03-29 2021-07-02 中国科学院上海高等研究院 No-reference image quality detection method, system, terminal and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MIAO YANG ET AL.: "Distortion-Independent Pairwise Underwater Image Perceptual Quality Comparison", 《JOURNAL OF LATEX CLASS FILES》, vol. 14, no. 8, 31 August 2021 (2021-08-31), pages 1 - 16 *
机器之心 (Jiqizhixin): "From Inception v1 to Inception-ResNet: an overview of the Inception family's development", pages 1 - 12, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/37505777> *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination