CN112819015A - Image quality evaluation method based on feature fusion - Google Patents

Image quality evaluation method based on feature fusion

Info

Publication number
CN112819015A
Authority
CN
China
Prior art keywords: image, comparison, pictures, features, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110166618.5A
Other languages
Chinese (zh)
Inventor
Yu Wenxin (俞文心)
Zhang Xuewen (张学文)
Liu Mingjin (刘明金)
Chen Shiyu (陈世宇)
Nie Liang (聂梁)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2021-02-04
Publication date: 2021-05-18
Application filed by Southwest University of Science and Technology
Priority to CN202110166618.5A
Publication of CN112819015A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The invention discloses an image quality evaluation method based on feature fusion, which comprises the following steps: S10, acquiring comparison images through a collection model according to the target image; S20, extracting features of the target image to be evaluated and of the comparison images; and S30, quantitatively analyzing the compatibility of the feature values of the target image and the comparison images, and obtaining an evaluation result. The invention realizes image quality evaluation in the absence of a standard reference image, can effectively evaluate images of various qualities, and produces accurate evaluation results.

Description

Image quality evaluation method based on feature fusion
Technical Field
The invention belongs to the technical field of image quality evaluation, and particularly relates to an image quality evaluation method based on feature fusion.
Background
Images generated by generative adversarial networks (GANs) offer good flexibility, but the resulting pictures are often unusable because of low quality. The distortions in such pictures are usually perceptual rather than pure noise or blur (for example, a poorly generated bird image may lack a head or wings). In some generation tasks these distortions are more pronounced than in image-to-image settings; for example, pictures generated from natural-language descriptions suffer from this problem because the conversion goes from less information (a textual description) to more information (an image). An effective quality evaluation method for these types of images can guarantee their basic quality, since generated images can only be put into practical use once their quality reaches a certain level; the lack of such an evaluation method is undoubtedly a great obstacle to meeting application requirements.
In the prior art, evaluation strategies are usually global: generated pictures are scored from the overall statistics of a large set of pictures. However, such methods only measure overall recognizability and diversity and do not attend to individual images, yet evaluating each image itself is necessary for further optimization of the results and for practical application. Conventional image quality assessment (IQA) methods can provide a score for a single image; these solutions are generally divided into full-reference (FR) and no-reference (NR) approaches. Studies based on the sensitivity of the human visual system (HVS) to visual signals, or on the structural similarity index (SSIM) of image structure, are representative FR methods. They are not useful for generated images, because the randomness and uncontrollability of the generation process mean that no pixel-level original image can be found. The existing no-reference methods, the only viable alternative, are also poorly suited to generated images: most of them target pixel-level distortions (such as blur or pure noise), which do not match the characteristics of generated images. In theory, some assessment work aimed specifically at aesthetics could apply here, because aesthetics and subjective perception are inseparable. However, evaluation designed for a generation task must consider quality relative to the particular image set used as the generation resource (the training data); otherwise the index contributes negligibly to optimization. The existing methods are therefore decoupled from the resource set that guides generation: they cannot perform wide-area image quality evaluation, cannot evaluate the quality of generated images, cannot evaluate image quality in the absence of a standard reference image, and suffer from the difficulty of obtaining data sets in the image quality evaluation field.
Disclosure of Invention
In order to solve the above problems, the invention provides an image quality evaluation method based on feature fusion, which realizes image quality evaluation in the absence of a standard reference image, can effectively evaluate images of various qualities, and produces accurate evaluation results.
In order to achieve this purpose, the invention adopts the following technical scheme: an image quality assessment method based on feature fusion comprises the following steps:
S10, acquiring comparison images through a collection model according to the target image;
S20, extracting features of the target image to be evaluated and of the comparison images;
and S30, quantitatively analyzing the compatibility of the feature values of the target image and the comparison images, and obtaining an evaluation result.
Further, the step S10 includes the steps of:
extracting semantic features of the target image through a convolutional neural network;
and selecting comparison images for comparison according to the extracted semantic features.
Further, the features of the target image are extracted by using a RESNET pre-trained model, and the semantic category probabilities of the image are obtained after a SoftMax function; N comparison pictures are then selected according to the semantic category probabilities, with $N_i$ pictures selected for the $i$-th semantic category:

$$N_i = N \cdot \frac{c_i}{\sum_{j=1}^{n_c} c_j}$$

where $N$ represents the total number of comparison pictures to be selected and can be set according to requirements and hardware conditions; $n_c$ represents the total number of semantic categories of pictures in the dataset; and $c_j$ and $c_i$ represent the number of pictures in the $j$-th and $i$-th semantic categories, respectively.
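As an illustration only, this selection step can be sketched in Python as follows; the helper names, the use of torchvision's pretrained ResNet-50, and the way the per-category allocation $N_i = N \cdot c_i / \sum_j c_j$ is combined with the predicted probabilities are assumptions of the sketch, not details fixed by the disclosure:

```python
import torch
from torchvision import models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def select_comparison_images(target_img, dataset_by_class, class_counts, N=50):
    """Sketch of step S10: pick N comparison pictures for one target image.

    dataset_by_class maps a semantic category id to its picture list;
    class_counts holds c_i, the number of dataset pictures per category.
    """
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    with torch.no_grad():
        logits = model(preprocess(target_img).unsqueeze(0))
        probs = torch.softmax(logits, dim=1).squeeze(0)  # semantic category probabilities
    total = sum(class_counts.values())                   # sum_j c_j
    selected = []
    # Visit categories in order of predicted probability and allocate
    # N_i = N * c_i / sum_j c_j pictures to each visited category.
    for i in probs.argsort(descending=True).tolist():
        if i not in class_counts:
            continue
        n_i = round(N * class_counts[i] / total)
        selected.extend(dataset_by_class[i][:n_i])
        if len(selected) >= N:
            break
    return selected[:N]
```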
Further, the step S20 includes the steps of:
bringing together the target image and its corresponding comparison images;
and extracting the features of all pictures by using the shallow layers of the network. The shallow features of the network are used because deep features ignore the detailed information of an image, while image quality assessment depends to a large extent on whether the details are good, so the features need to reflect the details of each picture as much as possible.
Further, when the shallow features of the network are used to extract the features of all the pictures, the network parameters and structure are the same as those of the convolutional neural network adopted by the collection model.
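A minimal sketch of this shared-weights shallow extraction, assuming the same torchvision ResNet-50 as above and assuming that "shallow" means truncating after the first residual stage (the exact cut point is not fixed by the disclosure):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_shallow_extractor():
    # Reuse the collection model's network, but keep only its shallow
    # layers; cutting after layer1 is an assumption made for the sketch.
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    shallow = nn.Sequential(
        resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
        resnet.layer1,   # stop early so image details survive in the features
    )
    return shallow.eval()

def extract_features(extractor, batch):
    # batch: (B, 3, H, W) preprocessed images -> one flat feature vector each
    with torch.no_grad():
        fmap = extractor(batch)          # (B, C, H', W') shallow feature maps
    return fmap.flatten(start_dim=1)     # (B, C*H'*W')
```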
Further, in step S30, a quantitative analysis of the compatibility is performed according to the difference features between the target image and its corresponding comparison images, so as to obtain an evaluation score.
Further, the step S30 includes the steps of:
expressing the distance between two feature distributions by the Wasserstein distance:

$$W(P_1, P_2) = \inf_{\gamma \in \Pi(P_1, P_2)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big]$$

where $P_1$ and $P_2$ respectively represent the two feature distributions to be compared; $\Pi(P_1, P_2)$ is the set of all possible joint distributions combining the two; for each joint distribution $\gamma$, a pair $(x, y)$ is sampled from it and the mathematical expectation $\mathbb{E}$ of the distance $\lVert x - y \rVert$ is computed; the Wasserstein distance is the infimum of this expectation over all joint distributions;
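For illustration, a one-dimensional empirical estimate of this distance is available in SciPy; treating each flattened feature vector as samples from a one-dimensional distribution is an assumption of this sketch, since the disclosure leaves the practical estimator open:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def feature_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    # Empirical 1-D Wasserstein distance between two feature vectors,
    # each treated as samples of a distribution (sketch assumption).
    return wasserstein_distance(f1.ravel(), f2.ravel())
```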
identifying the evaluated target image and the comparison images with the quantitative result of the compatibility:

$$CP = \frac{1}{n} \sum_{i=1}^{n} W(T, C_i)$$

$$T = \big(W(f, f_1), W(f, f_2), \ldots, W(f, f_n)\big)$$

$$C_i = \big(W(f_i, f_1), W(f_i, f_2), \ldots, W(f_i, f_n)\big)$$

where $CP$ represents the compatibility; $T$ represents the feature vector of the differences between the target image and the comparison images; $C_i$ represents the feature vector of the differences between the features of the $i$-th comparison image and the remaining comparison images; $W(f_1, f_2)$ represents the Wasserstein distance between two feature distributions; $f_1, f_2, \ldots, f_n$ represent the feature distributions of all comparison pictures; and $f$ represents the feature distribution of the target picture to be evaluated. In this way, the computer can quickly notice, between the target image to be evaluated and the comparison images, where the target image violates the appearance of the real images.
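Under the reconstruction above, the compatibility computation can be sketched as follows; the averaging form of $CP$ is an assumption (the disclosure only fixes the difference vectors $T$ and $C_i$), and feature_distance is the helper sketched earlier:

```python
import numpy as np

def compatibility_score(f_target: np.ndarray, f_comps: list) -> float:
    # T collects the target's Wasserstein distances to every comparison picture.
    T = np.array([feature_distance(f_target, fj) for fj in f_comps])
    # C_i does the same for the i-th comparison picture against all others;
    # CP averages the distance between T and each C_i (assumed aggregation).
    cp_terms = []
    for fi in f_comps:
        C_i = np.array([feature_distance(fi, fj) for fj in f_comps])
        cp_terms.append(feature_distance(T, C_i))
    return float(np.mean(cp_terms))
```

A smaller $CP$ means the target's difference pattern blends in with those of the real comparison pictures, i.e. higher compatibility.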
The beneficial effects of the technical scheme are as follows:
the invention realizes image quality evaluation under the condition of lacking standard original images. The problem of wide-area image quality assessment is solved, especially in the case of image loss other than traditional synthetic loss (such as noise, blur, etc.). The method solves the problem of quality evaluation of the generated image, and can be suitable for detecting and evaluating the low-quality image with distorted reality on the overall structure of the image.
The invention adopts the compatibility of the target guided by the image semantic features and the comparative picture to calculate the quality score, and solves the problem that the data set in the image quality evaluation field is difficult to obtain. The method is realized by utilizing the quantification of the inter-feature compatibility of the pictures, and is an unsupervised and reference-picture-free generated picture quality evaluation method under the condition of no subjective marking data set.
The invention can directly improve the quality of the generated picture, and the generated picture has important functions in the fields of computer aided design, conversion from other form information to image information and the like. The method can directly optimize the final result through two angles of filtering the low-quality picture and feeding back the optimization generation algorithm, and provides basic guarantee for the application of generating the picture.
Drawings
FIG. 1 is a schematic flow chart of an image quality evaluation method based on feature fusion according to the present invention;
FIG. 2 is a schematic diagram illustrating an image quality assessment method based on feature fusion according to an embodiment of the present invention;
FIG. 3 is a scatter plot of the distances between the difference features of two picture sets of different quality, as evaluated by the present invention, in an embodiment of the present invention;
FIG. 4 is a diagram illustrating the scores obtained by the method of the present invention for generated bird pictures in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to FIG. 1 and FIG. 2, the present invention provides an image quality evaluation method based on feature fusion, comprising the steps of:
S10, acquiring comparison images through a collection model according to the target image;
S20, extracting features of the target image to be evaluated and of the comparison images;
and S30, quantitatively analyzing the compatibility of the feature values of the target image and the comparison images, and obtaining an evaluation result.
As an optimization scheme 1 of the above embodiment, the step S10 includes the steps of:
extracting semantic features of the target image through a convolutional neural network;
and selecting comparison images for comparison according to the extracted semantic features.
The features of the target image are extracted by using a RESNET pre-trained model, and the semantic category probabilities of the image are obtained after a SoftMax function; N comparison pictures are then selected according to the semantic category probabilities, with $N_i$ pictures selected for the $i$-th semantic category:

$$N_i = N \cdot \frac{c_i}{\sum_{j=1}^{n_c} c_j}$$

where $N$ represents the total number of comparison pictures to be selected and can be set according to requirements and hardware conditions; $n_c$ represents the total number of semantic categories of pictures in the dataset; and $c_j$ and $c_i$ represent the number of pictures in the $j$-th and $i$-th semantic categories, respectively.
As an optimization scheme 2 of the above embodiment, the step S20 includes the steps of:
bringing together the target image and its corresponding comparison images;
and extracting the features of all pictures by using the shallow layers of the network. The shallow features of the network are used because deep features ignore the detailed information of an image, while image quality assessment depends to a large extent on whether the details are good, so the features need to reflect the details of each picture as much as possible.
The shallow features of the network are used to extract the features of all pictures, and the network parameters and structure are the same as those of the convolutional neural network adopted by the collection model.
As an optimization scheme 3 of the above embodiment, in step S30, a quantitative analysis of the compatibility is performed according to the difference features between the target image and its corresponding comparison images, so as to obtain an evaluation score.
In the step S30, the method includes the steps of:
expressing the distance between two feature distributions by the Wasserstein distance:

$$W(P_1, P_2) = \inf_{\gamma \in \Pi(P_1, P_2)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big]$$
identifying the evaluated target image and the comparison images with the quantitative result of the compatibility:

$$CP = \frac{1}{n} \sum_{i=1}^{n} W(T, C_i)$$

$$T = \big(W(f, f_1), W(f, f_2), \ldots, W(f, f_n)\big)$$

$$C_i = \big(W(f_i, f_1), W(f_i, f_2), \ldots, W(f_i, f_n)\big)$$

where $CP$ represents the compatibility; $T$ represents the feature vector of the differences between the target image and the comparison images; $C_i$ represents the feature vector of the differences between the features of the $i$-th comparison image and the remaining comparison images; $W(f_1, f_2)$ represents the Wasserstein distance between two feature distributions; $f_1, f_2, \ldots, f_n$ represent the feature distributions of all comparison pictures; and $f$ represents the feature distribution of the target picture to be evaluated. In this way, the computer can quickly notice, between the target image to be evaluated and the comparison images, where the target image violates the appearance of the real images.
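Combining the sketches above end to end, and assuming the hypothetical helpers defined earlier (select_comparison_images, build_shallow_extractor, extract_features, compatibility_score) plus a simple monotone mapping from compatibility to score (lower $CP$ meaning better quality is an assumption consistent with the distance behaviour shown in FIG. 3):

```python
def evaluate_image(target_img, dataset_by_class, class_counts, to_batch):
    # to_batch is assumed to apply the collection model's preprocessing
    # to a list of images and stack them into a (B, 3, H, W) tensor.
    comps = select_comparison_images(target_img, dataset_by_class, class_counts)
    extractor = build_shallow_extractor()
    f_target = extract_features(extractor, to_batch([target_img]))[0].numpy()
    f_comps = [f.numpy() for f in extract_features(extractor, to_batch(comps))]
    cp = compatibility_score(f_target, f_comps)
    return 1.0 / (1.0 + cp)   # higher score = better quality (assumed mapping)
```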
In order to fully verify the effectiveness of the method of the present invention, the difference features of two picture sets of different quality, taken from the middle of the algorithm, are shown as a scatter plot in FIG. 3. Comparing the left and right images, the left set is clearly of better quality; extracting the distance distributions of the pictures shows that the distances between the feature vectors of the better-quality pictures are clearly smaller and more stable, so picture quality can be effectively evaluated through this distance.
As shown in FIG. 4, the scores of different generated pictures are obtained through evaluation by the method; the higher the score, the better the quality of the picture, which shows that picture quality can be evaluated effectively.
The foregoing shows and describes the general principles and principal features of the present invention and its advantages. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and descriptions in the specification only illustrate the principle of the invention, and various changes and modifications may be made without departing from the spirit and scope of the invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. An image quality assessment method based on feature fusion, characterized by comprising the following steps:
S10, acquiring comparison images through a collection model according to the target image;
S20, extracting features of the target image to be evaluated and of the comparison images;
and S30, quantitatively analyzing the compatibility of the feature values of the target image and the comparison images, and obtaining an evaluation result.
2. The method for evaluating image quality based on feature fusion according to claim 1, wherein the step S10 includes the steps of:
extracting semantic features of the target image through a convolutional neural network;
and selecting comparison images for comparison according to the extracted semantic features.
3. The image quality evaluation method based on feature fusion according to claim 2, characterized in that a RESNET pre-trained model is used to extract the features of the target image; the semantic category probabilities of the picture are obtained after a SoftMax function; N comparison pictures are selected according to the semantic category probabilities, with $N_i$ pictures selected for the $i$-th semantic category:

$$N_i = N \cdot \frac{c_i}{\sum_{j=1}^{n_c} c_j}$$

where $N$ represents the total number of comparison pictures to be selected; $n_c$ represents the total number of semantic categories of pictures in the dataset; and $c_j$ and $c_i$ represent the number of pictures in the $j$-th and $i$-th semantic categories, respectively.
4. The method for evaluating image quality based on feature fusion according to claim 1, wherein the step S20 includes the steps of:
bringing together the target image and its corresponding comparison images;
and extracting the features of all pictures by using the shallow features of the network.
5. The method for evaluating image quality based on feature fusion according to claim 4, wherein the shallow features of the network are used to extract the features of all pictures, and the network parameters and structure are the same as those of the convolutional neural network used by the collection model.
6. The method for evaluating image quality based on feature fusion according to claim 1, characterized in that, in step S30, a quantitative analysis of the compatibility is performed according to the difference features between the target image and its corresponding comparison images, so as to obtain an evaluation score.
7. The method for evaluating image quality based on feature fusion according to claim 6, wherein in the step S30, the method comprises the steps of:
the distance between two feature distributions is expressed by the Wasserstein distance:

$$W(P_1, P_2) = \inf_{\gamma \in \Pi(P_1, P_2)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert\big]$$

where $P_1$ and $P_2$ respectively represent the two feature distributions to be compared; $\Pi(P_1, P_2)$ is the set of all possible joint distributions combining the two; for each joint distribution $\gamma$, a pair $(x, y)$ is sampled from it and the mathematical expectation $\mathbb{E}$ of the distance $\lVert x - y \rVert$ is computed; the Wasserstein distance is the infimum of this expectation over all joint distributions;
the evaluated target image and the comparison images are identified with the quantitative result of the compatibility:

$$CP = \frac{1}{n} \sum_{i=1}^{n} W(T, C_i)$$

$$T = \big(W(f, f_1), W(f, f_2), \ldots, W(f, f_n)\big)$$

$$C_i = \big(W(f_i, f_1), W(f_i, f_2), \ldots, W(f_i, f_n)\big)$$

where $CP$ represents the compatibility; $T$ represents the feature vector of the differences between the target image and the comparison images; $C_i$ represents the feature vector of the differences between the features of the $i$-th comparison image and the remaining comparison images; $W(f_1, f_2)$ represents the Wasserstein distance between two feature distributions; $f_1, f_2, \ldots, f_n$ represent the feature distributions of all comparison pictures; and $f$ represents the feature distribution of the target picture to be evaluated.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110166618.5A | 2021-02-04 | 2021-02-04 | Image quality evaluation method based on feature fusion

Publications (1)

Publication Number | Publication Date
CN112819015A | 2021-05-18

Family

Family ID: 75862100

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202110166618.5A | Image quality evaluation method based on feature fusion | 2021-02-04 | 2021-02-04 | Pending

Country Status (1)

Country | Link
CN | CN112819015A


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105160678A * | 2015-09-02 | 2015-12-16 | Shandong University | Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN106131545A * | 2016-08-09 | 2016-11-16 | 上海泓申科技发展有限公司 | A kind of picture quality inspection device and method
CN109727246A * | 2019-01-26 | 2019-05-07 | Fuzhou University | Comparative learning image quality evaluation method based on twin network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xuewen Zhang et al.: "Deep Feature Compatibility for Generated Images Quality Assessment", International Conference on Neural Information Processing (ICONIP), 2020. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113362240A * | 2021-05-31 | 2021-09-07 | Southwest University of Science and Technology | Image restoration method based on lightweight feature pyramid model
CN113362239A * | 2021-05-31 | 2021-09-07 | Southwest University of Science and Technology | Deep learning image restoration method based on feature interaction
CN117078664A * | 2023-10-13 | 2023-11-17 | 脉得智能科技(无锡)有限公司 | Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus
CN117078664B * | 2023-10-13 | 2024-01-23 | 脉得智能科技(无锡)有限公司 | Computer-readable storage medium, ultrasonic image quality evaluation device, and electronic apparatus


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210518)