CN113763348A - Image quality determination method and device, electronic equipment and storage medium - Google Patents

Image quality determination method and device, electronic equipment and storage medium

Info

Publication number
CN113763348A
CN113763348A (application CN202111024179.0A)
Authority
CN
China
Prior art keywords
image
quality
sample
prediction model
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111024179.0A
Other languages
Chinese (zh)
Inventor
冯子勇
周瑞
赵勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Gelingshentong Information Technology Co ltd
Original Assignee
Beijing Gelingshentong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Gelingshentong Information Technology Co ltd filed Critical Beijing Gelingshentong Information Technology Co ltd
Priority to CN202111024179.0A
Publication of CN113763348A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

Embodiments of the present application provide an image quality determination method and apparatus, an electronic device, and a storage medium. The method includes: acquiring an image to be recognized; inputting the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and outputting the quality score. Because the quality label encodes the quality difference between two sample images, it better reflects the true quality of the sample images, so the quality of the image to be recognized can be determined accurately using the quality prediction model.

Description

Image quality determination method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image quality determination method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of science and technology, target recognition technology has been applied in many fields, making people's lives more convenient and intelligent. Target recognition generally requires extracting features from a captured image to be recognized and then performing matching and recognition based on the extracted features.
Taking face recognition as an example, in actual use, factors such as motion and illumination may cause quality problems, such as blurring, in the captured face image. The features finally extracted for face recognition are then poor, and different people may be misrecognized as the same person, or the same person recognized as two different people. Therefore, in the field of face recognition, evaluating the quality of face images, filtering out those with low quality scores, and only then extracting face features is an important part of the whole face recognition pipeline.
At present, the judged quality of an image depends heavily on the subjective perception of annotators, and different annotators may produce different results for the same image, so it is difficult to determine image quality accurately.
Disclosure of Invention
Embodiments of the present application provide an image quality determination method and apparatus, an electronic device, and a storage medium, which can effectively solve the problem that image quality is difficult to determine accurately.
According to a first aspect of the embodiments of the present application, there is provided an image quality determination method, which acquires an image to be recognized; inputs the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and outputs the quality score.
According to a second aspect of the embodiments of the present application, there is provided an image quality determination apparatus including: an image acquisition module configured to acquire an image to be recognized; a quality score acquisition module configured to input the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and an output module configured to output the quality score.
According to a third aspect of the embodiments of the present application, there is provided an electronic device comprising one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the method described above.
According to a fourth aspect of the embodiments of the present application, there is provided a computer-readable storage medium having program code stored therein, wherein the program code, when run, performs the method described above.
With the image quality determination method provided by the embodiments of the present application, an image to be recognized is acquired; the image to be recognized is input into a quality prediction model to obtain a corresponding quality score, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and the quality score is output. Because the quality label encodes the quality difference between two sample images, it better reflects their true quality, so the quality of the image to be recognized can be determined accurately using the quality prediction model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of an image quality determination method according to an embodiment of the present application;
fig. 2 is a flowchart of an image quality determination method according to another embodiment of the present application;
FIG. 3 is a schematic diagram of a twin network according to yet another embodiment of the present application;
FIG. 4 is a functional block diagram of an image quality determination apparatus according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device for executing an image quality determination method according to an embodiment of the present application.
Detailed Description
With the rapid development of science and technology, target recognition technology has been applied in many fields, making people's lives more convenient and intelligent. Target recognition generally requires extracting features from a captured image to be recognized and then performing matching and recognition based on the extracted features.
Taking face recognition as an example, in actual use, factors such as motion and illumination may cause quality problems, such as blurring, in the captured face image. The features finally extracted for face recognition are then poor, and different people may be misrecognized as the same person, or the same person recognized as two different people. Therefore, in the field of face recognition, evaluating the quality of face images, filtering out those with low quality scores, and only then extracting face features is an important part of the whole face recognition pipeline.
The inventors found in research that there are generally two methods for determining image quality. The first uses the Laplacian operator to measure the second derivative of the image, emphasizing regions that contain rapid intensity changes. If the variance of the Laplacian response is high, the image contains a wide range of responses, covering both edges and non-edges, which indicates a normal, sharp image. If the variance is low, the range of responses is narrow, indicating that there are few edges; an image containing little edge information is blurred.
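The variance-of-Laplacian measure described above can be illustrated with a small NumPy sketch. This is an assumption-laden illustration, not the patent's implementation: the 4-neighbour kernel is applied by explicit array slicing, and the two test images are synthetic.

```python
import numpy as np

def variance_of_laplacian(gray):
    """Blur measure: apply the 4-neighbour Laplacian kernel to the
    interior of a grayscale image and return the variance of the
    response. A low value suggests little edge information, i.e.
    a blurred image."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]    # neighbours above / below
           + g[1:-1, :-2] + g[1:-1, 2:])   # neighbours left / right
    return float(lap.var())

sharp = np.zeros((8, 8)); sharp[:, 4:] = 255.0  # hard vertical edge
blurred = np.full((8, 8), 128.0)                # flat image, no edges
# the sharp image yields a much larger variance than the flat one
```

Real implementations typically use a library convolution (e.g. an image-processing toolkit) rather than hand-rolled slicing, but the decision rule is the same: threshold the variance to flag blurred images.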
However, this approach is in practice limited to certain quality problems caused by motion blur and lighting, which motivated the second approach, based on deep learning. This method requires collecting a large amount of data containing both blurred and sharp images; annotators then label the images, either with a binary label (blurred or not blurred) or with a blur score. Both the binary judgment and the score, however, reflect the annotator's subjective perception, so the data are fed into a convolutional neural network for training without any guarantee of objective correctness, producing a model that predicts image blur. When the model's predictions are inaccurate, a large number of similar samples must be collected and added to the training set, and a new model retrained to correct the earlier errors.
However, neither blur level nor image quality has a clear definition of its own, so there is no way to determine accurately whether an image is good or bad, and it is even harder to assign a score indicating its quality. When annotators label the data, their subjective judgment is inevitably introduced, and even the same annotator's labeling standard may differ at different times. The labeled data are therefore inconsistent, a good model cannot be trained, and image quality is difficult to determine accurately.
To solve the above problem, an embodiment of the present application provides an image quality determination method: acquire an image to be recognized; input the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and output the quality score. Because the quality label encodes the quality difference between the two sample images, it better reflects their true quality, so the quality of the image to be recognized can be determined accurately by the quality prediction model trained on such labeled image pairs.
The solutions in the embodiments of the present application may be implemented in various computer languages, for example the object-oriented programming language Java, or interpreted scripting languages such as JavaScript and Python.
To make the technical solutions and advantages of the embodiments of the present application clearer, exemplary embodiments are described in further detail below with reference to the accompanying drawings. Clearly, the described embodiments are only some of the embodiments of the present application, not an exhaustive list. It should be noted that, in the absence of conflict, the embodiments in the present application and the features in the embodiments may be combined with each other.
Referring to fig. 1, an embodiment of the present application provides an image quality determining method, which is applicable to an electronic device, where the electronic device may be a smart phone, a computer, a server, or the like, and the method may specifically include the following steps.
Step 110, acquiring an image to be recognized.
An image acquisition apparatus may be used to capture images, and the electronic device may obtain an image captured by the image acquisition apparatus as the image to be recognized.
In some embodiments, the image acquisition apparatus may be disposed on the electronic device, so that the image to be recognized can be acquired directly.
In other embodiments, the electronic device and the image acquisition apparatus are two separate devices; when the image acquisition apparatus captures an image to be recognized, it may send the image to the electronic device, so that the electronic device obtains the image to be recognized.
Step 120, inputting the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, wherein the quality prediction model is obtained by training a neural network by using a plurality of image pairs with quality labels, the image pairs comprise two sample images, and the quality labels comprise quality differences of the two sample images.
After the image to be recognized is obtained, it may be input into the quality prediction model to obtain the corresponding quality score. The quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels; each image pair includes two sample images, and the quality label indicates their quality difference, i.e., which of the two sample images has the better quality. For example, if the quality of sample image A is better than that of sample image B, the quality label of A may be 1 and that of B may be 0, so the better-quality and worse-quality sample images can be determined from the quality labels.
In some embodiments, the quality label may be labeled in advance by an annotator: the annotator confirms which sample image in the pair has the better quality, the other sample image in the pair then being the worse one. Comparing the two images of a pair in this way is more objective and accurate than asking an annotator to judge the quality of a single sample image directly.
In other embodiments, if one sample image in the pair is obtained by applying distortion processing to the other, the quality label may be generated automatically: the distorted sample image is of lower quality than the original. For example, if sample image A1 is obtained by distorting sample image A, then A and A1 are combined into an image pair, and the automatically generated quality label is 1 for A and 0 for A1.
And taking a plurality of image pairs with the quality labels as a training sample set, inputting the training sample set into a neural network, and training the neural network to obtain a quality prediction model. The quality prediction model may output a quality score corresponding to an image to be recognized according to the input image to be recognized.
In some embodiments, the neural network may be a twin network comprising a first neural network and a second neural network having the same parameters, the twin network being trained using the training sample set to obtain the quality prediction model.
It can be understood that, since the parameters of the first and second neural networks are the same, the image to be recognized may be input into either the first or the second neural network, and either will yield the quality score corresponding to the image to be recognized.
Step 130, outputting the quality score.
After the quality score is obtained, the quality score may be output for subsequent use.
In some embodiments, a preset score may be set, and when the quality score is greater than or equal to the preset score, the image to be recognized is recognized. If the quality score of the image to be recognized is greater than or equal to the preset score, the image to be recognized is good in quality, good features can be extracted from the image to be recognized, recognition accuracy is high, and therefore the image to be recognized can be recognized. For example, the image to be recognized may be a face image, and when the quality score of the face image is greater than or equal to the preset score, face recognition is performed on the face image.
If the quality score of the image to be recognized is smaller than the preset score, the quality of the image to be recognized is poor, the features extracted from the image to be recognized are poor, and the recognition accuracy is low, so that the image to be recognized can be discarded, and the image to be recognized can be obtained again.
The preset score may be determined from quality scores of reference images. Specifically, feature extraction may be performed on a plurality of images to be recognized to identify an image from which good features can be extracted; that image is then input into the quality prediction model, and its quality score is taken as the preset score. By screening images for quality and recognizing only those of good quality, recognition accuracy can be improved.
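The screening step above can be sketched in a few lines of Python. The file names and threshold below are illustrative, not from the patent:

```python
def filter_for_recognition(images_with_scores, preset_score):
    """Keep only images whose predicted quality score meets the
    preset threshold; the rest are discarded and can be re-acquired."""
    accepted, rejected = [], []
    for image, score in images_with_scores:
        (accepted if score >= preset_score else rejected).append(image)
    return accepted, rejected

accepted, rejected = filter_for_recognition(
    [("face_1.jpg", 0.91), ("face_2.jpg", 0.42), ("face_3.jpg", 0.77)],
    preset_score=0.6,
)
# face_1.jpg and face_3.jpg proceed to recognition; face_2.jpg is discarded
```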
With the image quality determination method provided by this embodiment, an image to be recognized is acquired; it is input into a quality prediction model to obtain a corresponding quality score, where the quality prediction model is obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair includes two sample images, and the quality label indicates the quality difference between the two sample images; and the quality score is output. Because the quality label encodes the quality difference between two sample images, it better reflects their true quality, so the quality of the image to be recognized can be determined accurately using the quality prediction model.
Referring to fig. 2, another embodiment of the present application provides an image quality determination method, which focuses on the process of obtaining a quality prediction model based on the foregoing embodiment, and specifically, the method may include the following steps.
Step 210, obtaining a training sample set, where the training sample set includes a plurality of image pairs with quality labels.
The training sample set includes a plurality of image pairs and quality labels corresponding to the image pairs. Each image pair may comprise two sample images, the quality label comprising a difference in quality of the two sample images.
And training the neural network based on the training sample set to obtain a quality prediction model. The training sample set may be obtained before training the neural network to obtain the quality prediction model.
When the training sample set is obtained, a plurality of sample images may be acquired; for each sample image, distortion processing is applied to obtain a processed sample image; the sample images and processed sample images are randomly combined into image pairs; and the quality difference of the two sample images in each pair is generated according to the degree of distortion, giving the quality label corresponding to that image pair.
Specifically, when the training sample set is obtained, a plurality of sample images may be obtained first, where the plurality of sample images may be obtained directly from an existing database of sample images, or may be a large number of randomly acquired sample images. After the sample image is obtained, distortion processing may be performed on the sample image to obtain a processed sample image. And randomly combining the processed sample image and the sample image to obtain the image pair.
For example, sample image A1 and sample image A2 may be obtained by applying distortion processing of different degrees to sample image A. Sample image A may then be combined with A1 into an image pair, A with A2 into an image pair, and A1 with A2 into an image pair.
After an image pair is obtained, the quality label for the pair may be generated according to the degree of distortion: a processed sample image with a lower degree of distortion is of better quality than one with a higher degree, and the undistorted sample image has the lowest degree of distortion. Continuing the previous example, suppose sample image A1 was produced with a greater degree of distortion than sample image A2. If A1 and A2 are combined into an image pair, the automatically generated quality label indicates that A2 is better than A1; if A and A1 are combined into a pair, the label indicates that A is better than A1.
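As an illustration of the pair-generation scheme above, the following sketch enumerates one image's distorted versions and labels each pair by degree of distortion. The identifiers and the helper name `build_pairs` are hypothetical, not from the patent:

```python
import itertools

def build_pairs(image_id, distortion_levels):
    """Create (better, worse) pairs from one source image and its
    distorted copies. Level 0 is the undistorted original; a lower
    level is assumed to mean better quality."""
    versions = [(image_id, 0)] + [
        (f"{image_id}_d{level}", level) for level in distortion_levels
    ]
    pairs = []
    for (img_a, lvl_a), (img_b, lvl_b) in itertools.combinations(versions, 2):
        if lvl_a == lvl_b:
            continue  # equal distortion: no automatic label, needs manual labeling
        better, worse = (img_a, img_b) if lvl_a < lvl_b else (img_b, img_a)
        pairs.append((better, worse))
    return pairs

pairs = build_pairs("A", [1, 2])
# yields ("A", "A_d1"), ("A", "A_d2"), ("A_d1", "A_d2")
```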
In some embodiments, image pairs may also be formed arbitrarily from the sample images and the distorted sample images; for a pair whose quality label cannot be generated automatically, the label is obtained by manual labeling, so that each image pair has a corresponding quality label.
Specifically, the manual labeling may be performed by displaying the two sample images in the pair and recording the sample image selected by the annotator as the one with better quality, thereby obtaining the quality label of the pair. For example, the two sample images are displayed simultaneously on a display device, and the image clicked by the annotator is taken to be the better-quality one, the other being the worse-quality one.
In some embodiments, before the distortion processing is applied to a sample image, the target region where the target object is located may first be identified in the sample image; the sample image is cropped to the target region, and the cropped image is rotated to adjust the target object to a preset angle.
Taking a face image as an example, an existing face detection model may be used to detect and locate the face region, and the image is cropped to that region. After cropping, the image contains only the face; it may then be rotated to adjust the face to a preset angle. Specifically, the line connecting the two eyes may be obtained, and the image rotated until that line is horizontal.
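The eye-line alignment described above amounts to computing the angle of the line between the two eyes and rotating by its negative. A minimal sketch (the coordinates are illustrative):

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle in degrees of the line from the left eye to the right
    eye; rotating the face crop by the negative of this angle makes
    the eye line horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

angle = eye_alignment_angle((30, 40), (70, 48))
# rotate the cropped face by -angle to level the eye line
```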
Step 220, training a twin network with the training sample set to obtain the quality prediction model, where the twin network includes a first neural network and a second neural network with the same parameters.
After the training sample set is obtained, the twin network may be trained on it to obtain the quality prediction model. Referring to FIG. 3, the structure of the twin network is shown: it includes a first neural network and a second neural network that share weights, i.e., their parameters are identical; each network has one input and one output, and the loss function is calculated from the outputs of the two networks.
When the twin network is trained, one sample image in the image pair may be input into a first neural network, and the other sample image may be input into a second neural network, so as to obtain quality scores corresponding to the two sample images, respectively; calculating a loss function according to the quality fraction and the quality label; determining whether the twin network converges according to the loss function; if the twin network is converged, obtaining the quality prediction model; and if the twin network is not converged, adjusting the parameters of the first neural network or the second neural network until the twin network is converged.
The twin network includes a first neural network and a second neural network with the same parameters, giving two inputs and two outputs. Since an image pair contains two sample images, one sample image can be input into the first neural network and the other into the second, yielding the quality scores corresponding to the two sample images. For example, if the image pair contains sample image A and sample image A1, A may be input into the first neural network and A1 into the second, so that the first network outputs the quality score of A and the second outputs the quality score of A1.
After the quality scores of the two sample images in the pair are obtained, the loss function may be calculated from the quality scores and the quality label of the pair. A higher quality score indicates better image quality, so the sample image with relatively better quality in the pair can be determined from the scores. The loss function adopted in this embodiment may be the hinge loss, used here as a binary classification loss: if the classification is correct, the hinge loss equals 0; if it is wrong, the hinge loss equals 1.
That is, whether the quality difference reflected by the quality scores is consistent with the quality label can be checked to verify whether the twin network predicted correctly. When the quality scores are consistent with the quality label, the corresponding hinge loss equals 0; when the quality difference reflected by the scores contradicts the quality label, the hinge loss equals 1. Accordingly, when the hinge loss equals 0, the twin network is considered to have converged and the quality prediction model is obtained; when the hinge loss equals 1, the network has not converged, and the parameters of the first or second neural network are adjusted until the loss function equals 0, the network converges, and the quality prediction model is obtained.
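The pairwise hinge loss described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's exact formulation: the linear scorer `score` stands in for the shared-weight network branches, and the margin form max(0, margin − (s_better − s_worse)) is one common way to realize a hinge loss that is zero once the better image out-scores the worse one:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)  # shared weights: both branches use the same w

def score(features):
    """One branch of the twin network: a linear scorer standing in
    for the shared-weight neural network."""
    return float(features @ w)

def pairwise_hinge_loss(feat_better, feat_worse, margin=1.0):
    """Pairwise ranking hinge loss: zero once the better image
    out-scores the worse one by at least the margin."""
    return max(0.0, margin - (score(feat_better) - score(feat_worse)))

x_good = np.array([1.0, 0.5, 0.2, 0.1])  # features of the better image
x_bad = np.array([0.2, 0.1, 0.9, 0.8])   # features of the worse image
loss = pairwise_hinge_loss(x_good, x_bad)
```

Training would adjust `w` by gradient descent on this loss over all labeled pairs, which drives the shared scorer to rank the better image of each pair above the worse one.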
Continuing the previous example, sample image A and sample image A1 form an image pair whose quality label indicates that the quality of A is better than that of A1. The pair is input into the twin network, yielding quality score X_A for sample image A and quality score X_A1 for sample image A1, where a higher score indicates better quality. If X_A is less than X_A1, the predicted quality of A is worse than that of A1, which contradicts the quality label, so the parameters of the first or second neural network may be adjusted until the output quality scores are consistent with the quality label.
Thus, the twin network is trained through a plurality of image pairs, and the quality prediction model is obtained when the twin network converges.
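A minimal sketch of this training procedure, with the twin network reduced to a single shared weight so the example stays self-contained. The scorer, the feature values, and the learning rate are illustrative assumptions, not from the patent; what the sketch preserves is the loop structure: score both pair members with shared parameters, take a sub-gradient step whenever the pairwise hinge loss is positive, and stop once every pair is ranked consistently with its label:

```python
import random

def train_pairwise(pairs, epochs=200, lr=0.1, margin=1.0):
    """Toy one-weight 'network': score(x) = w * x, shared by both branches.

    `pairs` is a list of (feature_better, feature_worse) tuples, where the
    first element belongs to the sample labelled as higher quality.
    Performs sub-gradient descent on the pairwise hinge loss.
    """
    w = random.uniform(-1, 1)
    for _ in range(epochs):
        converged = True
        for x_better, x_worse in pairs:
            loss = margin - w * (x_better - x_worse)
            if loss > 0:                          # pair mis-ranked or inside margin
                w += lr * (x_better - x_worse)    # sub-gradient step on w
                converged = False
        if converged:
            break                                 # every pair satisfies the margin
    return w

random.seed(0)
# Features stand in for e.g. sharpness; larger means better quality.
training_pairs = [(1.0, 0.2), (0.8, 0.1), (0.9, 0.4)]
w = train_pairwise(training_pairs)
# After training, the shared scorer ranks every pair as the label says.
assert all(w * a > w * b for a, b in training_pairs)
```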
Because the quality prediction model is obtained by training on sample images subjected to distortion processing, it may not perform identically in a real scene. The quality prediction model may therefore be retrained with image pairs from the real scene, updating its parameters so that the model is fine-tuned.
Step 230, acquiring an image to be identified.
Step 240, inputting the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, wherein the quality prediction model is obtained by training a neural network by using a plurality of image pairs with quality labels, the image pairs comprise two sample images, and the quality labels comprise quality differences of the two sample images.
Step 250, outputting the quality score.
For steps 230 to 240, reference may be made to the description in the foregoing embodiments; details are not repeated here.
After the quality prediction model predicts the quality score of an image to be recognized, a worker can spot-check the accuracy of the quality prediction model.
The worker can store image pairs with obvious prediction errors as error samples in a preset database. For example, suppose the quality prediction model gives the image A to be recognized a quality score X_A, the image B to be recognized a quality score X_B, and the image C to be recognized a quality score X_C, where X_A > X_B > X_C. If the worker clearly perceives that the quality of image C is better than that of image B, image C and image B can be combined into an image pair and stored as an error sample in the preset database. When the error sample is stored in the preset database, the image pair may be labeled with the correct quality label, and the quality label may be stored in the preset database together with the image pair.
The electronic device may obtain the number of error samples in the preset database; when the number is greater than a preset number, it obtains the quality labels of those image pairs and trains the quality prediction model with the image pairs and quality labels. That is, when the quality prediction model is in use, if a quality score it outputs is considered wrong, the mis-predicted images to be recognized can be combined into an image pair, labeled with a quality label, and stored in the preset database as an error sample; retraining the quality prediction model with these error samples allows the model to be iterated in near real time, shortening its iteration cycle. As the amount of data grows, the generalization ability of the quality prediction model is gradually strengthened, and the probability of errors becomes lower and lower.
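The error-sample workflow above can be sketched as follows. The class name, storage structure, and retraining hook are all illustrative assumptions; the patent only specifies that corrected pairs accumulate in a preset database and that retraining is triggered once their count exceeds a preset number:

```python
class ErrorSampleStore:
    """Minimal sketch of the error-sample collection and retrain trigger.

    Mis-ranked image pairs are stored with a corrected quality label;
    once the store grows past `threshold`, a retraining callback fires
    on the accumulated error set and the store is emptied.
    """
    def __init__(self, threshold, retrain_fn):
        self.threshold = threshold
        self.retrain_fn = retrain_fn
        self.samples = []

    def add(self, image_pair, corrected_label):
        self.samples.append((image_pair, corrected_label))
        if len(self.samples) > self.threshold:
            self.retrain_fn(self.samples)   # fine-tune on the error set
            self.samples.clear()

retrained = []
store = ErrorSampleStore(threshold=2, retrain_fn=lambda s: retrained.append(len(s)))
store.add(("img_B", "img_C"), "C_better_than_B")
store.add(("img_D", "img_E"), "E_better_than_D")
assert retrained == []          # threshold not yet exceeded
store.add(("img_F", "img_G"), "G_better_than_F")
assert retrained == [3]         # retraining triggered on the third sample
```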
It should be noted that steps 210 and 220 are generally performed before step 230, but they may also be performed after step 230 and before step 240, as actual needs dictate.
According to the image quality determining method provided by the embodiment of the application, a training sample set is obtained, where the training sample set includes a plurality of image pairs with quality labels; the twin network is trained with the training sample set to obtain the quality prediction model; and a quality score corresponding to the image to be recognized is obtained based on the quality prediction model. Because, when the training sample set is built, the worker annotates the quality difference between the two sample images in the form of an image pair, the real quality of the sample images can be reflected more objectively and accurately.
Referring to fig. 4, an embodiment of the present application provides an image quality determining apparatus 300, where the image quality determining apparatus 300 includes an image obtaining module 310, a quality score obtaining module 320, and an output module 330. The image obtaining module 310 is configured to obtain an image to be identified; the quality score obtaining module 320 is configured to input the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, where the quality prediction model is obtained by training a neural network using a plurality of image pairs with quality labels, each of the image pairs includes two sample images, and each of the quality labels includes a quality difference between the two sample images; the output module 330 is configured to output the quality score.
Further, before the image to be recognized is input into the quality prediction model and the quality score corresponding to the image to be recognized is obtained, the quality score obtaining module 320 is further configured to obtain a training sample set, where the training sample set includes a plurality of image pairs with quality labels; and training a twin network through the training sample set to obtain the quality prediction model, wherein the twin network comprises a first neural network and a second neural network with the same parameters.
Further, the quality score obtaining module 320 is further configured to acquire a plurality of sample images; for each sample image, perform distortion processing on the sample image to obtain a processed sample image; randomly combine the sample image and the processed sample image into an image pair; and generate the quality difference of the two sample images in the image pair according to the degree of distortion processing, to obtain the quality label corresponding to the image pair.
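The pair-generation step can be sketched as follows. The patent does not fix the distortion type at this level, so additive noise whose amplitude grows with the distortion degree stands in for the real distortions (blur, compression, and so on), and the image is reduced to a flat list of pixel values; the function name and label format are illustrative:

```python
import random

def make_labeled_pairs(image, degrees, rng):
    """Build (original, distorted, label) triples from one sample image.

    Distortion is simulated as additive noise whose amplitude grows with
    `degree`. Because the distortion is applied synthetically, the label
    is known by construction: the undistorted image is the better one.
    """
    pairs = []
    for degree in degrees:
        distorted = [p + rng.uniform(-degree, degree) for p in image]
        pairs.append((image, distorted, "first_better"))
    return pairs

rng = random.Random(42)
pairs = make_labeled_pairs([0.1, 0.5, 0.9], degrees=[0.05, 0.2], rng=rng)
assert len(pairs) == 2
assert all(label == "first_better" for _, _, label in pairs)
```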
Further, the quality score obtaining module 320 is further configured to identify a target area where a target object in the sample image is located; and cutting the sample image based on the target area, and rotating the cut sample image to adjust the target object to a preset angle.
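The cropping step can be sketched on a plain 2D array as follows. The bounding-box convention is an assumption, and rotation to the preset angle is omitted because it would require an image library and the patent does not fix the angle here:

```python
def crop_to_target(image, box):
    """Crop a 2D image (list of rows) to the target bounding box.

    `box` is (top, left, bottom, right), exclusive on bottom/right,
    mimicking how a detected target area would be cut out of the
    sample image before the rotation step.
    """
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
# Keep rows 0-1 and columns 1-2, i.e. the top-right 2x2 region.
assert crop_to_target(img, (0, 1, 2, 3)) == [[2, 3], [5, 6]]
```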
Further, the quality score obtaining module 320 is further configured to input one sample image in the image pair into the first neural network and input the other sample image into the second neural network, so as to obtain the quality scores respectively corresponding to the two sample images; calculate a loss function according to the quality scores and the quality label; determine whether the twin network converges according to the loss function; if the twin network converges, obtain the quality prediction model; and if the twin network does not converge, adjust the parameters of the first neural network or the second neural network until the twin network converges.
Further, after the quality score is output, the quality score obtaining module 320 is further configured to obtain the number of error samples in a preset database, where the error samples are image pairs of prediction errors confirmed by the staff according to the quality score; and when the number is larger than the preset number, acquiring a quality label of the image pair, and training the quality prediction model by using the image pair and the quality label.
Further, the image quality determining apparatus 300 further includes an identification module, where the image to be identified is a face image, and the identification module is configured to perform face identification on the image to be identified when the quality score is greater than or equal to a preset score.
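The threshold gate used by the identification module can be sketched as follows; the concrete preset score of 0.6 is an assumption, since the patent only states that face recognition proceeds when the quality score is greater than or equal to a preset score:

```python
def should_run_face_recognition(quality_score, preset_score=0.6):
    """Gate face recognition on the predicted quality score.

    Returns True when the image quality is good enough for the
    downstream face recognition step to run.
    """
    return quality_score >= preset_score

assert should_run_face_recognition(0.8)       # good enough: recognize
assert not should_run_face_recognition(0.3)   # too poor: skip recognition
```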
The image quality determining device provided by the embodiment of the application acquires an image to be recognized; inputs the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, the quality prediction model being obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair comprising two sample images and each quality label comprising the quality difference of the two sample images; and outputs the quality score. Because the quality label comprises the quality difference of two sample images, it better reflects the real quality of the sample images, and the quality of the image to be recognized can be accurately determined by the quality prediction model.
It should be noted that, as will be clear to those skilled in the art, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Referring to fig. 5, an embodiment of the present application provides a block diagram of an electronic device, where the electronic device 400 includes a processor 410, a memory 420, and one or more application programs, where the one or more application programs are stored in the memory 420 and configured to be executed by the one or more processors 410, and the one or more programs are configured to perform the image quality determination method described above.
The electronic device 400 may be a terminal device capable of running an application, such as a smart phone or a tablet computer, or may be a server. The electronic device 400 in the present application may include one or more of the following components: a processor 410, a memory 420, and one or more applications, wherein the one or more applications may be stored in the memory 420 and configured to be executed by the one or more processors 410, the one or more programs configured to perform the methods as described in the aforementioned method embodiments.
Processor 410 may include one or more processing cores. The processor 410 connects the various parts of the electronic device 400 using various interfaces and lines, and performs the various functions of the electronic device 400 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 420 and invoking data stored in the memory 420. Optionally, the processor 410 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 410 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 410 and may instead be implemented by a separate communication chip.
The Memory 420 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 420 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 420 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image display function), instructions for implementing the method embodiments described herein, and the like. The data storage area may store data created by the electronic device 400 during use (such as phone books, audio and video data, and chat log data).
The electronic device provided by the embodiment of the application acquires an image to be recognized; inputs the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, the quality prediction model being obtained by training a neural network with a plurality of image pairs carrying quality labels, each image pair comprising two sample images and each quality label comprising the quality difference of the two sample images; and outputs the quality score. Because the quality label comprises the quality difference of two sample images, it better reflects the real quality of the sample images, and the quality of the image to be recognized can be accurately determined by the quality prediction model.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An image quality determination method, characterized in that the method comprises:
acquiring an image to be identified;
inputting the image to be recognized into a quality prediction model to obtain a quality score corresponding to the image to be recognized, wherein the quality prediction model is obtained by utilizing a plurality of image pairs with quality labels to train a neural network, the image pairs comprise two sample images, and the quality labels comprise quality differences of the two sample images;
and outputting the quality score.
2. The method according to claim 1, wherein before inputting the image to be recognized into a quality prediction model and obtaining a quality score corresponding to the image to be recognized, the method comprises:
obtaining a training sample set, wherein the training sample set comprises a plurality of image pairs with quality labels;
and training a twin network through the training sample set to obtain the quality prediction model, wherein the twin network comprises a first neural network and a second neural network with the same parameters.
3. The method of claim 2, wherein the obtaining a training sample set comprises:
acquiring a plurality of sample images;
for each sample image, carrying out distortion processing on the sample image to obtain a processed sample image, and randomly combining the sample image and the processed sample image into an image pair;
and generating the quality difference of the two sample images in the image pair according to the distortion processing degree to obtain a quality label corresponding to the image pair.
4. The method of claim 3, wherein before the distorting the sample image to obtain the processed sample image, further comprising:
identifying a target area where a target object in the sample image is located;
and cutting the sample image based on the target area, and rotating the cut sample image to adjust the target object to a preset angle.
5. The method of claim 2, wherein training the twin network through the set of training samples to obtain the quality prediction model comprises:
inputting one sample image in the image pair into a first neural network, and inputting the other sample image into a second neural network to obtain quality scores corresponding to the two sample images respectively;
calculating a loss function according to the quality scores and the quality label;
determining whether the twin network converges according to the loss function;
if the twin network is converged, obtaining the quality prediction model;
and if the twin network is not converged, adjusting the parameters of the first neural network or the second neural network until the twin network is converged.
6. The method of claim 1, wherein after outputting the quality score, further comprising:
acquiring the number of error samples in a preset database, wherein the error samples are image pairs of prediction errors confirmed by workers according to the quality scores;
and when the number is larger than the preset number, acquiring a quality label of the image pair, and training the quality prediction model by using the image pair and the quality label.
7. The method according to any one of claims 1 to 6, wherein the image to be recognized is a face image, the method further comprising:
and when the quality score is greater than or equal to a preset score, carrying out face recognition on the image to be recognized.
8. An image quality determination apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be identified;
the quality score acquisition module is used for inputting the image to be identified into a quality prediction model to obtain a quality score corresponding to the image to be identified, wherein the quality prediction model is obtained by utilizing a plurality of image pairs with quality labels to train a neural network, the image pairs comprise two sample images, and the quality labels comprise quality differences of the two sample images;
and the output module is used for outputting the quality score.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory electrically connected with the one or more processors;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
CN202111024179.0A 2021-09-02 2021-09-02 Image quality determination method and device, electronic equipment and storage medium Pending CN113763348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111024179.0A CN113763348A (en) 2021-09-02 2021-09-02 Image quality determination method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113763348A true CN113763348A (en) 2021-12-07

Family

ID=78792528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111024179.0A Pending CN113763348A (en) 2021-09-02 2021-09-02 Image quality determination method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113763348A (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914676A (en) * 2012-12-30 2014-07-09 杭州朗和科技有限公司 Method and apparatus for use in face recognition
CN104318562A (en) * 2014-10-22 2015-01-28 百度在线网络技术(北京)有限公司 Method and device for confirming quality of internet images
CN108876758A (en) * 2017-08-15 2018-11-23 北京旷视科技有限公司 Face identification method, apparatus and system
WO2019041406A1 (en) * 2017-08-28 2019-03-07 平安科技(深圳)有限公司 Indecent picture recognition method, terminal and device, and computer-readable storage medium
CN109583325A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109871780A (en) * 2019-01-28 2019-06-11 中国科学院重庆绿色智能技术研究院 A kind of face quality decision method, system and face identification method, system
CN110033446A (en) * 2019-04-10 2019-07-19 西安电子科技大学 Enhancing image quality evaluating method based on twin network
CN110879985A (en) * 2019-11-18 2020-03-13 西南交通大学 Anti-noise data face recognition model training method
US20200111019A1 (en) * 2018-07-06 2020-04-09 Capital One Services, Llc Failure feedback system for enhancing machine learning accuracy by synthetic data generation
CN111046959A (en) * 2019-12-12 2020-04-21 上海眼控科技股份有限公司 Model training method, device, equipment and storage medium
CN111339810A (en) * 2019-04-25 2020-06-26 南京特沃斯高科技有限公司 Low-resolution large-angle face recognition method based on Gaussian distribution
CN111640099A (en) * 2020-05-29 2020-09-08 北京金山云网络技术有限公司 Method and device for determining image quality, electronic equipment and storage medium
CN111832627A (en) * 2020-06-19 2020-10-27 华中科技大学 Image classification model training method, classification method and system for suppressing label noise
CN111914939A (en) * 2020-08-06 2020-11-10 平安科技(深圳)有限公司 Method, device and equipment for identifying blurred image and computer readable storage medium
CN112529210A (en) * 2020-12-09 2021-03-19 广州云从鼎望科技有限公司 Model training method, device and computer readable storage medium
CN113158913A (en) * 2021-04-25 2021-07-23 安徽科大擎天科技有限公司 Face mask wearing identification method, system and terminal


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743254A (en) * 2021-08-18 2021-12-03 北京格灵深瞳信息技术股份有限公司 Sight estimation method, sight estimation device, electronic equipment and storage medium
CN113743254B (en) * 2021-08-18 2024-04-09 北京格灵深瞳信息技术股份有限公司 Sight estimation method, device, electronic equipment and storage medium
CN114363925A (en) * 2021-12-16 2022-04-15 北京红山信息科技研究院有限公司 Network quality difference automatic identification method
CN114363925B (en) * 2021-12-16 2023-10-24 北京红山信息科技研究院有限公司 Automatic network quality difference identification method
CN114372974A (en) * 2022-01-12 2022-04-19 北京字节跳动网络技术有限公司 Image detection method, device, equipment and storage medium
CN114372974B (en) * 2022-01-12 2024-03-08 抖音视界有限公司 Image detection method, device, equipment and storage medium
CN115661619A (en) * 2022-11-03 2023-01-31 北京安德医智科技有限公司 Network model training method, ultrasonic image quality evaluation method, device and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination