CN108335293B - Image quality determination method and device - Google Patents

Image quality determination method and device

Info

Publication number
CN108335293B
Authority
CN
China
Prior art keywords
sub
image
foreground
images
test standard
Prior art date
Legal status
Active
Application number
CN201810094333.3A
Other languages
Chinese (zh)
Other versions
CN108335293A (en)
Inventor
朱兴杰
刘岩
党莹
Current Assignee
Taikang Insurance Group Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd filed Critical Taikang Insurance Group Co Ltd
Priority to CN201810094333.3A
Publication of CN108335293A
Application granted
Publication of CN108335293B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for determining image quality. For each image to be tested whose quality needs to be evaluated, the image to be tested is first converted into a test standard graph to ensure the accuracy of the evaluation. A plurality of sub-images are then extracted from the test standard graph according to a preset extraction rule, and each sub-image is input into a pre-established neural network model. After the neural network model performs analysis and testing, a test score corresponding to each sub-image is obtained. The obtained test scores are calculated to obtain a target score corresponding to the test standard graph, and the quality of the image to be tested is then determined according to the target score, thereby realizing the evaluation of the quality of the image to be tested. In the image quality determination method provided by the invention, the neural network model is applied to score each sub-image in the test standard graph; the operation speed is high, and the efficiency of evaluating image quality is improved.

Description

Image quality determination method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for determining image quality.
Background
With the rapid development of computer technology and the Internet, images are applied more and more widely. As an important information carrier, the image meets the growing requirements of modern business. However, images suffer distortion of different degrees and types during acquisition, compression, processing, transmission and storage, which directly affects the acquisition of the information in the image. Therefore, establishing an effective image quality evaluation system is of great significance in the field of image processing and application.
In studying the existing image quality evaluation process, the inventors found that image quality is generally evaluated manually; the evaluation process takes a long time and consumes considerable effort, so the efficiency of evaluating image quality is low.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image quality determination method, which can quickly evaluate the quality of an image to be tested and improve the evaluation efficiency of the image quality.
The invention also provides a device for determining the image quality, which is used for ensuring the realization and the application of the method in practice.
A method of determining image quality, comprising:
normalizing the current image to be tested to obtain a test standard graph corresponding to the current image to be tested;
acquiring a sub-image set corresponding to the test standard diagram, wherein the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
inputting each sub-image into a pre-established neural network model for testing, and obtaining a test score of each sub-image;
and calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
In the foregoing method, optionally, the sub-image set includes a first sub-set and a second sub-set;
the extraction process of each sub-image in the first sub-set comprises the following steps:
according to a preset scale coefficient, carrying out scale segmentation on the test standard diagram to obtain a plurality of segmentation subimages of the test standard diagram; the size of each of the segmentation sub-images is the same;
adding each of the segmented sub-images to the first sub-set.
In the foregoing method, optionally, the extracting process of each sub-image in the second sub-set includes:
performing foreground and background segmentation on the test standard image, and selecting N foreground areas in the test standard image according to a preset selection rule; n is a positive integer;
when N foreground regions are selected, performing a first operation on the N foreground regions to obtain N foreground sub-images, and adding each foreground sub-image into the second subset;
the first operation includes:
determining whether a first class foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the first class foreground area is larger than that of the segmentation sub-image;
when the foreground sub-images exist, center clipping is carried out on each first type of foreground area, and foreground sub-images with the same size as the size of the segmentation sub-images are clipped from the first type of foreground areas;
and
determining whether a second type of foreground region exists in the N foreground regions, wherein the size of a circumscribed rectangle of the second type of foreground region is smaller than that of the segmentation subimage;
and if the foreground sub-images exist, intercepting the foreground sub-images which comprise the second type of foreground areas and have the same size as the segmentation sub-images from the test standard image.
In the above method, optionally, when the test standard chart includes M foreground regions, M is smaller than N, and M is a positive integer; the method further comprises the following steps:
after the M foreground regions are selected and the first operation is performed on them, selecting a random factor, and randomly selecting X random sub-images from the test standard graph according to the random factor and adding the random sub-images into the second sub-set, wherein the size of each random sub-image is the same as that of the segmentation sub-image, X is the difference between N and M, and X is a positive integer.
Optionally, in the method, the neural network model is pre-established by using a deep convolutional neural network;
the deep convolutional neural network includes:
1 input layer;
11 convolutional layers, wherein pooling layers are applied to 5 of the 11 convolutional layers;
3 full connection layers;
1 output layer.
Optionally, in the foregoing method, the calculating a test score of each sub-image to obtain a target score corresponding to the test standard chart includes:
assigning a test weight corresponding to each test score;
and calculating an average value over the test scores to which the test weights have been assigned, and taking the average value as the target score corresponding to the test standard chart.
An apparatus for determining image quality, comprising:
the processing unit is used for carrying out normalization processing on the current image to be tested to obtain a test standard chart corresponding to the current image to be tested;
the acquisition unit is used for acquiring a sub-image set corresponding to the test standard diagram, and the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
the testing unit is used for inputting each sub-image into a pre-established neural network model for testing and obtaining a testing score of each sub-image;
and the calculating unit is used for calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
The above apparatus, optionally, the obtaining unit includes:
and the extraction subunit is used for extracting the sub-images from the test standard graph.
A storage medium comprising a stored program, wherein an apparatus in which the storage medium is located is controlled to perform the above-described image quality determination method when the program is executed.
An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors to perform the image quality determination method described above.
Compared with the prior art, the invention has the following advantages:
the invention provides a method for determining image quality. For each image to be tested whose quality needs to be evaluated, the image to be tested is first converted into a test standard graph to ensure the accuracy of the evaluation. A plurality of sub-images are then extracted from the test standard graph according to a preset extraction rule, and each sub-image is input into a pre-established neural network model. After the neural network model performs analysis and testing, a test score corresponding to each sub-image is obtained. The obtained test scores are calculated to obtain a target score corresponding to the test standard graph, and the quality of the image to be tested is then determined according to the target score, thereby realizing the evaluation of the quality of the image to be tested. In the image quality determination method provided by the invention, the neural network model is applied to score each sub-image in the test standard graph; the operation speed is high, and the efficiency of evaluating image quality is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a flowchart of a method for determining image quality according to the present invention;
FIG. 2 is a flowchart of another method of determining image quality according to the present invention;
FIG. 3 is a flowchart of another method of determining image quality according to the present invention;
FIG. 4 is a flowchart of another method of determining image quality according to the present invention;
FIG. 5 is an architecture diagram of a neural network provided by the present invention;
FIG. 6 is a diagram illustrating an implementation architecture of a method for determining image quality according to the present invention;
fig. 7 is a schematic structural diagram of an image quality determining apparatus according to the present invention;
fig. 8 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
An embodiment of the present invention provides a method for determining image quality, which may be applied in a processor of a computer or a terminal, where the processor executes a determination process of the method for determining image quality provided by the present invention, and fig. 1 shows a flowchart of the method for determining image quality provided by the embodiment of the present invention, where the method includes:
s101: normalizing the current image to be tested to obtain a test standard graph corresponding to the current image to be tested;
in the embodiment of the invention, each image to be tested that needs quality evaluation is converted into a preset standard image. For the current image to be tested, normalization processing is performed in the embodiment of the invention to obtain the test standard graph corresponding to the current image to be tested.
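As a minimal sketch of this normalization step (the patent does not fix a concrete implementation), the following Python/OpenCV snippet resizes an arbitrary input image to an assumed M × M standard size; the side length m = 300, the function name and the choice of bilinear interpolation are illustrative assumptions only.

import cv2
import numpy as np

def to_test_standard_graph(image_path: str, m: int = 300) -> np.ndarray:
    # Normalize an image to an assumed M x M "test standard graph".
    # The side length m and bilinear interpolation are illustrative choices;
    # the patent only requires conversion to a preset standard format.
    img = cv2.imread(image_path, cv2.IMREAD_COLOR)
    if img is None:
        raise FileNotFoundError(image_path)
    return cv2.resize(img, (m, m), interpolation=cv2.INTER_LINEAR)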
S102: acquiring a sub-image set corresponding to the test standard diagram, wherein the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
in the embodiment of the invention, after a test standard diagram of an image to be tested is determined, a sub-image set corresponding to the test standard diagram is obtained, wherein the sub-image set comprises a plurality of sub-images, each sub-image is from the test standard diagram, and the sub-images are extracted from the test standard diagram according to a preset extraction rule.
S103: inputting each sub-image into a pre-established neural network model for testing, and obtaining a test score of each sub-image;
in the embodiment of the invention, a plurality of standard graphs with evaluation scores are predetermined, a plurality of sub-images of each standard graph are used as training samples to be trained in a neural network, and a neural network model capable of scoring the images is obtained.
S104: and calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
In the embodiment of the invention, the obtained test score of each sub-image is calculated according to a preset calculation rule, and finally, the target score corresponding to the test standard graph is determined. According to the embodiment of the invention, the image quality of the current image to be tested is determined according to the target score, so that the quality of the image to be tested is evaluated.
In the method for determining image quality provided by the embodiment of the invention, for each image to be tested whose quality needs to be evaluated, the image to be tested is first converted into a test standard graph to ensure the accuracy of the evaluation. A plurality of sub-images are then extracted from the test standard graph according to a preset extraction rule, and each sub-image is input into the pre-established neural network model. After the neural network model performs analysis and testing, a test score corresponding to each sub-image is obtained. The obtained test scores are calculated to obtain a target score corresponding to the test standard graph, and the quality of the image to be tested can then be determined according to the target score, thereby realizing the evaluation of the quality of the image to be tested. In the image quality determination method provided by the invention, the neural network model is applied to score each sub-image in the test standard graph; the operation speed is high, and the efficiency of evaluating image quality is improved.
In the method for determining image quality provided by the embodiment of the present invention, the sub-image set corresponding to the test standard chart includes a first sub-set and a second sub-set;
the first sub-set comprises a plurality of sub-images, and the second sub-set also comprises a plurality of sub-images; each sub-image in the first sub-set and each sub-image in the second sub-set are from the test standard graph, and the extraction mode of each sub-image in the first sub-set is different from that of each sub-image in the second sub-set.
Referring to fig. 2, a process of extracting each sub-image in the first sub-set in the sub-image set in the method for determining image quality provided by the embodiment of the present invention is shown, which specifically includes:
s201, according to a preset proportionality coefficient, carrying out proportion segmentation on the test standard diagram to obtain a plurality of segmentation sub-images of the test standard diagram; the size of each of the segmentation sub-images is the same;
s202 adds each of the segmented sub-images to the first sub-set.
In the embodiment of the invention, a scale coefficient for proportionally dividing the test standard graph is preset, and the scale coefficient is adjusted according to the specification of the standard graph set in the embodiment of the invention. When each image to be tested is converted into a standard square image, each side of the square image can be divided equally according to the set scale coefficient, and the image is then segmented to obtain a plurality of segmentation sub-images. When each image to be tested is converted into a standard rectangular image, the length of the rectangular image can be divided equally in one proportion according to the set scale coefficient and the width in another proportion, so as to obtain a plurality of segmentation sub-images of the rectangular image.
For example, in the embodiment of the present invention, the standard graph may be an M × M image, that is, a square image, so the test standard graph corresponding to the image to be tested may be a square image in M × M format. When the test standard graph is divided proportionally, each side of the square may be divided equally and the image segmented accordingly; for example, each segmentation sub-image may have a size of P × P with P equal to M/3, that is, 9 sub-images of the same size are divided from the test standard graph. In the embodiment of the present invention, the sub-images may also be divided in other proportions, such as M/4, M/5, and the like.
In the embodiment of the present invention, the standard graph may also be an M × N image, and the standard graph may be a standard rectangular image. Therefore, in the embodiment of the present invention, the test standard map corresponding to the image to be tested may be an M × N rectangular image, and when the test standard map is divided proportionally, the length and the width of the test standard map may be equally divided, for example, into a plurality of X × Y sub-images, where X may be M/3 and Y may be N/2, so that 6 sub-images with the same size may be divided from the test standard map.
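The proportional segmentation described above can be sketched as follows for the square M × M case with a scale coefficient of 1/3, so that P = M/3 and 9 equally sized segmentation sub-images are produced; the helper name and the default number of parts are assumptions for illustration.

import numpy as np

def split_standard_graph(std_img: np.ndarray, parts: int = 3) -> list:
    # Split an M x M test standard graph into parts * parts segmentation
    # sub-images of size P x P, where P = M // parts (9 sub-images for parts = 3).
    m = std_img.shape[0]
    p = m // parts
    subs = []
    for row in range(parts):
        for col in range(parts):
            subs.append(std_img[row * p:(row + 1) * p, col * p:(col + 1) * p].copy())
    return subs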
On the basis of fig. 2, referring to fig. 3, a process of extracting each sub-image in the second sub-set in the sub-image set in the method for determining image quality provided by the embodiment of the present invention is shown, which specifically includes:
s301, performing foreground and background segmentation on the test standard image, and selecting N foreground areas in the test standard image according to a preset selection rule; n is a positive integer;
s302, when N foreground regions are selected, executing a first operation on the N foreground regions to obtain N foreground sub-images, and adding each foreground sub-image into the second subset;
in the embodiment of the invention, a second subset is also extracted from the test standard graph; the test standard graph of the image to be tested is subjected to foreground-background segmentation, where a foreground region of the test standard graph is a region with salient features in the test standard graph.
In the embodiment of the present invention, the preset selection rule may be to select foreground regions in descending order of the size of their circumscribed rectangles. For example, suppose the number of regions to select is set to 2 and, after the test standard graph is subjected to foreground-background segmentation, 5 foreground regions exist in the test standard graph; the circumscribed rectangles of the 5 foreground regions are determined, and the 2 regions with the larger circumscribed rectangles are selected as the foreground regions used in the test process.
In the embodiment of the invention, the set number is N, where N is a positive integer, and N foreground regions are to be selected. When the number of foreground regions existing in the test standard graph is larger than N, the first N foreground regions with the larger circumscribed rectangles are selected. When the number of foreground regions existing in the test standard graph is smaller than N, in the embodiment of the present invention, after each foreground region included in the test standard graph is taken out, a certain number of random sub-images are randomly selected from the test standard graph in a random-selection manner, so as to reach the set number N.
The first operation includes:
determining whether a first class foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the first class foreground area is larger than that of the segmentation sub-image;
when the foreground sub-images exist, center clipping is carried out on each first type of foreground area, and foreground sub-images with the same size as the size of the segmentation sub-images are clipped from the first type of foreground areas;
and
determining whether a second type of foreground region exists in the N foreground regions, wherein the size of a circumscribed rectangle of the second type of foreground region is smaller than that of the segmentation subimage;
and if the foreground sub-images exist, intercepting the foreground sub-images which comprise the second type of foreground areas and have the same size as the segmentation sub-images from the test standard image.
In the embodiment of the present invention, since the sizes of the selected foreground regions differ, a first operation is performed on each foreground region in order to unify the test standard. When the first operation is performed, it is determined, based on the size of the segmentation sub-images in the first subset, whether any of the N foreground regions has a circumscribed rectangle larger than the segmentation sub-image. When such a region exists, it is treated as a first class foreground region, center clipping is performed on it, and a foreground sub-image with the same size as the segmentation sub-image is clipped from the center position of the first class foreground region.
In addition, it is also determined whether any of the N foreground regions has a circumscribed rectangle smaller than the segmentation sub-image. If so, that region is treated as a second class foreground region, and a foreground sub-image that contains the second class foreground region and has the same size as the segmentation sub-image is intercepted from the test standard graph.
According to the embodiment of the invention, the extraction rule for foreground sub-images is therefore: foreground regions whose circumscribed rectangles are larger than the segmentation sub-image are center-clipped, and foreground regions whose circumscribed rectangles are smaller than the segmentation sub-image are covered by the intercepted sub-image. When the size of the circumscribed rectangle of a foreground region is the same as that of the segmentation sub-image, the foreground region is directly added to the second subset.
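The patent does not name a particular foreground-background segmentation algorithm, so the sketch below uses Otsu thresholding and contour bounding rectangles purely as assumed stand-ins; it then applies the first operation by cutting a P × P window centered on each selected region, which center-clips circumscribed rectangles larger than the segmentation sub-image and covers those that are smaller. The function name and thresholding choice are assumptions.

import cv2
import numpy as np

def extract_foreground_subimages(std_img: np.ndarray, p: int, n: int) -> list:
    # Select up to n foreground regions, largest circumscribed rectangle first,
    # and return P x P foreground sub-images according to the first operation.
    # Otsu thresholding is an illustrative stand-in for the unspecified
    # foreground-background segmentation.
    gray = cv2.cvtColor(std_img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = sorted((cv2.boundingRect(c) for c in contours),
                   key=lambda r: r[2] * r[3], reverse=True)[:n]
    h, w = std_img.shape[:2]
    subs = []
    for x, y, rw, rh in rects:
        cx, cy = x + rw // 2, y + rh // 2      # center of the circumscribed rectangle
        x0 = min(max(cx - p // 2, 0), w - p)   # clamp the P x P window to the image
        y0 = min(max(cy - p // 2, 0), h - p)
        subs.append(std_img[y0:y0 + p, x0:x0 + p].copy())
    return subs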
In the embodiment of the invention, when the test standard graph comprises M foreground areas, M is less than N and is a positive integer; the method further comprises the following steps:
after the M foreground regions are selected and the first operation is performed on them, selecting a random factor, and randomly selecting X random sub-images from the test standard graph according to the random factor and adding the random sub-images into the second sub-set, wherein the size of each random sub-image is the same as that of the segmentation sub-image, X is the difference between N and M, and X is a positive integer.
In the embodiment of the invention, when the number of foreground regions in the test standard graph is smaller than a set value N, M foreground regions in the test standard graph are extracted, a first operation is executed, and then X random sub-images with the same size as the segmentation sub-images are randomly selected in the test standard graph according to a random factor, wherein X is the difference value of N and M.
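A sketch of this padding step, under the assumption that the random factor is simply a seed for uniformly chosen crop positions; the function name is hypothetical.

import random
import numpy as np

def random_subimages(std_img: np.ndarray, p: int, x: int, seed: int = 0) -> list:
    # Randomly cut x sub-images of size P x P from the test standard graph
    # to pad the second subset up to the set number N (x = N - M).
    rng = random.Random(seed)
    h, w = std_img.shape[:2]
    subs = []
    for _ in range(x):
        x0 = rng.randint(0, w - p)
        y0 = rng.randint(0, h - p)
        subs.append(std_img[y0:y0 + p, x0:x0 + p].copy())
    return subs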
In the embodiment of the present invention, preferably, the image to be tested is converted into a test standard graph in an M × M square format, and when the test standard graph is divided, the test standard graph is preferably divided into sub-images P × P, where P is M/3, that is, 9 sub-images with the same size are preferably divided from the test standard graph. In the embodiment of the present invention, when the image to be tested is converted into the M × M test standard graph, preferably, 2 foreground regions are selected from the test standard graph.
In the embodiment of the invention, the adopted neural network model is pre-established by adopting a deep convolutional neural network; a network architecture diagram of the deep convolutional neural network employed by the embodiment of the present invention is shown in Fig. 5.
The deep convolutional neural network includes:
1 input layer;
11 convolutional layers, wherein pooling layers are applied to 5 of the 11 convolutional layers;
3 full connection layers;
1 output layer.
The following describes in detail the training process of the neural network model in the embodiment of the present invention:
in the embodiment of the invention, the set standard graph is an image in an M-by-M format.
Because the input training sample images have different resolutions, in the embodiment of the invention, each training sample image is normalized into an M x M training standard graph, and each training standard graph is scored in a score range [1,100], wherein the higher the quality of the image is, the higher the corresponding score is.
In the embodiment of the present invention, in order to better reflect the local information of the image, each training standard graph is segmented, and the segmentation criterion refers to the above-described extraction process of each sub-image in the first sub-set and each sub-image in the second sub-set.
In the embodiment of the present invention, each training standard graph is preferably segmented to obtain 9 sub-graphs with P × P size, where P is M/3, and 2 foreground regions are extracted from the training standard graph, and a specific processing process refers to a processing process of a sub-image in the second subset.
In the embodiment of the invention, the 11 sub-graphs split from each training standard graph are used as training samples to form the training sample set, and the 11 sub-graphs share the same score.
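Putting the two extraction paths together, each training standard graph contributes 11 sub-graphs that all inherit its single quality label; the sketch below assembles such a sample list, reusing the hypothetical helpers sketched earlier (their names and default values are assumptions).

def build_training_samples(std_img, score, p=100, n_foreground=2, seed=0):
    # 9 segmentation sub-images + 2 foreground (or random) sub-images,
    # all labelled with the training standard graph's score: 11 samples.
    subs = split_standard_graph(std_img, parts=3)
    fg = extract_foreground_subimages(std_img, p, n_foreground)
    if len(fg) < n_foreground:
        fg += random_subimages(std_img, p, n_foreground - len(fg), seed=seed)
    return [(s, score) for s in subs + fg]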
The training sample set is input into a deep convolutional neural network model, and in the embodiment of the invention, the deep convolutional neural network is composed of an input layer, a convolutional layer, a full-connection layer and an output layer.
The input layer of the deep convolutional neural network can be directly used as the input of the training sample, in the embodiment of the invention, the training sample of P × 3 is input to the input layer, wherein 3 identifies the three RGB color channels of the image.
In the deep convolutional neural network designed in the embodiment of the invention, convolutional layers comprise two types, one type is a single convolutional network, and the other type is a convolutional network applying a pooling layer, so that the aim of reducing data processing amount on the basis of keeping effective information and accelerating the speed of network training is fulfilled. Specifically, which 5 of the 11 convolutional layers has the pooling layer applied thereto may be set according to an actual test process.
In the embodiment of the invention, three full-connection layers are designed, the neuron nodes of each layer are respectively transmitted forward through the weights on the connection lines, and the weighted combination is carried out to obtain the input of the neuron nodes of the next layer.
The embodiment of the invention designs an output layer, and the output layer of the neural network is a Softmax classifier.
The convolutional neural network of the embodiment of the present invention applies an activation function, and the specific number of neuron nodes in each layer of the network is given in the embodiment of the present invention. Specifically, W filters of size 3 × 3 are set and a ReLU activation function is used, where the ReLU activation function may be defined as:
ReLU(x) = max(0, x)
in the neural network model of the embodiment of the present invention, after each convolutional layer, a batch normalization operation is performed on that layer of the convolutional network. By standardizing the outputs of the activation function to a given mean and variance, batch normalization restores outputs that would otherwise shrink, which alleviates the gradient vanishing problem to a great extent and accelerates the training of the deep neural network.
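A PyTorch sketch of a network with the stated shape is given below: one input layer, 11 convolutional layers with pooling applied after 5 of them, batch normalization and a ReLU activation after every convolution, 3 fully connected layers, and a softmax output. The channel widths, the positions of the 5 pooling layers and the 100-way score output are assumptions, since the patent leaves these details to the actual test process.

import torch
import torch.nn as nn

class QualityNet(nn.Module):
    # 11 conv layers (pooling after 5 of them), 3 fully connected layers,
    # softmax output; widths and pooling positions are illustrative.
    def __init__(self, num_scores: int = 100, p: int = 100):
        super().__init__()
        widths = [3, 32, 32, 64, 64, 128, 128, 256, 256, 256, 512, 512]
        pool_after = {2, 4, 6, 9, 11}                # assumed pooling positions
        layers = []
        for i in range(1, 12):                       # 11 convolutional layers
            layers += [nn.Conv2d(widths[i - 1], widths[i], kernel_size=3, padding=1),
                       nn.BatchNorm2d(widths[i]),    # batch normalization per conv layer
                       nn.ReLU(inplace=True)]        # ReLU(x) = max(0, x)
            if i in pool_after:
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)
        with torch.no_grad():                        # infer the flattened feature size
            feat = self.features(torch.zeros(1, 3, p, p)).flatten(1).shape[1]
        self.classifier = nn.Sequential(             # 3 fully connected layers
            nn.Linear(feat, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_scores))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.flatten(self.features(x), 1)
        return torch.softmax(self.classifier(x), dim=1)   # softmax output layer

For example, QualityNet()(torch.randn(4, 3, 100, 100)) returns a (4, 100) tensor, one score distribution per P × P sub-image.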
In the embodiment of the invention, each original image is segmented into 11 sub-images containing different structural information. To keep the test consistent and to reduce repeated labeling, the 11 sub-graphs corresponding to each original graph are given the same quality score.
In the embodiment of the invention, a specific loss function is defined to optimize the neural network model. The loss function is defined in terms of the image quality score obtained from the network for each training sample, denoted ŷ_i, and the label score y_i of the input original image.
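The exact loss formula is reproduced only as an image in the original filing; one plausible reading, consistent with the surrounding description, is a mean squared error between the predicted quality score ŷ_i and the labelled score y_i, with the softmax distribution collapsed into a scalar by expectation. Both of these choices are assumptions, not the patent's stated formula.

import torch

def expected_score(prob: torch.Tensor) -> torch.Tensor:
    # Collapse a softmax distribution over scores 1..100 into a scalar
    # predicted quality score by taking its expectation (an assumed choice).
    grades = torch.arange(1, prob.shape[1] + 1, dtype=prob.dtype, device=prob.device)
    return (prob * grades).sum(dim=1)

def quality_loss(prob: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Assumed stand-in for the patent's loss: mean squared error between
    # the predicted score y_hat_i and the labelled score y_i.
    y_hat = expected_score(prob)
    return torch.mean((y_hat - labels.float()) ** 2)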
Referring to fig. 4, in an embodiment of the present invention, the calculating the test score of each sub-image to obtain the target score corresponding to the test standard chart includes:
s401, distributing corresponding test weight to each test score;
S402: calculating an average value over the test scores to which the test weights have been assigned, and taking the average value as the target score corresponding to the test standard graph.
In the embodiment of the invention, corresponding weight is distributed to each sub-image, the average value is obtained by weighting and averaging, and the average value is used as the target score of the test standard graph.
Taking the input of 11 sub-images as an example, the test scores of 11 different image blocks are obtained in the embodiment of the present invention. A weight pool is designed to compute a weighted average for the model, and in the embodiment of the invention the magnitude of each weight value is also obtained through actual testing. Let the scores corresponding to the 11 different sub-images be K_1, ..., K_11, and let the corresponding weights be P_1, ..., P_11. The final prediction score can then be defined as:
P_score = (K_1 × P_1 + ... + K_11 × P_11) / 11
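A direct transcription of the scoring formula above; the concrete weight values P_1 to P_11 are placeholders that, as the text notes, are obtained through actual testing.

def target_score(sub_scores, weights):
    # P_score = (K_1*P_1 + ... + K_11*P_11) / 11 for the 11 sub-image scores.
    assert len(sub_scores) == len(weights)
    return sum(k * w for k, w in zip(sub_scores, weights)) / len(sub_scores)

# example: 11 sub-image scores with uniform weights (an assumed setting)
print(target_score([72.0] * 11, [1.0] * 11))   # -> 72.0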
the image quality determination method provided by the embodiment of the invention can be applied to the field of insurance claim settlement. During the claim-handling process, clients need to upload image information such as identity cards, and this information often picks up noise during shooting and transmission, so that when the digital image undergoes structured recognition, its content may not be recognized in time because of poor image quality. As the amount of information grows, manual checking item by item becomes increasingly difficult to manage; to improve the efficiency and real-time performance of structured recognition, evaluating the image quality of the certificate image to be recognized solves the data recognition problem very efficiently and saves a large amount of labor cost.
For example, when 11 sub-images are selected from the test standard graph (the extraction process refers to the method described above), the test process shown in Fig. 6 can be performed on the image to be tested. When the quality of the digital image is evaluated, the digital image is split and preprocessed, the split and preprocessed sub-images are input into the neural network model, the score of each sub-image is calculated, the final score of the image is then obtained by weighted calculation, and the quality is determined according to the final score.
Corresponding to the method for determining image quality shown in fig. 1, an embodiment of the present invention further provides an apparatus for determining image quality, which is used to implement the method for determining image quality shown in fig. 1 specifically, and the apparatus for determining image quality provided in the embodiment of the present invention may be applied to a processor of a computer or a terminal, and a schematic structural diagram of the apparatus is shown in fig. 7, and specifically includes:
the processing unit 501 is configured to perform normalization processing on a current image to be tested, and obtain a test standard chart corresponding to the current image to be tested;
an obtaining unit 502, configured to obtain a sub-image set corresponding to the test standard chart, where the sub-image set includes a plurality of sub-images extracted from the test standard chart;
the testing unit 503 is configured to input each sub-image into a pre-established neural network model for testing, and obtain a test score of each sub-image;
the calculating unit 504 is configured to calculate a test score of each sub-image to obtain a target score corresponding to the test standard graph, and determine the image quality of the current image to be tested according to the target score.
The image quality determining device provided by the embodiment of the invention converts each image to be tested whose quality needs to be evaluated into a test standard graph to ensure the accuracy of the evaluation, then extracts a plurality of sub-images from the test standard graph according to a preset extraction rule, and inputs each sub-image into the pre-established neural network model. After the neural network model performs analysis and testing, a test score corresponding to each sub-image is obtained; the obtained test scores are calculated to obtain a target score corresponding to the test standard graph, and the quality of the image to be tested is then determined according to the target score, thereby realizing the evaluation of the quality of the image to be tested. In the device, the neural network model is applied to score each sub-image in the test standard graph; the operation speed is high, and the efficiency of evaluating image quality is improved.
In the apparatus for determining image quality according to the present invention, the acquiring unit 502 includes:
and the extraction subunit is used for extracting the sub-images from the test standard graph.
An embodiment of the present invention further provides a storage medium, where the storage medium includes a stored program, and when the program runs, a device where the storage medium is located is controlled to execute the image quality determination method described above, which specifically includes:
normalizing the current image to be tested to obtain a test standard graph corresponding to the current image to be tested;
acquiring a sub-image set corresponding to the test standard diagram, wherein the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
inputting each sub-image into a pre-established neural network model for testing, and obtaining a test score of each sub-image;
and calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
In the foregoing method, optionally, the sub-image set includes a first sub-set and a second sub-set;
the extraction process of each sub-image in the first sub-set comprises the following steps:
according to a preset scale coefficient, carrying out scale segmentation on the test standard diagram to obtain a plurality of segmentation subimages of the test standard diagram; the size of each of the segmentation sub-images is the same;
adding each of the segmented sub-images to the first sub-set.
In the foregoing method, optionally, the extracting process of each sub-image in the second sub-set includes:
performing foreground and background segmentation on the test standard image, and selecting N foreground areas in the test standard image according to a preset selection rule; n is a positive integer;
when N foreground regions are selected, performing a first operation on the N foreground regions to obtain N foreground sub-images, and adding each foreground sub-image into the second subset;
the first operation includes:
determining whether a first class foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the first class foreground area is larger than that of the segmentation sub-image;
when the foreground sub-images exist, center clipping is carried out on each first type of foreground area, and foreground sub-images with the same size as the size of the segmentation sub-images are clipped from the first type of foreground areas;
and
determining whether a second type of foreground region exists in the N foreground regions, wherein the size of a circumscribed rectangle of the second type of foreground region is smaller than that of the segmentation subimage;
and if the foreground sub-images exist, intercepting the foreground sub-images which comprise the second type of foreground areas and have the same size as the segmentation sub-images from the test standard image.
In the above method, optionally, when the test standard chart includes M foreground regions, M is smaller than N, and M is a positive integer; the method further comprises the following steps:
after the M foreground regions are selected and the first operation is performed on them, selecting a random factor, and randomly selecting X random sub-images from the test standard graph according to the random factor and adding the random sub-images into the second sub-set, wherein the size of each random sub-image is the same as that of the segmentation sub-image, X is the difference between N and M, and X is a positive integer.
Optionally, in the method, the neural network model is pre-established by using a deep convolutional neural network;
the deep convolutional neural network includes:
1 input layer;
11 convolutional layers, wherein pooling layers are applied to 5 of the 11 convolutional layers;
3 full connection layers;
1 output layer.
Optionally, in the foregoing method, the calculating a test score of each sub-image to obtain a target score corresponding to the test standard chart includes:
assigning a test weight corresponding to each test score;
and calculating each test score distributed with the test weight to obtain an average value, and taking the average value as a target score corresponding to the test standard chart.
An electronic device is provided in an embodiment of the present invention, and its schematic structural diagram is shown in fig. 8, specifically including a memory 601, and one or more programs 602, where the one or more programs 602 are stored in the memory 601, and configured to be executed by one or more processors 603, and the one or more programs 602 include instructions for:
normalizing the current image to be tested to obtain a test standard graph corresponding to the current image to be tested;
acquiring a sub-image set corresponding to the test standard diagram, wherein the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
inputting each sub-image into a pre-established neural network model for testing, and obtaining a test score of each sub-image;
and calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in a plurality of software and/or hardware when implementing the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The method and the apparatus for determining image quality provided by the present invention are described in detail above, and a specific example is applied in the text to explain the principle and the implementation of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A method for determining image quality, comprising:
normalizing the current image to be tested to obtain a test standard graph corresponding to the current image to be tested;
acquiring a sub-image set corresponding to the test standard diagram, wherein the sub-image set comprises a plurality of sub-images extracted from the test standard diagram;
inputting each sub-image into a pre-established neural network model for testing, and obtaining a test score of each sub-image;
calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score;
wherein the sub-image set comprises a first sub-set and a second sub-set;
the extraction process of each sub-image in the first sub-set comprises the following steps:
dividing the test standard diagram in proportion according to a preset proportion coefficient to obtain a plurality of sub-segmentation images of the test standard diagram, wherein the size of each sub-segmentation image is the same;
adding each of the segmented sub-images to the first subset;
the extraction process of each sub-image in the second sub-set comprises the following steps:
performing foreground and background segmentation on the test standard image, and selecting N foreground areas in the test standard image according to a preset selection rule, wherein N is a positive integer;
when N foreground regions are selected, performing a first operation on the N foreground regions to obtain N foreground sub-images, and adding each foreground sub-image into the second subset;
the first operation includes:
determining whether a first class foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the first class foreground area is larger than that of the segmentation sub-image;
when the foreground sub-images exist, center clipping is carried out on each first type of foreground area, and foreground sub-images with the same size as the size of the segmentation sub-images are clipped from the first type of foreground areas;
determining whether a second type of foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the second type of foreground area is smaller than that of the segmentation subimage;
and if the foreground sub-images exist, intercepting the foreground sub-images which comprise the second type of foreground areas and have the same size as the segmentation sub-images from the test standard image.
2. The method according to claim 1, wherein when M foreground regions are included in the test standard map, M is smaller than N, and M is a positive integer; the method further comprises the following steps:
after the M foreground regions are selected and the first operation is performed on them, selecting a random factor, and randomly selecting X random sub-images from the test standard graph according to the random factor and adding the random sub-images into the second sub-set, wherein the size of each random sub-image is the same as that of the segmentation sub-image, X is the difference between N and M, and X is a positive integer.
3. The method of claim 1, wherein the neural network model is pre-built using a deep convolutional neural network;
the deep convolutional neural network includes:
1 input layer;
11 convolutional layers, wherein pooling layers are applied to 5 of the 11 convolutional layers;
3 full connection layers;
1 output layer.
4. The method of claim 1, wherein the calculating the test score of each sub-image to obtain the target score corresponding to the test standard graph comprises:
assigning a test weight corresponding to each test score;
and calculating each test score distributed with the test weight to obtain an average value, and taking the average value as a target score corresponding to the test standard chart.
5. An apparatus for determining image quality, comprising:
the processing unit is used for carrying out normalization processing on the current image to be tested to obtain a test standard chart corresponding to the current image to be tested;
an obtaining unit, configured to obtain a sub-image set corresponding to the test standard chart, where the sub-image set includes a plurality of sub-images extracted from the test standard chart, and the sub-image set includes a first sub-set and a second sub-set, and an extraction process of each sub-image in the first sub-set includes: dividing the test standard diagram in proportion according to a preset proportion coefficient to obtain a plurality of sub-segmentation images of the test standard diagram, wherein the size of each sub-segmentation image is the same; adding each of the segmented sub-images to the first subset; the extraction process of each sub-image in the second sub-set comprises the following steps: performing foreground and background segmentation on the test standard image, and selecting N foreground areas in the test standard image according to a preset selection rule, wherein N is a positive integer; when N foreground regions are selected, performing a first operation on the N foreground regions to obtain N foreground sub-images, and adding each foreground sub-image into the second subset; the first operation includes: determining whether a first class foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the first class foreground area is larger than that of the segmentation sub-image; when the foreground sub-images exist, center clipping is carried out on each first type of foreground area, and foreground sub-images with the same size as the size of the segmentation sub-images are clipped from the first type of foreground areas; determining whether a second type of foreground area exists in the N foreground areas, wherein the size of a circumscribed rectangle of the second type of foreground area is smaller than that of the segmentation subimage; if the foreground sub-images exist, the foreground sub-images which contain the second foreground areas and have the same size as the segmentation sub-images are intercepted from the test standard image;
the testing unit is used for inputting each sub-image into a pre-established neural network model for testing and obtaining a testing score of each sub-image;
and the calculating unit is used for calculating the test score of each sub-image to obtain a target score corresponding to the test standard graph, and determining the image quality of the current image to be tested according to the target score.
6. The apparatus of claim 5, wherein the obtaining unit comprises:
and the extraction subunit is used for extracting the sub-images from the test standard graph.
7. A storage medium comprising a stored program, wherein a device on which the storage medium is located is controlled to execute the image quality determination method according to any one of claims 1 to 4 when the program is executed.
8. An electronic device comprising a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the method for determining image quality according to any one of claims 1-4.
CN201810094333.3A 2018-01-31 2018-01-31 Image quality determination method and device Active CN108335293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810094333.3A CN108335293B (en) 2018-01-31 2018-01-31 Image quality determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810094333.3A CN108335293B (en) 2018-01-31 2018-01-31 Image quality determination method and device

Publications (2)

Publication Number Publication Date
CN108335293A CN108335293A (en) 2018-07-27
CN108335293B true CN108335293B (en) 2020-11-03

Family

ID=62926861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810094333.3A Active CN108335293B (en) 2018-01-31 2018-01-31 Image quality determination method and device

Country Status (1)

Country Link
CN (1) CN108335293B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522950B (en) * 2018-11-09 2022-04-22 网易传媒科技(北京)有限公司 Image scoring model training method and device and image scoring method and device
CN111488776B (en) * 2019-01-25 2023-08-08 北京地平线机器人技术研发有限公司 Object detection method, object detection device and electronic equipment
CN110175530A (en) * 2019-04-30 2019-08-27 上海云从企业发展有限公司 A kind of image methods of marking and system based on face
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN111932521B (en) * 2020-08-13 2023-01-03 Oppo(重庆)智能科技有限公司 Image quality testing method and device, server and computer readable storage medium
CN115830028B (en) * 2023-02-20 2023-05-23 阿里巴巴达摩院(杭州)科技有限公司 Image evaluation method, device, system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network
CN105160678A (en) * 2015-09-02 2015-12-16 山东大学 Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN107123123A (en) * 2017-05-02 2017-09-01 电子科技大学 Image segmentation quality evaluating method based on convolutional neural networks
CN107633513A (en) * 2017-09-18 2018-01-26 天津大学 The measure of 3D rendering quality based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network
CN105160678A (en) * 2015-09-02 2015-12-16 山东大学 Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN107123123A (en) * 2017-05-02 2017-09-01 电子科技大学 Image segmentation quality evaluating method based on convolutional neural networks
CN107633513A (en) * 2017-09-18 2018-01-26 天津大学 The measure of 3D rendering quality based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Exploiting; Cenhui Pan, et al.; 2016 Visual Communications and Image Processing; IEEE; 2017-01-05; full text *
No-Reference Quality Assessment for Multiply Distorted Images based on Deep Learning; Qingbing Sang, et al.; 2017 International Smart Cities Conference; IEEE; 2017-11-02; full text *
Image quality evaluation method based on deep learning models; Li Lin, Yu Shengsheng; Journal of Huazhong University of Science and Technology (Natural Science Edition); Huazhong University of Science and Technology; 2016-12-31; Vol. 44, No. 12; pp. 70-75 *
Li Lin, Yu Shengsheng. Image quality evaluation method based on deep learning models. Journal of Huazhong University of Science and Technology (Natural Science Edition). Huazhong University of Science and Technology, 2016, Vol. 44 (No. 12), 70-75. *

Also Published As

Publication number Publication date
CN108335293A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108335293B (en) Image quality determination method and device
TWI773189B (en) Method of detecting object based on artificial intelligence, device, equipment and computer-readable storage medium
EP3876201B1 (en) Object detection and candidate filtering system
CN110264274B (en) Guest group dividing method, model generating method, device, equipment and storage medium
CN108717547B (en) Sample data generation method and device and model training method and device
CN110929836B (en) Neural network training and image processing method and device, electronic equipment and medium
CN114282581A (en) Training sample obtaining method and device based on data enhancement and electronic equipment
CN113469997A (en) Method, device, equipment and medium for detecting plane glass
CN110969641A (en) Image processing method and device
CN111461302A (en) Data processing method, device and storage medium based on convolutional neural network
CN112967191B (en) Image processing method, device, electronic equipment and storage medium
CN111241993B (en) Seat number determining method and device, electronic equipment and storage medium
CN111062914B (en) Method, apparatus, electronic device and computer readable medium for acquiring facial image
CN112348809A (en) No-reference screen content image quality evaluation method based on multitask deep learning
CN112614108A (en) Method and device for detecting nodules in thyroid ultrasound image based on deep learning
CN115546554A (en) Sensitive image identification method, device, equipment and computer readable storage medium
CN115393756A (en) Visual image-based watermark identification method, device, equipment and medium
CN115019057A (en) Image feature extraction model determining method and device and image identification method and device
CN114821173A (en) Image classification method, device, equipment and storage medium
CN115004245A (en) Target detection method, target detection device, electronic equipment and computer storage medium
CN111612783A (en) Data quality evaluation method and system
CN113628175B (en) Image quality score distribution prediction method, system, terminal and medium
CN113239943B (en) Three-dimensional component extraction and combination method and device based on component semantic graph
CN112990349B (en) Writing quality evaluation method and device and electronic equipment
WO2024057578A1 (en) Extraction system, extraction method, and extraction program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant