CN113409288B - Image definition detection method, device, equipment and storage medium - Google Patents

Image definition detection method, device, equipment and storage medium

Info

Publication number
CN113409288B
CN113409288B (application CN202110725212.6A)
Authority
CN
China
Prior art keywords
texture
image
value
model
richness
Prior art date
Legal status
Active
Application number
CN202110725212.6A
Other languages
Chinese (zh)
Other versions
CN113409288A (en)
Inventor
孙高峰
李甫
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110725212.6A priority Critical patent/CN113409288B/en
Publication of CN113409288A publication Critical patent/CN113409288A/en
Application granted granted Critical
Publication of CN113409288B publication Critical patent/CN113409288B/en

Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
          • G06F 18/00 Pattern recognition › 18/20 Analysing › 18/24 Classification techniques
            • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
            • G06F 18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
        • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 Computing arrangements based on biological models › 3/02 Neural networks › 3/04 Architecture, e.g. interconnection topology
            • G06N 3/045 Combinations of networks
            • G06N 3/047 Probabilistic or stochastic networks
            • G06N 3/048 Activation functions
            • G06N 3/08 Learning methods
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00 Image analysis
            • G06T 7/0002 Inspection of images, e.g. flaw detection
            • G06T 7/40 Analysis of texture
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10004 Still image; Photographic image
            • G06T 2207/10016 Video; Image sequence
            • G06T 2207/20076 Probabilistic image processing
            • G06T 2207/20081 Training; Learning
            • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30168 Image quality inspection
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
      • Y02T 10/40 Engine management systems (under Y02T 10/10 Internal combustion engine [ICE] based vehicles)

Abstract

The disclosure provides an image definition detection method, apparatus, device, and storage medium, relating to the field of artificial intelligence, in particular to computer vision and deep learning technology, and applicable in particular to video analysis scenarios. The specific implementation scheme is as follows: determining a texture richness value of a target image, and determining the texture category to which the target image belongs according to that value; selecting a target definition detection model from candidate definition detection models according to the texture category to which the target image belongs; and detecting the definition of the target image based on the target definition detection model. Embodiments of the disclosure can improve the accuracy of image definition detection.

Description

Image definition detection method, device, equipment and storage medium
Technical Field
The disclosure relates to the field of artificial intelligence, in particular to computer vision and deep learning technology, which can be used in particular in video analysis scenarios, and specifically relates to an image definition detection method, device, equipment, and storage medium.
Background
With the continuous enrichment of image and video content on the internet, picking out high-quality content from massive data has become important. The sharpness, color, and brightness of an image are all important factors affecting its quality; among these, sharpness is the most important indicator. Sharpness describes how crisply each detail texture and its boundaries are rendered in the image.
Disclosure of Invention
The present disclosure provides a detection method, apparatus, device, and storage medium for image sharpness.
According to an aspect of the present disclosure, there is provided a method for detecting image sharpness, including:
determining a texture richness value of a target image, and determining a texture category to which the target image belongs according to the texture richness value of the target image;
selecting a target definition detection model from the candidate definition detection models according to the texture category to which the target image belongs;
and detecting the definition of the target image based on a target definition detection model.
According to still another aspect of the present disclosure, there is provided an image sharpness detection apparatus including:
the texture determining module is used for determining the texture richness value of the target image and determining the texture category of the target image according to the texture richness value of the target image;
the model selection module is used for selecting a target definition detection model from the candidate definition detection models according to the texture category to which the target image belongs;
and the definition detection module is used for detecting the definition of the target image based on the target definition detection model.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of detecting image sharpness provided by any of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute a method for detecting image sharpness provided by any of the embodiments of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method of detecting image sharpness provided by any of the embodiments of the present disclosure.
According to the technology of the present disclosure, the accuracy of image sharpness detection can be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of a method for detecting image sharpness provided according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an image sharpness detection provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another method for detecting image sharpness provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a training process for a texture detection model provided in accordance with an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of yet another method for detecting image sharpness provided in accordance with an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of an apparatus for detecting image sharpness according to an embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing a method of detecting image sharpness in accordance with an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes in detail the solution provided by the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an image sharpness detection method according to an embodiment of the present disclosure. The embodiment is applicable to quality detection of an image. The method can be performed by an image sharpness detection apparatus, which can be implemented in hardware and/or software and configured in an electronic device. Referring to fig. 1, the method specifically includes the following:
s110, determining a texture richness value of a target image, and determining a texture category to which the target image belongs according to the texture richness value of the target image;
s120, selecting a target definition detection model from candidate definition detection models according to the texture category to which the target image belongs;
s130, detecting the definition of the target image based on a target definition detection model.
In an embodiment of the present disclosure, a texture richness value is used to characterize the richness of textures in an image. A texture category refers to a texture richness category; N different texture categories can be preset. Multiple texture richness ranges can be divided according to the texture richness value, and each range is associated with its own texture category.
For each texture class, a candidate sharpness detection model may be constructed, and an association between the candidate sharpness detection model and the texture class may be established. Specifically, for any texture class, an image belonging to the texture class may be used as a sample to perform model training, so as to obtain an associated candidate sharpness detection model.
Fig. 2 is a schematic diagram of image sharpness detection provided according to an embodiment of the present disclosure. Referring to fig. 2, after a target image is acquired, texture richness detection can be performed on it to obtain its texture richness value. That value is compared with each candidate texture richness range to determine the target range to which the image belongs; the texture category associated with that range is taken as the texture category of the target image, and the candidate sharpness detection model associated with that category is taken as the target sharpness detection model.
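To make the selection flow concrete, here is a minimal Python sketch. The range boundaries, class names, and model handles are illustrative assumptions; the disclosure does not specify particular values.

```python
# Minimal sketch of range-based texture classification and model selection.
# All boundaries, class names, and model handles below are hypothetical.
RICHNESS_RANGES = [
    (800.0, "low_texture"),        # richness value < 800  -> low-texture class
    (1200.0, "medium_texture"),    # 800 <= value < 1200   -> medium-texture class
    (float("inf"), "high_texture"),
]

CANDIDATE_MODELS = {               # one candidate sharpness detector per class
    "low_texture": "detector_low",
    "medium_texture": "detector_mid",
    "high_texture": "detector_high",
}

def select_target_model(richness_value: float) -> str:
    """Map a texture richness value to its class, then to the associated model."""
    for upper_bound, texture_class in RICHNESS_RANGES:
        if richness_value < upper_bound:
            return CANDIDATE_MODELS[texture_class]
    raise ValueError("richness value outside configured ranges")
```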
Referring to fig. 2, after the target image is acquired, feature extraction may also be performed on the target image to obtain feature data of the target image; and taking the characteristic data of the target image as the input of a target definition detection model, and obtaining a definition detection result of the target image according to the output of the target definition detection model.
As the shooting scene and shooting conditions change, the sharpness indexes of images follow different rules, making sharpness indexes incomparable across different scenes and conditions. For example, when the photographed object is a page in a book, the textures and boundaries of the text are very pronounced and the image contains relatively many high-frequency components; but when a clear, cloudless blue sky is photographed, the image has no obvious textures or boundaries and the high-frequency components are few. Therefore, one cannot simply judge whether an image is sharp from the amount of high-frequency components or the magnitude of differences between adjacent pixel points. Moreover, even when objects of the same type are photographed, the physical characteristics of the images follow different rules and a unified sharpness index is hard to define. For example, for pages with identical text content, some layouts are denser and some sparser; if gradient or energy were used as the sharpness index, the values would differ widely and the sharpness judgment would be inaccurate.
In contrast, according to the embodiment of the disclosure, a different candidate sharpness detection model is selected for each texture category, so that the sharpness indexes of images within the same texture category follow the same rule and remain comparable. This unifies the behavior of the sharpness indexes and improves the sharpness detection accuracy for images.
According to the technical scheme, the candidate definition detection model associated with the texture category to which the target image belongs is used as the target definition detection model, so that the target definition detection model can reflect the physical characteristic rule of the target image, and the definition detection accuracy of the image is improved.
Fig. 3 is a schematic diagram of another method for detecting image sharpness according to an embodiment of the present disclosure. This embodiment is an alternative to the embodiments described above. Referring to fig. 3, the method for detecting image sharpness provided in this embodiment includes:
s210, detecting the texture richness of the target image based on a texture detection model to obtain a texture richness value of the target image;
s220, selecting a target definition detection model from the candidate definition detection models according to the texture category to which the target image belongs;
s230, detecting the definition of the target image based on a target definition detection model;
the texture detection model is obtained by taking a first image and a second image in a sample image pair as inputs, and taking a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value to perform model training.
In the embodiment of the disclosure, the texture detection model is used for determining the texture richness value of the image, and the image can be classified according to the texture richness value of the image. The first texture comparison probability refers to a probability that the first image is richer than the second image texture, and the second texture comparison probability refers to a probability that the second image is richer than the first image texture.
The first image and the second image in a sample image pair are respectively used as inputs, and the first texture comparison probability or the second texture comparison probability is used as the label for model training, yielding the texture detection model. That is, the texture detection model is trained on comparison information between the two images of a pair. Compared with directly scoring the texture richness of each image, this improves the robustness of the texture detection model and thus the accuracy of the texture richness value of the target image.
In an optional embodiment, before the texture richness detection on the target image based on the texture detection model, the method further includes: taking the first image as the input of a first model in the twin network to obtain a first texture richness value output by the first model, and taking the second image as the input of a second model in the twin network to obtain a second texture richness value output by the second model; determining a model output probability according to a richness difference value between the first texture richness value and the second texture richness value; model parameters in the first model and the second model are updated according to a probability difference between the label value and the model output probability, and the trained first model or second model is used as the texture detection model.
Fig. 4 is a schematic diagram of a training process of a texture detection model provided according to an embodiment of the present disclosure. Referring to fig. 4, the texture detection model is constructed using a twin (Siamese) network comprising a first model and a second model that have the same network structure and may share model parameters; for example, the same neural network architecture may be employed.
Specifically, a first image is used as input of a first model to obtain a first texture richness value output by the first model, and a second image is used as input of a second model to obtain a second texture richness value output by the second model; and determining a model output probability according to a richness difference between the first texture richness value and the second texture richness value.
Wherein the model output probability is determined in a manner related to the tag value. In the case where the tag value is the first texture comparison probability, the model output probability may be determined by p=sigmoid (S1-S2); in the case where the tag value is the second texture comparison probability, the model output probability may be determined by p=sigmoid (S2-S1); wherein P is the model output probability, sigmoid is the neural network activation function, and S1 and S2 are the first texture richness value and the second texture richness value, respectively.
The label value is compared with the model output probability to obtain the probability difference between them; a loss function is established from this difference, and the model parameters of the first model and the second model are updated, realizing an end-to-end training process. After training is completed, either the first model or the second model can be used as the texture detection model. Because the twin-network formulation predicts the comparison result between two images rather than directly predicting the texture richness value of a single image, the robustness of the texture detection model is improved.
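As a hedged illustration of this training step, the following PyTorch sketch assumes a stand-in backbone, input size, and optimizer (none of which are fixed by the disclosure); only the P = sigmoid(S1 − S2) formulation and the probability-difference loss come from the text above.

```python
import torch
import torch.nn as nn

# Shared-weight scoring model: the "first model" and "second model" of the
# twin network are the same module applied twice. Architecture is a stand-in.
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 128),   # assumes 64x64 grayscale inputs (hypothetical)
    nn.ReLU(),
    nn.Linear(128, 1),         # scalar texture richness value
)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
bce = nn.BCELoss()             # loss built from the probability difference

def train_step(img_a: torch.Tensor, img_b: torch.Tensor,
               label_prob: torch.Tensor) -> float:
    """One update; label_prob has shape (N, 1) and holds the first texture
    comparison probability P(A richer than B) for each sample pair."""
    s1 = backbone(img_a)               # first texture richness value S1
    s2 = backbone(img_b)               # second texture richness value S2
    p = torch.sigmoid(s1 - s2)         # model output probability P = sigmoid(S1 - S2)
    loss = bce(p, label_prob)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Since the two branches share weights, the trained backbone alone serves as the texture detection model, as described above.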
According to the technical scheme, the texture detection model is built through the modeling mode of the twin network, so that the robustness of the texture detection model can be improved, and the definition detection accuracy of the image is further improved.
The sample construction of the texture detection model is described below.
In an alternative embodiment, before the model training using the texture comparison probability between the first image and the second image as a label, the method further includes: determining a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image; updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability; and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability; and respectively updating the first texture comparison probability and the second texture comparison probability according to the updated texture richness value of the first image and the updated texture richness value of the second image.
In an embodiment of the disclosure, the first image and the second image of a sample image pair are shown to an annotator, who labels the texture richness comparison result between them. Judging which of two images has richer texture is easier than directly assigning a texture score to a single image, so labels from different annotators on the same sample pair are more consistent. Each image can be given the same initial texture richness value, for example 1000, which is then continuously refreshed by iterative solution according to the comparison results until the texture richness values of the first image and the second image become stable. Compared with manually labeling texture richness values, this improves the accuracy of the texture richness values in the sample data set.
Specifically, when the texture richness comparison result is that the texture of the first image is richer, the texture score of the first image may be 1, and the texture score of the second image may be 0; and under the condition that the texture richness comparison result is that the second image is richer, the texture score of the first image is 0, and the texture score of the second image is 1.
In an alternative embodiment, the texture richness value of the first image and the texture richness value of the second image are updated by the following formula:
R′_A = R_A + K × (S_A − P_B>A);
R′_B = R_B + K × (S_B − P_A>B);
wherein R_A and R_B are respectively the texture richness value of the first image and the texture richness value of the second image; S_A and S_B are respectively the texture score of the first image and the texture score of the second image; P_A>B and P_B>A are respectively the first texture comparison probability and the second texture comparison probability; K is a reward factor; and R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image. K may be an empirical value proportional to the initial texture richness value; again taking an initial texture richness value of 1000 as an example, K may be 16.
In an alternative embodiment, the first texture comparison probability and the second texture comparison probability may be determined by the following formula:
P_A>B = 1 / (1 + 10^((R_B − R_A) / M));
P_B>A = 1 / (1 + 10^((R_A − R_B) / M));
wherein P_A>B and P_B>A are respectively the first texture comparison probability and the second texture comparison probability, and M is a difference measurement factor. M may be an empirical value proportional to the initial texture richness value; taking an initial texture richness value of 1000 as an example, M may be 400.
In an alternative embodiment, the first texture comparison probability and the second texture comparison probability are updated by the following formulas, respectively:
P′_A>B = 1 / (1 + 10^((R′_B − R′_A) / M));
P′_B>A = 1 / (1 + 10^((R′_A − R′_B) / M));
wherein R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image; M is a difference measurement factor; and P′_A>B and P′_B>A are respectively the updated first texture comparison probability and the updated second texture comparison probability.
And continuously refreshing the texture richness value of each image in the data set according to the manually marked texture richness comparison result. When the number of labels is large enough, the texture richness value of each image tends to be stable. By the labeling mode, the accuracy of the texture richness data set can be improved.
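For concreteness, the following Python sketch mirrors one refresh step of this labeling scheme, using the formulas above with the example values K = 16 and M = 400; the function names and the worked example are illustrative, not from the disclosure.

```python
M = 400.0   # difference measurement factor (example value from the text)
K = 16.0    # reward factor (example value from the text)

def first_comparison_prob(r_a: float, r_b: float) -> float:
    """P(A>B): probability that the first image is richer, from current ratings."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / M))

def refresh(r_a: float, r_b: float, first_is_richer: bool) -> tuple[float, float]:
    """Apply one annotated comparison and return the updated richness values."""
    p_ab = first_comparison_prob(r_a, r_b)   # P(A>B)
    p_ba = 1.0 - p_ab                        # P(B>A)
    s_a, s_b = (1.0, 0.0) if first_is_richer else (0.0, 1.0)
    # R'_A = R_A + K x (S_A - P(B>A));  R'_B = R_B + K x (S_B - P(A>B))
    return r_a + K * (s_a - p_ba), r_b + K * (s_b - p_ab)

# Example: both images start at the initial value 1000; A is labeled richer.
r_a, r_b = refresh(1000.0, 1000.0, True)     # -> (1008.0, 992.0)
```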
Fig. 5 is a schematic diagram of yet another method for detecting image sharpness provided according to an embodiment of the present disclosure. This embodiment is an alternative to the embodiments described above. Referring to fig. 5, the method for detecting image sharpness provided in this embodiment includes:
s310, determining a texture richness value of a target image, and determining a texture category to which the target image belongs according to the texture richness value of the target image;
s320, selecting a target definition detection model from the candidate definition detection models according to the texture category to which the target image belongs;
s330, determining a gray scale map of the target image;
s340, determining the definition index value of the target image according to the coordinates of the pixel points in the gray level image;
s350, taking the statistical index value as the input of the target definition detection model to obtain a definition detection result of the target image.
In the disclosed embodiments, the sharpness indexes used include, but are not limited to, the following: the Brenner gradient, the gray variance product, and the image energy.
Wherein the Brenner gradient can be determined by the following formula:
S11 = Σ_(x,y) |I(x+2, y) − I(x, y)|²
wherein S11 is the Brenner gradient, I is the image gray value, and x and y are pixel coordinates. The Brenner gradient sums the squared gray-value differences between pixel points two positions apart in the horizontal direction; the sharper the texture, the larger the Brenner gradient.
Wherein the gray variance product can be determined by the following formula:
S22 = Σ_(x,y) |I(x, y) − I(x, y−1)| + |I(x, y) − I(x+1, y)|
the method calculates the pixel difference between adjacent points, sums the absolute values of the pixel differences, and the larger the numerical value of the gray variance product is, the clearer the image is.
Wherein, the image energy can be determined by the following formula:
S33 = Σ_(x,y) |I(x+1, y) − I(x, y)|² × |I(x, y+1) − I(x, y)|²
the method calculates the square of the pixel difference between adjacent points, sums the products of the horizontal and vertical directions, and the larger the value of the image energy is, the clearer the image is.
Different sharpness index values of the target image are computed and used as inputs to the target sharpness detection model to obtain the sharpness detection result of the target image. The sharpness detection result can be one of two classes: sharp or blurred. The candidate sharpness detection models can be binary classification models; for example, they may be built with machine learning methods such as a binary tree model, a K-nearest neighbor model, or a logistic regression model, with model parameters fitted from sharpness labeling data. By introducing several sharpness index values that capture the relative relations between pixel points in the target image and using them as the measure of whether the target image is sharp, the accuracy of the sharpness detection result can be improved.
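As a sketch of one such candidate detector, the snippet below fits a logistic regression on the three index values, reusing the index functions from the previous sketch; the training arrays are random placeholders standing in for the sharpness labeling data mentioned above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(gray: np.ndarray) -> np.ndarray:
    """Stack the three sharpness index values into one feature vector."""
    return np.array([brenner(gray), gray_variance(gray), image_energy(gray)])

# Placeholder training data for one texture class: rows of index values (X)
# with sharp/blurred labels (y); in practice these come from labeled images.
X = np.random.rand(200, 3)
y = np.random.randint(0, 2, size=200)        # 1 = sharp, 0 = blurred

detector = LogisticRegression().fit(X, y)    # one candidate sharpness model

test_image = np.random.rand(64, 64)          # stand-in for a gray-scale target image
print(detector.predict(extract_features(test_image).reshape(1, -1)))
```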
According to the above technical solution, an image classification approach is used to classify the texture richness of images, and different model parameters, i.e., different sharpness index thresholds, are used for different texture categories. Applying these thresholds to features extracted by traditional algorithms to decide whether an image is sharp handles well the problem of judgment standards shifting with shooting conditions and photographed objects. It combines the classification power of neural networks with the interpretability of traditional sharpness algorithms, avoiding the failure modes traditional algorithms exhibit when facing the enormous variety of scenes on the internet.
Fig. 6 is a schematic diagram of an image sharpness detection apparatus according to an embodiment of the present disclosure. The embodiment is applicable to quality detection of an image; the apparatus is configured in an electronic device and implements the image sharpness detection method of any embodiment of the present disclosure. Referring to fig. 6, the image sharpness detection apparatus 400 specifically includes the following:
the texture determining module 401 is configured to determine a texture richness value of a target image, and determine a texture class to which the target image belongs according to the texture richness value of the target image;
the model selection module 402 is configured to select a target sharpness detection model from the candidate sharpness detection models according to a texture class to which the target image belongs;
the sharpness detection module 403 is configured to perform sharpness detection on the target image based on a target sharpness detection model.
In an alternative embodiment, the texture determining module 401 is specifically configured to:
based on a texture detection model, detecting the texture richness of the target image to obtain a texture richness value of the target image;
the texture detection model is obtained by taking a first image and a second image in a sample image pair as input, and taking a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value to perform model training.
In an alternative embodiment, the image sharpness detection apparatus 400 further includes a texture model building module, where the texture model building module includes:
the image input unit is used for taking the first image as the input of a first model in the twin network to obtain a first texture richness value output by the first model, and taking the second image as the input of a second model in the twin network to obtain a second texture richness value output by the second model;
the probability output unit is used for determining the model output probability according to the richness difference value between the first texture richness value and the second texture richness value;
and a model parameter updating unit, configured to update model parameters in the first model and the second model according to a probability difference between the tag value and the model output probability, and take the trained first model or second model as the texture detection model.
In an alternative embodiment, the image sharpness detection apparatus 400 further includes a data construction module, where the data construction module includes:
a texture score unit, configured to determine a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
the richness value updating unit is used for updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image, and the first texture comparison probability; and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image, and the second texture comparison probability;
and the comparison probability updating unit is used for updating the first texture comparison probability and the second texture comparison probability according to the updated texture richness value of the first image and the updated texture richness value of the second image.
In an alternative embodiment, the richness value updating unit is specifically configured to:
updating the texture richness value of the first image and the texture richness value of the second image by the following formula:
R′_A = R_A + K × (S_A − P_B>A);
R′_B = R_B + K × (S_B − P_A>B);
wherein R_A and R_B are respectively the texture richness value of the first image and the texture richness value of the second image; S_A and S_B are respectively the texture score of the first image and the texture score of the second image; P_A>B and P_B>A are respectively the first texture comparison probability and the second texture comparison probability; K is a reward factor; and R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image.
In an alternative embodiment, the comparison probability updating unit is specifically configured to:
updating the first texture comparison probability and the second texture comparison probability respectively by the following formula:
P′_A>B = 1 / (1 + 10^((R′_B − R′_A) / M));
P′_B>A = 1 / (1 + 10^((R′_A − R′_B) / M));
wherein R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image; M is a difference measurement factor; and P′_A>B and P′_B>A are respectively the updated first texture comparison probability and the updated second texture comparison probability.
In an alternative embodiment, the sharpness detection module includes:
a gray scale map unit for determining a gray scale map of the target image;
the definition index unit is used for determining the definition index value of the target image according to the coordinates of the pixel points in the gray level image;
and the definition detection unit is used for taking the definition index value as the input of the target definition detection model to obtain a definition detection result of the target image.
According to the technical scheme, the image classifying method is used for classifying the texture richness of the image, and different definition index thresholds are used for different texture types, so that the problem that judgment standards are different due to the change of shooting conditions and shooting objects can be well solved.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 shows a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units executing machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, for example, the image sharpness detection method. For example, in some embodiments, the image sharpness detection method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the above-described image sharpness detection method may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the image sharpness detection method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described here may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special or general purpose and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs executing on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and virtual private server (VPS) services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A method for detecting image sharpness, comprising:
based on the texture detection model, detecting the texture richness of the target image to obtain a texture richness value of the target image; the texture detection model is obtained by taking a first image and a second image in a sample image pair as input, and taking a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value to perform model training;
determining the texture category of the target image according to the texture richness value of the target image; wherein different sharpness index thresholds are used for different texture classes;
selecting a target definition detection model from candidate definition detection models according to the texture category to which the target image belongs; wherein, there is an association between the candidate sharpness detection model and the texture class;
determining a gray scale map of the target image;
determining the definition index value of the target image according to the coordinates of the pixel points in the gray level image;
taking the definition index value as the input of the target definition detection model to obtain a definition detection result of a target image;
before the model training is performed by taking the texture comparison probability between the first image and the second image as a label value, the method comprises the following steps:
determining a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability; and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability;
and respectively updating the first texture comparison probability and the second texture comparison probability according to the updated texture richness value of the first image and the updated texture richness value of the second image.
2. The method of claim 1, further comprising, prior to texture richness detection of the target image based on the texture detection model:
taking the first image as the input of a first model in the twin network to obtain a first texture richness value output by the first model, and taking the second image as the input of a second model in the twin network to obtain a second texture richness value output by the second model;
determining a model output probability according to a richness difference value between the first texture richness value and the second texture richness value;
model parameters in the first model and the second model are updated according to a probability difference between the label value and the model output probability, and the trained first model or second model is used as the texture detection model.
3. The method of claim 1, wherein the updating of the texture richness value of the first image is based on the texture richness value of the first image, the texture score of the first image, and the first texture comparison probability; and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability, comprising:
updating the texture richness value of the first image and the texture richness value of the second image by the following formula:
R′_A = R_A + K × (S_A − P_B>A);
R′_B = R_B + K × (S_B − P_A>B);
wherein R_A and R_B are respectively the texture richness value of the first image and the texture richness value of the second image; S_A and S_B are respectively the texture score of the first image and the texture score of the second image; P_A>B and P_B>A are respectively the first texture comparison probability and the second texture comparison probability; K is a reward factor; and R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image.
4. The method of claim 1, wherein the updating the first and second texture comparison probabilities according to the updated texture richness value of the first image and the updated texture richness value of the second image, respectively, comprises:
updating the first texture comparison probability and the second texture comparison probability respectively by the following formula:
P′_A>B = 1 / (1 + 10^((R′_B − R′_A) / M));
P′_B>A = 1 / (1 + 10^((R′_A − R′_B) / M));
wherein R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image; M is a difference measurement factor; and P′_A>B and P′_B>A are respectively the updated first texture comparison probability and the updated second texture comparison probability.
5. An image sharpness detection apparatus, comprising:
the richness value acquisition module is used for detecting the richness of the texture of the target image based on the texture detection model to obtain the richness value of the texture of the target image; the texture detection model is obtained by taking a first image and a second image in a sample image pair as input, and taking a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value to perform model training;
the texture determining module is used for determining the texture category of the target image according to the texture richness value of the target image; wherein different sharpness index thresholds are used for different texture classes;
the model selection module is used for selecting a target definition detection model from candidate definition detection models according to the texture category to which the target image belongs; wherein, there is an association between the candidate sharpness detection model and the texture class;
a sharpness detection module comprising:
a gray scale map unit for determining a gray scale map of the target image;
the definition index unit is used for determining the definition index value of the target image according to the coordinates of the pixel points in the gray level image;
the definition detection unit is used for taking the definition index value as the input of the target definition detection model to obtain a definition detection result of a target image;
a data construction module comprising:
a texture score unit, configured to determine a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
the richness value updating unit is used for updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image, and the first texture comparison probability; and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image, and the second texture comparison probability;
and the comparison probability updating unit is used for updating the first texture comparison probability and the second texture comparison probability according to the updated texture richness value of the first image and the updated texture richness value of the second image.
6. The apparatus of claim 5, further comprising a texture model building module, the texture model building module comprising:
the image input unit is used for taking the first image as the input of a first model in the twin network to obtain a first texture richness value output by the first model, and taking the second image as the input of a second model in the twin network to obtain a second texture richness value output by the second model;
the probability output unit is used for determining the model output probability according to the richness difference value between the first texture richness value and the second texture richness value;
and a model parameter updating unit, configured to update model parameters in the first model and the second model according to a probability difference between the tag value and the model output probability, and take the trained first model or second model as the texture detection model.
7. The apparatus of claim 5, wherein the richness value updating unit is specifically configured to:
updating the texture richness value of the first image and the texture richness value of the second image by the following formula:
R′_A = R_A + K × (S_A − P_B>A);
R′_B = R_B + K × (S_B − P_A>B);
wherein R_A and R_B are respectively the texture richness value of the first image and the texture richness value of the second image; S_A and S_B are respectively the texture score of the first image and the texture score of the second image; P_A>B and P_B>A are respectively the first texture comparison probability and the second texture comparison probability; K is a reward factor; and R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image.
8. The apparatus of claim 5, wherein the comparison probability updating unit is specifically configured to:
updating the first texture comparison probability and the second texture comparison probability respectively by the following formula:
P′_A>B = 1 / (1 + 10^((R′_B − R′_A) / M));
P′_B>A = 1 / (1 + 10^((R′_A − R′_B) / M));
wherein R′_A and R′_B are respectively the updated texture richness value of the first image and the updated texture richness value of the second image; M is a difference measurement factor; and P′_A>B and P′_B>A are respectively the updated first texture comparison probability and the updated second texture comparison probability.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-4.
CN202110725212.6A 2021-06-29 2021-06-29 Image definition detection method, device, equipment and storage medium Active CN113409288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110725212.6A CN113409288B (en) 2021-06-29 2021-06-29 Image definition detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110725212.6A CN113409288B (en) 2021-06-29 2021-06-29 Image definition detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113409288A CN113409288A (en) 2021-09-17
CN113409288B (en) 2023-06-27

Family

ID=77680003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110725212.6A Active CN113409288B (en) 2021-06-29 2021-06-29 Image definition detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113409288B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533097A (en) * 2019-08-27 2019-12-03 腾讯科技(深圳)有限公司 A kind of image definition recognition methods, device, electronic equipment and storage medium
CN111027577A (en) * 2019-11-13 2020-04-17 湖北省纤维检验局 Fabric abnormal texture type identification method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101123677B (en) * 2006-08-11 2011-03-02 松下电器产业株式会社 Method, device and integrated circuit for improving image acuteness
WO2019104705A1 (en) * 2017-12-01 2019-06-06 华为技术有限公司 Image processing method and device
CN109344752B (en) * 2018-09-20 2019-12-10 北京字节跳动网络技术有限公司 Method and apparatus for processing mouth image


Also Published As

Publication number Publication date
CN113409288A (en) 2021-09-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant