CN109671051B - Image quality detection model training method and device, electronic equipment and storage medium - Google Patents

Image quality detection model training method and device, electronic equipment and storage medium

Info

Publication number
CN109671051B
Authority
CN
China
Prior art keywords
image, quality, marked, mark, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811359236.9A
Other languages
Chinese (zh)
Other versions
CN109671051A (en)
Inventor
张学森
伊帅
闫俊杰
王晓刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201811359236.9A priority Critical patent/CN109671051B/en
Publication of CN109671051A publication Critical patent/CN109671051A/en
Application granted granted Critical
Publication of CN109671051B publication Critical patent/CN109671051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The disclosure relates to an image quality detection model training method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a first marked image that has been manually annotated with a quality mark, where the quality mark indicates that the image quality meets a set quality condition; retrieving an image to be marked against an image library, and determining a quality mark for the image to be marked according to the retrieval result to obtain a second marked image; and training an image quality detection model, used for detecting image quality, according to the first marked image and the second marked image. In the embodiments of the disclosure, the first marked image and the second marked image complement each other, so the quality annotation of the sample images is more comprehensive and accurate, and the trained image quality detection model can label image quality more accurately.

Description

Image quality detection model training method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image quality detection model training method and apparatus, an electronic device, and a storage medium.
Background
In the field of image processing technology, there is an increasing demand for detecting image quality. How to accurately detect the quality of an image by using a neural network model is an urgent problem to be solved in the field of image processing.
Disclosure of Invention
The present disclosure provides a technical scheme for training an image quality detection model.
According to an aspect of the present disclosure, there is provided an image quality detection model training method, including:
acquiring a first mark image which is artificially marked with a quality mark, wherein the quality mark is used for indicating that the image quality meets a set quality condition;
searching an image to be marked by using an image library, and determining a quality mark of the image to be marked according to a search result to obtain a second marked image;
and training an image quality detection model according to the first label image and the second label image, wherein the image quality detection model is used for detecting the image quality.
In a possible implementation manner, the retrieving, by using an image library, an image to be marked, and determining a quality label of the image to be marked according to a retrieval result to obtain a second marked image includes:
searching images similar to the images to be marked in the image library to serve as search images;
and determining a quality mark of the image to be marked according to the mark of the image to be marked and the mark of the retrieval image, and determining the image to be marked comprising the quality mark as a second marked image, wherein the mark is the mark of a target object in the image.
In a possible implementation manner, the retrieving, as a retrieved image, an image similar to the image to be tagged in the image library includes:
extracting first features of the image to be marked and second features of each image in the image library;
determining similarity between the first feature and a second feature of each image in the image library;
and determining the image corresponding to the second feature with the highest similarity with the first feature in the image library as a retrieval image.
In a possible implementation manner, the determining the quality label of the image to be labeled according to the identifier of the image to be labeled and the identifier of the retrieval image includes:
when the identification of the image to be marked is consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a first quality mark; or
when the identification of the image to be marked is not consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a second quality mark.
In a possible implementation manner, before retrieving an image to be marked by using an image library, determining a quality label of the image to be marked according to a retrieval result, and obtaining a second marked image, the method further includes:
determining a plurality of image pairs in an original image, each image pair comprising a first image and a second image, the target object in the first image and the second image being the same;
determining each first image as the image to be marked;
and forming the second images into the image library.
In one possible implementation, the image quality detection model is a residual network model.
In one possible implementation, the set quality condition includes at least one of the following conditions:
the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body.
According to an aspect of the present disclosure, there is provided an image quality detection model training apparatus, the apparatus including:
a first marked image acquisition module, used for acquiring a first marked image manually marked with a quality mark, wherein the quality mark is used for indicating that the image quality meets a set quality condition;
the second marked image acquisition module is used for searching the image to be marked by utilizing the image library and determining the quality mark of the image to be marked according to the searching result to obtain a second marked image;
and the training module is used for training an image quality detection model according to the first labeled image and the second labeled image, and the image quality detection model is used for detecting the image quality.
In one possible implementation, the second marker image obtaining module includes:
the retrieval image acquisition submodule is used for retrieving an image similar to the image to be marked in the image library as a retrieval image;
and a quality mark determining submodule, used for determining the quality mark of the image to be marked according to the mark of the image to be marked and the mark of the retrieval image, and determining the image to be marked including the quality mark as a second marked image, wherein the mark is the mark of a target object in the image.
In a possible implementation manner, the retrieval image obtaining sub-module is configured to:
extracting first features of the image to be marked and second features of each image in the image library;
determining similarity between the first feature and a second feature of each image in the image library;
and determining the image corresponding to the second feature with the highest similarity with the first feature in the image library as a retrieval image.
In a possible implementation, the quality indicators include a first quality indicator and a second quality indicator, and the quality indicator determination sub-module is configured to:
when the identification of the image to be marked is consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a first quality mark; or
when the identification of the image to be marked is not consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a second quality mark.
In one possible implementation, the apparatus further includes:
an image pair determination module for determining a plurality of image pairs in an original image, each of the image pairs comprising a first image and a second image, the first image and the second image having a same target object;
the image to be marked determining module is used for determining each first image as the image to be marked;
and the image library determining module is used for forming the second images into the image library.
In one possible implementation, the image quality detection model is a residual network model.
In one possible implementation, the set quality condition includes at least one of the following conditions:
the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body.
According to an aspect of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of the above.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the method of any one of the above.
In the embodiment of the disclosure, a first marker image artificially marked with a quality marker is acquired; searching the image to be marked by using the image library, and determining the quality mark of the image to be marked according to the searching result to obtain a second marked image; and training an image quality detection model according to the first label image and the second label image. The first marked image and the second marked image can be mutually supplemented, the quality marking of the sample image is more comprehensive and accurate, and the trained image quality detection model can more accurately mark the image quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of an image quality detection model training method according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of an image quality detection model training method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating an occluded target object and a non-occluded target object in an image selection method according to an embodiment of the present disclosure;
FIG. 4 shows a block diagram of an image quality detection model training apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an image quality detection model training method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes the following steps:
in step S10, a first marker image artificially marked with a quality marker indicating that the image quality satisfies a set quality condition is acquired.
In one possible implementation, the first marked image, whose quality mark has been applied manually, may be used as a sample image for training the image quality detection model. The target object in the first marked image may be any of various types of objects, such as a person, an animal, a vehicle, or a building, and the image may contain one or more such objects. Letters, numbers, symbols, or any combination thereof may be used as the content of the quality mark. For example, the number "1" may be used as a first quality mark, representing that the image quality is good and meets the usage requirement, and the number "0" may be used as a second quality mark, representing that the image quality is poor and does not meet the usage requirement. The present disclosure is not limited in this respect.
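A minimal sketch of how such manually marked sample images might be represented is shown below; the file names and the 1/0 label encoding are illustrative assumptions rather than requirements of the disclosure.

```python
# Hypothetical storage of "first marked images": each entry pairs an image
# path with a manually assigned quality mark (1 = meets the set quality
# condition, 0 = does not). Paths and encoding are assumptions for illustration.
manually_marked = [
    ("images/pedestrian_0001.jpg", 1),  # sharp, unoccluded -> good quality
    ("images/pedestrian_0002.jpg", 0),  # blurred or occluded -> poor quality
]
```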
In step S20, the image to be marked is retrieved using the image library, and the quality mark of the image to be marked is determined according to the retrieval result to obtain the second marked image.
In one possible implementation, the target object in the second marked image may likewise be any of various types of objects, such as a person, an animal, a vehicle, or a building, and the image may contain one or more such objects. An image similar to the image to be marked that is retrieved from the image library may be determined as the retrieval image; the quality mark of the image to be marked may then be determined according to the mark of the image to be marked and the mark of the retrieval image, and the image to be marked together with its quality mark may be determined as the second marked image, where the mark is the mark of a target object in the image.
In one possible implementation, the image library may contain images of multiple objects, among which the target object of the image to be marked can be retrieved. The retrieval image can be found by comparing the object features in each image of the image library with the features of the target object in the image to be marked.
In a possible implementation manner, the determining, as a retrieval image, an image similar to the image to be tagged retrieved from the image library includes: extracting first features of the image to be marked and second features of each image in the image library; determining similarity between the first feature and a second feature of each image in the image library; and determining the image corresponding to the second feature with the highest similarity with the first feature in the image library as a retrieval image.
In one possible implementation, a neural network model may be used to extract the first feature of the image to be marked and the second feature of each image in the image library; the first feature and the second feature describe the same feature content. The extracted features can then be used to retrieve the image to be marked against the image library, and the image whose second feature has the highest similarity to the first feature may be determined as the retrieval image. For example, when the image to be marked is an image of a pedestrian, the pedestrian's clothing features, limb features, facial features, and the like may be extracted as the features of the image to be marked.
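A possible sketch of this retrieval step is given below. It assumes a generic feature extractor (any torch module mapping an image tensor to a feature vector) and uses cosine similarity; the disclosure does not fix a particular network or similarity measure, so both are illustrative choices.

```python
import torch
import torch.nn.functional as F

def retrieve_most_similar(query_img, gallery_imgs, feature_extractor):
    """Return the index of the gallery image most similar to the query image.

    query_img: tensor (C, H, W) - the image to be marked.
    gallery_imgs: list of tensors (C, H, W) - the image library.
    feature_extractor: any torch module mapping (N, C, H, W) -> (N, D).
    """
    with torch.no_grad():
        q = feature_extractor(query_img.unsqueeze(0))     # (1, D): first feature
        g = feature_extractor(torch.stack(gallery_imgs))  # (M, D): second features
        sim = F.cosine_similarity(q, g)                   # (M,): similarity scores
    return int(sim.argmax())                              # index of the retrieval image
```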
In one possible implementation, the determining of the quality mark of the image to be marked according to the identifier of the image to be marked and the identifier of the retrieval image includes: when the identifiers of the image to be marked and the retrieval image are consistent, determining that the quality mark of the image to be marked is a first quality mark; or, when the identifiers of the image to be marked and the retrieval image are not consistent, determining that the quality mark of the image to be marked is a second quality mark.
In one possible implementation, the image to be marked includes the identifier of a pedestrian A, and each image in the image library likewise includes the identifier of the object (pedestrian) it contains. When the identifier of the image to be marked is consistent with the identifier of the retrieval image, the image quality of the image to be marked can be considered good and its recognition rate during image processing high, so its quality mark can be determined as the mark representing good image quality. When the identifier of the image to be marked is not consistent with the identifier of the retrieval image, the image quality of the image to be marked can be considered poor and its recognition rate during image processing low, so its quality mark can be determined as the mark representing poor image quality.
For example, the image library includes images of a plurality of pedestrians. Retrieval is performed in the image library according to pedestrian A in the image to be marked, a pedestrian similar to pedestrian A is found, and the found image is determined as the retrieval image. When the identifier of the retrieval image is also pedestrian A, the quality mark of the image to be marked can be determined as the mark representing good image quality; when the identifier of the retrieval image is not pedestrian A, the quality mark can be determined as the mark representing poor image quality.
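In code, this decision rule reduces to a simple identity comparison. The helper below is hypothetical; the 1/0 encoding of the first and second quality marks follows the example used earlier in this description.

```python
def quality_mark_from_retrieval(query_id, retrieved_id):
    """Assign a quality mark from the retrieval result (illustrative sketch).

    Returns 1 (first quality mark, good quality) when the identifier of the
    image to be marked matches the identifier of the retrieval image,
    otherwise 0 (second quality mark, poor quality).
    """
    return 1 if query_id == retrieved_id else 0

# e.g. the image to be marked shows pedestrian "A":
# quality_mark_from_retrieval("A", "A") -> 1
# quality_mark_from_retrieval("A", "B") -> 0
```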
In one possible implementation, because the generation of the second marked image performs feature extraction through a neural network model, just as the image quality detection model does, it can assign an accurate quality mark to images whose quality cannot be annotated accurately by hand.
In one possible implementation, step S10 and step S20 may be performed simultaneously or sequentially in any order. The present disclosure is not limited thereto.
In step S30, an image quality detection model is trained according to the first marked image and the second marked image, the image quality detection model being used for detecting image quality.
In one possible implementation, the first marked images and the second marked images may be combined to obtain the sample images. One image or a batch of images from the sample images can be input into the image quality detection model for processing to obtain a quality detection result. The loss of the image quality detection model can then be computed from the obtained quality detection result and the quality marks of the sample images. The gradient of the loss can be back-propagated through the image quality detection model to adjust its parameters, completing one training iteration. The image quality detection model may be trained iteratively in this way: when a set number of iterations is reached or the model satisfies a set convergence condition, training can be stopped and the trained image quality detection model is obtained.
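The following is one possible shape of this training loop, assuming the merged sample images are served by a standard data loader; the cross-entropy loss, SGD optimizer, and fixed epoch count are illustrative choices not specified by the disclosure.

```python
import torch
import torch.nn as nn

def train_quality_model(model, loader, epochs=10, lr=1e-3):
    """Sketch of iterative training on the merged first and second marked images.

    loader is assumed to yield (image_batch, quality_mark_batch) pairs,
    with quality marks encoded as integer class labels.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()                 # compare detection result to quality marks
    for _ in range(epochs):                           # iterative training / stopping rule
        for images, marks in loader:
            logits = model(images)                    # quality detection result
            loss = criterion(logits, marks)           # loss of the detection model
            optimizer.zero_grad()
            loss.backward()                           # back-propagate the gradient of the loss
            optimizer.step()                          # adjust the model parameters
    return model
```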
In the present embodiment, a first marker image artificially marked with a quality marker is acquired; searching the image to be marked by using the image library, and determining the quality mark of the image to be marked according to the searching result to obtain a second marked image; and training an image quality detection model according to the first label image and the second label image. The first marked image and the second marked image can be mutually supplemented, the quality marking of the sample image is more comprehensive and accurate, and the trained image quality detection model can more accurately mark the image quality.
Fig. 2 shows a flowchart of an image quality detection model training method according to an embodiment of the present disclosure, and as shown in fig. 2, the image quality detection model training method further includes:
in step S40, a plurality of image pairs are determined in the original image, each of the image pairs including a first image and a second image, the target object in the first image and the second image being the same.
In step S50, each first image is determined as an image to be marked.
In step S60, the second images are combined into the image library.
In one possible implementation, the original images may be surveillance images captured by a monitoring device. Two images in which the target object is the same may be determined as an image pair. For example, the original images may be surveillance images of a road: a pair of images of pedestrian A may be determined as image pair 1, a pair of images of pedestrian B as image pair 2, and so on.
In one possible implementation, the first image in each image pair may be used as an image to be marked, and the image library may be composed of the second images of the image pairs. For example, suppose first image A and second image A are the two images of image pair A. When the image retrieved for first image A is second image A, the image quality of first image A can be considered good; when the retrieved image is not second image A, the image quality of first image A can be considered poor.
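A sketch of how the images to be marked and the image library might be built from the original images is shown below. It assumes the original images come as (object_id, image) tuples, i.e. that the identity of the target object in each surveillance image is already known; this grouping step is an assumption for illustration.

```python
from collections import defaultdict

def build_query_and_library(original_images):
    """Split original images into images to be marked and an image library.

    original_images: iterable of (object_id, image) tuples. For every object
    with at least two images, one image becomes a first image (to be marked)
    and another becomes a second image (added to the image library).
    """
    by_object = defaultdict(list)
    for obj_id, img in original_images:
        by_object[obj_id].append(img)

    to_be_marked, image_library = [], []
    for obj_id, imgs in by_object.items():
        if len(imgs) >= 2:                            # an image pair exists for this object
            to_be_marked.append((obj_id, imgs[0]))    # first image of the pair
            image_library.append((obj_id, imgs[1]))   # second image of the pair
    return to_be_marked, image_library
```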
In this embodiment, a plurality of image pairs are determined in the original images, each image pair including a first image and a second image; each first image is determined as an image to be marked, and the second images are combined into the image library. Because the image library is built from the counterparts of the images to be marked, the efficiency of retrieving the image library according to the images to be marked can be improved.
In one possible implementation, the image quality of the first image and the second image satisfies at least one of the following conditions: the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body.
In one possible implementation, the first images and second images that make up the image pairs may be screened for image quality. An image whose resolution exceeds the resolution threshold, whose definition exceeds the definition threshold, whose target object is not occluded, or whose target object is a living body may be taken as a first image or a second image. Because the image pairs are screened against these conditions, the second marked images generated from them have high image quality and serve better as a supplement to the first marked images, making the training result of the image quality detection model more accurate.
In the present embodiment, the image quality of the first image and the second image satisfies at least one of the conditions above. Quality-screened first and second images have high image quality, so the training process of the image quality detection model is more efficient and its result is more accurate.
In one possible implementation, the image quality detection model is a residual network model.
In one possible implementation, the residual network may include residual blocks formed by connecting two or more convolutional layers through a shortcut connection. The shortcut connection may skip one or more convolutional layers, perform an identity mapping, and add its output to the output of the stacked layers of the residual block. Compared with a conventional multi-layer convolutional neural network, the training error of a residual network keeps decreasing as the number of layers grows, the vanishing-gradient and exploding-gradient problems are alleviated, and good network performance can be maintained while training a deeper image quality detection model.
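A minimal residual block along these lines is sketched below. The disclosure only states that the model is a residual network, so the channel counts, normalization layers, and overall depth here are illustrative assumptions.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two stacked convolutional layers plus an identity shortcut connection."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # shortcut: add the input to the stacked output
```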
In this embodiment, the image quality detection model is a residual network model. The residual network model allows the image quality detection model to be deeper while maintaining good network performance.
In one possible implementation, the set quality condition includes at least one of the following conditions:
the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body.
In one possible implementation, the set quality condition may be determined as needed, and the content, type, and total number of quality marks can be determined according to the requirements and the set quality condition. For example, the set quality condition may include a definition threshold: only images whose definition is greater than or equal to the definition threshold are given the first quality mark, while images whose definition is below the threshold are given the second quality mark. In this case, the first quality mark indicates good image quality and the second quality mark indicates poor image quality.
The set quality condition may also include two resolution thresholds, the first resolution threshold being higher than the second. An image with a resolution greater than the first threshold may be given a first quality mark, an image with a resolution below the first threshold but greater than or equal to the second threshold may be given a second quality mark, and an image with a resolution below the second threshold may be given a third quality mark. In this case, the first quality mark indicates the best image quality, the second quality mark the next best, and the third quality mark the worst.
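The two-threshold example above amounts to a small decision rule; a sketch is given below, where the threshold values and the 1/2/3 return codes are illustrative assumptions.

```python
def quality_mark_by_resolution(resolution, high_threshold, low_threshold):
    """Three-level quality mark from two resolution thresholds (illustrative)."""
    if resolution > high_threshold:      # above the first (higher) threshold
        return 1                         # first quality mark: best quality
    elif resolution >= low_threshold:    # between the two thresholds
        return 2                         # second quality mark: next best
    else:                                # below the second (lower) threshold
        return 3                         # third quality mark: worst quality
```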
In one possible implementation, the image resolution refers to the number of pixels contained in a unit size of the image; the higher the resolution, the finer the image detail. The image definition refers to how clearly each detail and its boundary appear in the image. The target object being not occluded means that the target object in the image is not blocked by other objects; for example, a pedestrian in an image is not occluded when the target pedestrian is not blocked by a vehicle or by other pedestrians. FIG. 3 is a schematic diagram illustrating an occluded and a non-occluded target object in an image selection method according to an embodiment of the present disclosure. As shown in FIG. 3, the pedestrian in the upper image is occluded by other pedestrians, while the pedestrian in the lower image is not occluded and the pedestrian's whole body is visible in the image. The target object being a living body means that the target object in the image is not taken from another image; for example, a pedestrian in an image is a living body when the pedestrian is a real person rather than, say, a figure printed on a roadside billboard.
In one possible implementation, the conditions that the image resolution is greater than the resolution threshold, the image definition is greater than the definition threshold, the target object in the image is not occluded, and the target object in the image is a living body may be combined arbitrarily as required. Images that satisfy the resulting combined quality condition can then be identified as good-quality images in the set of candidate images.
In this embodiment, the image resolution, the image definition, whether the target object is occluded, and whether the target object is a living body each measure a different aspect of image quality, so the quality of an image can be judged comprehensively.
Application example:
In security applications such as public security, cameras installed along a road may photograph vehicles on the road, producing an image set containing various vehicles. Processing the images in this set can serve different security management needs, such as tracking a suspect vehicle. An image quality detection model may be trained for this purpose: the trained model assigns quality marks to the vehicle images, so that only images of good quality need to be kept in the image set, saving storage space and improving the efficiency with which the image set can be used.
The captured vehicle images can be quality-marked manually to obtain first marked images: images meeting the set quality condition are given the corresponding quality mark. The set quality condition includes at least one of the following conditions: the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body. The quality marks used include a mark "1" indicating good image quality and a mark "0" indicating poor image quality.
A captured vehicle image can also be used as an image to be marked: it is retrieved against an existing image library, and its quality mark is determined according to the retrieval result to obtain a second marked image. The detailed process is as described in the embodiments above and is not repeated here. Because the acquisition of the second marked image uses the same features as the image quality detection model, it can apply accurate quality marks to images whose quality cannot be reliably distinguished by manual marking.
An image quality detection model may then be trained based on the first marked images and the second marked images. The first and second marked images complement each other, improving the training efficiency of the image quality detection model.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
It is understood that the above method embodiments of the present disclosure can be combined with one another to form combined embodiments without departing from their principles and logic; for brevity, the details are not repeated in this disclosure.
In addition, the present disclosure also provides an image quality detection model training apparatus, an electronic device, a computer-readable storage medium, and a program, each of which can be used to implement any image quality detection model training method provided by the present disclosure; the corresponding technical solutions and descriptions can be found in the method section above and are not repeated here for brevity.
Fig. 4 shows a block diagram of an image quality detection model training apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes:
a first marker image obtaining module 10, configured to obtain a first marker image artificially marked with a quality marker, where the quality marker is used to indicate that image quality meets a set quality condition;
the second marked image obtaining module 20 is configured to retrieve an image to be marked by using an image library, and determine a quality mark of the image to be marked according to a retrieval result to obtain a second marked image;
a training module 30, configured to train an image quality detection model according to the first labeled image and the second labeled image, where the image quality detection model is used to detect image quality.
In one possible implementation, the second marker image obtaining module includes:
the retrieval image acquisition submodule is used for retrieving an image similar to the image to be marked in the image library as a retrieval image;
and a quality mark determining submodule, used for determining the quality mark of the image to be marked according to the mark of the image to be marked and the mark of the retrieval image, and determining the image to be marked including the quality mark as a second marked image, wherein the mark is the mark of a target object in the image.
In a possible implementation manner, the retrieval image obtaining sub-module is configured to:
extracting first features of the image to be marked and second features of each image in the image library;
determining similarity between the first feature and a second feature of each image in the image library;
and determining the image corresponding to the second feature with the highest similarity with the first feature in the image library as a retrieval image.
In a possible implementation, the quality indicators include a first quality indicator and a second quality indicator, and the quality indicator determination sub-module is configured to:
when the identification of the image to be marked is consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a first quality mark; or
when the identification of the image to be marked is not consistent with the identification of the retrieval image, determining that the quality mark of the image to be marked is a second quality mark.
In one possible implementation, the apparatus further includes:
an image pair determination module for determining a plurality of image pairs in an original image, each of the image pairs comprising a first image and a second image, the first image and the second image having a same target object;
the image to be marked determining module is used for determining each first image as the image to be marked;
and the image library determining module is used for forming the second images into the image library.
In one possible implementation, the image quality detection model is a residual network model.
In one possible implementation, the set quality condition includes at least one of the following conditions:
the image resolution is greater than a resolution threshold, the image definition is greater than a definition threshold, the target object in the image is not occluded, and the target object in the image is a living body.
In some embodiments, the functions of, or the modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above; for specific implementations, reference may be made to the descriptions of those method embodiments, which are not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 is a block diagram illustrating an electronic device 800 in accordance with an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA) may be personalized using state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (14)

1. An image quality detection model training method, characterized in that the method comprises:
acquiring a first marked image that is manually marked with a quality mark, wherein the quality mark is used for indicating that the image quality meets a set quality condition;
retrieving an image to be marked using an image library, and determining a quality mark of the image to be marked according to a retrieval result to obtain a second marked image;
training an image quality detection model according to the first marked image and the second marked image, wherein the image quality detection model is used for detecting image quality;
wherein the retrieving the image to be marked using the image library and determining the quality mark of the image to be marked according to the retrieval result to obtain the second marked image comprises:
retrieving, from the image library, an image similar to the image to be marked as a retrieved image;
and determining the quality mark of the image to be marked according to an identifier of the image to be marked and an identifier of the retrieved image, and determining the image to be marked comprising the quality mark as the second marked image, wherein the identifier is an identifier of a target object in the image.
2. The method according to claim 1, wherein the retrieving, from the image library, the image similar to the image to be marked as the retrieved image comprises:
extracting a first feature of the image to be marked and a second feature of each image in the image library;
determining a similarity between the first feature and the second feature of each image in the image library;
and determining, as the retrieved image, the image in the image library corresponding to the second feature having the highest similarity with the first feature.
3. The method according to claim 1, wherein the quality mark comprises a first quality mark and a second quality mark, and the determining the quality mark of the image to be marked according to the identifier of the image to be marked and the identifier of the retrieved image comprises:
when the identifier of the image to be marked is consistent with the identifier of the retrieved image, determining that the quality mark of the image to be marked is the first quality mark; or
when the identifier of the image to be marked is not consistent with the identifier of the retrieved image, determining that the quality mark of the image to be marked is the second quality mark.
4. The method according to any one of claims 1 to 3, wherein before the retrieving the image to be marked using the image library and determining the quality mark of the image to be marked according to the retrieval result to obtain the second marked image, the method further comprises:
determining a plurality of image pairs from original images, each image pair comprising a first image and a second image, wherein the target object in the first image and the second image is the same;
determining each first image as an image to be marked;
and forming the second images into the image library.
5. The method according to claim 1, wherein the image quality detection model is a residual network model.
6. The method according to claim 3, wherein the set quality condition comprises at least one of the following conditions:
an image resolution is greater than a resolution threshold, an image sharpness is greater than a sharpness threshold, a target object in the image is not occluded, and the target object in the image is a living body.
7. An image quality detection model training apparatus, characterized in that the apparatus comprises:
a first marked image acquisition module, configured to acquire a first marked image that is manually marked with a quality mark, wherein the quality mark is used for indicating that the image quality meets a set quality condition;
a second marked image acquisition module, configured to retrieve an image to be marked using an image library and determine a quality mark of the image to be marked according to a retrieval result to obtain a second marked image;
and a training module, configured to train an image quality detection model according to the first marked image and the second marked image, wherein the image quality detection model is used for detecting image quality;
wherein the second marked image acquisition module comprises:
a retrieved image acquisition submodule, configured to retrieve, from the image library, an image similar to the image to be marked as a retrieved image;
and a quality mark determination submodule, configured to determine the quality mark of the image to be marked according to an identifier of the image to be marked and an identifier of the retrieved image, and determine the image to be marked comprising the quality mark as the second marked image, wherein the identifier is an identifier of a target object in the image.
8. The apparatus of claim 7, wherein the retrieved image acquisition submodule is configured to:
extract a first feature of the image to be marked and a second feature of each image in the image library;
determine a similarity between the first feature and the second feature of each image in the image library;
and determine, as the retrieved image, the image in the image library corresponding to the second feature having the highest similarity with the first feature.
9. The apparatus of claim 7, wherein the quality mark comprises a first quality mark and a second quality mark, and the quality mark determination submodule is configured to:
when the identifier of the image to be marked is consistent with the identifier of the retrieved image, determine that the quality mark of the image to be marked is the first quality mark; or
when the identifier of the image to be marked is not consistent with the identifier of the retrieved image, determine that the quality mark of the image to be marked is the second quality mark.
10. The apparatus of any one of claims 7 to 9, further comprising:
an image pair determination module, configured to determine a plurality of image pairs from original images, each image pair comprising a first image and a second image, wherein the target object in the first image and the second image is the same;
an image-to-be-marked determination module, configured to determine each first image as an image to be marked;
and an image library determination module, configured to form the second images into the image library.
11. The apparatus of claim 7, wherein the image quality detection model is a residual network model.
12. The apparatus of claim 9, wherein the set quality condition comprises at least one of the following conditions:
an image resolution is greater than a resolution threshold, an image sharpness is greater than a sharpness threshold, a target object in the image is not occluded, and the target object in the image is a living body.
13. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 6.
14. A computer readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method according to any one of claims 1 to 6.
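Claims 1 to 5 describe a retrieval-based automatic labeling pipeline: an image to be marked is compared against an image library by feature similarity, a quality mark is assigned according to whether the target-object identifiers of the image to be marked and the retrieved image are consistent, and the automatically marked images are then pooled with the manually marked images to train a residual-network quality detection model. The following Python sketch is illustrative only and is not the patentee's implementation: the ResNet-18 backbone, cosine similarity, the numeric values of the quality marks, the helper names (build_feature_extractor, retrieve_most_similar, assign_quality_mark, train_quality_model), the assumption of preprocessed 3x224x224 image tensors, and the training hyperparameters are all assumptions made for this example.

```python
# Minimal sketch of the claimed pipeline (claims 1-5); all names, values and
# hyperparameters below are illustrative assumptions, not the patented method.
import torch
import torch.nn as nn
from torchvision import models

FIRST_QUALITY_MARK = 1   # identifiers consistent  -> quality meets the set condition
SECOND_QUALITY_MARK = 0  # identifiers inconsistent -> quality does not meet it

def build_feature_extractor():
    # Residual network backbone (claim 5); the final fully connected layer is
    # replaced so the network outputs a feature vector instead of class scores.
    net = models.resnet18()
    net.fc = nn.Identity()
    return net.eval()

@torch.no_grad()
def retrieve_most_similar(query_img, library_imgs, extractor):
    # Claim 2: compare the first feature of the image to be marked with the
    # second feature of every library image and keep the most similar one.
    # Inputs are assumed to be preprocessed 3x224x224 tensors.
    q = extractor(query_img.unsqueeze(0))            # shape (1, 512)
    lib = extractor(torch.stack(library_imgs))       # shape (N, 512)
    sims = nn.functional.cosine_similarity(q, lib)   # shape (N,)
    return int(sims.argmax())                        # index of the retrieved image

def assign_quality_mark(query_id, retrieved_id):
    # Claim 3: consistent target-object identifiers -> first quality mark,
    # otherwise the second quality mark.
    return FIRST_QUALITY_MARK if query_id == retrieved_id else SECOND_QUALITY_MARK

def train_quality_model(images, quality_marks, epochs=5, lr=1e-3):
    # Claims 1 and 5: train a residual-network quality detection model on the
    # union of manually marked (first) and automatically marked (second) images.
    # Whole-batch toy loop, kept short for illustration.
    model = models.resnet18()
    model.fc = nn.Linear(model.fc.in_features, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    x = torch.stack(images)
    y = torch.tensor(quality_marks)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    return model
```

Under this reading, the second marked images obtained from retrieval supplement the comparatively scarce manually marked images, which is why both sets are combined before training.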
CN201811359236.9A 2018-11-15 2018-11-15 Image quality detection model training method and device, electronic equipment and storage medium Active CN109671051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811359236.9A CN109671051B (en) 2018-11-15 2018-11-15 Image quality detection model training method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109671051A CN109671051A (en) 2019-04-23
CN109671051B true CN109671051B (en) 2021-01-26

Family

ID=66142557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811359236.9A Active CN109671051B (en) 2018-11-15 2018-11-15 Image quality detection model training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109671051B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091069A (en) * 2019-11-27 2020-05-01 云南电网有限责任公司电力科学研究院 Power grid target detection method and system guided by blind image quality evaluation
SG10201913056VA (en) * 2019-12-23 2021-04-29 Sensetime Int Pte Ltd Method and apparatus for obtaining sample images, and electronic device
CN112949709A (en) * 2021-02-26 2021-06-11 北京达佳互联信息技术有限公司 Image data annotation method and device, electronic equipment and storage medium
CN113792661A (en) * 2021-09-15 2021-12-14 北京市商汤科技开发有限公司 Image detection method, image detection device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043088B2 (en) * 2016-06-23 2018-08-07 Siemens Healthcare Gmbh Image quality score using a deep generative machine-learning model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171256A (en) * 2017-11-27 2018-06-15 深圳市深网视界科技有限公司 Facial image matter comments model construction, screening, recognition methods and equipment and medium
CN108288027A (en) * 2017-12-28 2018-07-17 新智数字科技有限公司 A kind of detection method of picture quality, device and equipment
CN108109145A (en) * 2018-01-02 2018-06-01 中兴通讯股份有限公司 Picture quality detection method, device, storage medium and electronic device
CN108389182A (en) * 2018-01-24 2018-08-10 北京卓视智通科技有限责任公司 A kind of picture quality detection method and device based on deep neural network

Also Published As

Publication number Publication date
CN109671051A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN111339846B (en) Image recognition method and device, electronic equipment and storage medium
CN109635142B (en) Image selection method and device, electronic equipment and storage medium
CN109671051B (en) Image quality detection model training method and device, electronic equipment and storage medium
CN109615006B (en) Character recognition method and device, electronic equipment and storage medium
CN109934275B (en) Image processing method and device, electronic equipment and storage medium
CN109543536B (en) Image identification method and device, electronic equipment and storage medium
CN110990801B (en) Information verification method and device, electronic equipment and storage medium
CN112465843A (en) Image segmentation method and device, electronic equipment and storage medium
CN110858924B (en) Video background music generation method and device and storage medium
CN110781813B (en) Image recognition method and device, electronic equipment and storage medium
CN113128520B (en) Image feature extraction method, target re-identification method, device and storage medium
CN112911239B (en) Video processing method and device, electronic equipment and storage medium
CN111898676B (en) Target detection method and device, electronic equipment and storage medium
CN111553864A (en) Image restoration method and device, electronic equipment and storage medium
CN110532956B (en) Image processing method and device, electronic equipment and storage medium
CN109101542B (en) Image recognition result output method and device, electronic device and storage medium
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN112991553A (en) Information display method and device, electronic equipment and storage medium
CN111104920A (en) Video processing method and device, electronic equipment and storage medium
CN113326768A (en) Training method, image feature extraction method, image recognition method and device
CN111523346A (en) Image recognition method and device, electronic equipment and storage medium
CN114332503A (en) Object re-identification method and device, electronic equipment and storage medium
CN111523599B (en) Target detection method and device, electronic equipment and storage medium
CN113722541A (en) Video fingerprint generation method and device, electronic equipment and storage medium
CN113313115A (en) License plate attribute identification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant