US11487995B2 - Method and apparatus for determining image quality
- Publication number: US11487995B2 (Application US16/050,346 / US201816050346A)
- Authority: US (United States)
- Prior art keywords: face, image, probability, category, obscured
- Prior art date
- Legal status: Active, expires
Classifications
- G06V40/161—Detection; Localisation; Normalisation
- G06N3/0454—
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06N20/00—Machine learning
- G06N3/045—Combinations of networks
- G06N7/00—Computing arrangements based on specific mathematical models
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V10/993—Evaluation of the quality of the acquired pattern
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
- G06V40/168—Feature extraction; Face representation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06K9/6256—
- G06N3/048—Activation functions
- G06N3/0481—
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30168—Image quality inspection
- G06T7/0002—Inspection of images, e.g. flaw detection
Definitions
- the present disclosure relates to the field of computer technology, specifically relates to the field of Internet technology, and more specifically to a method and apparatus for determining image quality.
- Facial recognition has been used in many application scenarios, such as face payment, face authentication and face beautification. If the quality of an image containing a face is not up to standard (for example, many parts of the face are obscured) during facial recognition, a recognition error may occur, or abnormal situations such as a system crash may happen. If the quality of the image containing the face is checked before facial recognition, subsequent facial recognition on an image whose quality is not up to standard may be avoided, thereby improving facial recognition efficiency. It is therefore important to determine the quality of an image containing a face.
- An objective of some embodiments of the present disclosure is to propose a method and apparatus for determining image quality.
- some embodiments of the present disclosure provide a method for determining image quality, the method including: acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image; extracting a face image from the to-be-recognized image on the basis of the facial region information; inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set, the convolutional neural network being used to represent a corresponding relationship between an image containing a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set, and the category identifier set having a category identifier indicating a category representing a face part; inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image; determining a probability of the face image being obscured on the basis of the probabilities and the coordinates; and determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- the convolutional neural network is trained by: extracting a preset training sample including a sample image displaying a face and an annotation of the sample image, the annotation including a data flag for representing whether a pixel in the sample image belongs to a category indicated by a category identifier in the category identifier set; and training using a machine learning method on the basis of the sample image, the annotation, a preset classification loss function and a back propagation algorithm to obtain the convolutional neural network, the classification loss function being used for representing a degree of difference between a probability output by the convolutional neural network and the data flag included in the annotation.
- the convolutional neural network includes five convolutional layers and five deconvolutional layers, the convolutional layers being used for downsampling inputted information with a preset window sliding step, and the deconvolutional layers being used for upsampling the inputted information with a preset amplification factor.
- the window sliding step is 2, and the amplification factor is 2.
- the determining a probability of the face image being obscured on the basis of the probabilities and the coordinates includes: inputting the probabilities of each pixel included in the face image belonging to the category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the face image into a preset probability calculation model to obtain the probability of the face image being obscured, wherein the probability calculation model is used to represent a corresponding relationship between inputted information and a probability of a face image being obscured, and the inputted information includes: probabilities of each pixel included in an image containing a face belonging to a category indicated by each category identifier in the category identifier set and coordinates of each key face point included in the image.
- the determining a probability of the face image being obscured on the basis of the probabilities and the coordinates further includes: determining a face part region set on the basis of the coordinates; determining, for each pixel included in the face image, a category indicated by a category identifier corresponding to a maximum probability corresponding to the pixel as the category the pixel is attributed to; calculating, for each face part region, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to; and determining the probability of the face image being obscured on the basis of probabilities of each face part region in the face part region set being obscured.
- the calculating, for each face part region, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to includes: determining, for each face part region, a number of pixels in the face part region attributed to a category not matching a face part represented by the face part region, and determining a ratio of the number to a total number of pixels included in the face part region as the probability of the face part region being obscured.
- the calculating, for each face part region, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to further includes: determining, for each face part region, a target pixel group including a target pixel, in the face part region, attributed to a category not matching a face part represented by the face part region, summing probabilities of each target pixel in the determined target pixel group belonging to a category matching the face part to obtain a first value, summing probabilities of each pixel in the face part region belonging to a category matching the face part to obtain a second value, and determining a ratio of the first value to the second value as the probability of the face part region being obscured.
- the determining the probability of the face image being obscured on the basis of probabilities of each face part region in the face part region set being obscured includes: determining the probability of the face image being obscured on the basis of an average of the probabilities of each face part region in the face part region set being obscured.
- the determining the probability of the face image being obscured on the basis of probabilities of each face part region in the face part region set being obscured further includes: acquiring a preset weight for representing an importance level of a face part; and weighting and summing the probabilities of each face part region in the face part region set being obscured on the basis of the weight to obtain a numeric value, and defining the numeric value as the probability of the face image being obscured.
- the extracting a face image from the to-be-recognized image on the basis of the facial region information includes: enlarging a range of the facial region indicated by the facial region information to obtain a first facial region; and extracting the first facial region to obtain the face image.
- the facial region is a rectangular region; and the enlarging the range of the facial region indicated by the facial region information to obtain a first facial region includes: increasing a height and width of the facial region indicated by the facial region information by a preset multiplier factor, or increasing the height and width of the facial region by a preset numeric value.
- the determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured includes: determining whether the probability of the face image being obscured is less than a probability threshold, and if the probability of the face image being obscured is less than the probability threshold, determining that the quality of the face image is up to standard.
- some embodiments of the present disclosure provide an apparatus for determining image quality, the apparatus including: an acquisition unit, configured for acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image; an extraction unit, configured for extracting a face image from the to-be-recognized image on the basis of the facial region information; a first input unit, configured for inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set, the convolutional neural network being used to represent a corresponding relationship between an image containing a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set, and the category identifier set having a category identifier indicating a category representing a face part; a second input unit, configured for inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image; a first determination unit, configured for determining a probability of the face image being obscured on the basis of the probabilities and the coordinates; and a second determination unit, configured for determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- the convolutional neural network is trained by: extracting a preset training sample including a sample image displaying a face and an annotation of the sample image, the annotation including a data flag for representing whether a pixel in the sample image belongs to a category indicated by a category identifier in the category identifier set; and training using a machine learning method on the basis of the sample image, the annotation, a preset classification loss function and a back propagation algorithm to obtain the convolutional neural network, the classification loss function being used for representing a degree of difference between a probability output by the convolutional neural network and the data flag included in the annotation.
- the first determination unit includes: a first determination subunit, configured for determining a face part region set on the basis of the coordinates; a second determination subunit, configured for determining, for each pixel included in the face image, a category indicated by a category identifier corresponding to a maximum probability corresponding to the pixel as the category the pixel is attributed to; a calculation subunit, configured for calculating, for each face part region, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to; and a third determination subunit, configured for determining the probability of the face image being obscured on the basis of probabilities of each face part region in the face part region set being obscured.
- the convolutional neural network includes five convolutional layers and five deconvolutional layers, the convolutional layers being used for downsampling inputted information with a preset window sliding step, and the deconvolutional layers being used for upsampling the inputted information with a preset amplification factor.
- the window sliding step is 2, and the amplification factor is 2.
- the first determination unit further includes: an input subunit, configured for inputting the probabilities of each pixel included in the face image belonging to the category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the face image into a preset probability calculation model to obtain the probability of the face image being obscured, wherein the probability calculation model is used to represent a corresponding relationship between inputted information and a probability of a face image being obscured, and the inputted information includes: probabilities of each pixel included in an image containing a face belonging to a category indicated by each category identifier in the category identifier set and coordinates of each key face point included in the image.
- the calculation subunit is further configured for: determining, for each face part region, a number of pixels in the face part region attributed to a category not matching a face part represented by the face part region, and determining a ratio of the number to a total number of pixels included in the face part region as the probability of the face part region being obscured.
- the calculation subunit is also further configured for: determining, for each face part region, a target pixel group including a target pixel, in the face part region, attributed to a category not matching a face part represented by the face part region, summing probabilities of each target pixel in the determined target pixel group belonging to a category matching the face part to obtain a first value, summing probabilities of each pixel in the face part region belonging to a category matching the face part to obtain a second value, and determining a ratio of the first value to the second value as the probability of the face part region being obscured.
- the third determination subunit is further configured for: determining the probability of the face image being obscured on the basis of an average of the probabilities of each face part region in the face part region set being obscured.
- the third determination subunit is also further configured for: acquiring a preset weight for representing an importance level of a face part; and weighting and summing the probabilities of each face part region in the face part region set being obscured on the basis of the weight to obtain a numeric value, and defining the numeric value as the probability of the face image being obscured.
- the extraction unit includes: an enlarging subunit, configured for enlarging a range of the facial region indicated by the facial region information to obtain a first facial region; and an extracting subunit, configured for extracting the first facial region to obtain the face image.
- the facial region is a rectangular region; the enlarging subunit is further configured for: increasing a height and width of the facial region indicated by the facial region information by a preset multiplier factor, or increasing the height and width of the facial region by a preset numeric value.
- the second determination unit is further configured for: determining whether the probability of the face image being obscured is less than a probability threshold, and if the probability of the face image being obscured is less than the probability threshold, determining that the quality of the face image is up to standard.
- some embodiments of the present disclosure provide an electronic device, the electronic device including: one or more processors; and a storage, for storing one or more programs, the one or more programs, when executed by the one or more processors, causing the one or more processors to implement the method according to any implementation in the first aspect.
- some embodiments of the present disclosure provide a computer readable storage medium storing a computer program, the program, when executed by a processor, implementing the method according to any implementation in the first aspect.
- the method and apparatus for determining image quality as provided in some embodiments of the present disclosure make full use of the extraction of face images, narrow the determination range and improve the image quality determination efficiency by: acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image, so as to extract a face image from the to-be-recognized image on the basis of the facial region information; then inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set; inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image; and finally, determining a probability of the face image being obscured on the basis of the probabilities and the coordinates so as to determine whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set are determined and coordinates of each key face point included in the face image are determined, so that a probability of the face image being obscured may be determined on the basis of the probabilities and the coordinates, thereby improving the accuracy of the probability of the face image being obscured and hence the accuracy of the image quality determination result.
- FIG. 1 is an architectural diagram of a system in which some embodiments of the present disclosure may be implemented
- FIG. 2 is a flowchart of an embodiment of a method for determining image quality according to the present disclosure
- FIG. 3 is a schematic diagram of an application scenario of a method for determining image quality according to some embodiments of the present disclosure
- FIG. 4 is a flowchart of another embodiment of a method for determining image quality according to the present disclosure
- FIG. 5 is a structural schematic diagram of an embodiment of an apparatus for determining image quality according to the present disclosure.
- FIG. 6 is a structural schematic diagram of a computer system adapted to implement an electronic device of some embodiments of the present disclosure.
- FIG. 1 shows a system architecture 100 which may be used by a method for determining image quality or an apparatus for determining image quality according to some embodiments of the present disclosure.
- the system architecture 100 may include a data storage server 101 , a network 102 and an image processing server 103 .
- the network 102 serves as a medium providing a communication link between the data storage server 101 and the image processing server 103 .
- the network 102 may include various types of connections, such as wired or wireless transmission links, or optical fibers.
- the data storage server 101 may be a server providing various services, for example, a server for storing an image containing a face and facial region information for indicating a facial region in the image.
- the data storage server 101 may further have the function of facial recognition, and the facial region information may be generated after the data storage server 101 performs facial recognition on the image.
- the image processing server 103 may be a server providing various services, for example, acquiring a to-be-recognized image and facial region information for indicating a facial region in the to-be-recognized image from the data storage server 101 , and performing determination based on the to-be-recognized image and the facial region information to obtain a determination result.
- the method for determining image quality is generally executed by the image processing server 103 . Accordingly, an apparatus for determining image quality is generally installed on the image processing server 103 .
- the system architecture 100 may not include the data storage server 101 .
- FIG. 2 shows a flow 200 of an embodiment of a method for determining image quality according to the present disclosure.
- the flow 200 of the method for determining image quality includes the following steps.
- Step 201 acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image.
- an electronic device (the image processing server 103 shown in FIG. 1 , for example) on which the method for determining image quality is performed may acquire a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image from a data storage server (the data storage server 101 shown in FIG. 1 , for example) connected thereto by means of a wired connection or a wireless connection.
- the electronic device may acquire the to-be-recognized image and the facial region information locally.
- the facial region may be a region of any shape (for example, a circular region or a rectangular region).
- the facial region information may include, for example, coordinates of a center point of the facial region and a radius of the facial region.
- the facial region information may include, for example, coordinates, height and width of at least one vertex of the facial region.
- the to-be-recognized image and the facial region information may be acquired by the electronic device actively or by the electronic device passively (for example, the to-be-recognized image and the facial region information are sent to the electronic device by the data storage server), which is not limited by the present embodiment.
- the electronic device may also acquire the to-be-recognized image and the facial region information from a terminal device connected thereto. It should be noted that the present embodiment does not limit the source of the to-be-recognized image and the facial region information.
- Step 202 extracting a face image from the to-be-recognized image on the basis of the facial region information.
- the electronic device after acquiring the to-be-recognized image and the facial region information, may extract a face image from the to-be-recognized image on the basis of the facial region information.
- the electronic device may extract a facial region indicated by the facial region information in the to-be-recognized image to obtain a face image.
- Step 203 inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set.
- the electronic device after obtaining the face image, may input the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set.
- the convolutional neural network may be used to represent a corresponding relationship between an image containing a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set.
- the category identifier set may include a category identifier indicating a category representing a face part (for example, eye, eyebrow, forehead, chin, nose, or gill).
- the category identifier set may include a category identifier indicating a category representing hair or background.
- a probability output by the convolutional neural network may be a numeric value within the interval of [0, 1].
- the convolutional neural network is a feedforward neural network whose artificial neurons may respond to surrounding units within a part of the coverage range (the receptive field), and it has excellent performance in image processing. Therefore, the convolutional neural network may be used to determine a probability of a pixel belonging to a category indicated by a category identifier in the preset category identifier set.
- the convolutional neural network may be obtained by supervising and training the existing deep convolutional neural network (for example, DenseBox, VGGNet, ResNet, SegNet) using a machine learning method.
- the convolutional neural network may include at least one convolutional layer and at least one deconvolutional layer.
- the convolutional layer may, for example, downsample inputted information.
- the deconvolutional layer may, for example, upsample the inputted information.
- the convolutional neural network may also use various nonlinear activation functions (for example, ReLU (rectified linear units) function, or sigmoid function) to perform nonlinear calculation on the information.
- Step 204 inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image.
- the electronic device may also input the extracted face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image.
- the key face point positioning model may be used for representing a corresponding relationship between an image containing a face and coordinates of each key face point.
- the key face point may be a pre-designated point with strong semantic information (for example, a corner of the eye, a corner of the mouth, a wing of the nose, a point in the contour) in a face.
- the number of key face points may be 72 or another preset value, which is not limited by the present embodiment.
- the key face point positioning model here may be a corresponding relationship table pre-established by a technician on the basis of a large number of statistics and used to represent a corresponding relationship between an image containing a face and coordinates of a key face point.
- the key face point positioning model may also be obtained by training using various existing logistic regression models (LR).
- steps 203 and 204 may be executed in parallel or in series. When the steps 203 and 204 are executed in series, the execution order of the steps 203 and 204 is not limited by the present embodiment.
- the key face point positioning model may be obtained by supervising and training the existing deep convolutional neural network using a machine learning method.
- the key face point positioning model may include, for example, at least one convolutional layer, at least one pooling layer, and at least one fully connected layer (FC).
- the convolutional layer may be used to extract image features (the image features may be various basic elements of an image, such as color, line or texture); the pooling layer may be used to downsample inputted information; and the at least one fully connected layer may include a fully connected layer for outputting coordinates of each key face point.
- the key face point positioning model may also use various nonlinear activation functions (for example, ReLU (rectified linear units) function and sigmoid function) to perform nonlinear calculation on the information.
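- For illustration only, a key face point positioning model of the kind sketched above (convolutional feature extraction, pooling-based downsampling, and a final fully connected layer that outputs the coordinates) might look as follows in PyTorch; the layer widths, the 96×96 input size and the 72-point count are assumptions made for the sketch, not requirements of the disclosure.

```python
import torch
import torch.nn as nn

class KeyPointNet(nn.Module):
    """Sketch of a key face point positioning model: convolutional layers extract
    image features, pooling layers downsample the inputted information, and a
    fully connected layer outputs the (x, y) coordinates of each key face point."""
    def __init__(self, num_points: int = 72):
        super().__init__()
        self.num_points = num_points
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, 256), nn.ReLU(),  # assumes a 96x96 input face image
            nn.Linear(256, num_points * 2),            # one (x, y) pair per key face point
        )

    def forward(self, x):
        return self.regressor(self.features(x)).view(-1, self.num_points, 2)

# Hypothetical usage: coordinates of 72 key face points for one face image.
coords = KeyPointNet()(torch.randn(1, 3, 96, 96))  # shape: (1, 72, 2)
```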
- Step 205 determining a probability of the face image being obscured on the basis of the probabilities and the coordinates.
- the electronic device after executing the step 203 and the step 204 , may determine a probability of the face image being obscured on the basis of the probabilities as obtained in the step 203 and the coordinates as obtained in the step 204 .
- the electronic device may first obtain semantic information of each key face point included in the face image on the basis of the position of each key face point in the face image.
- the electronic device may then query a probability of a key face point belonging to a category indicated by a category identifier in the category identifier set on the basis of the probabilities obtained in step 203 (the probability of the key face point may be a probability determined in step 203 and corresponding to a pixel at the same position as the key face point), and a category identifier corresponding to a maximum probability may be used as a category identifier corresponding to the key face point.
- the electronic device may find a key face point having semantic information not matching the category indicated by the corresponding category identifier from the key face points included in the face image (for example, the semantic information indicates canthus, and the category indicated by the corresponding category identifier indicates gill, so the semantic information does not match the category since the canthus does not belong to gills), and classify the key face point into a key face point group.
- the electronic device may determine the ratio of the total number of key face points included in the key face point group to the total number of key face points included in the face image as the probability of the face image being obscured.
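- As a non-authoritative sketch of this ratio-based calculation, assume `pixel_probs` is the H×W×C array of per-pixel category probabilities from step 203 and each key face point carries the category index implied by its semantic information; the names and shapes below are assumptions.

```python
import numpy as np

def obscured_prob_from_key_points(pixel_probs: np.ndarray, key_points) -> float:
    """pixel_probs: (H, W, C) per-pixel probabilities over the category identifier set.
    key_points: iterable of (x, y, expected_category) where expected_category is the
    category index implied by the key face point's semantic information.
    A key point mismatches when the category with the maximum probability at its
    pixel differs from its semantic category; the ratio of mismatching key points
    to all key points is used as the probability of the face image being obscured."""
    points = list(key_points)
    if not points:
        return 0.0
    mismatched = sum(
        int(np.argmax(pixel_probs[y, x]) != expected) for x, y, expected in points
    )
    return mismatched / len(points)
```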
- the electronic device may input the probabilities of each pixel included in the face image belonging to the category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the face image into a preset probability calculation model to obtain the probability of the face image being obscured.
- the probability calculation model may be used to represent a corresponding relationship between inputted information and a probability of a face image being obscured.
- the inputted information may include: probabilities of each pixel included in an image containing a face belonging to a category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the image.
- the probability calculation model may be a corresponding relationship table pre-established by a technician on the basis of a large number of statistics and used to represent a corresponding relationship between inputted information and the probability of the face image being obscured.
- the probability calculation model may also be a calculation formula pre-established by a technician on the basis of a large number of statistics and used to calculate a probability of an image containing a face being obscured on the basis of the probabilities of each pixel included in the image containing the face belonging to a category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the image.
- Step 206 determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- the electronic device after determining the probability of the face image being obscured, may determine whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured. As an example, the electronic device may compare the probability of the face image being obscured with a preset probability threshold, and if the probability of the face image being obscured is not less than the probability threshold, the electronic device may determine that the quality of the face image is not up to standard.
- the probability threshold may be, for example, a numeric value of 0.5 or the like, and the probability threshold may be modified according to actual needs, which is not limited in the embodiment.
- the electronic device may determine that the quality of the face image is up to standard.
- the electronic device may also output the to-be-recognized image.
- the to-be-recognized image may, for example, be output to a system that performs facial recognition on the to-be-recognized image.
- the electronic device may also generate a determination result.
- the determination result may include, for example, an identifier for indicating whether the quality of the face image is up to standard.
- the to-be-recognized image may have an image identifier, and the determination result may also include at least one of: the image identifier, the position of the face image in the to-be-recognized image, or the probability of the face image being obscured.
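- For instance, such a determination result could be represented as a simple record like the one below; all field names and values are illustrative assumptions, not formats prescribed by the disclosure.

```python
# Hypothetical determination result for one to-be-recognized image.
determination_result = {
    "image_id": "img_000123",            # image identifier of the to-be-recognized image
    "quality_up_to_standard": False,      # identifier indicating whether the quality is up to standard
    "face_region": (120, 80, 220, 260),   # position of the face image in the to-be-recognized image
    "obscured_probability": 0.72,         # probability of the face image being obscured
}
```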
- FIG. 3 shows a schematic diagram of an application scenario of a method for determining image quality according to some embodiments.
- an image processing server 301 may acquire a to-be-recognized image 303 and facial region information 304 obtained by performing facial recognition on the to-be-recognized image 303 in advance and used for indicating a facial region in the to-be-recognized image 303 from a data storage server 302 connected thereto.
- the image processing server 301 may then extract a facial region indicated by the facial region information 304 in the to-be-recognized image 303 to obtain a face image 305 .
- the image processing server 301 may then input the face image 305 into a pre-trained convolutional neural network and a key face point positioning model simultaneously to obtain probabilities 306 , output by the convolutional neural network, of each pixel included in the face image 305 belonging to a category indicated by each category identifier in a preset category identifier set and obtain coordinates 307 , output by the key face point positioning model, of each key face point.
- the image processing server 301 may then determine a probability 308 of the face image 305 being obscured on the basis of the probabilities 306 and the coordinates 307 .
- the image processing server 301 may compare the probability 308 of the image being obscured with a probability threshold to obtain a determination result 309 as to whether the quality of the face image 305 is up to standard.
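- The scenario above can be summarized by a small, purely illustrative driver function; the callables it receives (the face extractor, the two models and the probability combiner) correspond to the components described in this disclosure but are assumed to exist with these signatures.

```python
def determine_image_quality(image, face_region_info, extract_face_image,
                            cnn, keypoint_model, combine_probability,
                            prob_threshold: float = 0.5) -> bool:
    """Mirrors FIG. 3: extract the face image (305), obtain per-pixel category
    probabilities (306) and key face point coordinates (307), combine them into
    a probability of the face image being obscured (308), and compare it with a
    probability threshold to obtain the determination result (309)."""
    face_image = extract_face_image(image, face_region_info)
    pixel_probs = cnn(face_image)
    key_points = keypoint_model(face_image)
    obscured_prob = combine_probability(pixel_probs, key_points)
    return obscured_prob < prob_threshold  # True means the quality is up to standard
```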
- the method provided by some embodiments of the present disclosure makes full use of the extraction for face images, shortens the determination range and improves the image quality determination efficiency.
- probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set are determined and coordinates of each key face point included in the face image are determined, so that a probability of the face image being obscured may be determined on the basis of the probabilities and the coordinates, thereby improving the accuracy of the probability of the face image being obscured and hence the accuracy of the image quality determination result.
- FIG. 4 shows a flow 400 of another embodiment of a method for determining image quality.
- the flow 400 of the method for determining image quality includes the following steps.
- Step 401 acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image.
- an electronic device (the image processing server 103 shown in FIG. 1 , for example) on which the method for determining image quality is performed may acquire a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image from a data storage server (the data storage server 101 shown in FIG. 1 , for example) connected thereto by means of a wired connection or a wireless connection.
- the electronic device may acquire the to-be-recognized image and the facial region information locally.
- the facial region may be a rectangular region.
- Step 402 enlarging the range of the facial region indicated by the facial region information to obtain a first facial region, and extracting the first facial region to obtain the face image.
- the electronic device after obtaining the to-be-recognized image and the facial region information, may enlarge the range of the facial region indicated by the facial region information to obtain a first facial region.
- the electronic device may extract the first facial region to obtain the face image.
- the electronic device may increase the height and width of the facial region indicated by the facial region information by a preset multiplier factor or increase the height and width of the facial region by a preset numeric value, and take the enlarged facial region as a first facial region.
- the preset multiplier factor here may be, for example, a numeric value such as 1.
- the height and the width may correspond to the same preset numeric value or to different preset numeric values. For example, the preset numeric value corresponding to the height may be the same numeric value as the height, and the preset numeric value corresponding to the width may be the same numeric value as the width.
- the preset multiplier factor and the preset numeric value may be modified according to the actual needs, which is not limited by the present embodiment.
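- A minimal sketch of this enlargement and extraction, assuming the facial region information is an (x, y, width, height) rectangle; keeping the enlarged region centred on the original one and clamping it to the image bounds are assumptions made for the sketch.

```python
import numpy as np

def enlarge_and_extract(image: np.ndarray, region, factor: float = 1.0, delta: int = 0) -> np.ndarray:
    """image: (H, W, C) array of the to-be-recognized image.
    region: (x, y, w, h) rectangle indicated by the facial region information.
    The width and height are increased either by `factor` times themselves
    (the preset multiplier factor) or, when factor is 0, by the fixed `delta`
    (the preset numeric value); the enlarged first facial region is then cropped."""
    x, y, w, h = region
    grow_w = int(w * factor) if factor else delta
    grow_h = int(h * factor) if factor else delta
    x0 = max(0, x - grow_w // 2)
    y0 = max(0, y - grow_h // 2)
    x1 = min(image.shape[1], x + w + grow_w - grow_w // 2)
    y1 = min(image.shape[0], y + h + grow_h - grow_h // 2)
    return image[y0:y1, x0:x1]
```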
- Step 403 inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set.
- the electronic device after obtaining the face image, may input the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set.
- the convolutional neural network may be used to represent a corresponding relationship between an image including a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set.
- the category identifier set may include a category identifier indicating a category representing a face part (for example, eye, eyebrow, forehead, chin, nose, or gill).
- the category identifier set may include a category identifier indicating a category representing hair or background.
- the convolutional neural network here may include, for example, five convolutional layers and five deconvolutional layers.
- the convolutional layer may be used for downsampling inputted information with a preset window sliding step.
- the deconvolutional layer may be used for upsampling the inputted information with a preset amplification factor.
- the window sliding step may be 2, and the amplification factor may be 2.
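- Purely as an illustrative sketch, a network with five convolutional layers using a window sliding step (stride) of 2 and five deconvolutional layers using an amplification factor of 2 might be written as follows in PyTorch; the channel widths and the size of the category identifier set are assumptions.

```python
import torch
import torch.nn as nn

class FacePartSegNet(nn.Module):
    """Five convolutional layers downsample the inputted information with a
    sliding step of 2; five deconvolutional layers upsample it again with an
    amplification factor of 2; a 1x1 convolution then produces one score per
    category identifier for every pixel."""
    def __init__(self, num_categories: int = 8):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 256]
        self.encoder = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1), nn.ReLU())
            for i in range(5)
        ])
        self.decoder = nn.Sequential(*[
            nn.Sequential(nn.ConvTranspose2d(chans[5 - i], chans[4 - i], 4, stride=2, padding=1), nn.ReLU())
            for i in range(5)
        ])
        self.classifier = nn.Conv2d(chans[0], num_categories, kernel_size=1)

    def forward(self, x):
        # Returns per-pixel scores; softmax over dim 1 yields the probabilities of
        # each pixel belonging to the category indicated by each category identifier.
        return self.classifier(self.decoder(self.encoder(x)))

# Hypothetical usage with a 224x224 face image (any size divisible by 32 works here).
probs = torch.softmax(FacePartSegNet()(torch.randn(1, 3, 224, 224)), dim=1)  # (1, 8, 224, 224)
```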
- the convolutional neural network may be obtained by training through the following training steps.
- the electronic device may extract a preset training sample including a sample image displaying a face and an annotation of the sample image.
- the annotation may include a data flag for representing whether a pixel in the sample image belongs to the category indicated by each category identifier in the category identifier set.
- the number of data flags corresponding to each pixel is the same as the number of category identifiers in the category identifier set.
- the data flags may include 0 and 1. 0 may represent “not belonging to”, and 1 may represent “belonging to”. As an example, if a data flag associated with a pixel and a category identifier is 0, it may be represented that the pixel does not belong to a category indicated by the category identifier.
- the annotation may be expressed with a matrix.
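- For example, the annotation for a tiny hypothetical sample image could be expressed as a matrix of data flags like this; a 2×2-pixel image and a three-entry category identifier set are assumed for brevity.

```python
import numpy as np

# annotation[i, j, k] is the data flag stating whether pixel (i, j) of the sample
# image belongs to the category indicated by the k-th category identifier:
# 1 represents "belonging to" and 0 represents "not belonging to".
annotation = np.array([
    [[1, 0, 0], [0, 1, 0]],
    [[0, 0, 1], [0, 1, 0]],
], dtype=np.uint8)

# One data flag per pixel per category identifier in the category identifier set.
assert annotation.shape == (2, 2, 3)
```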
- the electronic device may train using a machine learning method on the basis of the sample image, the annotation, a preset classification loss function and a back propagation algorithm to obtain a convolutional neural network.
- the classification loss function may be used for representing the degree of difference between a probability output by the convolutional neural network and the data flag included in the annotation.
- the classification loss function may be various loss functions for classification (for example, a hinge loss function or a softmax loss function).
- the classification loss function may constrain the way and direction in which a convolution kernel is modified. The goal of the training is to minimize the value of the classification loss function. Therefore, the parameters of the convolutional neural network obtained by training are the parameters corresponding to the minimum value of the classification loss function.
- the back propagation algorithm may also be called the error back propagation (BP) algorithm.
- the BP algorithm consists of two learning processes: the forward propagation of a signal and the backward propagation of an error.
- in a feedforward network, an input signal is inputted through the input layer and is outputted through the output layer after hidden-layer calculation. The output value is compared with a mark value; if there is an error, the error is propagated from the output layer back to the input layer in the reverse direction, and in this process a gradient descent algorithm (for example, a stochastic gradient descent algorithm) may be used to adjust a neuron weight (for example, parameters of a convolution kernel in a convolutional layer).
- the classification loss function here may be used to represent the error between the output value and the mark value.
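- A minimal training sketch under these assumptions: the model is the segmentation network sketched earlier (or any per-pixel classifier), the data loader yields sample images with per-pixel category labels derived from the annotation, the classification loss is softmax cross-entropy (one of the options mentioned), and stochastic gradient descent performs the weight adjustment during back propagation.

```python
import torch
import torch.nn as nn

def train_segmentation_net(model, data_loader, epochs: int = 10, lr: float = 0.01):
    """Supervised training sketch: forward propagation of the signal, a
    classification loss measuring the degree of difference between the network
    output and the annotation, and backward propagation of the error with
    (stochastic) gradient descent to adjust the neuron weights."""
    criterion = nn.CrossEntropyLoss()                        # classification loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)   # gradient descent
    for _ in range(epochs):
        for images, labels in data_loader:   # labels: (batch, H, W) category indices
            optimizer.zero_grad()
            logits = model(images)           # forward propagation
            loss = criterion(logits, labels) # compare output with the mark (label) value
            loss.backward()                  # backward propagation of the error
            optimizer.step()                 # adjust weights to minimize the loss
    return model
```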
- Step 404 inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image.
- the electronic device may also input the obtained face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image.
- the key face point positioning model may be used to represent a corresponding relationship between an image containing a face and the coordinates of each key face point.
- the key face point positioning model here may be a corresponding relationship table pre-established by a technician on the basis of a large number of statistics and used to represent a corresponding relationship between an image containing a face and coordinates of a key face point.
- the key face point positioning model may also be obtained by training using various existing logistic regression models (LR), or obtained by supervising and training the existing deep convolutional neural network using a machine learning method and a training sample.
- steps 403 and 404 may be executed in parallel or in series. When the steps 403 and 404 are executed in series, the execution order of the steps 403 and 404 is not limited by the present embodiment.
- Step 405 determining a face part region set on the basis of the coordinates of the each key face point.
- the electronic device after obtaining the coordinates of each key face point included in the face image, may determine a face part region set on the basis of the coordinates.
- Different face part regions may include different parts of the face, such as eyes, mouth, nose, forehead and chin.
- the eyes may also be divided into the left eye and the right eye.
- the electronic device here may determine semantic information of each key face point on the basis of the position of each key face point in the face image, such as semantic information representing canthus, corners of the mouth and nose wings.
- the electronic device may determine at least one closed region for representing a part of a face on the basis of the semantic information, and determine a face part region set based on each determined closed region.
- the electronic device may find each key face point with a position on the left side of the face and having semantic information related to the eyes (for example, the semantic information indicates canthus, upper eye edge and lower eye edge).
- the electronic device may use the biggest closed region formed by these key face points as a face part region for representing the left eye.
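- One possible way to realize such a closed region, sketched with OpenCV (an implementation choice, not part of the disclosure), is to rasterize the convex hull of the key face points that share the same semantic part.

```python
import numpy as np
import cv2

def face_part_region_mask(points_xy: np.ndarray, image_shape) -> np.ndarray:
    """points_xy: (N, 2) array of coordinates of the key face points belonging to
    one face part (for example, all left-eye points). Returns a boolean mask of
    the biggest closed region they form, used as the face part region."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    hull = cv2.convexHull(points_xy.astype(np.int32))  # closed region spanned by the points
    cv2.fillConvexPoly(mask, hull, 1)                  # rasterize the region into the mask
    return mask.astype(bool)
```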
- Step 406 determining, for each pixel included in the face image, a category indicated by a category identifier corresponding to a maximum probability corresponding to the pixel as the category the pixel is attributed to.
- the electronic device may determine, for each pixel included in the face image, a category indicated by a category identifier corresponding to a maximum probability determined in step 403 and corresponding to the pixel as the category the pixel is attributed to.
- as an example, suppose a category identifier set includes category identifiers A, B and C, and the probabilities of a pixel P belonging to the categories indicated by the category identifiers A, B and C are 0.6, 0.7 and 0.8 respectively; the electronic device may then determine the category indicated by the category identifier C as the category the pixel P is attributed to.
- Step 407 calculating, for each face part region, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to.
- the electronic device may calculate, for each face part region in the face part region set, a probability of the face part region being obscured on the basis of the category each pixel included in the face part region is attributed to. As an example, the electronic device may determine a number of pixels in the face part region attributed to a category which does not match a face part represented by the face part region, and determine the ratio of the number to a total number of pixels included in the face part region as the probability of the face part region being obscured.
- a category indicated by each category identifier in the category identifier set may have a category name.
- the electronic device may establish a corresponding relationship between the category name and a part name in advance. Furthermore, the electronic device may assign a corresponding part name to each of the determined face part regions. The electronic device may determine, through the corresponding relationship, whether the category a pixel is attributed to matches the face part represented by the face part region.
- the electronic device may also determine the probability of a face part region in the face part region set being obscured by: determining, for each face part region, a target pixel group including a target pixel, in the face part region, attributed to a category which does not match a face part represented by the face part region, summing the probabilities of target pixels in the target pixel group belonging to categories that match the face part to obtain a first value, summing the probabilities of pixels in the face part region belonging to categories that match the face part to obtain a second value, and determining the ratio of the first value to the second value as the probability of the face part region being obscured.
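- Both calculation variants can be sketched as follows, assuming `pred_cats` holds the per-pixel categories from step 406, `probs` the per-pixel probability map from step 403, `mask` the boolean face part region, and `match_cat` the category index that matches the face part; all names are assumptions.

```python
import numpy as np

def region_obscured_by_count(pred_cats: np.ndarray, mask: np.ndarray, match_cat: int) -> float:
    """Variant 1: ratio of the number of pixels attributed to a non-matching
    category to the total number of pixels in the face part region."""
    region_cats = pred_cats[mask]
    return float(np.mean(region_cats != match_cat)) if region_cats.size else 0.0

def region_obscured_by_prob(pred_cats: np.ndarray, probs: np.ndarray,
                            mask: np.ndarray, match_cat: int) -> float:
    """Variant 2: sum of the matching-category probabilities over the target
    pixel group (pixels attributed to a non-matching category), divided by the
    same sum over all pixels in the face part region."""
    match_probs = probs[..., match_cat]            # probability of the matching category per pixel
    target = mask & (pred_cats != match_cat)       # target pixel group
    second_value = match_probs[mask].sum()
    return float(match_probs[target].sum() / second_value) if second_value > 0 else 0.0
```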
- Step 408 determining a probability of the face image being obscured on the basis of probabilities of each face part region in the face part region set being obscured.
- the electronic device may determine the probability of the face image being obscured on the basis of the probabilities. As an example, the electronic device may determine a probability of the face image being obscured on the basis of the average of probabilities of face part regions in the face part region set being obscured.
- the electronic device may also acquire a preset weight for representing an importance level of a face part.
- the electronic device may weight and sum the probabilities of the face part regions in the face part region set being obscured on the basis of the weight to obtain a numeric value, and define the numeric value as the probability of the face image being obscured.
- as an example, suppose the face part region set includes face part regions A, B and C representing the left eye, the right eye and the mouth in sequence, the probabilities of the face part regions A, B and C being obscured are 0.3, 0.3 and 0.6 respectively, and the weights corresponding to the left eye, the right eye and the mouth are 0.4, 0.4 and 0.2. The electronic device may multiply the probability 0.3 of the face part region A being obscured by the weight 0.4 to obtain a product of 0.12, multiply 0.3 by 0.4 to obtain 0.12 for the face part region B, and multiply 0.6 by 0.2 to obtain 0.12 for the face part region C; the electronic device may then sum the three products to obtain 0.36 and define 0.36 as the probability of the face image being obscured.
- the sum of the weights obtained by the electronic device may be 1.
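- The weighted combination in the example above can be written directly; the part names are illustrative, and the weights are assumed to sum to 1 as stated.

```python
def weighted_obscured_probability(region_probs: dict, weights: dict) -> float:
    """Weight each face part region's obscured probability by the importance
    level of the corresponding face part and sum the products."""
    return sum(region_probs[part] * weights[part] for part in region_probs)

# Worked example from the description: (0.3*0.4) + (0.3*0.4) + (0.6*0.2) = 0.36.
region_probs = {"left_eye": 0.3, "right_eye": 0.3, "mouth": 0.6}
weights = {"left_eye": 0.4, "right_eye": 0.4, "mouth": 0.2}
assert abs(weighted_obscured_probability(region_probs, weights) - 0.36) < 1e-9
```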
- Step 409 determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- the electronic device may determine whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured. As an example, the electronic device may compare the probability of the face image being obscured with a preset probability threshold, and if the probability of the face image being obscured is not less than the probability threshold, the electronic device may determine that the quality of the face image is not up to standard. Otherwise, the electronic device may determine that the quality of the face image is up to standard.
- the probability threshold may be, for example, a numeric value of 0.5 or the like, and the probability threshold may be modified according to the actual needs, which is not limited by the present embodiment.
- the flow 400 of the method for determining image quality in some embodiments highlights the step (that is, step 402 ) of enlarging the range of a facial region in a to-be-recognized image and the steps (that is, steps 405 to 408 ) of determining the probability of the face image being obscured.
- the solution described in some embodiments may enlarge the coverage area of a face image by enlarging the range thereof, so that the face image includes as many key face points as possible. Determining the probability of the face image being obscured by means of the steps 405 to 408 may improve the accuracy of the probability.
- the present disclosure provides an embodiment of an apparatus for determining image quality.
- the apparatus embodiment corresponds to the method embodiment as shown in FIG. 2 , and the apparatus may be specifically applied to various electronic devices.
- an apparatus 500 for determining image quality includes: an acquisition unit 501 , an extraction unit 502 , a first input unit 503 , a second input unit 504 , a first determination unit 505 and a second determination unit 506 .
- the acquisition unit 501 is configured for acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image;
- the extraction unit 502 is configured for extracting a face image from the to-be-recognized image on the basis of the facial region information;
- the first input unit 503 is configured for inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel included in the face image belonging to a category indicated by each category identifier in a preset category identifier set, the convolutional neural network being used to represent a corresponding relationship between an image containing a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set, and the category identifier set having a category identifier indicating a category representing a face part;
- the second input unit 504 is configured for inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point included in the face image, the key face point positioning model being used to represent a corresponding relationship between an image containing a face and coordinates of key face points included in the image; the first determination unit 505 is configured for determining a probability of the face image being obscured on the basis of the obtained probabilities and coordinates; and the second determination unit 506 is configured for determining whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
- for the specific processing by the acquisition unit 501, the extraction unit 502, the first input unit 503, the second input unit 504, the first determination unit 505 and the second determination unit 506 in the apparatus 500 for determining image quality, and the technical effects brought thereby, reference may be made to steps 201, 202, 203, 204, 205 and 206 in the corresponding embodiment of FIG. 2 respectively, which are not repeated here.
- the convolutional neural network may be trained by: extracting a preset training sample including a sample image displaying a face and an annotation of the sample image, wherein the annotation may include a data flag for representing whether a pixel in the sample image belongs to a category indicated by a category identifier in the category identifier set; and training using a machine learning method on the basis of the sample image, the annotation, a preset classification loss function and a back propagation algorithm to obtain the convolutional neural network, wherein the classification loss function may be used for representing a degree of difference between a probability output by the convolutional neural network and the data flag included in the annotation.
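- A hedged sketch of such a training loop, assuming a PyTorch-style setup with a per-pixel cross-entropy loss standing in for the classification loss function; the model, tensor shapes and hyperparameters are illustrative assumptions rather than the disclosed training configuration:

```python
import torch
import torch.nn as nn

def train_segmentation_cnn(model, images, labels, num_epochs=10, lr=1e-3):
    """images: (N, 3, H, W) float tensor of sample images containing faces;
    labels: (N, H, W) long tensor of per-pixel category identifiers, playing
    the role of the annotation's data flags."""
    criterion = nn.CrossEntropyLoss()                      # classification loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(num_epochs):
        optimizer.zero_grad()
        logits = model(images)                             # (N, C, H, W) per-pixel scores
        loss = criterion(logits, labels)                   # difference from the annotations
        loss.backward()                                    # back propagation
        optimizer.step()
    return model
```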
- the first determination unit 505 may include: a first determination subunit (not shown), configured for determining a face part region set on the basis of the coordinates; a second determination subunit (not shown), configured for determining, for each pixel included in the face image, a category indicated by a category identifier corresponding to a maximum probability corresponding to the pixel as a category to which the pixel is attributed; a calculation subunit (not shown), configured for calculating, for each face part region, a probability of the face part region being obscured on the basis of the category to which each pixel included in the face part region is attributed; and a third determination subunit (not shown), configured for determining the probability of the face image being obscured on the basis of the probabilities of the face part regions in the face part region set being obscured.
- the convolutional neural network may include five convolutional layers and five deconvolutional layers, wherein the convolutional layers may be used for downsampling inputted information with a preset window sliding step, and the deconvolutional layers may be used for upsampling the inputted information with a preset amplification factor.
- the window sliding step is 2, and the amplification factor is 2.
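- One way such a network could be laid out is sketched below in PyTorch: five stride-2 convolutions halve the spatial resolution at each step, five transposed convolutions each upsample by a factor of 2 so the output map matches the input size, and a final 1x1 convolution yields one score per category identifier for every pixel. The channel widths and kernel sizes are assumptions for illustration only:

```python
import torch.nn as nn

def build_segmentation_cnn(num_categories):
    enc = [3, 32, 64, 128, 256, 256]    # five convolutional layers, window sliding step of 2
    dec = [256, 256, 128, 64, 32, 32]   # five deconvolutional layers, amplification factor of 2
    layers = []
    for c_in, c_out in zip(enc[:-1], enc[1:]):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                   nn.ReLU(inplace=True)]
    for c_in, c_out in zip(dec[:-1], dec[1:]):
        layers += [nn.ConvTranspose2d(c_in, c_out, kernel_size=2, stride=2),
                   nn.ReLU(inplace=True)]
    # One score per category identifier for each pixel; a softmax over the
    # channel dimension would turn these scores into probabilities.
    layers.append(nn.Conv2d(dec[-1], num_categories, kernel_size=1))
    return nn.Sequential(*layers)
```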
- the first determination unit 505 may further include: an input subunit (not shown), configured for inputting the probabilities of each pixel included in the face image belonging to the category indicated by each category identifier in the category identifier set and the coordinates of each key face point included in the face image into a preset probability calculation model to obtain the probability of the face image being obscured.
- the probability calculation model may be used to represent a corresponding relationship between inputted information and a probability of a face image being obscured.
- the inputted information may include: probabilities of each pixel included in an image containing a face belonging to a category indicated by each category identifier in the category identifier set and coordinates of each key face point included in the image.
- the calculation subunit may be further configured for: determining, for each face part region, a number of pixels in the face part region attributed to a category not matching a face part represented by the face part region, and determining a ratio of the number to a total number of pixels included in the face part region as the probability of the face part region being obscured.
- the calculation subunit may also be further configured for: determining, for each face part region, a target pixel group including target pixels, in the face part region, attributed to a category not matching a face part represented by the face part region, summing probabilities of each target pixel in the determined target pixel group belonging to a category matching the face part to obtain a first value, summing probabilities of each pixel in the face part region belonging to the category matching the face part to obtain a second value, and determining a ratio of the first value to the second value as the probability of the face part region being obscured.
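- The two calculation variants above may be sketched with NumPy as follows, assuming probs is an (H, W, C) per-pixel probability map output by the network, region_mask is a boolean mask selecting the pixels of one face part region, and part_category is the index of the category matching the face part that the region represents (all names are illustrative):

```python
import numpy as np

def obscured_by_pixel_count(probs, region_mask, part_category):
    # Ratio of pixels attributed (via the maximum probability) to a category
    # that does not match the face part, over all pixels in the region.
    categories = probs.argmax(axis=-1)
    region_categories = categories[region_mask]
    return float(np.mean(region_categories != part_category))

def obscured_by_probability_ratio(probs, region_mask, part_category):
    # First value: over the target pixels (those attributed to a non-matching
    # category), sum their probability of belonging to the matching category;
    # second value: the same sum over every pixel in the region.
    categories = probs.argmax(axis=-1)
    match_prob = probs[..., part_category]
    target_mask = region_mask & (categories != part_category)
    first_value = match_prob[target_mask].sum()
    second_value = match_prob[region_mask].sum()
    return float(first_value / second_value) if second_value > 0 else 0.0
```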
- the third determination subunit may be further configured for: determining the probability of the face image being obscured on the basis of an average of the probabilities of the face part regions in the face part region set being obscured.
- the third determination subunit may also be further configured for: acquiring a preset weight for representing an importance level of a face part; and weighting and summing the probabilities of the face part regions in the face part region set being obscured on the basis of the weight to obtain a numeric value, and defining the numeric value as the probability of the face image being obscured.
- the extraction unit 502 may include: an enlarging subunit (not shown), configured for enlarging a range of the facial region indicated by the facial region information to obtain a first facial region; and an extracting subunit (not shown), configured for extracting the first facial region to obtain the face image.
- the facial region may be a rectangular region; and the enlarging subunit may be further configured for: increasing a height and width of the facial region indicated by the facial region information by a preset multiplier factor, or increasing the height and width of the facial region by a preset numeric value.
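- A minimal sketch of enlarging a rectangular facial region, assuming an (x, y, width, height) box and a multiplier factor; the factor of 1.5, the centre-anchored growth and the clamping to the image bounds are illustrative assumptions:

```python
def enlarge_region(x, y, w, h, image_w, image_h, factor=1.5):
    # Grow the width and height by the multiplier factor around the box
    # centre, then clamp the result so it stays inside the image.
    new_w, new_h = w * factor, h * factor
    cx, cy = x + w / 2.0, y + h / 2.0
    new_x = max(0.0, cx - new_w / 2.0)
    new_y = max(0.0, cy - new_h / 2.0)
    new_w = min(new_w, image_w - new_x)
    new_h = min(new_h, image_h - new_y)
    return new_x, new_y, new_w, new_h

print(enlarge_region(100, 120, 80, 80, 640, 480))  # (80.0, 100.0, 120.0, 120.0)
```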
- the second determination unit 506 may be further configured for: determining whether the probability of the face image being obscured is less than a probability threshold, and if the probability of the face image being obscured is less than the probability threshold, determining the quality of the face image being up to standard.
- Referring to FIG. 6, a structural schematic diagram of a computer system 600 adapted to implement a server of some embodiments of the present disclosure is shown.
- the server shown in FIG. 6 is merely an example, and should not bring any limitations to the functions and the scope of use of the embodiments of the present disclosure.
- the computer system 600 includes a central processing unit (CPU) 601 , which may execute various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded into a random access memory (RAM) 603 from a storage portion 608 .
- the RAM 603 also stores various programs and data required by operations of the system 600 .
- the CPU 601 , the ROM 602 and the RAM 603 are connected to each other through a bus 604 .
- An input/output (I/O) interface 605 is also connected to the bus 604 .
- the following components are connected to the I/O interface 605 : an input portion 606 including a keyboard, a mouse etc.; an output portion 607 comprising a cathode ray tube (CRT), a liquid crystal display device (LCD), a speaker etc.; a storage portion 608 including a hard disk and the like; and a communication portion 609 comprising a network interface card, such as a LAN card and a modem.
- the communication portion 609 performs communication processes via a network, such as the Internet.
- a driver 610 is also connected to the I/O interface 605 as required.
- a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, and a semiconductor memory, may be installed on the driver 610 , to facilitate the retrieval of a computer program from the removable medium 611 , and the installation thereof on the storage portion 608 as needed.
- an embodiment of the present disclosure includes a computer program product, which comprises a computer program that is tangibly embedded in a machine-readable medium.
- the computer program comprises program codes for executing the method as illustrated in the flow chart.
- the computer program may be downloaded and installed from a network via the communication portion 609, and/or may be installed from the removable medium 611.
- the computer program when executed by the central processing unit (CPU) 601 , implements the above mentioned functionalities as defined by the methods of the present disclosure.
- the computer readable medium in the present disclosure may be a computer readable signal medium, a computer readable storage medium, or any combination of the two.
- An example of the computer readable storage medium may include, but is not limited to: electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses or elements, or a combination of any of the above.
- a more specific example of the computer readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fibre, a portable compact disk read only memory (CD-ROM), an optical memory, a magnetic memory, or any suitable combination of the above.
- the computer readable storage medium may be any physical medium containing or storing programs which can be used by a command execution system, apparatus or element or incorporated thereto.
- the computer readable signal medium may include a data signal in the baseband or propagated as a part of a carrier wave, in which computer readable program codes are carried.
- the propagated signal may take various forms, including but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the above.
- the computer readable signal medium may be any computer readable medium other than the computer readable storage medium.
- the computer readable signal medium is capable of transmitting, propagating or transferring programs for use by, or in combination with, a command execution system, apparatus or element.
- the program codes contained on the computer readable medium may be transmitted with any suitable medium including but not limited to: wireless, wired, optical cable, RF medium etc., or any suitable combination of the above.
- each of the blocks in the flow charts or block diagrams may represent a module, a program segment, or a code portion, said module, program segment, or code portion comprising one or more executable instructions for implementing specified logic functions.
- the functions denoted by the blocks may occur in a sequence different from the sequences shown in the figures. For example, any two blocks presented in succession may in fact be executed substantially in parallel, or they may sometimes be executed in a reverse sequence, depending on the function involved.
- each block in the block diagrams and/or flow charts, as well as a combination of blocks, may be implemented using a dedicated hardware-based system executing specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments of the present disclosure may be implemented by means of software or hardware.
- the described units may also be provided in a processor, for example, described as: a processor, comprising an acquisition unit, an extraction unit, a first input unit, a second input unit, a first determination unit, and a second determination unit, where the names of these units do not in some cases constitute a limitation to such units themselves.
- the acquisition unit may also be described as “a unit for acquiring a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image.”
- some embodiments of the present disclosure further provide a computer-readable storage medium.
- the computer-readable storage medium may be the computer storage medium included in the electronic device in the above described embodiments, or a stand-alone computer-readable storage medium not assembled into the electronic device.
- the computer-readable storage medium stores one or more programs.
- the one or more programs, when executed by an electronic device, cause the electronic device to: acquire a to-be-recognized image and facial region information obtained by performing facial recognition on the to-be-recognized image in advance and used for indicating a facial region in the to-be-recognized image; extract a face image from the to-be-recognized image on the basis of the facial region information; input the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel comprised in the face image belonging to a category indicated by each category identifier in a preset category identifier set, the convolutional neural network being used to represent a corresponding relationship between an image containing a face and a probability of a pixel belonging to a category indicated by a category identifier in the category identifier set, and the category identifier set having a category identifier indicating a category representing a face part; input the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point comprised in the face image; determine a probability of the face image being obscured on the basis of the obtained probabilities and coordinates; and determine whether the quality of the face image is up to standard on the basis of the probability of the face image being obscured.
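- Read as a whole, the stored program mirrors the method embodiment; a compact, hedged sketch of that sequence is given below, with the pre-trained convolutional neural network, the key face point positioning model and the probability-combination step passed in as callables because their concrete implementations are described elsewhere in this disclosure:

```python
def assess_face_image_quality(face_image, cnn, landmark_model, combine, threshold=0.5):
    """face_image: array extracted from the to-be-recognized image on the
    basis of the facial region information (possibly after enlargement)."""
    pixel_probs = cnn(face_image)           # per-pixel, per-category probabilities
    keypoints = landmark_model(face_image)  # coordinates of each key face point
    obscured_prob = combine(pixel_probs, keypoints)
    return obscured_prob < threshold        # True means quality is up to standard
```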
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710908705.7A CN107679490B (zh) | 2017-09-29 | 2017-09-29 | Method and apparatus for detecting image quality |
CN201710908705.7 | 2017-09-29 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190102603A1 US20190102603A1 (en) | 2019-04-04 |
US11487995B2 true US11487995B2 (en) | 2022-11-01 |
Family
ID=61138595
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/050,346 Active 2041-09-02 US11487995B2 (en) | 2017-09-29 | 2018-07-31 | Method and apparatus for determining image quality |
Country Status (2)
Country | Link |
---|---|
US (1) | US11487995B2 (zh) |
CN (1) | CN107679490B (zh) |
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9928410B2 (en) * | 2014-11-24 | 2018-03-27 | Samsung Electronics Co., Ltd. | Method and apparatus for recognizing object, and method and apparatus for training recognizer |
US20160358321A1 (en) * | 2015-06-05 | 2016-12-08 | Sony Corporation | Full reference image quality assessment based on convolutional neural network |
US20180129919A1 (en) * | 2015-07-08 | 2018-05-10 | Beijing Sensetime Technology Development Co., Ltd | Apparatuses and methods for semantic image labeling |
US10410330B2 (en) * | 2015-11-12 | 2019-09-10 | University Of Virginia Patent Foundation | System and method for comparison-based image quality assessment |
US20170147905A1 (en) * | 2015-11-25 | 2017-05-25 | Baidu Usa Llc | Systems and methods for end-to-end object detection |
US20170177979A1 (en) * | 2015-12-22 | 2017-06-22 | The Nielsen Company (Us), Llc | Image quality assessment using adaptive non-overlapping mean estimation |
US20170243053A1 (en) * | 2016-02-18 | 2017-08-24 | Pinscreen, Inc. | Real-time facial segmentation and performance capture from rgb input |
US20170262736A1 (en) * | 2016-03-11 | 2017-09-14 | Nec Laboratories America, Inc. | Deep Deformation Network for Object Landmark Localization |
US10002415B2 (en) * | 2016-04-12 | 2018-06-19 | Adobe Systems Incorporated | Utilizing deep learning for rating aesthetics of digital images |
US20170308734A1 (en) * | 2016-04-22 | 2017-10-26 | Intel Corporation | Eye contact correction in real time using neural network based machine learning |
US20170351905A1 (en) * | 2016-06-06 | 2017-12-07 | Samsung Electronics Co., Ltd. | Learning model for salient facial region detection |
US20180096457A1 (en) * | 2016-09-08 | 2018-04-05 | Carnegie Mellon University | Methods and Software For Detecting Objects in Images Using a Multiscale Fast Region-Based Convolutional Neural Network |
US20180137388A1 (en) * | 2016-11-14 | 2018-05-17 | Samsung Electronics Co., Ltd. | Method and apparatus for analyzing facial image |
US20210104043A1 (en) * | 2016-12-30 | 2021-04-08 | Skinio, Llc | Skin Abnormality Monitoring Systems and Methods |
US20200129263A1 (en) * | 2017-02-14 | 2020-04-30 | Dignity Health | Systems, methods, and media for selectively presenting images captured by confocal laser endomicroscopy |
US20180286032A1 (en) * | 2017-04-04 | 2018-10-04 | Board Of Regents, The University Of Texas System | Assessing quality of images or videos using a two-stage quality assessment |
US20210133518A1 (en) * | 2017-04-07 | 2021-05-06 | Intel Corporation | Joint training of neural networks using multi-scale hard example mining |
US20200167930A1 (en) * | 2017-06-16 | 2020-05-28 | Ucl Business Ltd | A System and Computer-Implemented Method for Segmenting an Image |
US20180373924A1 (en) * | 2017-06-26 | 2018-12-27 | Samsung Electronics Co., Ltd. | Facial verification method and apparatus |
US20210165852A1 (en) * | 2017-07-26 | 2021-06-03 | Richard Granger | Computer-implemented perceptual apparatus |
US20190080433A1 (en) * | 2017-09-08 | 2019-03-14 | Baidu Online Network Technology(Beijing) Co, Ltd | Method and apparatus for generating image |
US20190080456A1 (en) * | 2017-09-12 | 2019-03-14 | Shenzhen Keya Medical Technology Corporation | Method and system for performing segmentation of image having a sparsely distributed object |
US20190122115A1 (en) * | 2017-10-24 | 2019-04-25 | Vmaxx, Inc. | Image Quality Assessment Using Similar Scenes as Reference |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210312627A1 (en) * | 2017-11-17 | 2021-10-07 | Sysmex Corporation | Image analysis method, apparatus, program, and learned deep learning algorithm |
US11842556B2 (en) * | 2017-11-17 | 2023-12-12 | Sysmex Corporation | Image analysis method, apparatus, program, and learned deep learning algorithm |
US20230036366A1 (en) * | 2021-07-30 | 2023-02-02 | Lemon Inc. | Image attribute classification method, apparatus, electronic device, medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN107679490A (zh) | 2018-02-09 |
CN107679490B (zh) | 2019-06-28 |
US20190102603A1 (en) | 2019-04-04 |
Legal Events
Code | Title | Description
---|---|---
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
AS | Assignment | Owner name: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD., CHINA; ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: DU, KANG; REEL/FRAME: 061228/0930; Effective date: 20180125
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | PATENTED CASE