CN110866500A - Face detection alignment system, method, device, platform, mobile terminal and storage medium - Google Patents

Face detection alignment system, method, device, platform, mobile terminal and storage medium

Info

Publication number
CN110866500A
CN110866500A
Authority
CN
China
Prior art keywords
face
key point
target
point information
corrector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911131214.1A
Other languages
Chinese (zh)
Inventor
周康明
牛寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN201911131214.1A priority Critical patent/CN110866500A/en
Publication of CN110866500A publication Critical patent/CN110866500A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Abstract

A face detection alignment system includes a face detector and a face corrector. The face detector and the face corrector are cascaded, and the output end of the face detector is connected to the input end of the face corrector. The face detector is used for recognizing a preprocessed input picture to obtain position information of at least one face candidate region; the face corrector is used for acquiring a target face according to the position information of the face candidate region and correcting the position coordinates of the target face to obtain key point information of the target face; the face corrector is further used for outputting an alignment result according to the key point information.

Description

Face detection alignment system, method, device, platform, mobile terminal and storage medium
Technical Field
The invention belongs to the technical field of image recognition and artificial intelligence, and particularly relates to a face detection alignment system, method, device, platform, mobile terminal and storage medium.
Background
In the field of face recognition, face detection and alignment are a key link in face recognition technology. The typical flow of face recognition mainly comprises three steps:
the first step is face detection, namely finding out the positions of all faces in a given image;
secondly, aligning the human face, namely correcting the detected human face;
and thirdly, performing feature extraction and feature comparison on the corrected human face to finish the human face recognition process.
The recognition result is directly affected by the quality of face detection and alignment. At present, deep learning methods achieve good detection accuracy, but their models are computationally heavy and therefore cannot run on a mobile terminal with limited computing power; a mobile terminal can only use a lightweight network, which lowers detection accuracy.
To balance speed and precision, a multi-scale face detection model based on a cascade network is commonly used. In a high-resolution scene, however, keeping high detection precision makes inference very slow, and on a mobile terminal the speed may not be usable at all.
Disclosure of Invention
The embodiment of the invention provides a face detection alignment system, a face detection alignment method, a face detection alignment device, a face detection alignment platform, a mobile terminal and a storage medium, and aims to solve the problem that the face recognition accuracy and the recognition speed are reduced due to the limitation of the computing capacity of a mobile terminal.
In one embodiment of the invention, the system comprises a face detector and a face corrector, wherein the face detector and the face corrector are cascaded, the output end of the face detector is connected to the input end of the face corrector, and the face detector is used for identifying a preprocessed input picture to obtain the position information of at least one face candidate area; the face corrector is used for acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face and detecting the key point information of the target face; the face corrector is further used for outputting an alignment result according to the key point information.
In one embodiment of the present invention, a face detection alignment apparatus includes a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations: recognizing the preprocessed input picture to obtain position information of at least one face candidate region; acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
In one embodiment of the invention, a face detection and alignment mobile terminal comprises a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations: recognizing the preprocessed input picture to obtain position information of at least one face candidate region; acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
In one embodiment of the invention, a face detection alignment platform comprises a server, wherein the server is provided with a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations: recognizing the preprocessed input picture to obtain position information of at least one face candidate region; acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
In one embodiment of the present invention, a face detection alignment method includes: recognizing the preprocessed input picture to obtain position information of at least one face candidate region; acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
In an embodiment of the present invention, a storage medium stores a computer program thereon, and when the computer program is executed by a processor, the method for detecting and aligning a human face is implemented.
In view of usability on a mobile terminal, the embodiments of the invention provide a cascade-network-based system, method, device, platform, mobile terminal and storage medium for fast face detection and alignment at high resolution. The method improves inference speed on the mobile terminal in high-resolution scenes while maintaining high detection precision.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 is a block diagram of a face detection and alignment apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram of a face detector according to one embodiment of the invention.
Fig. 3 is a block diagram of a face corrector according to one embodiment of the present invention.
In fig. 3, proposals represents the detection results of the face detector, resize represents the cropping operation performed on the proposals, input represents the input picture, conv represents a convolution operation, mp represents a max-pooling operation, fc represents a fully-connected operation, and there are 3 correction tasks after the fc operation: face classification, position coordinate regression, and facial key point localization; final detection represents the final output of the face corrector.
Detailed Description
According to one or more embodiments, as shown in fig. 1, a face detection alignment system includes a face detector and a face corrector, the face detector is cascaded with the face corrector, and an output end of the face detector is connected to an input end of the face corrector.
The face detector is used for identifying the preprocessed input picture to obtain the position information of at least one face candidate region;
the face corrector is used for acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face and detecting the key point information of the target face;
the face corrector is further used for outputting an alignment result according to the key point information.
According to one or more embodiments, the face corrector is specifically configured to: and outputting an alignment result after performing non-maximum suppression processing on the data represented by the key point information.
According to one or more embodiments, the key point information specifically includes: and key point information for indicating coordinates of a left eye, a right eye, a nose tip, a left mouth corner and a right mouth corner of the target face.
According to one or more embodiments, the face detector specifically includes an identification module and a first filtering module: the identification module is used for obtaining a plurality of feature maps with different sizes according to the size of the input picture; the first screening module is used for respectively determining a plurality of detection branches according to the feature maps, and the detection branches are used for determining a plurality of face candidate areas; the first screening module is further configured to filter the plurality of face candidate regions to obtain location information of the at least one face candidate region.
According to one or more embodiments, the face corrector specifically includes: the processing module and the second screening module; the processing module is used for cutting the image of the at least one face candidate area to obtain a uniform target image; processing the target image through 4 convolutional layers and 1 full-connection layer to obtain a 256-dimensional feature vector; and the second screening module is used for acquiring a target face based on the feature vector, correcting the position coordinates of the target face and detecting key point information of the target face.
Further, when a high-resolution original picture is received, the input picture is obtained after compression and preprocessing, and non-maximum suppression is applied to the detection results.
Further, the face detector uses a convolutional neural network to preliminarily screen out possible faces and, at the same time, detect their position coordinates, finally outputting the position coordinates of the face candidate regions. The face corrector uses a convolutional neural network to further screen and correct the face candidate regions obtained from the face detector, and simultaneously detects 5 key points of the face, namely the coordinates of the left eye, right eye, nose tip, left mouth corner and right mouth corner, which are used for aligning the face; it then outputs the final detection result. A minimal sketch of such a key-point-based alignment step is given below.
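As an illustration of how the 5 key points can be used for alignment, the following is a minimal OpenCV sketch. The canonical key-point template and the 112x112 output size are common conventions assumed here, not values specified in this description.

```python
# A minimal OpenCV sketch of aligning a detected face with its 5 key points.
# The reference template and 112x112 crop size are assumptions.
import cv2
import numpy as np

# assumed canonical key-point positions in a 112x112 aligned crop
REFERENCE_5PTS = np.float32([
    [38.3, 51.7],   # left eye
    [73.5, 51.5],   # right eye
    [56.0, 71.7],   # nose tip
    [41.5, 92.4],   # left mouth corner
    [70.7, 92.2],   # right mouth corner
])

def align_face(image, keypoints, size=112):
    """keypoints: (5, 2) array in original-image coordinates; returns the aligned crop."""
    src = np.float32(keypoints)
    # estimate a similarity (rotation + scale + translation) transform to the template
    matrix, _ = cv2.estimateAffinePartial2D(src, REFERENCE_5PTS)
    return cv2.warpAffine(image, matrix, (size, size))
```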
According to one or more embodiments, as shown in fig. 1, a face detection and alignment system comprises a face detector and a face corrector in a cascade structure.
The working process is as follows:
a. compressing and preprocessing an original picture with high resolution to obtain an input picture;
b. sending the input picture into a face detector to obtain a preliminarily screened face candidate frame;
c. the obtained face candidate boxes are sent to the face corrector in turn; the face corrector further screens out possible faces and corrects the face position coordinates, and at the same time detects the key point positions of the face so as to align it;
d. and carrying out non-maximum value inhibition treatment on the result, and removing repeated results to obtain a final detection result. The face detector comprises a multitask model, and the model is in charge of primarily screening out possible faces on one hand, detecting position coordinates of the faces on the other hand, and finally outputting the positions of face candidate areas. The face detector adopts a convolutional neural network, and the structure is shown in fig. 2. In fig. 2, input indicates an input picture, conv indicates a convolution operation, mp indicates a maximum pooling operation, cls indicates classification, reg indicates regression, branch1, branch2, branch3 respectively indicate detection branch1, detection branch2, and detection branch3, detector indicates a detector, and prosalas indicates a detection result of a face detector. The structure of the face detector is explained as follows:
a) the input picture size is 160x160; there are 5 convolutional layers in total, all using 3x3 convolution kernels; downsampling is performed by max pooling, with an overall factor of 16;
b) 3 detection branches (1, 2 and 3) are led out of the feature maps of the last three convolutional layers, whose sizes are 40x40, 20x20 and 10x10; branch1, branch2 and branch3 have 1, 3 and 5 preset boxes (anchor boxes) respectively, i.e. each point on the corresponding feature map detects 1, 3 or 5 boxes; each box predicts whether it is a face (the probability of yes and the probability of no, 2 values in total) and the face position coordinates (offsets of the top-left and bottom-right corner coordinates, 4 values in total), so the output sizes of the detection branches are 40x40x6, 20x20x18 and 10x10x30 respectively;
c) the detection results of the 3 detection branches are merged into the detector and post-processed to obtain the face candidate regions. A minimal network sketch of this structure is given below.
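As a concrete illustration of items a)-c), the following is a minimal PyTorch sketch. Only the 160x160 input, the five 3x3 convolutional layers, max-pooling downsampling and the three branch output sizes come from the description above; the channel widths, activation functions and the 1x1 prediction heads are assumptions.

```python
# A minimal PyTorch sketch of the face detector stage; channel widths, activations
# and the 1x1 prediction heads are assumptions (see the note above).
import torch
import torch.nn as nn

class FaceDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(                                                   # convs 1-3
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),   # 160 -> 80
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),  # 80 -> 40
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),                   # 40x40 map
        )
        self.stage2 = nn.Sequential(                                                   # conv 4
            nn.MaxPool2d(2),                                                           # 40 -> 20
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),                   # 20x20 map
        )
        self.stage3 = nn.Sequential(                                                   # conv 5
            nn.MaxPool2d(2),                                                           # 20 -> 10
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),                   # 10x10 map
        )
        # one prediction head per branch: (2 class scores + 4 offsets) per anchor
        self.head1 = nn.Conv2d(64, 1 * 6, 1)   # 1 anchor  -> 40x40x6
        self.head2 = nn.Conv2d(64, 3 * 6, 1)   # 3 anchors -> 20x20x18
        self.head3 = nn.Conv2d(64, 5 * 6, 1)   # 5 anchors -> 10x10x30

    def forward(self, x):                      # x: (N, 3, 160, 160)
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return self.head1(f1), self.head2(f2), self.head3(f3)
```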
Further, the working flow of the face detector is as follows:
a) the size of an input picture is 160x160, and feature maps with different sizes are obtained after 5-layer convolution calculation;
b) 3 detection branches 1, 2 and 3 are drawn from 3 feature maps with the sizes of 40x40,20x20 and 10x10, and each branch is responsible for screening possible faces and predicting coordinates of the positions of the faces;
c) merging the detection results of the 3 detection branches into the detector, filtering them, keeping the faces whose scores exceed a threshold, and mapping their coordinates back to the original image;
d) further performing non-maximum suppression on the screened results to reject repeated candidate boxes (a minimal sketch of this step is given after this list);
e) outputting the predicted face candidate box results.
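A plain-Python sketch of the non-maximum suppression step in items d)-e) is shown below; the (x1, y1, x2, y2, score) box format and the 0.5 overlap threshold are assumptions, since the description only states that repeated candidate boxes are removed.

```python
# A minimal sketch of non-maximum suppression; box format and threshold are assumptions.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, iou_thresh=0.5):
    """boxes: iterable of (x1, y1, x2, y2, score); keeps the highest-scoring non-overlapping ones."""
    kept = []
    for cand in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(cand, k) < iou_thresh for k in kept):
            kept.append(cand)
    return kept
```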
The face corrector comprises a multitask model, which further screens and corrects the face candidate regions output by the previous-level network and simultaneously detects 5 key points of the face: the coordinates of the left eye, right eye, nose tip, left mouth corner and right mouth corner, which are used for aligning the face. The final detection result is then output.
The face corrector adopts a convolutional neural network, and the structure is shown in fig. 3. The structure of the face corrector is explained as follows:
a) the input picture size is 48x48, and the network consists of 4 convolutional layers and 2 fully-connected layers;
b) the last fully-connected layer includes 3 task branches: face classification, position coordinate correction and face key point detection; the face classification branch further screens out possible faces, the position coordinate correction branch corrects the position coordinates of the faces, and the face key point detection branch predicts the coordinates of 5 key points of the faces;
Further, the processing algorithm of the face corrector is as follows (a minimal network sketch is given after this list):
a) the face candidate regions output by the face detector are fed into the face corrector in turn and cropped into 48x48 input pictures;
b) the input picture passes through 4 convolutional layers and 1 fully-connected layer to obtain a 256-dimensional feature vector;
c) 3 branching tasks are performed simultaneously: further screening out possible faces; correcting the corresponding position coordinates; predicting coordinates of 5 key points of the face;
d) further performing non-maximum suppression on the screened results to reject repeated candidate boxes;
e) outputting the final face detection result.
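The following is a minimal PyTorch sketch of the face corrector matching items a)-e). The 48x48 input, the 4 convolutional layers, the shared 256-dimensional fully-connected feature and the three task heads come from the description above, while channel widths, pooling placement and activations are assumptions.

```python
# A minimal PyTorch sketch of the face corrector; channel widths and pooling
# placement are assumptions (see the note above).
import torch
import torch.nn as nn

class FaceCorrector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),  # 12 -> 6
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2), # 6 -> 3
        )
        self.fc = nn.Linear(128 * 3 * 3, 256)    # shared 256-dimensional feature vector
        self.cls_head = nn.Linear(256, 2)        # face classification (face / not face)
        self.box_head = nn.Linear(256, 4)        # position coordinate correction
        self.landmark_head = nn.Linear(256, 10)  # 5 key points, (x, y) each

    def forward(self, x):                        # x: (N, 3, 48, 48)
        feat = torch.relu(self.fc(self.features(x).flatten(1)))
        return self.cls_head(feat), self.box_head(feat), self.landmark_head(feat)
```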
According to one or more embodiments, a face detection alignment method employs a face detector and a face corrector, both of which are in a cascade structure, and the method includes the steps of:
a. compressing and preprocessing an original picture with high resolution to obtain an input picture;
b. sending the input picture into a face detector to obtain a preliminarily screened face candidate frame;
c. the obtained face candidate boxes are sent to the face corrector in turn; the face corrector further screens out possible faces and corrects the face position coordinates, and at the same time detects the key point positions of the face so as to align it;
d. performing non-maximum suppression on the results and removing duplicates to obtain the final detection result. A minimal sketch of this overall cascade is given below.
The working flow of the face detector is as follows:
a. the size of an input picture is 160x160, and feature maps with different sizes are obtained after 5-layer convolution calculation;
b. 3 detection branches (1, 2 and 3) are led out of 3 feature maps with sizes 40x40, 20x20 and 10x10; branch1, branch2 and branch3 have 1, 3 and 5 anchor boxes respectively, i.e. each point on the corresponding feature map detects 1, 3 or 5 boxes; each box predicts whether it is a face (the probability of yes and the probability of no, 2 values in total) and the face position coordinates (offsets of the top-left and bottom-right corner coordinates, 4 values in total), so the output sizes of the detection branches are 40x40x6, 20x20x18 and 10x10x30 respectively; each branch is responsible for screening possible faces and predicting the face position coordinates;
c. merging the detection results of the 3 detection branches into the detector, filtering them, keeping the faces whose scores exceed a threshold, and mapping their coordinates back to the original image (a minimal decoding sketch is given after this list);
d. further performing non-maximum suppression on the screened results to reject repeated candidate boxes.
The processing algorithm of the face corrector is as follows:
a. the face candidate regions output by the face detector are fed into the face corrector in turn and cropped into 48x48 input pictures;
b. the input picture passes through 4 convolutional layers and 1 fully-connected layer to obtain a 256-dimensional feature vector;
c. 3 branching tasks are performed simultaneously: further screening out possible faces; correcting the corresponding position coordinates; predicting coordinates of 5 key points of the face;
d. further performing non-maximum suppression on the screened results to reject repeated candidate boxes;
e. outputting the final face detection result.
The sizes of the anchor boxes in the face detector are determined as follows:
a. performing k-means clustering on all ground-truth bounding boxes of the training set, where the clustering distance metric is d = 1 - IoU, and IoU is the intersection-over-union between a bounding box and the cluster center box;
b. the number of clusters k is set to 1, 3 and 5 respectively, yielding the width and height values of the 1, 3 and 5 groups of anchor boxes (a minimal clustering sketch is given after this list);
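A minimal numpy sketch of steps a.-b. is given below; the initialization and stopping criteria are assumptions, since the description only specifies k-means with the distance d = 1 - IoU and k = 1, 3, 5.

```python
# A minimal numpy sketch of anchor-box clustering with d = 1 - IoU; initialization
# and stopping criteria are assumptions.
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) pairs, treating all boxes as sharing the same top-left corner."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(wh, k, iters=100, seed=0):
    """wh: (N, 2) array of ground-truth box widths/heights; returns k (w, h) anchors."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(1.0 - iou_wh(wh, centers), axis=1)          # d = 1 - IoU
        new_centers = np.array([wh[assign == j].mean(axis=0) if np.any(assign == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers

# the three anchor groups would then be obtained as, e.g.:
# anchors_1, anchors_3, anchors_5 = (kmeans_anchors(train_wh, k) for k in (1, 3, 5))
```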
Regarding sample training:
1) the face detector has 2 task branches: face classification and position coordinate regression; the face corrector has 3 task branches: face classification, position coordinate regression and face key point regression;
2) the face classification adopts cross entropy as the loss function:

$L_{det} = -\left[ y_{det}\,\log\hat{y}_{det} + (1 - y_{det})\,\log\left(1 - \hat{y}_{det}\right) \right]$

wherein $L_{det}$ is the classification loss, $y_{det}$ is the ground-truth label (0 or 1), and $\hat{y}_{det}$ is the classification confidence predicted by the network;

3) the position coordinate regression and the face key point regression both adopt the Euclidean distance as the loss function:

$L_{box} = \left\| \hat{y}_{box} - y_{box} \right\|_2^2$

$L_{landmark} = \left\| \hat{y}_{landmark} - y_{landmark} \right\|_2^2$

wherein $L_{box}$ is the position coordinate regression loss, $y_{box}$ is the ground-truth position coordinates, and $\hat{y}_{box}$ is the position coordinates predicted by the network; $L_{landmark}$ is the face key point regression loss, $y_{landmark}$ is the ground-truth key point coordinates, and $\hat{y}_{landmark}$ is the key point coordinates predicted by the network.
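The three loss terms can be combined during training roughly as follows; the unweighted sum and the use of mean squared error in place of the squared Euclidean distance are assumptions.

```python
# A minimal PyTorch sketch of the multitask training loss; the unweighted sum of
# the terms is an assumption.
import torch.nn.functional as F

def multitask_loss(cls_logits, box_pred, landmark_pred,
                   cls_target, box_target, landmark_target):
    """cls_logits: (N, 2) scores; cls_target: (N,) 0/1 labels; the box and
    landmark predictions/targets share their respective shapes."""
    l_det = F.cross_entropy(cls_logits, cls_target)          # cross-entropy classification loss
    l_box = F.mse_loss(box_pred, box_target)                 # position coordinate regression
    l_landmark = F.mse_loss(landmark_pred, landmark_target)  # face key point regression
    return l_det + l_box + l_landmark
```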
According to one or more embodiments, a face detection alignment apparatus includes a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
In accordance with one or more embodiments, a face detection and alignment mobile terminal includes a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
In accordance with one or more embodiments, a face detection alignment platform includes a server having a memory; and a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor performing the following operations:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
According to one or more embodiments, a face detection alignment method includes: recognizing the preprocessed input picture to obtain position information of at least one face candidate region; acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face; and outputting an alignment result according to the key point information.
According to one or more embodiments, a storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements a face detection alignment method as described above.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both, and that the components and steps of the examples have been described above in general functional terms in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may also be an electric, mechanical or other form of connection.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A face detection alignment system comprises a face detector and a face corrector, wherein the face detector and the face corrector are cascaded, the output end of the face detector is connected with the input end of the face corrector,
the face detector is used for identifying the preprocessed input picture to obtain the position information of at least one face candidate region;
the face corrector is used for acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face and detecting the key point information of the target face;
the face corrector is further used for outputting an alignment result according to the key point information.
2. The face detection alignment system of claim 1, wherein the face corrector is specifically configured to:
and outputting an alignment result after performing non-maximum suppression processing on the data represented by the key point information.
3. The face detection alignment system according to claim 1, wherein the key point information specifically includes:
and key point information for indicating coordinates of a left eye, a right eye, a nose tip, a left mouth corner and a right mouth corner of the target face.
4. The face detection alignment system according to claim 1, wherein the face detector specifically comprises a recognition module and a first filtering module:
the identification module is used for obtaining a plurality of feature maps with different sizes according to the size of the input picture;
the first screening module is used for respectively determining a plurality of detection branches according to the feature maps, and the detection branches are used for determining a plurality of face candidate areas; the first screening module is further configured to filter the plurality of face candidate regions to obtain location information of the at least one face candidate region.
5. The face detection alignment system according to claim 1, wherein the face corrector specifically comprises: the processing module and the second screening module;
the processing module is used for cutting the image of the at least one face candidate area to obtain a uniform target image; processing the target image through 4 convolutional layers and 1 full-connection layer to obtain a 256-dimensional feature vector;
and the second screening module is used for acquiring a target face based on the feature vector, correcting the position coordinates of the target face and detecting key point information of the target face.
6. A face detection alignment apparatus, comprising a memory; and
a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor to:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
7. A face detection alignment mobile terminal is characterized in that the mobile terminal comprises a memory; and
a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor to:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
8. A face detection alignment platform is characterized in that the platform comprises a server, wherein the server is provided with a memory; and
a processor coupled to the memory, the processor configured to execute instructions stored in the memory, the processor to:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
9. A face detection alignment method is characterized by comprising the following steps:
recognizing the preprocessed input picture to obtain position information of at least one face candidate region;
acquiring a target face according to the position information of the face candidate area, correcting the position coordinates of the target face, and detecting key point information of the target face;
and outputting an alignment result according to the key point information.
10. A storage medium having stored thereon a computer program which, when executed by a processor, implements the face detection alignment method of claim 9.
CN201911131214.1A 2019-11-19 2019-11-19 Face detection alignment system, method, device, platform, mobile terminal and storage medium Pending CN110866500A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911131214.1A CN110866500A (en) 2019-11-19 2019-11-19 Face detection alignment system, method, device, platform, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911131214.1A CN110866500A (en) 2019-11-19 2019-11-19 Face detection alignment system, method, device, platform, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN110866500A true CN110866500A (en) 2020-03-06

Family

ID=69655569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911131214.1A Pending CN110866500A (en) 2019-11-19 2019-11-19 Face detection alignment system, method, device, platform, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110866500A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001280A (en) * 2020-08-13 2020-11-27 浩鲸云计算科技股份有限公司 Real-time online optimization face recognition system and method
CN112149571A (en) * 2020-09-24 2020-12-29 深圳龙岗智能视听研究院 Face recognition method based on neural network affine transformation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109558864A (en) * 2019-01-16 2019-04-02 苏州科达科技股份有限公司 Face critical point detection method, apparatus and storage medium
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point
CN109961006A (en) * 2019-01-30 2019-07-02 东华大学 A kind of low pixel multiple target Face datection and crucial independent positioning method and alignment schemes
CN110175504A (en) * 2019-04-08 2019-08-27 杭州电子科技大学 A kind of target detection and alignment schemes based on multitask concatenated convolutional network
CN110222565A (en) * 2019-04-26 2019-09-10 合肥进毅智能技术有限公司 A kind of method for detecting human face, device, electronic equipment and storage medium
CN110309706A (en) * 2019-05-06 2019-10-08 深圳市华付信息技术有限公司 Face critical point detection method, apparatus, computer equipment and storage medium
CN110399844A (en) * 2019-07-29 2019-11-01 南京图玩智能科技有限公司 It is a kind of to be identified and method for tracing and system applied to cross-platform face key point

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800648A (en) * 2018-12-18 2019-05-24 北京英索科技发展有限公司 Face datection recognition methods and device based on the correction of face key point
CN109558864A (en) * 2019-01-16 2019-04-02 苏州科达科技股份有限公司 Face critical point detection method, apparatus and storage medium
CN109961006A (en) * 2019-01-30 2019-07-02 东华大学 A kind of low pixel multiple target Face datection and crucial independent positioning method and alignment schemes
CN110175504A (en) * 2019-04-08 2019-08-27 杭州电子科技大学 A kind of target detection and alignment schemes based on multitask concatenated convolutional network
CN110222565A (en) * 2019-04-26 2019-09-10 合肥进毅智能技术有限公司 A kind of method for detecting human face, device, electronic equipment and storage medium
CN110309706A (en) * 2019-05-06 2019-10-08 深圳市华付信息技术有限公司 Face critical point detection method, apparatus, computer equipment and storage medium
CN110399844A (en) * 2019-07-29 2019-11-01 南京图玩智能科技有限公司 It is a kind of to be identified and method for tracing and system applied to cross-platform face key point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI LIU ET AL.: "SSD: Single Shot MultiBox Detector" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001280A (en) * 2020-08-13 2020-11-27 浩鲸云计算科技股份有限公司 Real-time online optimization face recognition system and method
CN112149571A (en) * 2020-09-24 2020-12-29 深圳龙岗智能视听研究院 Face recognition method based on neural network affine transformation

Similar Documents

Publication Publication Date Title
JP5121506B2 (en) Image processing apparatus, image processing method, program, and storage medium
US20220335748A1 (en) Method for identifying an object within an image and mobile device for executing the method
CN111274977B (en) Multitasking convolutional neural network model, using method, device and storage medium
CN111401257A (en) Non-constraint condition face recognition method based on cosine loss
CN110147708B (en) Image data processing method and related device
CN112381061B (en) Facial expression recognition method and system
JP2008102611A (en) Image processor
US7831068B2 (en) Image processing apparatus and method for detecting an object in an image with a determining step using combination of neighborhoods of a first and second region
US11854209B2 (en) Artificial intelligence using convolutional neural network with hough transform
CN112149533A (en) Target detection method based on improved SSD model
CN110866500A (en) Face detection alignment system, method, device, platform, mobile terminal and storage medium
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN113887494A (en) Real-time high-precision face detection and recognition system for embedded platform
US20230394871A1 (en) Method for verifying the identity of a user by identifying an object within an image that has a biometric characteristic of the user and separating a portion of the image comprising the biometric characteristic from other portions of the image
CN107368847B (en) Crop leaf disease identification method and system
CN113228105A (en) Image processing method and device and electronic equipment
CN116110095A (en) Training method of face filtering model, face recognition method and device
CN112084874B (en) Object detection method and device and terminal equipment
CN114296545A (en) Unmanned aerial vehicle gesture control method based on vision
US20220383663A1 (en) Method for obtaining data from an image of an object of a user that has a biometric characteristic of the user
JP4298283B2 (en) Pattern recognition apparatus, pattern recognition method, and program
CN112287769A (en) Face detection method, device, equipment and storage medium
CN113642428B (en) Face living body detection method and device, electronic equipment and storage medium
Li et al. Scalenet-improve cnns through recursively rescaling objects
CN117315287A (en) Object edge detection method and device based on instance segmentation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination