CN107066955B - Method for restoring whole human face from local human face area


Info

Publication number
CN107066955B
Authority
CN
China
Prior art keywords
image
face
eye region
dictionary
complete
Prior art date
Legal status
Active
Application number
CN201710181236.3A
Other languages
Chinese (zh)
Other versions
CN107066955A (en)
Inventor
姚琪
卓越
罗畅
刘靖峰
Current Assignee
Wuhan Deepcam Information Technology Co ltd
Original Assignee
Wuhan Deepcam Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Deepcam Information Technology Co ltd filed Critical Wuhan Deepcam Information Technology Co ltd
Priority to CN201710181236.3A
Publication of CN107066955A
Application granted
Publication of CN107066955B
Legal status: Active
Anticipated expiration

Classifications


    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G06V40/173 Classification, e.g. identification face re-identification, e.g. recognising unknown faces across different face tracks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a method for restoring a whole face from a local face region. A K-SVD algorithm trains on complete faces and local, incomplete faces synchronously to obtain two associated dictionaries. In practical application, the two dictionaries are cross-queried to restore the whole face.

Description

Method for restoring whole human face from local human face area
Technical Field
The invention relates to the technical field of face recognition, and in particular to a method for restoring a whole face from a local face region.
Background
In smart cities, security, and public-security technical reconnaissance, face recognition is a common artificial-intelligence technique. Existing face recognition technology requires the whole face, so when a subject wears a mask or otherwise occludes part of the face, it fails to recognize the subject.
Face recognition is a typical biometric identification technology. Because it requires no active cooperation from the subject, it has been widely applied in recent years to human-computer interaction, security, identity authentication, entertainment, medical care, and other fields. A face recognition pipeline comprises face detection, feature extraction, and feature matching and classification. Face detection methods include HAAR cascades, HOG detection, AdaBoost learning, deep-learning CNN object detection, and the like. Feature extraction methods include PCA eigenfaces, deep-learning CNN features, and the like. Feature matching and classification methods include 1-NN, k-NN, and SVM. Organically combining these detection, extraction, and matching methods yields current general-purpose face recognition technology.
Face detection and feature extraction in existing face recognition technology are both based on the whole face. In real life, when a subject wears a mask or veil or otherwise occludes part of the face, existing technology fails to recognize the person. A method that performs face recognition from local facial features is therefore needed to improve recognition accuracy when the subject's face is partially covered.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for restoring a whole face from a local face region: a K-SVD algorithm trains on complete faces and local, incomplete faces synchronously to obtain two associated dictionaries. In practical application, the complete face is restored by cross-querying the two dictionaries, which effectively improves recognition accuracy when the subject wears a mask or veil or otherwise occludes part of the face.
The technical scheme adopted by the invention for solving the technical problems is as follows:
The invention provides a method for restoring a whole face from a local face area, which comprises the following steps:
s1, training phase
Carry out graying and illumination equalization processing on the face images in the face training set and landmark them, generating a complete-face image Y and a local-face image Y^ for each face. Then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary and the complete-face image Y and local-face image Y^ as dictionary input, train the dictionaries synchronously to obtain a pair of mutually associated optimized dictionaries: D for the complete-face image and D^ for the local-face image.
S2, recognition and restoration stage
Frame-select the local face part in an occluded or defective target face image, carry out graying and illumination equalization processing, and then landmark it to obtain the local target-face image Y^'. Take Y^' as the input to the dictionary D^ and, according to

X = argmin_X ||Y^' − D^X||²₂   s.t. ||X||₀ ≤ T,

obtain the sparse coefficient X of the local target-face image Y^'; then input X into the dictionary D and reverse-query Y' = DX to obtain the recovered whole face Y'.
The invention has the beneficial effects that:
the invention provides a method for restoring/guessing the whole face from the eye area of the blocked face, so that the existing face recognition technology can still be used under the condition that the face is blocked.
Drawings
Fig. 1 illustrates the method for restoring a whole face from a local face region, taking the eye region as an example.
Fig. 2 compares ROC curves for tests using the original complete face and the face restored from the eye region.
Detailed Description
The invention is further explained below with reference to the drawings and examples.
The invention provides a method for restoring a whole face from a local face area, which takes an eye area as an example and comprises the following steps:
s1, training phase
Carry out graying and illumination equalization processing on the face images in the face training set and landmark them, generating a complete-face image Y and an eye-region image Y^ for each face. Then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary and the complete-face image Y and eye-region image Y^ as dictionary input, train the dictionaries synchronously to obtain mutually associated optimized dictionaries D (for the complete-face image) and D^ (for the eye-region image), together with a common sparse coefficient X.
The method specifically comprises the following substeps:
s101, at least 100 million face pictures are crawled from the Internet by utilizing a web crawler technology, or the face pictures are obtained by police; the more the original face pictures are in the training set, the more accurate the dictionary is obtained after training; graying and illumination equalization processing are carried out on the face picture, and landmark marking is carried out on the face picture by utilizing a gradient histogram HOG algorithm to generate an image Y corresponding to a complete face;
s102, aiming at each complete face picture, manually framing out eye region parts and setting the gray level of the face of the non-framed part to zero to generate a local face image Y
S103, with the empty dictionary as the initial dictionary, take the complete-face image Y and its corresponding eye-region image Y^ as dictionary input and solve formula (1)

min_{D, D^, X} ||Y − DX||²_F + β·||Y^ − D^X||²_F   s.t. ||x_i||₀ ≤ T for every column x_i of X    (1)

to obtain the mutually associated optimized dictionaries D and D^ corresponding to the complete-face image Y and the eye-region image Y^. Here β is the weight of the eye-region term in training; it is set between 80 and 150 so that training is biased toward the unoccluded part, namely the eye region, and the order T of the sparse coefficients lies between 20 and 50.
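The coupled objective of formula (1) forces one sparse code X to explain both Y and Y^, which is what later permits the cross-query. A standard way to realize it (used, for example, in coupled dictionary learning for super-resolution) is to stack each complete face with its √β-weighted eye image and learn a single joint dictionary whose halves are D and √β·D^. The sketch below is an assumption-laden illustration: images are vectorized rows, all names are invented for this example, and scikit-learn's MiniBatchDictionaryLearning is a stand-in for K-SVD, which scikit-learn does not ship.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_coupled_dictionaries(Y, Y_eye, beta=100.0, n_atoms=512, k=30):
    """Joint training for D and D^ (step S103) via the stacking trick:
    min ||Y - X D||^2 + beta ||Y_eye - X D_eye||^2 with a shared code X
    equals dictionary learning on [Y, sqrt(beta) * Y_eye].

    Y, Y_eye: (n_samples, n_pixels) rows of vectorized images.
    beta in [80, 150] and sparsity k in [20, 50] per the description."""
    stacked = np.hstack([Y, np.sqrt(beta) * Y_eye])
    learner = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",        # OMP for the sparse-coding step
        transform_n_nonzero_coefs=k,
        batch_size=256,
    )
    learner.fit(stacked)
    n = Y.shape[1]
    D = learner.components_[:, :n]                      # complete-face dictionary
    D_eye = learner.components_[:, n:] / np.sqrt(beta)  # eye-region dictionary
    return D, D_eye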
S2, recognition and restoration stage, as shown in Fig. 1.
Frame-select the eye region in the occluded or incomplete target face image, carry out graying and illumination equalization processing, and then landmark it to obtain the target-face eye-region image Y^'. Take Y^' as the input to the dictionary D^ and, according to formula (2)

X = argmin_X ||Y^' − D^X||²₂   s.t. ||X||₀ ≤ T    (2)

obtain the sparse coefficient X of the target-face eye-region image Y^'; then input X into the dictionary D and reverse-query Y' = DX to obtain the recovered whole face Y'.
The method specifically comprises the following substeps:
s201, selecting a shielded target face image in a frame, and carrying out graying and illumination equalization processing; performing landmark marking on the target face image by using a gradient histogram HOG algorithm, reserving the gray value of the pixels of the target face eye region image and setting the gray value of the blocked partial pixels to zero to obtain an image Y corresponding to the target face eye region';
S202, input the target-face eye-region image Y^' into the dictionary D^ and query according to formula (2) to obtain the sparse coefficient X corresponding to Y^';
s203, the image Y of the target human face eye regionInputting corresponding sparse coefficient X into dictionary D, and performing reverse query to obtain target human face eye region image Y'corresponding full face image Y'.
Preferably, in actual use, separate dictionaries are generated for the eye region shifted up by 10%, shifted down by 10%, and unshifted (0%), and three candidate complete faces are recovered through these dictionaries, which improves restoration accuracy.
The method comprises the following specific steps:
a training stage:
s101', a large number of face pictures are crawled from the Internet by utilizing a web crawler technology, or the face pictures are obtained by police; graying and illumination equalization processing are carried out on the face picture, and landmark marking is carried out on the face picture by utilizing a gradient histogram HOG algorithm to generate an image Y corresponding to a complete face;
s102', aiming at each complete face picture, selecting the eye region and reserving the pixel gray of the eye region imageSetting the gray scale of the non-frame-selected part to zero at the same time to generate an eye region image Y∧0(ii) a Shifting up the eye height by 10% based on the eye region selected by the frame, retaining the gray value of the pixels of the selected part after shifting, and zeroing the gray values of the pixels of the other part outside the selected part to obtain the eye region image Y shifted up∧1'; downwards shifting 10% of the eye height by taking the eye region as a reference, reserving the gray value of the image pixel of the selected region after shifting, and setting the gray values of other pixels outside the selected region to zero to obtain a downwards shifted local target face image Y∧2';
S103', with the empty dictionary as the initial dictionary, take the complete-face image Y and its corresponding eye-region image Y^0 as dictionary input and solve formula (1) to obtain the optimized dictionaries D0 and D^0 corresponding to the complete-face image Y and the eye-region image Y^0.
In the same way, take the complete-face image Y and the corresponding upward-shifted eye-region image Y^1 as dictionary input and solve formula (1) to obtain the optimized dictionaries D1 and D^1; and take the complete-face image Y and the corresponding downward-shifted eye-region image Y^2 as dictionary input and solve formula (1) to obtain the optimized dictionaries D2 and D^2.
Recognition and restoration stage:
s201', selecting a shielded target face image from a frame, and carrying out graying and illumination equalization processing; performing landmark marking on the target face image by using a gradient histogram HOG algorithm, reserving the gray value of the image pixel of the eye region and zeroing the gray value of the shielded part of the pixelTo obtain the image Y of the eye region of the target human face';
S202', input the target-face eye-region image Y^' into the dictionaries D^0, D^1, and D^2 respectively and query according to formula (2) to obtain the corresponding sparse coefficients X0, X1, and X2;
S203', input the sparse coefficients X0, X1, and X2 corresponding to the target-face eye-region image Y^' into the dictionaries D0, D1, and D2 respectively and reverse-query to obtain the candidate complete-face images Y0', Y1', and Y2' corresponding to Y^'.
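Steps S202'-S203' then amount to running the single-dictionary recovery once per dictionary pair. A sketch building on the hypothetical `restore_full_face` above; pairing each D_i with its D^_i is the only new ingredient:

def restore_candidates(y_eye_masked, dict_pairs, k=30):
    """Steps S202'-S203': query each (D_i, D^_i) pair and collect the
    candidate complete faces Y0', Y1', Y2'.

    dict_pairs: [(D0, D_eye0), (D1, D_eye1), (D2, D_eye2)]."""
    return [restore_full_face(y_eye_masked, D, D_eye, k=k)
            for D, D_eye in dict_pairs]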
After the recovered whole face Y' is obtained, it can be input into existing face recognition software for further identity recognition, or judged manually.
Fig. 2 compares the ROC (receiver operating characteristic) curves of tests using the original whole face and the face restored from the eye region; the dotted line shows the recognition performance of the original whole face, and the solid line shows the performance of the whole face restored from the eye portion.
If a different area is occluded, simply let Y^ be the remaining unoccluded part, keep Y as the whole face, and retrain the two dictionaries D and D^ with the K-SVD algorithm; the method can then restore other occluded areas. For example, in VR (virtual reality) games the player's eyes are occluded while the other parts of the face are exposed, so the eyes can be restored from the rest of the face.
The method can be realized in an integrated circuit, an embedded circuit, or cloud-server software.
The parts not described in the specification are prior art or common general knowledge. The present embodiments are illustrative only and are not intended to limit the scope of the invention; modifications and equivalents made by those skilled in the art are considered to fall within the scope of the invention as set forth in the claims.

Claims (8)

1. A method for restoring a whole face from a local face area, characterized in that the method comprises the following steps:
s1, training phase
Carrying out graying and illumination equalization processing on the face images in a face training set, carrying out landmarking, and generating a complete-face image Y and an eye-region image Y^ for each face; then, according to the K-SVD algorithm, with an empty dictionary as the initial dictionary, taking the complete-face image Y and the eye-region image Y^ as dictionary input and training the dictionaries synchronously to obtain mutually associated optimized dictionaries D, corresponding to the complete-face image, and D^, corresponding to the eye-region image;
S2, recognition and restoration stage
Frame-selecting the local face part in an occluded or incomplete target face image, carrying out graying and illumination equalization processing, and then carrying out landmarking to obtain the target eye-region image Y^'; taking Y^' as the input to the dictionary D^ and, according to formula (2)

X = argmin_X ||Y^' − D^X||²₂   s.t. ||X||₀ ≤ T    (2)

obtaining the sparse coefficient X of the target eye-region image Y^'; inputting X into the dictionary D and reverse-querying according to Y' = DX to obtain the recovered whole face Y';
wherein step S1 specifically comprises the following sub-steps:
s101, crawling a large number of face pictures from the Internet by utilizing a web crawler technology, or obtaining the face pictures through police, wherein the number of the pictures is more than a million level; graying and illumination equalization processing are carried out on the face picture, and landmark is carried out on the face picture by utilizing a gradient histogram HOG algorithm and an SVM so as to generate an image Y corresponding to a complete face;
s102, aiming at each complete face picture, selecting eye region parts in a frame mode, setting the gray level of the face of the non-frame selected part to zero, and generating an eye region image Y
S103, with the empty dictionary as the initial dictionary, taking the complete-face image Y and its corresponding eye-region image Y^ as dictionary input and solving formula (1)

min_{D, D^, X} ||Y − DX||²_F + β·||Y^ − D^X||²_F   s.t. ||x_i||₀ ≤ T for every column x_i of X    (1)

to obtain the mutually associated optimized dictionaries D and D^ corresponding to the complete-face image Y and the eye-region image Y^, wherein β is the weight value for eye-region training.
2. The method for restoring a whole face from a local face area according to claim 1, characterized in that step S2 specifically comprises the following sub-steps:
s201, selecting a shielded target face image in a frame, and carrying out graying and illumination equalization processing; performing landmark marking on the target human face image by using a gradient histogram HOG algorithm, reserving the gray value of the pixel of the characteristic partial image of the eye region and setting the gray value of the shielded partial pixel to zero to obtain an image Y corresponding to the target human eye region';
S202, inputting the target eye-region image Y^' into the dictionary D^ and querying according to formula (2) to obtain the sparse coefficient X corresponding to Y^';
s203, the image Y of the eye region of the target person^Inputting corresponding sparse coefficient X into dictionary D, and performing reverse query to obtain target human eye region image Y^'corresponding full face image Y'.
3. The method for restoring a whole face from a local face area according to claim 1 or 2, characterized in that the weight value β ranges from 80 to 150 and the order of the sparse coefficient X ranges from 20 to 50.
4. The method for restoring a whole face from a local face area according to claim 1, characterized in that step S1 specifically comprises the following sub-steps:
s101', a large number of face pictures are crawled from the Internet by utilizing a web crawler technology, or the face pictures are obtained by police, and the data capacity is more than a million level; graying and illumination equalization processing are carried out on the face picture, and landmark marking is carried out on the face picture by utilizing a gradient histogram HOG algorithm to generate an image Y corresponding to a complete face;
s102', for each complete human face picture, selecting the eye region characteristic part by frame, reserving the pixel gray value of the eye region characteristic part image, and simultaneously setting the gray value of the human face of the non-frame selected part to zero to generate an eye region image Y^0(ii) a Shifting upward by a designated proportion by taking the eye region characteristic part selected by the frame as a reference, reserving the gray value of the image pixel of the selected part after shifting, and setting the gray values of the pixels of other parts outside the selected area to zero to obtain an eye region image Y shifted upward^1'; downwards shifting the characteristic part of the eye region as a reference by a designated proportion, reserving the gray value of the image pixel of the selected region after shifting, and setting the gray values of other pixels outside the selected region to zero to obtain a downwards shifted target human eye region image Y^2';
S103', with the empty dictionary as the initial dictionary, taking the complete-face image Y and its corresponding eye-region image Y^0 as dictionary input and solving formula (1) to obtain the optimized dictionaries D0 and D^0 corresponding to the complete-face image Y and the eye-region image Y^0;
in the same way, taking the complete-face image Y and the corresponding upward-shifted eye-region image Y^1 as dictionary input and solving formula (1) to obtain the optimized dictionaries D1 and D^1; and taking the complete-face image Y and the corresponding downward-shifted eye-region image Y^2 as dictionary input and solving formula (1) to obtain the optimized dictionaries D2 and D^2.
5. The method for restoring a whole face from a local face area according to claim 4, characterized in that step S2 specifically comprises the following sub-steps:
s201', selecting a shielded target face image from a frame, and carrying out graying and illumination equalization processing; performing landmark marking on the target human face image by using a gradient histogram HOG algorithm, reserving the gray value of the image pixels of the eye region and setting the gray value of the shielded part of pixels to zero to obtain an image Y corresponding to the target human eye region';
S202', inputting the target eye-region image Y^' into the dictionaries D^0, D^1, and D^2 respectively and querying according to formula (2) to obtain the corresponding sparse coefficients X0, X1, and X2;
S203', inputting the sparse coefficients X0, X1, and X2 corresponding to the target eye-region image Y^' into the dictionaries D0, D1, and D2 respectively and reverse-querying to obtain the candidate complete-face images Y0', Y1', and Y2' corresponding to Y^'.
6. The method for restoring a whole face from a local face area according to claim 4 or 5, characterized in that the specified proportion is 10% of the height of the frame-selected local feature part.
7. The method for restoring a whole face from a local face area according to claim 3, characterized in that the method is realized by an integrated circuit, an embedded circuit, or cloud-server software.
8. The method for restoring a whole face from a local face area according to claim 6, characterized in that the method is realized by an integrated circuit, an embedded circuit, or cloud-server software.
CN201710181236.3A 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area Active CN107066955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710181236.3A CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710181236.3A CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Publications (2)

Publication Number Publication Date
CN107066955A CN107066955A (en) 2017-08-18
CN107066955B 2020-07-17

Family

ID=59618228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710181236.3A Active CN107066955B (en) 2017-03-24 2017-03-24 Method for restoring whole human face from local human face area

Country Status (1)

Country Link
CN (1) CN107066955B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019153175A1 (en) * 2018-02-08 2019-08-15 国民技术股份有限公司 Machine learning-based occluded face recognition system and method, and storage medium
CN109063506B (en) * 2018-07-09 2021-07-06 江苏达实久信数字医疗科技有限公司 Privacy processing method for medical operation teaching system
CN111353943B (en) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN110457990B (en) * 2019-06-19 2020-06-12 特斯联(北京)科技有限公司 Machine learning security monitoring video occlusion intelligent filling method and system
CN111093029B (en) * 2019-12-31 2021-07-06 深圳云天励飞技术有限公司 Image processing method and related device
CN113222830A (en) * 2021-03-05 2021-08-06 北京字跳网络技术有限公司 Image processing method and device
CN113486394B (en) * 2021-06-18 2023-05-16 武汉科技大学 Privacy protection and tamper-proof method and system based on face block chain

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799870A (en) * 2012-07-13 2012-11-28 复旦大学 Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding
CN105550634A (en) * 2015-11-18 2016-05-04 广东微模式软件股份有限公司 Facial pose recognition method based on Gabor features and dictionary learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799870A (en) * 2012-07-13 2012-11-28 复旦大学 Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding
CN105550634A (en) * 2015-11-18 2016-05-04 广东微模式软件股份有限公司 Facial pose recognition method based on Gabor features and dictionary learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jian Xu et al. Coupled K-SVD Dictionary Training for Super-Resolution. 2014 IEEE International Conference on Image Processing (ICIP), 2015. *
Shiming Xiang et al. Image Deblurring with Coupled Dictionary Learning. International Journal of Computer Vision, 2015, vol. 114, no. 2-3. *
Li Yimin. Face Recognition Based on an Improved Dictionary Learning Algorithm. Computer Programming Skills & Maintenance, 2014. *

Also Published As

Publication number Publication date
CN107066955A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
CN107066955B (en) Method for restoring whole human face from local human face area
US20220092882A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
JP6411510B2 (en) System and method for identifying faces in unconstrained media
JP7386545B2 (en) Method for identifying objects in images and mobile device for implementing the method
WO2019223254A1 (en) Construction method for multi-scale lightweight face detection model and face detection method based on model
WO2019237567A1 (en) Convolutional neural network based tumble detection method
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
Akhtar et al. Face spoof attack recognition using discriminative image patches
CN112766160A (en) Face replacement method based on multi-stage attribute encoder and attention mechanism
CN106127164A (en) The pedestrian detection method with convolutional neural networks and device is detected based on significance
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
WO2023082784A1 (en) Person re-identification method and apparatus based on local feature attention
CN110569756A (en) face recognition model construction method, recognition method, device and storage medium
Bhavana et al. Hand sign recognition using CNN
CN103729614A (en) People recognition method and device based on video images
CN105718873A (en) People stream analysis method based on binocular vision
CN104298974A (en) Human body behavior recognition method based on depth video sequence
Guo et al. Optimization of visual information presentation for visual prosthesis
Reddy et al. Ocularnet: deep patch-based ocular biometric recognition
CN109271941A (en) A kind of biopsy method for taking the photograph attack based on anti-screen
Kauba et al. Towards using police officers’ business smartphones for contactless fingerprint acquisition and enabling fingerprint comparison against contact-based datasets
CN112381987A (en) Intelligent entrance guard epidemic prevention system based on face recognition
CN113033305B (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN113762009B (en) Crowd counting method based on multi-scale feature fusion and double-attention mechanism
CN110008922A (en) Image processing method, unit, medium for terminal device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant