CN113705393A - 3D face model-based depression angle face recognition method and system - Google Patents
- Publication number
- CN113705393A (application CN202110935424.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- depression angle
- angle
- model
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/213—Pattern recognition: feature extraction, e.g. by transforming the feature space; summarisation; mappings, e.g. subspace methods
- G06F18/214—Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045—Neural networks: combinations of networks
- G06N3/08—Neural networks: learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computational Linguistics (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a depression angle face recognition method and system based on a 3D face model. First, clear frontal face pictures are collected to build a face sample library, and a 3D face model is generated from each frontal picture in the library. Then, a face pose estimation algorithm estimates the angle of the depression-angle face picture to be recognized, and the 3D face model is rotated to the same angle as that picture. Finally, the generated depression-angle face and the depression-angle face to be recognized are input into a face recognition network for recognition. Aimed at face recognition in real overhead surveillance scenes, the method significantly improves depression-angle face recognition accuracy.
Description
Technical Field
The invention belongs to the technical field of computer vision and relates to a depression angle face recognition method and system for surveillance video, in particular to a depression angle face recognition method and system based on a 3D face model.
Background Art
Public security organs record video through surveillance cameras and use face recognition technology to track suspects and establish the identity of offenders. However, public surveillance cameras are usually mounted at relatively high positions such as utility poles and eaves, so they shoot in an overhead posture, and the face pictures collected from overhead footage are often side-on, taken at a depression angle, and of low definition. Depression-angle faces suffer from missing chin information and severe deformation, so the performance of common face recognition methods such as ArcFace and FaceNet drops sharply.
Current multi-pose face recognition schemes rely mainly on face frontalization or multi-frame information fusion. Given a face image at an arbitrary angle, multi-pose face frontalization aims to synthesize the frontal face algorithmically, and generative adversarial networks (GANs) have become the mainstream approach: exploiting the left-right symmetry of the human face, GAN-based methods correct a captured side-face picture to the frontal pose to improve recognition accuracy. However, an overhead face lacks the usable bilateral symmetry of a side face, and the self-occluded chin region is hard to estimate, so directly applying GAN-based multi-pose schemes to overhead face recognition does not achieve the expected effect. In addition, training a GAN requires massive data, and a training set that is too small yields low-quality generated frontal pictures. Video-sequence-based face recognition fuses complementary multi-frame face pictures at different angles into a single recognition feature, so it cannot recognize a single depression-angle face.
In short, surveillance cameras in public security settings are usually installed high up, and the depression-angle face images they capture are difficult for existing face recognition systems to identify accurately; an effective scheme for face recognition in overhead surveillance scenes is urgently needed.
Disclosure of Invention
To solve this technical problem, the invention combines a 3D face model with face pose estimation: a frontal face picture in the face sample library is converted to the same angle as the depression-angle face to be recognized, and recognition is then performed. Because a 3D face model can be rotated and rendered to any angle without losing detail, a 3D face model is first built from each clear frontal picture in the sample library; a face pose estimation algorithm then estimates the angle of the depression-angle face picture to be recognized, and the 3D face model is rotated to that same angle; finally, the generated depression-angle face and the depression-angle face picture to be recognized are input into a face recognition network for recognition. Turning the frontal face into a depression-angle face converts a picture with more information into one with less information, which effectively avoids face distortion.
The method of the invention adopts the following technical scheme: a depression angle face recognition method based on a 3D face model, comprising the following steps:
Step 1: constructing a depression-angle face sample library;
collecting a frontal face picture, inputting it into a 3D face reconstruction network to generate a 3D face model, rotating the model at preset angle intervals, remapping it back to 2D face pictures, and storing them in the depression-angle face sample library;
Step 2: when a new depression-angle face picture to be recognized is input, estimating its depression angle and selecting, from the depression-angle face sample library, the face pictures with the closest angle for face recognition.
The system of the invention adopts the following technical scheme: a depression angle face recognition system based on a 3D face model, comprising the following modules:
module 1, used for constructing a depression-angle face sample library:
collecting a frontal face picture, inputting it into a 3D face reconstruction network to generate a 3D face model, rotating the model at preset angle intervals, remapping it back to 2D face pictures, and storing them in the depression-angle face sample library;
module 2, used for estimating, when a depression-angle face picture to be recognized is newly input, its depression angle, and selecting, from the depression-angle face sample library, the face pictures with the closest angle for face recognition.
In existing pose-correction-based multi-pose face recognition schemes, the face library stores frontal pictures; at recognition time, the input pose-varied face is corrected to a frontal face and then compared against the frontal faces in the library. Correcting a pose-varied face to a frontal face converts an object with information loss into an object with complete information, so distortion is inevitable and subsequent recognition accuracy suffers. The invention adopts the opposite strategy: faces of different poses are stored directly in the face library (multi-pose versions of the frontal face are generated by 3D modeling), and the input pose-varied face is recognized directly without prior correction, avoiding the distortion introduced by pose correction. Compared with existing multi-pose face recognition methods, the invention therefore has the following advantages and positive effects:
(1) The invention provides a depression-angle face recognition scheme based on a 3D face model, which turns the frontal face into a depression-angle face. The scheme overcomes the lack of symmetry information in depression-angle faces: turning a frontal face into a depression-angle face converts a picture with more information into one with less information, effectively avoiding face distortion.
(2) The method needs no depression-angle face dataset for training. The face is reconstructed with a generic 3D face model, which requires only a frontal face picture, and the face key-point detection needed to compute the rotation angle requires only a generic face dataset. The method therefore uses generic face datasets throughout training, avoiding the shortage of depression-angle face data.
Drawings
FIG. 1: schematic block diagram of the method of the embodiment of the invention.
FIG. 2: flow chart of constructing the depression-angle face sample library.
FIG. 3: flow chart of face recognition.
FIG. 4: face picture samples collected at different overhead angles in the embodiment of the invention.
Detailed Description
To facilitate the understanding and implementation of the present invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here serve only to illustrate and explain the invention, not to limit it.
Constructing the sample library of a depression-angle face recognition system differs from that of a normal face recognition system. An ordinary face recognition sample library needs only frontal faces, with the ID and name information recorded for each face; then, during recognition, a newly input face picture is matched directly against all faces in the library. The sample-library construction proposed by the invention adds the steps of 3D face reconstruction and depression-angle face picture synthesis.
The recognition step also differs from that of a normal face recognition system. Ordinary face recognition compares the face to be recognized directly with all face pictures in the sample library. The depression-angle face recognition system instead estimates the angle of the input face picture and compares it with the stored face picture closest to that angle under each user ID in the sample library.
Referring to fig. 1, the depression angle face recognition method based on a 3D face model provided by the invention includes the following steps:
Step 1: constructing the depression-angle face sample library;
First, a frontal face picture is shot with a camera and input into a 3D face reconstruction network to generate a 3D face model; the model is then rotated at fixed angle intervals and remapped back to 2D face pictures, which are stored in the depression-angle face sample library.
In this embodiment, the 3D face reconstruction network uses an existing network such as PRNet or 3DDFA-V2.
Referring to fig. 2, in the present embodiment, the specific implementation of step 1 includes the following sub-steps:
Step 1.1: first, shoot a high-definition frontal face picture for the new user ID with a camera.
Step 1.2: detect the face box in the frontal picture with the face detection algorithm RetinaFace, crop the face picture according to the box, and generate a 3D face model from the cropped face with a 3D face reconstruction network.
First, the face alignment algorithm Face_Alignment regresses the face key-point coordinates (68, 96, 106, or denser key points); the 2D face is then mapped onto the 3D face model according to these coordinates, giving the 3D shape information V = [v1, v2, ..., vn], where n is the number of vertices of the 3D face model and vi = [xi, yi, zi]^T is the spatial position of vertex i; the face texture information T = [t1, t2, ..., tn] is then obtained by texture-coordinate mapping, where ti = [ri, gi, bi]^T is the texture color of vertex i; finally, the shape information V and the texture information T are fused into the final 3D face model M = {V, T}.
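The fusion of shape V and texture T into M = {V, T} described above can be sketched as follows; this is an illustrative outline only, and the function and variable names are assumptions, not part of the patent:

```python
import numpy as np

def build_face_model(vertices, colors):
    """Fuse 3D shape V (vertex positions) and texture T (vertex colors)
    into a model M = {V, T}, as in the step described above."""
    V = np.asarray(vertices, dtype=np.float64).reshape(-1, 3)  # n x 3, rows v_i = [x, y, z]
    T = np.asarray(colors, dtype=np.float64).reshape(-1, 3)    # n x 3, rows t_i = [r, g, b]
    assert V.shape == T.shape, "shape and texture must share the vertex count n"
    return {"V": V, "T": T}

# Toy example with n = 3 vertices
model = build_face_model([[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                         [[255, 200, 180]] * 3)
print(model["V"].shape)  # (3, 3)
```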
Step 1.3: converting a 3D model established by the face front image according to an angle interval of 15 degrees, wherein the angle conversion formula is as follows:
V_transform = s * o * R * V + h
where s is the scaling factor of the 3D face model, o is an orthogonal matrix, R is the rotation matrix, and h is the offset vector.
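A minimal numerical sketch of this transformation for a pure pitch (depression-angle) rotation; the orthographic form of o and the default values of s and h are illustrative assumptions, not values from the patent:

```python
import numpy as np

def transform_vertices(V, pitch_deg, s=1.0, h=(0.0, 0.0)):
    """Apply V_transform = s * o * R * V + h to an (n, 3) vertex array,
    rotating about the x-axis by pitch_deg (the depression angle)."""
    a = np.radians(pitch_deg)
    R = np.array([[1.0, 0.0,        0.0],          # rotation matrix R (pitch about x)
                  [0.0, np.cos(a), -np.sin(a)],
                  [0.0, np.sin(a),  np.cos(a)]])
    o = np.array([[1.0, 0.0, 0.0],                 # o taken here as orthographic projection to 2D
                  [0.0, 1.0, 0.0]])
    return (s * (o @ R @ np.asarray(V).T)).T + np.asarray(h)

# A vertex on the y-axis rotates into the z-axis under a 90-degree pitch,
# so its 2D projection moves to the origin.
out = transform_vertices([[0.0, 1.0, 0.0]], 90.0)
```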
Thus, the pictures stored under each user ID in the depression-angle face sample library are the original real face picture plus the generated face pictures at different depression angles obtained through the 3D face model transformation, recorded as:
Iset = {I1, I2, ..., IN}
Θ = {θ1, θ2, ..., θn}
where Ik denotes all face-angle pictures stored in the database under user ID k, Iset denotes all pictures stored in the database, grouped by user ID, and Θ denotes the angles of the face pictures stored in the database. The specific number of angles and their values are not fixed; they must be determined through comprehensive experimental evaluation and the storage budget of the sample library.
In this embodiment, the high-definition frontal face picture and the face pictures obtained in step 1.3 are numbered with the same ID, and the files are named according to their angle information. A new user ID is created in a table of the database, and the picture information is stored under that user ID.
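An illustrative sketch of such a table and the closest-angle lookup it supports; the patent specifies MySQL, but SQLite is used here only so the sketch runs standalone, and all column names, paths, and angle values are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE face_samples (
        user_id  INTEGER NOT NULL,   -- one ID per person
        theta    REAL    NOT NULL,   -- stored face angle in degrees
        img_path TEXT    NOT NULL,   -- minimum-theta row: real photo; others: synthetic
        PRIMARY KEY (user_id, theta)
    )""")

# One user: the real frontal shot plus synthetic depression-angle renders
rows = [(1, 0.0, "u1/real_0.jpg")] + \
       [(1, a, f"u1/synth_{a:.0f}.jpg") for a in (15.0, 30.0, 45.0, 60.0, 75.0)]
conn.executemany("INSERT INTO face_samples VALUES (?, ?, ?)", rows)

# Retrieve the stored picture whose angle is closest to an estimated 52 degrees
row = conn.execute(
    "SELECT img_path FROM face_samples WHERE user_id = 1 "
    "ORDER BY ABS(theta - 52) LIMIT 1").fetchone()
print(row[0])  # u1/synth_45.jpg
```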
To increase the storage efficiency of face pictures in the face sample library and speed up face recognition, the embodiment stores the sample library in a database; the table designed by the invention uses MySQL. The table's primary key is the face-picture ID number, and each ID represents one person's group of pictures. Each ID number contains pictures of the same person at different angles, distinguished and retrieved by the key θ in the table. Only the picture path at the minimum angle θ1 points to a really captured face picture; the paths at θ2, ..., θn all point to synthesized face pictures.
Step 2: when a new face picture of the dip angle to be recognized is input, face dip angle information is estimated by using a face posture estimation algorithm, and all sample library pictures of angles close to the dip angle information are selected from a dip angle face sample library for face recognition.
In this embodiment, an existing face pose estimation algorithm such as PFLD or FSA-Net is selected.
Referring to fig. 3, in the present embodiment, the specific implementation of step 2 includes the following sub-steps:
Step 2.1: detect the position of the face in the input picture and crop the face region.
Step 2.2: perform pose estimation on the face to be recognized, estimating its depression angle θ̂ with a face pose estimation algorithm. The specific process is as follows:
Taking 98 face key points as an example, let (xi, yi) denote the coordinates of detected face key point i, and let d denote the distance from a key point to the line connecting two key points:
d = |A*x51 + B*y51 + C| / sqrt(A^2 + B^2)
where A = y1 - y31, B = x31 - x1, and C = x1*y31 - x31*y1; (x1, y1), (x31, y31), and (x51, y51) are the coordinates of face key points 1, 31, and 51, respectively. The depression angle θ̂ of the face to be recognized is then computed from d.
Then, among the face picture angles Θ stored in the database, the angle θi closest to the estimated depression angle θ̂ of the face picture to be recognized is found, namely: find θi ∈ Θ s.t. |θi - θ̂| is minimal.
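The two operations just described, the key-point line distance and the closest-angle search, can be sketched as below; the key-point coordinates and the angle set Θ are illustrative assumptions:

```python
import math

def point_line_distance(p, a, b):
    """Distance d from key point p to the line through key points a and b,
    via the line equation A*x + B*y + C = 0."""
    A = a[1] - b[1]
    B = b[0] - a[0]
    C = a[0] * b[1] - b[0] * a[1]
    return abs(A * p[0] + B * p[1] + C) / math.hypot(A, B)

def nearest_angle(theta_hat, Theta):
    """Pick the theta_i in Theta minimizing |theta_i - theta_hat|."""
    return min(Theta, key=lambda t: abs(t - theta_hat))

# Point (50, 80) lies 40 units above the horizontal line through (10, 40) and (90, 40)
d = point_line_distance((50.0, 80.0), (10.0, 40.0), (90.0, 40.0))  # 40.0
theta_i = nearest_angle(52.3, [0, 15, 30, 45, 60, 75])             # 45
```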
Step 2.3: the face picture to be recognized and, under every user ID in the sample library, the face picture at angle θi are input into the existing face recognition network ArcFace to obtain face feature vectors. The faces are compared pairwise to find the two faces with the greatest similarity.
Step 2.4: if that similarity exceeds the set threshold ξ, the two pictures with the greatest similarity belong to the same person; otherwise, the face to be recognized is not in the sample library.
In the concrete face recognition flow, to improve both the accuracy and the speed of depression-angle face recognition, an angle threshold is also set: after the pose of the face picture to be recognized is estimated with the face pose estimation algorithm, if its depression angle does not exceed this threshold, the picture is compared for similarity directly against the frontal face pictures in the sample library.
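The final matching decision can be sketched as follows; the feature extractor (ArcFace in the text) is mocked with fixed vectors, and the threshold value is an assumption:

```python
import numpy as np

def cosine_similarity(f1, f2):
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def identify(probe_feat, gallery, threshold=0.5):
    """Return the gallery user ID with the highest similarity to the probe,
    or None when even the best match falls below the threshold."""
    best_id, best_sim = None, -1.0
    for user_id, feat in gallery.items():
        sim = cosine_similarity(probe_feat, feat)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim > threshold else None

# Mocked per-ID feature vectors standing in for network embeddings
gallery = {"alice": [1.0, 0.0, 0.1], "bob": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.1], gallery))  # alice
print(identify([0.0, 0.0, 1.0], gallery))  # None (below threshold)
```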
The invention collected real depression-angle face samples for experiments; some of the samples are shown in fig. 4. Table 1 reports face recognition accuracy at different depression angles. At small depression angles such as 15° and 30°, direct ArcFace recognition is already accurate, but at large depression angles such as 45°, 60°, and 75°, the accuracy of the proposed method is clearly higher, improving by more than 10 percentage points at 75°.
TABLE 1
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (6)
1. A depression angle face recognition method based on a 3D face model, characterized by comprising the following steps:
Step 1: constructing a depression-angle face sample library;
collecting a frontal face picture, inputting it into a 3D face reconstruction network to generate a 3D face model, rotating the model at preset angle intervals, remapping it back to 2D face pictures, and storing them in the depression-angle face sample library;
Step 2: when a new depression-angle face picture to be recognized is input, estimating its depression angle and selecting, from the depression-angle face sample library, the face pictures with the closest angle for face recognition.
2. The 3D face model-based depression angle face recognition method according to claim 1, characterized in that: in step 1, a face detection algorithm detects the face box in the frontal face picture, the face picture is cropped according to the box, and a 3D face reconstruction network generates the 3D face model as follows:
first, a face alignment algorithm regresses the face key-point coordinates; the 2D face is then mapped onto the 3D face model according to these coordinates, giving the 3D shape information V = [v1, v2, ..., vn], where n is the number of vertices of the 3D face model and vi = [xi, yi, zi]^T is the spatial position of vertex i; the face texture information T = [t1, t2, ..., tn] is then obtained by texture-coordinate mapping, where ti = [ri, gi, bi]^T is the texture color of vertex i; finally, the shape information V and the texture information T are fused into the final 3D face model M = {V, T}.
3. The 3D face model-based depression angle face recognition method according to claim 2, characterized in that: in step 1, the 3D model built from the frontal face picture is transformed at 15° angle intervals to obtain the depression-angle face sample library, with the angle transformation formula:
V_transform = s * o * R * V + h
where s is the scaling factor of the 3D face model, o is an orthogonal matrix, R is a rotation matrix, and h is an offset vector;
the pictures stored under each user ID in the depression-angle face sample library comprise the original real face picture and the generated face pictures at different depression angles obtained through the 3D face model transformation, recorded as:
Iset = {I1, I2, ..., IN}
Θ = {θ1, θ2, ..., θn}
where Ik denotes all face-angle pictures stored in the database under user ID k, of which only the picture path at the minimum angle θ1 points to a really captured face picture, the paths at θ2, ..., θn all pointing to synthesized face pictures; Iset denotes all pictures stored in the database, grouped by user ID, with N the total number of users; Θ denotes the angles of the face pictures stored in the database.
4. The 3D face model-based depression angle face recognition method according to claim 1, wherein step 2 is implemented by the following steps:
Step 2.1: detecting the position of the face in the input picture and cropping the face region;
Step 2.2: performing pose estimation on the face to be recognized to estimate its depression angle θ̂, then searching, among the face picture angles Θ stored in the depression-angle face sample library, for the angle θi closest to θ̂;
Step 2.3: inputting the face picture to be recognized and, under every user ID in the depression-angle face sample library, the face picture at angle θi into a face recognition network to obtain face feature vectors, and comparing the faces pairwise to find the two faces with the greatest similarity;
Step 2.4: if the similarity is greater than a set threshold ε, the two pictures with the greatest similarity belong to the same person; otherwise, the face picture to be recognized is not in the depression-angle face sample library.
5. The 3D face model-based depression angle face recognition method according to claim 4, characterized in that: the depression angle θ̂ in step 2.2 is calculated from d, the distance from a face key point to the line connecting two face key points, where (xi, yi) denotes the coordinates of face key point i, and y51, y31 denote the y-coordinates of key points 51 and 31.
6. A depression angle face recognition system based on a 3D face model, characterized by comprising the following modules:
module 1, used for constructing a depression-angle face sample library:
collecting a frontal face picture, inputting it into a 3D face reconstruction network to generate a 3D face model, rotating the model at preset angle intervals, remapping it back to 2D face pictures, and storing them in the depression-angle face sample library;
module 2, used for estimating, when a depression-angle face picture to be recognized is newly input, its depression angle, and selecting, from the depression-angle face sample library, the face pictures with the closest angle for face recognition.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110935424.7A | 2021-08-16 | 2021-08-16 | 3D face model-based depression angle face recognition method and system
PCT/CN2021/122347 | 2021-08-16 | 2021-09-30 | High-angle facial recognition method and system based on 3D facial model
Publications (1)
Publication Number | Publication Date
---|---
CN113705393A | 2021-11-26
Family
ID=78652754
Also Published As
Publication number | Publication date
---|---
WO2023019699A1 | 2023-02-23
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination