CN112541434B - Face recognition method based on central point tracking model - Google Patents

Face recognition method based on central point tracking model

Info

Publication number
CN112541434B
CN112541434B (application CN202011466389.0A)
Authority
CN
China
Prior art keywords
face
sequence
tracking
central point
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011466389.0A
Other languages
Chinese (zh)
Other versions
CN112541434A (en)
Inventor
曹攀 (Cao Pan)
杨赛 (Yang Sai)
顾全林 (Gu Quanlin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Xishang Bank Co ltd
Original Assignee
Wuxi Xishang Bank Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Xishang Bank Co ltd
Priority to CN202011466389.0A
Publication of CN112541434A (2021-03-23)
Application granted
Publication of CN112541434B (2022-04-12)
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of machine vision, and particularly discloses a face recognition method based on a central point tracking model, comprising the following steps: acquiring multiple frames of target images; performing face tracking on the target images of the previous and subsequent frames and determining the corresponding tracking sequence ID; performing face alignment on the face images of the tracking sequence ID and extracting the corresponding face features; after a tracking sequence ID ends, constructing a pre-processed face ID sequence; performing quality evaluation on the face images in the pre-processed face ID sequence and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group; and denoising the face features in the face ID sequence feature group and determining the final face recognition result by re-ranking the face features. The method can reduce the influence of factors such as illumination, crowd occlusion, and face angle on face recognition in a dynamic video environment, and improves face recognition accuracy.

Description

Face recognition method based on central point tracking model
Technical Field
The invention relates to the technical field of machine vision, in particular to a face recognition method based on a central point tracking model.
Background
With the popularization of video surveillance systems and the rapid development of computer vision technology, video-based dynamic face recognition has advanced greatly and is gradually being industrialized in fields such as intelligent transportation, smart cities, information security, and security surveillance.
Most application scenarios of dynamic face recognition involve video-to-still-image recognition, which usually takes frame-by-frame face images as input and performs recognition or verification by comparing them with a still-image face database and fusing the per-frame results by cosine distance, Euclidean distance, or majority vote. On the one hand, this frame-by-frame comparison consumes considerable server computing resources; on the other hand, it inevitably increases the recognition error rate. Meanwhile, in a dynamic video environment affected by factors such as illumination, crowd occlusion, and face angle, effectively filtering and compensating for the various face variations in the video is also key to improving the robustness of face recognition.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a face recognition method based on a central point tracking model, which can reduce the influence of factors such as illumination, crowd occlusion, image capture quality, and face angle on face recognition in a dynamic video environment and improve face recognition accuracy.
As a first aspect of the present invention, there is provided a face recognition method based on a central point tracking model, including:
step S1: acquiring multi-frame target images in a monitoring video;
step S2: carrying out face tracking on target images of the previous and next frames, determining a corresponding tracking sequence ID, and simultaneously obtaining a face detection frame and face key points;
step S3: performing face alignment on the face images of the tracking sequence ID and extracting the corresponding face features; calculating the cosine distance between the face features of the current frame and those of the previous frame within the tracking sequence ID; if the cosine distance within the same tracking sequence ID is greater than a set threshold, judging the face to be the same tracked object; if it is less than the set threshold, comparing the face features with the previous-frame face features of the other tracking sequence IDs; if a matching tracking sequence ID exists, assigning the face image to the matching tracking sequence ID; otherwise, identifying the face image as a new target face image and allocating a new tracking sequence ID;
step S4: after one tracking sequence ID ends, constructing a pre-processed face ID sequence from the face images and face attribute information contained in the tracking sequence ID, wherein the face attribute information comprises the face detection frame, face key points, and face features;
step S5: performing quality evaluation on the face images in the pre-processed face ID sequence, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group;
step S6: denoising the face features in the cleaned face ID sequence feature group, performing identity comparison of the face features against the face database, and determining the final face recognition result by re-ranking the face features.
Further, the step S1 further includes:
the monitoring video is acquired by a camera in real time.
Further, the step S2 further includes:
taking the target images of the previous and subsequent frames as input to the central-point-based tracking model to obtain the face and face key point heatmaps

$$\hat{Y} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times (1+c)}$$

wherein W is the width of the target image, H is its height, R is the output size scaling, 1 denotes the face center point heatmap, and c denotes the c face key point heatmaps; the obtained heatmaps thus comprise one face center point heatmap and c face key point heatmaps;
and acquiring the face center point from the face center point heatmap and the c face key points from the c face key point heatmaps, additionally outputting the width and height of the face detection frame, the offset of the face center point, and the offsets of the face key points, and obtaining the corresponding tracking sequence ID from the face center point, the detection frame width and height, and the center point offset.
Further, the step S5 further includes:
evaluating the brightness, sharpness, completeness, and face angle attributes of the face images in the pre-processed face ID sequence through a face quality evaluation mechanism, performing weighted summation of the per-attribute evaluation scores, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group, the score formula being equation (1):

$$S = \sum_{i} w_i Q_i \qquad (1)$$

wherein S is the final quality evaluation score of a face image in the pre-processed face ID sequence, w_i is the weight of the corresponding attribute, and Q_i is the evaluation score of the i-th attribute.
Further, the step S6 further includes:
performing internal denoising on the face features in the cleaned face ID sequence feature group, and eliminating images whose features lie far from the rest of the same ID sequence so as to optimize the face ID sequence;
comparing the optimized face ID sequence against the face database to generate candidate objects;
and further re-ranking the candidate objects and obtaining the final face recognition result by counting the rankings across the same face ID sequence.
The face recognition method based on the central point tracking model provided by the invention has the following advantages: a three-in-one end-to-end model for face detection, key point detection, and tracking is realized, effectively reducing resource overhead; the DeepSORT idea is integrated into the central point tracking model, further reducing the probability of tracking errors between the face images of consecutive frames; the face ID sequence is optimized through a quality model that selects high-quality face images, and this preprocessing markedly reduces recognition errors that could be introduced at the image input end; for face feature comparison, face ID sequence features replace single-frame face features in the comparison against the face database, and a re-ranking (ReRanking) strategy is incorporated on this basis, further improving recognition accuracy; the method can therefore effectively reduce the influence of complex environments on the face recognition success rate, improves face recognition accuracy, is suitable for complex scenarios with high accuracy requirements such as banks, airports, and intelligent surveillance, and has good value for popularization and application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Fig. 1 is a flowchart of a face recognition method based on a central point tracking model according to the present invention.
Fig. 2 is a flowchart for determining a face tracking sequence ID according to the present invention.
Fig. 3 is a flowchart of a specific embodiment of the face recognition method based on the central point tracking model according to the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to the embodiments, structures, features and effects of the center point tracking model-based face recognition method according to the present invention with reference to the accompanying drawings and preferred embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
In this embodiment, a face recognition method based on a central point tracking model is provided, as shown in fig. 1, the face recognition method based on the central point tracking model includes:
step S1: acquiring multi-frame target images in a monitoring video;
the multi-frame target images are acquired from video captured by a camera in real time or from stored surveillance video;
step S2: carrying out face tracking on target images of the previous and next frames, determining a corresponding tracking sequence ID, and simultaneously obtaining a face detection frame and face key points;
step S3: performing face alignment on the face images of the tracking sequence ID and extracting the corresponding face features; calculating the cosine distance between the face features of the current frame and those of the previous frame within the tracking sequence ID; if the cosine distance within the same tracking sequence ID is greater than a set threshold, judging the face to be the same tracked object; if it is less than the set threshold, comparing the face features with the previous-frame face features of the other tracking sequence IDs; if a matching tracking sequence ID exists, assigning the face image to the matching tracking sequence ID; otherwise, identifying the face image as a new target face image and allocating a new tracking sequence ID;
step S4: after one tracking sequence ID ends, constructing a pre-processed face ID sequence from the face images and face attribute information contained in the tracking sequence ID, wherein the face attribute information comprises the face detection frame, face key points, and face features;
step S5: performing quality evaluation on the face images in the pre-processed face ID sequence, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group;
step S6: denoising the face features in the cleaned face ID sequence feature group, performing identity comparison of the face features against the face database, and determining the final face recognition result by re-ranking the face features.
Preferably, in step S1, the method further includes:
the monitoring video is acquired by a camera in real time.
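For illustration, a minimal sketch of this acquisition step in Python with OpenCV (the patent names no specific capture library; the video source and frame-skip interval are assumptions):

```python
import cv2

def acquire_frames(source, frame_interval=1):
    """Yield target frames from a live camera or a stored surveillance video.

    source: camera index (e.g. 0), RTSP URL, or video file path (assumed forms).
    frame_interval: keep every N-th frame; the patent does not fix a sampling rate.
    """
    cap = cv2.VideoCapture(source)
    idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_interval == 0:
            yield frame  # BGR image of shape (H, W, 3)
        idx += 1
    cap.release()
```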
Preferably, as shown in fig. 2-3, the step S2 further includes:
training a face-center-point tracking model built on the CenterPoint tracking framework;
taking the target images of the previous and subsequent frames as input to the central point tracking model (CenterPoint) to obtain the face and face key point heatmaps

$$\hat{Y} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times (1+c)}$$

wherein W is the width of the target image, H is its height, R is the output size scaling, 1 denotes the face center point heatmap, and c denotes the c face key point heatmaps; the obtained heatmaps thus comprise one face center point heatmap and c face key point heatmaps;
acquiring the face center point cPoint(x, y) from the face center point heatmap and the c face key points c × kPoint(x, y) from the c face key point heatmaps, additionally outputting the width and height hw(h_f, w_f) of the face detection frame, the offset coffset(x_f, y_f) of the face center point, and the offsets of the face key points, and obtaining the corresponding tracking sequence ID from the face center point, the detection frame width and height hw(h_f, w_f), and the center point offset coffset(x_f, y_f);
The CenterPoint model thus obtains the face and the face key points while tracking, in preparation for the subsequent face recognition steps.
Note that the face frame coordinates bbox(x_lt, y_lt, x_rb, y_rb) are obtained from the face detection frame width and height hw(h_f, w_f) and the face center point offset coffset(x_f, y_f);
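A minimal NumPy sketch of the box decoding just described: peaks of the face center point heatmap give cPoint, the hw head gives (h_f, w_f), and the coffset head gives the sub-pixel shift. The 3×3 local-maximum test, the 0.3 score threshold, and R = 4 are assumptions in the CenterNet style, not values fixed by the patent:

```python
import numpy as np

def decode_face_boxes(center_heatmap, wh, offset, stride=4, score_thresh=0.3):
    """Decode face boxes bbox(x_lt, y_lt, x_rb, y_rb) from a center heatmap.

    center_heatmap: (H/R, W/R) face-center scores in [0, 1].
    wh:             (H/R, W/R, 2) predicted box height/width (h_f, w_f).
    offset:         (H/R, W/R, 2) sub-pixel center offset (x_f, y_f).
    stride:         output size scaling R (assumed R = 4, as in CenterNet).
    """
    boxes = []
    h, w = center_heatmap.shape
    for y in range(h):
        for x in range(w):
            s = center_heatmap[y, x]
            if s < score_thresh:
                continue
            # keep only 3x3 local maxima: each peak is a candidate face center cPoint
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            if s < center_heatmap[y0:y1, x0:x1].max():
                continue
            cx = (x + offset[y, x, 0]) * stride   # center refined by coffset(x_f, y_f)
            cy = (y + offset[y, x, 1]) * stride
            bh, bw = wh[y, x] * stride            # box size from hw(h_f, w_f)
            boxes.append((cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2, s))
    return boxes
```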
Based on the idea of CenterNet regressing human-body key points in pose estimation, the same scheme is applied to face key point detection, with the generation target

$$\hat{K} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times c}$$

where c denotes the number of face key points, generating the c face key point heatmaps;
the preliminary key point coordinates PointK(x_k, y_k) are acquired from the face center point cPoint(x, y) and the face key point offsets coffset(x_k, y_k); the final face key points are then obtained by matching these against the key point coordinates in the face key point heatmaps;
these two sets of coordinates are used for the face alignment and face feature extraction in step S3, and the face key point coordinates are reused in step S4 to estimate the face angle and screen out problem images.
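A sketch of this refinement step under the same assumptions: each regressed PointK(x_k, y_k) is snapped to the nearest sufficiently confident peak of its keypoint heatmap, following the CenterNet pose-estimation idea; the confidence threshold is an assumed value:

```python
import numpy as np

def refine_keypoints(kp_heatmaps, regressed_kps, stride=4, conf_thresh=0.1):
    """Snap each regressed keypoint to the closest peak of its heatmap.

    kp_heatmaps:   (c, H/R, W/R) one heatmap per face key point.
    regressed_kps: (c, 2) preliminary PointK(x_k, y_k) in input-pixel coordinates,
                   obtained from the face center plus the keypoint offsets coffset.
    """
    refined = []
    for k, (rx, ry) in enumerate(regressed_kps):
        hm = kp_heatmaps[k]
        ys, xs = np.where(hm > conf_thresh)       # candidate peaks on this heatmap
        if len(xs) == 0:
            refined.append((rx, ry))              # fall back to the regressed point
            continue
        px, py = xs * stride, ys * stride         # candidates in input-pixel coords
        d2 = (px - rx) ** 2 + (py - ry) ** 2
        j = int(np.argmin(d2))                    # nearest confident heatmap peak
        refined.append((float(px[j]), float(py[j])))
    return np.array(refined)
```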
Preferably, in step S3, the method further includes:
drawing on DeepSORT's idea of feature comparison over a tracking sequence, performing further processing on the ID sequence, with face alignment applied to the face image using the face key points and their offsets;
inputting the aligned face into an InsightFace face recognition model and extracting the face features; the face features matched in the current frame and those of the previous frame are substituted into the cosine formula to compute their distance, which determines whether the face is the same as in the previous frame: if the cosine distance within the same ID is greater than the threshold, it is judged to be the same tracked object; if it is less than the threshold, it is compared with the previous-frame face features of the other IDs; if a matching ID exists, the face is assigned to the matching ID sequence; otherwise it is identified as a new target and allocated a new ID.
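A simplified sketch of this ID-assignment rule; the 0.5 threshold is an assumption (the patent only says "the threshold"), and the patent's full DeepSORT-style logic may be richer than this nearest-feature match:

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class FaceTrackMatcher:
    """Keeps the last-frame face feature of each tracking sequence ID."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold   # the patent's "set threshold" (value assumed)
        self.last_feat = {}          # tracking sequence ID -> previous-frame feature
        self._next_id = 0

    def assign(self, feat, candidate_id):
        last = self.last_feat.get(candidate_id)
        if last is not None and cosine_sim(feat, last) > self.threshold:
            self.last_feat[candidate_id] = feat     # same tracked object, keep its ID
            return candidate_id
        # below threshold: compare with the last features of the other tracks
        best_id, best_sim = None, self.threshold
        for tid, other in self.last_feat.items():
            if tid == candidate_id:
                continue
            s = cosine_sim(feat, other)
            if s > best_sim:
                best_id, best_sim = tid, s
        if best_id is not None:
            self.last_feat[best_id] = feat          # fold into the matching track
            return best_id
        self._next_id += 1                          # new target: allocate a new ID
        self.last_feat[self._next_id] = feat
        return self._next_id
```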
Preferably, in step S4, the method further includes:
tracking each frame of the video to obtain the faces it contains and storing each face into a face ID sequence according to the tracking result; after a tracking ID sequence ends, assembling the face images and face attribute information it contains (face frames, key points, and face features), the cleaned face images and face attributes forming the pre-processed face ID sequence.
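One possible container for the pre-processed face ID sequence; the field names are hypothetical, while the contents (face image, detection frame, key points, features) follow the patent:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class FaceRecord:
    image: np.ndarray        # aligned face crop from one frame
    bbox: tuple              # face detection frame (x_lt, y_lt, x_rb, y_rb)
    keypoints: np.ndarray    # (c, 2) face key points
    feature: np.ndarray      # face feature vector

@dataclass
class FaceIDSequence:
    track_id: int
    records: List[FaceRecord] = field(default_factory=list)

    def append(self, record: FaceRecord):
        self.records.append(record)
```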
Preferably, in step S5, the method further includes:
evaluating the brightness, sharpness, completeness, and face angle attributes of the face images in the pre-processed face ID sequence through a face quality evaluation mechanism, performing weighted summation of the per-attribute evaluation scores, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group, where Num is 10 in this embodiment; the score formula is equation (1):

$$S = \sum_{i} w_i Q_i \qquad (1)$$

wherein S is the final quality evaluation score of a face image in the pre-processed face ID sequence, w_i is the weight of the corresponding attribute, and Q_i is the evaluation score of the i-th attribute.
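A sketch of the scoring and selection of equation (1); the weight values are placeholders (the patent does not fix the w_i), and Num = 10 follows this embodiment:

```python
# attribute weights w_i (assumed values; the patent leaves them unspecified)
WEIGHTS = {"brightness": 0.2, "sharpness": 0.3, "completeness": 0.2, "angle": 0.3}

def quality_score(attribute_scores):
    """Equation (1): S = sum_i w_i * Q_i over per-attribute scores Q_i in [0, 1]."""
    return sum(WEIGHTS[k] * attribute_scores[k] for k in WEIGHTS)

def select_best(records, attr_scores_of, num=10):
    """Keep the face features of the Num highest-quality faces (Num = 10 here).

    records:        FaceRecord-like objects with a .feature attribute.
    attr_scores_of: function mapping a record to its dict of Q_i scores.
    """
    ranked = sorted(records, key=lambda r: quality_score(attr_scores_of(r)),
                    reverse=True)
    return [r.feature for r in ranked[:num]]
```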
Preferably, in step S6, the method further includes:
performing internal ranking-based denoising on the face features in the cleaned face ID sequence feature group, and eliminating images whose features lie far from the rest of the same ID sequence so as to optimize the face ID sequence;
comparing the optimized face ID sequence against the face database, generating N candidate objects for each image;
based on the ReRanking idea, further re-ranking the N candidate objects and obtaining the final face recognition result by counting the rankings across the same face ID sequence.
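A simplified sketch of this S6 pipeline: internal denoising by distance to the sequence mean, per-image generation of N database candidates, and a majority vote across the sequence as the re-ranking. The drop threshold and N are assumptions, and the patent's ReRanking step may be more elaborate:

```python
from collections import Counter
import numpy as np

def recognize(seq_feats, db_feats, db_labels, drop_thresh=0.4, top_n=5):
    """seq_feats: (m, d) cleaned face ID sequence feature group.
    db_feats:  (k, d) L2-normalized reference face database.
    db_labels: length-k identity labels."""
    F = np.asarray(seq_feats, dtype=np.float64)
    F /= np.linalg.norm(F, axis=1, keepdims=True)
    # internal denoising: drop features far (low cosine) from the sequence mean
    mean = F.mean(axis=0)
    mean /= np.linalg.norm(mean)
    keep = F @ mean > drop_thresh
    F = F[keep] if keep.any() else F
    # each remaining feature generates N candidates by database comparison
    sims = F @ np.asarray(db_feats).T            # (m', k) cosine similarities
    votes = Counter()
    for row in sims:
        for j in np.argsort(row)[::-1][:top_n]:  # top-N candidates for this image
            votes[db_labels[j]] += 1
    # re-rank: the identity with the most votes across the sequence wins
    return votes.most_common(1)[0][0]
```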
The face recognition method based on the central point tracking model provided by the invention realizes a three-in-one end-to-end model for face detection, key point detection, and tracking, effectively reducing resource overhead; the DeepSORT idea is integrated into the central point tracking model, further reducing the probability of tracking errors between the face images of consecutive frames; the face ID sequence is optimized through a quality model that selects high-quality face images, and this preprocessing markedly reduces recognition errors that could be introduced at the image input end; for face feature comparison, face ID sequence features replace single-frame face features in the comparison against the face database, and a re-ranking (ReRanking) strategy is incorporated on this basis, further improving recognition accuracy; the method can therefore effectively reduce the influence of complex environments on the face recognition success rate, improves face recognition accuracy, is suitable for complex scenarios with high accuracy requirements such as banks, airports, and intelligent surveillance, and has good value for popularization and application.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (4)

1. A face recognition method based on a central point tracking model is characterized by comprising the following steps:
step S1: acquiring multi-frame target images in a monitoring video;
step S2: carrying out face tracking on target images of the previous and next frames, determining a corresponding tracking sequence ID, and simultaneously obtaining a face detection frame and face key points;
step S3: performing face alignment on the face image of the tracking sequence ID and extracting the corresponding face features; calculating the cosine distance between the face feature of the current frame and the face feature of the previous frame in the tracking sequence ID; if the cosine distance is greater than a set threshold, judging the face to be the same tracked object; if the cosine distance is less than the set threshold, comparing the face feature of the current frame with the previous-frame face features of the other tracking sequence IDs; if a matching tracking sequence ID exists, assigning the face image corresponding to the current-frame face feature to the matching tracking sequence ID; otherwise, identifying it as a new target face image and allocating a new tracking sequence ID;
step S4: after one tracking sequence ID ends, constructing a pre-processed face ID sequence from the face images and face attribute information contained in the tracking sequence ID, wherein the face attribute information comprises the face detection frame, face key points, and face features;
step S5: performing quality evaluation on the face images in the pre-processed face ID sequence, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group;
step S6: denoising the face features in the cleaned face ID sequence feature group, performing identity comparison of the face features against the face database, and determining the final face recognition result by re-ranking the face features;
wherein the step S2 includes: taking the target images of the previous and subsequent frames as input to the central-point-based tracking model to obtain the face and face key point heatmaps

$$\hat{Y} \in [0,1]^{\frac{W}{R} \times \frac{H}{R} \times (1+c)}$$

the obtained heatmaps comprising one face center point heatmap and c face key point heatmaps, wherein W is the width of the target image, H is its height, R is the output size scaling, 1 denotes the face center point heatmap, and c denotes the c face key point heatmaps;
and acquiring the face center point from the face center point heatmap and the c face key points from the c face key point heatmaps, additionally outputting the width and height of the face detection frame, the offset of the face center point, and the offsets of the face key points, and obtaining the corresponding tracking sequence ID from the face center point, the detection frame width and height, and the center point offset.
2. The method for recognizing a face based on a center point tracking model according to claim 1, wherein the step S1 further comprises:
the monitoring video is acquired by a camera in real time.
3. The method for recognizing a face based on a center point tracking model according to claim 1, wherein the step S5 further comprises:
evaluating the brightness, sharpness, completeness, and face angle attributes of the face images in the pre-processed face ID sequence through a face quality evaluation mechanism, performing weighted summation of the per-attribute evaluation scores, and selecting the face features of the Num highest-quality face images as the cleaned face ID sequence feature group, the score formula being equation (1):

$$S = \sum_{i} w_i Q_i \qquad (1)$$

wherein S is the final quality evaluation score of a face image in the pre-processed face ID sequence, w_i is the weight of the corresponding attribute, and Q_i is the evaluation score of the i-th attribute.
4. The method for recognizing a face based on a center point tracking model according to claim 1, wherein the step S6 further comprises:
performing internal denoising on the face features in the cleaned face ID sequence feature group, and eliminating images whose features lie far from the rest of the same ID sequence so as to optimize the face ID sequence;
comparing the optimized face ID sequence against the face database to generate candidate objects;
and re-ranking the candidate objects and obtaining the final face recognition result by counting the rankings across the same face ID sequence.
CN202011466389.0A 2020-12-14 2020-12-14 Face recognition method based on central point tracking model Active CN112541434B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011466389.0A (CN112541434B) | 2020-12-14 | 2020-12-14 | Face recognition method based on central point tracking model

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011466389.0A (CN112541434B) | 2020-12-14 | 2020-12-14 | Face recognition method based on central point tracking model

Publications (2)

Publication Number | Publication Date
CN112541434A (en) | 2021-03-23
CN112541434B (en) | 2022-04-12

Family

ID=75018539

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011466389.0A (Active, CN112541434B) | Face recognition method based on central point tracking model | 2020-12-14 | 2020-12-14

Country Status (1)

Country Link
CN (1) CN112541434B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113283305B * | 2021-04-29 | 2024-03-26 | 百度在线网络技术(北京)有限公司 (Baidu Online Network Technology (Beijing) Co., Ltd.) | Face recognition method, device, electronic equipment and computer readable storage medium
CN112990167B * | 2021-05-19 | 2021-08-10 | 北京焦点新干线信息技术有限公司 (Beijing Jiaodian Xinganxian Information Technology Co., Ltd.) | Image processing method and device, storage medium and electronic equipment
CN113255627B * | 2021-07-15 | 2021-11-12 | 广州市图南软件科技有限公司 (Guangzhou Tunan Software Technology Co., Ltd.) | Method and device for quickly acquiring information of trailing personnel
CN113674318A * | 2021-08-16 | 2021-11-19 | 支付宝(杭州)信息技术有限公司 (Alipay (Hangzhou) Information Technology Co., Ltd.) | Target tracking method, device and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102306290A * | 2011-10-14 | 2012-01-04 | 刘伟华 (Liu Weihua) | Face tracking recognition technique based on video
WO2017016516A1 * | 2015-07-24 | 2017-02-02 | 上海依图网络科技有限公司 (Shanghai Yitu Network Technology Co., Ltd.) | Method for face recognition-based video human image tracking under complex scenes
CN108388885A * | 2018-03-16 | 2018-08-10 | 南京邮电大学 (Nanjing University of Posts and Telecommunications) | Real-time multi-person feature recognition and automatic screenshot method for large-scale live scenes
CN108734107A * | 2018-04-24 | 2018-11-02 | 武汉幻视智能科技有限公司 (Wuhan Huanshi Intelligent Technology Co., Ltd.) | Multi-object tracking method and system based on faces

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP6973258B2 * | 2018-04-13 | 2021-11-24 | オムロン株式会社 (Omron Corporation) | Image analyzers, methods and programs


Also Published As

Publication number | Publication date
CN112541434A (en) | 2021-03-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant