CN110516623B - Face recognition method and device and electronic equipment

Info

Publication number
CN110516623B
CN110516623B (application CN201910810373.8A)
Authority
CN
China
Prior art keywords
image
user
face
pedestrian
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910810373.8A
Other languages
Chinese (zh)
Other versions
CN110516623A (en)
Inventor
黄洪
王国栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Weidang Life Technology Co ltd
Original Assignee
International Intelligent Machines Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Intelligent Machines Co ltd filed Critical International Intelligent Machines Co ltd
Priority to CN201910810373.8A
Publication of CN110516623A
Application granted
Publication of CN110516623B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Embodiments of the invention relate to the technical field of electronic information, and disclose a face recognition method, a face recognition apparatus, and an electronic device.

Description

Face recognition method and device and electronic equipment
Technical Field
Embodiments of the invention relate to the technical field of electronic information, and in particular to a face recognition method, a face recognition apparatus, and an electronic device.
Background
Identity recognition technology that confirms a user's identity information from the biological information of the human body offers high security, and is the development direction of future security recognition technology.
At present, identity recognition technology based on face recognition has drawn wide attention for its convenience. In practical applications, however, for example in monitoring systems in public places such as residential cells, a user's or pedestrian's face may be blocked, or the face may not be captured at all, with only the back or side of the body visible; in such cases the user's identity cannot be recognized by direct face detection.
Disclosure of Invention
In view of the foregoing defects in the prior art, an object of the embodiments of the present invention is to provide a face recognition method and apparatus, and an electronic device, which can perform face detection and recognize user identity information even when the face is blocked or not captured.
The purpose of the embodiment of the invention is realized by the following technical scheme:
in order to solve the foregoing technical problem, in a first aspect, an embodiment of the present invention provides a face recognition method, including:
acquiring an image of a user;
judging whether the face of the user exists in the image or not;
if so, judging whether the face of the user is shielded;
if it is shielded, acquiring the shielding proportion of the face of the user;
judging whether the shielding proportion is greater than or equal to a preset proportion;
if it is greater than or equal to the preset proportion, extracting, according to a preset pedestrian re-recognition algorithm, an image of a pedestrian whose walking posture matches that of the user from historical images;
determining the identity information of the user according to the extracted image;
if it is smaller than the preset proportion, restoring the face image of the user in the image according to a preset face restoration algorithm;
and determining the identity information of the user according to the restored image.
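As a minimal sketch, the branching logic of the steps above can be written as follows; the helper callables and the 0.5 threshold are illustrative assumptions, not details fixed by the embodiment:

```python
# Hedged sketch of the claimed decision flow; the helper callables stand in
# for the detection, re-recognition, restoration, and matching steps
# described in the text, and the 0.5 threshold is an assumed example value.
PRESET_PROPORTION = 0.5

def recognize(image, detect_face, occlusion_ratio,
              reid_lookup, restore_face, identify):
    """Return the user's identity info, or None if no face was found."""
    face = detect_face(image)
    if face is None:
        return None                      # no face: acquire a new image
    ratio = occlusion_ratio(face)
    if ratio == 0:
        return identify(face)            # unshielded face: match directly
    if ratio >= PRESET_PROPORTION:
        # heavily shielded: fall back to walking-posture re-recognition
        return identify(reid_lookup(image))
    # lightly shielded: restore the face first, then match
    return identify(restore_face(face))
```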
In some embodiments, the extracting, according to a preset pedestrian re-recognition algorithm, an image of a pedestrian in a history image that matches the walking posture of the user specifically includes:
identifying, through a preset posture recognition algorithm, human body joint points on a plurality of human body regions of the user from the image, and human body joint points on a plurality of human body regions of each pedestrian from each historical image;
matching each human body area of the user with each human body area of each pedestrian one by one to obtain a matching score of each human body area;
weighting and calculating the matching score of each human body region to obtain the matching value of the user and the pedestrian;
judging whether the matching value exceeds a preset threshold value or not;
and if so, confirming that the user and the pedestrian are the same person, and extracting the confirmed image of the pedestrian from the historical image.
In some embodiments, the body region comprises a head region;
the step of matching each human body region of the user with each human body region of each pedestrian one by one to obtain a matching score of each human body region further includes:
and calculating the Euclidean distance between the user head image and the head image of each pedestrian through a preset FaceNet model to obtain the matching score of the head region.
In some embodiments, the body region further comprises a body region comprising a torso region, a left leg region, and a right leg region;
the step of matching each human body region of the user with each human body region of each pedestrian one by one to obtain a matching score of each human body region further includes:
converting an image color space of the body region into an HSV space;
according to the HSV space, obtaining the value of each pixel point in the user body image and the pedestrian body image;
and acquiring the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian according to the value of the pixel point to obtain the matching score.
In some embodiments, the step of calculating the matching score of each human body region by weighting to obtain the matching value between the user and the pedestrian specifically includes:
acquiring a weight coefficient of each human body region;
obtaining the matching value according to the matching score and the weight coefficient, wherein a calculation formula for calculating the matching value is as follows:
Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score
wherein [α, β, γ1, γ2] are the weight coefficients of the respective human body regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
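A minimal implementation of this weighted combination might look like the following; the default weight values are an assumption, since the embodiment leaves [α, β, γ1, γ2] configurable:

```python
def matching_value(head_score, body_score, lleg_score, rleg_score,
                   weights=(0.4, 0.3, 0.15, 0.15)):
    """Similar_score = α*head + β*body + γ1*lleg + γ2*rleg.
    The default weights are illustrative only."""
    alpha, beta, gamma1, gamma2 = weights
    return (alpha * head_score + beta * body_score
            + gamma1 * lleg_score + gamma2 * rleg_score)
```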
In some embodiments, the step of restoring the face image of the user in the image according to a preset face restoration algorithm further includes:
locally erasing the source images of faces in a public face data set to obtain erased images;
establishing the correspondence between each erased image and its source image, and training a face restoration model through a deep residual network;
and inputting the face image of the user into the face restoration model for calculation to obtain the restored image.
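The (erased image, source image) training pairs of the first two steps could be produced with a simple local-erase operation such as the sketch below; the rectangular erase shape and zero fill value are assumptions, as the embodiment does not fix them:

```python
import copy

def make_training_pair(image, top, left, height, width, fill=0):
    """Locally erase a rectangle of a grayscale image (a list of pixel rows)
    and return the (erased image, source image) pair used to train the
    restoration model. Rectangle shape and zero fill are assumptions."""
    source = copy.deepcopy(image)
    erased = copy.deepcopy(image)
    for r in range(top, min(top + height, len(erased))):
        for c in range(left, min(left + width, len(erased[r]))):
            erased[r][c] = fill
    return erased, source
```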
In some embodiments, before the step of determining the identity information of the user from the restored image, the method further comprises:
calculating a restoration index from the source image and the restored image;
judging whether the restoration index is greater than or equal to a preset index;
and if so, executing the step of determining the identity information of the user according to the restored image.
In some embodiments, the restoration index is calculated from the source image and the restored image by the following formulas:
MSE = (1/S) * ∑(I - J)^2
PSNR = 10 * log10(255^2 / MSE)
wherein I is the source image, J is the restored image, S is the area (number of pixels) of the source image, MSE is the mean squared error between the source image and the restored image, and PSNR is the restoration index.
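In code, the two formulas can be evaluated as follows over flat lists of pixel values; the base-10 logarithm is an assumption, though it is the convention for PSNR:

```python
import math

def restoration_index(source, restored):
    """PSNR = 10 * log10(255^2 / MSE) with MSE = (1/S) * sum((I - J)^2),
    computed here over flat lists of pixel values."""
    s = len(source)
    mse = sum((i - j) ** 2 for i, j in zip(source, restored)) / s
    if mse == 0:
        return float("inf")  # identical images restore perfectly
    return 10 * math.log10(255 ** 2 / mse)
```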
In order to solve the above technical problem, in a second aspect, an embodiment of the present invention provides a face recognition apparatus, including:
the first acquisition module is used for acquiring an image of a user;
the first judging module is used for judging whether the face of the user exists in the image or not;
the second judgment module is used for judging whether the face of the user is shielded or not when the face of the user exists in the image;
the second acquisition module is used for acquiring the shielding proportion of the face of the user when the face of the user is shielded;
the third judgment module is used for judging whether the shielding proportion is larger than or equal to a preset proportion or not;
the extraction module is used for extracting an image of a pedestrian matched with the walking posture of the user from the historical image according to a preset pedestrian re-recognition algorithm when the shielding proportion is larger than or equal to a preset proportion;
the first determining module is used for determining the identity information of the user according to the extracted image;
the restoring module is used for restoring the face image of the user in the image according to a preset face restoring algorithm when the shielding proportion is smaller than a preset proportion;
and the second determining module is used for determining the identity information of the user according to the restored image.
In order to solve the above technical problem, in a third aspect, an embodiment of the present invention provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect as described above.
In order to solve the above technical problem, in a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method according to the first aspect.
In order to solve the above technical problem, in a fifth aspect, the present invention further provides a computer program product, which includes a computer program stored on a computer-readable storage medium, the computer program including program instructions, which, when executed by a computer, cause the computer to execute the method according to the first aspect.
Compared with the prior art, the invention has the following beneficial effects: the embodiments of the invention provide a face recognition method, a face recognition apparatus, and an electronic device. The method includes acquiring an image of a user and judging whether the user's face exists in the image; when the face exists, judging whether it is shielded; and when it is shielded, acquiring the shielding proportion of the face and judging whether the shielding proportion is greater than or equal to a preset proportion. When the shielding proportion is greater than or equal to the preset proportion, an image of a pedestrian whose walking posture matches that of the user is extracted from the historical images according to a preset pedestrian re-recognition algorithm so as to determine the identity information of the user; when it is smaller than the preset proportion, the face image of the user in the image is restored according to a preset face restoration algorithm, and the identity information of the user is determined from the restored image.
Drawings
One or more embodiments are illustrated by the corresponding accompanying figures, which are not to be construed as limiting the embodiments. Elements/modules and steps with the same reference numerals denote like elements/modules and steps unless otherwise specified, and the figures are not drawn to scale.
Fig. 1 is a schematic diagram of an exemplary system structure of an embodiment of a face recognition method applied to an embodiment of the present invention;
fig. 2 is a flowchart of a face recognition method according to an embodiment of the present invention;
FIG. 3 is a sub-flow diagram of step 160 of the method of FIG. 2;
FIG. 4 is another sub-flow diagram of step 160 of the method of FIG. 2;
FIG. 5 is a sub-flow diagram of step 180 of the method of FIG. 2;
FIG. 6 is another sub-flow diagram of step 180 of the method of FIG. 2;
fig. 7 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an electronic device for executing the face recognition method according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, and all such variations fall within the scope of the present invention.
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It should be noted that, where no conflict arises, the various features of the embodiments of the invention may be combined with each other within the scope of protection of the present application. Additionally, while functional blocks are divided in the apparatus schematics and logical sequences are shown in the flowcharts, in some cases the steps shown or described may be performed in a different order or with a different block division. Further, the terms "first," "second," "third," and the like used herein do not limit data or execution order, but merely distinguish identical or similar items having substantially the same function and effect.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a schematic diagram of an exemplary system structure applied to the embodiment of the face recognition method of the present invention. As shown in fig. 1, the system architecture includes: the electronic device comprises an electronic device 10 and a camera 20, wherein the electronic device 10 is in communication connection with the camera 20, and the electronic device 10 can acquire images acquired by the camera 20. The communication connection may be a network connection and may include various connection types such as wired, wireless communication, or fiber optic cables.
The electronic device 10 is capable of recognizing the identity information of the user of the image captured by the camera 20 in the case where the face of the person is occluded. Specifically, the electronic device 10 can determine whether a human face exists in the image, whether the human face is blocked, and whether the blocking ratio is greater than a preset ratio, and after the human face image is subjected to image processing through a pedestrian re-recognition algorithm and a human face restoration algorithm which are pre-stored in the electronic device 10, identify information of the user through the processed image.
The electronic device 10 is a device capable of storing a large amount of image data and performing arithmetic processing on it; it may be, for example, a physical server communicatively connected to the camera 20 through a network via a predetermined communication protocol. It may also be a cloud server, cloud host, cloud service platform, or cloud computing platform, likewise communicatively connected to the camera 20 through a network. The network may be an Ethernet or a local area network; the communication protocol may be TCP/IP, NetBEUI, IPX/SPX, or the like; and the connection may be wireless or wired, set according to actual needs.
In other embodiments, the electronic device 10 may also be a face recognition robot with a camera function, which can not only capture images by its own camera device, but also can be in communication with the camera 20 to capture images captured by the camera 20.
It should be noted that the face recognition method provided in the embodiment of the present application is generally executed by the electronic device 10, and accordingly, the face recognition apparatus is generally disposed in the electronic device 10.
The camera 20 is a device capable of acquiring an image, where the image may be a video or a picture; the embodiment of the present invention takes a picture as an example. The camera 20 may be a camera, a mobile phone, a video recorder, a video camera, a night-vision device, or another image capturing device, and can be communicatively connected to the electronic device 10 to transmit image information captured in real time to it. There may be one or more cameras 20, e.g., 1, 2, 3, or 4, and they may be the same image capturing device or different ones, so as to meet different requirements.
Specifically, the embodiments of the present invention will be further explained below with reference to the drawings.
An embodiment of the present invention provides a face recognition method, which can be executed by the electronic device 10, please refer to fig. 2, which shows a flowchart of a face recognition method applied according to the above system structure, and the method includes, but is not limited to, the following steps:
step 110: an image of a user is acquired.
In the embodiment of the present invention, the image of the user may be acquired by the camera 20 described in fig. 1, and the installation position of the camera 20 may be set according to the application scenario of the face recognition method. For example, when user identity information needs to be recognized in a public place such as the entrance/exit of a residential cell, one or more cameras 20 may be installed at each exit of the cell so as to acquire images of users entering and exiting the cell. It should be noted that only those images acquired in real time by the camera 20 that contain a user/pedestrian are used for further judgment and processing.
Step 120: and judging whether the face of the user exists in the image or not. If yes, go to step 130, if not, go back to step 110.
The embodiment of the invention determines the identity information of the user by face recognition, so images containing the user's face need to be screened out of the user images collected in real time for further judgment. If an image does not contain the user's face, i.e., the user was captured but the head was not, the method returns to step 110 to acquire a new image of the user.
Step 130: and judging whether the human face of the user is shielded or not. If yes, go to step 141; if not, go to step 142.
Since a user passing through the field of view of the camera 20 may wear a cap, a mask, or another shielding object that blocks the face, it is also necessary to determine whether the face is shielded in order to decide whether it can be recognized directly.
Step 141: and acquiring the shielding proportion of the face of the user, and skipping to the step 150.
When the user face is shielded, the shielding proportion of the user face needs to be obtained, and a corresponding algorithm is adopted according to the shielding proportion to obtain a complete face image of the user.
Step 142: and determining the identity information of the user according to the face.
If the face of the user is not shielded, it can be matched against the face images of pedestrians recorded in the pre-stored historical images, together with the corresponding identity information of those pedestrians, so as to identify the identity information of the user.
It should be noted that, in the embodiment of the present invention, after the user identity information corresponding to the obtained face image of the user is identified, the obtained image/face image of the user and the corresponding user identity information thereof are stored in the historical image, and the historical image data set in the database is updated.
Step 150: and judging whether the shielding proportion is larger than or equal to a preset proportion. If the value is larger than or equal to the preset value, jumping to step 160; if so, go to step 180.
After it is determined that the face is shielded and the shielding proportion has been obtained, whether the shielding proportion exceeds the preset proportion is judged in order to determine whether the unshielded part of the face is sufficient as a basis for recognizing the user's identity information; the identity information is then recognized by the corresponding algorithm.
The preset proportion can be set according to the restoring capability of the preset face restoration algorithm, taking as the preset proportion the largest shielded fraction of a face that the algorithm can still restore. For example, if the preset face restoration algorithm can restore a face that is at most half shielded, i.e., 50%, then 50% is taken as the preset proportion.
Step 160: and extracting the images of the pedestrians matched with the walking postures of the user in the historical images according to a preset pedestrian re-recognition algorithm, and skipping to the step 170.
When the shielding proportion is greater than or equal to the preset proportion, the face image cannot be restored from the unshielded part for recognition. Therefore, in the embodiment of the invention, a preset pedestrian re-recognition algorithm is used to find, among the historical images, the images of a pedestrian whose walking posture matches that of the user; obtaining the images of that pedestrian amounts to obtaining images of the user.
Step 170: and determining the identity information of the user according to the extracted image.
Further, the face features of the user can be extracted from the extracted image, and the identity information of the user can be determined according to the face features.
Step 180: and restoring the face image of the user in the image according to a preset face restoration algorithm, and skipping to step 190.
When the shielding proportion is smaller than the preset proportion, the shielded range of the face is small and lies within the restoring range of the preset face restoration algorithm; the face in the image can therefore be restored by that algorithm.
Step 190: and determining the identity information of the user according to the restored image.
Further, the face features of the user can be extracted from the restored image, and the identity information of the user determined from them. Specifically, face recognition may be performed with any of various existing face recognition algorithms, in combination with the face image records in the historical images, to determine the identity information of the user.
The embodiment of the invention provides a face recognition method. The method includes acquiring an image of a user and judging whether the user's face exists in the image; when the face exists, judging whether it is shielded; and when it is shielded, acquiring the shielding proportion of the face and judging whether the shielding proportion is greater than or equal to a preset proportion. When the shielding proportion is greater than or equal to the preset proportion, an image of a pedestrian whose walking posture matches that of the user is extracted from the historical images according to a preset pedestrian re-recognition algorithm so as to determine the identity information of the user; when it is smaller than the preset proportion, the face image of the user in the image is restored according to a preset face restoration algorithm, and the identity information of the user is determined from the restored image.
In some embodiments, please refer to fig. 3, which is a sub-flowchart of step 160 in the method shown in fig. 2, wherein step 160 specifically includes:
step 161: and identifying human body joint points on a plurality of human body regions of the user from the images and human body joint points on a plurality of human body regions of each pedestrian from each historical image through a preset gesture identification algorithm.
Step 162: and matching each human body area of the user with each human body area of each pedestrian one by one to obtain the matching score of each human body area.
Step 163: and weighting and calculating the matching scores of the human body regions to obtain the matching value of the user and the pedestrian.
Step 164: and judging whether the matching value exceeds a preset threshold value. If yes, jumping to step 165; if not, go to step 161.
Step 165: confirming that the user and the pedestrian are the same person, and extracting an image of the confirmed pedestrian from the history image.
In the embodiment of the present invention, since there may be multiple historical images and one or more pedestrians in each image, each human body region of the user must be matched one by one against each human body region of each pedestrian in each historical image to obtain a matching score for each region. This matching step, which extracts from the historical images the pedestrian whose walking posture matches the user's, is a matching method based on local features.
First, pedestrian posture is recognized: 2D pose estimation is performed on the detected human body using the OpenPose human pose recognition algorithm, and the body is divided into a plurality of regions using the human body joint points. After the regions are obtained, feature information is extracted from each region, and the feature information of each region in the user's image is matched one by one against the feature information of the corresponding region in each pedestrian's image to obtain a matching score for each region.
Then the matching scores of all regions are weighted to obtain the matching value, which is compared with a preset threshold. When the matching value exceeds the threshold, the degree of matching is considered high, i.e., the walking postures in the pedestrian image and the user image are highly similar, so the pedestrian and the user can be determined to be the same person. Further, the user's identity information can be recognized from the pedestrian's facial features. The preset threshold may be set according to the recognition capability and precision of the preset posture recognition algorithm for walking postures, and is not limited by the embodiment of the invention.
Specifically, in the embodiment of the invention, the body is divided into a head region and a body region at the neck joint, and the body region is further divided into a torso region, a left leg region, and a right leg region at the joints of the two thigh roots, as an example for feature matching. After the matching scores of the four regions (head, torso, left leg, right leg) are obtained, they are weighted to obtain the matching value, and whether the matching value exceeds the threshold determines whether the pedestrian and the user are the same person.
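Under the division scheme just described, assigning joint points to the four regions might be sketched as follows; the joint names and the y-grows-downward coordinate convention are assumptions in the style of OpenPose output, not details fixed by the embodiment:

```python
# Assign joint points to head / torso / left-leg / right-leg regions using
# the neck joint and the two thigh-root (hip) joints, per the scheme above.
def split_regions(joints):
    neck_y = joints["neck"][1]
    hip_y = max(joints["left_hip"][1], joints["right_hip"][1])
    regions = {"head": [], "torso": [], "left_leg": [], "right_leg": []}
    for name, (x, y) in joints.items():
        if y < neck_y:
            regions["head"].append(name)       # above the neck
        elif y <= hip_y:
            regions["torso"].append(name)      # neck down to thigh roots
        elif "left" in name:
            regions["left_leg"].append(name)
        else:
            regions["right_leg"].append(name)
    return regions
```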
Further, in some embodiments, the human body region includes a head region, please refer to fig. 4, which is another sub-flowchart of step 160 in the method shown in fig. 2, and the step 162 specifically includes:
step 1621: and calculating the Euclidean distance between the user head image and the head image of each pedestrian through a preset FaceNet model to obtain the matching score of the head region.
Specifically, the user head image and the head image of each pedestrian are mapped to points in a Euclidean space, and the distance between the mapping points of corresponding features of the two head images in the Euclidean space, namely the Euclidean distance, is calculated. The Euclidean distance can represent the similarity between the user head image and the head image of each pedestrian, so the Euclidean distance can serve as the matching score of the head region.
For example, the head image of the user and the head image of each pedestrian are mapped to points in the Euclidean space, and the distance between the mapping point of the nose feature in the user head image and the mapping point of the nose feature in each pedestrian head image is calculated. The distance between the mapping points of the two noses in the Euclidean space is the Euclidean distance: the smaller its value, the more similar the two head images.
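As a minimal sketch of this step, assuming a FaceNet-style model has already mapped each head image to a fixed-length embedding vector (the vectors below are made up for illustration), the Euclidean distance is simply the L2 norm of the difference:

```python
import numpy as np

def euclidean_distance(emb_a, emb_b):
    """L2 distance between two face embeddings; smaller means more similar."""
    return float(np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b)))

# Illustrative 3-dimensional embeddings (real FaceNet embeddings are longer,
# e.g. 128-dimensional).
d = euclidean_distance([0.1, 0.2, 0.2], [0.1, 0.2, 0.6])  # ≈ 0.4
```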
In some embodiments, the human body region further comprises a body region, the body region comprising a torso region, a left leg region and a right leg region. Please continue to refer to fig. 4; the step 162 further includes:
step 1622: converting the image color space of the body region into an HSV space.
Step 1623: and acquiring the value of each pixel point in the user body image and the pedestrian body image according to the HSV space.
Step 1624: and acquiring the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian according to the value of the pixel point to obtain the matching score.
In the embodiment of the present invention, since the image of the user acquired by the camera 20 represents its image parameters in the RGB color space, the image color space of the body region is converted into the HSV space to facilitate counting pixel points in a histogram; the three color parameters, namely hue H, saturation S and value V, then represent the color information of each pixel point. Next, the value of each pixel point in the user body image and the pedestrian body image is calculated according to the color parameters of the pixel point in the HSV space; the values of identical pixel points are accumulated, the number of pixel points is counted, and the image is thereby converted into a histogram with low calculation cost. Finally, the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian is obtained, namely the matching score.
Specifically, the image color spaces of the trunk, the left leg and the right leg are first converted from RGB to HSV. The value of each pixel point is taken as Val = δ*H + ε*S + ζ*V, a histogram is calculated over Val, and the differences between the Val histograms of the trunk, the left leg and the right leg of the two persons (the user and the pedestrian) are obtained, namely the similarity/matching scores of the two persons on the trunk, the left leg and the right leg. Here [δ, ε, ζ] is another class of weighting coefficients, divided into two groups corresponding to the trunk and the legs (left and right legs) respectively; that is, the trunk and the legs adopt different [δ, ε, ζ] color weights.
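A minimal sketch of the Val histogram comparison, assuming the H, S and V channels have already been scaled to [0, 1], and using histogram intersection as the similarity measure (the patent text does not specify the exact difference metric, so that choice is an assumption):

```python
import numpy as np

def val_histogram(hsv, weights, bins=32):
    """hsv: (N, 3) array of pixel H, S, V values scaled to [0, 1].
    Returns the normalised histogram of Val = delta*H + epsilon*S + zeta*V."""
    delta, epsilon, zeta = weights
    val = delta * hsv[:, 0] + epsilon * hsv[:, 1] + zeta * hsv[:, 2]
    hist, _ = np.histogram(val, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)  # normalise to a probability distribution

def region_match_score(hsv_user, hsv_pedestrian, weights):
    # Histogram intersection: 1.0 means identical Val distributions.
    h1 = val_histogram(hsv_user, weights)
    h2 = val_histogram(hsv_pedestrian, weights)
    return float(np.minimum(h1, h2).sum())

# Illustrative: a torso region compared against itself scores ~1.0.
rng = np.random.default_rng(1)
torso_pixels = rng.random((1000, 3))
score = region_match_score(torso_pixels, torso_pixels, weights=(0.5, 0.3, 0.2))
```

In the patent's scheme the trunk and the legs would each call this with their own [δ, ε, ζ] weight triple.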
In some embodiments, please continue to refer to fig. 4, wherein the step 163 specifically includes:
step 1631: and acquiring the weight coefficient of each human body region.
Step 1632: and acquiring the matching value according to the matching score and the weight coefficient.
After the matching scores of the head area, the trunk area, the left leg area and the right leg area are obtained, the matching scores of the four matching areas are subjected to weighted calculation, and then the matching value can be obtained. Further, whether the matching value exceeds a preset threshold value or not is judged, and whether the pedestrian and the user are the same person or not can be obtained.
Wherein, the calculation formula for calculating the matching value is as follows:
Similar_score=α*head_score+β*body_score+γ1*lleg_score+γ2*rleg_score
wherein [α, β, γ1, γ2] are the weight coefficients of the human body regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
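The weighted calculation is a direct transcription of the formula above; the weight coefficients, region scores and threshold used below are illustrative values only, not values from the patent:

```python
def matching_value(scores, weights):
    """Similar_score = α*head_score + β*body_score + γ1*lleg_score + γ2*rleg_score."""
    return sum(weights[region] * scores[region] for region in scores)

weights = {"head": 0.4, "body": 0.3, "lleg": 0.15, "rleg": 0.15}  # [α, β, γ1, γ2]
scores = {"head": 0.9, "body": 0.8, "lleg": 0.7, "rleg": 0.7}     # per-region matches
similar_score = matching_value(scores, weights)  # 0.81
same_person = similar_score > 0.75               # preset threshold (assumed)
```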
In some embodiments, please refer to fig. 5, which is a sub-flowchart of step 180 in the method shown in fig. 2, wherein the step 180 specifically includes:
step 181: and carrying out local erasing on the source image of the face in the face public data set to obtain an erased image.
Step 182: and establishing a corresponding relation between the erased image and the source image, and training a face reduction model through a depth residual error network.
Step 183: and inputting the face image of the user into the face reduction model for calculation to obtain the reduced image.
In the embodiment of the invention, part of the face in a public face data set such as LFW (Labeled Faces in the Wild), FDDB (Face Detection Data Set and Benchmark) or MegaFace is first erased, the corresponding relation between the erased image and the source image is established, and a face restoration model is trained by adopting the deep residual network ResNet. In order to restore the human face, the face images in the historical images need to be recorded in the comparison data set. The occluded face image is then input into the face restoration model for calculation to obtain the restored image.
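Generating the (erased image, source image) training pairs can be sketched as follows; the patch position, size and fill value are illustrative assumptions, and a stand-in random array takes the place of a real face image:

```python
import numpy as np

def erase_patch(image, top, left, height, width, fill=0):
    """Return a copy of `image` with a rectangular region erased; the
    (erased, source) pair can then supervise a restoration network."""
    erased = image.copy()
    erased[top:top + height, left:left + width] = fill
    return erased

rng = np.random.default_rng(0)
source = rng.random((112, 112, 3))  # stand-in for a face crop from the data set
erased = erase_patch(source, top=40, left=30, height=30, width=40)
training_pair = (erased, source)    # network input and its restoration target
```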
It should be noted that, in the embodiment of the present invention, the face public data set further includes a data set of face image records from the historical images; each time the identity information of a user is recognized and confirmed, the identity information of the user and the correspondingly acquired user image are added to this data set as elements of a set. The historical images may also be images in which the daily face of the user has been recorded in advance.
On the basis of a model pre-trained on the open data set, the daily face of the user is recorded to obtain the historical images, and the historical images are then fed into the pre-trained model for transfer learning. The transferred face restoration model is adopted to restore and recover a complete face; the feature vector of the complete face is calculated through the face feature model and compared, by a modulus (norm) operation, with the feature vectors of the faces in the face image records of the comparison data set, so as to obtain a group of face similarities. The recorded face corresponding to the maximum similarity value is found, thereby determining the identity information of the user.
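The similarity search over the recorded face vectors can be sketched as follows, using cosine similarity as one plausible reading of the norm-based comparison described above (the exact similarity metric and the gallery contents are assumptions for illustration):

```python
import numpy as np

def best_match(query_vec, gallery):
    """Return the recorded identity whose face feature vector is most
    similar (by cosine similarity) to the restored face's vector."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    best_id, best_sim = None, -1.0
    for identity, vec in gallery.items():
        v = np.asarray(vec, dtype=float)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id, best_sim

# Illustrative 2-dimensional feature vectors and a two-person gallery.
identity, sim = best_match([1.0, 0.1], {"alice": [1.0, 0.0], "bob": [0.0, 1.0]})
```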
Further, in some embodiments, please refer to fig. 6, which is another sub-flowchart of step 180 of the method shown in fig. 2, before the step 190, the method further includes:
step 184: and calculating a reduction index according to the source image and the reduced image.
In the embodiment of the invention, the PSNR (peak signal-to-noise ratio) value is taken as the evaluation index of face restoration. Let the source image be I, the erased image be I', the image restored from I' by the face restoration model be J, the area of the source image be S, and MSE be the mean square error between the source image I and the restored image J; then
The calculation formula for calculating the reduction index is as follows:
MSE=(1/S)*∑(I-J)²
PSNR=10*log10(255²/MSE)
wherein, I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index. The source image is a standard image included in the identity information registered by the user in advance, and may be, for example, a certificate photo, an identity card image, and the like of the user.
The larger the PSNR value is, the smaller the difference between the source image I and the restored image J is, and the more ideal the restoration effect is.
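A direct implementation of the two formulas above, assuming 8-bit images (hence the peak value 255); the stand-in images below are illustrative only:

```python
import numpy as np

def psnr(source, restored):
    """Restoration index: PSNR = 10*log10(255^2 / MSE)."""
    diff = source.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)  # MSE = (1/S) * sum((I - J)^2)
    if mse == 0:
        return float("inf")   # identical images: perfect restoration
    return 10.0 * np.log10(255.0 ** 2 / mse)

I = np.full((8, 8), 100, dtype=np.uint8)  # stand-in source image
J = np.full((8, 8), 110, dtype=np.uint8)  # stand-in restored image
index = psnr(I, J)  # MSE = 100 -> about 28.13 dB
```

The higher `index` is, the closer J is to I, matching the interpretation above.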
Step 185: and judging whether the reduction index is greater than or equal to a preset index. If yes, go to step 190; if not, go to step 180.
After the restoration index PSNR is obtained, it is further determined whether the PSNR value is greater than or equal to a preset index. If so, the difference between the source image I and the restored image J is small and the restoration effect is ideal, so the identity information of the user can be further determined according to the restored image.
An embodiment of the present invention further discloses a face recognition apparatus, please refer to fig. 7, which shows a structure of the face recognition apparatus provided in the embodiment of the present application, and the face recognition apparatus 200 includes: the device comprises a first obtaining module 210, a first judging module 220, a second judging module 230, a second obtaining module 240, a third judging module 250, an extracting module 260, a first determining module 270, a restoring module 280 and a second determining module 290.
The first obtaining module 210 is configured to obtain an image of a user.
The first determining module 220 is configured to determine whether a face of the user exists in the image.
The second determining module 230 is configured to determine whether the face of the user is occluded when the face of the user exists in the image.
The second obtaining module 240 is configured to obtain an occlusion ratio of the face of the user when the face of the user has occlusion.
The third determining module 250 is configured to determine whether the shielding ratio is greater than or equal to a preset ratio.
The extraction module 260 is configured to extract an image of a pedestrian in the history image, which is matched with the walking posture of the user, according to a preset pedestrian re-recognition algorithm when the shielding ratio is greater than or equal to a preset ratio.
The first determining module 270 is configured to determine the identity information of the user according to the extracted image.
The restoring module 280 is configured to restore the face image of the user in the image according to a preset face restoring algorithm when the shielding ratio is smaller than a preset ratio.
The second determining module 290 is configured to determine the identity information of the user according to the restored image.
The embodiment of the present invention provides a face recognition apparatus 200, which acquires an image of a user through the first acquisition module 210, determines whether the face of the user exists in the image through the first determination module 220, determines whether the face of the user is occluded through the second determination module 230 when the face exists, acquires the occlusion proportion of the face of the user through the second acquisition module 240 when the face is occluded, and determines whether the occlusion proportion is greater than or equal to a preset proportion through the third determination module 250. When the proportion is greater than or equal to the preset proportion, the extraction module 260 extracts an image of a pedestrian matched with the walking posture of the user from the historical image according to a preset pedestrian re-recognition algorithm, so that the first determination module 270 can determine the identity information of the user; when the proportion is smaller than the preset proportion, the restoring module 280 restores the face image of the user in the image according to a preset face restoration algorithm, so that the second determination module 290 can determine the identity information of the user according to the restored image.
In some embodiments, the extracting module 260 is further configured to identify human body joint points on a plurality of human body regions of the user from the images through a preset gesture recognition algorithm, and human body joint points on a plurality of human body regions of each pedestrian from each of the historical images; matching each human body area of the user with each human body area of each pedestrian one by one to obtain a matching score of each human body area; weighting and calculating the matching score of each human body region to obtain the matching value of the user and the pedestrian; judging whether the matching value exceeds a preset threshold value or not; and if so, confirming that the user and the pedestrian are the same person, and extracting the confirmed image of the pedestrian from the historical image.
In some embodiments, the human body region comprises a head region; the extraction module 260 is further configured to calculate the Euclidean distance between the user head image and the head image of each pedestrian through a preset FaceNet model, so as to obtain the matching score of the head region.
In some embodiments, the human body region further comprises a body region, the body region comprising a torso region, a left leg region and a right leg region; the extraction module 260 is further configured to convert the image color space of the body region into HSV space; obtain the value of each pixel point in the user body image and the pedestrian body image according to the HSV space; and obtain the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian according to the values of the pixel points, so as to obtain the matching score.
In some embodiments, the extracting module 260 is further configured to obtain a weight coefficient of each human body region; obtaining the matching value according to the matching score and the weight coefficient, wherein a calculation formula for calculating the matching value is as follows:
Similar_score=α*head_score+β*body_score+γ1*lleg_score+γ2*rleg_score
wherein [α, β, γ1, γ2] are the weight coefficients of the human body regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
In some embodiments, the restoring module 280 is further configured to locally erase a source image of a face in the face public data set to obtain an erased image; establishing a corresponding relation between the erased image and the source image, and training a face reduction model through a depth residual error network; and inputting the face image of the user into the face reduction model for calculation to obtain the reduced image.
In some embodiments, the restoration module 280 is further configured to calculate a restoration indicator according to the source image and the restored image; judging whether the reduction index is greater than or equal to a preset index or not; and if so, executing the step of determining the identity information of the user according to the restored image.
In some embodiments, the calculation formula for calculating the reduction indicator is as follows:
MSE=(1/S)*∑(I-J)²
PSNR=10*log10(255²/MSE)
wherein, I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index.
An embodiment of the present invention further provides an electronic device, please refer to fig. 8, which shows a hardware structure of a server capable of executing the face recognition method described in fig. 2 to fig. 6. The electronic device 10 may be the electronic device 10 shown in fig. 1.
The electronic device 10 includes: at least one processor 11; and a memory 12 communicatively coupled to the at least one processor 11, which is exemplified by one processor 11 in fig. 8. The memory 12 stores instructions executable by the at least one processor 11, and the instructions are executable by the at least one processor 11 to enable the at least one processor 11 to perform the face recognition method described above with reference to fig. 2 to 6. The processor 11 and the memory 12 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 12 is a non-volatile computer-readable storage medium and can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the face recognition method in the embodiment of the present application, for example, the modules shown in fig. 7. The processor 11 executes various functional applications of the server and data processing by running the nonvolatile software programs, instructions and modules stored in the memory 12, namely, implements the face recognition method of the above-mentioned method embodiment.
The memory 12 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the face recognition apparatus, and the like. Further, the memory 12 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 12 may optionally include memory located remotely from the processor 11, which may be connected to the face recognition device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 12, and when executed by the one or more processors 11, perform the face recognition method in any of the above-described method embodiments, for example, perform the method steps of fig. 2 to 6 described above, and implement the functions of the modules and units in fig. 7.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer-executable instructions for execution by one or more processors, for example, to perform the method steps of fig. 2-6 described above to implement the functions of the modules in fig. 7.
Embodiments of the present application further provide a computer program product, including a computer program stored on a non-volatile computer-readable storage medium, where the computer program includes program instructions, which, when executed by a computer, cause the computer to perform the face recognition method in any of the above-described method embodiments, for example, to perform the method steps in fig. 2 to 6 described above, and implement the functions of the modules in fig. 7.
The embodiment of the invention provides a face recognition method, a face recognition device and electronic equipment. The method acquires an image of a user and judges whether the occlusion proportion of the face of the user is greater than or equal to a preset proportion. When the occlusion proportion is greater than or equal to the preset proportion, an image of a pedestrian matched with the walking posture of the user is extracted from a historical image according to a preset pedestrian re-recognition algorithm so as to determine the identity information of the user; when the occlusion proportion is smaller than the preset proportion, the face image of the user in the image is restored according to a preset face restoration algorithm, and the identity information of the user is determined according to the restored image.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A face recognition method, comprising:
acquiring an image of a user;
judging whether the face of the user exists in the image or not;
if yes, judging whether the face of the user is shielded;
if the human face exists, acquiring the shielding proportion of the human face of the user;
judging whether the shielding proportion is larger than or equal to a preset proportion or not;
if the shielding proportion is larger than or equal to the preset proportion, extracting an image of a pedestrian matched with the walking posture of the user from a historical image according to a preset pedestrian re-recognition algorithm;
determining the identity information of the user according to the extracted image;
if the shielding proportion is less than the preset proportion, restoring the face image of the user in the image according to a preset face restoration algorithm, wherein a source image of a face in a face public data set is locally erased to obtain an erased image, the corresponding relation between the erased image and the source image is established, a face restoration model is trained through a depth residual error network, and the face image of the user is input into the face restoration model for calculation to obtain the restored image;
calculating a reduction index according to the source image and the reduced image, and judging whether the reduction index is greater than or equal to a preset index;
if yes, determining the identity information of the user according to the restored image.
2. The method according to claim 1, wherein the extracting, according to a preset pedestrian re-recognition algorithm, the image of the pedestrian matching the walking posture of the user from the historical image specifically comprises:
identifying human body joint points on a plurality of human body regions of the user from the images and human body joint points on a plurality of human body regions of each pedestrian from each historical image through a preset gesture identification algorithm;
matching each human body area of the user with each human body area of each pedestrian one by one to obtain a matching score of each human body area;
weighting and calculating the matching score of each human body region to obtain the matching value of the user and the pedestrian;
judging whether the matching value exceeds a preset threshold value or not;
and if so, confirming that the user and the pedestrian are the same person, and extracting the confirmed image of the pedestrian from the historical image.
3. The method of claim 2, wherein the human body region comprises a head region;
the step of matching each human body region of the user with each human body region of each pedestrian one by one to obtain a matching score of each human body region further includes:
and calculating the Euclidean distance between the user head image and the head image of each pedestrian through a preset FaceNet model to obtain the matching score of the head region.
4. The method of claim 3, wherein the human body region further comprises a body region, the body region comprising a torso region, a left leg region, and a right leg region;
the step of matching each human body region of the user with each human body region of each pedestrian one by one to obtain a matching score of each human body region further includes:
converting an image color space of the body region into an HSV space;
according to the HSV space, obtaining the value of each pixel point in the user body image and the pedestrian body image;
and acquiring the difference between the histogram of the body image of the user and the histogram of the body image of the pedestrian according to the value of the pixel point to obtain the matching score.
5. The method according to claim 4, wherein the step of weighting and calculating the matching score of each human body region to obtain the matching value between the user and the pedestrian specifically comprises:
acquiring a weight coefficient of each human body region;
obtaining the matching value according to the matching score and the weight coefficient, wherein a calculation formula for calculating the matching value is as follows:
Similar_score=α*head_score+β*body_score+γ1*lleg_score+γ2*rleg_score
wherein [α, β, γ1, γ2] are the weight coefficients of the human body regions, Similar_score is the matching value, head_score is the matching score of the head region, body_score is the matching score of the torso region, lleg_score is the matching score of the left leg region, and rleg_score is the matching score of the right leg region.
6. The method according to claim 1, wherein the reduction index is calculated from the source image and the reduced image, and the calculation formula for calculating the reduction index is as follows:
MSE=(1/S)*∑(I-J)²
PSNR=10*log10(255²/MSE)
wherein, I is the source image, J is the restored image, S is the area of the source image, MSE is the mean square error between the source image and the restored image, and PSNR is the restoration index.
7. A face recognition apparatus, comprising:
the first acquisition module is used for acquiring an image of a user;
the first judging module is used for judging whether the face of the user exists in the image or not;
the second judgment module is used for judging whether the face of the user is shielded or not when the face of the user exists in the image;
the second acquisition module is used for acquiring the shielding proportion of the face of the user when the face of the user is shielded;
the third judgment module is used for judging whether the shielding proportion is larger than or equal to a preset proportion or not;
the extraction module is used for extracting an image of a pedestrian matched with the walking posture of the user from the historical image according to a preset pedestrian re-recognition algorithm when the shielding proportion is larger than or equal to a preset proportion;
the first determining module is used for determining the identity information of the user according to the extracted image;
the restoring module is used for restoring the face image of the user in the image according to a preset face restoring algorithm when the shielding proportion is smaller than a preset proportion, wherein a source image of a face in a face public data set is locally erased to obtain an erased image, a corresponding relation between the erased image and the source image is established, a face restoration model is trained through a depth residual error network, and the face image of the user is input into the face restoration model to be calculated to obtain the restored image;
the restoration module is further used for calculating a restoration index according to the source image and the restored image, and judging whether the restoration index is greater than or equal to a preset index;
and the second determining module is used for determining the identity information of the user according to the restored image when the restoration index is greater than or equal to a preset index.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
CN201910810373.8A 2019-08-29 2019-08-29 Face recognition method and device and electronic equipment Active CN110516623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910810373.8A CN110516623B (en) 2019-08-29 2019-08-29 Face recognition method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910810373.8A CN110516623B (en) 2019-08-29 2019-08-29 Face recognition method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110516623A CN110516623A (en) 2019-11-29
CN110516623B true CN110516623B (en) 2022-03-22

Family

ID=68629184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910810373.8A Active CN110516623B (en) 2019-08-29 2019-08-29 Face recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110516623B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461047A (en) * 2020-04-10 2020-07-28 北京爱笔科技有限公司 Identity recognition method, device, equipment and computer storage medium
CN111624617A (en) * 2020-05-28 2020-09-04 联想(北京)有限公司 Data processing method and electronic equipment
CN112132057A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Multi-dimensional identity recognition method and system
CN112800885B (en) * 2021-01-16 2023-09-26 南京众鑫云创软件科技有限公司 Data processing system and method based on big data
CN112836655B (en) * 2021-02-07 2024-05-28 上海卓繁信息技术股份有限公司 Method and device for identifying identity of illegal actor and electronic equipment
CN114554113B * 2022-04-24 2022-08-16 浙江华眼视觉科技有限公司 Method and device for identifying the pickup person of an express item in an express-item code recognition machine
CN115205950B * 2022-09-16 2023-04-25 广州昇创达科技有限公司 Blockchain-based method and system for passenger detection and fare settlement in intelligent subway transit

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104574555A * 2015-01-14 2015-04-29 四川大学 Remote check-in method using a sparse-representation-based face classification algorithm
WO2017036115A1 * 2015-09-01 2017-03-09 京东方科技集团股份有限公司 Identity recognition device, manufacturing method therefor, and identity recognition method
CN106910176A * 2017-03-02 2017-06-30 中科视拓(北京)科技有限公司 Deep-learning-based method for removing occlusions from face images
CN107423678A * 2017-05-27 2017-12-01 电子科技大学 Training method for a feature-extraction convolutional neural network, and face recognition method
CN108062542A * 2018-01-12 2018-05-22 杭州智诺科技股份有限公司 Method for detecting occluded faces
CN108710859A * 2018-05-23 2018-10-26 曜科智能科技(上海)有限公司 Face detection method and device, face recognition method and device, and storage medium
CN109344842A * 2018-08-15 2019-02-15 天津大学 Pedestrian re-identification method based on semantic region representation
WO2019134246A1 * 2018-01-03 2019-07-11 平安科技(深圳)有限公司 Facial recognition-based security monitoring method, device, and storage medium
CN110032940A * 2019-03-13 2019-07-19 华中科技大学 Method and system for video-based pedestrian re-identification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI637354B (en) * 2017-10-23 2018-10-01 緯創資通股份有限公司 Image detection method and image detection device for determining postures of user

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bayer image color restoration by linear interpolation; Sun Hui et al.; Chinese Journal of Liquid Crystals and Displays; June 30, 2013; Vol. 28, No. 3; full text *
Nuclear-L1 Norm Joint Regression for Face Reconstruction and Recognition; Lei Luo et al.; http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/ACCV_2014/pages/PDF/84.pdf; December 31, 2014; full text *

Also Published As

Publication number Publication date
CN110516623A (en) 2019-11-29

Similar Documents

Publication Publication Date Title
CN110516623B (en) Face recognition method and device and electronic equipment
CN109325964B (en) Face tracking method and device and terminal
US9734404B2 (en) Motion stabilization and detection of articulated objects
CN110070029B (en) Gait recognition method and device
CN106919921B (en) Gait recognition method and system combining subspace learning and tensor neural network
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN110348331B (en) Face recognition method and electronic equipment
CN111753643B (en) Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN108416291B (en) Face detection and recognition method, device and system
CN112200056B (en) Face living body detection method and device, electronic equipment and storage medium
CN108108711B (en) Face control method, electronic device and storage medium
CN108171138B (en) Biological characteristic information acquisition method and device
CN109948439B (en) Living body detection method, living body detection system and terminal equipment
CN112257669A (en) Pedestrian re-identification method and device and electronic equipment
JP7151875B2 (en) Image processing device, image processing method, and program
JP6025557B2 (en) Image recognition apparatus, control method thereof, and program
CN112784712B (en) Missing child early warning implementation method and device based on real-time monitoring
CN111291612A (en) Pedestrian re-identification method and device based on multi-person multi-camera tracking
CN111126250A (en) Pedestrian re-identification method and device based on PTGAN
CN110580708B (en) Rapid movement detection method and device and electronic equipment
CN108090473B (en) Method and device for recognizing human face under multiple cameras
CN110276288A Identity recognition method and device based on biometric features
CN114022845A (en) Real-time detection method and computer readable medium for electrician insulating gloves
CN113792700B Method and device for detecting electric bicycles inside an elevator car, computer equipment, and storage medium
Hossain et al. A real-time face to camera distance measurement algorithm using object classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230508

Address after: 518000 C5, college industrialization complex building, Shenzhen Virtual University Park, No. 2, Yuexing Third Road, high tech Zone community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Patentee after: Shenzhen Weidang Life Technology Co.,Ltd.

Address before: 518000 floor 2, torch venture building, No. 22, Yanshan Road, Nanshan District, Shenzhen, Guangdong

Patentee before: INTERNATIONAL INTELLIGENT MACHINES Co.,Ltd.