CN111144169A - Face recognition method and device and electronic equipment - Google Patents


Info

Publication number
CN111144169A
CN111144169A (application number CN201811303261.5A)
Authority
CN
China
Prior art keywords
target object
image
color image
dimensional model
weight
Prior art date
Legal status
Pending
Application number
CN201811303261.5A
Other languages
Chinese (zh)
Inventor
徐旺源
郑茂铃
余涛
郭显兵
Current Assignee
BYD Semiconductor Co Ltd
Original Assignee
Shenzhen BYD Microelectronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen BYD Microelectronics Co Ltd filed Critical Shenzhen BYD Microelectronics Co Ltd
Priority to CN201811303261.5A priority Critical patent/CN111144169A/en
Publication of CN111144169A publication Critical patent/CN111144169A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application provides a face recognition method, a face recognition device, and an electronic device. The face recognition method includes the following steps: acquiring a thermal distribution image of a target object; acquiring a color image of the target object; acquiring depth information corresponding to the color image; constructing a three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information; and determining identity information corresponding to the target object from the three-dimensional model. With the method and device, the identity information of the target object can be recognized based on the three-dimensional model, improving the accuracy of face recognition and solving the prior-art problem of low recognition accuracy when faces are recognized with two-dimensional image recognition techniques.

Description

Face recognition method and device and electronic equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a face recognition method, a face recognition device, and an electronic device.
Background
With the continuous development of information security technology, more and more terminals, such as mobile phones, access control systems, and security systems, can perform authentication based on a user's biometric information. When a user passes authentication, the user's permissions are considered legitimate, so that the user can pass access control or perform operations such as unlocking, payment, or gaming. Face recognition is a widely used biometric recognition technology: identity recognition is performed simply by facing the camera, so operation is simple.
In the related art, a human face is recognized based on a two-dimensional image recognition technique. Specifically, face images are collected by a first recognition unit and a second recognition unit installed at different positions. A face recognition processing unit then recognizes the face image collected by the first recognition unit to obtain a first candidate set, and recognizes the face image collected by the second recognition unit to obtain a second candidate set. Finally, a candidate whose similarity satisfies a preset rule is selected from the first and second candidate sets as the final recognition result.
However, in some scenarios, the accuracy of the identification is low, for example, when a user wears a simulated mask, the user may be misidentified as a legitimate user.
Disclosure of Invention
The application provides a face recognition method, a face recognition device, and an electronic device, so that the identity information of a target object is recognized based on a three-dimensional model, improving the accuracy of face recognition and solving the prior-art problem of low recognition accuracy when faces are recognized with two-dimensional image recognition techniques.
In one aspect of the present application, a face recognition method provided in an embodiment includes:
acquiring a thermal distribution image of a target object;
acquiring a color image of the target object;
acquiring depth information corresponding to the color image;
constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information;
and determining the identity information corresponding to the target object according to the three-dimensional model.
According to the face recognition method of the embodiments of the application, a thermal distribution image and a color image of the target object are acquired, the depth information corresponding to the color image is acquired, a three-dimensional model of the target object is then constructed from the thermal distribution image, the color image, and the depth information, and finally the identity information corresponding to the target object is determined from the three-dimensional model. Because the thermal distribution images of different users differ, constructing the three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information makes it possible to handle cases such as a user wearing a simulated mask or two users whose appearances are highly similar, improving the accuracy of the recognition result. Moreover, since the identity information of the target object is recognized based on the three-dimensional model, the accuracy of face recognition is further improved.
In another aspect of the present application, an embodiment of a face recognition apparatus includes:
the first acquisition module is used for acquiring a heat distribution image of a target object;
the second acquisition module is used for acquiring a color image of the target object;
the third acquisition module is used for acquiring depth information corresponding to the color image;
a construction module for constructing a three-dimensional model of a target object according to the thermal distribution image, the color image and the depth information;
and the determining module is used for determining the identity information corresponding to the target object according to the three-dimensional model.
The face recognition device of the embodiments of the application acquires a thermal distribution image and a color image of the target object and the depth information corresponding to the color image, then constructs a three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information, and finally determines the identity information corresponding to the target object from the three-dimensional model. Because the thermal distribution images of different users differ, constructing the three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information makes it possible to handle cases such as a user wearing a simulated mask or two users whose appearances are highly similar, improving the accuracy of the recognition result. Moreover, since the identity information of the target object is recognized based on the three-dimensional model, the accuracy of face recognition is further improved.
In another aspect, an embodiment of the present application provides an electronic device, including:
a first recognition unit for acquiring a thermal distribution image of a target object;
a second recognition unit for acquiring a color image of the target object;
the third identification unit is used for acquiring depth information corresponding to the color image;
the electronic device further includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, when executing the program, implementing:
constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information;
and determining the identity information corresponding to the target object according to the three-dimensional model.
According to the electronic device of the embodiments of the application, a thermal distribution image and a color image of the target object are acquired, the depth information corresponding to the color image is acquired, a three-dimensional model of the target object is then constructed from the thermal distribution image, the color image, and the depth information, and finally the identity information corresponding to the target object is determined from the three-dimensional model. Because the thermal distribution images of different users differ, constructing the three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information makes it possible to handle cases such as a user wearing a simulated mask or two users whose appearances are highly similar, improving the accuracy of the recognition result. Moreover, since the identity information of the target object is recognized based on the three-dimensional model, the accuracy of face recognition is further improved.
A non-transitory computer-readable storage medium is also provided, on which a computer program is stored; when executed by a processor, the computer program implements the face recognition method according to the foregoing embodiments of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a face recognition method according to an embodiment of the present application;
FIG. 2 is a schematic view of a thermal distribution image in an embodiment of the present application;
FIG. 3 is a schematic diagram of a color image in an embodiment of the present application;
fig. 4 is a schematic diagram of depth information obtained in the embodiment of the present application;
fig. 5 is a schematic flow chart of a face recognition method according to a second embodiment of the present application;
fig. 6 is a schematic flow chart of a face recognition method according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of a light supplement device in an embodiment of the present application;
fig. 8 is a schematic flowchart of a face recognition method according to a fourth embodiment of the present application;
fig. 9 is a schematic structural diagram of a face recognition apparatus according to a fifth embodiment of the present application;
fig. 10 is a schematic structural diagram of a face recognition apparatus according to a sixth embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an eighth embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application. On the contrary, the embodiments of the application include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
The application mainly addresses the prior-art problem of low recognition accuracy when faces are recognized with two-dimensional image recognition techniques, and provides a face recognition method accordingly.
According to the face recognition method of the embodiments of the application, a thermal distribution image and a color image of the target object are acquired, the depth information corresponding to the color image is acquired, a three-dimensional model of the target object is then constructed from the thermal distribution image, the color image, and the depth information, and finally the identity information corresponding to the target object is determined from the three-dimensional model. Because the thermal distribution images of different users differ, constructing the three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information makes it possible to handle cases such as a user wearing a simulated mask or two users whose appearances are highly similar, improving the accuracy of the recognition result. Moreover, since the identity information of the target object is recognized based on the three-dimensional model, the accuracy of face recognition is further improved.
The following describes a face recognition method, a face recognition device and an electronic device according to embodiments of the present application with reference to the drawings.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present application.
The face recognition method in the embodiments of the application can be applied to applications on electronic devices that require identity authentication, such as game, payment, and social applications; to venues that require identity authentication, such as access control and security checkpoints; or simply to applications with a face recognition function, such as a camera application. This is not limited herein.
The electronic device can be a hardware device with various operating systems, touch screens and/or display screens, such as a mobile phone, a tablet computer, a personal digital assistant, a wearable device, and the like.
As shown in fig. 1, the face recognition method may include the following steps:
step 101, acquiring a thermal distribution image of a target object.
In the embodiments of the present application, the target object may specifically be a human face. The thermal distribution image of the target object is an image of the surface temperature distribution of the target object; unlike a visible-light image, this distribution cannot be observed directly with the naked eye.
It can be understood that, due to black-body radiation, all objects in nature radiate infrared rays according to their temperatures. Therefore, in the present application, the target object may be imaged by a device sensitive to thermal infrared, such as a thermal infrared imager, to obtain a thermal distribution image from which the temperature distribution of the surface of the target object can be determined.
As an example, referring to fig. 2, fig. 2 is a schematic diagram of a thermal distribution image in an embodiment of the present application. As can be seen from fig. 2, the colors displayed in the thermal distribution image are different between the face region and the background region due to different temperatures, and the colors displayed in the thermal distribution image are different between different key points of the face due to the inconsistency of the surface temperatures. Therefore, from the thermal distribution image, the temperature distribution of the target object surface can be determined.
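Numerically, a thermal imager yields a per-pixel temperature array; to render it as a heat map like the one in fig. 2, the temperatures can be normalized into a fixed display range. A minimal sketch, where the function name and the 20 to 40 degree Celsius facial range are illustrative assumptions, not values from the patent:

```python
import numpy as np

def normalize_thermal(readings, t_min=20.0, t_max=40.0):
    """Clip raw surface temperatures (in deg C) to an expected facial
    range and rescale them to [0, 1] for display as a heat map, so that
    warmer regions (e.g. the face) map to higher values than the
    cooler background."""
    t = np.clip(np.asarray(readings, dtype=float), t_min, t_max)
    return (t - t_min) / (t_max - t_min)
```

Rendering such a normalized array with any pseudo-color palette produces the kind of face-versus-background contrast the figure describes.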
Step 102, acquiring a color image of the target object.
Alternatively, the color image of the target object may be acquired by an image sensor, such as a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor.
As a possible implementation, in order to improve the effective resolution and color reproduction of the image sensor, and thereby the definition and stability of the color image, an infrared cut-off filter may be disposed in front of the image sensor to filter out infrared rays.
As an example, referring to fig. 3, fig. 3 is a schematic diagram of a color image in the embodiment of the present application.
And 103, acquiring depth information corresponding to the color image.
As a possible implementation manner, the depth information corresponding to the target object may be acquired through structured light or a Time of Flight (TOF) image sensor.
As another possible implementation, the depth information of the target object may be acquired by the ultrasonic wave transmitting and receiving device. Specifically, the ultrasonic wave transmitting and receiving device may transmit an ultrasonic wave to the target object and automatically detect a received echo reflected by the target object, thereby determining depth information of the target object.
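The echo-timing step above reduces to halving the round-trip path. A hedged sketch, where the function name and the 343 m/s speed of sound in air are assumptions for illustration:

```python
def echo_depth_m(round_trip_s, speed_of_sound=343.0):
    """Depth (m) of the reflecting target from the ultrasonic round-trip
    time (s): the pulse travels to the target and back, so the one-way
    distance is half the total path length."""
    return speed_of_sound * round_trip_s / 2.0
```

A 2 ms round trip, for instance, corresponds to a target roughly 0.34 m from the transceiver.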
As an example, referring to fig. 4, fig. 4 is a schematic view of depth information obtained in an embodiment of the present application.
It should be noted that, in order to improve the real-time performance of face recognition, steps 101 to 103 may be executed in parallel; the sequential description in this embodiment is merely an example.
And 104, constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information.
It can be understood that if the three-dimensional model of the target object is constructed only from the color image and the depth information, a flat two-dimensional picture can already be rejected, which improves the accuracy of face recognition over purely two-dimensional methods. However, in some scenarios, for example when a user wears a simulated mask, or when the faces of two different users are highly similar, as with identical twins, a model built only from color images and depth data may still produce false recognitions.
In the embodiments of the application, it is considered that different users have different body temperatures or temperature distributions. Even if a user wears a simulated mask, that user's temperature distribution differs from the distribution of an authorized user; and even if the appearances of user A and user B are highly similar, their temperature distributions differ. Thus, in the present application, the three-dimensional model of the target object may be constructed from the thermal distribution image, the color image, and the depth information.
As a possible implementation, the thermal distribution image and the color image may be fused to obtain a target image; for example, the two images may be fused according to a preset first weight and a preset second weight.
The first weight and the second weight are preset; the first weight corresponds to the thermal distribution image, the second weight corresponds to the color image, and the sum of the two weights is 1.
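The weighted fusion described above can be sketched as a per-pixel convex combination. The function name and the 0.3/0.7 default weights are illustrative assumptions; the patent only requires that the weights sum to 1:

```python
import numpy as np

def fuse_images(thermal, color, w_thermal=0.3, w_color=0.7):
    """Pixel-wise weighted fusion of a thermal distribution image and a
    color image into a target image. The two preset weights must sum
    to 1, as stated in the text."""
    assert abs(w_thermal + w_color - 1.0) < 1e-9, "weights must sum to 1"
    t = np.asarray(thermal, dtype=float)
    c = np.asarray(color, dtype=float)
    return w_thermal * t + w_color * c
```

For multi-channel inputs, the same combination would be applied channel-wise after aligning the two images to a common resolution.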
In the embodiment of the application, after the target image is generated, the three-dimensional model of the target object can be constructed according to the target image and the depth information.
For example, key point identification may first be performed on the target image based on a face key point detection technique to obtain third key points of the target image. For each third key point, the relative position of the corresponding first key point in the three-dimensional model of the target object is then determined from the depth information of the third key point and its position on the target image. Adjacent first key points can be connected according to their relative positions in three-dimensional space to generate local face three-dimensional frames, where a local face may include facial parts such as the nose, lips, eyes, and cheeks. After the local frames are generated, different local face three-dimensional frames can be spliced together through the first key points they share to obtain the frame of the face three-dimensional model. Finally, the target image is mapped onto this frame to obtain the three-dimensional model of the target object.
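Determining a first key point's position in space from a third key point's image position and depth amounts to back-projecting through a camera model. A minimal pinhole-camera sketch; the intrinsics fx, fy, cx, cy and the function name are assumptions, since the patent does not specify a camera model:

```python
def keypoint_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a 2D key point (u, v) with depth d into camera-space
    coordinates under a pinhole model: X = (u - cx) * d / fx,
    Y = (v - cy) * d / fy, and Z = d."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Connecting the resulting 3D points for adjacent key points yields the local face frames described above.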
And 105, determining the identity information corresponding to the target object according to the three-dimensional model.
In the embodiments of the application, after the three-dimensional model is generated, a matching target three-dimensional model can be determined from pre-stored three-dimensional models, and the person information associated with the target three-dimensional model can be used as the identity information corresponding to the target object. The pre-stored three-dimensional models are the three-dimensional models of users with permission information.
As a possible implementation manner, the matched target three-dimensional model may be determined from pre-stored three-dimensional models according to the relative position of each first keypoint in the three-dimensional models. In specific implementation, one three-dimensional model can be selected from the pre-stored three-dimensional models one by one and matched with the three-dimensional model of the target object. Specifically, for each first key point in the three-dimensional model of the target object, the first key point may be matched with a corresponding key point in the selected three-dimensional model, so as to obtain a matching degree between the first key point and the corresponding key point in the selected three-dimensional model.
It should be noted that the matching degree between the first key point and the corresponding key point in the selected three-dimensional model may be an overall matching degree, for example covering feature information such as shape, size, contour, and position. In specific implementation, for each first key point, its shape, size, contour, and relative position in the three-dimensional model of the face may each be matched with the corresponding feature of the key point in the selected three-dimensional model, yielding a matching degree for the shape feature, the size feature, the contour feature, the relative-position feature, and so on. The obtained matching degrees can then be averaged, and the average used as the matching degree of the first key point with the corresponding key point in the selected three-dimensional model.
Or, the matching degree of the first key point and the corresponding key point in the selected three-dimensional model may refer to a partial matching degree of the first key point and the corresponding key point in the selected three-dimensional model, for example, the matching degree may include a matching degree of a shape feature, a matching degree of a size feature, a matching degree of a contour feature, and/or a matching degree of a relative position feature. In specific implementation, for each first key point, the shape of the first key point may be matched with the shape of a corresponding key point in the selected three-dimensional model to obtain a matching degree of shape features, and/or the size of the first key point may be matched with the size of a corresponding key point in the selected three-dimensional model to obtain a matching degree of size features, and/or the contour of the first key point may be matched with the contour of a corresponding key point in the selected three-dimensional model to obtain a matching degree of contour features, and/or the relative position of the first key point in the three-dimensional model of the face may be matched with the relative position of the corresponding key point in the selected three-dimensional model to obtain a matching degree of relative position features. Then, the matching degree of the shape feature, the matching degree of the size feature, the matching degree of the contour feature, and/or the matching degree of the relative position feature may be used as the matching degree of the first key point and the corresponding key point in the selected three-dimensional model, which is not limited.
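The overall per-key-point matching degree described above, i.e. the mean of the per-feature scores, can be sketched as follows; the function name and the assumption that each score lies in [0, 1] are illustrative:

```python
def keypoint_match_degree(feature_scores):
    """Overall matching degree of one key point: the average of its
    per-feature matching degrees (e.g. shape, size, contour, and
    relative position), each assumed to lie in [0, 1]."""
    return sum(feature_scores) / len(feature_scores)
```

Passing a single feature score reproduces the partial-matching variant, where one feature alone serves as the key point's matching degree.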
In the embodiment of the application, after the matching degree of each first key point and the corresponding key point in the selected three-dimensional model is determined, the matching degree of the three-dimensional model of the target object and the selected three-dimensional model can be determined. Specifically, a weight corresponding to each first key point may be preset, after the matching degree between each first key point and a corresponding key point in the selected three-dimensional model is determined, the weight corresponding to the first key point may be multiplied by the matching degree between the first key point and the corresponding key point in the selected three-dimensional model to obtain a product value, and then all the product values are accumulated to obtain the matching degree between the three-dimensional model of the target object and the selected three-dimensional model.
Optionally, when the matching degree between the three-dimensional model of the target object and the selected three-dimensional model exceeds a threshold, the selected three-dimensional model is considered as the target three-dimensional model matched with the three-dimensional model of the target object. The threshold is preset, and for example, the threshold may be 95%.
Further, more than one pre-stored three-dimensional model may match; in that case, the selected three-dimensional model with the highest matching degree to the three-dimensional model of the target object may be used as the target three-dimensional model.
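The weighted accumulation over key points, the threshold check, and the highest-score selection described above can be sketched together; the helper names, the candidate structure, and the 0.95 default (matching the 95% example threshold) are assumptions:

```python
def model_match_degree(keypoint_weights, keypoint_degrees):
    """Matching degree of the whole model: each key point's matching
    degree multiplied by its preset weight, then accumulated."""
    return sum(w * d for w, d in zip(keypoint_weights, keypoint_degrees))

def select_target_model(candidates, threshold=0.95):
    """candidates: list of (person_info, match_degree) pairs. Return the
    person whose stored model scores highest above the threshold, or
    None if no stored model matches well enough."""
    above = [(info, d) for info, d in candidates if d >= threshold]
    if not above:
        return None
    return max(above, key=lambda pair: pair[1])[0]
```

Returning None models the rejection case, in which the target object does not correspond to any authorized user.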
In this embodiment, the target three-dimensional model may carry information about the person associated with it, for example identification information such as name, job number, gender, and/or age. After the target three-dimensional model matching the three-dimensional model of the target object is determined, the person information associated with the target three-dimensional model may be used as the identity information corresponding to the target object. For example, when the face recognition method is applied to a company access control system, the person information may be the user's job number and name; when the method is applied only to an application with a face recognition function, such as a camera application, it may be the user's name, gender, and age.
According to the face recognition method of the embodiments of the application, a thermal distribution image and a color image of the target object are acquired, the depth information corresponding to the color image is acquired, a three-dimensional model of the target object is then constructed from the thermal distribution image, the color image, and the depth information, and finally the identity information corresponding to the target object is determined from the three-dimensional model. Because the thermal distribution images of different users differ, constructing the three-dimensional model of the target object from the thermal distribution image, the color image, and the depth information makes it possible to handle cases such as a user wearing a simulated mask or two users whose appearances are highly similar, improving the accuracy of the recognition result. Moreover, since the identity information of the target object is recognized based on the three-dimensional model, the accuracy of face recognition is further improved.
It is understood that when the target object wears an accessory, such as a mask, most facial features are occluded and cannot be identified from the color image alone, but can still be identified from the depth information and the thermal distribution image. Further, when the target object wears the accessory, the heat it radiates is attenuated; if the thermal distribution image and the color image are still fused with the preset first weight and second weight to generate the three-dimensional model of the target object, the target object may fail to be recognized. Therefore, when the target object wears an accessory, the first weight can be increased and the second weight correspondingly decreased, improving the accuracy of the recognition result. The above process is described in detail below with reference to fig. 5.
Fig. 5 is a schematic flow chart of a face recognition method according to a second embodiment of the present application.
As shown in fig. 5, before the thermal distribution image and the color image are fused to obtain the target image, the face recognition method may further include the following steps:
Step 201, identifying whether the target object wears an accessory.
In the embodiment of the present application, the accessory may be a mask, glasses, a veil, or the like.
In the embodiment of the present application, whether the target object wears an accessory may be identified; if yes, step 202 is executed, and if no, the thermal distribution image and the color image may be fused directly with the preset first weight and the preset second weight, without further adjustment.
As a possible implementation, depth data of various accessories may be collected as sample data, a recognition model for identifying accessories may be trained on the sample data, and the trained recognition model may then be used to identify whether the target object wears an accessory.
Step 202, if yes, increasing the first weight and correspondingly decreasing the second weight.
In the embodiment of the present application, when the target object wears an accessory, the thermal diffusion of the target object is low; in this case, the first weight corresponding to the thermal distribution image may be increased and the second weight corresponding to the color image correspondingly decreased, which improves the accuracy of the recognition result.
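The weighted fusion described above can be sketched as follows. The default weights and the adjustment step are hypothetical: the application requires only that the two weights sum to 1 and that the first (thermal) weight rise when an accessory is worn.

```python
import numpy as np

def fuse_images(thermal, color, w_thermal=0.4, w_color=0.6,
                accessory_worn=False, bump=0.2):
    """Fuse the thermal distribution image with the color image by a
    weighted sum. When an accessory (mask, glasses, veil) is detected,
    the first weight (thermal) is increased and the second weight
    (color) decreased so that they still sum to 1."""
    if accessory_worn:
        w_thermal = min(1.0, w_thermal + bump)  # hypothetical adjustment step
        w_color = 1.0 - w_thermal
    return w_thermal * thermal.astype(np.float64) + w_color * color.astype(np.float64)
```

With these illustrative weights, a thermal pixel of 100 and a color pixel of 200 fuse to about 160, and to about 140 once the accessory bump shifts weight toward the thermal image.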
It can be understood that in different seasons the thermal diffusion of the target object differs, so the first weight corresponding to the thermal distribution image and the second weight corresponding to the color image should also differ. The above process is described in detail below with reference to fig. 6.
Fig. 6 is a schematic flow chart of a face recognition method according to a third embodiment of the present application.
As shown in fig. 6, the face recognition method may include the following steps:
step 301, determining the current shooting time.
It can be understood that after an image is captured, information such as the image name, the storage path and the shooting time may be recorded in the attribute information of the image. Therefore, in the present application, the current shooting time may be determined from the attribute information of the image, for example from the attribute information of the color image or of the thermal distribution image.
Step 302, if the current shooting time falls in summer, decreasing the first weight and correspondingly increasing the second weight.
In the embodiment of the present application, when the current shooting time falls in summer, the thermal diffusion of the target object is high; in this case, in order to improve the accuracy of face recognition, the first weight corresponding to the thermal distribution image may be decreased and the second weight corresponding to the color image correspondingly increased.
Step 303, if the current shooting time falls in winter, increasing the first weight and correspondingly decreasing the second weight.
In the embodiment of the present application, when the current shooting time falls in winter, the thermal diffusion of the target object is low; in this case, in order to improve the accuracy of face recognition, the first weight corresponding to the thermal distribution image may be increased and the second weight corresponding to the color image correspondingly decreased.
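A minimal sketch of the season-dependent weight selection follows. The northern-hemisphere month ranges and the 0.15 adjustment are assumptions made for illustration; the embodiment specifies only the direction of the change, not its magnitude.

```python
from datetime import datetime

def weights_for_season(capture_time, base_thermal=0.5, delta=0.15):
    """Return (first weight, second weight) for the thermal and color
    images, based on the shooting time read from the image attributes."""
    month = capture_time.month
    if month in (6, 7, 8):      # summer: thermal diffusion is high
        w_thermal = base_thermal - delta
    elif month in (12, 1, 2):   # winter: thermal diffusion is low
        w_thermal = base_thermal + delta
    else:
        w_thermal = base_thermal
    return w_thermal, 1.0 - w_thermal
```

In a deployed system the capture timestamp would come from the image's attribute information (e.g. EXIF data) rather than being passed in directly.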
It should be noted that, in order to save computation, a three-dimensional model of the target object needs to be constructed for face recognition only when the target object is a living body; when the target object is not a living body, for example a two-dimensional picture, there is no need to construct a three-dimensional model at all. Specifically, whether the target object is a living body can be determined from the thermal distribution image.
It can be understood that preset key points of a human face, such as the eyelids and the corners of the mouth, generate heat. Therefore, in the present application, key point recognition may be performed on the thermal distribution image using a key point recognition technique, and it may then be determined whether each preset key point in the thermal distribution image generates heat; if yes, the target object is determined to be a living body, and if not, the target object is determined not to be a living body.
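This liveness check can be sketched as follows. The key-point positions and the temperature threshold are assumptions for illustration; the embodiment states only that every preset key point must generate heat.

```python
def is_live(thermal_image, keypoints, min_temp=30.0):
    """Return True only if every preset key point (eyelids, mouth
    corners, ...) shows heat in the thermal distribution image.
    `thermal_image` is a 2-D grid of temperature readings, `keypoints`
    a list of (row, col) positions, and `min_temp` a hypothetical
    threshold in degrees Celsius."""
    return all(thermal_image[r][c] >= min_temp for r, c in keypoints)
```

A flat two-dimensional picture held up to the camera would show near-ambient readings at the key points and be rejected before any three-dimensional model is built.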
As a possible implementation, because the imaging quality of the color image acquired by the image sensor may be poor when the shooting environment is dark, in the present application the ambient brightness value may be measured in order to improve imaging quality, and supplementary light may be provided to the target object when the ambient brightness value is lower than a preset threshold.
For example, the ambient brightness value may be measured with an independent photometric device. Alternatively, since the camera automatically adjusts its ISO sensitivity according to the ambient brightness, the ISO value automatically set by the camera may be read and the ambient brightness value derived inversely from it; this is not limited herein.
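The inverse derivation from ISO can be sketched as below, assuming the usual reciprocal relationship between sensitivity and scene brightness; the reference values and the lux threshold are illustrative and not taken from the application.

```python
def brightness_from_iso(iso, reference_iso=100.0, reference_lux=1000.0):
    """Estimate ambient brightness from the camera's auto-adjusted ISO:
    a doubling of ISO roughly corresponds to a halving of scene
    brightness relative to the (hypothetical) reference point."""
    return reference_lux * reference_iso / iso

def needs_fill_light(iso, lux_threshold=50.0):
    """Trigger the light supplement device when the derived ambient
    brightness falls below the preset threshold."""
    return brightness_from_iso(iso) < lux_threshold
```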
As a possible implementation manner, when the ambient brightness value is lower than the preset threshold, the light supplement device may be used to supplement light to the target object. The light supplement device may be composed of a plurality of light emitting diodes.
As an example, referring to fig. 7, the light supplement device in fig. 7 is circular and contains 8 light emitting diodes. The middle area of the light supplement device may be hollowed out to provide an embedding space for the image processor and the camera lens, and the 8 light emitting diodes may be arranged at equal intervals around the lens, so as to provide evenly distributed light for the target object and improve the imaging quality of the image sensor.
As an example, referring to fig. 8, fig. 8 is a schematic flowchart of a face recognition method according to a fourth embodiment of the present application.
As shown in fig. 8, the face recognition method may include the following steps:
1. A color image of the target object is acquired.
2. Depth information corresponding to the color image is acquired.
3. A thermal distribution image of the target object is acquired.
4. A three-dimensional model of the target object is constructed according to the thermal distribution image, the color image and the depth information.
5. The three-dimensional model of the target object is matched against the pre-stored three-dimensional models.
6. It is judged whether the three-dimensional model of the target object matches one of the pre-stored three-dimensional models; if yes, step 7 is executed, and if not, step 8 is executed.
7. The person information associated with the matched three-dimensional model is taken as the identity information corresponding to the target object.
8. Prompting and/or alarming is performed.
In the embodiment of the present application, when the three-dimensional model of the target object does not match any of the pre-stored three-dimensional models, the target object is an unauthorized user without permission information, so prompting and/or alarming may be performed.
For example, when the face recognition method is applied to the access control system of a company, alarm processing may be performed when the target object has no permission, in order to ensure the information security of the enterprise. When the target object has permission, the identity information of the target object may be displayed on the access control system and the door opened to let the target object pass.
Alternatively, when the face recognition method is applied only within an application with a face recognition function, such as a payment application or a photographing application, in order to ensure the property and privacy security of the user, a prompt such as "no permission" may be shown on the display interface of the electronic device when the target object has no permission.
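Steps 5 to 7 above can be sketched as a nearest-model search over relative key-point positions. The coordinate representation, the distance measure and the tolerance are assumptions made for illustration; the application states only that relative key-point positions of the target model and a stored model must match.

```python
import math

def model_distance(model_a, model_b):
    """Mean Euclidean distance between corresponding key points of two
    three-dimensional models, each given as a list of (x, y, z)
    positions relative to a common reference point (e.g. the nose tip)."""
    return sum(math.dist(p, q) for p, q in zip(model_a, model_b)) / len(model_a)

def identify(target_model, stored_models, tolerance=0.5):
    """Return the person information associated with the closest
    pre-stored model, or None (prompt and/or alarm) when no stored
    model matches within `tolerance`."""
    best_id, best_dist = None, float("inf")
    for identity, model in stored_models.items():
        d = model_distance(target_model, model)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= tolerance else None
```

Returning `None` corresponds to step 8: the caller would then prompt on the display interface or raise an alarm.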
In order to implement the above embodiments, the present application further provides a face recognition apparatus.
Fig. 9 is a schematic structural diagram of a face recognition device according to a fifth embodiment of the present application.
As shown in fig. 9, the face recognition apparatus 100 includes: a first obtaining module 101, a second obtaining module 102, a third obtaining module 103, a constructing module 104, and a determining module 105.
The first acquiring module 101 is configured to acquire a thermal distribution image of a target object.
The second acquiring module 102 is configured to acquire a color image of the target object.
And the third obtaining module 103 is configured to obtain depth information corresponding to the color image.
As a possible implementation manner, the third obtaining module 103 is configured to obtain depth information corresponding to the color image through the ultrasound transmitting and receiving device.
And the building module 104 is used for building a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information.
And the determining module 105 is configured to determine, according to the three-dimensional model, identity information corresponding to the target object.
As a possible implementation, the determining module 105 is specifically configured to: determine a matched target three-dimensional model from the pre-stored three-dimensional models according to the relative position of each first key point in the three-dimensional model, wherein the relative position of each second key point in the target three-dimensional model matches the relative position of each first key point in the three-dimensional model; and take the person information associated with the target three-dimensional model as the identity information corresponding to the target object.
Further, in a possible implementation manner of the embodiment of the present application, referring to fig. 10, on the basis of the embodiment shown in fig. 9, the face recognition apparatus 100 may further include:
as a possible implementation manner, the building module 104 is specifically configured to: fusing the heat distribution image and the color image to obtain a target image; and constructing a three-dimensional model of the target object according to the target image and the depth information.
Optionally, the building module 104 is further configured to: fusing the heat distribution image and the color image according to a preset first weight and a preset second weight to obtain a target image; the first weight is the weight corresponding to the thermal distribution image, the second weight is the weight corresponding to the color image, and the sum of the first weight and the second weight is 1.
And the identification module 106 is configured to identify whether the target object wears an accessory before the thermal distribution image and the color image are fused to obtain the target image.
A processing module 107, configured to increase the first weight and correspondingly decrease the second weight if yes.
And the determining module 108 is configured to determine a current shooting time before the thermal distribution image and the color image are fused to obtain the target image.
The processing module 107 is further configured to decrease the first weight and correspondingly increase the second weight if the current shooting time falls in summer, and to increase the first weight and correspondingly decrease the second weight if the current shooting time falls in winter.
The detecting module 109 is configured to determine whether the target object is a living body according to whether each preset key point in the thermal distribution image generates heat before acquiring the color image including the target object.
As a possible implementation manner, the second obtaining module 102 is specifically configured to: a color image of the target object is acquired by an image sensor.
And a measuring module 110, configured to measure an ambient brightness value.
And a supplementary lighting module 111, configured to perform supplementary lighting when the ambient brightness value is lower than a preset threshold.
It should be noted that the foregoing explanation on the embodiment of the face recognition method is also applicable to the face recognition apparatus 100 of this embodiment, and is not repeated here.
The face recognition device of the embodiment of the present application acquires a thermal distribution image and a color image of the target object together with the depth information corresponding to the color image, then constructs a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information, and finally determines the identity information corresponding to the target object according to the three-dimensional model. Because the thermal distribution images of different users differ, building the three-dimensional model from the thermal distribution image, the color image and the depth information makes it possible to handle cases such as a user wearing a lifelike mask or users with very similar facial appearance, thereby improving the accuracy of the recognition result. Moreover, identifying the identity information of the target object based on a three-dimensional model can further improve the accuracy of face recognition.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 11 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present application.
As shown in fig. 11, the electronic device may include: a first recognition unit 210, a second recognition unit 220, and a third recognition unit 230.
The first identification unit 210 is configured to acquire a thermal distribution image of the target object.
As a possible implementation, the first recognition unit 210 includes a thermal infrared imager.
And a second recognition unit 220 for acquiring a color image of the target object.
And a third identifying unit 230, configured to acquire depth information corresponding to the color image.
As a possible implementation, the third identification unit 230 comprises an ultrasound receiving and transmitting device.
Further, the electronic device further includes: a memory 240, a processor 250, and a computer program stored on the memory 240 and executable on the processor 250. When executing the program, the processor 250 implements: constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information; and determining the identity information corresponding to the target object according to the three-dimensional model.
It should be noted that the foregoing explanation on the embodiment of the face recognition method is also applicable to the electronic device of the embodiment, and is not repeated here.
The electronic device acquires a thermal distribution image and a color image of the target object together with the depth information corresponding to the color image, then constructs a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information, and finally determines the identity information corresponding to the target object according to the three-dimensional model. Because the thermal distribution images of different users differ, building the three-dimensional model from the thermal distribution image, the color image and the depth information makes it possible to handle cases such as a user wearing a lifelike mask or users with very similar facial appearance, thereby improving the accuracy of the recognition result. Moreover, identifying the identity information of the target object based on a three-dimensional model can further improve the accuracy of face recognition.
As a possible implementation manner, referring to fig. 12, on the basis of the embodiment shown in fig. 11, the second identifying unit 220 includes: a lens 221, an image sensor 222, and an infrared cut filter 223.
An infrared cut filter 223 is disposed in front of the lens 221 and the image sensor 222, and is used for filtering infrared rays.
As a possible implementation, the electronic device may further include: and a light supplement device.
And the light supplementing device is used for supplementing light when the ambient brightness value is lower than a preset threshold value.
As a possible implementation manner, the light supplement device includes a plurality of light emitting diodes.
As an example, referring to fig. 7, a plurality of light emitting diodes of the light supplement device may be disposed around a lens of the second recognition unit at equal intervals, so as to provide uniformly distributed light for the target object and improve the imaging quality of the image sensor.
The present application also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face recognition method as described above.
The present application also provides a computer program product; when the instructions in the computer program product are executed by a processor, the face recognition method as described above is implemented.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. A face recognition method, characterized in that the method comprises the following steps:
acquiring a thermal distribution image of a target object;
acquiring a color image of the target object;
acquiring depth information corresponding to the color image;
constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information;
and determining the identity information corresponding to the target object according to the three-dimensional model.
2. The method of claim 1, wherein constructing a three-dimensional model of a target object from the thermal distribution image, the color image, and the depth information comprises:
fusing the thermal distribution image and the color image to obtain a target image;
and constructing a three-dimensional model of the target object according to the target image and the depth information.
3. The face recognition method according to claim 2, wherein the fusing the thermal distribution image and the color image to obtain a target image comprises:
fusing the thermal distribution image and the color image according to a preset first weight and a preset second weight to obtain a target image; wherein the first weight is a weight corresponding to the thermal distribution image, the second weight is a weight corresponding to the color image, and a sum of the first weight and the second weight is 1.
4. The face recognition method according to claim 3, wherein before the fusing the thermal distribution image and the color image to obtain the target image, the method further comprises:
identifying whether the target object wears an accessory;
if yes, increasing the first weight and correspondingly decreasing the second weight.
5. The face recognition method according to claim 3, wherein before the fusing the thermal distribution image and the color image to obtain the target image, the method further comprises:
determining the current shooting time;
if the current shooting time falls in summer, decreasing the first weight and correspondingly increasing the second weight;
and if the current shooting time falls in winter, increasing the first weight and correspondingly decreasing the second weight.
6. The method of claim 1, wherein prior to obtaining the color image including the target object, the method further comprises:
and determining whether the target object is a living body according to whether each preset key point in the thermal distribution image generates heat.
7. The method of claim 1, wherein the determining identity information corresponding to the target object according to the three-dimensional model comprises:
determining a matched target three-dimensional model from pre-stored three-dimensional models according to the relative position of each first key point in the three-dimensional models; the relative position of each second key point in the target three-dimensional model is matched with the relative position of each first key point in the three-dimensional model;
and taking the person information associated with the target three-dimensional model as the identity information corresponding to the target object.
8. The face recognition method according to any one of claims 1 to 7, wherein the obtaining depth information corresponding to the color image comprises:
and acquiring depth information corresponding to the color image through an ultrasonic transmitting and receiving device.
9. The face recognition method of any one of claims 1-7, wherein obtaining a color image of the target object comprises:
acquiring a color image of the target object by an image sensor;
before the acquiring the color image of the target object, the method further comprises:
measuring the environmental brightness value;
and supplementing light when the environment brightness value is lower than a preset threshold value.
10. An apparatus for face recognition, the apparatus comprising:
the first acquisition module is used for acquiring a thermal distribution image of a target object;
the second acquisition module is used for acquiring a color image of the target object;
the third acquisition module is used for acquiring depth information corresponding to the color image;
a construction module for constructing a three-dimensional model of a target object according to the thermal distribution image, the color image and the depth information;
and the determining module is used for determining the identity information corresponding to the target object according to the three-dimensional model.
11. An electronic device, comprising:
a first recognition unit for acquiring a thermal distribution image of a target object;
a second recognition unit for acquiring a color image of the target object;
the third identification unit is used for acquiring depth information corresponding to the color image;
the electronic device further includes: a memory, a processor, and a computer program stored on the memory and executable on the processor, when executing the program, implementing:
constructing a three-dimensional model of the target object according to the thermal distribution image, the color image and the depth information;
and determining the identity information corresponding to the target object according to the three-dimensional model.
12. The electronic device of claim 11, wherein the first identification unit comprises a thermal infrared imager, the third identification unit comprises an ultrasonic receiving and transmitting device, and the second identification unit comprises: the device comprises a lens, an image sensor and an infrared cut-off filter;
the infrared cut filter is arranged in front of the lens and the image sensor and used for filtering infrared rays.
13. The electronic device of claim 12, further comprising:
and the light supplementing device is used for supplementing light when the ambient brightness value is lower than a preset threshold value and comprises a plurality of light emitting diodes.
14. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the face recognition method according to any one of claims 1-9.
CN201811303261.5A 2018-11-02 2018-11-02 Face recognition method and device and electronic equipment Pending CN111144169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811303261.5A CN111144169A (en) 2018-11-02 2018-11-02 Face recognition method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111144169A true CN111144169A (en) 2020-05-12

Family

ID=70515469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811303261.5A Pending CN111144169A (en) 2018-11-02 2018-11-02 Face recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111144169A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798342A (en) * 2020-07-07 2020-10-20 德能森智能科技(无锡)有限公司 Smart community system based on cloud platform
CN112214773A (en) * 2020-09-22 2021-01-12 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection and electronic equipment
CN112328995A (en) * 2020-07-08 2021-02-05 德能森智能科技(成都)有限公司 Social management system based on TOF image sensor verification
CN112669817A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Language identification method and device and electronic equipment
CN112749413A (en) * 2020-08-03 2021-05-04 德能森智能科技(成都)有限公司 Authority verification device and method based on intelligent park management

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN105608450A (en) * 2016-03-01 2016-05-25 天津中科智能识别产业技术研究院有限公司 Heterogeneous face identification method based on deep convolutional neural network
CN106096503A (en) * 2016-05-30 2016-11-09 东南大学 A kind of based on key point with the three-dimensional face identification method of local feature
CN106446779A (en) * 2016-08-29 2017-02-22 深圳市软数科技有限公司 Method and apparatus for identifying identity
CN106548138A (en) * 2016-10-21 2017-03-29 哈尔滨工业大学深圳研究生院 Identity identifying method and system based on the dynamic feature of lip
CN106919899A (en) * 2017-01-18 2017-07-04 北京光年无限科技有限公司 The method and system for imitating human face expression output based on intelligent robot
CN107067470A (en) * 2017-04-05 2017-08-18 东北大学 Portable three-dimensional reconstruction of temperature field system based on thermal infrared imager and depth camera
CN107066983A (en) * 2017-04-20 2017-08-18 腾讯科技(上海)有限公司 A kind of auth method and device
CN108073891A (en) * 2017-11-10 2018-05-25 广东日月潭电源科技有限公司 A kind of 3 D intelligent face identification system
CN108229239A (en) * 2016-12-09 2018-06-29 武汉斗鱼网络科技有限公司 A kind of method and device of image procossing
CN108447017A (en) * 2018-05-31 2018-08-24 Oppo广东移动通信有限公司 Face virtual face-lifting method and device
CN108550185A (en) * 2018-05-31 2018-09-18 Oppo广东移动通信有限公司 Beautifying faces treating method and apparatus
CN108694389A (en) * 2018-05-15 2018-10-23 Oppo广东移动通信有限公司 Safe verification method based on preposition dual camera and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798342A (en) * 2020-07-07 2020-10-20 德能森智能科技(无锡)有限公司 Smart community system based on cloud platform
CN112328995A (en) * 2020-07-08 2021-02-05 德能森智能科技(成都)有限公司 Social management system based on TOF image sensor verification
CN112749413A (en) * 2020-08-03 2021-05-04 德能森智能科技(成都)有限公司 Authority verification device and method based on intelligent park management
CN112861170A (en) * 2020-08-03 2021-05-28 德能森智能科技(成都)有限公司 Smart park management system capable of protecting privacy
CN112214773A (en) * 2020-09-22 2021-01-12 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection and electronic equipment
CN112669817A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Language identification method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN111144169A (en) Face recognition method and device and electronic equipment
JP7151814B2 (en) Information processing device, information processing method and program
US20210287026A1 (en) Method and apparatus with liveness verification
WO2019218621A1 (en) Detection method for living being, device, electronic apparatus, and storage medium
CN108197586B (en) Face recognition method and device
KR102466997B1 (en) Liveness test method and apparatus
JP6587435B2 (en) Image processing apparatus, information processing method, and program
CN107341481A (en) It is identified using structure light image
CN110287671B (en) Verification method and device, electronic equipment and storage medium
TW201401186A (en) System and method for identifying human face
CN107479801A (en) Displaying method of terminal, device and terminal based on user's expression
TW202026948A (en) Methods and devices for biological testing and storage medium thereof
CN107563304A (en) Unlocking terminal equipment method and device, terminal device
KR101754152B1 (en) Thermal Patient Monitering System by Using Multiple Band Camera and Method thereof
KR102317180B1 (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
KR101919090B1 (en) Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
WO2016203698A1 (en) Face detection device, face detection system, and face detection method
CN109766755A (en) Face identification method and Related product
CN111104833A (en) Method and apparatus for in vivo examination, storage medium, and electronic device
CN106991376B (en) Depth information-combined side face verification method and device and electronic device
US20170300685A1 (en) Method and system for visual authentication
CN113029349A (en) Temperature monitoring method and device, storage medium and equipment
JPWO2020065954A1 (en) Authentication device, authentication method, authentication program and recording medium
WO2019024350A1 (en) Biometric recognition method and apparatus
CN108875472A (en) Image collecting device and face auth method based on the image collecting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518119 No.1 Yan'an Road, Kuiyong street, Dapeng New District, Shenzhen City, Guangdong Province

Applicant after: BYD Semiconductor Co.,Ltd.

Address before: 518119 No.1 Yan'an Road, Kuiyong street, Dapeng New District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN BYD MICROELECTRONICS Co.,Ltd.