CN210166794U - Face recognition device and electronic equipment - Google Patents

Face recognition device and electronic equipment

Info

Publication number: CN210166794U
Application number: CN201920867396.8U
Authority: CN (China)
Prior art keywords: image, module, face recognition, infrared, processor
Legal status: Active
Original language: Chinese (zh)
Inventor: 曾伟平 (Zeng Weiping)
Assignee: Shenzhen Goodix Technology Co Ltd

Landscapes

  • Image Analysis (AREA)

Abstract

A face recognition device and an electronic device are provided, which can improve the security of face recognition. The face recognition device includes an infrared light emitting module, a structured light projection module, an infrared image acquisition module, and a processor. The infrared light emitting module is connected to the processor and configured to emit infrared light toward a recognition target; the structured light projection module is connected to the processor and configured to project structured light onto the recognition target; the infrared image acquisition module is connected to the processor and configured to receive the infrared light reflected by the recognition target to obtain a two-dimensional infrared image of the target, and to receive the structured light reflected by the recognition target to obtain a depth image of the target; and the processor is configured to receive the two-dimensional infrared image and the depth image and to perform face recognition based on both.

Description

Face recognition device and electronic equipment
Technical Field
The present application relates to the field of biometric identification technology, and more particularly, to a face recognition apparatus and an electronic device.
Background
Face recognition is a biometric technology that performs identity recognition based on facial feature information. A video camera or webcam is used to collect images or video streams containing human faces; the faces are automatically detected and tracked in the images, and a series of related techniques, such as image preprocessing, feature extraction, matching and recognition, are then applied to the detected faces. These techniques are commonly called portrait recognition or facial recognition. With the rapid development of computer and network technologies, face recognition has been widely applied in many industries and fields, such as intelligent access control, intelligent door locks, mobile terminals, public security, entertainment, and military applications.
At present, face recognition is generally performed on Two-Dimensional (2D) images of a human face. Because a 2D image carries no Three-Dimensional (3D) face information, an existing face recognition device can also successfully recognize a photograph, or a curved surface model with a photograph attached. It therefore cannot accurately determine whether the target is a real face, and its security is poor.
SUMMARY OF THE UTILITY MODEL
The embodiments of the present application provide a face recognition device, a face recognition method, and an electronic device, which can improve the security of face recognition.
In a first aspect, a face recognition apparatus is provided, including: the system comprises an infrared light emitting module, a structured light projection module, an infrared image acquisition module and a processor;
the infrared light emitting module is connected to the processor and used for emitting infrared light to an identification target;
the structured light projection module is connected to the processor and used for projecting structured light to the identification target;
the infrared image acquisition module is connected to the processor and is used for receiving a reflected infrared light signal of the infrared light reflected by the identification target to obtain a two-dimensional infrared image of the identification target and receiving a reflected structured light signal of the structured light reflected by the identification target to obtain a depth image of the identification target;
the processor is used for receiving the two-dimensional infrared image and the depth image and carrying out face recognition based on the two-dimensional infrared image and the depth image.
According to the solution of the embodiment of the present application, the infrared image acquisition module receives the infrared light and the structured light reflected by the recognition target to obtain a two-dimensional infrared image and a depth image, and the processor receives both. The two-dimensional features of the face can be determined from the two-dimensional infrared image, and the three-dimensional features from the depth image; performing face recognition based on both images therefore improves the accuracy of recognition and the security of the device.
In one possible implementation, the apparatus further includes: the visible light image acquisition module is connected to the processor and used for acquiring the visible light image of the identification target, and the processor is also used for controlling the infrared light emitting module to emit the infrared light according to the visible light image output by the visible light image acquisition module.
In one possible implementation manner, the recognition target is a face of a user, and the apparatus further includes: and the display is connected with the processor and used for displaying the visible light image or the two-dimensional infrared image or the three-dimensional point cloud image corresponding to the depth image and indicating a user to adjust the position of the face.
In one possible implementation, the structured light is a dot matrix light or random speckle, and the structured light projection module is a dot matrix light projector or a speckle structured light projector.
In one possible implementation, the apparatus further includes: and the distance detection module is connected with the processor and is used for detecting the distance from the recognition target to the face recognition device.
In one possible implementation, the distance detection module is an ultrasonic detector or an electromagnetic wave detector.
In a possible implementation manner, the processor is further configured to control the structured light projection module to project the structured light to the recognition target according to the distance from the recognition target output by the distance detection module to the face recognition device.
In one possible implementation, the apparatus further includes: and the output module is connected with the processor and used for outputting the distance information from the recognition target to the face recognition device.
In one possible implementation, the output module includes: a display module, a sound module, a light module or a vibration module.
In a possible implementation manner, the processor is further configured to control whether the output module outputs information of the distance according to the distance from the recognition target to the face recognition device.
In a possible implementation manner, when the distance from the recognition target to the face recognition device is in a first distance range interval, the processor is configured to control the output module not to output the distance information from the recognition target to the face recognition device;
and when the distance from the recognition target to the face recognition device is out of a first distance range interval, the processor is used for controlling the output module to output the distance information from the recognition target to the face recognition device.
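The implementations above describe the processor deciding whether the output module should report the distance, depending on whether the target lies inside a first distance range. As an illustrative sketch only (the patent specifies no concrete thresholds; the 200-600 mm range below is an assumed example), the control logic might look like:

```python
def should_output_distance(distance_mm, range_min_mm=200, range_max_mm=600):
    """Return True when the target is outside the first distance range,
    i.e. the output module should report the distance so the user can
    move closer or farther away. The 200-600 mm bounds are assumptions."""
    inside_first_range = range_min_mm <= distance_mm <= range_max_mm
    return not inside_first_range

# A target at 850 mm is out of range, so the distance is reported;
# a target at 400 mm is in range, so the output module stays silent.
print(should_output_distance(850))  # True
print(should_output_distance(400))  # False
```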
In one possible implementation, the processor is further configured to: matching the two-dimensional infrared image with a plurality of infrared image templates;
and when the matching is successful, determining that the two-dimensional face recognition is successful, or when the matching fails, determining that the face recognition fails.
In one possible implementation, the processor is further configured to:
when the two-dimensional face recognition is successful, processing the depth image to obtain a three-dimensional point cloud image, and matching the three-dimensional point cloud image with a plurality of three-dimensional point cloud image templates;
and when the matching is successful, determining that the face recognition is successful, or when the matching fails, determining that the face recognition fails.
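The two implementations above describe a cascade: 3D point-cloud matching runs only after 2D infrared matching succeeds. A minimal sketch of this two-stage flow (the matcher functions and the toy equality test below are hypothetical placeholders, not the patent's actual matching method):

```python
def recognize_face(ir_vec, cloud, ir_templates, cloud_templates, match_2d, match_3d):
    """Two-stage verification: 2D infrared matching first, then 3D point-cloud
    matching, mirroring the cascade described above."""
    if not any(match_2d(ir_vec, t) for t in ir_templates):
        return False  # 2D recognition failed: reject without running the 3D stage
    return any(match_3d(cloud, t) for t in cloud_templates)

# Toy matchers for illustration: exact equality stands in for a real similarity test.
same = lambda a, b: a == b
print(recognize_face("ir_a", "pc_a", ["ir_a"], ["pc_a"], same, same))  # True
print(recognize_face("ir_a", "pc_x", ["ir_a"], ["pc_a"], same, same))  # False (3D stage fails)
```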
In one possible implementation manner, the infrared image acquisition module is an infrared camera, and the infrared camera comprises a filter and an infrared light detection array.
In a second aspect, a face recognition method is provided, including:
acquiring a two-dimensional infrared image and a three-dimensional point cloud image of an identification target;
and carrying out face recognition based on the two-dimensional infrared image and the three-dimensional point cloud image.
In one possible implementation, the method further includes: emitting infrared light to the recognition target; and receiving a reflected infrared light signal after the infrared light is reflected by the identification target, and converting the reflected infrared light signal to obtain the two-dimensional infrared image.
In one possible implementation, the method further includes: carrying out face detection on the visible light image of the recognition target; when a face image is detected on the visible light image, infrared light is emitted to the recognition target.
In one possible implementation, the method further includes: and acquiring the visible light image.
In a possible implementation manner, the recognition target is a face of a user, and the visible light image or the two-dimensional infrared image or the three-dimensional point cloud image is used to instruct the user to adjust the position of the face.
In one possible implementation, the method further includes: projecting structured light onto the recognition target;
the acquiring of the two-dimensional infrared image and the three-dimensional point cloud image of the recognition target comprises the following steps:
and receiving a reflected structure light signal of the structure light reflected by the identification target, and converting the reflected structure light signal to obtain the three-dimensional point cloud image.
In one possible implementation, the structured light is dot matrix light or random speckle.
In one possible implementation, the method further includes: transmitting continuous near-infrared pulses to the identification target;
and receiving the near-infrared pulse reflected by the identification target, and processing to obtain the three-dimensional point cloud image.
In a possible implementation manner, the method for face recognition is applied to a device for face recognition, and the method further includes: and detecting the distance from the recognition target to the face recognition device.
In one possible implementation, the method further includes:
and receiving the distance from the recognition target to the face recognition device, and judging whether the distance information from the recognition target to the face recognition device is output or not.
In a possible implementation manner, when the distance from the recognition target to the face recognition device is in a first distance range interval, the determining whether to output the distance information from the recognition target to the face recognition device includes:
not outputting the distance information from the recognition target to the face recognition device;
when the distance from the recognition target to the face recognition device is outside a first distance range interval, the determining whether to output the distance information from the recognition target to the face recognition device includes:
and outputting the distance information from the recognition target to the face recognition device.
In one possible implementation manner, when the distance from the recognition target to the face recognition device is in a first distance range interval, the structured light is projected to the recognition target.
In one possible implementation manner, the performing face recognition based on the two-dimensional infrared image and the three-dimensional point cloud image includes:
matching the two-dimensional infrared image with a plurality of infrared image templates;
and when the matching is successful, determining that the two-dimensional face recognition is successful, or when the matching fails, determining that the face recognition fails.
In a possible implementation manner, the performing face recognition based on the two-dimensional infrared image and the three-dimensional point cloud image further includes:
when the two-dimensional face recognition is successful, matching the three-dimensional point cloud image with a plurality of three-dimensional point cloud image templates;
and when the matching is successful, determining that the face recognition is successful, or when the matching fails, determining that the face recognition fails.
In one possible implementation, the method further includes:
and displaying the two-dimensional infrared image and/or the three-dimensional point cloud image and/or the visible light image.
In a third aspect, an electronic device is provided, which includes the face recognition apparatus as in the first aspect or any possible implementation manner of the first aspect.
In one possible implementation, the electronic device further includes: and the display screen is used for displaying the two-dimensional infrared image and/or the three-dimensional point cloud image and/or the visible light image.
In one possible implementation, the electronic device further includes: and the wireless network access module is used for transmitting the data of the face recognition to a wireless local area network.
In one possible implementation, the electronic device further includes: and the motor control module is used for controlling the mechanical device according to the result of the face recognition.
In a fourth aspect, a chip is provided, where the chip includes an input/output interface, at least one processor, at least one memory, and a bus, where the at least one memory is used to store instructions, and the at least one processor is used to call the instructions in the at least one memory to perform the method of the second aspect or any possible implementation manner of the second aspect.
In a fifth aspect, a computer-readable medium is provided for storing a computer program comprising instructions for performing the second aspect or any possible implementation of the second aspect.
In a sixth aspect, a computer program product is provided, comprising instructions which, when executed by a computer, cause the computer to perform the method of face recognition in the second aspect or any of its possible implementations.
In particular, the computer program product may be run on the electronic device of the above third aspect.
Drawings
Fig. 1 is a schematic diagram of a face recognition apparatus according to an embodiment of the present application.
Fig. 2 is a schematic diagram of another face recognition apparatus according to an embodiment of the present application.
Fig. 3 is a schematic diagram of another face recognition apparatus according to an embodiment of the present application.
Fig. 4 is a schematic diagram of another face recognition apparatus according to an embodiment of the present application.
Fig. 5 is a schematic flow chart of a face recognition process according to an embodiment of the present application.
Fig. 6 is a schematic flow chart of another face recognition process according to an embodiment of the present application.
Fig. 7 is a schematic flow chart of another face recognition process according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of an electronic device according to an embodiment of the application.
FIG. 9 is a schematic block diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiments of the present application can be applied to optical face recognition devices, including but not limited to products based on optical face imaging. The optical face recognition device can be applied to various electronic devices with an image acquisition device (such as a camera); the electronic devices may be mobile phones, tablet computers, intelligent wearable devices, intelligent door locks, and the like, which is not limited in the embodiments of the present application.
It should be understood that the specific examples are provided herein only to assist those skilled in the art in better understanding the embodiments of the present application and are not intended to limit the scope of the embodiments of the present application.
It should also be understood that the formula in the embodiment of the present application is only an example, and is not intended to limit the scope of the embodiment of the present application, and the formula may be modified, and the modifications should also fall within the scope of the protection of the present application.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic of the processes, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It should also be understood that the various embodiments described in this specification can be implemented individually or in combination, and the examples in this application are not limited thereto.
Unless otherwise defined, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
For easy understanding, the process of unlocking the electronic device based on face recognition of a 2D image will be briefly described with reference to fig. 1.
As shown in fig. 1, the face recognition device 10 includes an infrared light source 110, an infrared camera 120, and a processor 130. The infrared light source 110 is configured to emit an infrared light signal, and may be an infrared Light Emitting Diode (LED) or another infrared light source. The infrared camera 120 includes an infrared image sensor for receiving an infrared light signal and converting it into a corresponding electrical signal, thereby generating an infrared image. The processor 130 may be a Microprocessor Unit (MPU), and may control the infrared light source 110 to emit light, control the infrared camera 120 to perform 2D infrared image acquisition, and perform 2D face recognition.
Specifically, in the 2D face recognition process, when the recognition target is located in front of the face recognition apparatus 10, for example, as shown in fig. 1, when the recognition target is a face 103, the infrared light source 110 emits an infrared light signal 101, the infrared light signal 101 is reflected by the face to form a reflected infrared light signal 102 carrying face shape information, and the reflected infrared light signal 102 is received by the infrared camera 120 to form an infrared face image corresponding to the face 103.
Optionally, the processor 130 includes a storage unit storing an infrared image template library of the user's face, where the library includes a plurality of infrared image templates of the user's face at different face angles. These templates are template data vectors obtained by processing infrared images of the face captured at a plurality of angles by the infrared camera 120. The processor 130 matches a data vector obtained by processing the currently acquired infrared face image corresponding to the face 103 against the template data vectors in the infrared image template library; if the matching succeeds, 2D face recognition succeeds, and if the matching fails, 2D face recognition fails. For convenience of description, hereinafter, the data vector processed from an image template is also simply referred to as an image template, and the data vector obtained by processing an image during matching is also simply referred to as the image.
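The matching described above compares a data vector against each enrolled template data vector. As an illustrative sketch only (the patent does not disclose the similarity measure; cosine similarity and the 0.9 threshold below are assumptions), the vector matching step might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_template(face_vec, template_vecs, threshold=0.9):
    """Return True if the face vector matches any enrolled template.
    The similarity measure and threshold are assumed for illustration."""
    return any(cosine_similarity(face_vec, t) >= threshold for t in template_vecs)

print(match_template([1.0, 0.0], [[0.99, 0.05]]))  # True: nearly identical direction
print(match_template([0.0, 1.0], [[1.0, 0.0]]))    # False: orthogonal vectors
```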
In fig. 1, the face recognition device 10 acquires a 2D infrared image of a face and determines whether the 2D image conforms to a characteristic face in the characteristic face template library, so as to unlock an electronic device and the Applications (APPs) on it. In this unlocking process, the face recognition device 10 performs face recognition only according to the two-dimensional features of the 2D image, and cannot tell whether the acquired 2D infrared image is derived from a live human face or from a non-live object such as a photo or video. In other words, the face recognition device 10 has no anti-counterfeiting capability: the electronic device and its applications can be unlocked with a stolen photo or video of the user's face, which greatly compromises the security of the face recognition device and the electronic device.
To solve the above problems, embodiments of the present application provide a face recognition method combining 2D recognition and 3D recognition. On the basis of 2D face recognition, a 3D feature image of the face is further collected for a 3D face recognition determination, so that spoofing with a two-dimensional picture or a 3D model of a fake face is prevented, and even the real faces of other users are subjected to the 3D face recognition check, thereby enhancing the security of the face recognition device and the electronic device.
Next, a face recognition apparatus provided in an embodiment of the present application will be described in detail with reference to fig. 2 to 4.
Fig. 2 is a device 200 for face recognition according to an embodiment of the present application, including:
an infrared light emitting module 230 connected to the processor 220 for emitting infrared light to the recognition target;
a structured light projection module 250, connected to the processor 220, for projecting structured light to the recognition target;
the infrared image acquisition module 210 is connected to the processor 220, and is configured to receive a reflected infrared light signal of the infrared light reflected by the identification target to obtain a two-dimensional infrared image of the identification target, and receive a reflected structured light signal of the structured light reflected by the identification target to obtain a depth image of the identification target;
and the processor 220 is configured to receive the two-dimensional infrared image and the depth image, and perform face recognition based on the two-dimensional infrared image and the depth image.
It should be understood that the recognition target includes, but is not limited to, a human face, a photograph, a video, a three-dimensional model, and any other object. For example, the recognition target may be a face of a user, a face of another person, a photo of the user, a curved surface model with a photo attached, and the like.
Optionally, in this embodiment of the application, the infrared image capturing module 210 may be any device for capturing a 2D infrared image, such as a camera. For example, the infrared image capturing module 210 may be an infrared camera including a filter 211 and an infrared image sensor 212, where the filter 211 is configured to transmit an optical signal at a target wavelength and filter out optical signals at other wavelengths, and the infrared image sensor 212 performs optical detection at the target wavelength and converts the detected optical signal into an electrical signal. Optionally, the infrared image sensor is a Charge-Coupled Device (CCD) image sensor or a Complementary Metal Oxide Semiconductor (CMOS) image sensor. Optionally, the infrared image sensor 212 includes a plurality of pixel units, each of which converts the received light signal into one pixel value of the 2D infrared image. Alternatively, a pixel unit may employ a photodiode, a Metal Oxide Semiconductor Field Effect Transistor (MOSFET), or the like. Optionally, the pixel unit has high light sensitivity and high quantum efficiency at the target wavelength, so as to detect the optical signal of the corresponding wavelength.
Specifically, in the embodiment of the present application, the target wavelength belongs to the infrared band, for example, near-infrared light with a wavelength of 940 nm. The filter 211 is configured to transmit the infrared light signal at 940 nm and block visible light and infrared light at other wavelengths, and the infrared image sensor 212 detects the 940 nm infrared light and forms a 2D infrared image corresponding to the recognition target.
Optionally, the processor 220 may be a processor of the face recognition apparatus 200, or may be a processor of an electronic device including the face recognition apparatus 200, which is not limited in this embodiment.
Specifically, in this embodiment of the application, the processor 220 may control the infrared image collection module 210 to collect a 2D infrared image and a depth image, and the infrared image collection module 210 sends the collected 2D infrared image and the collected depth image to the processor 220.
Specifically, in the embodiment of the present application, each pixel value in the 2D infrared image is represented as a gray scale value of the image, and the appearance shape of the identification target is represented by gray scale value information of the image. Optionally, the range of the gray value can be 0-255.
Specifically, in the embodiment of the present application, the depth image is also referred to as a range image. Each pixel value in the depth image of the recognition target represents the distance between the corresponding point on the target surface and a common reference point or reference plane. For example, in one possible embodiment, the infrared image capturing module 210 acquires the depth image of the recognition target, and the pixel values represent the distances between points on the target surface and the image capturing module. When the depth image is stored as a gray image, changes in pixel value appear as changes in gray level, so the gray variation of the depth image corresponds to the depth variation of the recognition target and directly reflects the geometry and depth information of the target's visible surface.
Alternatively, the depth image may be represented as a matrix of pixel values, the pixel values (also referred to as gray values) of the depth image being 0 to 255, different pixel values corresponding to different depth information.
It should be understood that the depth image can be converted into point cloud data of the recognition target through a coordinate transformation, and regular, well-organized point cloud data can conversely be converted back into depth image data.
Specifically, in the embodiment of the present application, the processor 220 processes the depth image into corresponding 3D point cloud data. A 3D point cloud image, also referred to as 3D point cloud data, contains the depth information of the recognition target and can represent its surface shape and three-dimensional structure. 3D point cloud data is a large set of points expressing the spatial distribution and surface characteristics of a target under a common spatial reference system; once the spatial coordinates of each sampling point on the object surface are obtained, the resulting point set is called a point cloud.
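The coordinate transformation from a depth image to camera-frame 3D points is commonly done with a pinhole camera model. As a hedged sketch only (the patent gives no camera parameters; fx, fy, cx, cy below are assumed intrinsics), the back-projection might look like:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (a list of rows of depth values in metres)
    into 3D points in the camera frame using an assumed pinhole model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # no valid depth measurement at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A 2x2 depth map with the principal point at the image centre;
# the zero-depth pixel is skipped, leaving 3 valid points.
cloud = depth_to_point_cloud([[0.5, 0.5], [0.0, 1.0]], fx=100, fy=100, cx=0.5, cy=0.5)
print(len(cloud))  # 3
```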
Through the solution of the embodiment of the present application, the 2D features of the face can be determined from the 2D infrared image, and the 3D features of the face can be determined from the depth image or the 3D point cloud image. Performing face recognition based on both the 2D infrared image and the depth image or 3D point cloud image improves the accuracy of recognition and the security of the device.
It should be understood that, in the embodiment of the present application, the infrared image capturing module 210 may be the infrared camera 110 in fig. 1. The processor 220 may be the processor 130 in fig. 1. The process of acquiring the infrared image by the infrared image acquisition module 210 may be the same as the process of acquiring the infrared image of the face of the user by the infrared camera 110 in fig. 1. The process of the processor 220 performing face recognition on the 2D infrared image may be the same as the process of performing 2D face recognition by the processor 130 in fig. 1.
Optionally, in a possible implementation, a deep learning network is used to perform face recognition matching on the 2D infrared image. The deep learning network may be a Convolutional Neural Network (CNN) or another deep learning network, which is not limited in the embodiment of the present application. The processor 220 classifies the 2D infrared image against a plurality of infrared image templates using the deep learning network, and the classification result determines whether the 2D infrared image matches the templates; when the matching is successful, it is determined that 2D face recognition is successful, and when the matching fails, it is determined that face recognition fails.
For example, face classification recognition is performed on an acquired face image through a convolutional neural network. Specifically, a face recognition convolutional neural network that judges whether a face image is the face of the user is first trained on a plurality of samples to obtain the relevant parameters of the network, where the face recognition convolutional neural network classifies according to the plurality of infrared image templates in the template library. During face recognition, the collected data of the 2D infrared image is input into the face recognition convolutional neural network; the features of the 2D infrared image data are extracted through the calculation processing of a convolutional layer, an activation layer, a pooling layer, a fully-connected layer, and the like, and then classified and judged, so as to determine whether the 2D infrared image matches the plurality of infrared image templates in the template library, thereby obtaining the 2D recognition result.
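As an illustrative sketch only: the final matching step of such a network is commonly implemented by comparing a feature embedding (e.g. the output of the fully-connected layer) against enrolled template embeddings with a similarity threshold. The embeddings and the threshold below are stand-in values, not data from this application:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_2d_face(embedding, template_embeddings, threshold=0.8):
    """Return True if the probe embedding matches any enrolled template.
    In a real system the embeddings would come from the trained CNN;
    here they are illustrative vectors."""
    return any(cosine_similarity(embedding, t) >= threshold
               for t in template_embeddings)

templates = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
probe_user = np.array([0.95, 0.05, 0.0])    # close to the first template
probe_stranger = np.array([0.0, 0.0, 1.0])  # unlike any enrolled template
```

A successful match here corresponds to "2D face recognition is successful" in the flow above; a failed match ends recognition immediately.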
Further, when it is determined that the 2D face recognition is successful, the processor 220 performs 3D face recognition on the depth image or the 3D point cloud data. The following description takes the processor 220 to perform 3D face recognition on the 3D point cloud data as an example.
Optionally, the processor 220 matches the 3D point cloud image with a plurality of three-dimensional point cloud image templates in a 3D point cloud image template library; and when the matching is successful, determining that the face recognition is successful, or when the matching is failed, determining that the face recognition is failed.
Specifically, the plurality of 3D point cloud image templates are user face 3D point cloud images of a plurality of different face angles. The plurality of user face 3D point cloud images are obtained by image acquisition of user faces at different angles through an infrared image acquisition module.
Optionally, in this embodiment of the application, a deep learning network may also be used to perform 3D face recognition matching on the 3D point cloud data. For example, in one possible implementation, the processor 220 performs face classification recognition on the acquired 3D point cloud data of the 3D point cloud image through a point cloud processing network (e.g., PointNet). Specifically, the network includes a feature extraction layer, a feature mapping layer, a feature map compression layer, a fully-connected layer, and the like, and contains a plurality of trainable parameters. The network is first trained on a plurality of samples to obtain optimized parameters, so that its face recognition result is more accurate, where the samples include 3D point cloud data of the user's face as well as 3D point cloud data of non-user faces, for example, 3D point cloud data of other faces, or point cloud data of objects such as three-dimensional models and two-dimensional photos. Then, in the 3D face recognition process, the collected 3D point cloud data of the recognition target is input into the point cloud processing network; the layers of the network extract, process, and classify the features, and judge whether the 3D point cloud data matches the 3D point cloud data of the user's face in the 3D point cloud data template library.
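For illustration only: the defining property of a PointNet-style network is that a shared per-point function followed by a symmetric pooling operation yields a global feature that does not depend on the ordering of the points in the cloud. A minimal sketch with untrained stand-in weights (the layer sizes are arbitrary, not from this application):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 16))   # shared per-point weights (stand-ins)
W2 = rng.standard_normal((16, 32))

def global_feature(points):
    """PointNet-style descriptor: the same small MLP is applied to every
    3D point independently, then max pooling over all points produces a
    global feature invariant to point ordering."""
    h = np.maximum(points @ W1, 0.0)   # per-point layer + ReLU
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)               # symmetric (order-independent) pooling

cloud = rng.standard_normal((100, 3))            # 100 sampled surface points
shuffled = cloud[rng.permutation(len(cloud))]    # same points, new order
```

Order invariance matters here because the sampling points of a face surface carry no natural ordering.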
Optionally, the infrared image template library and the 3D point cloud image template library may be stored in a storage unit of the processor 220, or may also be stored in a memory of an electronic device in which the face recognition apparatus 200 is located, which is not limited in this embodiment of the present application.
Therefore, based on the control processing method of the processor 220, a successful 2D face recognition does not by itself mean that face recognition has succeeded; further 3D face recognition is required, and only when both the 2D face recognition and the 3D face recognition succeed is the face recognition deemed successful, after which the next operation can be performed. When the 2D face recognition fails, the face recognition fails directly, which avoids wasting the computing resources of the processor on 3D recognition. Therefore, by performing face recognition with the 2D infrared image and the 3D point cloud image in combination, the processor 220 can quickly reject a non-user face, improving recognition efficiency and enhancing security.
As shown in fig. 2, the apparatus 200 for face recognition may further include an infrared light emitting module 230, configured to emit infrared light to a surface of a recognition target, where after the infrared light is reflected by the recognition target, the reflected infrared light is received by the infrared image collecting module 210, and a 2D infrared image of the recognition target is formed. The infrared light emitting module 230 is added to the face recognition device 200, so that the light intensity of infrared light can be increased, the light intensity of an infrared light signal reflected by a recognition target is increased, and the quality of a 2D infrared image of the recognition target is improved.
Alternatively, the infrared light emitting module 230 may be any light emitting device that emits infrared light signals, including but not limited to an infrared Light Emitting Diode (LED), a Vertical Cavity Surface Emitting Laser (VCSEL), a Fabry-Perot (FP) Laser Diode (LD), a Distributed Feedback (DFB) laser, and an Electro-absorption Modulated Laser (EML), which is not limited in the embodiments of the present application.
Optionally, in a possible embodiment, the processor 220 performs face detection (face detection) on the 2D infrared image or the 3D point cloud image, that is, determines whether a face exists on the 2D infrared image or the 3D point cloud image. When the presence of the human face is detected, the processor 220 controls the infrared light emitting module 230 to emit infrared light to the recognition target. Specifically, when the recognition target is a face of the user, the processor 220 controls the infrared light emitting module 230 to emit an infrared light signal to the face of the user when recognizing the 2D infrared image as a complete front face image.
Optionally, as shown in fig. 2, the apparatus 200 for face recognition may further include a structured light projection module 250 for projecting structured light to a recognition target. The infrared image acquisition module 210 is specifically configured to receive a reflected structured light signal obtained by reflecting the structured light by the recognition target, convert the reflected structured light signal to obtain a depth image, send the depth image to the processor 220, and convert the depth image into a corresponding 3D point cloud image.
Specifically, the structured light is light having a specific pattern, such as a dot, line, or surface pattern, and may specifically be an infrared light signal carrying such a pattern. The principle of acquiring 3D point cloud data based on structured light is as follows: the structured light is projected onto a target object, and after it is reflected by the surface of the target object, a corresponding image carrying the structured light pattern is captured. Because the pattern is deformed by the surface shape of the target object, the depth information of each sampling point on the target object can be obtained by calculating, using the principle of triangulation, the position and degree of deformation of the pattern in the captured image, thereby forming a depth image representing the three-dimensional spatial structure of the target object.
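For a rectified projector-camera pair, the triangulation described above reduces to the classic disparity relation Z = f·b/d. A minimal sketch for illustration only; the focal length, baseline, and disparity values are hypothetical:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Structured-light triangulation for a rectified projector-camera pair:
    the observed shift of the pattern (disparity, in pixels) for a surface
    point gives its depth as Z = f * b / d."""
    return f_px * baseline_m / disparity_px

# nearer surfaces shift the pattern more, so depth falls as disparity grows
z_near = depth_from_disparity(f_px=600.0, baseline_m=0.05, disparity_px=60.0)
z_far = depth_from_disparity(f_px=600.0, baseline_m=0.05, disparity_px=30.0)
```

Evaluating this relation at every detected pattern point yields the depth image from which the 3D point cloud is built.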
Optionally, in the embodiment of the present application, the structured light belongs to an optical signal in an infrared band, for example, the structured light is an optical signal with a pattern mode with a wavelength of 940 nm.
Optionally, the structured light includes, but is not limited to, optical signals with structured patterns such as speckle images and dot-matrix light. The structured light projection module 250 can be any device structure that projects structured light, including but not limited to light emitting devices such as a dot-matrix light projector using a VCSEL light source or a speckle structured light projector.
Optionally, in another possible embodiment, the apparatus 200 for face recognition may further include a Time of Flight (TOF) optical module configured to transmit continuous near-infrared pulses to the recognition target. The infrared image acquisition module 210 receives the light pulses reflected by the target object; by comparing the phase difference between the transmitted light pulses and the reflected light pulses, the round-trip delay of the light can be calculated to obtain the distance between the target object and the transmitter, and 3D point cloud data of the recognition target is finally obtained.
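For illustration only: in continuous-wave TOF ranging, the measured phase shift Δφ of the modulated signal maps to distance as d = c·Δφ / (4π·f), where f is the modulation frequency. The 100 MHz value below is an assumed example, not a parameter of this application:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Continuous-wave TOF ranging: the phase shift of the reflected
    modulated near-infrared signal relative to the emitted signal gives
    the one-way distance d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# at 100 MHz modulation, a pi/2 phase shift corresponds to about 0.375 m
d = tof_distance(math.pi / 2, 100e6)
```

Repeating this per pixel of the infrared image sensor produces the per-point distances from which the 3D point cloud data is assembled.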
It should be understood that the apparatus 200 for face recognition may also include other module apparatuses capable of acquiring 3D point cloud data of a recognition target, which is not limited in this embodiment of the application.
Preferably, in another possible implementation, the processor 220 may further acquire a visible light image of the recognition target for face detection. When detecting that a human face exists on the visible light image, the processor 220 controls the infrared light emitting module 230 to emit infrared light to the recognition target.
As shown in fig. 3, the face recognition device 200 may further include a visible light image acquisition module 240 connected to the processor 220 for acquiring the visible light image. The visible light collection module 240 may be a camera, or other devices that collect visible light to form a color image, and the visible light collection module 240 includes a visible light image sensor, and is configured to receive a visible light signal in an environment and convert the received visible light signal into a corresponding electrical signal, so as to generate a visible light color image. Optionally, the processor 220 controls the visible light collection module to start collecting the visible light image, and receives and processes the visible light image.
Further, the visible light color image can be displayed to the user; the visible light image acquisition device then acts like a mirror, presenting the acquired visible light color image to the user in real time. Specifically, the visible light image collecting module 240 sends the generated visible light color image to the processor 220, and the processor sends it to the display screen for display. For example, when the recognition target is the face of the user, the user stands in front of the face recognition device 200 and adjusts the face position by observing the visible light color image on the display screen until the whole face can be observed; that is, the user can move according to the image indication on the display screen, and after the position adjustment, the infrared image acquisition module can more conveniently acquire a 2D infrared image and a depth image of the user's face that are suitable for face recognition.
Optionally, the 3D point cloud image corresponding to the 2D infrared image and/or the depth image may also be used to be displayed to a user, so that the user can adjust the face position according to the 2D infrared image and/or the 3D point cloud image presented in real time.
Optionally, as shown in fig. 4, the apparatus 200 for face recognition may further include a distance detection module 260, connected to the processor 220, for detecting a distance from a recognition target to the face recognition apparatus 200. Optionally, the distance detection module 260 further sends the distance from the recognition target to the face recognition apparatus 200 to the processor 220, and the processor 220 may control the distance detection module 260 to start detecting the distance.
Alternatively, when the processor 220 succeeds in 2D face recognition based on the 2D infrared image, the distance detection module 260 is controlled to start detecting the distance.
Alternatively, the distance detection module 260 may detect the distance by using signals such as electromagnetic waves or ultrasonic waves. The principle is as follows: the distance detection module 260 transmits ultrasonic waves or electromagnetic waves to the recognition target, and measures the distance from the recognition target to the face recognition device 200 from the time difference or phase difference between transmission and reception of the echo reflected by the recognition target. The ultrasonic waves emitted by the distance detection module 260 are sound waves with a frequency higher than 20000 Hz; ultrasonic waves have good directivity and strong penetration capability, and concentrated sound energy is easy to obtain. The electromagnetic wave emitted by the distance detection module 260 is an optical pulse, or a light wave or microwave modulated by a high-frequency current, and offers rapid response and high measurement accuracy.
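For illustration only: pulse-echo ranging with either ultrasound or light reduces to halving the round-trip travel time. The speed of sound below is an assumed room-temperature value:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def echo_distance(round_trip_s, wave_speed=SPEED_OF_SOUND):
    """Pulse-echo ranging: the wave travels to the target and back,
    so the one-way distance is d = v * t / 2."""
    return wave_speed * round_trip_s / 2.0

# a 2 ms ultrasonic round trip corresponds to about 0.343 m
d = echo_distance(2e-3)
```

The same relation holds for an electromagnetic-wave detector with `wave_speed` set to the speed of light, where the far shorter travel times explain the rapid response noted above.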
Therefore, in the embodiment of the present application, the distance detection module 260 may be an ultrasonic detector, an electromagnetic wave detector, or the like.
Optionally, as shown in fig. 4, the apparatus 200 for face recognition may further include an output module 270 for prompting the user with distance information. Optionally, the output module 270, including but not limited to a display module, a sound module, a light module, a vibration module, etc., may present the distance information to the user directly or indirectly.
Optionally, in a possible implementation, the processor 220 is configured to: receiving the distance from the recognition target to the face recognition apparatus 200, and sending the distance information to the output module 270, where the output module 270 presents the distance information to the user in real time. For example, the output module 270 is a display screen, displays the distance information as a numerical value on the screen, and gives an indication of whether or not it is at an appropriate distance. For another example, the output module 270 is a sound module, which emits a first warning sound when the distance is within a suitable range, such as a certain first distance range, and emits a second warning sound when the distance is outside the first distance range. At this time, the user moves the face to be in an appropriate position according to an instruction on the screen or different prompt tones.
Preferably, in another possible implementation, the processor 220 is configured to: receive the distance from the recognition target to the face recognition apparatus 200, and control whether the output module 270 outputs the distance information. Specifically, the processor 220 determines whether the distance is within a suitable range, for example, a certain first distance range; when the distance is within the first distance range, it controls the output module not to act, for example, not to emit a warning tone, and when the distance is outside the first distance range, it controls the output module 270 to act, for example, to emit a warning tone.
Further, the processor 220 controls the output module 270 to perform different actions according to different distance information. For example: when the distance data is in the first distance range interval, the processor 220 controls the output module 270 not to act; when the distance data is smaller than the first distance range interval, the processor 220 controls the output module 270 to send out a first prompt sound, and when the distance data is larger than the first distance range interval, the processor 220 controls the output module 270 to send out a second prompt sound. At this time, the user moves the face according to different prompt tones until no prompt tone appears, which indicates that the face is at a proper position at this time.
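The three-way control logic above can be sketched as follows; this is an illustration only, and the 0.3-0.6 m range is a hypothetical first distance range, not a value from this application:

```python
def distance_prompt(distance_m, lower_m=0.3, upper_m=0.6):
    """Map a measured distance to an output-module action: silence when
    the target is inside the suitable (first) distance range, a first
    prompt tone when too close, a second prompt tone when too far."""
    if distance_m < lower_m:
        return "first prompt tone (move back)"
    if distance_m > upper_m:
        return "second prompt tone (move closer)"
    return "no action"
```

The user moves until no prompt tone is produced, at which point the face is within the suitable range and 3D acquisition can begin.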
Optionally, when the processor 220 receives and determines that the distance from the recognition target to the face recognition device is within a suitable distance range, for example, a first distance range interval, the processor 220 controls the structured light projection module 250 to project structured light to the recognition target, or controls the TOF light module to emit continuous near-infrared pulses to the recognition target, so that the infrared image acquisition module 210 acquires a 3D point cloud data image.
It should be understood that the output module 270 may also be used to output other information besides distance information. For example, when the user performs template entry or face recognition, a warning tone indicating success or failure of entry or success or failure of recognition is issued.
In the embodiment of the present application, the distance detection module 260 performs distance detection, and the output module 270 outputs distance information, so that the face of the user can more conveniently move to a suitable area for face recognition. And the structured light is sent to the face of the user after the face is moved to a proper distance, so that the eyes of the user can be prevented from being injured by the structured light, and the user experience is improved.
The embodiments of the face recognition apparatus of the present application are described in detail with reference to fig. 2 to 4, wherein the processor 220 serves as a control center and a processing center of the face recognition apparatus 200, is connected to and controls other modules in the face recognition apparatus 200 to execute related actions, and processes image data acquired by the other modules to complete a face recognition process. The following describes in detail embodiments of the face recognition method according to the present application with reference to fig. 5 to 7. It is to be understood that the method embodiments correspond to the apparatus embodiments and similar descriptions may be made with reference to the apparatus embodiments.
Fig. 5 is a schematic flow chart of a face recognition method 20 according to an embodiment of the present application, including:
s240: acquiring a 2D infrared image and a 3D point cloud image of an identification target;
s250: and carrying out face recognition based on the 2D infrared image and the 3D point cloud image.
Optionally, the face recognition method 20 in this embodiment of the application may be applied to the face recognition apparatus 200, where the processor 220 controls the infrared image acquisition module 210 to acquire a 2D infrared image and a depth image of a recognition target, and after the infrared image acquisition module 210 sends the 2D infrared image and the depth image to the processor 220, the processor 220 processes the depth image to obtain a 3D point cloud image, and performs face recognition based on the 2D infrared image and the 3D point cloud image.
Optionally, as shown in fig. 5, the face recognition method 20 further includes:
s230: and emitting infrared light to the recognition target for enhancing the infrared light signal irradiated on the surface of the recognition target, so that the light intensity of the infrared light signal reflected by the face is increased, and the quality of the acquired 2D infrared image is improved.
Optionally, the processor 220 controls the infrared light emitting module 210 to emit infrared light to the identification target.
Optionally, as shown in fig. 5, the face recognition method 20 further includes:
s210: acquiring a visible light image of an identification target;
s220: and carrying out face detection on the visible light image.
Optionally, the processor 220 controls the visible light image acquisition module 240 to acquire a visible light image of the recognition target, and after the visible light image is sent to the processor 220, the processor 220 performs face detection on the visible light image.
Alternatively, when a human face is detected on the visible light image, the process proceeds to step S230, i.e., the infrared light emitting module 230 is controlled to emit infrared light to the recognition target. When the human face is not detected on the visible light image, the process proceeds to step S210, that is, the visible light image acquisition module 240 is controlled again to acquire the visible light image of the recognition target, and the human face detection is performed again.
Alternatively, as shown in fig. 6, the step S240 includes: s241: acquiring a 2D infrared image of an identification target; and S242: and acquiring a 3D point cloud image of the recognition target.
The step S250 includes: S251: performing 2D face recognition based on the 2D infrared image; and, when the 2D face recognition based on the 2D infrared image is successful, S252: performing 3D face recognition based on the 3D point cloud image.
Specifically, in the embodiment of the present application, when the 2D face recognition based on the 2D infrared image is successful in step S251, step S242 is executed, otherwise, step S262 is executed, that is, the face recognition is failed, and the method flow ends.
In step S252, if the 3D face recognition based on the 3D point cloud image is successful, step S261 is performed, that is, the face recognition is successful, otherwise, step S262 is performed, and the face recognition is failed.
Therefore, based on the above method flow, a successful 2D face recognition does not by itself mean that face recognition has succeeded; further 3D face recognition is required, and only when both the 2D face recognition and the 3D face recognition succeed is the face recognition deemed successful, after which the next operation can be performed. When the 2D face recognition fails, the face recognition fails directly, which avoids wasting the computing resources of the processor on 3D recognition. Therefore, the method combining 2D face recognition and 3D face recognition can quickly reject a non-user face, and performing 3D detection only after 2D recognition succeeds provides 3D anti-counterfeiting for the face, enhancing the security of the recognition process.
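The two-stage flow of steps S251/S252/S261/S262 can be sketched as a short pipeline; this is an illustration only, with `match_2d` and `match_3d` standing in for the template-matching routines described earlier:

```python
def recognize_face(ir_2d, point_cloud_3d, match_2d, match_3d):
    """Two-stage recognition: 3D recognition runs only after 2D
    recognition succeeds, and overall success requires both stages."""
    if not match_2d(ir_2d):
        return "failure"   # S262: reject early, no 3D computation spent
    if not match_3d(point_cloud_3d):
        return "failure"   # S262: e.g. a 2D photo that passed stage one
    return "success"       # S261: both 2D and 3D recognition succeeded
```

The early return on a failed 2D match is what avoids the wasted 3D computation, and the second stage is what defeats flat forgeries such as printed photos.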
Alternatively, as shown in fig. 7, in step S251: 2D face recognition is performed based on the 2D infrared image, and after the 2D face recognition is successful, the face recognition method 20 further includes:
s270: whether the distance from the recognition target to the recognition device is within the first distance range interval is detected.
Specifically, the processor 220 detects whether the distance from the recognition target to the recognition device is within the first distance range section, and when the distance from the recognition target to the recognition device is within the first distance range section, proceeds to step S242: and acquiring a 3D point cloud image of the recognition target. Optionally, the processor 220 controls the structured light projection module 250 to project structured light towards the identification target, or controls the TOF light module to emit successive near-infrared pulses towards the identification target.
When the distance from the recognition target to the recognition device is not within the first distance range section, the flow proceeds to step S271: and outputting the distance information. Optionally, the processor 220 controls the output module 270 to output different prompt messages for different distances. At the moment, the user moves the face according to different prompt messages, so that the face is in a proper position.
As shown in fig. 8, an embodiment of the present application further provides an electronic device 2, where the electronic device 2 may include the face recognition apparatus 200 according to the embodiment of the application.
The electronic device 2 includes, but is not limited to, a smart door lock, a mobile phone, a computer, an access control system, and other devices that require face recognition.
Optionally, as shown in fig. 9, the electronic device may further include a display screen 300, where the display screen is configured to display the two-dimensional infrared image and/or the three-dimensional point cloud image and/or the visible light image, so as to facilitate a user to observe and move a face position to be in a suitable face recognition position.
Optionally, the display screen 300 may also display distance information of the output module 270, and other prompt information related to face recognition such as success or failure of recognition, template entry or failure, and the like.
Alternatively, in the case that the face recognition apparatus 200 does not include the processor 220 and the output module 270, as shown in fig. 9, the electronic device 2 may include a processor 400 and an output module 500.
Alternatively, the output module 500 may be the same as the output module 270 in fig. 4.
It should be understood that the output module 500 may also be used to output information other than actions related to face recognition. For example, the electronic device may be powered on or powered off, and the like, which is not limited in this embodiment of the application.
It should also be understood that the processor 400 is the processor of the electronic device 2, and the display screen 300 is the display screen of the electronic device 2; they are mainly used for controlling the various components in the electronic device 2 and displaying the main interface of the electronic device. In other words, the face recognition apparatus 200 is only one functional component in the electronic device 2; the actions it needs performed are only part of what the processor 400 controls, and the images it needs displayed are only part of the display content of the display screen 300.
Optionally, as shown in fig. 9, the electronic device 2 may further include a memory 600, a motor control module 700, and a wireless network access module 800.
Optionally, the infrared image template library and/or the 3D point cloud data template library may also be stored in the memory 600.
Optionally, when the electronic device is an access control system or an intelligent door lock, and when the face recognition device 200 successfully recognizes the face, the processor 400 may control the motor control module 700 to unlock the lock.
Optionally, the Wireless network access module 800 is configured to access a Wireless Local Area Network (WLAN) to implement network transmission of data in the processor 400 and the memory 600.
It should be understood that the processor of the embodiments of the present application may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable or electrically erasable programmable memory, or a register. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It is to be understood that the face recognition apparatus of the embodiments of the present application may further include a memory, which may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. The volatile Memory may be a Random Access Memory (RAM) which functions as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), Synchronous dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous dynamic random access memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
Embodiments of the present application also provide a computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a portable electronic device comprising a plurality of application programs, enable the portable electronic device to perform the method of the embodiments shown in fig. 5-7.
Embodiments of the present application also provide a computer program, which includes instructions that, when executed by a computer, enable the computer to perform the method of the embodiments shown in fig. 5 to 7.
The embodiment of the present application further provides a chip, where the chip includes an input/output interface, at least one processor, at least one memory, and a bus, where the at least one memory is used to store instructions, and the at least one processor is used to call the instructions in the at least one memory to execute the method of the embodiment shown in fig. 5 to 7.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive of within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. An apparatus for face recognition, comprising: the system comprises an infrared light emitting module, a structured light projection module, an infrared image acquisition module and a processor;
the infrared light emitting module is connected to the processor and used for emitting infrared light to an identification target;
the structured light projection module is connected to the processor and used for projecting structured light to the identification target;
the infrared image acquisition module is connected to the processor and is used for receiving a reflected infrared light signal of the infrared light reflected by the identification target to obtain a two-dimensional infrared image of the identification target and receiving a reflected structured light signal of the structured light reflected by the identification target to obtain a depth image of the identification target;
the processor is used for receiving the two-dimensional infrared image and the depth image and carrying out face recognition based on the two-dimensional infrared image and the depth image.
2. The apparatus of claim 1, further comprising: a visible light image acquisition module connected to the processor and used for acquiring a visible light image of the identification target, wherein the processor is further used for controlling the infrared light emitting module to emit the infrared light according to the visible light image output by the visible light image acquisition module.
3. The apparatus of claim 2, wherein the identification target is a face of a user, the apparatus further comprising: a display connected to the processor and used for displaying the visible light image, the two-dimensional infrared image, or a three-dimensional point cloud image corresponding to the depth image, so as to instruct the user to adjust the position of the face.
4. The apparatus of any one of claims 1-3, wherein the structured light is a dot matrix light or random speckle and the structured light projection module is a dot matrix light projector or a speckle structured light projector.
5. The apparatus of any one of claims 1-3, further comprising: a distance detection module connected to the processor and used for detecting the distance from the identification target to the apparatus.
6. The apparatus of claim 5, wherein the distance detection module is an ultrasonic detector or an electromagnetic wave detector.
7. The apparatus of claim 5, wherein the processor is further used for controlling the structured light projection module to project the structured light to the identification target according to the distance, output by the distance detection module, from the identification target to the apparatus.
8. The apparatus of claim 5, further comprising: an output module connected to the processor and used for outputting information on the distance from the identification target to the apparatus.
9. The apparatus of claim 8, wherein the output module comprises: a display module, a sound module, a light module or a vibration module.
10. The apparatus of any one of claims 1-3, wherein the infrared image acquisition module is an infrared camera comprising an optical filter and an infrared light detection array.
11. An electronic device, comprising:
an apparatus for face recognition according to any one of claims 1 to 10.
12. The electronic device of claim 11, further comprising: a wireless network access module connected to the apparatus for face recognition and used for transmitting face recognition data to a wireless local area network.
13. The electronic device of claim 11 or 12, further comprising: a motor control module connected to the apparatus for face recognition and used for controlling a mechanical device according to a face recognition result.
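The module topology recited in claims 1-9 (hardware modules each connected to a central processor, with structured light gated on measured distance as in claim 7) can be illustrated with a hypothetical sketch. Class names, the range threshold, and the control policy are assumptions for illustration only, not limitations of the claims.

```python
# Hypothetical sketch of the device topology in claims 5 and 7: a distance
# detection module feeds the processor, which enables the structured light
# projection module only when the identification target is within range.
# Names and the 1000 mm threshold are illustrative assumptions.

class DistanceDetector:
    """Claim 6: e.g. an ultrasonic detector or electromagnetic wave detector."""
    def __init__(self, distance_mm):
        self.distance_mm = distance_mm
    def measure(self):
        return self.distance_mm

class StructuredLightProjector:
    """Claim 4: e.g. a dot matrix light projector or speckle projector."""
    def __init__(self):
        self.projecting = False
    def project(self):
        self.projecting = True

class Processor:
    """Claim 7: project structured light only when the target is in range."""
    def __init__(self, detector, projector, max_range_mm=1000):
        self.detector = detector
        self.projector = projector
        self.max_range_mm = max_range_mm
    def update(self):
        if self.detector.measure() <= self.max_range_mm:
            self.projector.project()
        return self.projector.projecting

near = Processor(DistanceDetector(300), StructuredLightProjector())
far = Processor(DistanceDetector(2500), StructuredLightProjector())
print(near.update())  # True: target within range, projector enabled
print(far.update())   # False: target too far, projector stays off
```

Gating the projector on distance in this way reflects the stated purpose of the distance detection module: the structured light is only projected when a target is close enough for a usable depth image.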
CN201920867396.8U 2019-06-06 2019-06-06 Face recognition device and electronic equipment Active CN210166794U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201920867396.8U CN210166794U (en) 2019-06-06 2019-06-06 Face recognition device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201920867396.8U CN210166794U (en) 2019-06-06 2019-06-06 Face recognition device and electronic equipment

Publications (1)

Publication Number Publication Date
CN210166794U true CN210166794U (en) 2020-03-20

Family

ID=70171599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201920867396.8U Active CN210166794U (en) 2019-06-06 2019-06-06 Face recognition device and electronic equipment

Country Status (1)

Country Link
CN (1) CN210166794U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836658A (en) * 2021-02-08 2021-05-25 南京林业大学 Face recognition method based on transfer learning and sparse loss function
CN112836658B (en) * 2021-02-08 2022-03-25 南京林业大学 Face recognition method based on transfer learning and sparse loss function
CN114942072A (en) * 2022-06-02 2022-08-26 广州睿芯微电子有限公司 Multispectral imaging chip and object identification system

Similar Documents

Publication Publication Date Title
CN111095297B (en) Face recognition device and method and electronic equipment
CN110383288B (en) Face recognition method and device and electronic equipment
CN110462633B (en) Face recognition method and device and electronic equipment
US10956714B2 (en) Method and apparatus for detecting living body, electronic device, and storage medium
WO2020243969A1 (en) Facial recognition apparatus and method, and electronic device
KR102036978B1 (en) Liveness detection method and device, and identity authentication method and device
US11928195B2 (en) Apparatus and method for recognizing an object in electronic device
US8155394B2 (en) Wireless location and facial/speaker recognition system
US20210049391A1 (en) Systems and methods for facial liveness detection
US10846866B2 (en) Irradiation system, irradiation method, and program storage medium
CN112232155B (en) Non-contact fingerprint identification method and device, terminal and storage medium
WO2020258119A1 (en) Face recognition method and apparatus, and electronic device
KR20200006757A (en) Apparatus and method for confirming object in electronic device
WO2020258120A1 (en) Face recognition method and device, and electronic apparatus
CN210166794U (en) Face recognition device and electronic equipment
CN112232163B (en) Fingerprint acquisition method and device, fingerprint comparison method and device, and equipment
CN112016525A (en) Non-contact fingerprint acquisition method and device
CN112232159B (en) Fingerprint identification method, device, terminal and storage medium
KR20170070754A (en) Security system using uwb radar and method for providing security service thereof
WO2021218695A1 (en) Monocular camera-based liveness detection method, device, and readable storage medium
CN112232157A (en) Fingerprint area detection method, device, equipment and storage medium
CN113255401A (en) 3D face camera device
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
CN112232152B (en) Non-contact fingerprint identification method and device, terminal and storage medium
CN110874876B (en) Unlocking method and device

Legal Events

Date Code Title Description
GR01 Patent grant