WO2020243969A1 - Face recognition device, method and electronic equipment - Google Patents

Face recognition device, method and electronic equipment

Info

Publication number
WO2020243969A1
WO2020243969A1 (PCT/CN2019/090425)
Authority
WO
WIPO (PCT)
Prior art keywords
image
face recognition
recognition
face
dimensional
Prior art date
Application number
PCT/CN2019/090425
Other languages
English (en)
French (fr)
Inventor
曾伟平
Original Assignee
Shenzhen Goodix Technology Co., Ltd. (深圳市汇顶科技股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Goodix Technology Co., Ltd. (深圳市汇顶科技股份有限公司)
Priority to CN201980000872.9A (published as CN110383289A)
Priority to PCT/CN2019/090425 (published as WO2020243969A1)
Publication of WO2020243969A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/50 Maintenance of biometric data or enrolment thereof
    • G06V 40/53 Measures to keep reference information secret, e.g. cancellable biometrics

Definitions

  • This application relates to the field of biometric recognition technology, and more specifically, to a face recognition device, method, and electronic equipment.
  • Face recognition is a kind of biometric recognition technology based on human facial feature information. A camera or video camera is used to collect images or video streams containing human faces; faces are automatically detected and tracked in the images, and the detected faces then undergo image preprocessing, image feature extraction, and matching and recognition. The related technology is usually called face recognition or facial recognition. With the rapid development of computer and network technology, face recognition has been widely used in many industries and fields such as smart access control, smart door locks, mobile terminals, public safety, entertainment, and the military.
  • the current face recognition devices on the market need to use a visible light camera and a near-infrared camera, where the infrared camera is used to capture two-dimensional near-infrared images for the input of image templates and the recognition of the user's face.
  • the visible light camera is only used as a "mirror" to display the visible light image of the user's face, indicating that the user's face is in a proper position so that the infrared camera is convenient for photographing for face recognition, thus increasing the cost of the face recognition device.
  • the visible light image captured by the visible light camera easily leaks the privacy of the user, and there is a security problem.
  • when the ambient light is too strong or too dark, the quality of the image captured by the visible light camera is poor, which also affects the user experience.
  • the embodiments of the present application provide a face recognition apparatus, method, and electronic equipment, which can reduce costs, improve the security of face recognition and user experience.
  • a face recognition device including:
  • the image acquisition module is used to acquire the image of the recognition target
  • the processor is configured to process the image of the recognition target to form a virtual image, and perform face recognition according to the image of the recognition target, wherein the virtual image is used for displaying on a display screen.
  • the processor virtualizes the image acquired by the image acquisition module, displays it to the user on the display screen, and performs face recognition on the image acquired by the same image acquisition module, which can avoid the cost of adding an additional image acquisition device, such as a camera used only for displaying images to the user; moreover, displaying a virtual image avoids leaking the user's facial image privacy and improves the user experience.
  • the recognition target is a user's face
  • the virtual image is used to prompt the user to adjust the position of the face.
  • the processor is configured to: perform face information extraction on the image of the recognition target to obtain a characteristic face image, and process the characteristic face image to obtain the virtual image.
  • the processor is configured to: match the characteristic face image with multiple virtual image templates to obtain the virtual image.
  • the size of the virtual image is the same as the size of the characteristic face image, and/or the outline of the virtual image is the same as the outline of the characteristic face image, and/or the position of the virtual image on the display screen is the same as the position of the characteristic face image in the two-dimensional infrared image.
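To make the matching concrete, the sketch below selects the closest virtual image template by a simple Euclidean distance over feature vectors; the vector representation, the distance criterion, and the function name are illustrative assumptions, not the method specified in this application.

```python
import math

def select_virtual_template(feature_vec, template_vecs):
    """Return the index of the virtual image template closest to the
    characteristic face image (Euclidean distance is illustrative)."""
    def dist(t):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(t, feature_vec)))
    return min(range(len(template_vecs)), key=lambda i: dist(template_vecs[i]))
```

The returned index would identify the best-matching template to be scaled and placed on the display screen.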
  • the image of the recognition target includes a two-dimensional infrared image
  • the processor is configured to process the two-dimensional infrared image to form a two-dimensional virtual image, and to perform face recognition according to the two-dimensional infrared image.
  • the device further includes: an infrared light emitting module, configured to emit infrared light to the identification target;
  • the image acquisition module is further configured to receive the reflected infrared light signal of the infrared light reflected by the identification target, and convert the reflected infrared light signal to obtain the two-dimensional infrared image.
  • the processor is specifically configured to: match the two-dimensional infrared image with a plurality of infrared image templates, and when the matching is successful, determine that the face recognition is successful, or, when the matching fails, determine that the face recognition has failed.
  • the image of the recognition target includes a three-dimensional point cloud image
  • the processor is configured to process the three-dimensional point cloud image to form a virtual three-dimensional image, and to perform face recognition based on the three-dimensional point cloud image.
  • the processor is specifically configured to: match the three-dimensional point cloud image with multiple three-dimensional point cloud image templates, and when the matching is successful, determine that the face recognition is successful, or, when the matching fails, determine that the face recognition has failed.
  • the image of the recognition target includes a two-dimensional infrared image
  • the processor is configured to process the two-dimensional infrared image to form a virtual two-dimensional image, and to perform face recognition according to the two-dimensional infrared image and the three-dimensional point cloud image.
  • the processor is specifically configured to: perform two-dimensional face recognition according to the two-dimensional infrared image, and when the two-dimensional face recognition fails, determine that the face recognition fails;
  • when the two-dimensional face recognition is successful, perform three-dimensional face recognition according to the three-dimensional point cloud image; when the three-dimensional face recognition is successful, it is determined that the face recognition is successful, and when the three-dimensional face recognition fails, it is determined that the face recognition fails.
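The two-stage decision logic described above can be sketched as follows; the predicate functions are placeholders standing in for the 2D and 3D matching steps.

```python
def cascade_face_recognition(ir_image, point_cloud, match_2d, match_3d):
    """Two-stage verification as described above: 2D infrared matching
    runs first; 3D point-cloud matching runs only if 2D succeeds."""
    if not match_2d(ir_image):
        return False              # 2D failure -> overall failure
    return match_3d(point_cloud)  # 3D result decides the final outcome
```

The cascade never reaches the 3D stage when 2D matching fails, mirroring the order stated in the text.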
  • the device further includes: a structured light projection module, configured to project structured light to the identification target;
  • the image acquisition module is further configured to receive the reflected structured light signal after the structured light is reflected by the identification target, and convert the reflected structured light signal to obtain the three-dimensional point cloud image.
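The conversion from reflected structured light to a 3D point cloud typically relies on triangulation under a pinhole camera model; the sketch below illustrates that general relation (the function names and all parameter values are assumptions, not details given in this application).

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulate depth (mm) from the disparity of a projected dot or
    speckle relative to its reference position."""
    return focal_px * baseline_mm / disparity_px

def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with known depth into 3D camera coordinates,
    yielding one point of the point cloud (pinhole model assumed)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)
```

Repeating `pixel_to_point` over every pixel with a recovered depth produces the 3D point cloud image of the recognition target.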
  • the structured light is dot matrix light or random speckle
  • the structured light projection module is a dot matrix light projector or a speckle structured light projector.
  • the device further includes: a distance detection module configured to detect the distance from the recognition target to the face recognition device.
  • when the distance from the recognition target to the face recognition device is within a first distance range interval, the processor is specifically configured to control the structured light projection module to project structured light to the recognition target.
  • the device further includes: an output module;
  • the processor is further configured to receive the distance from the recognition target to the face recognition device, and control whether the output module outputs the distance information from the recognition target to the face recognition device.
  • when the distance from the recognition target to the face recognition device is within a first distance range interval, the processor is specifically configured to control the output module not to output the distance information from the recognition target to the face recognition device;
  • otherwise, the processor is specifically configured to control the output module to output the distance information from the recognition target to the face recognition device.
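The distance-gated output behavior can be illustrated as follows; the range bounds and the prompt strings are hypothetical examples, not values specified in this application.

```python
def distance_feedback(distance_mm, near_mm=250, far_mm=500):
    """Output distance information only when the face is outside the
    working range [near_mm, far_mm]; suppress it when inside the range.
    The bounds are illustrative placeholders."""
    if near_mm <= distance_mm <= far_mm:
        return None                       # first range: no distance output
    return "move closer" if distance_mm > far_mm else "move back"
```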
  • the image acquisition module is an infrared camera, which includes a filter and an infrared light detection array.
  • the device further includes: a display screen for displaying the virtual image.
  • a method for face recognition including:
  • the image of the recognition target is processed to form a virtual image, and face recognition is performed based on the image, wherein the virtual image is used for display on a display screen.
  • the recognition target is a user's face
  • the virtual image is used to prompt the user to adjust the position of the face.
  • the processing the image of the recognition target to form a virtual image includes:
  • the processing to obtain the virtual image according to the characteristic face image includes:
  • the size of the virtual image is the same as the size of the characteristic face image, and/or the outline of the virtual image is the same as the outline of the characteristic face image, and/or the position of the virtual image on the display screen is the same as the position of the characteristic face image in the two-dimensional infrared image.
  • the image of the recognition target includes a two-dimensional infrared image
  • performing face recognition according to the image of the recognition target includes:
  • the two-dimensional infrared image is processed to form a two-dimensional virtual image, and face recognition is performed based on the two-dimensional infrared image.
  • the method further includes: emitting infrared light to the identification target;
  • the performing face recognition based on the two-dimensional infrared image includes: matching the two-dimensional infrared image with a plurality of infrared image templates, and when the matching is successful, determining that the face recognition is successful, or, when the matching fails, determining that the face recognition has failed.
  • the image of the recognition target includes a three-dimensional point cloud image
  • performing face recognition according to the image of the recognition target includes:
  • the three-dimensional point cloud image is processed to form a virtual three-dimensional image, and face recognition is performed according to the three-dimensional point cloud image.
  • the performing face recognition according to the three-dimensional point cloud image includes:
  • the three-dimensional point cloud image is matched with multiple three-dimensional point cloud image templates, and when the matching is successful, it is determined that the face recognition is successful, or when the matching fails, it is determined that the face recognition fails.
  • the image of the recognition target includes a two-dimensional infrared image
  • performing face recognition according to the image of the recognition target includes:
  • the two-dimensional infrared image is processed to form a virtual two-dimensional image, and face recognition is performed based on the two-dimensional infrared image and the three-dimensional point cloud image.
  • the performing face recognition based on the two-dimensional infrared image and the three-dimensional point cloud image includes:
  • two-dimensional face recognition is performed according to the two-dimensional infrared image, and when the two-dimensional face recognition fails, it is determined that the face recognition fails; when the two-dimensional face recognition is successful, three-dimensional face recognition is performed according to the three-dimensional point cloud image; when the three-dimensional face recognition is successful, it is determined that the face recognition is successful, and when the three-dimensional face recognition fails, it is determined that the face recognition fails.
  • the method further includes: projecting structured light to the recognition target;
  • the structured light is lattice light or random speckle.
  • the method for face recognition is applied to a face recognition device, and the method further includes:
  • the distance from the recognition target to the face recognition device is detected.
  • the structured light is projected to the recognition target.
  • the method further includes:
  • the judgment whether to output the distance from the recognition target to the face recognition device includes:
  • the determining whether to output distance information from the recognition target to the face recognition device includes:
  • the method further includes:
  • the virtual image is displayed.
  • an electronic device including:
  • the electronic device further includes: a display screen for displaying the virtual image.
  • the electronic device further includes: a wireless network access module for transmitting face recognition data to the wireless local area network.
  • the electronic device further includes: a motor control module, configured to control the mechanical device according to the result of face recognition.
  • in a fourth aspect, a chip is provided, which includes an input/output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions
  • the at least one processor is used to call the instructions to execute the method in the second aspect or any possible implementation of the second aspect.
  • a computer-readable medium for storing a computer program, and the computer program includes instructions for executing the foregoing second aspect or any possible implementation of the second aspect.
  • a computer program product including instructions is provided.
  • when the computer runs the instructions of the computer program product, the computer executes the face recognition method in the second aspect or any possible implementation of the second aspect.
  • the computer program product can run on the electronic device of the third aspect.
  • FIG. 1 is a schematic structural diagram of an electronic device to which an embodiment of the present application is applied.
  • Fig. 2 is a schematic diagram of a face recognition device according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of another face recognition device according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of another face recognition device according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of another face recognition device according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of another face recognition device according to an embodiment of the present application.
  • Fig. 7 is a schematic flowchart of a face recognition process according to an embodiment of the present application.
  • Fig. 8 is a schematic flowchart of another face recognition process according to an embodiment of the present application.
  • Fig. 9 is a schematic flowchart of another face recognition process according to an embodiment of the present application.
  • Fig. 10 is a schematic flowchart of another face recognition process according to an embodiment of the present application.
  • Fig. 11 is a schematic flowchart of another face recognition process according to an embodiment of the present application.
  • Fig. 12 is a schematic block diagram of an electronic device according to an embodiment of the present application.
  • Fig. 13 is a schematic block diagram of another electronic device according to an embodiment of the present application.
  • the embodiments of the present application may be applicable to optical face recognition devices, including but not limited to products based on optical face imaging.
  • the optical face recognition device can be applied to various electronic devices with image acquisition devices (such as cameras).
  • the electronic devices can be mobile phones, tablet computers, smart wearable devices, smart door locks, etc., which is not limited in the embodiments of the present application.
  • the size of the sequence number of each process does not imply its order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • FIG. 1 is a schematic structural diagram of an electronic device 1 to which the embodiment of the application can be applied.
  • the electronic device 1 includes an infrared camera 110, a visible light camera 120, a display screen 130, and a processor 140.
  • the infrared camera 110 includes an infrared image sensor, which is used to receive an infrared light signal and convert the received infrared light signal into a corresponding electrical signal, thereby generating an infrared image.
  • the visible light camera 120 includes a visible light image sensor for receiving visible light signals in the environment and converting the received visible light signals into corresponding electrical signals to generate visible light images.
  • the processor 140 may be a microprocessor (Microprocessor Unit, MPU) or other device with a processing control function, which can control various device components in an electronic device, and can also perform calculation processing on data generated by the device components.
  • the recognition target is in front of the electronic device 1.
  • the recognition target is a human face 103
  • the visible light in the environment is reflected by the human face 103 to form a reflected visible light signal 102 carrying facial morphological information, which is received by the visible light camera 120 to form a visible light color face image corresponding to the human face 103.
  • the processor 140 controls the visible light camera 120 to collect multiple visible light color face images of the human face 103 and sends the multiple color face images to the display screen 130, and the display screen 130 dynamically displays them in real time to form a dynamic color video of the human face.
  • the user adjusts the position and angle of the face according to the face video displayed in real time on the display screen 130, so that the face is completely in the display screen and at a suitable angle and position.
  • the infrared light is reflected by the human face 103 to form a reflected infrared light signal 101 that carries information about the shape of the human face.
  • after the reflected infrared light signal 101 is received by the infrared camera 110, it forms a grayscale infrared face image corresponding to the human face 103.
  • the processor 140 controls the infrared camera 110 to collect one or more grayscale infrared images of the human face.
  • the infrared camera 110 sends the collected grayscale infrared images of one or more human faces to the processor 140, and the processor 140 performs two-dimensional (Two Dimensional, 2D) face recognition on the infrared image of the human face.
  • the processor 140 includes a storage unit, and the storage unit stores an infrared image template library of the user's face, which contains a plurality of infrared image templates of the user's face with different face angles.
  • the multiple user face infrared image templates may also be template data vectors obtained by processing infrared images of multiple face angles captured by the infrared camera 110.
  • the processor 140 matches the data vector obtained by processing the currently collected infrared face image corresponding to the face 103 with the template data vectors of the multiple infrared images in the infrared image template library; if the matching is successful, the 2D face recognition succeeds, and if the matching fails, the 2D face recognition fails.
  • for ease of description, the data vector obtained by processing an image template is also referred to as the image template, and the data vector obtained by processing an image during the matching process is also referred to as the image.
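As an illustrative sketch of this vector matching (the cosine-similarity criterion and the 0.8 threshold are assumptions for illustration, not values given in this application):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two data vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_infrared_templates(face_vec, template_vecs, threshold=0.8):
    """2D matching as described: the face's data vector is compared with
    every infrared template vector; recognition succeeds if any
    similarity reaches the threshold (0.8 is an illustrative value)."""
    return any(cosine_similarity(face_vec, t) >= threshold
               for t in template_vecs)
```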
  • the processor performs face classification and recognition on the collected face images through a convolutional neural network (Convolutional Neural Network, CNN). Specifically, a face recognition convolutional neural network that judges whether an image is the user's face is first trained through multiple samples to obtain the relevant parameters of the network, where the face recognition convolutional neural network performs classification according to the multiple infrared image templates in the template library.
  • the collected 2D infrared image data is input to the face recognition convolutional neural network and processed through the convolutional layers, activation layers, pooling layers, and fully-connected layers; after the features of the 2D infrared image data are extracted, classification and discrimination are performed to determine whether the 2D infrared image matches the multiple infrared image templates in the template library, so as to obtain the 2D recognition result.
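A minimal sketch of such a forward pass, assuming a single convolution kernel and placeholder weights rather than the trained parameters described above:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Single-channel 'valid' convolution (cross-correlation form)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_forward(img, kernel, fc_weights):
    """Sketch of the pipeline above: convolution -> ReLU activation ->
    2x2 max pooling -> fully-connected layer producing class scores.
    All weights are illustrative placeholders."""
    x = np.maximum(conv2d_valid(img, kernel), 0.0)          # conv + ReLU
    h2, w2 = x.shape[0] // 2, x.shape[1] // 2
    pooled = x[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))
    return fc_weights @ pooled.ravel()                      # class scores
```

A real network stacks many such layers and learns the kernel and fully-connected weights from the training samples.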
  • the electronic device 1 uses the visible light camera 120 to photograph the user's face in real time, so that the user can observe his own face image on the display screen 130 and keep the face in a suitable position, which makes it convenient for the infrared camera 110 to capture an infrared image and send it to the processor 140 for face recognition.
  • however, the visible light camera 120 only acts as a "mirror", which increases the cost of face recognition in the electronic device.
  • moreover, the color user face image captured by the visible light camera 120 is easily leaked, and a leaked color face image still conforms to the two-dimensional features of the user's face, so the processor 140 could also successfully recognize it in two dimensions, creating a security problem for the electronic device.
  • when the ambient light is too strong or too dark, the image captured by the visible light camera 120 has low contrast and poor quality, and the low-quality dynamic face video displayed on the display screen 130 also degrades the user experience.
  • therefore, this application provides a solution that uses only one image acquisition module for image acquisition, and uses a processor both to perform face recognition and to virtualize the collected image for display; this saves the cost of a visible light camera, and the virtualized image neither leaks user information nor degrades in quality with changes in ambient light, which can improve security and the user experience.
  • FIG. 2 is a face recognition apparatus 200 provided by an embodiment of this application, including:
  • the image acquisition module 210 is used to acquire an image of the recognition target
  • the processor 220 is configured to process the image of the recognition target to form a virtual image, and perform face recognition according to the image of the recognition target, wherein the virtual image is used to be displayed on a display screen.
  • the recognition target includes, but is not limited to, any objects such as human faces, photos, videos, and three-dimensional models.
  • the recognition target may be a user's face, other people's faces, a user's photo, a curved surface model with photos, etc.
  • the image acquisition module 210 may be any device that acquires images, such as a camera or a video camera.
  • the image acquisition module is an infrared light image acquisition module, which senses infrared light signals to form image signals.
  • the image acquisition module may also sense optical signals of other non-infrared wavelengths to form image signals, which is not limited in the embodiment of the present application.
  • the image acquisition module 210 may be an infrared camera, including a filter 211 and an infrared light image sensor 212.
  • the filter 211 is used to transmit light signals of a target wavelength and filter light signals of non-target wavelengths.
  • the light sensor 212 performs light detection based on the target wavelength, and converts the detected light signal into an electrical signal.
  • the infrared light image sensor is a charge-coupled device (CCD) image sensor, or a complementary metal oxide semiconductor (Complementary Metal Oxide Semiconductor, CMOS) image sensor.
  • the infrared light image sensor 212 includes a plurality of pixel units, and one pixel unit is used to convert a light signal to form a pixel value in an identification target image.
  • the pixel unit may use a photodiode (photodiode), a metal oxide semiconductor field effect transistor (Metal Oxide Semiconductor Field Effect Transistor, MOSFET) and other devices.
  • the pixel unit has a higher optical sensitivity and a higher quantum efficiency for the light of the target wavelength, so as to facilitate the detection of the optical signal of the corresponding wavelength.
  • the target wavelength belongs to the infrared light band.
  • the filter 211 is used to transmit infrared light signals at 940 nm and block light signals with wavelengths other than 940 nm
  • the infrared light image sensor 212 detects the 940 nm infrared light and forms a 2D infrared image corresponding to the recognition target.
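The per-pixel conversion from raw sensor counts to a grayscale image can be sketched as below; the 10-bit raw depth and the function name are illustrative assumptions, not details given in this application.

```python
def raw_to_grayscale(raw_rows, bit_depth=10):
    """Scale raw per-pixel sensor counts (one value per pixel unit) to an
    8-bit grayscale infrared image; the 10-bit depth is illustrative."""
    full_scale = (1 << bit_depth) - 1
    return [[min(255, max(0, round(v / full_scale * 255)))
             for v in row] for row in raw_rows]
```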
  • the processor 220 may be a processor of the face recognition apparatus 200 or a processor of an electronic device including the face recognition apparatus 200, which is not limited in the embodiment of the present application.
  • the processor 220 is configured to process the 2D infrared image to form a 2D virtual image, and the 2D virtual image is configured to be displayed on the display screen of the electronic device.
  • the image acquisition device 210 may also acquire a three-dimensional (Three Dimensional, 3D) image of the recognition target, and the processor 220 may also be used to process the 3D image A 3D virtual image is formed, and the 3D virtual image is used to be displayed on the display screen of the electronic device.
  • the processor 220 includes a storage unit 221 that stores a virtual image template library, including a plurality of virtual image templates, and the virtual image template may be a 2D virtual image template or a 3D virtual image template.
  • the multiple virtual image templates may include virtual images of all or part of human faces from different angles, or virtual images of other objects.
  • the processor 220 processes the 2D or 3D image of the recognition target and extracts the facial information in the image, including but not limited to the contour, size, and position of the facial features and facial skin texture information, etc., to form a characteristic face image.
  • the characteristic face image may be a cropped area in the image that only includes the face image.
  • the processor 220 matches the facial information obtained by processing the characteristic face image with the multiple virtual image templates, and displays the best-matching virtual image template on the display screen, so that the virtual image on the display screen corresponds to the actually collected 2D image.
  • the virtual image is displayed on the display screen instead of the collected image, and the user adjusts the position and direction of the face according to the virtual image; that is, the user can move accordingly following the image prompt on the display screen, so that a complete virtual image of the human face is displayed on the display screen at a suitable angle and position.
  • the virtual image may be a 2D virtual image or a 3D virtual image.
  • the size of the virtual image may be the same as or similar to the size of the characteristic face image, and the virtual image may more intuitively reflect the distance between the user's face and the recognition device.
  • the contour of the virtual image may be the same as the contour of the characteristic face image.
  • the virtual image is also shown as a face pattern.
  • the facial features are displayed on the face pattern, which can correspond to the facial features on the characteristic face image, thereby improving user experience.
  • the position of the virtual image on the display screen is the same as the position of the characteristic face image in the two-dimensional infrared image, so that the virtual image can more intuitively reflect the position of the user's face relative to the image acquisition device.
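Keeping the virtual image at the same relative position and size can be illustrated by a linear mapping from infrared-image coordinates to screen coordinates (a hypothetical sketch, assuming axis-aligned scaling):

```python
def map_face_box_to_screen(face_box, ir_size, screen_size):
    """Map a face bounding box (x, y, w, h) from infrared-image pixel
    coordinates to display-screen coordinates so that the virtual image
    keeps the face's relative position and size."""
    ir_w, ir_h = ir_size
    sc_w, sc_h = screen_size
    x, y, w, h = face_box
    sx, sy = sc_w / ir_w, sc_h / ir_h
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))
```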
  • for example, when the virtual face image on the display screen corresponds to only half of the face, the user moves the position of the face according to the presented half face, so that the image acquisition module 210 can collect a complete infrared image of the frontal face; the processor 220 then processes the complete frontal infrared image into a complete virtual image of the frontal face, which is presented on the display screen.
  • the virtual image template library may be stored in the storage unit 221 in the processor 220, or in the memory of the electronic device where the face recognition apparatus 200 is located; the embodiments of this application do not limit this.
  • the image acquisition module 210 may be the infrared camera 110 in FIG. 1.
  • the processor 220 may be the processor 140 in FIG. 1.
  • the process of acquiring infrared images by the image acquisition module 210 may be the same as the process of acquiring infrared images of the user's face by the infrared camera 120 in FIG. 1.
  • the process of the processor 220 performing face recognition on the infrared image may be the same as the process of performing 2D face recognition by the processor 140 in FIG. 1.
  • the image acquisition module 210 captures an image of the recognition target, and the processor 220 both virtualizes the captured image for display to the user and performs face recognition on it. This avoids the cost of an additional camera used only for image display, and because a virtual image rather than the real image is displayed, the privacy of the user's facial image is protected, improving user experience.
  • the face recognition apparatus 200 may further include an infrared light emitting module 230, which emits infrared light 201 to the surface of the recognition target; after the light is reflected by the recognition target, the reflected infrared light 202 is received by the image acquisition module 210 to form a 2D infrared image of the recognition target. Adding the infrared light emitting module 230 to the face recognition apparatus 200 increases the intensity of the infrared light and of the infrared signal reflected by the recognition target, improving the quality of the 2D infrared image of the recognition target.
  • the infrared light emitting module 230 can be any light emitting device that emits infrared light signals, including but not limited to infrared light emitting diodes (LED), vertical cavity surface emitting lasers (VCSEL), Fabry-Perot (FP) laser diodes (LD), distributed feedback (DFB) lasers, and electro-absorption modulated lasers (EML); the embodiments of this application place no restriction on this.
  • the processor 220 may control the infrared light emitting module 230 to emit infrared light signals to the recognition target. While the infrared light emitting module 230 emits infrared light signals, the image acquisition module 210 collects 2D infrared images.
  • after the image acquisition module 210 acquires the 2D infrared image of the recognition target, it sends the 2D infrared image to the processor 220, which processes it to obtain a virtual 2D image for display to the user and performs 2D face recognition based on the 2D infrared image.
  • the storage unit 221 in the processor 220 stores a 2D infrared image template library, including a plurality of 2D infrared image templates, and the plurality of 2D infrared image templates may be 2D infrared images of a user's face from different angles.
  • the processor 220 matches the 2D infrared image with the multiple 2D infrared image templates in the 2D infrared image template library. If the matching succeeds, 2D face recognition succeeds; if it fails, 2D face recognition fails.
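The patent does not specify how the matching is scored; one plausible similarity measure is normalized cross-correlation over equally sized grayscale images, as sketched below. The threshold value and function names are assumptions for illustration only.

```python
import numpy as np

MATCH_THRESHOLD = 0.9  # assumed acceptance threshold, not specified in the text

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grayscale images,
    in [-1, 1]; values near 1 indicate a close match up to brightness/contrast."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def match_2d(ir_image, templates):
    """2D recognition step: succeed if the captured infrared image is
    similar enough to any enrolled template."""
    return max(ncc(ir_image, t) for t in templates) >= MATCH_THRESHOLD
```

In practice the processor would compare against templates of the user's face captured from several angles, accepting if any template clears the threshold.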
  • the 2D infrared image template library may be stored in the storage unit 221 in the processor 220, or in the memory of the electronic device where the face recognition apparatus 200 is located; the embodiments of this application do not limit this.
  • the face recognition apparatus 200 may further include a structured light projection module 240 for projecting structured light 301 to the recognition target.
  • the image acquisition module 210 is specifically configured to receive the reflected structured light signal 302 after the structured light is reflected by the identification target, and convert the reflected structured light signal 302 to obtain 3D data.
  • the three-dimensional data contains the depth information of the recognition target and can represent the surface shape of the recognition target.
  • the three-dimensional data can be expressed in different forms such as a depth image (Depth Image), a 3D point cloud (Point Cloud), or a geometric model, where a 3D point cloud is a massive set of points expressing the spatial distribution and surface characteristics of the target under a common spatial reference system.
  • 3D point cloud data is also referred to as a 3D point cloud image.
  • structured light is light with a specific pattern, such as dots, lines, or surfaces, and may specifically be an infrared light signal carrying such a pattern.
  • the principle of obtaining 3D point cloud data from structured light is as follows: structured light is projected onto a target object, and the corresponding structured light image is captured after reflection from the object's surface. Because the pattern of the structured light deforms with the surface shape of the target object, the position of the pattern in the captured image and its degree of deformation can be used, via the principle of triangulation, to calculate the spatial coordinates of each sampling point on the target object, forming 3D point cloud data that represents its three-dimensional structure.
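The triangulation step above can be sketched with the standard pinhole-camera relations; the parameter names (focal length in pixels, projector-camera baseline, principal point) are conventional, but the concrete values are illustrative assumptions rather than the patented implementation.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Triangulation: a pattern feature shifted by `disparity_px` pixels
    between its expected and observed position corresponds to depth
    Z = f * B / d (rectified projector-camera pair)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z into a 3D sampling point;
    applying this to every detected pattern feature yields the point cloud."""
    return ((u - cx) * z / fx, (v - cy) * z / fy, z)
```

For instance, with a 500 px focal length and a 40 mm baseline, a 10 px pattern shift corresponds to a depth of 2000 mm.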
  • the structured light is an optical signal in the infrared waveband, for example a patterned optical signal with a wavelength of 940 nm.
  • the infrared image sensor in the image acquisition module 210 receives and processes the reflected structured light signal reflected by the identification target to obtain 3D point cloud data.
  • the structured light includes, but is not limited to, speckle patterns, dot-matrix light, and other light signals with structured patterns.
  • the structured light projection module 240 can be any device structure that projects structured light, including but not limited to: a dot matrix light projector using a VCSEL light source, a speckle structured light projector and other light emitting devices.
  • the face recognition apparatus 200 may further include a Time of Flight (TOF) optical module for emitting continuous near-infrared pulses to the recognition target
  • the image acquisition module 210 receives the light pulses reflected by the target object; by comparing the phase difference between the emitted light pulse and the reflected light pulse, the transmission delay can be calculated to obtain the distance of the target object relative to the transmitter, and finally the 3D point cloud data of the recognition target.
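The phase-difference ranging described here follows the standard continuous-wave TOF relation: the round-trip delay is phase/(2*pi*f), so the one-way distance is c * phase / (4 * pi * f). The modulation frequency below is an assumed example value.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(phase_shift_rad, modulation_freq_hz):
    """Continuous-wave TOF ranging: convert the measured phase shift
    between the emitted and reflected pulse trains into a one-way distance."""
    round_trip_s = phase_shift_rad / (2.0 * math.pi * modulation_freq_hz)
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

At an assumed 10 MHz modulation, a phase shift of pi radians corresponds to a 50 ns round trip, i.e. roughly 7.5 m one way; per-pixel distances computed this way form the depth map from which the point cloud is built.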
  • the face recognition device 200 may also include other module devices capable of acquiring 3D point cloud data of the recognition target, which is not limited in the embodiment of the present application.
  • regular 3D point cloud data carrying the necessary information can be converted back into depth image data, and a depth image can likewise, after coordinate conversion, be converted into 3D point cloud data of the recognition target.
  • the 3D point cloud data is sent to the processor 220, which performs 3D face recognition on the 3D point cloud data.
  • the storage unit 221 in the processor 220 stores a 3D point cloud data template library, which contains a plurality of user face 3D point cloud data of different face angles.
  • the 3D point cloud data of these user faces is likewise obtained by emitting structured light or infrared pulses to the user's face at different angles and having the image acquisition module 210 receive the reflected structured light or infrared pulses.
  • the processor 220 matches the 3D point cloud data collected this time with the 3D point cloud data of the user faces in the 3D point cloud data template library. If the matching succeeds, 3D face recognition succeeds; if it fails, 3D face recognition fails.
  • a deep learning network may also be used to perform face recognition matching on 3D point cloud data.
  • the processor 220 performs face classification and recognition on the collected 3D point cloud data through a point cloud processing network (point net).
  • the point net includes a feature extraction layer, a mapping layer, a feature map compression layer, a fully connected layer, and so on.
  • the network contains multiple training parameters. First, the point net is trained on multiple samples to obtain optimized training parameters, so that its face recognition results are more accurate.
  • the training samples include 3D point cloud data of the user's face from multiple angles, as well as 3D point cloud data of non-user faces, for example other people's faces or point cloud data of objects such as three-dimensional models and two-dimensional photos. Then, during 3D face recognition, the collected 3D point cloud data of the recognition target is input into the point cloud processing network; after feature extraction, processing, and classification in each layer of the network, it is judged whether the 3D point cloud data matches the 3D point cloud data of the user's face in the 3D point cloud data template library.
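As a heavily simplified sketch of the point-net idea, the toy classifier below uses untrained random weights and only illustrates the characteristic structure: a shared per-point feature extractor, a symmetric max-pool that compresses the per-point feature map into one global feature, and a fully connected head. The max-pool is what makes the output independent of point ordering; the layer sizes and two-class head (user face vs. other) are assumptions, not the trained network of the text.

```python
import numpy as np

rng = np.random.default_rng(42)
W_feat = rng.normal(size=(3, 64))   # shared per-point MLP (feature extraction layer)
W_cls = rng.normal(size=(64, 2))    # fully connected head: user face vs. other

def point_net_logits(points):
    """Toy PointNet-style classifier over an (N, 3) point cloud."""
    feats = np.maximum(points @ W_feat, 0.0)  # (N, 64) per-point features
    global_feat = feats.max(axis=0)           # symmetric, order-invariant pooling
    return global_feat @ W_cls                # (2,) class logits
```

A real system would train these weights on the positive and negative point cloud samples described above; the sketch only demonstrates that shuffling the input points leaves the logits unchanged.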
  • the 3D point cloud data template library may be stored in the storage unit 221 in the processor 220, or in the memory of the electronic device where the face recognition apparatus 200 is located; the embodiments of this application do not limit this.
  • because 2D face recognition performs face recognition based only on the two-dimensional features of the 2D infrared image, it cannot identify whether the collected 2D infrared image comes from a living human face or from a non-living object such as a photo or video.
  • 2D face recognition therefore has no anti-counterfeiting capability: photos, videos, or other material carrying the user's face can be stolen and used to pass 2D face recognition, so relying only on 2D face recognition to unlock electronic devices and their applications greatly affects the security of the face recognition apparatus and the electronic device.
  • the judgment of face recognition is performed based on 3D point cloud data. Since the 3D point cloud data carries 3D information of the face, it can be used to determine whether the recognition target is a living face, thereby preventing a user photo, or a fake 3D model with the photo overlaid on a three-dimensional surface, from unlocking the electronic device. Further, improving the accuracy of 3D face recognition on 3D point cloud data improves the judgment of whether the recognition target is the user's live face, further enhancing the security of the face recognition apparatus and electronic device and making the recognition result more reliable.
  • the face recognition apparatus 200 may include an infrared light emitting module 230 and a structured light projection module 240 at the same time.
  • the processor 220 controls the infrared light emitting module 230 to emit infrared light 201 to the recognition target; the image acquisition module 210 generates a 2D infrared image based on the reflected infrared light 202 and then sends the 2D infrared image to the processor 220.
  • the processor 220 performs 2D face recognition based on the 2D infrared image, and when the recognition fails, the face recognition fails.
  • the processor 220 controls the structured light projection module 240 to project the structured light 301 to the recognition target.
  • the image acquisition module 210 generates 3D point cloud data based on the reflected structured light 302 and sends the 3D point cloud data to the processor 220, which performs 3D face recognition based on the 3D point cloud data.
  • the processor 220 combines the 2D infrared image and the 3D point cloud image to perform face recognition, which can quickly reject a non-user face, improving recognition efficiency and enhancing security.
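The 2D-then-3D flow described above can be sketched as a short cascade; the callables for capture and matching are placeholders standing in for the modules and matchers of the apparatus.

```python
def recognize(capture_2d, match_2d, capture_3d, match_3d):
    """Cascade: run cheap 2D matching first, and spend 3D acquisition
    (structured light projection) and matching effort only after 2D
    has passed, so non-user faces are rejected quickly."""
    if not match_2d(capture_2d()):
        return False                 # fast reject: no 3D work is performed
    return match_3d(capture_3d())    # 3D identity + liveness check
```

The design choice is that the expensive, eye-safety-sensitive structured light step is never triggered for an image that already failed 2D matching.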
  • the face recognition apparatus 200 may further include a distance detection module 250 for detecting the distance from the recognition target to the face recognition apparatus 200 .
  • the distance detection module 250 can use electromagnetic waves, ultrasonic waves and other signals to detect the distance.
  • the principle is as follows: the distance detection module 250 emits ultrasonic or electromagnetic waves toward the recognition target, and the time difference or phase difference between emission and reception of the echo reflected by the target is used to measure the distance from the recognition target to the face recognition apparatus 200.
  • the ultrasonic wave emitted by the distance detection module 250 is a sound wave with a frequency above 20,000 Hz; it has good directivity and strong penetrating power, and its sound energy is easily concentrated.
  • the electromagnetic wave emitted by the distance detection module 250 is a light pulse or a light wave or microwave modulated by a high-frequency current, which has a rapid response and high measurement accuracy.
  • the distance detection module 250 may be an ultrasonic detector, an electromagnetic wave detector, or other devices that detect distance.
  • the face recognition apparatus 200 may further include an output module 260 for prompting the user with distance information.
  • the output module 260 includes, but is not limited to, a display module, a sound module, an optical module, a vibration module, and other functional modules that can directly or indirectly present distance information to the user.
  • the processor 220 is configured to: receive the distance from the recognition target to the face recognition device 200, and send the distance information to the output module 260,
  • the output module 260 presents the distance information to the user in real time.
  • the output module 260 may be a display screen, which displays the distance information as a numerical value on the screen and gives an indication whether it is at a suitable distance.
  • the output module 260 is a sound module. When the distance is within a suitable range, for example within a determined first distance range, a first prompt tone is emitted; when the distance is outside the first distance range, a second prompt tone is emitted. The user then moves the face to a suitable position according to the on-screen instructions or the different prompt tones.
  • the processor 220 is configured to receive the distance from the recognition target to the face recognition apparatus 200 and control whether the output module 260 outputs this distance information. Specifically, the processor 220 determines whether the distance is within a suitable range, for example within a determined first distance range; when it is, the processor controls the output module not to act, for example not to emit a prompt tone, and when the distance is outside the first distance range, the output module 260 is controlled to act, for example by emitting a prompt tone.
  • the processor 220 may also control the output module 260 to perform different actions according to different distance information. For example: when the distance is within the first distance range, the processor 220 controls the output module 260 not to act; when the distance is below the first distance range, the processor 220 controls the output module 260 to emit a first prompt tone; and when the distance is above the first distance range, the processor 220 controls the output module 260 to emit a second prompt tone. The user then moves the face according to the different prompt tones until no tone is emitted, indicating that the face is in a suitable position.
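The prompt selection just described amounts to a small range check. The numeric bounds of the "first distance range" below are assumed values for illustration; the text does not specify them.

```python
FIRST_RANGE_MM = (250, 500)  # assumed bounds of the "first distance range"

def select_prompt(distance_mm, lo=FIRST_RANGE_MM[0], hi=FIRST_RANGE_MM[1]):
    """Choose the output-module action for a measured distance:
    silence inside the range, a first tone when too close, a second
    tone when too far, matching the behavior described above."""
    if lo <= distance_mm <= hi:
        return None           # suitable distance: no prompt
    return "first_tone" if distance_mm < lo else "second_tone"
```

The processor would call this on each new distance reading and drive the sound module accordingly until the user settles inside the range.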
  • when the face is at a suitable distance, the processor 220 controls the structured light projection module 240 to project structured light to the recognition target, or controls the TOF light module to emit continuous near-infrared pulses to the recognition target, so that the image acquisition module 210 collects 3D point cloud data images.
  • the output module 260 may also be used to output other information besides distance information. For example, when the user performs template entry or face recognition, a prompt tone indicating success or failure of entry and success or failure of recognition is issued.
  • distance detection is performed by the distance detection module 250, and the output module 260 outputs distance information, so that the user's face can be moved to a suitable area for face recognition more conveniently.
  • structured light is sent to the user's face only when the distance is suitable, which prevents the structured light from harming human eyes and improves user experience.
  • the face recognition apparatus 200 may further include a display screen 270, and the display screen 270 is used for displaying virtual images. The user adjusts the position of the face according to the dynamic display of the virtual image on the display screen 270.
  • the display screen 270 further includes a touch layer 271 that can sense the touch position of a finger on the display screen 270 and convert it into an electrical signal to the processor 220. The user can control the processor 220 to perform related actions through the touch layer 271.
  • FIG. 7 is a schematic flowchart of a face recognition method 20 according to an embodiment of the present application, including:
  • S250 Process the image of the recognition target to form a virtual image, where the virtual image is used for display on a display screen;
  • S260 Perform face recognition according to the image of the recognition target.
  • the face recognition method 20 in the embodiment of the present application can be applied to the face recognition apparatus 200, wherein the processor 220 controls the image acquisition module 210 to acquire the image of the recognition target; after the image acquisition module 210 sends the image to the processor 220, the processor 220 processes the image to form a virtual image and performs face recognition based on the image.
  • a face recognition method 20 further includes:
  • the processor 220 controls the infrared light emitting module 230 to emit infrared light to the recognition target.
  • step S240 includes:
  • S241 Acquire a 2D infrared image of the recognition target.
  • Step S250 includes:
  • S251 Process the 2D infrared image to form a 2D virtual image, where the 2D virtual image is used for display on a display screen.
  • Step S260 includes:
  • S261 Perform 2D face recognition according to the 2D infrared image.
  • the recognition target includes a human face
  • the image acquisition module 210 acquires a 2D infrared image of the human face
  • the 2D infrared image is sent to the processor 220, which processes the 2D infrared image to form a virtual 2D image and performs 2D face recognition.
  • the processor 220 matches the 2D infrared image with multiple 2D infrared image templates in the 2D infrared image template library. If the matching succeeds, then S271: face recognition succeeds; if the matching fails, then S272: face recognition fails.
  • another face recognition method 20 further includes:
  • step S232 is entered: structured light is emitted to the recognition target.
  • the processor 220 controls the structured light transmission module 240 to emit structured light to the recognition target.
  • Step S240 includes:
  • S242 Acquire 3D point cloud data of the recognition target. Specifically, the image acquisition module 210 acquires 3D point cloud data of the recognition target according to the structured light signal reflected from the surface of the recognition target.
  • Step S250 includes:
  • S252 Process the 3D point cloud data to form a virtual image, where the virtual image is used to be displayed on a display screen.
  • Step S260 includes:
  • S262 Perform 3D face recognition according to the 3D point cloud data.
  • the recognition target includes a human face
  • the image acquisition module 210 obtains the 3D point cloud data of the human face and sends the 3D point cloud data to the processor 220, which processes the data to form a virtual 2D or 3D image and performs 3D face recognition.
  • the processor 220 matches the 3D point cloud data with multiple 3D point cloud data templates in the 3D point cloud data template library. If the matching succeeds, face recognition succeeds; if the matching fails, face recognition fails.
  • another face recognition method 20 further includes:
  • S231 Transmit infrared light to the recognition target
  • S251 The 2D infrared image is processed to form a virtual image, where the virtual image is used to be displayed on a display screen.
  • S261 Perform 2D face recognition according to the 2D infrared image.
  • S242 Acquire 3D point cloud data of the recognition target.
  • S262 Perform 3D face recognition according to the 3D point cloud data.
  • when the 2D face recognition succeeds, this does not mean that face recognition as a whole has succeeded; further 3D face recognition is needed. Only when both the 2D face recognition and the 3D face recognition succeed is face recognition successful, and the next step can proceed.
  • if the 2D face recognition fails, face recognition fails directly, avoiding 3D recognition that would waste the processor's computing resources. Therefore, combining 2D face recognition with 3D face recognition can quickly reject non-user faces, and performing 3D detection on top of successful 2D recognition provides 3D anti-counterfeiting for the face and strengthens the security of the recognition process.
  • in step S261, 2D face recognition is performed based on the 2D infrared image; after the 2D face recognition succeeds, the face recognition method 20 further includes:
  • S280 Detect whether the distance from the recognition target to the recognition device is within a first threshold interval.
  • the processor 220 detects whether the distance from the recognition target to the recognition device is within the first threshold interval, and when the distance from the recognition target to the recognition device is within the first threshold interval, enter step S232: emit structured light to the recognition target .
  • the processor 220 controls the structured light projection module 240 to project structured light to the recognition target, or controls the TOF light module to emit continuous near-infrared pulses to the recognition target.
  • when the distance from the recognition target to the recognition device is not within the first threshold interval, proceed to step S281: output distance information.
  • the processor 220 controls the output module 260 to output different prompt information for different distances. At this time, the user moves the face according to different prompts, so that the face is in a proper position.
  • an embodiment of the present application further provides an electronic device 2, which may include the face recognition apparatus 200 of the foregoing application embodiment.
  • the electronic device 2 includes, but is not limited to, smart door locks, mobile phones, computers, access control systems, and other devices that require face recognition.
  • the electronic device 2 may include a processor 300, an output module 400, and a display screen 500.
  • the output module 400 may be the same as the output module 260 in FIG. 6, and the display screen 500 may be the same as the display screen 270 in FIG.
  • the display screen 500 is used to display a virtual image, which is convenient for the user to observe and move the position of the face so that it is in a suitable position for face recognition.
  • the display screen 500 may also display distance information of the output module 260, and other prompt information related to face recognition such as success or failure of recognition, template entry or failure, etc.
  • the output module 400 may also be used to emit prompt sounds or vibrations for actions unrelated to face recognition, for example the power-on and power-off prompt tones of the electronic device, which is not limited in the embodiments of the present application.
  • the processor 300 is the processor of the electronic device 2 and is mainly used to control the various parts of the electronic device 2; the display screen 500 is the display screen of the electronic device 2 and displays the device's main interface.
  • the face recognition apparatus 200 is only one functional component in the electronic device 2; the actions it performs are only part of what the processor 300 controls, and the virtual image it displays is only part of the content shown on the display screen 500.
  • the electronic device 2 may further include a memory 600, a motor control module 700, and a wireless network access module 800.
  • the virtual image template library and/or the infrared image template library and/or the 3D point cloud data template library may also be stored in the memory 600.
  • the processor 300 may control the motor control module 700 to unlock the lock.
  • the wireless network access module 800 is used to access a wireless local area network (Wireless Local Area Networks, WLAN) to implement network transmission of data in the processor 300 and the memory 600.
  • the processor of the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the face recognition device of the embodiment of the present application may further include a memory
  • the memory may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • the embodiment of the present application also proposes a computer-readable storage medium that stores one or more programs, and the one or more programs include instructions.
  • when the instructions are executed by a portable electronic device that includes multiple application programs, the portable electronic device can be made to execute the methods of the embodiments shown in FIGS. 7-11.
  • the embodiment of the present application also proposes a computer program, the computer program includes instructions, when the computer program is executed by the computer, the computer can execute the method of the embodiment shown in FIG. 7-11.
  • An embodiment of the present application also provides a chip that includes an input and output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions
  • the at least one processor is used to call the instructions stored in the at least one memory to execute the methods of the embodiments shown in FIGS. 7-11.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, or the like) execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk and other media that can store program code .

Abstract

A face recognition device, a face recognition method, and an electronic device, capable of improving the security and the user experience of face recognition. The face recognition device comprises: an image acquisition module, configured to acquire an image of a recognition target; and a processor, configured to process the image of the recognition target to form a virtual image and to perform face recognition according to the image of the recognition target, wherein the virtual image is used for display on a display screen.

Description

人脸识别的装置、方法和电子设备 技术领域
本申请涉及生物特征识别技术领域,并且更具体地,涉及一种人脸识别装置、方法和电子设备。
背景技术
人脸识别,是基于人的脸部特征信息进行身份识别的一种生物识别技术。用摄像机或摄像头采集含有人脸的图像或视频流,并自动在图像中检测和跟踪人脸,进而对检测到的人脸进行脸部的图像预处理、图像特征提取以及匹配与识别等一系列相关技术,通常也叫做人像识别或面部识别。随着计算机和网络技术的飞速发展,人脸识别技术已广泛地应用于智能门禁、智能门锁、移动终端、公共安全、娱乐、军事等诸多行业及领域。
目前市场上的人脸识别装置,需要采用一颗可见光摄像头和一颗近红外摄像头,其中红外摄像头用于拍摄二维近红外图像,用于进行图像模板的录入以及用户人脸的识别。而可见光摄像头仅作为“镜子”,用于显示用户人脸可见光图像,指示用户人脸处于合适的位置使红外摄像头便于进行人脸识别的拍摄,因此,增加了人脸识别装置的成本。此外,可见光摄像头拍摄的可见光图像容易泄露用户的隐私,存在安全问题,且在环境光过强或过暗时,可见光摄像头拍摄的图像质量差,也会影响用户体验。
发明内容
本申请实施例提供了一种人脸识别装置、方法和电子设备,能够降低成本、提升人脸识别的安全性以及用户体验。
第一方面,提供了一种人脸识别装置,包括:
图像采集模块,用于获取识别目标的图像;
处理器,用于对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别,其中,所述虚拟图像用于显示在显示屏上。
通过本申请实施例的方案，处理器对图像采集模块获取的图像进行虚拟化处理后在显示屏上显示给用户，并且对同一个图像采集模块获取的图像进行人脸识别，能够降低在人脸识别过程中额外增加其它图像采集装置（例如摄像头）用于显示图像给用户的成本，且采用虚拟图像显示，能够避免泄露用户的人脸图像隐私，提升用户体验。
在一种可能的实现方式中,所述识别目标为用户人脸,所述虚拟图像用于提示用户调整人脸位置。
在一种可能的实现方式中,所述处理器用于:对所述识别目标的图像进行人脸信息提取得到特征人脸图像,并根据所述特征人脸图像处理得到所述虚拟图像。
在一种可能的实现方式中,所述处理器用于:将所述特征人脸图像与多个虚拟图像模板进行匹配得到所述虚拟图像。
在一种可能的实现方式中,所述虚拟图像大小与所述特征人脸图像的大小相同,和/或所述虚拟图像的轮廓与所述特征人脸图像的轮廓相同,和/或所述虚拟图像在所述显示屏中的位置与所述特征人脸图像在所述二维红外图像中的位置相同。
在一种可能的实现方式中,所述识别目标的图像包括二维红外图像,所述处理器用于:对所述二维红外图像进行处理形成二维虚拟图像,并根据所述二维红外图像进行人脸识别。
在一种可能的实现方式中,所述装置还包括:红外发光模块,用于发射红外光至所述识别目标;
其中,所述图像采集模块还用于接收所述红外光经所述识别目标反射后的反射红外光信号,并将所述反射红外光信号转换得到所述二维红外图像。
在一种可能的实现方式中,所述处理器具体用于:将所述二维红外图像与多个红外图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,确定人脸识别失败。
在一种可能的实现方式中,所述识别目标的图像包括三维点云图像,所述处理器用于:对所述三维点云图像进行处理形成虚拟三维图像,并根据所述三维点云图像进行人脸识别。
在一种可能的实现方式中,所述处理器具体用于:将所述三维点云图像与多个三维点云图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,确定人脸识别失败。
在一种可能的实现方式中，所述识别目标的图像包括二维红外图像，所述处理器用于：对所述二维红外图像进行处理形成虚拟二维图像，并根据所述二维红外图像和三维点云图像进行人脸识别。
在一种可能的实现方式中,所述处理器具体用于:根据所述二维红外图像进行二维人脸识别,当二维人脸识别失败时,确定人脸识别失败;
当二维人脸识别成功时,根据所述三维点云图像进行三维人脸识别,当三维人脸识别成功时,确定人脸识别成功,当三维人脸识别失败时,确定人脸识别失败。
在一种可能的实现方式中,所述装置还包括:结构光投射模块,用于投射结构光至所述识别目标;
其中,所述图像采集模块还用于接收所述结构光经所述识别目标反射后的反射结构光信号,并将所述反射结构光信号转换得到所述三维点云图像。
在一种可能的实现方式中,所述结构光为点阵光或者随机散斑,所述结构光投射模块为点阵光投射器,或者散斑结构光投射器。
在一种可能的实现方式中,所述装置还包括:距离探测模块,用于探测所述识别目标至所述人脸识别装置的距离。
在一种可能的实现方式中,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述处理器具体用于控制所述结构光投射模块投射结构光至所述识别目标。
在一种可能的实现方式中,所述装置还包括:输出模块;
所述处理器还用于:接收所述识别目标至所述人脸识别装置的距离,并控制所述输出模块是否输出所述识别目标至所述人脸识别装置的距离信息。
在一种可能的实现方式中,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述处理器具体用于控制所述输出模块不输出所述识别目标至所述人脸识别装置的距离信息;
当所述识别目标至所述人脸识别装置的距离处于所述第一距离范围区间外时,所述处理器具体用于控制所述输出模块输出所述识别目标至所述人脸识别装置的距离信息。
在一种可能的实现方式中,所述图像采集模块为红外摄像头,包括滤波片和红外光检测阵列。
在一种可能的实现方式中,所述装置还包括:显示屏,用于显示所述虚拟图像。
第二方面,提供了一种人脸识别的方法,包括:
获取识别目标的图像;
对所述识别目标的图像进行处理形成虚拟图像,并根据所述图像进行人脸识别,其中,所述虚拟图像用于显示在显示屏上。
在一种可能的实现方式中,所述识别目标为用户人脸,所述虚拟图像用于提示用户调整人脸位置。
在一种可能的实现方式中,所述对所述识别目标的图像进行处理形成虚拟图像包括:
对所述识别目标的图像进行人脸信息提取得到特征人脸图像,并根据所述特征人脸图像处理得到所述虚拟图像。
在一种可能的实现方式中,所述根据所述特征人脸图像处理得到所述虚拟图像包括:
将所述特征人脸图像与多个虚拟图像模板进行匹配得到所述虚拟图像。
在一种可能的实现方式中,所述虚拟图像大小与所述特征人脸图像的大小相同,和/或所述虚拟图像的轮廓与所述特征人脸图像的轮廓相同,和/或所述虚拟图像在所述显示屏中的位置与所述特征人脸图像在所述二维红外图像中的位置相同。
在一种可能的实现方式中,所述识别目标的图像包括二维红外图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
对所述二维红外图像进行处理形成二维虚拟图像,并根据所述二维红外图像进行人脸识别。
在一种可能的实现方式中,所述方法还包括:发射红外光至所述识别目标;
接收所述红外光经所述识别目标反射后的反射红外光信号,并将所述反射红外光信号转换得到所述二维红外图像。
在一种可能的实现方式中,所述根据所述二维红外图像进行人脸识别包括:将所述二维红外图像与多个红外图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,人脸识别失败。
在一种可能的实现方式中,所述识别目标的图像包括三维点云图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
对所述三维点云图像进行处理形成虚拟三维图像,并根据所述三维点云图像进行人脸识别。
在一种可能的实现方式中,所述根据所述三维点云图像进行人脸识别包括:
将所述三维点云图像与多个三维点云图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,确定人脸识别失败。
在一种可能的实现方式中,所述识别目标的图像包括二维红外图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
对所述二维红外图像进行处理形成虚拟二维图像,并根据所述二维红外图像和三维点云图像进行人脸识别。
在一种可能的实现方式中,所述根据所述二维红外图像和三维点云图像进行人脸识别包括:
根据所述二维红外图像进行二维人脸识别,当二维人脸识别失败时,确定人脸识别失败;
当二维人脸识别成功时,根据所述三维点云图像进行三维人脸识别,当三维人脸识别成功时,确定人脸识别成功,当三维人脸识别失败时,确定人脸识别失败。
在一种可能的实现方式中,所述方法还包括:投射结构光至所述识别目标;
接收所述结构光经所述识别目标反射后的反射结构光信号,并将所述反射结构光信号转换得到所述三维点云图像。
在一种可能的实现方式中,所述结构光为点阵光或者随机散斑。
在一种可能的实现方式中,所述人脸识别的方法应用于人脸识别装置,所述方法还包括:
探测所述识别目标至所述人脸识别装置的距离。
在一种可能的实现方式中,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,投射所述结构光至所述识别目标。
在一种可能的实现方式中,所述方法还包括:
接收所述识别目标至所述人脸识别装置的距离,并判断是否输出所述识别目标至所述人脸识别装置的距离信息。
在一种可能的实现方式中,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述判断是否输出所述识别目标至所述人脸识别的装置的距离信息包括:
不输出所述识别目标至所述人脸识别的装置的距离信息;
当所述识别目标至所述人脸识别装置的距离处于所述第一距离范围区间外时,所述判断是否输出所述识别目标至所述人脸识别的装置的距离信息包括:
输出所述识别目标至所述人脸识别的装置的距离信息。
在一种可能的实现方式中,所述方法还包括:
显示所述虚拟图像。
第三方面,提供了一种电子设备,包括:
包括如第一方面或第一方面的任一可能的实现方式中的人脸识别装置。
在一种可能的实现方式中,所述电子设备还包括:显示屏,用于显示所述虚拟图像。
在一种可能的实现方式中,所述电子设备还包括:无线网络接入模块,用于传输人脸识别的数据至无线局域网络。
在一种可能的实现方式中,所述电子设备还包括:电机控制模块,用于根据人脸识别的结果进行控制机械装置。
第四方面,提供了一种芯片,该芯片包括输入输出接口、至少一个处理器、至少一个存储器和总线,该至少一个存储器用于存储指令,该至少一个处理器用于调用该至少一个存储器中的指令,以执行第二方面或第二方面的任一可能的实现方式中的方法。
第五方面,提供了一种计算机可读介质,用于存储计算机程序,所述计算机程序包括用于执行上述第二方面或第二方面的任一可能的实现方式中的指令。
第六方面，提供了一种包括指令的计算机程序产品，当计算机运行所述计算机程序产品的所述指令时，所述计算机执行上述第二方面或第二方面的任一可能的实现方式中的人脸识别的方法。
具体地,该计算机程序产品可以运行于上述第三方面的电子设备上。
附图说明
图1是本申请实施例所适用的电子设备的结构示意图。
图2是根据本申请实施例的一种人脸识别装置的示意性图。
图3是根据本申请实施例的另一种人脸识别装置的示意性图。
图4是根据本申请实施例的另一种人脸识别装置的示意性图。
图5是根据本申请实施例的另一种人脸识别装置的示意性图。
图6是根据本申请实施例的另一种人脸识别装置的示意性图。
图7是根据本申请实施例的一种人脸识别流程的示意性流程图。
图8是根据本申请实施例的另一种人脸识别流程的示意性流程图。
图9是根据本申请实施例的另一种人脸识别流程的示意性流程图。
图10是根据本申请实施例的另一种人脸识别流程的示意性流程图。
图11是根据本申请实施例的另一种人脸识别流程的示意性流程图。
图12是根据本申请实施例的电子设备的示意性框图。
图13是根据本申请实施例的另一电子设备的示意性框图。
具体实施方式
下面将结合附图,对本申请实施例中的技术方案进行描述。
本申请实施例可适用于光学人脸识别装置,包括但不限于基于光学人脸成像的产品。该光学人脸识别装置可以应用于具有图像采集装置(如摄像头)的各种电子设备,该电子设备可以为手机,平板电脑,智能可穿戴装置、智能门锁等,本公开的实施例对此不做限定。
应理解,本文中的具体的例子只是为了帮助本领域技术人员更好地理解本申请实施例,而非限制本申请实施例的范围。
还应理解,本申请实施例中的公式只是一种示例,而非限制本申请实施例的范围,各公式可以进行变形,这些变形也应属于本申请保护的范围。
还应理解,在本申请的各种实施例中,各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
还应理解,本说明书中描述的各种实施方式,既可以单独实施,也可以组合实施,本申请实施例对此并不限定。
除非另有说明，本申请实施例所使用的所有技术和科学术语与本申请的技术领域的技术人员通常理解的含义相同。本申请中所使用的术语只是为了描述具体的实施例的目的，不是旨在限制本申请的范围。本申请所使用的术语“和/或”包括一个或多个相关的所列项的任意的和所有的组合。
如图1所示为本申请实施例可以适用的电子设备1的结构示意图,所述电子设备1包括红外摄像头110,可见光摄像头120,显示屏130,以及处理器140。所述红外摄像头110包括红外图像传感器,该红外图像传感器用于接收红外光信号,并将接收的红外光信号转换为对应的电信号,从而生成红外图像。类似地,所述可见光摄像头120包括可见光图像传感器,用于接收环境中的可见光信号,并将接收的可见光信号转换为对应的电信号,从而生成可见光图像,当可见光图像实时动态显示在显示屏130上时,形成实时动态的彩色可见光视频。所述处理器140可以为一种微处理器(Microprocessor Unit,MPU)或者其它具有处理控制功能的装置,可以控制电子设备中各装置组件,也可以对装置组件产生的数据进行计算处理。
具体地,在2D人脸识别的过程中,识别目标处于电子设备10前方,例如,如图1所示,识别目标为人脸103时,环境中的可见光经过人脸103反射后,形成携带有人脸形态信息的反射可见光信号102,该反射可见光信号102被可见光摄像头120接收后,形成对应于人脸103的可见光彩色人脸图像。处理器140控制可见光摄像头120采集多张人脸103的可见光彩色人脸图像,并将该多张彩色人脸图像发送给显示屏130,显示屏130动态实时的显示多张彩色人脸图像以形成人脸的动态彩色视频。用户根据显示屏130上的实时显示的人脸视频来调整人脸的位置以及角度,使人脸完全处于显示屏中,并位于合适的角度和位置。
与此同时，红外光经过人脸103反射后，形成携带有人脸形态信息的反射红外光信号101，该反射红外光信号101被红外摄像头110接收后，形成对应于人脸103的红外光灰度人脸图像。处理器140控制红外摄像头110采集一张或多张人脸的灰度红外图像。然后，红外摄像头110将采集到的一张或多张人脸的灰度红外图像发送给处理器140，处理器140对该人脸红外图像进行二维（Two Dimensional，2D）人脸识别。
可选地，所述处理器140中含有存储单元，该存储单元中存储有用户人脸的红外图像模板库，其中包含有多个不同人脸角度的用户人脸红外图像模板。该多个用户人脸红外图像模板也是对红外摄像头110拍摄得到的多个人脸角度的红外图像处理得到的模板数据向量。处理器140将当前采集到的对应于人脸103的红外人脸图像经过处理后得到的数据向量与红外图像模板库中的多个红外图像的模板数据向量进行匹配，若匹配成功，则2D人脸识别成功，若匹配失败，则2D人脸识别失败。为了便于描述，在下文中，对图像模板处理得到的数据向量也简称为图像模板；匹配过程中对图像处理得到的数据向量也简称为图像。
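The vector-against-template-library matching described above can be sketched in a few lines. This is a minimal illustration only: it assumes cosine similarity as the comparison metric and 0.9 as the pass threshold, neither of which is specified by the patent.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(feature, templates, threshold=0.9):
    # 2D recognition succeeds if the captured feature vector is close
    # enough to any enrolled per-angle template vector.
    return any(cosine_similarity(feature, t) >= threshold for t in templates)

# Two enrolled template vectors (toy 3-dimensional features):
templates = [[1.0, 0.0, 0.5], [0.2, 0.9, 0.1]]
print(match_face([0.98, 0.02, 0.49], templates))  # True: near the first template
print(match_face([0.0, 0.0, 1.0], templates))     # False: far from both
```

In practice the feature vectors would be high-dimensional outputs of the recognition network rather than hand-written triples.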
例如,在一种可能的实施方式中,所述处理器通过卷积神经网络(Convolutional Neural Networks,CNN)对采集的人脸图像进行人脸分类识别,具体地,首先通过多个样本训练一个判断是否为用户人脸的人脸识别卷积神经网络,得到该卷积神经网络的相关参数,其中,该人脸识别卷积神经网络按照模板库中的多个红外图像模板分类。人脸识别时,将采集到的2D红外图像的数据输入至人脸识别卷积神经网络中,通过卷积层(convolutional layer)、激励层(activation layer),池化层(pooling layer)、以及全连接层(fully-connected layer)等计算处理,将2D红外图像的数据的特征提取后,进行分类判别,判断该2D红外图像是否与模板库中多个红外图像模板匹配,从而得到2D识别的结果。
在本申请实施例中,电子设备1通过可见光摄像头120对用户人脸进行实时拍摄,使用户能够在显示屏130上能够观察到自身的人脸图像,让用户人脸处于合适的位置,从而方便红外摄像头110拍摄红外图像并发送给处理器,使得处理器140易于进行人脸识别。在此过程中,可见光摄像头120仅充当“镜子”的作用,增加了电子设备中用于人脸识别的成本。此外,可见光摄像头120拍摄的彩色用户人脸图像容易泄露,泄露的彩色用户人脸图像同样符合用户人脸的二维特征,因此,处理器140对彩色用户人脸图像也能够二维识别成功,使电子设备存在安全问题。并且,在环境光过强或过暗时,可见光摄像头120拍摄的图像对比度小,质量差,因此,显示屏130上显示的低质量的人脸动态视频也会影响用户体验。
为解决上述问题,本申请提供一种仅采用图像采集模块进行图像采集,通过处理器对采集的图像进行人脸识别以及虚拟化处理进行显示的方案,从而节约了使用可见光摄像头的成本,且虚拟化的图像不会泄露用户信息,也不会因为环境光变化影响图像质量,可以提升安全性以及用户体验。
下面,结合图2至图6,对本申请实施例提供的人脸识别装置进行详细介绍。
图2为本申请实施例提供的一种人脸识别的装置200,包括:
图像采集模块210,用于获取识别目标的图像;
处理器220,用于对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别,其中,所述虚拟图像用于显示在显示屏上。
应理解,所述识别目标包括但不限于人脸、照片、视频、三维模型等任意物体。例如,所述识别目标可以为用户人脸、其他人的人脸、用户照片、贴有照片的曲面模型等等。
可选地,在本申请实施例中,所述图像采集模块210可以为任意采集图像的装置,例如摄像头、相机等等。可选地,所述图像采集模块为红外光图像采集模块,感应红外光信号以形成图像信号。可选地,所述图像采集模块还可以感应其它非红外波长的光信号以形成图像信号,本申请实施例对此不做限定。
例如,所述图像采集模块210可以为红外摄像头,包括滤波片211和红外光图像传感器212,所述滤波片211用于透过目标波长的光信号,滤除非目标波长的光信号,所述红外光传感器212基于所述目标波长进行光检测,并将检测到的光信号转换为电信号。可选地,所述红外光图像传感器为电荷耦合器件(Charge-coupled Device,CCD)图像传感器,或者为互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)图像传感器。可选地,该红外光图像传感器212包括多个像素单元,一个像素单元用于转换光信号形成一个识别目标图像中的一个像素值。可选地,所述像素单元可以采用光电二极管(photo diode)、金属氧化物半导体场效应管(Metal Oxide Semiconductor Field Effect Transistor,MOSFET)等器件。可选地,所述像素单元对于目标波长光具有较高的光灵敏度和较高的量子效率,以便于检测相应波长的光信号。
具体地,在本申请实施例中,所述目标波长属于红外光波段,例如,目标波长为940nm的近红外光,则滤波片211用于透过940nm的红外光信号,阻挡非940nm波长的可见光以及其他红外光通过,红外光图像传感器212对940nm的红外光进行检测并形成对应于识别目标的2D红外图像。
可选地,所述处理器220可以为所述人脸识别装置200的处理器,也可以为包括人脸识别装置200的电子设备的处理器,本申请实施例不做限定。
可选地,在一种可能的实施方式中,处理器220用于对2D红外图像进行处理形成2D虚拟图像,所述2D虚拟图像用于显示在电子设备的显示屏上。
可选地,在另一种可能的实施方式中,所述图像采集装置210还可以采集所述识别目标的三维(Three Dimensional,3D)图像,处理器220还可以用于对该3D图像进行处理形成3D虚拟图像,所述3D虚拟图像用于显示在电子设备的显示屏上。
可选地,所述处理器中220包括存储单元221,存储有虚拟图像模板库,包括多个虚拟图像模板,该虚拟图像模板可以为2D虚拟图像模板或者为3D虚拟图像模板。该多个虚拟图像模板可以包括不同角度的全部或部分人脸的虚拟图像或者还包括其它物体的虚拟图像。
所述处理器220对所述识别目标的2D或3D图像进行处理,提取出图像中的人脸信息,包括但不限于人脸五官的轮廓,大小,位置、以及人脸皮肤纹理信息等,形成特征人脸图像。该特征人脸图像可以为在图像中裁剪截取的仅包括人脸图像的区域。所述处理器220将处理得到带有人脸信息的特征人脸图像与多个虚拟图像模板进行匹配,将最佳的匹配结果显示在显示屏上,显示屏上的虚拟图像与实际采集到的2D或3D图像相对应,换言之,所述虚拟图像替换采集的图像显示在显示屏上,用户根据虚拟图像调整人脸的位置和方向,也就是用户可以根据显示屏上的图像指示来对应的移动自己的位置,使完整的人脸虚拟图像显示在显示屏中,并处于合适的角度和位置。其中,所述虚拟图像可以为2D虚拟图像或者3D虚拟图像。
可选地,所述虚拟图像大小可以与所述特征人脸图像的大小相同或相近,所述虚拟图像可以更直观的体现用户人脸与识别装置的距离远近。
可选地,所述虚拟图像的轮廓可以与所述特征人脸图像的轮廓相同。例如,虚拟图像也表现为人脸图案,进一步的,该人脸图案上显示有五官,可以对应于特征人脸图像上的五官,提高用户体验。
可选地,所述虚拟图像在所述显示屏中的位置与所述特征人脸图像在所述二维红外图像中的位置相同,所述虚拟图像可以更直观的体现用户人脸与图像采集装置之间的位置。
例如，若图像采集模块210采集到的红外图像为半侧正面人脸，则显示屏上的虚拟人脸图像也对应只呈现半侧正面人脸。用户根据呈现的半侧人脸，移动人脸的位置，使得图像采集模块210能够采集到完整的正面人脸的红外图像，处理器220将完整的正面人脸的红外图像处理为完整的正面人脸的虚拟图像，进而呈现在显示屏上。
可选地,所述虚拟图像模板库可以存储在处理器220中的存储单元221中,还可以存储在所述人脸识别的装置200所在的电子设备中的存储器中,本申请实施例对此不做限定。
应理解，在本申请实施例中，所述图像采集模块210可以为图1中的红外摄像头110。所述处理器220可以为图1中的处理器140。所述图像采集模块210获取红外图像的过程可以和图1中红外摄像头110采集用户人脸的红外图像的过程相同。所述处理器220对红外图像进行人脸识别的过程可以和图1中处理器140进行2D人脸识别的过程相同。
还应理解,在本申请实施例中,还可以采用除卷积神经网络以外其他的深度学习网络对2D红外图像进行人脸识别匹配,本申请实施例对此不做限定。
通过本申请实施例的方案,图像采集模块210对识别目标进行图像采集,处理器220对采集的图像进行虚拟化处理后显示给用户,并且针对该图像进行人脸识别,能够降低额外增加摄像头用于图像显示的成本,且采用虚拟图像显示,能够避免泄露用户的人脸图像隐私,提升用户体验。
可选地,如图3所示,在一种可能的实施方式中,所述人脸识别的装置200还可以包括红外发光模块230,用于发射红外光201到识别目标表面,经过识别目标反射后,反射的红外光202被图像采集模块210接收,形成所述识别目标的2D红外图像。在人脸识别装置200中增加红外发光模块230,可以增加红外光的光强,使经过识别目标反射后的红外光信号光强变大,提高所述识别目标的2D红外图像的质量。
可选地，所述红外发光模块230可以为任意发射红外光信号的发光装置，包括但不限于红外光发光二极管（Light Emitting Diode，LED）、垂直腔面发射激光器（Vertical Cavity Surface Emitting Laser，VCSEL）、法布里-泊罗（Fabry Perot，FP）激光器（Laser diode，LD）、分布式反馈（Distribute Feedback，DFB）激光器以及电吸收调制激光器（Electro-absorption Modulated Laser，EML），本申请实施例对此不做限定。
可选地，所述处理器220可以控制所述红外发光模块230向识别目标发射红外光信号。在所述红外发光模块230发射红外光信号的同时，所述图像采集模块210进行2D红外图像的采集。
可选地,在本申请实施例中,所述图像采集模块210获取识别目标的2D红外图像后,将该2D红外图像发送给处理器220,所述处理器220对所述2D红外图像处理得到虚拟2D图像用于显示给用户,并且根据所述2D红外图像进行2D人脸识别。
具体地,所述处理器220中的存储单元221存储有2D红外图像模板库,包括多个2D红外图像模板,该多个2D红外图像模板可以为多个不同角度的用户人脸的2D红外图像。处理器220将该2D红外图像与2D红外图像模板库中的多个2D红外图像模板进行匹配,若匹配成功,则2D人脸识别成功,若匹配失败,则2D人脸识别失败。
应理解,所述2D红外图像模板库可以存储在处理器220中的存储单元221中,还可以存储在所述人脸识别的装置200所在的电子设备中的存储器中,本申请实施例对此不做限定。
可选地,如图4所示,在另一种可能的实施方式中,所述人脸识别的装置200可以还包括结构光投射模块240,用于向识别目标投射结构光301。图像采集模块210具体用于接收所述结构光经所述识别目标反射后的反射结构光信号302,并将所述反射结构光信号302转换得到3D数据。其中,三维数据包含了识别目标的深度信息,能够表示识别目标的表面形状。且所述三维数据可以表示为深度图(Depth Image)、3D点云(Point Cloud)、几何模型等不同形式,其中,所述3D点云是在同一空间参考系下表达目标空间分布和目标表面特性的海量点集合,在获取物体表面每个采样点的空间坐标后,得到的是点的集合,称之为“点云”,其优点为获取便捷,易于存储,具有离散和稀疏特性,且方便扩展为高维的特征信息。在本申请实施例中,3D点云数据也称为3D点云图像。
具体地,所述结构光是具有特定模式的光,其具有例如点、线、面等模式图案,具体可以为具有特定模式图案的红外光信号。基于结构光获取3D点云数据的原理是:将结构光投射至目标物体,经目标物体表面反射后,捕获相应的带有结构光的图像。由于结构光的模式图案会因为目标物体的表面形状发生变形,因此通过结构光中的模式图案在捕捉得到的图像中的位置以及形变程度利用三角原理计算即可得到目标物体中各采样点的空间坐标,从 而形成表示目标物体三维空间结构的3D点云数据。
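As a minimal numeric sketch of the triangulation step above: assuming a pinhole camera model with a focal length expressed in pixels and a known projector-to-camera baseline, the depth of a sampled point is inversely proportional to the observed shift of the pattern feature. All the numbers below are illustrative assumptions, not values from the patent.

```python
def depth_from_shift(focal_px, baseline_m, disparity_px):
    # Triangulation: a pattern feature projected from a source offset by
    # `baseline_m` from the camera appears shifted by `disparity_px`
    # pixels in the captured image; depth falls as the shift grows.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 5 cm baseline,
# 100 px observed shift -> the sampled point is 0.5 m away.
print(depth_from_shift(1000.0, 0.05, 100.0))  # 0.5
```

Repeating this for every detected pattern dot yields the spatial coordinates that make up the 3D point cloud.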
可选地,在本申请实施例中,所述结构光属于红外波段的光信号,例如,所述结构光为波长为940nm的带有图案模式的光信号。可选地,图像采集模块210中的红外图像传感器接收经所述识别目标反射后的反射结构光信号处理得到3D点云数据。
可选地,所述结构光包括但不限于散斑图像,点阵光等带有结构图案的光信号。所述结构光投射模块240可以为任意投射结构光的装置结构,包括但不限于:采用VCSEL光源的点阵光投射器,散斑结构光投射器等发光装置。
可选地，在不包括结构光投射模块240的情况下，所述人脸识别的装置200还可以包括飞行时间（Time of Flight，TOF）光模块，用于对识别目标发射连续的近红外脉冲，图像采集模块210接收由目标物体反射回的光脉冲，通过比较发射光脉冲与经过物体反射的光脉冲的相位差，可以推算得到光脉冲之间的传输延迟进而得到目标物体相对于发射器的距离，最终得到识别目标的3D点云数据。
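The phase-difference-to-distance conversion used by the TOF module above can be sketched as follows; the modulation frequency and phase values are illustrative assumptions, not figures from the patent.

```python
import math

LIGHT_SPEED = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_rad, mod_freq_hz):
    # Continuous-wave TOF: the reflected pulse train lags the emitted one
    # by `phase_rad` at modulation frequency `mod_freq_hz`. The delay
    # covers the round trip (2d), hence the division by 2.
    delay_s = phase_rad / (2 * math.pi * mod_freq_hz)
    return LIGHT_SPEED * delay_s / 2

# A quarter-cycle lag at 20 MHz corresponds to roughly 1.87 m.
print(round(tof_distance(math.pi / 2, 20e6), 2))  # 1.87
```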
应理解,所述人脸识别的装置200还可以包括采用其他能够获取识别目标的3D点云数据的模块装置,本申请实施例对此不做限定。
还应理解,有规则及必要信息的3D点云数据可以反算为深度图像数据,深度图像经过坐标转换也可以计算为识别目标的3D点云数据。
可选地,在本申请实施例中,所述图像采集模块210获取识别目标的3D点云数据后,将该3D点云数据发送给处理器220,所述处理器220对所述3D点云数据进行3D人脸识别。
具体地,所述处理器220中的存储单元221中存储有3D点云数据模板库,其中包含有多个不同人脸角度的用户人脸3D点云数据。该多个用户人脸3D点云数据也是通过发射结构光或者红外脉冲至不同角度的用户人脸上,图像采集模块210通过接收反射的结构光或者红外脉冲得到的。处理器220将本次采集到的3D点云数据与3D点云数据模板库中的多个用户人脸3D点云数据进行匹配,若匹配成功,则3D人脸识别成功,若匹配失败,则3D人脸识别失败。
可选地，在本申请实施例中，也可以采用深度学习网络对3D点云数据进行人脸识别匹配。例如，在一种可能的实施方式中，所述处理器220通过点云处理网络（point net）对采集的3D点云数据进行人脸分类识别，具体地，该point net包括特征提取层，特征映射层，特征图压缩层，以及全连接层等，网络中包括多个训练参数。首先通过多个样本对point net进行训练，得到优化的训练参数，使point net对人脸识别的结果更准确，多个样本中包括多个用户人脸的3D点云数据，也包括有其它非用户人脸的3D点云数据，例如，其它人脸的3D点云数据或者三维模型，二维照片等物体的点云数据。然后，在3D人脸识别过程中，将采集到的识别目标的3D点云数据输入至该点云处理网络中，经过网络中各层的特征提取，处理以及分类，判断该3D点云数据是否与3D点云数据模板库中多个用户人脸3D点云数据匹配。
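The core mechanism of a point-cloud network of this kind, a shared per-point transform followed by order-invariant pooling, can be illustrated with a toy example. Here a single linear layer stands in for the feature-extraction layers, and the weights are arbitrary placeholders rather than trained parameters.

```python
def pointnet_global_feature(points, weights):
    # Apply the same small transform (one linear layer here, standing in
    # for the shared feature-extraction layers) to every (x, y, z) point,
    # then max-pool across points so the global descriptor does not
    # depend on the order of the points in the cloud.
    per_point = [[x * w[0] + y * w[1] + z * w[2] for w in weights]
                 for x, y, z in points]
    return [max(feats[k] for feats in per_point) for k in range(len(weights))]

cloud = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
w = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]  # arbitrary, untrained weights
print(pointnet_global_feature(cloud, w))        # [1.0, 1.0]
print(pointnet_global_feature(cloud[::-1], w))  # same result: order-invariant
```

A real network would feed this pooled descriptor into fully connected layers to decide whether the cloud matches an enrolled user face.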
应理解,所述3D点云数据模板库可以存储在处理器220中的存储单元221中,还可以存储在所述人脸识别的装置200所在的电子设备中的存储器中,本申请实施例对此不做限定。
由于2D人脸识别仅仅依据2D红外图像上的二维特征进行人脸识别,无法识别采集的2D红外图像是否来源自活人人脸或者其他照片、视频等其他非活人人脸物体,换言之,2D人脸识别不具有防伪功能,可以通过盗取带有用户人脸的照片、视频等信息,通过2D人脸识别,因此仅仅依靠2D人脸识别对电子设备及其应用程序进行解锁,人脸识别装置及电子设备的安全性能受到了极大的影响。
在本申请实施例中,基于3D点云数据进行人脸识别的判断,由于3D点云数据中携带人脸3D信息,可以用于判断识别目标是否为活体人脸,进而可以防止通过用户照片或者照片覆盖于三维曲面的假3D模型对电子设备解锁。进一步,提高3D人脸识别中3D点云数据的精度,可以用于判断识别目标是否为用户的活体人脸,进一步增强人脸识别装置及电子设备的安全性能,使识别结果更为可靠。
优选地,如图5所示,在本申请实施例中,所述人脸识别的装置200可以同时包括红外发光模块230以及结构光投射模块240。
在本申请实施例中，所述处理器220控制所述红外发光模块230向识别目标发射红外光201，所述图像采集模块210基于反射的红外光202生成2D红外图像，然后将该2D红外图像发送给处理器220。所述处理器220基于2D红外图像进行2D人脸识别，当识别失败时，则人脸识别失败。当识别成功时，所述处理器220控制所述结构光投射模块240向识别目标投射结构光301，所述图像采集模块210基于反射的结构光302生成3D点云数据，然后将3D点云数据发送给处理器220，所述处理器220基于所述3D点云数据进行3D人脸识别，当3D人脸识别成功时，人脸识别成功，当3D人脸识别失败时，人脸识别失败。
因此,基于上述处理器220的控制处理方法,当2D人脸识别成功时,不代表人脸识别成功,还需要进一步进行3D人脸识别,只有当2D人脸识别以及3D人脸识别均成功时,才代表人脸识别成功,能够进行下一步操作。而当2D人脸识别失败时,则直接人脸识别失败,避免进行3D识别浪费处理器的运算资源。因此,处理器220结合所述2D红外图像和所述3D点云图像进行人脸识别,可以快速识别出非用户人脸,提高识别效率,且加强安全性。
可选地,如图6所示,在一种可能的实施方式中,所述人脸识别的装置200还可以包括距离探测模块250,用于探测识别目标至所述人脸识别装置200的距离。
可选地,所述距离探测模块250可以利用电磁波、超声波等信号进行距离的检测,其原理为:距离探测模块250向识别目标发射超声波或者电磁波,通过识别目标的反射、回波接收后的时差或相位差来测量识别目标至所述人脸识别装置200的距离。其中,距离探测模块250发射的超声波为一种频率高于20000赫兹的声波,它的方向性好,穿透能力强,易于获得较集中的声能。距离探测模块250发射的电磁波为光脉冲或者用高频电流调制的光波或者微波,其响应迅速,且测量精度高。
因此,在本申请实施例中,所述距离探测模块250可以为超声波探测器,电磁波探测器等探测距离的装置。
可选地,如图6所示,所述人脸识别的装置200还可以包括输出模块260,用于向用户提示距离信息。可选地,所述输出模块260包括但不限于显示模块、声音模块、光模块、振动模块等可以将距离信息直接或间接的呈献给用户的功能模块。
可选地，在一种可能的实施方式中，所述处理器220用于：接收所述识别目标至所述人脸识别装置200的距离，并将该距离信息发送给所述输出模块260，所述输出模块260将距离信息实时呈现给用户。例如，输出模块260可以为显示屏，将距离信息显示为数值呈现在屏幕上，并给出是否处于合适距离的指示。又例如，输出模块260为声音模块，当距离在合适的范围，例如确定的第一距离范围区间之内，发出第一提示音，当距离在第一距离范围区间之外，发出第二提示音。此时，用户根据屏幕上的指示或者不同的提示音，移动人脸，使其处于合适的位置。
优选地,在另一种可能的实施方式中,所述处理器220用于:接收所述识别目标至所述人脸识别装置200的距离,并控制所述输出模块260是否输出所述识别目标至所述人脸识别装置的距离信息。具体地,所述处理器220判断距离是否在合适的范围之内,例如确定的第一距离范围之内,当距离在第一距离范围内,则控制所述输出模块不动作,例如,不发出提示音等。当距离在第一距离范围之外时,则控制所述输出模块260动作,例如发出提示音等。
进一步地,所述处理器220还根据不同的距离信息,控制所述输出模块260进行不同的动作。例如:当距离数据处于第一距离范围区间时,所述处理器220控制输出模块260不动作;当距离数据小于第一距离范围区间时,所述处理器220控制输出模块260发出第一提示音,当距离数据大于第一距离范围区间时,所述处理器220控制输出模块260发出第二提示音。此时,用户根据不同的提示音移动人脸,直至无提示音出现,则说明此时人脸处于合适的位置。
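The prompt-tone control logic above might be sketched as follows. The 0.25-0.60 m interval is an assumed example of the first distance range, which the patent leaves unspecified.

```python
def distance_prompt(distance_m, low=0.25, high=0.60):
    # Decide which cue the output module should emit for a measured
    # distance. The 0.25-0.60 m "first distance range" is an assumed
    # example value, not one specified by the patent.
    if distance_m < low:
        return "tone_1"   # too close: first prompt tone
    if distance_m > high:
        return "tone_2"   # too far: second prompt tone
    return None           # in range: stay silent; structured light may fire

print(distance_prompt(0.10))  # tone_1
print(distance_prompt(1.00))  # tone_2
print(distance_prompt(0.40))  # None
```

The user moves until no tone sounds, which is exactly the "silence means correctly positioned" behavior described above.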
可选地，在处理器220接收并判断所述识别目标至所述人脸识别装置的距离在合适的距离范围，例如在第一距离范围区间时，所述处理器220控制所述结构光投射模块240向识别目标投射结构光，或者控制TOF光模块向所述识别目标发射连续的近红外脉冲，从而使红外图像采集模块210采集到3D点云数据图像。
应理解,所述输出模块260还可以用于输出除距离信息以外的其它信息。例如,在用户进行模板录入、人脸识别时,发出录入成功或失败,识别成功或失败的提示音。
在本申请实施例中,通过距离探测模块250进行距离探测,以及输出模块260输出距离信息,使用户人脸更加方便的移动于人脸识别的合适区域。且在移动至合适的距离后向用户人脸发送结构光,可以防止结构光伤害人眼,提高用户体验。
可选地,如图6所示,所述人脸识别装置200还可以包括显示屏270, 所述显示屏270用于显示虚拟图像。用户根据显示屏270上虚拟图像的动态显示调整人脸位置。
可选地,所述显示屏270还包括触摸层271,该触摸层271可以感应手指在显示屏270上的触摸位置并转换为电信号给处理器220。用户可以通过触摸层271控制处理器220执行相关动作。
上文结合图2至图6,详细描述了本申请的人脸识别装置实施例,下文结合图7至图11,详细描述本申请的人脸识别方法实施例,应理解,方法实施例与装置实施例相互对应,类似的描述可以参照装置实施例。
图7是根据本申请实施例的人脸识别方法20的示意性流程图,包括:
S240:获取识别目标的图像;
S250:对所述识别目标的图像进行处理形成虚拟图像,其中,所述虚拟图像用于显示在显示屏上;
S260:根据所述识别目标的图像进行人脸识别。
可选地,本申请实施例中的人脸识别方法20可以应用在上述人脸识别装置200中,其中,上述处理器220控制红外图像采集模块210获取识别目标的图像,图像采集模块210将所述图像发送给处理器220后,所述处理器220对所述图像进行处理形成虚拟图像,并且根据所述图像进行人脸识别。
可选地,如图8所示,一种人脸识别方法20还包括:
S230:发射红外光至所述识别目标;
具体地,所述处理器220控制所述红外发光模块230发射红外光至所述识别目标。
可选地,步骤S240包括:
S241:获取识别目标的2D红外图像。
步骤S250包括:
S251:对所述2D红外图像进行处理形成2D虚拟图像,其中,所述2D虚拟图像用于显示在显示屏上。
步骤S260包括:
S261:根据所述2D红外图像进行2D人脸识别。
可选地，经过S220的人脸检测后，所述识别目标包括人脸，所述图像采集模块210获取人脸的2D红外图像后，将该2D红外图像发送给处理器220，所述处理器220对所述2D红外图像进行处理形成虚拟2D图像，且进行2D人脸识别。
可选地,处理器220将该2D红外图像与2D红外图像模板库中的多个2D红外图像模板进行匹配,若匹配成功,则S271:人脸识别成功,若匹配失败,则S272:人脸识别失败。
可选地,如图9所示,另一种人脸识别方法20还包括:
当检测到人脸时,进入步骤S232:发射结构光至所述识别目标。
可选地，所述处理器220控制所述结构光投射模块240发射结构光至所述识别目标。
步骤S240包括:
S242:获取识别目标的3D点云数据。具体地,图像采集模块210根据经过识别目标表面反射的结构光信号获取识别目标的3D点云数据。
步骤S250包括:
S252:对所述3D点云数据进行处理形成虚拟图像,其中,所述虚拟图像用于显示在显示屏上。
步骤S260包括:
S262:根据所述3D点云数据进行3D人脸识别。
可选地，经过S220的人脸检测后，所述识别目标包括人脸，所述图像采集模块210获取人脸的3D点云数据后，将该3D点云数据发送给处理器220，所述处理器220对所述3D点云数据进行处理形成虚拟2D或3D图像，且进行3D人脸识别。
可选地,处理器220将该3D点云数据与3D点云数据模板库中的多个3D点云数据模板进行匹配,若匹配成功,则人脸识别成功,若匹配失败,则人脸识别失败。
优选地,如图10所示,另一种人脸识别方法20还包括:
S231:发射红外光至所述识别目标;
S241:获取识别目标的2D红外图像。
S251：对所述2D红外图像进行处理形成虚拟图像，其中，所述虚拟图像用于显示在显示屏上。
S261:根据2D红外图像进行2D人脸识别。
当2D人脸识别成功时,进入S232:发射结构光至所述识别目标;
当2D人脸识别失败时,进入S272:人脸识别失败,流程方法结束。
S242:获取识别目标的3D点云数据。
S262:根据3D点云数据进行3D人脸识别。
当3D人脸识别成功时,进入S271:人脸识别成功。
当3D人脸识别失败时,进入S272,人脸识别失败。
因此,基于上述流程方法,当2D人脸识别成功时,不代表人脸识别成功,还需要进一步进行3D人脸识别,只有当2D人脸识别以及3D人脸识别均成功时,才代表人脸识别成功,能够进行下一步操作。而当2D人脸识别失败时,则直接人脸识别失败,避免进行3D识别浪费处理器的运算资源。因此,采用2D人脸识别与3D人脸识别结合的方法,可以快速识别出非用户人脸,且在2D识别成功的基础上,再次进行3D识别检测,可以对人脸进行3D防伪,加强了识别流程的安全性。
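The 2D-then-3D cascade above can be summarized in a few lines; `match_2d` and `match_3d` are hypothetical stand-ins for the 2D IR template matching and 3D point-cloud matching steps.

```python
def recognize(ir_image, point_cloud, match_2d, match_3d):
    # Cascade from the flow above: run the cheap 2D IR match first and
    # reject immediately on failure, so the costlier 3D (anti-spoofing)
    # match only runs after a 2D pass. Both must succeed.
    if not match_2d(ir_image):
        return False
    return match_3d(point_cloud)

# Stand-in matchers; real ones would compare against template libraries.
print(recognize("ir", "cloud", lambda i: True, lambda c: True))   # True
print(recognize("ir", "cloud", lambda i: False, lambda c: True))  # False
```

Short-circuiting on a 2D failure is what saves the processor the 3D computation, while requiring both passes preserves the anti-spoofing guarantee.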
可选地,如图11所示,在步骤S261:根据2D红外图像进行2D人脸识别,且2D人脸识别成功后,所述人脸识别方法20还包括:
S280:检测识别目标至识别装置的距离是否在第一阈值区间内。
具体地,处理器220检测识别目标至识别装置的距离是否在第一阈值区间内,当识别目标至识别装置的距离在第一阈值区间内时,进入步骤S232:发射结构光至所述识别目标。可选地,处理器220控制结构光投射模块240向识别目标投射结构光,或者控制TOF光模块向所述识别目标发射连续的近红外脉冲。
当识别目标至识别装置的距离不在第一阈值区间内时,进入步骤S281:输出距离信息。可选地,处理器220控制输出模块260针对不同的距离输出不同的提示信息。此时,用户根据不同的提示信息移动人脸,使得人脸处于合适的位置。
如图12所示,本申请实施例还提供了一种电子设备2,该电子设备2可以包括上述申请实施例的人脸识别装置200。
所述电子设备2包括但不限于智能门锁、手机、电脑、门禁系统等等需要应用人脸识别的设备。
可选地,在上述人脸识别装置200不包括处理器220、输出模块260和显示屏270的情况下,如图13所示,所述电子设备2可以包括处理器300、输出模块400和显示屏500。
可选地，所述输出模块400可以与图6中的输出模块260相同，所述显示屏500可以与图6中的显示屏270相同。
可选地,所述显示屏500用于显示虚拟图像,方便用户观察并移动人脸位置,使其处于合适的人脸识别位置。
可选地,所述显示屏500还可以显示输出模块260的距离信息,以及其它识别成功或失败,模板录入或失败等人脸识别相关的提示信息等。
应理解,所述输出模块400还可以用于发出除与人脸识别相关动作以外的提示音或者振动模式。例如,电子设备的开机、关机提示音等等,本申请实施例对此不做限定。
还应理解,所述处理器300为电子设备2的处理器,所述显示屏500为电子设备2的显示屏,主要用于控制电子设备2中的各部分组件以及显示电子设备的主要界面,换言之,人脸识别装置200仅为电子设备2中的一个功能部件,其需要执行的动作仅为处理器300控制的一部分,其需要显示的虚拟图像仅为显示屏500显示内容中的一部分。
可选地,如图13所示,所述电子设备2还可以包括存储器600、电机控制模块700和无线网络接入模块800。
可选地,所述虚拟图像模板库和/或所述红外图像模板库和/或3D点云数据模板库还可以存储在所述存储器600中。
可选地,当电子设备为门禁系统或者智能门锁时,当人脸识别装置200人脸识别成功时,所述处理器300可以控制电机控制模块700对锁具进行开锁动作。
可选地,所述无线网络接入模块800用于接入无线局域网络(Wireless Local Area Networks,WLAN),实现处理器300以及存储器600中的数据进行网络传输。
应理解,本申请实施例的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以 是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例的人脸识别装置还可以包括存储器,存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本申请实施例还提出了一种计算机可读存储介质,该计算机可读存储介质存储一个或多个程序,该一个或多个程序包括指令,该指令当被包括多个应用程序的便携式电子设备执行时,能够使该便携式电子设备执行图7-图11所示实施例的方法。
本申请实施例还提出了一种计算机程序,该计算机程序包括指令,当该计算机程序被计算机执行时,使得计算机可以执行图7-图11所示实施例的方法。
本申请实施例还提供了一种芯片,该芯片包括输入输出接口、至少一个处理器、至少一个存储器和总线,该至少一个存储器用于存储指令,该至少一个处理器用于调用该至少一个存储器中的指令,以执行图7-图11所示实 施例的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者所述技术方案的部分可以以软件产品的形式体现出来，所述计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。

Claims (43)

  1. 一种人脸识别的装置,其特征在于,包括:
    图像采集模块,用于获取识别目标的图像;
    处理器,用于对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别,其中,所述虚拟图像用于显示在显示屏上。
  2. 根据权利要求1所述的装置,其特征在于,所述识别目标为用户人脸,所述虚拟图像用于提示用户调整人脸位置。
  3. 根据权利要求1或2所述的装置,其特征在于,所述处理器用于:对所述识别目标的图像进行人脸信息提取得到特征人脸图像,并根据所述特征人脸图像处理得到所述虚拟图像。
  4. 根据权利要求3所述的装置,其特征在于,所述处理器用于:将所述特征人脸图像与多个虚拟图像模板进行匹配得到所述虚拟图像。
  5. 根据权利要求3或4所述的装置,其特征在于,所述虚拟图像大小与所述特征人脸图像的大小相同,和/或所述虚拟图像的轮廓与所述特征人脸图像的轮廓相同,和/或所述虚拟图像在所述显示屏中的位置与所述特征人脸图像在所述二维红外图像中的位置相同。
  6. 根据权利要求1-5中任一项所述的装置,其特征在于,所述识别目标的图像包括二维红外图像,所述处理器用于:对所述二维红外图像进行处理形成二维虚拟图像,并根据所述二维红外图像进行人脸识别。
  7. 根据权利要求6所述的装置,其特征在于,所述装置还包括:红外发光模块,用于发射红外光至所述识别目标;
    其中,所述图像采集模块还用于接收所述红外光经所述识别目标反射后的反射红外光信号,并将所述反射红外光信号转换得到所述二维红外图像。
  8. 根据权利要求6或7所述的装置,其特征在于,所述处理器具体用于:将所述二维红外图像与多个红外图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,确定人脸识别失败。
  9. 根据权利要求1-5中任一项所述的装置,其特征在于,所述识别目标的图像包括三维点云图像,所述处理器用于:对所述三维点云图像进行处理形成虚拟三维图像,并根据所述三维点云图像进行人脸识别。
  10. 根据权利要求9所述的装置，其特征在于，所述处理器具体用于：将所述三维点云图像与多个三维点云图像模板进行匹配，当匹配成功时，确定人脸识别成功，或者，当匹配失败时，确定人脸识别失败。
  11. 根据权利要求1-5中任一项所述的装置,其特征在于,所述识别目标的图像包括二维红外图像,所述处理器用于:对所述二维红外图像进行处理形成虚拟二维图像,并根据所述二维红外图像和所述三维点云图像进行人脸识别。
  12. 根据权利要求11所述的装置,其特征在于,所述处理器具体用于:根据所述二维红外图像进行二维人脸识别,当二维人脸识别失败时,确定人脸识别失败;
    当二维人脸识别成功时,根据所述三维点云图像进行三维人脸识别,当三维人脸识别成功时,确定人脸识别成功,当三维人脸识别失败时,确定人脸识别失败。
  13. 根据权利要求9-12中任一项所述的装置,其特征在于,所述装置还包括:结构光投射模块,用于投射结构光至所述识别目标;
    其中,所述图像采集模块还用于接收所述结构光经所述识别目标反射后的反射结构光信号,并将所述反射结构光信号转换得到所述三维点云图像。
  14. 根据权利要求13所述的装置,其特征在于,所述结构光为点阵光或者随机散斑,所述结构光投射模块为点阵光投射器,或者散斑结构光投射器。
  15. 根据权利要求13或14所述的装置,其特征在于,所述装置还包括:距离探测模块,用于探测所述识别目标至所述人脸识别装置的距离。
  16. 根据权利要求15所述的装置,其特征在于,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述处理器具体用于控制所述结构光投射模块投射结构光至所述识别目标。
  17. 根据权利要求15或16所述的装置,其特征在于,所述装置还包括:输出模块;
    所述处理器还用于:接收所述识别目标至所述人脸识别装置的距离,并控制所述输出模块是否输出所述识别目标至所述人脸识别装置的距离信息。
  18. 根据权利要求17所述的装置,其特征在于,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述处理器具体用于控制所述输出模块不输出所述识别目标至所述人脸识别装置的距离信息;
    当所述识别目标至所述人脸识别装置的距离处于所述第一距离范围区间外时，所述处理器具体用于控制所述输出模块输出所述识别目标至所述人脸识别装置的距离信息。
  19. 根据权利要求1-18中任一项所述的装置,其特征在于,所述图像采集模块为红外摄像头,包括滤波片和红外光检测阵列。
  20. 根据权利要求1-19中任一项所述的装置,其特征在于,所述装置还包括:显示屏,用于显示所述虚拟图像。
  21. 一种人脸识别的方法,其特征在于,包括:
    获取识别目标的图像;
    对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别,其中,所述虚拟图像用于显示在显示屏上。
  22. 根据权利要求21所述的方法,其特征在于,所述识别目标为用户人脸,所述虚拟图像用于提示用户调整人脸位置。
  23. 根据权利要求21或22所述的方法,其特征在于,所述对所述识别目标的图像进行处理形成虚拟图像包括:
    对所述识别目标的图像进行人脸信息提取得到特征人脸图像,并根据所述特征人脸图像处理得到所述虚拟图像。
  24. 根据权利要求23所述的方法,其特征在于,所述根据所述特征人脸图像处理得到所述虚拟图像包括:
    将所述特征人脸图像与多个虚拟图像模板进行匹配得到所述虚拟图像。
  25. 根据权利要求23或24所述的方法,其特征在于,所述虚拟图像大小与所述特征人脸图像的大小相同,和/或所述虚拟图像的轮廓与所述特征人脸图像的轮廓相同,和/或所述虚拟图像在所述显示屏中的位置与所述特征人脸图像在所述二维红外图像中的位置相同。
  26. 根据权利要求21-25中任一项所述的方法,其特征在于,所述识别目标的图像包括二维红外图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
    对所述二维红外图像进行处理形成二维虚拟图像,并根据所述二维红外图像进行人脸识别。
  27. 根据权利要求26所述的方法,其特征在于,所述方法还包括:发射红外光至所述识别目标;
    接收所述红外光经所述识别目标反射后的反射红外光信号，并将所述反射红外光信号转换得到所述二维红外图像。
  28. 根据权利要求26或27所述的方法,其特征在于,所述根据所述二维红外图像进行人脸识别包括:将所述二维红外图像与多个红外图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,人脸识别失败。
  29. 根据权利要求21-25中任一项所述的方法,其特征在于,所述识别目标的图像包括三维点云图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
    对所述三维点云图像进行处理形成虚拟三维图像,并根据所述三维点云图像进行人脸识别。
  30. 根据权利要求29所述的方法,其特征在于,所述根据所述三维点云图像进行人脸识别包括:
    将所述三维点云图像与多个三维点云图像模板进行匹配,当匹配成功时,确定人脸识别成功,或者,当匹配失败时,确定人脸识别失败。
  31. 根据权利要求21-25中任一项所述的方法,其特征在于,所述识别目标的图像包括二维红外图像,所述对所述识别目标的图像进行处理形成虚拟图像,并根据所述识别目标的图像进行人脸识别包括:
    对所述二维红外图像进行处理形成虚拟二维图像,并根据所述二维红外图像和三维点云图像进行人脸识别。
  32. 根据权利要求31所述的方法,其特征在于,所述根据所述二维红外图像和三维点云图像进行人脸识别包括:
    根据所述二维红外图像进行二维人脸识别,当二维人脸识别失败时,确定人脸识别失败;
    当二维人脸识别成功时,根据所述三维点云图像进行三维人脸识别,当三维人脸识别成功时,确定人脸识别成功,当三维人脸识别失败时,确定人脸识别失败。
  33. 根据权利要求29-32中任一项所述的方法,其特征在于,所述方法还包括:投射结构光至所述识别目标;
    接收所述结构光经所述识别目标反射后的反射结构光信号,并将所述反射结构光信号转换得到所述三维点云图像。
  34. 根据权利要求33所述的方法，其特征在于，所述结构光为点阵光或者随机散斑。
  35. 根据权利要求33或34所述的方法,其特征在于,所述人脸识别的方法应用于人脸识别装置,所述方法还包括:
    探测所述识别目标至所述人脸识别装置的距离。
  36. 根据权利要求35所述的方法,其特征在于,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,投射所述结构光至所述识别目标。
  37. 根据权利要求35或36所述的方法,其特征在于,所述方法还包括:
    接收所述识别目标至所述人脸识别装置的距离,并判断是否输出所述识别目标至所述人脸识别装置的距离信息。
  38. 根据权利要求37所述的方法,其特征在于,当所述识别目标至所述人脸识别装置的距离处于第一距离范围区间时,所述判断是否输出所述识别目标至所述人脸识别的装置的距离信息包括:
    不输出所述识别目标至所述人脸识别的装置的距离信息;
    当所述识别目标至所述人脸识别装置的距离处于所述第一距离范围区间外时,所述判断是否输出所述识别目标至所述人脸识别的装置的距离信息包括:
    输出所述识别目标至所述人脸识别的装置的距离信息。
  39. 根据权利要求21-38中任一项所述的方法,其特征在于,所述方法还包括:
    显示所述虚拟图像。
  40. 一种电子设备,其特征在于,包括:
    如权利要求1至19所述的人脸识别的装置。
  41. 根据权利要求40所述的电子设备,其特征在于,所述电子设备还包括:显示屏,用于显示所述虚拟图像。
  42. 根据权利要求40或41所述的电子设备,其特征在于,所述电子设备还包括:无线网络接入模块,用于传输人脸识别的数据至无线局域网络。
  43. 根据权利要求40-42中任一项所述的电子设备,其特征在于,所述电子设备还包括:电机控制模块,用于根据人脸识别的结果控制机械装置。
PCT/CN2019/090425 2019-06-06 2019-06-06 人脸识别的装置、方法和电子设备 WO2020243969A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000872.9A CN110383289A (zh) 2019-06-06 2019-06-06 人脸识别的装置、方法和电子设备
PCT/CN2019/090425 WO2020243969A1 (zh) 2019-06-06 2019-06-06 人脸识别的装置、方法和电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/090425 WO2020243969A1 (zh) 2019-06-06 2019-06-06 人脸识别的装置、方法和电子设备

Publications (1)

Publication Number Publication Date
WO2020243969A1 true WO2020243969A1 (zh) 2020-12-10

Family

ID=68261542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/090425 WO2020243969A1 (zh) 2019-06-06 2019-06-06 人脸识别的装置、方法和电子设备

Country Status (2)

Country Link
CN (1) CN110383289A (zh)
WO (1) WO2020243969A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110493595A (zh) * 2019-09-30 2019-11-22 腾讯科技(深圳)有限公司 摄像头的检测方法和装置、存储介质及电子装置
CN113409056A (zh) * 2021-06-30 2021-09-17 深圳市商汤科技有限公司 支付方法、装置、本地识别设备、人脸支付系统及设备
CN113642481A (zh) * 2021-08-17 2021-11-12 百度在线网络技术(北京)有限公司 识别方法、训练方法、装置、电子设备以及存储介质
CN113961133A (zh) * 2021-10-27 2022-01-21 深圳市商汤科技有限公司 电子设备的显示控制方法及装置、电子设备和存储介质

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN110710852B (zh) * 2019-10-30 2020-11-17 广州铁路职业技术学院(广州铁路机械学校) 基于送餐机器人的送餐方法、系统、介质及智能设备
CN110956134B (zh) * 2019-11-29 2023-08-25 华人运通(上海)云计算科技有限公司 人脸识别方法、装置、设备以及计算机可读存储介质
CN113657227A (zh) * 2021-08-06 2021-11-16 姜政毫 人脸识别方法及基于深度学习算法的人脸识别系统
CN113965695B (zh) * 2021-09-07 2024-06-21 福建库克智能科技有限公司 一种图像显示的方法、系统、装置、显示单元和介质

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106295504A (zh) * 2016-07-26 2017-01-04 车广为 人脸识别基础上的增强显示方法
JP2017062757A (ja) * 2015-09-25 2017-03-30 富士電機株式会社 コントローラシステム、その支援装置
CN107368730A (zh) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 解锁验证方法和装置

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102142154B (zh) * 2011-05-10 2012-09-19 中国科学院半导体研究所 生成脸部虚拟图像的方法与装置
CN105139450B (zh) * 2015-09-11 2018-03-13 重庆邮电大学 一种基于人脸模拟的三维虚拟人物构建方法及系统
CN108573201A (zh) * 2017-03-13 2018-09-25 金德奎 一种基于人脸识别技术的用户身份识别匹配方法
KR101891597B1 (ko) * 2017-05-04 2018-10-04 (주)필링크 얼굴 인식 기반의 가상객체 합성 방법 및 이를 위한 장치
CN109173263B (zh) * 2018-08-31 2021-08-24 腾讯科技(深圳)有限公司 一种图像数据处理方法和装置

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
JP2017062757A (ja) * 2015-09-25 2017-03-30 富士電機株式会社 コントローラシステム、その支援装置
CN106295504A (zh) * 2016-07-26 2017-01-04 车广为 人脸识别基础上的增强显示方法
CN107368730A (zh) * 2017-07-31 2017-11-21 广东欧珀移动通信有限公司 解锁验证方法和装置

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN110493595A (zh) * 2019-09-30 2019-11-22 腾讯科技(深圳)有限公司 摄像头的检测方法和装置、存储介质及电子装置
CN113409056A (zh) * 2021-06-30 2021-09-17 深圳市商汤科技有限公司 支付方法、装置、本地识别设备、人脸支付系统及设备
CN113409056B (zh) * 2021-06-30 2022-11-08 深圳市商汤科技有限公司 支付方法、装置、本地识别设备、人脸支付系统及设备
CN113642481A (zh) * 2021-08-17 2021-11-12 百度在线网络技术(北京)有限公司 识别方法、训练方法、装置、电子设备以及存储介质
CN113961133A (zh) * 2021-10-27 2022-01-21 深圳市商汤科技有限公司 电子设备的显示控制方法及装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN110383289A (zh) 2019-10-25

Similar Documents

Publication Publication Date Title
WO2020243969A1 (zh) 人脸识别的装置、方法和电子设备
WO2020243968A1 (zh) 人脸识别的装置、方法和电子设备
US11869255B2 (en) Anti-counterfeiting face detection method, device and multi-lens camera
WO2020243967A1 (zh) 人脸识别的方法、装置和电子设备
CN107466411B (zh) 二维红外深度感测
US11048953B2 (en) Systems and methods for facial liveness detection
Fanello et al. Learning to be a depth camera for close-range human capture and interaction
US9432593B2 (en) Target object information acquisition method and electronic device
CN112232109B (zh) 一种活体人脸检测方法及系统
CN110462633B (zh) 一种人脸识别的方法、装置和电子设备
WO2019080580A1 (zh) 3d人脸身份认证方法与装置
US11928195B2 (en) Apparatus and method for recognizing an object in electronic device
US10949692B2 (en) 3D dynamic structure estimation using synchronized images
KR20170050465A (ko) 얼굴 인식 장치 및 방법
US20140306874A1 (en) Near-plane segmentation using pulsed light source
WO2020258120A1 (zh) 人脸识别的方法、装置和电子设备
KR20210062381A (ko) 라이브니스 검사 방법 및 장치, 생체 인증 방법 및 장치
CN210166794U (zh) 人脸识别的装置和电子设备
KR20210069404A (ko) 라이브니스 검사 방법 및 라이브니스 검사 장치
WO2021218695A1 (zh) 一种基于单目摄像头的活体检测方法、设备和可读存储介质
CN108629278B (zh) 基于深度相机实现信息安全显示的系统及方法
US11170204B2 (en) Data processing method, electronic device and computer-readable storage medium
WO2021046773A1 (zh) 人脸防伪检测方法、装置、芯片、电子设备和计算机可读介质
CN212484402U (zh) 图像传感装置和电子设备
TWI740143B (zh) 三維活體識別之人臉識別系統、三維活體識別方法及儲存媒體

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931984

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931984

Country of ref document: EP

Kind code of ref document: A1