WO2019196559A1 - Face recognition method and apparatus, mobile terminal, and storage medium - Google Patents

Face recognition method and apparatus, mobile terminal, and storage medium

Info

Publication number
WO2019196559A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
image
sensor
infrared
living body
Prior art date
Application number
PCT/CN2019/075384
Other languages
English (en)
French (fr)
Inventor
周海涛
惠方方
郭子青
谭筱
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810326645.2A external-priority patent/CN108616688A/zh
Priority claimed from CN201810327410.5A external-priority patent/CN108596061A/zh
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to US16/483,805 priority Critical patent/US11410458B2/en
Priority to EP19749165.7A priority patent/EP3576016A4/en
Publication of WO2019196559A1 publication Critical patent/WO2019196559A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements

Definitions

  • the present application relates to the field of mobile terminal technologies, and in particular, to a face recognition method and apparatus, and a mobile terminal and a storage medium.
  • biometric-based identification technology has become increasingly mature and has shown great advantages in practical applications.
  • identity verification can be performed based on face recognition, and terminal unlocking, electronic payment, and the like are performed after the verification is passed.
  • the present application proposes a face recognition method, a face recognition device, a mobile terminal, and a computer readable storage medium.
  • the embodiment of the present application provides a face recognition method, including: controlling an image sensor to perform imaging; acquiring imaging data obtained by image sensor imaging; and performing living body detection on the imaged object according to the imaging data.
  • the embodiment of the present application provides a face recognition device, including a control module, an acquisition module, and a detection module; the control module is configured to control an image sensor to perform imaging; the acquisition module is configured to acquire imaging data obtained by imaging the image sensor; and the detection module is configured to perform living body detection on the imaged object based on the imaging data.
  • the embodiment of the present application provides a mobile terminal, including an imaging sensor, a memory, a microcontroller unit (MCU), a processor, and a trusted application stored on the memory and operable in a trusted execution environment of the processor; the MCU is dedicated hardware of the trusted execution environment, is coupled to the imaging sensor and the processor, and is configured to control the imaging sensor to perform imaging and to transmit the imaging data to the processor; when the processor executes the trusted application, the face recognition method of the above embodiment is implemented.
  • the embodiment of the present application proposes a computer readable storage medium on which a computer program is stored, and when the program is executed by the processor, the face recognition method of the above embodiment is implemented.
  • FIG. 1 is a schematic flowchart diagram of a face recognition method according to an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart diagram of a method for performing living body detection according to an infrared image according to an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for performing living body detection according to an infrared image and a visible light image according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart diagram of a face recognition method according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart diagram of another face recognition method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic flowchart diagram of a method for performing living body detection according to a structured light image according to an embodiment of the present application
  • FIG. 9 is a schematic flowchart diagram of still another method for recognizing a face according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart diagram of a method for performing living body detection according to a visible light image according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • identity authentication can be performed based on face recognition, and terminal unlocking, electronic payment, etc. are performed after verification, which is more convenient and safer than traditional password verification.
  • however, traditional single-factor face recognition can only verify that facial features match, so an imitation such as a photo of the owner may also pass verification and be used for terminal unlocking, electronic payment, and the like. It can be seen that the existing identity recognition based on face recognition technology has low security and reliability.
  • the embodiment of the present application proposes a face recognition method, which performs living body detection before using the structured light depth model for identity verification, and then performs verification of the depth model of the face after the living body detection is passed. To avoid the use of imitations such as photos for authentication, improve the security and reliability of authentication.
  • FIG. 1 is a schematic flowchart diagram of a face recognition method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the method shown in FIG. 1 is applicable not only to the electronic device shown in FIG. 2, which serves merely as a schematic illustration; it can also be applied to other electronic devices having a trusted execution environment and dedicated hardware for the trusted execution environment, which is not limited in this embodiment.
  • the electronic device includes: a laser camera, a floodlight, a visible light camera, a laser, and a Microcontroller Unit (MCU).
  • the MCU includes Pulse Width Modulation (PWM), depth engine, bus interface, and random access memory RAM.
  • the electronic device further includes a processor having a trusted execution environment, the MCU is dedicated hardware of the trusted execution environment, and the trusted application executing the method shown in FIG. 1 runs in the trusted execution environment; the processor may further have a normal execution environment that is isolated from the trusted execution environment.
  • the PWM is used to modulate the floodlight to emit infrared light, and to modulate the laser light to emit structured light
  • the laser camera is used to acquire a structured light image or a visible light image of the imaged object
  • the depth engine is used to calculate, according to the structured light image, the depth data corresponding to the imaged object
  • the bus interface is used to send the depth data to the processor, and the trusted application running on the processor uses the depth data to perform the corresponding operation.
  • the bus interface includes: Mobile Industry Processor Interface (MIPI), I2C synchronous serial bus interface, Serial Peripheral Interface (SPI).
  • the face recognition method includes:
  • Step 101 Control an image sensor to perform imaging, wherein the image sensor comprises a structured light sensor.
  • the face recognition method can be executed by a trusted application, where the trusted application runs in a trusted execution environment. A trusted application can be understood as an application involving information security such as user resources and user privacy and requiring a higher security level, for example electronic payment programs, unlock programs, and the like.
  • a trusted execution environment is a secure area on the main processor of an electronic device (including a smartphone, tablet, etc.) that ensures the security, confidentiality, and integrity of the code and data loaded into the environment.
  • the Trusted Execution Environment provides an isolated execution environment that provides security features including isolated execution, integrity of trusted applications, confidentiality of trusted data, secure storage, and more.
  • the execution environment provided by the Trusted Execution Environment offers a higher level of security than common mobile operating systems such as iOS, Android, and others.
  • the trusted application runs in a trusted execution environment, and the security of the authentication is improved from the operating environment.
  • the image sensor can be controlled to perform imaging through the dedicated hardware of the trusted execution environment.
  • the dedicated hardware may be an MCU, and the image sensor may include a structured light sensor.
  • the structured light sensor may include a laser camera and a laser.
  • the MCU can modulate the laser light on the electronic device to emit structured light, and the structured light is projected onto the imaged object.
  • the structured light reaching the imaged object is reflected by it, and the laser camera captures the structured light reflected by the imaged object for imaging.
  • the body part may be selected as an imaging object.
  • the imaged object may be a whole face, a facial part (eye, nose, mouth), or a body part such as a hand.
  • Step 102 Acquire imaging data obtained by imaging the image sensor.
  • the imaging data obtained by imaging the image sensor, such as the depth data obtained by structured light sensor imaging, can be acquired through the dedicated hardware.
  • Step 103 Perform living body detection on the imaged object according to the imaging data.
  • the depth data in the imaging data can be used to perform living body detection on the imaged object. Specifically, a structured light depth model is constructed according to the depth data, and the target organ is identified from the structured light depth model by comparing it with the pre-stored structured light depth models of facial organs.
  • since the imaging object is a living body, it cannot remain perfectly stationary, and when an organ is in motion its depth data also changes. Therefore, in this embodiment, the target organ is tracked to determine whether it is in motion.
  • the depth image of the imaged object is continuously acquired, yielding consecutive multi-frame depth images. The depth data of the same organ across the consecutive frames is compared to determine whether the organ is in motion; when the depth data of the same organ changes across the consecutive frames, the organ can be determined to be in motion.
  • when the target organ is in motion, the imaging subject is not a copy such as a photo, and it can be determined that the imaged subject is a living body. When the target organ is at rest, it can be determined that the imaged object is not a living body and may be a copy such as a photograph.
  • the target organ is tracked to determine whether the target organ is in motion, thereby determining whether the imaging object is a living body, and the accuracy of the living body detection is high.
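  • the multi-frame comparison described above can be sketched as follows. This is an illustrative Python sketch, not part of the application; the depth representation (lists of depth values in millimetres) and the noise tolerance eps are assumptions of the sketch:

```python
def organ_mean_depth(depth_frame, organ_pixels):
    """Mean depth of an organ, given a depth map as a 2-D list and the
    organ's pixel coordinates (row, column)."""
    values = [depth_frame[r][c] for r, c in organ_pixels]
    return sum(values) / len(values)

def organ_in_motion(depth_frames, organ_pixels, eps=2.0):
    # The organ is considered moving when its mean depth changes between
    # consecutive frames by more than the tolerance eps (an illustrative
    # value standing in for sensor noise, not a value from the application).
    means = [organ_mean_depth(f, organ_pixels) for f in depth_frames]
    return any(abs(a - b) > eps for a, b in zip(means, means[1:]))

def is_live(depth_frames, organ_pixels):
    # A copy such as a photo is static, so organ motion implies a living body.
    return organ_in_motion(depth_frames, organ_pixels)
```

A static copy such as a photograph produces constant depth data across frames, so the sketch reports it as non-living.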
  • Step 104 If the living body detection passes, the structured light depth model constructed according to the depth data in the imaging data is matched with the preset face depth model.
  • the structured light depth model of the imaged object is matched with the preset face depth model.
  • the constructed structured light depth model can be compared with the preset face depth model; when the similarity exceeds a preset threshold, the structured light depth model can be considered to match the preset face depth model.
  • the preset face depth model here is constructed in advance from the depth data in a pre-stored structured light image, obtained by imaging the face of the owner of the electronic device with the structured light sensor, and is used for identity verification.
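  • a minimal sketch of the threshold-based matching of step 104, under the simplifying assumption that both depth models are represented as aligned lists of depth samples; the application does not specify the comparison function or the threshold value, so both are illustrative:

```python
def depth_model_similarity(model, reference):
    """Similarity between two structured-light depth models, simplified
    here to flat lists of aligned depth samples."""
    mean_abs_err = sum(abs(a - b) for a, b in zip(model, reference)) / len(model)
    # Map mean absolute depth error to a similarity score in (0, 1].
    return 1.0 / (1.0 + mean_abs_err)

def authenticate(model, reference, threshold=0.5):
    # Identity verification passes only when the similarity between the
    # constructed model and the preset face depth model exceeds the threshold.
    return depth_model_similarity(model, reference) > threshold
```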
  • Step 105 When the structured light depth model matches the preset face depth model, it is determined that the identity verification is passed.
  • when the structured light depth model of the imaging object matches the preset face depth model, it is determined that the identity verification is passed, and subsequent operations, such as completing electronic payment and unlocking the electronic device, can be performed.
  • when the living body detection does not pass, a message indicating that the living body detection failed may be returned; or, when the structured light depth model of the imaging object does not match the preset face depth model, information indicating that the authentication failed is returned.
  • in this embodiment, the imaging object is first subjected to living body detection according to the imaging data, and identity verification according to the structured light depth model is performed only after the living body detection passes, thereby avoiding identity verification with imitations such as photos and improving the security and reliability of using the human face for authentication.
  • the imaging data required for identity verification and living body detection is obtained through dedicated hardware in a trusted execution environment, thereby ensuring the security of the source data for identity verification and living body detection and further improving safety and reliability.
  • the image sensor that controls imaging by dedicated hardware may further include an infrared sensor including a laser camera and a floodlight.
  • the PWM can modulate the floodlight on the electronic device to emit infrared light and project it to the imaged object.
  • the infrared light reaching the imaged object is reflected by it, and the laser camera captures an infrared image corresponding to the reflected infrared light.
  • FIG. 3 is a schematic flowchart diagram of a method for performing living body detection according to an infrared image according to an embodiment of the present application.
  • the living body detection method includes:
  • Step 301 Extract an imaging contour from the infrared image.
  • the imaged contour can be extracted according to the edge pixel points in the infrared image.
  • Step 302 Determine the temperature of the imaged object of the infrared image based on the local infrared image inside the imaging contour.
  • the region inside the imaging contour can be divided into multiple parts, the temperature corresponding to each local infrared image determined, and the average of the temperatures corresponding to all local infrared images taken as the temperature of the imaged object of the infrared image.
  • the infrared image is obtained by the infrared sensor imaging infrared light, and the value of each pixel in the infrared image corresponds to the temperature of the human body, from which the temperature of the imaged object can be determined.
  • the infrared image is obtained by actively projecting infrared light onto the human body; after the light is reflected by the human body, the infrared sensor receives the reflected infrared light and forms the image.
  • the response frequency of the infrared sensor should cover both the frequency of the actively projected infrared light and the frequency of the infrared light emitted by the human body. In the infrared image, therefore, the value of each pixel reflects the combined effect of the infrared light reflected by the human body and the infrared light emitted by the human body.
  • since the intensity of the projected infrared light is known, the infrared radiation temperature corresponding to each pixel can first be determined from the correspondence between the pixel values of the infrared image and infrared radiation temperature, and a corresponding infrared radiation correction temperature can then be determined according to the intensity of the projected infrared light.
  • the infrared radiation correction temperature is used to correct the infrared radiation temperature corresponding to each pixel, and the corrected infrared radiation temperature is taken as the temperature of the imaged object.
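  • the temperature estimation and correction described above can be sketched as follows. The linear pixel-to-temperature mapping, the correction factor k, and the body temperature bounds are illustrative assumptions, since the application does not give calibration values:

```python
def pixel_to_radiation_temp(pixel_value, gain=0.1, offset=20.0):
    """Hypothetical linear mapping from an infrared pixel value to a
    radiation temperature in degrees Celsius; a real device would use a
    calibrated correspondence table instead."""
    return gain * pixel_value + offset

def corrected_object_temperature(ir_pixels, projected_intensity, k=0.05):
    """Average the per-pixel radiation temperatures inside the imaging
    contour, subtracting the contribution of the actively projected
    infrared light (correction factor k is illustrative)."""
    temps = [pixel_to_radiation_temp(v) - k * projected_intensity
             for v in ir_pixels]
    return sum(temps) / len(temps)

def in_body_temperature_range(temp_c, low=35.0, high=40.0):
    # Body temperature range used for the liveness decision.
    return low <= temp_c <= high
```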
  • Step 303 If the imaging contour matches the preset facial contour and the temperature of the imaging object of the infrared image is within the body temperature range, determine that the imaging object of the infrared image is a living body.
  • the imaging contour is matched with the preset facial contour, and the matching may be performed in segments. When the similarity of each segment exceeds its corresponding preset threshold, the imaging contour may be considered to match the preset facial contour, that is, the imaged object is the pre-stored imaged object.
  • the facial contour can be divided into an upper half and a lower half, taking the eyebrows as the boundary, and the segments compared separately. Because the upper half (including the eyebrows) is affected by eyebrow shape and hairstyle, it varies relatively more and its credibility is relatively low, whereas features in the lower half, such as the eyes, nose, and mouth, are relatively fixed; therefore the preset similarity threshold corresponding to the upper half is smaller than that of the lower half.
  • the two parts are compared respectively: when the similarity between the upper half of the imaged contour and the upper half of the pre-stored face contour exceeds its corresponding preset threshold, and the similarity between the lower halves likewise exceeds its corresponding preset threshold, the imaged contour can be considered to match the pre-stored face contour.
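  • a sketch of the segmented matching with a looser threshold for the upper half; the similarity measure (aligned contour points) and the threshold values are illustrative assumptions, not values recited in the application:

```python
def contour_similarity(contour_a, contour_b):
    """Toy similarity between two aligned contour segments given as
    lists of (x, y) points: 1 / (1 + mean point distance)."""
    dists = [((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for (ax, ay), (bx, by) in zip(contour_a, contour_b)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def contour_matches(upper, upper_ref, lower, lower_ref,
                    thr_upper=0.6, thr_lower=0.8):
    # The upper half (eyebrows, hairline) varies with hairstyle, so its
    # similarity threshold is set lower than that of the stable lower
    # half (eyes, nose, mouth). Both segments must pass.
    return (contour_similarity(upper, upper_ref) > thr_upper and
            contour_similarity(lower, lower_ref) > thr_lower)
```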
  • if the imaging contour matches the preset facial contour and the temperature of the imaging object of the infrared image is within the body temperature range, the imaging object of the infrared image may be determined to be a living body; otherwise, it can be considered not to be a living body.
  • whether the imaging object is a living body is determined both by whether the imaging contour matches the preset facial contour and by whether the temperature of the imaging object is within the body temperature range, thereby improving the accuracy of living body recognition.
  • the image sensor controlled to be turned on by the dedicated hardware may include an infrared sensor and a visible light sensor, and the infrared sensor and the visible light sensor are imaged to obtain an infrared image and a visible light image.
  • the imaging subject can be in vivo detected by an infrared image and a visible light image.
  • FIG. 4 is a schematic flowchart diagram of a method for performing living body detection according to an infrared image and a visible light image according to an embodiment of the present application.
  • the living body detection method includes:
  • Step 401 Identify a face region in the visible light image, and determine, in the infrared image, a first target region corresponding to the face region.
  • a face area is detected in the visible light image; if no face area is detected, the visible light image and the infrared image are re-acquired. If a human face is detected, a face contour is recognized in the infrared image, and the first target region corresponding to the face region in the visible light image is determined. It can be understood that the first target area here is the face area in the infrared image.
  • Step 402 Determine, according to the first target area, a second target area that includes the first target area and is larger than the first target area.
  • the range is expanded on the infrared image based on the first target area to obtain a second target area. It can be understood that the second target area includes the first target area and is larger than the first target area.
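  • the expansion of the first target area into the second target area can be sketched as follows; the scale factor and the clamping to the image bounds are illustrative assumptions, since the application does not specify how much the range is expanded:

```python
def expand_region(x, y, w, h, img_w, img_h, scale=1.5):
    """Expand the first target area (the face box found in the infrared
    image) into a larger second target area that contains it."""
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    new_x = max(0.0, cx - new_w / 2.0)
    new_y = max(0.0, cy - new_h / 2.0)
    # Clamp so the expanded box stays inside the image bounds.
    new_w = min(new_w, img_w - new_x)
    new_h = min(new_h, img_h - new_y)
    return new_x, new_y, new_w, new_h
```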
  • Step 403 A histogram is computed in the second target area, and the contrast is calculated according to the histogram.
  • a histogram of the gray-level differences between adjacent pixels is generated in the second target area of the infrared image, and the contrast C is calculated as shown in equation (1):
  • C = Σ δ(i,j)² · p_δ(i,j)  (1)
  • where δ(i,j) is the gray-level difference between adjacent pixels, and p_δ(i,j) is the pixel distribution probability of adjacent pixel pairs whose gray-level difference is δ(i,j).
  • Step 404 If the contrast is greater than a threshold, determine that the imaged object of the infrared image and the visible light image is a living body.
  • when the contrast is greater than a certain threshold, it can be determined that the imaged object of the infrared image and the visible light image is a living body; otherwise, the imaged object is an imitation.
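  • a sketch of the contrast computation of equation (1) together with the threshold decision of step 404; the choice of horizontal and vertical neighbours and the threshold value are illustrative assumptions:

```python
from collections import Counter

def contrast(gray):
    """Contrast per equation (1): C = sum(delta^2 * p(delta)), where
    delta is the gray-level difference between adjacent pixels
    (horizontal and vertical neighbours here) and p(delta) is its
    distribution probability."""
    diffs = []
    rows, cols = len(gray), len(gray[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                diffs.append(abs(gray[r][c] - gray[r][c + 1]))
            if r + 1 < rows:
                diffs.append(abs(gray[r][c] - gray[r + 1][c]))
    counts = Counter(diffs)
    total = len(diffs)
    return sum(d * d * n / total for d, n in counts.items())

def is_live_by_contrast(gray, threshold=4.0):
    # A flat reproduction (photo or screen) imaged under infrared tends
    # to show low contrast in the second target area; the threshold is
    # an illustrative value, not one given in the application.
    return contrast(gray) > threshold
```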
  • in this way, the infrared image and the visible light image are jointly used to determine whether the imaging object is a living body, improving the accuracy of living body detection.
  • when the image sensor controlled for imaging also includes an infrared sensor, the infrared sensor may first be controlled by the dedicated hardware to perform imaging; if the imaging object is determined to be a living body from the infrared image obtained by the infrared sensor, the structured light sensor is then controlled to perform imaging.
  • the floodlight is modulated by the dedicated hardware MCU to emit infrared light that is irradiated onto the imaged object.
  • the infrared light reaching the imaging object is reflected by it, and the infrared sensor receives the reflected infrared light to perform imaging.
  • the infrared image obtained by imaging the infrared sensor is obtained by the MCU, and the imaging object is subjected to the living body detection according to the infrared image.
  • for the specific detection method, refer to the method described in the above embodiment; details are not described herein again.
  • if the living body detection passes, the structured light sensor is controlled to perform imaging, so that identity verification can be performed according to the structured light depth model.
  • the infrared sensor is first controlled to perform imaging, and only after the imaging object is determined to be a living body according to the infrared image is the structured light sensor controlled to perform imaging, so that the structured light sensor does not need to be in a working state all the time, which can effectively save the power of the electronic device.
  • alternatively, the infrared sensor and the structured light sensor in the image sensor can be controlled synchronously to perform imaging, so that after the imaging object is determined to be a living body according to the infrared image, identity verification can proceed directly with the imaging data already obtained by the structured light sensor, increasing the speed of authentication.
  • the image sensor for controlling imaging includes a visible light sensor, an infrared sensor, and a structured light sensor
  • the visible light sensor and the infrared sensor may be controlled to perform imaging first; if the imaging object is determined to be a living body according to the infrared image obtained by the infrared sensor and the visible light image obtained by the visible light sensor, the structured light sensor is then controlled to perform imaging.
  • that is, the visible light sensor and the infrared sensor are first controlled to perform imaging, and after the imaged object is determined to be a living body according to the visible light image and the infrared image, the structured light sensor is controlled to perform imaging, and identity verification is performed according to the structured light depth model.
  • performing living body detection before verification improves the reliability and safety of using the face for identity verification, and since the structured light sensor need not always be in a working imaging state, the energy of the electronic device is greatly saved and its battery endurance improved.
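  • the staged sensor control described above can be sketched as the following flow; all callables are illustrative stand-ins for the MCU-driven hardware and detection routines described in the application:

```python
def face_recognition_flow(capture_infrared, capture_visible,
                          liveness_check, capture_structured_light,
                          verify_depth_model):
    """Staged control: the infrared and visible light sensors image
    first, and the structured light sensor is powered on only after
    the living body detection passes, saving power."""
    ir_image = capture_infrared()
    visible_image = capture_visible()
    if not liveness_check(ir_image, visible_image):
        return "liveness_failed"
    depth_data = capture_structured_light()   # switched on only now
    if not verify_depth_model(depth_data):
        return "authentication_failed"
    return "authenticated"
```

For example, a spoof that fails the liveness check never causes the structured light sensor to be activated.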
  • FIG. 5 is a schematic structural diagram of a face recognition apparatus according to an embodiment of the present application.
  • the device includes: a control module 501, an obtaining module 502, a detecting module 503, and a matching module 504.
  • control module 501 configured to control an image sensor to perform imaging, wherein the image sensor includes a structured light sensor;
  • the obtaining module 502 is configured to acquire imaging data obtained by imaging the image sensor
  • the detecting module 503 is configured to perform living body detection on the imaged object according to the imaging data
  • the matching module 504 is configured to: if the living body detection passes, match the structured light depth model constructed according to the depth data in the imaging data with the preset face depth model; and when the structured light depth model matches the preset face depth model, determine that the identity verification is passed.
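  • the cooperation of the four modules can be sketched as follows; the callables standing in for each module are illustrative, not details from the application:

```python
class FaceRecognitionDevice:
    """Sketch of the four-module structure (control 501, acquisition 502,
    detection 503, matching 504)."""

    def __init__(self, control, acquire, detect, match):
        self.control = control   # controls the image sensor to image
        self.acquire = acquire   # acquires the imaging data
        self.detect = detect     # living body detection
        self.match = match       # depth model matching / verification

    def authenticate(self):
        self.control()
        imaging_data = self.acquire()
        if not self.detect(imaging_data):
            return False         # living body detection failed
        return self.match(imaging_data)
```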
  • the image sensor further includes an infrared sensor
  • the imaging data includes an infrared image
  • the detecting module 503 is further configured to: extract an imaging contour from the infrared image, determine the temperature of the imaged object of the infrared image according to the local infrared image inside the imaging contour, and determine accordingly whether the imaged object of the infrared image is a living body.
  • the detecting module 503 is further configured to:
  • if the imaging contour matches the preset facial contour and the temperature of the imaging object of the infrared image is within the body temperature range, the imaging object of the infrared image is determined to be a living body.
  • the image sensor further includes an infrared sensor and a visible light sensor
  • the imaging data includes an infrared image and a visible light image
  • the detecting module 503 is further configured to:
  • if the contrast is greater than the threshold, it is determined that the imaged object of the infrared image and the visible light image is a living body.
  • control module 501 is further configured to:
  • if the imaging object is determined to be a living body according to the infrared image, control the structured light sensor to perform imaging.
  • control module 501 is further configured to:
  • the infrared sensor and the structured light sensor in the image sensor are synchronously controlled for imaging.
  • control module 501 is further configured to:
  • if the imaging object is determined to be a living body according to the infrared image and the visible light image, control the structured light sensor to perform imaging.
  • the face recognition device may have a trusted execution environment
  • the control module 501 is further configured to control the image sensor to perform imaging through dedicated hardware in the trusted execution environment
  • the obtaining module 502 may acquire the imaging data obtained by imaging the image sensor through the dedicated hardware.
  • the imaging data required for identity verification and living body detection is obtained through dedicated hardware, which ensures the security of the source data for identity verification and living body detection, further improving security and reliability.
  • the division of the modules in the face recognition device described above is for illustrative purposes only. In other embodiments, the face recognition device may be divided into different modules as needed to complete all or part of its functions.
  • the face recognition device of the embodiment of the present application controls the image sensor to perform imaging, acquires the imaging data obtained by the imaging, and performs living body detection on the imaged object according to the imaging data. If the living body detection passes, the structured light depth model constructed according to the depth data in the imaging data is matched with the preset face depth model, and when the two match, the identity verification is determined to pass.
  • in other words, living body detection is performed according to the imaging data, and verification against the preset face depth model is performed according to the structured light depth model only after the living body detection passes. It is therefore possible to avoid authentication with imitations such as photos and to improve the security and reliability of using the face for authentication.
  • imaging data of a face is acquired by an image sensor on an electronic device, and then authentication is performed based on the imaging data.
  • in the related art, the image sensor is directly called to collect images for identity verification; the verification method is single and its security low.
  • the image sensor consumes a large amount of energy when collecting images, and direct calling often affects the endurance of the electronic device.
  • therefore, the embodiment of the present application proposes a face recognition method that, by determining whether the imaging object is a pre-stored imaging object, prevents authentication with imitations such as photos and improves the security and reliability of identity verification. Since the image sensor is turned on only after the imaging object is determined to be a pre-stored imaging object, the image sensor does not need to be always on, so the power of the electronic device can be saved and its battery life improved.
  • FIG. 6 is a schematic flowchart diagram of a face recognition method according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes: a laser camera, a floodlight, a visible light camera, a laser, and a Microcontroller Unit (MCU).
  • the MCU includes Pulse Width Modulation (PWM), depth engine, bus interface, and random access memory RAM.
  • the electronic device further includes a processor having a trusted execution environment, the MCU is dedicated hardware of the trusted execution environment, and the trusted application executing the method shown in FIG. 6 runs in the trusted execution environment; the processor may further have a normal execution environment that is isolated from the trusted execution environment. It should be noted that the application executing the method shown in FIG. 6 can also run in the normal execution environment.
  • the method of FIG. 6 is applicable not only to the electronic device shown in FIG. 2, which is given only as a schematic illustration; it can also be applied to other electronic devices having a trusted execution environment and dedicated hardware for the trusted execution environment, which is not limited in this embodiment.
  • the PWM is used to modulate the floodlight to emit infrared light, and to modulate the laser light to emit structured light
  • the laser camera is used to acquire a structured light image or a visible light image of the imaged object
  • a depth engine is used according to the structured light image, Calculate the depth data corresponding to the imaged object
  • the bus interface is used to send the depth data to the processor, and the trusted application running on the processor uses the depth data to perform the corresponding operation.
  • the bus interface includes: Mobile Industry Processor Interface (MIPI), I2C synchronous serial bus interface, Serial Peripheral Interface (SPI).
  • the face recognition method includes:
  • step 110 the infrared sensor is controlled to perform imaging.
  • the face recognition method can be executed by a trusted application, wherein the trusted application runs in a trusted execution environment, and the trusted application can be understood as an application that involves information security such as user resources and user privacy and therefore requires a higher level of security, for example, electronic payment programs, unlock programs, and the like.
  • a trusted execution environment is a secure area on the main processor of an electronic device (including a smartphone, tablet, etc.). Compared with a normal execution environment, it can ensure the security, confidentiality, and integrity of the code and data loaded into the environment.
  • the Trusted Execution Environment provides an isolated execution environment that provides security features including isolated execution, integrity of trusted applications, confidentiality of trusted data, secure storage, and more.
  • the execution environment provided by the Trusted Execution Environment provides a higher level of security than common mobile operating systems such as iOS, Android, and others.
  • the trusted application runs in a trusted execution environment, so the security of authentication is improved at the level of the operating environment.
  • the electronic device may include an infrared sensor, a visible light sensor, and a structured light sensor.
  • the infrared sensor can perform infrared imaging according to the infrared light reflected by the imaging object; the visible light sensor images the visible light reflected by the imaging object to obtain a visible light image; and the structured light sensor can image the structured light reflected by the imaging object to obtain a structured light image.
  • the imaged object may be a human face, or may be other features such as a hand, an eye, a mouth, and the like.
  • the infrared sensor can be controlled to be imaged through dedicated hardware of the trusted execution environment.
  • the dedicated hardware can be an MCU.
  • the infrared sensor may include a laser camera and a floodlight.
  • the MCU can modulate the floodlights on the electronic device to emit infrared light that is projected onto the imaged object.
  • the infrared light reaches the imaged object and is reflected by it, and the laser camera captures an image from the reflected infrared light.
  • Step 120 Acquire first imaging data obtained by imaging the infrared sensor.
  • the first imaging data obtained by imaging the infrared sensor can be obtained through a dedicated hardware such as an MCU.
  • the dedicated hardware obtains the first imaging data, here an infrared image, according to the imaging result of the infrared sensor.
  • Step 130 Compare the imaged object with the pre-stored imaged object according to the first imaging data.
  • the body part may be selected as an imaging object.
  • the imaged object may be a face part, a face part (eye, nose, mouth) or a body part such as a hand.
  • the imaging contour may be extracted from the infrared image. Specifically, edge pixels of the infrared image and pixel points with similar pixel values may be extracted to obtain an imaging contour.
  • the imaged contour is then matched to the imaged contour of the pre-stored imaged object.
  • the matching may be performed in segments.
  • if the similarity of each segment exceeds its corresponding threshold, the imaging contour may be considered to match the preset imaging contour, that is, the imaged object is a pre-stored imaged object.
  • pixel points of the image edge and pixel points whose pixel value difference is smaller than a preset threshold may be extracted from the infrared image to obtain an imaging contour.
  • the face contour can be segmented at the eyebrows into an upper half and a lower half, and the two segments are compared separately. Because the upper half (including the eyebrows) is affected by eyebrow shape and hairstyle, it changes relatively much and its credibility is relatively low, while the lower half, including the eyes, nose, and mouth, is relatively fixed; therefore, the preset similarity threshold for the upper half is set smaller than that for the lower half.
  • the two parts are compared respectively: when the similarity between the upper half of the imaged contour and the upper half of the pre-stored face contour exceeds its corresponding preset threshold, and the similarity between the lower half of the imaged contour and the lower half of the pre-stored face contour exceeds its corresponding preset threshold, the imaging contour can be considered to match the pre-stored face contour, that is, the imaging object is a pre-stored human face.
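The segmented comparison above can be sketched as follows. This is a minimal illustration only: the point-set contour representation, the similarity function, and the two threshold values are assumptions for demonstration, not values fixed by the application.

```python
# Hypothetical sketch of segmented face-contour matching: the upper half
# (eyebrows and above, affected by hairstyle) uses a lower threshold than
# the more stable lower half (eyes, nose, mouth).

def contour_similarity(contour_a, contour_b):
    """Toy similarity: fraction of shared points between two contour point sets."""
    matches = len(set(contour_a) & set(contour_b))
    return matches / max(len(contour_a), len(contour_b), 1)

UPPER_THRESHOLD = 0.6  # upper half: hairstyle-dependent, lower credibility
LOWER_THRESHOLD = 0.8  # lower half: relatively fixed features

def match_face_contour(upper, lower, stored_upper, stored_lower):
    """Both segments must exceed their respective thresholds to match."""
    return (contour_similarity(upper, stored_upper) > UPPER_THRESHOLD and
            contour_similarity(lower, stored_lower) > LOWER_THRESHOLD)
```

In a real system the similarity would be computed on extracted edge curves rather than exact point sets, but the two-threshold structure is the same.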
  • Step 140 if the imaging object is a pre-stored imaging object, then control to turn on the image sensor for imaging.
  • the imaging object is a pre-stored imaging object
  • the imaging object belongs to the owner of the electronic device, and the image sensor can be controlled to be turned on.
  • the image sensor turned on here may be a visible light sensor or a structured light sensor, or may be a visible light sensor and a structured light sensor.
  • the visible light sensor includes a visible light camera, and the visible light camera can capture visible light reflected by the imaging object to obtain a visible light image.
  • the structured light sensor includes a laser light and a laser camera shared with the infrared sensor.
  • the PWM can modulate the laser to emit structured light, and the structured light illuminates the imaged object.
  • the laser camera can capture the structured light reflected by the imaged object to obtain a structured light image.
  • since the image sensor is turned on only after the imaged object is matched with the pre-stored imaged object, the image sensor does not need to be always on; the power of the electronic device can be saved and its battery life improved.
  • Step 150 Acquire second imaging data obtained by imaging the image sensor.
  • if the opened sensor is a visible light sensor, the visible light image obtained by the visible light sensor imaging can be acquired through dedicated hardware.
  • if the opened sensor is a structured light sensor, the structured light image obtained by the structured light sensor imaging can be acquired through dedicated hardware.
  • the depth engine can calculate the depth data corresponding to the imaged object according to the structured light image. Specifically, the depth engine demodulates the phase information corresponding to the deformed pixel positions in the structured light image, converts the phase information into height information, determines the depth data corresponding to the subject according to the height information, and thereby obtains a depth image based on the depth data.
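As a rough illustration of the phase-to-depth step, the sketch below uses the common small-height linear approximation of fringe-projection profilometry, h ≈ p·L·Δφ / (2π·d). The formula choice and all calibration constants (fringe pitch p, camera distance L, baseline d) are assumptions for illustration; the actual depth engine and its calibration are hardware-specific and not disclosed in the application.

```python
import math

# Hypothetical linear phase-to-height conversion for a fringe-projection
# structured light system; real depth engines use calibrated, nonlinear models.

def phase_to_height(delta_phase, fringe_pitch, camera_distance, baseline):
    """Small-height approximation: h ~ p * L * delta_phi / (2 * pi * d)."""
    return fringe_pitch * camera_distance * delta_phase / (2 * math.pi * baseline)

def depth_map(phase_image, fringe_pitch=2.0, camera_distance=500.0, baseline=50.0):
    """Convert a 2-D demodulated phase image into a depth (height) image."""
    return [[phase_to_height(p, fringe_pitch, camera_distance, baseline)
             for p in row] for row in phase_image]
```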
  • the open sensor is a visible light sensor and a structured light sensor
  • visible light images and depth images can be acquired by dedicated hardware.
  • Step 160 Perform living body detection on the imaged object according to the second imaging data.
  • the living body detection of the imaged object may be performed using the depth image, using the visible light image, or using the visible light image together with the infrared image.
  • the face recognition method of the embodiment of the present application first turns on the infrared sensor and, after determining that the imaged object matches the pre-stored imaged object, turns on the image sensor and performs living body detection; this not only saves energy but also improves the security and reliability of identity verification. Since the image sensor is turned on only after the imaged object is determined to be a pre-stored imaged object, the image sensor does not need to be always on, which saves power and improves the battery life of the electronic device.
  • the face recognition method of the embodiment may be executed by a trusted application, and the trusted application runs in a trusted execution environment.
  • the imaging data for identity verification is obtained through the dedicated hardware in the trusted execution environment, which ensures the security of the source of the authentication data and further improves the security and reliability of authentication.
  • the face recognition method may further include:
  • Step 170 If the living body detection passes, construct a structured light depth model from the depth image and match it against the preset face depth model.
  • the structured light depth model is constructed according to the depth data in the depth image and matched with the preset face depth model. Specifically, the depth model of each facial organ in the constructed structured light depth model may be compared with that of the corresponding organ in the preset face depth model; when the similarity exceeds a preset threshold, the structured light depth model is considered to match the preset face depth model.
  • the preset face depth model here is a face depth model prepared in advance for authentication: the face of the owner of the electronic device is imaged by the structured light sensor to obtain a structured light image, and the model is built from the depth data in that pre-stored structured light image.
  • Step 180 When the structured light depth model matches the preset face depth model, it is determined that the identity verification is passed.
  • when the structured light depth model matches the preset face depth model, it is determined that the authentication passes, and subsequent operations, such as completing an electronic payment or unlocking the electronic device, can be performed.
  • if the living body detection does not pass, a message indicating that the living body detection failed may be returned; or, when the structured light depth model does not match the preset face depth model, information indicating that the authentication failed is returned.
  • the face recognition method of the embodiment of the present invention performs identity verification only after the imaging object passes the living body detection, that is, after confirming that the object being verified is not a copy such as a photograph, which improves the security and reliability of identity verification.
  • FIG. 8 is a schematic flowchart diagram of a method for performing living body detection according to a structured light image according to an embodiment of the present application. As shown in Figure 8, it includes:
  • Step 310 identifying the target organ from the structured light depth model.
  • structured light depth models of a plurality of human organs can be stored in advance. After acquiring the structured light image of the imaging object through dedicated hardware, the depth data is extracted from the structured light image, the depth data constitutes the depth image, and the structured light depth model is constructed according to the depth image; this model is compared with the pre-stored structured light depth models of the organs so as to identify the target organ from the structured light depth model.
  • an eyebrow, an eye, a nose, a mouth, and the like are identified from a structured light depth model of a human face.
  • step 320 the depth image is continuously collected, and the target organ is tracked to identify whether the target organ is in motion.
  • since a living imaging object cannot remain perfectly still, and the depth data of an organ changes when the organ moves, this embodiment tracks the target organ to determine whether it is in motion.
  • the depth image of the imaging object is continuously acquired, and a continuous multi-frame depth image is acquired.
  • Depth data in the continuous multi-frame depth image of the same organ is compared to determine if the organ is in motion.
  • the depth data of the same organ in a continuous multi-frame depth image changes, it can be determined that the organ is in motion.
  • taking the mouth as an example of the target organ: it is closed in the currently acquired depth image, and several frames later it is open, so the mouth can be determined to be in motion.
  • Step 330 if the target organ is in a motion state, it is determined that the imaging object is a living body.
  • when the target organ is in motion, it indicates that the imaging subject is not a copy such as a photo, and it can be determined that the imaged subject is a living body. When the target organ is at rest, it can be determined that the imaged object is not a living body and may be a copy such as a photograph.
  • the target organ is tracked to determine whether the target organ is in motion, thereby determining whether the imaging object is a living body, and the accuracy of the living body detection is high.
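The depth-based motion check above can be sketched as follows. The per-frame average depth representation and the change threshold are illustrative assumptions; the application does not specify how depth change is quantified.

```python
# Hypothetical liveness check from consecutive depth frames: if the depth data
# of a tracked organ changes between frames (e.g. the mouth opens), the organ
# is in motion and the imaged object is treated as a living body.

DEPTH_CHANGE_THRESHOLD = 5.0  # minimum depth change to count as motion (assumed)

def organ_in_motion(depth_frames):
    """depth_frames: per-frame average depth values of one tracked organ."""
    for prev, curr in zip(depth_frames, depth_frames[1:]):
        if abs(curr - prev) > DEPTH_CHANGE_THRESHOLD:
            return True
    return False

def is_living_body(organ_depth_sequences):
    """organ name -> depth sequence; any moving organ implies a living body."""
    return any(organ_in_motion(seq) for seq in organ_depth_sequences.values())
```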
  • the image sensor that controls the opening may be a visible light sensor to detect whether the imaging object is a living body according to the visible light image.
  • FIG. 9 is a schematic flowchart of still another method for recognizing a face according to an embodiment of the present application.
  • the method includes:
  • Step 410 controlling to turn on the infrared sensor for imaging.
  • Step 420 Acquire first imaging data obtained by imaging the infrared sensor.
  • Step 430 comparing the imaged object with the pre-stored imaged object according to the first imaging data.
  • the method for determining whether the imaged object is matched with the pre-stored imaged object according to the first image data is similar to the method described in the steps 110 to 130 in the above embodiment, and thus is not described herein again.
  • Step 440 if the imaging object is a pre-stored imaging object, then control the visible light sensor to perform imaging.
  • the visible light sensor can be controlled to be turned on to image the visible light reflected by the imaging object.
  • Step 450 Acquire a visible light image obtained by imaging the visible light sensor.
  • the value of each pixel obtained by imaging the visible light sensor can be obtained through a dedicated hardware such as an MCU, thereby obtaining a visible light image.
  • Step 460 performing a living body detection on the imaged object according to the visible light image.
  • FIG. 10 is a schematic flowchart diagram of a method for performing living body detection according to a visible light image according to an embodiment of the present application. As shown in FIG. 10, the living body detection method includes:
  • Step 510 identifying a target organ in the face region from the visible light image.
  • visible light images of a plurality of facial organs may be pre-stored, and the visible light image of the imaged object is compared with the pre-stored visible light images of the facial organs; a region of the imaged object's visible light image whose pixel values are similar to those of a pre-stored organ is identified as that organ.
  • an area of the visible light image of the imaging subject that is close to the pixel value of the pre-stored visible light image of the nose is recognized as a nose.
  • step 520 the visible light image is continuously collected, and the target organ is tracked to identify whether the target organ is in motion.
  • since a living imaged object cannot remain perfectly still, and the position of an organ changes when the organ moves, this embodiment tracks the target organ to determine whether it is in motion.
  • the visible light image of the face is continuously collected, and a visible light image of the face of the continuous plurality of frames is obtained.
  • the relative position of the target organ in two or more consecutive frames of the visible light image is compared to determine whether the target organ is in motion.
  • Step 530 if the target organ is in a motion state, it is determined that the imaging object is a living body.
  • when the target organ is in motion, it can be determined that the face is a living body rather than an imitation such as a face in a photo, indicating that the face has passed the living body detection.
  • when the target organ is at rest, the face can be considered not a living body but a copy, indicating that the face has not passed the living body detection.
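The position-based motion check on consecutive visible light frames can be sketched as follows. The centroid representation of an organ and the pixel-distance threshold are illustrative assumptions only.

```python
# Hypothetical visible-light liveness check: track the centroid of a target
# organ across consecutive frames; a displacement beyond the threshold
# indicates the organ is in motion, i.e. the face is a living body.

MOVE_THRESHOLD = 3.0  # pixels of displacement counted as motion (assumed)

def organ_moved(positions):
    """positions: list of (x, y) organ centroids across consecutive frames."""
    x0, y0 = positions[0]
    for x, y in positions[1:]:
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > MOVE_THRESHOLD:
            return True
        x0, y0 = x, y
    return False
```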
  • FIG. 4 is a schematic flowchart diagram of a method for performing living body detection according to a visible light image and an infrared image according to an embodiment of the present application. As shown in FIG. 4, the method includes:
  • Step 401 identifying a face region in the visible light image, and determining, in the infrared image, a first target region corresponding to the face region.
  • face detection is performed on the visible light image, and if no face area is detected, the visible light image and the infrared image are re-acquired. If a human face is detected, a face contour is recognized in the infrared image, and a first target region corresponding to the face region in the visible light image is determined. It can be understood that the first target area here is the face area in the infrared image.
  • Step 402 Determine, according to the first target area, a second target area that includes the first target area and is larger than the first target area.
  • the range is expanded on the infrared image based on the first target area to obtain a second target area. It can be understood that the second target area includes the first target area and is larger than the first target area.
  • step 403 a histogram is counted in the second target area, and the contrast is calculated according to the histogram.
  • a histogram is computed over the second target area on the infrared image, and the contrast is calculated from it as shown in equation (1):

    C = Σ δ(i,j)² · p_δ(i,j)  (1)

  • where δ(i,j) = |i − j| is the grayscale difference between adjacent pixels, and p_δ(i,j) is the distribution probability of pixels whose grayscale difference between adjacent pixels is δ(i,j).
  • Step 404 if the contrast is greater than the threshold, determining that the imaged object of the infrared image and the visible light image is a living body.
  • the contrast is greater than a certain threshold, it can be determined that the imaged object of the infrared image and the visible light image is a living body, otherwise it is a mimic.
  • Step 470 after the living body detection passes, control to open the structured light sensor for imaging.
  • when the imaging object is detected as a living body, the structured light sensor can be controlled to turn on through dedicated hardware.
  • the laser can be modulated by dedicated hardware to emit structured light that is projected onto the imaged object.
  • the imaging object reflects the structured light, and the structured light sensor images the imaged object according to the structured light reflected by the imaging object.
  • Step 480 Acquire third imaging data obtained by imaging the structured light sensor.
  • the structured light sensor can generate a structured light image from the structured light reflected by the imaging object.
  • the depth data can be obtained from the structured light image through dedicated hardware, and then the structured light depth model of the imaged object can be constructed according to the depth data.
  • the structured light depth model is included in the third imaging data.
  • Step 490 Match the structured light depth model in the third imaging data with the preset face depth model to determine whether to pass the identity verification.
  • the identity is verified by the structured light depth model.
  • the structured light depth model is matched to a preset face depth model.
  • the structured light depth model of each part of the face in the constructed structured light depth model may be compared with the depth model of each organ in the preset face depth model, and when the similarity exceeds a preset threshold, It is considered that the structured light depth model matches the preset face depth model, and it can be determined that the authentication is passed.
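The per-organ depth model comparison above can be sketched as follows. The distance metric (mean absolute depth difference mapped into [0, 1]) and the similarity threshold are illustrative assumptions; the application specifies only that each organ's similarity must exceed a preset threshold.

```python
# Hypothetical matching of a constructed structured light depth model against
# the preset face depth model: compare each facial organ's depth data and
# require every organ's similarity to exceed a preset threshold.

SIMILARITY_THRESHOLD = 0.9  # assumed preset threshold

def organ_similarity(depths_a, depths_b):
    """Similarity from mean absolute depth difference, mapped into (0, 1]."""
    mad = sum(abs(a - b) for a, b in zip(depths_a, depths_b)) / len(depths_a)
    return 1.0 / (1.0 + mad)

def model_matches(model, preset_model):
    """Both models map organ name -> list of depth samples for that organ."""
    return all(
        organ_similarity(model[organ], preset_model[organ]) > SIMILARITY_THRESHOLD
        for organ in preset_model)
```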
  • after the identity verification passes, the electronic device performs operations that are permitted only after authentication, such as completing an electronic payment, unlocking, and the like.
  • the face recognition method of the embodiment of the present application determines from the first imaging data whether the imaged object belongs to the owner; once it is determined to belong to the owner, the visible light sensor is turned on to perform living body detection according to the visible light image. After the imaged object is determined to be a living body according to the visible light image, the structured light sensor is turned on and the identity is verified according to the structured light depth model. Since the visible light sensor and the structured light sensor do not need to be always on, the energy of the electronic device can be saved and its battery life improved. Moreover, in this embodiment, it is first determined whether the imaging object belongs to the owner, and only in that case are the living body detection and identity verification performed, which improves the security and reliability of identity verification.
  • the face recognition method of the embodiment of the present application controls the infrared sensor to perform imaging and, after determining from the infrared imaging data that the imaging object is a pre-stored imaging object, controls the image sensor to perform imaging and performs living body detection according to the imaging data obtained by the image sensor.
  • because the living body detection is performed only after the imaging object is determined to be a pre-stored imaging object, the use of imitations such as photos for verification can be avoided, improving the security and reliability of identity verification. Since the image sensor is turned on only after this determination, it does not need to be always on, which saves power and improves the battery life of the electronic device.
  • the present application also proposes an image processing apparatus.
  • the device includes: a control module 710, an obtaining module 720, a matching module 730, and a detecting module 740.
  • the control module 710 is configured to control the infrared sensor to perform imaging, and to control the image sensor to perform imaging when the imaging object matches the pre-stored imaging object.
  • the obtaining module 720 is configured to acquire first imaging data obtained by imaging the infrared sensor, and acquire second imaging data obtained by imaging the image sensor;
  • the matching module 730 is configured to compare the imaging object with the pre-stored imaging object according to the first imaging data
  • the detecting module 740 is configured to perform living body detection on the imaged object according to the second imaging data.
  • the apparatus further includes:
  • the matching module 730 is further configured to: if the living body detection passes, construct a structured light depth model from the depth image and match it with the preset face depth model;
  • a determining module is configured to determine that the authentication is passed when the structured light depth model matches the preset face depth model.
  • the detecting module 740 is further configured to:
  • if the target organ is in motion, determine that the imaged object is a living body.
  • the image sensor is a visible light sensor
  • the second imaging data includes a visible light image
  • the device may further include:
  • the control module 710 is further configured to control the structured light sensor to turn on for imaging after the living body detection passes;
  • the obtaining module 720 is further configured to acquire third imaging data obtained by imaging the structured light sensor;
  • the matching module 730 is further configured to match the structured light depth model in the third imaging data with the preset face depth model;
  • a determining module is configured to determine that the authentication is passed when the structured light depth model matches the preset face depth model.
  • the detecting module 740 is further configured to:
  • if the target organ is in motion, determine that the imaged object is a living body.
  • the first imaging data is an infrared image
  • the detecting module 740 is further configured to:
  • if the contrast is greater than the threshold, determine that the imaged object of the infrared image and the visible light image is a living body.
  • the first imaging data is an infrared image
  • the matching module 730 is further configured to:
  • if the imaging contour matches the imaging contour of the pre-stored imaging subject, determine that the imaging object belongs to the owner.
  • the image processing apparatus may have a trusted execution environment.
  • the imaging data required for authentication is obtained through dedicated hardware, which ensures the security of the source of the authentication data, further improving security and reliability.
  • each module in the above image processing apparatus is for illustrative purposes only. In other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
  • the image processing apparatus of the embodiment of the present application acquires the first imaging data obtained by imaging the infrared sensor by controlling the infrared sensor to be imaged, and compares the imaged object with the pre-stored imaging object according to the first imaging data, if the imaging object is The pre-stored imaging object controls the image sensor to be imaged, acquires second imaging data obtained by imaging the image sensor, and performs living body detection on the imaged object according to the second imaging data.
  • the infrared sensor is controlled to perform imaging, and after the imaging object is determined to be a pre-stored imaging object according to the infrared imaging data, the image sensor is controlled to perform imaging so that the living body detection is performed according to the imaging data obtained by the image sensor.
  • because the living body detection is performed only after the imaging object is determined to be a pre-stored imaging object, the use of imitations such as photos for verification is avoided, improving the security and reliability of authentication. Since the image sensor is turned on only after this determination, it does not need to be always on, which saves power and improves the battery life of the electronic device.
  • FIG. 12 is a schematic structural diagram of a mobile terminal according to an embodiment of the present disclosure.
  • the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, and the like.
  • the mobile terminal includes an imaging sensor 810, a memory 820, an MCU 830, a processor 840, and a trusted application stored on the memory 820 and operable in a trusted execution environment of the processor 840.
  • the MCU 830 is dedicated hardware of the trusted execution environment, and is connected to the imaging sensor 810 and the processor 840 for controlling the imaging sensor 810 for imaging and transmitting the imaging data to the processor 840.
  • the MCU 830 and the processor 840 communicate by using an encryption method.
  • the MCU 830 may encrypt the image by using a row-and-column pixel scrambling method. Specifically, the MCU 830 rearranges the pixel information of the original image, and the processor restores the original image through the one-to-one correspondence.
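The row-and-column scrambling can be sketched as follows. The application does not specify the rearrangement rule, so this sketch assumes a key-seeded permutation of rows and columns as one plausible instance of the idea.

```python
import random

# Hypothetical row-and-column pixel scrambling for the MCU-to-processor
# channel: a shared key seeds permutations of rows and columns, giving the
# one-to-one correspondence the processor uses to restore the original image.

def _perms(key, rows, cols):
    rng = random.Random(key)  # same key -> same permutations on both sides
    row_perm = list(range(rows)); rng.shuffle(row_perm)
    col_perm = list(range(cols)); rng.shuffle(col_perm)
    return row_perm, col_perm

def scramble(image, key):
    row_perm, col_perm = _perms(key, len(image), len(image[0]))
    return [[image[r][c] for c in col_perm] for r in row_perm]

def unscramble(image, key):
    row_perm, col_perm = _perms(key, len(image), len(image[0]))
    out = [[0] * len(image[0]) for _ in image]
    for i, r in enumerate(row_perm):
        for j, c in enumerate(col_perm):
            out[r][c] = image[i][j]  # invert the one-to-one correspondence
    return out
```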
  • the MCU 830 can also adopt a chaos-based image encryption method. Specifically, two Logistic chaotic sequences are generated and transformed to obtain two sequences, y1 and y2, which are used to substitute the pixel values of the original image. The secret key is the initial state value of the chaotic system.
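A minimal sketch of the chaos-based scheme follows. The application does not specify the map parameter, the transform from chaotic state to y1/y2, or the substitution operation, so this sketch assumes the standard logistic map x ← r·x·(1 − x), a byte mapping of the state, and XOR substitution; all three are illustrative choices, with the initial state values acting as the key as described above.

```python
# Hypothetical chaos-based pixel substitution: two logistic maps, seeded by the
# secret initial states, generate keystream sequences y1 and y2 that substitute
# (here: XOR) the pixel values of the original image.

def logistic_sequence(x0, n, r=3.99):
    """Iterate the logistic map and map each state to a byte (assumed transform)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(int(x * 256) % 256)
    return seq

def encrypt(pixels, key=(0.3, 0.7)):
    """pixels: flat list of byte values; key: the two initial state values."""
    y1 = logistic_sequence(key[0], len(pixels))
    y2 = logistic_sequence(key[1], len(pixels))
    return [p ^ a ^ b for p, a, b in zip(pixels, y1, y2)]

decrypt = encrypt  # XOR substitution is its own inverse with the same key
```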
  • the imaging sensor 810 may include: an infrared sensor, a structured light sensor, and a visible light sensor.
  • the infrared sensor comprises a laser camera and a floodlight;
  • the structured light sensor comprises: a laser light, and a laser camera shared with the infrared sensor,
  • the visible light sensor comprises: a visible light camera.
  • the MCU 830 includes a PWM, a depth engine, a bus interface, and a random access memory RAM.
  • the PWM is used to modulate the floodlight to emit infrared light, and to modulate the laser to emit structured light;
  • a laser camera for acquiring a structured light image of an imaged object
  • a depth engine for calculating depth data corresponding to the imaged object according to the structured light image
  • a bus interface for transmitting depth data to the processor 840 and performing corresponding operations using the depth data by a trusted application running on the processor 840.
  • the authentication may be performed according to the depth data.
  • for the specific process, refer to the foregoing embodiments; details are not repeated here.
  • the embodiment of the present application further provides a computer readable storage medium having a computer program stored thereon, which, when executed by a processor, implements the face recognition method described in the foregoing embodiments.
  • the terms “first” and “second” are used for descriptive purposes only, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature defined with “first” or “second” may explicitly or implicitly include at least one such feature.
  • the meaning of “a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM).
  • the computer readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the application can be implemented in hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, the steps may be implemented by any one of the following techniques known in the art, or a combination thereof: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While the embodiments of the present application have been shown and described above, it is to be understood that the above-described embodiments are illustrative and are not to be construed as limiting the scope of the present application; those of ordinary skill in the art may make variations, modifications, substitutions and alterations to the above embodiments within the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Studio Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A face recognition method and apparatus, a mobile terminal, and a storage medium, the method comprising: (101) controlling an image sensor to perform imaging; (102) acquiring imaging data obtained through imaging by the image sensor; and (103) performing living-body detection on an imaging object according to the imaging data.

Description

人脸识别方法、装置及移动终端、存储介质
优先权信息
本申请请求2018年4月12日向中国国家知识产权局提交的、专利申请号为201810327410.5和201810326645.2的专利申请的优先权和权益,并且通过参照将其全文并入此处。
技术领域
本申请涉及移动终端技术领域,尤其涉及一种人脸识别方法、装置及移动终端、存储介质。
背景技术
随着科技的发展,基于生物特征的身份识别技术日益成熟并在实际应用中展现出极大的优越性。目前,可以基于人脸识别进行身份验证,在验证通过后进行终端解锁、电子支付等。
发明内容
本申请提出一种人脸识别方法、人脸识别装置、移动终端和计算机可读存储介质。
本申请实施例提出了一种人脸识别方法,包括:控制图像传感器进行成像;获取图像传感器成像得到的成像数据;根据所述成像数据,对成像对象进行活体检测。
本申请实施例提出了一种人脸识别装置,包括控制模块、获取模块和检测模块;控制模块用于控制图像传感器进行成像;获取模块用于获取图像传感器成像得到的成像数据;检测模块用于根据所述成像数据,对成像对象进行活体检测。
本申请实施例提出了一种移动终端,包括:成像传感器、存储器、微处理芯片MCU、处理器及存储在所述存储器上并可在所述处理器的可信执行环境下运行的可信应用程序;所述MCU为所述可信执行环境的专用硬件,与所述成像传感器和所述处理器连接,用于控制所述成像传感器进行成像,并将成像数据发送至所述处理器;所述处理器执行所述可信应用程序时,实现上述实施例的人脸识别方法。
本申请实施例提出了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例的人脸识别方法。
本申请附加的方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1为本申请实施例提供的一种人脸识别方法的流程示意图;
图2为本申请实施例提供的一种电子设备的结构示意图;
图3为本申请实施例提供的一种根据红外图像进行活体检测的方法的流程示意图;
图4为本申请实施例提供的一种根据红外图像和可见光图像进行活体检测的方法的流程示意图;
图5为本申请实施例提供的一种人脸识别装置的结构示意图;
图6为本申请实施例提供的一种人脸识别方法的流程示意图;
图7为本申请实施例提供的另一种人脸识别方法的流程示意图;
图8为本申请实施例提供的一种根据结构光图像进行活体检测的方法的流程示意图;
图9为本申请实施例提供的又一种人脸识别方法的流程示意图;
图10为本申请实施例提供的一种根据可见光图像进行活体检测的方法的流程示意图。
图11为本申请实施例提供的一种人脸识别装置的结构示意图。
图12为本申请实施例提供的一种移动终端的结构示意图。
具体实施方式
下面详细描述本申请的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本申请,而不能理解为对本申请的限制。
下面参考附图描述本申请一个实施例的人脸识别方法及人脸识别装置。
目前,可以基于人脸识别进行身份验证,在验证通过后进行终端解锁、电子支付等,相比传统的密码验证更加方便、安全。但是,传统单一的人脸识别等只能保证人的特征能够被有效验证,而导致利用照片也可以进行终端解锁、电子支付等。可见,现有的基于人脸识别技术的身份验证安全性和可靠性低。
针对这一问题,本申请实施例提出一种人脸识别方法,该方法在利用结构光深度模型进行身份验证之前,先进行活体检测,在活体检测通过后,再进行人脸的深度模型的验证,以避免利用仿照物如照片进行身份验证,提高了身份验证的安全性和可靠性。
图1为本申请实施例提供的一种人脸识别方法的流程示意图。
该人脸识别方法可应用电子设备,作为一种可能的实现方式,该电子设备的结构可参见图2,图2为本申请实施例提供的一种电子设备的结构示意图。
需要说明的是,本领域技术人员可以知晓,图1对应方法不仅适用于图2所示的电子设备,图2所示电子设备仅作为一种示意性描述,图1对应方法可以用于具有普通执行环境的电子设备,还可以用于其他具有可信执行环境,以及可信执行环境专用硬件的电子设备,本实施例中对此不作限定。
如图2所示,该电子设备包括:激光摄像头、泛光灯、可见光摄像头、镭射灯以及微处理器(Microcontroller Unit,简称MCU)。其中,MCU包括脉冲宽度调制(Pulse Width Modulation,简称PWM)、深度引擎、总线接口以及随机存取存储器RAM。另外,电子设备还包括处理器,该处理器具有可信执行环境,MCU为可信执行环境专用硬件,执行图1所示方法的可信应用程序运行于该可信执行环境下;处理器还可以具有普通执行环境,该普通执行环境与可信执行环境相互隔离。
其中,PWM用于调制泛光灯以使发出红外光,以及调制镭射灯以发出结构光;激光摄像头,用于采集成像对象的结构光图像或可见光图像;深度引擎,用于根据结构光图像,计算获得成像对象对应的深度数据;总线接口,用于将深度数据发送至处理器,并由处理器上运行的可信应用程序利用深度数据执行相应的操作。其中,总线接口包括:移动产业处理器接口(Mobile Industry Processor Interface简称MIPI)、I2C同步串行总线接口、串行外设接口(Serial Peripheral Interface,简称SPI)。
如图1所示,该人脸识别方法包括:
步骤101,控制图像传感器进行成像,其中,图像传感器包括结构光传感器。
本实施例中，该人脸识别方法可由可信应用程序执行，其中，可信应用程序运行于可信执行环境中，可信应用程序可以理解为涉及用户资源、用户隐私等信息安全性的应用程序，该类应用程序需要的安全级别较高，例如电子支付程序、解锁程序等等。
可信执行环境是电子设备（包含智能手机、平板电脑等）主处理器上的一个安全区域，其可以保证加载到该环境内部的代码和数据的安全性、机密性以及完整性。可信执行环境提供一个隔离的执行环境，提供的安全特征包含：隔离执行、可信应用程序的完整性、可信数据的机密性、安全存储等。总之，可信执行环境提供的执行空间比常见的移动操作系统（如iOS、Android等）提供更高级别的安全性。
本实施例中,可信应用程序运行于可信执行环境中,从运行环境上提高了身份验证的安全性。
当可信应用程序执行时,如进行电子支付、电子设备解锁时,可通过可信执行环境的专用硬件,控制开启图像传感器进行成像。其中,专用硬件可以为MCU,图像传感器可包括结构光传感器。
本实施例中,结构光传感器可包括激光摄像头和镭射灯。MCU可以调制电子设备上的镭射灯发出结构光,结构光投射到成像对象。结构光受到成像对象的阻碍,被成像对象反射,激光摄像头捕获成像对象反射的结构光进行成像。
本实施例中,由于每个人的身体部分的特征一般是不相同的,可以选取身体部位作为成像对象,例如,成像对象可以为人脸、面部器官(眼睛、鼻子、嘴巴)或者手部等身体部位。
步骤102,获取图像传感器成像得到的成像数据。
本实施例中,可通过专用硬件,获取图像传感器成像的得到的成像数据,如结构光传感器成像得到的深度数据。
步骤103,根据成像数据,对成像对象进行活体检测。
本实施例中,可利用成像数据中的深度数据,对成像对象进行活体检测。具体而言,根据深度数据构建结构光深度模型,并从结构光深度模型中识别目标器官,具体地,将结构光深度模型与预存的脸部器官的结构光深度模型进行比对,以从结构光深度模型中识别出目标器官。
由于成像对象为活体时,成像对象不可能始终保持静止,当某器官处于运动状态时,其深度数据也会发生变化,因此本实施例中对目标器官进行跟踪,以确定目标器官是否处于运动状态。
在识别出目标器官后,继续采集成像对象的深度图像,获取连续的多帧深度图像。通过比较同一器官在连续的多帧深度图像中的深度数据,以确定该器官是否处于运动状态。当同一器官在连续的多帧深度图像中的深度数据发生了变化,可以确定该器官处于运动状态。
当目标器官处于运动状态时,说明成像对象不是仿照物,如照片等,可以确定该成像对象为活体。当目标器官处于静止状态时,可以确定该成像对象不是活体,可能为照片等仿照物。
本实施例中,通过从结构光深度模型中识别出目标器官,对目标器官进行跟踪,以确定目标器官是否处于运动状态,进而确定成像对象是否为活体,活体检测的准确率高。
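The organ-tracking logic above can be sketched in Python. This is an illustrative simplification, not the actual implementation: the tolerance and the per-frame depth readings of a tracked organ are assumed values.

```python
def organ_moving(depth_frames, tol=2.0):
    """Liveness cue: a tracked organ whose depth changes across consecutive
    depth frames is in motion; a flat reproduction stays constant."""
    baseline = depth_frames[0]
    return any(abs(d - baseline) > tol for d in depth_frames[1:])

mouth_depth = [412.0, 411.5, 405.0, 399.0]   # mm; mouth opening over 4 frames (toy data)
photo_depth = [412.0, 412.1, 411.9, 412.0]   # a photo's depth barely changes

is_live = organ_moving(mouth_depth)
is_photo_live = organ_moving(photo_depth)
```

If the tracked organ moves in the depth data, the imaging object is judged a living body; otherwise it is likely a reproduction such as a photograph.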
步骤104,若活体检测通过,将依据成像数据中的深度数据构建的结构光深度模型,与预设人脸深度模型进行匹配。
如果成像对象通过活体检测,将成像对象的结构光深度模型与预设的人脸深度模型进行匹配。
作为一种可能的实现方式,可将构建的结构光深度模型,与预设的人脸深度模型进行比对,当相似度超过预设阈值时,可以认为结构光深度模型与预设的人脸深度模型匹配。
可以理解的是,这里预设的人脸深度模型,是预先存储的利用结构光传感器对电子设备的机主的人脸进行成像得到的结构光图像,利用结构光图像中深度数据构建得到的预设的人脸深度模型,以用于身份验证。
步骤105,当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
当成像对象的结构光深度模型与预设人脸深度模型匹配时,确定通过了身份验证,可以进行后续的操作,如完成电子支付、电子设备解锁等等。
当未通过活体检测时,可返回未通过活体检测的消息,或者当成像对象的结构光深度模型与预设人脸深度模型不匹配时,返回身份验证失败的信息。
本实施例中,先根据成像数据对成像对象进行活体检测,在活体检测通过后,再根据结构光深度模型进行身份验证,从而可以避免利用仿照物如照片身份验证通过的情况,提高了用人脸进行身份验证的安全性和可靠性。
进一步而言,在前述身份验证和活体检测过程中,在可信环境下通过专用硬件获取身份验证和活体检测所需的成像数据,保证了身份验证和活体检测数据来源的安全性,进一步提高了安全性和可靠性。
上述实施例中，通过专用硬件，控制进行成像的图像传感器还可包括红外传感器，红外传感器包括激光摄像头和泛光灯。在控制红外传感器进行成像时，PWM可以调制电子设备上的泛光灯发出红外光，投射到成像对象。红外光受到成像对象的阻碍，被成像对象反射，激光摄像头捕获成像对象反射的红外光进行成像。
在进行活体检测时,可通过红外传感器成像得到的红外图像,识别红外图像的成像对象是否为活体。图3为本申请实施例提供的一种根据红外图像进行活体检测的方法的流程示意图。
如图3所示,该活体检测方法包括:
步骤301,从红外图像中提取成像轮廓。
本实施例中,可根据红外图像中的边缘像素点,提取得到成像轮廓。
步骤302,根据处于成像轮廓内部的局部红外图像,确定红外图像的成像对象的温度。
本实施例中,可将成像轮廓划分多个部分,确定每个局部红外图像对应的温度,将每个局部红外图像对应的温度相加求出平均值,将平均值作为红外图像的成像对象的温度。
作为一种可能的实现方式,红外图像是红外传感器采集人体发出的红外光成像得到的,该红外图像中各像素点的取值与人体温度相对应,据此可以确定出成像对象的温度。
作为另一种可能的实现方式，红外图像是主动向人体投射红外光后，经人体反射，由红外传感器接收人体反射的红外光成像得到的。红外传感器的响应频率应当同时覆盖主动投射的红外光频率以及人体发出的红外光频率，从而，在红外图像中，各像素点的取值是人体反射的红外光与人体发出的红外光叠加的效果。由于投射的红外光的强度是已知的，在根据红外图像各像素点取值与红外辐射温度之间的对应关系，确定出各像素点对应的红外辐射温度之后，可根据投射的红外光的强度确定对应的红外辐射修正温度。采用该红外辐射修正温度，对各像素点对应的红外辐射温度进行修正，将修正后的红外辐射温度作为成像对象的温度。
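The correction step above can be sketched in Python. This is only an illustrative model, not the patent's implementation: the linear pixel-to-temperature mapping (`scale`, `offset`), the pixel values, and the projected-light correction value are all assumed for the example.

```python
def pixel_to_temp(pixel, scale=0.18, offset=20.0):
    """Hypothetical linear mapping from an IR pixel value to a radiated temperature (deg C)."""
    return offset + scale * pixel

def corrected_face_temp(pixels, projected_correction):
    """Average the per-pixel radiated temperatures inside the face contour,
    then subtract the correction attributable to the actively projected IR light."""
    raw = [pixel_to_temp(p) for p in pixels]
    mean = sum(raw) / len(raw)
    return mean - projected_correction

face_pixels = [95, 98, 97, 96]                  # pixel values inside the contour (made up)
temp = corrected_face_temp(face_pixels, projected_correction=0.8)
is_live_temp = 35.0 <= temp <= 42.0             # check against the human body-temperature range
```

The liveness decision then simply tests whether the corrected temperature falls in the body-temperature range.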
步骤303,若成像轮廓与预设人脸轮廓匹配,且红外图像的成像对象的温度处于体温范围内,确定红外图像的成像对象为活体。
本实施例中,将成像轮廓与预设人脸轮廓进行匹配。作为一个示例,在进行成像轮廓匹配时,可分段进行匹配,当每个分段相似程度均超过每个分段的预设阈值时,可以认为该成像轮廓与预设的成像轮廓匹配,即成像对象为预存的成像对象。
在将成像轮廓与预设的人脸轮廓进行比对时，可将人脸轮廓以眉毛为分界分为上半部分和下半部分，分段进行比对。由于上半部分（包括眉毛）受到眉形、发型的影响，相对变化比较大，可信度比较低，而下半部分，如眼睛、鼻子、嘴巴等比较固定，因此上半部分对应的相似度预设阈值，相对下半部分较小。
针对两个部分分别进行比对,当成像轮廓的上半部分与预存的人脸轮廓的上半部分的相似度超过对应的预设阈值,且成像轮廓的下半部分与预存的人脸轮廓的下半部分的相似度超过对应的预设阈值时,可以认为成像轮廓与预存的人脸轮廓匹配。
若成像轮廓与预设的人脸轮廓匹配,且红外图像的成像对象的温度处于人体体温范围内,可以确定红外图像的成像对象为活体。否则,可以认为红外图像的成像对象不是活体。
本实施例中，通过成像轮廓是否与预设人脸轮廓匹配，以及成像对象的温度是否在人体体温范围内，判断成像对象是否为活体，从而提高了活体识别的准确率。
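The segmented contour match can be sketched as follows. This is a minimal illustration of the thresholding logic only: the similarity scores and the two thresholds (a looser one for the variable upper half, a stricter one for the stable lower half) are assumed values, not from the patent.

```python
def contour_match(sim_upper, sim_lower, thr_upper=0.70, thr_lower=0.85):
    """Match segment by segment: the upper half (affected by brows/hairstyle)
    gets a looser threshold than the more stable lower half (eyes, nose, mouth)."""
    return sim_upper > thr_upper and sim_lower > thr_lower

ok = contour_match(0.78, 0.91)    # both segments above their thresholds -> match
bad = contour_match(0.95, 0.80)   # lower half too dissimilar -> no match
```

Only when both segments clear their respective thresholds is the contour considered to match the stored face contour.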
上述实施例中,通过专用硬件,控制开启的图像传感器可包括红外传感器和可见光传感器,通过红外传感器和可见光传感器成像得到红外图像和可见光图像。在进行活体检测时,可通过红外图像和可见光图像,对成像对象进行活体检测。图4为本申请实施例提供的一种根据红外图像和可见光图像进行活体检测的方法的流程示意图。
如图4所示,该活体检测方法包括:
步骤401,在可见光图像中识别人脸区域,并在红外图像中,确定与人脸区域相对应的第一目标区域。
本实施例中,在可见光图像上,检测人脸区域,如果没有检测到人脸区域,重新采集可见光图像和红外图像。如果检测到人脸,则在红外图像中识别出人脸轮廓,确定与可见光图像中人脸区域对应的第一目标区域。可以理解的是,这里第一目标区域为红外图像中的人脸区域。
步骤402,根据第一目标区域,确定包含第一目标区域且大于第一目标区域的第二目标区域。
在红外图像上在第一目标区域的基础上扩大范围,得到第二目标区域。可以理解的是,第二目标区域包含第一目标区域且大于第一目标区域。
步骤403,在第二目标区域内统计直方图,并根据直方图计算对比度。
在红外图像上的第二目标区域内统计直方图,如公式(1)所示。
C = Σ_δ δ(i,j)²·p_δ(i,j)   (1)
其中，δ(i,j)=|i-j|，即相邻像素间灰度差，p_δ(i,j)为相邻像素间灰度差的像素分布概率。
步骤404,若对比度大于阈值,确定红外图像和可见光图像的成像对象为活体。
当对比度大于一定的阈值时,可以确定红外图像和可见光图像的成像对象为活体,否则为仿照物。
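Formula (1) can be computed as a sketch in Python. The gray-level matrices and the restriction to horizontally adjacent pixels are illustrative assumptions; the patent does not fix the neighborhood definition.

```python
from collections import Counter

def contrast(gray):
    """Contrast per formula (1): C = sum over delta of delta^2 * p(delta),
    where delta is the gray-level difference between adjacent pixels."""
    diffs = [abs(row[j] - row[j + 1]) for row in gray for j in range(len(row) - 1)]
    hist = Counter(diffs)                 # histogram of neighboring-pixel differences
    n = len(diffs)
    return sum(d * d * cnt / n for d, cnt in hist.items())

live_region = [[10, 60, 10], [200, 40, 180]]   # strong local variation (toy values)
photo_region = [[50, 52, 51], [51, 50, 52]]    # nearly flat region (toy values)
c_live = contrast(live_region)
c_photo = contrast(photo_region)
```

A real face region in the infrared image tends to show far higher contrast than a flat reproduction, which is what the threshold test exploits.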
本实施例中,通过红外图像和可见光图像两种图像,确定成像对象是否活体,提高了活体检测的准确率。
进一步地,在提高利用人脸进行身份验证的安全性和可靠性的情况下,能够节省电子设备能量,提高续航能力。若控制成像的图像传感器中还包括红外传感器,则通过专用硬件,控制开启红外传感器进行成像。若根据红外传感器成像得到的红外图像确定成像对象为活体,控制结构光传感器进行成像。
具体而言,通过专用硬件MCU调整泛光灯以发出红外光,红外光照射至成像对象。红外光受到成像对象的阻碍,反射红外光,红外传感器接收到成像对象反射的红外光,进行成像。
通过MCU获取红外传感器成像得到的红外图像,并根据红外图像对成像对象进行活体检测,具体的检测方法可参见上述实施例中描述的方法,在此不再赘述。
若根据红外图像确定成像对象为活体，再控制结构光传感器进行成像，以根据结构光深度模型进行身份验证。
本实施例中,先控制红外传感器进行成像,在根据红外图像确定成像对象为活体后,再控制结构光传感器进行成像,从而使得结构光传感器不需要一直处于工作状态,可以很好地节省电子设备的电量,提高电子设备的续航能力。
可以理解的是,为了提高身份验证的速度,可同步控制图像传感器中的红外传感器和结构光传感器进行成像,从而在根据红外图像确定成像对象为活体后,直接根据结构光传感器成像得到的成像数据进行身份验证,提高了身份验证的速度。
上述实施例中,若控制成像的图像传感器中包括可见光传感器、红外传感器、结构光传感器,为了节省电子设备的能量,可先控制可见光传感器和红外传感器进行成像。若根据红外传感器成像得到的红外图像和可见光传感器成像得到的可见光图像确定成像对象为活体,控制结构光传感器进行成像。
其中,根据可见光图像和红外图像,检测成像对象是否为活体的过程,可参见上述实施例中所述的方法,在此不再赘述。
本实施例中,先控制可见光传感器和红外传感器进行成像,在根据可见光图像和红外图像,确定成像对象为活体后,再控制结构光传感器进行成像,以根据结构光深度模型进行身份验证,进行身份验证之前先进行活体检测,提高了利用人脸进行身份验证的可靠性和安全性,而且结构光传感器可以不需要一直处于成像的工作状态,大大节省了电子设备的能量,提高了电子设备的续航能力。
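The power-saving sensor staging described above can be sketched as a control flow. The function names, return strings, and the callback interface are assumptions made for illustration; the point is that the structured light sensor is powered only after the cheaper liveness check passes.

```python
def authenticate(ir_live_check, depth_match, sensors_on):
    """Stage the sensors: run the IR/visible liveness check first, and power the
    structured light sensor only if the imaging object is a living body."""
    sensors_on.append("infrared")
    if not ir_live_check():
        return "liveness_failed"
    sensors_on.append("structured_light")      # powered only after liveness passes
    return "verified" if depth_match() else "auth_failed"

log = []
result = authenticate(lambda: True, lambda: True, log)

log2 = []
result2 = authenticate(lambda: False, lambda: True, log2)
```

In the failing case the structured light sensor is never switched on, which is the energy saving the embodiment claims.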
本申请实施例还提出一种人脸识别装置。图5为本申请实施例提供的一种人脸识别装置的结构示意图。
如图5所示,该装置包括:控制模块501、获取模块502、检测模块503、匹配模块504。
控制模块501,用于控制图像传感器进行成像,其中,图像传感器包括结构光传感器;
获取模块502,用于获取图像传感器成像得到的成像数据;
检测模块503,用于根据成像数据,对成像对象进行活体检测;
匹配模块504,用于若活体检测通过,将依据成像数据中的深度数据构建的结构光深度模型,与预设人脸深度模型进行匹配;当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
在本实施例一种可能的实现方式中,图像传感器还包括红外传感器,成像数据包括红外图像,检测模块503还用于:
根据成像数据中的红外图像,识别红外图像的成像对象是否为活体。
进一步地,在本实施例一种可能的实现方式中,检测模块503还用于:
从红外图像中提取成像轮廓;
根据处于成像轮廓内部的局部红外图像,确定红外图像的成像对象的温度;
若成像轮廓与预设人脸轮廓匹配,且红外图像的成像对象的温度处于体温范围内,确定红外图像的成像对象为活体。
在本实施例一种可能的实现方式中,图像传感器还包括红外传感器和可见光传感器,成像数据包括红外图像和可见光图像,检测模块503还用于:
在可见光图像中识别人脸区域,并在红外图像中,确定与人脸区域相对应的第一目标区域;
根据第一目标区域,确定包含第一目标区域且大于第一目标区域的第二目标区域;
在第二目标区域内统计直方图,并根据直方图计算对比度;
若对比度大于阈值,确定红外图像和可见光图像的成像对象为活体。
在本实施例一种可能的实现方式中,控制模块501还用于:
控制图像传感器中的红外传感器进行成像;
若根据红外传感器成像得到的红外图像确定成像对象为活体,控制结构光传感器进行成像。
在本实施例一种可能的实现方式中,控制模块501还用于:
同步控制图像传感器中的红外传感器和结构光传感器进行成像。
在本实施例一种可能的实现方式中,控制模块501还用于:
控制图像传感器中的红外传感器和可见光传感器进行成像;
若根据红外传感器成像得到的红外图像和可见光传感器成像得到的可见光图像确定成像对象为活体,控制结构光传感器进行成像。
在本实施例一种可能的实现方式中，该人脸识别装置可具有可信执行环境，控制模块501还用于通过可信执行环境中的专用硬件，控制图像传感器进行成像，获取模块502可通过专用硬件获取图像传感器成像得到的成像数据。
在前述身份验证和活体检测过程中,通过专用硬件获取身份验证和活体检测所需的成像数据,保证了身份验证和活体检测数据来源的安全性,进一步提高了安全性和可靠性。
上述人脸识别装置中各个模块的划分仅用于举例说明,在其他实施例中,可将人脸识别装置按照需要划分为不同的模块,以完成上述人脸识别装置的全部或部分功能。
需要说明的是,前述对人脸识别方法实施例的解释说明,也适用于该实施例的人脸识别装置,故在此不再赘述。
本申请实施例的人脸识别装置,通过控制图像传感器进行成像,获取图像传感器成像得到的成像数据,根据成像数据,对成像对象进行活体检测,若活体检测通过,将依据成像数据中的深度数据构建的结构光深度模型,与预设人脸深度模型进行匹配,当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。本实施例中,根据成像数据进行活体检测,在活体检测通过后再根据结构光深度模型,进行人脸的深度模型的验证,由于在活体检测通过后,再进行人脸的深度模型的验证,从而可以避免利用仿照物如照片身份验证通过的情况,提高了利用人脸进行身份验证的安全性和可靠性。
下面参考附图描述本申请另一个实施例的图像处理方法(下文统称为人脸识别方法)及图像处理装置(下文统称为人脸识别装置)。
目前，可通过电子设备上的图像传感器采集人脸的成像数据，然后基于成像数据进行身份验证。但是，目前直接调用图像传感器采集图像进行身份验证，验证方式单一，安全性较低。而且图像传感器采集图像时能耗较大，直接调用往往会影响到电子设备的续航能力。
针对这一问题，本申请实施例提出一种人脸识别方法，通过在确定成像对象为预存的成像对象后再进行活体检测，从而可以避免利用仿照物如照片验证通过的情况出现，提高了身份验证的安全性和可靠性。由于在确定成像对象与预存的成像对象匹配后，再开启图像传感器，从而使得图像传感器不需要一直处于开启状态，可以很好地节省电子设备的电量，提高电子设备的续航能力。
图6为本申请实施例提供的一种人脸识别方法的流程示意图。
该人脸识别方法应用电子设备,作为一种可能的实现方式,电子设备的结构可参见图2,图2为本申请实施例提供的一种电子设备的结构示意图。
如图2所示,该电子设备包括:激光摄像头、泛光灯、可见光摄像头、镭射灯以及微处理器(Microcontroller Unit,简称MCU)。其中,MCU包括脉冲宽度调制(Pulse Width Modulation,简称PWM)、深度引擎、总线接口以及随机存取存储器RAM。另外,电子设备还包括处理器,该处理器具有可信执行环境,MCU为可信执行环境专用硬件,执行图6所示方法的可信应用程序运行于该可信 执行环境下;处理器还可以具有普通执行环境,该普通执行环境与可信执行环境相互隔离。需要说明的是,图6所示方法的应用程序也可在普通执行环境中运行。
需要说明的是,本领域技术人员可以知晓,图6对应方法不仅适用于图2所示的电子设备,图2所示电子设备仅作为一种示意性描述,图6对应方法还可以用于其他具有可信执行环境,以及可信执行环境专用硬件的电子设备,本实施例中对此不作限定。
其中,PWM用于调制泛光灯以使发出红外光,以及调制镭射灯以发出结构光;激光摄像头,用于采集成像对象的结构光图像或可见光图像;深度引擎,用于根据结构光图像,计算获得成像对象对应的深度数据;总线接口,用于将深度数据发送至处理器,并由处理器上运行的可信应用程序利用深度数据执行相应的操作。其中,总线接口包括:移动产业处理器接口(Mobile Industry Processor Interface简称MIPI)、I2C同步串行总线接口、串行外设接口(Serial Peripheral Interface,简称SPI)。
如图6所示,该人脸识别方法包括:
步骤110,控制开启红外传感器进行成像。
本实施例中，该人脸识别方法可由可信应用程序执行，其中，可信应用程序运行于可信执行环境中，可信应用程序可以理解为涉及用户资源、用户隐私等信息安全性的应用程序，该类应用程序需要的安全级别较高，例如电子支付程序、解锁程序等等。
可信执行环境是电子设备（包含智能手机、平板电脑等）主处理器上的一个安全区域，相对普通执行环境，其可以保证加载到该环境内部的代码和数据的安全性、机密性以及完整性。可信执行环境提供一个隔离的执行环境，提供的安全特征包含：隔离执行、可信应用程序的完整性、可信数据的机密性、安全存储等。总之，可信执行环境提供的执行空间比常见的移动操作系统（如iOS、Android等）提供更高级别的安全性。
本实施例中,可信应用程序运行于可信执行环境中,从运行环境上提高了身份验证的安全性。
本实施例中,电子设备可包括红外传感器、可见光传感器、结构光传感器。其中,红外传感器可以根据成像对象反射的红外光进行红外成像;可见光传感器利用成像对象反射的可见光进行成像,得到可见光图像;结构光传感器可以根据成像对象反射的结构光成像,得到结构光图像。
其中,成像对象可以为人脸,也可以为其他具有特征的部分如手部、眼睛、嘴巴等等。
当可信应用程序执行时,如进行电子支付、电子设备解锁时,可通过可信执行环境的专用硬件,控制开启红外传感器进行成像。其中,专用硬件可以为MCU。
本实施例中，红外传感器可包括激光摄像头和泛光灯。MCU可以调制电子设备上的泛光灯发出红外光，投射到成像对象。红外光受到成像对象的阻碍，被成像对象反射，激光摄像头捕获成像对象反射的红外光进行成像。
步骤120,获取红外传感器成像得到的第一成像数据。
本实施例中,可通过专用硬件如MCU,获取红外传感器成像得到的第一成像数据。具体地,专用硬件根据红外传感器的成像结果,得到第一成像数据,这里为红外图像。
步骤130,根据第一成像数据,对成像对象与预存的成像对象进行比对。
本实施例中,由于每个人的身体部分的特征一般是不相同的,可以选取身体部位作为成像对象,例如,成像对象可以为人脸、面部器官(眼睛、鼻子、嘴巴)或者手部等身体部位。在获得成像对象的第一成像数据即红外图像后,可从红外图像中提取成像轮廓,具体而言,可以提取红外图像的边缘像素点,以及像素值相近的像素点,以得到成像轮廓。
然后,将成像轮廓与预存的成像对象的成像轮廓进行匹配。作为一个示例,在进行成像轮廓匹配 时,可分段进行匹配,当每个分段相似程度均超过每个分段的预设阈值时,可以认为该成像轮廓与预设的成像轮廓匹配,即成像对象为预存的成像对象。
具体而言,从红外图像中可以提取图像边缘的像素点,以及像素值的差值小于预设阈值的像素点,即像素值相近的像素点,以得到成像轮廓。
在将成像轮廓与预设的人脸轮廓进行比对时，可将人脸轮廓以眉毛为分界分为上半部分和下半部分，分段进行比对。由于上半部分（包括眉毛）受到眉形、发型的影响，相对变化比较大，可信度比较低，而下半部分，如眼睛、鼻子、嘴巴等比较固定，因此上半部分对应的相似度预设阈值，相对下半部分较小。
针对两个部分分别进行比对,当成像轮廓的上半部分与预存的人脸轮廓的上半部分的相似度超过对应的预设阈值,且成像轮廓的下半部分与预存的人脸轮廓的下半部分的相似度超过对应的预设阈值时,可以认为成像轮廓与预存的人脸轮廓匹配,即成像对象为预存的人脸。
步骤140,如果成像对象为预存的成像对象,则控制开启图像传感器进行成像。
当成像对象为预存的成像对象时,可以说明该成像对象属于电子设备的机主,这时可以控制开启图像传感器。这里开启的图像传感器可以为可见光传感器或者结构光传感器,也可以为可见光传感器和结构光传感器。
本实施例中,可见光传感器包括可见光摄像头,可见光摄像头可以捕获由成像对象反射的可见光进行成像,得到可见光图像。结构光传感器包括镭射灯,以及与红外传感器共用的激光摄像头。PWM可以调制镭射灯以发出结构光,结构光照射至成像对象,激光摄像头可以捕获由成像对象反射的结构光进行成像,得到结构光图像。
由于在确定成像对象与预存的成像对象匹配后,再开启图像传感器,从而使得图像传感器不需要一直处于开启状态,可以很好地节省电子设备的电量,提高电子设备的续航能力。
步骤150,获取图像传感器成像得到的第二成像数据。
本实施例中，当开启的传感器为可见光传感器时，可通过专用硬件获取可见光传感器成像得到的第二成像数据，即可见光图像。当开启的传感器为结构光传感器时，可通过专用硬件获取结构光传感器成像得到的结构光图像。深度引擎根据结构光图像，可计算获得成像对象对应的深度数据，具体而言，深度引擎解调结构光图像中变形位置像素对应的相位信息，将相位信息转化为高度信息，根据高度信息确定被摄物对应的深度数据，从而根据深度数据得到深度图像。当开启的传感器为可见光传感器和结构光传感器时，可通过专用硬件获取可见光图像和深度图像。
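The phase-to-depth step performed by the depth engine can be illustrated with a deliberately simplified fringe-projection model. All parameters here (fringe wavelength, baseline, reference-plane depth) and the conversion geometry are assumptions for the sketch; a real depth engine uses the calibrated geometry of the laser lamp and camera.

```python
import math

def phase_to_depth(phase_shift, wavelength_mm=8.0, baseline_mm=40.0,
                   plane_depth_mm=400.0):
    """Toy phase-to-height conversion: the fringe displacement implied by the
    demodulated phase shift is turned into a height offset from the reference
    plane, and the height into a depth value."""
    displacement = phase_shift / (2 * math.pi) * wavelength_mm
    height = displacement * plane_depth_mm / (baseline_mm + displacement)
    return plane_depth_mm - height

d_flat = phase_to_depth(0.0)          # no fringe deformation -> reference plane
d_bump = phase_to_depth(math.pi / 2)  # deformed fringe -> surface nearer the camera
```

Repeating this per pixel yields the depth data from which the depth image and the structured light depth model are built.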
步骤160,根据第二成像数据,对成像对象进行活体检测。
本实施例中，可利用深度图像对成像对象进行活体检测，也可以通过可见光图像进行活体检测，还可以通过可见光图像和红外图像进行活体检测。具体过程可详见后续实施例。
本申请实施例的人脸识别方法,通过先开启红外光传感器,在确定成像对象与预存的成像对象匹配后,再开启图像传感器,进行活体检测,不仅可以节省能量,而且可以提高身份验证的安全性和可靠性。由于在确定成像对象为预存的成像对象匹配后,再开启图像传感器,从而使得图像传感器不需要一直处于开启状态,可以很好地节省电子设备的电量,提高电子设备的续航能力。
本实施例的人脸识别方法,可由可信应用程序执行,可信应用程序运行于可信执行环境中,在前述身份验证的过程中,在可信环境下通过专用硬件获取身份验证的成像数据,保证了身份验证数据来源的安全性,进一步提高了身份验证的安全性和可靠性。
进一步地,如图7所示,在图6所示的基础上,该人脸识别方法在步骤160之后,还可包括:
步骤170,若活体检测通过,利用深度图像形成结构光深度模型,与预设人脸深度模型进行匹配。
当成像对象通过活体检测时,通过结构光深度模型进行身份验证。具体而言,根据深度图像中的深度数据构建结构光深度模型,并与预设的人脸深度模型进行匹配。具体地,可将构建的结构光深度模型中脸部各个器官的结构光深度模型,与预设的人脸深度模型中各个器官的深度模型进行比对,当相似度超过预设阈值时,可以认为结构光深度模型与预设的人脸深度模型匹配。
可以理解的是,这里预设的人脸深度模型,是预先存储的利用结构光传感器对电子设备的机主的人脸进行成像得到的结构光图像,利用结构光图像中深度数据构建得到的预设的人脸深度模型,以用于身份验证。
步骤180,当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
当结构光深度模型与预设人脸深度模型匹配时,确定通过了身份验证,可以进行后续的操作,如完成电子支付、电子设备解锁等等。
可以理解的是，当未通过活体检测时，可返回未通过活体检测的消息，或者当结构光深度模型与预设人脸深度模型不匹配时，返回身份验证失败的信息。
本申请实施例的人脸识别方法,在通过成像对象活体检测后,也就是在确认进行身份验证的对象不是仿照物(照片)时,再进行身份验证,从而提高了身份验证的安全性和可靠性。
由于当第二成像数据不同时,进行活体检测的方法不同,当第二成像数据为结构光图像时,本申请实施例提供了一种进行活体检测的方法。图8为本申请实施例提供的一种根据结构光图像进行活体检测的方法的流程示意图。如图8所示,包括:
步骤310,从结构光深度模型中识别目标器官。
本实施例中,可预先存储多个人体器官的结构光深度模型。在通过专用硬件获取成像对象的结构光图像后,从结构光图像中获取深度数据,深度数据构成深度图像,并根据深度图像构建结构光深度模型,将结构光深度模型与预存的器官的结构光深度模型进行比对,以从结构光深度模型中识别出目标器官。
举例而言,从人脸的结构光深度模型中识别出眉毛、眼睛、鼻子、嘴巴等器官。
步骤320,继续采集深度图像,对目标器官进行跟踪,识别目标器官是否处于运动状态。
由于成像对象为活体时,成像对象不可能始终保持静止,当某器官处于运动状态时,其深度数据也会发生变化,因此本实施例中对目标器官进行跟踪,以确定目标器官是否处于运动状态。
具体地,在识别出目标器官后,继续采集成像对象的深度图像,获取连续的多帧深度图像。通过比较同一器官在连续的多帧深度图像中的深度数据,以确定该器官是否处于运动状态。当同一器官在连续的多帧深度图像中的深度数据发生了变化,可以确定该器官处于运动状态。
以目标器官为嘴巴为例,在当前采集的深度图像中处于闭合状态,经过几帧深度图像后,嘴巴处于张开状态,从而可以确定嘴巴处于运动状态。
步骤330,如果目标器官处于运动状态,则确定成像对象为活体。
当目标器官处于运动状态时,说明成像对象不是仿照物,如照片等,可以确定该成像对象为活体。当目标器官处于静止状态时,可以确定该成像对象不是活体,可能为照片等仿照物。
本实施例中,通过从结构光深度模型中识别出目标器官,对目标器官进行跟踪,以确定目标器官是否处于运动状态,进而确定成像对象是否为活体,活体检测的准确率高。
上述实施例中,当确定成像对象为预存的成像对象时,控制开启的图像传感器可以是可见光传感器,以根据可见光图像检测成像对象是否为活体。图9为本申请实施例提供的又一种人脸识别方法的 流程示意图。
如图9所示,该方法包括:
步骤410,控制开启红外传感器进行成像。
步骤420,获取红外传感器成像得到的第一成像数据。
步骤430,根据第一成像数据,对成像对象与预存的成像对象进行比对。
本实施例中,根据第一成像数据,判断成像对象是否与预存的成像对象匹配的方法,与上述实施例中步骤110-步骤130中记载的方法类似,故在此不再赘述。
步骤440,如果成像对象为预存的成像对象,则控制开启可见光传感器进行成像。
当成像对象为预存的成像对象时,可控制开启可见光传感器,以使成像对象反射的可见光,在可见光传感器上成像。
步骤450,获取可见光传感器成像得到的可见光图像。
本实施例中,可通过专用硬件如MCU,获取可见光传感器成像得到的各个像素点的值,进而得到可见光图像。
步骤460,根据可见光图像,对成像对象进行活体检测。
作为一种可能的实现方式,可仅根据可见光图像对成像对象进行活体检测。图10为本申请实施例提供的一种根据可见光图像进行活体检测的方法的流程示意图。如图10所示,该活体检测方法包括:
步骤510,从可见光图像中识别人脸区域中的目标器官。
本实施例中,可预先存储脸部多个器官的可见光图像,将成像对象的可见光图像与预存的脸部器官的可见光图像进行比对,将成像对象的可见光图像中与预存的某器官的像素值相近的区域,确定为该器官。
举例而言,将成像对象的可见光图像中与预存的鼻子的可见光图像的像素值相近的区域识别为鼻子。
步骤520,继续采集可见光图像,对目标器官进行跟踪,识别目标器官是否处于运动状态。
由于成像对象为活体时,成像对象不可能始终保持静止,当某器官处于运动状态时,其位置也会发生变化,因此本实施例中对目标器官进行跟踪,以确定目标器官是否处于运动状态。
具体地,在识别出人脸区域中的目标器官后,继续采集人脸的可见光图像,得到连续多帧的人脸的可见光图像。通过比较两个器官在连续两帧或多帧可见光图像中的相对位置,以确定该目标器官是否处于运动状态。
当两个目标器官的相对位置发生了变化,可以认为这两个器官处于运动状态。
步骤530,如果目标器官处于运动状态,则确定成像对象为活体。
当目标器官处于运动状态时,可以确定人脸为活体,而不是仿照物如照片中的人脸,说明人脸通过了活体检测。当目标器官处于静止状态时,可以认为人脸不是活体而是仿照物,说明人脸没有通过活体检测。
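The relative-position tracking in visible-light frames can be sketched as follows. The organ coordinates, the choice of nose and mouth, and the pixel tolerance are all made-up values for illustration.

```python
def organs_moving(frames, tol=3):
    """A live face moves: track the vertical gap between two organs (here nose
    and mouth) across visible-light frames; a photo keeps the gap constant."""
    gaps = [abs(nose_y - mouth_y) for nose_y, mouth_y in frames]
    return max(gaps) - min(gaps) > tol

talking = [(120, 160), (120, 166), (121, 171)]   # mouth opening: the gap grows
photo = [(120, 160), (120, 160), (120, 161)]     # essentially static
live = organs_moving(talking)
fake = organs_moving(photo)
```

A change in the relative positions of two organs across frames is taken as evidence that the face is a living body.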
作为另一种可能的实现方式,还可根据可见光图像和红外图像,对人脸进行活体检测。图4为本申请实施例提供的一种根据可见光图像和红外图像进行活体检测的方法的流程示意图。如图4所示,该方法包括:
步骤401,在可见光图像中识别人脸区域,并在红外图像中,确定与人脸区域相对应的第一目标区域。
本实施例中，在可见光图像上，检测人脸区域，如果没有检测到人脸区域，重新采集可见光图像和红外图像。如果检测到人脸，则在红外图像中识别出人脸轮廓，确定与可见光图像中人脸区域对应的第一目标区域。可以理解的是，这里第一目标区域为红外图像中的人脸区域。
步骤402,根据第一目标区域,确定包含第一目标区域且大于第一目标区域的第二目标区域。
在红外图像上在第一目标区域的基础上扩大范围,得到第二目标区域。可以理解的是,第二目标区域包含第一目标区域且大于第一目标区域。
步骤403,在第二目标区域内统计直方图,并根据直方图计算对比度。
在红外图像上的第二目标区域内统计直方图,如公式(1)所示。
C = Σ_δ δ(i,j)²·p_δ(i,j)   (1)
其中，δ(i,j)=|i-j|，即相邻像素间灰度差，p_δ(i,j)为相邻像素间灰度差的像素分布概率。
步骤404,若对比度大于阈值,确定红外图像和可见光图像的成像对象为活体。
当对比度大于一定的阈值时,可以确定红外图像和可见光图像的成像对象为活体,否则为仿照物。
步骤470,若活体检测通过后,控制开启结构光传感器进行成像。
本实施例中,当检测成像对象为活体时,可通过专用硬件,控制开启结构光传感器。具体而言,可通过专用硬件调制镭射灯以发出结构光,结构光投射到成像对象。成像对象反射结构光,结构光传感器根据成像对象反射的结构光对成像对象进行成像。
步骤480,获取结构光传感器成像得到的第三成像数据。
本实施例中,结构光传感器根据成像对象反射的结构光,可成像得到结构光图像。可通过专用硬件,根据结构光图像获取深度数据,进而根据深度数据构建得到成像对象的结构光深度模型。从而,第三成像数据中包括结构光深度模型。
步骤490,将第三成像数据中的结构光深度模型,与预设人脸深度模型进行匹配,确定是否通过身份验证。
本实施例中,当成像对象通过活体检测后,通过结构光深度模型进行身份验证。具体而言,将结构光深度模型与预设的人脸深度模型进行匹配。具体地,可将构建的结构光深度模型中脸部各个器官的结构光深度模型,与预设的人脸深度模型中各个器官的深度模型进行比对,当相似度超过预设阈值时,可以认为结构光深度模型与预设的人脸深度模型匹配,可以确定通过了身份验证。
在确定身份验证后,电子设备进行只有在身份验证通过后才可进行的操作,如完成电子支付、解锁等。
本申请实施例的人脸识别方法,通过第一成像数据确定成像对象是否属于机主,确定属于机主后,再开启可见光传感器,根据可见光图像进行活体检测。在根据可见光图像确定成像对象为活体后,再开启结构光传感器,根据结构光深度模型进行身份验证。由于可见光传感器和结构光传感器,可以不用一直处于开启状态,从而可以节省电子设备的能量,提高电子设备的续航能力。并且,本实施例中,先确定成像对象是否属于机主,在属于机主的情况下,再进行活体验证,可以提高身份验证对安全性和可靠性。
本申请实施例的人脸识别方法,通过控制开启红外传感器进行成像,在根据红外传感器成像得到的成像数据确定成像对象为预存的成像对象后,再控制开启图像传感器进行成像,以根据图像传感器成像得到的成像数据进行活体检测,由于在确定成像对象为预存的成像对象后再进行活体检测,从而可以避免利用仿照物如照片验证通过的情况出现,提高了身份验证的安全性和可靠性。由于在确定成像对象为预存的成像对象匹配后,再开启图像传感器,从而使得图像传感器不需要一直处于开启状态, 可以很好地节省电子设备的电量,提高电子设备的续航能力。
为了实现上述实施例,本申请还提出一种图像处理装置。如图11所示,该装置包括:控制模块710、获取模块720、匹配模块730、检测模块740。
控制模块710用于控制红外传感器进行成像;以及在比对出成像对象为预存的成像对象时,控制开启图像传感器进行成像。
获取模块720用于获取红外传感器成像得到的第一成像数据,以及获取图像传感器成像得到的第二成像数据;
匹配模块730用于根据第一成像数据,对成像对象与预存的成像对象进行比对;
检测模块740用于根据第二成像数据,对成像对象进行活体检测。
在本实施例一种可能的实现方式中,该装置还包括:
匹配模块730,还用于若活体检测通过,利用深度图像形成结构光深度模型,与预设人脸深度模型进行匹配;
确定模块,用于当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
在本实施例一种可能的实现方式中,检测模块740还用于:
从结构光深度模型中识别目标器官;
继续采集深度图像,对目标器官进行跟踪,识别目标器官是否处于运动状态;
如果目标器官处于运动状态,则确定成像对象为活体。
在本实施例一种可能的实现方式中,所述图像传感器为可见光传感器,所述第二成像数据包括可见光图像,该装置还可包括:
控制模块710,还用于若活体检测通过后,控制开启结构光传感器进行成像;
获取模块720,还用于获取结构光传感器成像得到的第三成像数据;
匹配模块730,还用于将第三成像数据中的结构光深度模型,与预设人脸深度模型进行匹配;
确定模块,用于当结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
在本实施例一种可能的实现方式中,检测模块740还用于:
从可见光图像中识别人脸区域中的目标器官;
继续采集可见光图像,对目标器官进行跟踪,识别目标器官是否处于运动状态;
如果目标器官处于运动状态,则确定成像对象为活体。
在本实施例一种可能的实现方式中,第一成像数据为红外图像,检测模块740还用于:
在可见光图像中识别人脸区域,并在所述红外图像中,确定与人脸区域相对应的第一目标区域;
根据第一目标区域,确定包含第一目标区域且大于第一目标区域的第二目标区域;
在第二目标区域内统计直方图,并根据直方图计算对比度;
若对比度大于阈值,确定红外图像和可见光图像的成像对象为活体。
在本实施例一种可能的实现方式中,第一成像数据为红外图像,匹配模块730还用于:
从红外图像中提取成像轮廓;
将成像轮廓与预存的成像对象的成像轮廓匹配;
如果成像轮廓与预存的成像对象的成像轮廓匹配,则确定成像对象属于机主。
在本实施例一种可能的实现方式中,该图像处理装置可具有可信执行环境。在身份验证过程中,通过专用硬件获取身份验证所需的成像数据,保证了身份验证数据来源的安全性,进一步提高了安全性和可靠性。
上述图像处理装置中各个模块的划分仅用于举例说明,在其他实施例中,可将图像处理装置按照需要划分为不同的模块,以完成上述图像处理装置的全部或部分功能。
需要说明的是,前述对人脸识别方法实施例的解释说明,也适用于该实施例的图像处理装置,故在此不再赘述。
本申请实施例的图像处理装置,通过控制开启红外传感器进行成像,获取红外传感器成像得到的第一成像数据,根据第一成像数据,对成像对象与预存的成像对象进行比对,如果成像对象为预存的成像对象,则控制开启图像传感器进行成像,获取图像传感器成像得到的第二成像数据,根据第二成像数据,对成像对象进行活体检测。本实施例中,控制开启红外传感器进行成像,在根据红外传感器成像得到的成像数据确定成像对象为预存的成像对象后,再控制开启图像传感器进行成像,以根据图像传感器成像得到的成像数据进行活体检测,由于在确定成像对象为预存的成像对象后再进行活体检测,从而可以避免利用仿照物如照片验证通过的情况出现,提高了身份验证的安全性和可靠性。由于在确定成像对象为预存的成像对象匹配后,再开启图像传感器,从而使得图像传感器不需要一直处于开启状态,可以很好地节省电子设备的电量,提高电子设备的续航能力。
本申请实施例还提出一种移动终端。图12为本申请实施例提供的一种移动终端的结构示意图。
本实施例中,移动终端包括但不限于手机、平板电脑等设备。
如图12所示,该移动终端包括:成像传感器810、存储器820、MCU 830、处理器840以及存储在存储器820上并可在处理器840的可信执行环境下运行的可信应用程序。
其中,MCU830为可信执行环境的专用硬件,与成像传感器810和处理器840连接,用于控制成像传感器810进行成像,并将成像数据发送至处理器840。
处理器840执行可信应用程序时,实现前述实施例所述的人脸识别方法。
在本实施例一种可能的实现方式中,MCU830与处理器840之间通过加密方式进行通信。
本实施例中,MCU830可采取行列像素点置乱方法对图像进行加密。具体而言,MCU830可将原图中的像素信息进行了重新排布,处理器可通过一一对应的关系可以恢复原来的图像。
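The pixel-scrambling scheme can be sketched in Python. This is a minimal illustration, not the MCU firmware: the keyed permutation via a seeded PRNG and the toy flattened image are assumptions; the patent only specifies that pixels are rearranged and restored through a one-to-one correspondence.

```python
import random

def scramble(pixels, key):
    """Encrypt by rearranging the pixel sequence with a keyed permutation."""
    perm = list(range(len(pixels)))
    random.Random(key).shuffle(perm)          # permutation derived from the key
    return [pixels[i] for i in perm], perm

def unscramble(scrambled, perm):
    """Invert the one-to-one mapping to restore the original pixels."""
    restored = [0] * len(scrambled)
    for dst, src in enumerate(perm):
        restored[src] = scrambled[dst]
    return restored

image = list(range(16))                        # toy 4x4 image, flattened
enc, perm = scramble(image, key=42)
dec = unscramble(enc, perm)
```

The processor side only needs the same key (hence the same permutation) to invert the rearrangement.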
MCU830也可采用基于混沌的图像加密方法，具体地，产生2个Logistic混沌序列，改造2个Logistic序列，得到两个y序列，由y1和y2序列对原图像进行值替代加密。其中，密钥为混沌系统的初始状态值。
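A chaos-based substitution cipher of the kind described can be sketched as follows. The byte quantization and XOR substitution are illustrative choices; the patent only states that two Logistic chaotic sequences are transformed into y1 and y2 and used to substitute pixel values, with the initial state as the secret key.

```python
def logistic_stream(x0, n, r=3.99):
    """Logistic map x_{k+1} = r*x_k*(1-x_k); behaves chaotically for r near 4."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(int(x * 255) & 0xFF)        # quantize the state to a byte keystream
    return seq

def chaos_encrypt(pixels, x0_a, x0_b):
    """Substitute pixel values with two keystreams; the key is (x0_a, x0_b)."""
    y1 = logistic_stream(x0_a, len(pixels))
    y2 = logistic_stream(x0_b, len(pixels))
    return [p ^ k1 ^ k2 for p, k1, k2 in zip(pixels, y1, y2)]

pixels = [10, 200, 33, 250, 0, 17]             # toy pixel values
cipher = chaos_encrypt(pixels, 0.3561, 0.7129)
plain = chaos_encrypt(cipher, 0.3561, 0.7129)  # XOR substitution is its own inverse
```

Because the Logistic map is highly sensitive to its initial state, decryption requires exactly the same (x0_a, x0_b) pair.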
在本实施例一种可能的实现方式中,成像传感器810可包括:红外传感器、结构光传感器和可见光传感器。
其中,红外传感器包括激光摄像头和泛光灯;结构光传感器包括:镭射灯,以及与红外传感器共用的激光摄像头,可见光传感器包括:可见光摄像头。
在本实施例一种可能的实现方式中,MCU830包括PWM、深度引擎、总线接口以及随机存取存储器RAM。
其中,PWM用于调制泛光灯以使发出红外光,以及调制镭射灯以发出结构光;
激光摄像头,用于采集成像对象的结构光图像;
深度引擎,用于根据结构光图像,计算获得成像对象对应的深度数据;以及
总线接口,用于将深度数据发送至处理器840,并由处理器840上运行的可信应用程序利用深度数据执行相应的操作。
例如,可根据深度数据进行身份验证,具体过程可参见上述实施例,在此不再赘述。
本申请实施例还提出一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时 实现如前述实施例所述的人脸识别方法。
在本说明书的描述中,此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。如,如果用硬件来实现和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (25)

  1. 一种人脸识别方法,其特征在于,所述方法包括以下步骤:
    控制图像传感器进行成像;
    获取图像传感器成像得到的成像数据;
    根据所述成像数据,对成像对象进行活体检测。
  2. 根据权利要求1所述的人脸识别方法,其特征在于,所述图像传感器包括结构光传感器,所述方法还包括以下步骤:
    若活体检测通过,将依据所述成像数据中的深度数据构建的结构光深度模型,与预设人脸深度模型进行匹配;
    当所述结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
  3. 根据权利要求2所述的人脸识别方法,其特征在于,所述图像传感器还包括红外传感器,所述成像数据包括红外图像,所述根据所述成像数据,对成像对象进行活体检测,包括:
    根据所述成像数据中的红外图像,识别所述红外图像的成像对象是否为活体。
  4. 根据权利要求3所述的人脸识别方法,其特征在于,所述根据所述成像数据中的红外图像,识别所述红外图像的成像对象是否为活体,包括:
    从所述红外图像中提取成像轮廓;
    根据处于所述成像轮廓内部的局部红外图像,确定所述红外图像的成像对象的温度;
    若所述成像轮廓与预设人脸轮廓匹配,且所述红外图像的成像对象的温度处于体温范围内,确定所述红外图像的成像对象为活体。
  5. 根据权利要求2所述的人脸识别方法,其特征在于,所述图像传感器还包括红外传感器和可见光传感器,所述成像数据包括红外图像和可见光图像,所述根据所述成像数据,对成像对象进行活体检测,包括:
    在所述可见光图像中识别人脸区域,并在所述红外图像中,确定与所述人脸区域相对应的第一目标区域;
    根据所述第一目标区域,确定包含所述第一目标区域且大于所述第一目标区域的第二目标区域;
    在所述第二目标区域内统计直方图,并根据所述直方图计算对比度;
    若所述对比度大于阈值,确定所述红外图像和所述可见光图像的成像对象为活体。
  6. 根据权利要求3-5任一项所述的人脸识别方法,其特征在于,所述控制图像传感器进行成像,包括:
    控制图像传感器中的红外传感器进行成像;
    若根据所述红外传感器成像得到的红外图像确定成像对象为活体,控制结构光传感器进行成像。
  7. 根据权利要求3-5任一项所述的人脸识别方法,其特征在于,所述控制图像传感器进行成像,包括:
    同步控制图像传感器中的红外传感器和结构光传感器进行成像。
  8. 根据权利要求3-5任一项所述的人脸识别方法,其特征在于,所述控制图像传感器进行成像,包括:
    控制图像传感器中的红外传感器和可见光传感器进行成像;
    若根据所述红外传感器成像得到的红外图像和所述可见光传感器成像得到的可见光图像确定成像 对象为活体,控制结构光传感器进行成像。
  9. 根据权利要求2-5任一项所述的人脸识别方法,其特征在于,所述方法由可信应用程序执行,所述可信应用程序运行于可信执行环境中。
  10. 根据权利要求1所述的人脸识别方法,其特征在于,所述方法还包括以下步骤:
    控制开启红外传感器进行成像;
    获取所述红外传感器成像得到的第一成像数据;
    根据所述第一成像数据,对成像对象与预存的成像对象进行比对;
    如果所述成像对象为所述预存的成像对象,则进入所述控制图像传感器进行成像的步骤;
    所述图像传感器成像得到的成像数据为第二成像数据,所述根据所述成像数据,对成像对象进行活体检测,包括:
    根据所述第二成像数据,对所述成像对象进行活体检测。
  11. 根据权利要求10所述的方法,其特征在于,所述图像传感器为结构光传感器,所述第二成像数据为深度图像,则所述对所述成像对象进行活体检测之后,还包括:
    若活体检测通过,利用所述深度图像形成结构光深度模型,与预设人脸深度模型进行匹配;
    当所述结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述第二成像数据,对所述成像对象进行活体检测,包括:
    从所述结构光深度模型中识别目标器官;
    继续采集所述深度图像,对所述目标器官进行跟踪,识别所述目标器官是否处于运动状态;
    如果所述目标器官处于运动状态,则确定所述成像对象为活体。
  13. 根据权利要求10所述的方法,其特征在于,所述图像传感器为可见光传感器,所述第二成像数据包括可见光图像,则所述对所述成像对象进行活体检测之后,还包括:
    若活体检测通过后,控制开启所述结构光传感器进行成像;
    获取所述结构光传感器成像得到的第三成像数据;
    将所述第三成像数据中的结构光深度模型,与预设人脸深度模型进行匹配;
    当所述结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述第二成像数据,对所述成像对象进行活体检测,包括:
    从所述可见光图像中识别人脸区域中的目标器官;
    继续采集所述可见光图像,对所述目标器官进行跟踪,识别所述目标器官是否处于运动状态;
    如果所述目标器官处于运动状态,则确定所述成像对象为活体。
  15. 根据权利要求13所述的方法,其特征在于,所述第一成像数据为红外图像,所述根据所述第二成像数据,对所述成像对象进行活体检测,包括:
    在所述可见光图像中识别人脸区域,并在所述红外图像中,确定与所述人脸区域相对应的第一目标区域;
    根据所述第一目标区域,确定包含所述第一目标区域且大于所述第一目标区域的第二目标区域;
    在所述第二目标区域内统计直方图,并根据所述直方图计算对比度;
    若所述对比度大于阈值,确定所述红外图像和所述可见光图像的成像对象为活体。
  16. 根据权利要求10-15任一项所述的方法,其特征在于,所述第一成像数据为红外图像,则所述 根据所述第一成像数据,对成像对象与预存的成像对象进行比对,包括:
    从所述红外图像中提取成像轮廓;
    将所述成像轮廓与预存的成像对象的成像轮廓匹配;
    如果所述成像轮廓与预存的成像对象的成像轮廓匹配,则确定所述成像对象属于机主。
  17. 根据权利要求10-15任一项所述的方法,其特征在于,所述方法由可信应用程序执行,所述可信应用程序运行于可信执行环境中。
  18. 一种人脸识别装置,其特征在于,所述装置包括:
    控制模块,用于控制图像传感器进行成像;
    获取模块,用于获取图像传感器成像得到的成像数据;
    检测模块,用于根据所述成像数据,对成像对象进行活体检测。
  19. 根据权利要求18所述的人脸识别装置,其特征在于,所述图像传感器包括结构光传感器,所述装置还包括:
    匹配模块,用于若活体检测通过,将依据所述成像数据中的深度数据构建的结构光深度模型,与预设人脸深度模型进行匹配;当所述结构光深度模型与预设人脸深度模型匹配时,确定身份验证通过。
  20. 根据权利要求18所述的人脸识别装置,其特征在于,所述控制模块还用于控制红外传感器进行成像;以及在比对出成像对象为预存的成像对象时,控制开启图像传感器进行成像;
    所述获取模块还用于获取红外传感器成像得到的第一成像数据,以及获取所述图像传感器成像得到的第二成像数据;
    所述装置还包括匹配模块,所述匹配模块用于根据所述第一成像数据,对成像对象与预存的成像对象进行比对;
    所述检测模块还用于根据所述第二成像数据,对所述成像对象进行活体检测。
  21. 一种移动终端,其特征在于,包括:成像传感器、存储器、微处理芯片MCU、处理器及存储在所述存储器上并可在所述处理器的可信执行环境下运行的可信应用程序;
    所述MCU,为所述可信执行环境的专用硬件,与所述成像传感器和所述处理器连接,用于控制所述成像传感器进行成像,并将成像数据发送至所述处理器;
    所述处理器执行所述可信应用程序时,实现权利要求1-17中任一项所述的人脸识别方法。
  22. 根据权利要求21所述的移动终端,其特征在于,所述MCU与所述处理器之间通过加密方式进行通信。
  23. 根据权利要求21所述的移动终端,其特征在于,所述成像传感器包括:红外传感器、结构光传感器和可见光传感器;
    其中,所述红外传感器包括激光摄像头和泛光灯;
    所述结构光传感器包括:镭射灯,以及与所述红外传感器共用的激光摄像头;
    所述可见光传感器包括:可见光摄像头。
  24. 根据权利要求23所述的移动终端,其特征在于,所述MCU包括:脉冲宽度调制PWM、深度引擎、总线接口以及随机存取存储器RAM;
    所述PWM,用于调制泛光灯以使发出红外光,以及调制镭射灯以发出结构光;
    所述激光摄像头,用于采集所述成像对象的结构光图像;
    所述深度引擎,用于根据所述结构光图像,计算获得所述成像对象对应的深度数据;以及
    所述总线接口,用于将所述深度数据发送至所述处理器,并由所述处理器上运行的可信应用程序 利用所述深度数据执行相应的操作。
  25. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求1-17中任一项所述的人脸识别方法。
PCT/CN2019/075384 2018-04-12 2019-02-18 人脸识别方法、装置及移动终端、存储介质 WO2019196559A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/483,805 US11410458B2 (en) 2018-04-12 2019-02-18 Face identification method and apparatus, mobile terminal and storage medium
EP19749165.7A EP3576016A4 (en) 2018-04-12 2019-02-18 FACIAL RECOGNITION METHOD AND APPARATUS, MOBILE TERMINAL AND STORAGE MEDIUM

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810326645.2A CN108616688A (zh) 2018-04-12 2018-04-12 图像处理方法、装置及移动终端、存储介质
CN201810327410.5 2018-04-12
CN201810326645.2 2018-04-12
CN201810327410.5A CN108596061A (zh) 2018-04-12 2018-04-12 人脸识别方法、装置及移动终端、存储介质

Publications (1)

Publication Number Publication Date
WO2019196559A1 true WO2019196559A1 (zh) 2019-10-17

Family

ID=68163501

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075384 WO2019196559A1 (zh) 2018-04-12 2019-02-18 人脸识别方法、装置及移动终端、存储介质

Country Status (4)

Country Link
US (1) US11410458B2 (zh)
EP (1) EP3576016A4 (zh)
TW (1) TW201944290A (zh)
WO (1) WO2019196559A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488756B (zh) * 2019-01-25 2023-10-03 杭州海康威视数字技术股份有限公司 基于面部识别的活体检测的方法、电子设备和存储介质
CH716053A1 (de) * 2019-04-10 2020-10-15 Smart Secure Id Ag Biometrische Bildungsvorrichtung und biometrisches Bildungsverfahren zum Erfassen von Bilddaten eines Körperteils einer Person mit Benutzerführung.
CN111310575B (zh) * 2020-01-17 2022-07-08 腾讯科技(深圳)有限公司 一种人脸活体检测的方法、相关装置、设备及存储介质
US11598671B2 (en) * 2020-06-17 2023-03-07 Microsoft Technology Licensing, Llc Body temperature estimation via thermal intensity distribution
CN112395963B (zh) * 2020-11-04 2021-11-12 北京嘀嘀无限科技发展有限公司 对象识别方法和装置、电子设备及存储介质
CN112329720A (zh) * 2020-11-26 2021-02-05 杭州海康威视数字技术股份有限公司 人脸活体检测方法、装置及设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622588A (zh) * 2012-03-08 2012-08-01 无锡数字奥森科技有限公司 双验证人脸防伪方法及装置
CN105718925A (zh) * 2016-04-14 2016-06-29 苏州优化智能科技有限公司 基于近红外和面部微表情的真人活体身份验证终端设备
CN107277053A (zh) * 2017-07-31 2017-10-20 广东欧珀移动通信有限公司 身份验证方法、装置及移动终端
CN108596061A (zh) * 2018-04-12 2018-09-28 Oppo广东移动通信有限公司 人脸识别方法、装置及移动终端、存储介质
CN108616688A (zh) * 2018-04-12 2018-10-02 Oppo广东移动通信有限公司 图像处理方法、装置及移动终端、存储介质

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000268175A (ja) 1999-03-18 2000-09-29 Omron Corp 個人認証方法および装置
KR100456619B1 (ko) 2001-12-05 2004-11-10 한국전자통신연구원 에스.브이.엠(svm)을 이용한 얼굴 등록/인증 시스템 및방법
JP2005327043A (ja) 2004-05-13 2005-11-24 Seiko Epson Corp 画像レイアウト装置、画像レイアウト方法、画像レイアウトプログラムおよび記録媒体
CN100514353C (zh) 2007-11-26 2009-07-15 清华大学 一种基于人脸生理性运动的活体检测方法及系统
CN102271291B (zh) 2011-08-17 2014-12-17 Tcl集团股份有限公司 基于widi技术的楼宇可视对讲系统及其实现方法
US9202105B1 (en) 2012-01-13 2015-12-01 Amazon Technologies, Inc. Image analysis for user authentication
CN202815718U (zh) 2012-09-28 2013-03-20 王潮 一种个人随身装置
US9251427B1 (en) 2014-08-12 2016-02-02 Microsoft Technology Licensing, Llc False face representation identification
CN105426848B (zh) 2014-11-03 2020-12-18 苏州思源科安信息技术有限公司 一种提高生物识别成功率的成像方法
US20160140390A1 (en) * 2014-11-13 2016-05-19 Intel Corporation Liveness detection using progressive eyelid tracking
CN104966070B (zh) 2015-06-30 2018-04-10 北京汉王智远科技有限公司 Liveness detection method and apparatus based on face recognition
CN105138967B (zh) 2015-08-05 2018-03-27 三峡大学 Liveness detection method and apparatus based on the activity state of the human-eye region
TW201727537A (zh) 2016-01-22 2017-08-01 鴻海精密工業股份有限公司 Face recognition system and face recognition method
CN105827953A (zh) 2016-03-07 2016-08-03 乐视致新电子科技(天津)有限公司 Motion-sensing interaction camera apparatus for a virtual reality headset and on/off control method thereof
CN105912908A (zh) 2016-04-14 2016-08-31 苏州优化智能科技有限公司 Infrared-based live-person identity verification method
CN107368769A (zh) 2016-05-11 2017-11-21 北京市商汤科技开发有限公司 Face liveness detection method, apparatus, and electronic device
CN107451510B (zh) * 2016-05-30 2023-07-21 北京旷视科技有限公司 Liveness detection method and liveness detection system
DE102016009619A1 (de) 2016-07-29 2018-02-01 LÜTH & DÜMCHEN Automatisierungsprojekt GmbH Method for detecting the spatial extent of a camera object as part of liveness detection for devices that capture person-specific data
CN106372601B (zh) 2016-08-31 2020-12-22 上海依图信息技术有限公司 Liveness detection method and apparatus based on infrared and visible binocular images
CN106407914B (zh) 2016-08-31 2019-12-10 北京旷视科技有限公司 Method, apparatus, and remote teller machine system for detecting human faces
US10204262B2 (en) * 2017-01-11 2019-02-12 Microsoft Technology Licensing, Llc Infrared imaging recognition enhanced by 3D verification
CN107133608A (zh) 2017-05-31 2017-09-05 天津中科智能识别产业技术研究院有限公司 Identity authentication system based on liveness detection and face verification
WO2019056310A1 (en) * 2017-09-22 2019-03-28 Qualcomm Incorporated SYSTEMS AND METHODS FOR DETECTING FACIAL ACTIVITY
CN107832677A (zh) 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face recognition method and system based on liveness detection
US10657363B2 (en) * 2017-10-26 2020-05-19 Motorola Mobility Llc Method and devices for authenticating a user by image, depth, and thermal detection

Non-Patent Citations (1)

Title
See also references of EP3576016A4 *

Also Published As

Publication number Publication date
EP3576016A4 (en) 2020-03-18
TW201944290A (zh) 2019-11-16
EP3576016A1 (en) 2019-12-04
US20200380244A1 (en) 2020-12-03
US11410458B2 (en) 2022-08-09

Similar Documents

Publication Publication Date Title
WO2019196559A1 (zh) Face recognition method, apparatus, mobile terminal, and storage medium
KR102483642B1 (ko) Liveness test method and apparatus
US10339402B2 (en) Method and apparatus for liveness detection
EP2680192B1 (en) Facial recognition
EP2680191B1 (en) Facial recognition
CN108596061A (zh) Face recognition method, apparatus, mobile terminal, and storage medium
CN105612533B (zh) Liveness detection method, liveness detection system, and computer program product
CN108595942B (zh) Security control method and apparatus for an application program, mobile terminal, and storage medium
US20160026781A1 (en) Ear biometric capture, authentication, and identification method and system
CN112825128A (zh) Method and device for liveness testing and/or biometric verification
US11348370B2 (en) Iris authentication device, iris authentication method, and recording medium
US20230368583A1 (en) Authentication device, authentication method, and recording medium
US11682236B2 (en) Iris authentication device, iris authentication method and recording medium
WO2019196792A1 (zh) Security control method and apparatus for an application program, mobile terminal, and computer-readable storage medium
CN106557752A (zh) Security control system based on iris recognition and method thereof
CN108614958A (zh) Security control method and apparatus for an application program, mobile terminal, and storage medium
US20240071135A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019749165

Country of ref document: EP

Effective date: 20190814

121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 19749165

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE