WO2019200575A1 - Identity authentication method, identity authentication apparatus, and electronic device - Google Patents

Identity authentication method, identity authentication apparatus, and electronic device

Info

Publication number
WO2019200575A1
Authority
WO
WIPO (PCT)
Prior art keywords
tested
identity authentication
image information
dimensional image
infrared
Prior art date
Application number
PCT/CN2018/083618
Other languages
English (en)
French (fr)
Inventor
田浦延
Original Assignee
深圳阜时科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳阜时科技有限公司
Priority to CN201880000314.8A (published as CN108513662A)
Priority to PCT/CN2018/083618 (published as WO2019200575A1)
Publication of WO2019200575A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 20/00: Payment architectures, schemes or protocols
    • G06Q 20/38: Payment protocols; Details thereof
    • G06Q 20/40: Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q 20/401: Transaction verification
    • G06Q 20/4014: Identity check for transactions
    • G06Q 20/40145: Biometric identity checks
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/172: Classification, e.g. identification

Definitions

  • the application relates to an identity authentication method, an identity authentication device, and an electronic device.
  • For example, fingerprint recognition technology, iris recognition technology, and the like.
  • However, fingerprint recognition technology and iris recognition technology each have their own limitations.
  • For example, fingerprint recognition cannot perform long-distance sensing, and iris recognition has a slower sensing response speed.
  • the embodiments of the present application aim to at least solve one of the technical problems existing in the prior art. To this end, the embodiments of the present application need to provide an identity authentication method, an identity authentication device, and an electronic device.
  • the application provides an identity authentication method, including:
  • Step S1: projecting infrared structured light onto the object to be tested, and sensing an infrared image of the object to be tested;
  • Step S2: determining, according to the infrared image, whether the object to be tested has facial features;
  • Step S3: sensing a planar image of the object to be tested;
  • Step S4: obtaining two-dimensional image information of the object to be tested according to the planar image, and comparing whether it matches pre-stored two-dimensional image information of an object face;
  • Step S5: confirming, according to the results of steps S2 and S4, whether the identity of the object to be tested is legitimate.
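The five steps can be sketched as a short control flow. This is a minimal illustration only: `sense_infrared_image`, `has_facial_features`, `sense_planar_image`, and `matches_stored_2d` are hypothetical stand-ins for the sensing and matching routines described above, not names from the patent.

```python
def authenticate(sense_infrared_image, has_facial_features,
                 sense_planar_image, matches_stored_2d):
    """Sketch of steps S1-S5: both the 3D facial-feature check (S2)
    and the 2D comparison (S4) must pass for authentication to succeed."""
    infrared = sense_infrared_image()        # S1: structured-light infrared image
    if not has_facial_features(infrared):    # S2: 3D facial-feature judgment
        return False                         # early exit: authentication fails
    planar = sense_planar_image()            # S3: planar (2D) image
    if not matches_stored_2d(planar):        # S4: compare with pre-stored 2D info
        return False
    return True                              # S5: both results positive -> legitimate

# Example with stub sensors/matchers standing in for real hardware:
ok = authenticate(lambda: "ir", lambda img: True, lambda: "2d", lambda img: True)
```

The early returns mirror the patent's observation that once either judgment is negative, the remaining step need not be executed.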
  • the identity of the object to be tested is authenticated by means of optical image sensing.
  • the infrared image can reflect the 3D attribute information of the object to be tested, and thus, according to the infrared image, it can be determined whether the object to be tested has a facial feature.
  • This judgment, together with the two-dimensional image comparison, jointly confirms whether the identity of the object to be tested is legitimate.
  • The optical sensing technique can be applied to sensing at longer distances, for example within a range of 1 meter or even further, and its sensing response speed is faster.
  • step S2 it is determined whether the object to be tested has a stereoscopic facial feature according to the infrared image.
  • step S2 includes: constructing stereoscopic image information according to the infrared image, and determining whether the object to be tested has a facial feature by determining whether the stereoscopic image information has a facial feature.
  • In step S3, infrared flood light is projected onto the object to be tested and a planar image of the object is sensed with an infrared image sensor; alternatively, the planar image of the object to be tested is sensed with an RGB image sensor.
  • When step S3 projects infrared flood light and senses the planar image with an infrared image sensor, steps S1 and S3 are performed in a time-division manner, with step S1 executed before or after step S3.
  • step S1 and step S3 the infrared image and the planar image of the object to be tested are sensed by the same infrared image sensor.
  • step S1 and step S3 are performed in time division, wherein step S1 is performed before or after step S3; or Steps S1 and S3 are simultaneously performed.
  • step S2 is performed before or after step S4; or, step S2 and step S4 are performed simultaneously.
  • In step S5, when whichever of steps S2 and S4 is performed first yields a negative result, the identity authentication fails.
  • When the results obtained after both step S2 and step S4 have been performed are positive, the identity authentication succeeds.
  • When the two-dimensional image information of the object face is two-dimensional image information of a human face, step S4 compares whether the sensed two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, and step S2 determines, based on the infrared image, whether the object to be tested has the facial features of a human body.
  • When the sensed two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, the object to be tested includes the human face, and the identity authentication succeeds.
  • The two-dimensional image information of the object to be tested includes feature information, and the pre-stored two-dimensional image information of the human face includes facial feature information; step S4 compares whether the feature information of the object to be tested matches the pre-stored facial feature information.
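The patent does not fix a matching metric for "comparing whether the feature information matches", so the following shows one common choice, cosine similarity between feature vectors with a threshold, purely as an illustration; the threshold value is arbitrary.

```python
import numpy as np

def features_match(probe, stored, threshold=0.9):
    """Compare a sensed feature vector against a pre-stored one.
    Cosine similarity >= threshold counts as a match (illustrative rule)."""
    probe = np.asarray(probe, dtype=float)
    stored = np.asarray(stored, dtype=float)
    sim = probe @ stored / (np.linalg.norm(probe) * np.linalg.norm(stored))
    return sim >= threshold

stored = [0.2, 0.8, 0.5, 0.1]                      # pre-stored facial features
assert features_match([0.21, 0.79, 0.52, 0.09], stored)   # near-identical vectors
assert not features_match([0.9, 0.1, 0.0, 0.7], stored)   # unrelated vector
```

A real system would use much longer feature vectors produced by the trained network described below, but the comparison step has the same shape.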
  • step S4 includes: extracting two-dimensional facial feature information of the object to be tested by a deep learning method.
  • the deep learning method includes: establishing a deep convolutional neural network model, training the deep convolutional neural network model using a predetermined number of facial photos, and extracting the feature parameters of the face using the trained deep convolutional neural network model.
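At inference time, "extracting feature parameters" with a trained convolutional network amounts to a forward pass of convolution, nonlinearity, and pooling. Below is a one-layer NumPy caricature of that forward pass; the kernel and image are invented for illustration and stand in for the many trained layers of a real model.

```python
import numpy as np

def conv_feature(image, kernel):
    """Valid 2D convolution + ReLU + global average pooling:
    a single-layer sketch of the feature extraction a trained
    deep convolutional network performs on a face image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    out = np.maximum(out, 0.0)          # ReLU nonlinearity
    return out.mean()                   # global average pool -> one feature value

rng = np.random.default_rng(0)
face = rng.random((8, 8))               # stand-in for a face image patch
edge_kernel = np.array([[1.0, -1.0]])   # horizontal-gradient kernel (arbitrary)
feature = conv_feature(face, edge_kernel)
```

A production model stacks many such layers (with learned kernels) and emits a whole feature vector rather than a single value; those vectors are what the comparison step matches against the pre-stored facial feature information.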
  • the wavelength of the infrared structured light in step S1 is 940 nanometers.
  • the wavelength of the infrared flood light in step S3 is 940 nanometers.
  • the infrared structured light in step S1 has a wavelength in the range of [925, 955] nanometers
  • the infrared flood light in step S3 has a wavelength in the range of [925, 955] nanometers.
  • the infrared structured light projected onto the object to be tested forms a pattern, and the pattern is any one of, or a combination of, a dot matrix, a stripe pattern, a speckle pattern, and a mesh pattern.
  • In one embodiment, step S1 is performed first, and then steps S2 and S3 are performed simultaneously; when step S2 determines that the object to be tested has facial features, step S4 is started; when step S2 determines that the object to be tested does not have facial features, the identity authentication fails.
  • Alternatively, step S3 is performed first, and then steps S4 and S1 are performed simultaneously; when step S4 confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, step S2 is started; when step S4 confirms that it does not match, the identity authentication fails.
  • Alternatively, step S1 is performed first, followed by step S2; when step S2 determines that the object to be tested has facial features, step S3 and then step S4 are performed; when step S2 determines that the object does not have facial features, the identity authentication fails; or
  • step S3 is performed first, followed by step S4; when step S4 confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, step S1 and then step S2 are performed; when step S4 confirms that it does not match, the identity authentication fails; or
  • Step S1, step S3, and step S2 are sequentially performed; when it is determined in step S2 that the object to be tested has a facial feature, step S4 is started, otherwise, identity authentication fails; or
  • Step S1, step S3, and step S4 are sequentially executed; when it is confirmed in step S4 that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, step S2 is started, otherwise, the identity authentication fails; or
  • Step S3, step S1, and step S2 are sequentially executed; when it is determined in step S2 that the object to be tested has the facial feature, step S4 is started, otherwise, the identity authentication fails; or
  • Step S3, step S1, and step S4 are sequentially performed; when it is confirmed in step S4 that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, step S2 is started, otherwise, the identity authentication fails.
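All of the orderings above share one rule: run the two judgments in some sequence and fail as soon as either is negative, skipping whatever remains. A small sketch with the order as a parameter; the check names and stub callables are illustrative, not part of the patent.

```python
def authenticate_in_order(order, checks):
    """Run the named checks (e.g. the S2 facial-feature judgment and the
    S4 two-dimensional comparison) in the given order; authentication
    fails at the first negative result, so later steps are skipped."""
    for name in order:
        if not checks[name]():
            return False      # early exit: identity authentication fails
    return True               # every check positive: authentication succeeds

# Stub checks standing in for the real sensing/matching steps:
checks = {"S2_facial_feature": lambda: True, "S4_2d_match": lambda: False}
assert not authenticate_in_order(["S4_2d_match", "S2_facial_feature"], checks)
```

Whichever ordering an embodiment chooses, the outcome is the same conjunction of both results; only the point of early exit differs.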
  • the identity authentication method is applied to an electronic device for facial recognition of an organism.
  • the application also provides an identity authentication device, including:
  • a memory for pre-storing two-dimensional image information of the sample object
  • a first projector for projecting infrared structured light to the object to be tested
  • An image sensing device configured to capture the infrared structured light reflected by the object to be tested, obtain an infrared image of the object to be tested, and also sense a planar image of the object to be tested;
  • A processor configured to determine, according to the infrared image, whether the object to be tested has facial features; the processor is further configured to obtain two-dimensional image information of the object to be tested according to the planar image, and to compare whether this two-dimensional image information matches pre-stored two-dimensional image information of an object face; the processor confirms, according to the foregoing judgment and comparison results, whether the identity of the object to be tested is legitimate.
  • the identity authentication device authenticates the identity of the object to be tested by means of optical image sensing.
  • the infrared image can reflect the 3D attribute information of the object to be tested, so that the processor can determine, according to the infrared image, whether the object to be tested has a facial feature.
  • The processor combines this judgment with the comparison of the 2D image information to jointly confirm whether the identity of the object to be tested is legitimate.
  • the present application provides a novel optical identity authentication device for identity authentication.
  • The optical identity authentication device can be applied to sensing at longer distances, for example within a range of 1 meter or even further, and its sensing response speed is faster.
  • the processor is configured to determine, according to the infrared image, whether the object to be tested has a stereoscopic facial feature.
  • the processor is configured to construct stereoscopic image information according to the infrared image, and determine whether the object to be tested has a facial feature by determining whether the stereoscopic image information has a facial feature.
  • the image sensing device includes an infrared image sensor for capturing infrared structured light reflected by the object to be measured, and sensing an infrared image of the object to be tested.
  • the identity authentication device further includes a second projector for projecting infrared flood light onto the object to be tested; the image sensing device is further configured to capture the infrared flood light reflected by the object to be tested and sense a planar image of the object to be tested.
  • the infrared image sensor is used for time-sharing sensing to obtain an infrared image and a planar image of the object to be tested.
  • the image sensing device further includes an RGB image sensor for sensing a planar image of the object to be measured.
  • the identity authentication device further includes a control circuit for time-division control of the first projector and the second projector; when performing identity authentication, the control circuit causes the first projector to operate before or after the second projector.
  • control circuit is configured to control the first projector, the second projector, and the image sensing device to cooperate when performing identity authentication.
  • the identity authentication device further includes a high speed data transfer link for transmitting the infrared image signal and the planar image signal sensed by the image sensing device to the processor for processing.
  • When either the facial-feature judgment or the two-dimensional image comparison yields a negative result, the identity authentication fails.
  • the processor determines that the object to be tested has a facial feature and confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, the identity authentication is successful.
  • The processor may first determine, according to the infrared image, whether the object to be tested has facial features, and then obtain the two-dimensional image information of the object according to the planar image and compare it with the pre-stored two-dimensional image information; alternatively, the processor may perform the comparison first and the facial-feature determination afterwards.
  • When the two-dimensional image information of the object face is two-dimensional image information of a human face, the processor compares whether the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, and determines, according to the infrared image, whether the object to be tested has the facial features of a human body.
  • When the processor confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, it can be confirmed that the object to be tested includes the human face, and the identity authentication succeeds.
  • When the processor determines, according to the infrared image, that the object to be tested has the facial features of a human body, and confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, the processor confirms that the object to be tested includes the human face, and the identity authentication succeeds.
  • The two-dimensional image information of the object to be tested includes feature information, and the pre-stored two-dimensional image information of the human face includes facial feature information; the processor compares the feature information of the object to be tested with the pre-stored facial feature information to confirm whether the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information.
  • the processor extracts the two-dimensional facial feature information of the object to be tested by a deep learning method.
  • the processor establishes a deep convolutional neural network model, trains the model using a predetermined number of facial photos, and extracts the feature parameters of the face using the trained deep convolutional neural network model.
  • the infrared structured light has a wavelength of 940 nanometers.
  • the infrared flood light has a wavelength of 940 nanometers.
  • the first projector projects infrared structured light with a wavelength in the range of [925, 955] nanometers
  • the second projector projects infrared flood light with a wavelength in the range of [925, 955] nanometers.
  • the first projector projects an infrared structured light pattern onto the object to be tested, and the pattern is any one of, or a combination of, a dot matrix, a stripe pattern, a speckle pattern, and a mesh pattern.
  • When performing identity authentication, the second projector first projects infrared flood light onto the object to be tested, and the processor obtains the two-dimensional image information of the object according to the planar image and compares it with the pre-stored two-dimensional image information; meanwhile, the first projector projects infrared structured light onto the object to be tested. When the processor confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, the processor further determines, according to the stereoscopic image information constructed from the infrared image, whether the object has facial features; when the processor confirms that the information does not match, the identity authentication fails.
  • Alternatively, when performing identity authentication, the first projector first projects infrared structured light onto the object to be tested; while the processor determines, according to the infrared image, whether the object has facial features, the second projector projects infrared flood light onto the object. When the processor determines that the object has facial features, the processor obtains the two-dimensional image information of the object according to the planar image and compares it with the pre-stored two-dimensional image information; when the processor determines that the object does not have facial features, the identity authentication fails.
  • The processor may also simultaneously perform "determining whether the object to be tested has facial features according to the infrared image" and "obtaining the two-dimensional image information of the object to be tested according to the planar image, and comparing whether this two-dimensional image information matches the pre-stored two-dimensional image information".
  • the application further provides an electronic device comprising the identity authentication device according to any one of the above.
  • the electronic device determines whether to perform a corresponding function according to the identity authentication result of the identity authentication device.
  • the corresponding function includes any one or more of unlocking, payment, and launching a pre-stored application.
  • the electronic device includes any one or more of a consumer electronic product, a home electronic product, a vehicle-mounted electronic product, and a financial terminal product.
  • Since the electronic device of the present application includes the above identity authentication device, the electronic device can sense the object to be tested at a longer distance, and its sensing response speed is faster.
  • FIG. 1 is a schematic flowchart diagram of an identity authentication method according to the present application.
  • FIG. 2 is a schematic diagram showing the relationship between the wavelength and intensity of near-infrared light.
  • FIG. 3 is a schematic diagram of a refinement process of a first embodiment of an identity authentication method according to the present application.
  • FIG. 4 is a schematic diagram of a refinement process of a second embodiment of the identity authentication method of the present application.
  • FIG. 5 is a structural block diagram of a first embodiment of the identity authentication apparatus of the present application.
  • FIG. 6 is a structural block diagram of a second embodiment of the identity authentication apparatus of the present application.
  • FIG. 7 is a schematic structural diagram of an embodiment of an electronic device of the present application.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the described features either explicitly or implicitly.
  • the meaning of "a plurality" is two or more unless specifically and specifically defined otherwise.
  • In the description of the present application, it should be noted that, unless otherwise expressly specified and defined, the terms "installation", "connected", and "connection" are to be understood broadly: a connection may be fixed, detachable, or integral; mechanical, electrical, or communicative; direct, or indirect through an intermediate medium; and it may be internal communication between two elements or an interaction relationship between two elements.
  • the specific meanings of the above terms in the present application can be understood on a case-by-case basis.
  • step numbers S1, S2, S3, S4, and S5 referred to in the specification and the claims of the present application are only for clearly distinguishing the steps, and do not represent the order of execution of the steps.
  • FIG. 1 is a schematic flowchart diagram of an identity authentication method according to the present application.
  • The identity authentication method is applicable to electronic devices such as, but not limited to, consumer electronic products, home electronic products, vehicle-mounted electronic products, and financial terminal products.
  • Consumer electronic products are, for example, but not limited to, mobile phones, tablets, notebook computers, desktop monitors, and all-in-one computers.
  • Home-based electronic products such as, but not limited to, smart door locks, televisions, refrigerators, wearable devices, and the like.
  • Vehicle-mounted electronic products such as, but not limited to, car navigation systems, car DVDs, and the like.
  • the financial terminal products are, for example, but not limited to ATM machines, terminals for self-service business, and the like.
  • the identity authentication method includes:
  • Step S1: projecting infrared structured light onto the object to be tested, and sensing an infrared image of the object to be tested;
  • Step S2: determining, according to the infrared image, whether the object to be tested has facial features;
  • Step S3: sensing a planar image of the object to be tested;
  • Step S4: obtaining two-dimensional image information of the object to be tested according to the planar image, and comparing whether it matches pre-stored two-dimensional image information of an object face;
  • Step S5: confirming, according to the results of steps S2 and S4, whether the identity of the object to be tested is legitimate.
  • step S2 and step S4 are performed sequentially.
  • When step S5 confirms that the identity of the object to be tested is illegitimate, that is, the identity authentication fails, the process ends and the remaining unexecuted step need not be performed. For example, when step S2 determines that the object to be tested does not have facial features, the identity authentication fails, and step S4 need not be performed; similarly, when step S4 is executed first and yields a negative result, the identity authentication fails, and step S2 need not be performed.
  • steps S2 and S4 can be performed simultaneously.
  • In step S1, an infrared image of the object to be tested is obtained by projecting infrared structured light onto the object, and the infrared image can reflect three-dimensional (3D) attribute information of the object.
  • Thereby, based on the infrared image, it can be determined whether the object to be tested has facial features.
  • In step S3, a planar image of the object to be tested is obtained, so that two-dimensional (2D) image information of the object can be derived from the planar image and compared with the pre-stored two-dimensional image information of the object face. In this way, face recognition of the object to be tested combines 3D and 2D information.
  • the present application provides a novel optical sensing technology for identity authentication.
  • The optical sensing technique can be applied to sensing at longer distances, for example within a range of 1 meter or even further, and its sensing response speed is faster.
  • In step S2, determining from the infrared image whether the object to be tested has facial features constitutes one pass of recognition of the object; when this recognition fails, the identity authentication fails and the process ends.
  • In step S4, obtaining the two-dimensional image information of the object to be tested from the planar image and comparing it with the pre-stored two-dimensional image information of the object face constitutes another pass of recognition; when this recognition fails, the identity authentication fails and the process ends.
  • When both passes succeed, step S5 confirms that the object to be tested includes the object face, and therefore confirms that the identity of the object to be tested is legitimate and the identity authentication succeeds.
  • In step S1, an optical component is used to project infrared structured light onto the object to be tested, and an infrared image sensor is used to capture the infrared structured light reflected by the object, thereby sensing the infrared image of the object to be tested.
  • The optical component includes, for example, a light source, a collimating lens, and a diffractive optical element (DOE): the light source generates an infrared laser beam; the collimating lens calibrates the beam into approximately parallel light; and the diffractive optical element modulates the collimated beam to form the corresponding speckle pattern.
  • the speckle pattern includes, for example, one or more of a regular lattice pattern, a stripe pattern, a mesh format, a speckle pattern, a coded pattern, and the like. Among them, speckle is also called random dot matrix.
  • the coded pattern consists, for example, of light of different waveforms, each waveform representing a number, the combination of which is the code.
  • the infrared structured light may alternatively be produced by other suitable optical or optical components.
  • the speckle pattern may also include other coding patterns.
  • step S1 a known infrared structured light pattern is projected onto the object to be tested.
  • the image sensing device or processor analyzes the depth information of the object to be tested according to the captured deformed infrared structured light pattern.
  • This type of infrared structured light is defined as spatially structured light.
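For spatially structured light, depth is typically recovered by triangulation: the projected pattern appears shifted (disparity d) between the projector and camera views, and with focal length f and baseline b the depth is Z = f * b / d. The numbers below are invented purely to illustrate the formula; they are not from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation for spatial structured light: Z = f * b / d.
    A larger pattern shift (disparity) means a closer surface point."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 600 px focal length, 5 cm projector-camera
# baseline, 30 px observed pattern shift.
z = depth_from_disparity(600.0, 0.05, 30.0)   # depth in meters
```

Evaluating each pattern element this way yields the per-pixel depth information mentioned above.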
  • the infrared structured light is projected onto the object to be measured in step S1.
  • the image sensing device or processor calculates the depth information of the object to be measured, for example, by measuring the propagation delay time between the light pulses.
  • This type of infrared structured light is defined as time structured light.
  • the time structured light is, for example but not limited to, a combination of any one or both of a sine wave and a square wave.
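For time structured light, depth follows from the round-trip delay of the reflected pulse, depth = c · Δt / 2. A minimal sketch (the delay value is hypothetical):

```python
C_MM_PER_NS = 299.792458  # speed of light in millimetres per nanosecond

def tof_depth_mm(round_trip_delay_ns):
    """Depth from the propagation delay of a reflected light pulse:
    the pulse travels to the object and back, hence the factor 1/2."""
    return C_MM_PER_NS * round_trip_delay_ns / 2

# A pulse returning after ~6.67 ns corresponds to roughly 1 metre.
print(round(tof_depth_mm(6.67), 1))  # 999.8
```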
  • In step S2, stereoscopic image information is, for example, constructed from the infrared image, and whether the object to be tested has a facial feature is determined by determining whether the stereoscopic image information has a facial feature.
  • Position information and depth information of each pixel can be obtained from the infrared image, and stereoscopic image information can then be constructed from the obtained position and depth information of each pixel. Accordingly, it is possible to determine from the stereoscopic image information whether the object to be tested has a facial feature.
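Constructing stereoscopic information from per-pixel position and depth amounts to back-projecting each pixel through a pinhole camera model. A minimal sketch; the intrinsics (fx, fy, cx, cy) and the pixel values are hypothetical:

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project one pixel (u, v) with its sensed depth into a 3D
    point using pinhole-camera intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics: a pixel 100 columns right of the principal
# point at 500 mm depth lies 100 mm to the side when fx = 500.
print(pixel_to_point(420, 240, 500.0, 500.0, 500.0, 320.0, 240.0))
```

Applying this to every pixel yields the point cloud (stereoscopic image information) on which the facial-feature check operates.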
  • When determining whether the stereoscopic image information has a facial feature, the stereoscopic image information may, for example, be correlated with a preset stereoscopic facial template. If the correlation coefficient is greater than or equal to a preset value, it is determined that the stereoscopic image information has a stereoscopic facial feature; otherwise, it is determined that it does not, the identity authentication fails, and the flow ends.
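The correlation test against a preset stereoscopic template can be sketched as a Pearson correlation followed by a threshold. Pearson correlation is one plausible reading of "correlation coefficient" here, and the template values and the preset value 0.9 are illustrative only:

```python
import math

def correlation_coefficient(a, b):
    """Pearson correlation between a sensed depth map and a preset
    stereoscopic face template (both flattened to equal-length lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return cov / var

def has_stereo_face(depth_map, template, preset_value=0.9):
    return correlation_coefficient(depth_map, template) >= preset_value

template = [10.0, 12.0, 15.0, 11.0]          # toy 4-sample depth template
print(has_stereo_face([10.1, 12.2, 14.8, 11.0], template))  # close match -> True
print(has_stereo_face([13.0, 13.0, 13.0, 13.1], template))  # flat surface -> False
```

A nearly flat depth map, as produced by a photograph, fails the threshold even when its 2D appearance matches.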
  • Alternatively, a deep learning method extracts three-dimensional facial feature information of the object to be tested from the infrared image, from which it is determined whether the object to be tested has a facial feature.
  • The deep learning method includes: establishing a deep convolutional neural network model, training the model with a predetermined number of facial photos, and extracting facial feature parameters from the infrared image with the trained model.
  • Alternatively, it is determined from the infrared image, based on the distortion rate of the captured pattern, whether the object to be tested has a stereoscopic facial feature.
  • When the object to be tested is a planar object rather than a stereoscopic face, stereoscopic image information cannot be constructed from the infrared image, so it can be determined that the object to be tested does not have facial features.
  • a planar image of the object to be tested is sensed, for example, by an RGB sensor.
  • Alternatively, an infrared floodlight is used to project infrared flood light onto the object to be tested, and an infrared image sensor is used to capture the infrared flood light reflected by the object, thereby sensing a planar image of the object to be tested.
  • In step S3, an infrared floodlight is used to project infrared flood light onto the object to be tested, and an infrared image sensor is used to capture the infrared flood light reflected by the object, thereby obtaining a planar image of the object to be tested.
  • Step S3 and step S1 need to be performed in a time-division manner to prevent the planar image from aliasing with the infrared image; in this case, step S3 is performed before or after step S1.
  • Alternatively, step S3 and step S1 may be performed either in a time-division manner or simultaneously, since the problem of image aliasing does not arise.
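The time-division constraint can be sketched as a simple frame schedule that alternates the two illumination sources, so that a single sensor never captures structured light and flood light in the same frame. This is an illustrative plan, not the application's control logic:

```python
def frame_schedule(n_frames):
    """Time-division plan for a single infrared image sensor: even
    frames capture the structured-light pattern (for depth), odd frames
    capture the flood-illuminated plane image, so the two never alias."""
    plan = []
    for i in range(n_frames):
        source = "structured" if i % 2 == 0 else "flood"
        plan.append((i, source))
    return plan

print(frame_schedule(4))
# [(0, 'structured'), (1, 'flood'), (2, 'structured'), (3, 'flood')]
```

With two separate sensors, both sources could instead run simultaneously, trading cost for sensing time.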
  • In step S4, two-dimensional facial feature information of the object to be tested is extracted from the planar image, for example by a deep learning method.
  • The deep learning method includes: establishing a deep convolutional neural network model, training the model with a predetermined number of facial photos, and extracting facial feature parameters from the planar image with the trained model.
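Once the trained network has produced feature vectors, the matching step reduces to a similarity score compared against a threshold. A minimal sketch assuming precomputed feature vectors; cosine similarity and the 0.8 threshold are assumptions, not the application's stated metric:

```python
import math

def cosine_match(feat_a, feat_b):
    """Cosine similarity between two face-feature vectors (assumed to
    be outputs of the trained deep convolutional network)."""
    dot = sum(x * y for x, y in zip(feat_a, feat_b))
    na = math.sqrt(sum(x * x for x in feat_a))
    nb = math.sqrt(sum(y * y for y in feat_b))
    return dot / (na * nb)

def is_same_face(feat_a, feat_b, threshold=0.8):
    """Decision rule: faces match when the similarity (the matching
    coefficient here) reaches the threshold; 0.8 is a made-up value."""
    return cosine_match(feat_a, feat_b) >= threshold

print(is_same_face([3, 4, 0], [3, 4, 0]))  # identical vectors -> True
print(is_same_face([3, 4, 0], [0, 0, 5]))  # orthogonal vectors -> False
```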
  • the two-dimensional image information of the pre-stored object face includes facial feature information, and in step S4, it is compared whether the facial feature information of the object to be tested matches the pre-stored facial feature information.
  • The extracted facial features include, for example, the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, face shape, width of the nose, width of the chin, and the like; or/and distance information for any combination of the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, and so on, for example the distance between the nose and an eye.
  • the facial feature information is not limited to the examples listed above, but may be other suitable feature information.
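The distance-type features above can be sketched as Euclidean distances between landmark coordinates; the landmark names and pixel coordinates below are illustrative only:

```python
import math

def landmark_distances(landmarks, pairs):
    """Distances between selected facial landmarks (2D pixel coords),
    one entry per requested (name_a, name_b) pair."""
    return {p: math.dist(landmarks[p[0]], landmarks[p[1]]) for p in pairs}

# Hypothetical landmark positions in a 2D face image.
face = {"nose": (160, 200), "left_eye": (130, 160), "right_eye": (190, 160)}
print(landmark_distances(face, [("nose", "left_eye"), ("left_eye", "right_eye")]))
```

Such distance vectors are compared against the corresponding distances stored with the registered face.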
  • the comparison of the two-dimensional image information can be realized, for example, by comparing a plane picture of the object to be measured with a plane picture of the pre-stored object face.
  • In step S4, when the matching coefficient between the two-dimensional image information of the object to be tested and the pre-stored two-dimensional image information is confirmed to be greater than or equal to a predetermined threshold, it can be confirmed that the two match. Conversely, when the matching coefficient is confirmed to be smaller than the predetermined threshold, it can be confirmed that the two-dimensional image information of the object to be tested does not match the pre-stored two-dimensional image information.
  • In step S5, whether the identity of the object to be tested is legal is confirmed according to the execution results of the foregoing steps S2 and S4. It should be noted that when either of steps S2 and S4 is executed first and yields a negative result, step S5 confirms that the identity of the object to be tested is illegal, that is, the identity authentication fails; the process then ends, and the step not yet performed need not be executed. When the step executed first yields a positive result, the other step is executed afterwards.
  • When both obtained results are positive, that is, it is determined from the infrared image that the object to be tested has a facial feature, and it is confirmed that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the object face, the identity authentication is successful.
  • On the basis of the identity authentication method of the above embodiment, it is also feasible to add certain further steps.
  • a further step is added to confirm whether the object to be tested is a living body based on the infrared image and/or the planar image.
  • Or, for example, a step is added to confirm, according to the infrared image and/or the planar image, whether the eyes of the object to be tested are within a predetermined range in front of the electronic device, and the like.
  • the identity authentication method of the present application is for face recognition of an organism.
  • the organism is, for example, a human body or other suitable animal body.
  • The face image template includes, for example but not limited to, two-dimensional image information, three-dimensional image information, and the like.
  • In step S2, it is determined according to the infrared image whether the object to be tested has a face feature; in step S4, it is compared whether the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face.
  • In step S5, when it is determined in step S2 that the object to be tested has a face feature, and it is confirmed in step S4 that the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face, the identity authentication is successful.
  • the industry usually projects near-infrared light with a wavelength of 850 nm to obtain an infrared image of an object to be measured.
  • The inventors of the present application have found, through extensive analysis and research, that projecting infrared flood light with a wavelength of about 940 nm and infrared structured light with a wavelength of about 940 nm, and sensing their reflections, yields a more accurate sensing effect.
  • FIG. 2 is a schematic diagram showing the relationship between the radiation intensity of ambient light and the wavelength.
  • The wavelength is represented on the horizontal axis and indicated by the letter λ; the radiation intensity is represented on the vertical axis and indicated by the letter E.
  • Therefore, when step S1 projects infrared structured light with a wavelength in the range of [920, 960] nanometers onto the object to be measured and the infrared image of the object is obtained from the captured structured light, the image is less susceptible to interference from ambient light, which improves image acquisition accuracy.
  • Likewise, when step S3 projects infrared flood light with a wavelength in the range of [920, 960] nanometers onto the object to be measured and the planar image of the object is obtained from the captured flood light, the image is less susceptible to interference from ambient light, which improves image acquisition accuracy.
  • Further, the wavelength of the infrared structured light projected in step S1 is preferably 940 nm, and the wavelength of the infrared flood light projected in step S3 is preferably 940 nm.
  • In practice, the wavelength of the infrared structured light projected in step S1 and the wavelength of the infrared flood light projected in step S3 may deviate from 940 nm, for example by up to +15 or -15 nanometers. The wavelength range of the infrared structured light projected in step S1 is therefore, for example, [925, 955] nanometers, as is the wavelength range of the infrared flood light projected in step S3. This range [925, 955] still falls within the wavelength range [920, 960].
  • Alternatively, the wavelength of the infrared structured light projected in step S1 and the wavelength of the infrared flood light projected in step S3 may be any value falling within the above wavelength range of [920, 960] nanometers. Specific values are not listed here; any value within [920, 960] nanometers is feasible.
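The tolerance reasoning above can be checked arithmetically: the whole window 940 ± 15 nm stays inside the [920, 960] nm band, while the industry-typical 850 nm does not:

```python
# Preferred wavelength, tolerance, and band from the discussion above.
NOMINAL_NM = 940
TOLERANCE_NM = 15
BAND_NM = (920, 960)

def in_band(wavelength_nm, band=BAND_NM):
    """True when the wavelength falls inside the low-ambient-light band."""
    return band[0] <= wavelength_nm <= band[1]

# Every wavelength in [925, 955] nm lies inside [920, 960] nm.
print(all(in_band(nm) for nm in range(NOMINAL_NM - TOLERANCE_NM,
                                      NOMINAL_NM + TOLERANCE_NM + 1)))  # True
print(in_band(850))  # False
```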
  • step S1 and step S3 of the identity authentication method of the present application may also be performed by using infrared structured light or infrared flooding having a wavelength of 850 nm or other suitable wavelength.
  • FIG. 3 is a schematic diagram of a refinement process of the first embodiment of the identity authentication method of the present application.
  • In the present embodiment, step S2 is performed before step S4. When it is determined in step S2 that the object to be tested has the facial feature, execution of step S4 is started; when it is determined in step S2 that the object to be tested does not have the facial feature, the identity authentication fails and the process ends.
  • step S2 is performed simultaneously with step S3. In this way, the sensing time can be reduced and the work efficiency can be improved.
  • step S3 may be performed after step S2.
  • In this case, when it is determined in step S2 that the object to be tested has a facial feature, execution of step S3 is started, and then step S4 is performed. In this way, power consumption can be reduced.
  • step S3 may also be performed before step S2.
  • step S1 can be performed before step S3 or after step S3.
  • In other words, the identity authentication method of the present embodiment identifies the object to be tested twice. The first recognition is determining whether the object to be tested has facial features; if it is determined that it does not, the identity authentication fails and the process ends.
  • Otherwise, in step S4, it is confirmed whether the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional facial image information of the registered user. If they do not match, the identity authentication fails and the flow ends.
  • step S5 when it is confirmed that the object to be tested has a face feature, and the two-dimensional image information of the object to be tested is confirmed to match the pre-stored two-dimensional image information, the identity authentication is successful.
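The two-stage flow of this embodiment can be sketched as follows, with the sensing and comparison steps stood in for by simple predicates (an illustrative sketch, not the application's implementation):

```python
def authenticate(has_facial_feature, match_2d):
    """First-embodiment ordering: S1/S2 (infrared image, face-feature
    check) gate S3/S4 (plane image, 2D comparison); S5 succeeds only
    when both checks pass. The two callables stand in for the sensing
    steps and return True/False."""
    if not has_facial_feature():   # S2: uses the infrared image from S1
        return False               # fail early; S3/S4 never run
    if not match_2d():             # S4: uses the plane image from S3
        return False
    return True                    # S5: identity confirmed

print(authenticate(lambda: True, lambda: True))   # True
print(authenticate(lambda: False, lambda: True))  # False (photo/video)
```

The second embodiment simply swaps the order of the two gates, which is why a negative result from whichever check runs first ends the flow immediately.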
  • FIG. 4 is a schematic diagram of a refinement process of a second embodiment of the identity authentication method according to the present application.
  • step S4 is performed prior to step S2.
  • When it is confirmed in step S4 that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, execution of step S2 is started; when it is confirmed in step S4 that they do not match, the identity authentication fails and the process ends.
  • step S4 is performed simultaneously with step S1. In this way, the sensing time can be further reduced and the work efficiency can be improved.
  • step S1 can also be performed after step S4.
  • In this case, when it is confirmed in step S4 that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face, execution of step S1 is started, and then step S2 is performed. In this way, power consumption can be reduced.
  • step S1 may also be performed before step S4.
  • step S3 can be performed before step S1 or after step S1.
  • In other words, the identity authentication method of this embodiment identifies the object to be tested twice. The first recognition is comparing whether the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face; if they do not match, the identity authentication fails and the process ends.
  • However, even if the comparison finds that the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face, it cannot yet be determined that the object to be tested is a legitimate user: because step S4 judges only two-dimensional image information, recognition could also succeed if a photo or video of the legitimate user were used.
  • By performing step S2, the above case of recognition using a photo or video is avoided.
  • In step S2, based on the infrared image obtained in step S1, it is determined whether the object to be tested has a three-dimensional face feature. If it is determined that the object does not have the stereo face feature, the identity authentication fails and the process ends; in this case, it is possible that someone has used a photo or video of a legitimate user for identification.
  • step S5 when it is confirmed that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, and that the object to be tested has the three-dimensional face feature, the identity authentication is successful.
  • FIG. 5 is a structural block diagram of a first embodiment of the identity authentication apparatus of the present application.
  • the identity authentication device 1 includes a first projector 10, a second projector 11, an image sensing device 12, a processor 14, and a memory 16.
  • the memory 16 is configured to pre-store two-dimensional image information of an object's face.
  • the first projector 10 is configured to project infrared structured light to the object to be tested.
  • the second projector 11 is configured to project infrared flooding to the object to be tested.
  • the image sensing device 12 is configured to capture infrared structured light reflected by the object to be measured, and obtain an infrared image of the object to be tested.
  • The image sensing device 12 is further configured to capture the infrared flood light reflected by the object to be tested, thereby sensing a planar image of the object to be tested.
  • The processor 14 is configured to determine, according to the infrared image, whether the object to be tested has a facial feature; to obtain two-dimensional image information of the object to be tested according to the planar image and compare whether it matches the pre-stored two-dimensional image information of the object face; and to confirm whether the identity of the object to be tested is legal according to the determination and comparison results.
  • the first projector 10 projects infrared structure light onto the object to be tested, and the image sensing device 12 senses an infrared image of the object to be tested.
  • the processor 14 can obtain three-dimensional (3-Dimension, 3D) image information of the object to be tested according to the infrared image, so that the processor 14 can determine whether the object to be tested has facial features according to the infrared image.
  • The second projector 11 projects infrared flood light onto the object to be tested, and the image sensing device 12 senses a planar image of the object to be measured, so that the processor 14 can obtain two-dimensional (2-Dimension, 2D) image information of the object to be tested from the planar image and compare it with the pre-stored two-dimensional image information of the object face.
  • the identity authentication device 1 implements face recognition of the object to be tested.
  • the present application provides a novel optical identity authentication device 1.
  • the optical identity authentication device 1 can be applied to sensing over a long distance, and the sensing response speed is faster.
  • the longer distance is, for example, a distance within a range of 1 meter or even further.
  • the processor 14 confirms that the object to be tested includes the object face, and therefore, the identity of the object to be tested is legal, and the identity authentication is successful.
  • the identity authentication device 1 further comprises, for example, a control circuit 15.
  • the control circuit 15 is connected to the first projector 10, the second projector 11, and the image sensing device 12, respectively.
  • the control circuit 15 is for controlling the first projector 10, the second projector 11, and the image sensing device 12 to work together.
  • the control circuit 15 controls the first projector 10 and the second projector 11 to work in a time-sharing manner, thereby preventing the infrared image sensed by the image sensing device 12 from being aliased with the planar image.
  • the identity authentication device 1 further includes a high speed data transmission link 18 for transmitting a signal representing the infrared image and a signal representing the planar image in the image sensing device 12 to the processor 14 for processing.
  • the high speed data transfer link 18 is, for example, a Mobile Industry Processor Interface (MIPI).
  • the first projector 10 employs, for example, an optical component to project infrared structured light to the object to be tested.
  • The optical component includes, for example, a light source, a collimating lens, and a diffractive optical element (DOE). The light source generates an infrared laser beam; the collimating lens collimates the beam into approximately parallel light; and the diffractive optical element modulates the collimated beam to form a corresponding speckle pattern.
  • The speckle pattern includes, for example, one or more of a regular lattice pattern, a stripe pattern, a mesh pattern, a speckle (random dot) pattern, a coded pattern, and the like. Speckle is also called a random dot matrix.
  • The coded pattern consists, for example, of light of different waveforms, with each waveform representing a number; the combination of these numbers forms the code.
  • Alternatively, the infrared structured light may be produced by other suitable light sources or optical components.
  • the speckle pattern may also include other coding patterns.
  • In an embodiment, the first projector 10 projects a known infrared structured light pattern onto the object to be tested.
  • the image sensing device 12 or the processor 14 analyzes the depth information of the object to be tested based on the captured deformed infrared structured light pattern.
  • This type of infrared structured light is defined as spatially structured light.
  • the first projector 10 projects infrared structured light to the object to be tested.
  • The image sensing device 12 or the processor 14 calculates the depth information of the object to be measured, for example, by measuring the round-trip propagation delay of the projected light pulses (time of flight).
  • This type of infrared structured light is defined as time structured light.
  • the time structured light is, for example but not limited to, a combination of any one or both of a sine wave and a square wave.
  • The second projector 11 is, for example but not limited to, an infrared floodlight.
  • the image sensing device 12 includes, for example, an infrared image sensor 121 for capturing infrared structured light reflected by the object to be measured, and sensing an infrared image of the object to be tested.
  • the infrared image sensor 121 is further configured to capture infrared flood light reflected by the object to be tested, and obtain a planar image of the object to be tested.
  • The above approach uses the same infrared image sensor 121 to sense both the infrared image and the planar image of the object to be tested, thereby reducing cost.
  • the first projector 10 projects spatially structured light to the object to be tested.
  • Alternatively, the image sensing device 12 includes, for example, two infrared image sensors, which may differ in structure, sensing principle, resolution, and the like. One infrared image sensor is used to capture the infrared flood light reflected by the object to be tested, and the other infrared image sensor is used to capture the infrared structured light reflected by the object to be tested.
  • the processor 14 is, for example but not limited to, an AP (Application Processor).
  • the processor 14 confirms that the identity of the object to be tested is illegal, that is, the identity authentication fails, and the process ends.
  • the processor 14 does not need to initiate execution of other unauthenticated authentication procedures.
  • For example, when the processor 14 first performs "determining whether the object to be tested has a facial feature according to the infrared image" and determines that the object to be tested does not have a facial feature, the identity authentication fails, and the processor 14 need not start executing the other authentication procedure of "acquiring two-dimensional image information of the object to be tested according to the planar image, and comparing whether the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the object face".
  • Conversely, when the processor 14 first performs "acquiring two-dimensional image information of the object to be tested according to the planar image, and comparing whether the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the object face", and confirms that they do not match, the identity authentication fails, and the processor 14 need not start executing the authentication procedure of "determining whether the object to be tested has a facial feature according to the infrared image".
  • the processor 14 constructs stereoscopic image information based on the infrared image, for example, and determines whether the object to be tested has a facial feature by determining whether the stereoscopic image information has a facial feature.
  • the processor 14 can obtain position information and depth information of each pixel from the infrared image, so that the processor 14 constructs stereoscopic image information according to the obtained position information and depth information of each pixel.
  • the processor 14 can determine from the stereoscopic image information whether the object to be tested has a facial feature.
  • When determining whether the stereoscopic image information has a facial feature, the processor 14 may, for example, correlate the stereoscopic image information with a preset stereoscopic face template. If the correlation coefficient is greater than or equal to a preset value, the processor 14 determines that the stereoscopic image information has a stereoscopic facial feature; otherwise, the processor 14 determines that it does not, the identity authentication fails, and the flow ends.
  • the preset stereoscopic face template is also stored, for example, in the memory 16.
  • the processor 14 extracts the three-dimensional facial feature information of the object to be tested from the infrared image by using a deep learning method, and further determines whether the object to be tested has a facial feature.
  • The deep learning method includes: establishing a deep convolutional neural network model, training the model with a predetermined number of facial photos, and extracting facial feature parameters from the infrared image with the trained model.
  • Alternatively, the processor 14 determines from the infrared image, based on the distortion rate of the captured pattern, whether the object to be tested has a stereoscopic facial feature.
  • The processor 14 may also use other suitable methods of determining facial features.
  • When the object to be tested is a planar object rather than a stereoscopic face, stereoscopic image information cannot be constructed from the infrared image, so it can be determined that the object to be tested does not have facial features.
  • the processor 14 can also extract the two-dimensional facial feature information of the object to be tested from the planar image, for example, by a deep learning method.
  • The deep learning method includes: establishing a deep convolutional neural network model, training the model with a predetermined number of facial photos, and extracting facial feature parameters from the planar image with the trained model.
  • the two-dimensional image information of the pre-stored object face includes face feature information.
  • the processor 14 compares the two-dimensional facial feature information of the object to be tested with the pre-stored facial feature information.
  • The extracted facial features include, for example, the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, face shape, width of the nose, width of the chin, and the like; or/and distance information for any combination of the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, and so on, for example the distance between the nose and an eye.
  • the facial feature information is not limited to the examples listed above, but may be other suitable feature information.
  • The comparison can also be realized by the processor 14 by comparing a plane picture of the object to be tested with a plane picture of the pre-stored object face.
  • The processor 14 may also employ other suitable methods of comparing two-dimensional image information.
  • When the processor 14 confirms that the matching coefficient between the two-dimensional image information of the object to be tested and the pre-stored two-dimensional image information is greater than or equal to a predetermined threshold, it can confirm that the two match. Conversely, if the processor 14 confirms that the matching coefficient is less than the predetermined threshold, it can confirm that the two-dimensional image information of the object to be tested does not match the pre-stored two-dimensional image information.
  • When the processor 14 confirms that the object to be tested has a facial feature, and confirms that the two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information, it confirms that the identity of the object to be tested is legal, and the face recognition is successful.
  • the processor 14 may, for example, also perform an additional authentication procedure.
  • For example, the processor 14 further performs: confirming whether the object to be tested is a living body according to the infrared image and/or the planar image; and, for example, confirming according to the infrared image and/or the planar image whether the eyes of the object to be tested are within a predetermined range in front of the electronic device, and the like. The effect of the identity authentication apparatus 1 of the present application is thereby improved. It is to be understood that the present application is not limited to the technical solutions disclosed above; any solution that is the same as or similar to the technical idea of the present application falls within its protection scope.
  • the identity authentication device 1 of the present application is used for face recognition of a living body.
  • the organism is, for example, a human body or other suitable animal body.
  • the identity authentication device 1 of the present application will be described by taking face recognition as an example.
  • Before face recognition is performed, the user has registered his or her own face image template in advance, and it is stored in the memory 16.
  • The face image template includes, for example but not limited to, two-dimensional image information, three-dimensional image information, and the like.
  • The processor 14 determines, according to the infrared image, whether the object to be tested has a face feature, and compares whether the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face.
  • When the processor 14 determines that the object to be tested has a face feature and confirms that the two-dimensional image information of the object to be tested matches the two-dimensional image information of the registered user's face, the face recognition is successful.
  • the industry usually projects near-infrared light with a wavelength of 850 nm to obtain an infrared image of an object to be measured.
  • The inventors of the present application have found, through extensive analysis and research, that when the first projector 10 projects infrared structured light of about 940 nm onto the object to be tested, and the second projector 11 projects infrared flood light of about 940 nm onto the object to be tested, the image sensing device 12 can obtain a more accurate sensing effect.
  • Specifically, near-infrared light in the wavelength range of [920, 960] nanometers in ambient light is easily absorbed by the atmosphere, so its intensity is greatly attenuated.
  • Therefore, when the first projector 10 projects infrared structured light in the wavelength range of [920, 960] nanometers onto the object to be tested and the image sensing device 12 obtains an infrared image of the object from the captured structured light, the image is less subject to interference from ambient light, which improves image acquisition accuracy. Likewise, when the second projector 11 projects infrared flood light in the same wavelength range and the image sensing device 12 obtains a planar image of the object from the captured flood light, the planar image is less subject to interference from ambient light, which also improves image acquisition accuracy.
  • Further, the wavelength of the infrared structured light projected by the first projector 10 is preferably 940 nm, and the wavelength of the infrared flood light projected by the second projector 11 is preferably 940 nm.
  • the wavelength of the red-pan-structured light projected by the first projector 10 and the wavelength of the infrared floodlight projected by the second projector 11 may deviate from 940 nm, for example, There will be a deviation of (+15) nanometers or (-15) nanometers. Therefore, the wavelength range of the infrared structured light projected by the first projector 10 is, for example, [925, 955] nanometers, and the wavelength range of the infrared flood light projected by the second projector 11 is, for example, [925, 955] nanometers. It can be seen that this wavelength range [925, 955] still falls within the wavelength range [920, 960].
  • the wavelength of the infrared structured light projected by the first projector 10 and the wavelength of the infrared flood light projected by the second projector 11 are any values falling within the wavelength range [920, 960] nanometers.
  • specific numerical values are not listed here, but any value falling within the wavelength range [920, 960] nanometers is feasible.
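As a minimal illustration (ours, not part of the patent), the wavelength constraint above amounts to a simple range check; the function name and constants below are our own:

```python
# Nominal wavelength and tolerance from the text: 940 nm +/- 15 nm,
# which must stay inside the atmospheric-absorption band [920, 960] nm.
NOMINAL_NM = 940
TOLERANCE_NM = 15
ABSORPTION_BAND_NM = (920, 960)

def in_absorption_band(wavelength_nm: float) -> bool:
    """Return True if the wavelength lies in the low-ambient-interference band."""
    low, high = ABSORPTION_BAND_NM
    return low <= wavelength_nm <= high

# The projector's worst-case wavelengths both fall inside the band.
assert in_absorption_band(NOMINAL_NM - TOLERANCE_NM)  # 925 nm
assert in_absorption_band(NOMINAL_NM + TOLERANCE_NM)  # 955 nm
```

Note that the conventional 850 nm wavelength mentioned above fails this check, which is the patent's argument for preferring the 940 nm band.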
  • Alternatively, the first projector 10 can also project infrared structured light with a wavelength of 850 nanometers or another suitable wavelength.
  • Likewise, the second projector 11 can also project infrared flood light at a wavelength of 850 nanometers or another suitable wavelength.
  • The processor 14 first determines, from the infrared image, whether the object to be tested has facial features. When the processor 14 determines that the object does not have facial features, it confirms that the identity of the object is illegitimate; that is, identity authentication fails and the process ends. When the processor 14 determines that the object has facial features, it then obtains the two-dimensional image information of the object from the planar image and compares whether it matches the pre-stored two-dimensional image information. When the two-dimensional image information of the object is confirmed not to match the pre-stored two-dimensional image information, identity authentication fails and the process ends.
  • Otherwise, identity authentication succeeds.
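A minimal sketch (ours, not the patent's implementation) of this two-stage check, with the face-feature test and the 2D comparison supplied as placeholder callables:

```python
def authenticate(infrared_image, planar_image, has_face_features, matches_stored_2d):
    """Two-stage identity authentication: 3D face-feature check, then 2D match.

    `has_face_features` and `matches_stored_2d` are hypothetical callables
    standing in for the processor's two recognition steps.
    """
    if not has_face_features(infrared_image):
        return False  # first recognition failed: authentication fails, flow ends
    if not matches_stored_2d(planar_image):
        return False  # second recognition failed
    return True       # both checks passed: identity is legitimate

# Example with stub checks:
assert authenticate("ir", "rgb", lambda ir: True, lambda pl: True) is True
assert authenticate("ir", "rgb", lambda ir: False, lambda pl: True) is False
```

The early returns mirror the text: whichever recognition fails first ends the flow, and the remaining step is never executed.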
  • While the processor 14 determines, from the infrared image, whether the object to be tested has facial features, the second projector 11 projects infrared flood light onto the object, and the image sensing device 12 captures the infrared flood light reflected by the object and senses a planar image of it. In this way, sensing time can be further reduced and work efficiency improved.
  • Alternatively, only after the processor 14 determines that the object to be tested has facial features does the second projector 11 start to project infrared flood light onto the object, and the image sensing device 12 captures the infrared flood light reflected from the object to obtain a planar image of it.
  • The processor 14 then obtains the two-dimensional image information of the object from the planar image and compares it with the pre-stored two-dimensional image information. In this way, power consumption can be reduced.
  • The control circuit 15 can control the first projector 10 to operate before the second projector 11, so that the image sensing device 12 senses the infrared image and then the planar image of the object to be tested in sequence.
  • The processor 14 then performs "determining, from the infrared image, whether the object to be tested has facial features", and only when the object is determined to have facial features does it perform "obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information".
  • The control circuit 15 can also control the second projector 11 to operate before the first projector 10.
  • In that case, the image sensing device 12 senses the planar image and then the infrared image of the object in sequence.
  • The identity authentication device 1 of the present embodiment recognizes the object to be tested twice. The first recognition is: the processor 14 determines whether the object has facial features.
  • For example, the processor 14 can construct stereoscopic image information of the object from the infrared image and determine whether the object has three-dimensional facial features by determining whether the stereoscopic image information has facial features.
  • If the processor 14 cannot construct stereoscopic image information, it judges that the object does not have three-dimensional facial features.
  • When the processor 14 determines that the object has three-dimensional facial features, it performs the second recognition: obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information.
  • If the processor 14 confirms that the two-dimensional image information of the object does not match the two-dimensional image information of the registered user's face, identity authentication fails and the process ends.
  • If the processor 14 confirms that the two-dimensional image information of the object matches the two-dimensional image information of the registered user's face, identity authentication succeeds.
  • In the flow above, the processor 14 first performs "determining, from the infrared image, whether the object to be tested has facial features", and then decides, based on the result, whether to perform "obtaining the two-dimensional image information of the object from the planar image and comparing it with the pre-stored two-dimensional image information".
  • Alternatively, the processor 14 may first perform "obtaining the two-dimensional image information of the object from the planar image and comparing it with the pre-stored two-dimensional image information", and then decide, based on the comparison result, whether to perform "determining, from the infrared image, whether the object has facial features".
  • In that case, the processor 14 first obtains the two-dimensional image information of the object from the planar image and compares it with the pre-stored two-dimensional image information. When the two-dimensional image information of the object is confirmed not to match the pre-stored information, the processor 14 confirms that the identity of the object is illegitimate; that is, identity authentication fails and the process ends. When the two-dimensional image information matches, the processor 14 determines, from the infrared image, whether the object has facial features; when the processor 14 determines that the object does not have facial features, identity authentication fails and the process ends.
  • When the processor 14 determines that the object has facial features, identity authentication succeeds.
  • While the processor 14 performs "obtaining the two-dimensional image information of the object to be tested from the planar image and comparing it with the pre-stored two-dimensional image information", the first projector 10 simultaneously projects infrared structured light onto the object and the image sensing device 12 senses an infrared image of it. In this way, sensing time can be further reduced and work efficiency improved.
  • Alternatively, only after the processor 14 determines that the two-dimensional image information of the object matches the pre-stored two-dimensional image information does the first projector 10 project infrared structured light onto the object and the image sensing device 12 sense an infrared image of it.
  • The processor 14 then determines, from the infrared image, whether the object has three-dimensional facial features. In this way, power consumption can be reduced.
  • The control circuit 15 controls the second projector 11 to operate before the first projector 10, so that the image sensing device 12 senses the planar image and then the infrared image of the object in sequence.
  • The processor 14 then performs "obtaining the two-dimensional image information of the object from the planar image and comparing it with the pre-stored two-dimensional image information", followed by "determining, from the infrared image, whether the object has facial features".
  • The control circuit 15 can also control the first projector 10 to operate before the second projector 11.
  • In that case, the image sensing device 12 senses the infrared image and then the planar image of the object in sequence.
  • The identity authentication device 1 of the present embodiment recognizes the object to be tested twice. The first recognition is: the processor 14 determines whether the two-dimensional image information of the object matches the two-dimensional image information of the registered user's face; if they are determined not to match, identity authentication fails.
  • Even when the processor 14 determines that the two-dimensional image information of the object matches that of the registered user's face, it cannot yet be concluded that the object is a legitimate user, because the judgment is based on two-dimensional image information alone, and a photo or video of a legitimate user could also pass this recognition.
  • The processor 14 therefore performs a second recognition on the object: determining, from the infrared image, whether the object has facial features.
  • If not, the processor 14 confirms that the identity of the object is illegitimate; that is, identity authentication fails and the process ends.
  • Otherwise, identity authentication succeeds.
  • The order of the processor 14's two recognitions must match the sensing order of the planar image and the infrared image of the object.
  • The working order of the above devices can be reasonably determined according to the invention as set forth in the present application.
  • FIG. 6 is a structural block diagram of a second embodiment of the identity authentication apparatus of the present application.
  • The identity authentication device 2 is substantially identical in structure to the identity authentication device 1; the main difference is that the image sensing device 22 of the identity authentication device 2 further comprises an RGB image sensor 222.
  • The RGB image sensor 222 is configured to sense a planar image of the object to be tested, so the second projector 11 can be omitted. Alternatively, the second projector 11 may be retained: for example, when ambient light is sufficient, the RGB image sensor 222 senses the planar image of the object; when ambient light is dark,
  • the second projector 11 projects infrared flood light onto the object, and the infrared image sensor 221 senses its planar image.
  • When the RGB image sensor 222 is used to sense the planar image of the object, the RGB image sensor 222 and the infrared image sensor 221 can operate in a time-shared or simultaneous manner.
  • FIG. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
  • The electronic device 100 is, for example but not limited to, a suitable type of electronic product such as a consumer electronic product, a household electronic product, a vehicle-mounted electronic product, or a financial terminal product.
  • Consumer electronic products include, but are not limited to, mobile phones, tablets, notebook computers, desktop monitors, and all-in-one computers.
  • Household electronic products include, but are not limited to, smart door locks, televisions, refrigerators, wearable devices, and the like.
  • Vehicle-mounted electronic products include, but are not limited to, car navigation systems, car DVDs, and the like.
  • Financial terminal products include, but are not limited to, ATMs, self-service terminals, and the like.
  • The electronic device 100 includes the above-described identity authentication device 1.
  • The electronic device 100 decides whether to execute a corresponding function according to the identity authentication result of the identity authentication device 1.
  • The corresponding functions include, for example but not limited to, any one or more of unlocking, payment, and launching a pre-stored application.
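A minimal sketch (ours; the action names are hypothetical) of gating such functions on the authentication result:

```python
def on_auth_result(auth_ok: bool, action):
    """Run the requested action (e.g. unlock, pay, launch an app) only when
    the identity authentication device reports success; otherwise deny."""
    if not auth_ok:
        return "denied"
    return action()

# Hypothetical actions standing in for the device's functions:
assert on_auth_result(True, lambda: "unlocked") == "unlocked"
assert on_auth_result(False, lambda: "unlocked") == "denied"
```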
  • an electronic device will be described as an example of a mobile phone.
  • The mobile phone is, for example, a full-screen mobile phone, and the identity authentication device 1 is provided, for example, at the top of its front face.
  • The mobile phone is, however, not limited to a full-screen mobile phone.
  • Lifting the phone or touching its screen can wake up the identity authentication device 1.
  • When the identity authentication device 1 is woken up and recognizes that the user in front of the phone is a legitimate user, the screen is unlocked.
  • Because the electronic device 100 applies the identity authentication device 1, the electronic device 100 can sense the object to be tested at a longer distance, and its sensing response is faster.

Abstract

The present application discloses an identity authentication method, an identity authentication device, and an electronic device. The identity authentication method includes: step S1: projecting infrared structured light onto an object to be tested and sensing an infrared image of the object; step S2: determining, from the infrared image, whether the object has facial features; step S3: sensing a planar image of the object; step S4: obtaining two-dimensional image information of the object from the planar image, and comparing whether the two-dimensional image information of the object matches pre-stored two-dimensional image information of an object's face; step S5: confirming, based on the execution results of steps S2 and S4, whether the identity of the object is legitimate. The identity authentication device runs the identity authentication method. The electronic device includes the identity authentication device.

Description

Identity Authentication Method, Identity Authentication Device, and Electronic Device
Technical Field
The present application relates to an identity authentication method, an identity authentication device, and an electronic device.
Background Art
With the development of technology, more and more settings use various sensing technologies to identify objects, for example fingerprint recognition and iris recognition. However, fingerprint recognition and iris recognition each have their own limitations: for example, fingerprint recognition cannot sense at longer distances, and iris recognition has a slow sensing response.
It is therefore necessary to provide a new type of sensing technology for identity authentication.
Summary of the Invention
The embodiments of the present application aim to solve at least one of the technical problems existing in the prior art. To this end, the embodiments of the present application provide an identity authentication method, an identity authentication device, and an electronic device.
First, the present application provides an identity authentication method, including:
Step S1: projecting infrared structured light onto an object to be tested and sensing an infrared image of the object;
Step S2: determining, from the infrared image, whether the object has facial features;
Step S3: sensing a planar image of the object;
Step S4: obtaining two-dimensional image information of the object from the planar image, and comparing whether the two-dimensional image information of the object matches pre-stored two-dimensional image information of an object's face;
Step S5: confirming, based on the execution results of steps S2 and S4, whether the identity of the object is legitimate.
In the embodiments of the present application, the identity of the object to be tested is authenticated by optical image sensing. The infrared image can reflect the 3D attribute information of the object, so whether the object has facial features can be determined from the infrared image. In addition, combined with the comparison of 2D image information, the two together confirm whether the identity of the object is legitimate. The present application thus provides a new type of optical sensing technology for identity authentication.
Moreover, this optical sensing technology is suitable for sensing at longer distances, and its sensing response is fast. The longer distance is, for example, within 1 meter or even somewhat farther.
In some embodiments, in step S2, whether the object to be tested has three-dimensional facial features is determined from the infrared image.
In some embodiments, step S2 includes: constructing stereoscopic image information from the infrared image, and determining whether the object has facial features by determining whether the stereoscopic image information has facial features.
In some embodiments, in step S3, infrared flood light is projected onto the object and an infrared image sensor senses the planar image of the object; or, in step S3, an RGB image sensor senses the planar image of the object.
In some embodiments, when step S3 projects infrared flood light onto the object and uses an infrared image sensor to sense its planar image, steps S1 and S3 are executed in a time-shared manner, with step S1 executed before or after step S3.
In some embodiments, in steps S1 and S3, the same infrared image sensor senses the infrared image and the planar image of the object in a time-shared manner.
In some embodiments, when step S3 uses an RGB image sensor to sense the planar image of the object, steps S1 and S3 are executed in a time-shared manner, with step S1 executed before or after step S3; or, steps S1 and S3 are executed simultaneously.
In some embodiments, step S2 is executed before or after step S4; or, steps S2 and S4 are executed simultaneously.
In some embodiments, in step S5, when whichever of steps S2 and S4 is executed first yields a negative result, identity authentication fails.
In some embodiments, in step S5, when steps S2 and S4 have both been executed and both yield positive results, identity authentication succeeds.
In some embodiments, the two-dimensional image information of the object's face is two-dimensional image information of a human face. In step S4, what is compared is whether the sensed two-dimensional image information of the object to be tested matches the pre-stored two-dimensional image information of the human face; in step S2, whether the object has human facial features is determined from the infrared image.
In some embodiments, when the sensed two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, the object includes the human face, and identity authentication succeeds.
In some embodiments, the two-dimensional image information of the object includes feature information, and the pre-stored two-dimensional image information of the human face includes facial feature information; what is compared in step S4 is whether the feature information of the object matches the pre-stored facial feature information.
In some embodiments, step S4 includes: extracting two-dimensional face feature information of the object by a deep learning method.
In some embodiments, the deep learning method includes: building a deep convolutional neural network model, training the model with a predetermined number of face photographs, and extracting face feature parameters with the trained model.
In some embodiments, the wavelength of the infrared structured light in step S1 is 940 nanometers.
In some embodiments, the wavelength of the infrared flood light in step S3 is 940 nanometers.
In some embodiments, the wavelength range of the infrared structured light in step S1 is [925, 955] nanometers, and the wavelength range of the infrared flood light in step S3 is [925, 955] nanometers.
In some embodiments, in step S1, the infrared structured light projected onto the object forms a pattern, the pattern being any one or a combination of a dot-matrix, stripe, speckle, or grid pattern.
In some embodiments, step S1 is executed first, and then steps S2 and S3 are executed simultaneously; when step S2 determines that the object has facial features, step S4 is started; otherwise, when step S2 determines that the object does not have facial features, identity authentication fails.
In some embodiments, step S3 is executed first, and then steps S4 and S1 are executed simultaneously; when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, step S2 is started; otherwise, when step S4 confirms that they do not match, identity authentication fails.
In some embodiments, step S1 is executed first; step S2 is executed after step S1; when step S2 determines that the object has facial features, step S3 is started, and step S4 is executed after step S3; when step S2 determines that the object does not have facial features, identity authentication fails; or,
step S3 is executed first; step S4 is executed after step S3; when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, step S1 is executed, and step S2 is executed after step S1; when step S4 confirms that they do not match, identity authentication fails; or,
steps S1, S3, and S2 are executed in sequence; when step S2 determines that the object has facial features, step S4 is started; otherwise, identity authentication fails; or,
steps S1, S3, and S4 are executed in sequence; when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, step S2 is started; otherwise, identity authentication fails; or,
steps S3, S1, and S2 are executed in sequence; when step S2 determines that the object has facial features, step S4 is started; otherwise, identity authentication fails; or,
steps S3, S1, and S4 are executed in sequence; when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, step S2 is started; otherwise, identity authentication fails.
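These ordering variants share one invariant: authentication succeeds only if both S2 and S4 return positive results, and fails as soon as either returns a negative one. A small sketch (ours, with the recognition checks as placeholder callables) that runs the two checks in any configured order:

```python
def run_checks(order, checks):
    """Run the S2/S4 recognition checks in the given order.

    `order` is a sequence of step names, e.g. ("S2", "S4") or ("S4", "S2");
    `checks` maps each name to a zero-argument callable returning bool.
    Authentication fails at the first negative result.
    """
    for step in order:
        if not checks[step]():
            return False  # early exit: remaining steps are skipped
    return True

checks = {"S2": lambda: True, "S4": lambda: False}
assert run_checks(("S2", "S4"), checks) is False
assert run_checks(("S4", "S2"), checks) is False
checks_ok = {"S2": lambda: True, "S4": lambda: True}
assert run_checks(("S2", "S4"), checks_ok) is True
```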
In some embodiments, the identity authentication method is applied to an electronic device for facial recognition of a living organism.
The present application further provides an identity authentication device, including:
a memory for pre-storing two-dimensional image information of a sample object;
a first projector for projecting infrared structured light onto an object to be tested;
an image sensing device for capturing the infrared structured light reflected by the object to obtain an infrared image of the object, and for sensing a planar image of the object; and
a processor for determining, from the infrared image, whether the object has facial features; the processor is also for obtaining two-dimensional image information of the object from the planar image and comparing whether it matches pre-stored two-dimensional image information of an object's face; and the processor confirms, based on the foregoing determination and comparison results, whether the identity of the object is legitimate.
In the embodiments of the present application, the identity authentication device authenticates the identity of the object by optical image sensing. The infrared image can reflect the 3D attribute information of the object, so the processor can determine from the infrared image whether the object has facial features. In addition, the processor combines this with the comparison of 2D image information to confirm whether the identity of the object is legitimate. The present application thus provides a new type of optical identity authentication device.
Moreover, this optical identity authentication device is suitable for sensing at longer distances, and its sensing response is fast. The longer distance is, for example, within 1 meter or even somewhat farther.
In some embodiments, the processor determines, from the infrared image, whether the object has three-dimensional facial features.
In some embodiments, the processor constructs stereoscopic image information from the infrared image and determines whether the object has facial features by determining whether the stereoscopic image information has facial features.
In some embodiments, the image sensing device includes an infrared image sensor for capturing the infrared structured light reflected by the object and sensing the infrared image of the object.
In some embodiments, the identity authentication device further includes a second projector for projecting infrared flood light onto the object; the image sensing device further captures the infrared flood light reflected by the object and senses the planar image of the object.
In some embodiments, the infrared image sensor senses the infrared image and the planar image of the object in a time-shared manner.
In some embodiments, the image sensing device further includes an RGB image sensor for sensing the planar image of the object.
In some embodiments, the identity authentication device further includes a control circuit for controlling the first projector and the second projector in a time-shared manner; during identity authentication, the control circuit controls the first projector to operate before or after the second projector.
In some embodiments, during identity authentication, the control circuit controls the first projector, the second projector, and the image sensing device to work in coordination.
In some embodiments, the identity authentication device further includes a high-speed data transfer link for transferring the infrared image signal and the planar image signal sensed by the image sensing device to the processor for processing.
In some embodiments, when whichever of "determining, from the infrared image, whether the object has facial features" and "obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information" is executed first by the processor yields a negative result, identity authentication fails.
In some embodiments, when the processor determines that the object has facial features and confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, identity authentication succeeds.
In some embodiments, the processor first executes: determining, from the infrared image, whether the object has facial features, and then executes: obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information; or, the processor first executes: obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information, and then executes: determining, from the infrared image, whether the object has facial features.
In some embodiments, the two-dimensional image information of the object's face is two-dimensional image information of a human face; the processor compares whether the two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, and determines, from the infrared image, whether the object has human facial features.
In some embodiments, when the processor confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, it can confirm that the object includes the human face, and identity authentication succeeds.
In some embodiments, when the processor determines from the infrared image that the object has human facial features and confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, the processor confirms that the object includes the human face, and identity authentication succeeds.
In some embodiments, the two-dimensional image information of the object includes feature information, and the pre-stored two-dimensional image information of the human face includes facial feature information; the processor confirms whether the two-dimensional image information of the object matches the pre-stored two-dimensional image information by comparing the feature information of the object with the pre-stored facial feature information.
In some embodiments, the processor extracts two-dimensional face feature information of the object by a deep learning method.
In some embodiments, the processor builds a deep convolutional neural network model, trains the model with a predetermined number of face photographs, and extracts face feature parameters with the trained model.
In some embodiments, the wavelength of the infrared structured light is 940 nanometers.
In some embodiments, the wavelength of the infrared flood light is 940 nanometers.
In some embodiments, the wavelength range of the infrared structured light projected by the first projector is [925, 955] nanometers, and the wavelength range of the infrared flood light projected by the second projector is [925, 955] nanometers.
In some embodiments, the infrared structured light projected by the first projector onto the object forms a pattern, the pattern being any one or a combination of a dot-matrix, stripe, speckle, or grid pattern.
In some embodiments, during identity authentication, the second projector first projects infrared flood light onto the object; then, while the processor obtains the two-dimensional image information of the object from the planar image and compares whether it matches the pre-stored two-dimensional image information, the first projector projects infrared structured light onto the object. When the processor confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information, the processor then determines, from the infrared image, whether the stereoscopic image information has facial features; when the processor confirms that they do not match, identity authentication fails.
In some embodiments, during identity authentication, the first projector first projects infrared structured light onto the target object; then, while the processor determines from the infrared image whether the object has facial features, the second projector projects infrared flood light onto the object. When the processor determines that the object has facial features, the processor then obtains the two-dimensional image information of the object from the planar image and compares whether it matches the pre-stored two-dimensional image information; when the processor determines that the object does not have facial features, identity authentication fails.
In some embodiments, the processor simultaneously executes: "determining, from the infrared image, whether the object has facial features" and "obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information".
The present application further provides an electronic device including any one of the identity authentication devices described above.
In some embodiments, the electronic device decides whether to execute a corresponding function according to the identity authentication result of the identity authentication device.
In some embodiments, the corresponding function includes any one or more of unlocking, payment, and launching a pre-stored application.
In some embodiments, the electronic device includes any one or more of a consumer electronic product, a household electronic product, a vehicle-mounted electronic product, and a financial terminal product.
Since the electronic device of the present application includes the above identity authentication device, the electronic device can sense the object to be tested at a longer distance, and its sensing response is fast.
Additional aspects and advantages of the embodiments of the present application will be partly given in the following description, partly become apparent from it, or be learned through practice of the embodiments.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the embodiments of the present application will become apparent and easy to understand from the following description of the embodiments with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of the identity authentication method of the present application.
FIG. 2 is a schematic diagram of the relationship between the wavelength and intensity of near-infrared light.
FIG. 3 is a detailed flowchart of the first embodiment of the identity authentication method of the present application.
FIG. 4 is a detailed flowchart of the second embodiment of the identity authentication method of the present application.
FIG. 5 is a structural block diagram of the first embodiment of the identity authentication device of the present application.
FIG. 6 is a structural block diagram of the second embodiment of the identity authentication device of the present application.
FIG. 7 is a schematic structural diagram of an embodiment of the electronic device of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are shown in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present application, and are not to be construed as limiting it.
In the description of the present application, it should be understood that the terms "first" and "second" are used for description only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "multiple" means two or more, unless explicitly and specifically defined otherwise.
In the description of the present application, it should be noted that, unless otherwise explicitly specified and defined, the terms "mounted", "joined", and "connected" should be understood broadly; for example, a connection may be fixed, detachable, or integral; it may be mechanical, electrical, or communicative; it may be direct, or indirect through an intermediary; and it may be an internal connection between two elements or an interaction between two elements. For a person of ordinary skill in the art, the specific meaning of the above terms in the present application can be understood according to the specific circumstances.
The following disclosure provides many different embodiments or examples for implementing different structures of the present application. To simplify the disclosure, components and settings of specific examples are described below. They are, of course, merely examples and are not intended to limit the present application. Furthermore, the present application may repeat reference numerals and/or letters in different examples; such repetition is for simplicity and clarity and does not itself indicate a relationship between the various embodiments and/or settings discussed.
Further, the described features and structures may be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of the embodiments of the present application. However, a person skilled in the art should realize that the technical solutions of the present application can also be practiced without one or more of the specific details, or with other structures, components, and so on. In other cases, well-known structures or operations are not shown or described in detail to avoid obscuring the present application.
Furthermore, it should be noted in advance that the step numbers S1, S2, S3, S4, and S5 in the specification and claims of the present application serve only to distinguish the steps clearly and do not represent the order in which the steps are executed.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the identity authentication method of the present application. The identity authentication method is applied, for example but not limited to, to an electronic device, where the electronic device is, for example but not limited to, a suitable type of electronic product such as a consumer electronic product, a household electronic product, a vehicle-mounted electronic product, or a financial terminal product. Consumer electronic products include, but are not limited to, mobile phones, tablets, notebook computers, desktop monitors, and all-in-one computers. Household electronic products include, but are not limited to, smart door locks, televisions, refrigerators, wearable devices, and the like. Vehicle-mounted electronic products include, but are not limited to, car navigation systems, car DVDs, and the like. Financial terminal products include, but are not limited to, ATMs, self-service terminals, and the like. The identity authentication method includes:
Step S1: projecting infrared structured light onto an object to be tested and sensing an infrared image of the object;
Step S2: determining, from the infrared image, whether the object has facial features;
Step S3: sensing a planar image of the object;
Step S4: obtaining two-dimensional image information of the object from the planar image, and comparing whether the two-dimensional image information of the object matches pre-stored two-dimensional image information of an object's face;
Step S5: confirming, based on the execution results of steps S2 and S4, whether the identity of the object is legitimate.
In the present embodiment, steps S2 and S4 are preferably performed one after the other. When whichever of steps S2 and S4 is executed first yields a negative result, step S5 confirms that the identity of the object is illegitimate; that is, identity authentication fails, the process ends, and the remaining unexecuted steps need not be performed. For example, when step S2 determines that the object does not have facial features, identity authentication fails and step S4 need not be executed. Similarly, when step S4 is executed first and yields a negative result, identity authentication fails and step S2 need not be executed.
Conversely, when whichever of steps S2 and S4 is executed first yields a positive result, the remaining steps continue to be executed. This saves sensing time and speeds up the sensing response.
Of course, alternatively, steps S2 and S4 may also be performed simultaneously.
In the embodiments of the present application, infrared structured light is projected onto the object to obtain its infrared image; the infrared image can reflect the three-dimensional (3D) attribute information of the object, so whether the object has facial features can be determined from the infrared image. In addition, a planar image of the object is sensed, from which the two-dimensional (2D) image information of the object can be obtained and compared with the pre-stored two-dimensional image information of an object's face. In this way, facial recognition of the object is achieved by combining 3D and 2D.
It can thus be seen that the present application provides a new type of optical sensing technology for identity authentication. Moreover, this optical sensing technology is suitable for sensing at longer distances, and its sensing response is fast. The longer distance is, for example, within 1 meter or even somewhat farther.
As can be seen from the above, in step S2, whether the object has facial features is determined from the infrared image; this is one recognition of the object, and when this recognition fails, identity authentication fails and the process ends.
In addition, in step S4, the two-dimensional image information of the object is obtained from the planar image and compared with the pre-stored two-dimensional image information of an object's face; this is another recognition of the object, and when this recognition fails, identity authentication fails and the process ends.
In the present embodiment, optionally, when at least both of the above recognitions pass, step S5 confirms that the object includes the object's face, and therefore that the identity of the object is legitimate and identity authentication succeeds.
In some specific embodiments, in step S1, an optical assembly is used to project infrared structured light onto the object, and an infrared image sensor captures the infrared structured light reflected by the object and senses the infrared image of the object. The optical assembly includes, for example, a light source, a collimating lens, and a diffractive optical element (DOE): the light source generates an infrared laser beam; the collimating lens collimates the laser beam into approximately parallel light; and the diffractive optical element modulates the collimated beam into a corresponding speckle pattern. The speckle pattern includes, for example, one or more of a regular dot-matrix, stripe, grid, speckle, or coded pattern, where the speckle pattern is also called a random dot-matrix pattern. A coded pattern, for example, consists of light of different waveforms, each waveform representing a digit, and the combination of waveforms forming the code. Of course, alternatively, the infrared structured light can also be produced by other suitable optical elements or assemblies, and the pattern may include other coded patterns.
The above uses the light-coding principle: in step S1, a known infrared structured-light pattern is projected onto the object, and the image sensing device or processor analyzes the captured, deformed structured-light pattern to determine the depth information of the object. Such infrared structured light is defined as spatial structured light.
Alternatively, the time-of-flight (ToF) principle can also be used: in step S1, infrared structured light is projected onto the object, and the image sensing device or processor calculates the depth information of the object, for example, by measuring the transmission delay time between light pulses. Such infrared structured light is defined as temporal structured light.
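As an illustration of the ToF relation just described (ours; only the standard round-trip model is assumed), depth follows directly from the measured delay:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_depth_m(delay_s: float) -> float:
    """Depth from a time-of-flight delay: the pulse travels to the object
    and back, so the one-way distance is c * delay / 2."""
    return SPEED_OF_LIGHT_M_PER_S * delay_s / 2.0

# A roughly 6.67 ns round-trip delay corresponds to about 1 m of depth.
depth = tof_depth_m(6.671e-9)
assert abs(depth - 1.0) < 0.01
```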
The temporal structured light is, for example but not limited to, a sine wave, a square wave, or a combination of the two.
In step S2, for example, stereoscopic image information is constructed from the infrared image, and whether the object has facial features is determined by determining whether the stereoscopic image information has facial features.
Specifically, the position information and depth information of each pixel can be obtained from the infrared image, and stereoscopic image information is constructed from the obtained position and depth information of each pixel. Accordingly, whether the object has facial features can be determined from the stereoscopic image information.
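A minimal sketch (ours; the pinhole intrinsics fx, fy, cx, cy are assumed and not given in the patent) of turning per-pixel position and depth into 3D points:

```python
def backproject(pixels, fx, fy, cx, cy):
    """Turn (u, v, depth) pixel samples into 3D camera-frame points using
    the standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    points = []
    for u, v, z in pixels:
        points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# A pixel at the principal point back-projects onto the optical axis.
pts = backproject([(320, 240, 0.5)], fx=500.0, fy=500.0, cx=320.0, cy=240.0)
assert pts == [(0.0, 0.0, 0.5)]
```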
In one example, when determining whether the stereoscopic image information has facial features, the stereoscopic image information can, for example, be correlated with a preset stereoscopic face template; if the comparison shows a correlation coefficient greater than or equal to a preset value, the stereoscopic image information is judged to have three-dimensional facial features; otherwise, it is judged not to have them, identity authentication fails, and the process ends.
As another example, a deep learning method extracts three-dimensional facial feature information of the object from the infrared image, from which whether the object has facial features can be determined.
The deep learning method includes: building a deep convolutional neural network model, training the model with a predetermined number of face photographs, and extracting facial feature parameters from the infrared image with the trained model.
As yet another example, whether the object has three-dimensional facial features can be determined from the infrared image by computing the degree of distortion.
The above are merely examples; the present application is not limited to these specific embodiments and may include other suitable ways of judging facial features.
It should be noted that when the object is a planar object rather than a three-dimensional face, for example when the object is a photograph, no stereoscopic image information can be constructed from the infrared image, and it can thus be determined that the object does not have facial features.
In step S3, the planar image of the object is sensed, for example, by an RGB sensor. As another example, an infrared floodlight projects infrared flood light onto the object, and an infrared image sensor captures the flood light reflected by the object to obtain its planar image.
It should be noted that when step S3 uses an infrared floodlight and an infrared image sensor to obtain the planar image, steps S3 and S1 must be executed in a time-shared manner to prevent the planar image and the infrared image from aliasing. In this case, step S3 is executed before or after step S1.
When step S3 senses the planar image with an RGB sensor, image aliasing is not an issue, and steps S3 and S1 may be executed either in a time-shared manner or simultaneously.
In step S4, for example, a deep learning method extracts two-dimensional facial feature information of the object from the planar image.
The deep learning method includes: building a deep convolutional neural network model, training the model with a predetermined number of face photographs, and extracting facial feature parameters from the planar image with the trained model.
The pre-stored two-dimensional image information of the object's face includes facial feature information; what is compared in step S4 is whether the facial feature information of the object matches the pre-stored facial feature information.
Alternatively, in other embodiments, step S4 may also extract facial features such as the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, face shape, nose width, and chin width, and/or distance information between any combination of the nose, eyes, mouth, eyebrows, forehead, cheekbones, chin, and so on, for example the distance between the nose and the eyes. Of course, the facial feature information is not limited to the examples listed above and may be other suitable feature information.
The comparison of two-dimensional image information can also be implemented, for example, by comparing a planar picture of the object with a pre-stored planar picture of the object's face.
The above are merely examples; the present application is not limited to these specific embodiments and may include other suitable ways of comparing two-dimensional image information.
Specifically, for example, in step S4, when the matching coefficient between the two-dimensional image information of the object and the pre-stored two-dimensional image information is confirmed to be greater than or equal to a predetermined threshold, the two are confirmed to match; conversely, if the matching coefficient is confirmed to be smaller than the threshold, the two are confirmed not to match.
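A small sketch (ours) of the thresholded match just described, using cosine similarity between feature vectors as a stand-in for the unspecified matching coefficient; the threshold value is illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_match(features, stored_features, threshold=0.9):
    """Declare a match when the coefficient reaches the predetermined threshold."""
    return cosine_similarity(features, stored_features) >= threshold

assert is_match([1.0, 0.0, 1.0], [1.0, 0.0, 1.0]) is True   # identical vectors
assert is_match([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]) is False  # orthogonal vectors
```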
Further, in step S5, whether the identity of the object is legitimate is confirmed from the execution results of steps S2 and S4. To repeat: when whichever of steps S2 and S4 is executed first yields a negative result, step S5 confirms that the identity of the object is illegitimate; that is, identity authentication fails, the process ends, and the remaining steps need not be executed. When whichever of steps S2 and S4 is executed first yields a positive result, the other step is executed afterwards.
In the present embodiment, when steps S2 and S4 have both been executed and both results are positive, that is, the object is determined from the infrared image to have facial features and its two-dimensional image information is confirmed to match the pre-stored two-dimensional image information of the object's face, identity authentication succeeds.
It should also be noted that adding certain steps to the identity authentication method of the above embodiments is also feasible. For example, a step can be added during authentication: confirming, from the infrared image and/or the planar image, whether the object is a living body. As another example: confirming, from the infrared image and/or the planar image, whether the eyes of the object are gazing within a predetermined range in front of the electronic device. The identity authentication method of the present application thereby works even better. As noted above, the present application is not limited to the technical solutions disclosed above; any invention whose technical idea is the same as or similar to that of the present application falls within its scope of protection.
The identity authentication method of the present application is used for facial recognition of a living organism, for example a human body or another suitable animal body.
Below, the identity authentication method of the present application is explained taking human face recognition as an example. Before facial recognition, the user has registered his or her own facial image template in advance, stored for example in a memory. The facial image template includes, for example but not limited to, two-dimensional image information and three-dimensional image information.
Accordingly, in step S2, whether the object has human facial features is determined from the infrared image; in step S4, whether the two-dimensional image information of the object matches the two-dimensional image information of the registered user's face is compared.
In step S5, when it is confirmed that step S2 determined the object to have human facial features and that in step S4 the two-dimensional image information of the object matches that of the registered user's face, identity authentication succeeds.
Conventionally, the industry projects near-infrared light with a wavelength of 850 nanometers to obtain the infrared image of an object to be tested. However, through extensive creative work, analysis, and research, the inventor of the present application found that projecting infrared flood light with a wavelength of about 940 nanometers and infrared structured light with a wavelength of about 940 nanometers for sensing yields a more accurate sensing result.
Referring to FIG. 2, FIG. 2 is a schematic diagram of the relationship between the radiation intensity and wavelength of ambient light, where wavelength is represented by the horizontal axis and marked with the letter λ, and radiation intensity by the vertical axis and marked with the letter E. Through theoretical research combined with extensive experimental testing, verification, and repeated analysis, the inventor creatively found that near-infrared light in the wavelength range of [920, 960] nanometers in ambient light is easily absorbed by the atmosphere and undergoes large intensity attenuation. When step S1 projects infrared structured light in the range of [920, 960] nanometers onto the object and obtains the infrared image of the object from the captured structured light, the image suffers less interference from ambient light, which improves image acquisition accuracy. Similarly, when step S3 projects infrared flood light in the range of [920, 960] nanometers onto the object and obtains the planar image from the captured flood light, it suffers less interference from ambient light, which improves image acquisition accuracy.
Further, among infrared light in the range of [920, 960] nanometers, near-infrared light with a wavelength of 940 nanometers is absorbed by the atmosphere most readily and attenuates the most. Therefore, in the embodiments of the present application, the wavelength of the infrared structured light projected in step S1 is preferably 940 nanometers, and the wavelength of the infrared flood light projected in step S3 is preferably 940 nanometers.
In practice, however, the wavelengths of the infrared structured light projected in step S1 and the infrared flood light projected in step S3 deviate somewhat from 940 nanometers, for example by about +15 or -15 nanometers. Therefore, the wavelength range of the infrared structured light projected in step S1 is, for example, [925, 955] nanometers, and the wavelength range of the infrared flood light projected in step S3 is, for example, [925, 955] nanometers. This range [925, 955] still falls within the range [920, 960].
It should be noted that the wavelengths of the infrared structured light projected in step S1 and the infrared flood light projected in step S3 may be any value falling within the wavelength range [920, 960] nanometers. For brevity, the specific values are not listed here one by one, but any value within the range [920, 960] nanometers is feasible.
Of course, alternatively, steps S1 and S3 of the identity authentication method of the present application may also use infrared structured light and infrared flood light with a wavelength of 850 nanometers or other suitable wavelengths for sensing.
Referring to FIG. 3, FIG. 3 is a detailed flowchart of the first embodiment of the identity authentication method of the present application. In this embodiment, step S2 is performed before step S4. When step S2 determines that the object has facial features, step S4 is started; when step S2 determines that the object does not have facial features, identity authentication fails and the process ends.
In a specific example, steps S2 and S3 are performed simultaneously. This reduces sensing time and improves work efficiency.
Alternatively, step S3 may also be performed after step S2. In that case, when step S2 determines that the object has facial features, step S3 is started and step S4 is performed afterwards. This reduces power consumption.
In addition, step S3 may also be performed before step S2. That case can in turn be divided into two implementations: step S1 may be performed before or after step S3.
Taking human face recognition as an example, the identity authentication method of this embodiment recognizes the object twice. The first recognition is: determining whether the object has human facial features; if the object is determined not to have human facial features, identity authentication fails and the process ends.
If the object is determined to have human facial features, it still cannot be concluded that the object is a legitimate user. Next, by executing step S4, whether the two-dimensional image information of the object matches the pre-stored two-dimensional facial image information of the registered user is confirmed; if they are confirmed not to match, identity authentication fails and the process ends.
Optionally, in step S5, when the object is confirmed to have human facial features and its two-dimensional image information is confirmed to match the pre-stored two-dimensional image information, identity authentication succeeds.
Referring to FIG. 4, FIG. 4 is a detailed flowchart of the second embodiment of the identity authentication method of the present application. In this embodiment, step S4 is performed before step S2. Taking human face recognition as an example, when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, step S2 is started; when step S4 confirms that they do not match, identity authentication fails and the process ends.
In a specific example, steps S4 and S1 are performed simultaneously. This further reduces sensing time and improves work efficiency.
Alternatively, step S1 may also be performed after step S4. In that case, when step S4 confirms that the two-dimensional image information of the object matches the pre-stored two-dimensional image information of the human face, step S1 is started and step S2 is performed afterwards. This reduces power consumption.
In addition, step S1 may also be performed before step S4. That case can in turn be divided into two implementations: step S3 may be performed before or after step S1.
The identity authentication method of this embodiment recognizes the object twice. The first recognition is: comparing whether the two-dimensional image information of the object matches the two-dimensional image information of the registered user's face; if the comparison shows they do not match, identity authentication fails and the process ends.
If the comparison shows that the two-dimensional image information of the object matches that of the registered user's face, it still cannot be concluded that the object is a legitimate user, because step S4 judges and recognizes two-dimensional image information only, and a photo or video of a legitimate user could likewise pass this recognition.
Next, step S2 is executed to guard against passing recognition with a photo or video as described above.
In step S2, whether the object has three-dimensional human facial features is determined from the infrared image obtained in step S1. If the object is determined not to have three-dimensional human facial features, identity authentication fails and the process ends; in that case, someone may be attempting identification with a photo or video of a legitimate user.
Optionally, in step S5, when the two-dimensional image information of the object is confirmed to match the pre-stored two-dimensional image information and the object is confirmed to have three-dimensional human facial features, identity authentication succeeds.
Referring to FIG. 5, FIG. 5 is a structural block diagram of the first embodiment of the identity authentication device of the present application. The identity authentication device 1 includes a first projector 10, a second projector 11, an image sensing device 12, a processor 14, and a memory 16. The memory 16 pre-stores two-dimensional image information of an object's face. The first projector 10 projects infrared structured light onto the object to be tested. The second projector 11 projects infrared flood light onto the object. The image sensing device 12 captures the infrared structured light reflected by the object and senses its infrared image; it also captures the infrared flood light reflected by the object and senses its planar image. The processor 14 determines, from the infrared image, whether the object has facial features; it also obtains the two-dimensional image information of the object from the planar image and compares whether it matches the pre-stored two-dimensional image information of the object's face; and it confirms, based on the foregoing determination and comparison results, whether the identity of the object is legitimate.
In the embodiments of the present application, during identity authentication, the first projector 10 projects infrared structured light onto the object, and the image sensing device 12 senses its infrared image, from which the processor 14 can obtain the three-dimensional (3D) image information of the object and thus determine whether the object has facial features. In addition, the second projector 11 projects infrared flood light onto the object, and the image sensing device 12 senses its planar image, from which the processor 14 can obtain the two-dimensional (2D) image information of the object and compare it with the pre-stored two-dimensional image information of the object's face. In this way, by combining 3D and 2D, the identity authentication device 1 achieves facial recognition of the object.
It can thus be seen that the present application provides a new type of optical identity authentication device 1. Moreover, the optical identity authentication device 1 is suitable for sensing at longer distances, and its sensing response is fast. The longer distance is, for example, within 1 meter or even somewhat farther.
As can be seen from the above, determining from the infrared image whether the object has three-dimensional facial features is one recognition of the object performed by the processor 14; when this recognition fails, identity authentication fails and the process ends.
In addition, obtaining the two-dimensional image information of the object from the planar image and comparing whether it matches the pre-stored two-dimensional image information of the object's face is another recognition of the object performed by the processor 14; when this recognition fails, identity authentication fails and the process ends.
In the present embodiment, optionally, when at least both of the above recognitions pass, the processor 14 confirms that the object includes the object's face; therefore, the identity of the object is legitimate and identity authentication succeeds.
The identity authentication device 1 further includes, for example, a control circuit 15. The control circuit 15 is connected to the first projector 10, the second projector 11, and the image sensing device 12, respectively, and controls them to work in coordination.
The control circuit 15 controls the first projector 10 and the second projector 11 to work in a time-shared manner, preventing the infrared image and the planar image sensed by the image sensing device 12 from aliasing.
Optionally, the identity authentication device 1 further includes a high-speed data transfer link 18 for transferring the signal representing the infrared image and the signal representing the planar image from the image sensing device 12 to the processor 14 for processing. The high-speed data transfer link 18 is, for example, a Mobile Industry Processor Interface (MIPI).
该第一投射器10例如采用光学组件来投射红外结构光至该待测物体。所述光学组件例如包括光源、准直镜头以及光学衍射元件(DOE),其中光源用于产生一红外激光束;准直镜头将红外激光束进行校准,形成近似平行光;光学衍射元件对校准后的红外激光束进行调制,形成相应的散斑图案。该散斑图案例如包括规则点阵式、条纹式、网格式、散斑式等中的一种或几种。其中,散斑式又称为随机点阵式。编码式图案例如由不同波形的光组成,每种波形代表一种数字,各波形的组合即为编码。当然,可变更地,也可由其它合适的光学元件或光学组件来产生该红外结构光。另外,该散斑图案也可包括其它的编码图案。
上述是利用基于光编码原理,该第一投射器10投射已知的红外结构光图案到该待测物体上。图像传感装置12或处理器14根据捕获到的变形的红外结构光图案来分析确定该待测物体的深度信息。定义此类红外结构光为空间结构光。
可变更地,例如也可利用基于飞行时间(Time of Flight,ToF)原理,该第一投射器10投射红外结构光至该待测物体。图像传感装置12或处理器14例如通过测量光脉冲之间的传输延迟时间来计算待测物体的深度信息。定义此类红外结构光为时间结构光。
该时间结构光例如但不局限于呈正弦波、方波中的任意一种或两种的结合。
该第二投射器11例如并不局限为红外泛光灯。
该图像传感装置12例如包括红外图像传感器121,该红外图像传感器121用于捕获由该待测物体反射回来的红外结构光,感测获得该待测物体的红外图像。较佳地,该红外图像传感器121还用于捕获由该待测物体反射回来的红外泛光,感测获得该待测物体 的平面图像。
上述是利用同一红外图像传感器121来感测该待测物体的红外图像和平面图像,从而可以降低成本。当然,对于此种情况,该第一投射器10投射的是空间结构光至该待测物体。
当该第一投射器10投射的是时间结构光时,该图像传感装置12例如包括二红外图像传感器,该二红外图像传感器的结构不同,感测原理不同,分辨率不同等。其中,一红外图像传感器用于捕获由该待测物体反射回来的红外泛光,另一红外图像传感器用于捕获由该待测物体反射回来的红外结构光。
该处理器14例如但不局限为AP(Application Processor)。在本实施方式中,较佳地,“根据该红外图像、判断该待测物体是否具备脸部特征”,以及“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配”这二者可被该处理器14先后执行。当这二者中的任意一者先被该处理器14执行而获得的结果是否定的结果时,则该处理器14确认该待测物体的身份非法,即,身份鉴权失败,流程结束,该处理器14无需再启动执行其它未进行的鉴权程序。
举例,当该处理器14先执行“根据该红外图像、判断该待测物体是否具备脸部特征”,并判断得知该待测物体不具备脸部特征时,则身份鉴权失败,该处理器14无需再启动执行“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配”等鉴权程序。类似地,当该处理器14先执行“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配”,当确认该待测物体的二维图像信息与预存的物体脸部的二维图像信息不匹配时,则身份鉴权失败,该处理器14无需再启动执行“根据该红外图像、判断该待测物体是否具备脸部特征”等鉴权程序。
相对地,当这二者中的任意一者先被执行而获得的结果是肯定的结果时,则该处理器14继续执行其它未进行的鉴权程序。如此,可以节省感测时间、提高感测响应速度。
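上述"任一识别先失败则立即终止、不再启动其余鉴权程序，两者均通过才成功"的流程，可用短路求值示意如下（函数名与组织方式均为示意性假设，非本申请的实际实现）：

```python
def authenticate(has_face_feature, matches_2d_template) -> bool:
    """依次执行两次识别, 任一识别先失败则立即返回失败(短路),
    不再启动其余鉴权程序; 两次识别都通过才确认身份合法。
    两个参数均为返回布尔值的可调用对象(示意)。"""
    for check in (has_face_feature, matches_2d_template):
        if not check():
            return False   # 身份鉴权失败, 流程结束
    return True            # 身份鉴权成功
```

短路的好处正如正文所述：失败时可以节省感测时间、提高感测响应速度。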
当然,可变更地,这二者也可被该处理器14同时执行。
在一示例中,该处理器14例如根据该红外图像构建出立体图像信息,通过判断该立体图像信息是否具备脸部特征来判断该待测物体是否具备脸部特征。
具体地，该处理器14由该红外图像可以获得各像素的位置信息以及深度信息，从而，该处理器14根据获得的各像素的位置信息以及深度信息构建出立体图像信息。相应地，该处理器14由该立体图像信息能够判断得知该待测物体是否具备脸部特征。
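由各像素的位置信息与深度信息构建立体图像信息，可按针孔相机模型反投影为三维点集示意（仅为示意性草图：内参 fx、fy、cx、cy 均为假设值，实际取决于具体的图像传感装置）：

```python
import numpy as np

def depth_to_points(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """按针孔相机模型, 由各像素位置(u,v)及其深度 Z 反投影出三维点:
    X = (u-cx)*Z/fx, Y = (v-cy)*Z/fy, 返回形状为 (H*W, 3) 的点集。
    内参 fx, fy, cx, cy 为示意性假设值。"""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # u 沿列, v 沿行
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

由此得到的点集即可作为后续判断是否具备立体脸部特征的立体图像信息输入。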
进一步地,该处理器14例如可以将该立体图像信息与一预设的立体脸部模板进行相关,如果经比对获知相关系数大于或等于一预设值,则该处理器14判断得知该立体图像信息具备立体脸部特征;反之,则该处理器14判断得知该立体图像信息不具备立体脸部特征,身份鉴权失败,流程结束。该预设的立体脸部模板例如也存储在该存储器16中。
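将立体图像信息与预设立体脸部模板"进行相关"并与预设值比较，可用皮尔逊相关系数的阈值判定示意（仅为示意性草图：以相关系数作为比对量、阈值 0.8 均为假设，本申请未限定具体的相关计算方式）：

```python
import numpy as np

def has_3d_face_feature(depth_map, face_template, threshold=0.8) -> bool:
    """将待测立体图像信息(此处以深度图示意)与预设的立体脸部模板
    做相关, 以皮尔逊相关系数作为相关系数, 大于或等于预设值则判断
    得知具备立体脸部特征。threshold=0.8 为示意性假设值。"""
    a = np.asarray(depth_map, dtype=float).ravel()
    b = np.asarray(face_template, dtype=float).ravel()
    r = np.corrcoef(a, b)[0, 1]
    return bool(r >= threshold)
```

相关系数低于预设值时即判定不具备立体脸部特征，对应正文中"身份鉴权失败，流程结束"的分支。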
又例如,该处理器14通过深度学习方法,从该红外图像中提取该待测物体的三维脸部特征信息,进而可判断得知该待测物体是否具备脸部特征。
该深度学习方法包括:建立深度卷积神经网络模型,使用预定数量的脸部照片训练该深度卷积神经网络模型,根据训练好的该深度卷积神经网络模型从该红外图像中提取脸部的特征参数。
又例如,通过计算扭曲率的方式,来根据该红外图像判断该待测物体是否具备立体脸部特征。
上述只是举例说明,本申请并不限于以上的示例,该处理器14还可包括其它的合适的脸部特征的判断方式。
需要说明的是,当该待测物体是平面物体而非立体脸部时,例如,该待测物体是一张照片时,由该红外图像构建不出立体图像信息,从而能够判断得知该待测物体并不具备脸部特征。
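照片等平面物体缺乏深度起伏，一种最简的判定示意是检查深度图的整体起伏量（仅为示意性草图：以"最大深度减最小深度"作为起伏量、阈值 0.01 米均为假设）：

```python
import numpy as np

def is_flat_object(depth, min_relief_m=0.01) -> bool:
    """若深度图的整体起伏(最大深度-最小深度)小于阈值, 则判定
    待测物体为平面物体(如一张照片), 不具备立体脸部特征。
    阈值 0.01 米为示意性假设值。"""
    depth = np.asarray(depth, dtype=float)
    return bool(depth.max() - depth.min() < min_relief_m)
```

照片的深度近乎恒定，起伏量趋近于零，因此会被判定为平面物体。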
另外,该处理器14例如也可通过深度学习方法,从该平面图像中提取该待测物体的二维脸部特征信息。
该深度学习方法包括:建立深度卷积神经网络模型,使用预定数量的脸部照片训练该深度卷积神经网络模型,根据训练好的该深度卷积神经网络模型从该平面图像中提取脸部的特征参数。
该预存的物体脸部的二维图像信息包括脸部特征信息。相应地,该处理器14比对该待测物体的二维脸部特征信息与该预存的脸部特征信息是否匹配。
可变更地,在其它实施方式中,在步骤S4中,也可提取鼻子、眼睛、嘴巴、眉毛、额头、颧骨、下巴、脸庞、鼻子的宽度、下巴的宽度等面部特征,或/和,鼻子、眼睛、嘴巴、眉毛、额头、颧骨、下巴等中任意组合的距离信息。例如,鼻子和眼睛之间的距离信息。当然,所述面部特征信息并不局限于上面所列举的例子,也可为其它合适的特征信息。
关于二维图像信息的比对,例如,该处理器14也可通过比对该待测物体的平面图片与预存的物体脸部的平面图片来实现。
上述只是举例说明,本申请并不限于以上的具体实施方式,该处理器14还可包括其它的合适的二维图像信息的比对方式。
具体地,例如,当该处理器14确认该待测物体的二维图像信息与预存的二维图像信息的匹配系数大于或等于一预定阈值时,则可确认该待测物体的二维图像信息与预存的二维图像信息匹配。相对地,如果该处理器14确认匹配系数小于该预定阈值时,则可确认该待测物体的二维图像信息与预存的二维图像信息不匹配。
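上述"匹配系数与预定阈值比较"的判定可用特征向量的余弦相似度示意（仅为示意性草图：以余弦相似度作为匹配系数、阈值 0.9 均为假设，本申请未限定匹配系数的具体计算方式）：

```python
import numpy as np

def match_2d(features, stored_features, threshold=0.9) -> bool:
    """以待测物体二维特征向量与预存特征向量的余弦相似度作为
    匹配系数, 匹配系数大于或等于预定阈值则确认二维图像信息匹配。
    threshold=0.9 为示意性假设值。"""
    a = np.asarray(features, dtype=float)
    b = np.asarray(stored_features, dtype=float)
    score = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return score >= threshold
```

匹配系数小于该预定阈值时即确认不匹配，对应正文中身份鉴权失败的分支。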
在本实施方式中,当该处理器14分别确认该待测物体具备脸部特征,以及确认该待测物体的二维图像信息与预存的二维图像信息匹配时,则确认该待测物体的身份合法,脸部识别成功。
相对地,如前所述,当“根据该红外图像、判断该待测物体是否具备脸部特征”,以及“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配”这二者中的任意一者先被该处理器14执行而获得的结果是否定的结果时,则该处理器14确认该待测物体的身份非法,即,身份鉴权失败,流程结束,该处理器14无需再启动执行其它未进行的鉴权程序。而当这二者中的任意一者先被该处理器14执行而获得的结果是肯定的结果时,则另一者之后会被该处理器14执行。
另外，需要说明的是，在上述实施方式的身份鉴权装置1中，该处理器14例如也可执行额外的鉴权程序。例如，在执行身份鉴权的过程中，该处理器14进一步执行：根据该红外图像和/或平面图像，确认该待测物体是否为活体；又例如，根据该红外图像和/或平面图像，确认该待测物体的眼睛是否注视在电子设备前方的预定范围内等。从而，使得本申请的身份鉴权装置1的效果更好。由前述可知，本申请并不限于以上内容所公开的技术方案，只要是技术思想与本申请的技术思想相同或相似的发明，均应落在本申请的保护范围。
本申请的身份鉴权装置1用于生物体的脸部识别。该生物体例如为人体或其它合适的动物体。
下面,以人脸识别为例对本申请的身份鉴权装置1进行说明。在进行脸部识别之前,用户已提前注册好其本人的脸部图像模板,并存储在该存储器16中。该脸部图像模板例如并不局限于包括二维图像信息和三维图像信息等。
相应地,该处理器14根据该红外图像判断该待测物体是否具备人脸特征,以及比对该待测物体的二维图像信息与已注册用户脸部的二维图像信息是否匹配。
当该处理器14判断得知该待测物体具备人脸特征,以及确认该待测物体的二维图像信息与已注册用户脸部的二维图像信息匹配时,则人脸识别成功。
现有技术中，业界通常投射波长为850纳米的近红外光，来获得待测物体的红外图像。然而，本申请的发明人经过大量的创造性劳动，分析与研究发现：该第一投射器10投射波长为940纳米左右的红外结构光至该待测物体，该第二投射器11投射940纳米左右的红外泛光至该待测物体，该图像传感装置12可以获得较准确的感测效果。
请再参阅图2，环境光中波长范围为[920,960]纳米的近红外光易被大气吸收、强度衰减较大，当该第一投射器10投射波长范围为[920,960]纳米的红外结构光到待测物体、该图像传感装置12根据捕获的红外结构光获得该待测物体的红外图像时，能够少受环境光的干扰，从而提高图像的获取精度。类似地，当该第二投射器11投射波长范围为[920,960]纳米的红外泛光到待测物体、该图像传感装置12根据捕获的红外泛光获得该待测物体的平面图像时，能够少受环境光的干扰，从而提高图像的获取精度。
进一步地，在波长范围为[920,960]纳米的红外光中，波长为940纳米的近红外光最易被大气吸收、强度衰减最大，因此，在本申请的实施方式中，该第一投射器10投射的红外结构光的波长优选为940纳米，该第二投射器11投射的红外泛光的波长优选为940纳米。
然而，在实际应用中，该第一投射器10所投射的红外结构光的波长和该第二投射器11所投射的红外泛光的波长在940纳米的基础上会有一定的偏差，例如会有(+15)纳米或(-15)纳米左右的偏差。因此，该第一投射器10所投射的红外结构光的波长范围例如为[925,955]纳米，该第二投射器11所投射的红外泛光的波长范围例如为[925,955]纳米。可见，该波长范围[925,955]仍然落在波长范围[920,960]内。
需要说明的是,该第一投射器10所投射的红外结构光的波长和该第二投射器11所投射的红外泛光的波长为落在上述波长范围[920,960]纳米中的任意一数值。本申请为了叙述简洁清楚,在此处并未一一列举各具体数值,但落在这波长范围[920,960]纳米中的任意一数值都是可行的。
当然,可变更地,该第一投射器10也可投射波长为850纳米或者其它合适波长的红外结构光。该第二投射器11也可投射波长为850纳米或者其它合适波长的红外泛光。
在某些实施方式中，例如，该处理器14先根据该红外图像判断该待测物体是否具备脸部特征，当该处理器14判断得知该待测物体不具备脸部特征时，则该处理器14确认该待测物体的身份不合法，即，身份鉴权失败，流程结束；当该处理器14判断得知该待测物体具备脸部特征时，该处理器14再根据该平面图像获得该待测物体的二维图像信息，并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配。当确认该待测物体的二维图像信息与预存的二维图像信息不匹配时，则身份鉴权失败，流程结束。
可选地,当确认该待测物体的二维图像信息与预存的二维图像信息匹配时,则身份鉴权成功。
其中,在一具体实施例中,当该处理器14在根据该红外图像判断该待测物体是否具备脸部特征的同时:该第二投射器11投射红外泛光至该待测物体,该图像传感装置12捕获由该待测物体反射回来的红外泛光,感测获得该待测物体的平面图像。如此,能够更进一步减少感测时间,提升工作效率。
或,可变更地,当该处理器14在判断得知该待测物体具备脸部特征之后,该第二投射器11开始投射红外泛光至该待测物体,该图像传感装置12捕获由该待测物体反射回来的红外泛光,感测获得该待测物体的平面图像。接着,该处理器14再根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配。如此,能够减少功耗。
另外,可变更地,该控制电路15例如控制该第一投射器10先于该第二投射器11工作,当该图像传感装置12先后感测获得该待测物体的红外图像与平面图像之后,该处理器14再开始执行“根据该红外图像判断该待测物体是否具备脸部特征”,并当判断得知该待测物体具备脸部特征时,再执行“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”。
当然，该控制电路15也可控制该第二投射器11先于该第一投射器10工作，该图像传感装置12先后感测获得该待测物体的平面图像与红外图像，如此都是可行的。
以人脸识别为例,本实施方式的身份鉴权装置1需对该待测物体进行两次识别,其中,对该待测物体的第一次识别就是:该处理器14判断该待测物体是否具备人脸特征。
当该待测物体是立体脸部而非平面物体时,则该处理器14例如可以根据该红外图像构建出该待测物体的立体图像信息,从而通过判断该立体图像信息是否具备人脸特征来判断得知该待测物体是否具备立体人脸特征。
当该待测物体是平面物体时，则该处理器14并不能构建出立体图像信息，从而判断得知该待测物体并不具备立体人脸特征。
当该处理器14判断得知该待测物体具备立体人脸特征之后,该处理器14接着对该待测物体进行第二次识别:“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”。
如果该处理器14确认该待测物体的二维图像信息与该已注册用户脸部的二维图像信息不匹配,则身份鉴权失败,流程结束。
可选地,如果该处理器14确认该待测物体的二维图像信息与该已注册用户脸部的二维图像信息匹配,则身份鉴权成功。
在上面的各实施方式中,简而言之,所述处理器14是先执行“根据该红外图像判断该待测物体是否具备脸部特征”,然后根据判断结果再确定是否执行“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”等流程。然而,该处理器14也可是先执行“根据该平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”,然后根据比对结果再确定是否执行“根据该红外图像判断该待测物体是否具备脸部特征”等流程。对于后面所述的实施方式,具体说明如下。
在某些实施方式中，例如，该处理器14先根据该平面图像获得该待测物体的二维图像信息，并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配，当确认该待测物体的二维图像信息与预存的二维图像信息不匹配时，则该处理器14确认该待测物体的身份不合法，即，身份鉴权失败，流程结束；当确认该待测物体的二维图像信息与预存的二维图像信息匹配时，该处理器14再根据该红外图像判断该待测物体是否具备脸部特征；当该处理器14判断得知该待测物体不具备脸部特征时，则身份鉴权失败，流程结束。
可选地,当该处理器14判断得知该待测物体具备脸部特征时,则身份鉴权成功。
其中，在一具体实施例中，当该处理器14在执行“根据该平面图像获得该待测物体的二维图像信息，并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”的同时：该第一投射器10投射红外结构光至该待测物体，该图像传感装置12感测获得该待测物体的红外图像。如此，能够更进一步减少感测时间，提升工作效率。
或，可变更地，当该处理器14确认该待测物体的二维图像信息与预存的二维图像信息匹配之后，该第一投射器10再投射红外结构光至该待测物体，该图像传感装置12感测获得该待测物体的红外图像。接着，该处理器14再根据该红外图像判断该待测物体是否具备脸部特征。如此，能够减少功耗。
另外，可变更地，该控制电路15控制该第二投射器11先于该第一投射器10工作，当该图像传感装置12先后感测获得该待测物体的平面图像与红外图像之后，该处理器14再开始执行“根据该平面图像获得该待测物体的二维图像信息，并比对该待测物体的二维图像信息与预存的二维图像信息是否匹配”，以及“根据该红外图像判断该待测物体是否具备脸部特征”。
当然,该控制电路15也可控制该第一投射器10先于该第二投射器11工作,该图像传感装置12先后感测获得该待测物体的红外图像与平面图像,如此都是可行的。
以人脸识别为例,本实施方式的身份鉴权装置1需对该待测物体进行两次识别,其中,对该待测物体的第一次识别就是:该处理器14判断该待测物体的二维图像信息与已注册用户脸部的二维图像信息是否匹配,如果判断得知该待测物体的二维图像信息与该已注册用户脸部的二维图像信息不匹配,则身份鉴权失败。
如果该处理器14判断得知该待测物体的二维图像信息与该已注册用户脸部的二维图像信息匹配，则还不能确定该待测物体就是合法用户，理由是：由于这只是二维图像信息的判断与识别，利用合法用户的照片或视频同样可以识别成功。
接下来,该处理器14对该待测物体进行第二次识别:根据该红外图像判断该待测物体是否具备脸部特征。当该处理器14判断得知该待测物体不具备脸部特征时,则该处理器14确定该待测物体的身份不合法,即,身份鉴权失败,流程结束。当该处理器14判断得知该待测物体具备脸部特征时,则身份鉴权成功。
需要补充说明的是,在上述的各实施方式中,该处理器14对该待测物体的两次识别需要配合该待测物体的平面图像与红外图像的感测次序。在本申请的上述各实施方式中,对于本领域的一般技术人员而言,其根据本申请所陈述的发明重点,是可以合理确定上述各器件之间的工作次序关系。
请参阅图6，图6为本申请的身份鉴权装置的第二实施方式的结构框图。该身份鉴权装置2与该身份鉴权装置1的结构大致相同，二者主要区别在于，该身份鉴权装置2的图像传感装置22进一步包括RGB图像传感器222。该RGB图像传感器222用于感测该待测物体的平面图像。相应地，该第二投射器11可被省略。然而，可变更地，该第二投射器11也可不被省略，例如，在环境光充足的情况下，采用RGB图像传感器222感测该待测物体的平面图像，在环境光较暗的情况下，采用第二投射器11投射红外泛光至该待测物体，由该红外图像传感器221感测获得该待测物体的平面图像。
当采用该RGB图像传感器222感测该待测物体的平面图像时,该RGB图像传感器和该红外图像传感器221可分时或同时工作。
请参阅图7,图7为本申请的电子设备的一实施方式的结构示意图。所述电子设备100例如但不局限于为消费性电子产品、家居式电子产品、车载式电子产品、金融终端产品等合适类型的电子产品。其中,消费性电子产品例如但不局限为手机、平板电脑、笔记本电脑、桌面显示器、电脑一体机等。家居式电子产品例如但不局限为智能门锁、电视、冰箱、穿戴式设备等。车载式电子产品例如但不局限为车载导航仪、车载DVD等。金融终端产品例如但不局限为ATM机、自助办理业务的终端等。所述电子设备100包括上述身份鉴权装置1。所述电子设备100根据所述身份鉴权装置1的身份鉴权结果来对应是否执行相应的功能。所述相应的功能例如但不局限于包括解锁、支付、启动预存的应用程序中的任意一种或几种。
在本实施方式中，以电子设备为手机为例进行说明。所述手机例如为全面屏的手机，所述身份鉴权装置1例如设置在手机的正面顶端。当然，所述手机也并不限制于全面屏手机。
例如,当用户需要进行开机解锁时,抬起手机或触摸手机的屏幕都可以起到唤醒该身份鉴权装置1的作用。当该身份鉴权装置1被唤醒之后,识别该手机前方的用户是合法的用户时,则解锁屏幕。
可见，由于该电子设备100应用了该身份鉴权装置1，该电子设备100能够实现对待测物体的较远距离的感测，且感测响应速度较快。
在本说明书的描述中,参考术语“一个实施方式”、“某些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (51)

  1. 一种身份鉴权方法,包括:
    步骤S1:投射红外结构光至待测物体,感测该待测物体的红外图像;
    步骤S2:根据该红外图像判断该待测物体是否具备脸部特征;
    步骤S3:感测该待测物体的平面图像;
    步骤S4:根据所述平面图像获得该待测物体的二维图像信息,并比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配;
    步骤S5:根据步骤S2与步骤S4的执行结果,确认该待测物体的身份是否合法。
  2. 如权利要求1所述的身份鉴权方法,其特征在于:在步骤S2中,根据该红外图像判断该待测物体是否具备立体脸部特征。
  3. 如权利要求1所述的身份鉴权方法,其特征在于:步骤S2包括:根据该红外图像构建出立体图像信息,通过判断该立体图像信息是否具备脸部特征来判断该待测物体是否具备脸部特征。
  4. 如权利要求1所述的身份鉴权方法,其特征在于:在步骤S3中,投射红外泛光至该待测物体,并利用红外图像传感器感测该待测物体的平面图像;或者,在步骤S3中,利用RGB图像传感器感测该待测物体的平面图像。
  5. 如权利要求4所述的身份鉴权方法,其特征在于:当步骤S3是投射红外泛光至该待测物体、并利用红外图像传感器感测该待测物体的平面图像时,步骤S1与步骤S3被分时执行,其中步骤S1先于或后于步骤S3被执行。
  6. 如权利要求5所述的身份鉴权方法,其特征在于:在步骤S1与步骤S3中,利用同一红外图像传感器分时感测该待测物体的红外图像和平面图像。
  7. 如权利要求4所述的身份鉴权方法,其特征在于:当步骤S3是利用RGB图像传感器感测该待测物体的平面图像时,步骤S1与步骤S3被分时执行,其中步骤S1先于或后于步骤S3被执行;或者,步骤S1与步骤S3被同时执行。
  8. 如权利要求1所述的身份鉴权方法,其特征在于:步骤S2先于或后于步骤S4被执行;或,步骤S2与步骤S4被同时执行。
  9. 如权利要求8所述的身份鉴权方法,其特征在于:在步骤S5中,当确认步骤S2和步骤S4中的任意一个步骤先被执行而获得的结果是否定的结果时,则身份鉴权失败。
  10. 如权利要求8所述的身份鉴权方法，其特征在于：在步骤S5中，当确认步骤S2和步骤S4被执行后而获得的结果都是肯定的结果时，则身份鉴权成功。
  11. 如权利要求1-10中任意一项所述的身份鉴权方法,其特征在于:该物体脸部的二维图像信息为人体脸部的二维图像信息,在步骤S4中比对的是:该待测物体的二维图像信息与预存的人体脸部的二维图像信息是否匹配;在步骤S2中,根据该红外图像判断该待测物体是否具备人体的脸部特征。
  12. 如权利要求11所述的身份鉴权方法,其特征在于:当步骤S2中根据该红外图像判断得知该待测物体具备人体的脸部特征、且步骤S4中确认该待测物体的二维图像信息与预存的人体脸部的二维图像信息匹配时,则身份鉴权成功。
  13. 如权利要求11所述的身份鉴权方法,其特征在于:该待测物体的二维图像信息包括特征信息,该预存的人体脸部的二维图像信息包括脸部特征信息,在步骤S4中比对的是:该待测物体的特征信息与该预存的脸部特征信息是否匹配。
  14. 如权利要求13所述的身份鉴权方法,其特征在于:步骤S4包括:通过深度学习方法提取该待测物体的二维人脸特征信息。
  15. 如权利要求14所述的身份鉴权方法,其特征在于:该深度学习方法包括:建立深度卷积神经网络模型,使用预定数量的人脸照片训练该深度卷积神经网络模型,根据训练好的该深度卷积神经网络模型提取人脸的特征参数。
  16. 如权利要求1所述的身份鉴权方法,其特征在于:步骤S1中的红外结构光的波长为940纳米。
  17. 如权利要求5所述的身份鉴权方法,其特征在于:步骤S3中的红外泛光的波长为940纳米。
  18. 如权利要求5所述的身份鉴权方法,其特征在于:步骤S1中的红外结构光的波长范围为[925,955]纳米,步骤S3中的红外泛光的波长范围为[925,955]纳米。
  19. 如权利要求1所述的身份鉴权方法,其特征在于:在步骤S1中,投射至待测物体的红外结构光形成图案,所述图案呈规则点阵式、条纹式、散斑式、网格式、编码式中的任意一种或几种的结合,或,在步骤S1中,投射至待测物体的红外结构光呈正弦波、方波中的任意一种或两种的结合。
  20. 如权利要求1所述的身份鉴权方法,其特征在于:先执行步骤S1,然后同时执行步骤S2和S3,其中,当步骤S2中判断得知该待测物体具备脸部特征后,则启动执行步骤S4,否则,当步骤S2中判断得知该待测物体不具备脸部特征时,则身份鉴权失败。
  21. 如权利要求1所述的身份鉴权方法,其特征在于:先执行步骤S3,然后同时执行步骤S4和S1,其中,当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息匹配后,则启动执行步骤S2,否则,当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息不匹配时,则身份鉴权失败。
  22. 如权利要求1所述的身份鉴权方法,其特征在于:先执行步骤S1;执行完步骤S1后执行步骤S2;当步骤S2中判断得知该待测物体具备脸部特征后,则启动执行步骤S3,执行完步骤S3后再执行步骤S4;而当步骤S2中判断得知该待测物体不具备脸部特征时,则身份鉴权失败;或,
    先执行步骤S3;执行完步骤S3再执行步骤S4;当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息匹配后,则执行步骤S1,执行完步骤S1后再执行步骤S2;而当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息不匹配时,则身份鉴权失败;或,
    步骤S1、步骤S3、步骤S2依次执行;当步骤S2中判断得知该待测物体具备脸部特征后,则启动执行步骤S4,否则,当步骤S2中判断得知该待测物体不具备脸部特征时,则身份鉴权失败;或,
    步骤S1、步骤S3、步骤S4依次执行;当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息匹配后,则启动执行步骤S2,否则,当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息不匹配时,则身份鉴权失败;或,
    步骤S3、步骤S1、步骤S2依次执行;当步骤S2中判断得知该待测物体具备脸部特征后,则启动执行步骤S4,否则,当步骤S2中判断得知该待测物体不具备脸部特征时,则身份鉴权失败;或,
    步骤S3、步骤S1、步骤S4依次执行;当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息匹配后,则启动执行步骤S2,否则,当步骤S4中确认该待测物体的二维图像信息与预存的二维图像信息不匹配时,则身份鉴权失败。
  23. 一种身份鉴权装置,包括:
    存储器,用于预存样本物体的二维图像信息;
    第一投射器,用于投射红外结构光至待测物体;
    图像传感装置,用于捕获由该待测物体反射回来的红外结构光、获得该待测物体的红外图像,还用于感测该待测物体的平面图像;和
    处理器，用于根据该红外图像判断该待测物体是否具备脸部特征，并获得判断结果；所述处理器还用于根据该平面图像获得该待测物体的二维图像信息，比对该待测物体的二维图像信息与预存的物体脸部的二维图像信息是否匹配，并获得比对结果；所述处理器用于根据该判断结果与比对结果，确认所述待测物体的身份是否合法。
  24. 如权利要求23所述的身份鉴权装置,其特征在于:该处理器用于根据该红外图像判断该待测物体是否具备立体脸部特征。
  25. 如权利要求23所述的身份鉴权装置,其特征在于:该处理器用于根据该红外图像构建出立体图像信息,通过判断该立体图像信息是否具备脸部特征来判断该待测物体是否具备脸部特征。
  26. 如权利要求23所述的身份鉴权装置,其特征在于:该图像传感装置包括红外图像传感器,用于捕获由该待测物体反射回来的红外结构光,感测获得该待测物体的红外图像。
  27. 如权利要求26所述的身份鉴权装置,其特征在于:该身份鉴权装置进一步包括第二投射器,用于投射红外泛光至该待测物体;该图像传感装置进一步用于捕获由该待测物体反射回来的红外泛光,感测获得该待测物体的平面图像。
  28. 如权利要求27所述的身份鉴权装置,其特征在于:该红外图像传感器用于分时感测获得该待测物体的红外图像和平面图像。
  29. 如权利要求26所述的身份鉴权装置,其特征在于:该图像传感装置进一步包括RGB图像传感器,用于感测获得该待测物体的平面图像。
  30. 如权利要求27所述的身份鉴权装置，其特征在于：所述身份鉴权装置进一步包括控制电路，用于控制该第一投射器与第二投射器分时工作，当进行身份鉴权时，所述控制电路控制该第一投射器先于或后于该第二投射器工作。
  31. 如权利要求30所述的身份鉴权装置,其特征在于:当进行身份鉴权时,所述控制电路用于控制该第一投射器、第二投射器、和该图像传感装置协同工作。
  32. 如权利要求23所述的身份鉴权装置,其特征在于:该身份鉴权装置进一步包括高速数据传送链路,用于把图像传感装置中表示该红外图像的信号和表示该平面图像的信号传送到该处理器中进行处理。
  33. 如权利要求23所述的身份鉴权装置,其特征在于:当“根据该红外图像判断该待测物体是否具备脸部特征”,以及“根据所述平面图像获得该待测物体的二维图像信息、并比对所述二维图像信息与预存的二维图像信息是否匹配”中的任意一者先被该处理器执行而获得的结果是否定的结果时,则身份鉴权失败。
  34. 如权利要求33所述的身份鉴权装置,其特征在于:当该处理器判断得知该待测物体具备脸部特征,以及确认该待测物体的二维图像信息与预存的二维图像信息匹配时,则身份鉴权成功。
  35. 如权利要求34所述的身份鉴权装置,其特征在于:所述处理器先执行:根据该红外图像判断该待测物体是否具备脸部特征,再执行:根据所述平面图像获得该待测物体的二维图像信息,并比对所述二维图像信息与预存的二维图像信息是否匹配;或,所述处理器先执行:根据所述平面图像获得该待测物体的二维图像信息,并比对所述二维图像信息与预存的二维图像信息是否匹配,再执行:根据该红外图像判断该待测物体是否具备脸部特征。
  36. 如权利要求23-35中任意一项所述的身份鉴权装置,其特征在于:该物体脸部的二维图像信息为人体脸部的二维图像信息,所述处理器用于比对该待测物体的二维图像信息与预存的人体脸部的二维图像信息是否匹配,以及用于根据该红外图像判断该待测物体是否具备人体的脸部特征。
  37. 如权利要求36所述的身份鉴权装置,其特征在于:当该处理器根据该红外图像判断得知该待测物体具备人体的脸部特征,且确认该待测物体的二维图像信息与预存的该人体脸部的二维图像信息匹配时,则身份鉴权成功。
  38. 如权利要求36所述的身份鉴权装置,其特征在于:该待测物体的二维图像信息包括特征信息,该预存的人体脸部的二维图像信息包括脸部特征信息,该处理器通过比对该待测物体的特征信息与该预存的脸部特征信息,来确认该待测物体的二维图像信息与预存的二维图像信息是否匹配。
  39. 如权利要求38所述的身份鉴权装置,其特征在于:该处理器通过深度学习方法提取该待测物体的二维人脸特征信息。
  40. 如权利要求39所述的身份鉴权装置,其特征在于:该处理器通过建立深度卷积神经网络模型,使用预定数量的人脸照片训练该深度卷积神经网络模型,并根据训练好的该深度卷积神经网络模型提取人脸的特征参数。
  41. 如权利要求23所述的身份鉴权装置,其特征在于:该红外结构光的波长为940纳米。
  42. 如权利要求27所述的身份鉴权装置,其特征在于:该红外泛光的波长为940纳米。
  43. 如权利要求27所述的身份鉴权装置，其特征在于：该第一投射器投射的红外结构光的波长范围为[925,955]纳米，该第二投射器投射的红外泛光的波长范围为[925,955]纳米。
  44. 如权利要求23所述的身份鉴权装置,其特征在于:该第一投射器投射至待测物体的红外结构光形成图案,所述图案呈规则点阵式、条纹式、散斑式、网格式、编码式中的任意一种或几种的结合,或,该第一投射器投射至待测物体的红外结构光呈正弦波、方波中的任意一种或两种的结合。
  45. 如权利要求27所述的身份鉴权装置，其特征在于：在执行身份鉴权时，所述第二投射器用于先投射红外泛光至待测物体，然后在所述处理器根据所述平面图像获得该待测物体的二维图像信息、并比对所述待测物体的二维图像信息与预存的二维图像信息是否匹配的同时：所述第一投射器投射红外结构光至该待测物体，其中，当所述处理器确认所述待测物体的二维图像信息与预存的二维图像信息匹配时，则所述处理器再根据红外图像判断该待测物体是否具备脸部特征；当所述处理器确认所述待测物体的二维图像信息与预存的二维图像信息不匹配时，则身份鉴权失败。
  46. 如权利要求27所述的身份鉴权装置，其特征在于：在执行身份鉴权时，所述第一投射器用于先投射红外结构光至待测物体，然后在所述处理器根据该红外图像判断该待测物体是否具备脸部特征的同时：所述第二投射器投射红外泛光至该待测物体，其中，当所述处理器判断得知该待测物体具备脸部特征时，则所述处理器再根据所述平面图像获得该待测物体的二维图像信息，并比对所述待测物体的二维图像信息与预存的二维图像信息是否匹配；当所述处理器判断得知所述待测物体不具备脸部特征时，则身份鉴权失败。
  47. 如权利要求23所述的身份鉴权装置,其特征在于:该处理器同时执行:“根据该红外图像判断该待测物体是否具备脸部特征”,以及“根据所述平面图像获得该待测物体的二维图像信息、并比对所述二维图像信息与预存的二维图像信息是否匹配”。
  48. 一种电子设备,包括权利要求23-47中任意一项所述的身份鉴权装置。
  49. 如权利要求48所述的电子设备,其特征在于:所述电子设备根据所述身份鉴权装置的身份鉴权结果来对应是否执行相应的功能。
  50. 如权利要求49所述的电子设备,其特征在于:所述相应的功能包括解锁、支付、启动预设的应用程序中的任意一种或几种。
  51. 如权利要求48所述的电子设备,其特征在于:所述电子设备包括消费性电子产品、家居式电子产品、车载式电子产品、金融终端产品中的任意一种或几种。
PCT/CN2018/083618 2018-04-18 2018-04-18 身份鉴权方法、身份鉴权装置、和电子设备 WO2019200575A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880000314.8A CN108513662A (zh) 2018-04-18 2018-04-18 身份鉴权方法、身份鉴权装置、和电子设备
PCT/CN2018/083618 WO2019200575A1 (zh) 2018-04-18 2018-04-18 身份鉴权方法、身份鉴权装置、和电子设备


Publications (1)

Publication Number Publication Date
WO2019200575A1

Family

ID=63404336

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/083618 WO2019200575A1 (zh) 2018-04-18 2018-04-18 身份鉴权方法、身份鉴权装置、和电子设备

Country Status (2)

Country Link
CN (1) CN108513662A (zh)
WO (1) WO2019200575A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378207B (zh) * 2019-06-10 2022-03-29 北京迈格威科技有限公司 人脸认证方法、装置、电子设备及可读存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101964056A (zh) * 2010-10-26 2011-02-02 徐勇 一种具有活体检测功能的双模态人脸认证方法和系统
CN106372601A (zh) * 2016-08-31 2017-02-01 上海依图网络科技有限公司 一种基于红外可见双目图像的活体检测方法及装置
CN107169483A (zh) * 2017-07-12 2017-09-15 深圳奥比中光科技有限公司 基于人脸识别的任务执行
CN107483428A (zh) * 2017-08-09 2017-12-15 广东欧珀移动通信有限公司 身份验证方法、装置和终端设备

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US9754154B2 (en) * 2013-02-15 2017-09-05 Microsoft Technology Licensing, Llc Identification using depth-based head-detection data
CN107368730B (zh) * 2017-07-31 2020-03-06 Oppo广东移动通信有限公司 解锁验证方法和装置
CN107480613B (zh) * 2017-07-31 2021-03-02 Oppo广东移动通信有限公司 人脸识别方法、装置、移动终端和计算机可读存储介质
CN107563304B (zh) * 2017-08-09 2020-10-16 Oppo广东移动通信有限公司 终端设备解锁方法及装置、终端设备
CN107657245A (zh) * 2017-10-16 2018-02-02 维沃移动通信有限公司 一种人脸识别方法和终端设备


Also Published As

Publication number Publication date
CN108513662A (zh) 2018-09-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18915557

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18915557

Country of ref document: EP

Kind code of ref document: A1