WO2020237482A1 - Optical sensor, apparatus, method, and electronic device for face recognition - Google Patents

Optical sensor, apparatus, method, and electronic device for face recognition

Info

Publication number
WO2020237482A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel unit
type
pixel
range
face image
Prior art date
Application number
PCT/CN2019/088653
Other languages
English (en)
French (fr)
Inventor
吴勇辉
潘雷雷
Original Assignee
深圳市汇顶科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司
Priority to PCT/CN2019/088653
Priority to CN201980000834.3A
Publication of WO2020237482A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/162 - Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V 40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 - Feature extraction; Face representation
    • G06V 40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • This application relates to the technical field of face recognition, and more specifically, to an optical sensor, device, method, and electronic device for face recognition.
  • the embodiments of the present application provide an optical sensor, an apparatus, a method, and an electronic device for face recognition, which can identify whether a face is real or fake, thereby improving the security of face recognition.
  • In a first aspect, an optical sensor for face recognition is provided, including a pixel array, where a first pixel unit set in the pixel array includes a first-type pixel unit group and a second-type pixel unit group, wherein:
  • the first-type pixel unit group includes at least one first-type pixel unit, the first-type pixel unit is provided with a first filter, and the first filter is used to pass light signals in a first wavelength range;
  • the second-type pixel unit group includes at least one second-type pixel unit, the second-type pixel unit is provided with a second filter, the second filter is used to pass light signals in a second wavelength range, and the second waveband range is different from the first waveband range;
  • the pixel units in the first-type pixel unit group and the second-type pixel unit group are used to receive the reflected light signal reflected from the human face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal, where the partial face image is used to determine the authenticity of the face.
  • the first pixel unit set further includes a third-type pixel unit group, the third-type pixel unit group includes at least one third-type pixel unit, and the third-type pixel unit is provided with a third filter; the third filter is used to pass optical signals in a third waveband range, and the third waveband range is different from the first waveband range and the second waveband range. The pixel units in the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group are used to receive the reflected light signal reflected from the face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal, where the partial face image is used to determine the authenticity of the face.
  • the first waveband range, the second waveband range, and the third waveband range are respectively one of three waveband ranges described below.
  • the waveband range of the optical signal emitted by the light source includes the first waveband range, the second waveband range, and the third waveband range.
  • the number of consecutive pixel units in the first pixel unit set is less than or equal to a first threshold.
  • the ratio of the number of pixel units in the first pixel unit set to the total number of pixel units in the pixel array is less than a first ratio.
  • the pixel units in the first pixel unit set are discretely distributed in the pixel array.
  • the face images collected by other pixel units in the pixel array except the first pixel unit set are used for face recognition.
  • other pixel units in the pixel array except the first set of pixel units are not provided with filters.
  • other pixel units in the pixel array except the first set of pixel units are provided with filters of a specific wavelength range.
  • the filter in the specific wavelength range includes a filter in the 940 nm wavelength range.
  • In a second aspect, an apparatus for face recognition is provided, including:
  • the optical sensor for face recognition according to the first aspect or any possible implementation thereof, wherein the pixel units in the first-type pixel unit group and the second-type pixel unit group in the first pixel unit set of the optical sensor are used to receive the reflected light signal reflected from the face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal; and
  • the processor is configured to determine the authenticity of the face according to the partial face image.
  • the first pixel unit set includes a first type of pixel unit group, a second type of pixel unit group, and a third type of pixel unit group, where:
  • the first-type pixel unit group includes at least one first-type pixel unit, the first-type pixel unit is provided with a first filter, and the first filter is used to pass light signals in a first wavelength range;
  • the second-type pixel unit group includes at least one second-type pixel unit, the second-type pixel unit is provided with a second filter, the second filter is used to pass light signals in a second wavelength range, and the second waveband range is different from the first waveband range;
  • the third-type pixel unit group includes at least one third-type pixel unit, the third-type pixel unit is provided with a third filter, the third filter is used to pass light signals in a third wavelength range, and the third waveband range is different from the first waveband range and the second waveband range;
  • the pixel units in the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group are used to receive the reflected light signal reflected from the face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal, where the partial face image is used to determine the authenticity of the face.
  • the processor is further configured to:
  • the first partial face image collected by the first-type pixel unit group, the second partial face image collected by the second-type pixel unit group, and the third partial face image collected by the third-type pixel unit group are calibrated; and
  • the authenticity of the face is determined based on the calibrated first partial face image, second partial face image, and third partial face image.
  • the processor is further configured to:
  • the calibration parameter is determined according to the relationship between the spectral response of the reference object to the first waveband range, the second waveband range and the third waveband range.
  • the optical sensor is also used for:
  • the light source emits a light signal to the reference object, and the spectral responses of the reference object to the first waveband range, the second waveband range, and the third waveband range are collected through the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group;
  • the processor is further configured to determine the calibration parameter according to the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range, so that the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range, as collected by the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group, fall within the same range.
  • the reference object is a white object or a flesh-colored object.
  • the processor is further configured to:
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image, wherein each pixel of the color partial face image includes three response pixel values corresponding to the responses of the face to the spectra of the first waveband range, the second waveband range, and the third waveband range;
  • the authenticity of the human face is determined according to the feature information of the color partial human face image.
  • the processor is further configured to:
  • the processor is specifically configured to:
  • the feature information of the color partial face image is processed through a deep learning network to determine the authenticity of the face.
  • the processor is further configured to:
  • the multiple color partial face images are input to a deep learning network for training, and the model and parameters of the deep learning network are obtained.
  • the processor is further configured to:
  • Face recognition is performed according to face images collected by other pixel units in the pixel array except for the first pixel unit set.
  • In a third aspect, a method for face recognition is provided, including:
  • the first pixel unit set includes a first-type pixel unit group and a second-type pixel unit group; the first-type pixel unit group includes at least one first-type pixel unit, the first-type pixel unit is provided with a first filter, and the first filter is used to pass the optical signal in the first wavelength range;
  • the second-type pixel unit group includes at least one second-type pixel unit, the second-type pixel unit is provided with a second optical filter, the second optical filter is used to pass an optical signal in a second waveband range, and the second waveband range is different from the first waveband range;
  • the partial face image is used to determine the authenticity of the face.
  • the first pixel unit set further includes a third-type pixel unit group, the third-type pixel unit group includes at least one third-type pixel unit, the third-type pixel unit is provided with a third optical filter, the third optical filter is used to pass an optical signal in a third waveband range, and the third waveband range is different from the first waveband range and the second waveband range.
  • the first waveband range, the second waveband range, and the third waveband range are respectively one of three waveband ranges described below.
  • the waveband range of the optical signal emitted by the light source includes the first waveband range, the second waveband range, and the third waveband range.
  • the determining the authenticity of the human face according to the partial face image includes:
  • the first partial face image collected by the first-type pixel unit group, the second partial face image collected by the second-type pixel unit group, and the third partial face image collected by the third-type pixel unit group are calibrated; and
  • the authenticity of the face is determined based on the calibrated first partial face image, second partial face image, and third partial face image.
  • the method further includes:
  • the calibration parameter is determined according to the relationship between the spectral response of the reference object to the first waveband range, the second waveband range, and the third waveband range.
  • the determining the calibration parameter according to the relationship between the spectral responses of the reference object to the first waveband range, the second waveband range, and the third waveband range includes:
  • the light source emits a light signal to the reference object, and the spectral responses of the reference object to the first waveband range, the second waveband range, and the third waveband range are collected through the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group;
  • the determining the authenticity of the human face based on the calibrated first partial face image, the second partial face image, and the third partial face image includes:
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image, wherein each pixel of the color partial face image includes three response pixel values corresponding to the responses of the face to the spectra of the first waveband range, the second waveband range, and the third waveband range;
  • the authenticity of the human face is determined according to the feature information of the color partial human face image.
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image.
  • the determining the authenticity of the face according to the feature information of the color partial face image includes:
  • the feature information of the color partial face image is processed through a deep learning network to determine the authenticity of the face.
  • the method further includes:
  • the multiple color partial face images are input to a deep learning network for training, and the model and parameters of the deep learning network are obtained.
  • the method further includes:
  • Face recognition is performed according to face images collected by other pixel units in the pixel array except the first pixel unit set.
  • the performing face recognition based on the face images collected by other pixel units in the pixel array except the first pixel unit set includes:
  • if the face image matches the registered face image and the face is a real face, it is determined that the face recognition is successful.
  • the number of consecutive pixel units in the first pixel unit set is less than the first threshold.
  • the ratio of the number of pixel units in the first pixel unit set to the total number of pixel units in the pixel array is smaller than the first ratio.
  • the pixel units in the first pixel unit set are discretely distributed in the pixel array.
  • face images collected by other pixel units in the pixel array except the first pixel unit set are used for face recognition.
  • no filter is provided for other pixel units in the pixel array except the first pixel unit set.
  • other pixel units in the pixel array except the first pixel unit set are provided with filters of a specific wavelength range.
  • the filter in the specific wavelength range includes a filter in the 940 nm wavelength range.
  • In a fourth aspect, an electronic device is provided, including the apparatus for face recognition as in the second aspect and any possible implementation manner thereof.
  • a computer-readable medium for storing a computer program, and the computer program includes instructions for executing the foregoing third aspect and any possible implementation manner thereof.
  • a computer program product including instructions is provided.
  • when the computer runs the instructions of the computer program product, the computer executes the method for face recognition in the third aspect and any one of its possible implementations.
  • the computer program product can run on the electronic device of the fourth aspect.
  • The pixel unit provided with a filter can collect the spectral response in the wavelength range of that filter, so that at least two spectral responses can be collected in one exposure, and there is no need to perform multiple collections to obtain the at least two spectral responses. This can increase the collection speed; furthermore, living-body recognition can be performed based on the at least two spectral responses, which helps to improve the security of face recognition.
  • Figure 1 is the reflectance spectrum curve of human skin.
  • Fig. 2 is a schematic structural diagram of an optical sensor for face recognition according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an arrangement of the filter group in the pixel array.
  • Fig. 4 is a schematic diagram of the arrangement of filters in a filter group.
  • Fig. 5 is a schematic structural diagram of an apparatus for face recognition according to an embodiment of the present application.
  • Fig. 6 is a schematic flowchart of a method for face recognition according to an embodiment of the present application.
  • Fig. 7 is an overall flowchart of a method for face recognition according to an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiments of this application can be applied to various face recognition systems.
  • the face recognition system provided in the embodiments of this application can be applied to mobile terminals such as smart phones and tablet computers, access control systems such as door locks, or other electronic equipment.
  • Existing living-body anti-spoofing uses interactive methods, such as blinking or changing facial expressions. Such methods usually require continuous collection of several frames of images, which reduces the recognition speed.
  • However, human skin tissue has a certain specificity in its light reflection in specific wavelength ranges. As shown in Figure 1, human skin has a special spectral response around the 560 nm and 980 nm wavelength ranges, which does not exist on the spectral response curves of artificial materials such as paper and molds.
  • In view of this, the present application provides a method for face living-body anti-spoofing, which can acquire the spectral response of the target to be identified to specific wavelength ranges by collecting one frame of image, and further perform living-body anti-spoofing based on the spectral response, which is beneficial to improving the recognition speed and can also improve the security of face recognition.
  • the target to be recognized in the embodiment of the present application may be a human face, or may also be other parts of the human body, such as a finger or a palm, which is not limited in the embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an optical sensor 20 for face recognition provided by an embodiment of the present application.
  • the optical sensor 20 includes:
  • the first pixel unit set 21 in the pixel array 200 includes a first type of pixel unit group and a second type of pixel unit group, wherein:
  • the first-type pixel unit group includes at least one first-type pixel unit 211, the first-type pixel unit 211 is provided with a first filter 221, and the first filter 221 is used to pass optical signals in the first wavelength range;
  • the second-type pixel unit group includes at least one second-type pixel unit 212, the second-type pixel unit 212 is provided with a second filter 222, the second filter 222 is used to pass optical signals in the second wavelength range, and the second waveband range is different from the first waveband range;
  • the pixel units in the first-type pixel unit group and the second-type pixel unit group are used to receive the reflected light signal reflected from the human face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal, where the partial face image is used to determine the authenticity of the face.
  • the pixel units in the first pixel unit set of the optical sensor may be provided with at least two different types of filters, and the pixel units provided with the same type of filter are regarded as a type of pixel unit group, then the The first set of pixel units can be divided into at least two types of pixel unit groups.
  • the correspondence between the pixel units in one type of pixel unit group and the filters of the corresponding type can be one-to-one or many-to-one, that is, one pixel unit corresponds to one filter, or multiple pixel units share one filter.
  • one first-type pixel unit 211 may correspond to one first filter 221, or it may also be that multiple first-type pixel units 211 correspond to one first filter 221;
  • one second-type pixel unit 212 may correspond to one second filter 222, or, multiple second-type pixel units 212 may correspond to one second filter 222.
  • the filter is arranged in the front light path of the pixel unit.
  • the filter can be arranged above the pixel unit where the filter needs to be arranged; for example, the filter is pasted on the upper surface of the pixel unit, or the filter material may be directly covered on the pixel unit, as long as it can filter light, which is not limited in the embodiment of the present application.
  • the optical filter in the embodiment of the present application only allows the optical signal in a specific wavelength range to pass.
  • the optical filter has a relatively high transmittance to the optical signal in the specific wavelength range, for example, greater than 80% or 90%.
  • the transmittance of optical signals in other wavebands is relatively low, for example, less than 10% or 20%.
  • the wavelength range of the optical signal passed by the filter can be specially designed.
  • For example, according to the reflectance spectrum curve of human skin, the filter can be designed to pass optical signals in a wavelength range with a special spectral response, for example, the wavelength range around 560 nm, or other visible light ranges, or the wavelength range around 980 nm; or it can also pass an infrared wavelength range with better face recognition performance, such as the wavelength range around 940 nm.
  • In the embodiments of the present application, filters with different wavelength ranges are provided in at least two types of pixel unit groups of the pixel array, so that each pixel unit provided with a filter can collect the spectral response in the wavelength range of that filter. In this way, at least two types of spectral responses can be collected in one exposure, and there is no need to perform multiple collections to obtain the at least two types of spectral responses, which can increase the collection speed. The at least two spectral responses are then used for living-body recognition, which is beneficial to improving the security of face recognition.
  • the first pixel unit set 21 may also include a third-type pixel unit group, the third-type pixel unit group includes at least one third-type pixel unit 213, the third-type pixel unit 213 is provided with a third filter 223, and the third filter 223 is used to pass the optical signal in the third wavelength range.
  • the first waveband range, the second waveband range, and the third waveband range are respectively one of three waveband ranges, which can be determined in the visible light band.
  • For example, the first waveband range can be set to a waveband range including 560 nm, and the second waveband range and the third waveband range are respectively one of the red band, the blue band, and the green band.
  • For example, the wavelength range of blue light can have a center band from 440 nm to 475 nm with an upper cutoff of about 550 nm; the wavelength range of green light can have a center band from 520 nm to 550 nm with an upper cutoff of about 620 nm and a lower cutoff of about 460 nm; and the lower cutoff of the red band is about 550 nm.
  • The wavelength range including 560 nm may be a wavelength range around 560 nm, for example, 560 nm ± 20 nm or 560 nm ± 40 nm. The specific wavelength range may be controlled by the manufacturing process of the filter, which is not limited in the embodiment of the present application; other waveband ranges are similar and will not be repeated here.
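  • For illustration only, the following minimal Python sketch shows one way such passbands could be represented and queried in software; the band names and the exact cutoff values are assumptions drawn from the examples above, not values fixed by this application.

```python
# Hypothetical passband table for the three filter types discussed above.
# The exact cutoff values are illustrative assumptions, not normative values.
PASSBANDS_NM = {
    "filter_560": (540, 580),    # waveband including 560 nm (e.g. 560 nm +/- 20 nm)
    "filter_blue": (440, 550),   # blue band, upper cutoff around 550 nm
    "filter_green": (460, 620),  # green band, cutoffs around 460 nm and 620 nm
}

def passes(filter_name: str, wavelength_nm: float) -> bool:
    """Return True if the named filter is assumed to transmit the given wavelength."""
    low, high = PASSBANDS_NM[filter_name]
    return low <= wavelength_nm <= high

if __name__ == "__main__":
    print(passes("filter_560", 560))   # True
    print(passes("filter_blue", 700))  # False
```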
  • the waveband range of the optical signal emitted by the light source includes the first waveband range, the second waveband range, and the third waveband range.
  • In this way, the human face is illuminated by a light signal covering the whole waveband, and the responses of multiple spectra are then extracted through the various filters, without using multiple light sources of different wavebands, which saves cost and reduces the complexity of the module.
  • the face images collected by other pixel units in the pixel array except the first pixel unit set are used for face recognition.
  • For example, the face images collected by the other pixel units can be matched with the registered face image template to determine whether the matching is successful.
  • The other pixel units in the pixel array except the first pixel unit set may not be provided with filters, for example, by transparent processing or by using transparent materials; or they may be provided with a filter of a specific wavelength range, such as a filter of the 940 nm wavelength range.
  • The number of consecutive pixel units in the first pixel unit set may be set to be less than or equal to a certain threshold, for example, 6; that is, the number of pixel units continuously covered by filters is kept below a certain threshold, which can avoid affecting the face recognition performance.
  • Similarly, the ratio of the number of pixel units in the first pixel unit set to the total number of pixel units in the pixel array may be set to be less than a first ratio, such as 5%, to avoid affecting the face recognition performance.
  • the pixel units in the first pixel unit set are discretely distributed in the pixel array, and correspondingly, the first filter, the second filter, and the third filter Discretely distributed in the pixel array.
  • For example, the first filter, the second filter, and the third filter may constitute a filter group 220, and the filter groups 220 are discretely distributed in the pixel array 200.
  • The filter group 220 can be arranged in the pixel array of the optical sensor in a square, diamond, circular, or other regular or irregular pattern, as long as it does not affect the face recognition performance; this is not limited in the embodiment of the present application.
  • The filters in one filter group 220 may be discrete, that is, the filters are separated by pixel units without filters, for example, the arrangements h to i in FIG. 4; or the filters may also be continuous, such as the arrangements a to e in FIG. 4.
  • The embodiment of the present application does not specifically limit the number and arrangement of the first filters, the second filters, and the third filters included in one filter group.
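  • To make the layout constraints above concrete (small discrete filter groups, a low overall coverage ratio, and a cap on consecutively covered pixel units), the following sketch generates and checks a hypothetical filter-assignment mask for the pixel array. The 2x2 group pattern, the grid stride, and the function names are illustrative assumptions; only the 5% ratio and the threshold of 6 consecutive pixels come from the examples above.

```python
import numpy as np

# Filter type codes for the mask: 0 = no filter, 1/2/3 = first/second/third filter type.
NO_FILTER, F1, F2, F3 = 0, 1, 2, 3

def build_filter_mask(height, width, group_stride=40):
    """Place small discrete filter groups (an assumed 2x2 pattern) on a sparse grid."""
    mask = np.full((height, width), NO_FILTER, dtype=np.uint8)
    pattern = np.array([[F1, F2],
                        [F3, F1]], dtype=np.uint8)  # one hypothetical filter group
    for r in range(0, height - 1, group_stride):
        for c in range(0, width - 1, group_stride):
            mask[r:r + 2, c:c + 2] = pattern
    return mask

def check_layout(mask, max_consecutive=6, max_ratio=0.05):
    """Check the constraints mentioned above: coverage ratio and per-row run length."""
    covered = mask != NO_FILTER
    if covered.mean() >= max_ratio:          # e.g. keep coverage below 5%
        return False
    for row in covered:
        run = 0
        for v in row:
            run = run + 1 if v else 0
            if run > max_consecutive:        # e.g. at most 6 consecutive covered pixels
                return False
    return True

if __name__ == "__main__":
    mask = build_filter_mask(480, 640)
    print(check_layout(mask))  # True for this sparse layout
```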
  • Fig. 5 is a schematic structural diagram of an apparatus for face recognition according to an embodiment of the present application.
  • the apparatus 50 for face recognition may include:
  • An optical sensor 51 where the pixel units in the first set of pixel units of the optical sensor 51 are used to receive the reflected light signal reflected from the face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal;
  • the processor 52 is configured to determine the authenticity of the face according to the partial face image.
  • the sensor 51 may be the optical sensor 20 in the embodiment shown in FIG. 2, and the specific description may refer to the related description of the embodiment shown in FIG. 2, which will not be repeated here.
  • the light source may be a built-in light source of the apparatus 50, or it may be a light source external to the apparatus 50, or the light source in the electronic device in which the apparatus 50 is installed may be reused; in the latter case, the apparatus 50 may not include the light source, which is not limited in the embodiment of the present application.
  • the wavelength range of the light source may include multiple wavelength ranges.
  • the light signal emitted by the light source may also be used for face recognition, that is, the same light source is used for living body recognition and face recognition; in other alternative embodiments,
  • the device 50 for face recognition may further include another light source for face recognition, for example, a light source in the wavelength range of about 940 nm.
  • In the embodiment of the present application, the partial face image collected by the pixel units in the first pixel unit set may be used for living-body recognition, or the face image collected by the entire pixel array may also be used for living-body recognition.
  • the first set of pixel units includes a first type of pixel unit group, a second type of pixel unit group, and a third type of pixel unit group, wherein:
  • the first-type pixel unit group includes at least one first-type pixel unit, the first-type pixel unit is provided with a first filter, and the first filter is used to pass light signals in a first wavelength range;
  • the second-type pixel unit group includes at least one second-type pixel unit, the second-type pixel unit is provided with a second filter, the second filter is used to pass light signals in a second wavelength range, and the second waveband range is different from the first waveband range;
  • the third-type pixel unit group includes at least one third-type pixel unit, the third-type pixel unit is provided with a third filter, the third filter is used to pass light signals in a third wavelength range, and the third waveband range is different from the first waveband range and the second waveband range.
  • The first-type pixel unit group can obtain the response of the target to be identified to the light signal (or the spectrum) in the first wavelength range through the first filter, the second-type pixel unit group can obtain the response of the target to be identified to the spectrum in the second wavelength range through the second filter, and the third-type pixel unit group can obtain the response of the target to be identified to the spectrum in the third wavelength range through the third filter; that is, the authenticity of the human face can be determined based on the responses of these different spectra.
  • For example, the partial face images collected by each type of pixel unit group, which reflect the responses of the above three spectra, can be input to a trained convolutional neural network for classification to determine whether they come from a real face.
  • Alternatively, the ratio between the pixel value collected by a pixel unit in the first pixel unit set and the pixel value collected by a neighboring pixel unit without a filter can be determined, and living-body identification can be performed based on the ratio; for example, when the ratio is within a certain ratio range, the face is determined to be a real face, otherwise it is determined to be a fake face.
  • the specific ratio range may be obtained by statistics of a large number of real face and fake face data, or obtained by machine learning.
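  • A minimal sketch of this ratio test is given below, assuming a single grayscale frame and known coordinates of the filtered pixel units; the acceptance interval RATIO_RANGE is a placeholder that, as noted above, would in practice come from statistics over real and fake faces or from machine learning.

```python
import numpy as np

# Placeholder acceptance interval; in practice it would be derived from statistics
# over real and fake faces, or learned, as noted above.
RATIO_RANGE = (0.4, 0.8)

def liveness_by_ratio(frame, filtered_coords, neighbor_offset=(0, 1)):
    """Compare each filtered pixel unit against an adjacent unfiltered pixel unit.

    frame: 2D array of raw pixel values from one exposure.
    filtered_coords: (row, col) positions of pixel units that carry a filter.
    neighbor_offset: assumed offset to a neighboring pixel unit without a filter.
    """
    dr, dc = neighbor_offset
    ratios = []
    for r, c in filtered_coords:
        neighbor = frame[r + dr, c + dc]
        if neighbor > 0:
            ratios.append(frame[r, c] / neighbor)
    mean_ratio = float(np.mean(ratios))
    low, high = RATIO_RANGE
    return low <= mean_ratio <= high  # True is treated as a real face

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(50, 200, size=(8, 8)).astype(float)
    print(liveness_by_ratio(frame, [(2, 2), (4, 4)]))
```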
  • For example, the response of artificial materials (such as paper) to the spectrum of a certain wavelength range may partially overlap with the response of a living person to the spectrum of that wavelength range; if a living body is identified based only on the response in that single wavelength range, misidentification may occur. Therefore, the responses of the target to be identified to the spectra of the first waveband range, the second waveband range, and the third waveband range can be used together, which can cover the characteristic spectral response of human skin.
  • Ideally, the spectral responses collected by the three types of pixel unit groups should be the same or similar. In practice, however, the magnitudes of the spectral responses collected by the pixel units in the three types of pixel unit groups may differ to a certain extent; in this case, when the three types of spectral responses are synthesized, if one spectral response is too large, the other spectral responses may not be effectively distinguished.
  • the partial face image collected by each type of pixel unit group may also be calibrated.
  • the processor 52 is further configured to:
  • the first partial face image collected by the first-type pixel unit group, the second partial face image collected by the second-type pixel unit group, and the third partial face image collected by the third-type pixel unit group are calibrated; and
  • the authenticity of the face is determined based on the calibrated first partial face image, second partial face image, and third partial face image.
  • the processor 52 is further configured to:
  • the calibration parameter is determined according to the relationship between the spectral response of the reference object to the first waveband range, the second waveband range and the third waveband range.
  • the reference object is used as a test object to determine the calibration parameters.
  • For example, the reference object may be a solid-color object, such as a piece of white paper or a flesh-colored object. It is expected that the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range, as collected by the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group, are at the same level. The responses of the reference object actually collected by the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group to the spectra of the first waveband range, the second waveband range, and the third waveband range are therefore calibrated to determine the calibration parameters.
  • Specifically, the light source emits an optical signal to the reference object, where the wavelength range of the optical signal includes the first wavelength range, the second wavelength range, and the third wavelength range; the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group of the optical sensor collect the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range; and the calibration parameter is then determined according to the relationship between the responses of these multiple spectra.
  • For example, suppose the pixel value collected by the first-type pixel unit P1 is 200, the pixel value collected by the second-type pixel unit P2 adjacent to the first-type pixel unit P1 is 100, and the pixel value collected by the adjacent third-type pixel unit P3 is 50. These three pixel values respectively represent the responses of the three types of spectra, and their ratio is 4:2:1. To bring the responses of the three spectra to the same level, they can be calibrated: the pixel value collected by the second-type pixel unit P2 can be multiplied by 2, and the pixel value collected by the third-type pixel unit P3 can be multiplied by 4.
  • Optionally, the calibration parameter may be a preset value, for example, determined according to empirical values of the three kinds of spectral responses; or it may be determined through the above-mentioned calibration step, in which case the calibration parameter determined in that step can be pre-stored and used for the calibration of subsequently collected face images, that is, subsequent calibrations of face images all adopt this calibration parameter. This is not limited in the embodiment of the present application.
  • The responses of the three spectra being at the same level may mean that the differences between the three spectral responses are less than a certain threshold, or that the pixel values of the three spectral responses are comparable, or in other words, within the same range.
  • Optionally, the calibration parameter of each pixel unit in the first pixel unit set can be determined, and the subsequently collected pixel values can then be calibrated according to the calibration parameter of each pixel unit; or the calibration parameters of the pixel units are averaged to obtain a unified calibration parameter, and the pixel values collected by all pixel units are calibrated according to the unified calibration parameter.
  • the embodiment of the present application does not limit the specific calibration method.
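  • The following sketch illustrates one way the calibration described above could be computed from a single capture of a white or flesh-colored reference object, yielding one multiplicative gain per filter type (as in the 200:100:50 example, which gives gains of 1, 2, and 4); the function names and data layout are illustrative assumptions, not part of this application.

```python
import numpy as np

def calibration_gains(reference_frame, coords_by_type):
    """Derive one gain per filter type so the three spectral responses reach the same level.

    reference_frame: frame captured while the light source illuminates the reference object.
    coords_by_type: dict such as {"f1": [(r, c), ...], "f2": [...], "f3": [...]}.
    Returns multiplicative gains normalized to the strongest response, so responses of
    200:100:50 give gains of 1, 2 and 4 as in the example above.
    """
    means = {k: float(np.mean([reference_frame[r, c] for r, c in v]))
             for k, v in coords_by_type.items()}
    target = max(means.values())
    return {k: target / m for k, m in means.items()}

def apply_gains(face_frame, coords_by_type, gains):
    """Calibrate a subsequently collected face image with the pre-stored gains."""
    out = face_frame.astype(float).copy()
    for k, coords in coords_by_type.items():
        for r, c in coords:
            out[r, c] *= gains[k]
    return out

if __name__ == "__main__":
    ref = np.array([[200.0, 100.0], [50.0, 120.0]])
    coords = {"f1": [(0, 0)], "f2": [(0, 1)], "f3": [(1, 0)]}
    gains = calibration_gains(ref, coords)
    print(gains)                          # {'f1': 1.0, 'f2': 2.0, 'f3': 4.0}
    print(apply_gains(ref, coords, gains))
```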
  • the processor 52 is further configured to:
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image, wherein each pixel of the color partial face image includes three response pixel values corresponding to the responses of the face to the spectra of the first waveband range, the second waveband range, and the third waveband range;
  • the authenticity of the human face is determined according to the feature information of the color partial human face image.
  • A color partial face image may include three color channels, such as RGB, and one spectral response corresponds to one color channel; that is, each pixel in the color partial face image includes three pixel values (that is, the three response pixel values), which correspond to the three spectral responses respectively.
  • each of the aforementioned three types of pixel unit groups provided with filters can only obtain one spectral response. Therefore, to obtain the color partial face image, it is necessary to determine the other two spectral responses.
  • the other two spectral responses can be obtained based on the pixel values collected by other adjacent pixel unit groups.
  • For example, the first response pixel value corresponding to the first pixel in the color partial face image may be determined according to the pixel value collected by a first-type pixel unit in the first-type pixel unit group, where the first response pixel value represents the response of the face to the spectrum in the first waveband range, and the other two response pixel values of that pixel may be determined according to the pixel values collected by the adjacent second-type and third-type pixel units.
  • For example, suppose the first-type pixel unit group includes 100 pixel units, the second-type pixel unit group includes 100 pixel units, and the third-type pixel unit group includes 100 pixel units, and the color partial face image includes 100 pixels, each pixel including three pixel values corresponding to the responses of the three types of spectra. In this way, the three response pixel values of the pixel corresponding to P1 in the color partial face image can be determined, and likewise the responses of the three spectra of each pixel in the color partial face image can be determined, thereby obtaining the color partial face image.
  • Optionally, the full-spectrum image corresponding to each of the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group may be determined, and the color partial face image may then be determined according to the full-spectrum images corresponding to the respective types of pixel unit groups, where a full-spectrum image is an image that includes the three types of spectral responses.
  • Specifically, the spectral response of each first-type pixel unit to the second waveband range can be determined according to the pixel value collected by the second-type pixel unit adjacent to that first-type pixel unit in the first-type pixel unit group, to obtain a first response image, which is equivalent to the spectral response of the first-type pixel unit group to the second waveband range; and the spectral response of each first-type pixel unit to the third waveband range can be determined according to the pixel value collected by the adjacent third-type pixel unit, to obtain a second response image, which is equivalent to the spectral response of the first-type pixel unit group to the third waveband range.
  • Similarly, the spectral response of each second-type pixel unit to the first waveband range can be determined according to the pixel value collected by the first-type pixel unit adjacent to that second-type pixel unit in the second-type pixel unit group, to obtain a third response image, which is equivalent to the spectral response of the second-type pixel unit group to the first waveband range; and the spectral response of each second-type pixel unit to the third waveband range can be determined according to the pixel value collected by the adjacent third-type pixel unit, to obtain a fourth response image, which is equivalent to the spectral response of the second-type pixel unit group to the third waveband range.
  • Likewise, the spectral response of each third-type pixel unit to the first waveband range can be determined according to the pixel value collected by the first-type pixel unit adjacent to that third-type pixel unit in the third-type pixel unit group, to obtain a fifth response image, which is equivalent to the spectral response of the third-type pixel unit group to the first waveband range; and the spectral response of each third-type pixel unit to the second waveband range can be determined according to the pixel value collected by the adjacent second-type pixel unit, to obtain a sixth response image, which is equivalent to the spectral response of the third-type pixel unit group to the second waveband range.
  • The processor 52 is further configured to: synthesize the first partial face image with the first response image and the second response image to obtain a full-spectrum response image corresponding to the first-type pixel unit group; synthesize the second partial face image with the third response image and the fourth response image to obtain a full-spectrum response image corresponding to the second-type pixel unit group; and synthesize the third partial face image with the fifth response image and the sixth response image to obtain a full-spectrum response image corresponding to the third-type pixel unit group.
  • The first response image to the sixth response image are obtained based on the calibrated first partial face image, second partial face image, and third partial face image.
  • Each pixel in a full-spectrum response image corresponds to the three types of spectral responses, that is, each pixel corresponds to three pixel values, which can be regarded as RGB values.
  • Further, the full-spectrum response image corresponding to the first-type pixel unit group, the full-spectrum response image corresponding to the second-type pixel unit group, and the full-spectrum response image corresponding to the third-type pixel unit group are reorganized (or spliced) to obtain the color partial face image (or RGB image).
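  • The sketch below shows, under simplified assumptions, how the neighbor-based synthesis described above could be implemented: for each filtered pixel unit, the two missing band responses are borrowed from the nearest pixel units of the other two filter types, and the results are stacked into three-channel (RGB-like) pixels. The nearest-neighbor rule and all names are illustrative stand-ins for the "adjacent pixel unit" interpolation described in the text.

```python
import numpy as np

def synthesize_color_partial_image(values, coords_by_type):
    """Build a color partial face image from one calibrated exposure.

    values: 2D calibrated frame.
    coords_by_type: {"b1": [...], "b2": [...], "b3": [...]} coordinates per filter type.
    For each filtered pixel unit, the two missing band responses are taken from the
    nearest pixel unit of each other filter type (a stand-in for "adjacent").
    Returns an (N, 3) array: one three-channel pixel per filtered pixel unit.
    """
    def nearest_value(coords, r, c):
        distances = [abs(r - rr) + abs(c - cc) for rr, cc in coords]
        rr, cc = coords[int(np.argmin(distances))]
        return values[rr, cc]

    bands = ["b1", "b2", "b3"]
    pixels = []
    for own_band in bands:
        for r, c in coords_by_type[own_band]:
            pixel = []
            for band in bands:
                if band == own_band:
                    pixel.append(values[r, c])                               # own response
                else:
                    pixel.append(nearest_value(coords_by_type[band], r, c))  # borrowed
            pixels.append(pixel)
    return np.array(pixels, dtype=float)

if __name__ == "__main__":
    frame = np.array([[200.0, 100.0], [50.0, 0.0]])
    coords = {"b1": [(0, 0)], "b2": [(0, 1)], "b3": [(1, 0)]}
    print(synthesize_color_partial_image(frame, coords))  # three pixels, each [b1, b2, b3]
```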
  • After obtaining the color partial face image, the processor can perform living-body recognition based on the color partial face image to identify whether the human face is real or fake.
  • For example, the processor 52 can extract feature information of the color partial face image, for example, color feature information, specifically hue, saturation, and value (HSV) information, and then input the feature information of the color partial face image into a deep learning network for classification to determine whether the face is real or fake.
  • the deep learning network may be a convolutional neural network or other deep learning networks.
  • The following takes the convolutional neural network as an example to illustrate the specific training process. For the convolutional neural network structure, a two-layer convolutional neural network can be used, or a three-layer or multi-layer network structure can also be used.
  • the initial training parameters may be randomly generated, or acquired based on empirical values, or may be parameters of a convolutional neural network model pre-trained based on a large amount of true and false face data.
  • the convergence condition may include at least one of the following:
  • the probability of judging a color partial face image of a real human face as being from a real human face is greater than the first probability, for example, 98%;
  • the probability of judging the color partial face image of the fake human face as coming from the fake human face is greater than the second probability, such as 95%;
  • the probability of judging a color partial face image of a real face as a fake face is less than the third probability, for example, 2%;
  • the probability of judging the color partial face image of the fake face as coming from the real face is less than the fourth probability, for example, 3%.
  • During training, the convolutional neural network can process the above-mentioned color partial face images based on the initial training parameters and determine the judgment result for each color partial face image; further, according to the judgment results, the structure of the convolutional neural network and/or the training parameters of each layer are adjusted until the judgment results meet the convergence condition, and the training is completed. After that, the color partial face image of a face that needs to be recognized subsequently can be input to the convolutional neural network, and the convolutional neural network uses the trained parameters to process the color partial face image to determine whether the color partial face image comes from a real human face.
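  • As a generic illustration of such a classifier, the sketch below defines a small two-layer convolutional network and a training loop in PyTorch; the architecture, input size, hyperparameters, and the single acceptance-rate check (used here as a stand-in for the convergence conditions listed above) are assumptions, not the network prescribed by this application.

```python
import torch
import torch.nn as nn

class LivenessCNN(nn.Module):
    """Small two-layer CNN classifying a color partial face image as fake (0) or real (1)."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs=10, real_accept_target=0.98):
    """Train until the real-face acceptance rate exceeds an illustrative target."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        correct_real, total_real = 0, 0
        for images, labels in loader:   # images: (N, 3, 32, 32); labels: 0 fake, 1 real
            opt.zero_grad()
            logits = model(images)
            loss_fn(logits, labels).backward()
            opt.step()
            preds = logits.argmax(dim=1)
            correct_real += ((preds == 1) & (labels == 1)).sum().item()
            total_real += (labels == 1).sum().item()
        if total_real and correct_real / total_real > real_accept_target:
            break
    return model

if __name__ == "__main__":
    model = LivenessCNN()
    dummy = torch.randn(4, 3, 32, 32)
    print(model(dummy).shape)  # torch.Size([4, 2])
```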
  • the processor 52 is further configured to:
  • Face recognition is performed according to face images collected by other pixel units in the pixel array except the first pixel unit set.
  • Optionally, the processor 52 may perform living-body recognition on the target to be recognized when the face image collected by the other pixel units matches the registered face template of the target to be recognized; if the target to be recognized is a real face, it is determined that face recognition is successful, and the operation triggered by face recognition is executed, for example, terminal unlocking or payment.
  • Alternatively, when the target to be recognized is a real human face, the processor 52 may further determine whether the face image collected by the other pixel units in the pixel array except the first pixel unit set matches the registered face template of the target to be recognized; if they match, it is determined that face recognition is successful, and the operation triggered by face recognition is further executed, for example, terminal unlocking or payment.
  • It should be understood that the apparatus for face recognition can also be applied to other biometric recognition scenarios, such as fingerprint recognition scenarios. For example, when a fingerprint image is collected, at least two spectral responses of the finger are collected by some of the pixel units, and the authenticity of the finger is further determined based on the at least two spectral responses.
  • It should also be understood that the apparatus 50 for face recognition may include the processor 52; for example, the processor may be a micro control unit (MCU) in the apparatus for face recognition. In other embodiments, the apparatus for face recognition may not include the processor 52; in this case, the function executed by the processor 52 may be executed by a processor in the electronic device in which the apparatus 50 for face recognition is installed, for example, a host module, which is not limited in the embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a method for face recognition according to an embodiment of the present application. As shown in FIG. 6, the method 60 includes:
  • S61: the pixel units in the first pixel unit set of the pixel array of the optical sensor receive the reflected light signal reflected from the face by the light signal emitted by the light source, and obtain a partial face image according to the reflected light signal, where the first pixel unit set includes a first-type pixel unit group and a second-type pixel unit group; the first-type pixel unit group includes at least one first-type pixel unit, the first-type pixel unit is provided with a first filter, and the first filter is used to pass light signals in the first wavelength range; the second-type pixel unit group includes at least one second-type pixel unit, the second-type pixel unit is provided with a second filter, the second filter is used to pass optical signals in a second waveband range, and the second waveband range is different from the first waveband range;
  • S62: determine the authenticity of the face according to the partial face image.
  • the method 60 can be executed by a device for face recognition, such as the device 50 in the foregoing embodiment.
  • For example, S61 can be executed by the optical sensor 51 in the apparatus 50, and S62 can be executed by the processor 52 in the apparatus 50, for example, an MCU; or the method 60 may also be executed by an electronic device in which the apparatus for face recognition is installed, in which case S62 may be executed by a processor in the electronic device, such as a host module. The embodiment of the present application does not limit this.
  • Optionally, the first pixel unit set further includes a third-type pixel unit group, the third-type pixel unit group includes at least one third-type pixel unit, the third-type pixel unit is provided with a third filter, the third filter is used to pass optical signals in a third waveband range, and the third waveband range is different from the first waveband range and the second waveband range.
  • Optionally, the first waveband range, the second waveband range, and the third waveband range are respectively one of the three waveband ranges described above.
  • the wavelength range of the optical signal emitted by the light source includes the first wavelength range, the second wavelength range, and the third wavelength range.
  • the determining the authenticity of the human face according to the partial face image includes:
  • the first partial face image collected by the first-type pixel unit group, the second partial face image collected by the second-type pixel unit group, and the third partial face image collected by the third-type pixel unit group are calibrated; and
  • the authenticity of the face is determined based on the calibrated first partial face image, second partial face image, and third partial face image.
  • the method 60 further includes:
  • the calibration parameter is determined according to the relationship between the spectral response of the reference object to the first waveband range, the second waveband range and the third waveband range.
  • the determining the calibration parameter according to the relationship between the spectral responses of the reference object to the first waveband range, the second waveband range, and the third waveband range includes:
  • the light source emits a light signal to the reference object, and the spectral responses of the reference object to the first waveband range, the second waveband range, and the third waveband range are collected through the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group;
  • the calibration parameter is determined according to the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range, and the calibration parameter is used to make the responses of the reference object to the spectra of the first waveband range, the second waveband range, and the third waveband range, as collected by the first-type pixel unit group, the second-type pixel unit group, and the third-type pixel unit group, fall within the same range.
  • the determining the authenticity of the human face based on the calibrated first partial face image, the second partial face image, and the third partial face image includes:
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image, wherein each pixel of the color partial face image includes three response pixel values corresponding to the responses of the face to the spectra of the first waveband range, the second waveband range, and the third waveband range;
  • the authenticity of the human face is determined according to the feature information of the color partial human face image.
  • the calibrated first partial face image, the second partial face image, and the third partial face image are synthesized to obtain a color partial face image.
  • the determining the authenticity of the human face according to the feature information of the color partial human face image includes:
  • the feature information of the color partial face image is processed through a deep learning network to determine the authenticity of the face.
  • the method 60 further includes:
  • the multiple color partial face images are input to a deep learning network for training, and the model and parameters of the deep learning network are obtained.
  • the method 60 further includes:
  • Face recognition is performed according to face images collected by other pixel units in the pixel array except the first pixel unit set.
  • the performing face recognition based on the face image collected by other pixel units in the pixel array except the first pixel unit set includes:
  • if the face image matches the registered face image template and the face is a real face, it is determined that the face recognition is successful.
  • the number of consecutive pixel units in the first pixel unit set is less than a first threshold.
  • the ratio of the number of pixel units in the first pixel unit set to the total number of pixel units in the pixel array is less than the first ratio.
  • the pixel units in the first pixel unit set are discretely distributed in the pixel array.
  • face images collected by other pixel units in the pixel array except the first pixel unit set are used for face recognition.
  • other pixel units in the pixel array except the first pixel unit set are not provided with filters.
  • other pixel units in the pixel array except for the first pixel unit set are provided with filters of a specific wavelength range.
  • the filter of the specific wavelength range is a filter including the wavelength range of 940 nm.
  • the method may include the following contents:
  • S71: Collect a face image through the optical sensor.
  • the face image includes partial face images collected by pixel units in the first pixel unit set and face images collected by other pixel units.
  • classification is performed according to the color feature information of the color partial face image, and the authenticity of the human face is determined.
  • the color partial face image can be input to a deep learning network to determine the authenticity of the face.
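  • Putting the pieces together, the following self-contained sketch outlines the overall flow described above: liveness determination from the color partial face image and matching of the remaining face image against the registered template, with recognition succeeding only when both checks pass. The callables are placeholders for the earlier steps, not APIs defined by this application.

```python
def recognize_face(frame, liveness_check, face_matcher):
    """One-exposure face recognition with liveness anti-spoofing, following the flow above.

    liveness_check: callable(frame) -> bool, e.g. calibration, synthesis of the color
                    partial face image, and classification as in the earlier sketches.
    face_matcher:   callable(frame) -> bool, e.g. matching the face image collected by
                    the unfiltered pixel units against the registered template.
    """
    is_live = liveness_check(frame)   # determine whether the face is real
    is_match = face_matcher(frame)    # match against the registered face template
    return is_live and is_match       # success only if both checks pass

if __name__ == "__main__":
    # Trivial stand-ins so the sketch runs; real checks would follow the earlier sketches.
    print(recognize_face("frame", lambda f: True, lambda f: True))  # True
```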
  • an embodiment of the present application also provides an electronic device 80.
  • The electronic device 80 may include an apparatus 81 for face recognition, and the apparatus 81 for face recognition may be the apparatus 50 for face recognition in the foregoing apparatus embodiment shown in FIG. 5, which can be used to execute the content of the method embodiments described in FIG. 6 to FIG. 7. For the sake of brevity, details are not repeated here.
  • the electronic device 80 may be a smart phone, a tablet computer, a door lock, or other electronic devices that require high security.
  • the processor or processing unit in the embodiment of the present application may be an integrated circuit chip with signal processing capability.
  • the steps of the foregoing method embodiments can be completed by hardware integrated logic circuits in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • the methods, steps and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed by such a processor.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the apparatus for face recognition in the embodiments of the present application may further include a memory.
  • the memory may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory.
  • the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory.
  • the volatile memory may be a random access memory (Random Access Memory, RAM), which is used as an external cache.
  • by way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DR RAM).
  • the embodiment of the present application also proposes a computer-readable storage medium that stores one or more programs, and the one or more programs include instructions.
  • when the instructions are executed by a portable electronic device that includes multiple application programs, they enable the portable electronic device to execute the content of the method embodiments.
  • the embodiment of the present application also proposes a computer program, the computer program includes instructions, when the computer program is executed by a computer, the computer can execute the content of the method embodiment.
  • An embodiment of the present application also provides a chip that includes an input and output interface, at least one processor, at least one memory, and a bus.
  • the at least one memory is used to store instructions, and the at least one processor is used to call the instructions in the at least one memory to execute the content of the method embodiments.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

An optical sensor (20) for face recognition, comprising: a pixel array (200), where a first pixel unit set (21) in the pixel array (200) includes a first-type pixel unit group and a second-type pixel unit group. The first-type pixel unit group includes at least one first-type pixel unit (211) provided with a first filter (221), the first filter (221) being used to pass optical signals in a first waveband range; the second-type pixel unit group includes at least one second-type pixel unit (212) provided with a second filter (222), the second filter (222) being used to pass optical signals in a second waveband range different from the first waveband range. The pixel units in the first-type pixel unit group and the second-type pixel unit group are used to receive the reflected optical signals, reflected from a face, of an optical signal emitted by a light source, and to acquire a partial face image according to the reflected optical signals, the partial face image being used to determine whether the face is real or fake. Also provided are a device, a method and an electronic device for face recognition, which can improve the security of face recognition.

Description

用于人脸识别的光学传感器、装置、方法和电子设备 技术领域
本申请涉及人脸识别技术领域,并且更具体地,涉及一种用于人脸识别的光学传感器、装置、方法和电子设备。
背景技术
采用人脸识别技术的电子设备给用户带来了安全和便捷的用户体验,但是,通过用户照片(例如,打印的或电子的),或者制造的3D人脸模具等伪造的人脸数据是人脸识别应用中的一个安全隐患。因此,如何识别真假人脸,以提升人脸识别的安全性是一项亟需解决的问题。
发明内容
本申请实施例提供了一种用于人脸识别的光学传感器、装置、方法和电子设备,能够识别人脸的真假,从而能够提升人脸识别的安全性。
第一方面,提供了一种用于人脸识别的光学传感器,包括:像素阵列,所述像素阵列中的第一像素单元集合包括第一类像素单元组和第二类像素单元组,其中:
所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;
所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
所述第一类像素单元组和所述第二类像素单元组中的像素单元用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
在一些可能的实现方式中,所述第一像素单元集合还包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围,所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组中的像素单 元用于接收由所述光源发射的光信号从所述人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
在一些可能的实现方式中,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
在一些可能的实现方式中,所述光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。
在一些可能的实现方式中,所述第一像素单元集合中连续的像素单元的数量小于或等于第一阈值。
在一些可能的实现方式中,所述第一像素单元集合中的像素单元的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值。
在一些可能的实现方式中,所述第一像素单元集合中的像素单元离散分布在所述像素阵列中。
在一些可能的实现方式中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别。
在一些可能的实现方式中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元不设置滤光片。
在一些可能的实现方式中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元设置特定波段范围的滤光片。
在一些可能的实现方式中,所述特定波段范围的滤光片为包括940nm波段范围的滤光片。
第二方面,提供了一种用于人脸识别的装置,包括:
如第一方面至第二方面及其任一可能的实现方式中的用于人脸识别的光学传感器,其中,所述光学传感器的第一像素单元集合中的第一类像素单元组和第二类像素单元组中的像素单元用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像;
处理器,用于根据所述局部人脸图像确定所述人脸的真假。
在一些可能的实现方式中,所述第一像素单元集合包括第一类像素单元组,第二类像素单元组和第三类像素单元组,其中:
所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;
所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围;
其中,所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组中的像素单元用于接收由所述光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
在一些可能的实现方式中,所述处理器还用于:
根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
在一些可能的实现方式中,所述处理器还用于:
根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数。在一些可能的实现方式中,所述光学传感器还用于:
在所述光源向所述参考对象发射光信号时,通过所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组分别采集所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应;
所述处理器还用于:根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,确定所述校准参数,所述校准参数用于使得所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一范围内。
在一些可能的实现方式中,所述参考对象为白色物体,或肉色物体。
在一些可能的实现方式中,所述处理器还用于:
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
在一些可能的实现方式中,所述处理器还用于:
根据所述第一类像素单元组中第一类像素单元采集的采用像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
在一些可能的实现方式中,所述处理器具体用于:
通过深度学习网络对所述彩色局部人脸图像的特征信息进行处理,确定所述人脸的真假。
在一些可能的实现方式中,所述处理器还用于:
从所述光学传感器采集的多个真实人脸和虚假人脸的人脸图像中,提取所述第一像素单元集合采集的多个局部人脸图像;
对所述多个局部人脸图像进行校准和合成处理,得到多个彩色局部人脸图像;
将所述多个彩色局部人脸图像输入至深度学习网络进行训练,得到所述深度学习网络的模型和参数。
在一些可能的实现方式中,所述处理器还用于:
根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采 集的人脸图像进行人脸识别。
第三方面,提供了一种用于人脸识别的方法,包括:
通过光学传感器的第一像素单元集合中的像素单元接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像;
其中,所述第一像素单元集合包括第一类像素单元组和第二类像素单元组,所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
根据所述局部人脸图像用于确定所述人脸的真假。
在一些可能的实现方式中所述第一像素单元集合还包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围。
在一些可能的实现方式中,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
在一些可能的实现方式中所述光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。
在一些可能的实现方式中所述根据所述局部人脸图像用于确定所述人脸的真假,包括:
根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
在一些可能的实现方式中,所述方法还包括:
根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段 范围的光谱的响应之间的关系,确定所述校准参数。
在一些可能的实现方式中,所述根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数,包括:
在所述光源向所述参考对象发射光信号时,通过所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组分别采集所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应;
根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,确定所述校准参数,所述校准参数用于使得所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一范围内。
在一些可能的实现方式中所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假,包括:
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
在一些可能的实现方式中所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,包括:
根据所述第一类像素单元组中第一类像素单元采集的采用像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
在一些可能的实现方式中所述根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假,包括:
通过深度学习网络对所述彩色局部人脸图像的特征信息进行处理,确定所述人脸的真假。
在一些可能的实现方式中所述方法还包括:
从所述光学传感器采集的多个真实人脸和虚假人脸的人脸图像中,提取所述第一像素单元集合采集的多个局部人脸图像;
对所述多个局部人脸图像进行校准和合成处理,得到多个彩色局部人脸图像;
将所述多个彩色局部人脸图像输入至深度学习网络进行训练,得到所述深度学习网络的模型和参数。
在一些可能的实现方式中所述方法还包括:
根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别。
在一些可能的实现方式中所述根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别,包括:
若所述人脸图像与注册的人脸图像匹配且所述人脸为真实人脸,确定人脸识别成功。
在一些可能的实现方式中所述第一像素单元集合中连续的像素单元的数量小于第一阈值。
在一些可能的实现方式中所述第一像素单元集合中的像素单元的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值。
在一些可能的实现方式中所述第一像素单元集合中的像素单元离散分布在所述像素阵列中。
在一些可能的实现方式中所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别。
在一些可能的实现方式中所述像素阵列中除所述第一像素单元集合之外的其他像素单元不设置滤光片。
在一些可能的实现方式中所述像素阵列中除所述第一像素单元集合之外的其他像素单元设置特定波段范围的滤光片。
在一些可能的实现方式中,所述特定波段范围的滤光片为包括940nm波段范围的滤光片。
第四方面,提供了一种电子设备,包括如第二方面及其任一可能的实现方式中的用于人脸识别的装置。
第五方面,提供了一种计算机可读介质,用于存储计算机程序,所述计算机程序包括用于执行上述第三方面及其任一可能的实现方式中的指令。
第六方面,提供了一种包括指令的计算机程序产品,当计算机运行所述计算机程序产品的所述指令时,所述计算机执行上述第三方面及其任一可能的实现方式中的用于人脸识别的方法。
具体地,该计算机程序产品可以运行于上述第四方面的电子设备上。
基于上述技术方案,通过在像素阵列的至少两类像素单元组设置不同种类的滤光片,从而设置滤光片的像素单元能够采集该滤光片的波段范围的光谱响应,这样,在一次曝光过程中,基于该至少两类像素单元组可以采集至少两种光谱响应,不需要进行多次采集来获取该至少两种光谱响应,能够提升采集速度,进一步可以基于该至少两种光谱响应进行活体识别,有利于提升人脸识别的安全性。
附图说明
图1是人体皮肤的反射光谱曲线。
图2是根据本申请实施例的用于人脸识别的光学传感器的示意性结构图。
图3是滤光片组在像素阵列中的一种排布方式的示意性图。
图4是滤光片组中的滤光片的排布方式的示意性图。
图5是根据本申请实施例的用于人脸识别的装置的示意性结构图。
图6是根据本申请实施例的用于人脸识别的方法的示意性流程图。
图7是根据本申请实施例的用于人脸识别的方法的整体流程图。
图8是根据本申请实施例的电子设备的示意性结构图。
具体实施方式
下面将结合附图,对本申请实施例中的技术方案进行描述。
应理解,本申请实施例可以应用于各种人脸识别系统,作为一种常见的应用场景,本申请实施例提供的人脸识别系统可以应用在智能手机、平板电脑等移动终端和门锁、门禁系统或者其他电子设备。
在传统的人脸识别系统中,活体防伪采用交互的方式,例如,采用眨眼,或表情变化等方式,这种方式通常需要连续采集几帧图像,降低了识别速度。通常来说,受人体皮肤组织的皮层厚度、血红蛋白浓度、黑色素含量等因素的影响,人体皮肤组织对特定波段范围的光线反射性能具有一定的特殊性,如图1所示,人体的皮肤在560nm左右的波段范围,980nm的波段范围具有特殊的光谱响应,这种特殊的光谱响应在纸张、模具等人工材料的光谱响应曲线上是不存在的。
据此,本申请提供了一种人脸活体防伪的方法,能够通过采集一帧图像,获取待识别目标对特定波段范围的光谱响应,进一步基于该光谱响应进行活体防伪,有利于提升识别速度,同时还可以提升人脸识别的安全性。
应理解,本申请实施例中的待识别目标可以为人脸,或者也可以为人体的其他部位,例如手指,手掌,本申请实施例对此不作限定。
以下,结合图2至图5,详细介绍本申请的装置实施例。
应理解,在以下所示出的本申请实施例中的像素单元组、滤波片的数量和排布方式等仅为示例性说明,而不应对本申请构成任何限定。
图2是本申请实施例提供的一种用于人脸识别的光学传感器20的示意性结构图,该光学传感器20包括:
像素阵列200,所述像素阵列200中的第一像素单元集合21包括第一类像素单元组和第二类像素单元组,其中:
所述第一类像素单元组包括至少一个第一类像素单元211,所述第一类像素单元211设置第一滤光片221,所述第一滤光片221用于通过第一波段范围的光信号;
所述第二类像素单元组包括至少一个第二类像素单元212,所述第二类像素单元212设置第二滤光片222,所述第二滤光片222用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
所述第一类像素单元组和所述第二类像素单元组中的像素单元用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获 取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
在本申请实施例中,该光学传感器的第一像素单元集合中的像素单元可以设置至少两种不同的滤光片,将设置同一种滤光片的像素单元作为一类像素单元组,则该第一像素单元集合可以分为至少两类像素单元组,可选地,一类像素单元组中的像素单元和该类像素单元组对应种类的滤光片在数量上的对应关系可以是一对一,或者多对一,即一个像素单元对应一个滤光片,或者也可以是多个像素单元共用一个滤光片。
例如,一个第一类像素单元211可以对应一个第一滤光片221,或者,也可以是多个第一类像素单元211对应一个第一滤光片221;
类似地,一个第二类像素单元212可以对应一个第二滤光片222,或者,也可以是多个第二类像素单元212对应一个第二滤光片222。
可选地,在一些实施例中,滤光片设置在所述像素单元的前端光路中,例如,可以将滤光片设置在需要设置滤光片的像素单元的上方,例如,将滤光片粘贴在所述像素单元的上表面,或者,也可以将滤光材料直接覆盖在所述像素单元上只要能够起到滤光作用即可,本申请实施例对此不作限定。
本申请实施例中的滤光片只允许特定波段范围内的光信号通过,或者说,滤光片对特定波段范围内的光信号的透过率较高,例如大于80%或90%,对其他波段范围的光信号的透过率较低,例如,小于10%或20%。
在本申请实施例中,滤光片通过的光信号的波段范围可以是特别设计的,作为一个可选的实现方式,可以根据人体皮肤的反射光谱曲线,设计透过有特殊的光谱响应的波段范围的光信号,例如,560nm左右的波段范围,或者其他可见光范围,或,980nm左右的波段范围,或者也可以透过人脸识别性能较优的红外波段范围,例如940nm左右的波段范围,或者,也可以根据活体的其他生物特征,选择合适的波段范围,只要能够与假体具有明显的区分度即可。
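As a rough illustration of the band-pass behaviour described in the two preceding paragraphs (high transmittance inside the pass band, low transmittance outside it), the Python sketch below models an idealized filter curve. The logistic roll-off, the ±20 nm half width and the 0.85/0.05 transmittance levels are illustrative assumptions, not measured characteristics of any real filter.

```python
import numpy as np

def bandpass_transmittance(wavelengths_nm, center_nm, half_width_nm,
                           in_band=0.85, out_band=0.05):
    """Idealized filter curve: ~in_band inside the pass band, ~out_band outside.

    A smooth logistic roll-off (5 nm edge width, assumed) is used purely for illustration.
    """
    edge = 5.0
    lo = 1.0 / (1.0 + np.exp(-(wavelengths_nm - (center_nm - half_width_nm)) / edge))
    hi = 1.0 / (1.0 + np.exp(-((center_nm + half_width_nm) - wavelengths_nm) / edge))
    return out_band + (in_band - out_band) * lo * hi

wl = np.arange(400, 1100, 1.0)              # wavelength axis in nm
t560 = bandpass_transmittance(wl, 560, 20)  # "first" filter around 560 nm (assumed width)
t940 = bandpass_transmittance(wl, 940, 20)
t980 = bandpass_transmittance(wl, 980, 20)
print(t560[wl == 560], t560[wl == 940])     # high in band, low out of band
```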
因此,在本申请实施例中,通过在像素阵列的至少两类像素单元组设置不同波段范围的滤光片,从而设置滤光片的像素单元能够采集该滤光片的波段范围的光谱的响应,这样,在一次曝光过程中,基于该至少两类像素单元组可以采集至少两种光谱的响应,不需要进行多次采集来获取该至少两种光谱的响应,能够提升采集速度,进一步可以基于该至少两种光谱的响应进行活体识别,有利于提升人脸识别的安全性。
可选地,在一些实施例中,如图2所示,所述第一像素单元集合21还可以包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元213,所述第三类像素单元213设置第三滤光片223,所述第三滤光片223用于通过第三波段范围的光信号。
可选地,在本申请一个实施例中,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
可选地,在其他实施例中,可以在可见光波段中确定上述三个波段范围,例如,可以设置所述第一波段范围,所述第二波段范围和所述第三波段范围分别为红光波段,蓝光波段和绿光波段的一种。例如,蓝光的波段范围可以是中心波段为440nm~475nm,上截止波段约为550nm;绿光的波段范围可以是中心波段为520nm~550nm,上截止波段约为620nm,下截止波段460nm;红光的波段范围可以是下截止波段约为550nm。
应理解,在本申请实施例中,所述包括560nm的波段范围可以为560nm左右的波段范围,例如,560nm±20nm的波段范围,或者560nm±40nm的波段范围等,具体的波段范围可以通过滤光片的加工工艺控制,本申请实施例对此不作限定,其他波段范围类似,这里不再赘述。
在本申请实施例中,光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。这样,通过全波段的光信号照射人脸,进一步通过多种滤光片提取多种光谱的响应,而不需要采用多个不同波段的光源,节约成本,降低了模组的复杂度。
可选地,所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别,例如可以将所述其他像素单元采集的人脸图像与注册的人脸图像模板进行匹配,确定是否匹配成功。
可选地,在本申请实施例中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元可以不设置滤光片,例如,做透明处理,或者设置透明材料,或者也可以设置特定波段范围的滤光片,例如940nm波段范围的滤光片。
以下,结合图3和图4,说明第一像素单元集合、滤光片在像素阵列中的排布方式。
可选地,在本申请实施例中,可以设置所述第一像素单元集合中连续的像素单元的数量小于或等于特定阈值,例如,6个,通过设置所述第一像素单元集合中被滤光片连续覆盖的像素单元的个数小于一定阈值,能够避免影响人脸识别性能。
可选地,在本申请实施例中,所述第一像素单元集合中的像素单元的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值,例如5%,以避免影响人脸识别性能。
可选地,所述第一像素单元集合中的像素单元离散分布在所述像素阵列中,对应地,所述第一滤光片,所述第二滤光片和所述第三滤光片离散分布在所述像素阵列中。
可选地,在本申请一个实施例中,所述第一滤光片,所述第二滤光片,所述第三滤光片可以构成一个滤光片组220,所述滤光片组200离散分布在所述像素阵列200中。例如,如图3所示,所述滤光片组220可以呈正方形,菱形,圆形或者其他规则或不规则图案排布在该光学传感器的像素阵列中,只要不影响人脸识别性能即可,本申请实施例对此不作限定。
可选地,在本申请实施例中,一个滤光片组220中的滤光片可以是离散的,即滤光片之间由不设置滤光片的像素单元隔开,例如,图4中的设计方式h~i,或者滤光片之间也可以是连续的,例如图4中的设计方式a~e,本申请实施例不特别限定一个滤光片组中包括的第一滤光片,第二滤光片,第三滤光片的数量和排布方式。
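The layout constraints discussed above (filter groups scattered discretely across the array, the first pixel unit set kept to a small fraction of all pixels, and no long runs of consecutive covered pixels) can be sketched as a mask-generation routine. The 2×2 group pattern, the 200×200 array size, the 5% budget and the run-length check are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 200, 200          # pixel array size (assumed)
MAX_COVERAGE = 0.05      # filtered pixels kept below 5% of the array (assumed budget)
MAX_RUN = 6              # no long runs of consecutive filtered pixels (assumed threshold)

# 0 = no filter, 1/2/3 = first/second/third filter type
mask = np.zeros((H, W), dtype=np.uint8)
group = np.array([[1, 2],
                  [3, 1]], dtype=np.uint8)   # one 2x2 filter group (illustrative pattern)

budget = int(MAX_COVERAGE * H * W)
placed, attempts = 0, 0
while placed + group.size <= budget and attempts < 100_000:
    attempts += 1
    y, x = rng.integers(0, H - 2), rng.integers(0, W - 2)
    # keep groups discrete: reject spots whose 1-pixel neighbourhood is already used
    if mask[max(0, y - 1):y + 3, max(0, x - 1):x + 3].any():
        continue
    mask[y:y + 2, x:x + 2] = group
    placed += group.size

def longest_run(row):
    best = run = 0
    for v in row:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

print("coverage:", np.count_nonzero(mask) / mask.size)
print("longest horizontal run:", max(longest_run(r) for r in mask), "<=", MAX_RUN)
```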
图5是根据本申请实施例的用于人脸识别的装置的示意性结构图,如图5所示,该用于人脸识别的装置50可以包括:
光学传感器51,所述光学传感器51的第一像素单元集合中的像素单元用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像;
处理器52,用于根据所述局部人脸图像确定所述人脸的真假。
该传感器51可以为图2所示实施例中的光学传感器20,具体说明可以参考图2所示实施例的相关说明,这里不再赘述。
在本申请实施例中,所述光源可以是所述装置50的内置光源,或者也可以是所述装置50的外置光源,或者也可以复用所述装置50所安装的电子设备中的光源来发射用于活体识别的光信号,此情况下,所述装置50可以 不包括所述光源,本申请实施例对此不作限定。
该光源的波段范围可以包括多个波段范围,在一些实施例中,该光源发射的光信号也可以用于人脸识别,即活体识别和人脸识别采用同一光源;在其他替代实施例中,所述用于人脸识别的装置50还可以包括用于人脸识别的另一个光源,例如,940nm左右波段范围的光源,此情况下,可以通过第一像素单元集合中的像素单元采集的局部图像进行活体识别,或者也可以通过整个像素阵列采集的人脸图像进行活体识别。
可选地,作为一个实施例,所述第一像素单元集合包括第一类像素单元组,第二类像素单元组和第三类像素单元组,其中:
所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;
所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围。
基于上述设置方式,第一类像素单元组可以通过第一滤光片获取待识别目标对第一波段范围的光信号的响应(或称光谱的响应),第二类像素单元组可以通过第二滤光片获取待识别目标对第二波段范围的光谱的响应,所述第三类像素单元组可以通过第三滤光片获取待识别目标对第三波段范围的光谱的响应,也就是说,这三类像素单元组可以分别获得三种不同的光谱的响应。
作为一个可选的实现方式,可以根据上述不同的光谱的响应,确定人脸的真假,例如,可以将每类像素单元组采集的反映上述三种光谱的响应的局部人脸图像输入到训练好的卷积神经网络进行分类,确定是否来自真实人脸。或者,也可以将上述三类像素单元组采集的人脸对三种光谱的响应与邻近的未设置滤光片的像素单元采集的像素值进行比较,进行活体识别,例如,可以确定第一像素单元集合中的像素单元采集的像素值和邻近的未设置滤光片的像素单元采集的像素值之间的比值,根据该比值进行活体识别,例如,在该比值在特定比值范围内时,确定为真实人脸,否则确定为虚假人脸,该 特定比值范围可以是根据大量的真实人脸和虚假人脸的数据统计得到的,或者是机器学习得到的。
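A minimal sketch of the ratio-based check mentioned at the end of the preceding paragraph follows: pixel values from filtered units are compared with values from neighbouring unfiltered units, and the face is treated as real when the ratios fall inside an expected window. The window bounds and the majority-vote decision rule are placeholders; as the text notes, in practice they would come from statistics over real and fake faces or from machine learning.

```python
import numpy as np

def ratio_liveness_check(filtered_vals, neighbor_clear_vals, lo=0.3, hi=0.8):
    """Compare each filtered pixel with a neighbouring unfiltered pixel.

    The spectral response of real skin should keep the ratio inside an expected
    window; lo/hi are placeholder bounds, not values taken from the text.
    """
    filtered_vals = np.asarray(filtered_vals, dtype=float)
    neighbor_clear_vals = np.asarray(neighbor_clear_vals, dtype=float)
    ratios = filtered_vals / np.maximum(neighbor_clear_vals, 1e-6)
    in_window = (ratios > lo) & (ratios < hi)
    # majority vote over the sampled pixel pairs (the decision rule is an assumption)
    return bool(np.mean(in_window) > 0.5), ratios

is_live, r = ratio_liveness_check([80, 95, 70], [180, 190, 175])
print(is_live, r)
```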
但是,人工材质(例如纸张)对某个波段范围的光谱的响应可能与活人对该波段范围的光谱的响应存在部分重叠,如果基于该波段范围的光谱的响应进行活体识别,可能导致误识别。在本申请一个实施例中,可以确定每类像素单元组对于待识别目标的多个波段范围的光谱的响应,该多个波段范围的光谱的响应可以包括该待识别目标对第一波段范围的光谱的响应,该待识别目标对第二波段范围的光谱的响应,以及该待识别目标对第三波段范围的光谱的响应,也就是说,每类像素单元组对应的光谱响应都可以包括人脸对三种不同波段范围的光谱的响应,这样,即使人工材质与活人的某个波段范围的光谱响应存在重叠,也可以通过其他波段范围的光谱的响应进行区分,从而能够提升活体识别的准确度。
在本申请实施例中,对于一个纯色的测试对象,比如,白色纸张,对于不同波段范围的光谱,理论上,上述三类像素单元组采集的光谱响应应该相同或相近,但是,实际应用中,所述三类像素单元组中的像素单元所采集的光谱响应的大小可能具有一定的差别,这样,合成三种光谱响应时,如果某个光谱响应过大,可能导致其他光谱响应不能被有效区分,为了提升活体和假体的区分度,在本申请实施例中,还可以对每类像素单元组采集的局部人脸图像进行校准。
可选地,在本申请实施例中,所述处理器52还用于:
根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
可选地,在一些实施例中,所述处理器52还用于:
根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数。
以所述参考对象为测试对象,确定所述校准参数,该参考对象可以为纯色物体,例如白色纸张,肉色物体等,期望所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元采集的所述参考对象对所述第一波 段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一水平,基于此目的,对所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元实际采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应进行校准,确定所述校准参数。
具体地,光源向所述参考对象发射光信号,其中,所述光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围,通过所述光学传感器中的所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元采集所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,然后根据该多个光谱的响应之间的关系,确定所述校准参数。
假设,第一类像素单元P1采集的像素值为200,与所述第一类像素单元P1邻近的第二类像素单元P2采集的像素值为100,与所述第一类像素单元P1邻近的第三类像素单元P3采集的像素值为50,上述三个像素值分别表示三种光谱的响应,比例关系为4:2:1,为了使得这三个像素单元采集的所述参考对象对三种光谱的响应在同一水平,可以对其进行校准,例如可以将第二类像素单元P2采集的像素值乘以2,将第三类像素单元P3采集的像素值乘以4,这样,在后续基于该三个像素单元采集的像素值合成包括三种光谱响应的彩色局部人脸图像时,可以更好的区分每种光谱响应。
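The preceding paragraph works a calibration example: the reference object yields pixel values of 200, 100 and 50 on the three types of pixel units (a 4:2:1 ratio), so the second and third responses are scaled by 2 and 4 to bring all three spectral responses to the same level. A minimal sketch of that per-band gain calibration, with alignment to the strongest band as an assumed (not mandated) choice of target:

```python
import numpy as np

def calibration_gains(ref_responses):
    """Per-band gains that bring the reference-object responses to one level.

    ref_responses: mean pixel values of the reference object measured by the
    first-, second- and third-type pixel units, e.g. [200, 100, 50].
    """
    ref = np.asarray(ref_responses, dtype=float)
    target = ref.max()          # align everything to the strongest band (a choice, not mandated)
    return target / ref

gains = calibration_gains([200, 100, 50])
print(gains)                    # -> [1. 2. 4.], matching the worked example above

# Applying the gains to a later capture (placeholder values):
raw = {"band1": np.array([120., 130.]),
       "band2": np.array([60., 66.]),
       "band3": np.array([31., 33.])}
calibrated = {k: v * g for (k, v), g in zip(raw.items(), gains)}
```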
可选地,在其他实施例中,所述校准参数为预设值,例如,可以根据所述三种光谱响应的经验值确定,或者也可以根据上述校准步骤确定,例如,在通过上述步骤确定校准参数之后,可以预存该校准参数,用于后续采集的人脸图像的校准,即后续的人脸图像的校准都采用该校准参数,本申请实施例对此不作限定。
应理解,在本申请实施例中,三种光谱的响应在同一水平内可以指三种光谱响应的差值小于特定阈值,或者三种光谱响应的像素值相当,或者说,处在同一范围内。
可选地,在本申请实施例中,可以确定第一像素单元集合中的每个像素单元的校准参数,然后根据每个像素单元的校准参数对后续采集的像素值进行校准,或者,也可以将每个像素单元的校准参数进行平均,得到统一的校准参数,根据该统一的校准参数对所有像素单元采集的像素值进行校准,本申请实施例对于具体的校准方式不作限定。
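The two options in the preceding paragraph (a calibration parameter per pixel unit, or a single averaged parameter per band) can be written out as follows; the reference capture here is random placeholder data and the target level of 200 is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
target = 200.0                                   # desired common response level (assumed)
ref_capture = {                                  # reference-object values per band (placeholder)
    "band1": rng.normal(200, 5, size=50),
    "band2": rng.normal(100, 5, size=50),
    "band3": rng.normal(50, 5, size=50),
}

per_pixel_gains = {b: target / v for b, v in ref_capture.items()}           # one gain per pixel unit
unified_gains = {b: float(np.mean(g)) for b, g in per_pixel_gains.items()}  # one averaged gain per band
print({b: round(g, 2) for b, g in unified_gains.items()})
```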
可选地,在本申请实施例中,所述处理器52还用于:
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
具体而言,彩色局部人脸图像可以包括三个颜色通道,例如RGB,一种光谱响应对应一个颜色通道,也就是说,彩色局部人脸图像中的每个像素包括三个像素值(即所述三个响应像素值),分别对应三种光谱响应。而设置滤光片的前述三类像素单元组中的每类像素单元组只能获得一种光谱响应,因此,要得到所述彩色局部人脸图像需要确定其他两种光谱响应,在一些实施例中,可以根据邻近的其他类像素单元组采集的像素值,获取其他两种光谱响应。
例如,可以根据所述第一类像素单元组中第一类像素单元采集的像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
假设第一类像素单元组包括100个像素单元,所述第二类像素单元组包括100个像素单元,所述第三类像素单元组包括100个像素单元,所述彩色局部人脸图像包括100个像素,每个像素包括三个像素值,对应三种光谱的响应,则,可以根据第一类像素单元组的一个第一类像素单元P1采集的像素值,确定彩色局部人脸图像中P1对应的像素的一种光谱响应,根据与P1邻近的第二类像素单元P2和第三类像素单元P3,确定P1对应的像素的其他两种光谱响应,进一步可以将这三种光谱响应作为该像素的三个颜色通道 的像素值。按照类似的方法可以确定该彩色局部人脸图像中的每个像素的三种光谱的响应,从而得到这个彩色局部人脸图像。
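A simplified version of the synthesis described above: for every first-type pixel unit, its own value (assumed already calibrated) and the values of the nearest second-type and third-type units form the three response pixel values of one colour pixel. Nearest-neighbour lookup by Manhattan distance is an assumption standing in for whatever interpolation an implementation would actually use.

```python
import numpy as np

def synthesize_color_patch(values, mask):
    """Build a colour partial face image from a mosaic of three pixel-unit types.

    values: HxW calibrated pixel values from the first pixel unit set.
    mask:   HxW array with 1/2/3 marking first/second/third-type pixel units (0 elsewhere).
    For every first-type pixel we take its own value as channel 0 and the nearest
    second-/third-type values as channels 1 and 2.
    """
    ys, xs = np.nonzero(mask == 1)
    out = np.zeros((len(ys), 3), dtype=float)
    coords = {t: np.argwhere(mask == t) for t in (2, 3)}
    for i, (y, x) in enumerate(zip(ys, xs)):
        out[i, 0] = values[y, x]
        for ch, t in enumerate((2, 3), start=1):
            d = np.abs(coords[t] - np.array([y, x])).sum(axis=1)   # Manhattan distance
            ny, nx = coords[t][np.argmin(d)]
            out[i, ch] = values[ny, nx]
    return out   # one (band1, band2, band3) triple per first-type pixel

mask = np.array([[1, 2], [3, 0]])
vals = np.array([[200., 100.], [50., 0.]])
print(synthesize_color_patch(vals, mask))   # -> [[200. 100.  50.]]
```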
在其他实施例中,可以确定所述第一类像素单元组,所述第二类像素单元组,所述第三类像素单元组中的每类像素单元组对应的全光谱图像,进一步根据该每类像素单元组对应的全光谱图像,确定所述彩色局部人脸图像,其中,所述全光谱图像为包括所述三种光谱响应的图像。
例如,可以根据与所述第一类像素单元组中的每个第一类像素单元邻近的第二类像素单元采集的像素值,确定所述每个第一类像素单元对于所述第二波段范围的光谱响应,得到第一响应图像,相当于第一类像素单元组对应的第二波段范围的光谱响应;可以根据与所述第一类像素单元组中的每个第一类像素单元邻近的第三类像素单元采集的像素值,确定所述每个第一类像素单元对于所述第三波段范围的光谱响应,得到第二响应图像,相当于第一类像素单元组对应的第三波段范围的光谱响应。
又例如,可以根据与所述第二类像素单元组中的每个第二类像素单元邻近的第一类像素单元采集的像素值,确定所述每个第二类像素单元对于所述第一波段范围的光谱响应,得到第三响应图像,相当于第二类像素单元组对应的第一波段范围的光谱响应;可以根据与所述第二类像素单元组中的每个第二类像素单元邻近的第三类像素单元采集的像素值,确定所述每个第二类像素单元对于所述第三波段范围的光谱响应,得到第四响应图像,相当于第二类像素单元组对应的第三波段范围的光谱响应。
再例如,可以根据与所述第三类像素单元组中的每个第三类像素单元邻近的第一类像素单元采集的像素值,确定所述每个第三类像素单元对于所述第一波段范围的光谱响应,得到第五响应图像,相当于第三类像素单元组对应的第一波段范围的光谱响应;可以根据与所述第三类像素单元组中的每个第三类像素单元邻近的第二类像素单元采集的像素值,确定所述每个第三类像素单元对于所述第二波段范围的光谱响应,得到第六响应图像,相当于第三类像素单元组对应的第二波段范围的光谱响应。
进一步地,所述处理器52还用于:
将所述第一局部人脸图像,与所述第一响应图像和所述第二响应图像进行合成,得到所述第一类像素单元组对应的全光谱响应图像;
将所述第二局部人脸图像,与所述第三响应图像和所述第四响应图像进 行合成,得到所述第二类像素单元组对应的全光谱响应图像;
将所述第三局部人脸图像,与所述第五响应图像和所述第六响应图像进行合成,得到所述第三类像素单元组对应的全光谱响应图像。
应理解,所述第一响应图像至所述第六响应图像是根据校准后的第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像获得的。
至此,得到每类像素单元组对应的全光谱响应图像,在该全光谱响应图像中的每个像素对应三种光谱响应,即每个像素对应三个像素值,可以认为是RBG值,进一步可以将所述第一类像素单元组对应的全光谱响应图像,所述第二类像素单元组对应的全光谱响应图像,所述第三类像素单元组对应的全光谱响应图像进行重组(或者说,拼接),得到所述彩色局部人脸图像(或称RGB图)。
更进一步地,该处理器可以根据该彩色局部人脸进行活体识别,以识别人脸的真假,例如,所述处理器52可以提取该彩色局部人脸的特征信息,例如,色彩特征信息,具体可以为色度,饱和度和纯度(Hue,Saturation,Value,HSV)信息,然后将该彩色局部人脸的特征信息输入到深度学习网络进行分类,确定人脸的真假。
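A small sketch of the feature-extraction step described above: the synthesized colour partial face image is treated as pseudo-RGB, converted to HSV with the standard-library colorsys module, and summarized by per-channel means and standard deviations. Using mean/std statistics as the feature vector is one simple choice, not the only possible one; the resulting features would then be fed to the trained network.

```python
import colorsys
import numpy as np

def hsv_features(color_pixels):
    """HSV colour statistics of a colour partial face image.

    color_pixels: N x 3 array of per-pixel spectral responses, treated as
    pseudo-RGB and scaled to [0, 1]; the feature vector is the mean and
    standard deviation of H, S and V (an assumed, simple choice).
    """
    p = np.asarray(color_pixels, dtype=float)
    p = p / max(p.max(), 1e-6)
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in p])
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

feats = hsv_features([[200, 100, 50], [190, 95, 55]])
print(feats.shape)   # (6,)
```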
可选地,在本申请实施例中,该深度学习网络可以为卷积神经网络,或者其他深度学习网络。以卷积神经网络为例,说明具体的训练过程。
首先,构建卷积神经网络结构,例如可以采用二层卷积神经网络,或者也可以采用三层网络结构或更多层网络结构等。
其次,设置该卷积神经网络的初始训练参数和收敛条件。
该初始训练参数可以是随机生成的,或根据经验值获取的,或者也可以是根据大量的真假人脸数据预训练好的卷积神经网络模型的参数。
作为示例而非限定,该收敛条件可以包括以下中的至少一项:
1、将真实人脸的彩色局部人脸图像判定为来自真实人脸的概率大于第一概率,例如,98%;
2、将假人脸的彩色局部人脸图像判断为来自假人脸的概率大于第二概率,例如95%;
3、将真实人脸的彩色局部人脸图像判定为来自假人脸的概率小于第三概率,例如,2%;
4、将假人脸的彩色局部人脸图像判断为来自真实人脸的概率小于第四 概率,例如3%。
然后,向该卷积神经网络输入大量的真实人脸和假人脸的彩色局部人脸图像,该卷积神经网络可以基于初始训练参数对上述彩色局部人脸图像进行处理,确定对每个彩色局部人脸图像的判定结果,进一步地,根据该判定结果,调整卷积神经网络的结构和/或各层的训练参数,直至判定结果满足收敛条件,至此,训练完成。之后,可以将后续需要识别的人脸的彩色局部人脸图像输入到该卷积神经网络,该卷积神经网络可以使用训练好的参数对该彩色局部人脸图像进行处理,确定该彩色局部人脸图像是否来自真实人脸。
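The training procedure above can be sketched in PyTorch as follows. The two-convolution-layer network mirrors the two-layer structure mentioned earlier; the 16×16 patch size, the optimizer settings, the epoch cap and the random placeholder tensors (which stand in for labelled real/fake colour partial face images) are assumptions, and conditions 1 to 4 are folded into true-positive-rate and true-negative-rate thresholds evaluated here, for brevity, on the training data itself.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class LivenessCNN(nn.Module):
    """Small two-conv-layer classifier for colour partial face patches (real vs fake)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 4, 2)   # assumes 16x16 input patches

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def rates(model, loader):
    """True-positive / true-negative rates; label 1 = real face, 0 = fake face."""
    model.eval()
    tp = tn = pos = neg = 0
    with torch.no_grad():
        for xb, yb in loader:
            pred = model(xb).argmax(1)
            tp += ((pred == 1) & (yb == 1)).sum().item()
            tn += ((pred == 0) & (yb == 0)).sum().item()
            pos += (yb == 1).sum().item()
            neg += (yb == 0).sum().item()
    return tp / max(pos, 1), tn / max(neg, 1)

# Placeholder tensors standing in for calibrated, synthesized colour partial face
# images labelled real (1) or fake (0); they only keep the sketch runnable.
x = torch.randn(256, 3, 16, 16)
y = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = LivenessCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                    # epoch cap is an assumption
    model.train()
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    tpr, tnr = rates(model, loader)        # a held-out set would be used in practice
    if tpr >= 0.98 and tnr >= 0.95:        # stand-ins for convergence conditions 1-4
        break
```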
可选地,在一些实施例中,所述处理器52还用于:
根据所述像素阵列中的除所述第一像素单元集合之外的其他像素单元采集的人脸图像进行人脸识别。例如,该处理器52可以在所述其他像素单元采集的人脸图像与注册的该待识别目标的人脸模板匹配的情况下,进一步对该待识别目标进行活体识别,在该待识别目标为真实人脸的情况下确定人脸识别成功,从而执行触发该人脸识别的操作,例如,进行终端解锁或支付等操作。
可选地,在其他实施例中,该处理器52也可以在该待识别目标为真实人脸的情况下,进一步判断像素阵列中的除所述第一像素单元集合以外的其他像素单元采集的人脸图像是否与注册的该待识别目标的人脸模板匹配,在匹配的情况下确定人脸识别成功,进一步执行触发该人脸识别的操作,例如,进行终端解锁或支付等操作。
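Either ordering of the two checks reduces to the same final decision rule, sketched below; the similarity threshold of 0.8 is a placeholder.

```python
def face_recognition_result(match_score, is_live, match_threshold=0.8):
    """Combine template matching and liveness: both must pass.

    match_score is a similarity between the face image collected by the
    unfiltered pixel units and the enrolled template; the 0.8 threshold is a
    placeholder. The two checks may run in either order, as discussed above.
    """
    return (match_score >= match_threshold) and is_live

if face_recognition_result(0.91, True):
    print("face recognition succeeded: unlock / authorize payment")
else:
    print("face recognition failed")
```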
应理解,根据本申请实施例的用于人脸识别的装置也可以适用于其他生物特征识别场景,例如指纹识别场景,例如,在采集指纹图像时,基于部分像素单元采集的手指的至少两种光谱响应,进一步基于该至少两种光谱响应,确定手指的真假。
在本申请实施例中,所述用于人脸识别的装置50可以包括该处理器52,例如该处理单元可以为该人脸识别的装置中的微控制单元(Micro Control Unit,MCU),或者,在其他实施例中,该用于人脸识别的装置可以不包括该处理器52,此情况下,所述处理器52所执行的功能可以由所述用于人脸识别的装置50所安装的电子设备中的处理器,例如主控(Host)模块执行,本申请实施例对此不作限定。
上文结合图2至图5,详细描述了本申请的装置实施例,下文结合图6 至图7,详细描述本申请的方法实施例,应理解,方法实施例与装置实施例相互对应,类似的描述可以参照装置实施例。
图6是本申请实施例的用于人脸识别的方法的示意性流程图,如图6所示,该方法60包括:
S61,通过光学传感器的第一像素单元集合中的像素单元接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像;其中,所述第一像素单元集合包括第一类像素单元组和第二类像素单元组,所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
S62,根据所述局部人脸图像用于确定所述人脸的真假。
应理解,该方法60可以由用于人脸识别的装置执行,例如前述实施例中的装置50,具体地,S61可以由该装置50中的光学传感器51执行,S62可以由该装置50中的处理器52,例如MCU执行;或者,该方法60也可以由该用于人脸识别的装置所安装的电子设备执行,例如,S62可以由电子设备中的处理器,例如Host模块执行,本申请实施例对此不作限定。
可选地,在本申请一些实施例中,所述第一像素单元集合还包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围。
可选地,在本申请一些实施例中,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
可选地,在本申请一些实施例中,所述光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。
可选地,在本申请一些实施例中,所述根据所述局部人脸图像用于确定所述人脸的真假,包括:
根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所 述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
可选地,在本申请一些实施例中,所述方法60还包括:
根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数。
可选地,在本申请一些实施例中,所述根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数,包括:
在所述光源向所述参考对象发射光信号时,通过所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组分别采集所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应;
根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,确定所述校准参数,所述校准参数用于使得所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一范围内。
可选地,在本申请一些实施例中,所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假,包括:
根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
可选地,在本申请一些实施例中,所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色 局部人脸图像,包括:
根据所述第一类像素单元组中第一类像素单元采集的采用像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
可选地,在本申请一些实施例中,所述根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假,包括:
通过深度学习网络对所述彩色局部人脸图像的特征信息进行处理,确定所述人脸的真假。
可选地,在本申请一些实施例中,所述方法60还包括:
从所述光学传感器采集的多个真实人脸和虚假人脸的人脸图像中,提取所述第一像素单元集合采集的多个局部人脸图像;
对所述多个局部人脸图像进行校准和合成处理,得到多个彩色局部人脸图像;
将所述多个彩色局部人脸图像输入至深度学习网络进行训练,得到所述深度学习网络的模型和参数。
可选地,在本申请一些实施例中,所述方法60还包括:
根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别。
可选地,在本申请一些实施例中,所述根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别,包括:
若所述人脸图像与注册的人脸图像模板匹配且所述人脸为真实人脸,确定人脸识别成功。
可选地,在本申请一些实施例中,所述第一像素单元集合中连续的像素单元的数量小于第一阈值。
可选地,在本申请一些实施例中,所述第一像素单元集合中的像素单元 的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值。
可选地,在本申请一些实施例中,所述第一像素单元集合中的像素单元离散分布在所述像素阵列中。
可选地,在本申请一些实施例中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别。
可选地,在本申请一些实施例中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元不设置滤光片。
可选地,在本申请一些实施例中,所述像素阵列中除所述第一像素单元集合之外的其他像素单元设置特定波段范围的滤光片。
可选地,在本申请一些实施例中,所述特定波段范围的滤光片为包括940nm波段范围的滤光片。
以下,结合图7,说明根据本申请实施例的用于人脸识别的方法的整体流程,如图7所示,该方法可以包括如下内容:
S71,通过光学传感器采集人脸图像;
其中,该人脸图像包括第一像素单元集合中的像素单元所采集的局部人脸图像以及其他像素单元所采集的人脸图像。
进一步地,在S72中,第一像素单元集合中的像素单元所采集的局部人脸图像;
然后在S73中,对所述局部人脸图像进行校准。
具体实现参考前述实施例的相关说明,这里不再赘述。
在S74中,根据校准后的局部人脸图像进行合成,得到彩色局部人脸图像;
在S75中,提取所述彩色局部人脸图像的色彩特征信息,例如HSV信息;
在S76中,根据所述彩色局部人脸图像的色彩特征信息进行分类,确定人脸的真假。具体地,可以将该彩色局部人脸图像输入到深度学习网络,以确定人脸的真假。
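Putting S71 to S76 together, a condensed end-to-end sketch of the liveness path is shown below. It reuses the simplified per-step logic from the earlier sketches (gain calibration, nearest-neighbour synthesis, HSV statistics), and the `classify` argument stands in for the trained deep-learning network; the toy 2×2 mosaic and the stub classifier exist only to keep the example runnable.

```python
import colorsys
import numpy as np

def liveness_pipeline(mosaic, mask, gains, classify):
    """S71-S76 in one pass (sketch): pull the partial images out of the frame,
    calibrate them, synthesize a colour patch, extract HSV colour features and
    classify real vs fake with the supplied classifier.
    """
    triples = []
    ys, xs = np.nonzero(mask == 1)
    coords = {t: np.argwhere(mask == t) for t in (2, 3)}
    for y, x in zip(ys, xs):
        px = [mosaic[y, x] * gains[0]]
        for ch, t in enumerate((2, 3)):
            d = np.abs(coords[t] - np.array([y, x])).sum(axis=1)
            ny, nx = coords[t][np.argmin(d)]
            px.append(mosaic[ny, nx] * gains[ch + 1])
        triples.append(px)                                   # S73 + S74: calibrated colour pixel
    rgb = np.asarray(triples, dtype=float)
    rgb = rgb / max(rgb.max(), 1e-6)
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb])
    feats = np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])   # S75: HSV features
    return classify(feats)                                        # S76: real / fake decision

# toy run with a stub classifier in place of the trained network
mask = np.array([[1, 2], [3, 0]])
frame = np.array([[200., 100.], [50., 30.]])
print(liveness_pipeline(frame, mask, gains=[1.0, 2.0, 4.0],
                        classify=lambda f: f[2] > 0.2))
```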
如图8所示,本申请实施例还提供了一种电子设备80,所述电子设备80可以包括用于人脸识别的装置81,该用于人脸识别的装置81可以为前述装置实施例中的用于人脸识别的装置50,其能够用于执行图6至图7中所述方法实施例中的内容,为了简洁,这里不再赘述。
可选地,在一些实施例中,所述电子设备80可以为智能手机、平板电脑、门锁等对安全性要求较高的电子设备。
应理解,本申请实施例的处理器或处理单元可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例的人脸识别还可以包括存储器,存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本申请实施例还提出了一种计算机可读存储介质,该计算机可读存储介质存储一个或多个程序,该一个或多个程序包括指令,该指令当被包括多个应用程序的便携式电子设备执行时,能够使该便携式电子设备执行方法实施例的内容。
本申请实施例还提出了一种计算机程序,该计算机程序包括指令,当该计算机程序被计算机执行时,使得计算机可以执行方法实施例的内容。
本申请实施例还提供了一种芯片,该芯片包括输入输出接口、至少一个处理器、至少一个存储器和总线,该至少一个存储器用于存储指令,该至少一个处理器用于调用该至少一个存储器中的指令,以执行方法实施例的内容。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应所述理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一 个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者所述技术方案的部分可以以软件产品的形式体现出来,所述计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应所述以权利要求的保护范围为准。

Claims (42)

  1. 一种用于人脸识别的光学传感器,其特征在于,包括:
    像素阵列,所述像素阵列中的第一像素单元集合包括第一类像素单元组和第二类像素单元组,其中:
    所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;
    所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
    所述第一类像素单元组和所述第二类像素单元组用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
  2. 根据权利要求1所述的光学传感器,其特征在于,所述第一像素单元集合还包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围,所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组中的像素单元用于接收由所述光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
  3. 根据权利要求2所述的光学传感器,其特征在于,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
    包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
  4. 根据权利要求2或3所述的光学传感器,其特征在于,所述光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。
  5. 根据权利要求1至4中任一项所述的光学传感器,其特征在于,所述第一像素单元集合中连续的像素单元的数量小于第一阈值。
  6. 根据权利要求1至5中任一项所述的光学传感器,其特征在于,所 述第一像素单元集合中的像素单元的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值。
  7. 根据权利要求1至6中任一项所述的光学传感器,其特征在于,所述第一像素单元集合中的像素单元离散分布在所述像素阵列中。
  8. 根据权利要求1至7中任一项所述的光学传感器,其特征在于,所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别。
  9. 根据权利要求8所述的光学传感器,其特征在于,所述像素阵列中除所述第一像素单元集合之外的其他像素单元不设置滤光片。
  10. 根据权利要求8所述的光学传感器,其特征在于,所述像素阵列中除所述第一像素单元集合之外的其他像素单元设置特定波段范围的滤光片。
  11. 根据权利要求10所述的光学传感器,其特征在于,所述特定波段范围的滤光片为包括940nm波段范围的滤光片。
  12. 一种用于人脸识别的装置,其特征在于,包括:
    如权利要求1至11中任一项所述的光学传感器;
    其中,所述光学传感器的第一像素单元集合中的第一类像素单元组和第二类像素单元组中的像素单元用于接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像;
    处理器,用于根据所述局部人脸图像确定所述人脸的真假。
  13. 根据权利要求12所述的装置,其特征在于,所述第一像素单元集合包括第一类像素单元组,第二类像素单元组和第三类像素单元组,其中:
    所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;
    所述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
    所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围;
    其中,所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组中的像素单元用于接收由所述光源发射的光信号从所述人脸反射 的反射光信号,并根据所述反射光信号获取局部人脸图像,所述局部人脸图像用于确定所述人脸的真假。
  14. 根据权利要求13所述的装置,其特征在于,所述处理器还用于:
    根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
    根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
  15. 根据权利要求14所述的装置,其特征在于,所述处理器还用于:
    根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数。
  16. 根据权利要求15所述的装置,其特征在于,所述光学传感器还用于:
    在所述光源向所述参考对象发射光信号时,通过所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组分别采集所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应;
    所述处理器具体用于:根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,确定所述校准参数,所述校准参数用于使得所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一范围内。
  17. 根据权利要求14至16中任一项所述的装置,其特征在于,所述处理器还用于:
    根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
    将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
    根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
  18. 根据权利要求17所述的装置,其特征在于,所述处理器还用于:
    根据所述第一类像素单元组中第一类像素单元采集的像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
    根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
    根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
  19. 根据权利要求17或18所述的装置,其特征在于,所述处理器具体用于:
    通过深度学习网络对所述彩色局部人脸图像的特征信息进行处理,确定所述人脸的真假。
  20. 根据权利要求19所述的装置,其特征在于,所述处理器还用于:
    从所述光学传感器采集的多个真实人脸和虚假人脸的人脸图像中,提取所述第一像素单元集合采集的多个局部人脸图像;
    对所述多个局部人脸图像进行校准和合成处理,得到多个彩色局部人脸图像;
    将所述多个彩色局部人脸图像输入至深度学习网络进行训练,得到所述深度学习网络的模型和参数。
  21. 根据权利要求12至20中任一项所述的装置,其特征在于,所述处理器还用于:
    根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别。
  22. 一种用于人脸识别的方法,其特征在于,包括:
    通过光学传感器的第一像素单元集合中的像素单元接收由光源发射的光信号从人脸反射的反射光信号,并根据所述反射光信号获取局部人脸图像,其中,所述第一像素单元集合包括第一类像素单元组和第二类像素单元组,所述第一类像素单元组包括至少一个第一类像素单元,所述第一类像素单元设置第一滤光片,所述第一滤光片用于通过第一波段范围的光信号;所 述第二类像素单元组包括至少一个第二类像素单元,所述第二类像素单元设置第二滤光片,所述第二滤光片用于通过第二波段范围的光信号,且所述第二波段范围不同于所述第一波段范围;
    根据所述局部人脸图像用于确定所述人脸的真假。
  23. 根据权利要求22所述的方法,其特征在于,所述第一像素单元集合还包括第三类像素单元组,所述第三类像素单元组包括至少一个第三类像素单元,所述第三类像素单元设置第三滤光片,所述第三滤光片用于通过第三波段范围的光信号,所述第三波段范围不同于所述第一波段范围和所述第二波段范围。
  24. 根据权利要求23所述的方法,其特征在于,所述第一波段范围,所述第二波段范围和所述第三波段范围分别为以下三种波段范围中的一种:
    包括560nm的波段范围,包括980nm的波段范围,包括940nm的波段范围。
  25. 根据权利要求23或24所述的方法,其特征在于,所述光源发射的光信号的波段范围包括所述第一波段范围,所述第二波段范围和所述第三波段范围。
  26. 根据权利要求23至25中任一项所述的方法,其特征在于,所述根据所述局部人脸图像用于确定所述人脸的真假,包括:
    根据校准参数,对所述第一类像素单元组采集的第一局部人脸图像,所述第二类像素单元组采集的第二局部人脸图像和所述第三类像素单元组采集的第三局部人脸图像进行校准;
    根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假。
  27. 根据权利要求26所述的方法,其特征在于,所述方法还包括:
    根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数。
  28. 根据权利要求27所述的方法,其特征在于,所述根据参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应之间的关系,确定所述校准参数,包括:
    在所述光源向所述参考对象发射光信号时,通过所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组分别采集所述参考对象 对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应;
    根据所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应,确定所述校准参数,所述校准参数用于使得所述第一类像素单元组,所述第二类像素单元组和所述第三类像素单元组采集的所述参考对象对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的响应在同一范围内。
  29. 根据权利要求26至28中任一项所述的方法,其特征在于,所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像确定所述人脸的真假,包括:
    根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,其中,所述彩色局部人脸图像中的每个像素包括所述人脸对所述第一波段范围,所述第二波段范围和所述第三波段范围的光谱的三个响应像素值;
    将所述彩色局部人脸图像进行特征提取,得到所述彩色局部人脸图像的特征信息;
    根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假。
  30. 根据权利要求29所述的方法,其特征在于,所述根据校准后的所述第一局部人脸图像,所述第二局部人脸图像和所述第三局部人脸图像进行合成,得到彩色局部人脸图像,包括:
    根据所述第一类像素单元组中第一类像素单元采集的采用像素值,确定所述彩色局部人脸图像中的第一像素对应的第一响应像素值,其中,所述第一响应像素值表示所述人脸对所述第一波段范围的光谱的响应;
    根据与所述第一类像素单元邻近的第二类像素单元采集的像素值,确定所述第一像素对应的第二响应像素值,其中,所述第二响应像素值表示所述人脸对所述第二波段范围的光谱响应;以及
    根据与所述第一类像素单元邻近的第三类像素单元采集的像素值,确定所述第一像素对应的第三响应像素值,所述第三响应像素值表示所述人脸对所述第三波段范围的光谱响应。
  31. 根据权利要求29或30所述的方法,其特征在于,所述根据所述彩色局部人脸图像的特征信息,确定所述人脸的真假,包括:
    通过深度学习网络对所述彩色局部人脸图像的特征信息进行处理,确定所述人脸的真假。
  32. 根据权利要求31所述的方法,其特征在于,所述方法还包括:
    从所述光学传感器采集的多个真实人脸和虚假人脸的人脸图像中,提取所述第一像素单元集合采集的多个局部人脸图像;
    对所述多个局部人脸图像进行校准和合成处理,得到多个彩色局部人脸图像;
    将所述多个彩色局部人脸图像输入至深度学习网络进行训练,得到所述深度学习网络的模型和参数。
  33. 根据权利要求22至32中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别。
  34. 根据权利要求22至33中任一项所述的方法,其特征在于,所述根据所述像素阵列中除所述第一像素单元集合以外的其他像素单元采集的人脸图像进行人脸识别,包括:
    若所述人脸图像与注册的人脸图像模板匹配且所述人脸为真实人脸,确定人脸识别成功。
  35. 根据权利要求22至34中任一项所述的方法,其特征在于,所述第一像素单元集合中连续的像素单元的数量小于第一阈值。
  36. 根据权利要求22至35中任一项所述的方法,其特征在于,所述第一像素单元集合中的像素单元的数量与所述像素阵列中的像素单元的总数量的比例小于第一比值。
  37. 根据权利要求22至36中任一项所述的方法,其特征在于,所述第一像素单元集合中的像素单元离散分布在所述像素阵列中。
  38. 根据权利要求22至37中任一项所述的方法,其特征在于,所述像素阵列中除所述第一像素单元集合之外的其他像素单元采集的人脸图像用于人脸识别。
  39. 根据权利要求38所述的方法,其特征在于,所述像素阵列中除所述第一像素单元集合之外的其他像素单元不设置滤光片。
  40. 根据权利要求38所述的方法,其特征在于,所述像素阵列中除所 述第一像素单元集合之外的其他像素单元设置特定波段范围的滤光片。
  41. 根据权利要求40所述的方法,其特征在于,所述特定波段范围的滤光片为包括940nm波段范围的滤光片。
  42. 一种电子设备,其特征在于,包括:
    如权利要求12至21中任一项所述的人脸识别的装置。
PCT/CN2019/088653 2019-05-27 2019-05-27 用于人脸识别的光学传感器、装置、方法和电子设备 WO2020237482A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/088653 WO2020237482A1 (zh) 2019-05-27 2019-05-27 用于人脸识别的光学传感器、装置、方法和电子设备
CN201980000834.3A CN110462630A (zh) 2019-05-27 2019-05-27 用于人脸识别的光学传感器、装置、方法和电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088653 WO2020237482A1 (zh) 2019-05-27 2019-05-27 用于人脸识别的光学传感器、装置、方法和电子设备

Publications (1)

Publication Number Publication Date
WO2020237482A1 true WO2020237482A1 (zh) 2020-12-03

Family

ID=68492804

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088653 WO2020237482A1 (zh) 2019-05-27 2019-05-27 用于人脸识别的光学传感器、装置、方法和电子设备

Country Status (2)

Country Link
CN (1) CN110462630A (zh)
WO (1) WO2020237482A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112825126A (zh) * 2019-11-20 2021-05-21 上海箩箕技术有限公司 指纹识别装置及其检测方法
CN116114264A (zh) * 2020-12-31 2023-05-12 Oppo广东移动通信有限公司 图像处理管道、图像处理方法、摄像头组件和电子设备

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100086215A1 (en) * 2008-08-26 2010-04-08 Marian Steward Bartlett Automated Facial Action Coding System
CN102110695A (zh) * 2009-11-06 2011-06-29 索尼公司 固体摄像器件、其制造方法和设计方法以及电子装置
US20130004028A1 (en) * 2011-06-28 2013-01-03 Jones Michael J Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images
CN105574483A (zh) * 2014-10-15 2016-05-11 倪蔚民 一种移动终端前置和人脸/虹膜识别一体化光电成像系统
CN106254785A (zh) * 2015-06-03 2016-12-21 豪威科技股份有限公司 图像传感器及用于改进非可见照明的方法
CN107205139A (zh) * 2017-06-28 2017-09-26 重庆中科云丛科技有限公司 多通道采集的图像传感器及采集方法
CN107330383A (zh) * 2017-06-18 2017-11-07 天津大学 一种基于深度卷积神经网络的人脸识别方法
CN107609459A (zh) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 一种基于深度学习的人脸识别方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101172745B1 (ko) * 2010-01-29 2012-08-14 한국전기연구원 생체로부터 발생하는 다중 분광 광 영상 검출 및 광치료를 위한 복합 장치
CN102622588B (zh) * 2012-03-08 2013-10-09 无锡中科奥森科技有限公司 双验证人脸防伪方法及装置
JP7060355B2 (ja) * 2017-10-17 2022-04-26 株式会社ソニー・インタラクティブエンタテインメント 情報処理システムおよび情報処理方法
CN208819221U (zh) * 2018-09-10 2019-05-03 杭州海康威视数字技术股份有限公司 一种人脸活体检测装置

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100086215A1 (en) * 2008-08-26 2010-04-08 Marian Steward Bartlett Automated Facial Action Coding System
CN102110695A (zh) * 2009-11-06 2011-06-29 索尼公司 固体摄像器件、其制造方法和设计方法以及电子装置
US20130004028A1 (en) * 2011-06-28 2013-01-03 Jones Michael J Method for Filtering Using Block-Gabor Filters for Determining Descriptors for Images
CN105574483A (zh) * 2014-10-15 2016-05-11 倪蔚民 一种移动终端前置和人脸/虹膜识别一体化光电成像系统
CN106254785A (zh) * 2015-06-03 2016-12-21 豪威科技股份有限公司 图像传感器及用于改进非可见照明的方法
CN107609459A (zh) * 2016-12-15 2018-01-19 平安科技(深圳)有限公司 一种基于深度学习的人脸识别方法及装置
CN107330383A (zh) * 2017-06-18 2017-11-07 天津大学 一种基于深度卷积神经网络的人脸识别方法
CN107205139A (zh) * 2017-06-28 2017-09-26 重庆中科云丛科技有限公司 多通道采集的图像传感器及采集方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113418864A (zh) * 2021-06-03 2021-09-21 奥比中光科技集团股份有限公司 一种多光谱图像传感器及其制造方法

Also Published As

Publication number Publication date
CN110462630A (zh) 2019-11-15

Similar Documents

Publication Publication Date Title
WO2020237483A1 (zh) 用于人脸识别的光学传感器、装置、方法和电子设备
CN210091193U (zh) 指纹识别装置和电子设备
US10943083B2 (en) Fingerprint identification apparatus and method and terminal device
WO2020237482A1 (zh) 用于人脸识别的光学传感器、装置、方法和电子设备
CN111133446B (zh) 指纹识别装置和电子设备
US9361681B2 (en) Quality metrics for biometric authentication
WO2020258121A1 (zh) 一种人脸识别的方法、装置和电子设备
CN107742274A (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
KR20150022016A (ko) 얼굴 검증용 시스템들 및 방법
KR20170078729A (ko) 홍채 기반 생체 측정 시스템에서의 도용 검출을 위한 시스템 및 방법
WO2022110846A1 (zh) 一种活体检测的方法及设备
JP2018106720A (ja) 画像処理装置及びその方法
CN107578372A (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
JP7002348B2 (ja) 生体認証装置
CN211628257U (zh) 指纹识别装置和电子设备
CN107770446A (zh) 图像处理方法、装置、计算机可读存储介质和电子设备
CN111801687B (zh) 指纹识别装置和电子设备
CN116894960A (zh) 多光谱图像预处理方法、活体检测方法、设备及存储介质
CN111344711A (zh) 图像采集方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930447

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930447

Country of ref document: EP

Kind code of ref document: A1