WO2022110846A1 - Procédé et dispositif de détection d'organisme vivant - Google Patents

Procédé et dispositif de détection d'organisme vivant (Method and device for living body detection)

Info

Publication number
WO2022110846A1
WO2022110846A1 · PCT/CN2021/107937
Authority
WO
WIPO (PCT)
Prior art keywords
face
gray value
target
value ratio
preset
Prior art date
Application number
PCT/CN2021/107937
Other languages
English (en)
Chinese (zh)
Inventor
黄泽铗
师少光
张丁军
江隆业
黄源浩
肖振中
Original Assignee
奥比中光科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奥比中光科技集团股份有限公司
Publication of WO2022110846A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Definitions

  • the present application belongs to the technical field of living body detection, and in particular, relates to a method and device for living body detection.
  • Existing living body detection methods generally include face detection, eye detection, hand detection, etc.
  • face-based living body detection is the current mainstream, and it mainly includes facial feature detection, face three-dimensional information detection, facial optical flow estimation, and face spectral image detection.
  • the real face and the fake face are generally distinguished according to the reflectance; the premise of obtaining accurate reflectance information is that the spectral curve of the light source needs to be pre-determined.
  • the embodiments of the present application provide a method and device for living body detection, which can solve the problems that existing living body detection methods need additional modules, that system complexity and cost are high, and that they are easily affected by ambient light, resulting in low accuracy of living body detection.
  • an embodiment of the present application provides a method for detecting a living body, including:
  • acquiring target feature regions and reference feature regions in the multiple face spectral images; the number of target feature regions is greater than or equal to 2; the target feature regions include face feature regions;
  • the preset gray value ratio is the ratio of the third gray value of the target feature area to the fourth gray value of the reference feature area in the multiple real face spectral images corresponding to the multiple face spectral images;
  • if the matching is successful, it is determined that the face in the face spectral image is a living face.
  • the acquisition of the target feature regions and the reference feature regions in the multiple face spectral images includes:
  • the feature area that satisfies the preset screening condition is taken as a reference feature area, and the feature area other than the reference feature area is taken as a target feature area.
  • the acquisition of the characteristic area of each face spectral image in the multiple face spectral images includes:
  • Feature regions are extracted from each face spectral image in the plurality of face spectral images according to preset sharpness conditions and preset brightness conditions.
  • the matching of the target gray value ratio with the preset gray value ratio includes:
  • error information in each spectral band is determined according to the target gray value ratio and the preset gray value ratio in the multiple face spectral images
  • a matching result between the target gray value ratio and a preset gray value ratio corresponding to the target feature region is determined according to the error information.
  • the method before determining the error information under each spectral band according to the target gray value ratio and the preset gray value ratio in the multiple face spectral images, the method further includes:
  • a preset gray value ratio is determined according to the first real face reflectivity and the second real face reflectivity.
  • determining the preset gray value ratio according to the first real face reflectivity and the second real face reflectivity includes:
  • the initial gray value ratio corresponding to the multiple face spectral images under each spectral band is determined according to the first real face reflectivity and the second real face reflectivity
  • the initial gray value ratio with the largest number of occurrences among all the initial gray value ratios is used as a preset gray value ratio.
  • the target feature area also includes a background feature area.
  • the reference feature region is a face feature region or a background feature region whose spectral curve fluctuation is less than a preset threshold.
  • an embodiment of the present application provides a device for living body detection, including:
  • a first acquisition unit used for acquiring multiple face spectral images
  • a second acquiring unit configured to acquire target feature regions and reference feature regions in the multiple face spectral images; the number of target feature regions is greater than or equal to 2; the target feature regions include face feature regions;
  • a first calculation unit configured to calculate the ratio of the first gray value of the target feature region to the second gray value of the reference feature region to obtain the target gray value ratio
  • a matching unit configured to match the target gray value ratio with a preset gray value ratio, wherein the preset gray value ratio is among multiple real face spectral images corresponding to multiple face spectral images the ratio of the third gray value of the target feature area to the fourth gray value of the reference feature area;
  • a determination unit configured to determine that the face of the face spectral image is a living human face if the matching is successful.
  • the second obtaining unit includes:
  • a third acquiring unit configured to acquire the characteristic region of each spectral image of the face in the plurality of spectral images of faces
  • the first processing unit is configured to use the feature area that meets the preset screening condition as a reference feature area, and use the feature area other than the reference feature area as a target feature area.
  • the third obtaining unit is specifically used for:
  • Feature regions are extracted from each face spectral image in the plurality of face spectral images according to preset sharpness conditions and preset brightness conditions.
  • the matching unit is specifically used for:
  • error information in each spectral band is determined according to the target gray value ratio and the preset gray value ratio in the multiple face spectral images
  • a matching result between the target gray value ratio and a preset gray value ratio corresponding to the target feature region is determined according to the error information.
  • the matching unit is specifically also used for:
  • a preset gray value ratio is determined according to the first real face reflectivity and the second real face reflectivity.
  • the matching unit is specifically also used for:
  • the initial gray value ratio corresponding to the multiple face spectral images under each spectral band is determined according to the first real face reflectivity and the second real face reflectivity
  • the initial gray value ratio with the largest number of occurrences among all the initial gray value ratios is used as a preset gray value ratio.
  • the target feature area also includes a background feature area.
  • the reference feature region is a face feature region or a background feature region whose spectral curve fluctuation is less than a preset threshold.
  • an embodiment of the present application provides a device for living body detection, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for living body detection as described in the above first aspect is implemented.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; when the computer program is executed by a processor, the method for living body detection according to the above first aspect is implemented.
  • the embodiments of the present application have the following beneficial effects: multiple face spectral images are acquired; target feature regions and reference feature regions in the multiple face spectral images are acquired; the ratio of the first gray value of the target feature region to the second gray value of the reference feature region is calculated to obtain the target gray value ratio; the target gray value ratio is matched with the preset gray value ratio, and if the matching is successful, the face in the face spectral image is determined to be a living human face.
  • the above method can realize multispectral face liveness detection without using an active light source or a spectrometer. It does not need to use visible light and is not affected by ambient light, which avoids the influence of complex and changeable ambient light and improves the accuracy of liveness detection. It also offers high integration, low computing-power requirements, and low cost.
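  • as a hedged end-to-end sketch of the decision summarized above (the array layout, the function name, and the 0.1 error threshold are illustrative assumptions, not part of the application):

```python
import numpy as np

def liveness_decision(target_grays, ref_grays, preset_ratios, err_threshold=0.1):
    """Decide liveness from per-band gray values.

    target_grays: (n_regions, n_bands) mean gray values of the target
    feature regions; ref_grays: (n_bands,) mean gray values of the
    reference feature region; preset_ratios: (n_regions, n_bands) gray
    value ratios measured beforehand on real face spectral images.
    """
    target_ratios = target_grays / ref_grays        # target gray value ratio
    errors = np.abs(target_ratios - preset_ratios)  # per-band error information
    return bool(errors.mean() < err_threshold)      # mean-error match -> live
```

A face whose gray value ratios track the real-face preset ratios closely is judged live; a spoof made of a different material shifts the ratios and fails the match.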
  • FIG. 1 is a schematic flowchart of a method for detecting a living body provided in a first embodiment of the present application
  • FIG. 2 is a schematic flowchart of the refinement of S102 in a method for living body detection provided by the first embodiment of the present application;
  • FIG. 3 is a schematic flowchart of the refinement of S104 in a method for living body detection provided by the first embodiment of the present application;
  • FIG. 4 is a schematic flowchart of S1043 to S1044 in a method for detecting a living body provided by the first embodiment of the present application;
  • FIG. 5 is a schematic diagram of a device for living body detection provided by a second embodiment of the present application.
  • FIG. 6 is a schematic diagram of a device for living body detection provided by a third embodiment of the present application.
  • the term “if” may be contextually interpreted as “when” or “once” or “in response to determining” or “in response to detecting”.
  • the phrases “if it is determined” or “if the [described condition or event] is detected” may be interpreted, depending on the context, to mean “once it is determined”, “in response to the determination”, “once the [described condition or event] is detected”, or “in response to detection of the [described condition or event]”.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases “in one embodiment,” “in some embodiments,” “in other embodiments,” etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean “one or more but not all embodiments” unless specifically emphasized otherwise.
  • the terms “comprising”, “including”, “having” and their variants mean “including but not limited to” unless specifically emphasized otherwise.
  • FIG. 1 is a schematic flowchart of a method for detecting a living body provided by the first embodiment of the present application.
  • the execution body of the method for living body detection in this embodiment is a device having a function of living body detection, for example, a mobile device, a server, a processor, and the like.
  • the method for living body detection as shown in FIG. 1 may include:
  • the device acquires multiple face spectral images.
  • the spectral images may include multispectral images, hyperspectral images, and ultraspectral images.
  • the face spectral image is the image of the face in each spectral band. Since the living body detection in this embodiment is based on human faces, a face image needs to be acquired, where the face image is an image including a human face.
  • the face image should be collected by a spectral camera.
  • the face image may be collected by the device through a camera, or may be received by the device from other devices, which is not limited here.
  • the channels of the multispectral camera correspond to the spectral bands, that is, assuming that the multispectral camera has n channels, which correspond to n target spectral bands respectively, the face image corresponds to n target face spectral images.
  • the spectral chip is based on an ordinary CMOS sensor whose pixels are divided into groups of 3×3; the pixels within each group are coated with broadband filters with different spectral transmission characteristics. The chip obtains the raw data, and the target face spectral images are then calculated from the raw data and the transmission characteristics of each filter.
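  • as a hedged sketch of this recovery step (the linear image-formation model, the least-squares solver, and all numbers below are assumptions for illustration, not from the application):

```python
import numpy as np

# Each 3x3 pixel group yields 9 raw readings r = T @ s, where the rows of T
# hold the transmission characteristics of the 9 broadband filters and s is
# the scene spectrum in the n target bands at that group. A per-group
# least-squares inversion then recovers the spectral values.
rng = np.random.default_rng(0)
T = rng.uniform(0.1, 1.0, size=(9, 5))        # 9 filters x 5 target bands
s_true = np.array([0.2, 0.5, 0.9, 0.4, 0.1])  # spectrum of one pixel group
raw = T @ s_true                              # simulated raw sensor data
s_est, *_ = np.linalg.lstsq(T, raw, rcond=None)
```

With 9 readings for 5 unknowns the noiseless system is overdetermined but consistent, so the least-squares solution recovers the group's spectrum; repeating this per group assembles one spectral image per band.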
  • S102 Acquire target feature regions and reference feature regions in the multiple face spectral images; the number of the target feature regions is greater than or equal to 2; the target feature regions include face feature regions.
  • the device acquires target feature regions in multiple face spectral images.
  • the target feature region is a feature region in the target face spectral image, and the feature can be helpful for subsequent judgment of living body detection.
  • the number of target feature regions is greater than or equal to 2, that is, the target feature regions are not less than two.
  • the target feature area includes the face feature area; the face feature area is a region of the human face, including but not limited to the hair area, forehead area, eyebrow area, pupil area, cheek area, nose area, lip area, etc.
  • target feature area may also include a background feature area.
  • Background feature areas include but are not limited to white wall areas, green plant areas, and the like.
  • the device further acquires the reference feature regions in the multiple face spectral images, and the reference feature regions are regions that can be used as references for the target feature regions. Since it is used as a reference, the reference feature region is required to be a local region with reference value.
  • some regions with stable preset parameters can be selected as the reference feature region. For example, the skin area can be selected as the reference feature area, because even with makeup, the skin will not change much. If the lip area is selected as the reference feature area, when the lips of the face are painted with lipstick, the lips will change greatly, so the lip area is not suitable as the reference feature area.
  • the reference feature region can be a face feature region or a background feature region whose spectral curve fluctuation is less than a preset threshold, that is, a region whose spectral reflectivity is relatively unaffected by the outside world can be selected as the reference feature region.
  • when obtaining the reference feature area, the device may acquire the target feature area and the reference feature area from the face spectral image simultaneously; alternatively, the device may first obtain the target feature areas from the face spectral image and then select one of them as the reference feature area. There is no limitation here.
  • suppose the device first obtains the target feature regions from the face spectral image and then selects one of them as the reference feature region; to describe specifically how the device determines the target feature region and the reference feature region, S102 may include S1021-S1022, as shown in FIG. 2. S1021-S1022 are as follows:
  • S1021 Acquire the characteristic area of each spectral image of the face in the plurality of spectral images of the face.
  • the device acquires the characteristic area of each face spectral image in the multiple face spectral images. Likewise, there are at least two feature areas, and the feature areas include at least the face area.
  • the preset sharpness condition and the preset brightness condition are stored in the device; according to these conditions, feature regions are extracted from each face spectral image.
  • the exposure parameters of the target feature area need to be adjusted under different lighting conditions.
  • this method of selecting a feature region is also applicable to the above determination of the target feature region.
  • S1022 Use the feature area that meets the preset screening condition as a reference feature area, and use the feature area other than the reference feature area as a target feature area.
  • Preset filter conditions are stored in the device, and the device uses the feature areas that meet the preset filter conditions as reference feature areas.
  • the reference feature region may be a face feature region or a background feature region whose spectral curve fluctuation is less than a preset threshold, and the reference feature region may be a region where the spectral reflectance is not affected by the outside world.
  • the preset filter conditions are not limited here, and the preset filter conditions may include parameter conditions of the reference feature area, and may also include area type conditions of the reference feature area.
  • the device uses the feature regions other than the reference feature regions as the target feature regions.
  • S103 Calculate the ratio of the first gray value of the target feature region to the second gray value of the reference feature region to obtain a target gray value ratio.
  • the device calculates the ratio of the first gray value of the target feature area to the second gray value of the reference feature area to obtain the target gray value ratio; the target gray value ratio is the ratio of the gray values of the target feature area and the reference feature area under each target spectral band.
  • the target gray value ratios calculated from different target feature regions are different; for different face spectral images, the target gray value ratios calculated from the same target feature region are different.
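  • S103 can be sketched as follows, assuming the feature regions are given as boolean masks over the image (the mask representation and the function name are assumptions for illustration):

```python
import numpy as np

def target_gray_ratio(cube, target_mask, ref_mask):
    """cube: (n_bands, H, W) stack of face spectral images; the masks are
    boolean (H, W) arrays marking the target and reference feature regions.
    Returns the per-band ratio of the regions' mean gray values."""
    first_gray = cube[:, target_mask].mean(axis=1)  # first gray value, per band
    second_gray = cube[:, ref_mask].mean(axis=1)    # second gray value, per band
    return first_gray / second_gray                 # target gray value ratio
```

Calling this once per target feature region yields the set of target gray value ratios that are matched against the preset ratios in S104.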
  • S104 Match the target gray value ratio with a preset gray value ratio, wherein the preset gray value ratio is the target in the multiple real face spectral images corresponding to the multiple face spectral images The ratio of the third gray value of the feature area to the fourth gray value of the reference feature area.
  • the gray value ratio of a real face does not change much and can be regarded as a fixed value. Therefore, the device obtains the preset gray value ratio in advance and uses it for matching against the target gray value ratio corresponding to the face image to be detected.
  • a preset gray value ratio is pre-stored in the device; the preset gray value ratio is the ratio of the third gray value of the target feature region to the fourth gray value of the reference feature region in the multiple real face spectral images corresponding to the multiple face spectral images.
  • the device matches the target gray value ratio with the preset gray value ratio, and the matching process is the comparison process.
  • the gray value in each band is determined by the quantum efficiency (QE) value of the spectral camera in that band, the light intensity, and the reflectivity, and the QE value is fixed; therefore the reflectivity ratio between the multiple target feature regions and the reference feature region can be calculated, which can be used to distinguish real and fake faces and is not affected by ambient light.
  • the QE value of each channel of the multispectral camera can be expressed as QE1, QE2, …, QEn. For the same multispectral camera, the QE values are fixed and known.
  • the QE value of different channels is different.
  • the light intensity of ambient light in the above n target spectral bands can be expressed as L1, L2, …, Ln, and the light intensity of each band varies under different lighting conditions.
  • the gray values of the target feature regions under each spectral band can be extracted: the gray value of area a can be expressed as Ia1, Ia2, …, Ian, and the gray value of area b as Ib1, Ib2, …, Ibn, where the subscript i denotes the i-th channel. Since each gray value is the product of the channel QE value, the ambient light intensity, and the area's reflectivity, the ratio Ibi/Iai equals the reflectivity ratio between area b and area a, which has nothing to do with the ambient light.
  • therefore, matching the target gray value ratio with the preset gray value ratio is, in effect, matching the reflectivity of the face to be detected against that of a real face. Since the reflectivity ratios of different materials differ, this property can be used to determine whether the face corresponding to the detected face spectral image is a living face.
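  • the cancellation of ambient light can be checked numerically under the simplified image-formation model implied by the text, I = QE_i · L_i · R (all QE and reflectivity numbers below are arbitrary illustration values):

```python
import numpy as np

qe = np.array([0.6, 0.5, 0.4])         # fixed per-channel QE of the camera
refl_a = np.array([0.30, 0.35, 0.40])  # reflectivity of area a per band
refl_b = np.array([0.45, 0.42, 0.20])  # reflectivity of area b per band

def gray_ratio(ambient):
    # Simplified image formation: I_xi = QE_i * L_i * R_x
    i_a = qe * ambient * refl_a
    i_b = qe * ambient * refl_b
    return i_b / i_a                   # QE_i and L_i cancel channel-wise

bright = gray_ratio(np.array([900.0, 700.0, 500.0]))
dim = gray_ratio(np.array([12.0, 30.0, 8.0]))
```

The ratio is identical under both lightings and equals the reflectivity ratio, which is why the method needs no active light source.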
  • S104 may include S1041 to S1042, as shown in FIG. 3 , the details of S1041 to S1042 are as follows:
  • S1041 Determine error information in each spectral band according to the target gray value ratio and the preset gray value ratio in the multiple face spectral images.
  • the device first obtains the preset gray value ratio; the device stores the corresponding relationship between the preset feature area and the preset gray value ratio, and according to this correspondence it determines the preset real face gray value ratio corresponding to the target feature area.
  • the device compares and matches the target gray value ratio in multiple face spectral images with the preset gray value ratio one by one, calculates the difference between the two, and determines the error information under each spectral band.
  • S1042 Determine, according to the error information, a matching result between the target gray value ratio and a preset gray value ratio corresponding to the target feature area.
  • the device compares and matches the target gray value ratio in each face spectral image with the preset gray value ratio one by one, and obtains the target error information corresponding to each target face spectral image.
  • the device can integrate all error information to determine the matching result between the target gray value ratio and the preset gray value ratio. For example, the device can take the mean of all error information: when the mean is less than the preset mean threshold, the matching succeeds; when the mean is greater than or equal to the preset mean threshold, the matching fails.
  • before S1041, the method may further include S1043-S1044, as shown in FIG. 4. S1043-S1044 are as follows:
  • S1043 Acquire the first real face reflectivity of the target feature region and the second real face reflectivity of the reference feature region in the real face spectral image under each spectral band.
  • the device acquires the first real face reflectivity of the target feature region and the second real face reflectivity of the reference feature region in the real face spectral image in each spectral band.
  • for a given material, the reflectivity in a specific wavelength band is fixed, but the reflectivity differs between wavelength bands, so the reflectivity needs to be obtained in multiple wavelength bands.
  • the target feature region and the reference feature region in the real face spectral image are the same as the target feature region and the reference feature region in the face spectral image mentioned above.
  • the first real face reflectivity and the second real face reflectivity are the reflectivity of the real face in the corresponding area.
  • S1044 Determine a preset gray value ratio according to the first real face reflectivity and the second real face reflectivity.
  • the device determines the preset gray value ratio according to the first real face reflectivity and the second real face reflectivity. The reflectivity differs between target spectral bands, so the ratio of the first real face reflectivity to the second real face reflectivity differs between bands; the device can determine the preset gray value ratio from these ratios over all spectral bands.
  • the device determines the preset gray value ratio according to the reflectivity of the first real face and the reflectivity of the second real face.
  • in one embodiment, the device takes the average value of all initial real face gray value ratios as the real face gray value ratio corresponding to the preset feature area; in another embodiment, the device takes the initial real face gray value ratio with the most occurrences among all initial real face gray value ratios as the real face gray value ratio corresponding to the preset feature area.
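  • a minimal sketch of the mode-based selection described above (the function name and the rounding step, added so that near-equal floating-point ratios can coincide, are assumptions):

```python
from collections import Counter

def preset_ratio(initial_ratios, decimals=2):
    """Take the most frequent initial gray value ratio as the preset
    gray value ratio; ratios are rounded before counting."""
    rounded = [round(r, decimals) for r in initial_ratios]
    value, _count = Counter(rounded).most_common(1)[0]
    return value
```

The mean-based variant mentioned in the same paragraph would simply replace the `Counter` step with an average over `initial_ratios`.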
  • if the matching succeeds, the face in the face spectral image is determined to be a living face, and the liveness detection is completed; if the matching fails, the face in the face spectral image is determined to be a fake face.
  • the embodiments of the present application have the following beneficial effects: multiple face spectral images are acquired; target feature regions and reference feature regions in the multiple face spectral images are acquired; the ratio of the first gray value of the target feature region to the second gray value of the reference feature region is calculated to obtain the target gray value ratio; the target gray value ratio is matched with the preset gray value ratio, and if the matching is successful, the face in the face spectral image is determined to be a living human face.
  • the above method can realize multispectral face liveness detection without using an active light source or a spectrometer. It does not need to use visible light and is not affected by ambient light, which avoids the influence of complex and changeable ambient light and improves the accuracy of liveness detection. It also offers high integration, low computing-power requirements, and low cost.
  • FIG. 5 is a schematic diagram of a device for living body detection provided by a second embodiment of the present application.
  • the included units are used to execute the steps in the embodiments corresponding to FIG. 1 to FIG. 4 .
  • the apparatus 5 for living body detection includes:
  • a first acquiring unit 510 configured to acquire multiple face spectral images
  • the second acquiring unit 520 is configured to acquire target feature regions and reference feature regions in the multiple face spectral images; the number of target feature regions is greater than or equal to 2; the target feature regions include face feature regions;
  • a first calculation unit 530 configured to calculate the ratio of the first gray value of the target feature region to the second gray value of the reference feature region to obtain the target gray value ratio
  • the matching unit 540 is configured to match the target gray value ratio with a preset gray value ratio, wherein the preset gray value ratio is a plurality of real face spectral images corresponding to a plurality of face spectral images The ratio of the third gray value of the target feature area to the fourth gray value of the reference feature area;
  • the determining unit 550 is configured to determine that the face of the face spectral image is a living human face if the matching is successful.
  • the second obtaining unit 520 includes:
  • a third acquiring unit configured to acquire the characteristic region of each spectral image of the face in the plurality of spectral images of faces
  • a first processing unit configured to use the feature area that satisfies a preset screening condition as a reference feature area, and use the feature area other than the reference feature area as a target feature area.
  • the third obtaining unit is specifically used for:
  • Feature regions are extracted from each face spectral image in the plurality of face spectral images according to preset sharpness conditions and preset brightness conditions.
  • the matching unit 540 is specifically used for:
  • error information in each spectral band is determined according to the target gray value ratio and the preset gray value ratio in the multiple face spectral images
  • a matching result between the target gray value ratio and a preset gray value ratio corresponding to the target feature region is determined according to the error information.
  • the matching unit 540 is specifically also used for:
  • a preset gray value ratio is determined according to the first real face reflectivity and the second real face reflectivity.
  • the matching unit 540 is specifically also used for:
  • the initial gray value ratio corresponding to the multiple face spectral images under each spectral band is determined according to the first real face reflectivity and the second real face reflectivity
  • the initial gray value ratio with the largest number of occurrences among all the initial gray value ratios is used as a preset gray value ratio.
  • the target feature area also includes a background feature area.
  • the reference feature region is a face feature region or a background feature region whose spectral curve fluctuation is less than a preset threshold.
  • FIG. 6 is a schematic diagram of a device for living body detection provided by a third embodiment of the present application.
  • the apparatus 6 for living body detection in this embodiment includes: a processor 60 , a memory 61 , and a computer program 62 stored in the memory 61 and executable on the processor 60 , for example, a living body detection program.
  • the processor 60 executes the computer program 62
  • the steps in each of the above-mentioned embodiments of the living body detection method are implemented, for example, steps 101 to 105 shown in FIG. 1 .
  • the processor 60 executes the computer program 62
  • the functions of the modules/units in each of the foregoing apparatus embodiments such as the functions of the modules 510 to 550 shown in FIG. 5, are implemented.
  • the computer program 62 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 61 and executed by the processor 60 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 62 in the living body detection device 6 .
  • the computer program 62 can be divided into a first acquisition unit, a second acquisition unit, a first calculation unit, a matching unit, and a determination unit, and the specific functions of each unit are as follows:
  • a first acquisition unit used for acquiring multiple face spectral images
  • a second acquiring unit configured to acquire target feature regions and reference feature regions in the multiple face spectral images; the number of target feature regions is greater than or equal to 2; the target feature regions include face feature regions;
  • a first calculation unit configured to calculate the ratio of the first gray value of the target feature region to the second gray value of the reference feature region to obtain the target gray value ratio;
  • a matching unit configured to match the target gray value ratio with a preset gray value ratio, wherein the preset gray value ratio is the ratio of the third gray value of the target feature region to the fourth gray value of the reference feature region in the multiple real face spectral images corresponding to the multiple face spectral images;
  • a determination unit configured to determine that the face of the face spectral image is a living human face if the matching is successful.
  • the living body detection device may include, but is not limited to, a processor 60 and a memory 61 .
  • FIG. 6 is only an example of the living body detection device 6 and does not constitute a limitation on the device 6; it may include more or fewer components than shown, combine certain components, or use different components. For example, the living body detection device may also include input and output devices, network access devices, buses, and the like.
  • the so-called processor 60 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory 61 may be an internal storage unit of the device 6 for living body detection, such as a hard disk or a memory of the device 6 for living body detection.
  • the memory 61 can also be an external storage device of the device 6 for living body detection, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the device 6 for living body detection.
  • the living body detection device 6 may further include both an internal storage unit of the living body detection device 6 and an external storage device.
  • the memory 61 is used to store the computer program and other programs and data required by the living body detection device.
  • the memory 61 can also be used to temporarily store data that has been output or will be output.
  • An embodiment of the present application also provides a network device, the network device including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application further provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal implements the steps in the foregoing method embodiments by executing the computer program product.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware.
  • the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the above method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, etc.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • In some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the disclosed apparatus/network device and method may be implemented in other manners.
  • the apparatus/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces as an indirect coupling or communication connection between devices or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
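The ratio computation and matching logic enumerated for the units above can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the claimed implementation; the band count, the rounding precision used to bucket the initial ratios, and the matching tolerance are assumed values introduced only for the example.

```python
from collections import Counter

def preset_gray_ratio(first_reflectivity, second_reflectivity, decimals=2):
    """Derive the preset gray value ratio from two real-face reflectivities:
    compute an initial ratio per spectral band, then keep the ratio that
    occurs most often across all bands (the mode)."""
    ratios = [round(a / b, decimals)
              for a, b in zip(first_reflectivity, second_reflectivity)]
    return Counter(ratios).most_common(1)[0][0]

def matches_preset(target_grays, reference_grays, preset, tol=0.1):
    """Per-band error information: |target ratio - preset ratio|.
    Matching succeeds only if every spectral band stays within tolerance."""
    return all(abs(t / r - preset) <= tol
               for t, r in zip(target_grays, reference_grays))

# Example with three bands whose real-face target/reference ratio is 2.0:
preset = preset_gray_ratio([0.42, 0.44, 0.42], [0.21, 0.22, 0.21])  # 2.0
matches_preset([84, 88, 84], [42, 44, 42], preset)   # live-like ratios match
matches_preset([84, 60, 84], [42, 44, 42], preset)   # one band deviates: fail
```

The intuition is that a printed or displayed spoof changes the per-band reflectance relationship between the skin region and the reference region, so at least one band's gray value ratio drifts outside the tolerance.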

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a living body detection method, comprising: acquiring a plurality of face spectral images (S101); acquiring target feature regions and reference feature regions in the plurality of face spectral images (S102); calculating the ratio of a first gray value of the target feature regions to a second gray value of the reference feature regions to obtain a target gray value ratio (S103); matching the target gray value ratio with a preset gray value ratio (S104); and, if the matching is successful, determining that the face in the face spectral images is the face of a living body (S105). With this method, neither an active light source nor a spectrograph is needed to perform face-based multispectral living body detection, no visible light is required, and the living body detection is not affected by ambient light. The influence of complex and changing ambient light can thus be avoided, improving the accuracy of living body detection while achieving high integration, low computational requirements, and reduced cost.
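Read as pseudocode, steps S101 to S105 amount to a per-band ratio test. The sketch below is only a schematic illustration: the pixel-coordinate regions, the single shared preset ratio, and the fixed tolerance are assumptions for the example, not details taken from the specification.

```python
def detect_living_body(face_images, target_region, reference_region,
                       preset_ratio, tol=0.1):
    """face_images: one 2-D grayscale image (list of rows) per spectral band.
    A region is a list of (row, col) pixel coordinates."""
    def mean_gray(img, region):
        return sum(img[r][c] for r, c in region) / len(region)

    for img in face_images:                        # S101: acquired spectral images
        first = mean_gray(img, target_region)      # S102/S103: first gray value
        second = mean_gray(img, reference_region)  # second gray value
        ratio = first / second                     # S103: target gray value ratio
        if abs(ratio - preset_ratio) > tol:        # S104: match with preset ratio
            return False
    return True                                    # S105: live face

# One-band example: target pixels twice as bright as reference pixels.
band = [[100, 100], [50, 50]]
target = [(0, 0), (0, 1)]
reference = [(1, 0), (1, 1)]
detect_living_body([band], target, reference, preset_ratio=2.0)
```

Because only gray value ratios within each captured band are compared, the check needs no active light source or spectrograph, which is the point the abstract makes about robustness to ambient light.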
PCT/CN2021/107937 2020-11-24 2021-07-22 Method and device for living body detection WO2022110846A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011327958.3A CN112580433A (zh) 2020-11-24 2020-11-24 Living body detection method and device
CN202011327958.3 2020-11-24

Publications (1)

Publication Number Publication Date
WO2022110846A1 (fr)

Family

ID=75124109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/107937 WO2022110846A1 (fr) 2020-11-24 2021-07-22 Method and device for living body detection

Country Status (2)

Country Link
CN (1) CN112580433A (fr)
WO (1) WO2022110846A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100714A (zh) * 2022-06-27 2022-09-23 平安银行股份有限公司 Living body detection method and apparatus based on face image, and server

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580433A (zh) * 2020-11-24 2021-03-30 奥比中光科技集团股份有限公司 Living body detection method and device
CN113297977B (zh) * 2021-05-26 2023-12-22 奥比中光科技集团股份有限公司 Living body detection method and apparatus, and electronic device
CN113297978B (zh) * 2021-05-26 2024-05-03 奥比中光科技集团股份有限公司 Living body detection method and apparatus, and electronic device
CN113533256B (zh) * 2021-06-30 2024-03-12 奥比中光科技集团股份有限公司 Spectral reflectivity determination method, apparatus and device
CN113609907B (zh) * 2021-07-01 2024-03-12 奥比中光科技集团股份有限公司 Multispectral data acquisition method, apparatus and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862247A * 2017-10-13 2018-03-30 平安科技(深圳)有限公司 Face liveness detection method and terminal device
US20190026544A1 * 2016-02-09 2019-01-24 Aware, Inc. Face liveness detection using background/foreground motion analysis
CN111353326A * 2018-12-20 2020-06-30 上海聚虹光电科技有限公司 Living body detection method based on multispectral face difference maps
CN111814564A * 2020-06-09 2020-10-23 广州视源电子科技股份有限公司 Living body detection method, apparatus, device and storage medium based on multispectral images
CN112580433A * 2020-11-24 2021-03-30 奥比中光科技集团股份有限公司 Living body detection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6081612A (en) * 1997-02-28 2000-06-27 Electro Optical Sciences Inc. Systems and methods for the multispectral imaging and characterization of skin tissue
CN102831400B * 2012-07-31 2015-01-28 西北工业大学 Multispectral face recognition method and system
CN103268499B * 2013-01-23 2016-06-29 北京交通大学 Human skin detection method based on multispectral imaging

Also Published As

Publication number Publication date
CN112580433A (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
WO2022110846A1 (fr) Method and device for living body detection
WO2021004180A1 (fr) Texture feature extraction method, texture feature extraction apparatus and terminal device
CN107798652A (zh) Image processing method and apparatus, readable storage medium and electronic device
CN110660088A (zh) Image processing method and device
CN109587466B (zh) Method and apparatus for color shading correction
EP3664016B1 (en) Image detection method and apparatus, and terminal
CN107742274A (zh) Image processing method and apparatus, computer-readable storage medium and electronic device
CN107911625A (zh) Light metering method and apparatus, readable storage medium and computer device
CN107993209A (zh) Image processing method and apparatus, computer-readable storage medium and electronic device
US20210006760A1 (en) Meta-learning for camera adaptive color constancy
US11455535B2 (en) Systems and methods for sensor-independent illuminant determination
WO2022257396A1 (fr) Method and apparatus for determining color-fringe pixel points in an image, and computer device
WO2023273411A1 (fr) Multispectral data acquisition method, apparatus and device
WO2022036539A1 (fr) Color consistency correction method and device for multiple cameras
WO2022127111A1 (fr) Cross-modal face recognition method, apparatus and device, and storage medium
WO2019029573A1 (fr) Image blurring method, computer-readable storage medium and computer device
WO2022247840A1 (fr) Methods and apparatuses for acquiring light source spectra and multispectral reflectivity images, and electronic device
CN113676639A (zh) Image processing method, processing apparatus, electronic device and medium
CN107845076A (zh) Image processing method and apparatus, computer-readable storage medium and computer device
US20230169749A1 (en) Skin color detection method and apparatus, terminal, and storage medium
CN116012895A (zh) Palm feature recognition device, recognition method therefor, and storage medium
CN113297977B (zh) Living body detection method and apparatus, and electronic device
JP6398860B2 (ja) Color correction apparatus, color correction method and color correction program
WO2022198436A1 (fr) Image sensor, image data acquisition method and imaging device
JP2022117205A (ja) Information processing apparatus, imaging system, information processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21896353

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21896353

Country of ref document: EP

Kind code of ref document: A1