WO2022134754A1 - Data processing method, system, apparatus, device, and medium - Google Patents
Data processing method, system, apparatus, device, and medium
- Publication number: WO2022134754A1 (PCT/CN2021/123759)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Definitions
- Embodiments of this specification relate generally to data processing, and more particularly to data processing methods, systems, apparatus, electronic devices, and computer storage media.
- Face recognition technology is widely used in various fields, such as finance, security, consumer services, and medical care.
- With the widespread application of face recognition technology, there are more and more attack methods against face recognition systems.
- Typical attack methods include photo remakes, video ripping, and video editing.
- To ensure the security and reliability of face recognition systems, liveness detection technology is gradually being applied in them to defend against the above diverse attack methods.
- However, traditional liveness detection techniques perform poorly.
- According to embodiments of this specification, a data processing scheme is provided.
- In a first aspect of this specification, a data processing method is provided. The method includes: acquiring a target image associated with a target object, where the target image presents speckles generated by the target object reflecting laser light of a specified wavelength; determining, from the target image, a target sub-image associated with a specific part of the target object; and performing liveness detection on the target object based on the target sub-image.
- In a second aspect, a data processing system is provided. The system includes: a laser transmitter for emitting laser light of a specified wavelength toward a target object; a laser receiver for generating a target image associated with the target object, where the target image presents speckles generated by the target object reflecting the laser light of the specified wavelength; and a controller configured to perform the method according to the first aspect of this specification.
- In a third aspect, an apparatus for data processing is provided. The apparatus includes: an acquisition module configured to acquire a target image associated with the target object, where the target image presents speckles generated by the target object reflecting laser light of a specified wavelength; a determination module configured to determine, from the target image, a target sub-image associated with a specific part of the target object; and a detection module configured to perform liveness detection on the target object based on the target sub-image.
- In a fourth aspect, an electronic device is provided. The electronic device includes: one or more processors; and a memory for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the method according to the first aspect of this specification.
- In a fifth aspect, a computer-readable medium is provided, on which a computer program is stored; the program, when executed by a processor, implements the method according to the first aspect of this specification.
- In a sixth aspect, a detection method is provided. The method includes: emitting laser light of a specified wavelength toward a target object; receiving a target image associated with the target object, where the target image presents speckles generated by the target object reflecting the laser light of the specified wavelength; and identifying, based on the target image, whether the reflecting part of the target object is of a specific material.
- FIG. 1 shows a schematic diagram of an exemplary environment in which embodiments of the present specification can be implemented;
- FIG. 2 shows a flowchart of a data processing method according to some embodiments of the present specification;
- FIG. 3 shows a schematic diagram of an example of acquiring a target image according to some embodiments of the present specification;
- FIG. 4 shows a schematic diagram of an example of determining a target sub-image according to some embodiments of the present specification;
- FIG. 5 shows a block diagram of an apparatus for data processing according to some embodiments of the present specification; and
- FIG. 6 shows a block diagram of an electronic device capable of implementing embodiments of the present specification.
- Liveness detection technology is adopted in face recognition systems to prevent malicious attacks.
- Traditional liveness detection techniques can be classified into active liveness detection and silent liveness detection.
- In active liveness detection, the user is required to perform certain interactive actions so that the user can be identified as a real live user.
- In silent liveness detection, the user is identified as a real live user or a fake attack object by an algorithm, without any interaction from the user.
- Silent liveness detection mainly includes monocular liveness detection, binocular liveness detection, colorful-light liveness detection, and depth liveness detection.
- Monocular liveness detection can adopt RGB (red, green, blue) imaging: it uses RGB images captured under natural light to identify live users.
- However, monocular liveness detection has a low defense rate against paper-mask and screen-display attack objects, and its defense ability under natural light is poor.
- Binocular liveness detection utilizes black-and-white grayscale images formed by infrared light. Since infrared imaging is reflective, it cannot image screen-display attack objects, so its defense against screen-display attacks is better. However, binocular liveness detection does not significantly improve defense against paper masks, and because it requires an additional infrared camera, it increases hardware cost.
- Colorful-light (dazzle) liveness detection performs liveness detection by irradiating the target object with natural light of different colors. Because ambient light affects its accuracy, the technique has inherent defects: under strong or dim lighting, it is difficult to detect liveness accurately. Moreover, during detection the color of the light irradiated each time is visible, so malicious attackers may craft attacks according to the light color, resulting in low security and privacy.
- Depth liveness detection may utilize depth imaging techniques such as 3D (three-dimensional) structured-light imaging. Since a real live user is three-dimensional while paper-mask or screen-display attack objects are flat, real live users and fake attack objects can be distinguished by obtaining the overall depth information of the target object.
- To this end, embodiments of this specification provide a data processing solution.
- In this solution, a target image associated with a target object can be acquired.
- The target image presents speckles, which are generated by the target object reflecting laser light of a specified wavelength.
- A target sub-image associated with a specific part of the target object may then be determined from the target image, and liveness detection may be performed on the target object based on the target sub-image.
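The three steps of this solution (acquire the speckle image, crop the sub-image for the specific part, classify it) can be sketched as follows. This is only an illustrative pipeline: `detect_part_bbox` and `classify_speckle` are hypothetical placeholders standing in for the face detector and the trained detection model described later, and the threshold used is an arbitrary demo value, not one taken from this specification.

```python
import numpy as np

def detect_part_bbox(reference_image):
    # Hypothetical stand-in for a face detector (e.g., MTCNN); here it
    # simply returns the central half of the image as (top, left, bottom, right).
    h, w = reference_image.shape[:2]
    return (h // 4, w // 4, 3 * h // 4, 3 * w // 4)

def classify_speckle(sub_image):
    # Hypothetical stand-in for the trained detection model: thresholds the
    # speckle contrast (std / mean) of the sub-image. A real system would use
    # a trained classifier instead of this fixed threshold.
    contrast = sub_image.std() / (sub_image.mean() + 1e-8)
    return bool(contrast > 0.3)  # True -> treated as "live" in this toy sketch

def liveness_detect(target_image, reference_image):
    # Step 1: the target (speckle) image is assumed already acquired.
    # Step 2: locate the specific part in the reference image and reuse the
    # same box on the target image (the two are assumed aligned here).
    top, left, bottom, right = detect_part_bbox(reference_image)
    sub = target_image[top:bottom, left:right]
    # Step 3: classify the speckle sub-image.
    return classify_speckle(sub)

# Toy usage with synthetic images (not real speckle data).
rng = np.random.default_rng(0)
target = rng.uniform(0.0, 1.0, size=(64, 64))   # high-contrast "speckle"
reference = np.ones((64, 64))
print(liveness_detect(target, reference))
```

In practice the reference and speckle cameras are not pixel-aligned, which is why the later embodiment maps coordinates between the two images instead of reusing the box directly.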
- FIG. 1 shows a schematic diagram of an exemplary environment 100 in which embodiments of the present specification can be implemented.
- a controller 110 is included in the environment 100 .
- the controller 110 may include at least a processor, memory, and other components commonly found in a general-purpose computer to implement functions such as computing, storage, communication, control, and the like.
- the controller 110 may be a smartphone, tablet computer, personal computer, desktop computer, notebook computer, server, mainframe, distributed computing system, or the like.
- controller 110 is configured to acquire target image 130 associated with target object 120 .
- The target image 130 presents speckles, which are generated by the target object 120 reflecting laser light of a specified wavelength.
- Laser speckle technology is mainly used for product inspection in industrial settings, such as detecting the roughness of mobile phone casings and the surface roughness of mechanical parts.
- When a monochromatic, highly coherent beam such as a laser irradiates an object, a fine-grained structure appears in the reflected light: every point on the object surface, irrespective of the object's microscopic properties, becomes a source of secondary wavelets, and surfaces of different roughness and materials reflect or scatter light composed of different wavelets.
- Laser speckle technology can therefore quickly obtain characteristic data of the object surface from the speckle image.
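One common characteristic that can be extracted from a speckle image is the speckle contrast, defined as the standard deviation of the intensity divided by its mean; fully developed speckle from a rough, diffusive surface has contrast near 1, while a smooth or uniformly lit surface has contrast near 0. A minimal sketch (the window size is an illustrative assumption, not a value from this specification):

```python
import numpy as np

def speckle_contrast(image):
    """Global speckle contrast C = sigma / mean of the intensity."""
    image = np.asarray(image, dtype=float)
    return image.std() / (image.mean() + 1e-12)

def local_speckle_contrast(image, window=7):
    """Contrast over non-overlapping windows, as a coarse roughness map."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    h, w = h - h % window, w - w % window       # trim to a whole number of windows
    blocks = image[:h, :w].reshape(h // window, window, w // window, window)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, window, window)
    return np.array([speckle_contrast(b) for b in blocks])

rng = np.random.default_rng(1)
# Fully developed speckle has negative-exponential intensity statistics (C ~ 1).
rough = rng.exponential(scale=1.0, size=(70, 70))
# A nearly uniform return, as from a flat glossy surface, has very low contrast.
smooth = 1.0 + 0.01 * rng.standard_normal((70, 70))
print(speckle_contrast(rough) > speckle_contrast(smooth))  # True
```

This single statistic is only one possible feature; the embodiments below feed the whole speckle sub-image to a learned model instead.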
- As a result, the speckle image of a live user will differ from the speckle image of an attack object (e.g., an object in a printed photo, a photo remake, a ripped video, or an edited video).
- Accordingly, the target object 120 can be irradiated with laser light of a specified wavelength, and the different speckles generated by light waves scattered from surfaces of different materials can be used to distinguish a skin surface from materials such as ordinary paper and screens, thereby distinguishing real live users from fake attack objects.
- Laser light of a specified wavelength may be emitted toward the target object 120, and a target image associated with the target object 120 may be received, the target image presenting speckles generated by the target object reflecting the laser light of the specified wavelength.
- It can then be identified, based on the target image, whether the reflecting part of the target object 120 is of a specific material (e.g., skin), so that liveness detection can be performed on the target object 120.
- A target sub-image 140 associated with a specific part (e.g., face, palm, etc.) of the target object 120 may be determined from the target image 130 to improve the efficiency of liveness detection.
- the controller 110 may determine the target sub-image 140 associated with the specific part of the target object 120 from the target image 130 .
- the target sub-image 140 is a speckle image of a specific part of the target object 120 .
- Objects of different roughness and different materials produce different speckle images. Therefore, the speckle image of a specific part of a real live user will differ from the speckle image of the corresponding part of a fake attack object (e.g., a printed photo, a photo remake, a ripped video, or an object in an edited video).
- The controller 110 may perform liveness detection on the target object 120 based on the target sub-image 140 to generate a liveness detection result 150 indicating whether the target object 120 is a live user.
- FIG. 2 shows a flowchart of a data processing method 200 according to some embodiments of the present specification.
- the method 200 may be implemented by the controller 110 as shown in FIG. 1 .
- The method 200 may also be implemented by entities other than the controller 110.
- The method 200 may also include additional steps not shown, and/or the steps shown may be omitted; the scope of this specification is not limited in this regard.
- the controller 110 acquires the target image 130 associated with the target object 120 .
- The target image 130 presents speckles, which are generated by the target object 120 reflecting laser light of a specified wavelength.
- FIG. 3 shows a schematic diagram of an example 300 of acquiring a target image according to some embodiments of the present specification.
- the laser transmitter 310 may be used to emit laser light of a specified wavelength to the target object 120 .
- the laser transmitter 310 may include a power supply device, a laser generating device, a filter device, a lens, and the like.
- the power supply device may supply power to the laser transmitter 310 .
- the laser generating device can generate the initial laser light.
- the initial laser includes a portion at the specified wavelength and a portion not at the specified wavelength.
- the filter device can filter out the portion of the initial laser light that is not at the specified wavelength, so as to generate the laser light of the specified wavelength.
- A lens (e.g., a collimating lens) may adjust the emission direction of the laser light of the specified wavelength so as to direct it toward the target object 120.
- The laser receiver 320 can receive the interference ripples reflected from the target object 120 and generate the target image 130 associated with the target object 120.
- The laser receiver 320 may include a photoelectric amplifier, a data collector, and an image generator.
- The photoelectric amplifier may receive the interference ripples generated by the target object 120 reflecting the laser light of the specified wavelength and amplify them. For example, after receiving the interference ripples, the photoelectric amplifier can amplify them by a specified ratio to a level that the data collector can recognize.
- The data collector can receive the amplified interference ripples and convert them into a digital signal.
- The image generator may receive the digital signal and generate the target image 130 based on it. Thereby, the controller 110 can acquire the target image 130.
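The image generator's task can be pictured as reshaping the sampled digital signal into a 2-D intensity grid and normalizing it to pixel values. A minimal sketch under assumed parameters: the row-major scan order and 8-bit output are illustrative choices, not details stated in this specification.

```python
import numpy as np

def signal_to_image(samples, height, width):
    """Reshape a 1-D digitized interference signal into a 2-D image,
    normalized to 8-bit pixel intensities."""
    samples = np.asarray(samples, dtype=float)
    if samples.size != height * width:
        raise ValueError("sample count must equal height * width")
    grid = samples.reshape(height, width)        # assumed row-major scan order
    lo, hi = grid.min(), grid.max()
    scaled = (grid - lo) / (hi - lo + 1e-12)     # normalize to [0, 1]
    return np.rint(scaled * 255).astype(np.uint8)

# Toy usage: 16 samples become a 4x4 image.
img = signal_to_image(np.arange(16), 4, 4)
print(img.shape, img.dtype)
```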
- the controller 110 determines a target sub-image 140 associated with a specific part of the target object 120 from the target image 130 .
- Since various mature detection algorithms exist for RGB images, an RGB image associated with the target object 120 may be utilized to assist in determining the target sub-image 140.
- FIG. 4 shows a schematic diagram of an example 400 of determining a target sub-image according to some embodiments of the present specification.
- the controller 110 may acquire a reference image 420 associated with the target object 120 captured by the camera 410 .
- camera 410 may be an RGB camera and reference image 420 may be an RGB image.
- Reference image 420 may be captured at or near the same time as target image 130 .
- The reference image 420 may also be captured at a different time than the target image 130; this specification is not limited in this regard.
- the controller 110 may detect the reference sub-image 430 including the specific part in the reference image 420 .
- the controller 110 may utilize any suitable face detection algorithm to determine the region or portion of the reference image 420 that contains the face of the target object 120 and use the portion as the reference sub-image 430 .
- the face detection algorithm may be an MTCNN (Multi-Task Convolutional Neural Network) model, a Facebox model, and the like.
- the controller 110 may determine a portion corresponding to the reference sub-image 430 from the target image 130 as the target sub-image 140 . That is, the controller 110 may map the reference sub-image 430 to the target image 130 to locate a portion of the target image 130 associated with a specific part.
- the controller 110 may determine the coordinates of the reference pixel points of the reference sub-image 430 in the reference image 420 . For example, the controller 110 may determine the coordinates of the upper left corner and the lower right corner of the reference sub-image 430 . The controller 110 may map the coordinates to corresponding coordinates in the target image 130 . For example, the controller 110 may perform operations such as transformation, translation, scaling, etc. on the coordinates to map to the corresponding coordinates in the target image 130 . Thus, the controller 110 may determine the portion indicated by the corresponding coordinates from the target image 130 as the target sub-image 140 .
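The scale-and-translate mapping from reference-image coordinates to target-image coordinates can be sketched as below. This assumes the two cameras share a field of view and differ only by resolution and a fixed offset; a real device would calibrate these parameters, and all the numbers here are illustrative.

```python
import numpy as np

def map_bbox(bbox, ref_size, tgt_size, offset=(0, 0)):
    """Map a bounding box (x1, y1, x2, y2) from the reference image to the
    target image by scaling to the target resolution and applying a fixed
    translation (an assumed calibration offset between the two sensors)."""
    x1, y1, x2, y2 = bbox
    sx = tgt_size[0] / ref_size[0]   # horizontal scale factor
    sy = tgt_size[1] / ref_size[1]   # vertical scale factor
    dx, dy = offset
    return (round(x1 * sx + dx), round(y1 * sy + dy),
            round(x2 * sx + dx), round(y2 * sy + dy))

def crop(image, bbox):
    """Take the mapped box as the target sub-image (rows = y, cols = x)."""
    x1, y1, x2, y2 = bbox
    return image[y1:y2, x1:x2]

# Toy usage: a face box found in a 640x480 reference image is mapped into
# a 320x240 target (speckle) image at half the resolution.
mapped = map_bbox((100, 120, 300, 360), ref_size=(640, 480),
                  tgt_size=(320, 240))
print(mapped)  # (50, 60, 150, 180)
sub = crop(np.zeros((240, 320)), mapped)
print(sub.shape)  # (120, 100)
```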
- The controller 110 performs liveness detection on the target object 120 based on the target sub-image 140.
- The controller 110 can determine whether the target object 120 is a live user or a non-live attack object (e.g., a printed photo, a retaken photo, a ripped video, or an object in an edited video).
- the controller 110 may apply the target sub-image 140 to a trained detection model to perform liveness detection on the target object 120 .
- the detection model may be any suitable convolutional neural network model, such as a ResNet (Residual Network, residual network) model, an Inception model, and the like.
- The trained detection model is trained based on a set of training images, which include real training images and fake training images. The real training images present speckles generated by live users reflecting laser light of the specified wavelength; the fake training images present speckles generated by non-live attack objects reflecting the same laser light. Each training image can also be annotated with whether it is associated with a live user or a non-live user. Thus, the trained detection model can accurately classify the target object 120 as a live user or a non-live user.
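The labeled real/fake training setup can be illustrated with a deliberately simplified classifier. Instead of a ResNet or Inception network, this sketch fits a nearest-class-mean classifier on a single speckle-contrast feature; the synthetic intensity statistics for "live" and "fake" images are assumptions for the demo, and the sketch demonstrates only the supervised setup described above, not the actual model of this specification.

```python
import numpy as np

def contrast_feature(image):
    # One feature per training image: speckle contrast (std / mean).
    image = np.asarray(image, dtype=float)
    return image.std() / (image.mean() + 1e-12)

def train(images, labels):
    # labels: 1 = live user, 0 = non-live attack object.
    feats = np.array([contrast_feature(im) for im in images])
    labels = np.asarray(labels)
    return {"live_mean": feats[labels == 1].mean(),
            "fake_mean": feats[labels == 0].mean()}

def predict(model, image):
    # Assign the class whose mean feature is nearest.
    f = contrast_feature(image)
    return int(abs(f - model["live_mean"]) < abs(f - model["fake_mean"]))

# Synthetic training set: "live" skin is modeled as fully developed speckle
# (exponential intensity, contrast ~ 1); "fake" flat prints as weak speckle
# (contrast ~ 0.03). These statistics are illustrative assumptions.
rng = np.random.default_rng(2)
real = [rng.exponential(1.0, (32, 32)) for _ in range(20)]
fake = [1.0 + 0.1 * rng.random((32, 32)) for _ in range(20)]
model = train(real + fake, [1] * 20 + [0] * 20)
print(predict(model, rng.exponential(1.0, (32, 32))))  # expect 1 (live)
```

A real CNN would learn such discriminative statistics from the raw speckle sub-images rather than from a single hand-picked feature.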
- In this way, liveness detection can be performed simply, quickly, and accurately using the speckles generated by the target object reflecting laser light of a specified wavelength.
- FIG. 5 shows a block diagram of an apparatus 500 for data processing according to some embodiments of the present specification.
- the apparatus 500 may be provided in the controller 110 .
- The apparatus 500 includes: an acquisition module 510 configured to acquire a target image associated with the target object, where the target image presents speckles generated by the target object reflecting laser light of a specified wavelength;
- a determination module 520 configured to determine, from the target image, a target sub-image associated with a specific part of the target object;
- and a detection module 530 configured to perform liveness detection on the target object based on the target sub-image.
- The determination module 520 includes: a reference image acquisition module configured to acquire a reference image captured by the camera and associated with the target object; a reference sub-image detection module configured to detect, in the reference image, a reference sub-image containing the specific part; and a target sub-image determination module configured to determine a portion corresponding to the reference sub-image from the target image as the target sub-image.
- The target sub-image determination module includes: a coordinate determination module configured to determine coordinates of reference pixels of the reference sub-image in the reference image; a mapping module configured to map the coordinates to corresponding coordinates in the target image; and a sub-image determination module configured to determine a portion indicated by the corresponding coordinates from the target image as the target sub-image.
- the detection module 530 includes a liveness detection module configured to apply the target sub-image to a trained detection model to perform liveness detection on the target object.
- The trained detection model is trained based on a set of training images, the set comprising real training images and fake training images; the real training images present speckles generated by live users reflecting laser light of a specified wavelength,
- and the fake training images present speckles generated by non-live attack objects reflecting laser light of the specified wavelength.
- FIG. 6 shows a schematic block diagram of an electronic device 600 that may be used to implement embodiments of the present specification.
- the apparatus 600 may be used to implement the apparatus 500 of FIG. 5 .
- The device 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to computer program instructions stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603.
- In the RAM 603, various programs and data required for the operation of the device 600 can also be stored.
- the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
- An input/output (I/O) interface 605 is also connected to bus 604 .
- Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or optical disk; and a communication unit 609, such as a network card, modem, or wireless communication transceiver.
- the communication unit 609 allows the device 600 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
- method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608 .
- part or all of the computer program may be loaded and/or installed on device 600 via ROM 602 and/or communication unit 609 .
- CPU 601 may be configured to perform method 200 by any other suitable means (eg, by means of firmware).
- the present specification may be a method, apparatus, system and/or computer program product.
- a computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for carrying out various aspects of this specification.
- a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structure in a groove with instructions stored thereon, and any suitable combination of the above.
- Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium in the respective computing/processing device.
- The computer program instructions for carrying out the operations of this specification may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
- Custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of this specification.
- These computer-readable program instructions may be provided to the processing unit of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other device to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed thereon to produce a computer-implemented process, so that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by dedicated hardware-based systems that perform the specified functions or acts, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
A method, apparatus, device, and medium for data processing. The method includes: acquiring a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength (210); determining, from the target image, a target sub-image associated with a specific part of the target object (220); and performing liveness detection on the target object based on the target sub-image (230). In this way, whether the target object is a live user can be determined accurately and efficiently.
Description
Priority Claim
This application claims priority to Chinese application CN202011563272.4, filed on December 25, 2020, the entire contents of which are incorporated herein by reference.
Embodiments of this specification relate generally to data processing, and more specifically to a data processing method, system, apparatus, electronic device, and computer storage medium.
Face recognition technology is currently widely used in many fields, such as finance, security, consumer applications, and healthcare. However, as face recognition has become widespread, attacks against face recognition systems have multiplied. Typical attacks include photo reproduction, video re-recording, and video editing. To ensure the security and reliability of face recognition systems, liveness detection technology has gradually been adopted in them to defend against these diverse attacks. However, traditional liveness detection techniques perform poorly.
Summary of the Invention
According to embodiments of this specification, a data processing solution is provided.
In a first aspect of this specification, a data processing method is provided. The method includes: acquiring a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength; determining, from the target image, a target sub-image associated with a specific part of the target object; and performing liveness detection on the target object based on the target sub-image.
In a second aspect of this specification, a data processing system is provided. The system includes: a laser emitter for emitting laser light of a specified wavelength toward a target object; a laser receiver for generating a target image associated with the target object, the target image exhibiting a speckle pattern generated by the target object reflecting the laser light of the specified wavelength; and a controller configured to perform the method according to the first aspect of this specification.
In a third aspect of this specification, an apparatus for data processing is provided. The apparatus includes: an acquisition module configured to acquire a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength; a determination module configured to determine, from the target image, a target sub-image associated with a specific part of the target object; and a detection module configured to perform liveness detection on the target object based on the target sub-image.
In a fourth aspect of this specification, an electronic device is provided. The electronic device includes: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the method according to the first aspect of this specification.
In a fifth aspect of this specification, a computer-readable medium is provided, on which a computer program is stored; when executed by a processor, the program implements the method according to the first aspect of this specification.
In a sixth aspect of this specification, a detection method is provided. The method includes: emitting laser light of a specified wavelength toward a target object; receiving a target image associated with the target object, the target image exhibiting a speckle pattern generated by the target object reflecting the laser light of the specified wavelength; and identifying, based on the target image, whether the reflecting part of the target object is of a specific material.
It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of this specification, nor to limit its scope. Other features of this specification will become readily understood from the following description.
The above and other features, advantages, and aspects of the embodiments of this specification will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, identical or similar reference numerals denote identical or similar elements, in which:
Fig. 1 shows a schematic diagram of an exemplary environment in which embodiments of this specification can be implemented;
Fig. 2 shows a flowchart of a data processing method according to some embodiments of this specification;
Fig. 3 shows a schematic diagram of an example of acquiring a target image according to some embodiments of this specification;
Fig. 4 shows a schematic diagram of an example of determining a target sub-image according to some embodiments of this specification;
Fig. 5 shows a block diagram of an apparatus for data processing according to some embodiments of this specification; and
Fig. 6 shows a block diagram of an electronic device capable of implementing embodiments of this specification.
Embodiments of this specification are described in more detail below with reference to the accompanying drawings. Although some embodiments of this specification are shown in the drawings, it should be understood that this specification can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of this specification. It should be understood that the drawings and embodiments of this specification are for illustrative purposes only and are not intended to limit its scope of protection.
In the description of the embodiments of this specification, the term "include" and its variants should be understood as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The terms "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
As noted above, liveness detection technology is employed in face recognition systems to defend against malicious attacks. Traditional liveness detection techniques can be classified into action-based liveness detection and silent liveness detection. In action-based liveness detection, the user is asked to perform certain interactive actions so that the user can be identified as a real live user. In silent liveness detection, no interactive action is required; instead, an algorithm classifies the user as a real live user or a fake attack object.
Traditionally, silent liveness detection mainly includes monocular, binocular, colorful-light, and depth-based liveness detection. Monocular liveness detection may use RGB (red-green-blue) imaging, identifying live users from RGB images formed under natural light. However, monocular liveness detection has a low defense rate against paper-mask and screen-display attacks, and its defense capability under natural light is poor.
Binocular liveness detection uses black-and-white grayscale images formed by infrared imaging. Since infrared imaging is reflective and cannot image screen-display attacks, it defends well against them. However, binocular liveness detection does not significantly improve the defense against paper-mask attacks. In addition, it requires an extra infrared camera, raising the hardware cost.
Colorful-light liveness detection illuminates the target object with natural light of different colors. Because ambient illumination affects its accuracy, this technique is inherently flawed: under strong or dim light, it struggles to detect liveness accurately. Moreover, the color of each flash is displayed during detection, so a malicious attacker may craft an attack based on the light colors, lowering security and secrecy.
In addition, depth-based liveness detection may use depth imaging, such as 3D (three-dimensional) structured-light imaging. Since a real live user is three-dimensional while paper-mask or screen-display attacks are flat, real live users and fake attack objects can be distinguished by obtaining the overall depth information of the target object.
Besides their individual shortcomings, these traditional techniques are slow to identify because operations such as frame extraction require high-quality images of the target object. In summary, none of the traditional approaches can perform liveness detection in a simple, fast, and accurate manner.
To this end, embodiments of this specification provide a data processing solution. In this solution, a target image associated with a target object can be acquired. The target image exhibits a speckle pattern generated by the target object reflecting laser light of a specified wavelength. Further, a target sub-image associated with a specific part of the target object can be determined from the target image, and liveness detection can be performed on the target object based on the target sub-image.
In this way, the speckle pattern generated by the target object reflecting laser light of a specified wavelength can be used to perform liveness detection simply, quickly, and accurately. Embodiments of this specification are described in detail below with reference to the drawings.
Fig. 1 shows a schematic diagram of an exemplary environment 100 in which embodiments of this specification can be implemented. The environment 100 includes a controller 110. The controller 110 may contain at least a processor, a memory, and other components commonly found in a general-purpose computer, so as to implement computing, storage, communication, control, and other functions. For example, the controller 110 may be a smartphone, tablet computer, personal computer, desktop computer, laptop computer, server, mainframe, distributed computing system, etc.
In the environment 100, the controller 110 is configured to acquire a target image 130 associated with a target object 120. The target image 130 exhibits a speckle pattern generated by the target object 120 reflecting laser light of a specified wavelength. Traditionally, laser speckle technology has mainly been applied to industrial product inspection, such as measuring the roughness of phone housings and the surface roughness of mechanical parts. When a monochromatic, highly coherent beam such as a laser illuminates an object, the reflected laser light exhibits a fine granular structure, and the object surface, independently of the object's microscopic properties, becomes a source of secondary wavelets. Surfaces of different roughness and different materials reflect or scatter light composed of different wavelets. Because the scattering angles, and hence the wavelets, differ between objects of different roughness and materials, optical interference arises among a large number of wavelets with random phase differences, producing a speckle image. Laser speckle technology can therefore rapidly obtain surface feature data of an object from its speckle image.
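The surface-feature extraction that laser speckle enables can be illustrated with a small sketch: speckle contrast (standard deviation of intensity divided by mean intensity) is a common summary statistic for speckle images. This is an illustrative aside rather than part of the patent; the function name and toy patches are assumptions.

```python
from statistics import mean, pstdev

def speckle_contrast(patch):
    """Speckle contrast K = sigma / mu of a patch of pixel intensities.

    Rougher, more scattering surfaces tend to yield a different contrast
    than smooth paper or screens. `patch` is a 2-D list of grayscale values.
    """
    pixels = [p for row in patch for p in row]
    mu = mean(pixels)
    if mu == 0:
        return 0.0
    return pstdev(pixels) / mu

# A perfectly uniform patch has zero contrast; a highly varied one does not.
print(speckle_contrast([[128, 128], [128, 128]]))  # 0.0
print(speckle_contrast([[0, 255], [255, 0]]))      # 1.0
```

In practice the statistic would be computed over many small windows of the speckle image, but the single-patch version above shows the idea.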
In view of this, since the surface roughness and material of a live user's skin differ markedly from other materials, the speckle image of a live user will differ from that of a fake attack object (e.g., a printed photo, a re-shot photo, a re-recorded video, an object in an edited video, etc.). In this case, the target object 120 can be illuminated with laser light of a specified wavelength, and the distinct speckle patterns produced by light scattered from surfaces of different materials can be used to distinguish skin from plain paper, screens, and other materials, thereby distinguishing real live users from fake attack objects. Specifically, laser light of a specified wavelength can be emitted toward the target object 120, and a target image associated with the target object 120 can be received, the target image exhibiting a speckle pattern generated by the target object reflecting the laser light of the specified wavelength. Based on the target image, whether the reflecting part of the target object 120 is of a specific material (e.g., skin) can then be identified, so as to perform liveness detection on the target object 120.
Further, in some embodiments, a target sub-image 140 associated with a specific part (e.g., the face, the palm, etc.) of the target object 120 can be determined from the target image 130 to improve the efficiency of liveness detection. Specifically, the controller 110 may determine, from the target image 130, the target sub-image 140 associated with the specific part of the target object 120. The target sub-image 140 is a speckle image of the specific part of the target object 120. As noted above, objects of different roughness and materials produce different speckle images. Therefore, the speckle image of a specific part of a real live user will differ from that of the corresponding part of a fake attack object (e.g., a printed photo, a re-shot photo, a re-recorded video, an object in an edited video, etc.). In view of this, the controller 110 may perform liveness detection on the target object 120 based on the target sub-image 140, generating a liveness detection result 150 indicating whether the target object 120 is a live user.
The operation of the controller 110 is described below in conjunction with Figs. 2-4. Fig. 2 shows a flowchart of a data processing method 200 according to some embodiments of this specification. The method 200 may be implemented by the controller 110 shown in Fig. 1. Alternatively, the method 200 may be implemented by an entity other than the controller 110. It should be understood that the method 200 may include additional steps not shown and/or may omit steps that are shown; the scope of this specification is not limited in this respect.
At 210, the controller 110 acquires a target image 130 associated with the target object 120. The target image 130 exhibits a speckle pattern generated by the target object 120 reflecting laser light of a specified wavelength. Fig. 3 shows a schematic diagram of an example 300 of acquiring a target image according to some embodiments of this specification.
As shown in Fig. 3, a laser emitter 310 may be used to emit laser light of a specified wavelength toward the target object 120. Specifically, in some embodiments, the laser emitter 310 may include a power supply, a laser generator, a filter, a lens, and the like. The power supply powers the laser emitter 310. The laser generator then generates initial laser light, which includes a portion at the specified wavelength and a portion not at the specified wavelength. The filter removes the portion not at the specified wavelength so as to produce laser light of the specified wavelength. A lens, such as a straight lens, adjusts the emission direction of the laser light of the specified wavelength so that it is emitted toward the target object 120.
As noted above, when laser light of the specified wavelength strikes the surface of the target object 120, the surface reflects or scatters light composed of different wavelets. Optical interference arises among a large number of wavelets with random phase differences, generating interference fringes. Since the surface roughness and material of a live user's skin differ markedly from other materials, a specific part (e.g., the face) of the target object can be illuminated with laser light of the specified wavelength, and the distinct interference fringes produced by light scattered from surfaces of different materials can be used to distinguish skin from plain paper, screens, and other materials, thereby distinguishing real live users from fake attack objects. The laser receiver 320 can thus receive these interference fringes and generate the target image 130 associated with the target object 120.
In some embodiments, the laser receiver 320 may include a photoelectric amplifier, a data collector, and an image generator. The photoelectric amplifier may receive the interference fringes generated by the target object 120 reflecting the laser light of the specified wavelength and amplify them. For example, after receiving the interference fringes, the photoelectric amplifier may amplify them by a specified ratio to a level recognizable by the data collector. The data collector may receive the amplified interference fringes and convert them into a digital signal. The image generator may receive the digital signal and generate the target image 130 based on it. The controller 110 can thereby acquire the target image 130.
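The amplify-then-digitize step of the laser receiver might be sketched as follows; the gain factor and the 8-bit code range are illustrative assumptions, since the specification does not fix them.

```python
def amplify_and_digitize(samples, gain=40.0, levels=256):
    """Scale raw fringe intensities by a fixed gain, then quantize.

    `samples` are raw analog readings; the result is a list of integer
    codes clamped to an assumed 8-bit range, standing in for the digital
    signal handed to the image generator.
    """
    digital = []
    for s in samples:
        amplified = s * gain
        code = int(round(amplified))
        digital.append(max(0, min(levels - 1, code)))  # clamp to valid range
    return digital

print(amplify_and_digitize([0.1, 2.0, 9.9]))  # [4, 80, 255]
```

Real hardware would amplify in the analog domain before the ADC; the sketch only mirrors the data flow described above.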
Referring back to Fig. 2, at 220 the controller 110 determines, from the target image 130, a target sub-image 140 associated with a specific part of the target object 120. In some embodiments, since various detection algorithms exist for RGB images, an RGB image associated with the target object 120 can be used to help determine the target sub-image 140. Fig. 4 shows a schematic diagram of an example 400 of determining a target sub-image according to some embodiments of this specification.
As shown in Fig. 4, in some embodiments the controller 110 may acquire a reference image 420 associated with the target object 120 captured by a camera 410. For example, the camera 410 may be an RGB camera, and the reference image 420 may be an RGB image. The reference image 420 may be captured at the same time as, or close in time to, the target image 130. Alternatively, the reference image 420 may be captured at a different time from the target image 130; the invention is not limited in this respect.
The controller 110 may detect, in the reference image 420, a reference sub-image 430 containing the specific part. For example, the controller 110 may use any suitable face detection algorithm to determine the region or portion of the reference image 420 that contains the face of the target object 120 and take that portion as the reference sub-image 430. The face detection algorithm may be, for example, an MTCNN (Multi-Task Convolutional Neural Network) model, a Facebox model, or the like.
Further, the controller 110 may determine, from the target image 130, the portion corresponding to the reference sub-image 430 as the target sub-image 140. In other words, the controller 110 may map the reference sub-image 430 onto the target image 130 to locate the portion of the target image 130 associated with the specific part.
In some embodiments, the controller 110 may determine the coordinates of reference pixels of the reference sub-image 430 in the reference image 420. For example, the controller 110 may determine the coordinates of the upper-left and lower-right corners of the reference sub-image 430. The controller 110 may map these coordinates to corresponding coordinates in the target image 130, for example by transforming, translating, or scaling them. The controller 110 can thereby determine, from the target image 130, the portion indicated by the corresponding coordinates as the target sub-image 140.
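A minimal sketch of the coordinate-mapping step above, assuming the reference and target images differ only by resolution and an optional fixed offset (the specification permits more general transforms; all names here are illustrative):

```python
def map_bbox(bbox, ref_size, target_size, offset=(0, 0)):
    """Map a face bounding box from the reference (RGB) image into the
    speckle target image by scaling, then translating by a fixed offset.

    bbox: (x1, y1, x2, y2) upper-left / lower-right corners in the
    reference image; ref_size / target_size: (width, height) of each image.
    """
    sx = target_size[0] / ref_size[0]
    sy = target_size[1] / ref_size[1]
    x1, y1, x2, y2 = bbox
    dx, dy = offset
    return (int(x1 * sx + dx), int(y1 * sy + dy),
            int(x2 * sx + dx), int(y2 * sy + dy))

def crop(image, bbox):
    """Return the sub-image of a 2-D list `image` indicated by bbox."""
    x1, y1, x2, y2 = bbox
    return [row[x1:x2] for row in image[y1:y2]]

# A box detected in a 640x480 reference image, mapped into a 320x240 speckle image.
print(map_bbox((100, 80, 300, 280), (640, 480), (320, 240)))  # (50, 40, 150, 140)
```

If the two sensors are not aligned, the fixed offset would be replaced by a calibrated transform between the camera and the laser receiver.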
Referring back to Fig. 2, at 230 the controller 110 performs liveness detection on the target object 120 based on the target sub-image 140. The controller 110 can thereby determine whether the target object 120 is a live user or a non-live attack object (e.g., a printed photo, a re-shot photo, a re-recorded video, an object in an edited video, etc.).
In some embodiments, the controller 110 may apply the target sub-image 140 to a trained detection model to perform liveness detection on the target object 120. The detection model may be any suitable convolutional neural network model, such as a ResNet (Residual Network) model, an Inception model, or the like.
The trained detection model is trained based on a set of training images, which includes genuine training images and fake training images. A genuine training image exhibits a speckle pattern generated by a live user reflecting laser light of the specified wavelength; a fake training image exhibits a speckle pattern generated by a non-live user reflecting laser light of the specified wavelength. Each training image may also be labeled as associated with a live user or a non-live user. The trained detection model can thus accurately classify the target object 120 as a live user or a non-live user.
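The specification names CNNs such as ResNet for the detection model; as a toy stand-in only (not the model the text describes), the sketch below trains a nearest-centroid classifier on a single speckle-contrast feature computed from labeled genuine and fake patches. The data and names are invented for illustration.

```python
from statistics import mean, pstdev

def contrast(patch):
    """Speckle contrast (sigma / mu) of a flat list of pixel intensities."""
    mu = mean(patch)
    return pstdev(patch) / mu if mu else 0.0

def train(genuine_patches, fake_patches):
    """Return per-class mean contrast as a trivial 'model'."""
    return {
        "live": mean(contrast(p) for p in genuine_patches),
        "spoof": mean(contrast(p) for p in fake_patches),
    }

def predict(model, patch):
    """Classify a patch by whichever class centroid its contrast is nearer."""
    k = contrast(patch)
    return min(model, key=lambda label: abs(model[label] - k))

# Toy data: skin is assumed here to give higher-contrast speckle than a screen.
live = [[0, 200, 30, 180], [10, 220, 20, 190]]
fake = [[100, 110, 105, 108], [98, 102, 101, 99]]
model = train(live, fake)
print(predict(model, [5, 210, 25, 185]))  # live
```

A real system would learn spatial speckle structure with a CNN over full sub-images rather than a single scalar feature.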
In this way, the speckle pattern generated by the target object reflecting laser light of a specified wavelength can be used to perform liveness detection simply, quickly, and accurately.
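Steps 210-230 can be tied together in an orchestration sketch, with the laser receiver, RGB camera, face detector, coordinate mapper, and trained model all injected as stubs (every name below is an assumption for illustration):

```python
def liveness_check(capture_speckle, capture_rgb, detect_face, map_bbox, classify):
    """Steps 210-230: acquire the speckle target image, locate the face
    sub-image via an RGB reference image, then classify live vs spoof.

    All five callables are injected stubs standing in for the laser
    receiver, RGB camera, face detector, coordinate mapper, and the
    trained detection model described in the text.
    """
    target = capture_speckle()              # 210: speckle target image
    reference = capture_rgb()               # RGB reference image
    bbox = detect_face(reference)           # reference sub-image bbox
    x1, y1, x2, y2 = map_bbox(bbox)         # 220: map into the target image
    sub = [row[x1:x2] for row in target[y1:y2]]
    return classify(sub)                    # 230: liveness decision

# Wiring with trivial stand-in stubs:
result = liveness_check(
    capture_speckle=lambda: [[1] * 4 for _ in range(4)],
    capture_rgb=lambda: "rgb-frame",
    detect_face=lambda img: (0, 0, 2, 2),
    map_bbox=lambda b: b,
    classify=lambda sub: "live" if sub else "spoof",
)
print(result)  # live
```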
Fig. 5 shows a block diagram of an apparatus 500 for data processing according to some embodiments of this specification. For example, the apparatus 500 may be provided in the controller 110. As shown in Fig. 5, the apparatus 500 includes: an acquisition module 510 configured to acquire a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength; a determination module 520 configured to determine, from the target image, a target sub-image associated with a specific part of the target object; and a detection module 530 configured to perform liveness detection on the target object based on the target sub-image.
In some embodiments, the determination module 520 includes: a reference image acquisition module configured to acquire a reference image associated with the target object captured by a camera; a reference sub-image detection module configured to detect, in the reference image, a reference sub-image containing the specific part; and a target sub-image determination module configured to determine, from the target image, the portion corresponding to the reference sub-image as the target sub-image.
In some embodiments, the target sub-image determination module includes: a coordinate determination module configured to determine the coordinates of reference pixels of the reference sub-image in the reference image; a mapping module configured to map the coordinates to corresponding coordinates in the target image; and a sub-image determination module configured to determine, from the target image, the portion indicated by the corresponding coordinates as the target sub-image.
In some embodiments, the detection module 530 includes: a liveness detection module configured to apply the target sub-image to a trained detection model to perform liveness detection on the target object.
In some embodiments, the trained detection model is trained based on a set of training images including genuine training images and fake training images; a genuine training image exhibits a speckle pattern generated by a live user reflecting laser light of the specified wavelength, and a fake training image exhibits a speckle pattern generated by a non-live user reflecting laser light of the specified wavelength.
Fig. 6 shows a schematic block diagram of an electronic device 600 that can be used to implement embodiments of this specification. The device 600 can be used to implement the apparatus 500 of Fig. 5. As shown, the device 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to computer program instructions stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The CPU 601, ROM 602, and RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or optical disc; and a communication unit 609, such as a network card, modem, or wireless communication transceiver. The communication unit 609 allows the device 600 to exchange information/data with other devices over computer networks such as the Internet and/or various telecommunication networks.
The processes described above, such as the method 200, can be performed by the processing unit 601. For example, in some embodiments the method 200 can be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the CPU 601, one or more steps of the method 200 described above can be performed. Alternatively, in other embodiments, the CPU 601 can be configured to perform the method 200 in any other suitable manner (e.g., by means of firmware).
This specification may be a method, device, system, and/or computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for carrying out various aspects of this specification.
The computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used here, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described here can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of this specification may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA), may be personalized by utilizing state information of the computer-readable program instructions; this electronic circuitry may execute the computer-readable program instructions, thereby implementing various aspects of this specification.
Aspects of this specification are described here with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of this specification. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions can be provided to a processing unit of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other device to operate in a specific manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions executing on the computer, other programmable apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the possible architecture, functionality, and operation of systems, methods, and computer program products according to various embodiments of this specification. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The embodiments of this specification have been described above; the foregoing description is illustrative, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used here was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed here.
Claims (16)
- A data processing method, comprising: acquiring a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength; determining, from the target image, a target sub-image associated with a specific part of the target object; and performing liveness detection on the target object based on the target sub-image.
- The method according to claim 1, wherein determining the target sub-image comprises: acquiring a reference image associated with the target object captured by a camera; detecting, in the reference image, a reference sub-image containing the specific part; and determining, from the target image, a portion corresponding to the reference sub-image as the target sub-image.
- The method according to claim 2, wherein determining, from the target image, the portion corresponding to the reference sub-image as the target sub-image comprises: determining coordinates of reference pixels of the reference sub-image in the reference image; mapping the coordinates to corresponding coordinates in the target image; and determining, from the target image, the portion indicated by the corresponding coordinates as the target sub-image.
- The method according to claim 1, wherein performing liveness detection on the target object comprises: applying the target sub-image to a trained detection model to perform liveness detection on the target object.
- The method according to claim 4, wherein the trained detection model is trained based on a set of training images, the set of training images comprising genuine training images and fake training images, a genuine training image exhibiting a speckle pattern generated by a live user reflecting the laser light of the specified wavelength, and a fake training image exhibiting a speckle pattern generated by a non-live user reflecting the laser light of the specified wavelength.
- A data processing system, comprising: a laser emitter for emitting laser light of a specified wavelength toward a target object; a laser receiver for generating a target image associated with the target object, the target image exhibiting a speckle pattern generated by the target object reflecting the laser light of the specified wavelength; and a controller configured to perform the method according to any one of claims 1-5.
- The system according to claim 6, wherein the laser emitter is configured to: generate initial laser light, the initial laser light comprising a portion at a specified wavelength and a portion not at the specified wavelength; filter out the portion of the initial laser light not at the specified wavelength to generate the laser light of the specified wavelength; and adjust the emission direction of the laser light of the specified wavelength to emit the laser light of the specified wavelength toward the target object.
- The system according to claim 6, wherein the laser receiver is configured to: receive interference fringes generated by the target object reflecting the laser light of the specified wavelength; amplify the interference fringes; convert the amplified interference fringes into a digital signal; and generate the target image based on the digital signal.
- An apparatus for data processing, comprising: an acquisition module configured to acquire a target image associated with a target object, the target image exhibiting a speckle pattern generated by the target object reflecting laser light of a specified wavelength; a determination module configured to determine, from the target image, a target sub-image associated with a specific part of the target object; and a detection module configured to perform liveness detection on the target object based on the target sub-image.
- The apparatus according to claim 9, wherein the determination module comprises: a reference image acquisition module configured to acquire a reference image associated with the target object captured by a camera; a reference sub-image detection module configured to detect, in the reference image, a reference sub-image containing the specific part; and a target sub-image determination module configured to determine, from the target image, a portion corresponding to the reference sub-image as the target sub-image.
- The apparatus according to claim 10, wherein the target sub-image determination module comprises: a coordinate determination module configured to determine coordinates of reference pixels of the reference sub-image in the reference image; a mapping module configured to map the coordinates to corresponding coordinates in the target image; and a sub-image determination module configured to determine, from the target image, the portion indicated by the corresponding coordinates as the target sub-image.
- The apparatus according to claim 9, wherein the detection module comprises: a liveness detection module configured to apply the target sub-image to a trained detection model to perform liveness detection on the target object.
- The apparatus according to claim 12, wherein the trained detection model is trained based on a set of training images, the set of training images comprising genuine training images and fake training images, a genuine training image exhibiting a speckle pattern generated by a live user reflecting the laser light of the specified wavelength, and a fake training image exhibiting a speckle pattern generated by a non-live user reflecting the laser light of the specified wavelength.
- An electronic device, comprising: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the method according to any one of claims 1-5.
- A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method according to any one of claims 1-5.
- A detection method, comprising: emitting laser light of a specified wavelength toward a target object; receiving a target image associated with the target object, the target image exhibiting a speckle pattern generated by the target object reflecting the laser light of the specified wavelength; and identifying, based on the target image, whether a reflecting part of the target object is of a specific material.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011563272.4A CN112633181B (zh) | 2020-12-25 | 2020-12-25 | Data processing method, system, apparatus, device and medium |
CN202011563272.4 | 2020-12-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022134754A1 true WO2022134754A1 (zh) | 2022-06-30 |
Family
ID=75325116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/123759 WO2022134754A1 (zh) | Data processing method, system, apparatus, device and medium | 2020-12-25 | 2021-10-14 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112633181B (zh) |
WO (1) | WO2022134754A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220374643A1 (en) * | 2021-05-21 | 2022-11-24 | Ford Global Technologies, Llc | Counterfeit image detection |
US11636700B2 (en) | 2021-05-21 | 2023-04-25 | Ford Global Technologies, Llc | Camera identification |
US11769313B2 (en) | 2021-05-21 | 2023-09-26 | Ford Global Technologies, Llc | Counterfeit image detection |
CN117011950A (zh) * | 2023-08-29 | 2023-11-07 | 国政通科技有限公司 | Liveness detection method and apparatus |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633181B (zh) * | 2020-12-25 | 2022-08-12 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, apparatus, device and medium |
CN113435378A (zh) * | 2021-07-06 | 2021-09-24 | 中国银行股份有限公司 | Liveness detection method, apparatus and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105637532A (zh) * | 2015-06-08 | 2016-06-01 | 北京旷视科技有限公司 | Liveness detection method, liveness detection system, and computer program product |
CN109145653A (zh) * | 2018-08-01 | 2019-01-04 | Oppo广东移动通信有限公司 | Data processing method and apparatus, electronic device, computer-readable storage medium |
CN110059638A (zh) * | 2019-04-19 | 2019-07-26 | 中控智慧科技股份有限公司 | Identity recognition method and apparatus |
US20190335098A1 (en) * | 2018-04-28 | 2019-10-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, computer-readable storage medium and electronic device |
CN112633181A (zh) * | 2020-12-25 | 2021-04-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, apparatus, device and medium |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3472479B2 (ja) * | 1998-05-22 | 2003-12-02 | シャープ株式会社 | Image processing device |
CN201378323Y (zh) * | 2009-03-20 | 2010-01-06 | 公安部第一研究所 | Multimodal combined identity authentication device |
WO2012101644A2 (en) * | 2011-01-28 | 2012-08-02 | Bar Ilan University | Method and system for non-invasively monitoring biological or biochemical parameters of individual |
KR20160044267A (ko) * | 2014-10-15 | 2016-04-25 | 삼성전자주식회사 | Apparatus and method for acquiring biometric information |
CN205405544U (zh) * | 2016-02-27 | 2016-07-27 | 南京福瑞林生物科技有限公司 | Live fingerprint recognition device |
KR102560710B1 (ko) * | 2016-08-24 | 2023-07-27 | 삼성전자주식회사 | Apparatus and method using optical speckle |
CN107316272A (zh) * | 2017-06-29 | 2017-11-03 | 联想(北京)有限公司 | Method and device for image processing |
CN107820005B (zh) * | 2017-10-27 | 2019-09-17 | Oppo广东移动通信有限公司 | Image processing method and apparatus, and electronic apparatus |
CN108495113B (zh) * | 2018-03-27 | 2020-10-27 | 百度在线网络技术(北京)有限公司 | Control method and apparatus for a binocular vision system |
CN108509888B (zh) * | 2018-03-27 | 2022-01-28 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN108668078B (zh) * | 2018-04-28 | 2019-07-30 | Oppo广东移动通信有限公司 | Image processing method and apparatus, computer-readable storage medium, and electronic device |
CN108833887B (zh) * | 2018-04-28 | 2021-05-18 | Oppo广东移动通信有限公司 | Data processing method and apparatus, electronic device, and computer-readable storage medium |
CN108804895B (zh) * | 2018-04-28 | 2021-01-15 | Oppo广东移动通信有限公司 | Image processing method and apparatus, computer-readable storage medium, and electronic device |
US10956714B2 (en) * | 2018-05-18 | 2021-03-23 | Beijing Sensetime Technology Development Co., Ltd | Method and apparatus for detecting living body, electronic device, and storage medium |
WO2020132960A1 (zh) * | 2018-12-26 | 2020-07-02 | 合刃科技(深圳)有限公司 | Defect detection method and defect detection system |
CN110942060B (zh) * | 2019-10-22 | 2023-05-23 | 清华大学 | Material recognition method and apparatus based on laser speckle and modal fusion |
CN111476143B (zh) * | 2020-04-03 | 2022-04-22 | 华中科技大学苏州脑空间信息研究院 | Apparatus for acquiring multi-channel images, multiple biological parameters, and identity recognition |
CN111639522B (zh) * | 2020-04-17 | 2023-10-31 | 北京迈格威科技有限公司 | Liveness detection method and apparatus, computer device, and storage medium |
CN112016525A (zh) * | 2020-09-30 | 2020-12-01 | 墨奇科技(北京)有限公司 | Non-contact fingerprint collection method and apparatus |
-
2020
- 2020-12-25 CN CN202011563272.4A patent/CN112633181B/zh active Active
-
2021
- 2021-10-14 WO PCT/CN2021/123759 patent/WO2022134754A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105637532A (zh) * | 2015-06-08 | 2016-06-01 | 北京旷视科技有限公司 | Liveness detection method, liveness detection system, and computer program product |
US20190335098A1 (en) * | 2018-04-28 | 2019-10-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, computer-readable storage medium and electronic device |
CN109145653A (zh) * | 2018-08-01 | 2019-01-04 | Oppo广东移动通信有限公司 | Data processing method and apparatus, electronic device, computer-readable storage medium |
CN110059638A (zh) * | 2019-04-19 | 2019-07-26 | 中控智慧科技股份有限公司 | Identity recognition method and apparatus |
CN112633181A (zh) * | 2020-12-25 | 2021-04-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, apparatus, device and medium |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220374643A1 (en) * | 2021-05-21 | 2022-11-24 | Ford Global Technologies, Llc | Counterfeit image detection |
US11636700B2 (en) | 2021-05-21 | 2023-04-25 | Ford Global Technologies, Llc | Camera identification |
US11769313B2 (en) | 2021-05-21 | 2023-09-26 | Ford Global Technologies, Llc | Counterfeit image detection |
US11967184B2 (en) * | 2021-05-21 | 2024-04-23 | Ford Global Technologies, Llc | Counterfeit image detection |
CN117011950A (zh) * | 2023-08-29 | 2023-11-07 | 国政通科技有限公司 | Liveness detection method and apparatus |
CN117011950B (zh) * | 2023-08-29 | 2024-02-02 | 国政通科技有限公司 | Liveness detection method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN112633181B (zh) | 2022-08-12 |
CN112633181A (zh) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022134754A1 (zh) | Data processing method, system, apparatus, device and medium | |
US10956714B2 (en) | Method and apparatus for detecting living body, electronic device, and storage medium | |
US10937167B2 (en) | Automated generation of pre-labeled training data | |
Li et al. | Underwater image de-scattering and classification by deep neural network | |
US10825157B2 (en) | Glare reduction in captured images | |
KR102559202B1 (ko) | 3D rendering method and apparatus | |
US9299188B2 (en) | Automatic geometry and lighting inference for realistic image editing | |
US9886622B2 (en) | Adaptive facial expression calibration | |
US9053573B2 (en) | Systems and methods for generating a virtual camera viewpoint for an image | |
CN113205057B (zh) | Face liveness detection method, apparatus, device, and storage medium | |
CN111860640A (zh) | GAN-based dataset augmentation method for specific sea areas | |
CN112270745B (zh) | Image generation method, apparatus, device, and storage medium | |
TWI752473B (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
JP7264929B2 (ja) | Method and apparatus for generating background-free image, electronic device, storage medium, and computer program | |
WO2019109722A1 (zh) | Privacy masking processing method and apparatus, electronic device, and storage medium | |
Liu et al. | Scene‐adaptive single image dehazing via opening dark channel model | |
US20230245396A1 (en) | System and method for three-dimensional scene reconstruction and understanding in extended reality (xr) applications | |
WO2021046773A1 (zh) | Face anti-spoofing detection method and apparatus, chip, electronic device, and computer-readable medium | |
Riaz et al. | Single image dehazing with bright object handling | |
US20200267311A1 (en) | Electronic apparatus and controlling method thereof | |
Shieh et al. | Fast facial detection by depth map analysis | |
Lian et al. | [Retracted] Film and Television Animation Sensing and Visual Image by Computer Digital Image Technology | |
KR102412992B1 (ko) | System for implementing interactive exhibition video using laser sensors | |
KR20190125702A (ko) | Tracking optimization method using cosine distance and intersection area in a deep-learning-based tracking module | |
CN116977911A (zh) | Attention-mechanism-based object detection model, training method therefor, and object detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21908755 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.10.2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21908755 Country of ref document: EP Kind code of ref document: A1 |