WO2021204267A1 - Identity Recognition
- Publication number
- WO2021204267A1 (PCT/CN2021/086266)
- Authority: WIPO (PCT)
- Prior art keywords: target, image, imaging device, recognized, depth information
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/13—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- This specification relates to the field of identity recognition technology, and in particular to methods, systems and devices for identity recognition based on depth information.
- Biometric identification (i.e., identity verification based on biometric features) refers to recognizing biological individuals based on their physical characteristics, for example, fingerprints, faces, or irises.
- Identifying a biological individual based on its physical characteristics requires collecting image data of the individual so that identity recognition can be performed based on the individual's features in the image data.
- The manner in which image data are collected and the quality of the image data greatly affect the speed and accuracy of identification.
- One of the embodiments of this specification provides an identity recognition method. The method includes: acquiring a first image collected by a first imaging device, the first image including one or more candidate targets; acquiring a second image collected by a second imaging device, the second image including depth information of at least one of the one or more candidate targets; extracting the depth information of the one or more candidate targets from the second image based on the first image and the second image; determining at least one candidate target from the one or more candidate targets as the target to be recognized based on the depth information of the one or more candidate targets; acquiring a third image collected by a third imaging device based on the depth information of at least a part of the target to be recognized, the third image including the at least part of the target to be recognized; and performing identity recognition on the target to be recognized based on the third image.
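- As a compact illustration of the claimed flow, the following Python sketch wires the steps together. It is a skeleton under stated assumptions, and each step is passed in as a callable because the specification does not name any concrete API.

```python
from typing import Any, Callable, Optional

def recognize_identity(
    capture_first: Callable[[], Any],          # first imaging device: 2D image
    capture_second: Callable[[], Any],         # second imaging device: depth image
    extract_depths: Callable[[Any, Any], list],
    pick_target: Callable[[list], Optional[Any]],
    capture_third: Callable[[Any], Any],       # third imaging device: e.g., iris image
    identify: Callable[[Any], Optional[str]],
) -> Optional[str]:
    """One pass of the claimed method; every callable is a hypothetical stand-in."""
    first_image = capture_first()
    second_image = capture_second()
    depths = extract_depths(first_image, second_image)   # depth per candidate
    target = pick_target(depths)                         # target to be recognized
    if target is None:
        return None                # no candidate satisfied the depth criterion
    third_image = capture_third(target)                  # uses the target's depth info
    return identify(third_image)                         # identity recognition
```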
- One of the embodiments of this specification provides an identity recognition device. The device includes a candidate target image acquisition module, a depth information extraction module, a to-be-recognized target determination module, and a to-be-recognized target image acquisition module.
- the candidate target image acquisition module is configured to acquire a first image collected by a first imaging device, the first image including one or more candidate targets, and to acquire a second image collected by a second imaging device, the second image including depth information of at least one of the one or more candidate targets.
- the depth information extraction module is configured to extract the depth information of the one or more candidate targets from the second image based on the first image and the second image.
- the to-be-recognized target determination module is configured to determine at least one candidate target as the to-be-recognized target from the one or more candidate targets based on the depth information of the one or more candidate targets.
- the to-be-recognized target image acquisition module is configured to acquire a third image collected by a third imaging device based on the depth information of at least a part of the target to be recognized, the third image including the at least part of the target to be recognized.
- the device also includes an identification module, which is used to identify the target to be identified based on the third image.
- One of the embodiments of this specification provides an identity recognition system. The system includes a first imaging device, a second imaging device, and a third imaging device.
- the first imaging device is configured to collect a first image, and the first image includes one or more candidate targets.
- the second imaging device is configured to acquire a second image, the second image including depth information of at least one candidate target among the one or more candidate targets.
- the third imaging device is configured to acquire a third image, the third image including at least a part of at least one candidate target among the one or more candidate targets.
- the system further includes a processor and a storage medium, where the storage medium is used to store executable instructions, and the processor is used to execute the executable instructions to implement the above-mentioned identification method.
- One of the embodiments of this specification provides a computer-readable storage medium that stores computer instructions; when the computer instructions are executed by a processor, the above-mentioned identity recognition method is implemented.
- Fig. 1 is a schematic diagram of an application scenario of an identity recognition system according to some embodiments of this specification.
- Fig. 2 is an exemplary flowchart of an identity recognition method according to some embodiments of this specification.
- Fig. 3 is an exemplary flowchart of another identity recognition method according to some embodiments of this specification.
- Fig. 4 is an exemplary flowchart of another identity recognition method according to some embodiments of this specification.
- Fig. 5 is an exemplary module diagram of an identity recognition device according to some embodiments of this specification.
- As used herein, words such as "system", "device", "unit" and/or "module" are a way of distinguishing different components, elements, parts, or assemblies at different levels; if other words can achieve the same purpose, the words can be replaced by other expressions.
- Fig. 1 is a schematic diagram of an application scenario of an identity recognition system according to some embodiments of this specification.
- the identity recognition system 100 can recognize the identity information of the target to be recognized.
- the identity recognition system 100 may include a processing device 110, an imaging device 120, a terminal 130, a storage device 140, and a network 150.
- the processing device 110 may process data and/or information from at least one other component of the identity recognition system 100.
- the processing device 110 may acquire image data from the imaging device 120.
- the processing device 110 may extract depth information of a candidate target (for example, a human face) based on image data and determine the target to be recognized based on the depth information of the candidate target.
- the processing device 110 may perform identity recognition on the target to be recognized based on at least a part of the image data (for example, an iris image) of the target to be recognized.
- the processing device 110 may be a single processing device or a group of processing devices.
- the processing device group may be a centralized processing device group connected to the network 150 via an access point, or a distributed processing device group respectively connected to the network 150 via at least one access point.
- the processing device 110 may be locally connected to the network 150 or remotely connected to the network 150.
- the processing device 110 may access information and/or data stored in the terminal 130 and/or the storage device 140 via the network 150.
- the storage device 140 may be used as a back-end data storage of the processing device 110.
- the processing device 110 may be implemented on a cloud platform.
- the cloud platform may include private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, internal cloud, multi-layer cloud, etc., or any combination thereof.
- the processing device 110 may include at least one processing device.
- the processing device can process information and/or data related to at least one function described in this specification.
- the processing device may include at least one processing unit (for example, a single-core processing device or a multi-core processing device).
- Exemplary processing devices may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computer (RISC), a microprocessor, etc., or any combination thereof.
- the imaging device 120 may include multiple types of imaging devices having an image capturing function, for example, a first imaging device 120-1, a second imaging device 120-2, a third imaging device 120-3, and the like.
- the first imaging device 120-1 may be used to collect planar images.
- the first imaging device 120-1 may include one or any combination of a color camera, a digital camera, a camcorder, a PC camera, a web camera, a closed circuit television (CCTV), a PTZ camera, a video sensor device, etc.
- the second imaging device 120-2 may be used to acquire a depth image.
- the second imaging device 120-2 may include a structured light depth camera, a binocular stereo vision camera, a time-of-flight (TOF) camera, and the like.
- the third imaging device 120-3 may be used to collect infrared images (for example, iris images).
- the third imaging device 120-3 may include an infrared thermal imager, an infrared camera, and the like.
- the field of view (FOV) of the first imaging device 120-1 and the field of view (FOV) of the second imaging device 120-2 overlap at least partially.
- the field of view (FOV) of the second imaging device 120-2 and the field of view (FOV) of the third imaging device 120-3 overlap at least partially.
- the first imaging device 120-1, the second imaging device 120-2, the third imaging device 120-3, etc. may be integrated in the same device.
- the first imaging device 120-1, the second imaging device 120-2, the third imaging device 120-3, etc. may be different imaging modules in the same device.
- the imaging device 120 may collect images containing candidate targets, and send the collected images to one or more devices in the identity recognition system 100.
- the imaging device 120 may collect images containing multiple human faces, and send the images to the processing device 110 through the network 150 for subsequent processing.
- the terminal 130 may communicate and/or connect with the processing device 110, the imaging device 120, and/or the storage device 140.
- the terminal 130 may obtain image data acquired through the imaging device 120, and send the image data to the processing device 110 for processing.
- the terminal 130 may obtain the result of identity recognition from the processing device 110.
- the terminal 130 may include a mobile device, a tablet computer, a laptop computer, etc., or any combination thereof.
- the user can interact with other components in the identity recognition system 100 through the terminal 130.
- the user can view the image collected by the imaging device through the terminal 130.
- the user can also view the identification result determined by the processing device 110 through the terminal 130.
- the storage device 140 may store data and/or instructions.
- the storage device 140 may store the image data collected by the imaging device 120, the coordinate system conversion relationship between the imaging devices, the identity information of the target to be recognized, the image processing model and/or algorithm, and the like.
- the storage device 140 may store data and/or instructions that can be executed by the processing device 110, and the processing device 110 may execute or use the data and/or instructions to implement the exemplary methods described in this specification.
- the storage device 140 may include mass storage, removable storage, volatile read-write storage, read-only storage (ROM), etc., or any combination thereof.
- Exemplary mass storage devices may include magnetic disks, optical disks, solid state disks, and the like.
- Exemplary removable storage may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tapes, and the like.
- An exemplary volatile read-write memory may include random access memory (RAM).
- Exemplary random access memory may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
- Exemplary read-only memory may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disk read-only memory, etc.
- the storage device 140 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-layer cloud, etc., or any combination thereof.
- the network 150 may facilitate the exchange of information and/or data.
- at least one component of the identity recognition system 100 (for example, the processing device 110, the imaging device 120, the terminal 130, and the storage device 140) may exchange information and/or data with other components via the network 150.
- the processing device 110 may send information and/or data to other components via the network 150.
- the processing device 110 may acquire an image from the imaging device 120 through the network 150.
- the processing device 110 may send the acquired image to the terminal 130 via the network 150.
- the processing device 110 may use the network 150 to obtain the identity information of multiple objects (for example, biological individuals) from the storage device 140.
- the processing device 110 may send the processed image to the terminal 130 via the network 150.
- the network 150 may be any form of wired or wireless network, or any combination thereof.
- the network 150 may include a cable network, a wired network, an optical fiber network, a telecommunication network, an internal network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), public switched telephone network (PSTN), Bluetooth network, ZigBee network, near field communication (NFC) network, etc. or any combination thereof.
- the network 150 may include at least one network access point.
- the network 150 may include wired or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of the identity recognition system 100 may be connected to the network 150 to exchange data and/or information.
- Fig. 2 is an exemplary flowchart of an identity recognition method according to some embodiments of the present specification.
- the process 200 may be implemented by the identity recognition apparatus 500 or the processing device 110 shown in FIG. 1.
- the process 200 may be stored in a storage device (such as the storage device 140) in the form of a program or instruction, and the process 200 may be implemented when the program or instruction is executed.
- the process 200 may include the following steps.
- Step 201 The processing device may acquire a first image collected by a first imaging device, where the first image includes one or more candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- the candidate target may also be referred to as a candidate to-be-recognized object.
- the candidate target may include all or part of the biological individual.
- the candidate target may include a human body or a face of a human body (ie, a human face).
- the fact that the first image includes one or more candidate targets may mean that the first image includes an image representation of the candidate targets.
- the image representation of the candidate target can also be referred to as the image description of the candidate target.
- the first image may include feature information used to represent or describe the candidate target, for example, texture features, boundary features, color features, and so on.
- the first image may be a two-dimensional image.
- the image type of the first image includes at least one of the following: grayscale image, RGB image, etc., or any combination thereof.
- the first imaging device may include an imaging device for acquiring a planar image (eg, a first image) of the candidate target.
- the first imaging device may include, but is not limited to, one or any combination of a color camera, a digital camera, a camcorder, a PC camera, a network camera, a closed-circuit television (CCTV), a PTZ camera, a video sensor device, etc.
- the processing device may acquire the first image from the first imaging device, the storage device 140, or other storage devices.
- the processing device may obtain a single first image captured by a first imaging device.
- the processing device may acquire multiple first images simultaneously acquired by multiple first imaging devices. At least one of the plurality of first images includes one or more candidate targets. For example, at least one of the multiple first images includes multiple human faces.
- the candidate target image acquisition module 501 may acquire the first image from the first imaging device and store the first image in the storage device 140.
- Step 203 The processing device may acquire a second image collected by the second imaging device, where the second image includes depth information of at least one of the candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- the depth information of the candidate target may indicate the distance between a point on the surface of the candidate target (for example, a human face) and the second imaging device.
- the pixel values of the second image may indicate the distances between points on the surface of the candidate target and the second imaging device.
- the second image may include a depth image.
- the second image may include a point cloud image.
- the depth image can be converted into a point cloud image to obtain the second image.
- the point cloud image can be converted into a depth map to obtain the second image.
- the second image may be a two-dimensional image, a three-dimensional image, or the like.
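- As a minimal sketch of the depth-image/point-cloud conversion mentioned above, assuming a pinhole camera model whose intrinsics (fx, fy, cx, cy) come from calibration (the specification does not fix a camera model):

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (in meters) into an Nx3 point cloud in the
    second imaging device's coordinate frame; pixels without depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```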
- the second imaging device includes an imaging device that can collect depth information of a candidate target (for example, a human face).
- the second imaging device may include one or more depth imaging devices.
- the depth imaging device may include, but is not limited to: a structured light depth camera, a binocular stereo vision camera, a time-of-flight (TOF) camera, etc., or any combination thereof.
- the field of view (FOV) of the second imaging device and the field of view (FOV) of the first imaging device at least partially overlap.
- the first imaging device and the second imaging device simultaneously capture the first image and the second image.
- the candidate target image acquisition module 501 may acquire the second image from the second imaging device, the storage device 140 or other storage devices. In some embodiments, the candidate target image acquisition module 501 can acquire a single second image acquired by a second imaging device. In some embodiments, the candidate target image acquisition module 501 may acquire multiple second images simultaneously acquired by multiple second imaging devices. At least one second image among the plurality of second images includes depth information of at least one candidate target among the one or more candidate targets. Each second image can correspond to a first image. As used herein, a second image corresponding to a first image means that each pixel in the second image and the corresponding pixel in the first image represent the same position or the same part of the candidate target.
- Step 205 The processing device may extract depth information of the one or more candidate targets based on the first image and the second image. In some embodiments, this step may be performed by the depth information extraction module 503.
- the first imaging device and the second imaging device may be calibrated based on the same coordinate system (for example, the world coordinate system) before acquiring the first image and the second image, so that the first imaging device and the second imaging device have a unified coordinate system.
- the processing device may directly detect one or more candidate targets (for example, human faces) from the first image.
- the processing device may extract the depth information of one or more candidate targets (for example, a human face) from the second image based on the position of the detected one or more candidate targets in the first image.
- the processing device may register the first image with the second image to obtain the registration result.
- the processing device may detect one or more candidate targets (for example, human faces) from the registered first image.
- the depth information extraction module 503 may extract depth information of one or more candidate targets (for example, a human face) based on the detected one or more candidate targets and the registration result.
- the processing device may register the first image with the second image through image registration technology.
- Exemplary image registration techniques may include grayscale- and template-based matching algorithms, feature-based matching algorithms, transform-domain-based algorithms, and the like.
- the processing device may use methods such as image segmentation technology and model-based target detection technology to detect candidate targets from the registered first image.
- Image segmentation techniques may include the use of edge-based segmentation algorithms, threshold-based segmentation algorithms, region-based segmentation algorithms, morphological watershed algorithms, etc., or a combination thereof.
- Model-based target detection technology may include the use of machine learning models (an R-CNN model, a Fast R-CNN model, an SVM model, etc.) for target detection.
- the processing device may perform mask processing on the registered first image based on the detected candidate target to obtain a mask image. For example, the pixel value of the region where the candidate target is located in the registered first image can be set to 1, and the pixel value of the remaining regions can be set to 0.
- the depth information of one or more candidate targets may be extracted from the registered second image based on the position of the detected candidate target in the registered first image.
- the position of the detected candidate target in the registered second image may be determined based on the position of the detected candidate target in the registered first image.
- the mask image with the detected candidate target may be multiplied with the registered second image to determine the position of the detected candidate target in the registered second image.
- the depth information of the position of the detected candidate target in the registered second image may be extracted from the registered second image.
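- The masking and extraction described above can be sketched as follows, using an off-the-shelf OpenCV face detector as a stand-in for whatever detection model an embodiment actually uses:

```python
import numpy as np
import cv2

def extract_face_depths(first_img, depth_img):
    """Detect candidate faces in the registered first image, set the face region
    of a mask to 1 (rest 0), multiply the mask with the registered depth image,
    and return a (bounding_box, median_depth) pair per candidate."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    results = []
    for (x, y, w, h) in faces:
        mask = np.zeros(depth_img.shape, dtype=depth_img.dtype)
        mask[y:y + h, x:x + w] = 1           # candidate region = 1, rest = 0
        masked = depth_img * mask            # mask image times depth image
        valid = masked[masked > 0]           # ignore pixels without depth
        if valid.size:
            results.append(((x, y, w, h), float(np.median(valid))))
    return results
```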
- Step 207 The processing device may determine at least one candidate target from the one or more candidate targets as the target to be identified based on the depth information of the one or more candidate targets. In some embodiments, this step may be performed by the to-be-identified target determination module 505.
- the spatial position relationship of the one or more candidate targets may be determined based on the depth information of the one or more candidate targets (for example, human faces), and the target to be recognized may be determined based on the spatial position relationship.
- the spatial position relationship of the candidate target may include the spatial position relationship between the candidate target and the second imaging device.
- the spatial position relationship of the candidate target may be expressed as the distance between the candidate target and the second imaging device.
- the processing device may determine at least one candidate target among the one or more candidate targets as the target to be recognized based on the distance between the candidate target (for example, a human face) and the second imaging device.
- For example, a candidate target (for example, a human face) whose distance to the second imaging device is less than a certain threshold (for example, less than 1 meter, 2 meters, 3 meters, or 4 meters) or within a certain range can be determined as the target to be recognized.
- the processing device may determine the candidate target with the smallest distance from the second imaging device as the target to be recognized.
- the processing device may determine the target to be recognized from the two or more candidate targets based on a certain criterion. For example, the processing device may determine the target to be recognized based on the position in the first image of the candidate target having the same distance from the second imaging device. Further, the to-be-recognized target determining module 505 may determine the candidate target close to the left or right of the image in the first image as the to-be-recognized target.
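- The selection rule of this step can be sketched as below; the 1-meter threshold is one of the example values above, and the (bounding_box, distance) tuple layout is an assumption:

```python
def select_target(candidates, max_distance=1.0):
    """Pick the target to be recognized from candidates, each given as
    (bounding_box, distance_to_second_imaging_device_in_meters): take the
    nearest candidate within the threshold, breaking ties by the leftmost
    position in the first image."""
    in_range = [c for c in candidates if c[1] <= max_distance]
    if not in_range:
        return None                       # nobody close enough yet
    return min(in_range, key=lambda c: (c[1], c[0][0]))
```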
- Step 209 The processing device acquires a third image collected by a third imaging device based on the depth information of at least a part of the target to be recognized, the third image including the at least part of the target to be recognized. This step can be performed by the to-be-identified target image acquisition module 507.
- the target to be recognized may include a human face, and at least a part of the target to be recognized may include at least one of a human eye, an iris, an eye pattern, and an eye circumference.
- saying that the third image includes the at least part of the target to be recognized may also mean that the third image includes an image representation of the at least part (for example, the human eyes) of the target to be recognized.
- the processing device may acquire one or more fourth images (for example, human eye images) acquired by the third imaging device.
- the processing device may determine the third image from one or more fourth images based on depth information of at least a part of the target to be recognized (for example, human eyes).
- Specifically, the coordinate system conversion relationship between the third imaging device and the second imaging device (i.e., a geometric mapping relationship or spatial projection relationship) may be used: a fourth image is designated as the third image if the projection relationship between the fourth image and the depth information of at least a part of the target to be recognized satisfies this coordinate system conversion relationship.
- the processing device may use the coordinate system conversion relationship between the third imaging device and the second imaging device to project the depth information of at least a part of the target to be recognized (for example, human eyes) onto the plane where each fourth image is located.
- the processing device may designate a fourth image matching the projected information among the fourth images as the third image.
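- A sketch of that screening, assuming pre-calibrated extrinsics (R, t) from the second imaging device's frame to the third imaging device's frame and an intrinsic matrix K for the third imaging device; a fourth image matches when the projected eye position lands inside it:

```python
import numpy as np

def project_to_third_camera(point_xyz, R, t, K):
    """Map a 3D point from the second imaging device's frame into pixel
    coordinates of the third imaging device (coordinate system conversion
    followed by a pinhole projection)."""
    p = R @ point_xyz + t          # second-device frame -> third-device frame
    uvw = K @ p
    return uvw[:2] / uvw[2]        # perspective divide -> (u, v)

def matches_fourth_image(eye_xyz, R, t, K, width, height, margin=20):
    """True if the projected eye position falls inside the fourth image, i.e.
    the projection relationship is (approximately) satisfied."""
    u, v = project_to_third_camera(eye_xyz, R, t, K)
    return -margin <= u < width + margin and -margin <= v < height + margin
```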
- the coordinate system conversion relationship between the third imaging device and the second imaging device is related to the calibration parameters of the second imaging device and the third imaging device, and may be a default setting of the identity recognition system 100.
- the processing device may locate at least a part of the target to be recognized (for example, the human eyes) based on the depth information of the at least part of the target to be recognized, to determine the spatial position information of the at least part of the target to be recognized relative to the third imaging device.
- the distance and direction of at least a part of the target to be recognized (for example, the human eye) from the second imaging device may be determined based on the depth information of the at least part of the target to be recognized, and the distance and direction of the at least part of the target to be recognized from the third imaging device may then be determined accordingly.
- the third imaging device may image at least a part of the target to be recognized (for example, the human eyes) based on the distance and direction between the at least part of the target to be recognized and the third imaging device to acquire the third image.
- the spatial position information of the target to be recognized relative to the third imaging device may be determined based on the depth information of at least a part of the target to be recognized.
- the third imaging device may be activated to focus on at least a part of the target to be recognized based on the spatial position information of the target to be recognized relative to the third imaging device to acquire the third image. Refer to FIG. 4 for more description of the autofocus of the third imaging device.
- the processing device may determine whether the depth information of at least a part of the target to be recognized meets a certain condition, and determine whether to activate the third imaging device to acquire the third image based on the determination result. For more description about the activation of the third imaging device, refer to FIG. 3.
- the third imaging device includes one or more infrared imaging devices.
- the third imaging device may include one or more image sensors, for example, CMOS image sensors, CCD image sensors, and so on.
- at least one of the vertical field of view (FOV) or the horizontal field of view (FOV) of the third imaging device is greater than a threshold value or within a certain range.
- the vertical FOV of the third imaging device can be within a certain range, for example, 0-60 degrees, 0-90 degrees, or 0-120 degrees.
- multiple image sensors can be installed along the horizontal direction of the third imaging device, so that the horizontal FOV of the third imaging device is within a certain range, for example, 0-60 degrees, 0-90 degrees, or 0-120 degrees.
- multiple image sensors can be installed along both the vertical and horizontal directions of the third imaging device, so that the vertical FOV and horizontal FOV of the third imaging device are both within a certain range, for example, 0-60 degrees, 0-90 degrees, or 0-120 degrees.
- the image sensors in the third imaging device can rotate along multiple degrees of freedom, for example, clockwise or counterclockwise.
- the FOV in a certain direction can be changed by rotating an image sensor in the third imaging device. For example, if the horizontal FOV is larger, the image sensor can be rotated 90 degrees to make the vertical FOV larger.
- Step 211 The processing device may perform identity recognition on the target to be recognized based on the third image. This step may be performed by the identification module 509.
- images of at least a part of a plurality of objects may be collected in advance and image features may be extracted.
- the pre-extracted image features can be stored in the storage device 140 in the form of feature codes, or can be directly stored in an external database.
- the recognition module 509 can use a feature extraction algorithm to extract features from the third image.
- the recognition module 509 may preprocess the third image before performing feature extraction, for example, image smoothing, edge detection, image separation, and so on.
- the recognition module 509 may further perform feature encoding on the features extracted from the third image.
- the recognition module 509 can match the feature code obtained from the third image with the pre-stored feature code to perform identity recognition on the target to be recognized.
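- A minimal sketch of this matching, assuming the feature codes are equal-length binary arrays compared by normalized Hamming distance (a common choice for iris codes; the specification mandates neither the metric nor the 0.32 threshold used here):

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary feature codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def identify(probe_code: np.ndarray, gallery: dict, threshold: float = 0.32):
    """Match the feature code extracted from the third image against the
    pre-stored feature codes; return the best identity under the threshold,
    or None if no stored code is close enough."""
    best_id, best_dist = None, 1.0
    for identity, stored_code in gallery.items():
        d = hamming_distance(probe_code, stored_code)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None
```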
- the target to be recognized may be determined based on the depth information of the candidate target.
- Based on the depth information of at least a part of the target to be recognized (for example, the human eyes), the third image of the specified target can be selected from the images captured by the third imaging device (for example, an iris camera) for identity recognition (for example, iris recognition). This can avoid collecting, or mistakenly collecting, objects that should not be collected, improve the quality and efficiency of the collected images, and further improve the speed and accuracy of identity recognition.
- By enlarging the vertical or horizontal FOV of the third imaging device (for example, a larger vertical FOV can cover people of different heights), no additional mechanical structure for pitch-angle adjustment is needed.
- The second imaging device can adopt a structured light depth camera or a TOF depth camera, which can effectively reduce the dependence on ambient light and improve the accuracy of the depth information, thereby improving the accuracy of determining the target to be recognized.
- Fig. 3 is an exemplary flowchart of another identity recognition method according to some embodiments of the present specification.
- the process 300 may be implemented by the identity recognition apparatus 500 or the processing device 110 shown in FIG. 1.
- the process 300 may be stored in a storage device (such as the storage device 140) in the form of a program or instruction, and when the program or instruction is executed, the process 300 may be implemented.
- the process 300 may include the following steps.
- the third imaging device may acquire images simultaneously with the first imaging device and the second imaging device. In some embodiments, the third imaging device may initiate image acquisition based on corresponding conditions. For example, whether to start operations such as the acquisition of the third image may be determined based on processing results of the first and second images acquired by the first imaging device and the second imaging device. For example, the candidate target image acquisition module 501 may acquire a first image acquired by a first imaging device and a second image acquired by a second imaging device; the first and second images include image representations of one or more candidate targets and the depth information of at least one candidate target. Whether to start the third imaging device to collect the third image can then be determined based on whether the depth information of the target to be recognized in the second image meets the corresponding condition.
- Step 301 The processing device may acquire a first image collected by the first imaging device, where the first image includes image representations of one or more candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- For a detailed description of acquiring the first image, reference may be made to step 201 in the process 200.
- Step 303 The processing device may acquire a second image collected by the second imaging device, where the second image includes depth information of at least one candidate target among the one or more candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- For a detailed description of acquiring the second image, refer to step 203 in the process 200.
- Step 305 The processing device may extract depth information of the one or more candidate targets from the second image based on the first image and the second image. In some embodiments, this step may be performed by the depth information extraction module 503.
- For a specific description of extracting the depth information of one or more candidate targets from the second image, reference may be made to step 205 in the process 200.
- Step 307 The processing device may determine at least one candidate target from the one or more candidate targets as the target to be identified based on the depth information of the one or more candidate targets. In some embodiments, this step may be performed by the to-be-identified target determination module 505.
- For a specific description of determining at least one candidate target as the target to be recognized based on the depth information, reference may be made to step 207 in the process 200.
- Step 309 The processing device may determine whether the depth information of the at least a part of the target to be identified meets a condition. In some embodiments, this step may be performed by the to-be-identified target image acquisition module 507 (for example, the activation unit (not shown)).
- The depth information of at least a part (for example, human eyes, eye patterns, eye circumference, etc.) of the target to be recognized may indicate the distance relationship between points on the surface of the target to be recognized and the second imaging device. In some embodiments, the distance between a point on the surface of the at least part of the target to be recognized and the third imaging device may be determined based on the distance between that point and the second imaging device together with the spatial position relationship (for example, direction, distance, etc.) between the second imaging device and the third imaging device. In some embodiments, the position of the point in the geographic coordinate system may be determined based on the distance between the point and the second imaging device and the coordinate system conversion relationship between the second imaging device and the geographic coordinate system; the distance between the at least part of the target to be recognized and the third imaging device may then be determined based on the position of the point in the geographic coordinate system and the position of the third imaging device in the geographic coordinate system.
- determining whether the depth information of at least a part of the target to be recognized satisfies the condition includes determining whether the distance between at least a part of the target to be recognized and the third imaging device satisfies a certain condition. For example, it can be determined whether the distance between at least a part of the target to be recognized and the third imaging device is within a certain distance range. If the distance between at least a part of the target to be recognized and the third imaging device is within a certain distance range, it can be determined that the depth information of at least a part of the target to be recognized satisfies the condition.
- the distance between at least a part of the target to be recognized and the third imaging device is not within a certain distance range, it may be determined that the depth information of at least a part of the target to be recognized does not satisfy the condition.
- the distance range can include 30-70cm, 20-80cm, 10-90cm, and so on.
- the distance threshold may include 70cm, 80cm, 90cm, and so on.
- the points on the surface of at least a part of the target to be recognized may not be on the same plane; that is, the distances between different points on the surface of the at least part of the target to be recognized and the third imaging device may be different.
- the distance between at least a part of the target to be recognized and the third imaging device may be determined based on the distance between a point on the surface of at least a part of the target to be recognized and the third imaging device. For example, it may be determined that the average value of the distance between a point on the surface of at least a part of the target to be recognized and the third imaging device is the distance between at least a part of the target to be recognized and the third imaging device. For another example, it may be determined that the median value of the distance between a point on the surface of at least a part of the target to be recognized and the third imaging device is the distance between at least a part of the target to be recognized and the third imaging device.
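- This condition check can be sketched as follows, using the median of the surface-point distances and the 30-70 cm example range from above (both the reducer and the bounds are illustrative):

```python
import numpy as np

def eye_region_distance(surface_distances_m) -> float:
    """Collapse per-point distances between the eye-region surface and the
    third imaging device into one value; the median is robust to outliers
    (the specification also allows the mean)."""
    return float(np.median(np.asarray(surface_distances_m)))

def should_activate_third_device(surface_distances_m, lo=0.30, hi=0.70) -> bool:
    """Start the third imaging device only when the eye region lies within
    the working distance range."""
    d = eye_region_distance(surface_distances_m)
    return lo <= d <= hi
```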
- the processing device may, in response to the depth information of at least a part of the target to be recognized satisfying the condition, start the third imaging device to collect a third image of at least a part of the target to be recognized.
- this step may be performed by the to-be-identified target image acquisition module 507 (for example, the activation unit (not shown)).
- Specifically, the third imaging device may be activated to collect one or more fourth images.
- The processing device (for example, a screening unit (not shown)) may determine the third image from the one or more fourth images based on the depth information of at least a part of the target to be recognized. Specifically, the coordinate system conversion relationship between the third imaging device and the second imaging device (i.e., a geometric mapping relationship or spatial projection relationship) may be used: a fourth image is designated as the third image if the projection relationship between the fourth image and the depth information of the at least part of the target to be recognized satisfies this coordinate system conversion relationship.
- the processing device may use the coordinate system conversion relationship between the third imaging device and the second imaging device to project the depth information of at least a part of the target to be recognized (for example, human eyes) onto the plane where each fourth image is located.
- the processing device may designate a fourth image matching the projected information among the fourth images as the third image.
- the coordinate system conversion relationship between the third imaging device and the second imaging device is related to the calibration parameters of the second imaging device and the third imaging device, and may be a default setting of the identity recognition system 100.
- the processing device may locate at least a part of the target to be recognized (for example, the human eyes) based on the depth information of the at least part of the target to be recognized, to determine the spatial position information of the at least part of the target to be recognized relative to the third imaging device.
- For example, the distance and direction of at least a part (for example, the human eyes) of the target to be recognized from the second imaging device may be determined based on the depth information of the at least part of the target to be recognized, and the distance and direction between the at least part of the target and the third imaging device may then be determined. The third imaging device may be activated to image the at least part of the target to be recognized (for example, the human eyes) based on this distance and direction, so as to obtain the third image.
- In response to the depth information of the at least part of the target to be recognized not satisfying the condition, the processing device may return to perform steps 301-307: reacquire a first image collected by the first imaging device and a second image collected by the second imaging device, extract the depth information of one or more candidate targets based on the reacquired first and second images, and determine at least one candidate target from the one or more candidate targets as the target to be recognized based on the depth information of the one or more candidate targets.
- For a specific description of performing identity recognition based on the third image, reference may be made to step 211 in FIG. 2.
- The third imaging device (for example, an iris imaging device) starts image acquisition only when the distance between at least a part of the target to be recognized and the third imaging device satisfies the condition (for example, the target is relatively close to the third imaging device). This effectively avoids the third imaging device continuously acquiring images and thereby collecting targets that should not be collected, and the user does not need to actively turn the third imaging device on or off, which improves the user experience.
- Fig. 4 is an exemplary flow chart of another identity recognition method according to some embodiments of the present specification.
- the process 400 may be executed by the identity recognition apparatus 500 or implemented by the processing device 110 shown in FIG. 1.
- the process 400 may be stored in a storage device (such as the storage device 140) in the form of a program or instruction, and when the program or instruction is executed, the process 400 may be implemented.
- the process 400 may include the following steps.
- the third imaging device that captures an image of at least a part of the target to be recognized needs to be able to focus well on the target to be recognized, so that the captured third image meets the quality requirements for identity recognition.
- the third imaging device is activated to perform automatic focusing based on the depth information of at least a part of the target to be recognized and collect a third image.
- Step 401 The processing device may obtain a first image collected by a first imaging device, where the first image includes image representations of one or more candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- For a detailed description of acquiring the first image, reference may be made to step 201 in the process 200.
- Step 403 The processing device may acquire a second image collected by the second imaging device, where the second image includes depth information of at least one candidate target among the one or more candidate targets. In some embodiments, this step may be performed by the candidate target image acquisition module 501.
- For a detailed description of acquiring the second image, refer to step 203 in the process 200.
- Step 405 The processing device may extract depth information of the one or more candidate targets from the second image based on the first image and the second image. In some embodiments, this step may be performed by the depth information extraction module 503.
- For a specific description of extracting the depth information of one or more candidate targets from the second image, reference may be made to step 205 in the process 200.
- Step 407 The processing device may determine at least one candidate target from the one or more candidate targets as the target to be recognized based on the depth information of the one or more candidate targets. In some embodiments, this step may be performed by the to-be-identified target determination module 505.
- For a specific description of determining at least one candidate target as the target to be recognized based on the depth information, reference may be made to step 207 in the process 200.
- Step 409 The processing device may determine the spatial position information of the at least part of the target to be recognized relative to the third imaging device based on the depth information of the at least part of the target to be recognized. In some embodiments, this step may be performed by the to-be-identified target image acquisition module 507 (for example, a focusing unit (not shown)).
- the spatial position information of at least a part of the target to be recognized (for example, the human face, eyes, eye circumference, etc.) relative to the third imaging device may include the relationship between the spatial position of the at least part of the target to be recognized and the spatial position of the third imaging device.
- the spatial position information of at least a part of the target to be recognized (for example, the human face, eyes, eye circumference, etc.) relative to the third imaging device may include the distance between the at least part of the target to be recognized and the third imaging device, the direction of the at least part of the target to be recognized with respect to the third imaging device, and so on.
- the spatial position relationship of at least a part of the target to be recognized relative to the third imaging device includes the distance of the human eye relative to the third imaging device.
- the processing device may determine the spatial position relationship between at least a part of the target to be recognized and the second imaging device (also referred to as the first spatial position relationship) based on the depth information of the at least part of the target to be recognized extracted from the second image. Further, the processing device may determine the spatial position relationship of the at least part of the target to be recognized relative to the third imaging device based on the first spatial position relationship and the spatial position relationship between the second imaging device and the third imaging device (also referred to as the second spatial position relationship), that is, the spatial position information of at least a part of the target to be recognized (for example, the human face, human eyes, eye circumference, etc.) relative to the third imaging device.
- the processing device may determine the spatial position information (for example, coordinates) of at least a part of the target to be recognized in the geographic coordinate system based on the depth information of the at least part of the target to be recognized extracted from the second image and the coordinate conversion relationship between the second imaging device and the geographic coordinate system. Further, the processing device may determine the spatial position information of the at least part of the target to be recognized relative to the third imaging device based on the spatial position information (for example, coordinates) of the at least part of the target to be recognized in the geographic coordinate system and the spatial position information of the third imaging device in the geographic coordinate system.
- the spatial position relationship between the second imaging device and the third imaging device, the coordinate conversion relationship between the second imaging device and the geographic coordinate system, and/or the spatial position information of the third imaging device in the geographic coordinate system can be preset by the identity recognition system 100.
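- A sketch of deriving the eye's position relative to the third imaging device from its depth-camera coordinates, assuming the preset second spatial position relationship is given as a rotation R and translation t:

```python
import numpy as np

def eye_relative_to_third_device(eye_in_second_frame, R, t):
    """Express the eye's 3D position (second imaging device's frame) in the
    third imaging device's frame, then derive the distance and direction
    needed for focusing. R, t: preset rotation/translation from the second
    device's frame to the third device's frame."""
    p = R @ eye_in_second_frame + t
    distance = float(np.linalg.norm(p))
    direction = p / distance      # unit vector from the third device to the eye
    return distance, direction
```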
- Step 411 The processing device may cause the third imaging device to focus on the at least part of the target to be recognized based on the spatial position information of the at least part of the target to be recognized relative to the third imaging device .
- this step may be performed by the to-be-identified target image acquisition module 507 (for example, a focusing unit (not shown)).
- the spatial position information of at least a part of the target to be recognized relative to the third imaging device may include the distance between the at least part of the target to be recognized (for example, human eyes, eye patterns, eye circumference) and the third imaging device that collects the third image. The processing device (for example, the focusing unit) may determine the focus position of the third imaging device based on this distance.
- the correspondence relationship between the object distance interval and the focus position may be constructed in advance.
- the correspondence between the object distance interval and the focus position includes multiple object distance intervals and corresponding focus positions.
- the to-be-identified target image acquisition module 507 (for example, a focusing unit (not shown)) can determine, based on the distance between at least a part of the to-be-recognized target (for example, human eyes, eye patterns, eye circumference) and the third imaging device (for example, an iris camera), the object distance interval to which this distance belongs from among the plurality of object distance intervals, and then determine the corresponding focus position according to that interval.
- the third imaging device includes a voice coil motor.
- a voice coil motor can be used as a device that converts electrical energy into mechanical energy.
- the voice coil motor can adjust the distance between the lens of the third imaging device and the image sensor according to the determined focus position to adjust the image distance and the object distance.
- the third imaging device can adjust the image distance and the object distance during the focusing process by adjusting the position of the lens group, thereby achieving focusing.
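- A sketch of the interval lookup and the voice-coil-motor drive; the table values and the `vcm.move_to` call are hypothetical placeholders for the calibrated mapping and the motor driver:

```python
# Illustrative correspondence between object distance intervals (meters)
# and focus positions (motor codes); real values come from lens calibration.
FOCUS_TABLE = [
    ((0.30, 0.45), 120),
    ((0.45, 0.60), 90),
    ((0.60, 0.80), 60),
]

def focus_position_for(distance_m):
    """Find the object distance interval the measured distance belongs to and
    return the corresponding pre-built focus position."""
    for (lo, hi), position in FOCUS_TABLE:
        if lo <= distance_m < hi:
            return position
    return None          # outside the working range: do not drive the motor

def focus_third_device(vcm, distance_m):
    """Drive the voice coil motor so the lens-to-sensor spacing matches the
    determined focus position (vcm.move_to is a hypothetical driver call)."""
    pos = focus_position_for(distance_m)
    if pos is not None:
        vcm.move_to(pos)
    return pos
```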
- Step 413 The processing device may obtain the third image and perform identity recognition. In some embodiments, this step may be performed by the to-be-recognized target image acquisition module 507 and/or the recognition module 509.
- the to-be-identified target image acquisition module 507 may adjust the lens position of the third imaging device to the determined focus position, so that the lens of the third imaging device can focus on the target to be recognized.
- After focusing, the third imaging device may collect images of at least a part of the target to be recognized, obtaining one or more fourth images that include at least one candidate target. Based on the depth information of at least a part of the target to be recognized, the third image may be obtained from the one or more fourth images.
- Specifically, the third imaging device may be activated to collect one or more fourth images.
- In some embodiments, the to-be-recognized target image acquisition module 507 (for example, a screening unit (not shown)) may obtain the third image from the one or more fourth images based on the depth information of at least a part of the target to be recognized (for example, human eyes), using the spatial projection relationship (i.e., the geometric mapping relationship or coordinate system conversion relationship) between the third imaging device and the second imaging device.
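- A minimal sketch of such screening follows, assuming the spatial projection relationship is given as a 3x3 homography H that maps depth-camera pixel coordinates into the iris-camera image plane; H, the eye location, and the contrast-based scoring are illustrative assumptions rather than the specification's prescribed method:

```python
import numpy as np

def project_point(H: np.ndarray, uv: tuple[float, float]) -> tuple[float, float]:
    """Project a depth-camera pixel into the iris-camera image via homography H."""
    x, y, w = H @ np.array([uv[0], uv[1], 1.0])
    return x / w, y / w

def select_third_image(fourth_images: list[np.ndarray],
                       H: np.ndarray,
                       eye_uv: tuple[float, float],
                       patch: int = 64) -> np.ndarray:
    """Pick the fourth image whose projected eye region shows the most local
    contrast, i.e., the image consistent with the projected eye location."""
    ex, ey = project_point(H, eye_uv)
    best, best_score = None, -1.0
    for img in fourth_images:
        x0, y0 = int(ex) - patch // 2, int(ey) - patch // 2
        region = img[max(y0, 0):y0 + patch, max(x0, 0):x0 + patch]
        if region.size == 0:
            continue
        score = float(region.std())  # crude sharpness/contrast proxy
        if score > best_score:
            best, best_score = img, score
    return best
```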
- For a specific description of performing identity recognition based on the third image, reference may be made to step 211 in FIG. 2.
- Based on the depth information of at least a part of the target to be recognized (for example, the human eye), the distance between that part and the third imaging device (for example, an iris camera) can be determined. The third imaging device can then auto-focus on at least a part of the target to be recognized (for example, human eyes) according to this distance, so that fast and accurate auto-focusing can be realized and the quality of the image collected by the third imaging device can be improved.
- In addition, the third imaging device uses a voice coil motor to achieve auto-focusing, which avoids using a complicated stepping motor to drive a mechanical structure to achieve focusing.
- Fig. 5 is an exemplary module diagram of an identity recognition device according to some embodiments of the present specification.
- As shown in Fig. 5, the identity recognition device 500 may include a candidate target image acquisition module 501, a depth information extraction module 503, a to-be-recognized target determination module 505, a to-be-recognized target image acquisition module 507, a recognition module 509, and a storage module 511.
- The various embodiments in this specification are described in a progressive manner; the same or similar parts between the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments.
- For the device embodiments, the description is relatively simple; for related parts, please refer to the description of the method embodiments.
- In some embodiments, the candidate target image acquisition module 501 may be used to acquire the first image, containing one or more candidate targets, collected by the first imaging device, and/or the second image collected by the second imaging device.
- the first image includes image representations of one or more candidate targets.
- the first image may include feature information used to represent or describe the candidate target, for example, texture features, boundary features, color features, and so on.
- the second image includes depth information of at least one of the candidate targets.
- the depth information of the candidate target may indicate the distance relationship between a point on the surface of the candidate target (for example, a human face) and the second imaging device.
- In some embodiments, the pixel values of the second image may indicate the distances between points on the surface of the at least one candidate target and the second imaging device.
- the second image may include a depth image.
- the second image may include a point cloud image.
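- For example, with a common 16-bit depth encoding in which each pixel stores the Z distance in millimeters (the encoding and the pinhole intrinsics below are assumptions; a real depth camera documents its own scale), a pixel can be back-projected into 3D as follows:

```python
import numpy as np

def pixel_to_camera_point(depth_mm: np.ndarray, u: int, v: int,
                          fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project pixel (u, v) of a depth image into 3D camera coordinates.
    Assumes the pixel value is the Z distance in millimeters (pinhole model)."""
    z = float(depth_mm[v, u])
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])  # millimeters, in the depth camera's frame
```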
- the depth information extraction module 503 may extract depth information of one or more candidate targets based on the first image and the second image.
- In some embodiments, the depth information extraction module 503 may register the first image with the second image to obtain a registration result, including the registered first image and second image. It may then detect one or more candidate targets in the registered first image, and, based on the detected candidate targets, extract the depth information of the one or more candidate targets (for example, a human face) from the registered second image.
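- A minimal sketch of this extraction step follows; the face detector is an assumed off-the-shelf component, and the two images are taken to be pixel-aligned by the registration described above:

```python
import numpy as np

def face_depths(registered_rgb: np.ndarray,
                registered_depth_mm: np.ndarray,
                detect_faces) -> list[float]:
    """Detect candidate faces in the registered first image and read each
    candidate's depth from the registered second image.

    detect_faces: callable returning (x, y, w, h) boxes -- an assumed detector,
    e.g., any off-the-shelf face detector applied to the RGB image.
    """
    depths = []
    for (x, y, w, h) in detect_faces(registered_rgb):
        region = registered_depth_mm[y:y + h, x:x + w]
        valid = region[region > 0]                  # ignore missing depth readings
        if valid.size:
            depths.append(float(np.median(valid)))  # robust per-face distance
    return depths
```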
- In some embodiments, the to-be-recognized target determination module 505 may be configured to determine, based on the depth information of the one or more candidate targets, at least one candidate target from the one or more candidate targets as the target to be recognized. In some embodiments, the to-be-recognized target determination module 505 may determine at least one candidate target as the target to be recognized based on the distance between the candidate target (for example, a human face) and the second imaging device. For example, it may determine a candidate target whose distance from the second imaging device is less than a certain threshold or within a certain range as the target to be recognized. For another example, it may determine the candidate target with the smallest distance from the second imaging device as the target to be recognized.
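- Continuing the sketch above, the selection rule reduces to a threshold-plus-minimum over the per-candidate distances; the 1200 mm range limit is an illustrative value, not one taken from this specification:

```python
def pick_target(depths_mm: list[float], max_range_mm: float = 1200.0) -> int | None:
    """Index of the nearest candidate within range, or None if nobody qualifies."""
    in_range = [(d, i) for i, d in enumerate(depths_mm) if d < max_range_mm]
    return min(in_range)[1] if in_range else None

print(pick_target([950.0, 1430.0, 610.0]))  # -> 2 (the closest candidate)
```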
- the to-be-recognized target image acquisition module 507 may acquire the third image acquired by the third imaging device based on the depth information of at least a part of the to-be-recognized target.
- the third image includes an image representation of at least a part of the target to be identified.
- In some embodiments, the to-be-recognized target image acquisition module 507 includes a screening unit, an activation unit, and a focusing unit.
- In some embodiments, the activation unit may activate the third imaging device to collect one or more fourth images in response to the depth information of at least a part of the target to be recognized satisfying a preset condition.
- In some embodiments, the to-be-recognized target image acquisition module 507 (for example, its screening unit) may filter the third image out of the one or more fourth images based on the depth information of at least a part of the target to be recognized (for example, human eyes).
- In some embodiments, the screening unit may obtain the spatial projection relationship between the third imaging device and the second imaging device, and determine, as the third image, the fourth image that satisfies this spatial projection relationship.
- In some embodiments, the focusing unit may be used to focus the third imaging device according to the depth of at least a part of the target to be recognized, that is, to adjust the image distance of the third imaging device. For example, the focusing unit may determine, from the multiple object distance intervals, the interval to which the distance between at least a part of the target to be recognized and the third imaging device belongs, and determine the corresponding focus position according to that interval.
- the recognition module 509 may be used to perform identity recognition on the target to be recognized based on the third image.
- In some embodiments, the recognition module may be used to preprocess the third image, extract image features from it, encode the extracted features, and match the resulting feature codes against pre-stored feature codes to perform identity recognition.
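- As an illustration of the matching step, binary feature codes (for example, iris codes) are commonly compared by fractional Hamming distance; the code length and the 0.32 acceptance threshold below are assumptions, not values from this specification:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of differing bits between two binary feature codes."""
    return float(np.count_nonzero(code_a != code_b)) / code_a.size

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.32) -> str | None:
    """Match a probe feature code against pre-stored codes; return the best
    identity if its distance is below the acceptance threshold."""
    best_id, best_d = None, 1.0
    for identity, stored in gallery.items():
        d = hamming_distance(probe, stored)
        if d < best_d:
            best_id, best_d = identity, d
    return best_id if best_d < threshold else None
```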
- the storage module 511 may be used to store the image data collected by the imaging device, the identity information of the target to be identified, the image processing model and/or algorithm, etc.
- the storage module 511 may store the first image collected by the first imaging device and the second image collected by the second imaging device.
- the storage module 511 may store image features of at least a part of a plurality of pre-collected targets for identity recognition.
- the storage module 511 may store algorithms such as image preprocessing and target detection technology.
- the storage module 511 includes an internal storage device, an external storage device, and the like.
- The possible beneficial effects of the embodiments of this specification include, but are not limited to: (1) the target to be recognized can be determined based on the depth information of the candidate targets, and, based on the depth information of at least a part of the target to be recognized (for example, human eyes), the third image of the specified target can be selected from the images taken by the third imaging device (for example, an iris camera) for identity recognition (for example, iris recognition), so as to avoid false collection or collection of objects that should not be collected, improve the quality and efficiency of image collection, and further improve the speed and accuracy of identity recognition based on biological characteristics such as the iris and eye patterns; (2) the number of image sensors in the third imaging device can be increased, or the image sensor in the third imaging device (for example, an iris camera) can be rotated, to change the vertical or horizontal FOV (for example, a larger vertical FOV can be obtained to cover people of different heights), so that no additional pitch-angle adjustment mechanism is required; (3) the second imaging device can use a structured-light depth camera or a TOF camera, and the depth information of at least a part of the target to be recognized (for example, human eyes) determined based on the image collected by the second imaging device can determine the distance between that part and the third imaging device (for example, an iris camera); the third imaging device can then automatically focus on at least a part of the target to be recognized according to this distance, so that fast and accurate auto-focusing can be realized and the quality of the image collected by the third imaging device can be improved; moreover, the third imaging device adopts auto-focusing realized by a voice coil motor, which avoids using a complicated stepping motor to drive a mechanical structure to achieve focusing.
- the possible beneficial effects may be any one or a combination of the above, or any other beneficial effects that may be obtained.
- The system and its modules shown in Fig. 5 can be implemented in various ways.
- the system and its modules may be implemented by hardware, software, or a combination of software and hardware.
- the hardware part can be implemented using dedicated logic;
- The software part can be stored in a memory and executed by an appropriate instruction execution system, such as a microprocessor or dedicated design hardware. Those skilled in the art can understand that the above methods and systems may be implemented using computer-executable instructions and/or processor control codes, provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, on a programmable memory such as a read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
- The system and its modules in this specification can be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (for example, firmware).
- It should be noted that the above description of the modules in the identity recognition device 500 is only for convenience of description and does not limit this specification to the scope of the embodiments mentioned. It can be understood that, for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine the various modules, or to form a subsystem connected with other modules, without departing from this principle. For example, in some embodiments, the modules disclosed in Fig. 5 may be different modules in one system, or one module may implement the functions of two or more of the modules described above. For example, the depth information extraction module and the to-be-recognized target determination module may be integrated into one module. Such variations are within the protection scope of this specification.
- In some embodiments, the computer storage medium may contain a propagated data signal containing computer program code, for example in baseband or as part of a carrier wave.
- the propagated signal may have multiple manifestations, including electromagnetic forms, optical forms, etc., or a suitable combination.
- The computer storage medium may be any computer-readable medium other than a computer-readable storage medium, and the medium may be connected to an instruction execution system, apparatus, or device to realize communication, propagation, or transmission of the program for use.
- the program code located on the computer storage medium can be transmitted through any suitable medium, including radio, cable, fiber optic cable, RF, or similar medium, or any combination of the above medium.
- The computer program code required for the operation of each part of this specification can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP and ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
- the program code can be run entirely on the user's computer, or run as an independent software package on the user's computer, or partly run on the user's computer and partly run on a remote computer, or run entirely on the remote computer or server.
- The remote computer can be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or used in a cloud computing environment, or provided as a service, for example, software as a service (SaaS).
- In some embodiments, numbers describing the quantities of components and attributes are used. It should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about", "approximately" or "substantially". Unless otherwise stated, "about", "approximately" or "substantially" indicates that the number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximate values, which may change according to the required characteristics of individual embodiments. In some embodiments, the numerical parameters should take into account the prescribed significant digits and adopt a general method of digit retention. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of this specification are approximate values, in specific embodiments such numerical values are set as accurately as feasible.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The present disclosure relates to an identity recognition method. The method may comprise the following steps: obtaining a first image acquired by a first imaging device, the first image comprising one or more candidate targets; obtaining a second image acquired by a second imaging device, the second image comprising depth information of at least one candidate target among the one or more candidate targets; extracting depth information of the one or more candidate targets from the second image according to the first image and the second image; determining, according to the depth information of the one or more candidate targets, at least one candidate target among the one or more candidate targets as a target to be recognized; obtaining, according to the depth information of at least a part of the target to be recognized, a third image acquired by a third imaging device, the third image comprising the at least part of the target to be recognized; and recognizing, according to the third image, identity information of the target to be recognized.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010277300.XA CN111191644B (zh) | 2020-04-10 | 2020-04-10 | 身份识别方法、系统及装置 |
CN202010277300.X | 2020-04-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021204267A1 true WO2021204267A1 (fr) | 2021-10-14 |
Family
ID=70708731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/086266 WO2021204267A1 (fr) | 2020-04-10 | 2021-04-09 | Reconnaissance d'identité |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111191644B (fr) |
WO (1) | WO2021204267A1 (fr) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111191644B (zh) * | 2020-04-10 | 2020-10-20 | 支付宝(杭州)信息技术有限公司 | 身份识别方法、系统及装置 |
CN111461092B (zh) * | 2020-06-19 | 2020-10-02 | 支付宝(杭州)信息技术有限公司 | 一种刷脸测温及核身的方法、装置和设备 |
CN113722692B (zh) * | 2021-09-07 | 2022-09-02 | 墨奇科技(北京)有限公司 | 身份识别的装置及其方法 |
CN116843731A (zh) * | 2022-03-23 | 2023-10-03 | 腾讯科技(深圳)有限公司 | 对象识别方法以及相关设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104732210A (zh) * | 2015-03-17 | 2015-06-24 | 深圳超多维光电子有限公司 | 目标人脸跟踪方法及电子设备 |
US9934436B2 (en) * | 2014-05-30 | 2018-04-03 | Leidos Innovations Technology, Inc. | System and method for 3D iris recognition |
CN109753926A (zh) * | 2018-12-29 | 2019-05-14 | 深圳三人行在线科技有限公司 | 一种虹膜识别的方法和设备 |
CN110472582A (zh) * | 2019-08-16 | 2019-11-19 | 腾讯科技(深圳)有限公司 | 基于眼部识别的3d人脸识别方法、装置和终端 |
CN111191644A (zh) * | 2020-04-10 | 2020-05-22 | 支付宝(杭州)信息技术有限公司 | 身份识别方法、系统及装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050084137A1 (en) * | 2002-01-16 | 2005-04-21 | Kim Dae-Hoon | System and method for iris identification using stereoscopic face recognition |
CN102855471B (zh) * | 2012-08-01 | 2014-11-26 | 中国科学院自动化研究所 | 远距离虹膜智能成像装置及方法 |
CN105574525B (zh) * | 2015-12-18 | 2019-04-26 | 天津中科虹星科技有限公司 | 一种复杂场景多模态生物特征图像获取方法及其装置 |
- 2020-04-10 CN CN202010277300.XA patent/CN111191644B/zh active Active
- 2021-04-09 WO PCT/CN2021/086266 patent/WO2021204267A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN111191644B (zh) | 2020-10-20 |
CN111191644A (zh) | 2020-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021204267A1 (fr) | Reconnaissance d'identité | |
US11048953B2 (en) | Systems and methods for facial liveness detection | |
CN106446873B (zh) | 人脸检测方法及装置 | |
TWI766201B (zh) | 活體檢測方法、裝置以及儲存介質 | |
US9602783B2 (en) | Image recognition method and camera system | |
US11227149B2 (en) | Method and apparatus with liveness detection and object recognition | |
US9626553B2 (en) | Object identification apparatus and object identification method | |
WO2016086343A1 (fr) | Système et procedé d'identification personnelle basés sur des informations biométriques multimodales | |
CN109583304A (zh) | 一种基于结构光模组的快速3d人脸点云生成方法及装置 | |
CN112052831B (zh) | 人脸检测的方法、装置和计算机存储介质 | |
CN106682620A (zh) | 人脸图像采集方法及装置 | |
WO2016010721A1 (fr) | Analyse d'œil multispectrale pour une authentification d'identité | |
WO2016010720A1 (fr) | Analyse d'œil multispectrale pour une authentification d'identité | |
WO2016010724A1 (fr) | Analyse multispectrale de l'œil pour l'authentification d'identité | |
CN104680128B (zh) | 一种基于四维分析的生物特征识别方法和系统 | |
US9449217B1 (en) | Image authentication | |
US11080557B2 (en) | Image authentication apparatus, method, and storage medium using registered image | |
WO2016086341A1 (fr) | Système et procédé pour l'acquisition d'information biométrique multimodale | |
US10915739B2 (en) | Face recognition device, face recognition method, and computer readable storage medium | |
JP6157165B2 (ja) | 視線検出装置及び撮像装置 | |
WO2018147059A1 (fr) | Dispositif de traitement d'image, procédé de traitement d'image, et programme | |
TWI509466B (zh) | 物件辨識方法與裝置 | |
US9282237B2 (en) | Multifocal iris recognition device | |
CN115170832A (zh) | 一种基于可见光单图像的弱纹理表面微结构特征提取方法 | |
KR101053253B1 (ko) | 3차원 정보를 이용한 얼굴 인식 장치 및 방법 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21783886; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21783886; Country of ref document: EP; Kind code of ref document: A1