WO2018001371A1 - Identity recognition method and apparatus (一种身份识别方法及装置) - Google Patents
- Publication number
- WO2018001371A1 (application PCT/CN2017/091264)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- identified
- target point
- identity information
- information database
- Prior art date
Classifications
- G06V40/197 — Eye characteristics: matching; classification
- G06V40/193 — Eye characteristics: preprocessing; feature extraction
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V40/45 — Spoof detection: detection of the body part being alive
- G06V40/67 — Assisting the user to position a body part for biometric acquisition by interactive indications to the user
- G06T2207/30201 — Subject of image: face
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V40/20 — Movements or behaviour, e.g. gesture recognition
Definitions
- The present invention relates to the field of identity recognition technologies, and in particular to an identity recognition method and apparatus.
- The iris is the annular part between the black pupil and the white sclera that forms during fetal development. Each person's iris structure is different and hardly changes throughout life, so the iris is widely used for identification in various environments. However, a forged iris is often attached to the eye to deceive such systems; therefore, when using the iris for identification, it is necessary to detect whether the iris belongs to a living body.
- In the related art, optical devices are added and a light source irradiates the human eye to obtain a sequence of reactions of the iris and eyeball under light excitation, such as pupil scaling or light-reflection imaging; the obtained sequence is compared with a preset reaction-sequence threshold to determine whether the object is a living body, based on whether the sequence falls within the threshold.
- However, pupil scaling under light excitation is also affected by mood, color, external light sources, and individual differences, which easily leads to judgment errors, and the preset reaction-sequence threshold is difficult to adapt to practical applications.
- Moreover, a film placed over the human eye can easily fool the optical device; therefore, the adaptability and anti-counterfeiting performance of this approach are poor.
- The purpose of the embodiments of the present invention is to provide an identity recognition method and apparatus that solve the related-art problems that the living-body judging method based on optical excitation is prone to judgment errors and has poor adaptability and anti-counterfeiting performance.
- In a first aspect, an embodiment of the present invention provides an identity recognition method, where the method includes: collecting an eye image while the user to be identified gazes at a target point; identifying the user to be identified according to the eye image and a pre-established user identity information database; and performing living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- The embodiment of the present invention provides the first possible implementation manner of the foregoing first aspect, wherein identifying the user to be identified according to the eye image and the pre-established user identity information database includes: extracting eye features from the eye image, the eye features including an iris feature and an eye movement feature; determining whether the iris feature is included in the user identity information database; and, if so, obtaining the identity information and calibration data corresponding to the iris feature from the user identity information database.
- The embodiment of the present invention provides the second possible implementation manner of the foregoing first aspect, wherein performing living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result includes:
- the theoretical gaze point coordinates of the user to be identified are calculated according to the eye movement feature and the acquired calibration data;
- The embodiment of the present invention provides a third possible implementation manner of the foregoing first aspect, wherein performing living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result includes:
- the calibration data of the user to be identified is calculated according to the eye movement feature and the coordinates of the target point;
- the calibration data of the user to be identified is compared with the acquired calibration data to determine whether the user to be identified is a living user.
- The embodiment of the present invention provides the fourth possible implementation manner of the foregoing first aspect, wherein performing living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result includes:
- the calibration data of each user is retrieved from the user identity information database, and the theoretical fixation point coordinates are calculated according to the eye movement feature and the retrieved calibration data of each user;
- the coordinates of the theoretical fixation point are compared with the coordinates of the target point to determine whether the user to be identified is a living user.
- the embodiment of the present invention provides the fifth possible implementation manner of the foregoing first aspect, wherein, before collecting the eye image when the user to be identified is gazing at the target point, the method further includes:
- the target point is randomly generated on the display screen in chronological order, and a target point is displayed on the display screen at each time and the position of the target point is random;
- alternatively, target points are generated on the display screen in chronological order and according to a preset position sequence, where one target point is displayed on the display screen at each time and its position is set by the preset position sequence.
- an embodiment of the present invention provides an identity recognition apparatus, where the apparatus includes:
- An acquisition module configured to collect an eye image when the user to be identified looks at the target point
- An identity recognition module configured to identify a user to be identified according to an eye image and a pre-established user identity information database
- the living body detecting module is configured to perform living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identity recognition result.
- the identity recognition module includes:
- An extracting unit configured to extract an eye feature from the eye image, the eye feature including an iris feature and an eye movement feature;
- a first determining unit configured to determine whether an iris feature is included in the user identity information database
- an obtaining unit configured to acquire identity information and calibration data corresponding to the iris feature from the user identity information database in a case where the iris feature is included in the user identity information database.
- the embodiment of the present invention provides the second possible implementation manner of the foregoing second aspect, wherein the living body detection module includes:
- a calculating unit configured to calculate a theoretical gaze point coordinate of the user to be identified according to the eye movement feature and the acquired calibration data when the identification result indicates that the user identity information database includes the iris feature;
- the second determining unit is configured to compare the coordinates of the theoretical fixation point with the coordinates of the target point, and determine whether the user to be identified is a living user.
- the embodiment of the present invention provides a third possible implementation manner of the foregoing second aspect, wherein the living body detection module includes:
- a calculating unit configured to calculate calibration data of the user to be identified according to the eye movement feature and the coordinates of the target point when the identification result indicates that the user identity information database includes the iris feature;
- the second determining unit is configured to compare the calibration data of the user to be identified with the acquired calibration data, and determine whether the user to be identified is a living user.
- the embodiment of the present invention provides the fourth possible implementation manner of the foregoing second aspect, wherein the living body detection module includes:
- a calculating unit configured to: when the identification result indicates that the user identity information database includes an iris feature, retrieve calibration data of each user from the user identity information database; according to the eye movement feature and the calibration data of each user retrieved, Calculate the coordinates of the theoretical fixation point;
- the second determining unit is configured to compare the coordinates of the theoretical fixation point with the coordinates of the target point, and determine whether the user to be identified is a living user.
- the embodiment of the present invention provides the fifth possible implementation manner of the foregoing second aspect, wherein the device further includes:
- a first generating module configured to randomly generate a target point on the display screen in chronological order, and display a target point on the display screen at each time and the position of the target point is random;
- the second generating module is configured to generate a target point on the display screen in chronological order and according to a preset position sequence, where one target point is displayed on the display screen at each time and its position is set by the preset position sequence.
- an embodiment of the present invention provides a storage medium, where the storage medium includes a stored program, where the device where the storage medium is located is controlled to execute the foregoing identity recognition method when the program is running.
- an embodiment of the present invention provides a processor, where a processor is configured to run a program, where the program is executed to execute the foregoing identity recognition method.
- An embodiment of the present invention provides a terminal, including: an acquisition module, configured to collect an eye image while a user to be identified gazes at a target point; an identity recognition module, configured to identify the user to be identified according to the eye image and a pre-established user identity information database; a living body detection module, configured to perform living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identity recognition result; and a processor configured to run a program, where the program, when running, performs the above identity recognition method on data output from the acquisition module, the identity recognition module, and the living body detection module.
- An embodiment of the present invention provides a terminal, including: an acquisition module, configured to collect an eye image while a user to be identified gazes at a target point; an identity recognition module, configured to identify the user to be identified according to the eye image and a pre-established user identity information database; a living body detection module, configured to perform living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identity recognition result; and a storage medium for storing a program, where the program, when running, performs the above identity recognition method on data output from the acquisition module, the identity recognition module, and the living body detection module.
- The identity recognition method and apparatus provided by the embodiments of the present invention include: collecting an eye image while the user to be identified gazes at a target point; identifying the user to be identified according to the eye image and a pre-established user identity information database; and performing living-body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- the eye tracking technology is combined with the identity recognition technology.
- FIG. 1 is a flowchart of an identity recognition method according to Embodiment 1 of the present invention.
- FIG. 2 is a schematic structural diagram of an identity recognition apparatus according to Embodiment 2 of the present invention.
- an embodiment of the present invention provides an identification method and apparatus, which are described below by way of embodiments.
- The embodiment of the invention provides an identity recognition method that combines eye tracking technology with iris recognition technology. While eye images are collected as the user gazes at target points, the eye tracking technology extracts the iris feature and the eye movement feature of the user to be identified; iris recognition is performed on the user, the calibration data is retrieved, and, combined with the coordinates of the target points, it is determined whether the user to be identified is a living user. Only when iris recognition succeeds and the user to be identified is determined to be a living user is identification of the user deemed successful.
- the identification method provided by the embodiment of the invention has high accuracy, adaptability and anti-counterfeiting performance.
- The identity recognition method provided by the embodiment of the present invention can be applied to scenarios such as bank access control, prison access control, and password payment.
- First, an identity information database of the users authorized in the application scenario needs to be established. For example, if the above identification method is applied to a bank's access control, an identity information database of bank employees must be established to identify them; when an employee's identification passes, the bank door opens and the employee can enter. Similarly, if the method is applied to prison access control, an identity information database of the prison guards must be established.
- Establishing the user identity information database includes the following steps: acquiring each user's iris feature, calibration data, and identity information; and storing the user's iris feature, calibration data, identity information, and their correspondence in the user identity information database.
- the iris features include features such as spots, filaments, crowns, stripes, and crypts of the user's iris.
- The calibration data is individual user feature data calculated from the eye movement characteristics and fixation-point coordinates obtained while the user gazes at one or more sets of specific target points.
- The specific target points may be five target points displayed at the four corners and the center of the display screen, or nine target points displayed at the four corners, the midpoints of the four sides, and the center of the display screen; these specific target points are recorded as calibration points.
- The calibration points can also be target points at other positions on the display screen, and the number and positions of the calibration points can be set according to the actual application scenario.
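The five-point and nine-point calibration layouts described above can be sketched as follows. This is a minimal illustration; the normalized-coordinate convention and the function name are assumptions, not part of the patent.

```python
# Sketch of the calibration-point layouts described above.
# Coordinates are assumed to be normalized screen coordinates in [0, 1].

def calibration_points(mode="nine"):
    """Return calibration points as (x, y) tuples.

    mode="five": the four corners plus the screen center.
    mode="nine": corners, midpoints of the four sides, and the center.
    """
    corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    center = [(0.5, 0.5)]
    if mode == "five":
        return corners + center
    edges = [(0.5, 0.0), (0.5, 1.0), (0.0, 0.5), (1.0, 0.5)]
    return corners + edges + center
```

Other positions and counts, as the text notes, can be substituted per application scenario.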
- The calibration points are generated one after another on the display screen in chronological order, with one calibration point displayed at a time; the calibration eye images captured while the user gazes at each calibration point are collected, and the user's iris feature is extracted from the collected calibration eye images.
- The iris features include, but are not limited to, features such as spots, filaments, crowns, stripes, and crypts.
- The eye movement features are eye features acquired while the user gazes at the calibration points, including but not limited to the user's eye corners, the pupil center position, the pupil radius, and features such as the Purkinje image formed by corneal reflection.
- The user's calibration data is obtained from the eye movement features and calibration-point information when the user gazes at one or more sets of calibration points; the calibration data includes, but is not limited to, eye feature data such as the angle between the user's visual axis and optical axis.
- The user's identity information includes, but is not limited to, the user's name, ID number, gender, and position. The correspondence among the user's identity information, iris feature, and calibration data is established, and the identity information, iris feature, calibration data, and their correspondence are stored in the user identity information database.
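One way to picture the database described above is the sketch below, which stores each user's iris feature, calibration data, and identity information together with their correspondence. All class, field, and method names are illustrative assumptions, not the patent's schema, and the equality-based lookup stands in for a real iris-similarity metric.

```python
from dataclasses import dataclass

# Illustrative record layout for the user identity information database.
@dataclass
class UserRecord:
    name: str
    id_number: str
    iris_features: list      # e.g. encoded spot/filament/crown/stripe/crypt features
    calibration_data: dict   # e.g. {"kappa_angle_deg": ...}, the visual/optical axis angle

class IdentityDatabase:
    def __init__(self):
        self._records = []

    def enroll(self, record: UserRecord):
        # Store the record; the correspondence between identity, iris feature,
        # and calibration data is kept by holding them in one record.
        self._records.append(record)

    def find_by_iris(self, iris_features, match=lambda a, b: a == b):
        # A real system would use a similarity metric; equality stands in here.
        for rec in self._records:
            if match(rec.iris_features, iris_features):
                return rec
        return None
```

A lookup then yields both the identity information and the calibration data in one step, which is exactly what the identification stage below needs.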
- During identification, the user to be identified needs to gaze at target points. The target points may be a series of target points displayed on a display screen, or target points displayed at a fixed position, for example on a wall or a glass door, or even an object at a certain location.
- The display screen can also display a virtual keyboard, which may be a numeric keypad consisting of the ten digits 0-9, an alphabetic keyboard composed of the 26 English letters, or an alphanumeric keyboard combining both; the arrangement of letters or numbers on the virtual keyboard is random. The user to be identified may gaze at keys on the virtual keyboard to enter a pre-agreed password: for example, if the pre-agreed password is 123, the user gazes at the numeric keys 1, 2, 3 in order. Alternatively, the user may follow key indications on the virtual keyboard, for example gazing in sequence at the 3, 6, and 9 numeric keys, or at letter keys such as P, M, and V; the keys to gaze at may also be indicated by voice, or shown on the display screen as the letters or numbers the user needs to look at.
- Certain graphics may also be displayed on the display screen, and the user to be identified gazes at the indicated parts of the graphics.
- When the target points gazed at by the user to be identified are target points displayed on the display screen, the target points need to be generated on the display screen.
- Generating target points on the display screen includes the following two situations: target points are randomly generated in chronological order, with one target point displayed on the screen at a time at a random position; or target points are generated in chronological order according to a preset position sequence, with one target point displayed at a time at a position set by the preset position sequence.
- The preset position sequence is one of multiple preset sequences of positions. Each time a user is identified, one position sequence is randomly selected, and target points are generated at the corresponding positions on the display screen according to the selected sequence.
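The two target-generation modes described above can be sketched as follows. Normalized [0, 1] coordinates and the function names are assumptions; the per-session random choice of one preset sequence follows the text.

```python
import random

# Mode 1: target points at fully random positions, one per time step.
def random_targets(n, rng=random):
    return [(rng.random(), rng.random()) for _ in range(n)]

# Mode 2: positions drawn from one of several preset position sequences,
# with one sequence picked at random for each identification run.
def preset_targets(preset_sequences, rng=random):
    return list(rng.choice(preset_sequences))

# Example preset position sequences (illustrative values).
sequences = [
    [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5)],
    [(0.9, 0.9), (0.1, 0.9), (0.5, 0.5)],
]
```

Either mode yields the target coordinates that the living-body detection step later compares against the computed gaze point.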
- the identity recognition method provided by the embodiment of the present invention is used to identify the user, as shown in FIG. 1 , and the specific steps include S110-S130.
- S110 Collect an eye image when the user to be identified looks at the target point.
- Identification may start when the user to be identified clicks a start button, or may start automatically when the eyes of the user to be identified enter the image-collection area.
- The user's eye images can be collected at a certain frame rate, so that multiple eye images are acquired while the user to be identified gazes at one target point.
- After the eye image is collected, it is determined whether the collected image includes the eye region of the user to be identified. If not, the eyes of the user to be identified may not be aligned with the image capture device, and the user is prompted to adjust position so the image can be re-acquired. If the collected image includes the eye region of the user to be identified, identification of the user proceeds.
- The collected eye image may also be pre-processed, for example by interference suppression.
- S120 Identify the user to be identified according to the eye image and the pre-established user identity information database.
- The process of identifying the user to be identified is actually a process of performing iris recognition on the user.
- Identifying the user to be identified includes the following steps: extracting eye features from the eye image, the eye features including an iris feature and an eye movement feature; determining whether the iris feature is included in the user identity information database; and, if so, obtaining the identity information and calibration data corresponding to the iris feature from the user identity information database.
- Eye movement features of the user to be identified, such as the pupil center position, pupil radius, eye corners, and the Purkinje image formed by corneal reflection, as well as iris features such as the spots, filaments, crowns, and crypts of the user's eyes, are extracted from the eye image.
- If the user identity information database contains the extracted iris feature, the identity of the user to be identified is determined; otherwise, recognition fails. In that case, the eye image captured while the user to be identified gazes at the target point can be re-acquired and identification attempted again.
- The iris feature extracted from the eye image of the user to be identified is matched against the iris features stored in the user identity information database to determine whether the database contains an iris feature consistent with the extracted one. If such an iris feature exists, iris recognition succeeds, and the identity information and calibration data corresponding to that iris feature are retrieved from the user identity information database; the retrieved identity information is determined to be the identity information of the user to be identified.
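The matching step above is not specified in detail by the text. A common realization for binary iris codes, shown here purely as an assumed stand-in, is a normalized Hamming distance with an acceptance threshold; the threshold value and data layout are illustrative.

```python
# Assumed iris-matching sketch: normalized Hamming distance over binary codes.

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length binary codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def match_iris(probe, gallery, threshold=0.32):
    """Return the stored entry closest to the probe if it is close enough,
    otherwise None (i.e. the user is not in the database)."""
    best = min(gallery, key=lambda entry: hamming_distance(probe, entry["code"]))
    if hamming_distance(probe, best["code"]) <= threshold:
        return best
    return None
```

A successful match plays the role described above: it yields the stored entry, from which identity information and calibration data are taken.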
- If the user identity information database does not store the iris feature of the user to be identified, that is, the user to be identified is an unauthorized user, identification of the user fails.
- Such a user can undergo identity verification by an administrator; after the verification passes, the user's iris feature, calibration data, and identity information are entered into the database.
- Because multiple eye images of the user to be identified are collected while the user gazes at the target points, iris recognition may be performed multiple times. As long as iris recognition succeeds once among these attempts, it can be determined that iris recognition of the user to be identified succeeds, that is, identity information and calibration data corresponding to the iris feature exist in the user identity information database.
- Alternatively, q iris features are randomly selected and iris recognition is performed q times, where q is smaller than the number of target points displayed on the display screen. During the q recognition attempts, as long as recognition succeeds m times, it can be judged that iris recognition of the user to be identified succeeds, where m can be calculated by formula (1) when q is odd or by formula (2) when q is even, m being the number of successful iris recognitions and q the number of recognition attempts.
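Formulas (1) and (2) themselves do not appear in this text. The sketch below uses a simple majority rule, m = q // 2 + 1, only as an assumed stand-in for the elided formulas; the actual patent formulas may differ.

```python
# Assumed m-of-q decision rule (simple majority) standing in for the
# patent's formulas (1) and (2), which are not reproduced in this text.

def required_successes(q):
    """Smallest m forming a strict majority of q attempts."""
    return q // 2 + 1

def recognition_passes(results):
    """results: list of booleans, one per iris-recognition attempt."""
    return sum(results) >= required_successes(len(results))
```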
- S130 Perform a living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identity recognition result.
- the identity recognition method provided by the embodiment of the present invention also needs to perform living body detection on the user to be identified.
- The coordinates of the target point are the coordinates of the target point currently being displayed, and they can be calculated from the relative position of the target point and the calibration points or other specific points or objects.
- Performing living-body detection on the user to be identified includes the following two cases:
- In the first case, when the identification result indicates that the user identity information database contains the iris feature, the theoretical gaze point coordinates of the user to be identified are calculated according to the eye movement feature and the acquired calibration data; the coordinates of the theoretical gaze point are compared with the coordinates of the target point to determine whether the user to be identified is a living user.
- For a given user, the angle between the visual axis and the optical axis of the eye is constant; therefore, this angle is used as the user's calibration data, and the pupil position, pupil radius, and the Purkinje image formed by corneal reflection serve as the extracted eye movement features of the user.
- The theoretical gaze point coordinates are calculated from eye movement features such as pupil position, pupil radius, and corneal reflection, together with the visual-axis/optical-axis angle calibration data corresponding to the iris feature of the user to be identified. The calculated theoretical gaze point coordinates are compared with the coordinates of the target point to determine whether the theoretical gaze point falls within the range of the target point, that is, a preset area centered on the target point that serves as an acceptable error region. If the theoretical gaze point falls within this area and remains there for a preset length of time, the target point is recognized successfully. After a target point is recognized, the next target point is generated on the display screen and recognized in the same way; if the user to be identified successfully recognizes a preset number of consecutive target points, the user is determined to be a living user. The preset number is any number of target points greater than or equal to 3; the embodiment of the present invention does not limit its specific value.
- the preset time length is greater than or equal to the response time of the human eye following the target point, and is less than or equal to the display time of the target point.
- the preset time length is greater than or equal to 0.1 second.
- the preset time length is between 0.5 and 1.5 seconds.
- the preset ratio may be set and modified according to the accuracy requirement of the identification. Generally, the preset ratio is greater than 50%; preferably, it is between 50% and 70%. The embodiment of the present invention does not limit the specific value of the preset ratio.
- the calibration data of the user to be identified is calculated based on the eye movement features and the coordinates of the target point; the calculated calibration data is compared with the acquired calibration data to determine whether the user to be identified is a live user.
- the angle between the visual axis of the eye and the optical axis is constant. Therefore, this angle is used as the user's calibration data, and the pupil position, the pupil radius, and the Purkinje spot formed by the corneal reflection serve as the extracted eye movement features of the user.
- the target point on the display corresponds to the actual gaze point of the user to be identified. The coordinates of the target point can be calculated from its position relative to a calibration point or another specific point or object.
- calibration data such as the angle between the visual axis and the optical axis of the user to be identified is calculated from the extracted eye movement features, such as the pupil position, the pupil radius, and the corneal reflection, and from the user's actual gaze point coordinates, that is, the coordinates of the target point.
- the calibration data corresponding to the iris feature of the user to be identified is retrieved from the pre-established user identity information database, and the calculated calibration data is compared with the retrieved calibration data. If the calculated calibration data falls within the error range of the retrieved calibration data and remains so for a preset length of time, the user to be identified is considered to be gazing at the target point on the display screen, that is, the target point is successfully identified.
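- The comparison in this second scheme can be sketched as below, under the assumption that the calibration datum is the visual-axis/optical-axis angle and that the tolerance is freely chosen; both are illustrative values, not taken from the patent.

```python
KAPPA_TOLERANCE = 0.5  # hypothetical acceptable error range, in degrees

def calibration_matches(computed_kappa, stored_kappa, tol=KAPPA_TOLERANCE):
    """The target counts as identified when the calibration value computed
    from the current eye image and the target coordinates stays within the
    error range of the calibration retrieved for the claimed identity."""
    return abs(computed_kappa - stored_kappa) <= tol
```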
- after the target point is successfully identified, the next target point is displayed on the display screen and identified by the same method. If the user to be identified successfully identifies a preset number of consecutive target points, the user is determined to be a live user. The preset number of target points is any number greater than or equal to 3; the embodiment of the present invention does not limit the specific value.
- the preset time length is greater than or equal to the response time of the human eye following the target point, and is less than or equal to the display time of the target point.
- the preset time length is greater than or equal to 0.1 second.
- the preset time length is between 0.5 and 1.5 seconds.
- the preset ratio may be set and modified according to the accuracy requirement of the identification. Generally, the preset ratio is greater than 50%; preferably, it is between 50% and 70%. The embodiment of the present invention does not limit the specific value of the preset ratio.
- alternatively, instead of retrieving only the calibration data corresponding to the iris feature of the user to be identified, the calibration data of all users may be retrieved from the user identity information database to perform liveness detection on the user.
- the calibration data of each user is retrieved from the user identity information database; theoretical gaze point coordinates are calculated according to the eye movement features and each retrieved user's calibration data; and the theoretical gaze point coordinates are compared with the coordinates of the target point to determine whether the user to be identified is a live user.
- the angle between the visual axis of the eye and the optical axis is constant. Therefore, this angle is used as the user's calibration data, and the pupil position, the pupil radius, and the Purkinje spot formed by the corneal reflection serve as the extracted eye movement features of the user.
- the eye movement features of the user to be identified, such as the pupil position, the pupil radius, and the Purkinje spot formed by the corneal reflection, are combined with each user's calibration data in the pre-established user identity information database, such as the angle between the visual axis and the optical axis, to calculate a plurality of theoretical gaze point coordinates. These theoretical gaze point coordinates are compared with the coordinates of the target point to determine whether any of them falls within the range of the target point, where the range is a preset area centered on the target point, representing an acceptable error region.
- if one does, the target point is deemed successfully identified, and at the same time the calibration data used in calculating that theoretical gaze point coordinate is determined to be the calibration data of the user to be identified.
- after the target point is successfully identified, the next target point is displayed on the display screen and identified by the same method. If the user to be identified successfully identifies a preset number of consecutive target points, the user to be identified is determined to be a live user, where the preset number of target points is any number greater than or equal to 3.
- the embodiment of the present invention does not limit the specific number of the preset number.
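- The per-user search in this third scheme can be sketched as follows; `compute_gaze` is an assumed helper standing in for the gaze-estimation model, and the error radius is an illustrative value.

```python
import math

def identify_by_gaze(eye_features, target, all_calibrations, compute_gaze, radius=40):
    """Compute a theoretical gaze point from the current eye movement
    features with EACH enrolled user's calibration data; the user whose
    calibration places the gaze inside the target's error region is taken
    as the user being identified.  Returns None when no calibration fits."""
    for user_id, calibration in all_calibrations.items():
        gaze = compute_gaze(eye_features, calibration)
        if math.dist(gaze, target) <= radius:
            return user_id   # target identified; this calibration matched
    return None
```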
- eye images of the user to be identified are continuously collected at a certain frame rate. If a preset ratio of the theoretical gaze points calculated from the eye images collected within the preset time length falls within the range of the target point, the gaze is considered to have persisted for the preset time length and the target point is successfully identified.
- the preset time length is greater than or equal to the response time of the human eye following the target point, and is less than or equal to the display time of the target point.
- the preset time length is greater than or equal to 0.1 second.
- the preset time length is between 0.5 and 1.5 seconds.
- the preset ratio may be set and modified according to the accuracy requirement of the identification. Generally, the preset ratio is greater than 50%; preferably, it is between 50% and 70%. The embodiment of the present invention does not limit the specific value of the preset ratio.
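- The frame-ratio dwell test can be sketched as follows; the radius and ratio values are illustrative (the text only constrains the ratio to lie above 50%).

```python
import math

def dwell_succeeds(frame_gazes, target, radius=40, preset_ratio=0.6):
    """frame_gazes: theoretical gaze points computed from eye images
    sampled at a fixed frame rate over the preset time length.  The
    target is identified when at least `preset_ratio` of them fall
    inside the target's error region."""
    if not frame_gazes:
        return False
    hits = sum(1 for g in frame_gazes if math.dist(g, target) <= radius)
    return hits / len(frame_gazes) >= preset_ratio
```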
- the user is prompted to perform the identification again.
- the number of identifications of the user to be identified reaches a preset number of times, the identity of the user to be identified is locked.
- the preset number of times may be 3, or may be another value; the specific value may be set according to the actual situation, and the embodiment of the present invention does not limit it.
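- The retry-and-lock behaviour reads naturally as a small state machine; this sketch uses the example value of 3 attempts mentioned above.

```python
class IdentificationSession:
    """After a preset number of failed identifications the identity is
    locked: identification stops and the user is treated as unauthorized."""
    def __init__(self, max_attempts=3):
        self.max_attempts = max_attempts
        self.failures = 0
        self.locked = False

    def record_failure(self):
        """Register one failed identification; returns True once locked."""
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.locked = True
        return self.locked
```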
- when the identification of the user to be identified succeeds, the user's identity information and a prompt indicating successful identification are displayed on the display screen, or a voice prompt may be issued indicating that the identification of the user to be identified succeeded.
- when the identification of the user to be identified fails, a prompt indicating the failure and requesting re-identification is displayed on the display screen, or a voice prompt may be issued indicating that the identification of the user to be identified failed and that identification should be performed again.
- before the user to be identified can be identified, the user needs to gaze at calibration points on the display screen; the user's calibration data is calculated from these, and the calibration data, the user's iris features, the identity information, and their correspondence are stored in the user identity information database, completing the enrollment of the user's calibration data.
- liveness detection may then be performed on the user to be identified according to the eye movement features extracted from the collected eye images and the calibration data of the user to be identified.
- the identity recognition method provided by the embodiment of the present invention includes: collecting an eye image while the user to be identified gazes at a target point; identifying the user to be identified according to the eye image and a pre-established user identity information database; and performing liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- the method provided by the invention combines eye tracking technology with identity recognition technology. Only when both the liveness detection and the identification of the user to be identified succeed is the user deemed successfully identified; the accuracy of the identification is high, and the adaptability and anti-counterfeiting performance are excellent.
- the embodiment of the invention provides an identity recognition device.
- the device comprises:
- the collecting module 210 is configured to collect an eye image when the user to be identified looks at the target point;
- the identity recognition module 220 is configured to identify the user to be identified according to the eye image and the pre-established user identity information database;
- the living body detecting module 230 is configured to perform living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identity recognition result.
- the identity recognition device provided by the embodiment of the present invention can be applied to scenarios such as bank access control, prison access control, and password payment.
- an identity information database of the users authorized for the application scenario needs to be established. For example, if the above identification method is applied to a bank's access control, a database of the bank employees' identity information needs to be established for identifying bank employees; when a bank employee is successfully identified, the bank door opens and the employee can enter. If the above identification method is applied to prison access control, a database of the prison guards' identity information needs to be established.
- the foregoing collection module 210, identity recognition module 220, and living body detection module 230 may run in a computer terminal as part of the device, and the functions implemented by these modules may be performed by a processor in the computer terminal. The computer terminal may be a smart phone (such as an Android or iOS phone), a tablet, a PDA, a mobile Internet device (MID), a PAD, or another terminal device.
- the above-mentioned identity recognition module 220 identifies the user to be identified according to the eye image and the pre-established user identity information database; this is specifically implemented by an extraction unit, a first determination unit, and an acquisition unit, as follows:
- the extracting unit is configured to extract eye features from the eye image, the eye features including an iris feature and eye movement features; the first determining unit is configured to determine whether the user identity information database includes the iris feature; and the acquiring unit is configured to acquire, when the user identity information database includes the iris feature, the identity information and calibration data corresponding to the iris feature from the user identity information database.
- the iris features include, but are not limited to, features such as spots, filaments, crowns, stripes, and crypts; the eye movement features include, but are not limited to, features such as the user's eye corners, the pupil center position, the pupil radius, and the Purkinje spot formed by the corneal reflection.
- since multiple eye images of the user to be identified are collected while the user gazes at the target points, iris recognition may be performed multiple times. In the process of multiple iris recognitions, as long as iris recognition succeeds once, the iris recognition of the user to be identified can be determined to be successful, that is, identity information and calibration data corresponding to the iris feature exist in the user identity information database.
- alternatively, q iris features may be randomly selected and iris recognition performed q times, where q is smaller than the number of target points displayed on the display screen.
- whether the iris recognition is successful can then be determined by formula (1) or formula (2), where m is the number of successful iris recognitions and q is the number of iris recognitions performed; formula (1) applies when q is odd, and formula (2) applies when q is even.
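- Formulas (1) and (2) themselves are not reproduced in this text, so the exact thresholds are unknown. A common criterion with this odd/even split is a simple majority, which the sketch below assumes purely for illustration; the patent's actual formulas may differ.

```python
def iris_recognition_succeeds(m, q):
    """Hypothetical majority criterion standing in for formulas (1)/(2):
    m = number of successful iris recognitions, q = number performed.
    One branch per parity, mirroring the odd/even cases in the text."""
    if q % 2 == 1:               # q odd  -- stand-in for formula (1)
        return m >= (q + 1) // 2
    return m >= q // 2 + 1       # q even -- stand-in for formula (2)
```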
- the above-mentioned extracting unit, first determining unit, and acquiring unit may run in a computer terminal as part of the device, and the functions implemented by these units may be performed by a processor in the computer terminal; the computer terminal may be a smart phone (such as an Android or iOS phone), a tablet, a PDA, a mobile Internet device (MID), a PAD, or another terminal device.
- the living body detection module 230 performs liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result; this is specifically implemented by a calculation unit and a second determination unit, as follows:
- the calculation unit is configured to calculate the theoretical gaze point coordinates of the user to be identified according to the eye movement features and the acquired calibration data when the identification result indicates that the user identity information database includes the iris feature; the second determination unit is configured to compare the theoretical gaze point coordinates with the coordinates of the target point to determine whether the user to be identified is a live user. Alternatively, the calculation unit may be configured to calculate the calibration data of the user to be identified according to the eye movement features and the coordinates of the target point, and the second determination unit may be configured to compare the calibration data of the user to be identified with the acquired calibration data to determine whether the user to be identified is a live user. Alternatively, the calculation unit may be configured to retrieve the calibration data of each user from the user identity information database when the identification result indicates that the user identity information database includes the iris feature, and to calculate theoretical gaze point coordinates according to the eye movement features and each retrieved user's calibration data; the second determination unit may then be configured to compare the theoretical gaze point coordinates with the coordinates of the target point to determine whether the user to be identified is a live user.
- the identification of the user to be identified fails if the iris recognition and the liveness detection do not both succeed; in that case, the user is prompted to perform identification again. If the number of identification attempts of the user to be identified reaches a preset number of times, the identity of the user to be identified is locked, identification is no longer continued, and the user to be identified is determined to be an unauthorized user.
- the preset number of times may be 3, or may be another value; the specific value may be set according to the actual situation, and the embodiment of the present invention does not limit it.
- when the identification of the user to be identified succeeds, the user's identity information and a prompt indicating successful identification are displayed on the display screen, or a voice prompt may be issued indicating that the identification of the user to be identified succeeded.
- when the identification of the user to be identified fails, a prompt indicating the failure and requesting re-identification is displayed on the display screen, or a voice prompt may be issued indicating that the identification of the user to be identified failed and that a second identification should be performed.
- the above calculation unit and second determination unit may run in a computer terminal as part of the device, and the functions implemented by these units may be performed by a processor in the computer terminal; the computer terminal may be a smart phone (such as an Android or iOS phone), a tablet, a handheld computer, a mobile Internet device (MID), a PAD, or another terminal device.
- before the collection module 210 collects the eye image of the user gazing at a target point, a target point needs to be generated on the display screen; this is specifically implemented by a first generating module or a second generating module, as follows:
- the first generating module is configured to randomly generate target points on the display screen in chronological order, with one target point displayed on the display screen at each moment at a random position; alternatively, the second generating module is configured to generate target points on the display screen in chronological order according to a preset position sequence, with one target point displayed on the display screen at each moment at a position set by the preset position sequence.
- the preset position sequence is one of multiple preset position sequences. Each time a user to be identified is identified, a position sequence is randomly selected, and target points are generated at the corresponding positions on the display screen according to the selected sequence.
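- The two generating modules can be sketched as follows; the display dimensions and the shape of the preset sequences are assumptions for the example.

```python
import random

def random_targets(n, width, height):
    """First generating module (sketch): one target per moment, each at a
    random position on the display."""
    return [(random.uniform(0, width), random.uniform(0, height))
            for _ in range(n)]

def sequenced_targets(position_sequences):
    """Second generating module (sketch): randomly pick one of the preset
    position sequences for this identification run; targets then appear at
    its positions in chronological order."""
    return list(random.choice(position_sequences))
```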
- the first generating module and the second generating module may run in a computer terminal as part of the device, and the functions implemented by these modules may be performed by a processor in the computer terminal; the computer terminal may be a smart phone (such as an Android or iOS phone), a tablet, a handheld computer, a mobile Internet device (MID), a PAD, or another terminal device.
- the identity recognition device provided by the embodiment of the present invention combines eye tracking technology with identity recognition technology. Only when both the liveness detection and the identification of the user to be identified succeed is the user deemed successfully identified; the accuracy of the identification is high, and the adaptability and anti-counterfeiting performance are excellent.
- the identity recognition device provided by the embodiment of the present invention may be specific hardware on the device or software or firmware installed on the device.
- the implementation principle and the technical effects of the device provided by the embodiments of the present invention are the same as those of the foregoing method embodiments.
- a person skilled in the art can clearly understand that for the convenience and brevity of the description, the specific working processes of the foregoing system, the device and the unit can refer to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
- An embodiment of the present invention provides a storage medium, where the storage medium includes a stored program, wherein the device where the storage medium is located is controlled to execute the foregoing identity recognition method when the program is running.
- An embodiment of the present invention provides a processor, where a processor is configured to run a program, where the program is executed to execute the foregoing identity recognition method.
- the embodiment of the present invention provides a terminal, including: a collection module configured to collect an eye image while the user to be identified gazes at a target point; an identity recognition module configured to identify the user to be identified according to the eye image and the pre-established user identity information database; a living body detection module configured to perform liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result; and a processor that runs a program, where the program, when running, performs the above identification method on the data output from the collection module, the identity recognition module, and the living body detection module.
- the embodiment of the present invention provides another terminal, including: a collection module configured to collect an eye image while the user to be identified gazes at a target point; an identity recognition module configured to identify the user to be identified according to the eye image and the pre-established user identity information database; a living body detection module configured to perform liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result; and a storage medium configured to store a program, where the program, when running, performs the above identification method on the data output from the collection module, the identity recognition module, and the living body detection module.
- the various functional units provided by the embodiments of the present application may be operated in a mobile terminal, a computer terminal, or the like, or may be stored as part of a storage medium.
- embodiments of the present invention may provide a computer terminal, which may be any computer terminal device in a group of computer terminals.
- a computer terminal may also be replaced with a terminal device such as a mobile terminal.
- the computer terminal may be located in at least one network device of the plurality of network devices of the computer network.
- the computer terminal may execute program code for the following steps of the identity recognition method: collecting an eye image while the user to be identified gazes at a target point; identifying the user to be identified according to the eye image and the pre-established user identity information database; and performing liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- the computer terminal can include: one or more processors, memory, and transmission means.
- the memory can be used to store software programs and modules, such as the program instructions/modules corresponding to the identification method and device in the embodiments of the present invention; by running the software programs and modules stored in the memory, the processor executes various functional applications and data processing, that is, implements the above identification method.
- the memory may include a high speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory.
- the memory can further include memory remotely located relative to the processor, which can be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
- the above transmission device is for receiving or transmitting data via a network.
- Specific examples of the above network may include a wired network and a wireless network.
- the transmission device includes a Network Interface Controller (NIC) that can be connected to other network devices and routers via a network cable to communicate with the Internet or a local area network.
- the transmission device is a Radio Frequency (RF) module for communicating with the Internet wirelessly.
- the memory is used to store preset action conditions and information of the preset rights user, and an application.
- the processor can call the information and the application stored in the memory via the transmission device to execute the program code of the method steps of each of the alternative or preferred embodiments of the above method embodiments.
- the computer terminal can also be a smart phone (such as an Android or iOS phone), a tablet, a PDA, a mobile Internet device (MID), a PAD, or another terminal device.
- Embodiments of the present invention also provide a storage medium.
- the foregoing storage medium may be used to save the program code executed by the identity identification method provided by the foregoing method embodiment and the device embodiment.
- the foregoing storage medium may be located in any one of the computer terminal groups in the computer network, or in any one of the mobile terminal groups.
- the storage medium is configured to store program code for performing the following steps: collecting an eye image while the user to be identified gazes at a target point; identifying the user to be identified according to the eye image and the pre-established user identity information database; and performing liveness detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- the storage medium may also be configured to store program code for the various preferred or optional method steps provided by the identification method.
- the disclosed apparatus and method may be implemented in other manners.
- the device embodiments described above are merely illustrative.
- the division of units is only a logical functional division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interface, device or unit, and may be electrical, mechanical or otherwise.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in the embodiment provided by the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- if implemented in the form of a software functional unit and sold or used as a standalone product, the functions may be stored in a computer-readable storage medium.
- the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Claims (16)
- An identity recognition method, wherein the method comprises: collecting an eye image while a user to be identified gazes at a target point; identifying the user to be identified according to the eye image and a pre-established user identity information database; and performing living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- The method according to claim 1, wherein identifying the user to be identified according to the eye image and the pre-established user identity information database comprises: extracting eye features from the eye image, the eye features comprising an iris feature and eye movement features; determining whether the user identity information database includes the iris feature; and if so, acquiring the identity information and calibration data corresponding to the iris feature from the user identity information database.
- The method according to claim 2, wherein performing living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result comprises: when the identification result indicates that the user identity information database includes the iris feature, calculating theoretical gaze point coordinates of the user to be identified according to the eye movement features and the acquired calibration data; and comparing the theoretical gaze point coordinates with the coordinates of the target point to determine whether the user to be identified is a living user.
- The method according to claim 2, wherein performing living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result comprises: when the identification result indicates that the user identity information database includes the iris feature, calculating the calibration data of the user to be identified according to the eye movement features and the coordinates of the target point; and comparing the calibration data of the user to be identified with the acquired calibration data to determine whether the user to be identified is a living user.
- The method according to claim 2, wherein performing living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result comprises: when the identification result indicates that the user identity information database includes the iris feature, retrieving the calibration data of each user from the user identity information database; calculating theoretical gaze point coordinates according to the eye movement features and each retrieved user's calibration data; and comparing the theoretical gaze point coordinates respectively with the coordinates of the target point to determine whether the user to be identified is a living user.
- The method according to claim 1, wherein before collecting the eye image while the user to be identified gazes at the target point, the method further comprises: randomly generating target points on a display screen in chronological order, one target point being displayed on the display screen at each moment with its position random; or generating target points on the display screen in chronological order according to a preset position sequence, one target point being displayed on the display screen at each moment with its position set by the preset position sequence.
- An identity recognition device, wherein the device comprises: a collection module configured to collect an eye image while a user to be identified gazes at a target point; an identity recognition module configured to identify the user to be identified according to the eye image and a pre-established user identity information database; and a living body detection module configured to perform living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result.
- The device according to claim 7, wherein the identity recognition module comprises: an extraction unit configured to extract eye features from the eye image, the eye features comprising an iris feature and eye movement features; a first determination unit configured to determine whether the user identity information database includes the iris feature; and an acquisition unit configured to acquire, when the user identity information database includes the iris feature, the identity information and calibration data corresponding to the iris feature from the user identity information database.
- The device according to claim 8, wherein the living body detection module comprises: a calculation unit configured to calculate theoretical gaze point coordinates of the user to be identified according to the eye movement features and the acquired calibration data when the identification result indicates that the user identity information database includes the iris feature; and a second determination unit configured to compare the theoretical gaze point coordinates with the coordinates of the target point to determine whether the user to be identified is a living user.
- The device according to claim 8, wherein the living body detection module comprises: a calculation unit configured to calculate the calibration data of the user to be identified according to the eye movement features and the coordinates of the target point when the identification result indicates that the user identity information database includes the iris feature; and a second determination unit configured to compare the calibration data of the user to be identified with the acquired calibration data to determine whether the user to be identified is a living user.
- The device according to claim 8, wherein the living body detection module comprises: a calculation unit configured to, when the identification result indicates that the user identity information database includes the iris feature, retrieve the calibration data of each user from the user identity information database and calculate theoretical gaze point coordinates according to the eye movement features and each retrieved user's calibration data; and a second determination unit configured to compare the theoretical gaze point coordinates respectively with the coordinates of the target point to determine whether the user to be identified is a living user.
- The device according to claim 7, wherein the device further comprises: a first generating module configured to randomly generate target points on a display screen in chronological order, one target point being displayed on the display screen at each moment with its position random; or a second generating module configured to generate target points on the display screen in chronological order according to a preset position sequence, one target point being displayed on the display screen at each moment with its position set by the preset position sequence.
- A storage medium, wherein the storage medium comprises a stored program, and when the program runs, a device on which the storage medium resides is controlled to execute the identity recognition method according to any one of claims 1 to 6.
- A processor, wherein the processor is configured to run a program, and the program, when running, executes the identity recognition method according to any one of claims 1 to 6.
- A terminal, comprising: a collection module configured to collect an eye image while a user to be identified gazes at a target point; an identity recognition module configured to identify the user to be identified according to the eye image and a pre-established user identity information database; a living body detection module configured to perform living body detection on the user to be identified according to the eye image, the coordinates of the target point, the user identity information database, and the identification result; and a processor configured to run a program, wherein the program, when running, executes the identity recognition method according to any one of claims 1 to 6 on the data output from the collection module, the identity recognition module, and the living body detection module.
- 一种终端,其中,包括:采集模块,用于采集待识别用户注视目标点位时的眼部图像;身份识别模块,用于根据所述眼部图像及预先建立的用户身份信息数据库,对所述待识别用户进行身份识别;活体检测模块,用于根据所述眼部图像、所述目标点位的坐标、所述用户身份信息数据库及身份识别结果,对所述待识别用户进行活体检测;存储介质,用于存储程序,其中,所述程序在运行时对于从所述采集模块、所述身份识别模块和所述活体检测模块输出的数据执行权利要求1至6中任意一项所述的身份识别方法。
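Taken together, the method claims above describe a flow that can be sketched in a few lines of code: target points are displayed one per time step (at random positions or from a preset sequence), a theoretical gaze point is computed from the extracted eye-movement feature and the enrolled user's calibration data, and liveness is decided by comparing that gaze point against the target coordinates. The sketch below is illustrative only: the affine calibration model, the pixel tolerance, and all names (`generate_targets`, `estimate_gaze`, `is_live`) are assumptions, not the patent's actual implementation.

```python
import math
import random

def generate_targets(n, width, height, preset=None, seed=None):
    """One target point per time step: replay a preset position
    sequence if given, otherwise pick random on-screen positions."""
    if preset is not None:
        return list(preset)[:n]
    rng = random.Random(seed)
    return [(rng.randint(0, width - 1), rng.randint(0, height - 1))
            for _ in range(n)]

def estimate_gaze(eye_feature, calib):
    """Map an eye-movement feature (e.g. a pupil-centre offset) to
    screen coordinates with a per-user affine calibration model."""
    x, y = eye_feature
    return (calib["ax"] * x + calib["bx"],
            calib["ay"] * y + calib["by"])

def is_live(eye_features, calib, targets, tolerance=50.0):
    """Declare the user live only if every theoretical gaze point
    falls within `tolerance` pixels of its target point."""
    for feature, (tx, ty) in zip(eye_features, targets):
        gx, gy = estimate_gaze(feature, calib)
        if math.hypot(gx - tx, gy - ty) > tolerance:
            return False
    return True
```

A static photograph or replayed video of an enrolled iris cannot follow target points whose positions change randomly over time, so its estimated gaze points miss the targets and the check fails; this is the intuition behind combining iris matching with gaze-based liveness detection.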
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/312,301 US10891507B2 (en) | 2016-06-30 | 2017-06-30 | Method and device for recognizing identity |
EP17819367.8A EP3480727A4 (en) | 2016-06-30 | 2017-06-30 | METHOD AND APPARATUS FOR IDENTITY RECOGNITION |
AU2017287619A AU2017287619B2 (en) | 2016-06-30 | 2017-06-30 | Method and apparatus for identity recognition |
JP2018565013A JP6762380B2 (ja) | 2016-06-30 | 2017-06-30 | Identity authentication method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610509755.3 | 2016-06-30 | ||
CN201610509755.3A CN106203297B (zh) | 2016-06-30 | 2016-06-30 | Identity recognition method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018001371A1 true WO2018001371A1 (zh) | 2018-01-04 |
Family
ID=57463172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/091264 WO2018001371A1 (zh) | Identity recognition method and device | 2016-06-30 | 2017-06-30 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10891507B2 (zh) |
EP (1) | EP3480727A4 (zh) |
JP (1) | JP6762380B2 (zh) |
CN (2) | CN110705507B (zh) |
AU (1) | AU2017287619B2 (zh) |
WO (1) | WO2018001371A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112966142A (zh) * | 2021-04-20 | 2021-06-15 | 南京云玑信息科技有限公司 | Optimization method for improving accuracy and recognition speed |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN110705507B (zh) * | 2016-06-30 | 2022-07-08 | 北京七鑫易维信息技术有限公司 | Identity recognition method and device |
- CN108319830A (zh) * | 2017-01-17 | 2018-07-24 | 中兴通讯股份有限公司 | Identity verification method and device |
- CN106713368B (zh) * | 2017-03-08 | 2019-09-27 | 四川大学 | Identity verification method and device |
- CN106803829A (zh) * | 2017-03-30 | 2017-06-06 | 北京七鑫易维信息技术有限公司 | Authentication method, device, and system |
US11403881B2 (en) * | 2017-06-19 | 2022-08-02 | Paypal, Inc. | Content modification based on eye characteristics |
- CN108073897A (zh) * | 2017-12-01 | 2018-05-25 | 沈海斌 | Gaze-based liveness authentication method |
- CN112270299A (zh) * | 2018-04-25 | 2021-01-26 | 北京嘀嘀无限科技发展有限公司 | System and method for recognizing head movement |
- CN109190509B (zh) | 2018-08-13 | 2023-04-25 | 创新先进技术有限公司 | Identity recognition method and device, and computer-readable storage medium |
- CN109472195A (zh) * | 2018-09-26 | 2019-03-15 | 北京七鑫易维信息技术有限公司 | Method, device, and system for recognizing an object |
- CN109655790A (zh) * | 2018-12-18 | 2019-04-19 | 天津大学 | Multi-target detection and identity recognition system and method based on indoor LED light sources |
- CN110341617B (zh) * | 2019-07-08 | 2021-05-28 | 北京七鑫易维信息技术有限公司 | Eye tracking method and device, vehicle, and storage medium |
- CN110347257A (zh) * | 2019-07-08 | 2019-10-18 | 北京七鑫易维信息技术有限公司 | Calibration method, apparatus, and device for an eye-tracking device, and storage medium |
- CN111402480A (zh) * | 2020-02-29 | 2020-07-10 | 深圳壹账通智能科技有限公司 | Visitor information management method, apparatus, system, device, and storage medium |
- JP7314848B2 (ja) * | 2020-03-25 | 2023-07-26 | トヨタ自動車株式会社 | Display control device, image correction method, and program |
- CN111629265B (zh) * | 2020-05-13 | 2021-12-21 | 深圳创维-Rgb电子有限公司 | Television, television control method, control device, and readable storage medium |
- CN111985298B (zh) * | 2020-06-28 | 2023-07-25 | 百度在线网络技术(北京)有限公司 | Face recognition sample collection method and device |
- CN111860345A (zh) * | 2020-07-22 | 2020-10-30 | 海尔优家智能科技(北京)有限公司 | Method and device for determining an object position, storage medium, and electronic device |
- CN112215113A (zh) * | 2020-09-30 | 2021-01-12 | 张成林 | Face recognition method and device |
- CN112232255B (zh) * | 2020-10-26 | 2022-11-01 | 北京鹰瞳科技发展股份有限公司 | Identity authentication method and device based on multimodal data |
- CN114389909A (zh) * | 2021-11-29 | 2022-04-22 | 珠海格力电器股份有限公司 | Device control method and apparatus, storage medium, and electronic device |
- CN114546569B (zh) * | 2022-02-24 | 2024-06-04 | 深圳创维-Rgb电子有限公司 | Display device control method and apparatus, electronic device, and readable storage medium |
- CN116049868B (zh) * | 2022-07-26 | 2023-12-08 | 荣耀终端有限公司 | Privacy protection method, device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN1299103A (zh) * | 1999-12-03 | 2001-06-13 | 朴宰佑 | User authentication system and method on a network using biometric information |
- US20100202667A1 (en) * | 2009-02-06 | 2010-08-12 | Robert Bosch Gmbh | Iris deblurring method based on global and local iris image statistics |
- CN105184277A (zh) * | 2015-09-29 | 2015-12-23 | 杨晴虹 | Live face recognition method and device |
- CN105354473A (zh) * | 2015-11-09 | 2016-02-24 | 南京运航科技有限公司 | Device desktop unlocking method and apparatus based on iris recognition authentication |
- CN105550625A (zh) * | 2015-06-30 | 2016-05-04 | 宇龙计算机通信科技(深圳)有限公司 | Live iris detection method and terminal |
- CN106203297A (zh) * | 2016-06-30 | 2016-12-07 | 北京七鑫易维信息技术有限公司 | Identity recognition method and device |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP4470663B2 (ja) * | 2004-09-14 | 2010-06-02 | 沖電気工業株式会社 | Fraud detection system for iris authentication |
- CN100392669C (zh) * | 2006-09-21 | 2008-06-04 | 杭州电子科技大学 | Liveness detection method and device in iris recognition |
- CN101727574A (zh) * | 2008-10-17 | 2010-06-09 | 深圳富泰宏精密工业有限公司 | Iris recognition system and method |
US8364971B2 (en) * | 2009-02-26 | 2013-01-29 | Kynen Llc | User authentication system and method |
- CN101833646B (zh) * | 2009-03-11 | 2012-05-02 | 北京中科虹霸科技有限公司 | Iris liveness detection method |
- CN101710383B (zh) * | 2009-10-26 | 2015-06-10 | 北京中星微电子有限公司 | Identity authentication method and authentication device |
- CN102844766B (zh) * | 2011-04-20 | 2014-12-24 | 中国科学院自动化研究所 | Multi-feature fusion identity recognition method based on human eye images |
EP3200046A1 (en) * | 2011-10-27 | 2017-08-02 | Tobii Technology AB | Power management in an eye-tracking system |
US9082011B2 (en) * | 2012-03-28 | 2015-07-14 | Texas State University—San Marcos | Person identification using ocular biometrics with liveness detection |
- JP6175945B2 (ja) * | 2013-07-05 | 2017-08-09 | ソニー株式会社 | Gaze detection device and gaze detection method |
- CN105243386B (zh) * | 2014-07-10 | 2019-02-05 | 汉王科技股份有限公司 | Face liveness determination method and system |
EP4047551A1 (en) * | 2014-10-15 | 2022-08-24 | NEC Corporation | Impersonation detection device, impersonation detection method, and recording medium |
WO2016076914A1 (en) | 2014-11-13 | 2016-05-19 | Intel Corporation | Facial liveness detection in image biometrics |
- JP6452236B2 (ja) * | 2014-12-08 | 2019-01-16 | 国立大学法人静岡大学 | Eyeball identification device and eyeball identification method |
- CN104462923B (zh) * | 2014-12-31 | 2018-10-09 | 河南华辰智控技术有限公司 | Intelligent iris identity recognition system for mobile communication devices |
US10678897B2 (en) * | 2015-04-16 | 2020-06-09 | Tobii Ab | Identification, authentication, and/or guiding of a user using gaze information |
- CN114077726A (zh) * | 2015-04-16 | 2022-02-22 | 托比股份公司 | System, method, and machine-readable medium for authenticating a user |
- CN104966070B (zh) * | 2015-06-30 | 2018-04-10 | 北京汉王智远科技有限公司 | Liveness detection method and device based on face recognition |
US10546183B2 (en) * | 2015-08-10 | 2020-01-28 | Yoti Holding Limited | Liveness detection |
- CN105184246B (zh) * | 2015-08-28 | 2020-05-19 | 北京旷视科技有限公司 | Liveness detection method and liveness detection system |
- CN105139006A (zh) * | 2015-09-29 | 2015-12-09 | 清华大学 | Liveness recognition method and system based on iris changes |
- CN105205379A (zh) * | 2015-10-28 | 2015-12-30 | 广东欧珀移动通信有限公司 | Control method and device for a terminal application, and terminal |
- TWI630507B (zh) * | 2017-09-25 | 2018-07-21 | 仁寶電腦工業股份有限公司 | Gaze detection, recognition, and control method |
- 2016
- 2016-06-30 CN CN201910979829.3A patent/CN110705507B/zh active Active
- 2016-06-30 CN CN201610509755.3A patent/CN106203297B/zh active Active
- 2017
- 2017-06-30 WO PCT/CN2017/091264 patent/WO2018001371A1/zh unknown
- 2017-06-30 AU AU2017287619A patent/AU2017287619B2/en active Active
- 2017-06-30 US US16/312,301 patent/US10891507B2/en active Active
- 2017-06-30 JP JP2018565013A patent/JP6762380B2/ja active Active
- 2017-06-30 EP EP17819367.8A patent/EP3480727A4/en active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP3480727A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN106203297B (zh) | 2019-11-08 |
AU2017287619B2 (en) | 2020-09-17 |
CN110705507B (zh) | 2022-07-08 |
EP3480727A1 (en) | 2019-05-08 |
JP6762380B2 (ja) | 2020-09-30 |
CN106203297A (zh) | 2016-12-07 |
EP3480727A4 (en) | 2019-08-14 |
AU2017287619A1 (en) | 2019-01-17 |
US20190205681A1 (en) | 2019-07-04 |
US10891507B2 (en) | 2021-01-12 |
CN110705507A (zh) | 2020-01-17 |
JP2019522278A (ja) | 2019-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17819367 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2018565013 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2017287619 Country of ref document: AU Date of ref document: 20170630 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017819367 Country of ref document: EP Effective date: 20190130 |