CN116486469A - Iris recognition method and device in human eye image

Iris recognition method and device in human eye image

Info

Publication number
CN116486469A
Authority
CN
China
Prior art keywords
iris
image
iris recognition
determining
data
Prior art date
Legal status
Pending
Application number
CN202310454022.4A
Other languages
Chinese (zh)
Inventor
韩正勇
邢伟寅
解书凯
钟乐海
李礁
李川
Current Assignee
Mianyang Polytechnic
Original Assignee
Mianyang Polytechnic
Priority date
Filing date
Publication date
Application filed by Mianyang Polytechnic
Priority to CN202310454022.4A
Publication of CN116486469A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to the technical field of iris recognition, and in particular to an iris recognition method and device in a human eye image. Data collected in the near field are used to determine whether a user needs to undergo identity verification; if so, an image recognition model is further called to judge whether the image in the iris recognition area is an iris image. If it is not, the high-low position information of the target user is determined according to the current image data, and the iris recognition equipment is then driven to rotate upward or downward according to that information; once the current image in the iris recognition area is an iris image, the rotation stops and the iris image is compared with the iris database information to determine the identity of the target user. With the iris recognition method in a human eye image of the invention, the elevation or depression angle of the equipment can be adjusted according to the height of the user, so that the equipment faces the user during identity recognition, which reduces the degree of cooperation required from the user and improves the experience of different users.

Description

Iris recognition method and device in human eye image
Technical Field
The invention relates to the technical field of iris recognition, in particular to an iris recognition method and device in a human eye image.
Background
The rapid development of science and technology has brought great convenience to people's lives, but it has also increased various potential safety hazards, and people's requirements for the reliability and safety of identity verification continue to rise. Owing to its uniqueness, stability, reliability and extremely high accuracy, iris recognition technology has become popular in automatic identification and verification systems in recent years and is regarded as one of the most promising biometric recognition technologies. Iris recognition mainly comprises five key steps: image acquisition, image preprocessing, iris segmentation, iris feature extraction and feature classification, and the definition of the iris image acquired in the first step directly affects the accuracy and speed of recognition. It is therefore important to acquire an iris image of sufficient definition within a limited time; in the prior art, however, the position of the iris cannot be accurately located, so the definition of the captured iris image is not high.
Disclosure of Invention
In view of the above, the present application has been made to provide an iris recognition method and device in a human eye image that overcome the above-mentioned problems or at least partially solve them. The method includes:
acquiring near field acquisition data, and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data;
acquiring current image data and determining an iris recognition area image in the current image data;
calling a preset image recognition model to recognize the iris recognition area image, and determining whether the iris recognition area image is an iris image; if it is not an iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information;
driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping steering action when the iris recognition area image is recognized as an iris image through the image recognition model;
and comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
Preferably, the acquiring near field acquisition data, determining the target user according to the near field acquisition data, includes:
determining whether the distance value of the target distance data is smaller than or equal to a preset distance threshold value;
when the distance value of the target distance data is smaller than or equal to the preset distance threshold, determining the target residence time data, wherein the target residence time data comprises a first recording time and a second recording time corresponding to the target distance data, and the first recording time is the initial triggering time of the target distance data;
and when the time interval between the first recording time and the second recording time is equal to the preset residence time, determining the detected user as the target user.
Preferably, the acquiring the current image data and determining the iris recognition area image in the current image data include:
determining image acquisition parameters according to the current image data;
determining iris recognition boundary parameters according to the image acquisition parameters;
determining the iris recognition area according to the iris recognition boundary parameters;
and obtaining a corresponding iris recognition area image in the current image data according to the iris recognition area.
Preferably, the determining the high-low position information of the target user according to the current image data includes:
extracting image features of the current image data;
identifying the image features according to the image identification model, and determining the human body target part information corresponding to the current image data;
and determining the high-low position information of the iris part of the target user relative to the iris recognition area according to the human body target part information.
Preferably, the driving the iris recognition device to perform the steering action according to the high-low position information includes:
determining a steering control parameter according to the high-low position information;
when the steering control parameter is the forward rotation control parameter, driving the iris recognition equipment to steer to the high position;
and when the steering control parameter is the reverse rotation control parameter, driving the iris recognition equipment to steer to the low position.
Preferably, the comparing the iris image in the iris recognition area according to the preset iris database information includes:
carrying out light spot positioning on the iris image in the iris recognition area to determine a light spot image area;
removing the light spot image of the light spot image area to obtain an iris image without light spots;
and comparing the iris image without light spots with the iris database information.
Preferably, the comparing the iris image without light spots with the iris database information includes:
calling a preset iris recognition model, and determining M iris comparison template images in the iris database information according to the iris image without light spots, wherein each iris comparison template image corresponds to user information;
determining the iris comparison template image with the highest coincidence degree according to the coincidence degree between the iris image without light spots and each iris comparison template image;
and determining the identity of the target user according to the user information corresponding to the iris comparison template image with the highest coincidence degree.
An iris recognition device in a human eye image is also provided, the device being applied to iris recognition equipment, and the device comprises:
a first determining module, which is used for acquiring near field acquisition data and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data;
the second determining module is used for acquiring current image data and determining iris recognition area images in the current image data;
the third determining module is used for calling a preset image recognition model to recognize the iris recognition area image and determining whether the iris recognition area image is an iris image; if it is not an iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information;
the driving control module is used for driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping the steering action when the iris recognition area image is recognized as an iris image through the image recognition model;
the data comparison module is used for comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
An apparatus is also provided, comprising a processor, a memory and a computer program stored on the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the steps of the iris recognition method in a human eye image as described above.
A computer readable storage medium is also provided, having stored thereon a computer program which, when executed by a processor, implements the steps of the iris recognition method in a human eye image as described above.
The application has the following advantages:
In the embodiment of the application, near field acquisition data is acquired and a target user is determined according to it, wherein the near field acquisition data comprises target distance data and target residence time data; current image data is acquired and an iris recognition area image in the current image data is determined; a preset image recognition model is called to recognize the iris recognition area image and determine whether it is an iris image; if it is not an iris image, the high-low position information of the target user is determined according to the current image data, wherein the high-low position information comprises high-position information and low-position information; the iris recognition equipment is driven to perform a steering action according to the high-low position information, and the steering action is stopped when the iris recognition area image is recognized as an iris image by the image recognition model; the iris image in the iris recognition area is then compared against preset iris database information to obtain a comparison result and determine the identity of the target user. In other words, whether a user needs to undergo identity verification is determined from the near field acquisition data; if so, the image recognition model is further called to judge whether the image in the iris recognition area is an iris image; if it is not, the high-low position information of the target user is determined according to the current image data, and the iris recognition equipment is adjusted to rotate upward or downward according to that information until the current image in the iris recognition area is an iris image, at which point the rotation stops and the iris image is compared with the iris database information to determine the identity of the target user. With the iris recognition method in a human eye image of the present application, the elevation or depression angle of the equipment can be adjusted according to the height of the user, so that the equipment faces the user during identity recognition, which reduces the degree of cooperation required from the user and improves the experience of different users.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed in the description of the present application are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of steps of a method for iris recognition in a human eye image according to an embodiment of the present application;
fig. 2 is a block diagram of an iris recognition device in a human eye image according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and detailed description. It will be apparent that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that, in any embodiment of the present invention, the iris recognition device that applies the iris recognition method in a human eye image of the present application to user recognition may include, but is not limited to, a mobile phone, a tablet computer and the like provided with an infrared near field recognition module, an image acquisition and recognition module, and an iris comparison module. In a specific user recognition scene, the iris recognition device further comprises a steering driving mechanism connected to a physical support (such as a wall or a door) in the scene, and the steering driving mechanism is used to drive the main body of the iris recognition device to rotate by a specified angle.
Referring to fig. 1, a flowchart illustrating steps of a method for iris recognition in a human eye image according to an embodiment of the present application is shown;
the method is applied to iris recognition equipment and comprises the following steps:
s110, acquiring near field acquisition data, and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data;
s120, acquiring current image data and determining an iris recognition area image in the current image data;
s130, calling a preset image recognition model to recognize the iris recognition area image, and determining whether the iris recognition area image is an iris image or not; if the iris image is not the iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-order information and low-order information;
s140, driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping the steering action when the iris recognition area image is recognized as an iris image through the image recognition model;
and S150, comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
In the embodiment of the application, near field acquisition data is acquired and a target user is determined according to it, wherein the near field acquisition data comprises target distance data and target residence time data; current image data is acquired and an iris recognition area image in the current image data is determined; a preset image recognition model is called to recognize the iris recognition area image and determine whether it is an iris image; if it is not an iris image, the high-low position information of the target user is determined according to the current image data, wherein the high-low position information comprises high-position information and low-position information; the iris recognition equipment is driven to perform a steering action according to the high-low position information, and the steering action is stopped when the iris recognition area image is recognized as an iris image by the image recognition model; the iris image in the iris recognition area is then compared against preset iris database information to obtain a comparison result and determine the identity of the target user. In other words, whether a user needs to undergo identity verification is determined from the near field acquisition data; if so, the image recognition model is further called to judge whether the image in the iris recognition area is an iris image; if it is not, the high-low position information of the target user is determined according to the current image data, and the iris recognition equipment is adjusted to rotate upward or downward according to that information until the current image in the iris recognition area is an iris image, at which point the rotation stops and the iris image is compared with the iris database information to determine the identity of the target user. With the iris recognition method in a human eye image of the present application, the elevation or depression angle of the equipment can be adjusted according to the height of the user, so that the equipment faces the user during identity recognition, which reduces the degree of cooperation required from the user and improves the experience of different users.
Next, the iris recognition method in the human eye image described above will be further described by the following examples.
As described in step S110, near field acquisition data is acquired, and a target user is determined according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data.
In an embodiment of the present invention, the specific process of "acquiring near field acquisition data and determining the target user according to the near field acquisition data" in step S110 may be further described in conjunction with the following description.
Determining whether a distance value of the target distance data is less than or equal to a preset distance threshold, as described in the following steps;
when the distance value of the target distance data is smaller than or equal to the preset distance threshold, determining the target residence time data, wherein the target residence time data comprises a first recording time and a second recording time corresponding to the target distance data, and the first recording time is the initial triggering time of the target distance data;
and when the time interval between the first recording time and the second recording time is equal to the preset residence time, determining the detected user as the target user.
It should be noted that the iris recognition device of the present application includes an infrared near field recognition module, through which the distance data of a user can be detected by infrared ranging and the corresponding time data recorded. In one case, if the detected distance of the user is greater than the preset distance threshold, the user is a non-target user even if the recorded residence time reaches the preset residence time length; in another case, if the detected distance is smaller than or equal to the preset distance threshold but the recorded residence time does not reach the preset residence time, the user is also a non-target user. When the detected distance of the user, i.e. the target distance data, is smaller than or equal to the preset distance threshold, the current moment is recorded as the first recording time, the time node of the second recording time is calculated from the preset residence time and the first recording time, and if the target distance data is still being recorded at the second recording time, the user is determined to be the target user.
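As an illustration of this near field logic, the following Python sketch polls a distance sensor and applies the distance-threshold and residence-time checks described above; the read_distance_cm callable, the 50 cm threshold and the 2 s residence time are assumptions made for the example, not values specified by the application.

```python
import time

DISTANCE_THRESHOLD_CM = 50.0   # assumed preset distance threshold
RESIDENCE_TIME_S = 2.0         # assumed preset residence time

def wait_for_target_user(read_distance_cm):
    """Return True once a user stays within the distance threshold for the residence time."""
    first_recording_time = None                 # initial trigger time of the target distance data
    while True:
        distance = read_distance_cm()           # e.g. from the infrared ranging module
        now = time.monotonic()
        if distance is None or distance > DISTANCE_THRESHOLD_CM:
            first_recording_time = None         # user out of range: reset the recording
        elif first_recording_time is None:
            first_recording_time = now          # first recording time
        elif now - first_recording_time >= RESIDENCE_TIME_S:
            return True                         # second recording time reached: target user
        time.sleep(0.05)                        # poll the sensor at roughly 20 Hz
```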
As described in step S120, current image data is acquired, and an iris recognition area image in the current image data is determined.
In an embodiment of the present invention, the specific process of "obtaining the current image data, determining the iris recognition area image in the current image data" described in step S120 may be further described in conjunction with the following description.
Determining image acquisition parameters according to the current image data as follows;
determining iris recognition boundary parameters according to the image acquisition parameters;
determining the iris recognition area according to the iris recognition boundary parameters;
and obtaining a corresponding iris recognition area image in the current image data according to the iris recognition area.
It should be noted that the iris recognition device of the present application further includes an image acquisition and recognition module, which acquires the image of the current scene, i.e. the current image data, in real time. The pixel size of the current image is determined from the image acquisition parameters in the current image data, the iris recognition area is determined from the pixel coordinates in the corresponding pixel coordinate set, and the image within the iris recognition area is then cropped out to obtain the iris recognition area image.
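As a concrete sketch of this step, the code below derives a centered rectangular iris recognition area from the pixel size of the current frame and crops it out; the fixed one-third-width by one-fifth-height box is an assumed choice of boundary parameters, since the application does not fix their values.

```python
import numpy as np

def iris_recognition_boundary(frame: np.ndarray) -> tuple[int, int, int, int]:
    """Return (x0, y0, x1, y1) boundary parameters of the iris recognition area."""
    height, width = frame.shape[:2]          # pixel size from the image acquisition parameters
    box_w, box_h = width // 3, height // 5   # assumed size of the recognition area
    x0 = (width - box_w) // 2                # horizontally centered
    y0 = (height - box_h) // 2               # vertically centered
    return x0, y0, x0 + box_w, y0 + box_h

def iris_recognition_area_image(frame: np.ndarray) -> np.ndarray:
    """Crop the iris recognition area image out of the current image data."""
    x0, y0, x1, y1 = iris_recognition_boundary(frame)
    return frame[y0:y1, x0:x1]
```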
As described in step S130, a preset image recognition model is called to recognize the iris recognition area image, and whether the iris recognition area image is an iris image is determined; if it is not an iris image, the high-low position information of the target user is determined according to the current image data, wherein the high-low position information comprises high-position information and low-position information.
In an embodiment of the present invention, the specific process of "determining the height position information of the target user according to the current image data" in step S130 may be further described in conjunction with the following description.
Extracting image features of the current image data as described in the following steps;
identifying the image features according to the image identification model, and determining the human body target part information corresponding to the current image data;
and determining the high-low position information of the iris part of the target user relative to the iris recognition area according to the human body target part information.
The image features include, but are not limited to, head features, neck features, clothing features, trousers features and the like. Iris features are of course also part of the head features; in the situation explained above, however, the iris image carrying those iris features does not lie within the iris recognition area.
In a specific embodiment, when the image features include iris features, it is only necessary to determine the pixel coordinates of the corresponding iris image, from which it can be determined whether the iris part of the human body is at a high position or a low position relative to the iris recognition area.
When the image features do not include iris features, the position is determined from the specific image features that are present. If the image features are clothing features or trousers features, the iris part of the human body is at a high position relative to the iris recognition area; if the image features contain only sparse head features or no features at all, the iris part of the human body is at a low position relative to the iris recognition area.
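The mapping just described can be summarised in a small sketch; the feature labels and the upstream detector that produces them are assumptions for illustration, and any model that reports which body parts appear in the current image data would serve.

```python
HIGH, LOW = "high", "low"   # high-position / low-position information

def high_low_position(features, iris_y=None, area_center_y=None):
    """Return HIGH or LOW relative to the iris recognition area, or None if undecidable.

    features is a set of detected labels, e.g. {"head", "clothing"}; iris_y and
    area_center_y are pixel y-coordinates (image y grows downward).
    """
    if "iris" in features and iris_y is not None and area_center_y is not None:
        # Iris visible but outside the area: compare its pixel coordinate with the area centre.
        return HIGH if iris_y < area_center_y else LOW
    if features & {"clothing", "trousers"}:
        return HIGH             # camera sees the torso, so the eyes are above the area
    if features <= {"head"}:
        return LOW              # only sparse head features (or nothing): eyes are below
    return None                 # cannot decide from the current frame
```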
During the subsequent rotation control, the steering action is stopped as soon as an iris image appears in the iris recognition area.
It should be noted that in all of the above embodiments the target user is assumed to be facing the iris recognition device; that is, the iris recognition device only needs to be controlled to rotate up or down in order to acquire the iris image of the user.
In step S140, the iris recognition device is driven to perform a steering action according to the high-low position information, and the steering action is stopped when the iris recognition area image is recognized as an iris image by the image recognition model.
In an embodiment of the present invention, the specific process of "driving the iris recognition device to perform the steering action according to the high and low position information" in step S140 may be further described in conjunction with the following description.
Determining steering control parameters according to the high-low position information as follows;
when the steering control parameter is the forward rotation control parameter, driving the iris recognition equipment to steer to the high position;
and when the steering control parameter is the reverse rotation control parameter, driving the iris recognition equipment to steer to the low position.
It should be noted that the iris recognition device of the present application further includes a steering driving mechanism; the high-position information in the high-low position information corresponds to the forward rotation control parameter, and the low-position information corresponds to the reverse rotation control parameter.
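Purely as a sketch, the control loop below maps the high position to the forward rotation control parameter and the low position to the reverse one, and keeps stepping the steering drive until the image recognition model reports an iris image; motor.step, capture_region and is_iris_image are assumed interfaces standing in for the steering driving mechanism and the image recognition model, which the application does not define at code level.

```python
FORWARD_ROTATION, REVERSE_ROTATION = +1, -1   # steering control parameters

def steer_until_iris(position, motor, capture_region, is_iris_image, max_steps=200):
    """Drive the steering mechanism until the recognition area contains an iris image."""
    direction = FORWARD_ROTATION if position == "high" else REVERSE_ROTATION
    for _ in range(max_steps):                 # bounded so the loop cannot run forever
        if is_iris_image(capture_region()):    # image recognition model says "iris"
            return True                        # stop the steering action
        motor.step(direction)                  # keep tilting toward the target position
    return False                               # no iris found within the step budget
```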
And step S150, comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
In an embodiment of the present invention, the specific process of "comparing iris images in the iris recognition area according to preset iris database information" in step S150 may be further described in conjunction with the following description.
Performing light spot positioning on the iris image in the iris recognition area to determine a light spot image area;
removing the light spot image of the light spot image area to obtain an iris image without light spots;
and comparing the iris image without light spots with the iris database information.
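One possible realisation of the light spot positioning and removal step is specular-highlight thresholding followed by inpainting, sketched below with OpenCV; the intensity threshold, the dilation kernel and the use of cv2.inpaint are choices assumed for the example rather than an algorithm prescribed by the application.

```python
import cv2
import numpy as np

def remove_light_spots(iris_gray: np.ndarray, threshold: int = 230) -> np.ndarray:
    """Locate near-saturated light spots in an 8-bit grayscale iris image and fill them in."""
    _, spot_mask = cv2.threshold(iris_gray, threshold, 255, cv2.THRESH_BINARY)
    spot_mask = cv2.dilate(spot_mask, np.ones((5, 5), np.uint8))  # cover the spot rims too
    # Fill the masked light spot image area from the surrounding iris texture.
    return cv2.inpaint(iris_gray, spot_mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```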
In this embodiment, a preset iris recognition model is called, and M iris comparison template images are determined in the iris database information according to the iris image without light spots, wherein each iris comparison template image corresponds to a piece of user information;
determining the iris comparison template image with the highest coincidence degree according to the coincidence degree between the iris image without light spots and each iris comparison template image;
and determining the identity of the target user according to the user information corresponding to the iris comparison template image with the highest coincidence degree.
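To make the comparison step concrete, the sketch below scores each of the M comparison template images by a simple coincidence degree, taken here as the fraction of coinciding pixels between binarised iris patterns of equal size; this particular measure, the acceptance threshold and the templates dictionary keyed by user information are all assumptions made for illustration.

```python
import numpy as np

def coincidence_degree(iris: np.ndarray, template: np.ndarray) -> float:
    """Fraction of coinciding pixels between two equally sized iris images."""
    a = iris > iris.mean()              # crude binary iris pattern
    b = template > template.mean()
    return float(np.mean(a == b))

def identify_user(iris: np.ndarray, templates: dict, accept: float = 0.85):
    """Return the user information of the best-matching template, or None if the match is weak."""
    best_user, best_score = None, 0.0
    for user_info, template in templates.items():   # the M iris comparison template images
        score = coincidence_degree(iris, template)
        if score > best_score:
            best_user, best_score = user_info, score
    return best_user if best_score >= accept else None
```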
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
Referring to fig. 2, a block diagram of an iris recognition device in a human eye image according to an embodiment of the present application is shown;
the apparatus is applied to iris recognition equipment, and the apparatus comprises:
a first determining module 110, configured to obtain near field acquisition data, and determine a target user according to the near field acquisition data, where the near field acquisition data includes target distance data and target residence time data;
a second determining module 120, configured to obtain current image data, and determine an iris recognition area image in the current image data;
the third determining module 130 is configured to invoke a preset image recognition model to recognize the iris recognition area image, and determine whether the iris recognition area image is an iris image; if it is not an iris image, determine the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information;
the driving control module 140 is configured to drive the iris recognition device to perform a steering action according to the high-low position information, and stop the steering action when the iris recognition area image is recognized as an iris image by the image recognition model;
the data comparison module 150 is configured to compare iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
Referring to fig. 3, which shows a schematic structural diagram of a computer device suitable for implementing the iris recognition method in a human eye image of the present invention, the computer device may include the following:
the computer device 12 described above is embodied in the form of a general purpose computing device, and the components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, a bus 18 that connects the various system components, including the system memory 28 and the processing units 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (commonly referred to as a "hard disk drive"). Although not shown in fig. 3, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk such as a CD-ROM, DVD-ROM, or other optical media may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The memory may include at least one program product having a set (e.g., at least one) of program modules 42, the program modules 42 being configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, a memory, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules 42, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods of the embodiments described herein.
The computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, camera, etc.), one or more devices that enable a user to interact with the computer device 12, and/or any devices (e.g., network card, modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Moreover, computer device 12 may also communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet, through network adapter 20. As shown, network adapter 20 communicates with other modules of computer device 12 via bus 18. It should be appreciated that although not shown in fig. 3, other hardware and/or software modules may be used in connection with computer device 12, including, but not limited to: microcode, device drivers, redundant processing units 16, external disk drive arrays, RAID systems, tape drives, data backup storage systems 34, and the like.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, to implement an iris recognition method in a human eye image provided by an embodiment of the present invention.
That is, when executing the program, the processing unit 16 implements: acquiring near field acquisition data, and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data; acquiring current image data and determining an iris recognition area image in the current image data; calling a preset image recognition model to recognize the iris recognition area image, and determining whether the iris recognition area image is an iris image; if it is not an iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information; driving the iris recognition equipment to perform a steering action according to the high-low position information, and stopping the steering action when the iris recognition area image is recognized as an iris image by the image recognition model; and comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
In an embodiment of the present invention, the present invention further provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements an iris recognition method in a human eye image as provided in all embodiments of the present application:
that is, the program is implemented when executed by a processor: acquiring near field acquisition data, and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data; acquiring current image data and determining an iris recognition area image in the current image data; calling a preset image recognition model to recognize the iris recognition area image, and determining whether the iris recognition area image is an iris image or not; if the iris image is not the iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-order information and low-order information; driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping steering action when the iris recognition area image is recognized as an iris image through the image recognition model; and comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
Any combination of one or more computer readable media may be employed. The computer readable medium may be a computer-readable signal medium or a computer-readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In this specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for the identical and similar parts between the embodiments, reference may be made to one another.
While preferred embodiments of the present embodiments have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
Finally, it is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Moreover, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article or terminal device comprising that element.
The above describes in detail the iris recognition method and device in the human eye image provided in the present application, and specific examples are applied herein to illustrate the principles and embodiments of the present application, where the above description of the examples is only for helping to understand the method and core ideas of the present application; meanwhile, as those skilled in the art will have modifications in the specific embodiments and application scope in accordance with the ideas of the present application, the present description should not be construed as limiting the present application in view of the above.

Claims (10)

1. An iris recognition method in a human eye image, the method being applied to iris recognition equipment, characterized in that the method comprises:
acquiring near field acquisition data, and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data;
acquiring current image data and determining an iris recognition area image in the current image data;
calling a preset image recognition model to recognize the iris recognition area image, and determining whether the iris recognition area image is an iris image; if it is not an iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information;
driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping steering action when the iris recognition area image is recognized as an iris image through the image recognition model;
and comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
2. The method of claim 1, wherein the acquiring near field acquisition data, determining a target user from the near field acquisition data, comprises:
determining whether the distance value of the target distance data is smaller than or equal to a preset distance threshold value;
when the distance value of the target distance data is smaller than or equal to the preset distance threshold, determining the target residence time data, wherein the target residence time data comprises a first recording time and a second recording time corresponding to the target distance data, and the first recording time is the initial triggering time of the target distance data;
and when the time interval between the first recording time and the second recording time is equal to the preset residence time, determining the detected user as the target user.
3. The method of claim 1, wherein the acquiring current image data and determining an iris recognition area image in the current image data comprises:
determining image acquisition parameters according to the current image data;
determining iris recognition boundary parameters according to the image acquisition parameters;
determining the iris recognition area according to the iris recognition boundary parameters;
and obtaining a corresponding iris recognition area image in the current image data according to the iris recognition area.
4. The method according to claim 1, wherein the determining the high-low position information of the target user according to the current image data comprises:
extracting image features of the current image data;
identifying the image features according to the image identification model, and determining the human body target part information corresponding to the current image data;
and determining the high-low position information of the iris part of the target user relative to the iris recognition area according to the human body target part information.
5. The method of claim 1, wherein the driving the iris recognition equipment to perform steering action according to the high-low position information comprises:
determining a steering control parameter according to the high-low position information;
when the steering control parameter is the forward rotation control parameter, driving the iris recognition equipment to steer to the high position;
and when the steering control parameter is the reverse rotation control parameter, driving the iris recognition equipment to steer to the low position.
6. The method for iris recognition in a human eye image according to claim 1, wherein the comparing iris images in the iris recognition area according to preset iris database information comprises:
carrying out light spot positioning on the iris image in the iris recognition area to determine a light spot image area;
removing the light spot image of the light spot image area to obtain an iris image without light spots;
and comparing the iris image without light spots with the iris database information.
7. The method of iris recognition in a human eye image according to claim 6, wherein the comparing the iris image without light spots with the iris database information comprises:
calling a preset iris recognition model, and determining M iris comparison template images in the iris database information according to the iris image without light spots, wherein each iris comparison template image corresponds to user information;
determining the iris comparison template image with the highest coincidence degree according to the coincidence degree between the iris image without light spots and each iris comparison template image;
and determining the identity of the target user according to the user information corresponding to the iris comparison template image with the highest coincidence degree.
8. An iris recognition apparatus in a human eye image, the apparatus being applied to iris recognition equipment, characterized in that the apparatus comprises:
a first determining module, which is used for acquiring near field acquisition data and determining a target user according to the near field acquisition data, wherein the near field acquisition data comprises target distance data and target residence time data;
the second determining module is used for acquiring current image data and determining iris recognition area images in the current image data;
the third determining module is used for calling a preset image recognition model to recognize the iris recognition area image and determining whether the iris recognition area image is an iris image; if it is not an iris image, determining the high-low position information of the target user according to the current image data, wherein the high-low position information comprises high-position information and low-position information;
the driving control module is used for driving the iris recognition equipment to perform steering action according to the high-low position information, and stopping the steering action when the iris recognition area image is recognized as an iris image through the image recognition model;
and the data comparison module is used for comparing iris images in the iris recognition area according to preset iris database information to obtain a comparison result so as to determine the identity of the target user.
9. An apparatus comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, which when executed by the processor, implements the method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method according to any of claims 1 to 7.
Priority Applications (1)

Application Number: CN202310454022.4A; Priority Date: 2023-04-25; Filing Date: 2023-04-25; Title: Iris recognition method and device in human eye image

Publications (1)

Publication Number: CN116486469A; Publication Date: 2023-07-25

Family

ID: 87215223

Country Status (1)

Country: CN; Publication: CN116486469A (en)


Legal Events

Code: PB01; Description: Publication
Code: SE01; Description: Entry into force of request for substantive examination