CN112764523A - Man-machine interaction method and device based on iris recognition and electronic equipment - Google Patents


Info

Publication number
CN112764523A
Authority
CN
China
Prior art keywords
face
image
iris
user
target position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911000166.2A
Other languages
Chinese (zh)
Other versions
CN112764523B (en)
Inventor
彭程 (Peng Cheng)
周军 (Zhou Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aiku Smart Technology Co ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Shenzhen Aiku Smart Technology Co ltd
Beijing Eyecool Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aiku Smart Technology Co ltd, Beijing Eyecool Technology Co Ltd filed Critical Shenzhen Aiku Smart Technology Co ltd
Priority to CN201911000166.2A priority Critical patent/CN112764523B/en
Publication of CN112764523A publication Critical patent/CN112764523A/en
Application granted granted Critical
Publication of CN112764523B publication Critical patent/CN112764523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G06V 40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06V 40/19 - Sensors therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Ophthalmology & Optometry (AREA)
  • Geometry (AREA)
  • Collating Specific Patterns (AREA)
  • Image Input (AREA)
  • Studio Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an iris-recognition-based human-machine interaction method and apparatus, an electronic device, and a storage medium, belonging to the technical field of biometric recognition. It aims to solve two prior-art problems: poor user experience during iris-based human-machine interaction, and poor overall iris recognition effect and low efficiency caused by the user being unable to judge the proper distance and position relative to the device. The method is applied to an electronic device and comprises: starting a visible light camera and a near-infrared camera to acquire a face image and an iris image of the user respectively, and displaying a face target position prompt box together with the face image acquired by the visible light camera on a screen; judging whether the position and size of the face in the acquired face image conform to the face target position prompt box; if so, performing identity recognition with the iris image acquired by the near-infrared camera; if not, prompting the user to adjust the position and/or distance.

Description

Man-machine interaction method and device based on iris recognition and electronic equipment
Technical Field
The present invention relates to the field of biometric identification technologies, and in particular, to a human-computer interaction method and apparatus based on iris identification, an electronic device, and a storage medium.
Background
User identity recognition is typically the first step in a user's interaction with a device and/or electronic equipment. Detecting the user's identity information and comparing it with the identity information pre-stored in a database is the usual way to recognize a user: when the system judges that the detected identity information matches a pre-stored entry, the user may interact with the device and/or electronic equipment normally; otherwise, the user cannot proceed to further interaction with the device and/or electronic equipment.
Human biometric features, including the face, fingerprint, palm print, iris, voice, gait and the like, are widely used as a means of identity recognition in fields such as finance and security. Among them, face recognition is the most intuitive and the easiest to capture, while iris recognition offers uniqueness and high security, so both are widely applied in identity recognition scenarios.
In the process of iris recognition, the iris image of the current user is generally acquired with a near-infrared camera and displayed directly to the user as a real-time preview, so that the user can adjust his or her posture for re-acquisition or confirm the acquired iris image.
In the prior art, the near-infrared camera has a small field of view, so the acquired preview generally contains only the user's eye region, and the interactive interface is black and white, giving a poor user experience. Moreover, because the user cannot judge the proper distance and position relative to the device and/or electronic equipment during iris recognition, the overall effect and efficiency of iris recognition suffer.
Disclosure of Invention
To solve the above technical problems, embodiments of the present invention provide a human-computer interaction method and apparatus based on iris recognition, an electronic device, and a storage medium, addressing the prior-art problems of poor user experience during iris-based human-computer interaction and of poor overall iris recognition effect and low efficiency caused by the user being unable to judge the proper distance and position relative to the apparatus and/or electronic device.
The embodiment of the invention provides the following technical scheme:
In one aspect, a human-computer interaction method based on iris recognition is provided, applied to an electronic device on which a visible light camera for acquiring face images, a near-infrared camera for acquiring iris images, and a screen are arranged. The method comprises the following steps:
starting a visible light camera and a near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not, and if so, carrying out identity recognition by using an iris image acquired by a near-infrared camera; if not, the user is prompted to adjust the location and/or distance.
In some embodiments of the present invention, the determining whether the position and size of the face in the acquired face image are consistent with the face target position prompt box further includes:
and judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not and whether the number of iris pixels in the acquired iris image is within a preset number range or not.
In some embodiments of the present invention, the prompting the user to adjust the position and/or distance comprises:
and prompting through different colors of the prompt box of the face target position, and/or prompting through voice.
In some embodiments of the present invention, the prompting the user to adjust the position and/or distance comprises:
when the size of the face in the collected face image is larger than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is larger than a first preset threshold value, prompting a user to get away from the electronic equipment;
when the size of the face in the collected face image is smaller than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is smaller than a second preset threshold value, prompting a user to approach the electronic equipment;
and when the face in the acquired face image is not positioned in the center of the face target position prompt box, prompting the user to move towards the center of the face target position prompt box.
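As a rough illustration only (not part of the claimed subject matter), the three prompt conditions above can be sketched as follows. All names are hypothetical, and the concrete values of the first and second preset thresholds are assumptions, since the text does not fix them:

```python
def adjustment_prompt(face_w: int, box_w: int, iris_pixels: int,
                      face_centered: bool,
                      near_threshold: int = 60000,   # first preset threshold (assumed value)
                      far_threshold: int = 20000     # second preset threshold (assumed value)
                      ) -> list[str]:
    """Return the user prompts implied by the three conditions above."""
    prompts = []
    # Too close: face overflows the prompt box, or the iris is oversampled.
    if face_w > box_w or iris_pixels > near_threshold:
        prompts.append("please move away from the device")
    # Too far: face underfills the prompt box, or the iris is undersampled.
    elif face_w < box_w or iris_pixels < far_threshold:
        prompts.append("please move closer to the device")
    # Off-centre: face is not at the centre of the prompt box.
    if not face_centered:
        prompts.append("please move toward the centre of the prompt box")
    return prompts
```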
In some embodiments of the invention, the visible light camera and the near-infrared camera are both fixedly mounted such that, when the visible light camera captures a frontal face image, the near-infrared camera can capture a clear iris image.
In some embodiments of the present invention, a processor, a carrier capable of horizontally rotating, and a motor for driving the carrier to rotate are disposed in the electronic device, the visible light camera and the near-infrared camera are disposed on the carrier, and signal output ends of the visible light camera and the near-infrared camera, and a control end of the motor are both connected to the processor;
judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not, and if so, carrying out identity recognition by using an iris image acquired by a near-infrared camera; if not, prompting the user to adjust the position and/or distance, including:
detecting the face image collected by the visible light camera to obtain a first current position of human eyes in the face image;
calculating a first difference between the first current position and a first target position in the face image in the vertical direction, and when the absolute value of the first difference is greater than a third preset threshold, calculating to obtain a first angle difference according to the first difference;
generating a first control instruction according to the first angle difference so as to enable the motor to rotate by a corresponding angle according to the first control instruction;
acquiring an iris image of the user by using the near-infrared camera, detecting the iris image, and obtaining a second current position of the human eye in the iris image;
calculating a second difference value between the second current position and the second target position in the iris image in the vertical direction, and calculating to obtain a second angle difference according to the second difference value when the absolute value of the second difference value is greater than a fourth preset threshold value;
generating a second control instruction according to the second angle difference so as to enable the motor to rotate by a corresponding angle according to the second control instruction;
and acquiring the iris image of the user again by using the near-infrared camera, and carrying out identity recognition by using the iris image.
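The vertical-alignment steps above can be sketched as a small correction loop. Converting a pixel difference into an angle difference depends on the camera optics; a pinhole-camera approximation is assumed here, and every name is illustrative rather than taken from the patent:

```python
import math

def pixel_diff_to_angle(diff_px: float, image_height_px: int,
                        vertical_fov_deg: float) -> float:
    """Approximate the rotation (degrees) that shifts the eye position by
    diff_px pixels vertically, under a pinhole-camera model."""
    # Focal length in pixels for the given vertical field of view.
    f_px = (image_height_px / 2) / math.tan(math.radians(vertical_fov_deg / 2))
    return math.degrees(math.atan(diff_px / f_px))

def align_camera(current_y: float, target_y: float, threshold_px: float,
                 image_height_px: int, fov_deg: float,
                 rotate_motor) -> bool:
    """One correction step: if the detected eye position misses the target
    position by more than the threshold, issue a motor command.
    Returns True when a rotation was commanded."""
    diff = current_y - target_y
    if abs(diff) <= threshold_px:
        return False
    rotate_motor(pixel_diff_to_angle(diff, image_height_px, fov_deg))
    return True
```

The same step would run once on the face image (first difference, third threshold) and once on the iris image (second difference, fourth threshold) before the final iris acquisition.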
In some embodiments of the present invention, the calculating a first difference between the first current position in the face image and the first target position in the vertical direction, and when an absolute value of the first difference is greater than a third preset threshold, calculating a first angle difference according to the first difference, includes:
and multiplying the first angle difference by a coefficient k for correction, wherein the coefficient k is equal to the ratio of the size of the human face target position prompt box to the size of the current human face in the human face image.
In another aspect, a human-computer interaction apparatus based on iris recognition is provided, applied to an electronic device on which a visible light camera for acquiring face images, a near-infrared camera for acquiring iris images, and a screen are arranged. The apparatus comprises:
the starting and displaying module is used for starting the visible light camera and the near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
the judging module is used for judging whether the position and the size of the face in the acquired face image are consistent with a face target position prompt box or not, and if so, the iris image acquired by the near-infrared camera is used for identity recognition; if not, the user is prompted to adjust the location and/or distance.
In some embodiments of the present invention, the determining whether the position and size of the face in the acquired face image are consistent with the face target position prompt box further includes:
and judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not and whether the number of iris pixels in the acquired iris image is within a preset number range or not.
In some embodiments of the present invention, the prompting the user to adjust the position and/or distance comprises:
and prompting through different colors of the prompt box of the face target position, and/or prompting through voice.
In some embodiments of the present invention, the prompting the user to adjust the position and/or distance comprises:
when the size of the face in the collected face image is larger than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is larger than a first preset threshold value, prompting a user to get away from the electronic equipment;
when the size of the face in the collected face image is smaller than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is smaller than a second preset threshold value, prompting a user to approach the electronic equipment;
and when the face in the acquired face image is not positioned in the center of the face target position prompt box, prompting the user to move towards the center of the face target position prompt box.
In some embodiments of the invention, the visible light camera and the near-infrared camera are both fixedly mounted such that, when the visible light camera captures a frontal face image, the near-infrared camera can capture a clear iris image.
In some embodiments of the present invention, a processor, a carrier capable of horizontally rotating, and a motor for driving the carrier to rotate are disposed in the electronic device, the visible light camera and the near-infrared camera are disposed on the carrier, and signal output ends of the visible light camera and the near-infrared camera, and a control end of the motor are both connected to the processor;
the judging module comprises:
the first detection subunit is used for detecting the face image acquired by the visible light camera to obtain a first current position of human eyes in the face image;
the first calculating subunit is configured to calculate a first difference between the first current position in the face image and a first target position in the vertical direction, and when an absolute value of the first difference is greater than a third preset threshold, calculate a first angle difference according to the first difference;
the first control subunit is used for generating a first control instruction according to the first angle difference so as to enable the motor to rotate by a corresponding angle according to the first control instruction;
the second detection subunit is used for acquiring an iris image of the user by using the near-infrared camera, detecting the iris image and obtaining a second current position of the human eye in the iris image;
the second calculating subunit is configured to calculate a second difference between the second current position in the iris image and the second target position in the vertical direction, and when an absolute value of the second difference is greater than a fourth preset threshold, calculate a second angle difference according to the second difference;
the second control subunit is used for generating a second control instruction according to the second angle difference so as to enable the motor to rotate by a corresponding angle according to the second control instruction;
and the identification subunit is used for acquiring the iris image of the user again by using the near-infrared camera and carrying out identity identification by using the iris image.
In some embodiments of the invention, the first computing subunit includes:
and the correction submodule is used for multiplying the first angle difference by a coefficient k to correct, wherein the coefficient k is equal to the ratio of the size of the human face target position prompt box to the size of the current human face in the human face image.
In still another aspect, an electronic device is provided, comprising a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged inside the space enclosed by the shell, and the processor and the memory are arranged on the circuit board; the power circuit supplies power to each circuit or component of the electronic device; the memory stores executable program code; and the processor runs the program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform any of the methods described above.
In yet another aspect, a computer readable storage medium is provided that stores one or more programs, which are executable by one or more processors to implement any of the methods described above.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a man-machine interaction method, a man-machine interaction device, electronic equipment and a storage medium based on iris recognition.A visible light camera and a near infrared camera are started to respectively collect a face image and an iris image of a user, a face target position prompt box and the face image collected by the visible light camera are displayed on a screen, then whether the position and the size of a face in the collected face image are consistent with the face target position prompt box or not is judged, and if so, the iris image collected by the near infrared camera is utilized for identity recognition; if not, the user is prompted to adjust the location and/or distance. Therefore, on one hand, the whole face area can be displayed by displaying the face image collected by the visible light camera on the screen as a preview image, and the interactive interface is a color interface, so that the visual effect is better, and the user experience is improved; on the other hand, the distance and/or the position between the user and the electronic equipment are/is prompted by the face target position prompt box, the effect is visual, the user can purposefully adjust the position relation between the user and the electronic equipment up, down, left, right, front and back according to the prompt information, so that the electronic equipment can rapidly acquire the iris image meeting the identity recognition requirement, and the overall effect and the efficiency of iris recognition are improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an embodiment of a human-computer interaction method based on iris recognition according to the present invention;
FIG. 2 is a schematic diagram of a screen interface of an electronic device using the method of FIG. 1, wherein (a) is an initial state diagram and (b) is a final state diagram;
FIG. 3 is a flowchart illustrating the step S102 in the embodiment of the method shown in FIG. 1;
FIG. 4 is a schematic diagram of an electronic device using the method of FIG. 3;
FIG. 5 is a schematic diagram of the method shown in FIG. 3, in which (a) is a schematic diagram of the angle of view of the camera and the position of the human eye in the initial state, and (b) is a schematic diagram of the angle of view of the camera and the position of the human eye after one rotation adjustment;
FIG. 6 is a schematic structural diagram of an embodiment of a human-computer interaction device based on iris recognition according to the present invention;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can obtain from them without creative effort fall within the protection scope of the present invention.
It should be noted that all directional indicators in the embodiments of the present invention (such as up, down, left, right, front and rear) are only used to explain the relative positional relationships, movements and so on between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
In addition, descriptions involving "first", "second" and the like in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. Furthermore, the technical solutions of the various embodiments may be combined with one another, provided that the combination can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
On one hand, the embodiment of the invention provides a human-computer interaction method based on iris recognition, which is applied to electronic equipment, wherein the electronic equipment is provided with a visible light camera for acquiring a face image, a near infrared camera for acquiring an iris image and a screen;
the execution subject of the method process may be a program on an electronic device, such as an identification program on an iris lock, and the like, where the electronic device includes but is not limited to an iris lock, an iris gate, a mobile terminal, an iris doorbell, and the like, where the mobile terminal includes but is not limited to a smartphone, a tablet computer, an IPAD, a smart wearable device, and the like;
as shown in fig. 1, the man-machine interaction method based on iris recognition may include:
step S101: starting a visible light camera and a near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
in this step, can open visible light camera and near-infrared camera simultaneously, the benefit of opening simultaneously lies in, on the one hand can improve the gesture uniformity of current user in iris image and face image, and on the other hand, when judging face image and the iris image of gathering and all satisfy the condition, can utilize the iris image of gathering to carry out identification rapidly, improves recognition speed. Here, it is not required that the time is not a minute difference, and generally, the execution may be started in parallel or started almost (the time difference is not more than a second level) in parallel.
For example, image acquisition instructions may be sent to the visible light camera and the near-infrared camera at the same time. Assuming each camera starts acquiring its image immediately on receiving the instruction, and ignoring differences in instruction transmission delay, the face image and the iris image of the current user can be considered to start being acquired simultaneously.
Because the visible light camera used for acquiring the face image has a large field of view (generally 90 degrees or more), the acquired face image is a large colour image; because the near-infrared camera used for acquiring the iris image has a small field of view (typically about 30 degrees), the acquired iris image is a small image. The face image may therefore contain the whole face of the current user, while the iris image generally contains only part of the face.
In this step, the face image acquired by the visible light camera is displayed on the screen as the preview image; the whole face region can be shown, and the interactive interface is a colour interface, which improves the user experience. The face target position prompt box indicates the target position and size of the user's face. Its position and size can be obtained in advance by having different users stand at the optimal distance and position in front of the device, recording the position and size of each face as presented on the screen, and averaging the results. Understandably, adult and child faces differ in size, so prompt boxes of different sizes may be used accordingly: when the face image is acquired, the age of the current user can first be estimated, and the prompt box sized for that age is then displayed on the screen.
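The age-dependent selection of the prompt box described above could be sketched as follows. The box dimensions and the age cut-off are invented placeholders, not values from the patent:

```python
# Hypothetical prompt-box sizes (width, height) in screen pixels, calibrated
# in advance by averaging where faces of each age group appear on screen
# when users stand at the optimal distance and position.
PROMPT_BOXES = {
    "child": (160, 210),   # smaller faces -> smaller box (assumed values)
    "adult": (200, 260),   # (assumed values)
}

def select_prompt_box(estimated_age: int) -> tuple[int, int]:
    """Pick the prompt box matching the estimated age of the current user."""
    return PROMPT_BOXES["child" if estimated_age < 14 else "adult"]
```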
Step S102: judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not, and if so, carrying out identity recognition by using an iris image acquired by a near-infrared camera; if not, the user is prompted to adjust the location and/or distance.
In this step, because the face image is easy to acquire and can be observed intuitively by the user, the method first judges whether the position and size of the face in the face image conform to the face target position prompt box. If they conform, the acquired iris image can be used for iris recognition, and identity recognition is performed with it; if not, the acquired iris image cannot be used for iris recognition, and the user is prompted to adjust the position and/or distance.
It should be noted that "conform" here means that the difference between the two is within a preset range (for example, within 5% or 10% of the size of the face target position prompt box); the position and size of the face in the face image need not be exactly equal to those of the prompt box.
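The tolerance test described above can be sketched as a small predicate (names hypothetical; the 10% tolerance is taken from the example figures in the text):

```python
def conforms(face_x: float, face_y: float, face_w: float, face_h: float,
             box_x: float, box_y: float, box_w: float, box_h: float,
             rel_tol: float = 0.10) -> bool:
    """True when the position and size of the detected face each differ
    from the prompt box by no more than rel_tol of the box size."""
    return (abs(face_x - box_x) <= rel_tol * box_w
            and abs(face_y - box_y) <= rel_tol * box_h
            and abs(face_w - box_w) <= rel_tol * box_w
            and abs(face_h - box_h) <= rel_tol * box_h)
```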
Thus, when the position and size of the face in the acquired face image are judged to conform to the face target position prompt box, the user can conclude intuitively that the current position and distance to the electronic device are suitable, and the acquired iris image can be used directly for identity recognition, improving the efficiency of iris recognition. Meanwhile, the user sees a colour face image on the screen and feels as if the face image were being used for identity recognition, while the iris image is actually used, which improves both security and user experience.
After the user passes identity recognition, the user is allowed to perform subsequent interactive operations with the electronic device: for example, when the electronic device is an iris lock, the user may configure the lock body, check the pass records and so on; when the electronic device is a smartphone, the user may operate and use the phone. If the user fails identity recognition, subsequent interactive operations on the electronic device are refused.
As an alternative embodiment, the determining whether the position and size of the face in the acquired face image are consistent with the face target position prompt box further includes:
and judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not and whether the number of iris pixels in the acquired iris image is within a preset number range (such as 10-25pix/mm) or not.
In this way, by additionally judging the number of iris pixels in the iris image, whether the acquired iris image can be used for identity recognition is determined more accurately, and the prompt information is more precise. The number of iris pixels may specifically be the number of pixels contained in the circular area corresponding to the outer ring of the iris; generally, the outer ring of the iris is about 1 cm in diameter, and the corresponding circular area needs about 150 pixels.
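Following the figures quoted above (iris outer-ring diameter of about 1 cm and an acceptable density of 10-25 px/mm), the pixel-count check could be sketched as follows; names and the diameter-based formulation are assumptions for illustration:

```python
IRIS_DIAMETER_MM = 10.0  # typical outer-ring diameter quoted in the text

def iris_density_ok(outer_ring_diameter_px: float,
                    min_density: float = 10.0,
                    max_density: float = 25.0) -> bool:
    """True when the sampled iris resolution lies within the preset range
    of pixels per millimetre across the iris outer ring."""
    density = outer_ring_diameter_px / IRIS_DIAMETER_MM
    return min_density <= density <= max_density
```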
As another alternative, to improve the convenience of the user, the prompting the user to adjust the position and/or distance may include:
Prompting by different colors of the face target position prompt box: a corresponding color is set for the prompt box so that the user can judge more intuitively whether the position and size of the face in the currently acquired face image match the box. For example, when the position and size of the face match the prompt box, the box is shown in green; when the face is larger than the box, the box is shown in red; and when the face is smaller than the box, the box is shown in blue. In this way, even users with severe myopia can judge, from the color of the box alone, whether the acquired face image meets the requirement, and then adjust their position and/or distance relative to the terminal device according to the color of the prompt box;
and/or prompting by voice. For example, when the position and size of the face match the face target position prompt box, the user is told by voice that the current position is good and asked to hold still; when they do not match, the user is told by voice in which direction to move.
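The green/red/blue scheme described above can be sketched as a simple mapping; the 10% size tolerance is an assumed parameter, not a value from the source:

```python
def prompt_box_color(face_size, target_size, tolerance=0.1):
    """Map the relative face size to a prompt-box color following the
    scheme above. Sizes are any consistent measure (e.g. bounding-box
    width in pixels); `tolerance` is an illustrative assumption."""
    ratio = face_size / target_size
    if ratio > 1.0 + tolerance:
        return "red"    # face larger than box: user too close
    if ratio < 1.0 - tolerance:
        return "blue"   # face smaller than box: user too far
    return "green"      # size matches: position is good
```

A UI layer would then recolor the prompt box (and optionally trigger the voice prompt) from this single result.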
As another optional embodiment, the prompting the user to adjust the position and/or the distance may further include:
when the size of the face in the collected face image is larger than the size of the face target position prompt box and/or the number of iris pixels in the collected iris image is larger than a first preset threshold, prompting the user to move away from the electronic device;
when the size of the face in the collected face image is smaller than the size of the face target position prompt box and/or the number of iris pixels in the collected iris image is smaller than a second preset threshold, prompting the user to move closer to the electronic device;
and when the face in the acquired face image is not located in the center of the face target position prompt box, prompting the user to move toward the center of the face target position prompt box.
Among these, the first two cases correspond to front-back distance adjustment and the last to up/down/left/right position adjustment, as further illustrated below:
for front and rear distance adjustment
When the size of the face in the acquired face image is larger than the size of a face target position prompt box and the number of iris pixels in the acquired iris image is larger than a first preset threshold (for example, 25pix/mm), judging that the distance between the current user and the electronic equipment is too short, displaying the face target position prompt box in red, and simultaneously guiding the user to be far away from the electronic equipment by voice;
when the size of the face in the acquired face image is smaller than the size of a face target position prompt box and the number of iris pixels in the acquired iris image is smaller than a second preset threshold (for example, 10pix/mm), judging that the distance between the current user and the electronic equipment is too far, displaying the face target position prompt box as blue, and simultaneously guiding the user to approach the electronic equipment by voice;
when the size of the face in the acquired face image is close to the size of the face target position prompt box and the number of iris pixels in the acquired iris image is within a certain range, for example 10-25 pix/mm with 16-20 pix/mm as the central zone, the voice guides the user to move slightly closer or slightly farther.
For adjusting up, down, left and right positions
When the face in the acquired face image is not located in the center of the face target position prompt box (i.e., it deviates from the box; specifically, the deviation can be judged from the offset between the center of the face and the center of the box, or between the edge of the face frame and the edge of the box), the user is prompted to move toward the center of the face target position prompt box.
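Taken together, the front-back and centering checks above can be sketched as one prompt-selection routine; the box format, the 10% centering tolerance, and the prompt strings are illustrative assumptions:

```python
def adjustment_prompt(face_box, target_box, iris_density,
                      min_density=10.0, max_density=25.0):
    """Combine the size/density and centering checks into one textual
    prompt. Boxes are (cx, cy, w, h) tuples; the density thresholds
    mirror the 10-25 pix/mm figures in the text."""
    fcx, fcy, fw, fh = face_box
    tcx, tcy, tw, th = target_box
    # front/back: face larger than box or density too high -> too close
    if fw > tw or iris_density > max_density:
        return "move away from the device"
    if fw < tw or iris_density < min_density:
        return "move closer to the device"
    # up/down/left/right: center offset beyond a fraction of box size
    if abs(fcx - tcx) > 0.1 * tw or abs(fcy - tcy) > 0.1 * th:
        return "move toward the center of the prompt box"
    return "hold still"
```

The same result can drive both the box color and the voice guidance described earlier.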
In the embodiment shown in fig. 2, in the initial state shown in fig. 2(a), the face in the acquired face image is located right below the face target position prompt box 18 (shown as a dashed line frame), and the user is prompted to move upward, and in the final state after the movement, as shown in fig. 2(b), the face in the acquired face image is located in the center of the face target position prompt box 18. In this embodiment, the size of the face in the acquired face image is equal to the size of the face target position prompt box, and the front-back distance does not need to be adjusted, so that the state shown in fig. 2(b) is that the position and size of the face coincide with the face target position prompt box 18.
To sum up, the man-machine interaction method based on iris recognition provided by the embodiment of the invention comprises the steps of firstly starting a visible light camera and a near infrared camera to respectively collect a face image and an iris image of a user, displaying a face target position prompt frame and the face image collected by the visible light camera on a screen, then judging whether the position and the size of a face in the collected face image are consistent with the face target position prompt frame, and if so, carrying out identity recognition by using the iris image collected by the near infrared camera; if not, the user is prompted to adjust the location and/or distance. Therefore, on one hand, the whole face area can be displayed by displaying the face image collected by the visible light camera on the screen as a preview image, and the interactive interface is a color interface, so that the visual effect is better, and the user experience is improved; on the other hand, the distance and/or the position between the user and the electronic equipment are/is prompted by the face target position prompt box, the effect is visual, the user can purposefully adjust the position relation between the user and the electronic equipment up, down, left, right, front and back according to the prompt information, so that the electronic equipment can rapidly acquire the iris image meeting the identity recognition requirement, and the overall effect and the efficiency of iris recognition are improved.
In the embodiment of the present invention, the positions of the visible light camera and the near infrared camera may be fixed or may be rotatably adjusted. When the positions of the visible light camera and the near infrared camera are fixed, the positions and the angles of the visible light camera and the near infrared camera are not adjusted in the identity recognition process; when the positions of the visible light camera and the near-infrared camera can be rotationally adjusted, the positions and the angles of the visible light camera and the near-infrared camera can be rotationally adjusted as required in the identity recognition process. This is described in detail as follows:
the first situation is as follows: position fixing of visible light camera and near infrared camera
In this case, the visible light camera and the near-infrared camera are fixedly mounted at preset positions, and their positions and angles do not need to be adjusted when collecting the face image and the iris image. Since the distance between the human eyes and the terminal device when the face image is collected is the same as when the iris image is collected, the visible light camera and the near-infrared camera preferably have the same shooting depth of field so that both images are sharp at the same distance.
The visible light camera and the near-infrared camera may be arranged adjacently in the vertical direction, with the near-infrared camera above and the visible light camera below, or vice versa, so that the shooting angle of view of the visible light camera covers that of the near-infrared camera. In addition, the optical axis of the iris lens (near-infrared camera) forms a certain angle with the optical axis of the face lens (visible light camera), so that when the visible light camera shoots a frontal face image, the shooting angle of the near-infrared camera covers the eye region and a clear iris image can be captured; the two lenses may be separated by a preset distance, which can be fine-tuned according to the lens size.
Or the visible light camera and the near infrared camera are adjacently arranged in the horizontal direction, the optical axis of the near infrared camera is positioned between the eyes of the user, the visible light camera is positioned on the left side or the right side of the near infrared camera, and the distance between the visible light camera and the near infrared camera is preferably between 5mm and 30 mm. An included angle is formed between the optical axis of the visible light camera and the optical axis of the near-infrared camera, and the included angle between the visible light camera and the near-infrared camera can range from 6 degrees to 10 degrees in the vertical direction.
In this embodiment, since the positions of the visible light camera and the near-infrared camera are fixed, when it is determined that the position and the size of the face in the acquired face image do not conform to the face target position prompt box, the user needs to adjust the face in the upward, downward, leftward, rightward, forward, and backward directions according to the prompt information, and after the position and the size of the face in the face image conform to the face target position prompt box, the acquired iris image is used for identity recognition.
Case two: position rotatable adjustment of visible light camera and near-infrared camera
In this case, as shown in fig. 4, a processor 10, a rotatable stage 11, and a motor 12 for driving the stage 11 to rotate may be disposed in the electronic device; a visible light camera 13 and a near-infrared camera 14 are disposed on the stage 11, and the signal output terminals of the visible light camera 13 and the near-infrared camera 14 and the control terminal of the motor 12 are connected to the processor 10. It should be noted that although the positions of the visible light camera and the near-infrared camera are rotatably adjustable, in consideration of the controllability of the adjustment, the service life, and other factors, the stage only rotates the two cameras together about a horizontal axis (i.e., tilting them in the up-down direction).
As shown in fig. 3, the method determines whether the position and size of the face in the acquired face image are in accordance with the face target position prompt box, and if so, performs identity recognition by using the iris image acquired by the near-infrared camera; if not, the user is prompted to adjust the position and/or distance (step S102), which preferably includes:
step S1021: detecting the face image collected by the visible light camera to obtain a first current position of human eyes in the face image;
In this step, face detection may use any known method, for example the Cascade-CNN-based face detection algorithm FaceCraft, and the facial feature points may be located in combination with the SDM (Supervised Descent Method) to obtain the first current position of the eyes. It should be noted that the face detection method is not limited to FaceCraft and may also be Haar-AdaBoost, SSD (Single Shot MultiBox Detector), Faster RCNN, and the like; the eye position calculation method is not limited to SDM and may also be LBF (Local Binary Features), LAB (Locally Assembled Binary), and the like.
It can be understood that, at this point, it is also possible to judge simultaneously whether the position and size of the face in the acquired face image match the face target position prompt box, and to prompt the user to adjust the front-back distance and left-right position (the up-down position does not need to be adjusted). Prompting the user to adjust while the face image is being acquired, rather than waiting until the iris image is acquired, saves time and improves image acquisition efficiency.
Step S1022: calculating a first difference between the first current position and a first target position in the face image in the vertical direction, and when the absolute value of the first difference is greater than a third preset threshold, calculating to obtain a first angle difference according to the first difference;
The inventor finds that the horizontal fields of view of the visible light camera 13 and the near-infrared camera 14 are generally wide, so when a user stands in front of the electronic device, the eyes essentially always fall within the horizontal field of view; therefore, in fig. 4, the stage 11 only needs to rotate up and down, not left and right. However, the vertical field angle α of the visible light camera 13 is relatively large (typically 90 degrees) while the vertical field angle β of the near-infrared camera 14 is relatively small (typically 30 degrees), so the human eye 15 may fall within the field of view of the visible light camera 13 but outside that of the near-infrared camera 14, as shown in fig. 5(a). Hence, to allow the electronic device to acquire a clear iris image, the stage 11 must be adjusted to rotate the visible light camera 13 and the near-infrared camera 14 upward or downward until the eye 15 falls within the field of view of the near-infrared camera 14, as shown in fig. 5(b).
Because the visible light camera 13 and the near-infrared camera 14 are both mounted on the stage 11 with fixed relative positions, there exists a specific area in the acquired face image: if the eyes are located within this area, they also fall within the field of view of the near-infrared camera 14; if not, they do not. A central position of this specific area is chosen as the first target position.
The first current position and the first target position may each be an area or a single point; the following takes points as an example. As shown in fig. 2(a), in the face image displayed on the screen, if the eyes are located within the dotted circle 19, the eyes fall within the field of view of the near-infrared camera, so the center of the dotted circle 19 may be chosen as the first target position (note that the dotted circle 19 is drawn in fig. 2(a) only to illustrate the first target position and need not be shown on the screen during actual display, as in fig. 2(b)). Taking only the left eye as an example, with its center as the first current position, assume the coordinates of the current left-eye center (the first current position) are (100, 250) and the coordinates of the center of the left dotted circle (the first target position) are (100, 400); the first difference (pixel difference) between them in the vertical direction is then 250 - 400 = -150.
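The first-difference computation can be sketched as follows; coordinates are (x, y) pixel positions, and the example values are illustrative:

```python
def vertical_pixel_difference(current_eye, target_eye):
    """First difference: vertical pixel offset between the detected eye
    center and the first target position. A negative value means the
    eye sits above the target in image coordinates."""
    return current_eye[1] - target_eye[1]
```

With a current eye center of (100, 250) and a target of (100, 400), the result is -150, matching the worked example.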
When the absolute value of the first difference is smaller than or equal to the third preset threshold, the eyes already fall within the field of view of the near-infrared camera; no rotation adjustment based on the face image data is needed, and the method can jump directly to step S1024. When the absolute value of the first difference is greater than the third preset threshold, the eyes do not fall within the field of view of the near-infrared camera, and rotation adjustment based on the face image data is required. The magnitude of the third preset threshold can be set flexibly according to the actual situation; for example, it can be set to 30 in this embodiment. Since the absolute value of -150 is greater than 30, the rotation adjustment of the subsequent step S1023 is needed.
It can be understood that the first difference d between the first current position of the eye and the first target position in the vertical direction corresponds approximately linearly to the required rotation angle of the stage, so the required stage/camera rotation, i.e. the first angle difference θ shown in fig. 5(b), can be calculated from the first difference. In this embodiment, assuming the calculated first angle difference θ is 40 degrees, the eyes will fall within the field of view of the near-infrared camera after the stage rotates downward by 40 degrees.
Furthermore, the inventor finds that the first angle difference is slightly related to the distance between the user and the camera: when the user is close, the face in the screen is slightly larger and the rotation angle can be slightly smaller; when the user is far, the face is slightly smaller and the rotation angle can be slightly larger. The first angle difference is therefore preferably corrected by multiplying it by a coefficient k equal to the ratio of the size of the face target position prompt box to the size of the current face in the face image.
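A sketch of the pixel-to-angle mapping with the correction coefficient k; the linear calibration constant `degrees_per_pixel` is an assumption for illustration, not a value given in the source:

```python
def rotation_angle(pixel_diff, degrees_per_pixel,
                   target_box_size, current_face_size):
    """Map the vertical pixel difference to a stage rotation angle.

    The text states the mapping is roughly linear in the pixel
    difference and is corrected by k = target box size / current face
    size. `degrees_per_pixel` is an assumed per-device calibration.
    """
    k = target_box_size / current_face_size
    return k * pixel_diff * degrees_per_pixel
```

When the face already fills the prompt box (k = 1), the angle is the uncorrected linear value; a close-up face (k < 1) shrinks the commanded rotation, as described above.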
Step S1023: generating a first control instruction according to the first angle difference so as to enable the motor to rotate by a corresponding angle according to the first control instruction;
in this step, after the motor rotates by a corresponding angle, the eye position in the face image can be moved from the current position (the position of the solid line eye in fig. 2 (a)) to the target/desired position (the dotted circle position in fig. 2(a), and the screen effect after the movement is as shown in fig. 2 (b)), so that the eyes of the user fall into the field range of the near-infrared camera (as shown in fig. 5 (b)).
Therefore, it can be seen that the above steps S1021 to S1023 implement "large rotation" adjustment by using the face image, so that the position of the human eye falls into the field of view of the near-infrared camera as soon as possible.
Step S1024: acquiring an iris image of the user by using the near-infrared camera, detecting the iris image, and obtaining a second current position of the human eye in the iris image;
in this step, human eye detection can be performed by any method known in the art, such as SDM, LBF, LAB, etc., as described above.
Step S1025: calculating a second difference value between the second current position and the second target position in the iris image in the vertical direction, and calculating to obtain a second angle difference according to the second difference value when the absolute value of the second difference value is greater than a fourth preset threshold value;
The principle of this step is the same as that of step S1022 and is not repeated here; the main difference is that the calculation is performed in the iris image, so the second target position differs from the first target position: the first target position is a preset optimal position in the face image, and the second target position is a preset optimal position in the iris image.
When the absolute value of the second difference is smaller than or equal to the fourth preset threshold, the iris is relatively centered within the field of view of the near-infrared camera and the acquisition quality of the iris image is good; in this case steps S1026-S1027 can be skipped and the currently acquired iris image used directly for user identification. When the absolute value of the second difference is greater than the fourth preset threshold, the iris is not centered within the field of view of the near-infrared camera and the acquisition quality of the iris image is not guaranteed, so rotation adjustment based on the iris image data is needed. Because factors such as the size of the face and its distance from the lens can affect the accuracy of the first (coarse) rotation adjustment, this second rotation adjustment provides finer deviation correction and brings the iris to a stable, centered position where it is easily captured. The magnitude of the fourth preset threshold can be set flexibly according to the actual situation. After the stage rotates through the second angle difference, the iris is centered within the field of view of the near-infrared camera.
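The small-correction decision can be sketched as follows; the threshold and the linear calibration constant are illustrative assumptions:

```python
def fine_correction(second_diff, threshold, degrees_per_pixel):
    """Second-stage ("small correction") step: below the fourth preset
    threshold the iris is already centered and no rotation is issued
    (None); otherwise a small corrective angle is computed under the
    same assumed linear pixel-to-angle calibration as the coarse step.
    """
    if abs(second_diff) <= threshold:
        return None  # iris centered: use the current image directly
    return second_diff * degrees_per_pixel
```

A `None` result corresponds to jumping straight to identification with the current iris image; any other value is the angle passed to the motor as the second control instruction.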
Step S1026: generating a second control instruction according to the second angle difference so as to enable the motor to rotate by a corresponding angle according to the second control instruction;
from the above, the steps S1024-S1026 realize the "small correction" adjustment by using the iris image, so that the iris is centered within the field of view of the near-infrared camera, thereby ensuring the quality of the iris image acquisition.
Step S1027: and acquiring the iris image of the user again by using the near-infrared camera, and carrying out identity recognition by using the iris image.
In this embodiment, because the positions of the visible light camera and the near-infrared camera are rotatably adjustable, the cameras can adapt to the user's position in the up-down direction. Therefore, when the position and size of the face in the acquired face image are judged not to match the face target position prompt box, the user only needs to adjust leftward, rightward, forward, or backward according to the prompt information, without adjusting up or down (no need to stand on tiptoe or squat), which increases convenience of use.
In addition, in this embodiment, the visible light camera and the near-infrared camera are mounted on a rotatable stage. During iris image acquisition, the visible light camera first acquires a face image and the stage is adjusted in a "large rotation" step so that the eye position quickly falls within the field of view of the near-infrared camera; then the near-infrared camera acquires the iris image and the stage is adjusted in a "small correction" step so that the iris is centered within the field of view, ensuring acquisition quality. With this two-step adjustment a clear iris image can be acquired quickly, the device can adapt to users of different heights, and users do not need to actively search for a suitable acquisition position, improving the user experience.
In some embodiments of the present invention, an angle sensor 16 is disposed at the rotation axis of the carrier 11, and a signal output terminal of the angle sensor 16 is connected to the processor 10. The angle sensor 16 is used to detect whether the rotation angle of the stage 11 reaches the aforementioned angle difference, and if not, the processor 10 controls the motor 12 to continue rotating until the angle sensor 16 detects that the rotation angle of the stage 11 reaches the aforementioned angle difference. The angle sensor 16 may be a magnetic encoder, and has the advantages of small size and large rotation range, so that the size of the electronic device can be reduced.
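The closed-loop use of the angle sensor can be sketched as follows; `read_sensor` and `step_motor` are hypothetical callables standing in for the magnetic-encoder read-out and the motor drive, not APIs from the source:

```python
def rotate_until(target_angle, read_sensor, step_motor, tolerance=0.5):
    """Closed-loop rotation sketch: keep stepping the motor until the
    angle sensor reports the stage has turned through the commanded
    angle difference (within a tolerance, in degrees)."""
    while abs(read_sensor() - target_angle) > tolerance:
        # step toward the target; the sensor feedback closes the loop
        direction = 1 if read_sensor() < target_angle else -1
        step_motor(direction)
    return read_sensor()
```

In the device, the processor plays the role of this loop: it compares the encoder reading against the computed angle difference and keeps driving the motor until they agree.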
In other embodiments of the present invention, a near-infrared light supplement lamp 17 may be further disposed on the carrier 11 to supplement light for the near-infrared camera 14 in a dark light. The number of the near-infrared light supplement lamps 17 can be two, and the two near-infrared light supplement lamps are respectively positioned at the left side and the right side of the carrying platform 11.
On the other hand, an embodiment of the present invention provides a human-computer interaction device based on iris recognition, which is applied to an electronic device, where the electronic device is provided with a visible light camera for collecting a face image, a near-infrared camera for collecting an iris image, and a screen, as shown in fig. 6, the device includes:
the starting and displaying module 31 is used for starting the visible light camera and the near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
the judging module 32 is used for judging whether the position and the size of the face in the acquired face image are consistent with the face target position prompt box or not, and if so, the iris image acquired by the near-infrared camera is used for identity recognition; if not, the user is prompted to adjust the location and/or distance.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
Preferably, the judging whether the position and size of the face in the acquired face image are in line with the face target position prompt box further comprises:
and judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not and whether the number of iris pixels in the acquired iris image is within a preset number range or not.
Preferably, the prompting the user to adjust the position and/or the distance includes:
and prompting through different colors of the prompt box of the face target position, and/or prompting through voice.
Preferably, the prompting the user to adjust the position and/or the distance includes:
when the size of the face in the collected face image is larger than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is larger than a first preset threshold value, prompting a user to get away from the electronic equipment;
when the size of the face in the collected face image is smaller than the size of a face target position prompt box and/or the number of iris pixels in the collected iris image is smaller than a second preset threshold value, prompting a user to approach the electronic equipment;
and when the face in the acquired face image is not positioned in the center of the face target position prompt box, prompting the user to move towards the center of the face target position prompt box.
Preferably, the visible light camera and the near-infrared camera are both fixedly arranged, and when the visible light camera shoots the face image on the front side, the near-infrared camera can shoot a clear iris image.
Preferably, a processor, a platform deck capable of horizontally rotating and a motor for driving the platform deck to rotate are arranged in the electronic device, the visible light camera and the near-infrared camera are arranged on the platform deck, and signal output ends of the visible light camera and the near-infrared camera and a control end of the motor are connected to the processor;
the determining module 32 includes:
the first detection subunit is used for detecting the face image acquired by the visible light camera to obtain a first current position of human eyes in the face image;
the first calculating subunit is configured to calculate a first difference between the first current position in the face image and a first target position in the vertical direction, and when an absolute value of the first difference is greater than a third preset threshold, calculate a first angle difference according to the first difference;
the first control subunit is used for generating a first control instruction according to the first angle difference so as to enable the motor to rotate by a corresponding angle according to the first control instruction;
the second detection subunit is used for acquiring an iris image of the user by using the near-infrared camera, detecting the iris image and obtaining a second current position of the human eye in the iris image;
the second calculating subunit is configured to calculate a second difference between the second current position in the iris image and the second target position in the vertical direction, and when an absolute value of the second difference is greater than a fourth preset threshold, calculate a second angle difference according to the second difference;
the second control subunit is used for generating a second control instruction according to the second angle difference so as to enable the motor to rotate by a corresponding angle according to the second control instruction;
and the identification subunit is used for acquiring the iris image of the user again by using the near-infrared camera and carrying out identity identification by using the iris image.
Preferably, the first computing subunit includes:
and the correction submodule is used for multiplying the first angle difference by a coefficient k to correct, wherein the coefficient k is equal to the ratio of the size of the human face target position prompt box to the size of the current human face in the human face image.
An embodiment of the present invention further provides an electronic device, fig. 7 is a schematic structural diagram of an embodiment of the electronic device of the present invention, and a flow of the embodiment shown in fig. 1 of the present invention may be implemented, as shown in fig. 7, where the electronic device may include: the device comprises a shell 41, a processor 42, a memory 43, a circuit board 44 and a power circuit 45, wherein the circuit board 44 is arranged inside a space enclosed by the shell 41, and the processor 42 and the memory 43 are arranged on the circuit board 44; a power supply circuit 45 for supplying power to each circuit or device of the electronic apparatus; the memory 43 is used for storing executable program code; the processor 42 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 43, for performing the method described in any of the method embodiments described above.
For the specific process by which the processor 42 performs the above steps, and for the further steps the processor 42 performs by running the executable program code, reference may be made to the description of the embodiment shown in fig. 1 of the present invention, which is not repeated here.
The electronic device exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: these devices are characterized by mobile communication capability and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access capability. Such terminals include PDA, MID, and UMPC devices (e.g., iPads).
(3) Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (e.g., iPods), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
(4) Servers: devices that provide computing services, comprising a processor, a hard disk, memory, a system bus, and the like. A server is similar in architecture to a general-purpose computer, but because it must provide highly reliable services, it has higher requirements for processing capacity, stability, reliability, security, scalability, manageability, and the like.
(5) Other electronic devices with data interaction functions.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method steps described in any of the above method embodiments.
An embodiment of the present invention further provides an application program which, when executed, implements the method provided by any method embodiment of the present invention.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A human-computer interaction method based on iris recognition, applied to an electronic device, wherein a visible light camera for acquiring a face image, a near-infrared camera for acquiring an iris image, and a screen are arranged on the electronic device, the method comprising:
starting a visible light camera and a near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
judging whether the position and size of the face in the acquired face image conform to the face target position prompt box; if so, performing identity recognition using the iris image acquired by the near-infrared camera; if not, prompting the user to adjust position and/or distance.
2. The method of claim 1, wherein the determining whether the position and size of the face in the captured face image matches the face target position prompt box further comprises:
and judging whether the position and the size of the face in the acquired face image accord with a face target position prompt box or not and whether the number of iris pixels in the acquired iris image is within a preset number range or not.
3. The method of claim 1, wherein prompting the user to adjust the position and/or distance comprises:
prompting through different colors of the face target position prompt box, and/or prompting by voice.
4. The method of claim 1, wherein prompting the user to adjust the position and/or distance comprises:
when the size of the face in the acquired face image is larger than the size of the face target position prompt box and/or the number of iris pixels in the acquired iris image is greater than a first preset threshold, prompting the user to move away from the electronic device;
when the size of the face in the acquired face image is smaller than the size of the face target position prompt box and/or the number of iris pixels in the acquired iris image is less than a second preset threshold, prompting the user to move closer to the electronic device;
and when the face in the acquired face image is not located in the center of the face target position prompt box, prompting the user to move toward the center of the face target position prompt box.
5. The method of claim 1, wherein the visible light camera and the near-infrared camera are both fixed, and when the visible light camera captures a frontal face image, the near-infrared camera can capture a clear iris image.
6. The method according to claim 1, wherein a processor, a stage capable of horizontally rotating, and a motor for driving the stage to rotate are arranged in the electronic device, the visible light camera and the near infrared camera are arranged on the stage, and signal output ends of the visible light camera and the near infrared camera and a control end of the motor are connected to the processor;
judging whether the position and size of the face in the acquired face image conform to the face target position prompt box; if so, performing identity recognition using the iris image acquired by the near-infrared camera; if not, prompting the user to adjust position and/or distance, including:
detecting the face image collected by the visible light camera to obtain a first current position of human eyes in the face image;
calculating a first difference between the first current position and a first target position in the face image in the vertical direction, and when the absolute value of the first difference is greater than a third preset threshold, calculating to obtain a first angle difference according to the first difference;
generating a first control instruction according to the first angle difference so as to enable the motor to rotate by a corresponding angle according to the first control instruction;
acquiring an iris image of the user by using the near-infrared camera, detecting the iris image, and obtaining a second current position of the human eye in the iris image;
calculating a second difference value between the second current position and the second target position in the iris image in the vertical direction, and calculating to obtain a second angle difference according to the second difference value when the absolute value of the second difference value is greater than a fourth preset threshold value;
generating a second control instruction according to the second angle difference so as to enable the motor to rotate by a corresponding angle according to the second control instruction;
and acquiring the iris image of the user again using the near-infrared camera, and performing identity recognition using the iris image.
7. The method according to claim 6, wherein the calculating a first difference between the first current position in the face image and the first target position in the vertical direction, and when an absolute value of the first difference is greater than a third preset threshold, calculating a first angle difference according to the first difference comprises:
multiplying the first angle difference by a coefficient k for correction, wherein the coefficient k is equal to the ratio of the size of the face target position prompt box to the size of the face currently in the face image.
8. A human-computer interaction device based on iris recognition, applied to an electronic device, wherein a visible light camera for acquiring a face image, a near-infrared camera for acquiring an iris image, and a screen are arranged on the electronic device, the device comprising:
the starting and displaying module is used for starting the visible light camera and the near infrared camera to respectively acquire a face image and an iris image of a user, and displaying a face target position prompt box and the face image acquired by the visible light camera on a screen;
the judging module is configured to judge whether the position and size of the face in the acquired face image conform to the face target position prompt box; if so, perform identity recognition using the iris image acquired by the near-infrared camera; if not, prompt the user to adjust position and/or distance.
9. An electronic device, characterized in that the electronic device comprises: the device comprises a shell, a processor, a memory, a circuit board and a power circuit, wherein the circuit board is arranged in a space enclosed by the shell, and the processor and the memory are arranged on the circuit board; a power supply circuit for supplying power to each circuit or device of the electronic apparatus; the memory is used for storing executable program codes; the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for performing the method of any of the above claims 1-7.
10. A computer-readable storage medium, storing one or more programs, the one or more programs being executable by one or more processors to perform the method of any of claims 1-7.
CN201911000166.2A 2019-10-21 2019-10-21 Man-machine interaction method and device based on iris recognition and electronic equipment Active CN112764523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911000166.2A CN112764523B (en) 2019-10-21 2019-10-21 Man-machine interaction method and device based on iris recognition and electronic equipment


Publications (2)

Publication Number Publication Date
CN112764523A true CN112764523A (en) 2021-05-07
CN112764523B CN112764523B (en) 2022-11-18

Family

ID=75691567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911000166.2A Active CN112764523B (en) 2019-10-21 2019-10-21 Man-machine interaction method and device based on iris recognition and electronic equipment

Country Status (1)

Country Link
CN (1) CN112764523B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104967774A (en) * 2015-06-05 2015-10-07 广东欧珀移动通信有限公司 Dual-camera shooting control method and terminal
CN105095893A (en) * 2014-05-16 2015-11-25 北京天诚盛业科技有限公司 Image acquisition device and method
CN109756663A (en) * 2017-08-25 2019-05-14 北京悉见科技有限公司 A kind of control method of AR equipment, device and AR equipment
CN109977828A (en) * 2019-03-18 2019-07-05 北京中科虹霸科技有限公司 A kind of method and apparatus that camera holder automatic pitching is adjusted
CN110210333A (en) * 2019-05-16 2019-09-06 佛山科学技术学院 A kind of focusing iris image acquiring method and device automatically


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114564100A (en) * 2021-11-05 2022-05-31 南京大学 Free stereoscopic display hand-eye interaction method based on infrared guidance
CN114564100B (en) * 2021-11-05 2023-12-12 南京大学 Infrared guiding-based hand-eye interaction method for auto-stereoscopic display
CN115567664A (en) * 2022-10-13 2023-01-03 长沙观谱红外科技有限公司 Infrared imaging robot

Also Published As

Publication number Publication date
CN112764523B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
WO2015037177A1 (en) Information processing apparatus method and program combining voice recognition with gaze detection
US10248846B2 (en) Information processing device
CN106406509B (en) Head-mounted eye-control virtual reality equipment
CN111580652B (en) Video playing control method and device, augmented reality equipment and storage medium
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
EP3035656A1 (en) Method and apparatus for controlling an electronic device
US11693475B2 (en) User recognition and gaze tracking in a video system
US10120441B2 (en) Controlling display content based on a line of sight of a user
US11226686B2 (en) Interactive user gesture inputs
EP3584740B1 (en) Method for detecting biological feature data, biological feature recognition apparatus and electronic terminal
US11163995B2 (en) User recognition and gaze tracking in a video system
KR20160135242A (en) Remote device control via gaze detection
KR20160083903A (en) Correlated display of biometric identity, feedback and user interaction state
US9412190B2 (en) Image display system, image display apparatus, image display method, and non-transitory storage medium encoded with computer readable program
US20140043229A1 (en) Input device, input method, and computer program
CN110572716B (en) Multimedia data playing method, device and storage medium
CN108848313B (en) Multi-person photographing method, terminal and storage medium
US11151398B2 (en) Anti-counterfeiting processing method, electronic device, and non-transitory computer-readable storage medium
CN112764523B (en) Man-machine interaction method and device based on iris recognition and electronic equipment
CN110728724A (en) Image display method, device, terminal and storage medium
KR20170093440A (en) Apparatus and method for making device track subject with rotating device
US10389947B2 (en) Omnidirectional camera display image changing system, omnidirectional camera display image changing method, and program
US20150229908A1 (en) Presentation control device, method of controlling presentation, and program
US20150009314A1 (en) Electronic device and eye region detection method in electronic device
US20190028690A1 (en) Detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant