WO2019128714A1 - Iris recognition method and VR device - Google Patents

Iris recognition method and VR device

Info

Publication number
WO2019128714A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
iris
frames
camera unit
unit
Prior art date
Application number
PCT/CN2018/120579
Other languages
English (en)
French (fr)
Inventor
刘木
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2019128714A1 publication Critical patent/WO2019128714A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Definitions

  • the present application relates to the field of virtual reality (VR) technology, and in particular, to an iris recognition method and a VR device.
  • VR virtual reality
  • VR is an important technical direction in simulation technology, and it has attracted more and more attention.
  • the identification and authentication of user identities is an important part of improving the user experience.
  • A commonly used biometric identification method is iris recognition, by which the identity of the user can be recognized and authenticated.
  • Iris recognition relies primarily on the identification of images of the eye that contain a clear iris texture.
  • In a VR device, the image of the human eye is mainly captured by the camera unit. Because the lens of the VR device is a Fresnel lens, the image of the human eye captured by the camera unit contains a Fresnel pattern.
  • Because of its strong texture characteristics, the Fresnel pattern densely covers the iris texture area during imaging, affecting a large portion of the iris, and it also blocks the effective data of the iris recognition area, thus seriously degrading the accuracy and effect of iris recognition.
  • the embodiment of the present application provides an iris recognition method and a VR device, which are used to solve the problem of Fresnel interference in the iris recognition process, thereby improving the security and user experience of the user using the VR device.
  • A first aspect of the embodiments of the present application provides an iris recognition method, where the method is applied to a virtual reality VR device, the VR device includes a Fresnel lens and an imaging unit, the imaging unit acquires images through the Fresnel lens, and the relative position of the Fresnel lens and the imaging unit is fixed during the movement of the imaging unit, and the method includes: controlling the imaging unit to move, and controlling the imaging unit to acquire N frames of a first image during the movement, where the first image is an iris image containing a Fresnel pattern and N is an integer greater than 1; aligning and fusing the N frames of the first image to obtain a second image with the Fresnel pattern removed; and performing iris recognition based on the second image.
  • the fusion may be performed such that the content of the Fresnel region in any of the first images of the N frames is replaced by the content or partial content of the iris image in the other first images.
  • The Fresnel pattern attached to the iris is removed by using a multi-frame fusion technique to obtain a second image with the Fresnel pattern removed, and finally the iris feature is extracted based on the second image to complete the final iris recognition.
  • controlling the camera unit to acquire the N frames of the first image during the moving process comprises:
  • The camera unit is controlled by the motor in the VR device to move N times, and the camera unit is controlled to acquire an image each time a movement is completed, thereby obtaining the N frames of the first image.
  • The method further includes: acquiring a pre-stored third image containing the Fresnel pattern collected by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two rings of the same center;
  • the moving distance of the imaging unit corresponding to one-Nth (1/N) of the interval between the rings is used as the distance the imaging unit moves each time.
  • the pre-stored distance may be acquired as the distance moved by the imaging unit once.
  • the merging the first image of the N frames to obtain a second image with the Fresnel pattern removed includes:
  • the labeled N frames of the first image are aligned, and the aligned N frames of the first image are ORed to obtain the second image.
  • the determining a center of each iris in the first image of the N frames includes:
  • the center of the iris is positioned based on the position of the iris circle.
  • a virtual reality VR device in a second aspect of the embodiments of the present application, includes a Fresnel lens, an imaging unit, a control unit, and a processing unit;
  • the camera unit is configured to collect an image through the Fresnel lens
  • the relative position of the Fresnel lens and the camera unit is fixed during the movement of the camera unit;
  • the control unit is configured to control movement of the camera unit, and control the camera unit to acquire an N-frame first image during the moving process, where the first image is an iris image including a Fresnel pattern, and the value of N is More than 1, N is an integer;
  • the processing unit is configured to align and fuse the first image of the N frames to obtain a second image with the Fresnel pattern removed, and perform iris recognition based on the second image.
  • The VR device includes a motor through which the control unit controls the movement of the camera unit. For controlling the camera unit to acquire the N frames of the first image during the movement, the control unit is configured to control the camera unit to move N times through the motor in the VR device, and to control the camera unit to acquire an image each time a movement is completed, thereby obtaining the N frames of the first image.
  • the VR device further includes:
  • The acquiring unit is configured to acquire a pre-stored third image containing the Fresnel pattern collected by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two rings of the same center;
  • The control unit is configured to use the moving distance of the imaging unit corresponding to one-Nth (1/N) of the interval between the rings as the distance moved by the imaging unit for each movement.
  • The processing unit, in fusing the N frames of the first image to obtain the second image with the Fresnel pattern removed, is configured to: determine the respective iris centers in the N frames of the first image, and perform image registration on the N frames of the first image by using the respective iris centers as the registration offset reference; based on the position of the Fresnel pattern in the N frames of the first image, mark the pixel values in the region where the Fresnel pattern lies as 0; and, based on the registration result, align the labeled N frames of the first image and OR the aligned N frames to obtain the second image.
  • The processing unit, in determining the respective iris centers in the N frames of the first image, is configured to: determine the iris image located in any of the N frames of the first image; determine the area with the smallest gray value in the iris image as the pupil area; determine, based on the pupil area, the edge of the iris circle containing the pupil area through edge detection and region segmentation algorithms, so as to determine the position of the iris circle; and locate the center of the iris based on the position of the iris circle.
  • A third aspect of the embodiments of the present application provides a virtual reality VR device, where the VR device includes a Fresnel lens, an imaging unit, a processor, and a memory, and the imaging unit is configured to collect images through the Fresnel lens;
  • the relative position of the Fresnel lens and the camera unit is fixed during the movement of the camera unit;
  • the memory is configured to store an operating program, code or instruction applied to the iris recognition on the VR device;
  • the processor is configured to invoke an iris recognition operation program, a code or an instruction stored in the memory, and execute the iris recognition method provided by the first aspect of the embodiment of the present application.
  • a fourth aspect of the embodiments of the present application provides a computer readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the iris recognition method disclosed in the first aspect of the embodiments of the present application.
  • a fifth aspect of an embodiment of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the iris recognition method of the above aspects.
  • The VR device to which the iris recognition method disclosed in the embodiments of the present application applies includes a Fresnel lens and an imaging unit.
  • the imaging unit collects an image through the Fresnel lens, and the relative position of the Fresnel lens and the imaging unit is fixed during the movement of the imaging unit.
  • The method controls the camera unit to move and controls the camera unit to acquire N frames of a first image during the movement, where the first image is an iris image containing a Fresnel pattern; the N frames of the first image are then aligned and fused to obtain a second image with the Fresnel pattern removed, and finally iris recognition is completed based on the second image.
  • the Fresnel pattern is removed and iris recognition is performed, thereby solving the Fresnel pattern interference problem in the iris recognition process, and improving the safety and user experience of the user using the VR device.
  • FIG. 1 is a schematic structural diagram of a VR device according to an embodiment of the present application.
  • FIG. 2 is an iris diagram of a Fresnel pattern disclosed in an embodiment of the present application.
  • FIG. 3 is an iris diagram after removing a Fresnel pattern according to an embodiment of the present application.
  • FIG. 4 is a schematic flow chart of an iris recognition method according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a method for setting motor-related parameters according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of removing a Fresnel pattern by multi-frame fusion according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a VR device according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another VR device according to an embodiment of the present application.
  • An iris recognition method and a VR device disclosed in the embodiments of the present application are used to solve the problem of Fresnel interference in the iris recognition process, thereby improving the security and user experience of the user using the VR device.
  • The words “first”, “second”, and the like are used to distinguish identical or similar items whose functions and effects are substantially the same. Those skilled in the art can understand that the words “first”, “second”, and the like do not limit the number or execution order.
  • the VR device 100 includes a motor 101, an image pickup unit, and a Fresnel lens.
  • the position of the VR device 100 corresponding to the left eye of the human eye corresponds to a set of the imaging unit 102 and the Fresnel lens 103.
  • the imaging unit 102 collects an image through the Fresnel lens 103.
  • the relative position of the imaging unit 102 and the Fresnel lens 103 is fixed during the movement of the imaging unit 102.
  • a set of imaging unit 104 and Fresnel lens 105 are correspondingly provided.
  • the imaging unit 104 collects an image through the Fresnel lens 105.
  • the relative position of the imaging unit 104 and the Fresnel lens 105 is fixed during the movement of the imaging unit 104.
  • the relative positions of the Fresnel lens and the image pickup unit on the same side are fixed during the movement of the image pickup unit.
  • Here, “fixed” means that no relative displacement occurs between the imaging unit 102 and the Fresnel lens 103 during the movement of the imaging unit 102.
  • the relative speed or relative angular velocity of the Fresnel lens 103 with respect to the imaging unit 102 during the movement of the imaging unit 102 is zero.
  • the motor 101 adjusts the positions of the imaging unit 102 and the imaging unit 104 through a transmission mechanism.
  • The transmission mechanism can be a component such as a gear. Specifically, through the transmission of the motor 101, the imaging unit 102 and the imaging unit 104 are moved toward each other or away from each other.
  • N frames of the first image are collected, where the first image is an iris image containing a Fresnel pattern and N is an integer greater than 1.
  • the Fresnel pattern contains at least two rings of the same center.
  • Fig. 2 shows an iris image of a Fresnel pattern comprising a plurality of rings of the same center.
  • The VR device fuses the N frames of the first image acquired by each camera unit to obtain a second image with the Fresnel pattern removed, and then extracts iris features based on the second image to perform iris recognition. Fig. 3 shows the iris image after the Fresnel pattern is removed.
  • The camera units corresponding to the left eye and the right eye are moved toward or away from each other through the transmission mechanism of the motor, thereby changing the relative position of the VR device and the eye.
  • The VR device first captures images of the eye at different relative positions by controlling the camera unit during the movement. Because the relative position of the Fresnel lens and the camera unit on the same side is fixed during the movement, the position of the Fresnel pattern in the captured images is unchanged, while the position of the iris in the images changes.
  • The Fresnel pattern attached to the iris is removed by using the multi-frame fusion technique disclosed in the embodiment of the present application to obtain the second image with the Fresnel pattern removed, and finally iris features are extracted based on the second image to complete the final iris recognition, thereby solving the Fresnel interference problem in the iris recognition process and improving the security and user experience of the user using the VR device.
  • FIG. 4 it is a schematic flowchart of an iris recognition method disclosed in an embodiment of the present application, and the iris recognition method is applied to the VR device disclosed in FIG. 1 .
  • the iris recognition method includes:
  • the VR device controls the camera unit to move, and controls the camera unit to acquire the N frame first image during the moving process.
  • The first image is an iris image containing a Fresnel pattern.
  • the application for starting the iris recognition is triggered.
  • The motor in the VR device is activated by a preset control program, and the camera units on the VR device are moved toward or away from each other through the transmission mechanism.
  • the control program includes a moving speed of the preset camera unit, a moving time of the single camera unit, and a preset number of movements of the camera unit.
  • the preset number of movements of the camera unit is N, N is an integer, and the value is greater than 1.
  • the distance that the camera unit moves each time is related to the moving speed of the preset camera unit and the moving time of the single camera unit. Normally, the camera unit moves the same distance each time.
  • the embodiment of the present application is not limited to this, and the distance that the camera unit moves each time may also be different according to needs.
  • the direction in which the camera unit moves each time is related to the control of the motor, and the direction in which the camera unit moves each time can be changed as needed.
  • the VR device sets the moving distance, the moving speed, and the single moving time of the camera unit by collecting the Fresnel pattern in the reference image or the preset image. As shown in Figure 5, it includes:
  • the VR device acquires a pre-stored third image acquired by the camera unit.
  • the third image is a reference image containing a Fresnel pattern or a preset image, and the background of the third image is a solid color background.
  • the Fresnel pattern includes at least two rings of the same center. After the third image is acquired once, it may be pre-stored in the imaging unit as a reference image or a preset image.
  • the white paper can be photographed by the camera unit to obtain an image containing only the white background and the Fresnel pattern.
  • a blank sheet of paper is usually placed in front of the VR device (VR Fresnel lens).
  • the camera unit collects the infrared image of the white paper through the VR Fresnel lens, and the infrared image has only the white background and the Fresnel pattern in the Fresnel lens region of the image.
  • The Fresnel pattern is composed of a plurality of rings of the same center, and the edge extraction detection method can determine how many rings the Fresnel pattern comprises and the interval between adjacent rings. Note that a pattern comprising a given number of rings yields a corresponding number of intervals, each determined by the edge extraction detection method.
  • one of the intervals may be selected as the interval between the rings included in the Fresnel pattern, that is, the interval for calculating the moving distance of the image capturing unit.
  • an average of a plurality of intervals may be taken as an interval for calculating a moving distance of the image pickup unit.
  • the maximum interval obtained is taken as the interval between the rings included in the Fresnel pattern, that is, the interval for calculating the moving distance of the image capturing unit.
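  • The interval-selection choices above (a single interval, the average, or the maximum) can be sketched as follows; this is an illustrative Python fragment, and the function name and the `radii` input (ring radii in pixels recovered by edge extraction) are assumptions, not part of the patent.

```python
def ring_interval(radii, mode="max"):
    """Interval between adjacent Fresnel rings, from their radii (pixels).

    The description allows taking any one interval, the average of the
    intervals, or the maximum; `mode` selects between the last two here.
    """
    radii = sorted(radii)
    gaps = [b - a for a, b in zip(radii, radii[1:])]
    if not gaps:
        raise ValueError("need at least two rings")
    return max(gaps) if mode == "max" else sum(gaps) / len(gaps)
```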
  • The edge here refers to the edge of each of the rings constituting the Fresnel pattern; the edge of each ring passes through a portion where the brightness of the local image area changes significantly.
  • In the grayscale profile of the local region, this appears as a sharp change from one gray value to a markedly different gray value within a small buffer region.
  • the interval may also be preset.
  • the preset number N of movements of the camera unit can be understood as the number of frames of the first image that needs to be merged.
  • the camera unit collects the first image once every time, and obtains a frame of the first image.
  • the imaging unit moves N times, the first image of N frames can be acquired.
  • The preset number N of motor movements preferably ranges from 3 to 5.
  • The embodiment of the present application does not limit the preset number of motor movements or the number of subsequent acquisitions of the first image; provided that the real-time performance requirements of the system are met, larger values may also be used.
  • The single moving distance of the camera unit can also be determined by the VR device based on the interval of the rings in the Fresnel pattern and the number of frames N: the pixel difference between adjacent first images is determined, and the obtained pixel difference is converted into the single moving distance of the imaging unit.
  • Specifically, the interval of the rings in the Fresnel pattern may be divided by N, and the obtained value used as the pixel difference between adjacent first images among the N frames.
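  • A minimal sketch of this conversion (function and parameter names are assumptions): dividing the ring interval by N gives the pixel offset between adjacent first images, which an assumed pixel size then converts to a physical single-move distance.

```python
def single_move_distance_mm(ring_interval_px, n_frames, pixel_size_mm):
    # Pixel difference between adjacent first images: interval / N,
    # so the N fused frames together span one full ring interval.
    shift_px = ring_interval_px / n_frames
    # Convert the pixel offset into the physical single-move distance.
    return shift_px * pixel_size_mm
```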
  • the VR device determines a moving time of the single imaging unit according to the moving distance of the single imaging unit and the moving speed of the preset imaging unit.
  • the preset camera unit moving speed can be represented by an image coordinate moving speed.
  • The image coordinate moving speed is determined according to the pinhole imaging principle and the relationship between the actual moving speed of the motor and the pixel size, as in formula (1):

    n = s / t    (1)

  • where n is the image coordinate moving speed in pixels/ms, s is the actual moving speed of the motor in mm/ms, and t is the single pixel size of the camera unit in mm/pixel. Since the preset moving speed of the imaging unit is characterized by the image coordinate moving speed, n also represents the preset moving speed of the imaging unit.
  • The single moving time of the imaging unit is obtained by formula (2) from the single moving distance determined in S602 and the preset moving speed of the imaging unit determined in S603:

    T = M / n    (2)

  • where T is the single moving time of the imaging unit, M is the single moving distance of the imaging unit in pixels, and n is the preset moving speed of the imaging unit.
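  • Formulas (1) and (2) can be combined in a short sketch (illustrative Python; function and parameter names are assumptions):

```python
def image_coord_speed(s_mm_per_ms, pixel_size_mm):
    # Formula (1): n = s / t, image coordinate speed in pixels/ms.
    return s_mm_per_ms / pixel_size_mm

def single_move_time_ms(m_px, n_px_per_ms):
    # Formula (2): T = M / n, single-move duration in ms.
    return m_px / n_px_per_ms
```

  • For example, with an assumed motor speed of 0.01 mm/ms and a 0.005 mm pixel, n is 2 pixels/ms, so a 3-pixel move takes 1.5 ms.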
  • Executing S401, controlling the camera unit to acquire the N frames of the first image during the movement is specifically: controlling, through the motor in the VR device, the camera unit to move N times, and controlling the camera unit to collect an image each time a movement is completed, thereby obtaining the N frames of the first image.
  • The motor in the VR device controls the camera unit to move directionally N times according to the preset number N of movements, and the camera unit collects the first image once each time a directional movement is completed, thereby obtaining one frame of the first image per movement.
  • Each first image contains both the Fresnel pattern and the iris image. It should be noted that the first images for the two eyes are collected in the same manner.
  • When the VR device controls the camera unit to complete the N movements, the camera unit completes the acquisition of the N frames of the first image.
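  • The move-then-capture loop of S401 might be sketched as follows; the motor and camera interfaces are hypothetical stand-ins, since the patent does not specify driver APIs.

```python
class StubMotor:
    """Hypothetical motor driver: counts directional moves."""
    def __init__(self):
        self.moves = 0

    def step(self, distance_mm):
        self.moves += 1  # one directional movement of the camera unit


class StubCamera:
    """Hypothetical camera driver: returns a placeholder frame per capture."""
    def __init__(self):
        self.shots = 0

    def capture(self):
        self.shots += 1
        return f"first-image-{self.shots}"


def capture_first_images(motor, camera, n, step_distance_mm):
    """Move the camera unit N times, capturing one first image per move."""
    frames = []
    for _ in range(n):
        motor.step(step_distance_mm)      # complete one movement
        frames.append(camera.capture())   # then acquire one frame
    return frames
```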
  • the VR device fuses the first image of the N frames to obtain a second image with the Fresnel pattern removed.
  • the VR device aligns and fuses the acquired N frames of the first image.
  • To determine the reference position for multi-frame alignment and fusion, the position of the iris in the image is determined first, then the center position of the iris, and this center is used as the reference to complete the registration of the multi-frame images.
  • Based on the registered multi-frame images, alignment and fusion are performed on the regions marked as 0 in the images; that is, the multi-frame alignment and fusion technique is used to remove the Fresnel pattern from the image.
  • the fusion may cause the content of the Fresnel region in any of the first images of the N frames to be replaced by the content or partial content of the iris image in the other first images.
  • The VR device determines the respective iris centers in the N frames of the first image, and performs image registration on the N frames by using the respective iris centers as the registration offset reference.
  • the center of the iris can be predetermined or determined during the identification process.
  • the process of determining the center of the iris by the VR device is:
  • When the user wears the VR device normally, the VR device provides illumination through an infrared LED (IR LED) lamp, and the camera unit collects an infrared image of the human eye through the VR Fresnel lens.
  • The pupil position is determined by searching for the minimum gray value in the infrared image of the human eye, and the pupil area in the infrared image is then determined from the pupil position.
  • Twice the minimum gray value is set as the threshold for binarizing the image.
  • Binarization sets the gray value of each pixel in the image to 0 or 255, that is, displays the entire image with a distinct black-and-white visual effect. According to how the threshold is selected, binarization algorithms can be divided into fixed-threshold and adaptive-threshold algorithms, which are not limited in this embodiment of the present application.
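  • A fixed-threshold binarization at twice the minimum gray value, as described above, might look like this (illustrative NumPy sketch; the function name is an assumption):

```python
import numpy as np

def binarize_pupil(gray):
    """Binarize with threshold = 2 * minimum gray value.

    Pixels at or below the threshold (the dark pupil region) become 0;
    all others become 255, giving the black-and-white image described.
    """
    thresh = 2 * int(gray.min())
    return np.where(gray <= thresh, 0, 255).astype(np.uint8)
```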
  • an edge segmentation algorithm is employed to determine the position of the edge of the iris circle on the first image.
  • the edge is the most obvious change of gray scale on the image, and as the boundary of different regions in the image, there are features of gray-scale mutation and directionality.
  • the pupil is a small round hole in the center of the iris in the human eye. Based on the features of the edges and the determined pupil regions, an edge segmentation algorithm can be used to determine the position of the edge of the iris circle containing the pupil region on the first image.
  • the position of the iris circle is located and the center of the iris is obtained based on the iris circle.
  • the first image of each frame is registered according to the determined iris center as an offset reference. Finally, the registered image is obtained.
  • Determining the respective iris centers in the N frames of the first image includes: first, determining the iris image located in any of the N frames of the first image; next, determining the area with the smallest gray value in the iris image as the pupil area; then, based on the pupil area, determining the edge of the iris circle containing the pupil area through edge detection and region segmentation algorithms, so as to determine the position of the iris circle; and finally, locating the iris center based on the position of the iris circle.
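  • As a deliberately simplified stand-in for the edge-detection and region-segmentation pipeline above, the centroid of the darkest (pupil) pixels can approximate the iris center; a full implementation would refine this with a fitted iris circle. Names and the reuse of the thresholding rule are assumptions.

```python
import numpy as np

def locate_iris_center(gray):
    """Approximate the iris center as the centroid of the pupil pixels.

    The pupil is the darkest region, so threshold at twice the minimum
    gray value and average the coordinates of the pixels at or below it.
    The patent's edge detection plus region segmentation would instead
    fit the iris circle and use its center.
    """
    thresh = 2 * int(gray.min())
    ys, xs = np.nonzero(gray <= thresh)
    return float(xs.mean()), float(ys.mean())
```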
  • Based on the position of the Fresnel pattern in the N frames of the first image, the VR device marks the pixel values in the region where the Fresnel pattern lies as 0.
  • The VR device marks the area containing the Fresnel pattern in each collected frame of the first image, uses it as the marked area, and sets the marked area to 0; that is, the pixel values in the region containing the Fresnel pattern are marked as 0.
  • Marking the pixel values in the region containing the Fresnel pattern in the first image may be done manually, for example while the VR device is offline, although the marking is not limited to manual marking.
  • Only one of the N frames of the first image needs to be manually calibrated; the other frames then reference it and are tagged automatically.
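  • Because the Fresnel pattern occupies the same image coordinates in every frame, a single labelled mask can be reused across all N first images; a minimal NumPy sketch under that assumption (names are illustrative):

```python
import numpy as np

def apply_fresnel_mask(frames, mask):
    """Mark the Fresnel-pattern region as 0 in every first image.

    `mask` is True where the pattern lies, labelled once (e.g. manually,
    offline) on a single frame; since the lens and camera unit move as a
    unit, the same mask is valid for all frames.
    """
    return [np.where(mask, 0, f) for f in frames]
```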
  • the VR device aligns the first image of the marked N frames based on the registration result, and performs an OR operation on the aligned N frames of the first image to obtain a second image.
  • Because the image space is invariant, the position of the Fresnel pattern in image coordinates does not change between frames.
  • The VR device aligns the labeled N frames of the first image based on the registration result; it then calculates the frequency mean and variance of adjacent images among the aligned N frames, and uses the obtained mean and variance to set weighting coefficients between adjacent images. The VR device then sequentially performs a multi-frame image information extraction operation on the pixels marked 0 in the N frames of the first image, using the set weighting coefficients, to obtain the second image with the Fresnel pattern removed.
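  • Ignoring the weighting coefficients for brevity, the "OR"-style fusion of the aligned, zero-marked frames can be sketched as taking the first non-zero value at each pixel, so iris content from other frames fills each zeroed Fresnel region (illustrative NumPy; names assumed):

```python
import numpy as np

def fuse_aligned(frames):
    """Fuse aligned first images whose Fresnel regions are marked 0.

    Each output pixel takes the first non-zero value found across the
    frames; the weighted multi-frame extraction described above would
    replace this simple fill with a coefficient-weighted combination.
    """
    out = np.zeros_like(frames[0])
    for f in frames:
        out = np.where(out == 0, f, out)
    return out
```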
  • the VR device performs iris recognition based on the second image.
  • Iris recognition is performed on the second image, from which the Fresnel pattern has been removed, ensuring the integrity of the iris information.
  • The embodiment of the present application controls the camera unit to move and collects, during the movement, N frames of the first image containing overlapping Fresnel and iris content; the N frames of the first image are then aligned and fused to obtain a second image with the Fresnel pattern removed, and finally iris recognition is completed based on the second image.
  • the Fresnel pattern is removed and iris recognition is performed, thereby solving the Fresnel pattern interference problem in the iris recognition process, and improving the safety and user experience of the user using the VR device.
  • the embodiment of the present application further discloses a VR device that performs the iris recognition method.
  • the VR device shown in FIG. 1 can be used.
  • FIG. 7 is a schematic structural diagram of a VR device disclosed in an embodiment of the present application.
  • the VR device 700 includes a Fresnel lens 701, an imaging unit 702, a control unit 703, and a processing unit 704.
  • the imaging unit 702 is configured to acquire an image through the Fresnel lens 701.
  • the relative position of the Fresnel lens 701 and the imaging unit 702 is fixed during the movement of the imaging unit 702.
  • the control unit 703 is configured to control the movement of the imaging unit 702, and control the imaging unit 702 to acquire an N-frame first image during the movement, the first image being an iris image including a Fresnel pattern, and the value of N is greater than 1, N is an integer.
  • the VR device 700 includes a motor for the control unit 703 to control the movement of the imaging unit 702.
  • The specific process by which the control unit 703 controls the imaging unit 702 to acquire the N frames of the first image during the movement is:
  • The camera unit 702 is controlled to move N times by the motor in the VR device 700, and the camera unit 702 is controlled to acquire an image each time a movement is completed, thereby obtaining the N frames of the first image.
  • the processing unit 704 is configured to fuse the first image of the N frames to obtain a second image with the Fresnel pattern removed, and perform iris recognition based on the second image.
  • The specific process by which the processing unit 704 fuses the N frames of the first image to obtain the second image with the Fresnel pattern removed is: determining the respective iris centers in the N frames of the first image, and performing image registration on the N frames of the first image by using the respective iris centers as the registration offset reference; based on the position of the Fresnel pattern in the N frames of the first image, marking the pixel values in the region where the Fresnel pattern lies as 0; and, based on the registration result, aligning the labeled N frames of the first image and ORing the aligned N frames to obtain the second image.
  • the specific process by which the processing unit 704 determines the iris center in each of the N first images is: determining the iris image located in any one of the N first images; determining that the region with the smallest gray value in the iris image is the pupil region; based on the pupil region, determining the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and locating the iris center based on the position of the iris circle.
  • the VR device further includes: an obtaining unit, configured to obtain a prestored third image that contains a Fresnel pattern and was captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings.
  • the control unit 703 is configured to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit 702 moves each time it completes a movement.
  • the control unit 703 may also implement the function of the obtaining unit. That is, the control unit 703 is further configured to obtain the third image containing the Fresnel pattern captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings, and to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit 702 moves in each movement.
  • in combination with the iris recognition method disclosed in the embodiments of this application, the VR device disclosed herein may also be implemented directly in hardware, in a memory executed by a processor, or in a combination of the two.
  • the VR device 800 includes a processor 801, a memory 802, a camera 803, a Fresnel lens 804, and a motor 805.
  • the VR device 800 further includes a communication interface 806.
  • the processor 801 is coupled to the memory 802 via a bus.
  • the processor 801 is coupled to the communication interface 806 via a bus.
  • the camera 803 captures an image through a Fresnel lens 804.
  • the motor 805, under the control of the processor 801, moves the camera 803 through a corresponding transmission mechanism.
  • the relative position of the Fresnel lens 804 and the camera 803 remains fixed while the camera 803 moves.
  • the processor 801 may be a central processing unit (CPU), a network processor (NP), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD).
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or generic array logic (GAL).
  • the memory 802 may specifically be a content-addressable memory (CAM) or a random-access memory (RAM).
  • the CAM may be a ternary content-addressable memory (TCAM).
  • the communication interface 806 may be a wired interface, such as a fiber distributed data interface (FDDI) or an Ethernet interface.
  • the memory 802 may also be integrated in the processor 801. If the memory 802 and the processor 801 are mutually independent devices, they are coupled to each other; for example, they may communicate over a bus. The communication interface 806 and the processor 801 may communicate over a bus, or the communication interface 806 may be directly coupled to the processor 801.
  • the memory 802 is configured to store the programs, code, or instructions of the iris recognition method disclosed in the embodiments of this application.
  • optionally, the memory 802 includes an operating system and application programs for the programs, code, or instructions of the iris recognition method disclosed in this application.
  • the VR device in the foregoing embodiments of this application may perform the corresponding iris recognition method by invoking and executing the programs, code, or instructions stored in the memory 802. For the specific process, refer to the corresponding parts of the foregoing embodiments of this application; details are not repeated here.
  • the receiving/sending operations involved in the iris recognition method embodiments shown in FIG. 1 to FIG. 5 may refer to receiving/sending processing implemented by a processor, or to sending/receiving processes completed by a receiver and a transmitter; the receiver and transmitter may exist independently or be integrated into a transceiver.
  • in one possible implementation, the VR device 800 may further include a transceiver.
  • FIG. 8 shows only a simplified design of the VR device.
  • in practical applications, the VR device may include any number of interfaces, processors, memories, and so on, and all VR devices or control devices that can implement the embodiments of this application fall within the protection scope of the embodiments of this application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
  • the computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state disk (SSD)).

Abstract

An iris recognition method, apparatus, and VR device. A camera unit is controlled to capture images of the eye from different relative positions while the camera unit moves. A multi-frame fusion technique is then used to remove the Fresnel pattern attached to the iris in the images, so that the final recognition can be completed. This solves the problem of Fresnel-pattern interference during iris recognition and improves both the security and the user experience of VR devices.

Description

Iris recognition method and VR device
This application claims priority to Chinese Patent Application No. 201711432690.8, filed with the Chinese Patent Office on December 26, 2017 and entitled "Iris recognition method and VR device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of virtual reality (VR) technologies, and in particular, to an iris recognition method and a VR device.
Background
With the rapid development of simulation technology, VR, as an important direction within it, is receiving increasing attention. In VR applications, identifying and authenticating the user is an important part of improving the user experience. The biometric method commonly used in VR applications today is iris recognition, through which user identification and authentication can be performed.
Iris recognition mainly depends on recognizing eye images that contain clear iris texture. Existing VR devices mainly use a camera unit to capture images of the human eye. Because the lens of a VR device is a Fresnel lens, the eye images captured by the camera unit carry a Fresnel pattern. The Fresnel pattern has strong texture characteristics and, during imaging, is densely and randomly superimposed on the iris texture region, so that large areas of the iris are easily affected. The Fresnel pattern also occludes valid data in the iris recognition region, severely degrading the accuracy and effectiveness of iris recognition.
Summary
In view of this, embodiments of this application provide an iris recognition method and a VR device, used to solve the problem of Fresnel-pattern interference during iris recognition, thereby improving the security and user experience of VR devices.
The embodiments of this application provide the following technical solutions:
A first aspect of the embodiments of this application provides an iris recognition method, applied to a virtual reality (VR) device. The VR device includes a Fresnel lens and a camera unit; the camera unit captures images through the Fresnel lens, and the relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves. The method includes:
controlling the camera unit to move, and controlling the camera unit to capture N frames of a first image during the movement, where each first image is an iris image containing a Fresnel pattern, and N is an integer greater than 1;
aligning and fusing the N frames of the first image to obtain a second image with the Fresnel pattern removed; and
performing iris recognition based on the second image.
The fusion may cause the content of the region where the Fresnel pattern is located in any one of the N first images to be replaced by content, or partial content, of the iris image in the other first images.
In the above solution, a multi-frame fusion technique removes the Fresnel pattern attached to the iris to obtain a second image free of the pattern, and iris features are then extracted from the second image to complete the final iris recognition. This solves the problem of Fresnel-pattern interference during iris recognition and improves both the security and the user experience of the VR device.
In a possible design, controlling the camera unit to capture N frames of the first image during the movement includes:
controlling, by a motor in the VR device, the camera unit to move N times, and controlling the camera unit to capture one image each time a movement is completed, thereby obtaining the N frames of the first image.
In a possible design, the method further includes: obtaining a prestored third image that contains a Fresnel pattern and was captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings; and
using the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit moves in each movement.
As an alternative, instead of obtaining a prestored third image, a prestored distance may be obtained and used as the distance the camera unit moves in each movement.
In a possible design, fusing the N frames of the first image to obtain the second image with the Fresnel pattern removed includes:
determining the iris center in each of the N first images, and performing image registration on the N first images using their respective iris centers as the registration offset reference;
marking as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images; and
aligning the marked N first images based on the registration result, and performing an OR operation on the aligned N first images to obtain the second image.
In a possible design, determining the iris center in each of the N first images includes:
determining the iris image located in any one of the N first images;
determining that the region with the smallest gray value in the iris image is the pupil region;
based on the pupil region, determining the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and
locating the iris center based on the position of the iris circle.
A second aspect of the embodiments of this application provides a virtual reality (VR) device, including a Fresnel lens, a camera unit, a control unit, and a processing unit.
The camera unit is configured to capture images through the Fresnel lens.
The relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves.
The control unit is configured to control the camera unit to move, and to control the camera unit to capture N frames of a first image during the movement, where each first image is an iris image containing a Fresnel pattern, and N is an integer greater than 1.
The processing unit is configured to align and fuse the N frames of the first image to obtain a second image with the Fresnel pattern removed, and to perform iris recognition based on the second image.
In a possible design, the VR device includes a motor, used by the control unit to move the camera unit. The control unit, when controlling the camera unit to capture the N frames of the first image during the movement, is configured to control the camera unit, through the motor in the VR device, to move N times, and to control the camera unit to capture one image each time a movement is completed, thereby obtaining the N frames of the first image.
In a possible design, the VR device further includes:
an obtaining unit, configured to obtain a prestored third image that contains a Fresnel pattern and was captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings; and
the control unit is configured to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit moves each time it completes a movement.
In a possible design, the processing unit, when fusing the N frames of the first image to obtain the second image with the Fresnel pattern removed, is configured to: determine the iris center in each of the N first images, and perform image registration on the N first images using their respective iris centers as the registration offset reference; mark as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images; and align the marked N first images based on the registration result, and perform an OR operation on the aligned N first images to obtain the second image.
In a possible design, the processing unit, when determining the iris center in each of the N first images, is configured to: determine the iris image located in any one of the N first images;
determine that the region with the smallest gray value in the iris image is the pupil region; based on the pupil region, determine the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and locate the iris center based on the position of the iris circle.
A third aspect of the embodiments of this application provides a virtual reality (VR) device, including a Fresnel lens, a camera unit, a processor, and a memory. The camera unit is configured to capture images through the Fresnel lens.
The relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves.
The memory is configured to store programs, code, or instructions for iris recognition on the VR device.
The processor is configured to invoke the programs, code, or instructions stored in the memory to perform the iris recognition method provided in the first aspect of the embodiments of this application.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium including instructions that, when run on a computer, cause the computer to perform the iris recognition method disclosed in the first aspect of the embodiments of this application.
A fifth aspect of the embodiments of this application provides a computer program product including instructions that, when run on a computer, cause the computer to perform the iris recognition method described in the above aspects.
In the iris recognition method and VR device disclosed in the embodiments of this application, the VR device includes a Fresnel lens and a camera unit; the camera unit captures images through the Fresnel lens, and the relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves. The method controls the camera unit to move and to capture N frames of a first image during the movement, where each first image is an iris image containing a Fresnel pattern; the N first images are then aligned and fused to obtain a second image with the Fresnel pattern removed; and the final iris recognition is completed based on the second image. By removing the Fresnel pattern before performing iris recognition, the method solves the problem of Fresnel-pattern interference during iris recognition and improves both the security and the user experience of the VR device.
Brief Description of Drawings
FIG. 1 is a schematic structural diagram of a VR device according to an embodiment of this application;
FIG. 2 is an iris image with a Fresnel pattern according to an embodiment of this application;
FIG. 3 is an iris image after the Fresnel pattern is removed according to an embodiment of this application;
FIG. 4 is a schematic flowchart of an iris recognition method according to an embodiment of this application;
FIG. 5 is a schematic flowchart of a method for setting motor-related parameters according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of multi-frame fusion for removing a Fresnel pattern according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a VR device according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another VR device according to an embodiment of the present invention.
Detailed Description
The embodiments of this application disclose an iris recognition method and a VR device, used to solve the problem of Fresnel-pattern interference during iris recognition, thereby improving the security and user experience of VR devices.
The following describes the technical solutions in the embodiments of this application with reference to the accompanying drawings. In the descriptions of this application, unless otherwise specified, "/" means "or"; for example, A/B may represent A or B. "And/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that only A exists, both A and B exist, or only B exists. In addition, unless otherwise specified, "a plurality of" means two or more. Moreover, to clearly describe the technical solutions of the embodiments, the words "first", "second", and so on are used to distinguish between identical or similar items whose functions and effects are basically the same. A person skilled in the art will understand that these words do not limit quantity or execution order, and do not imply that the items are necessarily different.
Furthermore, the terms "include" and "have" in the embodiments, claims, and accompanying drawings of this application are not exclusive. For example, a process, method, system, product, or device that includes a series of steps or modules is not limited to the listed steps or modules, and may further include steps or modules that are not listed.
The execution of the iris recognition method disclosed in the embodiments of this application is described in detail using the VR device shown in FIG. 1 as an example.
As shown in FIG. 1, the VR device 100 includes a motor 101, camera units, and Fresnel lenses.
A camera unit 102 and a Fresnel lens 103 are arranged at the position of the VR device 100 corresponding to the user's left eye. The camera unit 102 captures images through the Fresnel lens 103, and the relative position of the camera unit 102 and the Fresnel lens 103 remains fixed while the camera unit 102 moves. A camera unit 104 and a Fresnel lens 105 are arranged at the position corresponding to the user's right eye. The camera unit 104 captures images through the Fresnel lens 105, and the relative position of the camera unit 104 and the Fresnel lens 105 remains fixed while the camera unit 104 moves.
In other words, the relative position of a Fresnel lens and the camera unit on the same side remains fixed while the camera unit moves. "Fixed" here means that no relative displacement occurs between the camera unit 102 and the Fresnel lens 103 while the camera unit 102 moves; equivalently, the relative velocity or relative angular velocity of the Fresnel lens 103 with respect to the camera unit 102 is 0 during the movement.
The motor 101 adjusts the positions of the camera unit 102 and the camera unit 104 through a transmission mechanism, which may be a component such as a gear. Specifically, the camera units 102 and 104 move toward each other, or away from each other, as driven by the motor 101.
During their movement, the camera units 102 and 104 each capture N frames of a first image, where each first image is an iris image containing a Fresnel pattern and N is an integer greater than 1.
The Fresnel pattern includes at least two concentric rings. FIG. 2 shows an iris image with a Fresnel pattern composed of multiple concentric rings.
Based on the N first images captured by each camera unit and a multi-frame fusion technique, the VR device fuses the images to obtain a second image with the Fresnel pattern removed, and then extracts iris features from the second image to perform iris recognition. FIG. 3 shows the iris image after the Fresnel pattern is removed.
In this embodiment, the motor's transmission mechanism moves the camera units corresponding to the left eye and the right eye toward or away from each other, changing the relative position of the VR device and the eyes. The VR device first controls the camera units to capture images of the eyes from different relative positions during the movement. Because the positions of the Fresnel lens and the camera unit on the same side are fixed relative to each other during the movement, the position of the Fresnel pattern in the captured images is also fixed, while the position of the iris in the images changes. The multi-frame fusion technique disclosed in the embodiments of this application then removes the Fresnel pattern attached to the iris to obtain a second image free of the pattern, from which iris features are extracted to complete the final iris recognition. This solves the problem of Fresnel-pattern interference during iris recognition and improves both the security and the user experience of the VR device.
FIG. 4 is a schematic flowchart of an iris recognition method disclosed in an embodiment of this application; the method is applied to the VR device disclosed in FIG. 1 and includes:
S401: The VR device controls the camera unit to move, and controls the camera unit to capture N frames of a first image during the movement.
Each first image is an iris image containing a Fresnel pattern.
In a specific implementation, after the user puts on the VR device, an application that starts iris recognition is triggered. The motor in the VR device is started by a preset control program and, through the transmission mechanism, moves the camera units on the VR device toward or away from each other. The control program includes a preset camera-unit movement speed, a single-movement duration, and a preset number of camera-unit movements. The preset number of movements is N, where N is an integer greater than 1.
Note that during the N movements, the distance the camera unit moves each time depends on the preset movement speed and the single-movement duration. Usually the distance is the same for each movement; however, this embodiment is not limited thereto, and the distance may differ between movements as needed. The direction of each movement depends on the motor control and may change as needed.
In a specific implementation, the VR device sets the movement distance, movement speed, and single-movement duration of the camera unit based on the Fresnel pattern in a captured reference image or a preset image. As shown in FIG. 5, the procedure includes:
S501: The VR device obtains a prestored third image captured by the camera unit.
The third image is a reference or preset image containing a Fresnel pattern, and its background is a solid color. The Fresnel pattern includes at least two concentric rings. After one capture, the third image can be prestored in the camera unit as the reference or preset image.
Optionally, the camera unit can photograph a sheet of pure white paper to obtain an image containing only a white background and the Fresnel pattern.
In a specific implementation, a sheet of white paper is usually placed in front of the VR device (the VR Fresnel lens). Under illumination provided by the VR device's IR LED, the camera unit captures an infrared image of the white paper through the VR Fresnel lens; in the lens region, this infrared image contains only the white background and the Fresnel pattern.
S502: The VR device uses the camera-unit movement distance corresponding to one Nth of the determined interval between the rings of the Fresnel pattern as the distance the camera unit moves in each movement.
The Fresnel pattern consists of multiple concentric rings. An edge-extraction detection method can determine how many rings make up the pattern and the interval between adjacent rings. Note that however many rings the pattern has, the edge-extraction method determines that many intervals. Optionally, any one of these intervals may be chosen as the interval between the rings of the Fresnel pattern, i.e., the interval used to compute the camera-unit movement distance. Optionally, the average of multiple intervals may be used. Optionally, the largest interval obtained may be used.
"Edge" here refers to the edge of each ring of the Fresnel pattern, the part of a local image region where brightness changes significantly. The gray-level profile of such a region can be seen as a sharp transition, within a very small buffer zone, from one gray value to another gray value with a large difference. Edge detection mainly measures the gray-level changes in the image and locates them by computation, thereby obtaining closed boundary regions. In this embodiment, the boundary region of each ring is obtained, which ultimately determines how many concentric rings the Fresnel pattern has.
Optionally, because the relative position of the Fresnel lens and the camera unit is fixed while the camera unit moves, the interval may also be preset.
In a specific implementation, the preset number of camera-unit movements N can be understood as the number of first-image frames to be fused subsequently.
That is, during the subsequent iris recognition process, the camera unit captures one first image per movement, yielding one frame; after the camera unit moves N times, N first images are obtained. In this embodiment, to keep the VR device running in real time, a preferable range for the preset number of motor movements N is 3 to 5.
However, this embodiment does not limit the preset number of motor movements or the number of subsequently captured first-image frames to this range; larger values may be used as long as real-time operation of the system is satisfied.
Alternatively, the single-movement distance of the camera unit can also be determined by the VR device from the ring interval of the Fresnel pattern and the N first images: the pixel difference between adjacent first images is determined, and this pixel difference is converted into the single-movement distance of the camera unit.
In a specific implementation, the ring interval of the Fresnel pattern can be divided by N, and the result used as the pixel difference between adjacent first images among the N frames.
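The per-movement step described above can be sketched as simple arithmetic. The numbers below (ring interval, pixel size, N) are made-up illustrative values, not figures from this application:

```python
def step_pixels(ring_interval_px: float, n_frames: int) -> float:
    """Pixel offset between adjacent frames: one Nth of the ring interval."""
    return ring_interval_px / n_frames

def step_distance_mm(step_px: float, pixel_size_mm: float) -> float:
    """Convert the per-step pixel offset into physical camera-unit travel."""
    return step_px * pixel_size_mm

# Hypothetical numbers: rings 18 px apart, N = 3 frames, 0.002 mm pixels.
px = step_pixels(18.0, 3)          # 6.0 px per movement
mm = step_distance_mm(px, 0.002)   # ~0.012 mm per movement
print(px, mm)
```

A step of one Nth of the ring spacing ensures that, across the N frames, the Fresnel rings sweep through a full ring period of iris positions, so every iris pixel is unoccluded in at least one frame.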
S503: The VR device determines the single-movement duration of the camera unit from the single-movement distance and the preset camera-unit movement speed.
In a specific implementation, the preset camera-unit movement speed can be expressed as an image-coordinate movement speed. This speed is determined from the pinhole imaging principle and the relationship (1) between the actual motor movement speed and pixels:

n = s / t         (1)

where n is the image-coordinate movement speed, in pixel/ms; s is the actual motor movement speed, in mm/ms; and t is the single-pixel size of the camera unit.
In this embodiment, the image-coordinate movement speed represents the preset camera-unit movement speed, so n also denotes the preset camera-unit movement speed.
Thus, from the single-movement distance determined in S502 and the preset camera-unit movement speed determined above, the single-movement duration is obtained by formula (2):

T = M / n        (2)

where T is the single-movement duration, M is the camera-unit movement distance, and n is the preset camera-unit movement speed.
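Formulas (1) and (2) above translate directly into code. The motor speed and pixel size below are hypothetical values chosen only to exercise the formulas:

```python
def image_speed(motor_speed_mm_per_ms: float, pixel_size_mm: float) -> float:
    """Formula (1): n = s / t, image-coordinate speed in pixel/ms."""
    return motor_speed_mm_per_ms / pixel_size_mm

def move_duration_ms(distance_px: float, speed_px_per_ms: float) -> float:
    """Formula (2): T = M / n, single-movement duration in ms."""
    return distance_px / speed_px_per_ms

# Hypothetical: motor moves 0.01 mm/ms, pixel size 0.002 mm, 6 px step.
n = image_speed(0.01, 0.002)   # ~5.0 pixel/ms
T = move_duration_ms(6.0, n)   # ~1.2 ms per movement
print(n, T)
```

Note the unit convention assumed here: with s in mm/ms and t in mm per pixel, n comes out in pixel/ms, so M in formula (2) must be expressed in pixels for T to be in milliseconds.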
Controlling the camera unit to capture the N first images during the movement in S401 is specifically: the motor in the VR device controls the camera unit to move N times, and the camera unit is controlled to capture one image each time a movement is completed, thereby obtaining the N first images.
In a specific implementation, the motor in the VR device first controls the camera unit to make N directed movements according to the preset number of movements N; each time a directed movement is completed, the camera unit captures one first image, yielding one frame. Each first image contains the Fresnel pattern and an iris image. Note that both camera-unit sets capture first images in the same way. After the VR device has controlled the camera unit through N movements, the camera unit has completed the capture of N first images.
S402: The VR device fuses the N first images to obtain a second image with the Fresnel pattern removed.
In a specific implementation, the VR device aligns and fuses the captured N first images. First, the reference position for multi-frame alignment and fusion must be determined: the position of the iris in the image is found, then the iris center, and registration of the multiple frames is completed with the center as the reference. Then, based on the registered frames, the regions marked as 0 in the images are aligned and fused; that is, multi-frame alignment and fusion is used to remove the Fresnel pattern from the images. The fusion may cause the content of the region where the Fresnel pattern is located in any one of the N first images to be replaced by content, or partial content, of the iris image in the other first images.
The specific procedure, shown in FIG. 6, includes:
S601: The VR device determines the iris center in each of the N first images, and performs image registration on the N first images using their respective iris centers as the registration offset reference.
The iris center may be determined in advance or during the recognition process. In a specific implementation, the VR device determines the iris center as follows:
First, after the user wears the VR device normally, the device's IR LED provides illumination, and the camera unit captures an infrared image of the human eye through the VR Fresnel lens.
Second, based on the principle that the pupil position has the smallest gray value in the captured infrared eye image, the pupil position is determined by searching for the minimum gray value in the image, and the pupil region in the infrared eye image is then determined from the pupil position.
A preferable approach is to set twice the minimum gray value as the threshold and binarize the image. Binarization sets the gray value of each pixel to 0 or 255, so that the whole image shows a clear black-and-white visual effect. Depending on how the threshold is chosen, binarization algorithms can be divided into fixed-threshold and adaptive-threshold algorithms; this embodiment does not limit which is used.
Then, based on the determined pupil region, an edge segmentation algorithm determines the position of the edge of the iris circle in the first image.
In a specific implementation, edges are where gray levels change most obviously in an image and, as boundaries between different regions, exhibit abrupt gray-level changes and directionality. The pupil is the small circular opening at the center of the iris of the human eye. Based on these edge characteristics and the determined pupil region, an edge segmentation algorithm can determine the position in the first image of the edge of the iris circle containing the pupil region.
Finally, the position of the iris circle is located in the first image in which its edge was determined, and the iris center is obtained from the iris circle.
That is, each first image is registered using the determined iris center as the offset reference, and the registered images are obtained.
Specifically, determining the iris center in each of the N first images includes: first, determining the iris image located in any one of the N first images; then, determining that the region with the smallest gray value in the iris image is the pupil region; then, based on the pupil region, determining the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and finally, locating the iris center based on the position of the iris circle.
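The pupil-search step above (minimum gray value, then a threshold of twice that value) can be sketched as follows. This is a minimal illustration on a toy image, not the device's actual implementation; the subsequent edge detection and iris-circle fitting are omitted, and the centroid of the dark pixels stands in for the pupil position:

```python
def locate_pupil(img):
    """Rough pupil localization per S601: find the minimum gray value,
    binarize with twice that value as the threshold, and return the
    centroid of the below-threshold (dark) pixels as a pupil estimate."""
    g_min = min(min(row) for row in img)
    thresh = 2 * g_min
    dark = [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v <= thresh]
    row_c = sum(r for r, _ in dark) / len(dark)
    col_c = sum(c for _, c in dark) / len(dark)
    return row_c, col_c

# Toy 5x5 "eye": bright background (200) with a dark 3x3 pupil (10)
# centered at row 2, column 2.
img = [[200] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 10
print(locate_pupil(img))  # (2.0, 2.0)
```

In practice the binarized pupil mask would feed the edge detection and region segmentation described above to fit the iris circle and its center.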
S602: The VR device marks as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images.
In a specific implementation, the VR device marks the region containing the Fresnel pattern in each captured first image, treats it as the marked region, and sets the marked region to 0; in effect, the pixel values in the region containing the Fresnel pattern are marked as 0.
Note that the pixel values in the region containing the Fresnel pattern may be marked manually, for example while the VR device is offline, although marking is not limited to manual labeling. In addition, because the positions of the camera unit and the Fresnel lens are fixed relative to each other, i.e., the relative position of the Fresnel pattern in the image is fixed, only one of the N first images needs to be calibrated manually once; the other frames can then be marked automatically by reference to it.
S603: The VR device aligns the marked N first images based on the registration result, and performs an OR operation on the aligned N first images to obtain the second image.
In a specific implementation, this relies on the spatial-domain invariance of the images. First, the VR device aligns the marked N first images based on the registration result. Then it computes the frequency mean and variance of adjacent images among the aligned N first images, and uses them to set weighting coefficients between adjacent images among the N first images. The VR device then performs the multi-frame-information OR operation over the pixel values that were set to 0 in the N first images, together with the set weighting coefficients, to obtain the second image with the Fresnel pattern removed.
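The combination of S601–S603 can be illustrated with a minimal fusion sketch. It assumes the per-frame registration offsets are already known from the iris centers, and it models the "OR" combination simply: a pixel masked by the Fresnel pattern in one frame takes its value from another aligned frame where that location is unmasked. The frequency-based weighting is omitted, and all data below are toy values:

```python
def fuse_frames(frames, mask, offsets):
    """Fuse N registered frames, in the spirit of S602/S603.

    frames:  list of N equal-sized 2D gray images (lists of lists).
    mask:    2D 0/1 image in sensor coordinates; 1 marks the fixed
             Fresnel-pattern region whose pixels are treated as 0 (S602).
    offsets: per-frame (dr, dc) shifts registering each frame to the
             first one via the iris centers (S601).
    Returns the fused second image: each output pixel takes its value
    from the first aligned frame whose sensor location is unmasked,
    a simple stand-in for the OR combination of S603.
    """
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            for img, (dr, dc) in zip(frames, offsets):
                sr, sc = r + dr, c + dc            # sensor location in frame
                if 0 <= sr < h and 0 <= sc < w and mask[sr][sc] == 0:
                    out[r][c] = img[sr][sc]        # unmasked pixel wins
                    break
    return out

# Toy 1x4 scene [1, 2, 3, 4]; the second frame sees it shifted left by
# one pixel, and the Fresnel "ring" occludes sensor column 1.
frames = [[[1, 2, 3, 4]], [[2, 3, 4, 5]]]
mask = [[0, 1, 0, 0]]
offsets = [(0, 0), (0, -1)]      # frame 2 registered back to frame 1
print(fuse_frames(frames, mask, offsets))  # [[1, 2, 3, 4]]
```

The occluded column is recovered from the shifted frame, which is exactly why the Fresnel pattern's fixed sensor position, combined with a moving iris, makes the multi-frame fusion work.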
S403: The VR device performs iris recognition based on the second image.
In a specific implementation, iris recognition is performed on the second image from which the Fresnel pattern has been removed as described above, thereby ensuring the integrity of the iris information.
In this embodiment of this application, the camera unit is controlled to move and to capture, during the movement, N first images in which the Fresnel pattern and the iris image overlap; the N first images are then aligned and fused to obtain a second image with the Fresnel pattern removed; and the final iris recognition is completed based on the second image. By removing the Fresnel pattern before performing iris recognition, this method solves the problem of Fresnel-pattern interference during iris recognition and improves both the security and the user experience of the VR device.
Based on the iris recognition method disclosed in the foregoing embodiments of this application, an embodiment of this application further discloses a VR device that performs the method. In a specific implementation, it may be the VR device shown in FIG. 1.
FIG. 7 is a schematic structural diagram of the VR device disclosed in an embodiment of this application. The VR device 700 includes a Fresnel lens 701, a camera unit 702, a control unit 703, and a processing unit 704.
The camera unit 702 is configured to capture images through the Fresnel lens 701.
The relative position of the Fresnel lens 701 and the camera unit 702 remains fixed while the camera unit 702 moves.
The control unit 703 is configured to control the camera unit 702 to move, and to control the camera unit 702 to capture N frames of a first image during the movement, where each first image is an iris image containing a Fresnel pattern and N is an integer greater than 1.
In a specific implementation, the VR device 700 includes a motor used by the control unit 703 to move the camera unit 702. The specific procedure by which the control unit 703 controls the camera unit 702 to capture the N first images during the movement is: the motor in the VR device 700 controls the camera unit 702 to move N times, and the camera unit 702 is controlled to capture one image each time a movement is completed, thereby obtaining the N first images.
The processing unit 704 is configured to fuse the N first images to obtain a second image with the Fresnel pattern removed, and to perform iris recognition based on the second image.
In a specific implementation, the specific procedure by which the processing unit 704 fuses the N first images to obtain the second image with the Fresnel pattern removed is: determining the iris center in each of the N first images, and performing image registration on the N first images using their respective iris centers as the registration offset reference; marking as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images; and aligning the marked N first images based on the registration result and performing an OR operation on the aligned N first images to obtain the second image.
In a specific implementation, the specific procedure by which the processing unit 704 determines the iris center in each of the N first images is: determining the iris image located in any one of the N first images; determining that the region with the smallest gray value in the iris image is the pupil region; based on the pupil region, determining the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and locating the iris center based on the position of the iris circle.
Further, the VR device also includes: an obtaining unit, configured to obtain a prestored third image that contains a Fresnel pattern and was captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings.
Correspondingly, the control unit 703 is configured to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit 702 moves each time it completes a movement.
Optionally, the control unit 703 may also implement the function of the obtaining unit. That is, the control unit 703 is further configured to obtain the third image containing the Fresnel pattern captured by the camera unit, where the background of the third image is a solid color and the Fresnel pattern includes at least two concentric rings, and to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit 702 moves each time it completes a movement.
For the operations involved in the units of the VR device 700 disclosed above, refer to the corresponding operations performed by the VR device in the foregoing embodiments of this application; details are not repeated here.
In combination with the iris recognition method disclosed in the embodiments of this application, the VR device disclosed herein may also be implemented directly in hardware, in a memory executed by a processor, or in a combination of the two.
As shown in FIG. 8, the VR device 800 includes a processor 801, a memory 802, a camera 803, a Fresnel lens 804, and a motor 805. Optionally, the VR device 800 further includes a communication interface 806.
The processor 801 is coupled to the memory 802 via a bus. The processor 801 is coupled to the communication interface 806 via a bus.
The camera 803 captures images through the Fresnel lens 804.
The motor 805, under the control of the processor 801, moves the camera 803 through a corresponding transmission mechanism.
The relative position of the Fresnel lens 804 and the camera 803 remains fixed while the camera 803 moves.
The processor 801 may specifically be a central processing unit (CPU), a network processor (NP), an application-specific integrated circuit (ASIC), or a programmable logic device (PLD). The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or generic array logic (GAL).
The memory 802 may specifically be a content-addressable memory (CAM) or a random-access memory (RAM). The CAM may be a ternary content-addressable memory (TCAM).
The communication interface 806 may be a wired interface, for example a fiber distributed data interface (FDDI) or an Ethernet interface.
The memory 802 may also be integrated in the processor 801. If the memory 802 and the processor 801 are mutually independent devices, they are connected; for example, they may communicate via a bus. The communication interface 806 and the processor 801 may communicate via a bus, or the communication interface 806 may be directly connected to the processor 801.
The memory 802 is configured to store the programs, code, or instructions of the iris recognition method disclosed in the embodiments of this application. Optionally, the memory 802 includes an operating system and application programs for the programs, code, or instructions of the iris recognition method disclosed in the embodiments of this application.
When the processor 801 or a hardware device is to perform the operations of the iris recognition method disclosed in the embodiments of this application, invoking and executing the programs, code, or instructions stored in the memory 802 can complete the process by which the VR device in the foregoing embodiments performs the corresponding iris recognition method. For the specific process, refer to the corresponding parts of the foregoing embodiments of this application; details are not repeated here.
It can be understood that the receiving/sending operations involved in the iris recognition method embodiments shown in FIG. 1 to FIG. 5 may refer to receiving/sending processing implemented by the processor, or to sending/receiving processes completed by a receiver and a transmitter; the receiver and transmitter may exist independently or be integrated into a transceiver. In one possible implementation, the VR device 800 may further include a transceiver.
It can be understood that FIG. 8 shows only a simplified design of the VR device. In practical applications, the VR device may include any number of interfaces, processors, memories, and so on, and all VR devices or control devices that can implement the embodiments of this application fall within the protection scope of the embodiments of this application.
All or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented completely or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are generated completely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state disk (SSD)).
Finally, it should be noted that the foregoing embodiments are merely intended to illustrate the technical solutions of this application, not to limit them. Although this application and its beneficial effects have been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the claims of this application.

Claims (11)

  1. An iris recognition method, applied to a virtual reality (VR) device, wherein the VR device comprises a Fresnel lens and a camera unit, the camera unit captures images through the Fresnel lens, and the relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves, the method comprising:
    controlling the camera unit to move, and controlling the camera unit to capture N frames of a first image during the movement, wherein each first image is an iris image containing a Fresnel pattern, and N is an integer greater than 1;
    fusing the N frames of the first image to obtain a second image with the Fresnel pattern removed; and
    performing iris recognition based on the second image.
  2. The method according to claim 1, wherein controlling the camera unit to capture N frames of the first image during the movement comprises:
    controlling, by a motor in the VR device, the camera unit to move N times, and controlling the camera unit to capture one image each time a movement is completed, thereby obtaining the N frames of the first image.
  3. The method according to claim 1 or 2, further comprising: obtaining a prestored third image that contains a Fresnel pattern and was captured by the camera unit, wherein the Fresnel pattern comprises at least two concentric rings; and
    using the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit moves in each movement.
  4. The method according to any one of claims 1 to 3, wherein fusing the N frames of the first image to obtain the second image with the Fresnel pattern removed comprises:
    determining the iris center in each of the N first images, and performing image registration on the N first images using their respective iris centers as the registration offset reference;
    marking as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images; and
    aligning the marked N first images based on the registration result, and performing an OR operation on the aligned N first images to obtain the second image.
  5. The method according to claim 4, wherein determining the iris center in each of the N first images comprises:
    determining the iris image located in any one of the N first images;
    determining that the region with the smallest gray value in the iris image is the pupil region;
    based on the pupil region, determining the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and
    locating the iris center based on the position of the iris circle.
  6. A virtual reality (VR) device, comprising a Fresnel lens, a camera unit, a control unit, and a processing unit, wherein:
    the camera unit is configured to capture images through the Fresnel lens;
    the relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves;
    the control unit is configured to control the camera unit to move, and to control the camera unit to capture N frames of a first image during the movement, wherein each first image is an iris image containing a Fresnel pattern, and N is an integer greater than 1; and
    the processing unit is configured to fuse the N frames of the first image to obtain a second image with the Fresnel pattern removed, and to perform iris recognition based on the second image.
  7. The VR device according to claim 6, wherein the VR device comprises a motor used by the control unit to move the camera unit, and the control unit, when controlling the camera unit to capture the N frames of the first image during the movement, is configured to control the camera unit, through the motor, to move N times, and to control the camera unit to capture one image each time a movement is completed, thereby obtaining the N frames of the first image.
  8. The VR device according to claim 6 or 7, further comprising:
    an obtaining unit, configured to obtain a prestored third image that contains a Fresnel pattern and was captured by the camera unit, wherein the Fresnel pattern comprises at least two concentric rings; and
    the control unit is configured to use the camera-unit movement distance corresponding to one Nth of the interval between the rings as the distance the camera unit moves each time it completes a movement.
  9. The VR device according to any one of claims 6 to 8, wherein the processing unit, when fusing the N frames of the first image to obtain the second image with the Fresnel pattern removed, is configured to: determine the iris center in each of the N first images, and perform image registration on the N first images using their respective iris centers as the registration offset reference; mark as 0 the pixel values in the region where the Fresnel pattern is located in each of the N images, based on the position of the Fresnel pattern in the N first images; and align the marked N first images based on the registration result, and perform an OR operation on the aligned N first images to obtain the second image.
  10. The VR device according to claim 9, wherein the processing unit, when determining the iris center in each of the N first images, is configured to: determine the iris image located in any one of the N first images;
    determine that the region with the smallest gray value in the iris image is the pupil region; based on the pupil region, determine the edge of the iris circle containing the pupil region by using edge detection and region segmentation algorithms, thereby determining the position of the iris circle; and locate the iris center based on the position of the iris circle.
  11. A virtual reality (VR) device, comprising a Fresnel lens, a camera unit, a processor, and a memory, wherein the camera unit is configured to capture images through the Fresnel lens;
    the relative position of the Fresnel lens and the camera unit remains fixed while the camera unit moves;
    the memory is configured to store programs, code, or instructions for iris recognition on the VR device; and
    the processor is configured to invoke the programs, code, or instructions stored in the memory to perform the iris recognition method according to any one of claims 1 to 5.
PCT/CN2018/120579 2017-12-26 2018-12-12 Iris recognition method and VR device WO2019128714A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711432690.8A CN109960992B (zh) 2017-12-26 2017-12-26 Iris recognition method and VR device
CN201711432690.8 2017-12-26

Publications (1)

Publication Number Publication Date
WO2019128714A1 true WO2019128714A1 (zh) 2019-07-04

Family

ID=67022047

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/120579 WO2019128714A1 (zh) 2017-12-26 2018-12-12 虹膜识别方法和vr设备

Country Status (2)

Country Link
CN (1) CN109960992B (zh)
WO (1) WO2019128714A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080075334A1 (en) * 2003-09-05 2008-03-27 Honeywell International Inc. Combined face and iris recognition system
CN101819626A (zh) * 2009-02-26 2010-09-01 何玉青 一种基于图像融合的虹膜光斑消除方法
CN105224936A (zh) * 2015-10-28 2016-01-06 广东欧珀移动通信有限公司 一种虹膜特征信息提取方法和装置
CN107292242A (zh) * 2017-05-31 2017-10-24 华为技术有限公司 一种虹膜识别方法和终端

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100826876B1 (ko) * 2006-09-18 2008-05-06 한국전자통신연구원 홍채 검출 방법 및 이를 위한 장치
CN204791066U (zh) * 2015-05-21 2015-11-18 北京中科虹霸科技有限公司 一种用于移动终端的虹膜识别装置及包含其的移动终端
US10565446B2 (en) * 2015-09-24 2020-02-18 Tobii Ab Eye-tracking enabled wearable devices
CN105955491B (zh) * 2016-06-30 2020-07-10 北京上古视觉科技有限公司 一种具有眼控及虹膜识别功能的vr眼镜

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080075334A1 (en) * 2003-09-05 2008-03-27 Honeywell International Inc. Combined face and iris recognition system
CN101819626A (zh) * 2009-02-26 2010-09-01 何玉青 一种基于图像融合的虹膜光斑消除方法
CN105224936A (zh) * 2015-10-28 2016-01-06 广东欧珀移动通信有限公司 一种虹膜特征信息提取方法和装置
CN107292242A (zh) * 2017-05-31 2017-10-24 华为技术有限公司 一种虹膜识别方法和终端

Also Published As

Publication number Publication date
CN109960992B (zh) 2023-08-29
CN109960992A (zh) 2019-07-02

Similar Documents

Publication Publication Date Title
US11010967B2 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
WO2019218621A1 (zh) 活体检测方法及装置、电子设备和存储介质
US9149179B2 (en) System and method for identifying eye conditions
CN105426870B (zh) 一种人脸关键点定位方法及装置
US20180300589A1 (en) System and method using machine learning for iris tracking, measurement, and simulation
JP4505362B2 (ja) 赤目検出装置および方法並びにプログラム
CN111862296B (zh) 三维重建方法及装置、系统、模型训练方法、存储介质
CN105243386B (zh) 人脸活体判断方法以及系统
JP7292000B2 (ja) 動的なカメラ較正
WO2020125499A1 (zh) 一种操作提示方法及眼镜
KR20160136391A (ko) 정보처리장치 및 정보처리방법
CN103902958A (zh) 人脸识别的方法
JP2016001447A (ja) 画像認識システム、画像認識装置、画像認識方法、およびコンピュータプログラム
CN111382613B (zh) 图像处理方法、装置、设备和介质
TWI709085B (zh) 用於對車輛損傷影像進行損傷分割的方法、裝置、電腦可讀儲存媒體和計算設備
JP6822482B2 (ja) 視線推定装置、視線推定方法及びプログラム記録媒体
AU2020203790B2 (en) Transformed multi-source content aware fill
CN205750807U (zh) 一种基于虹膜识别的幼儿园安全接送系统
CN109714530A (zh) 一种航空相机图像调焦方法
CN109308714A (zh) 基于分类惩罚的摄像头和激光雷达信息配准方法
US20200184671A1 (en) Image processing system and image processing method
CN105631285A (zh) 一种生物特征身份识别方法及装置
CN109993090B (zh) 基于级联回归森林和图像灰度特征的虹膜中心定位方法
WO2019128714A1 (zh) 虹膜识别方法和vr设备
CN112396016A (zh) 一种基于大数据技术的人脸识别系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18896198

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18896198

Country of ref document: EP

Kind code of ref document: A1