WO2017000493A1 - Living body iris detection method and terminal - Google Patents

Living body iris detection method and terminal Download PDF

Info

Publication number
WO2017000493A1
WO2017000493A1 (PCT/CN2015/095640)
Authority
WO
WIPO (PCT)
Prior art keywords
iris
pupil
center
photo
preset
Prior art date
Application number
PCT/CN2015/095640
Other languages
English (en)
French (fr)
Inventor
钟焰涛
傅文治
蒋罗
Original Assignee
宇龙计算机通信科技(深圳)有限公司 (Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 宇龙计算机通信科技(深圳)有限公司
Publication of WO2017000493A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 - Spoof detection, e.g. liveness detection
    • G06V 40/45 - Detection of the body part being alive

Definitions

  • the present invention relates to the field of electronic technologies, and in particular, to a living body iris detecting method and terminal.
  • Iris recognition technology can be used for identity recognition. A key point of iris recognition technology is living body iris detection: the system needs to distinguish whether the collected iris photos come from a living body or from a photograph of an eye.
  • In the prior art, the liveness detection problem is mostly solved with image recognition techniques that analyze the texture, defocus, frequency distribution and the like of the collected iris photos; however, this approach requires a series of computationally complex operations such as wavelet transforms and still has a certain error rate.
  • Alternatively, the intensity of the light illuminating the eye can be adjusted while multiple iris photos are taken; because the pupil changes noticeably when the light changes, comparing the pupils across the multiple photos can determine whether the iris is from a living body.
  • However, shining strong and weak light into the eyes at close range causes discomfort and can damage the eyes.
  • Embodiments of the present invention provide a living body iris detection method and terminal that can improve the accuracy and efficiency of iris recognition without harming the eyes.
  • Embodiments of the present invention provide a living body iris detection method, including: acquiring a plurality of iris photos; determining the position of the pupil center in each of the plurality of iris photos; calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and determining an iris detection result according to the maximum displacement of the pupil center.
  • Determining the iris detection result according to the maximum displacement of the pupil center includes: when the maximum displacement of the pupil center is greater than a preset threshold, determining that the iris photos are living iris photos.
  • After the iris detection result is determined, the method further includes: extracting iris texture features from the plurality of iris photos; comparing the iris texture features with preset iris template information; and identifying the identity of the user according to the comparison result.
  • Calculating the maximum displacement of the pupil center includes: calculating the distance from the pupil center in each iris photo to a preset eye-socket reference point; calculating the difference between those distances for every two of the iris photos; and determining the maximum displacement of the pupil center from the distance differences.
  • Identifying the identity of the user includes: when the iris texture features are identical to the preset iris template information, identifying the user as the target user.
  • an embodiment of the present invention provides a living body iris detecting terminal, including:
  • Photo acquisition module for acquiring multiple iris photos
  • a position determining module configured to determine a position of a center of the pupil in each of the plurality of iris photos
  • a displacement calculation module configured to calculate a displacement maximum value of the pupil center according to a position of the pupil center in each iris photo
  • the result determining module is configured to determine an iris detection result according to a maximum displacement of the pupil center.
  • the result determining module is specifically configured to:
  • the iris photo is determined to be a living iris photo.
  • the terminal further includes:
  • a feature extraction module configured to extract an iris texture feature from the plurality of iris photos
  • An information comparison module configured to compare the iris texture feature with preset iris template information
  • An identity module is configured to identify an identity of the user according to a comparison result between the iris texture feature and the preset iris template information.
  • the displacement calculation module includes:
  • a distance calculation unit configured to calculate, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point;
  • a difference calculation unit configured to calculate, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of iris photos to the preset eye-socket reference point;
  • a maximum value determining unit configured to determine the maximum displacement of the pupil center according to the distance differences between every two of the plurality of iris photos.
  • the identity recognition module is specifically configured to:
  • the identity of the user is identified as the target user.
  • FIG. 1 is a flow chart of a first embodiment of a living body iris detecting method according to the present invention
  • FIG. 2 is a flow chart of a second embodiment of a living body iris detecting method according to the present invention.
  • FIG. 3 is a schematic structural diagram of a living body iris detecting terminal according to an embodiment of the present invention.
  • FIG. 4 is a structural diagram of a displacement calculation module in a living body iris detecting terminal according to an embodiment of the present invention.
  • FIG. 1 is a flow chart of a first embodiment of a living body iris detecting method according to the present invention. As shown in the figure, the method in the embodiment of the present invention includes:
  • In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
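  • For illustration only (this sketch is not part of the patent disclosure), continuous capture within a preset time could look like the following Python fragment; the camera index, the use of OpenCV, and the grayscale conversion are assumptions. At 0.01-0.02 s an ordinary camera yields very few frames, so a practical system would likely use a longer window or a high-speed sensor.

```python
# Hypothetical sketch: capture iris photos continuously for a preset time.
# Camera index, preset duration, and frame handling are assumptions, not from the patent.
import time
import cv2

def capture_iris_photos(preset_seconds=0.02, camera_index=0):
    """Grab as many frames as the camera delivers within `preset_seconds`."""
    cap = cv2.VideoCapture(camera_index)
    photos = []
    start = time.time()
    while time.time() - start < preset_seconds:
        ok, frame = cap.read()
        if ok:
            photos.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return photos
```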
  • In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded; the trusted pupil region is determined from the circularity and average gray level of each pupil candidate region; a random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and their corresponding edge points; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points to obtain the pupil center.
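  • A rough, hypothetical sketch of the row-scan and ellipse-fit idea is shown below; the gray-level thresholds and the use of cv2.fitEllipse are illustrative assumptions, since the patent gives no numeric parameters.

```python
# Hypothetical sketch of the "black-white-black" row scan and ellipse fit.
# Thresholds (60 / 200) are invented for illustration.
import cv2
import numpy as np

def find_pupil_center(gray):
    edge_pts, white_mids = [], []
    for y in range(gray.shape[0]):
        row = gray[y]
        dark = row < 60            # pupil pixels
        bright = row > 200         # specular-highlight (white) pixels
        xs = np.where(bright)[0]
        if xs.size and dark[:xs[0]].any() and dark[xs[-1] + 1:].any():
            white_mids.append((int(xs.mean()), y))   # midpoint of the white run in this row
            dark_xs = np.where(dark)[0]
            edge_pts.append((int(dark_xs[0]), y))    # left pupil edge point
            edge_pts.append((int(dark_xs[-1]), y))   # right pupil edge point
    if len(edge_pts) < 5:
        return None
    (cx, cy), _, _ = cv2.fitEllipse(np.array(edge_pts, dtype=np.float32))
    return cx, cy
```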
  • the eyeball in the iris photo can be modeled as a circle or ellipse, and the circle or ellipse can be detected using the Hough transform to determine the position of the pupil center in each iris photo.
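  • Because the passage above models the pupil region as a circle detectable with the Hough transform, a minimal OpenCV sketch follows; the blur kernel and all HoughCircles parameters are assumptions.

```python
# Hypothetical sketch: pupil center via the circular Hough transform.
import cv2
import numpy as np

def pupil_center_hough(gray):
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)   # strongest detected circle
    return x, y
```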
  • In a specific implementation, the distance from the pupil center in each iris photo to a preset eye-socket reference point may be calculated according to the position of the pupil center in each iris photo; the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point may then be calculated from those distances; and the maximum displacement of the pupil center may be determined from the distance differences of every two iris photos.
  • The eye-socket reference point can be any position on the eye socket.
  • Further, the eye-socket reference point may first be taken as the coordinate origin to determine the coordinates of the pupil center in each iris photo; the distance from the pupil center in each iris photo to the coordinate origin is then calculated; the difference between these distances is calculated for every two iris photos; and the largest difference is selected as the maximum displacement of the pupil center.
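  • Once the pupil centers and the eye-socket reference point are known, the displacement computation is simple geometry; a minimal sketch follows, with the pixel threshold chosen arbitrarily for illustration.

```python
# Hypothetical sketch: maximum displacement of the pupil center relative to
# a fixed eye-socket reference point, and the liveness decision.
import math

def max_pupil_displacement(pupil_centers, reference_point):
    """pupil_centers: list of (x, y); reference_point: (x, y) on the eye socket."""
    rx, ry = reference_point
    dists = [math.hypot(x - rx, y - ry) for x, y in pupil_centers]
    # the largest distance difference over every pair of photos is max - min
    return max(dists) - min(dists)

def is_live_iris(pupil_centers, reference_point, threshold=3.0):  # threshold in pixels, assumed
    return max_pupil_displacement(pupil_centers, reference_point) > threshold
```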
  • In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
  • In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; finally, the iris detection result is determined according to the maximum displacement of the pupil center. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
  • FIG. 2 is a flow chart of a second embodiment of a living body iris detecting method according to the present invention. As shown in the figure, the method in the embodiment of the present invention includes:
  • In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
  • In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded; the trusted pupil region is determined from the circularity and average gray level of each pupil candidate region; a random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and their corresponding edge points; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points to obtain the pupil center.
  • the eyeball in the iris photo can be modeled as a circle or ellipse, and the circle or ellipse can be detected using the Hough transform to determine the position of the pupil center in each iris photo.
  • In a specific implementation, the distance from the pupil center in each iris photo to a preset eye-socket reference point may be calculated according to the position of the pupil center in each iris photo; the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point may then be calculated from those distances; and the maximum displacement of the pupil center may be determined from the distance differences of every two iris photos.
  • The eye-socket reference point can be any position on the eye socket.
  • Further, the eye-socket reference point may first be taken as the coordinate origin to determine the coordinates of the pupil center in each iris photo; the distance from the pupil center in each iris photo to the coordinate origin is then calculated; the difference between these distances is calculated for every two iris photos; and the largest difference is selected as the maximum displacement of the pupil center.
  • In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
  • In a specific implementation, each iris photo is first decomposed into four sub-band images: horizontal high frequency (LH1), vertical high frequency (HL1), diagonal high frequency (HH1), and the low-frequency approximation (LL1); multiple filters are then constructed, and an improved two-dimensional log-Gabor filtering algorithm is used to extract the iris texture features from the low-frequency approximation (LL1) sub-band image in both the radial and angular directions.
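  • A hypothetical sketch of the decomposition-plus-filtering pipeline is shown below. PyWavelets provides the single-level 2-D decomposition, and standard cv2.getGaborKernel filters stand in for the patent's improved two-dimensional log-Gabor filters, whose exact form is not reproduced here; all parameter values are assumptions.

```python
# Hypothetical sketch: one-level wavelet decomposition and Gabor-style filtering
# of the low-frequency sub-band. Wavelet name, kernel size, scales and orientations are assumed.
import cv2
import numpy as np
import pywt

def iris_texture_features(gray):
    # Single-level 2-D DWT: approximation (LL1) plus three detail sub-bands
    # (named here after the patent's LH1/HL1/HH1 labels).
    ll1, (lh1, hl1, hh1) = pywt.dwt2(gray.astype(np.float32), "haar")
    ll1 = ll1.astype(np.float32)
    features = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # angular directions
        for lambd in (4.0, 8.0):                          # radial scales
            kernel = cv2.getGaborKernel((15, 15), 3.0, theta, lambd, 0.5, 0)
            response = cv2.filter2D(ll1, -1, kernel.astype(np.float32))
            features.append(response.mean())
            features.append(response.std())
    return np.array(features)
```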
  • the user's eye photos may be pre-acquired, the iris template information is extracted from the collected eye photos, and the iris template information is stored in the terminal.
  • In a specific implementation, when the iris texture features are identical to the preset iris template information, the user is identified as the target user; when the iris texture features differ from the preset iris template information, the user is identified as an unauthorized user.
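  • The patent only requires checking whether the extracted features and the stored template are the same; the sketch below uses a normalized distance with a tolerance, both of which are illustrative assumptions rather than the patent's method.

```python
# Hypothetical sketch: compare extracted iris features with the stored template.
# The distance measure and tolerance are illustrative assumptions.
import numpy as np

def identify_user(features, template, tolerance=0.05):
    features = np.asarray(features, dtype=np.float32)
    template = np.asarray(template, dtype=np.float32)
    distance = np.linalg.norm(features - template) / (np.linalg.norm(template) + 1e-9)
    return "target user" if distance <= tolerance else "unauthorized user"
```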
  • In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; finally, the iris detection result is determined according to the maximum displacement of the pupil center, and the identity of the user is then identified according to the iris detection result. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
  • FIG. 3 is a schematic structural diagram of a living body iris detecting terminal according to an embodiment of the present invention.
  • the terminal in the embodiment of the present invention includes:
  • the photo acquisition module 301 is configured to acquire a plurality of iris photos.
  • In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
  • the location determining module 302 is configured to determine a location of a pupil center in each of the plurality of iris photos.
  • In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded; the trusted pupil region is determined from the circularity and average gray level of each pupil candidate region; a random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and their corresponding edge points; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points to obtain the pupil center.
  • the eyeball in the iris photo can be modeled as a circle or ellipse, and the circle or ellipse can be detected using the Hough transform to determine the position of the pupil center in each iris photo.
  • the displacement calculation module 303 is configured to calculate a displacement maximum value of the pupil center according to the position of the pupil center in each iris photo.
  • the displacement calculation module 303 may further include:
  • The distance calculation unit 401 is configured to calculate, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point.
  • The eye-socket reference point can be any position on the eye socket.
  • The difference calculation unit 402 is configured to calculate, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of iris photos to the preset eye-socket reference point.
  • The maximum value determining unit 403 is configured to determine the maximum displacement of the pupil center according to the distance differences between every two of the plurality of iris photos.
  • Further, the eye-socket reference point may first be taken as the coordinate origin to determine the coordinates of the pupil center in each iris photo; the distance from the pupil center in each iris photo to the coordinate origin is then calculated; the difference between these distances is calculated for every two iris photos; and the largest difference is selected as the maximum displacement of the pupil center.
  • the result determining module 304 is configured to determine an iris detection result according to a maximum displacement of the pupil center.
  • In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
  • the terminal in the embodiment of the present invention may further include:
  • the feature extraction module 305 is configured to extract an iris texture feature from the plurality of iris photos.
  • In a specific implementation, each iris photo is first decomposed into four sub-band images: horizontal high frequency (LH1), vertical high frequency (HL1), diagonal high frequency (HH1), and the low-frequency approximation (LL1); multiple filters are then constructed, and an improved two-dimensional log-Gabor filtering algorithm is used to extract the iris texture features from the low-frequency approximation (LL1) sub-band image in both the radial and angular directions.
  • The information comparison module 306 is configured to compare the iris texture feature with the preset iris template information.
  • the user's eye photos may be pre-acquired, the iris template information is extracted from the collected eye photos, and the iris template information is stored in the terminal.
  • the identity recognition module 307 is configured to identify the identity of the user according to the comparison result between the iris texture feature and the preset iris template information.
  • In a specific implementation, when the iris texture features are identical to the preset iris template information, the user is identified as the target user; when the iris texture features differ from the preset iris template information, the user is identified as an unauthorized user.
  • In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; finally, the iris detection result is determined according to the maximum displacement of the pupil center, and the identity of the user is then identified according to the iris detection result. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
  • The storage medium may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

Provided are a living body iris detection method and terminal, including: acquiring a plurality of iris photos; determining the position of the pupil center in each of the plurality of iris photos; calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and determining an iris detection result according to the maximum displacement of the pupil center. The embodiments of the present invention can improve the accuracy and efficiency of iris recognition without harming the eyes.

Description

Living body iris detection method and terminal
This application claims priority to Chinese Patent Application No. 201510385830.5, filed with the Chinese Patent Office on June 30, 2015 and entitled "Living body iris detection method and terminal", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of electronic technologies, and in particular to a living body iris detection method and terminal.
Background
Iris recognition technology can be used for identity recognition. A key point of iris recognition technology is living body iris detection: the system needs to distinguish whether the collected iris photos come from a living body or from a photograph of an eye. In the prior art, the liveness detection problem of iris recognition is mostly solved with image recognition techniques that analyze the texture, defocus, frequency distribution and the like of the collected iris photos; however, this approach requires a series of highly complex computations such as wavelet transforms and still has a certain error rate. In addition, the prior art can also adjust the intensity of the light illuminating the eye while taking multiple iris photos; because the pupil changes noticeably when the light changes, comparing the pupils across the photos can determine whether the iris is from a living body. However, shining strong and weak light into the eyes at close range causes discomfort and can damage the eyes.
Summary of the Invention
Embodiments of the present invention provide a living body iris detection method and terminal, which can improve the accuracy and efficiency of iris recognition without harming the eyes.
An embodiment of the present invention provides a living body iris detection method, including:
acquiring a plurality of iris photos;
determining the position of the pupil center in each of the plurality of iris photos;
calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and
determining an iris detection result according to the maximum displacement of the pupil center.
Further, determining the iris detection result according to the maximum displacement of the pupil center includes:
when the maximum displacement of the pupil center is greater than a preset threshold, determining that the iris photos are living iris photos.
Further, after determining the iris detection result according to the maximum displacement of the pupil center, the method further includes:
extracting iris texture features from the plurality of iris photos;
comparing the iris texture features with preset iris template information; and
identifying the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
Further, calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo includes:
calculating, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point;
calculating, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point; and
determining the maximum displacement of the pupil center according to the differences between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point.
Further, identifying the identity of the user according to the comparison result between the iris texture features and the preset iris template information includes:
when the iris texture features are identical to the preset iris template information, identifying the user as the target user.
Correspondingly, an embodiment of the present invention provides a living body iris detection terminal, including:
a photo acquisition module configured to acquire a plurality of iris photos;
a position determination module configured to determine the position of the pupil center in each of the plurality of iris photos;
a displacement calculation module configured to calculate the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and
a result determination module configured to determine an iris detection result according to the maximum displacement of the pupil center.
Further, the result determination module is specifically configured to:
when the maximum displacement of the pupil center is greater than a preset threshold, determine that the iris photos are living iris photos.
Further, the terminal further includes:
a feature extraction module configured to extract iris texture features from the plurality of iris photos;
an information comparison module configured to compare the iris texture features with preset iris template information; and
an identity recognition module configured to identify the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
Further, the displacement calculation module includes:
a distance calculation unit configured to calculate, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point;
a difference calculation unit configured to calculate, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point; and
a maximum value determination unit configured to determine the maximum displacement of the pupil center according to the differences between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point.
Further, the identity recognition module is specifically configured to:
when the iris texture features are identical to the preset iris template information, identify the user as the target user.
By implementing the embodiments of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; and finally, the iris detection result is determined according to the maximum displacement of the pupil center. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a first embodiment of a living body iris detection method proposed by the present invention;
FIG. 2 is a flow chart of a second embodiment of a living body iris detection method proposed by the present invention;
FIG. 3 is a schematic structural diagram of a living body iris detection terminal proposed by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the displacement calculation module in the living body iris detection terminal proposed by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a flow chart of a first embodiment of a living body iris detection method proposed by the present invention. As shown in the figure, the method in this embodiment of the present invention includes the following steps.
S101: Acquire a plurality of iris photos.
In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
S102: Determine the position of the pupil center in each of the plurality of iris photos.
In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded. The trusted pupil region is determined from the circularity and average gray level of each pupil candidate region. A random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and the edge points corresponding to them; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points of the trusted pupil region to obtain the pupil center.
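For illustration only (this is not part of the patent disclosure), the random sample consensus line fit over the recorded white-line midpoints might be sketched as follows; the iteration count and inlier tolerance are assumptions.

```python
# Hypothetical sketch: RANSAC line fit over the recorded white-line midpoints,
# returning the midpoints judged reliable (the inliers of the best line).
import numpy as np

def ransac_line_inliers(midpoints, iterations=200, tol=2.0, rng=None):
    pts = np.asarray(midpoints, dtype=np.float64)
    if len(pts) < 2:
        return pts
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iterations):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # perpendicular distance of every midpoint to the line through p and q
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return pts[best_inliers]
```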
Optionally, the eyeball in the iris photo may be modeled as a circle or an ellipse, the circle or ellipse may be detected using the Hough transform, and the position of the pupil center in each iris photo may thereby be determined.
S103: Calculate the maximum displacement of the pupil center according to the position of the pupil center in each iris photo.
In a specific implementation, the distance from the pupil center in each iris photo to a preset eye-socket reference point may be calculated according to the position of the pupil center in each iris photo; the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point may then be calculated according to those distances; and the maximum displacement of the pupil center may be determined according to the distance differences of every two iris photos. The eye-socket reference point may be any position on the eye socket. Further, the eye-socket reference point may first be taken as the coordinate origin, the coordinates of the pupil center in each iris photo may be determined, the distance from the pupil center in each iris photo to the coordinate origin may then be calculated, the difference between these distances may be calculated for every two iris photos, and the largest difference may be selected as the maximum displacement of the pupil center.
S104: Determine an iris detection result according to the maximum displacement of the pupil center.
In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; and finally, the iris detection result is determined according to the maximum displacement of the pupil center. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
Referring to FIG. 2, FIG. 2 is a flow chart of a second embodiment of a living body iris detection method proposed by the present invention. As shown in the figure, the method in this embodiment of the present invention includes the following steps.
S201: Acquire a plurality of iris photos.
In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
S202: Determine the position of the pupil center in each of the plurality of iris photos.
In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded. The trusted pupil region is determined from the circularity and average gray level of each pupil candidate region. A random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and the edge points corresponding to them; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points of the trusted pupil region to obtain the pupil center.
Optionally, the eyeball in the iris photo may be modeled as a circle or an ellipse, the circle or ellipse may be detected using the Hough transform, and the position of the pupil center in each iris photo may thereby be determined.
S203: Calculate the maximum displacement of the pupil center according to the position of the pupil center in each iris photo.
In a specific implementation, the distance from the pupil center in each iris photo to a preset eye-socket reference point may be calculated according to the position of the pupil center in each iris photo; the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point may then be calculated according to those distances; and the maximum displacement of the pupil center may be determined according to the distance differences of every two iris photos. The eye-socket reference point may be any position on the eye socket. Further, the eye-socket reference point may first be taken as the coordinate origin, the coordinates of the pupil center in each iris photo may be determined, the distance from the pupil center in each iris photo to the coordinate origin may then be calculated, the difference between these distances may be calculated for every two iris photos, and the largest difference may be selected as the maximum displacement of the pupil center.
S204: Determine an iris detection result according to the maximum displacement of the pupil center.
In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
S205: Extract iris texture features from the plurality of iris photos.
In a specific implementation, each iris photo is first decomposed into four sub-band images: horizontal high frequency (LH1), vertical high frequency (HL1), diagonal high frequency (HH1), and the low-frequency approximation (LL1). Multiple filters are then constructed, and an improved two-dimensional log-Gabor filtering algorithm is used to extract the iris texture features from the low-frequency approximation (LL1) sub-band image in both the radial and angular directions. The log-Gabor function g(x, y) is given by the expression shown in the original equation image (Figure PCTCN2015095640-appb-000001), where f is the center frequency of the filter, θ is the orientation of the filter, and δx², δy² are the standard deviations of the Gaussian function. The modified two-dimensional log-Gabor function is defined as h(x, y) = g(x, y)·exp[2πj(Ux + Vy)], where U and V are the components of the radial center frequency along the two axes. Finally, the iris texture features are extracted by convolving the processed iris photo with the filters, F_kj = I(x, y) ⊗ h_kj(x, y), where I(x, y) is the processed iris photo, ⊗ denotes the convolution operation, k denotes the k-th scale, j denotes the j-th orientation, and F_kj contains both magnitude and phase information.
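Working from the definitions above, a sketch of building one modified filter h(x, y) = g(x, y)·exp[2πj(Ux + Vy)] and taking the magnitude and phase of the response is shown below; the Gaussian envelope assumed for g(x, y) and all parameter values are illustrative, because the patent gives g(x, y) only as an equation image.

```python
# Hypothetical sketch of the modified filter h(x, y) = g(x, y) * exp(2*pi*j*(U*x + V*y))
# applied to a processed iris image. g(x, y) is assumed here to be a plain 2-D Gaussian
# envelope; the patent's exact log-Gabor expression is shown only as an image.
import numpy as np
from scipy.signal import fftconvolve

def modified_gabor_response(image, U, V, delta_x=4.0, delta_y=4.0, size=31):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    g = np.exp(-0.5 * ((x / delta_x) ** 2 + (y / delta_y) ** 2))   # assumed envelope g(x, y)
    h = g * np.exp(2j * np.pi * (U * x + V * y))                   # h(x, y) per the patent text
    f_kj = fftconvolve(image.astype(np.float64), h, mode="same")   # F_kj = I(x, y) convolved with h_kj
    return np.abs(f_kj), np.angle(f_kj)                            # magnitude and phase information
```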
S206: Compare the iris texture features with preset iris template information.
In a specific implementation, before the plurality of iris photos are acquired, photos of the user's eyes may be collected in advance, iris template information may be extracted from the collected eye photos, and the iris template information may be stored in the terminal.
S207: Identify the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
In a specific implementation, when the iris texture features are identical to the preset iris template information, the user is identified as the target user; when the iris texture features differ from the preset iris template information, the user is identified as an unauthorized user.
In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; finally, the iris detection result is determined according to the maximum displacement of the pupil center, and the identity of the user is then identified according to the iris detection result. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a living body iris detection terminal proposed by an embodiment of the present invention. As shown in the figure, the terminal in this embodiment of the present invention includes the following modules.
The photo acquisition module 301 is configured to acquire a plurality of iris photos.
In a specific implementation, multiple iris photos may be captured continuously within a preset time by a built-in camera, and the preset time may be 0.01 s or 0.02 s; alternatively, continuously captured iris photos may be obtained from another camera device, or multiple iris photos may be downloaded from network resources.
The position determination module 302 is configured to determine the position of the pupil center in each of the plurality of iris photos.
In a specific implementation, each iris photo is searched row by row and pixel by pixel for regions with a "black-white-black" structure, which are taken as pupil candidate regions, and the edge points of each pupil candidate region and the midpoint of the white segment in each row are recorded. The trusted pupil region is determined from the circularity and average gray level of each pupil candidate region. A random sample consensus (RANSAC) line fit is performed on the midpoints of the white segments of the trusted pupil region to obtain reliable midpoints and the edge points corresponding to them; a quadratic curve fit is performed on the edge points corresponding to the reliable midpoints to obtain the reliable edge points of the trusted pupil region; and an ellipse fit is performed on the reliable edge points of the trusted pupil region to obtain the pupil center.
Optionally, the eyeball in the iris photo may be modeled as a circle or an ellipse, the circle or ellipse may be detected using the Hough transform, and the position of the pupil center in each iris photo may thereby be determined.
The displacement calculation module 303 is configured to calculate the maximum displacement of the pupil center according to the position of the pupil center in each iris photo.
In a specific implementation, the displacement calculation module 303 may further include:
a distance calculation unit 401 configured to calculate, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point, where the eye-socket reference point may be any position on the eye socket;
a difference calculation unit 402 configured to calculate, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point; and
a maximum value determination unit 403 configured to determine the maximum displacement of the pupil center according to the differences between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point.
Further, the eye-socket reference point may first be taken as the coordinate origin, the coordinates of the pupil center in each iris photo may be determined, the distance from the pupil center in each iris photo to the coordinate origin may then be calculated, the difference between these distances may be calculated for every two iris photos, and the largest difference may be selected as the maximum displacement of the pupil center.
The result determination module 304 is configured to determine an iris detection result according to the maximum displacement of the pupil center.
In a specific implementation, when the maximum displacement of the pupil center is greater than a preset threshold, the iris photos are determined to be living iris photos; when the maximum displacement of the pupil center is not greater than the preset threshold, the iris photos are determined to be non-living iris photos and the user is prompted to acquire living iris photos.
Optionally, as shown in FIG. 3, the terminal in this embodiment of the present invention may further include the following modules.
The feature extraction module 305 is configured to extract iris texture features from the plurality of iris photos.
In a specific implementation, each iris photo is first decomposed into four sub-band images: horizontal high frequency (LH1), vertical high frequency (HL1), diagonal high frequency (HH1), and the low-frequency approximation (LL1). Multiple filters are then constructed, and an improved two-dimensional log-Gabor filtering algorithm is used to extract the iris texture features from the low-frequency approximation (LL1) sub-band image in both the radial and angular directions. The log-Gabor function g(x, y) is given by the expression shown in the original equation image (Figure PCTCN2015095640-appb-000004), where f is the center frequency of the filter, θ is the orientation of the filter, and δx², δy² are the standard deviations of the Gaussian function. The modified two-dimensional log-Gabor function is defined as h(x, y) = g(x, y)·exp[2πj(Ux + Vy)], where U and V are the components of the radial center frequency along the two axes. Finally, the iris texture features are extracted by convolving the processed iris photo with the filters, F_kj = I(x, y) ⊗ h_kj(x, y), where I(x, y) is the processed iris photo, ⊗ denotes the convolution operation, k denotes the k-th scale, j denotes the j-th orientation, and F_kj contains both magnitude and phase information.
The information comparison module 306 is configured to compare the iris texture features with preset iris template information.
In a specific implementation, before the plurality of iris photos are acquired, photos of the user's eyes may be collected in advance, iris template information may be extracted from the collected eye photos, and the iris template information may be stored in the terminal.
The identity recognition module 307 is configured to identify the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
In a specific implementation, when the iris texture features are identical to the preset iris template information, the user is identified as the target user; when the iris texture features differ from the preset iris template information, the user is identified as an unauthorized user.
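Purely as an organizational illustration (not the patent's implementation), the modules 301 to 307 and units 401 to 403 described above could be composed as in the following sketch; the hook functions, threshold value, and return strings are assumptions.

```python
# Hypothetical sketch of how the described modules might be composed in software.
# The hook functions (capture_fn, locate_pupil_fn, extract_features_fn) are assumed
# and injected, so the terminal logic stays independent of any specific algorithm.
import math

class LivingIrisTerminal:
    def __init__(self, capture_fn, locate_pupil_fn, extract_features_fn,
                 template, reference_point, threshold=3.0):
        self.capture_fn = capture_fn                      # photo acquisition module 301
        self.locate_pupil_fn = locate_pupil_fn            # position determination module 302
        self.extract_features_fn = extract_features_fn    # feature extraction module 305
        self.template = template                          # preset iris template information
        self.reference_point = reference_point            # preset eye-socket reference point
        self.threshold = threshold

    def max_displacement(self, centers):
        # displacement calculation module 303 (distance / difference / maximum units 401-403)
        rx, ry = self.reference_point
        dists = [math.hypot(x - rx, y - ry) for x, y in centers]
        return max(dists) - min(dists)

    def run(self):
        photos = self.capture_fn()                                 # module 301
        centers = [self.locate_pupil_fn(p) for p in photos]        # module 302
        if self.max_displacement(centers) <= self.threshold:       # modules 303 + 304
            return "non-living iris: please retry"
        features = self.extract_features_fn(photos)                # module 305
        # Comparison mirrors the patent's "identical" check; assumes comparable
        # feature objects (e.g. bytes or tuples).
        if features == self.template:                              # module 306
            return "target user"                                   # module 307
        return "unauthorized user"
```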
In this embodiment of the present invention, a plurality of iris photos are first acquired; the position of the pupil center in each of the plurality of iris photos is then determined; next, the maximum displacement of the pupil center is calculated according to the position of the pupil center in each iris photo; finally, the iris detection result is determined according to the maximum displacement of the pupil center, and the identity of the user is then identified according to the iris detection result. This improves the accuracy and efficiency of iris recognition and does no harm to the eyes.
It should be noted that, for brevity, each of the foregoing method embodiments is described as a series of action combinations. However, a person skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. A person skilled in the art should also understand that the embodiments described in this specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the foregoing embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The living body iris detection method and the related terminal provided in the embodiments of the present invention are described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the application scope according to the idea of the present invention. In conclusion, the content of this specification shall not be construed as a limitation on the present invention.

Claims (10)

  1. A living body iris detection method, characterized in that the method comprises:
    acquiring a plurality of iris photos;
    determining the position of the pupil center in each of the plurality of iris photos;
    calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and
    determining an iris detection result according to the maximum displacement of the pupil center.
  2. The method according to claim 1, characterized in that determining the iris detection result according to the maximum displacement of the pupil center comprises:
    when the maximum displacement of the pupil center is greater than a preset threshold, determining that the iris photos are living iris photos.
  3. The method according to claim 2, characterized in that, after determining the iris detection result according to the maximum displacement of the pupil center, the method further comprises:
    extracting iris texture features from the plurality of iris photos;
    comparing the iris texture features with preset iris template information; and
    identifying the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
  4. The method according to claim 1, characterized in that calculating the maximum displacement of the pupil center according to the position of the pupil center in each iris photo comprises:
    calculating, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point;
    calculating, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point; and
    determining the maximum displacement of the pupil center according to the differences between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point.
  5. The method according to claim 3, characterized in that identifying the identity of the user according to the comparison result between the iris texture features and the preset iris template information comprises:
    when the iris texture features are identical to the preset iris template information, identifying the user as the target user.
  6. A living body iris detection terminal, characterized in that the terminal comprises:
    a photo acquisition module configured to acquire a plurality of iris photos;
    a position determination module configured to determine the position of the pupil center in each of the plurality of iris photos;
    a displacement calculation module configured to calculate the maximum displacement of the pupil center according to the position of the pupil center in each iris photo; and
    a result determination module configured to determine an iris detection result according to the maximum displacement of the pupil center.
  7. The terminal according to claim 6, characterized in that the result determination module is specifically configured to:
    when the maximum displacement of the pupil center is greater than a preset threshold, determine that the iris photos are living iris photos.
  8. The terminal according to claim 7, characterized in that the terminal further comprises:
    a feature extraction module configured to extract iris texture features from the plurality of iris photos;
    an information comparison module configured to compare the iris texture features with preset iris template information; and
    an identity recognition module configured to identify the identity of the user according to the comparison result between the iris texture features and the preset iris template information.
  9. The terminal according to claim 6, characterized in that the displacement calculation module comprises:
    a distance calculation unit configured to calculate, according to the position of the pupil center in each iris photo, the distance from the pupil center in each iris photo to a preset eye-socket reference point;
    a difference calculation unit configured to calculate, according to the distance from the pupil center in each iris photo to the preset eye-socket reference point, the difference between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point; and
    a maximum value determination unit configured to determine the maximum displacement of the pupil center according to the differences between the distances from the pupil centers in every two of the plurality of photos to the preset eye-socket reference point.
  10. The terminal according to claim 8, characterized in that the identity recognition module is specifically configured to:
    when the iris texture features are identical to the preset iris template information, identify the user as the target user.
PCT/CN2015/095640 2015-06-30 2015-11-26 Living body iris detection method and terminal WO2017000493A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510385830.5 2015-06-30
CN201510385830.5A CN105550625B (zh) 2015-06-30 2015-06-30 Living body iris detection method and terminal

Publications (1)

Publication Number Publication Date
WO2017000493A1 true WO2017000493A1 (zh) 2017-01-05

Family

ID=55829809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/095640 WO2017000493A1 (zh) 2015-06-30 2015-11-26 一种活体虹膜检测方法及终端

Country Status (2)

Country Link
CN (1) CN105550625B (zh)
WO (1) WO2017000493A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633217A (zh) * 2020-12-30 2021-04-09 苏州金瑞阳信息科技有限责任公司 Face recognition liveness detection method that calculates the gaze direction based on a three-dimensional eyeball model

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705507B (zh) 2016-06-30 2022-07-08 北京七鑫易维信息技术有限公司 Identity recognition method and apparatus
KR20180014627A (ko) * 2016-08-01 2018-02-09 삼성전자주식회사 Method for controlling the operation of an iris sensor and electronic device therefor
CN106355139B (zh) * 2016-08-22 2019-08-30 厦门中控智慧信息技术有限公司 Face anti-spoofing method and apparatus
CN106650703A (zh) * 2017-01-06 2017-05-10 厦门中控生物识别信息技术有限公司 Palm anti-spoofing method and apparatus
CN107437070A (zh) * 2017-07-17 2017-12-05 广东欧珀移动通信有限公司 Iris liveness recognition method and related product
CN107451556B (zh) * 2017-07-28 2021-02-02 Oppo广东移动通信有限公司 Detection method and related product
CN112298103A (zh) * 2019-07-31 2021-02-02 比亚迪股份有限公司 Vehicle control method and apparatus, storage medium, electronic device, and vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009107704A1 (ja) * 2008-02-26 2009-09-03 沖電気工業株式会社 Iris authentication device
CN101833646A (zh) * 2009-03-11 2010-09-15 中国科学院自动化研究所 Iris liveness detection method
CN103955717A (zh) * 2014-05-13 2014-07-30 第三眼(天津)生物识别科技有限公司 Method for iris liveness detection

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1304114A (zh) * 1999-12-13 2001-07-18 中国科学院自动化研究所 Identity authentication fusion method based on multiple biometric features
CN1330275C (zh) * 2003-12-07 2007-08-08 倪蔚民 Biometric method based on iris texture analysis
PL380581A1 (pl) * 2006-09-07 2008-03-17 Naukowa I Akademicka Sieć Komputerowa Method for testing the liveness of an eye and device for testing the liveness of an eye
CN100392669C (zh) * 2006-09-21 2008-06-04 杭州电子科技大学 Liveness detection method and device in iris recognition
CN102043954B (zh) * 2011-01-30 2013-09-04 哈尔滨工业大学 Fast and robust iris recognition method based on correlation function matching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009107704A1 (ja) * 2008-02-26 2009-09-03 沖電気工業株式会社 Iris authentication device
CN101833646A (zh) * 2009-03-11 2010-09-15 中国科学院自动化研究所 Iris liveness detection method
CN103955717A (zh) * 2014-05-13 2014-07-30 第三眼(天津)生物识别科技有限公司 Method for iris liveness detection

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112633217A (zh) * 2020-12-30 2021-04-09 苏州金瑞阳信息科技有限责任公司 Face recognition liveness detection method that calculates the gaze direction based on a three-dimensional eyeball model

Also Published As

Publication number Publication date
CN105550625A (zh) 2016-05-04
CN105550625B (zh) 2018-12-25

Similar Documents

Publication Publication Date Title
WO2017000493A1 (zh) Living body iris detection method and terminal
US9710691B1 (en) Touchless fingerprint matching systems and methods
US11682232B2 (en) Device and method with image matching
KR101322168B1 (ko) 실시간 얼굴 인식 장치
TW202006602A (zh) Three-dimensional face liveness detection method, and face authentication and recognition method and apparatus
Crihalmeanu et al. Enhancement and registration schemes for matching conjunctival vasculature
WO2013087026A1 (zh) Iris positioning method and positioning apparatus
WO2015067084A1 (zh) Human eye positioning method and apparatus
JP2016532945A (ja) Feature extraction, matching, and template update for biometric authentication
KR20100049407A (ko) Finger vein authentication method and apparatus using mean curvature
CN110705353A (zh) Method and apparatus for recognizing occluded faces based on an attention mechanism
CN110119724A (zh) Finger vein recognition method
Asmuni et al. An improved multiscale retinex algorithm for motion-blurred iris images to minimize the intra-individual variations
WO2017054276A1 (zh) Biometric identity recognition method and apparatus
CN107145820B (zh) Eye localization method based on HOG features and the FAST algorithm
WO2015131710A1 (zh) Human eye positioning method and apparatus
WO2016004706A1 (zh) Method for improving iris recognition performance in non-ideal environments
Szczepański et al. Pupil and iris detection algorithm for near-infrared capture devices
Karakaya et al. An iris segmentation algorithm based on edge orientation for off-angle iris recognition
KR100794361B1 (ko) Eyelid detection and eyelash interpolation method for improving iris recognition performance
Roy et al. Iris segmentation using game theory
Aydi et al. A fast and accurate eyelids and eyelashes detection approach for iris segmentation
CN113822927A (zh) 一种适用弱质量图像的人脸检测方法、装置、介质及设备
Leo et al. Highly usable and accurate iris segmentation
KR20130036511A (ko) Method and apparatus for determining the validity of an object region, and object extraction method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15897020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 22.05.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 15897020

Country of ref document: EP

Kind code of ref document: A1