WO2013177878A1 - Sift feature bag based bovine iris image recognition method - Google Patents

Sift feature bag based bovine iris image recognition method

Info

Publication number
WO2013177878A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sift feature
sift
iris
bull
Prior art date
Application number
PCT/CN2012/082165
Other languages
English (en)
French (fr)
Inventor
赵林度
孙胜楠
杨世才
宋阳
Original Assignee
东南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东南大学 filed Critical 东南大学
Priority to EP12877671.3A priority Critical patent/EP2858007B1/en
Publication of WO2013177878A1 publication Critical patent/WO2013177878A1/zh

Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06V  IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00  Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10  Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18  Eye characteristics, e.g. of the iris
    • G06V40/193  Preprocessing; Feature extraction
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06V  IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00  Arrangements for image or video recognition or understanding
    • G06V10/40  Extraction of image or video features
    • G06V10/46  Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462  Salient features, e.g. scale invariant feature transforms [SIFT]
    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06V  IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00  Arrangements for image or video recognition or understanding
    • G06V10/70  Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74  Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75  Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755  Deformable models or variational models, e.g. snakes or active contours

Definitions

  • Iris recognition is considered one of the most reliable biometric technologies and can be widely applied to identity verification, mine access control, criminal tracking, and similar fields.
  • Compared with identification methods that use fingerprints, faces, gaits, or other human biometrics, iris-based identification offers higher accuracy and better anti-counterfeiting performance.
  • Traditional animal identification mainly relies on artificial labels such as ear tags, which easily fall off or get lost, causing considerable inconvenience for management.
  • Compared with traditional labeling, identifying animals with iris technology offers good security and strong anti-counterfeiting characteristics.
  • However, existing animal iris image recognition technology has at least two problems. First, in practical applications the animal to be identified cannot actively cooperate the way a human can, so the acquired image may exhibit rotation, offset, partial occlusion, or inconsistent scale. Second, existing iris localization techniques struggle to obtain an accurate outer iris edge, which in turn degrades the quality of the normalized iris image. Under these circumstances, the prior art has difficulty producing accurate recognition results, which limits the application of animal iris recognition in food traceability systems.
  • The object of the present invention is to address the problems of prior-art bull's eye iris recognition technology by providing a method for accurately identifying imperfect bull's eye iris images.
  • The bull's eye iris image recognition method based on the SIFT feature bag includes the following steps: (1) if the recognition mechanism is not trained, perform step 2, otherwise go to step 10; (2) obtain a training image set for acquiring the SIFT feature bag; (3) obtain the optimal SIFT feature bag from the training image set; (4) obtain the target bull's eye iris library; (5) preprocess each target bull's eye iris image; (6) locate the inner iris edge in each target bull's eye iris image; (7) obtain the SIFT feature points of each target bull's eye iris image with the SIFT method; (8) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature points of each target bull's eye iris image; (9) compare the SIFT feature points of each target bull's eye iris image with the optimal SIFT feature bag to obtain the feature histogram of each target bull's eye iris image; (10) receive the image to be identified; (11) preprocess the image to be identified; (12) locate the inner iris edge in the image to be identified; (13) obtain the SIFT feature points of the image to be identified with the SIFT method; (14) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature points of the image to be identified; (15) compare the SIFT feature points of the image to be identified with the optimal SIFT feature bag to obtain the feature histogram of the image to be identified; (16) calculate the histogram distance between the image to be identified and each image in the target bull's eye iris library; (17) take the object corresponding to the target bull's eye iris image with the smallest histogram distance as the recognition result; (18) end.
  • The step of obtaining the optimal SIFT feature bag is: (1) obtain a training image set; (2) preprocess each training bull's eye iris image; (3) locate the inner iris edge in each training bull's eye iris image; (4) obtain the SIFT feature points of each training bull's eye iris image with the SIFT method; (5) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature point set of each training image; (6) merge the SIFT feature point sets of all training images to obtain the SIFT feature point space of the training samples; (7) perform cluster analysis on the SIFT feature point space with the K nearest neighbor method to obtain K classes and the corresponding feature means; (8) assign each feature mean a label identifying the class it represents, yielding the optimal SIFT feature bag consisting of K labeled feature means; (9) end.
  • Because precise localization of the outer iris edge and iris image normalization are not required, the method of the invention can recognize the bull's eye iris image to be identified relatively accurately even when it exhibits rotation, offset, partial occlusion, or inconsistent scale, improving the accuracy and reliability of iris image recognition.
  • FIG. 1 is a basic block diagram of the present invention;
  • FIG. 2 is a detailed working flow chart of the method of the present invention;
  • FIG. 3 is a working flow chart of obtaining the SIFT feature bag in the present invention.
  • As shown in FIG. 1, a digital grayscale image of the bull's eye iris is acquired by the iris image acquisition device, and the image is then preprocessed according to visual inspection to obtain its effective region.
  • The set of valid SIFT feature points of the bull's eye iris image is obtained through inner iris edge localization and SIFT feature extraction. The set is then processed by the recognition mechanism, which typically involves acquiring a feature histogram of the iris image and comparing it with the existing images in the target bull's eye iris library.
  • Step 10 is the initial action.
  • Step 11 determines whether the recognition mechanism has been trained; if so, step 21 is performed; otherwise step 12 is performed.
  • Step 12 determines whether the optimal SIFT feature bag has been obtained; if so, step 15 is performed; otherwise step 13 is performed.
  • Step 13 acquires a training image set used to obtain the optimal SIFT feature bag; the set contains at least two images per bull's eye.
  • Step 14 selects the optimal SIFT feature bag from the training image set; the bag contains K SIFT feature means, where K is a user-specified integer, for example 1000.
  • This selection process uses an algorithm specifically designed in the present invention; the step is described in detail below in conjunction with FIG. 3.
  • After the optimal SIFT feature bag has been selected, step 15 acquires the target bull's eye iris library, which contains at least one image for each bull's eye.
  • Step 16 preprocesses the iris image to obtain its effective area: by visual inspection, the largest rectangular region covering the iris area is selected as the effective area of the image.
  • Step 17 locates the inner iris edge within the effective area of the image, using the active contour method.
  • Step 18 obtains SIFT feature points of the image using the SIFT method within the effective area of the image.
  • Step 19 removes the SIFT feature points that fall inside the inner edge of the iris to obtain a set of valid SIFT feature points.
  • Step 20 compares each feature in the set of valid SIFT feature points with the feature means in the optimal SIFT feature bag using the approximate nearest neighbor (ANN) method, assigns each valid SIFT feature point the label of the corresponding feature mean, and counts the labels, thereby obtaining the feature histogram of the target bull's eye iris image.
  • Step 21 receives the image to be identified. Steps 22, 23, 24, 25, and 26 use the same methods as steps 16, 17, 18, 19, and 20 to perform preprocessing, inner iris edge localization, and SIFT feature extraction, obtain the set of valid SIFT feature points, and finally obtain the feature histogram of the image.
  • Step 27 uses the approximate nearest neighbor method to compare the histogram distance between the feature histogram of the image to be identified and that of each image in the target bull's eye iris library.
  • Step 28 outputs the result: the object corresponding to the target iris image with the smallest histogram distance is the recognition result.
  • Step 29 is the end state.
  • Steps 13 and 14 of FIG. 2 are dedicated to obtaining the optimal SIFT feature bag; once selected, the bag can be used directly with different target bull's eye iris image libraries.
  • FIG. 3 illustrates step 14 of FIG. 2 in detail. The purpose of this step is to select, from the training image set, the optimal SIFT feature bag with K feature means, where K is a user-specified integer.
  • Step 1400 of FIG. 3 is the initial state. Steps 1401, 1402, 1403, and 1404 perform preprocessing, inner iris edge localization, and SIFT feature extraction on each training image in the same manner as steps 16, 17, 18, and 19 of FIG. 2, finally yielding the set of valid SIFT feature points corresponding to each training image.
  • Step 1405 combines the SIFT feature point sets of all training images to obtain the SIFT feature point space of the training samples.
  • Step 1406 performs cluster analysis on the SIFT feature point space of the training samples using the K nearest neighbor (KNN) method, obtaining K classes and the corresponding K feature means.
  • K is specified by the user, for example 1000. In general, depending on the size of the SIFT feature point space, K lies between a few hundred and a few thousand.
  • Step 1407 assigns each feature mean a label identifying the class it represents, so that the optimal SIFT feature bag consisting of K labeled feature means is obtained.
  • Step 1408 is the end state of FIG. 3.
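The label-and-count operation of steps 20 and 26 above is standard bag-of-features quantization. A minimal sketch under stated assumptions: exact nearest-neighbor search stands in for the ANN search named in the patent, and the function name, toy 2-D descriptors, and K=3 feature means are all illustrative (real SIFT descriptors are 128-dimensional):

```python
import numpy as np

def feature_histogram(descriptors, feature_means):
    """Assign each valid SIFT descriptor the label of its nearest
    feature mean in the feature bag, then count the labels."""
    hist = np.zeros(len(feature_means), dtype=int)
    for d in descriptors:
        # exact nearest neighbour stands in for the patent's ANN search
        dists = np.linalg.norm(feature_means - d, axis=1)
        hist[int(np.argmin(dists))] += 1
    return hist

# toy feature bag with K=3 means in a 2-D descriptor space
means = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
descs = np.array([[0.5, 0.2], [9.7, 0.1], [0.1, 9.8], [0.2, 0.3]])
print(feature_histogram(descs, means).tolist())  # → [2, 1, 1]
```

Because every image, whatever its size or rotation, is reduced to a fixed-length K-bin histogram, the subsequent comparison step needs no outer-edge localization or normalization.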

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses a bull's eye iris image recognition method based on a SIFT feature bag, comprising the following steps: preprocessing the iris image to obtain the effective region; obtaining feature points with the SIFT method; locating the inner edge with the active contour method; removing the feature points inside the inner edge to obtain the set of valid SIFT feature points; obtaining a feature histogram by comparison with the optimal SIFT feature bag; and calculating the histogram distance between the image to be identified and each image in the target iris library, taking the object corresponding to the target bull's eye iris image with the smallest histogram distance as the recognition result. The method of the present invention can recognize the bull's eye iris image to be identified relatively accurately even in the presence of rotation, offset, partial occlusion, or inconsistent scale, thereby helping to improve the accuracy and reliability of bull's eye iris image recognition and promoting the application of iris recognition methods in food traceability systems.

Description

Bull's Eye Iris Image Recognition Method Based on a SIFT Feature Bag

Technical Field
The present invention relates to a bull's eye iris image recognition method, and in particular to a recognition method suitable for imperfect bull's eye iris images that exhibit rotation, offset, partial occlusion, or inconsistent scale.
Background Art
Iris recognition is considered one of the most reliable biometric technologies and can be widely applied to identity document verification, mine access control, criminal tracking, and similar fields. Compared with identification methods that use fingerprints, faces, gaits, or other human biometrics, iris-based identification offers higher accuracy and better anti-counterfeiting performance. Traditional animal identification mainly relies on artificial labels such as ear tags, which easily fall off or get lost, causing considerable inconvenience for management. Compared with traditional labeling methods, identifying animals with iris technology offers good security and strong anti-counterfeiting characteristics.
However, compared with human iris recognition, existing animal iris image recognition technology has at least two problems. First, in practical applications the animal to be identified cannot actively cooperate the way a human can, so the acquired image may exhibit rotation, offset, partial occlusion, or inconsistent scale. Second, existing iris localization techniques struggle to obtain an accurate outer iris edge, which in turn degrades the quality of the normalized iris image. Under these circumstances, the prior art has difficulty producing accurate recognition results, which limits the application of animal iris recognition in food traceability systems.
Technical Problem
The object of the present invention is to address the problems of existing bull's eye iris recognition technology by providing a method capable of accurately recognizing imperfect bull's eye iris images.
Technical Solution
The technical solution adopted by the present invention is as follows. Before describing the method in detail, the relevant definitions are given first: (a) imperfect bull's eye iris image: a bull's eye iris image that exhibits rotation, offset, partial occlusion, or inconsistent scale, but whose inner edge contour is complete; (b) target bull's eye iris library: an image library storing bull's eye iris images of known identity, against which the image to be identified is compared to determine identity; (c) SIFT feature: an image feature descriptor obtained by the Scale Invariant Feature Transform (SIFT) method.
A bull's eye iris image recognition method based on a SIFT feature bag includes the following steps: (1) if the recognition mechanism is not trained, perform step 2, otherwise go to step 10; (2) obtain a training image set for acquiring the SIFT feature bag; (3) obtain the optimal SIFT feature bag from the training image set; (4) obtain the target bull's eye iris library; (5) preprocess each target bull's eye iris image; (6) locate the inner iris edge in each target bull's eye iris image; (7) obtain the SIFT feature points of each target bull's eye iris image with the SIFT method; (8) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature points of each target bull's eye iris image; (9) compare the SIFT feature points of each target bull's eye iris image with the optimal SIFT feature bag to obtain the feature histogram of each target bull's eye iris image; (10) receive the image to be identified; (11) preprocess the image to be identified; (12) locate the inner iris edge in the image to be identified; (13) obtain the SIFT feature points of the image to be identified with the SIFT method; (14) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature points of the image to be identified; (15) compare the SIFT feature points of the image to be identified with the optimal SIFT feature bag to obtain the feature histogram of the image to be identified; (16) calculate the histogram distance between the image to be identified and each image in the target bull's eye iris library; (17) take the object corresponding to the target bull's eye iris image with the smallest histogram distance as the recognition result; (18) end.
Preferably, the step of obtaining the optimal SIFT feature bag is: (1) obtain a training image set; (2) preprocess each training bull's eye iris image; (3) locate the inner iris edge in each training bull's eye iris image; (4) obtain the SIFT feature points of each training bull's eye iris image with the SIFT method; (5) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature point set of each training image; (6) merge the SIFT feature point sets of all training images to obtain the SIFT feature point space of the training samples; (7) perform cluster analysis on the SIFT feature point space of the training samples with the K nearest neighbor method to obtain K classes and the corresponding feature means; (8) assign each feature mean a label identifying the class it represents to obtain the optimal SIFT feature bag consisting of K labeled feature means; (9) end.
Beneficial Effects
Because precise localization of the outer iris edge and normalization of the iris image are not required, the method of the present invention can recognize the bull's eye iris image to be identified relatively accurately even when it exhibits rotation, offset, partial occlusion, or inconsistent scale, helping to improve the accuracy and reliability of bull's eye iris image recognition.
Brief Description of the Drawings
FIG. 1 is a basic block diagram of the present invention;
FIG. 2 is a detailed working flow chart of the method of the present invention;
FIG. 3 is a working flow chart of obtaining the SIFT feature bag in the present invention.
Embodiments of the Present Invention
The present invention is further described below with reference to the drawings and specific embodiments:
As shown in FIG. 1, a digital grayscale image of the bull's eye iris is acquired by an iris image acquisition device, and the image is then preprocessed according to visual inspection to obtain its effective region. The set of valid SIFT feature points of the bull's eye iris image is obtained through inner iris edge localization and SIFT feature extraction. The set is then processed by the recognition mechanism, which typically involves acquiring the feature histogram of the iris image and comparing it with the existing images in the target bull's eye iris library.
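Removing the feature points that fall inside the inner iris edge only requires a point-in-region test against the localized contour. A minimal sketch under stated assumptions: the active-contour result is approximated here by a circle with a given centre and radius, and the function name and keypoint coordinates are hypothetical:

```python
import numpy as np

def remove_inner_points(keypoints, centre, radius):
    """Keep only keypoints lying outside the inner iris edge.
    The active-contour localization result is simplified to a circle
    of the given centre and radius, purely for illustration."""
    centre = np.asarray(centre, dtype=float)
    return [kp for kp in keypoints
            if np.linalg.norm(np.asarray(kp, dtype=float) - centre) > radius]

kps = [(50, 50), (52, 48), (90, 90), (10, 95)]   # hypothetical keypoint coordinates
valid = remove_inner_points(kps, centre=(50, 50), radius=20)
print(valid)  # → [(90, 90), (10, 95)]
```

Discarding pupil-interior points matters because specular reflections inside the pupil generate SIFT keypoints that carry no iris texture information.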
The detailed workflow of the present invention is shown in FIG. 2. Step 10 is the initial action. Step 11 determines whether the recognition mechanism has been trained; if so, step 21 is performed; otherwise step 12 is performed. Step 12 determines whether the optimal SIFT feature bag has been obtained; if so, step 15 is performed; otherwise step 13 is performed. Step 13 acquires a training image set used to obtain the optimal SIFT feature bag; the set contains at least two images per bull's eye. Step 14 selects the optimal SIFT feature bag from the training image set; the bag contains K SIFT feature means, where K is a user-specified integer, for example 1000. This selection process uses an algorithm specifically designed in the present invention, and the step is described in detail below in conjunction with FIG. 3.
After the optimal SIFT feature bag has been selected, step 15 acquires the target bull's eye iris library, which contains at least one image for each bull's eye. Step 16 preprocesses the iris image to obtain its effective area: by visual inspection, the largest rectangular region covering the iris area is selected as the effective area of the image. Step 17 locates the inner edge within the effective area of the image, using the active contour method. Step 18 obtains the SIFT feature points of the image within its effective area using the SIFT method. Step 19 removes the SIFT feature points that fall inside the inner iris edge, obtaining the set of valid SIFT feature points. Step 20 compares each feature in the set of valid SIFT feature points with the feature means in the optimal SIFT feature bag using the approximate nearest neighbor (ANN) method, assigns each valid SIFT feature point the label of the corresponding feature mean, and counts the labels, thereby obtaining the feature histogram of the target bull's eye iris image. Step 21 receives the image to be identified. Steps 22, 23, 24, 25, and 26 use the same methods as steps 16, 17, 18, 19, and 20 to preprocess the image to be identified, locate the inner iris edge, extract SIFT features, obtain the set of valid SIFT feature points, and finally obtain the feature histogram of the image. Step 27 uses the approximate nearest neighbor method to compare the histogram distance between the feature histogram of the image to be identified and that of each image in the target bull's eye iris library. Step 28 outputs the result: the object corresponding to the target iris image with the smallest histogram distance is the recognition result. Step 29 is the end state.
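The final recognition decision reduces to a nearest-histogram search over the target library. A minimal sketch under stated assumptions: the patent does not fix a particular histogram distance, so Euclidean distance is used here as an illustrative choice, and the function name, identity labels, and toy histograms are hypothetical:

```python
import numpy as np

def recognise(query_hist, library):
    """Return the identity whose stored feature histogram is closest
    to the query histogram; the smallest distance wins."""
    query = np.asarray(query_hist, dtype=float)
    best_id, best_dist = None, float("inf")
    for identity, hist in library.items():
        # Euclidean distance is an illustrative stand-in for the
        # histogram distance of the patent
        d = np.linalg.norm(query - np.asarray(hist, dtype=float))
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id

library = {"cow_A": [5, 1, 0, 2], "cow_B": [0, 4, 3, 1]}  # hypothetical target library
print(recognise([4, 1, 1, 2], library))  # → cow_A
```

In practice the histograms would have K bins (several hundred to a few thousand) and the library one or more histograms per enrolled animal.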
It is worth pointing out that steps 13 and 14 of FIG. 2 are dedicated to obtaining the optimal SIFT feature bag; once the bag has been selected, it can be used directly with different target bull's eye iris image libraries. FIG. 3 illustrates step 14 of FIG. 2 in detail; the purpose of this step is to select, from the training image set, the optimal SIFT feature bag with K feature means, where K is a user-specified integer. Step 1400 of FIG. 3 is the initial state. Steps 1401, 1402, 1403, and 1404 perform preprocessing, inner iris edge localization, and SIFT feature extraction on each training image in the same manner as steps 16, 17, 18, and 19 of FIG. 2, finally yielding the set of valid SIFT feature points corresponding to each training image. Step 1405 merges the SIFT feature point sets of all training images to obtain the SIFT feature point space of the training samples. Step 1406 performs cluster analysis on the SIFT feature point space of the training samples using the K nearest neighbor (KNN) method, obtaining K classes and the corresponding K feature means. K is specified by the user, for example 1000; in general, depending on the size of the SIFT feature point space, K lies between a few hundred and a few thousand. Step 1407 assigns each feature mean a label identifying the class it represents, so that the optimal SIFT feature bag consisting of K labeled feature means is obtained. Finally, step 1408 is the end state of FIG. 3.
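The clustering that turns the merged descriptor space into K labeled feature means can be sketched as follows. The patent names a K-nearest-neighbor scheme for this step; plain k-means (Lloyd's algorithm) is substituted here because it likewise produces K classes and their means, and the function name, toy 2-D descriptor space, and deterministic initialization are all illustrative assumptions:

```python
import numpy as np

def build_feature_bag(descriptor_space, k, iters=20):
    """Cluster the merged SIFT descriptor space into K classes and
    return the K labelled feature means (label -> mean)."""
    X = np.asarray(descriptor_space, dtype=float)
    means = X[:k].copy()  # deterministic initialisation, for reproducibility of the sketch
    for _ in range(iters):
        # label every descriptor with the index of its nearest current mean
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                means[j] = X[labels == j].mean(axis=0)
    return {j: means[j] for j in range(k)}

# toy descriptor space with two obvious clusters (K would be several
# hundred to a few thousand for real 128-dimensional SIFT descriptors)
space = [[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [10, 11]]
bag = build_feature_bag(space, k=2)
print(sorted(np.round(m).tolist() for m in bag.values()))  # → [[0.0, 0.0], [10.0, 10.0]]
```

Because the bag is built once from training images and then reused, the clustering cost is paid offline, while enrollment and recognition only perform the cheaper histogram assignment.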
It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also be regarded as falling within the protection scope of the present invention. Components not explicitly specified in this embodiment can be implemented with existing technology.

Claims (2)

  1. A bull's eye iris image recognition method based on a SIFT feature bag, characterized in that the method comprises the following steps:
    (1) if the recognition mechanism is not trained, perform step 2, otherwise go to step 10;
    (2) obtain a training image set for acquiring the SIFT feature bag;
    (3) obtain the optimal SIFT feature bag from the training image set;
    (4) obtain the target bull's eye iris library;
    (5) preprocess each target bull's eye iris image;
    (6) locate the inner iris edge in each target bull's eye iris image;
    (7) obtain the SIFT feature points of each target bull's eye iris image with the SIFT method;
    (8) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature point set of each target bull's eye iris image;
    (9) compare the SIFT feature points of each target bull's eye iris image with the optimal SIFT feature bag to obtain the feature histogram of the target bull's eye iris image;
    (10) receive the image to be identified;
    (11) preprocess the image to be identified;
    (12) locate the inner iris edge in the image to be identified;
    (13) obtain the SIFT feature points of the image to be identified with the SIFT method;
    (14) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature point set of the image to be identified;
    (15) compare the SIFT feature points of the image to be identified with the optimal SIFT feature bag to obtain the feature histogram of the image to be identified;
    (16) calculate the histogram distance between the image to be identified and each image in the target bull's eye iris library;
    (17) take the object corresponding to the target bull's eye iris image with the smallest histogram distance as the recognition result;
    (18) end.
  2. The bull's eye iris image recognition method based on a SIFT feature bag according to claim 1, characterized in that the step of obtaining the optimal SIFT feature bag is:
    (1) obtain a training image set;
    (2) preprocess each training bull's eye iris image;
    (3) locate the inner iris edge in each training bull's eye iris image;
    (4) obtain the SIFT feature points of each training bull's eye iris image with the SIFT method;
    (5) remove the SIFT feature points inside the inner iris edge to obtain the valid SIFT feature point set of each training image;
    (6) merge the SIFT feature point sets of all training images to obtain the SIFT feature point space of the training samples;
    (7) perform cluster analysis on the SIFT feature point space of the training samples with the K nearest neighbor method to obtain K classes and the corresponding feature means;
    (8) assign each feature mean a label identifying the class it represents to obtain the optimal SIFT feature bag consisting of K labeled feature means;
    (9) end.
PCT/CN2012/082165 2012-05-31 2012-09-27 Sift feature bag based bovine iris image recognition method WO2013177878A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP12877671.3A EP2858007B1 (en) 2012-05-31 2012-09-27 Sift feature bag based bovine iris image recognition method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2012101770230A CN102693421B (zh) 2012-05-31 2012-05-31 Sift feature bag based bovine iris image recognition method
CN201210177023.0 2012-05-31

Publications (1)

Publication Number Publication Date
WO2013177878A1 true WO2013177878A1 (zh) 2013-12-05

Family

ID=46858839

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/082165 WO2013177878A1 (zh) 2012-05-31 2012-09-27 Sift feature bag based bovine iris image recognition method

Country Status (3)

Country Link
EP (1) EP2858007B1 (zh)
CN (1) CN102693421B (zh)
WO (1) WO2013177878A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751143A * 2015-04-02 2015-07-01 北京中盾安全技术开发公司 Deep-learning-based identity verification system and method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693421B (zh) * 2012-05-31 2013-12-04 东南大学 Sift feature bag based bovine iris image recognition method
CN103034860A (zh) * 2012-12-14 2013-04-10 南京思创信息技术有限公司 SIFT-feature-based illegal building detection method
CN103336835B (zh) * 2013-07-12 2017-02-08 西安电子科技大学 Image retrieval method based on a weighted color-SIFT feature dictionary
TWI503760B (zh) * 2014-03-18 2015-10-11 Univ Yuan Ze Image description and image recognition method
CN105550633B (zh) 2015-10-30 2018-12-11 小米科技有限责任公司 Region identification method and device
CN108121972A (zh) * 2017-12-25 2018-06-05 北京航空航天大学 Target recognition method under partial occlusion conditions
CN113536968B (zh) * 2021-06-25 2022-08-16 天津中科智能识别产业技术研究院有限公司 Method for automatically obtaining the boundary coordinates of the inner and outer iris circles
CN113657231B (zh) * 2021-08-09 2024-05-07 广州中科智云科技有限公司 Image recognition method and device based on a multi-rotor unmanned aerial vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001035321A1 (en) * 1999-11-09 2001-05-17 Iridian Technologies, Inc. System and method of animal identification and animal transaction authorization using iris patterns
CN101447025A * 2008-12-30 2009-06-03 东南大学 Large animal iris recognition method
CN102693421A (zh) * 2012-05-31 2012-09-26 东南大学 Sift feature bag based bovine iris image recognition method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100453943B1 (ko) * 2001-12-03 2004-10-20 주식회사 세넥스테크놀로지 Iris image processing and recognition method and system for personal identification
KR20050025927A (ko) * 2003-09-08 2005-03-14 유웅덕 Pupil detection and shape descriptor extraction method for iris recognition, iris feature extraction apparatus and method using the same, and iris recognition system and method
CA2736609C (en) * 2008-02-14 2016-09-20 Iristrac, Llc System and method for animal identification using iris images

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001035321A1 (en) * 1999-11-09 2001-05-17 Iridian Technologies, Inc. System and method of animal identification and animal transaction authorization using iris patterns
CN101447025A * 2008-12-30 2009-06-03 东南大学 Large animal iris recognition method
CN102693421A (zh) * 2012-05-31 2012-09-26 东南大学 Sift feature bag based bovine iris image recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BELCHER, G. ET AL.: "Region-based SIFT approach to iris recognition.", OPTICS AND LASERS IN ENGINEERING., 28 August 2008 (2008-08-28), pages 139 - 147, XP025693652 *
See also references of EP2858007A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751143A * 2015-04-02 2015-07-01 北京中盾安全技术开发公司 Deep-learning-based identity verification system and method

Also Published As

Publication number Publication date
EP2858007A4 (en) 2016-04-06
CN102693421B (zh) 2013-12-04
CN102693421A (zh) 2012-09-26
EP2858007A1 (en) 2015-04-08
EP2858007B1 (en) 2018-08-15

Similar Documents

Publication Publication Date Title
WO2013177878A1 (zh) Sift feature bag based bovine iris image recognition method
Tharwat et al. Cattle identification using muzzle print images based on texture features approach
Awad From classical methods to animal biometrics: A review on cattle identification and tracking
Li et al. Automatic individual identification of Holstein dairy cows using tailhead images
CN109145742B (zh) 一种行人识别方法及系统
US8705806B2 (en) Object identification apparatus and control method thereof
US7817826B2 (en) Apparatus and method for partial component facial recognition
WO2019033525A1 (zh) Au特征识别方法、装置及存储介质
Prakash et al. An efficient ear localization technique
US11594074B2 (en) Continuously evolving and interactive Disguised Face Identification (DFI) with facial key points using ScatterNet Hybrid Deep Learning (SHDL) network
Fuchs et al. Computational pathology analysis of tissue microarrays predicts survival of renal clear cell carcinoma patients
CN103218610B (zh) 狗脸检测器的形成方法和狗脸检测方法
CN101408929A (zh) 一种用于人脸识别系统的多模板人脸注册方法和装置
CN112101076A (zh) 识别猪只的方法和装置
CN112668374A (zh) 图像处理方法、装置、重识别网络的训练方法及电子设备
WO2013075295A1 (zh) 低分辨率视频的服装识别方法及系统
CN111931548A (zh) 人脸识别系统、建立人脸识别数据的方法及人脸识别方法
CN111444817B (zh) 一种人物图像识别方法、装置、电子设备和存储介质
WO2017054276A1 (zh) 一种生物特征身份识别方法及装置
CN113449676B (zh) 一种基于双路互促进解纠缠学习的行人重识别方法
Hrkac et al. Tattoo detection for soft biometric de-identification based on convolutional neural networks
EP3380990A1 (en) Efficient unconstrained stroke detector
Ramesh et al. Eidetic recognition of cattle using keypoint alignment
Patravali et al. Skin segmentation using YCBCR and RGB color models
CN108537213A (zh) 增强虹膜识别精度的系统和方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12877671

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012877671

Country of ref document: EP