WO2017080410A1 - Method and apparatus for identifying a pupil in an image - Google Patents

Method and apparatus for identifying a pupil in an image

Info

Publication number: WO2017080410A1
Authority: WO (WIPO PCT)
Prior art keywords: image, pixel, connected graph, graph, edge
Application number: PCT/CN2016/104734
Other languages: English (en), French (fr)
Inventors: FENG Liang (冯亮), CAI Zihao (蔡子豪), YIN Yawei (尹亚伟)
Original Assignee: China UnionPay Co., Ltd. (中国银联股份有限公司)
Priority date: the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.
Application filed by China UnionPay Co., Ltd.
Priority to CA3026968A (granted as CA3026968C)
Priority to US15/773,319 (granted as US10755081B2)
Priority to EP16863586.0A (granted as EP3376431B1)
Publication of WO2017080410A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction

Definitions

  • Embodiments of the present invention relate to image recognition, and more particularly to methods and apparatus for identifying pupils in an image.
  • Eye tracking can be performed with a camera; it involves acquiring, modeling, and simulating eye-movement information.
  • Pupil recognition is a prerequisite step for tracking eyeball trajectories.
  • a method for identifying a pupil in an image, comprising: A. image preprocessing; B. edge detection; C. constructing connected graphs, which includes: partitioning connected regions according to the edge lines of the edge map, dividing the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200, and, for the connected graphs of the two categories, selecting connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200; D. filtering the connected graph pairs.
  • Image preprocessing includes: constructing a grayscale image of the original image, and denoising the grayscale image.
  • the original image is obtained by taking an input video, dividing the video into a plurality of pictures at time intervals, and using the first picture as the original image for identifying the pupil.
  • the grayscale image of the original image is constructed by acquiring the R, G, and B values of each pixel of the original image, and selecting the largest value among the R, G, and B values for each pixel as the grayscale value of the pixel.
  • the edge detection includes: calculating the gradient image of the grayscale image, taking pixels whose gradient value exceeds a specified threshold as edge pixels, and constructing the edge map.
  • a device for identifying a pupil in an image, comprising: A. an image preprocessing unit; B. an edge detection unit; C. a connected-graph construction unit configured to: partition connected regions according to the edge lines of the edge map, divide the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200, and, for the connected graphs of the two categories, select connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200; D. a connected-graph-pair filtering unit.
  • the connected-graph-pair filtering unit is configured to: for each connected graph pair, compute the centroid pixel position b_Location of connected graph b, together with the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region; merge connected graphs a and b to obtain a new image c, and compute the centroid pixel position c_Location of image c, together with the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region; for each connected graph pair, compute the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor; for each connected graph pair, compute W = L - m*R - n*P, where m and n are weighting factors; and select the connected graph b of the connected graph pair with the largest W value as the identified pupil.
  • the image pre-processing unit is configured to perform: constructing a grayscale image of the original image, and denoising the grayscale image.
  • the original image is obtained by taking an input video, dividing the video into a plurality of pictures at time intervals, and using the first picture as the original image for identifying the pupil.
  • the grayscale image of the original image is constructed by acquiring the R, G, and B values of each pixel of the original image, and selecting the largest value among the R, G, and B values for each pixel as the grayscale value of the pixel.
  • the edge detection unit is configured to: calculate the gradient image of the grayscale image, take pixels whose gradient value exceeds a specified threshold as edge pixels, and construct the edge map.
  • according to embodiments of the present invention, after the face image is received, the pupil position can be quickly recognized using the relative positions of the eye and the pupil and the pixel ratios of the axes of the ellipses fitted to the image.
  • Embodiments of the present invention do not require other external equipment and manual judgment.
  • the invention proposes setting the weight of each information item through parameter adjustment, so that the method and device for identifying a pupil in an image have high flexibility and adaptability.
  • FIG. 1 is a flow chart of a method of identifying pupils in an image, in accordance with an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an apparatus for identifying a pupil in an image, in accordance with an embodiment of the present invention.
  • FIG. 1 is a flow chart of a method of identifying pupils in an image, in accordance with an embodiment of the present invention. As shown, the method includes four steps: A. Image Preprocessing, B. Edge Detection, C. Building a Connected Graph, and D. Filtering Connected Graph Pairs.
  • Image preprocessing may include: constructing a grayscale image of the original image, and denoising the grayscale image.
  • the original image can be obtained by acquiring an input video, dividing the video into a plurality of pictures at intervals, and using the first picture as an original image for identifying the pupil.
  • the grayscale image of the original image is constructed by acquiring the R, G, and B values of each pixel of the original image and, for each pixel, taking the largest of its R, G, and B values as the grayscale value of that pixel.
  • Edge detection includes: calculating the gradient image of the grayscale image, taking pixels whose gradient value exceeds a specified threshold as edge pixels, and constructing the edge map.
  • the gradient image of the grayscale image can be calculated using the Prewitt detection operator.
  • Building connected graphs can include: partitioning connected regions according to the edge lines of the edge map,
  • dividing the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200,
  • and selecting, from the connected graphs of the two categories, connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200.
  • Filtering connected graph pairs can include: computing, for each pair, the centroid positions and ellipse axis lengths described above, and then computing W = L - m*R - n*P, where m and n are weighting factors; the weighting factors are generally set to positive numbers, but they can be set individually according to the actual situation.
  • the connected graph b in the connected graph pair with the largest W value is selected as the identified pupil.
  • The blocks of FIG. 1 may be considered as method steps, and/or as operations resulting from the execution of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated functions.
  • Exemplary embodiments of the present invention may be implemented in hardware, in software, or in a combination thereof. For example, some aspects of the invention may be implemented in hardware, while others may be implemented in software. Although aspects of the exemplary embodiments of the present invention may be shown and described as block diagrams or flowcharts, it is well understood that the devices or methods described herein may be implemented, as a non-limiting example, as functional modules in a system.
  • the device includes four units: A. an image preprocessing unit, B. an edge detection unit, C. a connected-graph construction unit, and D. a connected-graph-pair filtering unit.
  • the image pre-processing unit is configured to perform: constructing a grayscale image of the original image, denoising the grayscale image.
  • the edge detection unit is configured to: calculate the gradient image of the grayscale image, take pixels whose gradient value exceeds a specified threshold as edge pixels, and construct the edge map.
  • the connected regions are divided into two categories, those with gray values from 100 to 200 and those with gray values above 200.
  • connected graph pairs are selected in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200.
  • the connected graph b in the connected graph pair with the largest W value is selected as the identified pupil.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for identifying a pupil in an image. The method comprises four steps: A. image preprocessing; B. edge detection; C. constructing connected graphs; D. filtering connected graph pairs.

Description

Method and apparatus for identifying a pupil in an image
Technical Field
Embodiments of the present invention relate to image recognition, and in particular to a method and apparatus for identifying a pupil in an image.
Background Art
At present, devices such as ATMs, mobile phones, laptop computers, and PCs are all equipped with cameras. A camera can be used for eye tracking, which involves acquiring, modeling, and simulating eye-movement information. Pupil recognition is a prerequisite step for tracking the trajectory of the eyeball.
Summary of the Invention
A method for identifying a pupil in an image comprises: A. image preprocessing; B. edge detection; C. constructing connected graphs, which includes: partitioning connected regions according to the edge lines of the edge map, dividing the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200, and, for the connected graphs of the two categories, selecting connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200; and D. filtering the connected graph pairs, which includes: for each connected graph pair, computing the centroid pixel position b_Location of connected graph b and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region; merging connected graphs a and b to obtain a new image c, and computing the centroid pixel position c_Location of image c and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region; for each connected graph pair, computing the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor; for each connected graph pair, computing W = L - m*R - n*P, where m and n are weighting factors; and selecting the connected graph b of the connected graph pair with the largest W value as the identified pupil.
Image preprocessing includes: constructing a grayscale image of the original image, and denoising the grayscale image.
The original image is obtained as follows: an input video is acquired, the video is split into multiple pictures at time intervals, and the first picture is used as the original image for identifying the pupil.
The grayscale image of the original image is constructed as follows: the R, G, and B values of each pixel of the original image are acquired, and for each pixel the largest of its R, G, and B values is selected as the grayscale value of that pixel.
Edge detection includes: computing the gradient image of the grayscale image, taking pixels of the gradient image whose values exceed a specified threshold as edge pixels, and constructing the edge map.
An apparatus for identifying a pupil in an image comprises: A. an image preprocessing unit; B. an edge detection unit; C. a connected-graph construction unit configured to: partition connected regions according to the edge lines of the edge map, divide the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200, and, for the connected graphs of the two categories, select connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200; and D. a connected-graph-pair filtering unit configured to: for each connected graph pair, compute the centroid pixel position b_Location of connected graph b and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region; merge connected graphs a and b to obtain a new image c, and compute the centroid pixel position c_Location of image c and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region; for each connected graph pair, compute the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor; for each connected graph pair, compute W = L - m*R - n*P, where m and n are weighting factors; and select the connected graph b of the connected graph pair with the largest W value as the identified pupil.
The image preprocessing unit is configured to: construct a grayscale image of the original image, and denoise the grayscale image.
The original image is obtained as follows: an input video is acquired, the video is split into multiple pictures at time intervals, and the first picture is used as the original image for identifying the pupil.
The grayscale image of the original image is constructed as follows: the R, G, and B values of each pixel of the original image are acquired, and for each pixel the largest of its R, G, and B values is selected as the grayscale value of that pixel.
The edge detection unit is configured to: compute the gradient image of the grayscale image, take pixels of the gradient image whose values exceed a specified threshold as edge pixels, and construct the edge map.
According to embodiments of the present invention, after a face image is received, the pupil position can be quickly identified using the relative positions of the eye and the pupil together with the pixel ratios of the axes of the ellipses fitted to the image. Embodiments of the present invention require no additional external equipment and no manual judgment. The invention proposes setting the weight of each information item through parameter adjustment, which gives the method and apparatus for identifying a pupil in an image high flexibility and adaptability.
Other features and advantages of embodiments of the present invention will also be understood from the following description when read in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of embodiments of the invention.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for identifying a pupil in an image according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of an apparatus for identifying a pupil in an image according to an embodiment of the present invention.
Detailed Description
Hereinafter, the principles of the present invention are described with reference to embodiments. It should be understood that these embodiments are given only so that those skilled in the art can better understand and practice the invention, and are not intended to limit its scope. For example, the many specific implementation details contained in this specification should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions specific to particular embodiments. Features described in the context of separate embodiments may be combined and implemented in a single embodiment, and features described in the context of a single embodiment may also be implemented in multiple embodiments.
FIG. 1 is a flowchart of a method for identifying a pupil in an image according to an embodiment of the present invention. As shown, the method comprises four steps: A. image preprocessing, B. edge detection, C. constructing connected graphs, and D. filtering connected graph pairs.
A. Image preprocessing may include: constructing a grayscale image of the original image, and denoising the grayscale image. The original image may be obtained as follows: an input video is acquired, the video is split into multiple pictures at time intervals, and the first picture is used as the original image for identifying the pupil. The grayscale image of the original image is constructed as follows: the R, G, and B values of each pixel of the original image are acquired, and for each pixel the largest of its R, G, and B values is selected as the grayscale value of that pixel (a sketch of this conversion is given below).
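By way of illustration only (this sketch is not part of the patent text), the max-of-RGB grayscale conversion described above can be written in Python with NumPy and SciPy; the function names are invented for the example, and the median filter is an assumed choice, since the patent does not name a denoising method:

```python
import numpy as np
from scipy import ndimage

def to_grayscale_max_rgb(image: np.ndarray) -> np.ndarray:
    """Per pixel, take the maximum of the R, G, and B channels as the gray value.

    `image` is assumed to be an H x W x 3 uint8 array. Note that this is the
    V channel of HSV rather than the usual luminance-weighted conversion.
    """
    return image.max(axis=2)

def denoise(gray: np.ndarray) -> np.ndarray:
    """Denoising step; the patent does not specify a filter, so a 3x3 median
    filter is used here purely as a placeholder."""
    return ndimage.median_filter(gray, size=3)
```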
B. Edge detection includes: computing the gradient image of the grayscale image, taking pixels of the gradient image whose values exceed a specified threshold as edge pixels, and constructing the edge map. The gradient image of the grayscale image can be computed using the Prewitt operator, as sketched below.
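Again as a non-authoritative illustration, the Prewitt-based edge detection might look as follows; the default threshold of 50 is a placeholder for the patent's unspecified "specified threshold":

```python
import numpy as np
from scipy import ndimage

def prewitt_edge_map(gray: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Edge map: Prewitt gradient magnitude followed by a fixed threshold."""
    g = gray.astype(float)
    gx = ndimage.prewitt(g, axis=1)   # gradient along x (columns)
    gy = ndimage.prewitt(g, axis=0)   # gradient along y (rows)
    magnitude = np.hypot(gx, gy)      # the gradient image
    return magnitude > threshold      # boolean edge map
```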
C. Constructing connected graphs may include:
partitioning connected regions according to the edge lines of the edge map,
dividing the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200,
and, for the connected graphs of the two categories, selecting connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200. The selected connected graph pairs form a candidate list, which can be represented as an array Arr = {<a1, b1>, <a2, b2>, ..., <an, bn>}, where n is the length of the array. A sketch of this pairing step follows.
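The following rough sketch, for illustration only, labels the regions bounded by edge lines and builds the candidate pairs; classifying a region by its mean gray value and testing enclosure via bounding-box containment are simplifications assumed here, not prescribed by the patent:

```python
import numpy as np
from scipy import ndimage

def candidate_pairs(gray: np.ndarray, edges: np.ndarray):
    """Build the candidate list Arr of (a, b) region-label pairs."""
    # Connected regions are the areas separated by the edge lines.
    labels, _ = ndimage.label(~edges)
    boxes = ndimage.find_objects(labels)
    cat_a, cat_b = [], []             # (label, bounding box) per category
    for lbl, box in enumerate(boxes, start=1):
        mean_gray = gray[labels == lbl].mean()
        if 100 <= mean_gray <= 200:
            cat_a.append((lbl, box))
        elif mean_gray > 200:
            cat_b.append((lbl, box))

    def inside(inner, outer):         # bounding-box containment test
        return all(o.start <= i.start and i.stop <= o.stop
                   for i, o in zip(inner, outer))

    return [(a, b) for a, abox in cat_a
            for b, bbox in cat_b if inside(bbox, abox)]
```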
D. Filtering connected graph pairs may include:
for each connected graph pair, computing the centroid pixel position b_Location of connected graph b, and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region,
merging connected graphs a and b to obtain a new image c, and computing the centroid pixel position c_Location of image c, and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region,
for each connected graph pair, computing the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor,
for each connected graph pair, computing W = L - m*R - n*P, where m and n are weighting factors; the weighting factors are generally set to positive numbers, but they can be set individually according to the actual situation,
and selecting the connected graph b of the connected graph pair with the largest W value as the identified pupil. A sketch of this scoring step follows.
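For illustration, the scoring step might be implemented with scikit-image, whose regionprops reports the centroid and the axis lengths of the ellipse having the same normalized second-order central moments as a region, matching the quantities defined above. The sketch below assumes full-image boolean masks for connected graphs a and b, and the placeholder weights m = n = 1:

```python
import numpy as np
from skimage.measure import label, regionprops

def ellipse_stats(mask: np.ndarray):
    """Centroid and axis lengths (in pixels) of the ellipse having the same
    normalized second-order central moments as the region in `mask`."""
    props = regionprops(label(mask))[0]
    return np.array(props.centroid), props.major_axis_length, props.minor_axis_length

def score_pair(mask_a: np.ndarray, mask_b: np.ndarray,
               m: float = 1.0, n: float = 1.0) -> float:
    """W = L - m*R - n*P for one connected graph pair (a, b)."""
    b_loc, b_major, b_minor = ellipse_stats(mask_b)
    mask_c = mask_a | mask_b              # merge graphs a and b into image c
    c_loc, c_major, c_minor = ellipse_stats(mask_c)
    L = np.linalg.norm(b_loc - c_loc)     # centroid pixel distance
    R = b_major / c_major                 # major-axis length ratio
    P = b_minor / c_minor                 # minor-axis length ratio
    return L - m * R - n * P

# The b graph of the pair with the largest W is taken as the pupil, e.g.:
# best_a, best_b = max(pairs, key=lambda p: score_pair(masks[p[0]], masks[p[1]]))
```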
Each block of FIG. 1 may be regarded as a method step, and/or as an operation resulting from the execution of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated functions.
Exemplary embodiments of the present invention may be implemented in hardware, in software, or in a combination of the two. For example, some aspects of the invention may be implemented in hardware, while other aspects may be implemented in software. Although aspects of the exemplary embodiments of the invention may be shown and described as block diagrams or flowcharts, it is well understood that the devices or methods described herein may be implemented, as a non-limiting example, as functional modules in a system.
FIG. 2 is a schematic diagram of an apparatus for identifying a pupil in an image according to an embodiment of the present invention. As shown, the apparatus comprises four units: A. an image preprocessing unit, B. an edge detection unit, C. a connected-graph construction unit, and D. a connected-graph-pair filtering unit.
A. The image preprocessing unit is configured to: construct a grayscale image of the original image, and denoise the grayscale image.
B. The edge detection unit is configured to: compute the gradient image of the grayscale image, take pixels of the gradient image whose values exceed a specified threshold as edge pixels, and construct the edge map.
C. The connected-graph construction unit is configured to:
partition connected regions according to the edge lines of the edge map,
divide the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200,
and, for the connected graphs of the two categories, select connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200.
D. The connected-graph-pair filtering unit is configured to:
for each connected graph pair, compute the centroid pixel position b_Location of connected graph b, and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region,
merge connected graphs a and b to obtain a new image c, and compute the centroid pixel position c_Location of image c, and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region,
for each connected graph pair, compute the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor,
for each connected graph pair, compute W = L - m*R - n*P, where m and n are weighting factors,
and select the connected graph b of the connected graph pair with the largest W value as the identified pupil.
Furthermore, the division of the above apparatus into units should not be understood as requiring such separation in all embodiments; rather, it should be understood that the described program components and systems can generally be integrated into a single software product or packaged into multiple software products.
Various modifications and variations of the foregoing exemplary embodiments of the present invention will become apparent to those skilled in the relevant art upon reading the foregoing specification in conjunction with the accompanying drawings. Therefore, embodiments of the present invention are not limited to the particular embodiments disclosed, and variations and other embodiments are intended to be covered by the scope of the appended claims.

Claims (10)

  1. A method for identifying a pupil in an image, characterized by comprising:
    A. image preprocessing;
    B. edge detection;
    C. constructing connected graphs, comprising:
    partitioning connected regions according to the edge lines of the edge map,
    dividing the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200,
    for the connected graphs of the two categories, selecting connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200,
    D. filtering the connected graph pairs, comprising:
    for each connected graph pair, computing the centroid pixel position b_Location of connected graph b, and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region,
    merging connected graphs a and b to obtain a new image c, and computing the centroid pixel position c_Location of image c, and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region,
    for each connected graph pair, computing the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor,
    for each connected graph pair, computing W = L - m*R - n*P, where m and n are weighting factors,
    selecting the connected graph b of the connected graph pair with the largest W value as the identified pupil.
  2. The method of claim 1, characterized in that
    image preprocessing comprises:
    constructing a grayscale image of the original image,
    denoising the grayscale image.
  3. The method of claim 2, characterized in that
    the original image is obtained by:
    acquiring an input video, splitting the video into multiple pictures at time intervals, and using the first picture as the original image for identifying the pupil.
  4. The method of claim 3, characterized in that
    the grayscale image of the original image is constructed by:
    acquiring the R, G, and B values of each pixel of the original image, and for each pixel selecting the largest of its R, G, and B values as the grayscale value of that pixel.
  5. The method of claim 1, characterized in that
    edge detection comprises:
    computing the gradient image of the grayscale image,
    taking pixels of the gradient image whose values exceed a specified threshold as edge pixels, and constructing the edge map.
  6. An apparatus for identifying a pupil in an image, characterized by comprising:
    A. an image preprocessing unit;
    B. an edge detection unit;
    C. a connected-graph construction unit configured to:
    partition connected regions according to the edge lines of the edge map,
    divide the connected regions into two categories, those with gray values from 100 to 200 and those with gray values above 200,
    for the connected graphs of the two categories, select connected graph pairs in which the edge line of a connected graph b whose gray value is above 200 is enclosed by the edge line of a connected graph a whose gray value is in the range 100-200,
    D. a connected-graph-pair filtering unit configured to:
    for each connected graph pair, compute the centroid pixel position b_Location of connected graph b, and the major-axis pixel length b_Major and minor-axis pixel length b_Minor of the ellipse having the same normalized second-order central moments as its connected region,
    merge connected graphs a and b to obtain a new image c, and compute the centroid pixel position c_Location of image c, and the major-axis pixel length c_Major and minor-axis pixel length c_Minor of the ellipse having the same normalized second-order central moments as its connected region,
    for each connected graph pair, compute the pixel distance L between the centroid pixel positions b_Location and c_Location, the ratio R of b_Major to c_Major, and the ratio P of b_Minor to c_Minor,
    for each connected graph pair, compute W = L - m*R - n*P, where m and n are weighting factors,
    select the connected graph b of the connected graph pair with the largest W value as the identified pupil.
  7. The apparatus of claim 6, characterized in that
    the image preprocessing unit is configured to:
    construct a grayscale image of the original image,
    denoise the grayscale image.
  8. The apparatus of claim 7, characterized in that
    the original image is obtained by:
    acquiring an input video, splitting the video into multiple pictures at time intervals, and using the first picture as the original image for identifying the pupil.
  9. The apparatus of claim 8, characterized in that
    the grayscale image of the original image is constructed by:
    acquiring the R, G, and B values of each pixel of the original image, and for each pixel selecting the largest of its R, G, and B values as the grayscale value of that pixel.
  10. The apparatus of claim 6, characterized in that
    the edge detection unit is configured to:
    compute the gradient image of the grayscale image,
    take pixels of the gradient image whose values exceed a specified threshold as edge pixels, and construct the edge map.
PCT/CN2016/104734 2015-11-11 2016-11-04 Method and apparatus for identifying a pupil in an image WO2017080410A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3026968A CA3026968C (en) 2015-11-11 2016-11-04 Method and device for identifying pupil in an image
US15/773,319 US10755081B2 (en) 2015-11-11 2016-11-04 Method and apparatus for identifying pupil in image
EP16863586.0A EP3376431B1 (en) 2015-11-11 2016-11-04 Method and apparatus for identifying pupil in image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510765986.6 2015-11-11
CN201510765986.6A CN105590092B (zh) 2015-11-11 2015-11-11 一种识别图像中瞳孔的方法和装置

Publications (1)

Publication Number Publication Date
WO2017080410A1 (zh) 2017-05-18

Family

ID=55929662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/104734 WO2017080410A1 (zh) 2015-11-11 2016-11-04 Method and apparatus for identifying a pupil in an image

Country Status (5)

Country Link
US (1) US10755081B2 (zh)
EP (1) EP3376431B1 (zh)
CN (1) CN105590092B (zh)
CA (1) CA3026968C (zh)
WO (1) WO2017080410A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590092B (zh) 2015-11-11 2019-07-19 China UnionPay Co., Ltd. Method and apparatus for identifying a pupil in an image
CN106289532A (zh) * 2016-08-05 2017-01-04 西安西拓电气股份有限公司 Temperature extraction method and device for infrared thermal images
CN108269261A (zh) * 2016-12-30 2018-07-10 亿阳信通股份有限公司 Method and system for segmenting bone and joint CT images
CN107844803B (zh) * 2017-10-30 2021-12-28 China UnionPay Co., Ltd. Method and device for picture comparison
CN108392170A (zh) * 2018-02-09 2018-08-14 North University of China Human-eye tracking device for an optometry instrument and recognition and positioning method
CN110276229A (zh) * 2018-03-14 2019-09-24 BOE Technology Group Co., Ltd. Method and device for locating the center of a target object region
CN111854620B (zh) * 2020-07-16 2022-12-06 iFLYTEK Co., Ltd. Monocular-camera-based method, apparatus, and device for measuring actual interpupillary distance

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359365A (zh) * 2008-08-07 2009-02-04 电子科技大学中山学院 Iris localization method based on maximum between-class variance and gray-level information
CN101788848A (zh) * 2009-09-29 2010-07-28 University of Science and Technology Beijing Eye feature parameter detection method for a gaze tracking system
CN102129553A (zh) * 2011-03-16 2011-07-20 Shanghai Jiao Tong University Human eye detection method based on a single infrared light source
CN104484649A (zh) * 2014-11-27 2015-04-01 北京天诚盛业科技有限公司 Method and device for iris recognition
CN104809458A (zh) * 2014-12-29 2015-07-29 Huawei Technologies Co., Ltd. Pupil center positioning method and device
CN105590092A (zh) * 2015-11-11 2016-05-18 China UnionPay Co., Ltd. Method and apparatus for identifying a pupil in an image

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1282070B1 (de) * 2001-08-01 2004-04-21 ZN Vision Technologies AG Hierarchical image model adaptation
WO2006041426A2 (en) * 2004-09-15 2006-04-20 Adobe Systems Incorporated Locating a feature in a digital image
US7751626B2 (en) * 2006-12-05 2010-07-06 Fujifilm Corporation Method and apparatus for detection using gradient-weighted and/or distance-weighted graph cuts
JP4893862B1 (ja) * 2011-03-11 2012-03-07 Omron Corporation Image processing device and image processing method
CN103838378B (zh) * 2014-03-13 2017-05-31 Guangdong University of Petrochemical Technology Head-mounted eye control system based on pupil recognition and positioning
US9864430B2 (en) * 2015-01-09 2018-01-09 Microsoft Technology Licensing, Llc Gaze tracking via eye gaze model
US10048749B2 (en) * 2015-01-09 2018-08-14 Microsoft Technology Licensing, Llc Gaze detection offset for gaze tracking models


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP3376431A4 *
YU, Longhua et al., "Human Eyes Detection and Pupil Localization", Computer Engineering and Applications, vol. 49, no. 3, 24 October 2011, pages 186-189, XP009510790 *

Also Published As

Publication number Publication date
EP3376431A1 (en) 2018-09-19
US20180322332A1 (en) 2018-11-08
EP3376431A4 (en) 2019-06-12
EP3376431B1 (en) 2023-04-19
US10755081B2 (en) 2020-08-25
CN105590092B (zh) 2019-07-19
CA3026968C (en) 2021-02-23
CN105590092A (zh) 2016-05-18
CA3026968A1 (en) 2017-05-18

Similar Documents

Publication Publication Date Title
WO2017080410A1 (zh) Method and apparatus for identifying a pupil in an image
EP3674852B1 (en) Method and apparatus with gaze estimation
EP3295424B1 (en) Systems and methods for reducing a plurality of bounding regions
WO2017163955A1 (ja) Monitoring system, image processing device, image processing method, and program recording medium
JP2018523879A (ja) Eyelid shape estimation using eye pose measurement
US20140267417A1 (en) Method and System for Disambiguation of Augmented Reality Tracking Databases
WO2017197620A1 (en) Detection of humans in images using depth information
US11055829B2 (en) Picture processing method and apparatus
US11521327B2 (en) Detection target positioning device, detection target positioning method, and sight tracking device
JP7093427B2 (ja) Object tracking method and apparatus, electronic device, and storage medium
CN111353336B (zh) Image processing method, apparatus, and device
CN111368717A (zh) Gaze determination method and apparatus, electronic device, and computer-readable storage medium
US20160328626A1 (en) Morphological Automatic Triangle Orientation Detection
CN110673607B (zh) Feature point extraction method and apparatus in dynamic scenes, and terminal device
WO2021056501A1 (zh) Method for extracting feature points, movable platform, and storage medium
Sun et al. An improved Harris corner detection algorithm for low contrast image
WO2014165159A1 (en) System and method for blind image deconvolution
TWI641999B (zh) Eyeball recognition method and system
JP2007280032A (ja) Image processing apparatus and method, and program
JP2019020839A (ja) Image processing apparatus, image processing method, and program
CN114821717B (zh) Target object fusion method and apparatus, electronic device, and storage medium
KR20150060032A (ko) Motion detection system and method
WO2016051707A1 (ja) Information processing device, information processing method, and recording medium
CN110849317B (zh) Method for determining the angle between display screens, electronic device, and storage medium
KR101807541B1 (ko) Census pattern generation method for stereo matching

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16863586

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15773319

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016863586

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 3026968

Country of ref document: CA