CN112560709B - Pupil detection method and system based on auxiliary learning - Google Patents

Pupil detection method and system based on auxiliary learning

Info

Publication number
CN112560709B
CN112560709B CN202011508367.6A CN202011508367A
Authority
CN
China
Prior art keywords
task
vector field
auxiliary
convolutional neural
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011508367.6A
Other languages
Chinese (zh)
Other versions
CN112560709A (en)
Inventor
容毅标 (Rong Yibiao)
范衠 (Fan Zhun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shantou University
Original Assignee
Shantou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shantou University filed Critical Shantou University
Priority to CN202011508367.6A priority Critical patent/CN112560709B/en
Publication of CN112560709A publication Critical patent/CN112560709A/en
Application granted granted Critical
Publication of CN112560709B publication Critical patent/CN112560709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a pupil detection method and a pupil detection system based on auxiliary learning. An auxiliary task is designed according to the main task (pupil detection), and a convolutional neural network is trained jointly on the main task and the auxiliary task. A pupil saliency map is then derived directly from the original image by the trained convolutional neural network, and finally pupil detection is realized based on the pupil saliency map. The method is applied to the technical fields of pattern recognition and machine learning.

Description

Pupil detection method and system based on auxiliary learning
Technical Field
The invention belongs to the technical fields of pattern recognition, machine learning and pupil detection, and particularly relates to a pupil detection method and system based on auxiliary learning.
Background
Automatic detection of pupils plays an important role in many application scenarios, such as the development of brain-disease diagnostic tools and drowsiness detection. Conventional pupil detection methods are generally designed around certain preconditions, for example the assumption that the grey values of pixels in the pupil area are small. The performance of such methods therefore depends on the image itself: once an image violates these preconditions, the algorithm fails. A second, more advanced class of methods is based on traditional machine learning algorithms. Although these algorithms avoid strong preconditions to some extent, they typically involve feature engineering, i.e. features designed by hand, which has significant drawbacks: it is time-consuming and labour-intensive and requires considerable prior knowledge.
With the resurgence of convolutional neural networks, they have been widely applied across fields and have achieved performance superior to conventional algorithms. Because a convolutional neural network is an end-to-end, learnable supervised model, it avoids feature engineering and reduces the dependence on prior knowledge. Researchers have accordingly used convolutional neural networks for pupil detection: one existing approach dilates the gold-standard region, trains a convolutional neural network with the original images and the dilated gold standard, and finally applies a post-processing algorithm to obtain the pupil position. Because the gold standard is dilated, the pupil map produced by the network is not accurate. If, however, the gold standard is not dilated and the network is trained directly on image/gold-standard pairs, the severe imbalance between the numbers of target and non-target pixels in the gold standard causes the network to fall into a local minimum, so that pupil information cannot be captured accurately. In short, dilating the pupil area of the gold standard reduces accuracy, while not dilating it traps the network in a local minimum.
Disclosure of Invention
The invention aims to provide a pupil detection method and a pupil detection system based on auxiliary learning, so as to solve one or more technical problems in the prior art and at least provide a beneficial alternative or advantageous conditions for doing so.
The invention provides a pupil detection method and a pupil detection system based on auxiliary learning. The idea is to design auxiliary tasks and to train a convolutional neural network on them together with the original main task, so that the network can escape local minima and pupil detection is realized accurately. The auxiliary tasks are first designed according to the main task (pupil detection). The convolutional neural network is then trained jointly on the main and auxiliary tasks. Next, the trained convolutional neural network derives a saliency map of the pupil from the original image (an eye image containing the pupil), and finally pupil detection is realized based on the saliency map.
In order to achieve the above object, according to an aspect of the present invention, there is provided a pupil detection method based on auxiliary learning, the method including the steps of:
s100, constructing auxiliary tasks;
s200, constructing a loss function through the auxiliary task and the main task;
s300, training a convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing a loss function;
s400, identifying a saliency map of the pupil from the original image by using the trained convolutional neural network, and realizing pupil detection based on the saliency map.
Further, in S100, the method for constructing the auxiliary task comprises: the auxiliary task is designed according to the characteristics of the main task. The auxiliary task is to let the convolutional neural network derive a vector field from the original image, while the main task is pupil detection. The vector field is a set of m×n vectors, where m×n represents the size of the image, i.e. the number of pixels in the image, and is characterized in that all vectors in the vector field point at a target point. Let the vector field be expressed mathematically as v = [u(x, y), v(x, y)], where (x, y) denotes the position coordinates of a pixel. The goal is to find the vector field [u, v], which is obtained by minimizing the following energy function; the minimizer can in turn be found by solving the corresponding Euler equations:
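In the standard gradient-vector-flow (GVF) form implied by the definitions in the next paragraph, the energy function reads approximately:

$$E(\mathbf{v}) \;=\; \iint \mu\big(u_x^{2}+u_y^{2}+v_x^{2}+v_y^{2}\big) \;+\; \lvert\nabla gt\rvert^{2}\,\lvert \mathbf{v}-\nabla gt\rvert^{2}\; dx\,dy$$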
wherein μ is a weight parameter that balances the first and second terms of the energy function; u_x, u_y and v_x, v_y are the first partial derivatives of u and v with respect to x and y, and u and v are shorthand for u(x, y) and v(x, y); gt represents the gold-standard image of the pupil, i.e. an image in which the pupil position has been marked by an expert; ∇ represents the gradient operator. To minimize the energy function, the vector field v needs to satisfy the following Euler equations:
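In the same standard GVF form, with gt_x and gt_y the partial derivatives of the gold-standard image and ∇² the Laplacian operator, the Euler equations read approximately:

$$\mu\,\nabla^{2}u-\big(u-gt_{x}\big)\big(gt_{x}^{2}+gt_{y}^{2}\big)=0,\qquad \mu\,\nabla^{2}v-\big(v-gt_{y}\big)\big(gt_{x}^{2}+gt_{y}^{2}\big)=0$$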
the Euler equation can be solved in an iterative manner, and as the number of iterations increases, the vector field will be spread to an area away from the target point. For convenience of description below, the symbol ψ is used k (gt) represents the vector field obtained by iterating k times.
Further, in S200, the method for constructing the loss function by the auxiliary task and the main task is as follows:
The loss function is constructed as:
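Written with the main-task and auxiliary-task predictions denoted F_p(I_i; W) and F_a^k(I_i; W) respectively (notation assumed here for readability), the loss takes approximately the form:

$$\mathcal{L}(W) \;=\; \sum_{i=1}^{M}\Big[\,\zeta_p\big(F_p(I_i; W),\, gt_i\big) \;+\; \sum_{k}\lambda_k\,\zeta_a^{k}\big(F_a^{k}(I_i; W),\, \psi_k(gt_i)\big)\Big]$$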
wherein M represents the number of images in the training set; ζ_p(·) is the loss function of the main task and ζ_a^k(·) is the loss function of the auxiliary task corresponding to k iterations; W represents the parameters of the convolutional neural network, which consist of two parts, a shared part and a part that makes predictions for a specific task, the specific tasks being: first, the main task, i.e. realizing pupil detection; second, the auxiliary tasks, i.e. letting the convolutional neural network derive the corresponding vector fields. For ease of distinction, F_p(I_i; W) denotes the prediction for the main task and F_a^k(I_i; W) the prediction for the auxiliary task corresponding to k iterations, I_i denotes the i-th image, and gt_i is the corresponding gold standard; λ_k is the weight coefficient of each auxiliary task, the subscript k indicating the auxiliary task corresponding to k iterations; and ψ_k(gt_i), abbreviated ψ(·), is the transfer function, i.e. the vector field obtained by iterating the image gt_i k times.
Further, in S300, the method of training the convolutional neural network and determining the values of the parameters in the convolutional neural network by minimizing the loss function is as follows: the convolutional neural network is trained by determining the values of the parameters in the network that minimize the loss function, as in the following formula:
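Writing W* for the resulting parameters (a symbol assumed here for readability), the training objective is:

$$W^{*} \;=\; \arg\min_{W}\; \mathcal{L}(W)$$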
where W* denotes the parameters obtained by minimizing the loss function.
The invention also provides a pupil detection system based on auxiliary learning, which comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in units of the following system:
the auxiliary task construction unit is used for constructing auxiliary tasks;
the loss function construction unit is used for constructing a loss function through the auxiliary task and the main task;
the network training unit is used for training the convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing the loss function;
and the pupil detection unit is used for identifying the saliency map of the pupil from the original image by using the trained convolutional neural network while performing the auxiliary task.
The beneficial effects of the invention are as follows: the pupil detection method and system based on auxiliary learning provided by the invention enable the convolutional neural network to escape local minima during training, thereby significantly improving the accuracy of pupil detection.
Drawings
The above and other features of the present invention will become more apparent from the following detailed description of its embodiments taken in conjunction with the accompanying drawings, in which like reference characters designate like or similar elements. It is apparent that the drawings described below are merely some examples of the present invention, and that other drawings can be obtained from them by those of ordinary skill in the art without inventive effort. In the drawings:
FIG. 1 is a flow chart of a pupil detection method based on aided learning;
FIG. 2 is a schematic diagram of a vector field in which all vectors are directed to a target point;
FIG. 3 is a diagram of vector fields for different capture ranges;
FIG. 4 illustrates a convolutional neural network structure;
FIG. 5 is a schematic diagram showing the detection result;
fig. 6 is a diagram showing a pupil detection system structure based on auxiliary learning.
Detailed Description
The conception, specific structure and technical effects of the present invention will be clearly and completely described below with reference to the embodiments and the drawings, so that the objects, aspects and effects of the present invention can be fully understood. It should be noted that, in the absence of conflict, the embodiments and the features in the embodiments may be combined with each other.
Fig. 1 is a flowchart of a pupil detection method based on auxiliary learning according to the present invention, and a pupil detection method based on auxiliary learning according to an embodiment of the present invention is described below with reference to fig. 1.
The invention provides a pupil detection method based on auxiliary learning, which specifically comprises the following steps:
1) Design of the auxiliary task, i.e. designing a vector field in which all vectors point at the target point, as shown in fig. 2 (fig. 2 is a schematic diagram of such a vector field). To this end, the energy function given above is minimized.
the vector field refers to a set consisting of m×n vectors, wherein m×n represents the size of an image, namely the number of pixels in the image, and is characterized in that all vectors in the vector field point to a target point; let the vector field be expressed mathematically as v= [ u (x, y), v (x, y) ], (x, y) denote the position coordinates of the pixel. The purpose of this formula is to find the vector field [ u, v ], which can be found by finding the Euler equation below. μ is a weight parameter that is used to balance the first and second terms in the energy function, in this example μ=0.8. To minimize the energy function, the vector field v needs to satisfy the following euler equation:
the Euler equation can be solved in an iterative mode, so that vector fields (also called external forces) with different capture ranges can be obtained. Generally, as the number of iterations k increases, the capture range of the vector field also increases, and referring to fig. 3, fig. 3 is a schematic diagram of vector fields of different capture ranges; fig. 3 (a) is a gold standard image (target area in the original image); FIG. 3 (b) is the vector field obtained by 10 iterations; FIG. 3 is the vector field resulting from (c) iterating 90 times; fig. 3 (d) is a vector field obtained by iterating 500 times, wherein fig. 3 (a) is a gold standard image gt, and fig. 3 (b) to 3 (c) are vector fields obtained by iterating 10 times, 90 times and 500 times, respectively.
2) Given a training set of M image/gold-standard pairs (I_i, gt_i), the loss function given above is constructed.
in this example, the loss function ζ p (. Cndot.). Cndot. a (. Cndot.) are least squares loss functions, i.ek takes three values of 10, 30 and 96, λ k Are all set to 1. The method comprises the steps of carrying out a first treatment on the surface of the
Wherein M represents the number of images in the training set ζ p (. Cndot.) is the loss function of the primary task,a loss function of the auxiliary task corresponding to the iteration k times; w represents a parameter in the convolutional neural network, which is composed of two parts, one is a shared part and the other is a part for making predictions for a specific task, which refers to: firstly, the main task is to realize pupil detection; firstly, an auxiliary task, namely, letting a convolutional neural network derive a corresponding vector field; for convenience of distinction, use->Representing the result of predicting the primary task, < >>Representing the result obtained by predicting auxiliary tasks corresponding to k times of iteration, I i Representing the ith image, gt i Is the corresponding gold standard; lambda (lambda) k For the weight coefficient of each auxiliary task, the subscript k represents the auxiliary task corresponding to the iteration k times, ψ k The abbreviation ψ (·) represents the transfer function, i.e. the vector field obtained by iterating image gt k times.
3) The convolutional neural network is trained. In this example, pupil detection is performed with a convolutional neural network having the commonly used U-shaped structure shown in fig. 4, and the values of the parameters in the network are determined by minimizing the loss function of step 2), i.e. by the minimization set forth above.
The resulting parameter values W* are those that minimize the loss function.
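For orientation, a toy shared-trunk / multi-head layout in PyTorch: a shared feature extractor, one head for the pupil saliency map and one two-channel head per auxiliary vector field. The two-layer trunk below merely stands in for the U-shaped network of fig. 4 and is not the patent's architecture.

```python
import torch
import torch.nn as nn

class SharedTrunkMultiHead(nn.Module):
    """Shared convolutional trunk with a primary head (saliency map) and one
    (u, v) head per auxiliary iteration count k."""

    def __init__(self, aux_ks=(10, 30, 96), width=32):
        super().__init__()
        self.trunk = nn.Sequential(                      # stand-in for the U-shaped network
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head_main = nn.Conv2d(width, 1, 1)          # pupil saliency head
        self.heads_aux = nn.ModuleDict(
            {str(k): nn.Conv2d(width, 2, 1) for k in aux_ks}   # one (u, v) head per k
        )

    def forward(self, x):
        feat = self.trunk(x)
        return self.head_main(feat), {int(k): head(feat) for k, head in self.heads_aux.items()}
```

A training step would then combine this with the loss sketch above, e.g. pred_main, preds_aux = model(images); loss = auxiliary_learning_loss(pred_main, preds_aux, gt, gvf_targets); loss.backward().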
4) A saliency map of the pupil is derived from the original image by the trained convolutional neural network. Fig. 5 is a schematic diagram of the detection result: fig. 5(a) is the original image, fig. 5(b) is the pupil saliency map, and fig. 5(c) shows the detection result.
5) Pupil detection is achieved based on the pupil saliency map. In this example, the pixel with the largest gray value in the saliency map is taken as the target point, see fig. 5 (c).
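Step 5) amounts to an argmax over the saliency map; a one-function NumPy sketch (names assumed):

```python
import numpy as np

def pupil_from_saliency(saliency):
    """Return (x, y) of the pixel with the largest value in the saliency map."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return int(x), int(y)
```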
Fig. 6 is a diagram showing the structure of a pupil detection system based on auxiliary learning according to an embodiment of the present invention. The pupil detection system based on auxiliary learning comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor, the computer program, when executed, implementing the steps of the above-described embodiment of the pupil detection method based on auxiliary learning.
The system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in units of the following system:
the auxiliary task construction unit is used for constructing auxiliary tasks;
the loss function construction unit is used for constructing a loss function through the auxiliary task and the main task;
the network training unit is used for training the convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing the loss function;
and the pupil detection unit is used for identifying the saliency map of the pupil from the original image by using the trained convolutional neural network while performing the auxiliary task.
The pupil detection system based on auxiliary learning can be run on computing devices such as desktop computers, notebook computers, palmtop computers and cloud servers. The system may include, but is not limited to, a processor and a memory. It will be appreciated by those skilled in the art that the above is merely an example of the pupil detection system based on auxiliary learning and does not constitute a limitation; the system may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, network access devices, a bus, and the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control centre of the pupil detection system based on auxiliary learning and connects the various parts of the entire system through various interfaces and lines.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the pupil detection system based on auxiliary learning by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
While the present invention has been described in considerable detail and with particularity with respect to several described embodiments, it is not intended to be limited to any such detail or embodiment or to any particular embodiment; rather, it is to be construed, by reference to the appended claims in view of the prior art, so as to provide the broadest interpretation of such claims and to effectively encompass the intended scope of the invention. Furthermore, the foregoing description presents the invention in terms of embodiments foreseen by the inventors for the purpose of a useful description; insubstantial modifications of the invention not presently foreseen may nonetheless represent equivalents of the invention.

Claims (3)

1. A pupil detection method based on aided learning, the method comprising the steps of:
s100, constructing auxiliary tasks;
s200, constructing a loss function through the auxiliary task and the main task;
s300, training a convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing a loss function;
s400, identifying a saliency map of the pupil from the original image by using the trained convolutional neural network;
in S100, the method for constructing the auxiliary task comprises: the auxiliary task is to let the convolutional neural network derive a vector field from the original image, and the main task is pupil detection; the vector field is a set of m×n vectors, where m×n represents the size of the image, i.e. the number of pixels in the image, and is characterized in that all vectors in the vector field point at a target point; the vector field is expressed mathematically as v = [u(x, y), v(x, y)], where (x, y) denotes the position coordinates of a pixel; the goal is to find the vector field [u, v], which is obtained by minimizing the energy function set forth above and can be computed by solving the corresponding Euler equations;
wherein μ is a weight parameter that balances the first and second terms of the energy function; u_x, u_y and v_x, v_y are the first partial derivatives of u and v with respect to x and y, and u and v are shorthand for u(x, y) and v(x, y); gt represents the gold-standard image of the pupil, i.e. an image in which the pupil position has been marked by an expert; ∇ represents the gradient operator; to minimize the energy function, the vector field v needs to satisfy the Euler equations set forth above;
the Euler equation can be solved in an iterative manner, and as the number of iterations increases, the vector field will be spread to an area away from the target point, for convenience of the following description, the symbol ψ is used k (gt) represents the vector field obtained by iterating k times;
in S200, the method for constructing the loss function by the auxiliary task and the main task is as follows:
the loss function is constructed as set forth above;
wherein M represents the number of images in the training set; ζ_p(·) is the loss function of the main task and ζ_a^k(·) is the loss function of the auxiliary task corresponding to k iterations; W represents the parameters of the convolutional neural network, which consist of two parts, a shared part and a part that makes predictions for a specific task, the specific tasks being: first, the main task, i.e. realizing pupil detection; second, the auxiliary tasks, i.e. letting the convolutional neural network derive the corresponding vector fields; for ease of distinction, the prediction for the main task and the prediction for the auxiliary task corresponding to k iterations are denoted by separate symbols, I_i denotes the i-th image, and gt_i is the corresponding gold standard; λ_k is the weight coefficient of each auxiliary task, the subscript k indicating the auxiliary task corresponding to k iterations; and ψ_k(gt), abbreviated ψ(·), is the transfer function, i.e. the vector field obtained by iterating the image gt k times.
2. The pupil detection method based on auxiliary learning according to claim 1, wherein in S300, the method of training the convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing the loss function is as follows: the convolutional neural network is trained by determining the values of the parameters in the network through minimization of the loss function, as set forth above,
wherein the parameters so obtained are those that minimize the loss function.
3. A pupil detection system based on assisted learning, the system comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in units of the following system:
the auxiliary task construction unit is used for constructing auxiliary tasks;
the loss function construction unit is used for constructing a loss function through the auxiliary task and the main task;
the network training unit is used for training the convolutional neural network and determining the values of parameters in the convolutional neural network by minimizing the loss function;
the pupil detection unit is used for identifying a saliency map of the pupil from the original image by using the trained convolutional neural network while the auxiliary task is performed;
in the auxiliary task construction unit, the method for constructing the auxiliary task comprises: the auxiliary task is to let the convolutional neural network derive a vector field from the original image, and the main task is pupil detection; the vector field is a set of m×n vectors, where m×n represents the size of the image, i.e. the number of pixels in the image, and is characterized in that all vectors in the vector field point at a target point; the vector field is expressed mathematically as v = [u(x, y), v(x, y)], where (x, y) denotes the position coordinates of a pixel; the goal is to find the vector field [u, v], which is obtained by minimizing the energy function set forth above and can be computed by solving the corresponding Euler equations;
wherein μ is a weight parameter that balances the first and second terms of the energy function; u_x, u_y and v_x, v_y are the first partial derivatives of u and v with respect to x and y, and u and v are shorthand for u(x, y) and v(x, y); gt represents the gold-standard image of the pupil, i.e. an image in which the pupil position has been marked by an expert; ∇ represents the gradient operator; to minimize the energy function, the vector field v needs to satisfy the Euler equations set forth above;
the Euler equation can be solved in an iterative manner, and as the number of iterations increases, the vector field will be spread to an area away from the target point, for convenience of the following description, the symbol ψ is used k (gt) represents the vector field obtained by iterating k times;
in the loss function construction unit, the method for constructing the loss function through the auxiliary task and the main task comprises the following steps:
the loss function is constructed as set forth above;
wherein M represents the number of images in the training set; ζ_p(·) is the loss function of the main task and ζ_a^k(·) is the loss function of the auxiliary task corresponding to k iterations; W represents the parameters of the convolutional neural network, which consist of two parts, a shared part and a part that makes predictions for a specific task, the specific tasks being: first, the main task, i.e. realizing pupil detection; second, the auxiliary tasks, i.e. letting the convolutional neural network derive the corresponding vector fields; for ease of distinction, the prediction for the main task and the prediction for the auxiliary task corresponding to k iterations are denoted by separate symbols, I_i denotes the i-th image, and gt_i is the corresponding gold standard; λ_k is the weight coefficient of each auxiliary task, the subscript k indicating the auxiliary task corresponding to k iterations; and ψ_k(gt), abbreviated ψ(·), is the transfer function, i.e. the vector field obtained by iterating the image gt k times.
CN202011508367.6A 2020-12-18 2020-12-18 Pupil detection method and system based on auxiliary learning Active CN112560709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011508367.6A CN112560709B (en) 2020-12-18 2020-12-18 Pupil detection method and system based on auxiliary learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011508367.6A CN112560709B (en) 2020-12-18 2020-12-18 Pupil detection method and system based on auxiliary learning

Publications (2)

Publication Number Publication Date
CN112560709A (en) 2021-03-26
CN112560709B (en) 2023-07-25

Family

ID=75031715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011508367.6A Active CN112560709B (en) 2020-12-18 2020-12-18 Pupil detection method and system based on auxiliary learning

Country Status (1)

Country Link
CN (1) CN112560709B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8682073B2 (en) * 2011-04-28 2014-03-25 Sri International Method of pupil segmentation

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800100A (en) * 2012-08-06 2012-11-28 哈尔滨工业大学 Image segmentation method based on distance potential field and self-adaptive balloon force
CN103971089A (en) * 2014-03-18 2014-08-06 中山大学深圳研究院 GVF Snakes (gradient vector flow snakes) model based iris location algorithm
CN105260698A (en) * 2015-09-08 2016-01-20 北京天诚盛业科技有限公司 Method and device for positioning iris image
CN109166095A (en) * 2018-07-11 2019-01-08 广东技术师范学院 A kind of ophthalmoscopic image cup disk dividing method based on generation confrontation mechanism
CN109345538A (en) * 2018-08-30 2019-02-15 华南理工大学 A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
CN109344763A (en) * 2018-09-26 2019-02-15 汕头大学 A kind of strabismus detection method based on convolutional neural networks
CN109919245A (en) * 2019-03-18 2019-06-21 北京市商汤科技开发有限公司 Deep learning model training method and device, training equipment and storage medium
CN110111316A (en) * 2019-04-26 2019-08-09 广东工业大学 Method and system based on eyes image identification amblyopia
CN111160356A (en) * 2020-01-02 2020-05-15 博奥生物集团有限公司 Image segmentation and classification method and device
CN111310839A (en) * 2020-02-24 2020-06-19 广州柏视数据科技有限公司 Method and system for detecting nipple position in molybdenum target image

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Deep Pictorial Gaze Estimation; Seonwook Park et al.; 《ECCV 2018》; 1-18 *
Deriving external forces via convolutional neural networks for biomedical image segmentation; YIBIAO RONG et al.; 《BIOMEDICAL OPTICS EXPRESS》; Vol. 10, No. 8; 3800-3814 *
Noise-Robust Pupil Center Detection Through CNN-Based Segmentation With Shape-Prior Loss; SANG YOON HAN et al.; 《IEEE Access》; Vol. 8; 64739-64749 *
Optic Disk Detection in Fundus Image Based on Structured Learning; Zhun Fan et al.; 《IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS》; Vol. 22, No. 1; 224-234 *
PupilNet: Convolutional Neural Networks for Robust Pupil Detection; Wolfgang Fuhl et al.; 《arXiv》; 1-10 *
Work in Progress: Temporally Extended Auxiliary Tasks; Craig Sherstan et al.; 《arXiv》; 1-6 *
Application of face recognition in fatigue-driving detection; Li Jun et al.; 《广东技术师范学院学报》 (No. 3); 22-27 *

Also Published As

Publication number Publication date
CN112560709A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
US20200160212A1 (en) Method and system for transfer learning to random target dataset and model structure based on meta learning
CN109949255B (en) Image reconstruction method and device
US9489722B2 (en) Method and apparatus for implementing image denoising
WO2021089013A1 (en) Spatial graph convolutional network training method, electronic device and storage medium
CN110852349B (en) Image processing method, detection method, related equipment and storage medium
CN114155365B (en) Model training method, image processing method and related device
CN111340077B (en) Attention mechanism-based disparity map acquisition method and device
CN112085056B (en) Target detection model generation method, device, equipment and storage medium
JP6597914B2 (en) Image processing apparatus, image processing method, and program
CN111563919A (en) Target tracking method and device, computer readable storage medium and robot
WO2021068376A1 (en) Convolution processing method and system applied to convolutional neural network, and related components
CN111027412A (en) Human body key point identification method and device and electronic equipment
CN107862680A (en) A kind of target following optimization method based on correlation filter
CN111223128A (en) Target tracking method, device, equipment and storage medium
US10885635B2 (en) Curvilinear object segmentation with noise priors
CN109712146B (en) EM multi-threshold image segmentation method and device based on histogram
KR101700030B1 (en) Method for visual object localization using privileged information and apparatus for performing the same
CN109871249A (en) A kind of remote desktop operation method, apparatus, readable storage medium storing program for executing and terminal device
CN112560709B (en) Pupil detection method and system based on auxiliary learning
US10832413B2 (en) Curvilinear object segmentation with geometric priors
CN115330579B (en) Model watermark construction method, device, equipment and storage medium
CN112257686B (en) Training method and device for human body posture recognition model and storage medium
CN112801045B (en) Text region detection method, electronic equipment and computer storage medium
CN110490054B (en) Target area detection method and device, electronic equipment and readable storage medium
TWI767122B (en) Model constructing method, system, and non-transitory computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant