CN108629265A - Method and apparatus for pupil localization - Google Patents

Method and apparatus for pupil localization

Info

Publication number
CN108629265A
Authority
CN
China
Prior art keywords
image block
pupil
image
threshold
network
Prior art date
Legal status
Pending
Application number
CN201710812073.4A
Other languages
Chinese (zh)
Inventor
黄欢
赵刚
Current Assignee
Shanghai Jinghong Electronic Technology Co Ltd
Original Assignee
Shanghai Jinghong Electronic Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jinghong Electronic Technology Co Ltd
Publication of CN108629265A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a method and apparatus for pupil localization. One embodiment discloses a method for pupil localization, comprising: extracting a plurality of first image blocks from a first image containing a pupil; inputting the plurality of first image blocks into a first neural network that has been trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the plurality of first image blocks; and, according to the first processing result output by the first neural network, determining a first positioning image block from the plurality of first image blocks and localizing the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center. The disclosure also describes a corresponding apparatus, computer system, and computer-readable storage medium.

Description

Method and apparatus for pupil localization
Technical field
The present invention relates generally to the field of image processing and, more particularly, to a method and apparatus for pupil localization.
Background technology
With the continuous growth of security awareness and demand, together with the rapid development of technologies such as computer vision, pattern recognition, and artificial intelligence, biometric identification technology has made breakthrough progress and is widely used. Biometric identification uses the intrinsic, unique biological characteristics of the human body as identifiers, which are convenient to carry and use. Iris recognition is one kind of biometric identification technology. Iris recognition has features such as high stability, high reliability, non-contact operation, and non-invasiveness, and it is more convenient and accurate than other biometric identification methods based on fingerprints, faces, voice, or palm prints. In mobile electronic devices, iris recognition is mainly used to implement functions such as device unlocking and mobile payment, and it has become one of the identity-recognition technologies with the greatest development potential in the field of biometric identification.
In iris recognition methods, reliable pupil detection is a very important precondition. The two most important goals of pupil detection are to accurately locate the center of the pupil and to compute the size of the pupil. The pupil can be approximated by an ellipse, and locating it accurately amounts to solving for the parameters of that ellipse. However, under external interference, the extracted pupil cannot be approximated by a standard ellipse, which makes accurate localization difficult.
Existing techniques all use hand-crafted features to localize the pupil and, when various interfering factors are taken into account, the detection results are poor.
Summary of the invention
(1) In general, embodiments of the present invention propose a method for pupil localization, the method comprising: extracting a plurality of first image blocks from a first image containing a pupil; inputting the plurality of first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, to obtain a first processing result of the first neural network for the plurality of first image blocks; and, according to the first processing result output by the first neural network, determining a first positioning image block from the plurality of first image blocks and localizing the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
(2) The method according to (1), wherein the first neural network comprises a convolutional neural network.
(3) The method according to (1), wherein the method further comprises training the first neural network, the training comprising: inputting into the first neural network a plurality of second image blocks extracted from a sample image in which the pupil position has been labeled, to obtain a second processing result; and optimizing the first neural network such that: in response to the distance between a second image block and the labeled pupil being no greater than a first threshold, the obtained second processing result is greater than a second threshold; and in response to the distance between a second image block and the labeled pupil being greater than the first threshold, the obtained second processing result is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
(4) The method according to (3), wherein the size of the second image blocks is the same as the size of the first image blocks.
(5) The method according to (3), wherein making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold comprises: making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold and no greater than a fourth threshold.
(6) The method according to (3), wherein optimizing the first neural network comprises: optimizing the first neural network using a back-propagation algorithm.
(7) The method according to (6), wherein optimizing the first neural network using a back-propagation algorithm comprises: optimizing the first neural network using stochastic gradient descent.
(8) The method according to (1), wherein the method further comprises: extracting a plurality of third image blocks from a second image, wherein the second image is the first image after downscaling, and the resolution of the first image is higher than the resolution of the second image; inputting the plurality of third image blocks into a second neural network trained to learn the positional relationship between an input image and the pupil, to obtain a third processing result of the second neural network for the plurality of third image blocks; and, according to the third processing result output by the second neural network, determining a second positioning image block from the plurality of third image blocks and localizing the pupil using the second positioning image block, wherein the second positioning image block is the third image block whose center is closest to the pupil center; and wherein extracting the plurality of first image blocks from the first image containing the pupil comprises: extracting the plurality of first image blocks around the position of the second positioning image block in the first image, with that position as the center.
(9) The method according to (8), wherein the second neural network comprises a convolutional neural network.
(10) The method according to (2), wherein the convolutional neural network comprises a convolutional layer and at least one of: a pooling layer, a fully connected layer, and an activation layer.
(11) The method according to (8), wherein the method further comprises training the second neural network, the training comprising: inputting into the second neural network a plurality of fourth image blocks extracted from a downscaled sample image in which the pupil position has been labeled, to obtain a fourth processing result; and optimizing the second neural network such that: in response to the distance between a fourth image block and the labeled pupil being no greater than a fifth threshold, the obtained fourth processing result is greater than a sixth threshold; and in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold, the obtained fourth processing result is less than a seventh threshold, wherein the sixth threshold is greater than or equal to the seventh threshold.
(12) The method according to (8), wherein the size of the fourth image blocks is the same as the size of the third image blocks.
(13) The method according to (11), wherein making the obtained fourth processing result less than the seventh threshold in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold comprises: making the obtained fourth processing result less than the seventh threshold in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold and no greater than an eighth threshold.
(14) The method according to (11), wherein optimizing the second neural network comprises: optimizing the second neural network using a back-propagation algorithm.
(15) The method according to (14), wherein optimizing the second neural network using a back-propagation algorithm comprises: optimizing the second neural network using stochastic gradient descent.
(16) The method according to (1), wherein the extraction comprises: extraction using a sliding-window method.
(17) The method according to (3), wherein the training comprises iterating multiple times over the same sample set.
(18) The present invention also provides an apparatus for pupil localization, the apparatus comprising: a first extraction module configured to extract a plurality of first image blocks from a first image containing a pupil; a first input module configured to input the plurality of first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, to obtain a first processing result of the first neural network for the plurality of first image blocks; a first determination module configured to determine a first positioning image block from the plurality of first image blocks according to the first processing result output by the first neural network; and a first localization module configured to localize the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
(19) The apparatus according to (18), wherein the first neural network comprises a convolutional neural network.
(20) The apparatus according to (18), wherein the first neural network is trained by: inputting into the first neural network a plurality of second image blocks extracted from a sample image in which the pupil position has been labeled, to obtain a second processing result; and optimizing the first neural network such that: in response to the distance between a second image block and the labeled pupil being no greater than a first threshold, the obtained second processing result is greater than a second threshold; and in response to the distance between a second image block and the labeled pupil being greater than the first threshold, the obtained second processing result is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
(21) The apparatus according to (20), wherein the size of the second image blocks is the same as the size of the first image blocks.
(22) The apparatus according to (20), wherein making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold comprises: making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold and no greater than a fourth threshold.
(23) The apparatus according to (20), wherein optimizing the first neural network comprises: optimizing the first neural network using a back-propagation algorithm.
(24) The apparatus according to (23), wherein optimizing the first neural network using a back-propagation algorithm comprises: optimizing the first neural network using stochastic gradient descent.
(25) The apparatus according to (18), wherein the apparatus further comprises: a second extraction module configured to extract a plurality of third image blocks from a second image, wherein the second image is the first image after downscaling, and the resolution of the first image is higher than the resolution of the second image; a second input module configured to input the plurality of third image blocks into a second neural network trained to learn the positional relationship between an input image and the pupil, to obtain a third processing result of the second neural network for the plurality of third image blocks; a second determination module configured to determine a second positioning image block from the plurality of third image blocks according to the third processing result output by the second neural network; and a second localization module configured to localize the pupil using the second positioning image block; wherein the first extraction module is further configured to extract the plurality of first image blocks around the position of the second positioning image block in the first image, with that position as the center.
(26) The apparatus according to (25), wherein the second neural network comprises a convolutional neural network.
(27) The apparatus according to (19), wherein the convolutional neural network comprises a convolutional layer and at least one of: a pooling layer, a fully connected layer, and an activation layer.
(28) The apparatus according to (25), wherein the second neural network is trained by: inputting into the second neural network a plurality of fourth image blocks extracted from a downscaled sample image in which the pupil position has been labeled, to obtain a fourth processing result; and optimizing the second neural network such that: in response to the distance between a fourth image block and the labeled pupil being no greater than a fifth threshold, the obtained fourth processing result is greater than a sixth threshold; and in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold, the obtained fourth processing result is less than a seventh threshold, wherein the sixth threshold is greater than or equal to the seventh threshold.
(29) The apparatus according to (25), wherein the size of the fourth image blocks is the same as the size of the third image blocks.
(30) The apparatus according to (28), wherein making the obtained fourth processing result less than the seventh threshold in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold comprises: making the obtained fourth processing result less than the seventh threshold in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold and no greater than an eighth threshold.
(31) The apparatus according to (28), wherein optimizing the second neural network comprises: optimizing the second neural network using a back-propagation algorithm.
(32) The apparatus according to (31), wherein optimizing the second neural network using a back-propagation algorithm comprises: optimizing the second neural network using stochastic gradient descent.
(33) The apparatus according to (20), wherein the training comprises iterating multiple times over the same sample set.
(34) A neural-network training method for pupil localization, the method comprising: obtaining a sample image containing a pupil whose position has been labeled; extracting a plurality of image blocks from the sample image; and optimizing a neural network such that: in response to the distance between an image block input into the neural network and the labeled pupil being no greater than a first threshold, the output value is greater than a second threshold; and in response to the distance between an image block input into the neural network and the labeled pupil being greater than the first threshold (i.e., a negative sample is input), the output value is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
(35) The method according to (34), wherein optimizing the neural network comprises: optimizing the neural network using stochastic gradient descent.
(36) The method according to (34), wherein the neural network is a convolutional neural network.
(37) Another aspect of the present invention provides a computer system for pupil localization, comprising: one or more processors; one or more computer-readable media; and computer program instructions stored on the computer-readable media for execution by at least one of the one or more processors, the computer program instructions comprising: computer program instructions for extracting a plurality of first image blocks from a first image containing a pupil; computer program instructions for inputting the plurality of first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, to obtain a first processing result of the first neural network for the plurality of first image blocks; and computer program instructions for determining, according to the first processing result output by the first neural network, a first positioning image block from the plurality of first image blocks and localizing the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
(38) Another aspect of the present invention provides a computer-readable storage medium for pupil localization, the computer-readable storage medium storing computer program instructions executable by at least one processor, the computer program instructions comprising computer program instructions for performing the steps of the method according to any one of (1) to (17).
According to embodiments of the present invention, the position of the pupil can be located accurately while reducing the influence of interfering factors as much as possible.
Description of the drawings
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of exemplary embodiments thereof, taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows an environment suitable for implementing embodiments of the present invention;
Fig. 2 shows another environment suitable for implementing embodiments of the present invention;
Fig. 3 shows a schematic flow chart of a method for pupil localization according to an embodiment of the present invention;
Fig. 4 shows a schematic flow chart of a method for training the first neural network according to an embodiment of the present invention;
Fig. 5 shows a schematic flow chart of a method for coarse localization according to an embodiment of the present invention;
Fig. 6 shows a schematic flow chart of a method for pupil localization according to an embodiment of the present invention;
Fig. 7 shows a schematic flow chart of a method for training a low-resolution neural network according to an embodiment of the present invention; and
Fig. 8 shows a schematic block diagram of an apparatus for pupil localization according to an embodiment of the present invention.
Throughout the drawings, identical or similar reference numerals denote identical or similar elements.
Detailed description of embodiments
Preferred embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show preferred embodiments of the disclosure, it should be understood that the present invention can be implemented in various other forms and should not be limited to the embodiments described below. These embodiments are provided so that the disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a block diagram of an exemplary environment suitable for implementing embodiments of the present invention. The environment may be a terminal 100 with simple computing capability or a node 100 with complex computing capability.
The environment includes, for example, computer-readable media 101. These media may be volatile or non-volatile, removable or non-removable, as long as they can be accessed by a node with computing capability.
The environment may also include, for example, one or more program modules 103, which are typically used to perform the functions and/or methods of the embodiments described herein.
The environment may also include, for example, one or more modules 105 with computing capability.
The environment may execute the methods and/or functions described in the embodiments of the present invention independently, or it may cooperate with an external device 107 to complete the corresponding methods and/or functions.
Of course, those skilled in the art will understand that the terminal 100 or computing node 100 may be, for example, a server or a computer, or an intelligent terminal such as an electronic lock, a smartphone, or a smart tablet; the present invention is not limited in this respect.
Fig. 2 shows a block diagram of an exemplary environment suitable for implementing embodiments of the present invention. The environment includes a terminal 201 and a computing node 203. The environment may be, for example, a cloud environment, in which case the computing node 203 is, for example, a cloud server. The environment may also be another communication system, in which case the computing node 203 is, for example, a mobile terminal with computing capability, such as a smartphone, a smart tablet, or a personal computer.
The mechanisms and principles of embodiments of the present invention are described in detail below. Unless otherwise stated, the term "based on" used below and in the claims means "based at least in part on". The term "comprising" denotes open-ended inclusion, i.e., "including but not limited to". The term "plurality" means "two or more". The term "one embodiment" means "at least one embodiment". The term "another embodiment" means "at least one other embodiment". Definitions of other terms are given in the description below.
Fig. 3 shows a schematic flow chart of a method 300 for pupil localization according to an exemplary embodiment of the present invention. The steps of method 300 are described in detail with reference to Fig. 3.
Method 300 starts at step 301, in which a plurality of first image blocks are extracted from a first image containing a pupil. In embodiments of the present invention, the first image is, for example, an original image or a processed image. For example, an original image acquired from a camera may be used directly as the first image, or the original image may first be processed and the result used as the first image.
In one embodiment of the present invention, a sliding-window method may be used, for example, to extract the plurality of first image blocks from the first image. The size of the sliding window can be designed according to the resolution of the first image. The size of the pupil may also be taken into account when designing the size of the sliding window. For example, a sliding window of 128 × 128 pixels may be used to extract the plurality of first image blocks from the first image. It should be understood that rectangular image blocks are used in the description of the embodiments for convenience, but the present invention does not limit the shape of the image blocks; an image block may also be, for example, circular or elliptical.
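The sliding-window extraction described above might look like the following minimal Python sketch, assuming a grayscale NumPy image; the window size, stride, and function name are illustrative choices, not values prescribed by the patent:
```python
import numpy as np

def extract_blocks(image: np.ndarray, win: int = 128, stride: int = 16):
    """Slide a win x win window over a grayscale image and return
    (block, center) pairs. Window size and stride are illustrative."""
    blocks = []
    h, w = image.shape[:2]
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            block = image[top:top + win, left:left + win]
            center = (top + win // 2, left + win // 2)  # (row, col) of the block center
            blocks.append((block, center))
    return blocks
```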
In one embodiment of the present invention, at least one of the plurality of first image blocks contains, for example, the complete pupil. For example, the center of at least one first image block substantially coincides with, or lies close to, the center of the pupil.
In step 303, the plurality of first image blocks are input into a first neural network that has been trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the plurality of first image blocks. In embodiments of the present invention, the output of the first neural network is, for example, a confidence value that expresses the positional relationship between the pupil and the input image: the higher the confidence value, the closer the pupil is to the center of the image. Of course, it will be understood that the positional relationship between the pupil and the input image can also be expressed in other ways, for example as a vector. It will be appreciated that the training of the first neural network can, for example, be completed before use.
In one embodiment of the present invention, the training of the first neural network involved in step 303 may include, for example, the following steps: inputting into the first neural network a plurality of second image blocks extracted from a sample image in which the pupil position has been labeled, to obtain a second processing result; and optimizing the first neural network such that, in response to the distance between a second image block and the labeled pupil being no greater than a first threshold, the obtained second processing result is greater than a second threshold, and, in response to the distance between a second image block and the labeled pupil being greater than the first threshold, the obtained second processing result is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
Specifically, in an embodiment of the present invention, method 400 shown in Fig. 4 is provided as an example of a method for training the first neural network. Method 400 includes the following steps.
In step 401, a sample image is obtained in which the position of the pupil has been labeled. In this embodiment, the position of the pupil in the sample image can, for example, be labeled manually. Of course, it should be understood that the pupil position may also be labeled by other methods. Moreover, the label may be attached to the sample image or stored separately from it; the present invention is not limited in this respect. In one embodiment of the present invention, the resolution of the sample image is, for example, the same as that of the first image. In another embodiment of the present invention, the sample image undergoes, for example, the same processing as the first image. It will be appreciated that the sample image may be acquired directly or obtained from an existing database.
In step 403, a plurality of second image blocks are extracted from the sample image. In this embodiment, the second image blocks may, for example, be extracted using a sliding-window method. In an embodiment of the present invention, the size of the extracted second image blocks is, for example, the same as the size of the first image blocks. In an embodiment of the present invention, at least one second image block contains, for example, the complete pupil, or the center of at least one second image block substantially coincides with, or lies close to, the center of the pupil.
In step 405, in response to the distance between a second image block and the labeled pupil being no greater than the first threshold, the second image block is taken as a positive sample. It will be appreciated that positive samples can be represented in many ways, for example by the value 1 or by other means; the present invention is not limited in this respect.
In an embodiment of the present invention, the first threshold may, for example, be 0. This requires that only second image blocks whose center coincides exactly with the pupil center constitute positive samples, and all other second image blocks are treated as negative samples. It will be appreciated that the first threshold may also take a small value, so that second image blocks whose centers lie near the pupil center can also serve as positive samples. Setting the first threshold to 0 can make the pupil localization more accurate, because the neural network is then trained to express the positional relationship between the input image and the pupil more precisely. Of course, under constrained conditions the localization may be allowed a certain error, in which case the first threshold can be set to a small value. Those skilled in the art can set different first thresholds according to the specific practical application, and the present invention is not limited in this respect. In either case, training the neural network reduces the influence of other interfering factors and localizes the pupil more precisely. It will be appreciated that when the first threshold is set to 0 the criterion "no greater than the first threshold" may be used, and when the first threshold is set to a small value the criterion "no greater than the first threshold" also covers being strictly less than the first threshold.
In step 407, in response to the distance between a second image block and the labeled pupil being greater than the first threshold, the second image block is taken as a negative sample. Likewise, negative samples can be represented in many ways, for example by the value 0 or by a negative value; the present invention does not limit the specific representation.
Those skilled in the art will understand that positive and negative samples are one way of expressing the classification; other expressions, such as a first class and a second class or a valid class and an invalid class, may also be used, and the present invention is not limited in this respect. Moreover, classifying positive and negative samples by distance is only one implementation; other criteria, such as the area of the valid pupil region contained in the block, may also be used, and the present invention is not limited in this respect.
In one embodiment of the present invention, in order to further reduce the processing load of the system, the negative samples may be filtered or restricted to some extent. For example, a fourth threshold may be set: when the distance between a second image block and the pupil is greater than the first threshold and less than or equal to the fourth threshold, the second image block is taken as a negative sample. This avoids training on second image blocks that contain too little of the pupil, or none of it at all, as negative samples, which reduces unnecessary processing and further improves efficiency.
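The labeling rule of steps 405–407, together with the fourth-threshold filtering just described, can be sketched as follows; the Euclidean distance metric and the concrete threshold values are assumptions for illustration only:
```python
import math

def label_block(block_center, pupil_center, first_thr=0.0, fourth_thr=5.0):
    """Return 1 (positive), 0 (negative), or None (discarded) for one block.
    first_thr / fourth_thr play the role of the first and fourth thresholds."""
    d = math.dist(block_center, pupil_center)
    if d <= first_thr:
        return 1            # center (nearly) coincides with the pupil center
    if d <= fourth_thr:
        return 0            # near miss: kept as a negative sample
    return None             # too far from the pupil: not used for training
```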
In step 409, the first neural network is optimized such that, when a positive sample is input, the output value is greater than the second threshold and, when a negative sample is input, the output value is less than the third threshold. In embodiments of the present invention, the second threshold is greater than, or equal to, the third threshold.
In an embodiment of the present invention, a back-propagation (BP) algorithm may, for example, be used to optimize the neural network. In another embodiment of the present invention, stochastic gradient descent may, for example, be used to optimize the first network.
In an embodiment of the present invention, a thresholding operation may, for example, be applied to the output of the first neural network. In one example, a binarization operation may be used with the threshold set to 0.5: if the output of the first neural network is greater than 0.5 the value is taken as 1, and if the output is less than 0.5 the value is taken as 0. Correspondingly, positive samples correspond to 1 and negative samples to 0, which makes it more convenient to optimize the first neural network.
In an embodiment of the present invention, the first neural network may, for example, be optimized by minimizing a loss of the form $\sum_{i} \ell\big(F(x_i; W), y_i\big)$ over $W$ (for example a squared-error term $\sum_i \big(F(x_i; W) - y_i\big)^2$), where $W$ denotes the parameters of the convolutional neural network, $F(x_i; W)$ is the network output, $x_i$ denotes the i-th second image block, and $y_i$ indicates whether the i-th second image block is a positive or a negative sample, for example 1 for a positive sample and 0 for a negative sample. The parameters $W$ are optimized so that this expression is minimized.
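The optimization described above could be implemented, for example, as in the following PyTorch-style sketch; the framework, the mean-squared-error loss on the sigmoid output, and the hyperparameters are assumptions made for illustration and are not prescribed by the patent:
```python
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 0.001):
    """Optimize the network so positive blocks score high and negative blocks low."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.MSELoss()                 # squared error between output and 0/1 label
    for _ in range(epochs):                  # iterate repeatedly over the same sample set
        for blocks, labels in loader:        # blocks: (B,1,H,W); labels: (B,1) floats in {0.0, 1.0}
            optimizer.zero_grad()
            scores = model(blocks)           # sigmoid outputs in [0, 1]
            loss = criterion(scores, labels)
            loss.backward()                  # back-propagation
            optimizer.step()                 # stochastic gradient descent update
    return model
```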
It can be seen that the first neural network optimized in this way learns the positional relationship between the input image and the pupil, reduces the influence of interfering factors, and accurately determines the position of the pupil.
In an embodiment of the present invention, method 400 can, for example, iterate multiple times over the same sample set, so that the algorithm converges better and the network is trained to give more accurate results.
In an embodiment of the present invention, the first neural network may, for example, be a convolutional neural network. In another embodiment of the present invention, the convolutional neural network may, for example, include a convolutional layer and at least one of: a pooling layer, a fully connected layer, and an activation layer.
In one embodiment of the present invention, the convolutional layer may, for example, be used to learn features, such as edges. The pooling layer may, for example, make the features learned by the convolutional layer robust to small deformations and/or translations. The fully connected layer may, for example, unfold the convolutional features into a vector. The activation layer may, for example, constrain the output value to the range [0, 1].
In an embodiment of the present invention, a Sigmoid activation function may, for example, be used to implement the activation layer, i.e., the function $\sigma(x) = \frac{1}{1 + e^{-x}}$. It will be appreciated that the input variable x of the activation function can, for example, be obtained as the inner product of the vector output by the fully connected layer and a weight vector.
It will be appreciated that the activation layer may also be omitted and the vector obtained from the fully connected layer used directly for subsequent processing.
Returning now to Fig. 3, step 305 is performed after step 303. In step 305, the first positioning image block is determined from the plurality of first image blocks according to the first processing result output by the first neural network. As noted above, the first processing result output by the first neural network can take various forms, for example a single value or a vector. In one embodiment, when the output of the first neural network is a single value, its range may be [0, 1] or a larger range; the present invention is not limited in this respect.
In step 307, the pupil is localized using the first positioning image block. In this embodiment, the first positioning image block is the first image block whose center is closest to the pupil center.
In an embodiment of the present invention, if the output of the first neural network is a single value and a larger value indicates that the center of the input image block is closer to the center of the pupil, the image block corresponding to the maximum output value is taken as the first positioning image block.
In an embodiment of the present invention, the center of the first positioning image block can be taken as the center of the pupil in order to localize the pupil.
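Steps 305–307 can be sketched as follows, assuming the network returns one confidence value per block and that block centers were recorded during extraction (the function and variable names are illustrative):
```python
def locate_pupil(scores, centers):
    """Pick the block with the highest confidence and use its center as the pupil center."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return centers[best]      # (row, col) of the first positioning image block
```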
In one embodiment of the present invention, on the basis of method 300 shown in Fig. 3, pupil localization can be divided into two stages: a first stage of coarse localization and a second stage of fine localization. This improves the efficiency of localization and better meets the demands of real-time localization.
For coarse localization, reference may be made to method 500 shown in Fig. 5. Method 500 includes the following steps.
In step 501, a plurality of third image blocks are extracted from a second image. As noted above, various extraction methods may be used; in one example, a sliding-window method is used to extract the plurality of third image blocks from the second image. In this embodiment, the second image is the first image after downscaling; that is, the second image is obtained by downscaling the first image. Any image-downscaling technique may be used, and the present invention is not limited in this respect; for example, bilinear interpolation may be used to downscale the first image into the second image. Because it has been downscaled, the resolution of the second image is lower than that of the first image. It will be appreciated that, since the resolution of the second image is lower than that of the first image, the resolution of the third image blocks is also lower than that of the first image blocks, and accordingly the third image blocks are smaller than the first image blocks; for example, the third image blocks may be 20 × 20 pixels. Downscaling the first image not only reduces the amount of computation and speeds up processing, but also reduces the interference of noise to a certain extent. During image acquisition there is often interference such as camera motion, illumination changes, contact lenses, occlusion, and noise, and downscaling the first image into the second image can reduce these interferences to some degree. Moreover, given the complexity of neural networks, reducing the processing load can greatly improve the processing speed and thus meet the demands of real-time localization.
In step 503, the plurality of third image blocks obtained in step 501 are input into a second neural network trained to learn the positional relationship between an input image and the pupil, so as to obtain a third processing result of the second neural network for the plurality of third image blocks. In embodiments of the present invention, the second neural network and the first neural network may be neural networks of the same type or of different types. It will be appreciated that the training of the second neural network can, for example, be completed before use.
In an embodiment of the present invention, the second neural network may, for example, also be a convolutional neural network, and it may also include a convolutional layer and at least one of: a pooling layer, a fully connected layer, and an activation layer.
In an embodiment of the present invention, the number of layers of the second neural network is, for example, smaller than that of the first neural network. Since the images processed by the second neural network have lower resolution and smaller size, it can be designed as a relatively simple neural network, whereas the images processed by the first neural network have higher resolution and larger size, so it can be designed as a more complex neural network. In one example, the first neural network includes, for example, 2 convolutional layers, 2 pooling layers, 1 fully connected layer, and 1 activation layer, and the second neural network includes, for example, 1 convolutional layer, 1 pooling layer, 1 fully connected layer, and 1 activation layer.
The training of the second neural network involved in step 503 may, for example, include the following steps: inputting into the second neural network a plurality of fourth image blocks extracted from a downscaled sample image in which the pupil position has been labeled, to obtain a fourth processing result; and optimizing the second neural network such that, in response to the distance between a fourth image block and the labeled pupil being no greater than a fifth threshold, the obtained fourth processing result is greater than a sixth threshold, and, in response to the distance between a fourth image block and the labeled pupil being greater than the fifth threshold, the obtained fourth processing result is less than a seventh threshold, wherein the sixth threshold is greater than or equal to the seventh threshold.
In an embodiment of the present invention, a method similar to method 400 may, for example, be used to train the second neural network. When training the second neural network, since the resolution of the second image, and of the third image blocks extracted from it, is relatively low, low-resolution samples are also used for training; alternatively, the same samples may be downscaled and then used to train the second neural network. In another embodiment of the present invention, the size of the image blocks used to train the second neural network is, for example, the same as that of the third image blocks. In another embodiment of the present invention, the resolution of the samples used to train the second neural network is, for example, the same as that of the second image. When training the second neural network, the same samples may also, for example, be iterated over multiple times.
In an embodiment of the present invention, when training the second neural network, thresholds different from those used when training the first neural network may, for example, be used for classifying positive and negative samples.
For other details of the training of the second neural network, reference may be made to the training of the first neural network, which is not repeated here.
In step 505, a second positioning image block is determined from the plurality of third image blocks according to the third processing result output by the second neural network. Similarly, the present invention does not limit the concrete form of the output of the second neural network.
In step 507, the pupil is localized using the second positioning image block. The second positioning image block is the third image block whose center is closest to the pupil center.
When method 500 shown in Fig. 5 constitutes the coarse localization of the first stage, method 300 shown in Fig. 3 constitutes the fine localization of the second stage. In this case, step 301 extracts the plurality of first image blocks from the first image around the position of the second positioning image block in the first image, with that position as the center. Specifically, since the position of the second positioning image block found during coarse localization is a position in the second image, its position in the first image can, for example, be obtained by a coordinate transformation, whose scale factor can be determined from the scale factor used when downscaling the first image.
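One plausible form of the coordinate transformation described above, assuming a single isotropic scale factor, is sketched below; the rounding and the naming are illustrative assumptions:
```python
def to_full_resolution(coarse_center, scale: float):
    """Map a center found in the downscaled (second) image back to first-image coordinates.
    scale is the factor by which the first image was shrunk (e.g. 0.2 for 5x smaller)."""
    row, col = coarse_center
    return (round(row / scale), round(col / scale))
```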
Combining coarse localization with fine localization reduces the processing load and improves the processing speed while ensuring localization accuracy, so that the demands of real-time processing and of improved localization accuracy are met at the same time. For example, in scenarios such as human-computer interaction and automatic control, pupil localization must be both real-time and accurate, and the scheme described in the above embodiments can be applied in such scenarios.
The above embodiments of the present invention can be referenced to and combined with one another to obtain further embodiments. For example, Fig. 6 shows an embodiment obtained by combining the above embodiments with reference to one another. With reference to Fig. 6, a method 600 for pupil localization provided by one embodiment of the present invention is described in detail.
In step 601, the acquired image is downscaled into a low-resolution image to be processed. The downscaling uses, for example, bilinear interpolation to downsample the acquired image according to a specified scale factor and reduce the image size. The acquired image may be, for example, an image obtained directly from acquisition or a processed image.
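The bilinear downsampling of step 601 could, for example, be implemented with OpenCV as sketched below; the scale factor is a placeholder, not a value fixed by the patent:
```python
import cv2

def downscale(image, scale: float = 0.2):
    """Downsample the acquired image by `scale` using bilinear interpolation."""
    return cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
```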
In step 603, a plurality of low-resolution image blocks are extracted from the low-resolution image using a sliding-window method. In this example, the low-resolution image blocks are, for example, 25 × 25 image blocks, and the center of at least one low-resolution image block lies close to the center of the pupil.
In step 605, the image blocks obtained in step 603 are input into a trained low-resolution neural network. In this example, the low-resolution neural network is trained, for example, using method 700 shown in Fig. 7.
In step 701, an acquired eye image is obtained as a sample image. In this example, the acquired eye image is, for example, collected directly or obtained by processing a directly acquired image.
In step 703, the sample image is downscaled.
In step 705, the pupil position is labeled in the downscaled sample image. In this example, the position of the pupil may, for example, be labeled manually, for instance by tracing the edge of the pupil and/or marking its center. This example does not limit how the pupil-position label is represented or stored.
In step 707, a plurality of image blocks are extracted from the downscaled sample image. In this example, a sliding-window method may, for example, be used to extract the image blocks from the sample image.
In step 709, image blocks whose distance from the pupil is less than or equal to 1 pixel are taken as positive samples. It will be appreciated that this example does not require the threshold to be exactly 1 pixel; other values may also be set. In this example, for instance, 9 positive samples may be generated.
In step 711, image blocks whose distance from the pupil is greater than 1 pixel and less than 5 pixels are taken as negative samples. Likewise, this example does not require the thresholds to be exactly 1 pixel and 5 pixels; other values may also be set. Image blocks whose distance from the pupil is greater than 5 pixels can be regarded as too far from the pupil to be of much help for localization, so, in order to reduce the processing load, such image blocks are not considered further. In this example, for instance, 40 negative samples may be generated.
It will be appreciated that although distance is used as the classification criterion for positive and negative samples in this example, this is not a limitation; other criteria, such as the number of valid pupil pixels contained in the block or the position of the pupil within the image block, may also be used.
In step 713, the positive and negative samples are input into a convolutional neural network and the convolutional neural network is trained. In this example, the convolutional neural network has, for example, 1 convolutional layer, 1 pooling layer, 1 fully connected layer, and 1 activation layer. The kernel size of the convolutional layer is, for example, set to 5 × 5, with a stride of 1 and a padding of 0. The kernels of the convolutional layer may, for example, be initialized with a Gaussian function; of course, all-zero or all-one initialization is also possible, and the present invention is not limited in this respect. The window size of the pooling layer is, for example, 4 × 4, with a stride of 4 and a padding of 0. The number of neurons in the fully connected layer is 64, and the activation layer uses the Sigmoid activation function. Training uses stochastic gradient descent, for example with a fixed learning rate of 0.001 and a batch size of 100, and optimizes the network by minimizing a loss of the form $\sum_{i} \ell\big(F(x_i; W), y_i\big)$ over $W$, where $W$ denotes the parameters of the convolutional neural network, $x_i$ denotes the i-th image block, and $y_i$ indicates whether the i-th image block is a positive or a negative sample: for a positive sample $y_i$ is 1, and for a negative sample $y_i$ is 0.
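A PyTorch-style sketch of a network with the layer settings just listed (one 5 × 5 convolution, one 4 × 4 pooling with stride 4, a 64-neuron fully connected layer, and a sigmoid output) for 25 × 25 inputs is given below; the use of max pooling, the number of convolution kernels, and the final 64-to-1 projection are assumptions for illustration:
```python
import torch
from torch import nn

class LowResNet(nn.Module):
    """1 conv + 1 pool + 1 fully connected layer + sigmoid, sized for 25x25 inputs.
    The number of conv kernels (6) and the 64->1 output projection are assumptions."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=0)   # 25x25 -> 21x21
        self.pool = nn.MaxPool2d(kernel_size=4, stride=4, padding=0)      # 21x21 -> 5x5
        self.fc = nn.Linear(6 * 5 * 5, 64)                                # 64-neuron FC layer
        self.out = nn.Linear(64, 1)                                       # inner product with a weight vector
        self.act = nn.Sigmoid()                                           # confidence in [0, 1]

    def forward(self, x):                    # x: (B, 1, 25, 25)
        x = self.pool(self.conv(x))
        x = torch.flatten(x, start_dim=1)
        return self.act(self.out(self.fc(x)))

# Optimizer as described above: SGD with a fixed learning rate of 0.001 (batch size 100).
# optimizer = torch.optim.SGD(LowResNet().parameters(), lr=0.001)
```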
In this example, method 700 can perform multiple iterations using the image blocks obtained in step 707; for example, 1000 iterations may be carried out over the same sample set, so that the algorithm converges better.
Training with method 700 yields the low-resolution neural network. It will be appreciated that the trained network is called a low-resolution neural network mainly because its input consists of lower-resolution image blocks; this distinguishes it from the high-resolution neural network described later, which is trained with higher-resolution image blocks as input. High resolution here is relative, not absolute.
Returning to Fig. 6, step 607 is performed after step 605. In step 607, the output of the low-resolution neural network is obtained. In this example, the output is, for example, a set of values in the range [0, 1], one for each input.
In step 609, the low-resolution image block with the highest output value is taken as the positioning image block and the pupil is coarsely localized. Since the low-resolution image blocks were extracted from the downscaled image, the center of the positioning image block is transformed, according to the scale factor, into a position in the image before downscaling, which serves as the coarse localization of the pupil. It can be seen that downscaling the image reduces the processing load and, to a certain extent, the interference of noise; then, by using the trained low-resolution neural network and exploiting the powerful feature-representation ability of convolutional neural networks, the interference of noise can be further reduced and the processing load decreased, so that the accuracy of localization is substantially improved while the processing speed is guaranteed.
At the same time, because the coordinate transformation involves scaling up, errors can be introduced, and the pupil position obtained by coarse localization is not necessarily accurate. In this example, the fine-localization stage of steps 611 to 617 further improves the localization accuracy; and because the fine localization is based on the coarse localization, it does not introduce an excessive processing load and can still meet real-time requirements.
In step 611, a plurality of image blocks are extracted from the high-resolution image to be processed, centered on the coarse localization of the pupil obtained in step 609. In this example, the high-resolution image to be processed is simply the image of step 601 before downscaling. It will be appreciated that high resolution and low resolution are only relative, not absolute. In this example, the image blocks are extracted, for example, by a sliding-window method, and the size of the extracted image blocks is, for example, 128 × 128. Because a coarse localization is already available, only a small number of image blocks need to be extracted, which allows accurate localization while greatly reducing the processing load.
In step 613, the image blocks obtained in step 611 are input into a trained high-resolution neural network. In this example, a method similar to method 700 may be used to train the high-resolution neural network; for details, reference may be made to method 700, which is not repeated here. Since the high-resolution neural network needs to process higher-resolution image blocks, more layers can be designed to meet the processing demand. For example, the high-resolution neural network may include 2 convolutional layers, 2 pooling layers, 1 fully connected layer, and 1 activation layer. The kernel size of the first convolutional layer is, for example, 21 × 21 with 9 kernels, a stride of 1, and a padding of 0; the kernel size of the second convolutional layer is, for example, 25 × 25 with 7 kernels, a stride of 1, and a padding of 0. The window size of the first pooling layer is, for example, 3 × 3 with a stride of 3 and a padding of 0, and the window size of the second pooling layer is, for example, 2 × 2 with a stride of 2 and a padding of 0. The number of neurons in the fully connected layer is 128, and the activation layer uses the Sigmoid activation function.
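A corresponding PyTorch-style sketch of the high-resolution network with the layer settings just listed, sized for 128 × 128 inputs, is given below; as before, the use of max pooling and the final 128-to-1 projection are assumptions for illustration:
```python
import torch
from torch import nn

class HighResNet(nn.Module):
    """2 conv + 2 pool + 1 fully connected layer + sigmoid, sized for 128x128 inputs.
    Max pooling and the 128->1 output projection are assumptions for illustration."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 9, kernel_size=21, stride=1, padding=0)  # 128 -> 108
        self.pool1 = nn.MaxPool2d(kernel_size=3, stride=3, padding=0)      # 108 -> 36
        self.conv2 = nn.Conv2d(9, 7, kernel_size=25, stride=1, padding=0)  # 36 -> 12
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0)      # 12 -> 6
        self.fc = nn.Linear(7 * 6 * 6, 128)                                # 128-neuron FC layer
        self.out = nn.Linear(128, 1)
        self.act = nn.Sigmoid()

    def forward(self, x):                    # x: (B, 1, 128, 128)
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2(x))
        x = torch.flatten(x, start_dim=1)
        return self.act(self.out(self.fc(x)))
```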
In training high-resolution neural network, such as also 128 × 128 image block is used to be trained.
In training high-resolution neural network, without again reducing sample image, sample image extraction figure is directly used As block is trained.
In training high-resolution neural network, the side different from method 700 can be used for the classification of positive negative sample Formula and/or different threshold values., can be using the image block for being 0 with interpupillary distance as positive sample for example, in this example, it will be with Interpupillary distance is more than 0 and less than the image block of 5 pixels as negative sample.In this example, for a sample image, such as Generate 1 positive sample and 8 negative samples.
When training the high-resolution neural network, the network can also be trained, for example, using stochastic gradient descent, for example with a fixed learning rate of 0.001 and a batch size of, for example, 100. The network can be trained, for example, by iterating over the image blocks of the same sample set 15 times.
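Putting the quoted hyper-parameters together, a training loop might look like the following sketch. The binary cross-entropy loss and the tensor shapes are assumptions; the patent names only stochastic gradient descent, the learning rate, the batch size and the iteration count.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_highres_net(model, blocks, labels, epochs=15, lr=0.001, batch_size=100):
    """Train with SGD, fixed learning rate 0.001, batch size 100, 15 passes
    over the sample set, as in this example. Loss choice is an assumption.
    """
    dataset = TensorDataset(blocks, labels)   # blocks: (N,1,128,128), labels: (N,1) floats
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = torch.nn.BCELoss()            # model already ends in a Sigmoid
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()                   # back-propagation (cf. claim 6)
            optimizer.step()
    return model
```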
Returning to Fig. 6, in step 615 the output of the high-resolution neural network is obtained. In this example, the output is, for example, a set of values, one corresponding to each input, with a value range of [0, 1].
Step 617: the high-resolution image block with the highest output value is taken as the positioning image block, and the pupil is finely localized. For fine localization, the center of the positioning image block is used directly as the center of the pupil.
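Steps 615-617 amount to scoring every candidate block and taking the center of the best-scoring one, as in this sketch (helper and parameter names are illustrative):

```python
import torch

def locate_pupil(model, blocks, centers):
    """Score every candidate block and return the center of the best one
    as the pupil position.

    blocks: tensor of shape (N, 1, 128, 128); centers: list of (x, y)
    tuples produced by the sliding-window extraction.
    """
    with torch.no_grad():
        scores = model(blocks).squeeze(1)   # one value in [0, 1] per block
    best = int(torch.argmax(scores))
    return centers[best]                    # block center taken as pupil center
```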
It can be seen that the processing of the fine-localization stage clearly improves the positioning precision; and because fine localization is performed on the basis of the coarse localization, it does not add excessive processing and can still meet real-time requirements.
Fig. 8 shows a schematic block diagram of a device 800 for pupil localization provided according to an embodiment of the present invention. The device 800 includes: a first extraction module 801, configured to extract multiple first image blocks from a first image containing a pupil; a first input module 803, configured to input the multiple first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the multiple first image blocks; a first determining module 805, configured to determine, according to the first processing result output from the first neural network, a first positioning image block from the multiple first image blocks; and a first locating module 807, configured to locate the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
In an embodiment of the present invention, the device 800 may further include, for example: a second extraction module, configured to extract multiple third image blocks from a second image, where the second image is the first image after downscaling, the resolution of the first image being higher than that of the second image; a second input module, configured to input the multiple third image blocks into a second neural network, trained to learn the positional relationship between an input image and the pupil, so as to obtain a third processing result of the second neural network for the multiple third image blocks; a second determining module, configured to determine a second positioning image block from the multiple third image blocks according to the third processing result output from the second neural network; and a second locating module, configured to locate the pupil using the second positioning image block. Correspondingly, the first extraction module 801 is configured to extract the multiple first image blocks around a center given by the position of the second positioning image block in the first image.
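To show how these modules fit together, the following sketch composes the coarse and fine stages end to end. `downscale` and `extract_grid_blocks` are assumed helpers not defined in the patent; the remaining functions refer to the sketches given earlier, and the scale value is illustrative.

```python
import numpy as np
import torch

def to_tensor(blocks):
    """Stack H x W uint8 blocks into an (N, 1, H, W) float tensor in [0, 1]."""
    return torch.from_numpy(np.asarray(blocks, dtype=np.float32) / 255.0).unsqueeze(1)

class PupilLocator:
    """Illustrative composition mirroring the modules of device 800:
    a coarse stage on the downscaled image feeds the fine stage on the
    original image. Helper names and the scale value are assumptions.
    """
    def __init__(self, lowres_net, highres_net, scale=0.25):
        self.lowres_net, self.highres_net, self.scale = lowres_net, highres_net, scale

    def locate(self, image):
        small = downscale(image, self.scale)                          # the "second image"
        blocks, centers = extract_grid_blocks(small)                  # second extraction module
        coarse = locate_pupil(self.lowres_net, to_tensor(blocks), centers)
        center0 = map_coarse_center_to_original(coarse, self.scale)   # back into the first image
        blocks, centers = extract_blocks_around(image, center0)       # first extraction module
        return locate_pupil(self.highres_net, to_tensor(blocks), centers)
```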
For the specific implementation of the device 800 provided in this embodiment, refer to the corresponding method embodiments, which are not repeated here.
For clarity, not all optional units or sub-units included in the device 800 are shown in Fig. 8. All features and operations described in the above method embodiments, and the embodiments obtainable by reference and combination, apply equally to the device 800 and are therefore not repeated here.
Those skilled in the art will understand that the division of the device 800 into units or sub-units is exemplary rather than limiting, and serves to describe its main functions or operations logically. In the device 800, the function of one unit can be realized by multiple units; conversely, multiple units can also be realized by one unit. The present invention is not limited in this respect.
Similarly, those skilled in the art will understand that the units included in the device 800 can be realized in various ways, including but not limited to software, hardware, firmware or any combination thereof; the present invention is not limited in this respect.
The present invention can be a system, a method, a computer-readable storage medium and/or a computer program product. The computer-readable storage medium can be, for example, a tangible device that can hold and store instructions used by an instruction-executing device.
Computer-readable/executable program instructions can be downloaded from the computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device through various communication means. The present invention does not limit the specific programming language used to realize the computer-readable/executable program instructions.
Aspects of the present invention are described herein with reference to flowcharts and/or block diagrams of methods and devices (systems) according to embodiments of the present invention. It should be understood that each box of the flowcharts and/or block diagrams, and combinations of boxes in the flowcharts and/or block diagrams, can be realized by computer-readable/executable program instructions.
Various embodiments of the present invention have been described above. The above description is exemplary rather than exhaustive and is not limited to the disclosed embodiments; the embodiments can be referenced and combined with one another to obtain further embodiments. Many modifications and changes will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments.

Claims (10)

1. A method for pupil localization, the method comprising:
extracting multiple first image blocks from a first image containing a pupil;
inputting the multiple first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the multiple first image blocks; and
determining, according to the first processing result output from the first neural network, a first positioning image block from the multiple first image blocks, and locating the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
2. The method according to claim 1, wherein the first neural network comprises a convolutional neural network.
3. The method according to claim 1, wherein the method further comprises training the first neural network, the training comprising: inputting to the first neural network multiple second image blocks extracted from a sample image in which the pupil position is labeled, to obtain a second processing result; and optimizing the first neural network so that: in response to the distance between a second image block and the labeled pupil being not greater than a first threshold, the obtained second processing result is greater than a second threshold; and in response to the distance between a second image block and the labeled pupil being greater than the first threshold, the obtained second processing result is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
4. The method according to claim 3, wherein the size of the second image blocks is the same as the size of the first image blocks.
5. The method according to claim 3, wherein making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold comprises: making the obtained second processing result less than the third threshold in response to the distance between a second image block and the labeled pupil being greater than the first threshold and not greater than a fourth threshold.
6. The method according to claim 3, wherein optimizing the first neural network comprises: optimizing the first neural network using a back-propagation algorithm.
7. A device for pupil localization, the device comprising:
a first extraction module, configured to extract multiple first image blocks from a first image containing a pupil;
a first input module, configured to input the multiple first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the multiple first image blocks;
a first determining module, configured to determine a first positioning image block from the multiple first image blocks according to the first processing result output from the first neural network; and
a first locating module, configured to locate the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
8. A neural network training method for pupil localization, the method comprising:
obtaining a sample image containing a pupil whose position is labeled;
extracting multiple image blocks from the sample image; and
optimizing the neural network so that: in response to the distance between an image block input to the neural network and the labeled pupil being not greater than a first threshold, the output value is greater than a second threshold; and in response to the distance between an image block input to the neural network and the labeled pupil being greater than the first threshold, the output value is less than a third threshold, wherein the second threshold is greater than or equal to the third threshold.
9. A computer system for pupil localization, comprising:
one or more processors;
one or more computer-readable media; and
computer program instructions stored on the computer-readable media for execution by at least one of the one or more processors, the computer program instructions comprising:
computer program instructions for extracting multiple first image blocks from a first image containing a pupil;
computer program instructions for inputting the multiple first image blocks into a first neural network trained to learn the positional relationship between an input image and the pupil, so as to obtain a first processing result of the first neural network for the multiple first image blocks; and
computer program instructions for determining, according to the first processing result output from the first neural network, a first positioning image block from the multiple first image blocks, and for locating the pupil using the first positioning image block, wherein the first positioning image block is the first image block whose center is closest to the pupil center.
10. A computer-readable storage medium for pupil localization, having computer program instructions stored thereon, the computer program instructions comprising computer program instructions for performing the steps of the method according to any one of claims 1 to 6.
CN201710812073.4A 2017-03-20 2017-09-11 Method and apparatus for Pupil diameter Pending CN108629265A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710167631 2017-03-20
CN2017101676316 2017-03-20

Publications (1)

Publication Number Publication Date
CN108629265A true CN108629265A (en) 2018-10-09

Family

ID=63705764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710812073.4A Pending CN108629265A (en) 2017-03-20 2017-09-11 Method and apparatus for Pupil diameter

Country Status (1)

Country Link
CN (1) CN108629265A (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WOLFGANG FUHL ET AL: "PupilNet: Convolutional Neural Networks for Robust Pupil Detection", arXiv:1601.04902v1 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109934192A (en) * 2019-03-20 2019-06-25 京东方科技集团股份有限公司 Target image localization method and device, Eye-controlling focus equipment
CN112561859A (en) * 2020-11-20 2021-03-26 中国煤炭科工集团太原研究院有限公司 Monocular vision-based steel belt drilling and anchor net identification method and device for anchoring and protecting
TWI817116B (en) * 2021-05-12 2023-10-01 和碩聯合科技股份有限公司 Object positioning method and object positioning system
CN117045194A (en) * 2022-05-07 2023-11-14 苏州健雄职业技术学院 Laser scanning fundus camera pupil positioning system with improved S-curve algorithm
CN117045194B (en) * 2022-05-07 2024-05-17 苏州健雄职业技术学院 Laser scanning fundus camera pupil positioning system with improved S-curve algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181009