CN103218605A - Quick eye locating method based on integral projection and edge detection - Google Patents


Info

Publication number
CN103218605A
CN103218605A · CN2013101198439A · CN201310119843A
Authority
CN
China
Prior art keywords
image, point, gray, value, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101198439A
Other languages
Chinese (zh)
Other versions
CN103218605B (en)
Inventor
路小波
陈伍军
曾维理
杜一君
祁慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201310119843.9A priority Critical patent/CN103218605B/en
Publication of CN103218605A publication Critical patent/CN103218605A/en
Application granted granted Critical
Publication of CN103218605B publication Critical patent/CN103218605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a quick eye locating method based on integral projection and edge detection. The method comprises the following steps: 1, performing gray-level transformation on the detected face image and smoothing and denoising it with a filter; 2, using horizontal integral projection to obtain the approximate position of the eyes; 3, performing edge extraction and binarization on the image from step 1; 4, computing the row complexity and the column complexity to locate the eye positions accurately; and 5, correcting the result to obtain the final eye positions. The method has the advantages of high computation speed, effective suppression of the influence of ornaments in the face image on eye location, fast locating, and stability.

Description

A fast human-eye positioning method based on integral projection and edge detection
Technical field
The invention belongs to the field of feature-point positioning methods in pattern recognition, and in particular relates to a fast human-eye positioning method based on integral projection and edge detection. It proposes a fast and convenient solution mainly aimed at the problem of locating human eyes in an input image, and can locate the eyes quickly and effectively.
Background technology
Computer face recognition has been a very active research field in recent years. Its applications are wide-ranging, including sex and age analysis, identity authentication in security systems, expression analysis, video conferencing, and so on. It mainly comprises several steps: face detection, feature location and extraction, and feature recognition. The human eye is a key feature of the face, and whether it can be located accurately has a tremendous influence on the results of feature extraction and feature recognition.
Based on the gray-level information and edge information of the image, the present invention proposes a fast and effective human-eye detection algorithm. The algorithm can locate the eyes quickly and suppresses well the influence of illumination and ornaments on the location result.
Summary of the invention
The invention provides a concise and highly accurate fast human-eye positioning method based on integral projection and edge detection.
To achieve this goal, the present invention adopts the following technical scheme:
Step 1: Initialization: read in a captured image I containing a human face.
Step 2: Perform face detection on the captured digital image I with the AdaBoost algorithm, and extract the face image F from it.
Step 3: Preprocess the face image F acquired in step 2, as follows:
Step 3.1: Convert the acquired face image F into a gray-level image, and normalize the gray-level image to a W x H image A, where W and H are positive integers denoting, respectively, the number of rows and columns of the normalized W x H image A.
Step 3.2: Smooth and denoise the W x H image A with a Gaussian filter. The concrete method is as follows. First, the gray value of each boundary pixel of the smoothed, denoised image A' is the gray value of the corresponding boundary pixel of the W x H image A before smoothing and denoising. Second, for the gray value of each non-boundary pixel of the W x H image A, take the 3 x 3 Gaussian template centered on that non-boundary pixel and obtain the gray value of the center pixel, that is:

a'(x, y) = [a(x-1, y-1) + 2a(x-1, y) + a(x-1, y+1) + 2a(x, y-1) + 4a(x, y) + 2a(x, y+1) + a(x+1, y-1) + 2a(x+1, y) + a(x+1, y+1)] / 16

In the formula, x denotes the row coordinate and y the column coordinate; a(x, y) is the gray value of point (x, y) in image A, and the other eight terms are the gray values of the eight points of the 3 x 3 grid centered on point (x, y): its upper-left, upper-right, lower-left and lower-right corner points, and the points directly above, directly below, horizontally to the left of and horizontally to the right of point (x, y); a'(x, y) is the gray value of point (x, y) after Gaussian filtering. Traversing all non-boundary pixels of the W x H image A yields the processed image A', whose size is still W x H.
Step 4: Perform horizontal integral projection on image A'. The projection is computed with the following formula:

M(x) = (1/H) Σ_{y=1..H} a'(x, y)

In the formula, M(x) denotes the horizontal integral projection result of row x, a'(x, y) is the gray value at point (x, y) of image A', W is the image height and H is the image width.
From the W row-wise horizontal integral projection results, choose the one with the minimum gray value and denote it M(x0), where x0 is the row number at which the horizontal integral projection formula M(x) attains its minimum. From image A', select the region whose vertical coordinate range is [x0 - floor(d), x0 + floor(d)] as the region to be detected in which the eyes may exist, where floor(d) denotes the value of d rounded down and d is the estimation parameter for the vertical coordinate interval in which the eyes may exist.
Step 5: Perform edge extraction on image A'. The specific implementation process is as follows:
Step 5.1: First, the gray value of each boundary point of the image E corresponding to the gradient magnitude matrix after the gradient computation equals the gray value of the corresponding boundary point of image A' before the gradient computation. Second, for each non-boundary pixel (i, j) of image A', choose a 2 x 2 first-difference template s_x as the horizontal-direction edge detection operator and a 2 x 2 first-difference template s_y as the vertical-direction edge detection operator, and compute the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions, together with the gradient magnitude and gradient direction at the non-boundary pixel (i, j), with the following formulas:

P_x(i, j) = [a'(i+1, j) - a'(i, j) + a'(i+1, j+1) - a'(i, j+1)] / 2
P_y(i, j) = [a'(i, j+1) - a'(i, j) + a'(i+1, j+1) - a'(i+1, j)] / 2
M(i, j) = sqrt(P_x(i, j)^2 + P_y(i, j)^2)
theta(i, j) = arctan(P_y(i, j) / P_x(i, j))

In the formulas, a'(i, j) represents the gray value of the non-boundary pixel (i, j) in image A'; i is the horizontal coordinate and j the vertical coordinate of the non-boundary point; a'(i, j+1), a'(i+1, j) and a'(i+1, j+1) are the gray values of the pixels located at, respectively, the lower-left, upper-right and lower-right corners of the 2 x 2 grid in image A' that has the non-boundary pixel (i, j) as its upper-left element; P_x(i, j) and P_y(i, j) represent, respectively, the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions; M(i, j) represents the gradient magnitude of the non-boundary pixel (i, j), and theta(i, j) represents its gradient direction.
Step 5.2: Discretize the gradient direction value theta(i, j) of each non-boundary pixel obtained in step 5.1, obtaining the new gradient direction value theta'(i, j). Take the 3 x 3 window centered on the non-boundary pixel (i, j) and process the gradient magnitude M(i, j) with the following formula:

M'(i, j) = M(i, j), if M(i, j) >= M_1 and M(i, j) >= M_2; otherwise M'(i, j) = 0

In the formula, M_1 and M_2 represent the gradient magnitudes of the two pixels of the 3 x 3 window centered on the non-boundary pixel (i, j) that lie along the direction theta'(i, j), and M'(i, j) is the gradient magnitude after the above processing. Taking the processed gradient magnitudes M'(i, j) as the central elements and the boundary-point gray values of image A' as the boundary elements, construct a matrix; this matrix corresponds to the produced image E.
Step 6: Binarize the image E obtained in step 5.2. The concrete method is as follows:
Step 6.1: Determine the threshold with the maximum between-class variance (Otsu) method. The process of determining the threshold is as follows:
Collect the gray value of each pixel in image E, taking the highest gray value as the maximum gray level m, where m is an integer, so that the gray-level range of the image consists of the integers in the interval [0, m]. Let the number of pixels with gray level t be n_t; the total number of pixels is then N = n_0 + n_1 + ... + n_m, and the probability of each gray level is p_t = n_t / N. If an integer k divides the gray levels into two groups C_0 = {0, 1, ..., k} and C_1 = {k+1, ..., m}, where C_0 is the group of gray levels less than or equal to k and C_1 is the group of gray levels greater than k and not greater than m, the variance between the two gray-level groups is computed with the following formula:

sigma^2(k) = omega_0 (mu_0 - mu)^2 / (1 - omega_0)

In the formula, mu is the mean gray value of the whole image, mu_0 is the mean value of group C_0, and omega_0 is the probability of group C_0. Take each integer value in the interval [0, m] in turn as k, compute the variance value sigma^2(k) corresponding to each value of k, select the maximum among the m+1 variance values, and take the value of k corresponding to the maximum variance as the threshold T.
Step 6.2: Binarize image E with the threshold T obtained in step 6.1: the gray value of every pixel of image E whose gray value is greater than or equal to the threshold T is set to 255, and the gray value of every pixel below the threshold T is set to 0, yielding the binary image B.
Step 7: Accurately determine the horizontal and vertical positions of the eyes. The specific implementation process is as follows:
Step 7.1: For the binary image B, compute the row and column complexity functions with the following formulas:

R(x) = Σ_{y=2..H} |b(x, y) - b(x, y-1)|,  for x0 - floor(d) <= x <= x0 + floor(d)
C(y) = Σ_{x=x0-floor(d)+1..x0+floor(d)} |b(x, y) - b(x-1, y)|,  for 1 <= y <= H

In the formulas, R(x) represents the row complexity function, C(y) represents the column complexity function, b(x, y) represents the pixel value of the pixel located at (x, y) in image B, and x0 - floor(d) and x0 + floor(d) are the face-region parameters obtained in step 4.
Step 7.2: Apply a known mean filter as a one-dimensional low-pass filter to the row complexity function R(x) and to the column complexity function C(y) respectively, obtaining the new row complexity function R'(x) and the new column complexity function C'(y). The eye coordinates are determined by finding the maximum point of the new row complexity function R'(x) and the maximum points of the new column complexity function C'(y): the computation yields one maximum point x_e of the function R'(x) and two maximum points y_1 and y_2 of the function C'(y), which give the pixels (x_e, y_1) and (x_e, y_2).
Step 7.3: In the new image A' obtained in step 3.2, take the 6 x 6 neighborhoods of the pixel (x_e, y_1) and the pixel (x_e, y_2) obtained in step 7.2 respectively. The two pixels with the minimum gray value within these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray-value pixels are defined as the coordinates of the two eyes, (x_l, y_l) and (x_r, y_r).
Compared with the prior art, the features of the present invention are:
First, before accurately locating the eyes, the algorithm applies integral projection to the image, which roughly determines the region where the eyes may exist and eliminates other interfering regions, for example the influence of the mouth and nose. This greatly reduces the computation time required and at the same time improves the final eye-location result. In addition, because the eye region varies more than the other organs of the face and its edge variation is stronger, the edge-detection approach can identify the eye region simply and effectively.
Brief description of the drawings
Fig. 1 is the flow chart of the fast human-eye location algorithm.
Fig. 2 is the discretization standard chart for the gradient direction angle.
Fig. 3 is the horizontal integral projection of a face image.
Fig. 4 shows an eye region and the edge detection result of that eye region.
Fig. 5 is a schematic diagram of the column complexity function of the eye-region edge detection result.
Fig. 6 is a schematic diagram of the row complexity function of the eye-region edge detection result.
Fig. 7 is a schematic diagram of the column complexity function of the eye-region edge detection result after low-pass filtering.
Fig. 8 is a schematic diagram of the row complexity function of the eye-region edge detection result after low-pass filtering.
Embodiment
In this specific embodiment, the detailed implementation process of the fast human-eye location algorithm is described clearly and completely in conjunction with the accompanying drawings.
A fast human-eye positioning method, characterized in that it is carried out according to the following steps:
Step 1: Initialization: read in a captured image I containing a human face.
Step 2: Perform face detection on the captured digital image I with the AdaBoost algorithm, and extract the face image F from it.
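As a concrete illustration of steps 1 and 2, the sketch below uses OpenCV's Haar-cascade face detector (a Viola-Jones detector trained with AdaBoost) as a stand-in for the AdaBoost detection stage; the cascade file name and the detectMultiScale parameters are ordinary defaults, not values taken from the patent:

import cv2

def detect_face(image_path):
    """Steps 1-2 sketch: read image I, return the first detected face region F."""
    img = cv2.imread(image_path)                      # image I containing a face
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Haar cascade classifier: an AdaBoost-trained (Viola-Jones) face detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                             # take the first face found
    return img[y:y + h, x:x + w]                      # face image F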
Step 3: Preprocess the face image F acquired in step 2, as follows:
Step 3.1: Convert the acquired face image F into a gray-level image. The conversion is computed with the following formula:

Y = 0.299 R + 0.587 G + 0.114 B

In the formula, Y is the brightness value of the processed pixel, and R, G, B are the relative intensities of the three primary colors. Then normalize the gray-level image of F to a W x H image A, where W and H are positive integers denoting, respectively, the number of rows and columns of the normalized W x H image A.
Step 3.2: Smooth and denoise the W x H image A with a Gaussian filter. The concrete method is as follows. First, the gray value of each boundary pixel of the smoothed, denoised image A' is the gray value of the corresponding boundary pixel of the W x H image A before smoothing and denoising. Second, for the gray value of each non-boundary pixel of the W x H image A, take the 3 x 3 Gaussian template centered on that non-boundary pixel and obtain the gray value of the center pixel, that is:

a'(x, y) = [a(x-1, y-1) + 2a(x-1, y) + a(x-1, y+1) + 2a(x, y-1) + 4a(x, y) + 2a(x, y+1) + a(x+1, y-1) + 2a(x+1, y) + a(x+1, y+1)] / 16

In the formula, x denotes the row coordinate and y the column coordinate; a(x, y) is the gray value of point (x, y) in image A, and the other eight terms are the gray values of the eight points of the 3 x 3 grid centered on point (x, y): its upper-left, upper-right, lower-left and lower-right corner points, and the points directly above, directly below, horizontally to the left of and horizontally to the right of point (x, y); a'(x, y) is the gray value of point (x, y) after Gaussian filtering. Traversing all non-boundary pixels of the W x H image A yields the processed image A', whose size is still W x H.
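Steps 3.1 and 3.2 can be sketched as follows, assuming the standard normalized 1-2-1 form of the 3 x 3 Gaussian template (the patent's exact template coefficients are rendered as images) and a caller-chosen normalized size W x H; boundary pixels are passed through unfiltered, as the method specifies:

import numpy as np
import cv2

def preprocess(face_bgr, W=96, H=96):
    """Step 3 sketch: gray conversion, W x H normalization, 3x3 Gaussian smoothing."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)   # Y = 0.299R + 0.587G + 0.114B
    A = cv2.resize(gray, (H, W)).astype(np.float64)     # W rows, H columns
    As = A.copy()                                       # boundary pixels pass through
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    for x in range(1, W - 1):                           # non-boundary pixels only
        for y in range(1, H - 1):
            As[x, y] = np.sum(A[x - 1:x + 2, y - 1:y + 2] * k)
    return As                                           # smoothed image A'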
Step 4: Perform horizontal integral projection on image A'. The projection is computed with the following formula:

M(x) = (1/H) Σ_{y=1..H} a'(x, y)

In the formula, M(x) denotes the horizontal integral projection result of row x, a'(x, y) is the gray value at point (x, y) of image A', W is the image height and H is the image width.
From the W row-wise horizontal integral projection results, choose the one with the minimum gray value and denote it M(x0), where x0 is the row number at which the horizontal integral projection formula M(x) attains its minimum. From image A', select the region whose vertical coordinate range is [x0 - floor(d), x0 + floor(d)] as the region to be detected in which the eyes may exist, where floor(d) denotes the value of d rounded down and d is the estimation parameter for the vertical coordinate interval in which the eyes may exist.
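Step 4 then amounts to averaging each row of the smoothed image and keeping a band of rows around the darkest one; in this sketch the interval estimation parameter d defaults to W // 8, which is an assumed value, not one stated in the patent:

import numpy as np

def eye_band(As, d=None):
    """Step 4 sketch: horizontal integral projection, candidate eye-row band."""
    W, H = As.shape
    if d is None:
        d = W // 8                          # interval estimation parameter (assumed)
    M = As.sum(axis=1) / H                  # M(x): mean gray value of row x
    x0 = int(np.argmin(M))                  # darkest row: the eyes lie near it
    return max(0, x0 - d), min(W - 1, x0 + d)   # vertical range of the eye region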
Step 5: Perform edge extraction on image A'. The specific implementation process is as follows:
Step 5.1: First, the gray value of each boundary point of the image E corresponding to the gradient magnitude matrix after the gradient computation equals the gray value of the corresponding boundary point of image A' before the gradient computation. Second, for each non-boundary pixel (i, j) of image A', choose a 2 x 2 first-difference template s_x as the horizontal-direction edge detection operator and a 2 x 2 first-difference template s_y as the vertical-direction edge detection operator, and compute the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions, together with the gradient magnitude and gradient direction at the non-boundary pixel (i, j), with the following formulas:

P_x(i, j) = [a'(i+1, j) - a'(i, j) + a'(i+1, j+1) - a'(i, j+1)] / 2
P_y(i, j) = [a'(i, j+1) - a'(i, j) + a'(i+1, j+1) - a'(i+1, j)] / 2
M(i, j) = sqrt(P_x(i, j)^2 + P_y(i, j)^2)
theta(i, j) = arctan(P_y(i, j) / P_x(i, j))

In the formulas, a'(i, j) represents the gray value of the non-boundary pixel (i, j) in image A'; i is the horizontal coordinate and j the vertical coordinate of the non-boundary point; a'(i, j+1), a'(i+1, j) and a'(i+1, j+1) are the gray values of the pixels located at, respectively, the lower-left, upper-right and lower-right corners of the 2 x 2 grid in image A' that has the non-boundary pixel (i, j) as its upper-left element; P_x(i, j) and P_y(i, j) represent, respectively, the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions; M(i, j) represents the gradient magnitude of the non-boundary pixel (i, j), and theta(i, j) represents its gradient direction.
Step 5.2: Discretize the gradient direction value theta(i, j) of each non-boundary pixel obtained in step 5.1 according to the discretization standard of Fig. 2, obtaining the new gradient direction value theta'(i, j). Take the 3 x 3 window centered on the non-boundary pixel (i, j) and process the gradient magnitude M(i, j) with the following formula:

M'(i, j) = M(i, j), if M(i, j) >= M_1 and M(i, j) >= M_2; otherwise M'(i, j) = 0

In the formula, M_1 and M_2 represent the gradient magnitudes of the two pixels of the 3 x 3 window centered on the non-boundary pixel (i, j) that lie along the direction theta'(i, j), and M'(i, j) is the gradient magnitude after the above processing. Taking the processed gradient magnitudes M'(i, j) as the central elements and the boundary-point gray values of image A' as the boundary elements, construct a matrix; this matrix corresponds to the produced image E.
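Steps 5.1 and 5.2 behave like the gradient and non-maximum-suppression stages of a Canny-style detector built on 2 x 2 first-difference operators; the sketch below discretizes the direction into four bins (0, 45, 90, 135 degrees), which is an assumption about the standard shown in Fig. 2:

import numpy as np

def edge_magnitude(As):
    """Steps 5.1-5.2 sketch: 2x2 gradient, direction bins, non-max suppression."""
    W, H = As.shape
    E = As.copy()                                    # boundary pixels keep gray values
    Px = np.zeros_like(As)
    Py = np.zeros_like(As)
    Px[:-1, :-1] = (As[:-1, 1:] - As[:-1, :-1] + As[1:, 1:] - As[1:, :-1]) / 2.0
    Py[:-1, :-1] = (As[1:, :-1] - As[:-1, :-1] + As[1:, 1:] - As[:-1, 1:]) / 2.0
    mag = np.hypot(Px, Py)                           # gradient magnitude M(i, j)
    ang = np.degrees(np.arctan2(Py, Px)) % 180.0     # gradient direction theta(i, j)
    # Offsets of the two 3x3-window neighbours for each discretized direction.
    bins = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    for x in range(1, W - 1):
        for y in range(1, H - 1):
            b = min(bins, key=lambda a: min(abs(ang[x, y] - a),
                                            180 - abs(ang[x, y] - a)))
            dx, dy = bins[b]
            m1, m2 = mag[x + dx, y + dy], mag[x - dx, y - dy]
            E[x, y] = mag[x, y] if (mag[x, y] >= m1 and mag[x, y] >= m2) else 0.0
    return E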
Step 6: Binarize the image E obtained in step 5.2. The concrete method is as follows:
Step 6.1: Determine the threshold with the maximum between-class variance (Otsu) method. The process of determining the threshold is as follows:
Collect the gray value of each pixel in image E, taking the highest gray value as the maximum gray level m, where m is an integer, so that the gray-level range of the image consists of the integers in the interval [0, m]. Let the number of pixels with gray level t be n_t; the total number of pixels is then N = n_0 + n_1 + ... + n_m, and the probability of each gray level is p_t = n_t / N. If an integer k divides the gray levels into two groups C_0 = {0, 1, ..., k} and C_1 = {k+1, ..., m}, where C_0 is the group of gray levels less than or equal to k and C_1 is the group of gray levels greater than k and not greater than m, the variance between the two gray-level groups is computed with the following formula:

sigma^2(k) = omega_0 (mu_0 - mu)^2 / (1 - omega_0)

In the formula, mu is the mean gray value of the whole image, mu_0 is the mean value of group C_0, and omega_0 is the probability of group C_0. Take each integer value in the interval [0, m] in turn as k, compute the variance value sigma^2(k) corresponding to each value of k, select the maximum among the m+1 variance values, and take the value of k corresponding to the maximum variance as the threshold T.
Step 6.2: Binarize image E with the threshold T obtained in step 6.1: the gray value of every pixel of image E whose gray value is greater than or equal to the threshold T is set to 255, and the gray value of every pixel below the threshold T is set to 0, yielding the binary image B.
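Step 6 is Otsu's maximum between-class variance method; a direct sketch of steps 6.1 and 6.2 follows (cv2.threshold with the cv2.THRESH_OTSU flag would yield an equivalent threshold on 8-bit input):

import numpy as np

def binarize_otsu(E):
    """Step 6 sketch: maximum between-class variance threshold, then binarization."""
    img = np.rint(E).astype(np.int64)
    m = int(img.max())                                # highest gray level m
    hist = np.bincount(img.ravel(), minlength=m + 1)
    p = hist / hist.sum()                             # p_t, probability of level t
    mu_T = np.sum(np.arange(m + 1) * p)               # mean gray of the whole image
    best_k, best_var = 0, -1.0
    for k in range(m + 1):                            # try every level as threshold
        w0 = p[:k + 1].sum()                          # probability of group C0
        if w0 <= 0.0 or w0 >= 1.0:
            continue
        mu0 = np.sum(np.arange(k + 1) * p[:k + 1]) / w0   # mean of group C0
        var = w0 * (mu0 - mu_T) ** 2 / (1.0 - w0)     # between-class variance
        if var > best_var:
            best_k, best_var = k, var
    T = best_k                                        # threshold of maximum variance
    return np.where(img >= T, 255, 0).astype(np.uint8)   # binary image B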
Step 7: Accurately determine the horizontal and vertical positions of the eyes. The specific implementation process is as follows:
Step 7.1: For the binary image B, compute the row and column complexity functions with the following formulas:

R(x) = Σ_{y=2..H} |b(x, y) - b(x, y-1)|,  for x0 - floor(d) <= x <= x0 + floor(d)
C(y) = Σ_{x=x0-floor(d)+1..x0+floor(d)} |b(x, y) - b(x-1, y)|,  for 1 <= y <= H

In the formulas, R(x) represents the row complexity function, C(y) represents the column complexity function, b(x, y) represents the pixel value of the pixel located at (x, y) in image B, and x0 - floor(d) and x0 + floor(d) are the face-region parameters obtained in step 4.
Step 7.2: Apply a known mean filter as a one-dimensional low-pass filter to the row complexity function R(x) and to the column complexity function C(y) respectively, obtaining the new row complexity function R'(x) and the new column complexity function C'(y). The eye coordinates are determined by finding the maximum point of the new row complexity function R'(x) and the maximum points of the new column complexity function C'(y): the computation yields one maximum point x_e of the function R'(x) and two maximum points y_1 and y_2 of the function C'(y), which give the pixels (x_e, y_1) and (x_e, y_2).
Step 7.3: In the new image A' obtained in step 3.2, take the 6 x 6 neighborhoods of the pixel (x_e, y_1) and the pixel (x_e, y_2) obtained in step 7.2 respectively. The two pixels with the minimum gray value within these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray-value pixels are defined as the coordinates of the two eyes, (x_l, y_l) and (x_r, y_r).
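Step 7 can be sketched as below, reading the "complexity" of a row or column as its count of binary transitions within the step-4 band; that reading, the mean-filter width win=5, and picking one column peak per half of the band are all assumptions, since the patent's exact formulas are rendered as images:

import numpy as np

def locate_eyes(B, As, lo, hi, win=5):
    """Step 7 sketch: row/column complexity, 1-D mean filtering, 6x6 refinement."""
    band = (B[lo:hi + 1] > 0).astype(np.int64)
    R = np.abs(np.diff(band, axis=1)).sum(axis=1)     # row complexity R(x)
    C = np.abs(np.diff(band, axis=0)).sum(axis=0)     # column complexity C(y)
    k = np.ones(win) / win                            # known mean (low-pass) filter
    Rf, Cf = np.convolve(R, k, "same"), np.convolve(C, k, "same")
    xe = lo + int(np.argmax(Rf))                      # one maximum of R'
    half = len(Cf) // 2                               # two maxima of C': one per half
    y1, y2 = int(np.argmax(Cf[:half])), half + int(np.argmax(Cf[half:]))
    eyes = []
    for y in (y1, y2):                                # 6x6 neighbourhood refinement
        x0, y0 = max(0, xe - 3), max(0, y - 3)
        patch = As[x0:x0 + 6, y0:y0 + 6]
        dx, dy = np.unravel_index(np.argmin(patch), patch.shape)
        eyes.append((x0 + dx, y0 + dy))               # darkest pixel = eye centre
    return eyes                                       # [(x_l, y_l), (x_r, y_r)]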

Claims (1)

1. A fast human-eye positioning method based on integral projection and edge detection, characterized in that it is carried out according to the following steps:
Step 1: Initialization: read in a captured image I containing a human face.
Step 2: Perform face detection on the captured digital image I with the AdaBoost algorithm, and extract the face image F from it.
Step 3: Preprocess the face image F acquired in step 2, as follows:
Step 3.1: Convert the acquired face image F into a gray-level image, and normalize the gray-level image to a W x H image A, where W and H are positive integers denoting, respectively, the number of rows and columns of the normalized W x H image A.
Step 3.2: Smooth and denoise the W x H image A with a Gaussian filter. The concrete method is as follows. First, the gray value of each boundary pixel of the smoothed, denoised image A' is the gray value of the corresponding boundary pixel of the W x H image A before smoothing and denoising. Second, for the gray value of each non-boundary pixel of the W x H image A, take the 3 x 3 Gaussian template centered on that non-boundary pixel and obtain the gray value of the center pixel, that is:

a'(x, y) = [a(x-1, y-1) + 2a(x-1, y) + a(x-1, y+1) + 2a(x, y-1) + 4a(x, y) + 2a(x, y+1) + a(x+1, y-1) + 2a(x+1, y) + a(x+1, y+1)] / 16

In the formula, x denotes the row coordinate and y the column coordinate; a(x, y) is the gray value of point (x, y) in image A, and the other eight terms are the gray values of the eight points of the 3 x 3 grid centered on point (x, y): its upper-left, upper-right, lower-left and lower-right corner points, and the points directly above, directly below, horizontally to the left of and horizontally to the right of point (x, y); a'(x, y) is the gray value of point (x, y) after Gaussian filtering. Traversing all non-boundary pixels of the W x H image A yields the processed image A', whose size is still W x H.
Step 4: Perform horizontal integral projection on image A'. The projection is computed with the following formula:

M(x) = (1/H) Σ_{y=1..H} a'(x, y)

In the formula, M(x) denotes the horizontal integral projection result of row x, a'(x, y) is the gray value at point (x, y) of image A', W is the image height and H is the image width.
From the W row-wise horizontal integral projection results, choose the one with the minimum gray value and denote it M(x0), where x0 is the row number at which the horizontal integral projection formula M(x) attains its minimum. From image A', select the region whose vertical coordinate range is [x0 - floor(d), x0 + floor(d)] as the region to be detected in which the eyes may exist, where floor(d) denotes the value of d rounded down and d is the estimation parameter for the vertical coordinate interval in which the eyes may exist.
Step 5: Perform edge extraction on image A'. The specific implementation process is as follows:
Step 5.1: First, the gray value of each boundary point of the image E corresponding to the gradient magnitude matrix after the gradient computation equals the gray value of the corresponding boundary point of image A' before the gradient computation. Second, for each non-boundary pixel (i, j) of image A', choose a 2 x 2 first-difference template s_x as the horizontal-direction edge detection operator and a 2 x 2 first-difference template s_y as the vertical-direction edge detection operator, and compute the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions, together with the gradient magnitude and gradient direction at the non-boundary pixel (i, j), with the following formulas:

P_x(i, j) = [a'(i+1, j) - a'(i, j) + a'(i+1, j+1) - a'(i, j+1)] / 2
P_y(i, j) = [a'(i, j+1) - a'(i, j) + a'(i+1, j+1) - a'(i+1, j)] / 2
M(i, j) = sqrt(P_x(i, j)^2 + P_y(i, j)^2)
theta(i, j) = arctan(P_y(i, j) / P_x(i, j))

In the formulas, a'(i, j) represents the gray value of the non-boundary pixel (i, j) in image A'; i is the horizontal coordinate and j the vertical coordinate of the non-boundary point; a'(i, j+1), a'(i+1, j) and a'(i+1, j+1) are the gray values of the pixels located at, respectively, the lower-left, upper-right and lower-right corners of the 2 x 2 grid in image A' that has the non-boundary pixel (i, j) as its upper-left element; P_x(i, j) and P_y(i, j) represent, respectively, the first-order partial derivatives of the non-boundary pixel (i, j) in the horizontal and vertical directions; M(i, j) represents the gradient magnitude of the non-boundary pixel (i, j), and theta(i, j) represents its gradient direction.
Step 5.2: Discretize the gradient direction value theta(i, j) of each non-boundary pixel obtained in step 5.1, obtaining the new gradient direction value theta'(i, j). Take the 3 x 3 window centered on the non-boundary pixel (i, j) and process the gradient magnitude M(i, j) with the following formula:

M'(i, j) = M(i, j), if M(i, j) >= M_1 and M(i, j) >= M_2; otherwise M'(i, j) = 0

In the formula, M_1 and M_2 represent the gradient magnitudes of the two pixels of the 3 x 3 window centered on the non-boundary pixel (i, j) that lie along the direction theta'(i, j), and M'(i, j) is the gradient magnitude after the above processing. Taking the processed gradient magnitudes M'(i, j) as the central elements and the boundary-point gray values of image A' as the boundary elements, construct a matrix; this matrix corresponds to the produced image E.
Step 6: Binarize the image E obtained in step 5.2. The concrete method is as follows:
Step 6.1: Determine the threshold with the maximum between-class variance (Otsu) method. The process of determining the threshold is as follows:
Collect the gray value of each pixel in image E, taking the highest gray value as the maximum gray level m, where m is an integer, so that the gray-level range of the image consists of the integers in the interval [0, m]. Let the number of pixels with gray level t be n_t; the total number of pixels is then N = n_0 + n_1 + ... + n_m, and the probability of each gray level is p_t = n_t / N. If an integer k divides the gray levels into two groups C_0 = {0, 1, ..., k} and C_1 = {k+1, ..., m}, where C_0 is the group of gray levels less than or equal to k and C_1 is the group of gray levels greater than k and not greater than m, the variance between the two gray-level groups is computed with the following formula:

sigma^2(k) = omega_0 (mu_0 - mu)^2 / (1 - omega_0)

In the formula, mu is the mean gray value of the whole image, mu_0 is the mean value of group C_0, and omega_0 is the probability of group C_0. Take each integer value in the interval [0, m] in turn as k, compute the variance value sigma^2(k) corresponding to each value of k, select the maximum among the m+1 variance values, and take the value of k corresponding to the maximum variance as the threshold T.
Step 6.2: Binarize image E with the threshold T obtained in step 6.1: the gray value of every pixel of image E whose gray value is greater than or equal to the threshold T is set to 255, and the gray value of every pixel below the threshold T is set to 0, yielding the binary image B.
Step 7: Accurately determine the horizontal and vertical positions of the eyes. The specific implementation process is as follows:
Step 7.1: For the binary image B, compute the row and column complexity functions with the following formulas:

R(x) = Σ_{y=2..H} |b(x, y) - b(x, y-1)|,  for x0 - floor(d) <= x <= x0 + floor(d)
C(y) = Σ_{x=x0-floor(d)+1..x0+floor(d)} |b(x, y) - b(x-1, y)|,  for 1 <= y <= H

In the formulas, R(x) represents the row complexity function, C(y) represents the column complexity function, b(x, y) represents the pixel value of the pixel located at (x, y) in image B, and x0 - floor(d) and x0 + floor(d) are the face-region parameters obtained in step 4.
Step 7.2: Apply a known mean filter as a one-dimensional low-pass filter to the row complexity function R(x) and to the column complexity function C(y) respectively, obtaining the new row complexity function R'(x) and the new column complexity function C'(y). The eye coordinates are determined by finding the maximum point of the new row complexity function R'(x) and the maximum points of the new column complexity function C'(y): the computation yields one maximum point x_e of the function R'(x) and two maximum points y_1 and y_2 of the function C'(y), which give the pixels (x_e, y_1) and (x_e, y_2).
Step 7.3: In the new image A' obtained in step 3.2, take the 6 x 6 neighborhoods of the pixel (x_e, y_1) and the pixel (x_e, y_2) obtained in step 7.2 respectively. The two pixels with the minimum gray value within these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray-value pixels are defined as the coordinates of the two eyes, (x_l, y_l) and (x_r, y_r).
CN201310119843.9A 2013-04-09 2013-04-09 A kind of fast human-eye positioning method based on integral projection and rim detection Active CN103218605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310119843.9A CN103218605B (en) 2013-04-09 2013-04-09 A kind of fast human-eye positioning method based on integral projection and rim detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310119843.9A CN103218605B (en) 2013-04-09 2013-04-09 A kind of fast human-eye positioning method based on integral projection and rim detection

Publications (2)

Publication Number Publication Date
CN103218605A true CN103218605A (en) 2013-07-24
CN103218605B CN103218605B (en) 2016-01-13

Family

ID=48816374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310119843.9A Active CN103218605B (en) 2013-04-09 2013-04-09 A kind of fast human-eye positioning method based on integral projection and rim detection

Country Status (1)

Country Link
CN (1) CN103218605B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617638A (en) * 2013-12-05 2014-03-05 北京京东尚科信息技术有限公司 Image processing method and device
CN104484679A (en) * 2014-09-17 2015-04-01 北京邮电大学 Non-standard gun shooting bullet trace image automatic identification method
CN106407657A (en) * 2016-08-31 2017-02-15 无锡雅座在线科技发展有限公司 Method and device for capturing event
CN108303420A (en) * 2017-12-30 2018-07-20 上饶市中科院云计算中心大数据研究院 A kind of domestic type sperm quality detection method based on big data and mobile Internet
CN108648206A (en) * 2018-04-28 2018-10-12 成都信息工程大学 A kind of Robert edge detections film computing system and method
CN109063689A (en) * 2018-08-31 2018-12-21 江苏航天大为科技股份有限公司 Facial image hair style detection method
CN109241862A (en) * 2018-08-14 2019-01-18 广州杰赛科技股份有限公司 Target area determines method and system, computer equipment, computer storage medium
CN110070017A (en) * 2019-04-12 2019-07-30 北京迈格威科技有限公司 A kind of face artificial eye image generating method and device
CN110288540A (en) * 2019-06-04 2019-09-27 东南大学 A kind of online imaging standards method of carbon-fibre wire radioscopic image
CN110516649A (en) * 2019-09-02 2019-11-29 南京微小宝信息技术有限公司 Alumnus's authentication method and system based on recognition of face
CN111814795A (en) * 2020-06-05 2020-10-23 北京嘉楠捷思信息技术有限公司 Character segmentation method, device and computer readable storage medium
CN111860423A (en) * 2020-07-30 2020-10-30 江南大学 Improved human eye positioning method of integral projection method
CN115331269A (en) * 2022-10-13 2022-11-11 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application
CN116363736A (en) * 2023-05-31 2023-06-30 山东农业工程学院 Big data user information acquisition method based on digitalization

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1475961A (en) * 2003-07-14 2004-02-18 中国科学院计算技术研究所 Human eye location method based on GaborEge model
US20110164816A1 (en) * 2010-01-05 2011-07-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN102968624A (en) * 2012-12-12 2013-03-13 天津工业大学 Method for positioning human eyes in human face image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1475961A (en) * 2003-07-14 2004-02-18 中国科学院计算技术研究所 Human eye location method based on GaborEge model
US20110164816A1 (en) * 2010-01-05 2011-07-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN102968624A (en) * 2012-12-12 2013-03-13 天津工业大学 Method for positioning human eyes in human face image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OUYANG: "Human-eye positioning method based on adaptive edge extraction", Microcomputer Information (《微计算机信息》) *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617638B (en) * 2013-12-05 2017-03-15 北京京东尚科信息技术有限公司 The method and device of image procossing
CN103617638A (en) * 2013-12-05 2014-03-05 北京京东尚科信息技术有限公司 Image processing method and device
CN104484679A (en) * 2014-09-17 2015-04-01 北京邮电大学 Non-standard gun shooting bullet trace image automatic identification method
CN106407657A (en) * 2016-08-31 2017-02-15 无锡雅座在线科技发展有限公司 Method and device for capturing event
CN108303420A (en) * 2017-12-30 2018-07-20 上饶市中科院云计算中心大数据研究院 A kind of domestic type sperm quality detection method based on big data and mobile Internet
CN108648206A (en) * 2018-04-28 2018-10-12 成都信息工程大学 A kind of Robert edge detections film computing system and method
CN108648206B (en) * 2018-04-28 2022-09-16 成都信息工程大学 Robert edge detection film computing system and method
CN109241862A (en) * 2018-08-14 2019-01-18 广州杰赛科技股份有限公司 Target area determines method and system, computer equipment, computer storage medium
CN109063689A (en) * 2018-08-31 2018-12-21 江苏航天大为科技股份有限公司 Facial image hair style detection method
CN110070017B (en) * 2019-04-12 2021-08-24 北京迈格威科技有限公司 Method and device for generating human face artificial eye image
CN110070017A (en) * 2019-04-12 2019-07-30 北京迈格威科技有限公司 A kind of face artificial eye image generating method and device
CN110288540A (en) * 2019-06-04 2019-09-27 东南大学 A kind of online imaging standards method of carbon-fibre wire radioscopic image
CN110516649A (en) * 2019-09-02 2019-11-29 南京微小宝信息技术有限公司 Alumnus's authentication method and system based on recognition of face
CN110516649B (en) * 2019-09-02 2023-08-22 南京微小宝信息技术有限公司 Face recognition-based alumni authentication method and system
CN111814795A (en) * 2020-06-05 2020-10-23 北京嘉楠捷思信息技术有限公司 Character segmentation method, device and computer readable storage medium
CN111860423A (en) * 2020-07-30 2020-10-30 江南大学 Improved human eye positioning method of integral projection method
CN111860423B (en) * 2020-07-30 2024-04-30 江南大学 Improved human eye positioning method by integral projection method
CN115331269A (en) * 2022-10-13 2022-11-11 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application
CN115331269B (en) * 2022-10-13 2023-01-13 天津新视光技术有限公司 Fingerprint identification method based on gradient vector field and application
CN116363736A (en) * 2023-05-31 2023-06-30 山东农业工程学院 Big data user information acquisition method based on digitalization
CN116363736B (en) * 2023-05-31 2023-08-18 山东农业工程学院 Big data user information acquisition method based on digitalization

Also Published As

Publication number Publication date
CN103218605B (en) 2016-01-13

Similar Documents

Publication Publication Date Title
CN103218605A (en) Quick eye locating method based on integral projection and edge detection
CN105469113B (en) A kind of skeleton point tracking method and system in two-dimensional video stream
CN103927016B (en) Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision
CN103761519B (en) Non-contact sight-line tracking method based on self-adaptive calibration
CN103186904B (en) Picture contour extraction method and device
CN103310194B (en) Pedestrian based on crown pixel gradient direction in a video shoulder detection method
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
CN104794693B (en) A kind of portrait optimization method of face key area automatic detection masking-out
CN101551853A (en) Human ear detection method under complex static color background
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
CN105809173B (en) A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform
CN104268853A (en) Infrared image and visible image registering method
CN103020614B (en) Based on the human motion identification method that space-time interest points detects
CN108921813A (en) Unmanned aerial vehicle detection bridge structure crack identification method based on machine vision
CN103971135A (en) Human body target detection method based on head and shoulder depth information features
CN104268520A (en) Human motion recognition method based on depth movement trail
CN105225216A (en) Based on the Iris preprocessing algorithm of space apart from circle mark rim detection
CN104766316A (en) Novel lip segmentation algorithm for traditional Chinese medical inspection diagnosis
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN103914829B (en) Method for detecting edge of noisy image
CN109344706A (en) It is a kind of can one man operation human body specific positions photo acquisition methods
CN102073872A (en) Image-based method for identifying shape of parasite egg
CN104021567A (en) Gaussian blur falsification detection method of image based on initial digital law
CN106778499B (en) Method for rapidly positioning human iris in iris acquisition process
CN103971347A (en) Method and device for treating shadow in video image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant