CN109558825A - A pupil center localization method based on digital video image processing - Google Patents

A pupil center localization method based on digital video image processing

Info

Publication number
CN109558825A
CN109558825A (application number CN201811408486.7A)
Authority
CN
China
Prior art keywords
image
value
face
pixel
video image
Prior art date
Legal status
Pending
Application number
CN201811408486.7A
Other languages
Chinese (zh)
Inventor
王鹏
才思文
薛楠
董鑫
沈翔
Current Assignee
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201811408486.7A priority Critical patent/CN109558825A/en
Publication of CN109558825A publication Critical patent/CN109558825A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/162 Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/172 Classification, e.g. identification
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract

A pupil center localization method based on digital video image processing, belonging to the field of image processing. Existing pupil localization methods suffer from slow localization speed and low localization accuracy. The method acquires a video image with a camera and outputs each frame in digital RGB format; performs face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm, determining the face region in the video image; after the face region is determined, locates the eye image by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions; and determines the pupil center position using edge detection and Hough-transform circle detection. Compared with existing pupil center detection algorithms, the present invention is unaffected by occlusion of the face by a hair fringe or glasses, while offering high real-time performance and accuracy.

Description

A pupil center localization method based on digital video image processing
Technical field
The present invention relates to a pupil center localization method based on digital video image processing.
Background technique
Eye-movement tracking, also called gaze tracking, uses software algorithms and mechanical, electronic, and optical detection means to determine where a subject's visual attention is currently directed. Its applications have penetrated every field of human life. In intelligent human-computer interaction, eye tracking is set to become one of the main modalities of interaction between humans and machines, and it offers an excellent auxiliary input mode for people with upper-limb disabilities, the elderly, and personnel whose hands are occupied by an operating task. In the medical field, the motion trajectories of the eyeball or pupil can help diagnose mental illness and eye disease. In the field of VR (virtual reality), the eyes are the sole contact between a person and the virtual world, and eye tracking has become an important means of human-machine interaction in virtual reality systems. Pupil center localization is the core component of eye-movement tracking.
Existing pupil center localization methods have weak anti-interference ability: against complex backgrounds, or when long eyelashes, a hair fringe, or glasses occlude the face, their accuracy is strongly affected, and they suffer from slow pupil localization speed and low localization accuracy.
Summary of the invention
The purpose of the present invention is to solve the problems of existing pupil localization methods, namely slow pupil localization speed, low localization accuracy, and accuracy that degrades severely against complex backgrounds or when long eyelashes, a hair fringe, or glasses occlude the face, and to propose a pupil center localization method based on digital video image processing.
A pupil center localization method based on digital video image processing is realized by the following steps:
Step 1: acquire a video image with a camera and output each frame in digital RGB format;
Step 2: perform face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm, and determine the face region in the video image; in the coarse face-image segmentation, the YCgCr color space is combined with morphological operations; strong-classifier cascading is used in the AdaBoost training algorithm;
Step 3: after the face region is determined, locate the eye image by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions;
Step 4: determine the pupil center position using edge detection and Hough-transform circle detection.
The invention has the following beneficial effects:
The present invention proposes a pupil center localization method based on video image processing that acquires video with a digital camera, outputs each frame in digital RGB format, and tracks the human pupil center in real time. First, video frames are acquired by the camera in digital RGB format. Then the YCgCr color space is combined with morphological operations to coarsely segment the face image, and the AdaBoost algorithm performs fast, accurate face detection and localization on the coarse face image. Compared with skin-color cluster segmentation in the YCbCr color space alone, accuracy is greatly improved against complex backgrounds and under occlusion by long eyelashes, a hair fringe, or glasses. Compared with the AdaBoost algorithm alone, detection speed is increased while face detection accuracy remains almost unchanged. Compared with combining YCbCr skin-color segmentation with AdaBoost, accuracy is improved. Using a strong-classifier cascade in the AdaBoost training algorithm improves both the speed and the accuracy of AdaBoost face detection. After the face image is determined, the eye image is located by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions, and the pupil center is then determined with edge detection and Hough-transform circle detection. The speed gain comes from first using the YCgCr color space with morphological operations to quickly segment a coarse face image, and only then running AdaBoost detection on the segmented image; this reduces the number of pixels the AdaBoost algorithm must examine, so its detection speed is improved.
Compared with existing pupil center detection algorithms, the proposed algorithm is accurate and fast, markedly improves detection accuracy against complex backgrounds and under occlusion of the face by long eyelashes, a hair fringe, or glasses, and offers high real-time performance.
Combining the YCgCr color space with morphological operations solves the problem of traditional skin-color segmentation, in which non-skin pixels caused by long eyelashes, a hair fringe, or glasses occluding the face lead to loss of the target region.
Detailed description of the invention
Fig. 1 is the flowchart of the method of the present invention;
Specific embodiment
Specific embodiment 1:
The pupil center localization method based on digital video image processing of the present embodiment comprises the following steps:
Step 1: acquire a video image with a camera and output each frame in digital RGB format;
Step 2: perform face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm, and determine the face region in the video image; in the coarse face-image segmentation, the YCgCr color space is combined with morphological operations, which mitigates the problems brought by complex backgrounds and by occlusion of the face by long eyelashes, a hair fringe, or glasses; strong-classifier cascading is used in the AdaBoost training algorithm, which improves the speed and accuracy of AdaBoost face detection;
Step 3: after the face region is determined, locate the eye image by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions;
Step 4: determine the pupil center position using edge detection and Hough-transform circle detection.
Specific embodiment 2:
Unlike specific embodiment 1, in the pupil center localization method based on digital video image processing of the present embodiment, the process in step 1 of acquiring a video image with a camera and outputting each frame in digital RGB format uses a high-speed camera to acquire the video image.
Specific embodiment 3:
Unlike specific embodiments 1 or 2, in the pupil center localization method based on digital video image processing of the present embodiment, the process in step 2 of performing face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm, and accurately determining the face region in the video image, is as follows:
From the RGB data collected by the camera, the representation of each pixel in the YCgCr color space is computed. The Cg and Cr values of each pixel in the YCgCr color space are then used to perform skin-color clustering on the video image, and a binary image matrix is constructed from the skin-color clustering information. Morphological operations follow: first a large-area dilation, then a large-area erosion, yielding the candidate regions of the coarse face. Using the pixel coordinates of the face candidate regions, the corresponding coarse face candidate parts are extracted from the original image collected by the camera and converted to grayscale, and the AdaBoost algorithm then performs face detection on the grayscale image to obtain the accurate face image. Compared with detecting faces with the AdaBoost algorithm alone, this increases speed while keeping face detection accuracy almost unchanged.
Specific embodiment 4:
Unlike specific embodiment 3, in the pupil center localization method based on digital video image processing of the present embodiment, the process in step 2 of performing face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm and determining the face region in the video image is as follows:
Step 2.1: from the RGB data collected by the camera, compute the representation of each pixel in the YCgCr color space and perform skin-color clustering:
After the luminance information is separated out and removed, facial skin color is quite stable, and facial skin-color information is nearly identical across ethnicities. Thus in the YCbCr color space the Cb and Cr values of facial skin concentrate consistently in a small range. The YCgCr color space is similar to YCbCr, differing only in the Cg versus Cb component; statistics over a large number of facial pixels in the RGB color space show that the Cg and Cr distribution of facial skin is more compact than the Cb and Cr distribution, so the YCgCr color space is more accurate than YCbCr. The video image is acquired by the camera and each frame is output in digital RGB format; each pixel of each frame is converted to its YCgCr representation according to formula (1), giving the Cg and Cr values of every pixel. Here the RGB color space describes a color by mixing the three primaries R (red), G (green), and B (blue) in different proportions, while the YCgCr color space describes a color by luminance and chrominance: Y denotes the luminance value, Cg the green chrominance component, and Cr the red chrominance component:
Y = 16 + (65.481 R + 128.553 G + 24.966 B) / 255
Cg = 128 + (-81.085 R + 112 G - 30.915 B) / 255
Cr = 128 + (112 R - 93.786 G - 18.214 B) / 255 (1)
Since the Cg values of skin pixels are mainly distributed in the interval [85, 135] and the Cr values mainly in the interval [130, 165], after the pixels are transformed into the YCgCr space, the Cg and Cr values of every pixel are traversed, skin pixels are selected with formula (2), and the binary image matrix BW is constructed from the coordinates of each pixel and the Cg, Cr distribution information:
BW(x, y) = 255, if (85 <= Cg(x, y) <= 135) and (130 <= Cr(x, y) <= 165); 0, otherwise (2)
where "and" denotes the logical AND relation and "or" the logical OR relation; BW is the binary image matrix whose pixel values are 255 or 0;
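The skin-pixel screening of formulas (1) and (2) can be sketched as follows. This is a minimal illustration assuming the standard RGB-to-YCgCr conversion coefficients; the function names are ours, not the patent's.

```python
import numpy as np

def rgb_to_ycgcr(rgb):
    """Convert an (..., 3) array of 8-bit RGB values to (Y, Cg, Cr), formula (1)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  = 16.0  + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0
    cg = 128.0 + (-81.085 * r + 112.0 * g - 30.915 * b) / 255.0
    cr = 128.0 + (112.0 * r - 93.786 * g - 18.214 * b) / 255.0
    return y, cg, cr

def skin_mask(rgb):
    """Binary matrix BW of formula (2): 255 where Cg in [85,135] and Cr in [130,165]."""
    _, cg, cr = rgb_to_ycgcr(rgb)
    bw = (cg >= 85) & (cg <= 135) & (cr >= 130) & (cr <= 165)
    return np.where(bw, 255, 0).astype(np.uint8)
```

A skin-toned pixel such as RGB (200, 140, 120) lands inside both chrominance intervals, while a saturated green pixel does not.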
Step 2.2: perform a large-area dilation on BW by formula (3), then a large-area erosion on BW_P by formula (4):
BW_P = BW ⊕ A, (3)
BW_F = BW_P Θ B, (4)
where A is a 25x25 square structuring element and B is a 20x20 square structuring element; the symbol ⊕ denotes morphological dilation and the symbol Θ denotes morphological erosion; BW_P is the binary image matrix obtained after dilating BW with A, and BW_F is the binary image matrix obtained after eroding BW_P with B;
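The two morphological passes can be sketched with a naive pure-NumPy dilation (sliding-window maximum) and erosion (sliding-window minimum) over a square structuring element; the patent's 25x25 and 20x20 sizes are shrunk to 3x3 here purely for illustration.

```python
import numpy as np

def dilate(bw, k):
    """Morphological dilation of binary image bw with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(bw, pad, mode="constant")
    out = np.zeros_like(bw)
    for i in range(bw.shape[0]):
        for j in range(bw.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(bw, k):
    """Morphological erosion: the sliding-window minimum."""
    pad = k // 2
    padded = np.pad(bw, pad, mode="constant")
    out = np.zeros_like(bw)
    for i in range(bw.shape[0]):
        for j in range(bw.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out
```

Dilating a single foreground pixel with a 3x3 element grows it into a 3x3 block, and eroding that block with the same element recovers the single pixel, which is the closing intuition behind formulas (3) and (4).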
Step 2.3: using the BW_F binary image matrix obtained in step 2.2, take the coordinates of the pixels whose value is 255, extract the pixels at the corresponding coordinates from the original RGB image collected by the camera, and convert them to grayscale by formula (5) to obtain the coarse face image, so that the AdaBoost algorithm can be further used to detect the face:
Gray = 0.299 R + 0.587 G + 0.114 B (5)
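The graying step can be sketched in one line, assuming the conventional BT.601 luma weights for formula (5):

```python
import numpy as np

def to_gray(rgb):
    """Weighted grayscale: Gray = 0.299 R + 0.587 G + 0.114 B (formula (5), assumed weights)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 0.299 * r + 0.587 * g + 0.114 * b
```

The three weights sum to 1, so a pure white pixel maps to the full gray value 255.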
Step 2.4: a human face has a distinctive feature distribution, the grayscale information of a face image varies markedly, and the gray values of the eye region differ considerably from those of other parts of the face; Haar-like feature values reflect the grayscale variation of an image very well. The present invention detects and precisely determines the face image by computing the Haar-like feature values of the grayed coarse face image obtained in step 2.3. Because an image contains many pixels, computing Haar-like feature values is expensive, so the present invention accelerates the computation with the integral-image method; feature selection is then performed by the AdaBoost algorithm to obtain the accurate face image.
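The integral-image acceleration mentioned above reduces any rectangle sum, the building block of Haar-like features, to four array lookups. A minimal sketch (the function names and the particular two-rectangle feature are our illustrative choices):

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of all pixels (x', y') with x' <= x and y' <= y, zero-padded on top/left."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of the h x w rectangle whose top-left pixel is (top, left), via 4 lookups."""
    return ii[top + h, left + w] - ii[top, left + w] - ii[top + h, left] + ii[top, left]

def haar_two_rect_horizontal(ii, top, left, h, w):
    """A two-rectangle Haar-like feature: left half minus right half of an h x (2w) window."""
    return rect_sum(ii, top, left, h, w) - rect_sum(ii, top, left + w, h, w)
```

On a 4x4 image of ones, any 2x2 rectangle sums to 4, and the two-rectangle feature vanishes because both halves are identical.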
Specific embodiment 5:
Unlike specific embodiment 4, in the pupil center localization method based on digital video image processing of the present embodiment, in step 2.4, the process of detecting and determining the face image by computing the Haar-like feature values of the grayed coarse face image obtained in step 2.3, accelerating the computation of the Haar-like feature values with the integral-image method, and then performing feature selection with the AdaBoost algorithm to obtain the accurate face image, is specifically:
a) For the grayed skin-color region image obtained in step 2.3, compute the integral image ii by formula (6):
ii(x, y) = Σ_{x' <= x, y' <= y} I(x', y') (6)
where ii(x, y) denotes the integral image and I(x', y') denotes the gray value of the pixel at position (x', y') in the original image;
b) obtain the Haar-like features using integral-image operations;
c) perform feature selection with the AdaBoost algorithm and detect the accurate face image result;
d) a single strong classifier is still limited in face detection accuracy and efficiency; to improve accuracy, the present invention connects multiple trained strong classifiers into a cascade classifier, which not only improves detection accuracy but also speeds up detection. The specific method is:
Given the training samples, let P denote the target sample set and N the non-target sample set; let f denote the upper bound on the per-layer classification error of the cascade classifier, d the lower bound on the per-layer classification accuracy of the cascade classifier, and F_t the target error of the entire cascade classifier;
1) initialize the classifier error at the i-th iteration as F_i = 1, i = 1;
2) iterate the following steps until F_i < F_t:
2a) for layer i, set a threshold and train with the training samples so that the error f_i of this layer is less than f and the classification accuracy d_i of this layer is greater than d;
2b) let F_i = f_i * F_{i-1} and i = i + 1;
2c) if F_i > F_t, scan all non-face pictures with the current classifier and put the misclassified pictures into the non-target sample set;
3) the cascade-classifier algorithm terminates.
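The per-layer bookkeeping of the cascade in steps 1) to 3) can be sketched abstractly: each trained layer contributes an error factor f_i, and training stops once the product F_i = f_i * F_{i-1} falls below the target F_t. The per-layer rates below are made-up illustrative numbers, not trained values.

```python
def cascade_layers_needed(layer_errors, F_target):
    """Accumulate F_i = f_i * F_{i-1} (with F_0 = 1) layer by layer;
    stop as soon as the cascade's overall error drops below F_target.
    Returns (layers used, final overall error)."""
    F = 1.0
    used = 0
    for f_i in layer_errors:
        F *= f_i
        used += 1
        if F < F_target:
            return used, F
    return used, F
```

With each layer passing through half of the remaining errors, three layers reach an overall rate of 0.125, below a target of 0.2, so a fourth layer is never trained.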
Specific embodiment 6:
Unlike specific embodiment 5, in the pupil center localization method based on digital video image processing of the present embodiment, the process of performing feature selection with the AdaBoost algorithm and detecting the accurate face image result is as follows.
The training process of the AdaBoost algorithm:
Given a set of N training samples (x_1, y_1), (x_2, y_2), ..., (x_N, y_N), where y_i denotes the class of training sample i, y_i ∈ {-1, 1}, i = 1, 2, ..., N, i.e. target samples have y_i = 1 and non-target samples have y_i = -1. Each Haar-like feature of a training sample acts as a weak classifier h, so the N training samples yield T weak classifiers h; the iteration index is t = 1, 2, ..., T.
C1) When t = 1, initialize the weight distribution of the training samples; initially each sample is assigned the same weight w_i = 1/N, so the initial weight distribution D_1(i) of the training sample set is:
D_1(i) = (w_1, w_2, ..., w_N) = (1/N, 1/N, ..., 1/N)
Each of the T weak classifiers h classifies the N training samples; let h(x_i) denote the classification return value of weak classifier h on training sample x_i. If h(x_i) = y_i the classification is correct; if h(x_i) ≠ y_i the classification is wrong. Then compute the error rate ε of each weak classifier h:
ε = Σ_{i=1}^{N} w_i P[h(x_i) ≠ y_i]
where P[h(x_i) ≠ y_i] takes the value 1 when h(x_i) ≠ y_i, and 0 when h(x_i) = y_i.
Each weak classifier h thus has its own error rate ε; among all weak classifiers h, take the one with the smallest error rate ε as the basic classifier H_1 at t = 1, and denote its error rate as the minimum error rate ε_1.
Compute the weight α_1 of the basic classifier H_1 in the final classifier f(x_i):
α_1 = (1/2) ln((1 - ε_1) / ε_1)
C2) When t = 2, update the weight distribution D_2(i) of the training samples:
D_2(i) = D_1(i) exp(-α_1 y_i H_1(x_i)) / Z_1
where Z_1 denotes a normalization factor chosen so that Σ_i D_2(i) = 1.
Compute the error rate of each h under the weight set D_2(i), ε = Σ_i D_2(i) P[h(x_i) ≠ y_i]; by the minimum-error-rate principle, obtain the basic classifier H_2 and the minimum error rate ε_2.
Compute the weight α_2 of the basic classifier H_2 in the final classifier:
α_2 = (1/2) ln((1 - ε_2) / ε_2)
C3) Then, in the t-th iteration, update the weight distribution D_t(i) of the training samples:
D_t(i) = D_{t-1}(i) exp(-α_{t-1} y_i H_{t-1}(x_i)) / Z_{t-1}
where Z_{t-1} denotes a normalization factor chosen so that Σ_i D_t(i) = 1.
Using the training samples and the weight set D_t(i), by the minimum-error-rate principle, training yields the basic classifier H_t and the minimum error rate ε_t.
Compute the weight α_t of H_t in the final classifier:
α_t = (1/2) ln((1 - ε_t) / ε_t)
C4) t starts from 1 and increases by 1 each iteration, up to and including T;
C5) finally combine the basic classifiers H_t according to their weights α_t to obtain the final classifier f(x_i):
f(x_i) = Σ_{t=1}^{T} α_t H_t(x_i)
C6) finally obtain a strong classifier S(x_i) through the sign function:
S(x_i) = sign(f(x_i))
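The training loop C1) to C6) can be sketched on a toy one-dimensional problem, with threshold "stumps" standing in for the Haar-like weak classifiers. The stumps, the toy data, and the small epsilon guard on the error rate are our illustrative assumptions, not the patent's.

```python
import numpy as np

def make_stumps(x):
    """Candidate weak classifiers h: sign(x - thr) and its flip, for midpoint thresholds."""
    s = np.sort(x)
    thrs = (s[:-1] + s[1:]) / 2.0
    stumps = []
    for thr in thrs:
        stumps.append(lambda v, t=thr: np.where(v > t, 1, -1))
        stumps.append(lambda v, t=thr: np.where(v > t, -1, 1))
    return stumps

def adaboost_train(x, y, T):
    """Each round: pick the minimum-error stump H_t, weight it by
    alpha_t = 0.5 * ln((1 - eps_t) / eps_t), then reweight the samples."""
    n = len(x)
    w = np.full(n, 1.0 / n)              # C1: uniform initial weights D_1(i)
    stumps = make_stumps(x)
    model = []
    for _ in range(T):
        errs = [np.sum(w * (h(x) != y)) for h in stumps]
        best = int(np.argmin(errs))
        eps = max(errs[best], 1e-10)     # guard against a zero error rate
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        pred = stumps[best](x)
        w = w * np.exp(-alpha * y * pred)  # C2/C3: emphasise misclassified samples
        w = w / w.sum()                    # normalisation factor Z_t
        model.append((alpha, stumps[best]))
    return model

def adaboost_predict(model, x):
    """C5/C6: S(x) = sign(sum_t alpha_t * H_t(x))."""
    f = sum(alpha * h(x) for alpha, h in model)
    return np.where(f >= 0, 1, -1)
```

On x = [1, 2, 3, 4] with labels [-1, -1, 1, 1], the stump at threshold 2.5 separates the data in the first round, and the combined strong classifier reproduces the labels.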
Specific embodiment 7:
Unlike specific embodiment 6, in the pupil center localization method based on digital video image processing of the present embodiment, the process in step 3 of locating the eye image after the face region is determined, by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions, is specifically:
After the face image is accurately detected, the present invention exploits the fact that the gray values of the eye region differ markedly from those of other facial regions: it first computes eye candidate points to obtain eye candidate regions, and then filters out the eye regions according to information such as the size and position of the candidate regions, thereby accurately locating the eye positions. The specific procedure is:
Step 3.1: compute the eye candidate points:
For each pixel p in the accurate face image obtained in step 2, compute the gray values of the 8 pixels surrounding it, obtaining the gray value p(x, y) of point p and the gray values p_i(x, y), i = 1, ..., 8, of the 8 surrounding pixels; given a threshold t, let n_i be the Boolean value indicating whether the gray value p(x, y) of point p and the gray value p_i(x, y) of the i-th of the 8 surrounding pixels differ by more than the threshold t; if the n_i over the 8 surrounding pixels satisfy the candidate condition, the pixel p(x, y) is judged an eye candidate point;
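Step 3.1 can be sketched as follows. The original formula images are not reproduced, so two details are assumptions of this sketch: the comparison counts neighbours that are brighter than the centre by more than t (the eye being darker than the surrounding skin), and a pixel is accepted only when all 8 neighbours pass.

```python
import numpy as np

def eye_candidates(gray, t):
    """Mark pixel p as an eye candidate when every one of its 8 neighbours
    is brighter than p by more than the threshold t (all n_i = 1)."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=bool)
    g = gray.astype(int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = g[i - 1:i + 2, j - 1:j + 2]
            n = (window - g[i, j]) > t     # n_i for the 3x3 neighbourhood
            n[1, 1] = True                 # ignore the centre pixel itself
            out[i, j] = bool(n.all())
    return out
```

A dark pixel surrounded by bright skin passes the test; uniform regions do not.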
Step 3.2: then perform an auxiliary judgment according to the physiological size of the human eye, i.e. the eye can be neither too large nor too small: filter out regions formed of connected eye candidate points that exceed a set width threshold T_w or a set height threshold T_h, and filter out eye candidate points around which the number of other eye candidate points is below a set candidate-count threshold T_n, such candidate points being regarded as noise;
Step 3.3: afterwards, judge the eye candidate regions remaining after the filtering of step 3.2 according to formulas (9), (10) and (11); the two regions whose center coordinates satisfy formulas (9), (10) and (11) are determined to be the eye regions, completing the localization of the eye image;
d_ij > t1 * W (9)
d_ij < t2 * W (10)
|x_i - x_j| < H (11)
where t1 and t2 are fixed thresholds; (x_i, y_i) denotes the center coordinates of the i-th eye candidate region; d_ij denotes the Euclidean distance between the center coordinates of the i-th and j-th eye candidate regions; H denotes the height of a candidate region and W its width.
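The pairing test of formulas (9) to (11) can be sketched as below, following the patent's symbols: d_ij is the Euclidean distance between region centres, W and H the candidate-region width and height, t1 and t2 fixed thresholds. One assumption of this sketch: formula (11) is read as constraining the vertical separation of the two centres, since the two eyes lie roughly on one horizontal line.

```python
import math

def eye_pairs(centres, W, H, t1, t2):
    """Return index pairs (i, j) of candidate regions whose centres satisfy
    t1*W < d_ij < t2*W and a vertical offset smaller than H."""
    pairs = []
    for i in range(len(centres)):
        for j in range(i + 1, len(centres)):
            (xi, yi), (xj, yj) = centres[i], centres[j]
            d = math.hypot(xi - xj, yi - yj)
            if t1 * W < d < t2 * W and abs(yi - yj) < H:
                pairs.append((i, j))
    return pairs
```

Two horizontally adjacent candidate centres pass the three constraints, while a distant third candidate is rejected by the upper distance bound.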
Specific embodiment 8:
Unlike specific embodiment 7, in the pupil center localization method based on digital video image processing of the present embodiment, in the process of determining the pupil center position using edges in step 4, the edge detection method uses the Sobel gradient operator. Specifically: the eye image determined in step 3 is enhanced by histogram equalization, and edge detection is then performed with the Sobel gradient operator; the 3x3 horizontal and vertical operators used by the Sobel gradient operator are respectively:
S_1 = [ -1 0 1
        -2 0 2
        -1 0 1 ],  S_2 = [ -1 -2 -1
                            0  0  0
                            1  2  1 ]
Afterwards, letting f(x, y) be the eye image on which edge detection is performed with the Sobel gradient operator, formulas (14) and (15) compute the edge-strength quantities needed when voting for peaks in the Hough transform: the polar radius M(x, y) and polar angle θ(x, y) of the rectangular-coordinate point (x, y) under the polar coordinate system;
M(x, y) = (G_1(x, y)^2 + G_2(x, y)^2)^(1/2) (14)
θ(x, y) = tan^{-1}(G_2(x, y) / G_1(x, y)) (15)
where G_1(x, y) and G_2(x, y) denote respectively the horizontal and vertical first derivatives of each pixel in the image, computed by the following formulas (16) and (17):
G_1(x, y) = f(x, y) ⊛ S_1 (16)
G_2(x, y) = f(x, y) ⊛ S_2 (17)
The symbol ⊛ in formulas (16) and (17) denotes the convolution operation of digital image processing.
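Formulas (14) to (17) can be sketched directly: correlate the image with the two 3x3 Sobel kernels (the usual filtering convention), then form the magnitude M and angle θ per pixel. arctan2 is used instead of a bare tan^-1 to avoid division by zero; the function names are ours.

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # horizontal operator S_1
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # vertical operator S_2

def sobel(img):
    """Return G1, G2, M, theta on the interior of img (no border handling)."""
    f = img.astype(float)
    h, w = f.shape
    G1 = np.zeros((h, w))
    G2 = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = f[i - 1:i + 2, j - 1:j + 2]
            G1[i, j] = np.sum(win * SX)   # formula (16)
            G2[i, j] = np.sum(win * SY)   # formula (17)
    M = np.hypot(G1, G2)                  # formula (14)
    theta = np.arctan2(G2, G1)            # formula (15)
    return G1, G2, M, theta
```

For a vertical step edge the gradient is purely horizontal, so G2 vanishes, θ is 0, and M peaks along the edge column.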
Specific embodiment 9:
Unlike specific embodiment 8, in the pupil center localization method based on digital video image processing of the present embodiment, the process in step 4 of determining the pupil center position using Hough-transform circle detection is specifically:
Step 4.1: set up the standard equation of the circle formed by the pupil edge pixels: (X - a)^2 + (Y - b)^2 = r^2, where X and Y are the abscissa and ordinate in the rectangular coordinate system on which the standard circle equation relies, (a, b) are the center coordinates of the circle, and r is its radius. These are the parameters of the image, expressed as (a, b, r) in the parameter space; one circle in the image coordinate space corresponds to one point in the parameter space;
Step 4.2: carry out the calculation of the pupil center:
Create a three-dimensional array A_3 and initialize every element value to 0;
Let the parameters a and b increase over their value ranges while solving for the r value satisfying the above equation;
Each time an (a, b, r) value is computed, add 1 to the element of array A_3 at position [a][b][r];
After the calculation, find the array position a, b, r corresponding to the element with the maximum value in the array; these are the required circle parameters, and the center coordinates (a, b) at that position are the pupil center.
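Steps 4.1 and 4.2 can be sketched as a brute-force accumulator: for each edge pixel and each candidate centre (a, b), the radius r is solved from the circle equation and the corresponding accumulator cell receives a vote. The toy edge set here consists of the 12 exact lattice points of a circle of radius 5 about (10, 10); the search ranges are our illustrative choices.

```python
import math
import numpy as np

def hough_circle(edge_points, a_range, b_range, r_max):
    """Vote in a 3-D accumulator A3[a][b][r]; the maximum cell gives the circle."""
    A3 = np.zeros((len(a_range), len(b_range), r_max + 1), dtype=int)
    for (x, y) in edge_points:
        for ai, a in enumerate(a_range):
            for bi, b in enumerate(b_range):
                r = int(round(math.hypot(x - a, y - b)))
                if 0 < r <= r_max:
                    A3[ai, bi, r] += 1
    ai, bi, r = np.unravel_index(np.argmax(A3), A3.shape)
    return a_range[ai], b_range[bi], int(r)

# the 12 lattice points lying exactly on the circle centred at (10, 10), radius 5
pts = [(15, 10), (5, 10), (10, 15), (10, 5),
       (13, 14), (14, 13), (13, 6), (14, 7),
       (7, 14), (6, 13), (7, 6), (6, 7)]
```

Searching centres a, b over [5, 15] recovers (10, 10, 5): twelve concyclic points determine a unique circle, so the true cell collects all twelve votes while every other cell collects fewer.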
The present invention may also have various other embodiments. Without departing from the spirit and essence of the present invention, those skilled in the art can make various corresponding changes and modifications in accordance with the present invention, but these corresponding changes and modifications shall all fall within the protection scope of the appended claims of the present invention.

Claims (9)

1. A pupil center localization method based on digital video image processing, characterized in that the method is realized by the following steps:
Step 1: acquire a video image with a camera and output each frame in digital RGB format;
Step 2: perform face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm, and determine the face region in the video image; in the coarse face-image segmentation, the YCgCr color space is combined with morphological operations; strong-classifier cascading is used in the AdaBoost training algorithm;
Step 3: after the face region is determined, locate the eye image by exploiting the fact that the gray values of the eye region differ markedly from those of other facial regions;
Step 4: determine the pupil center position using edge detection and Hough-transform circle detection.
2. The pupil center localization method based on digital video image processing according to claim 1, characterized in that: the process in step 1 of acquiring a video image with a camera and outputting each frame in digital RGB format uses a high-speed camera to acquire the digital video image.
3. The pupil center localization method based on digital video image processing according to claim 1 or 2, characterized in that: the process in step 2 of performing face detection and localization by combining coarse face-image segmentation with the AdaBoost algorithm and determining the face region in the video image is as follows:
From the RGB data collected by the camera, the representation of each pixel in the YCgCr color space is computed; the Cg and Cr values of each pixel in the YCgCr color space are then used to perform skin-color clustering on the video image, and a binary image matrix is constructed from the skin-color clustering information; a large-area dilation is performed first, followed by a large-area erosion, these morphological operations yielding the coarse face candidate regions; using the pixel coordinates of the face candidate regions, the corresponding coarse face candidate parts are extracted from the original image collected by the camera and converted to grayscale, and the AdaBoost algorithm performs face detection on the grayscale image to obtain the accurate face image.
4. pupil center's localization method according to claim 3 based on digital video image processing, it is characterised in that: institute Face datection and positioning are carried out by the method that the segmentation of rough facial image is combined with AdaBoost algorithm in the step of stating two, Determine the process of the face image in video image are as follows:
Step 2 one calculates it in the description of YCgCr color space according to the data of the collected RGB color of camera Form carries out colour of skin cluster:
Video image is acquired by camera and each frame is exported with digital rgb format, by the pixel of each frame according to formula (1) it is calculated in the description form of YCgCr color space, obtains Cg the and Cr value of each pixel in these pictures, wherein RGB color refers to R (Red: red), G (Green: green), B (Blue: blue) for three primary colors, carries out different degrees of aliasing To describe color, YCgCr color space, which refers to, describes color with brightness and coloration, and Y indicates that brightness value, Cg indicate green chroma Component, Cr indicate red chrominance component;
Since the Cg values of skin pixels are mainly distributed in the interval [85, 135] and the Cr values in the interval [130, 165], after the pixels are transformed into the YCgCr space, the Cg and Cr values of each pixel are traversed, skin pixels are filtered out by formula (2), and the binary image matrix BW is constructed from the coordinates of each pixel and the Cg, Cr distribution information;
where and denotes the logical AND relation and or the logical OR relation; BW is the binary image matrix whose pixel values are 255 or 0;
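Formulas (1) and (2) are not reproduced in this text, so the sketch of steps 2-1 below is partly an assumption: the RGB-to-Cg/Cr coefficients are the ones commonly quoted for the YCgCr color space (not necessarily the patent's exact formula (1)), while the thresholding uses the Cg ∈ [85, 135] and Cr ∈ [130, 165] intervals stated above. The `demo` image is illustrative.

```python
def rgb_to_cg_cr(r, g, b):
    """Return the (Cg, Cr) chroma components of one RGB pixel.
    Coefficients are the commonly cited YCgCr ones -- an assumption."""
    cg = 128.0 - 0.318 * r + 0.4392 * g - 0.1212 * b
    cr = 128.0 + 0.4392 * r - 0.3678 * g - 0.0714 * b
    return cg, cr

def skin_mask(image):
    """Build the binary matrix BW: 255 where Cg in [85,135] AND Cr in [130,165]."""
    h, w = len(image), len(image[0])
    bw = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cg, cr = rgb_to_cg_cr(*image[y][x])
            if 85 <= cg <= 135 and 130 <= cr <= 165:
                bw[y][x] = 255
    return bw

# A skin-toned pixel such as RGB (200, 140, 120) falls inside both intervals,
# while a saturated green pixel does not.
demo = [[(200, 140, 120), (0, 255, 0)]]
```

In practice the per-pixel loop would be vectorized (e.g. with NumPy) over whole frames; the loop form here only mirrors the traversal described in the claim.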
Step 2-2: apply a large-area dilation to BW by formula (3), then a large-area erosion to BW_P by formula (4);
BW_F = BW_P Θ B    (4)
where A is a 25×25 square structuring element and B is a 20×20 square structuring element;
the symbol ⊕ denotes the morphological dilation operation and the symbol Θ the morphological erosion operation; BW_P is the binary image matrix obtained after dilating BW with A, and BW_F is the binary image matrix obtained after eroding BW_P with B;
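Step 2-2 can be sketched with a pure-Python morphology on the 0/255 binary matrix. In practice OpenCV's `cv2.dilate` / `cv2.erode` with 25×25 and 20×20 square kernels would be used; the demo below uses a 3×3 element only to keep the example small.

```python
def _morph(bw, size, op):
    """Apply max (dilation) or min (erosion) over a size x size square window."""
    h, w, r = len(bw), len(bw[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [bw[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = op(vals)
    return out

def dilate(bw, size):   # BW_P = BW (+) A  (formula (3))
    return _morph(bw, size, max)

def erode(bw, size):    # BW_F = BW_P Θ B  (formula (4))
    return _morph(bw, size, min)
```

Dilation first closes the holes inside the skin region, and the subsequent (slightly smaller) erosion trims the region back, leaving a filled face candidate blob.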
Step 2-3: from the binary image matrix BW_F obtained in step 2-2, use the coordinates of the pixels whose value is 255 to extract the corresponding pixels from the RGB original image collected by the camera, then apply formula (5) for graying to obtain the rough face image, so that a face can further be detected with the AdaBoost algorithm;
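Formula (5) for the graying step is not reproduced in this text; the standard luminance weighting (ITU-R BT.601) shown below is an assumption about what it computes.

```python
def to_gray(r, g, b):
    """Gray value of an RGB pixel; BT.601 weights assumed for formula (5)."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```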
Step 2-4: compute the Haar-like feature values of the grayed rough face image obtained in step 2-3, then detect and precisely determine the face image. Since the image contains many pixels and the computation of Haar-like feature values is heavy, the present invention uses the integral-image method to accelerate this computation; feature selection is then carried out by the AdaBoost algorithm to obtain the accurate face image.
5. The pupil center localization method based on digital video image processing according to claim 4, characterized in that: in said step 2-4, the process of computing the Haar-like feature values of the grayed rough face image obtained in step 2-3, accelerating the computation with the integral-image method, performing feature selection by the AdaBoost algorithm, and detecting and precisely determining the face image is specifically:
a) For the grayed rough face image obtained in step 2-3, compute the integral image ii by formula (6),
where ii(x, y) denotes the integral image and i(x′, y′) denotes the pixel gray value at position (x′, y′) in the original image;
b) Obtain the Haar-like features by operations on the integral image;
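Steps a)-b) can be sketched as follows. Formula (6) defines ii(x, y) as the sum of all pixels above and to the left of (x, y); with that table, the sum of any rectangle, and hence any Haar-like feature, costs only a few lookups. The one-row/one-column zero padding of `ii` is an implementation convenience, not part of the patent's notation.

```python
def integral_image(img):
    """Integral image of formula (6), padded with a zero row/column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0                      # running sum along the current row
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h][x:x+w] from four corner lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect_horizontal(ii, x, y, w, h):
    """A two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```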
c) Perform feature selection with the AdaBoost algorithm and detect the accurate face image result;
d) Connect the multiple strong classifiers obtained by training into a cascade classifier, specifically:
Given the training samples, let P denote the target sample set and N the non-target sample set; let f denote the upper bound on the classification error of each layer of the cascade classifier, d the lower bound on the classification accuracy of each layer, and F_t the error of the entire cascade classifier;
1) Initialize the classifier error F_i at the i-th iteration: F_i = 1, i = 1;
2) Iterate the following steps until F_i < F_t:
2a) For the i-th layer, set a threshold and train with the training samples so that the error f_i of this layer is less than f and the classification accuracy d_i of this layer is greater than d;
2b) Let F_{i+1} = f_i · F_i and i = i + 1;
2c) If F_i > F_t, scan all non-face pictures with the current classifier and put the misclassified pictures into the non-target sample set;
3) The cascade classifier algorithm terminates.
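The cascade-building loop of steps 1)-3) can be sketched as follows. The `train_layer` callable is a hypothetical stub standing in for real per-layer AdaBoost training; only the control flow (F_1 = 1, layer error bound f, product update F_{i+1} = f_i · F_i, termination at F_i < F_t) comes from the claim.

```python
def build_cascade(f, F_target, train_layer):
    """Add layers until the cascade error F drops below F_target (step 2)."""
    layers, F = [], 1.0            # step 1: F_1 = 1
    while F >= F_target:           # step 2: iterate until F_i < F_t
        layer, f_i = train_layer() # step 2a: train one layer
        assert f_i < f             # ... whose error must beat the bound f
        layers.append(layer)
        F *= f_i                   # step 2b: F_{i+1} = f_i * F_i
    return layers, F

# With a stub layer of error 0.5 and a target of 0.1, four layers are needed
# (0.5**4 = 0.0625 < 0.1).
layers, F = build_cascade(f=0.6, F_target=0.1, train_layer=lambda: ("layer", 0.5))
```

Step 2c (re-mining false positives from non-face images into the negative set) is omitted here because it needs real image data.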
6. The pupil center localization method based on digital video image processing according to claim 5, characterized in that: the process of performing feature selection with the AdaBoost algorithm and detecting the accurate face image result is as follows:
The training process of the AdaBoost algorithm is as follows:
Given a set containing N training samples (x_1, y_1), (x_2, y_2), …, (x_N, y_N), where y_i denotes the class of the training sample, y_i ∈ {−1, 1}, i = 1, 2, …, N, i.e. y_i takes the value 1 for target samples and −1 for non-target samples. The Haar-like feature of each training sample is taken as a weak classifier h, so that T weak classifiers h are obtained from the N training samples; the iteration index is t = 1, 2, …, T;
C1) When t = 1, initialize the weight distribution of the training samples; each sample is initially assigned the same weight w_i = 1/N, so the initial weight distribution D_1(i) of the training sample set is:
D_1(i) = (w_1, w_2, …, w_N) = (1/N, 1/N, …, 1/N)
Classify the N training samples with each of the T weak classifiers h, where h(x_i) denotes the return value of weak classifier h after classifying training sample x_i. If h(x_i) = y_i the classification is correct; if h(x_i) ≠ y_i the classification is wrong. Compute the error rate ε of each weak classifier h:
where P[h(x_i) ≠ y_i] takes the value 1 when h(x_i) ≠ y_i and the value 0 when h(x_i) = y_i;
Each weak classifier h thus has its own error rate ε; the weak classifier with the smallest error rate among all h is taken as the basic classifier H_1 at t = 1, and its error rate is recorded as the minimal error rate ε_1.
Compute the weight α_1 of the basic classifier H_1 in the final classifier f(x_i):
C2) When t = 2, update the weight distribution D_2(i) of the training samples:
where Z_1 denotes the normalization factor, so that the updated weights D_2(i) sum to 1; take:
Compute the error rate of each h under the weight set D_2(i); by the error-rate-minimum principle, obtain the basic classifier H_2 and its minimal error rate ε_2.
Compute the weight α_2 of the basic classifier H_2 in the final classifier:
C3) Then, at the t-th iteration, update the weight distribution D_t(i) of the training samples:
where Z_{t−1} denotes the normalization factor, so that the updated weights D_t(i) sum to 1; take:
Using the training samples and the weight set D_t(i), train by the error-rate-minimum principle to obtain the basic classifier H_t and the minimal error rate ε_t.
Compute the weight α_t of H_t in the final classifier:
C4) Increment t from 1 by one at each iteration, up to and including T;
C5) Finally combine the basic classifiers H_t by their weights α_t to obtain the final classifier f(x_i):
C6) Finally obtain a strong classifier S(x_i) via the sign function sign:
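The loop C1)-C6) can be sketched on toy 1-D data. Decision stumps stand in for the Haar-feature weak classifiers, and the weight formula α_t = ½·ln((1 − ε_t)/ε_t) is the standard AdaBoost expression, assumed here to match the patent's unreproduced formulas.

```python
import math

def adaboost(samples, labels, stumps, T):
    N = len(samples)
    D = [1.0 / N] * N                             # C1: uniform initial weights
    ensemble = []
    for _ in range(T):
        # pick the stump with minimal weighted error (error-rate-minimum principle)
        errs = [sum(D[i] for i in range(N) if h(samples[i]) != labels[i])
                for h in stumps]
        k = min(range(len(stumps)), key=lambda j: errs[j])
        eps = max(errs[k], 1e-12)                 # clamp to avoid log(inf)
        alpha = 0.5 * math.log((1.0 - eps) / eps) # weight alpha_t
        ensemble.append((alpha, stumps[k]))
        # C2/C3: reweight the samples and normalize by Z_t
        D = [D[i] * math.exp(-alpha * labels[i] * stumps[k](samples[i]))
             for i in range(N)]
        Z = sum(D)
        D = [d / Z for d in D]
    # C5/C6: final strong classifier via the sign function
    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return strong

# Three threshold stumps on the real line; the c=1.5 stump fits the labels exactly.
stumps = [lambda x, c=c: 1 if x > c else -1 for c in (0.5, 1.5, 2.5)]
strong = adaboost([0, 1, 2, 3], [-1, -1, 1, 1], stumps, T=3)
```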
7. The pupil center localization method based on digital video image processing according to claim 6, characterized in that: in said step three, after the face image has been determined, the eye image is located by exploiting the feature that the gray values of the eye region differ considerably from those of the other regions of the face, specifically:
Step 3-1: compute the eye candidate points:
For each pixel p in the accurate face image obtained in step two, compute the gray values of the 8 surrounding pixels, giving the gray value p(x, y) of point p and the gray values p_i(x, y), i = 1, …, 8, of the 8 surrounding pixels; set a threshold t and let:
If:
then pixel p(x, y) is judged to be an eye candidate point;
where n_i is the Boolean value indicating whether the gray value p(x, y) of point p is greater than the gray value p_i(x, y) of the i-th of its 8 surrounding pixels;
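The inequality after "If:" is not reproduced in this text, so the decision rule in the sketch below is an assumption: n_i is set to 1 when the center pixel is darker than its i-th neighbor by more than t (eye regions are dark), and p is accepted as a candidate only when all eight n_i equal 1.

```python
def is_eye_candidate(gray, x, y, t):
    """Assumed rule for step 3-1: p is a candidate if it is darker than
    every one of its 8 neighbours by more than threshold t."""
    p = gray[y][x]
    neighbours = [gray[y + dy][x + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    n = [1 if q - p > t else 0 for q in neighbours]   # Boolean indicators n_i
    return sum(n) == 8

# A dark pixel surrounded by bright skin qualifies for a moderate threshold.
gray = [[200, 200, 200],
        [200,  40, 200],
        [200, 200, 200]]
```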
Step 3-2: then perform an auxiliary judgment according to the physiological size of the human eye: in the eye candidate-point regions, filter out connected regions of candidate pixels that exceed the set width threshold T_w or height threshold T_h, and filter out eye candidate points around which the number of other eye candidate points is less than the set count threshold T_n;
Step 3-3: then judge the eye candidate regions remaining after the filtering of step 3-2 according to formulas (9), (10) and (11); when the center coordinates of two candidate regions satisfy formulas (9), (10) and (11), the two regions are determined to be the eye regions, completing the localization of the eye image;
d_ij > t1·W    (9)
d_ij < t2·W    (10)
|x_i − x_j| < H    (11)
where t1 and t2 are fixed thresholds; (x_i, y_i) denotes the center coordinate of the i-th eye candidate region; d_ij denotes the Euclidean distance between the center coordinates of the i-th and j-th eye candidate regions; H denotes the height of a candidate region and W its width.
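The pairing test of step 3-3 can be sketched as follows. The values t1 = 0.5 and t2 = 3.0 are illustrative stand-ins for the patent's fixed thresholds, and reading condition (11) as an alignment check (the difference of the constrained coordinates must stay below the region height H) is an interpretation, not a statement of the patent.

```python
import math

def is_eye_pair(ci, cj, W, H, t1=0.5, t2=3.0):
    """Accept regions i, j as the eye pair when (9), (10) and (11) hold."""
    (xi, yi), (xj, yj) = ci, cj
    d = math.hypot(xi - xj, yi - yj)        # Euclidean distance d_ij
    return (t1 * W < d            # (9): far enough apart
            and d < t2 * W        # (10): not too far apart
            and abs(xi - xj) < H) # (11): roughly aligned
```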
8. The pupil center localization method based on digital video image processing according to claim 7, characterized in that: in the process of determining the pupil center location using edges in said step four, the edge detection method uses the Sobel gradient operator, specifically: the eye image determined in step three is enhanced by histogram equalization and then binarized, after which edge detection is carried out with the Sobel gradient operator; the 3 × 3 operators used by the Sobel gradient operator in the horizontal and vertical directions are respectively as follows:
Then, letting f(x, y) be the eye image after edge detection by the Sobel gradient operator, the polar radius M(x, y) and polar angle θ(x, y) of each edge pixel's rectangular coordinate anchor point (x, y) in the polar coordinate system are computed by formulas (14) and (15), for the subsequent Hough-transform circle detection;
θ(x, y) = tan⁻¹(G_2(x, y)/G_1(x, y))    (15)
where G_1(x, y) and G_2(x, y) denote the first derivatives of each pixel in the image in the horizontal and vertical directions respectively, computed by the following formulas (16) and (17);
The symbol ∗ in formulas (16) and (17) denotes the convolution operation of digital image processing.
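The Sobel step can be sketched as below: convolve with the 3×3 horizontal and vertical kernels to get G_1 and G_2 (formulas (16)-(17)), then compute the polar radius M = √(G_1² + G_2²) of (14) and the angle of (15). Using `atan2` instead of the literal tan⁻¹(G_2/G_1) is an implementation choice that avoids division by zero.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal kernel (G_1)
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical kernel (G_2)

def convolve_at(img, kernel, x, y):
    """3x3 correlation of kernel with the neighbourhood of (x, y)."""
    return sum(kernel[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def sobel(img, x, y):
    g1 = convolve_at(img, SOBEL_X, x, y)
    g2 = convolve_at(img, SOBEL_Y, x, y)
    m = math.hypot(g1, g2)       # polar radius M(x, y), formula (14)
    theta = math.atan2(g2, g1)   # polar angle theta(x, y), formula (15)
    return m, theta

# A vertical step edge: the gradient is purely horizontal, so theta = 0.
img = [[0, 0, 255]] * 3
```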
9. The pupil center localization method based on digital video image processing according to claim 8, characterized in that: the process of determining the pupil center location by Hough-transform circle detection in said step four is specifically:
Step 4-1: set the standard equation of the circle formed by the pupil edge pixels: (X − a)² + (Y − b)² = r²; where X and Y are the abscissa and ordinate values on the rectangular coordinate system in which the circle's standard equation is expressed, (a, b) is the circle center coordinate and r the circle radius; these are parameters of the image, so a circle in image coordinate space is represented as (a, b, r), a single point in the corresponding parameter space;
Step 4-2: the computation process of the pupil center:
Create a three-dimensional array A_3 and initialize every element value to 0;
Let the parameters a and b increase over their value ranges, solving at the same time for the r value that satisfies the above equation;
Each time an (a, b, r) value is computed, add 1 to the element of array A_3 at position [a][b][r];
After the computation, find the a, b, r values of the array position corresponding to the element with the maximal value in the array; these are the required circle parameters, and the circle center coordinate (a, b) at that point is the pupil center.
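Steps 4-1 / 4-2 can be sketched as below. A sparse `Counter` stands in for the dense 3-D array A_3 (an implementation choice); for every edge point and every (a, b) in range, r is solved from (X − a)² + (Y − b)² = r², the cell (a, b, r) is voted, and the cell with the most votes gives the pupil circle, whose (a, b) is the pupil center.

```python
import math
from collections import Counter

def hough_circles(edge_points, a_range, b_range, r_max):
    """Vote in (a, b, r) parameter space; return the best-supported circle."""
    acc = Counter()                # sparse stand-in for the 3-D array A_3
    for (x, y) in edge_points:
        for a in a_range:
            for b in b_range:
                # solve r from (X - a)^2 + (Y - b)^2 = r^2, quantized to integers
                r = round(math.hypot(x - a, y - b))
                if 0 < r <= r_max:
                    acc[(a, b, r)] += 1
    return max(acc, key=acc.get)   # (a, b, r) with the most votes

# Edge points sampled from a circle of center (5, 5) and radius 3:
pts = [(5 + round(3 * math.cos(k)), 5 + round(3 * math.sin(k)))
       for k in [i * math.pi / 4 for i in range(8)]]
found = hough_circles(pts, range(3, 8), range(3, 8), r_max=5)
```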
CN201811408486.7A 2018-11-23 2018-11-23 A kind of pupil center's localization method based on digital video image processing Pending CN109558825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811408486.7A CN109558825A (en) 2018-11-23 2018-11-23 A kind of pupil center's localization method based on digital video image processing


Publications (1)

Publication Number Publication Date
CN109558825A true CN109558825A (en) 2019-04-02

Family

ID=65867196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811408486.7A Pending CN109558825A (en) 2018-11-23 2018-11-23 A kind of pupil center's localization method based on digital video image processing

Country Status (1)

Country Link
CN (1) CN109558825A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598635A (en) * 2019-09-12 2019-12-20 北京大学第一医院 Method and system for face detection and pupil positioning in continuous video frames
CN111126330A (en) * 2019-12-31 2020-05-08 北京理工大学 Pupil membrane center positioning method and student class attendance fatigue degree detection method
CN111291701A (en) * 2020-02-20 2020-06-16 哈尔滨理工大学 Sight tracking method based on image gradient and ellipse fitting algorithm
CN111724381A (en) * 2020-06-24 2020-09-29 武汉互创联合科技有限公司 Microscopic image cell counting and posture identification method based on multi-view cross validation
CN112542241A (en) * 2020-04-07 2021-03-23 徐敬媛 Cloud storage type color feature analysis platform and method
CN112541433A (en) * 2020-12-11 2021-03-23 中国电子技术标准化研究院 Two-stage human eye pupil accurate positioning method based on attention mechanism
CN112884761A (en) * 2021-03-19 2021-06-01 东营市阔海水产科技有限公司 Aquatic economic animal head identification method, terminal device and readable storage medium
CN113116292A (en) * 2021-04-22 2021-07-16 上海交通大学医学院附属第九人民医院 Eye position measuring method, device, terminal and equipment based on eye appearance image
CN115115641A (en) * 2022-08-30 2022-09-27 江苏布罗信息技术有限公司 Pupil image segmentation method
CN116823746A (en) * 2023-06-12 2023-09-29 广州视景医疗软件有限公司 Pupil size prediction method and device based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440476A (en) * 2013-08-26 2013-12-11 大连理工大学 Locating method for pupil in face video
US20170042462A1 (en) * 2015-08-10 2017-02-16 Neuro Kinetics, Inc. Automated Data Acquisition, Appraisal and Analysis in Noninvasive Rapid Screening of Neuro-Otologic Conditions Using Combination of Subject's Objective Oculomotor Vestibular and Reaction Time Analytic Variables
CN106557750A (en) * 2016-11-22 2017-04-05 重庆邮电大学 It is a kind of based on the colour of skin and the method for detecting human face of depth y-bend characteristics tree
CN107220624A (en) * 2017-05-27 2017-09-29 东南大学 A kind of method for detecting human face based on Adaboost algorithm
CN108268859A (en) * 2018-02-08 2018-07-10 南京邮电大学 A kind of facial expression recognizing method based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Sun Hui: "Research on Face Detection Algorithms Based on Color Constancy and Skin Color Information", China Master's Theses Full-text Database, Information Science and Technology Series *
Wang Zijing: "Research on Key Technologies of Pupil Localization in the Eye Region", China Master's Theses Full-text Database, Information Science and Technology Series *
Dong Dawa et al.: "Research on a Desktop Eye-Tracking System", Electronic Technology R&D *
Gong Jing: "Research on Facial Expression Recognition Algorithms Based on Facial Key Points", China Master's Theses Full-text Database, Information Science and Technology Series *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190402