CN103218605A - Quick eye locating method based on integral projection and edge detection - Google Patents
Quick eye locating method based on integral projection and edge detection
- Publication number: CN103218605A
- Authority: CN (China)
- Prior art keywords: image, point, gray, value, pixel
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a quick eye-locating method based on integral projection and edge detection. The method comprises the following steps: 1) perform gray-level transformation on the detected face image and smooth and denoise it with a filter; 2) use the horizontal integral projection method to obtain the approximate position of the eyes; 3) perform edge extraction and binarization on the image from step 1; 4) compute the row complexity and column complexity and locate the position of the eyes precisely; 5) apply a correction to obtain the final position of the eyes. The method has the advantages of high computation speed, effective suppression of the influence of ornaments in the face image on eye localization, fast localization, and stability.
Description
Technical field
The invention belongs to the field of feature-point localization in pattern recognition, and relates in particular to a quick eye-locating method based on integral projection and edge detection. It proposes a fast and convenient solution mainly aimed at the problem of locating human eyes in an input image, and can locate the eyes quickly and effectively.
Background technology
Computer face recognition has been a very active research field in recent years. Its applications are wide, including sex and age analysis, security-system authentication, expression analysis, and video conferencing. It mainly comprises several steps: face detection, feature localization and extraction, and feature recognition. The human eye is a key feature of the face, and whether it can be located accurately has a tremendous influence on the results of feature extraction and feature recognition.
Based on the gray-level information and edge information of the image, the invention proposes a fast and effective eye-detection algorithm. The algorithm can locate the eyes quickly and suppresses well the influence that illumination and jewelry bring to the localization result.
Summary of the invention
The invention provides a concise and highly accurate quick eye-locating method based on integral projection and edge detection.
In order to realize this goal, the invention adopts the following technical scheme:
Step 2: apply the AdaBoost algorithm to the collected digital image to perform face detection, and take the face image obtained from it.
Step 3: preprocess the face image acquired in step 2, as follows.
Step 3.1: convert the acquired face image into a gray-level image, and normalize the gray-level image to an image of W×H pixels, where W and H are positive integers denoting respectively the number of rows and the number of columns of the normalized gray-level image.
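As a concrete illustration of step 3.1, the gray conversion can be sketched in Python (the helper name and the Rec. 601 luminance weights are assumptions for illustration; the patent states its own conversion formula only in the embodiment, where its symbols were lost):

```python
# Sketch of step 3.1: convert an RGB face image (nested lists of (R, G, B)
# tuples) to a gray-level image with the standard luminance weights.
# The normalization/resize to W x H is omitted here for brevity.
def to_gray(rgb_image):
    """rgb_image: rows of (R, G, B) tuples -> rows of integer gray values."""
    return [[int(round(0.299 * r + 0.587 * g + 0.114 * b)) for (r, g, b) in row]
            for row in rgb_image]

face = [[(255, 0, 0), (0, 255, 0)],
        [(0, 0, 255), (255, 255, 255)]]
gray = to_gray(face)
```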
Step 3.2: smooth and denoise the W×H image with a Gaussian filter. The concrete method is as follows. First, the gray value of each boundary pixel of the smoothed, denoised image is taken to be the gray value of the corresponding boundary pixel of the W×H image before smoothing and denoising. Second, for the gray value of each non-boundary pixel of the W×H image, choose the 3×3 Gaussian template centered on that pixel and obtain the new gray value of the center pixel as the template-weighted sum of the nine pixels in the 3×3 grid: the four diagonal neighbors of the center (the lower-left, upper-left, lower-right and upper-right corner points of the grid), the four direct neighbors (the points horizontally to the left and right of the center and the points directly below and above it), and the center point itself. Writing i for the row coordinate, j for the column coordinate, and g(i, j) for the gray value of point (i, j), the standard 3×3 Gaussian template gives the filtered gray value

g'(i, j) = [g(i−1, j−1) + g(i−1, j+1) + g(i+1, j−1) + g(i+1, j+1) + 2·(g(i−1, j) + g(i+1, j) + g(i, j−1) + g(i, j+1)) + 4·g(i, j)] / 16.

Traversing all non-boundary pixels of the W×H image yields the image after processing; its size is still W×H.
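The smoothing of step 3.2 can be sketched as a minimal pure-Python routine, assuming the standard 1-2-1 Gaussian template with integer division (the function name is illustrative):

```python
def gaussian_smooth(img):
    """3x3 Gaussian smoothing; boundary pixels are copied through unchanged,
    as step 3.2 specifies."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]              # boundaries keep original values
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = (
                img[i-1][j-1] + img[i-1][j+1] + img[i+1][j-1] + img[i+1][j+1]
                + 2 * (img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1])
                + 4 * img[i][j]
            ) // 16
    return out

flat = [[100] * 5 for _ in range(5)]
smoothed = gaussian_smooth(flat)               # a constant image is unchanged
```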
Step 4: perform horizontal integral projection on the smoothed image. Writing g(y, x) for the gray value of the point at row y and column x, with W the image height (number of rows) and H the image width (number of columns), the projection of row y is

M(y) = (1/H) · Σ_{x=1}^{H} g(y, x).

From the W row-projection results, choose the one with the minimum value, and denote by y₀ the row number at which the horizontal integral projection attains its minimum. From the smoothed image, select as the region to be detected, in which the eyes may exist, the band of rows whose vertical coordinates lie in the range [y₀ − ⌊d⌋, y₀ + ⌊d⌋], where ⌊d⌋ denotes the value of d rounded down and d is the interval-estimation parameter for the vertical range in which the eyes may exist.
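The projection and band selection of step 4 can be sketched as follows (function names are illustrative; `half_height` stands in for the rounded-down interval-estimation parameter):

```python
def horizontal_projection(img):
    """Average gray value of each row (horizontal integral projection)."""
    return [sum(row) / len(row) for row in img]

def eye_band(img, half_height):
    """Darkest row plus a band of +/- half_height rows around it."""
    proj = horizontal_projection(img)
    y0 = min(range(len(proj)), key=lambda y: proj[y])   # minimum-projection row
    lo = max(0, y0 - half_height)
    hi = min(len(img) - 1, y0 + half_height)
    return y0, lo, hi

img = [[200] * 4, [40] * 4, [180] * 4, [190] * 4]
y0, lo, hi = eye_band(img, 1)                  # row 1 is darkest
```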
Step 5: perform edge extraction on the smoothed image; the specific implementation is as follows.
Step 5.1: First, the gray value of each boundary point of the image corresponding to the gradient-magnitude matrix after the gradient calculation is taken to be the gray value of the corresponding boundary point of the image before the gradient calculation. Second, for each non-boundary pixel point (i, j) of the image, where i is the horizontal coordinate and j the vertical coordinate of the point, choose a horizontal-direction edge-detection operator and a vertical-direction edge-detection operator, compute the first-order partial derivatives in the horizontal and vertical directions at the point, and from them the gradient magnitude and gradient direction at the point. The operators act on the 2×2 grid whose upper-left element is the point (i, j) itself and whose other elements are the pixels at the lower-left, upper-right and lower-right corners of that grid. With f(i, j) the gray value of point (i, j), first-difference operators of the Canny type give

P(i, j) = [f(i+1, j) − f(i, j) + f(i+1, j+1) − f(i, j+1)] / 2,
Q(i, j) = [f(i, j+1) − f(i, j) + f(i+1, j+1) − f(i+1, j)] / 2,

where P and Q are respectively the horizontal-direction and vertical-direction first-order partial derivatives at the point, and

M(i, j) = √(P(i, j)² + Q(i, j)²),  θ(i, j) = arctan(Q(i, j) / P(i, j)),

where M(i, j) is the gradient magnitude and θ(i, j) the gradient direction of the non-boundary pixel point.
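Since the operator matrices of step 5.1 were lost in extraction, the gradient computation can be sketched under the assumption of Canny-style 2×2 first-difference operators (indexing is `img[row][col]`; the function name is illustrative):

```python
import math

def gradient(img, i, j):
    """First differences over the 2x2 grid whose upper-left pixel is (i, j),
    then magnitude and direction from the standard formulas."""
    gx = (img[i][j+1] - img[i][j] + img[i+1][j+1] - img[i+1][j]) / 2.0
    gy = (img[i+1][j] - img[i][j] + img[i+1][j+1] - img[i][j+1]) / 2.0
    mag = math.hypot(gx, gy)                    # gradient magnitude
    theta = math.degrees(math.atan2(gy, gx))    # gradient direction in degrees
    return gx, gy, mag, theta

step_edge = [[0, 100],
             [0, 100]]    # a vertical edge: purely horizontal gradient
gx, gy, mag, theta = gradient(step_edge, 0, 0)
```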
Step 5.2: discretize the gradient-direction value of each non-boundary pixel point obtained in step 5.1 to obtain a new discrete direction value. Choose the 3×3 window centered on the non-boundary pixel point and process its gradient magnitude against the gradient magnitudes of the two pixels of the window that lie along the discretized direction: if the magnitude at the center is smaller than that of either of these two neighbors, it is set to 0, otherwise it is retained (non-maximum suppression). With the processed gradient magnitudes as the central elements and the boundary-point gray values of the image as the boundary elements, construct a matrix; this matrix corresponds to the produced edge image.
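The suppression step of 5.2 can be sketched for one discretized direction (the patent compares against the two neighbors along the discrete gradient direction; only the horizontal case is shown here for brevity, and the function name is illustrative):

```python
def nms_horizontal(mag):
    """Non-maximum suppression along the horizontal direction: a magnitude
    survives only if it is no smaller than both horizontal neighbours."""
    h, w = len(mag), len(mag[0])
    out = [row[:] for row in mag]
    for i in range(h):
        for j in range(1, w - 1):
            if mag[i][j] < mag[i][j-1] or mag[i][j] < mag[i][j+1]:
                out[i][j] = 0          # suppressed: not a local maximum
    return out

row = [[1, 5, 9, 5, 1]]
thinned = nms_horizontal(row)          # only the ridge at index 2 survives
```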
Step 6.1: determine the threshold with the maximum between-class variance method (Otsu's method); the process is as follows. Count the gray value of each pixel of the edge image, taking the highest gray value as the top gray level m, where m is an integer, so that the image's gray-level range consists of the integer values in the interval [0, m]. Let n_t be the number of pixels whose gray level is t, and N = n_0 + n_1 + … + n_m the total number of pixels, so that the probability of gray level t is p_t = n_t / N. An integer k divides the gray levels into two groups C₀ and C₁, where C₀ is the group of gray levels less than or equal to k and C₁ the group of gray levels greater than k and not greater than m. The variance between the two gray-level groups is computed by

σ²(k) = ω₀(μ₀ − μ)² + ω₁(μ₁ − μ)²,

where μ is the mean gray value of the whole image, μ₀ and μ₁ are the mean values of groups C₀ and C₁, and ω₀ and ω₁ are the probabilities of groups C₀ and C₁. Take each integer value in the interval [0, m] in turn as k and compute the corresponding variance value σ²(k); select the maximum among the m + 1 variance values, and take the k value corresponding to the maximum variance as the threshold T.
Step 6.2: use the threshold T obtained in step 6.1 to binarize the edge image: the gray value of each pixel whose gray value is greater than or equal to T is set to 255, and the gray value of each pixel below T is set to 0, giving the binary image.
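Steps 6.1 and 6.2 can be sketched as a direct pure-Python transcription of Otsu's method plus the ≥T/255 binarization rule (function names are illustrative):

```python
def otsu_threshold(img):
    """Maximum between-class variance (Otsu) threshold over levels [0, m]."""
    pixels = [p for row in img for p in row]
    m = max(pixels)
    n = [0] * (m + 1)                          # histogram n_t
    for p in pixels:
        n[p] += 1
    total = len(pixels)
    mu = sum(pixels) / total                   # global mean gray value
    best_k, best_var = 0, -1.0
    for k in range(m + 1):
        w0 = sum(n[:k + 1]) / total            # probability of group C0
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            var = 0.0
        else:
            mu0 = sum(t * n[t] for t in range(k + 1)) / (w0 * total)
            mu1 = sum(t * n[t] for t in range(k + 1, m + 1)) / (w1 * total)
            var = w0 * (mu0 - mu) ** 2 + w1 * (mu1 - mu) ** 2
        if var > best_var:
            best_k, best_var = k, var
    return best_k

def binarize(img, T):
    """Pixels >= T become 255, pixels < T become 0 (as in step 6.2)."""
    return [[255 if p >= T else 0 for p in row] for row in img]

img = [[50, 60, 200, 210]]
T = otsu_threshold(img)
bw = binarize(img, T)
```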
Step 7: accurately determine the horizontal and vertical position of the eyes; the specific implementation is as follows.
Step 7.1: for the binary image, compute the row-complexity function and the column-complexity function from the pixel values of the binary image, restricted to the face-region (candidate eye) parameters obtained in step 4. The complexity of a row (respectively column) is computed from the binary pixel values along that row (column) and measures how strongly they vary; rows and columns that cross the richly structured eye edges therefore score high.
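The complexity formula itself was lost in extraction; a common reading, assumed here, is that complexity counts black/white transitions along a row or column of the binary image (function names are illustrative):

```python
def row_complexity(bw, y):
    """Number of 0/255 transitions along row y of the binary image."""
    row = bw[y]
    return sum(abs(row[x + 1] - row[x]) // 255 for x in range(len(row) - 1))

def col_complexity(bw, x):
    """Number of 0/255 transitions along column x of the binary image."""
    col = [row[x] for row in bw]
    return sum(abs(col[y + 1] - col[y]) // 255 for y in range(len(col) - 1))

bw = [[0, 255, 0, 255],
      [0,   0, 0,   0]]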
Step 7.2: apply a known mean filter to the row-complexity function and the column-complexity function respectively, performing one-dimensional low-pass filtering to obtain a new row-complexity function and a new column-complexity function. The eye coordinates are determined by finding the maximum points of the filtered functions: by calculation, the new row-complexity function has one maximum point, giving the common vertical coordinate of the two eyes, and the new column-complexity function has two maximum points, giving two horizontal coordinates; combining them yields two candidate pixels, one for each eye.
Step 7.3: in the new (smoothed) image obtained in step 3.2, select the 6×6 neighborhood of each of the two pixels obtained in step 7.2. The two pixels of minimum gray value in these neighborhoods are set as the eye centers, and the position coordinates of these two minimum-gray pixels are taken as the final coordinates of the two eyes.
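Steps 7.2 and 7.3 can be sketched as 1-D mean filtering of a complexity profile, peak picking, and a darkest-pixel search in a window (the window radius and all names are illustrative, not the patent's):

```python
def mean_filter(profile, radius=1):
    """1-D low-pass (moving average) filtering of a complexity profile."""
    n = len(profile)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(profile[lo:hi]) / (hi - lo))
    return out

def argmax(profile):
    """Index of the maximum point of a filtered profile."""
    return max(range(len(profile)), key=lambda i: profile[i])

def darkest_in_window(gray, cy, cx, r=3):
    """Coordinates of the minimum-gray pixel in the window around (cy, cx)."""
    best = None
    for y in range(max(0, cy - r), min(len(gray), cy + r)):
        for x in range(max(0, cx - r), min(len(gray[0]), cx + r)):
            if best is None or gray[y][x] < gray[best[0]][best[1]]:
                best = (y, x)
    return best

prof = [0, 1, 8, 1, 0, 0]
smooth = mean_filter(prof)
peak = argmax(smooth)                  # the peak survives the smoothing
```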
Compared with the prior art, the invention has the following characteristics. The algorithm first applies integral projection to the image before precisely locating the eyes; this roughly determines the region where the eyes may exist and eliminates other interfering regions, for example the influence of the nose and mouth, which greatly reduces the computation time and at the same time improves the final localization result. In addition, because the eye region varies more than the other organs of the face and its edges change more strongly, the edge-detection approach identifies the eye region simply and effectively.
Description of the accompanying drawings
Fig. 1 is the flow chart of the quick eye-locating algorithm.
Fig. 2 is the discretization standard chart for the gradient-direction angle.
Fig. 3 is the horizontal integral projection of a face image.
Fig. 4 shows the eye region and the edge-detection result of that region.
Fig. 5 is a schematic diagram of the column-complexity function of the eye-region edge-detection result.
Fig. 6 is a schematic diagram of the row-complexity function of the eye-region edge-detection result.
Fig. 7 shows the column-complexity function after low-pass filtering.
Fig. 8 shows the row-complexity function after low-pass filtering.
Embodiment
In a concrete embodiment, described clearly and completely with reference to the accompanying drawings, the quick eye-locating algorithm is implemented by carrying out steps 2 through 7.3 exactly as set forth above, with two additions. In step 3.1, the acquired face image is converted into a gray-level image by computing the brightness value of each pixel from the relative intensities R, G and B of the three primary colors (e.g. the standard weighting Gray = 0.299R + 0.587G + 0.114B). In step 5.2, the gradient-direction values are discretized according to the discretization standard of Fig. 2.
Claims (1)
1. A quick eye-locating method based on integral projection and edge detection, characterized in that it is carried out according to steps 2 through 7.3 as set forth in the description above: face detection by the AdaBoost algorithm; conversion to gray scale and normalization to a W×H image; Gaussian smoothing; horizontal integral projection to select the candidate eye region; edge extraction by gradient computation and non-maximum suppression; threshold determination by the maximum between-class variance method followed by binarization; and precise localization by row- and column-complexity analysis with 6×6-neighborhood refinement.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310119843.9A CN103218605B (en) | 2013-04-09 | 2013-04-09 | A kind of fast human-eye positioning method based on integral projection and rim detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103218605A true CN103218605A (en) | 2013-07-24 |
CN103218605B CN103218605B (en) | 2016-01-13 |
Family
ID=48816374
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310119843.9A Active CN103218605B (en) | 2013-04-09 | 2013-04-09 | A kind of fast human-eye positioning method based on integral projection and rim detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103218605B (en) |
History
2013-04-09 | CN | Application CN201310119843.9A filed; granted as patent CN103218605B (status: Active) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1475961A (en) * | 2003-07-14 | 2004-02-18 | 中国科学院计算技术研究所 | Human eye location method based on GaborEge model |
US20110164816A1 (en) * | 2010-01-05 | 2011-07-07 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN102968624A (en) * | 2012-12-12 | 2013-03-13 | 天津工业大学 | Method for positioning human eyes in human face image |
Non-Patent Citations (1)
Title |
---|
OUYANG: "Human Eye Location Method Based on Adaptive Edge Extraction", Microcomputer Information (《微计算机信息》) * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103617638B (en) * | 2013-12-05 | 2017-03-15 | 北京京东尚科信息技术有限公司 | The method and device of image procossing |
CN103617638A (en) * | 2013-12-05 | 2014-03-05 | 北京京东尚科信息技术有限公司 | Image processing method and device |
CN104484679A (en) * | 2014-09-17 | 2015-04-01 | 北京邮电大学 | Non-standard gun shooting bullet trace image automatic identification method |
CN106407657A (en) * | 2016-08-31 | 2017-02-15 | 无锡雅座在线科技发展有限公司 | Method and device for capturing event |
CN108303420A (en) * | 2017-12-30 | 2018-07-20 | 上饶市中科院云计算中心大数据研究院 | A kind of domestic type sperm quality detection method based on big data and mobile Internet |
CN108648206A (en) * | 2018-04-28 | 2018-10-12 | 成都信息工程大学 | A kind of Robert edge detections film computing system and method |
CN108648206B (en) * | 2018-04-28 | 2022-09-16 | 成都信息工程大学 | Robert edge detection film computing system and method |
CN109241862A (en) * | 2018-08-14 | 2019-01-18 | 广州杰赛科技股份有限公司 | Target area determines method and system, computer equipment, computer storage medium |
CN109063689A (en) * | 2018-08-31 | 2018-12-21 | 江苏航天大为科技股份有限公司 | Facial image hair style detection method |
CN110070017B (en) * | 2019-04-12 | 2021-08-24 | 北京迈格威科技有限公司 | Method and device for generating human face artificial eye image |
CN110070017A (en) * | 2019-04-12 | 2019-07-30 | 北京迈格威科技有限公司 | A kind of face artificial eye image generating method and device |
CN110288540A (en) * | 2019-06-04 | 2019-09-27 | 东南大学 | A kind of online imaging standards method of carbon-fibre wire radioscopic image |
CN110516649A (en) * | 2019-09-02 | 2019-11-29 | 南京微小宝信息技术有限公司 | Alumnus's authentication method and system based on recognition of face |
CN110516649B (en) * | 2019-09-02 | 2023-08-22 | 南京微小宝信息技术有限公司 | Face recognition-based alumni authentication method and system |
CN111814795A (en) * | 2020-06-05 | 2020-10-23 | 北京嘉楠捷思信息技术有限公司 | Character segmentation method, device and computer readable storage medium |
CN111860423A (en) * | 2020-07-30 | 2020-10-30 | 江南大学 | Improved human eye positioning method of integral projection method |
CN111860423B (en) * | 2020-07-30 | 2024-04-30 | 江南大学 | Improved human eye positioning method by integral projection method |
CN115331269A (en) * | 2022-10-13 | 2022-11-11 | 天津新视光技术有限公司 | Fingerprint identification method based on gradient vector field and application |
CN115331269B (en) * | 2022-10-13 | 2023-01-13 | 天津新视光技术有限公司 | Fingerprint identification method based on gradient vector field and application |
CN116363736A (en) * | 2023-05-31 | 2023-06-30 | 山东农业工程学院 | Big data user information acquisition method based on digitalization |
CN116363736B (en) * | 2023-05-31 | 2023-08-18 | 山东农业工程学院 | Big data user information acquisition method based on digitalization |
Also Published As
Publication number | Publication date |
---|---|
CN103218605B (en) | 2016-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103218605A (en) | Quick eye locating method based on integral projection and edge detection | |
CN105469113B (en) | A kind of skeleton point tracking method and system in two-dimensional video stream | |
CN103927016B (en) | Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision | |
CN103761519B (en) | Non-contact sight-line tracking method based on self-adaptive calibration | |
CN103186904B (en) | Picture contour extraction method and device | |
CN103310194B (en) | 2013-06-07 | 2016-05-25 | Pedestrian head-and-shoulder detection method based on crown pixel gradient direction in a video | |
CN107316031A (en) | The image characteristic extracting method recognized again for pedestrian | |
CN104794693B (en) | A kind of portrait optimization method of face key area automatic detection masking-out | |
CN101551853A (en) | Human ear detection method under complex static color background | |
CN110032932B (en) | Human body posture identification method based on video processing and decision tree set threshold | |
CN105809173B (en) | A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform | |
CN104268853A (en) | Infrared image and visible image registering method | |
CN103020614B (en) | Based on the human motion identification method that space-time interest points detects | |
CN108921813A (en) | Unmanned aerial vehicle detection bridge structure crack identification method based on machine vision | |
CN103971135A (en) | Human body target detection method based on head and shoulder depth information features | |
CN104268520A (en) | Human motion recognition method based on depth movement trail | |
CN105225216A (en) | Based on the Iris preprocessing algorithm of space apart from circle mark rim detection | |
CN104766316A (en) | Novel lip segmentation algorithm for traditional Chinese medical inspection diagnosis | |
CN105741326B (en) | A kind of method for tracking target of the video sequence based on Cluster-Fusion | |
CN103914829B (en) | Method for detecting edge of noisy image | |
CN109344706A (en) | It is a kind of can one man operation human body specific positions photo acquisition methods | |
CN102073872A (en) | Image-based method for identifying shape of parasite egg | |
CN104021567A (en) | Gaussian blur falsification detection method of image based on initial digital law | |
CN106778499B (en) | Method for rapidly positioning human iris in iris acquisition process | |
CN103971347A (en) | Method and device for treating shadow in video image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |