CN109766809A - Improved human eye detection and tracking method - Google Patents
Improved human eye detection and tracking method
- Publication number
- CN109766809A (application CN201811642394.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- human eye
- eye
- frame
- matching degree
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The improved human eye detection and tracking method of the invention comprises: a) video image acquisition; b) identifying the face region and then determining the approximate eye region according to the "three courts, five eyes" facial structure; c) performing eye detection within the approximate eye region; d) eye tracking, in which the matching position with the minimum standard-variance matching degree is chosen as the eye image of the current frame; e) eye tracking in subsequent frames. In computing the standard-variance matching degree, the average gray value of the image is subtracted from the gray value of each pixel before the squared differences are accumulated, which removes the influence of illumination changes on the matching degree. This solves the problem that, when a vehicle passes through a bridge opening or tunnel or is driven at night, abrupt illumination changes cause the matching degree at the optimal position to no longer be the minimum, so that the eyes can be tracked accurately.
Description
Technical field
The present invention relates to an improved human eye detection and tracking method, and more particularly to a method that still achieves accurate eye detection and tracking when illumination changes abruptly, for example when a vehicle passes through a bridge opening or tunnel or is driven at night.
Background technique
The scale of hazardous-chemical transportation has kept expanding in recent years, and the number of traffic accidents has grown with it. Most of these accidents are caused by drivers' weak safety awareness and by fatigued driving, so fatigue detection for hazardous-chemical drivers is one means of avoiding such accidents. Current methods for quantifying fatigue fall into two broad classes: subjective evaluation and objective evaluation. Subjective evaluation mainly collects information through fatigue questionnaires scored by the subject, a typical example being the "Fatigue Symptom Self-Assessment Scale" developed by the Japan Society for Occupational Health. Because of its strong subjectivity, however, subjective evaluation can only summarize the subject's fatigue state over a period of time and cannot detect fatigue in real time, so it is rarely applied in fatigued-driving detection.
Objective evaluation uses objective detection techniques, mainly information-acquisition devices, to measure fatigue characteristics of the subject directly. Contact devices can measure physiological signals such as EEG, ECG, pulse, and EMG; non-contact devices can measure behavioral features such as head pose and eye state. These methods avoid the subjectivity problem and are considerably more reliable. Among objective techniques, acquiring video of the driver in real time and analyzing the driver's fatigue state from it is one of the more common approaches: the driver need not wear any auxiliary detection device, and only an ordinary camera mounted in front of the driver is required. This kind of analysis is hardly affected by human factors, does not disturb the driver, and has the advantages of simple operation and good controllability.
However, when the traditional template-matching tracking algorithm is used for eye tracking, a significant change in illumination intensity causes the eye position to drift, so that the eyes cannot be tracked accurately in one or several consecutive frames. For example, when a vehicle passes through a bridge opening or tunnel or is driven at night, illumination changes abruptly and the pixel values of the current frame in the acquired video differ greatly from those of the previous frame. In this case, the existing squared-difference or correlation matching locates the driver's eyes inaccurately: the eye region cannot be found, and the driver's fatigue state cannot be judged from the eye image. Accurate detection and tracking of the driver's eyes is therefore a precondition for fatigue detection.
Summary of the invention
To overcome the above technical problems, the present invention provides an improved human eye detection and tracking method.
The improved human eye detection and tracking method of the invention is characterized in that it is realized by the following steps:
a) Video image acquisition: a video image containing the driver's face is acquired by an image acquisition device installed in the vehicle cabin, and the video is split into frames;
b) Obtaining the approximate eye region image: the face region of the first frame is identified, and the approximate eye region is then determined according to the "three courts, five eyes" structure of the face;
c) Eye detection: eye detection is performed within the approximate eye region obtained in step b) to obtain the eye image of the driver in the current frame; let the size of the eye image be w × h, where w and h are the numbers of pixels in width and height respectively;
d) Eye tracking: when the second frame arrives, the approximate eye region identified in the previous frame is expanded outward and taken as the approximate eye region of the current frame, denoted S; the image size of S is m × n, with w < m and h < n. The eye image of the previous frame serves as the template image T, and the approximate eye region of the current frame is the image S to be matched. Proceeding from left to right and top to bottom, the standard-variance matching degree R(x', y') between the template image T and every matching position of the image S to be matched is computed with formula (1):

R(x', y') = Σ_{x=1}^{w} Σ_{y=1}^{h} [(T(x, y) − T̄) − (S(x + x', y + y') − S̄(x', y'))]²   (1)

wherein: T(x, y) denotes the gray value of the template image T at point (x, y); S(x + x', y + y') denotes the gray value of the image S to be matched at point (x + x', y + y'); (x', y') denotes the sliding offset; R(x', y') denotes the matching degree; w and h denote the width and height of the template image; T̄ denotes the average gray value of all pixels of the template image T; S̄(x', y') denotes the average gray value of all pixels of the w × h patch of S at sliding offset (x', y'); x = 1, 2, …, w; y = 1, 2, …, h;
x' takes the values 1, 2, …, m − w in turn and y' the values 1, 2, …, n − h in turn, so formula (1) yields (m − w)(n − h) standard-variance matching degrees in total, one for each position of the template image in the eye image to be matched; the matching position corresponding to the minimum of these (m − w)(n − h) values is chosen as the eye image of the current frame;
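The computation of formula (1) over all sliding positions can be sketched as a short program. The following is a minimal numpy illustration of the standard-variance matching degree, written for clarity rather than speed, and not code from the patent itself:

```python
import numpy as np

def zero_mean_ssd_map(T, S):
    """Compute the standard-variance matching degree R(x', y') of
    formula (1) for every sliding offset of template T inside the
    search region S. The mean of each image patch is subtracted
    before squaring, which cancels any uniform illumination offset."""
    h, w = T.shape            # template height/width (the patent's h, w)
    n, m = S.shape            # search-region height/width (the patent's n, m)
    Tz = T - T.mean()         # zero-mean template
    R = np.empty((n - h + 1, m - w + 1))
    for yp in range(n - h + 1):          # sliding step y'
        for xp in range(m - w + 1):      # sliding step x'
            patch = S[yp:yp + h, xp:xp + w]
            Sz = patch - patch.mean()    # zero-mean patch of S
            R[yp, xp] = np.sum((Tz - Sz) ** 2)
    return R
```

The best match is the offset with the minimum matching degree; even if the search region is uniformly brightened or darkened relative to the template, the minimum stays at the true position.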
e) Eye tracking in subsequent frames: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, the eye image of every subsequently acquired current frame is identified with the same method as step d), thereby realizing eye detection and tracking for the driver.
In the improved human eye detection and tracking method of the invention, the acquisition of the approximate eye region image in steps b) and d) and the acquisition of the eye image in steps c) and d) are realized with the Adaboost algorithm; the training data of the Adaboost algorithm uses the LBP feature, a common feature extraction method for gray-level images in image processing.
The beneficial effects of the invention are as follows. During eye image detection, the improved method first identifies the face region in the acquired image, then determines the approximate eye region from the "three courts, five eyes" structure of the face, and finally locates the eye image inside that region. During eye image tracking, the eye image of the previous frame serves as the template image T, the standard-variance matching degree between T and each position of the image S to be matched (the approximate eye region image of the current frame) is computed position by position, and the position with the minimum matching degree is selected as the eye image, realizing the tracking. Because the average gray value of the image is subtracted from the gray value of each pixel before the standard-variance matching degree is computed, illumination changes do not affect the matching degree. This solves the problem that, when a vehicle passes through a bridge opening or tunnel or is driven at night, abrupt illumination changes cause the matching degree at the optimal position to no longer be the minimum, so the eyes can be tracked accurately.
Detailed description of the invention
Fig. 1 is a schematic diagram of identifying the face region in an image in the present invention;
Fig. 2 is a schematic diagram of determining the approximate eye region from the face image according to the "three courts, five eyes" structure of the face in the present invention;
Fig. 3 shows the eye image finally determined in the present invention;
Fig. 4 is a schematic diagram of computing the standard-variance matching degree position by position in the image S to be matched with the template image T in the present invention;
Fig. 5 is a tracking schematic diagram of an existing human eye detection and tracking method during an abrupt illumination change;
Fig. 6 is a tracking schematic diagram of the human eye detection and tracking method of the invention during an abrupt illumination change.
Specific embodiment
The invention is further described below with reference to the accompanying drawings and an embodiment.
Current eye-localization methods fall into three broad classes: methods based on geometric features, on template matching, and on statistical learning. Methods based on geometric features judge from features peculiar to the eyes, such as the symmetry of the eyes, the relative position of the two eyes, and the difference between skin color and eye color. Their advantage is speed, which suits quick detection; their disadvantage is a high demand on the background: the background must be uniform and the illumination moderate, with no strong variation, so their robustness is poor. Methods based on template matching first build an eye template image, then slide a window over the source image and compare the similarity between the target image and the source image to give the exact eye position. They are less affected by background factors, but their computational cost is huge, they cannot meet real-time requirements, and they extend poorly. Statistical methods obtain a set of parameters by training on a large database of eye pictures and build an eye classifier from the parameter model. They are more robust and more widely applicable; the face and eye localization in the present invention uses the most representative statistical method, the Adaboost algorithm.
The improved human eye detection and tracking method of the invention is realized by the following steps:
a) Video image acquisition: a video image containing the driver's face is acquired by an image acquisition device installed in the vehicle cabin, and the video is split into frames;
b) Obtaining the approximate eye region image: the face region of the first frame is identified, and the approximate eye region is then determined according to the "three courts, five eyes" structure of the face;
Fig. 1 shows the identification of the face region in an image in the present invention, and Fig. 2 the determination of the approximate eye region from the face image according to the "three courts, five eyes" structure of the face. It can be seen that, once the face region has been identified, the eye region of the driver can be located from the "three courts, five eyes" proportions of the face; the eye region identified in this way is the approximate eye region.
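The mapping from a detected face rectangle to the approximate eye region can be sketched as follows. The exact fractions used here are illustrative assumptions based on the "three courts, five eyes" proportions; the patent names the proportions but does not fix numeric values:

```python
def eye_region_from_face(x, y, w, h):
    """Estimate the approximate eye band inside a detected face
    rectangle (x, y, w, h) using the 'three courts, five eyes'
    facial proportions: the eyes sit near the top of the middle
    vertical third of the face and span roughly the central 4/5
    of its width. The fractions below are illustrative choices."""
    ex = x + w // 10          # trim half an 'eye width' on each side
    ew = w * 8 // 10
    ey = y + h // 4           # eye band starts about 1/4 down the face
    eh = h // 4               # and covers about 1/4 of the face height
    return ex, ey, ew, eh
```

A subsequent eye detector then only has to search this band instead of the whole face, which is what yields the speed and precision gains reported in Table 1 below.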
c) Eye detection: eye detection is performed within the approximate eye region obtained in step b) to obtain the eye image of the driver in the current frame; let the size of the eye image be w × h, where w and h are the numbers of pixels in width and height respectively;
As shown in Fig. 3, which gives the eye image finally determined in the present invention, the face region is determined first and the approximate eye region next, so that an accurate eye image is finally obtained.
d) Eye tracking: when the second frame arrives, the approximate eye region identified in the previous frame is expanded outward and taken as the approximate eye region of the current frame, denoted S; the image size of S is m × n, with w < m and h < n. The eye image of the previous frame serves as the template image T, and the approximate eye region of the current frame is the image S to be matched. Proceeding from left to right and top to bottom, the standard-variance matching degree R(x', y') between the template image T and every matching position of the image S to be matched is computed with formula (1):

R(x', y') = Σ_{x=1}^{w} Σ_{y=1}^{h} [(T(x, y) − T̄) − (S(x + x', y + y') − S̄(x', y'))]²   (1)

wherein: T(x, y) denotes the gray value of the template image T at point (x, y); S(x + x', y + y') denotes the gray value of the image S to be matched at point (x + x', y + y'); (x', y') denotes the sliding offset; R(x', y') denotes the matching degree; w and h denote the width and height of the template image; T̄ denotes the average gray value of all pixels of the template image T; S̄(x', y') denotes the average gray value of all pixels of the w × h patch of S at sliding offset (x', y'); x = 1, 2, …, w; y = 1, 2, …, h;
x' takes the values 1, 2, …, m − w in turn and y' the values 1, 2, …, n − h in turn, so formula (1) yields (m − w)(n − h) standard-variance matching degrees in total, one for each position of the template image in the eye image to be matched; the matching position corresponding to the minimum of these (m − w)(n − h) values is chosen as the eye image of the current frame;
As shown in Fig. 4, which illustrates the position-by-position computation of the standard-variance matching degree in the image S to be matched with the template image T in the present invention, the template image T is matched successively in the image to be matched in left-to-right, top-to-bottom order.
e) Eye tracking in subsequent frames: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, the eye image of every subsequently acquired current frame is identified with the same method as step d), thereby realizing eye detection and tracking for the driver.
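Steps d) and e) together form a per-frame tracking loop: expand the previous eye box into a search region, slide the previous eye image over it, and keep the minimum-matching-degree position. The sketch below is one possible numpy rendering of a single tracking step; the `margin` used to expand the previous region and the function name are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def track_eye(prev_eye_img, frame, prev_box, margin=8):
    """One tracking step (steps d/e): expand the previous eye box
    prev_box = (x, y, w, h) by `margin` pixels to form the search
    region S inside the current frame, slide the previous eye image
    T over S, and return the box of the minimum zero-mean
    squared-difference matching degree plus the matched eye image."""
    x, y, w, h = prev_box
    H, W = frame.shape
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1, y1 = min(W, x + w + margin), min(H, y + h + margin)
    S = frame[y0:y1, x0:x1]               # expanded approximate eye region
    Tz = prev_eye_img - prev_eye_img.mean()
    best, best_pos = None, (0, 0)
    for yp in range(S.shape[0] - h + 1):
        for xp in range(S.shape[1] - w + 1):
            patch = S[yp:yp + h, xp:xp + w]
            r = np.sum((Tz - (patch - patch.mean())) ** 2)
            if best is None or r < best:
                best, best_pos = r, (xp, yp)
    bx, by = x0 + best_pos[0], y0 + best_pos[1]
    return (bx, by, w, h), frame[by:by + h, bx:bx + w]
```

Calling `track_eye` once per frame, each time feeding in the eye image and box it returned for the previous frame, reproduces the frame-to-frame scheme of steps d) and e).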
The acquisition of the approximate eye region image in steps b) and d) and the acquisition of the eye image in steps c) and d) are realized with the Adaboost algorithm; the training data of the Adaboost algorithm uses the LBP feature, a common feature extraction method for gray-level images in image processing.
The Adaboost algorithm is one kind of boosting method. Boosting is a common and widely applied statistical learning method: in classification problems it learns multiple classifiers by changing the weights of the training samples and combines these classifiers linearly to improve classification performance. The core idea of Adaboost follows this boosting idea. Here the training data of the Adaboost algorithm is not the traditional gray-level image but data obtained from the gray-level image by feature extraction, using the LBP (Local Binary Pattern) feature, common in image processing, as the feature extraction method for gray-level images.
The LBP (Local Binary Pattern) feature is an operator that describes local image features. It is multi-resolution and invariant to gray-level changes and rotation, and in feature extraction it is mainly used for texture extraction. Because the LBP feature is simple to compute and effective, it is widely used in many fields of computer vision; its best-known applications are face recognition and object detection. The open-source computer vision library OpenCV provides an interface for face recognition with LBP features as well as a method for training object-detection classifiers with LBP features, with which the face region and the eye region can be identified accurately.
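As a minimal illustration of the LBP operator described above (the operator only, not the trained Adaboost classifier), the basic 8-neighbour LBP code of a gray image can be computed as follows:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP operator: each interior pixel is
    replaced by an 8-bit code whose bits record whether each of
    its neighbours is >= the centre pixel. This is the simplest
    LBP variant, shown only to illustrate the feature."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]                    # centre pixels (interior)
    # clockwise neighbour offsets starting at the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

Because each code depends only on sign comparisons with the centre pixel, adding a constant to every pixel leaves the codes unchanged, which is the gray-level invariance the text mentions.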
For a given input image, detecting the eyes directly reduces the precision of eye detection; detecting the face first and then detecting the eyes within the face image increases it. Furthermore, if after face detection the approximate eye region is determined from the "three courts, five eyes" structure and the algorithm then performs eye detection only within this region, the detection speed improves further and so does the precision. Table 1 compares the precision and time of the three modes.
Table 1

Detection mode | Time | Precision
---|---|---
Eyes directly | 0.8 s | 78.7%
Face → eyes | 1.3 s | 86.9%
Face → approximate eye region → eyes | 1.1 s | 91.3%

Table 1 shows that eye detection in the "face → approximate eye region → eyes" mode is somewhat slower than direct eye detection but considerably more precise.
In existing human eye detection and tracking methods, after the template image T and the image S to be matched are obtained, the eye image is detected and tracked by squared-difference matching or correlation matching:

Squared-difference matching degree:

R(x', y') = Σ_{x=1}^{w} Σ_{y=1}^{h} [T(x, y) − S(x + x', y + y')]²   (2)

Normalized squared-difference matching degree:

R(x', y') = Σ_{x,y} [T(x, y) − S(x + x', y + y')]² / sqrt( Σ_{x,y} T(x, y)² · Σ_{x,y} S(x + x', y + y')² )   (3)

Correlation matching degree:

R(x', y') = Σ_{x=1}^{w} Σ_{y=1}^{h} T(x, y) · S(x + x', y + y')   (4)

Normalized correlation matching degree:

R(x', y') = Σ_{x,y} T(x, y) · S(x + x', y + y') / sqrt( Σ_{x,y} T(x, y)² · Σ_{x,y} S(x + x', y + y')² )   (5)

In these four formulas, T(x, y) denotes the pixel value of the template image at point (x, y) and S(x + x', y + y') the pixel value of the target image to be matched at point (x + x', y + y'); (x', y') denotes the sliding offset and R(x', y') the matching degree. For squared-difference matching the best matching degree is 0; for correlation matching, a larger matching degree indicates a better match.
When a vehicle passes through a bridge opening or tunnel or is driven at night, illumination changes abruptly. If the matching degree is then computed with an existing human eye detection and tracking method, the similarity at the optimal position is no longer the minimum (the ideal value 0), because the illumination intensity of the image to be detected has changed and the pixels at the optimal position no longer equal the template pixels. Tracking of the eye image is then easily lost. Fig. 5 shows the tracking of an existing human eye detection and tracking method during an abrupt illumination change: when the illumination of the 3rd, 4th and 5th images changes abruptly, the existing matching-degree computation readily causes the tracking to fail.
With the human eye detection and tracking method of the invention, however, the similarity at the optimal position remains the minimum value (the ideal value 0). A brief proof follows.

Assume the illumination varies uniformly, i.e. when the illumination intensity changes, every pixel value changes by the same amount; call this change c. When the illumination does not change, the pixels at the optimal position equal the template pixels everywhere, that is:

T(x, y) = S(x + x', y + y')   (6)

holds for every x ∈ [0, w] and y ∈ [0, h], where (x', y') is the sliding offset of the top-left corner of the template image relative to the image to be detected at the optimal position.

Consider formula (3); only its numerator matters here, and at the optimal position the numerator should equal 0:

Σ_{x,y} [T(x, y) − S(x + x', y + y')]²   (7)

When the illumination intensity changes, every pixel of the image to be detected gains the offset c:

S'(x + x', y + y') = S(x + x', y + y') + c   (8)

and the numerator (7) becomes:

Σ_{x,y} [T(x, y) − S(x + x', y + y') − c]²

Expanding:

Σ_{x,y} [T(x, y) − S(x + x', y + y')]² − 2c Σ_{x,y} [T(x, y) − S(x + x', y + y')] + w·h·c²

Substituting (6) gives the final result:

w·h·c²

The similarity is thus proportional to the square of the illumination change and clearly cannot remain 0 at the optimal position.

If the similarity is computed with formula (1) instead, the matching degree becomes:

Σ_{x,y} [(T(x, y) − T̄) − (S'(x + x', y + y') − S̄'(x', y'))]²

Since by (8) every pixel of S gains the same offset c, the patch average gains it too, S̄'(x', y') = S̄(x', y') + c, so the offset cancels:

Σ_{x,y} [(T(x, y) − T̄) − (S(x + x', y + y') − S̄(x', y'))]²

Substituting (6), which also implies T̄ = S̄(x', y'), gives the final result 0.
The similarity at the optimal position is thus 0, which satisfies the basic idea of the template-matching tracking algorithm. As shown in Fig. 6, which gives the tracking of the human eye detection and tracking method of the invention during an abrupt illumination change, even though the illumination of the 3rd, 4th and 5th images changes abruptly, the matching-degree computation of the invention still detects and tracks the eyes accurately.
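The contrast between formulas (2) and (1) under a uniform illumination shift can be checked numerically. This is a sketch of the proof above with an arbitrary shift c = 30, evaluated at the optimal position where the template and the patch coincide up to the shift:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.uniform(0, 255, (10, 14))   # template, h = 10 rows, w = 14 columns
c = 30.0                            # uniform illumination shift
S_patch = T + c                     # best-position patch, uniformly brightened

# classical squared-difference matching degree (formula (2)) at the best position:
ssd = np.sum((T - S_patch) ** 2)    # equals w*h*c^2 = 14*10*900 = 126000

# the standard-variance matching degree of formula (1) at the same position:
zssd = np.sum(((T - T.mean()) - (S_patch - S_patch.mean())) ** 2)  # ~0
```

The plain squared difference is exactly w·h·c² as in the proof, while subtracting the means first drives the matching degree back to (numerically) zero, so the optimal position keeps the minimum value.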
Claims (2)
1. An improved human eye detection and tracking method, characterized in that it is realized by the following steps:
a) Video image acquisition: a video image containing the driver's face is acquired by an image acquisition device installed in the vehicle cabin, and the video is split into frames;
b) Obtaining the approximate eye region image: the face region of the first frame is identified, and the approximate eye region is then determined according to the "three courts, five eyes" structure of the face;
c) Eye detection: eye detection is performed within the approximate eye region obtained in step b) to obtain the eye image of the driver in the current frame; let the size of the eye image be w × h, where w and h are the numbers of pixels in width and height respectively;
d) Eye tracking: when the second frame arrives, the approximate eye region identified in the previous frame is expanded outward and taken as the approximate eye region of the current frame, denoted S; the image size of S is m × n, with w < m and h < n; the eye image of the previous frame serves as the template image T, and the approximate eye region of the current frame is the image S to be matched; proceeding from left to right and top to bottom, the standard-variance matching degree R(x', y') between the template image T and every matching position of the image S to be matched is computed with formula (1):

R(x', y') = Σ_{x=1}^{w} Σ_{y=1}^{h} [(T(x, y) − T̄) − (S(x + x', y + y') − S̄(x', y'))]²   (1)

wherein: T(x, y) denotes the gray value of the template image T at point (x, y); S(x + x', y + y') denotes the gray value of the image S to be matched at point (x + x', y + y'); (x', y') denotes the sliding offset; R(x', y') denotes the matching degree; w and h denote the width and height of the template image; T̄ denotes the average gray value of all pixels of the template image T; S̄(x', y') denotes the average gray value of all pixels of the w × h patch of S at sliding offset (x', y'); x = 1, 2, …, w; y = 1, 2, …, h;
x' takes the values 1, 2, …, m − w in turn and y' the values 1, 2, …, n − h in turn, so formula (1) yields (m − w)(n − h) standard-variance matching degrees in total, one for each position of the template image in the eye image to be matched; the matching position corresponding to the minimum of these (m − w)(n − h) values is chosen as the eye image of the current frame;
e) Eye tracking in subsequent frames: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, the eye image of every subsequently acquired current frame is identified with the same method as step d), thereby realizing eye detection and tracking for the driver.
2. The improved human eye detection and tracking method according to claim 1, characterized in that: the acquisition of the approximate eye region image in steps b) and d) and the acquisition of the eye image in steps c) and d) are realized with the Adaboost algorithm, and the training data of the Adaboost algorithm uses the LBP feature, a common feature extraction method for gray-level images in image processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811642394.5A CN109766809B (en) | 2018-12-29 | 2018-12-29 | Improved human eye detection and tracking method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109766809A true CN109766809A (en) | 2019-05-17 |
CN109766809B CN109766809B (en) | 2021-01-29 |
Family
ID=66453063
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811642394.5A Active CN109766809B (en) | 2018-12-29 | 2018-12-29 | Improved human eye detection and tracking method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109766809B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113326777A (en) * | 2021-05-31 | 2021-08-31 | 沈阳康慧类脑智能协同创新中心有限公司 | Eye identification tracking method and device based on monocular camera |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110286670A1 (en) * | 2010-05-18 | 2011-11-24 | Canon Kabushiki Kaisha | Image processing apparatus, processing method therefor, and non-transitory computer-readable storage medium |
CN104463080A (en) * | 2013-09-16 | 2015-03-25 | 展讯通信(天津)有限公司 | Detection method of human eye state |
CN104866821A (en) * | 2015-05-04 | 2015-08-26 | 南京大学 | Video object tracking method based on machine learning |
CN105917353A (en) * | 2013-09-16 | 2016-08-31 | 眼验股份有限公司 | Feature extraction and matching and template update for biometric authentication |
CN106373140A (en) * | 2016-08-31 | 2017-02-01 | 杭州沃朴物联科技有限公司 | Transparent and semitransparent liquid impurity detection method based on monocular vision |
CN106503645A (en) * | 2016-10-19 | 2017-03-15 | 深圳大学 | Monocular distance-finding method and system based on Android |
US20170076145A1 (en) * | 2015-09-11 | 2017-03-16 | EyeVerify Inc. | Image enhancement and feature extraction for ocular-vascular and facial recognition |
CN107153848A (en) * | 2017-06-15 | 2017-09-12 | 南京工程学院 | Instrument image automatic identifying method based on OpenCV |
- 2018-12-29 CN CN201811642394.5A patent/CN109766809B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109766809B (en) | 2021-01-29 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| TA01 | Transfer of patent application right | Effective date of registration: 20201209 Address after: 250014 No. 7366 East Second Ring Road, Lixia District, Shandong, Ji'nan Applicant after: SHANDONG University OF FINANCE AND ECONOMICS Applicant after: Shandong Rengong Intelligent Technology Co.,Ltd. Address before: 250014 No. 7366 East Second Ring Road, Lixia District, Shandong, Ji'nan Applicant before: SHANDONG University OF FINANCE AND ECONOMICS |
| GR01 | Patent grant | |