CN109766809B - Improved human eye detection and tracking method - Google Patents

Improved human eye detection and tracking method

Info

Publication number
CN109766809B
Authority
CN
China
Prior art keywords
image
human eye
human
tracking
frame
Prior art date
Legal status
Active
Application number
CN201811642394.5A
Other languages
Chinese (zh)
Other versions
CN109766809A (en)
Inventor
郭强
季磊
邵潘红
徐英明
周洁
方一帆
蒋晓彤
刘庆淼
Current Assignee
Shandong University of Finance and Economics
Original Assignee
Shandong Rengong Intelligent Technology Co ltd
Shandong University of Finance and Economics
Priority date
Filing date
Publication date
Application filed by Shandong Rengong Intelligent Technology Co ltd, Shandong University of Finance and Economics filed Critical Shandong Rengong Intelligent Technology Co ltd
Priority to CN201811642394.5A priority Critical patent/CN109766809B/en
Publication of CN109766809A publication Critical patent/CN109766809A/en
Application granted granted Critical
Publication of CN109766809B publication Critical patent/CN109766809B/en


Landscapes

  • Image Analysis (AREA)

Abstract

The improved human eye detection and tracking method of the invention comprises the following steps: a) acquiring a video image; b) identifying the face region and then determining the approximate eye region according to the "three courts, five eyes" facial proportion structure; c) detecting the eyes within the approximate eye region; d) tracking the eyes, namely selecting the matching position with the minimum standard squared-difference matching degree as the eye image of the current frame; e) subsequent eye tracking. In the improved method, the average gray value of the image is subtracted from the gray value of each pixel before the standard squared-difference matching degree is computed. This removes the influence of illumination changes on the matching degree and solves the problem that, with existing methods, the matching degree at the optimal position is not the minimum when the illumination changes sharply, for example when a vehicle passes under a bridge, drives through a tunnel, or travels at night, so the eyes can still be tracked accurately.

Description

Improved human eye detection and tracking method
Technical Field
The invention relates to an improved human eye detection and tracking method, and in particular to a method that can still detect and track the eyes accurately when the illumination changes sharply, for example when a vehicle passes under a bridge, drives through a tunnel, or travels at night.
Background
In recent years, the scale of dangerous-chemical transportation has kept growing, and the number of traffic accidents it causes has grown with it. Most of these accidents result from drivers' weak safety awareness and fatigued driving, so fatigue detection for dangerous-chemical drivers is one means of avoiding such accidents. Current methods for quantifying fatigue fall into two categories: subjective evaluation and objective evaluation. Subjective evaluation scores the subject mainly with a fatigue scale, a typical example being the fatigue-symptom self-rating scale developed by the Japan Society for Occupational Health. However, subjective evaluation is highly subjective, can only summarize the subject's fatigue over some period, and cannot detect fatigue in real time, so it is rarely applied to fatigued-driving recognition and detection.
The objective evaluation method detects the subject's fatigue state with objective measurement technology, mainly by acquiring fatigue-related features through information-collecting equipment. For example, contact devices measure physiological features such as electroencephalogram, electrocardiogram, pulse, and electromyogram signals, while non-contact devices measure behavioral features such as head and eye characteristics. This avoids the problem of strong subjectivity and greatly improves reliability.
Among objective techniques, analyzing the driver's fatigue state from video images collected in real time is one of the more common approaches. The driver does not need to wear any auxiliary equipment; only an ordinary camera set up directly in front of the driver is required. The analysis is therefore not easily disturbed by human factors, does not affect the driver, and is simple to operate and easy to control.
However, when the traditional template-matching tracking algorithm is used to track the eyes, an obvious change in illumination intensity causes the eye localization to drift, and one frame or several consecutive frames fail to track the eyes accurately. For example, when a vehicle passes under a bridge, drives through a tunnel, or travels at night, the illumination changes sharply, so the pixel values of the current frame differ greatly from those of the previous frame. In this situation, existing squared-difference matching or correlation matching localizes the driver's eyes inaccurately or fails to localize the eye region at all, and the fatigue state can then no longer be judged from the eye image. Reliable eye detection and tracking is therefore the precondition for fatigue detection.
Disclosure of Invention
The present invention overcomes the above-described deficiencies and provides an improved human eye detection and tracking method.
The improved human eye detection and tracking method is characterized by comprising the following steps:
a) acquiring a video image, namely acquiring a video image containing the driver's face through an image acquisition device installed in the cab, and splitting the video into frames;
b) acquiring an approximate eye-region image, namely identifying the face region in the first frame and determining the approximate eye region according to the "three courts, five eyes" proportion structure of the human face;
c) detecting the eyes, namely performing eye detection in the approximate eye region obtained in step b) to obtain the eye image of the driver in the current frame, its size being w×h, where w and h are the numbers of pixels in the width and height of the image, respectively;
d) when the second frame arrives, the approximate eye region identified in the previous frame is expanded outward and used as the approximate eye region of the current frame, denoted S, with image size m×n, w < m, h < n; taking the eye image of the previous frame as the template image T and the approximate eye region of the current frame as the image S to be matched, the standard squared-difference matching degree R(x', y') of all matching positions of T against S is calculated with formula (1), in the order "from left to right, from top to bottom":
R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T'(x,y)-S'(x+x',y+y')\right]^{2}}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T'(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S'(x+x',y+y')^{2}}}   (1)

wherein:

T'(x,y) = T(x,y) - \bar{T}

S'(x+x',y+y') = S(x+x',y+y') - \bar{S}_{(x',y')}

\bar{T} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)

\bar{S}_{(x',y')} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')

T(x,y) is the gray value of the template image T at point (x,y); S(x+x',y+y') is the gray value of the image S to be matched at point (x+x',y+y'); (x',y') is the sliding offset; R(x',y') is the matching degree; w and h are the width and height of the template image; \bar{T} is the average gray value of all pixels of the template image T; \bar{S}_{(x',y')} is the average gray value of all pixels of the w×h window of S at sliding offset (x',y'); x = 1,2,…,w and y = 1,2,…,h.

With x' taking 1,2,…,m−w and y' taking 1,2,…,n−h in turn, formula (1) yields the standard squared-difference matching degree at every position of the image to be matched, (m−w)×(n−h) values in total, and the matching position with the minimum value among them is selected as the eye image of the current frame (a minimal code sketch is given after step e));
e) subsequent eye tracking: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, every subsequently acquired current frame is processed with the method of step d), thereby realizing eye detection and tracking of the driver.
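To make formula (1) concrete, the following is a minimal NumPy sketch of the matching computation in step d); the function and variable names are ours, not part of the patent.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def std_sqdiff_map(T, S):
    """Standard squared-difference matching degree R(x', y') of formula (1).

    T: previous-frame eye image; S: expanded search region (gray arrays)."""
    T0 = T.astype(np.float64) - T.mean()                  # T'(x, y)
    win = sliding_window_view(S.astype(np.float64), T.shape)
    win0 = win - win.mean(axis=(-2, -1), keepdims=True)   # S'(x+x', y+y')
    num = ((T0 - win0) ** 2).sum(axis=(-2, -1))
    den = np.sqrt((T0 ** 2).sum() * (win0 ** 2).sum(axis=(-2, -1))) + 1e-12
    return num / den                                      # one value per offset

# The eye image of the current frame is the window at the minimum:
# R = std_sqdiff_map(prev_eye, search_region)
# row, col = np.unravel_index(R.argmin(), R.shape)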
In the improved human eye detection and tracking method, the acquisition of the approximate eye-region image in steps b) and d) and the acquisition of the eye image in steps c) and d) are realized with the Adaboost algorithm, whose training data use the LBP (Local Binary Pattern) feature, common in image feature processing, as the feature extraction method for the gray-scale image.
The invention has the following beneficial effects. In the detection stage, the face region is first identified in the acquired image, the approximate eye region is then located according to the "three courts, five eyes" proportion structure of the face, and the eye image is determined within that approximate region. In the tracking stage, the eye image of the previous frame serves as the template image T, the standard squared-difference matching degree between T and every position of the image S to be matched is calculated one by one, and the position with the minimum matching degree is selected as the eye image of the current frame, realizing eye tracking. Because the average gray value is subtracted from every pixel before the matching degree is computed, an abrupt illumination change no longer distorts the matching degree, and the eyes can still be tracked accurately.
Drawings
FIG. 1 is a schematic diagram of recognizing a face region from an image according to the present invention;
FIG. 2 is a schematic diagram of determining the approximate eye region from the face image according to the "three courts, five eyes" proportion structure of the human face in the invention;
FIG. 3 is a final eye image determined in the present invention;
FIG. 4 is a schematic diagram of successively calculating the standard squared-difference matching degree in the image S to be matched with the template image T;
FIG. 5 is a schematic diagram of tracking of a conventional human eye detection and tracking method during a rapid change in illumination;
FIG. 6 is a schematic diagram of the tracking of the human eye detecting and tracking method of the present invention during the process of the rapid change of the illumination.
Detailed Description
The invention is further described with reference to the following figures and examples.
At present, eye localization methods fall into three main categories: methods based on geometric features, on template matching, and on statistical learning. The main idea of geometric-feature methods is to judge from features peculiar to the eye, such as the symmetry of the two eyes, their relative positions, the skin color, and the color of the eyes. Such methods detect quickly but place high demands on the scene: the background must be simple and the illumination moderate and free of strong changes, so their robustness is poor. Template-matching methods build an eye template image, slide a window over the source image, compare the similarity between the target image and the source image, and output the specific eye position; they are little affected by background factors, but the computation is heavy, real-time requirements are hard to meet, and extensibility is poor. Statistical methods train on a large eye-image database to obtain a set of parameters and build an eye classifier from the parameter model; they are robust and widely applicable. The face and eye localization in this invention uses the most representative statistical method, the Adaboost algorithm.
The improved human eye detection and tracking method is realized by the following steps:
a) acquiring a video image, namely acquiring a video image containing the driver's face through an image acquisition device installed in the cab, and splitting the video into frames;
b) acquiring an approximate eye-region image, namely identifying the face region in the first frame and determining the approximate eye region according to the "three courts, five eyes" proportion structure of the human face;
FIG. 1 shows how a face region is recognized from an image in the invention, and FIG. 2 shows how the approximate eye region is determined from the face image according to the "three courts, five eyes" proportion structure of the human face.
c) Detecting the eyes, namely performing eye detection in the approximate eye region obtained in step b) to obtain the eye image of the driver in the current frame, its size being w×h, where w and h are the numbers of pixels in the width and height of the image, respectively;
FIG. 3 shows the eye image finally determined in the invention: the face region is determined first, the approximate eye region next, and finally the accurate eye image is obtained.
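The sketch below illustrates this coarse-to-fine detection of steps b) and c) with OpenCV cascade classifiers. The Haar cascade files used here ship with OpenCV and merely stand in for the patent's LBP-trained classifiers, and the exact "three courts, five eyes" fractions are our assumption, chosen only for illustration.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye(frame_gray):
    """Face -> approximate eye region -> eye; returns (x, y, w, h) or None."""
    faces = face_cascade.detectMultiScale(frame_gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Approximate eye band: roughly the upper-middle "court" of the face
    # (the fractions 1/5 and 1/2 are assumed, not taken from the patent).
    band = frame_gray[y + h // 5 : y + h // 2, x : x + w]
    eyes = eye_cascade.detectMultiScale(band, 1.1, 5)
    if len(eyes) == 0:
        return None
    ex, ey, ew, eh = eyes[0]
    return (x + ex, y + h // 5 + ey, ew, eh)   # full-frame coordinates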
d) When the second frame arrives, the approximate eye region identified in the previous frame is expanded outward and used as the approximate eye region of the current frame, denoted S, with image size m×n, w < m, h < n; taking the eye image of the previous frame as the template image T and the approximate eye region of the current frame as the image S to be matched, the standard squared-difference matching degree R(x', y') of all matching positions of T against S is calculated with formula (1), in the order "from left to right, from top to bottom":
R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T'(x,y)-S'(x+x',y+y')\right]^{2}}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T'(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S'(x+x',y+y')^{2}}}   (1)

wherein:

T'(x,y) = T(x,y) - \bar{T}

S'(x+x',y+y') = S(x+x',y+y') - \bar{S}_{(x',y')}

\bar{T} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)

\bar{S}_{(x',y')} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')

T(x,y) is the gray value of the template image T at point (x,y); S(x+x',y+y') is the gray value of the image S to be matched at point (x+x',y+y'); (x',y') is the sliding offset; R(x',y') is the matching degree; w and h are the width and height of the template image; \bar{T} is the average gray value of all pixels of the template image T; \bar{S}_{(x',y')} is the average gray value of all pixels of the w×h window of S at sliding offset (x',y'); x = 1,2,…,w and y = 1,2,…,h.

With x' taking 1,2,…,m−w and y' taking 1,2,…,n−h in turn, formula (1) yields the standard squared-difference matching degree at every position of the image to be matched, (m−w)×(n−h) values in total, and the matching position with the minimum value among them is selected as the eye image of the current frame;
FIG. 4 illustrates how the standard squared-difference matching degree is calculated successively in the image S to be matched: the template image T slides over S in the order "from left to right, from top to bottom".
e) Subsequent eye tracking: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, every subsequently acquired current frame is processed with the method of step d), thereby realizing eye detection and tracking of the driver. A sketch of this frame-by-frame loop is given below.
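The loop below sketches steps d) and e) under our own assumptions (camera index, padding, border handling omitted). For brevity it uses OpenCV's TM_CCOEFF_NORMED, which likewise subtracts the window means and is therefore insensitive to a uniform illumination offset; it is a stand-in for formula (1) (maximized rather than minimized), not the patent's exact measure. detect_eye is the cascade sketch above.

import cv2

cap = cv2.VideoCapture(0)                      # assumed in-cab camera
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
box = detect_eye(prev_gray)                    # first-frame detection
pad = 20                                       # assumed outward expansion

while ok and box is not None:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = box
    template = prev_gray[y:y + h, x:x + w]     # previous-frame eye image T
    x0, y0 = max(x - pad, 0), max(y - pad, 0)
    search = gray[y0:y0 + h + 2 * pad, x0:x0 + w + 2 * pad]   # region S
    res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)      # best match = maximum here
    box = (x0 + max_loc[0], y0 + max_loc[1], w, h)
    prev_gray = gray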
The acquisition of the approximate eye-region image in steps b) and d) and the acquisition of the eye image in steps c) and d) are realized with the Adaboost algorithm, whose training data use the LBP (Local Binary Pattern) feature, common in image feature processing, as the feature extraction method for the gray-scale image.
The Adaboost algorithm is one of the boosting methods, a family of common and widely effective statistical learning methods. In a classification problem, boosting learns several classifiers by changing the weights of the training samples and combines them linearly to improve classification performance; the core idea of Adaboost follows this scheme. The training data of the Adaboost algorithm are not the traditional gray-scale image itself but data obtained by feature extraction on the gray-scale image, implemented here with the LBP (Local Binary Pattern) feature commonly used in image feature processing. A toy illustration of the boosting idea follows.
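The sketch below shows the sample re-weighting and linear combination just described, using decision stumps as weak classifiers; it is pedagogical only and is not the cascade actually trained in the patent.

import numpy as np

def adaboost_train(X, y, rounds=10):
    """X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # uniform initial weights
    learners = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):             # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err) # weight of this weak learner
        pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # boost misclassified samples
        w /= w.sum()
        learners.append((alpha, j, t, s))
    return learners

def adaboost_predict(learners, X):
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in learners)
    return np.sign(score)                       # linear combination, then sign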
The LBP (Local Binary Pattern) feature is an operator for describing the local features of an image. It offers multi-resolution analysis, gray-scale invariance, rotation invariance, and similar properties, and is mainly used for texture extraction. Because LBP features are cheap to compute and work well, they are widely applied in many fields of computer vision, most famously face recognition and target detection: the open-source computer vision library OpenCV provides an interface for face recognition with LBP features as well as a method for training target-detection classifiers with them, so accurate recognition of the face region and the eye region can be achieved. A minimal sketch of the basic operator follows.
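The following is our own minimal sketch of the basic 3×3 LBP operator described above: each pixel is encoded by comparing its eight neighbors with the center (OpenCV's LBP cascade training uses its own internal implementation).

import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP code for every interior pixel of a gray image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                            # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)] # clockwise neighbors
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit  # one bit per neighbor
    return code                                  # values in [0, 255]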
For a given input image, detecting the eyes directly lowers the detection accuracy, whereas detecting the face first and then the eyes within the face image raises it. Furthermore, once the face is detected, the approximate eye region can be determined from the structure of the face, the "three courts, five eyes" proportion rule, and the eyes then detected within that region with the algorithm; this improves both the detection speed and the accuracy. Table 1 compares the accuracy and time of the three approaches.
TABLE 1

Detection mode                            Time consumed   Accuracy
Eye only                                  0.8 s           78.7%
Face → eye                                1.3 s           86.9%
Face → approximate eye region → eye       1.1 s           91.3%
As can be seen from table 1, performing eye detection in the "face → approximate eye region → eye" manner is somewhat slower than direct eye detection but improves the accuracy considerably.
In the existing human eye detection and tracking methods, once the template image T and the image S to be matched are obtained, the eye image is detected and tracked with squared-difference matching or correlation matching:

Squared-difference matching degree:

R(x',y') = \sum_{x=1}^{w}\sum_{y=1}^{h}\left[T(x,y)-S(x+x',y+y')\right]^{2}   (2)

Standard (normalized) squared-difference matching degree:

R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T(x,y)-S(x+x',y+y')\right]^{2}}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')^{2}}}   (3)

Correlation matching degree:

R(x',y') = \sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)\,S(x+x',y+y')   (4)

Standard (normalized) correlation matching degree:

R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)\,S(x+x',y+y')}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')^{2}}}   (5)

In the four formulas above, T(x,y) is the pixel value of the template image at point (x,y), S(x+x',y+y') is the pixel value of the target image to be matched at point (x+x',y+y'), (x',y') is the sliding offset, and R(x',y') is the matching degree. For squared-difference matching the best match is the minimum (ideally 0); for correlation matching, a larger value indicates a better match.
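These four measures correspond directly to OpenCV's template-matching modes, which the snippet below exercises on toy arrays of our own making (the numbering (2)–(5) is ours): formula (2) is cv2.TM_SQDIFF, (3) is cv2.TM_SQDIFF_NORMED, (4) is cv2.TM_CCORR, and (5) is cv2.TM_CCORR_NORMED.

import cv2
import numpy as np

search = np.random.randint(0, 256, (120, 160), np.uint8)   # stand-in for S
template = search[40:60, 60:90].copy()                     # stand-in for T

res = cv2.matchTemplate(search, template, cv2.TM_SQDIFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
best = min_loc    # SQDIFF family: minimize; CCORR family: use max_loc

Note that none of these built-in measures subtracts the mean as formula (1) does; the closest built-in relative is cv2.TM_CCOEFF_NORMED, which does subtract the window means.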
When a vehicle passes under a bridge, drives through a tunnel, or travels at night, the illumination changes sharply. If the matching degree is calculated with the existing human eye detection and tracking methods, the similarity at the optimal position is then no longer the minimum (the ideal value is 0), because the illumination intensity of the image under detection has changed and the pixels at the optimal position no longer equal the template pixels, so tracking of the eye image is easily lost. FIG. 5 shows the tracking of the existing method during a sharp illumination change: when the illumination of the 3rd, 4th and 5th images changes sharply, the existing matching-degree calculation easily loses track.
With the human eye detection and tracking method of the invention, however, the similarity at the optimal position remains the minimum (the ideal value is 0). A brief demonstration follows.

Assume the illumination intensity changes uniformly, i.e. every pixel value changes by the same amount when the illumination changes, and denote this change by c. While the illumination intensity is unchanged, the pixel at the optimal position equals the template pixel at the corresponding position:

T(x,y) = S(x+x',y+y')   (6)

which holds for all x ∈ [1, w], y ∈ [1, h].
It then follows that:

R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T(x,y)-S(x+x',y+y')\right]^{2}}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')^{2}}} = 0   (7)

\bar{T} = \bar{S}_{(x',y')}   (8)
for the formula (7) where we only focus on the molecule, the molecule should be equal to 0, and (x ', y') represents the coordinates of the top left vertex of the template image relative to the image to be detected at the optimal position, i.e. the sliding step.
When the illumination intensity changes, the numerator in formula (7) becomes:

\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T(x,y)-\left(S(x+x',y+y')+c\right)\right]^{2}   (9)
which factors into:

\sum_{x=1}^{w}\sum_{y=1}^{h}\left[\left(T(x,y)-S(x+x',y+y')\right)-c\right]^{2}   (10)
Substituting equation (6) gives the final result:

\sum_{x=1}^{w}\sum_{y=1}^{h}c^{2} = w\,h\,c^{2}   (11)
The similarity is now proportional to the square of the illumination change and obviously cannot stay at 0 at the optimal position.
If the similarity is instead obtained with formula (1), its numerator under the same illumination change is (note that the window mean also shifts by c):

\sum_{x=1}^{w}\sum_{y=1}^{h}\left[\left(T(x,y)-\bar{T}\right)-\left(\left(S(x+x',y+y')+c\right)-\left(\bar{S}_{(x',y')}+c\right)\right)\right]^{2}   (12)
which factors into:

\sum_{x=1}^{w}\sum_{y=1}^{h}\left[\left(T(x,y)-S(x+x',y+y')\right)-\left(\bar{T}-\bar{S}_{(x',y')}\right)\right]^{2}   (13)
Substituting equations (6) and (8) gives the final result:

0   (14)
The similarity at the optimal position is thus 0, in keeping with the basic idea of the template-matching tracking algorithm. FIG. 6 shows the tracking of the human eye detection and tracking method of the invention during a sharp illumination change: although the illumination of the 3rd, 4th and 5th images changes sharply, the matching-degree calculation of the invention still detects and tracks the eyes accurately. The derivation can also be checked numerically, as below.
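The short check below reproduces the algebra above on toy data of our own making: a uniform offset c drives the classical numerator to w·h·c² while the zero-mean numerator of formula (1) stays at 0.

import numpy as np

rng = np.random.default_rng(0)
T = rng.integers(0, 200, (8, 10)).astype(float)   # template, w*h = 80
c = 40.0                                          # uniform illumination change
S = T + c                                         # optimal-position window

classic = ((T - S) ** 2).sum()                    # numerator of formula (3)
zero_mean = (((T - T.mean()) - (S - S.mean())) ** 2).sum()   # formula (1)

print(classic)     # 128000.0 = 80 * 40**2 = w*h*c**2
print(zero_mean)   # 0.0: unaffected by the uniform change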

Claims (2)

1. An improved human eye detection and tracking method, characterized by the steps of:
a) acquiring a video image, namely acquiring a video image containing the driver's face through an image acquisition device installed in the cab, and splitting the video into frames;
b) acquiring an eye-region image, namely identifying the face region in the first frame and determining the eye region according to the "three courts, five eyes" proportion structure of the human face;
c) detecting the eyes in the eye region obtained in step b) to obtain the eye image of the driver in the current frame, its size being w×h, where w and h are the numbers of pixels in the width and height of the image, respectively;
d) when the second frame arrives, the eye region identified in the previous frame is expanded outward and used as the eye region of the current frame, denoted S, with image size m×n, w < m, h < n; taking the eye image of the previous frame as the template image T and the eye region of the current frame as the image S to be matched, the standard squared-difference matching degree R(x', y') of all matching positions of T against S is calculated with formula (1), in the order "from left to right, from top to bottom":
R(x',y') = \frac{\sum_{x=1}^{w}\sum_{y=1}^{h}\left[T'(x,y)-S'(x+x',y+y')\right]^{2}}{\sqrt{\sum_{x=1}^{w}\sum_{y=1}^{h}T'(x,y)^{2}\cdot\sum_{x=1}^{w}\sum_{y=1}^{h}S'(x+x',y+y')^{2}}}   (1)

wherein:

T'(x,y) = T(x,y) - \bar{T}

S'(x+x',y+y') = S(x+x',y+y') - \bar{S}_{(x',y')}

\bar{T} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}T(x,y)

\bar{S}_{(x',y')} = \frac{1}{w\,h}\sum_{x=1}^{w}\sum_{y=1}^{h}S(x+x',y+y')

T(x,y) is the gray value of the template image T at point (x,y); S(x+x',y+y') is the gray value of the image S to be matched at point (x+x',y+y'); (x',y') is the sliding offset; R(x',y') is the matching degree; w and h are the width and height of the template image; \bar{T} is the average gray value of all pixels of the template image T; \bar{S}_{(x',y')} is the average gray value of all pixels of the w×h window of S at sliding offset (x',y'); x = 1,2,…,w and y = 1,2,…,h.

With x' taking 1,2,…,m−w and y' taking 1,2,…,n−h in turn, formula (1) yields the standard squared-difference matching degree at every position of the image to be matched, (m−w)×(n−h) values in total, and the matching position with the minimum value among them is selected as the eye image of the current frame;
e) subsequent eye tracking: when the third frame arrives, the second frame becomes the previous frame and the third frame becomes the current frame, and the eye image of the third frame is identified with the same method as step d); likewise, every subsequently acquired current frame is processed with the method of step d), thereby realizing eye detection and tracking of the driver.
2. The improved human eye detection and tracking method of claim 1, wherein the acquisition of the eye-region images in steps b) and d) and the acquisition of the eye images in steps c) and d) are all realized with the Adaboost algorithm, whose training data use the LBP (Local Binary Pattern) feature, common in image feature processing, as the feature extraction method for the gray-scale image.
CN201811642394.5A 2018-12-29 2018-12-29 Improved human eye detection and tracking method Active CN109766809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811642394.5A CN109766809B (en) 2018-12-29 2018-12-29 Improved human eye detection and tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811642394.5A CN109766809B (en) 2018-12-29 2018-12-29 Improved human eye detection and tracking method

Publications (2)

Publication Number Publication Date
CN109766809A CN109766809A (en) 2019-05-17
CN109766809B (en) 2021-01-29

Family

ID=66453063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811642394.5A Active CN109766809B (en) 2018-12-29 2018-12-29 Improved human eye detection and tracking method

Country Status (1)

Country Link
CN (1) CN109766809B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326777B (en) * 2021-05-31 2024-10-15 沈阳康慧类脑智能协同创新中心有限公司 Eye recognition tracking method and device based on monocular camera

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5495934B2 (en) * 2010-05-18 2014-05-21 キヤノン株式会社 Image processing apparatus, processing method thereof, and program
US9053365B2 (en) * 2013-09-16 2015-06-09 EyeVerify, Inc. Template update for biometric authentication
CN104463080A (en) * 2013-09-16 2015-03-25 展讯通信(天津)有限公司 Detection method of human eye state
CN104866821B (en) * 2015-05-04 2018-09-14 南京大学 Video object tracking based on machine learning
RU2691195C1 (en) * 2015-09-11 2019-06-11 Айверифай Инк. Image and attribute quality, image enhancement and identification of features for identification by vessels and individuals, and combining information on eye vessels with information on faces and/or parts of faces for biometric systems
CN106373140B (en) * 2016-08-31 2020-03-27 杭州沃朴物联科技有限公司 Transparent and semitransparent liquid impurity detection method based on monocular vision
CN106503645A (en) * 2016-10-19 2017-03-15 深圳大学 Monocular distance-finding method and system based on Android
CN107153848A (en) * 2017-06-15 2017-09-12 南京工程学院 Instrument image automatic identifying method based on OpenCV

Also Published As

Publication number Publication date
CN109766809A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN108805093B (en) Escalator passenger tumbling detection method based on deep learning
CN110097034B (en) Intelligent face health degree identification and evaluation method
CN106682603B (en) Real-time driver fatigue early warning system based on multi-source information fusion
CN106250870B (en) A kind of pedestrian's recognition methods again of joint part and global similarity measurement study
CN106295124B (en) The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts
US9639748B2 (en) Method for detecting persons using 1D depths and 2D texture
Agarwal et al. Learning to detect objects in images via a sparse, part-based representation
CN105279772B (en) A kind of trackability method of discrimination of infrared sequence image
CN111144207B (en) Human body detection and tracking method based on multi-mode information perception
JP7450848B2 (en) Transparency detection method based on machine vision
CN101726498B (en) Intelligent detector and method of copper strip surface quality on basis of vision bionics
CN110473199A (en) A kind of detection of color spot acne and health assessment method based on the segmentation of deep learning example
CN114926410A (en) Method for detecting appearance defects of brake disc
CN115346197A (en) Driver distraction behavior identification method based on bidirectional video stream
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
CN109766809B (en) Improved human eye detection and tracking method
Graf et al. Robust recognition of faces and facial features with a multi-modal system
CN116110006B (en) Scenic spot tourist abnormal behavior identification method for intelligent tourism system
CN112766145A (en) Method and device for identifying dynamic facial expressions of artificial neural network
CN116735610A (en) Steel pipe surface defect detection method based on machine vision
CN112215873A (en) Method for tracking and positioning multiple targets in transformer substation
Fan et al. Lane detection based on machine learning algorithm
JP4674920B2 (en) Object number detection device and object number detection method
CN114612934A (en) Gait sequence evaluation method and system based on quality dimension
JPWO2022247162A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201209

Address after: 250014 No. 7366 East Second Ring Road, Lixia District, Shandong, Ji'nan

Applicant after: SHANDONG University OF FINANCE AND ECONOMICS

Applicant after: Shandong Rengong Intelligent Technology Co.,Ltd.

Address before: 250014 No. 7366 East Second Ring Road, Lixia District, Shandong, Ji'nan

Applicant before: SHANDONG University OF FINANCE AND ECONOMICS

GR01 Patent grant