CN110705453A - Real-time fatigue driving detection method - Google Patents


Info

Publication number
CN110705453A
CN110705453A
Authority
CN
China
Prior art keywords
state
queue
fatigue
time
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910929846.6A
Other languages
Chinese (zh)
Inventor
尹东
张锐
周志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910929846.6A priority Critical patent/CN110705453A/en
Publication of CN110705453A publication Critical patent/CN110705453A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Emergency Alarm Devices (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A real-time fatigue driving detection method comprises the following steps: step one, acquiring image data from a video shot by a camera; step two, performing state recognition on the acquired image data, including face recognition and eye positioning and classification; step three, calculating various parameters and forming a multi-parameter composite criterion; step four, fatigue detection, namely determining whether fatigue occurs according to the multi-parameter composite criterion: if not, returning to step one; if yes, giving an alarm. The technology of the invention mainly uses an SSD (Single Shot MultiBox Detector) network to recognize the face in the video image and to locate and classify the eyes and the mouth, and then uses the PERCLOS method together with the blink and yawning frequencies as the criteria for fatigue judgment. Owing to the detection performance of the SSD network, the technology achieves high detection accuracy and real-time performance.

Description

Real-time fatigue driving detection method
Technical Field
The invention belongs to the field of computer graphic image processing, and particularly relates to a driver fatigue detection method.
Background
Video image processing is one of the research focuses and hot spots in computer science and has made great progress. Fatigue driving detection based on image processing has attracted increasing attention from researchers and is the field to which this application relates.
Existing algorithms for face recognition and fatigue driving detection mainly include the AdaBoost algorithm, the PERCLOS method and the like. A typical approach detects the face with AdaBoost, locates the eyes with an active shape model, and then judges the open/closed state of the driver's eyes with the PERCLOS method. For example, an implementation by Xiaoqing based on an AdaBoost classifier and the PERCLOS standard reaches a detection rate of 86%-93%, but its processing speed is low and its real-time performance is poor.
The invention mainly solves the following technical problems: how to perform face recognition and accurate eye and mouth positioning and classification, and how to combine multiple parameters for fatigue state judgment.
Disclosure of Invention
In order to solve the above problems, the technology of the invention mainly uses an SSD (Single Shot MultiBox Detector) network to recognize the human face in the video image and to locate and classify the eyes and the mouth, and then uses the PERCLOS method together with the blink and yawning frequencies as the criteria for fatigue judgment. Owing to the detection performance of the SSD network, the technology achieves high detection accuracy and real-time performance.
The invention provides a real-time fatigue driving detection method, which comprises the following steps:
step one, acquiring image data from a video shot by a camera;
secondly, performing state recognition according to the acquired image data, wherein the state recognition comprises face recognition and eye positioning and classification;
step three, calculating various parameters and forming multi-parameter composite criteria;
step four, fatigue detection, namely determining whether fatigue occurs according to a multi-parameter composite criterion: if not, turning to the first step; if yes, an alarm is given.
In the first step, video acquisition and parsing of the single-frame image specifically comprise: acquiring real-time video through a camera and then capturing video frames with opencv, the captured image data being a three-dimensional tensor of size 3 × 640 × 480 (channels × width × height).
In step two, eye-mouth positioning and state recognition are carried out: the acquired single-frame image is input into the trained SSD network model for detection, and the driver's eye and mouth states are determined and stored according to the detection result. Specifically: after each frame image is fed into the SSD network, the network outputs the detected eye and mouth positions and label information; the state information to be stored is determined from the label information, i.e. the eye and mouth classes, with open eyes defined as state '1' and closed eyes as state '0', and open mouth as state '1' and closed mouth as state '0'.
The state information is stored in queues: the corresponding state information is stored into a defined queue in real time according to the detection result, where the horizontal axis represents the frame number and the vertical axis represents the state of that frame.
The third step of calculating various parameters and forming a multi-parameter composite criterion comprises the following steps:
(1) fatigue state parameter calculation
The fatigue state detection comprises 3 parameters in total: PERCLOS, blink frequency, and yawning frequency;
when the state information of the driver's eyes in a certain time period has been obtained and stored in the corresponding queue, PERCLOS is calculated: the sum of the elements in the queue is the total number of open-eye frames; the difference between the queue length and this sum is the total number of closed-eye frames; the arithmetic mean of the elements in the queue is calculated at the same time; PERCLOS is equal to 1 minus this arithmetic mean, as shown in equation (1):

PERCLOS = 1 - avg = 1 - sum/len    (1)

where len is the length of the queue, sum is the sum of all elements in the queue, and avg is the average of the elements in the queue.
(2) Blink frequency calculation
For the calculation of the blink frequency, when the driver's eye state is obtained, a blink is counted if the value of the second-to-last element in the queue is open eye '1' and the value of the last element is closed eye '0', i.e. at each falling edge in the state sequence; whether the current frame is a blink frame is recorded in a queue, '1' if it is and '0' if it is not; since the fps of the video processing is fixed, the queue length serves as a timing tool, and the calculation formula is shown in equation (2):

f = n/N    (2)

where n represents the total number of eye-closing (blink) frames per unit time, and N represents the total number of frames per unit time.
Calculating by the formula to obtain blink frequency as a fatigue judgment parameter;
(3) frequency calculation of yawning
For the calculation of the yawning frequency, a sequence of duration about 1.5 s that is all '1' except for its last element, i.e. '11111……10', is searched for at the tail of the mouth-state queue; when such a sequence is found it is counted as one yawn and stored in a queue holding yawn information; the yawning frequency is calculated as the ratio of the number of yawns m to the number of frames N, as shown in equation (3):

f = m/N    (3)
the fatigue state detection in the fourth step comprises the following steps:
when the blink frequency decreases while the PERCLOS value increases, the driver is judged to be fatigued; the blink-frequency threshold for judging fatigue is 0.25 and the PERCLOS threshold is 0.4;
for the yawning frequency, fatigue is detected when more than three yawns occur within half a minute, thereby finally completing the fatigue driving detection function.
The method of the invention comprises the following technical means:
1) convolutional neural network research for face recognition
Fatigue driving is mainly detected through small targets such as the human eyes and mouth. The invention uses the SSD network to recognize the face in the video image and then completes the positioning and classification of the eyes and the mouth, achieving both high accuracy and good real-time performance.
2) Face part detection technology
Fatigue driving judgment mainly relies on detecting changes in small targets such as the human eyes and mouth. In view of the high detection accuracy of SSD on small targets, the invention adopts the SSD network to recognize the face and to locate and classify the eyes and the mouth, with good real-time performance.
3) Fatigue state discrimination method
The invention adopts a PERCLOS-based method, blink frequency and yawning frequency to jointly judge the fatigue state, and has higher accuracy. The sensitivity of the system to the fatigue state is reasonable by setting the threshold value of each parameter. Meanwhile, the output result of the fatigue detection system is closer to the real situation by adopting the composite criterion.
Advantageous effects:
the invention adopts the SSD300 as the detection network, has the advantages of high detection speed, higher detection precision and low computation amount, can reduce the deployment cost, and is very suitable for the detection scene of driver fatigue driving. And a large number of face images of the driver are used for making a data set for neural network training, and the training of the SSD network is completed. In consideration of different conditions of the bus in the driving process, the images in the data set used by the invention contain different illumination, fatigue degrees, different face angles of drivers and the like, thereby ensuring that the trained neural network has good robustness.
When a driver is fatigued, several signs appear, such as a reduced blink frequency, long eye-closure durations and yawning. Because a convolutional neural network alone detects only a single instantaneous condition, such as the state of the eyes, and cannot reflect behaviour over time, the method first detects the eye and mouth states with a convolutional neural network, and then calculates PERCLOS, the blink frequency (the average blink frequency over the 10 seconds preceding the current frame) and the yawning frequency (the average yawning frequency over the 30 seconds preceding the current frame) as fatigue judgment parameters. Existing detection methods rely on a single criterion, whereas this method combines multiple parameters into a composite criterion and therefore achieves higher accuracy.
Traditional fatigue driving detection methods, such as fatigue detection based on physiological signals or on driving behaviour, suffer from high equipment cost, low detection efficiency and low detection accuracy, and easily inconvenience the driver. By contrast, this method uses PERCLOS, the blink frequency and the yawning frequency jointly to judge the fatigue state; this composite criterion brings the output of the fatigue detection system closer to the real situation.
Drawings
FIG. 1(a) is an image of the eyes and mouth of a driver being open;
FIG. 1(b) is an image of a driver closing eyes and opening mouth;
FIG. 2 is a graph showing the effect of the present invention on the detection of FIG. 1(a) and (b);
FIG. 3 is a flowchart of a fatigue driving detection method of the present invention;
FIG. 4 is a diagram of state information stored in a queue.
Detailed Description
The following describes embodiments of the present invention with reference to the drawings.
Referring to fig. 3, the general flow of the fatigue driving detection method of the present invention is as follows.
The system input realized by the invention is a 24-bit true color video acquired by a camera, and the output is a fatigue detection result of a driver. The general flow chart of the fatigue detection method of the invention is shown in fig. 3, and comprises the following steps:
step one, acquiring image data from a video shot by a camera;
secondly, performing state recognition according to the acquired image data, wherein the state recognition comprises face recognition and eye positioning and classification;
step three, calculating various parameters and forming multi-parameter composite criteria;
step four, fatigue detection, namely determining whether fatigue occurs according to a multi-parameter composite criterion: if not, turning to the first step; if yes, an alarm is given.
In step one, video acquisition and parsing of the single-frame image specifically comprise: real-time video is acquired through a camera, and video frames are then captured using opencv. The implementation is as follows:
cap=cv2.VideoCapture(0)
ret,img=cap.read()
img is the captured image data, a three-dimensional tensor of size 3 × 640 × 480 (channels × width × height).
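As a minimal sketch of this capture step: cv2.VideoCapture.read() actually returns each frame as an (H, W, 3) BGR array (e.g. 480 × 640 × 3), so a transpose is needed to obtain the 3 × 640 × 480 channel-first layout described above. The helper name to_channel_first is hypothetical, not from the patent.

```python
import numpy as np

def to_channel_first(img):
    """Convert an (H, W, C) frame, as returned by cap.read(), to the
    (C, W, H) layout described in the text, e.g. (480, 640, 3) -> (3, 640, 480)."""
    return np.transpose(img, (2, 1, 0))
```

In a live loop this would be applied to each frame returned by cap.read() before feeding it to the detection network.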
In the second step, eye-mouth positioning and state recognition specifically comprise: inputting the acquired single-frame image into the best-performing trained SSD network model for detection, and determining and storing the driver's eye and mouth states according to the detection result. Referring to the drawings, fig. 1(a) is an image of the driver with eyes and mouth open; fig. 1(b) is an image of the driver with eyes closed and mouth open; fig. 2 shows the detection results.
Specifically: after each frame image is fed into the SSD network, the network outputs the detected eye and mouth positions and label information. The state information to be stored is determined from the label information (the eye and mouth classes); the invention defines open eyes as state '1' and closed eyes as state '0', open mouth as state '1' and closed mouth as state '0', as shown in Table 1:
TABLE 1. Label-to-state correspondence

  Label          State
  open eyes      '1'
  closed eyes    '0'
  open mouth     '1'
  closed mouth   '0'
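The mapping from SSD outputs to the state bits of Table 1 can be sketched as follows. The class label strings ("open_eye", etc.) are hypothetical; the patent only gives the open/closed → '1'/'0' convention, not the actual label names.

```python
# Hypothetical label names mapped to the '1'/'0' states of Table 1.
LABEL_TO_STATE = {
    "open_eye": 1, "closed_eye": 0,
    "open_mouth": 1, "closed_mouth": 0,
}

def states_from_detections(detections):
    """Map SSD detections [(label, box), ...] to (eye_state, mouth_state).

    Returns None for a part that was not detected in this frame.
    """
    eye_state = mouth_state = None
    for label, _box in detections:
        if label in ("open_eye", "closed_eye"):
            eye_state = LABEL_TO_STATE[label]
        elif label in ("open_mouth", "closed_mouth"):
            mouth_state = LABEL_TO_STATE[label]
    return eye_state, mouth_state
```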
Considering that the system needs the driver's eye and mouth state information over a certain period of time to calculate the fatigue-related parameters, while isolated early information (e.g. a single detection) is meaningless for fatigue judgment, the invention stores the state information in queues. The corresponding state information is stored into the defined queue in real time according to the detection result, as shown in fig. 4: the horizontal axis represents the frame number and the vertical axis represents the state of that frame. For example, for the eyes, the state information in the eye queue over 15 frames might be 110111011100111, where 1 represents open eyes and 0 represents closed eyes.
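The fixed-length state queue described above can be sketched with collections.deque, which discards the oldest state automatically once full. The frame rate and window length below are assumptions for illustration (the text averages blink statistics over 10 s).

```python
from collections import deque

FPS = 30          # assumed frame rate
WINDOW_SEC = 10   # assumed window; the text uses 10 s for blink statistics

# Holds the states of the most recent WINDOW_SEC * FPS frames.
eye_states = deque(maxlen=WINDOW_SEC * FPS)

def push_state(queue, state):
    """Append a per-frame state ('1' = open, '0' = closed) as an int."""
    queue.append(int(state))

# Replay the 15-frame example sequence from the text, 110111011100111.
for s in "110111011100111":
    push_state(eye_states, s)
```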
The fatigue detection in the third step and the fourth step is specifically as follows:
1) fatigue state parameter calculation
The fatigue state detection used by the invention has 3 parameters: PERCLOS, blink frequency, and yawning frequency.
PERCLOS
After the state information of the driver's eyes over a certain period has been obtained and stored in the corresponding queue, PERCLOS (the percentage of eyelid closure over time) is calculated. Because the information in the queue is stored as the integers '1' and '0', the sum of the elements in the queue is the total number of open-eye frames; the difference between the queue length and this sum is the total number of closed-eye frames; the arithmetic mean of the elements is calculated at the same time. Therefore PERCLOS is equal to 1 minus the arithmetic mean of the elements in the queue. The specific formula is shown in equation 1:
PERCLOS = 1 - avg = 1 - sum/len    (1)
where len is the length of the queue, sum is the sum of all elements in the queue, and avg is the average of the elements in the queue.
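Equation 1 can be sketched directly from these definitions; this is a minimal illustration, not the patent's implementation.

```python
def perclos(eye_queue):
    """PERCLOS: fraction of closed-eye frames in the window.

    With open = 1 and closed = 0, the mean of the queue is the fraction
    of open-eye frames, so PERCLOS = 1 - avg = 1 - sum/len (equation 1).
    """
    if not eye_queue:
        return 0.0
    return 1.0 - sum(eye_queue) / len(eye_queue)
```

For the 15-frame example sequence 110111011100111 (11 open-eye frames), this yields 1 - 11/15 = 4/15, i.e. about 0.267.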
Blink frequency
For the calculation of the blink frequency, when the driver's eye state is acquired, one blink is counted if the value of the second-to-last element in the queue is '1' (open eye) and the value of the last element is '0' (closed eye), i.e. at each falling edge in the state sequence. To preserve the real-time character of the detection, whether the current frame is a blink frame is recorded in a queue: '1' if it is, '0' if it is not. Since the fps of the video processing is fixed, the queue length can serve as a timing tool. The calculation formula is shown in equation 2:
f = n/N    (2)

where n represents the total number of eye-closing (blink) frames per unit time, and N represents the total number of frames per unit time.
And calculating by the formula to obtain the blink frequency as a fatigue judgment parameter.
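The falling-edge blink count and equation 2 can be sketched as follows; the function names are illustrative, not from the patent.

```python
def update_blinks(eye_queue, blink_queue):
    """Record whether the newest frame completes a blink.

    A blink is a falling edge: second-to-last state open ('1'),
    last state closed ('0'), as described in the text.
    """
    is_blink = len(eye_queue) >= 2 and eye_queue[-2] == 1 and eye_queue[-1] == 0
    blink_queue.append(1 if is_blink else 0)

def blink_frequency(blink_queue):
    """Equation 2: f = n / N, blink frames over total frames in the window."""
    if not blink_queue:
        return 0.0
    return sum(blink_queue) / len(blink_queue)
```

For example, the state sequence 1, 1, 0, 0, 1, 0 contains two falling edges, giving f = 2/6.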
Frequency of yawning
For the calculation of the yawning frequency, the invention searches the tail of the mouth-state queue for a sequence of duration about 1.5 s that is all '1' except for its last element, i.e. '11111……10'. When such a sequence is found it is counted as one yawn and stored in a queue holding yawn information. As with blinking, the yawning frequency is calculated as the ratio of the number of yawns m to the number of frames N, as shown in equation 3:
f = m/N    (3)
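The tail-pattern yawn check and equation 3 can be sketched as below. The fps and the 1.5 s minimum duration parameterize the '11111……10' pattern from the text; the function names are illustrative.

```python
def detect_yawn(mouth_queue, fps=30, min_duration_sec=1.5):
    """Check the tail of the mouth-state queue for a completed yawn.

    A yawn is counted when the tail reads '11...10': at least
    min_duration_sec worth of open-mouth frames followed by one
    closed-mouth frame, matching the '11111......10' pattern in the text.
    """
    need = int(fps * min_duration_sec)       # open-mouth frames required
    tail = list(mouth_queue)[-(need + 1):]
    return (len(tail) == need + 1
            and tail[-1] == 0
            and all(s == 1 for s in tail[:-1]))

def yawn_frequency(yawn_count, total_frames):
    """Equation 3: yawns m divided by total frames N."""
    return yawn_count / total_frames if total_frames else 0.0
```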
2) fatigue state detection
When the driver is fatigued, the blink frequency decreases while the PERCLOS value increases, i.e. long eye closures occur, and frequent yawning appears. The normal blink rate of human eyes is more than 15 times per minute; converted into a blink frequency, the threshold for judging fatigue is 0.25. In practical applications this threshold should be adjusted to the actual situation to avoid excessive false alarms caused by too high a threshold, and it must likewise be converted according to the fps of the video. For the PERCLOS value, a driver is generally considered fatigued when it exceeds 0.4. For the yawning frequency, an occasional yawn does not indicate fatigue, but a fatigued driver yawns repeatedly within a short time; therefore fatigue is considered to occur when more than three yawns appear within half a minute, the specific threshold again being calculated from the video fps. This finally completes the fatigue driving detection function.
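The composite criterion can be sketched from the thresholds above. The thresholds 0.25, 0.4 and 3 come from the text; the exact way the eye and yawn conditions are combined (AND between PERCLOS and blink frequency, OR with the yawn condition) is an assumption of this sketch.

```python
def is_fatigued(perclos_value, blink_freq, yawns_last_30s,
                perclos_thresh=0.4, blink_thresh=0.25, yawn_thresh=3):
    """Composite fatigue criterion sketched from the text's thresholds.

    Flags fatigue when PERCLOS exceeds its threshold while the blink
    frequency falls below its threshold, or when more than three yawns
    occur within half a minute.  In practice the blink threshold must be
    rescaled by the video fps, as noted in the text.
    """
    eye_fatigue = perclos_value > perclos_thresh and blink_freq < blink_thresh
    yawn_fatigue = yawns_last_30s > yawn_thresh
    return eye_fatigue or yawn_fatigue
```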

Claims (6)

1. A real-time fatigue driving detection method is characterized by comprising the following steps:
step one, acquiring image data from a video shot by a camera;
secondly, performing state recognition according to the acquired image data, wherein the state recognition comprises face recognition and eye positioning and classification;
step three, calculating various parameters and forming multi-parameter composite criteria;
step four, fatigue detection, namely determining whether fatigue occurs according to a multi-parameter composite criterion: if not, turning to the first step; if yes, an alarm is given.
2. A real-time fatigue driving detection method according to claim 1, characterized by:
in the first step, the video acquisition and analysis of the single-frame image specifically comprises: real-time video is acquired through a camera, and then opencv is used for capturing video frames, wherein captured image data is a three-dimensional tensor with the channel number being 3, 640 and 480.
3. A real-time fatigue driving detection method according to claim 1, characterized by:
step two, eye-mouth positioning and state recognition are carried out, the obtained single-frame image is input into the trained SSD network model to be detected, and the state of the eyes and the mouth of the driver is determined and stored according to the detection result; the method specifically comprises the following steps: after the image of each frame is sent into the SSD network, the output of the network is the detected positions of the eyes and the mouth and the label information; determining state information to be stored according to the label information, namely the type information of eyes and mouths, and defining the open eyes as a state '1' and the closed eyes as a state '0'; the mouth opening is in a state of '1' and the mouth closing is in a state of '0'.
4. A real-time fatigue driving detection method according to claim 3, characterized in that:
and storing the state information in a queue form, and storing the corresponding state information in a defined queue in real time according to a detected result, wherein the horizontal axis represents frame number information, and the vertical axis represents the state information of the frame number.
5. A real-time fatigue driving detection method according to claim 1, characterized by: the third step of calculating various parameters and forming a multi-parameter composite criterion comprises the following steps:
(1) fatigue state parameter calculation
The fatigue state detection comprises 3 parameters in total: PERCLOS, blink frequency, and yawning frequency;
when the state information of the driver's eyes in a certain time period has been obtained and stored in the corresponding queue, PERCLOS is calculated: the sum of the elements in the queue is the total number of open-eye frames; the difference between the queue length and this sum is the total number of closed-eye frames; the arithmetic mean of the elements in the queue is calculated at the same time; PERCLOS is equal to 1 minus this arithmetic mean, as shown in equation (1):

PERCLOS = 1 - avg = 1 - sum/len    (1)

where len is the length of the queue, sum is the sum of all elements in the queue, and avg is the average of the elements in the queue;
(2) blink frequency calculation
for the calculation of the blink frequency, when the driver's eye state is obtained, a blink is counted if the value of the second-to-last element in the queue is open eye '1' and the value of the last element is closed eye '0', i.e. at each falling edge in the state sequence; whether the current frame is a blink frame is recorded in a queue, '1' if it is and '0' if it is not; since the fps of the video processing is fixed, the queue length serves as a timing tool, and the calculation formula is shown in equation (2):

f = n/N    (2)

where n represents the total number of eye-closing (blink) frames per unit time, and N represents the total number of frames per unit time;
calculating by the formula to obtain the blink frequency;
(3) frequency calculation of yawning
for the calculation of the yawning frequency, a sequence of duration about 1.5 s that is all '1' except for its last element, i.e. '11111……10', is searched for at the tail of the mouth-state queue; when such a sequence is found it is counted as one yawn and stored in a queue holding yawn information; the yawning frequency is calculated as the ratio of the number of yawns m to the number of frames N, as shown in equation (3):

f = m/N    (3)
6. the real-time fatigue driving detection method according to claim 5, wherein the step four fatigue state detection comprises the steps of:
when the blink frequency decreases while the PERCLOS value increases, the driver is judged to be fatigued; the blink-frequency threshold for judging fatigue is 0.25 and the PERCLOS threshold is 0.4;
and for the yawning frequency, fatigue is detected when more than three yawns occur within half a minute, thereby finally completing the fatigue driving detection function.
CN201910929846.6A 2019-09-29 2019-09-29 Real-time fatigue driving detection method Pending CN110705453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910929846.6A CN110705453A (en) 2019-09-29 2019-09-29 Real-time fatigue driving detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910929846.6A CN110705453A (en) 2019-09-29 2019-09-29 Real-time fatigue driving detection method

Publications (1)

Publication Number Publication Date
CN110705453A true CN110705453A (en) 2020-01-17

Family

ID=69197940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910929846.6A Pending CN110705453A (en) 2019-09-29 2019-09-29 Real-time fatigue driving detection method

Country Status (1)

Country Link
CN (1) CN110705453A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016429A (en) * 2020-08-21 2020-12-01 高新兴科技集团股份有限公司 Fatigue driving detection method based on train cab scene
CN112686161A (en) * 2020-12-31 2021-04-20 遵义师范学院 Fatigue driving detection method based on neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240446A (en) * 2014-09-26 2014-12-24 长春工业大学 Fatigue driving warning system on basis of human face recognition
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN107480629A (en) * 2017-08-11 2017-12-15 常熟理工学院 A kind of method for detecting fatigue driving and device based on depth information
CN110119672A (en) * 2019-03-26 2019-08-13 湖北大学 A kind of embedded fatigue state detection system and method
CN110223212A (en) * 2019-06-20 2019-09-10 上海木木机器人技术有限公司 A kind of dispatch control method and system of transportation robot
CN110276273A (en) * 2019-05-30 2019-09-24 福建工程学院 Merge the Driver Fatigue Detection of facial characteristics and the estimation of image pulse heart rate


Similar Documents

Publication Publication Date Title
CN105809144B (en) A kind of gesture recognition system and method using movement cutting
CN110543867B (en) Crowd density estimation system and method under condition of multiple cameras
JP4316541B2 (en) Monitoring recording apparatus and monitoring recording method
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN109214373A (en) A kind of face identification system and method for attendance
CN106682578B (en) Weak light face recognition method based on blink detection
WO2015131734A1 (en) Method, device, and storage medium for pedestrian counting in forward looking surveillance scenario
CN105844659B (en) The tracking and device of moving component
CN103049459A (en) Feature recognition based quick video retrieval method
CN105868574B (en) A kind of optimization method of camera track human faces and wisdom health monitor system based on video
TWI687159B (en) Fry counting system and fry counting method
CN108596087B (en) Driving fatigue degree detection regression model based on double-network result
CN110287907B (en) Object detection method and device
CN106650574A (en) Face identification method based on PCANet
CN109829382A (en) The abnormal object early warning tracing system and method for Behavior-based control feature intelligent analysis
CN104063709B (en) Sight line detector and method, image capture apparatus and its control method
CN110276265A (en) Pedestrian monitoring method and device based on intelligent three-dimensional solid monitoring device
CN106570490A (en) Pedestrian real-time tracking method based on fast clustering
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN110705453A (en) Real-time fatigue driving detection method
WO2013075295A1 (en) Clothing identification method and system for low-resolution video
CN109063626A (en) Dynamic human face recognition methods and device
CN113255608A (en) Multi-camera face recognition positioning method based on CNN classification
Sun et al. Kinect-based intelligent monitoring and warning of students' sitting posture
CN104751144B (en) A kind of front face fast appraisement method of facing video monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200117