CN112347891B - Method for detecting drinking water state in cabin based on vision


Info

Publication number
CN112347891B
CN112347891B (application CN202011192152.8A)
Authority
CN
China
Prior art keywords
driver
drinking
key point
cabin
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011192152.8A
Other languages
Chinese (zh)
Other versions
CN112347891A (en)
Inventor
黄宇维
刘国清
杨广
周滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Youjia Technology Co ltd
Original Assignee
Nanjing Youjia Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Youjia Technology Co ltd filed Critical Nanjing Youjia Technology Co ltd
Priority to CN202011192152.8A
Publication of CN112347891A
Application granted
Publication of CN112347891B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vision-based method for detecting the drinking state in a cabin, which comprises the following steps: collecting in-cabin images at different viewing angles under different illumination conditions, labeling the images, and dividing the labeled images into a training set, a test set and a validation set; judging whether the vehicle has been started, and photographing the driver to obtain a driver image; detecting the driver's state with a human upper-limb key point detector and, if one of the driver's hands is detected to have left the steering wheel, judging with a drinking detector whether the driver is drinking and whether a drinking container is present in the image; classifying the drinking container with a fine classifier as either an open container or a straw cup; and, if the drinking container is an open container, calculating the pitch angle of the driver's head from face key point detection to judge whether the line of sight is affected.

Description

Method for detecting drinking water state in cabin based on vision
Technical Field
The invention relates to a method for detecting the water drinking state in a cabin based on vision, and belongs to the technical field of intelligent cabins.
Background
With the continuous improvement of computer hardware and software and the growing demand for driving safety in recent years, cabin monitoring technology has received extensive attention in both academia and industry. Drinking-state analysis is one of its important tasks and can effectively protect driver safety. Its main function is to detect the drinking state of the driver in the cabin and to raise an alarm when safe driving is affected.
In the past, drinking-state detection usually only checked whether the driver was drinking and raised an alarm as soon as drinking was detected, which is not ideal. Investigation shows that the drinking state has a great influence on driving safety: the preparation before drinking, such as opening a pull-tab can, requires both hands to leave the steering wheel, so the vehicle is uncontrolled for several seconds. At a speed of 120 km/h the vehicle covers about 33 meters per second; assuming it takes 1.5 seconds to open the can, the vehicle travels another 50 meters, and this distance is covered with nobody operating the vehicle, which severely endangers public safety. Research also shows that when drinking through a straw the driver's eyes can still look ahead, which largely preserves driving safety. Detecting only the drinking state therefore neither covers the preparation phase nor tells whether the drinking itself affects driving safety.
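The distances quoted above follow from a one-line computation; the sketch below simply reproduces the example values from this paragraph (120 km/h cruise speed, 1.5 s assumed to open the can):

```python
# Example values from the paragraph above: 120 km/h cruise speed,
# 1.5 s assumed to open a pull-tab can.
speed_ms = 120 * 1000 / 3600          # 33.3 m/s, i.e. about 33 m per second
blind_distance = speed_ms * 1.5       # distance covered with no hands on the wheel
print(f"{speed_ms:.1f} m/s -> {blind_distance:.0f} m uncontrolled")
```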
To overcome these problems, a human key point detection method is used to judge whether both of the driver's hands have left the steering wheel, and an alarm is raised promptly if they have; a drinking detection algorithm then detects whether the driver is drinking, a deep neural network distinguishes the type of cup, the face pitch angle is calculated with a face key point detection algorithm, and an alarm is raised for drinking states with a large pitch angle.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a vision-based method for detecting the drinking state in a cabin: a human key point detection method judges whether both of the driver's hands have left the steering wheel and, if so, an alarm is raised promptly; a drinking detection algorithm then detects whether the driver is drinking, a deep neural network distinguishes the type of cup, the face pitch angle is calculated with a face key point detection algorithm, and an alarm is raised for drinking states with a large pitch angle.
To achieve this purpose, the invention provides a vision-based method for detecting the drinking state in a cabin, which comprises the following steps:
1) collecting, multiple times, in-cabin images at different viewing angles under different illumination conditions, labeling the images, and dividing the labeled images into a training set, a test set and a validation set;
2) judging whether the vehicle has been started, and photographing the driver to obtain a driver image;
3) detecting the driver's state with a human upper-limb key point detector; if one of the driver's hands is detected to have left the steering wheel, proceeding to step 4), otherwise photographing the driver again to obtain a new driver image;
4) judging with the drinking detector whether the driver is drinking and, if so, whether a drinking container is present in the image; classifying the drinking container with a fine classifier as either an open container or a straw cup;
5) if the drinking container is a straw cup, giving no alarm; if the drinking container is an open container, calculating the pitch angle of the driver's head from face key point detection to judge whether the line of sight is affected.
Preferably, the driver's state is judged from the detected human upper-limb key points: no alarm is given if the driver's hands have not left the steering wheel, and an alarm is given if both hands have left it.
Preferably, in step 1),
the training of the human upper-limb key point detector comprises the following steps:
firstly, labeling the driver images, annotating the human upper-limb key points, and randomly dividing the driver images into a training set, a validation set and a test set for the human upper-limb key point detector;
secondly, setting up an Hourglass neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the detected upper-limb key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the upper-limb key point positions.
Preferably, the training of the drinking detector comprises the following steps:
firstly, labeling the in-cabin images captured at the same viewing angle, annotating the position of the drinking container in the drinking state, the position consisting of the upper-left and lower-right corner coordinates, and randomly dividing these images into a drinking detector training set, a drinking detector validation set and a drinking detector test set;
secondly, setting up a ResNet detection neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the accuracy of the drinking container detection on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the drinking container.
Preferably, the training of the fine classifier comprises the following steps:
firstly, cropping the detection-box results out of the driver images and labeling them as open containers or straw cups, and randomly dividing the driver images into a fine classifier training set, a fine classifier validation set and a fine classifier test set;
secondly, setting up a ShuffleNet classification neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the cup type classification on the test set, and iteratively optimizing the network with returned data, the returned data covering both false classifications and missed classifications of the cup type.
Preferably, the training of the face key point detector comprises the following steps:
firstly, labeling the driver images, annotating the coordinates of the 68 facial key points;
secondly, setting up a ResNet regression neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the facial key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the facial key point positions.
Preferably, the human upper-limb key points comprise the left wrist, right wrist, left elbow, right elbow, left shoulder and right shoulder.
Preferably, the training set of the human upper-limb key point detector accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
Preferably, the training set of the drinking detector accounts for 80% of the in-cabin images captured at the same viewing angle, the validation set for 10%, and the test set for 10%.
Preferably, the training set of the fine classifier accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
The invention achieves the following beneficial effects:
the invention analyzes the driving state of a driver by using human body key point detection, analyzes the drinking state by using a drinking detection algorithm, analyzes by using a fine classification algorithm and a human face key point detection algorithm, and gives an alarm only for the state influencing driving safety. Based on our scheme, all promote on the precision that drinks and detect and driver's attention recall rate, concentrated attention when impelling driver's driving vehicle has improved the security among the driving process.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
The vision-based method for detecting the drinking state in the cabin comprises the following steps:
1) collecting, multiple times, in-cabin images at different viewing angles under different illumination conditions, labeling the images, and dividing the labeled images into a training set, a test set and a validation set;
2) judging whether the vehicle has been started, and photographing the driver to obtain a driver image;
3) detecting the driver's state with a human upper-limb key point detector; if one of the driver's hands is detected to have left the steering wheel, proceeding to step 4), otherwise photographing the driver again to obtain a new driver image;
4) judging with the drinking detector whether the driver is drinking and, if so, whether a drinking container is present in the image; classifying the drinking container with a fine classifier as either an open container or a straw cup;
5) if the drinking container is a straw cup, giving no alarm; if the drinking container is an open container, calculating the pitch angle of the driver's head from face key point detection to judge whether the line of sight is affected.
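The decision flow of steps 2) to 5) can be summarized in a short sketch. The detector objects passed in, and the helper functions `hands_off_count` and `crop`, are hypothetical names standing in for the trained models and image operations described in this embodiment (one possible version of `hands_off_count` is sketched after the next paragraph); the pitch threshold is likewise an assumed value:

```python
PITCH_LIMIT_DEG = 30.0  # assumed threshold beyond which the line of sight is affected

def check_frame(frame, limb_detector, drink_detector, fine_classifier, head_pitch):
    """One pass of steps 3) to 5) over a single driver image."""
    keypoints = limb_detector(frame)                 # step 3): upper-limb key points
    hands_off = hands_off_count(keypoints)           # see the wrist-region sketch below
    if hands_off == 0:
        return "no_alarm"                            # both hands on the wheel: keep watching
    if hands_off == 2:
        return "alarm"                               # both hands off the wheel: alarm at once
    box = drink_detector(frame)                      # step 4): drinking-container box or None
    if box is None:
        return "no_alarm"
    if fine_classifier(crop(frame, box)) == "straw_cup":
        return "no_alarm"                            # step 5): straw cup is treated as safe
    pitch = head_pitch(frame)                        # open container: check the head pitch
    return "alarm" if abs(pitch) > PITCH_LIMIT_DEG else "no_alarm"
```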
Furthermore, the driver's state is judged from the detected human upper-limb key points: no alarm is given if the driver's hands have not left the steering wheel, and an alarm is given if both hands have left it.
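One way to realize this hands-off-wheel judgment from the upper-limb key points is to test whether the detected wrist positions fall inside a calibrated steering-wheel region of the image. The region coordinates below are assumptions for a fixed in-cabin camera, not values given in the text:

```python
# Calibrated steering-wheel region in image coordinates (assumed for a fixed camera).
WHEEL_X0, WHEEL_Y0, WHEEL_X1, WHEEL_Y1 = 180, 260, 460, 470

def on_wheel(point):
    """True if an (x, y) key point lies inside the steering-wheel region."""
    x, y = point
    return WHEEL_X0 <= x <= WHEEL_X1 and WHEEL_Y0 <= y <= WHEEL_Y1

def hands_off_count(keypoints):
    """keypoints: dict mapping names such as 'left_wrist' to (x, y) image coordinates.

    Returns 0 (both hands on the wheel), 1 (proceed to drinking detection)
    or 2 (both hands off the wheel: alarm immediately).
    """
    return sum(not on_wheel(keypoints[k]) for k in ("left_wrist", "right_wrist"))
```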
Further, in step 1),
the training of the human upper-limb key point detector comprises the following steps:
firstly, labeling the driver images, annotating the human upper-limb key points, and randomly dividing the driver images into a training set, a validation set and a test set for the human upper-limb key point detector;
secondly, setting up an Hourglass neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the detected upper-limb key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the upper-limb key point positions.
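A condensed version of the iterative training and validation in the second and third steps might look as follows. `model` is an instantiated stacked-hourglass network and the dataset objects are placeholders for the labelled driver images; heatmap regression with an MSE loss is one common formulation assumed here rather than a detail fixed by the text:

```python
import torch
from torch.utils.data import DataLoader

def train_keypoint_detector(model, train_set, val_set, epochs=50, lr=1e-3):
    """Iterative training on the training set with validation after every epoch."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()                     # heatmap regression loss (assumed)
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=16)
    for epoch in range(epochs):
        model.train()
        for images, heatmaps in train_loader:        # one heatmap per upper-limb key point
            optimizer.zero_grad()
            loss = loss_fn(model(images), heatmaps)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():                        # validate on the validation set
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        print(f"epoch {epoch}: validation loss {val_loss:.4f}")
```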
Further, the training of the drinking detector comprises the following steps:
firstly, labeling the in-cabin images captured at the same viewing angle, annotating the position of the drinking container in the drinking state, the position consisting of the upper-left and lower-right corner coordinates, and randomly dividing these images into a drinking detector training set, a drinking detector validation set and a drinking detector test set;
secondly, setting up a ResNet detection neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the accuracy of the drinking container detection on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the drinking container.
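The test-set evaluation and the mining of returned data in the third step can be expressed as an IoU comparison between predicted and labelled container boxes. The sketch below is illustrative: the 0.5 threshold is an assumed value, and boxes use the (upper-left, lower-right) corner format defined in the first step:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1) corners."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def split_errors(predicted, labelled, thr=0.5):
    """Separate false detections and missed detections for network re-optimization."""
    false_detections = [p for p in predicted if all(iou(p, g) < thr for g in labelled)]
    missed_detections = [g for g in labelled if all(iou(p, g) < thr for p in predicted)]
    return false_detections, missed_detections
```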
Further, the training of the fine classifier comprises the following steps:
firstly, cropping the detection-box results out of the driver images and labeling them as open containers or straw cups, and randomly dividing the driver images into a fine classifier training set, a fine classifier validation set and a fine classifier test set;
secondly, setting up a ShuffleNet classification neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the cup type classification on the test set, and iteratively optimizing the network with returned data, the returned data covering both false classifications and missed classifications of the cup type.
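A sketch of the two-class fine classifier is shown below. It assumes the detection box has already been cropped out of the driver image, and `shufflenet_v2_x1_0` is the torchvision ShuffleNet variant chosen here for illustration (the text fixes only the ShuffleNet family, not the exact variant):

```python
import torch
import torchvision.transforms as T
from torchvision.models import shufflenet_v2_x1_0

CLASSES = ("open_container", "straw_cup")

# Two output classes: open container vs. straw cup.
model = shufflenet_v2_x1_0(num_classes=2)

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def classify_cup(box_crop):
    """box_crop: PIL image of the detection-box region cut from the driver image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(box_crop).unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]
```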
Further, the training of the face key point detector comprises the following steps:
firstly, labeling the driver images, annotating the coordinates of the 68 facial key points;
secondly, setting up a ResNet regression neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the facial key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the facial key point positions.
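Given the 68 detected facial key points, the head pitch angle can be estimated with a perspective-n-point fit against a generic 3D face model, as sketched below. The six landmark indices, the 3D reference coordinates and the pinhole camera approximation are common choices assumed for illustration, not details fixed by the text:

```python
import cv2
import numpy as np

# Generic 3D reference points (in mm) for six of the 68 landmarks:
# nose tip (30), chin (8), outer eye corners (36, 45), mouth corners (48, 54).
MODEL_3D = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)], dtype=np.float64)
LANDMARK_IDS = (30, 8, 36, 45, 48, 54)

def head_pitch_deg(landmarks68, image_size):
    """Estimate the head pitch angle in degrees from the 68 facial key points."""
    h, w = image_size
    camera = np.array([[w, 0, w / 2],               # pinhole approximation:
                       [0, w, h / 2],               # focal length ~ image width
                       [0, 0, 1]], dtype=np.float64)
    pts2d = np.array([landmarks68[i] for i in LANDMARK_IDS], dtype=np.float64)
    ok, rvec, _tvec = cv2.solvePnP(MODEL_3D, pts2d, camera, None)
    rotation, _ = cv2.Rodrigues(rvec)
    # x-axis rotation under the Rz*Ry*Rx Euler convention.
    return float(np.degrees(np.arctan2(rotation[2, 1], rotation[2, 2])))
```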
Further, the human upper-limb key points comprise the left wrist, right wrist, left elbow, right elbow, left shoulder and right shoulder.
Further, the training set of the human upper-limb key point detector accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
Further, the training set of the drinking detector accounts for 80% of the in-cabin images captured at the same viewing angle, the validation set for 10%, and the test set for 10%.
Further, the training set of the fine classifier accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
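The 80/10/10 division shared by the three detectors can be implemented as a single random shuffle; the sketch below is illustrative and assumes the labelled images fit in a Python list:

```python
import random

def split_dataset(images, seed=0):
    """Randomly divide labelled images into 80% training, 10% validation, 10% test."""
    items = list(images)
    random.Random(seed).shuffle(items)
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return (items[:n_train],                   # training set
            items[n_train:n_train + n_val],    # validation set
            items[n_train + n_val:])           # test set
```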
In this embodiment, whether the vehicle has been started can be judged by photographing the center console and detecting the vehicle speed dial in the console image, or by installing a speed sensor on the vehicle.
The training set is used for model training; the validation set is used to determine the network structure and the parameters that control model complexity; the test set evaluates the performance of the finally selected model. The 68 facial key point coordinates follow a convention commonly used in the prior art, which is not illustrated in this embodiment.
The camera component can adopt various models in the prior art, and those skilled in the art can select an appropriate model according to actual needs, which is not illustrated in this embodiment.
A deep neural network (DNN) is a deep learning framework: a neural network with at least one hidden layer. Like a shallow neural network, a deep neural network can model complex nonlinear systems, but the extra layers provide higher levels of abstraction, improving the capability of the model. A deep neural network is a discriminative model that can be trained with the back-propagation algorithm.
The invention runs on an embedded platform, such as a tablet computer deployed in the cabin, or the captured data is sent to the cloud for processing through a communication module.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (9)

1. A vision-based method for detecting the drinking state in a cabin, characterized by comprising the following steps:
1) collecting, multiple times, in-cabin images at different viewing angles under different illumination conditions, labeling the images, and dividing the labeled images into a training set, a test set and a validation set;
2) judging whether the vehicle has been started, and photographing the driver to obtain a driver image;
3) detecting the driver's state with a human upper-limb key point detector; if one of the driver's hands is detected to have left the steering wheel, proceeding to step 4), otherwise photographing the driver again to obtain a new driver image;
4) judging with the drinking detector whether the driver is drinking and, if so, whether a drinking container is present in the image; classifying the drinking container with a fine classifier as either an open container or a straw cup;
5) if the drinking container is a straw cup, giving no alarm; if the drinking container is an open container, calculating the pitch angle of the driver's head based on face key point detection to judge whether the line of sight is affected;
the training of the face key point detector comprising the following steps:
firstly, labeling the driver images, annotating the coordinates of the 68 facial key points;
secondly, setting up a ResNet regression neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the facial key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the facial key point positions.
2. The vision-based method for detecting the drinking state in a cabin according to claim 1, characterized in that the driver's state is judged from the detected human upper-limb key points: no alarm is given if the driver's hands have not left the steering wheel, and an alarm is given if both hands have left it.
3. The vision-based method for detecting the drinking state in a cabin according to claim 1, characterized in that, in step 1),
the training of the human upper-limb key point detector comprises the following steps:
firstly, labeling the driver images, annotating the human upper-limb key points, and randomly dividing the driver images into a training set, a validation set and a test set for the human upper-limb key point detector;
secondly, setting up an Hourglass neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the detected upper-limb key point positions on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the upper-limb key point positions.
4. The vision-based method for detecting the drinking state in a cabin according to claim 1, characterized in that the training of the drinking detector comprises the following steps:
firstly, labeling the in-cabin images captured at the same viewing angle, annotating the position of the drinking container in the drinking state, the position consisting of the upper-left and lower-right corner coordinates, and randomly dividing these images into a drinking detector training set, a drinking detector validation set and a drinking detector test set;
secondly, setting up a ResNet detection neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the accuracy of the drinking container detection on the test set, and iteratively optimizing the network with returned data, the returned data covering both false detections and missed detections of the drinking container.
5. The vision-based method for detecting the drinking state in a cabin according to claim 1, characterized in that the training of the fine classifier comprises the following steps:
firstly, cropping the detection-box results out of the driver images and labeling them as open containers or straw cups, and randomly dividing the driver images into a fine classifier training set, a fine classifier validation set and a fine classifier test set;
secondly, setting up a ShuffleNet classification neural network, feeding the training set into it for iterative training, and validating on the validation set;
thirdly, testing the correctness of the cup type classification on the test set, and iteratively optimizing the network with returned data, the returned data covering both false classifications and missed classifications of the cup type.
6. The vision-based method for detecting the drinking state in a cabin according to claim 3, characterized in that the human upper-limb key points comprise the left wrist, right wrist, left elbow, right elbow, left shoulder and right shoulder.
7. The vision-based method for detecting the drinking state in a cabin according to claim 3, characterized in that the training set of the human upper-limb key point detector accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
8. The vision-based method for detecting the drinking state in a cabin according to claim 4, characterized in that the training set of the drinking detector accounts for 80% of the in-cabin images captured at the same viewing angle, the validation set for 10%, and the test set for 10%.
9. The vision-based method for detecting the drinking state in a cabin according to claim 5, characterized in that the training set of the fine classifier accounts for 80% of the driver images, the validation set for 10%, and the test set for 10%.
CN202011192152.8A 2020-10-30 2020-10-30 Method for detecting drinking water state in cabin based on vision Active CN112347891B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011192152.8A (CN112347891B) | 2020-10-30 | 2020-10-30 | Method for detecting drinking water state in cabin based on vision

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011192152.8A (CN112347891B) | 2020-10-30 | 2020-10-30 | Method for detecting drinking water state in cabin based on vision

Publications (2)

Publication Number | Publication Date
CN112347891A (en) | 2021-02-09
CN112347891B (en) | 2022-02-22

Family

Family ID: 74356211

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011192152.8A (CN112347891B, Active) | Method for detecting drinking water state in cabin based on vision | 2020-10-30 | 2020-10-30

Country Status (1)

Country Link
CN (1) CN112347891B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109937152A (en) * 2017-08-10 2019-06-25 北京市商汤科技开发有限公司 Driving condition supervision method and apparatus, driver's monitoring system, vehicle
CN110309723A (en) * 2019-06-04 2019-10-08 东南大学 A kind of driving behavior recognition methods based on characteristics of human body's disaggregated classification
CN111661059A (en) * 2019-03-08 2020-09-15 虹软科技股份有限公司 Method and system for monitoring distracted driving and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6693427B2 (en) * 2017-01-18 2020-05-13 トヨタ自動車株式会社 Driver status detector
JP7005933B2 (en) * 2017-05-09 2022-01-24 オムロン株式会社 Driver monitoring device and driver monitoring method
CN110119676B (en) * 2019-03-28 2023-02-03 广东工业大学 Driver fatigue detection method based on neural network
CN111222477B (en) * 2020-01-10 2023-05-30 厦门瑞为信息技术有限公司 Vision-based method and device for detecting departure of hands from steering wheel


Also Published As

Publication Number | Publication Date
CN112347891A (en) | 2021-02-09

Similar Documents

Publication Publication Date Title
CN105354988B (en) A kind of driver tired driving detecting system and detection method based on machine vision
CN109919049A (en) Fatigue detection method based on deep learning human face modeling
CN100462047C (en) Safe driving auxiliary device based on omnidirectional computer vision
CN105151049B (en) The early warning system detected based on driver's face feature and deviation
CN104637246B (en) Driver multi-behavior early warning system and danger evaluation method
CN105286802B (en) Driver Fatigue Detection based on video information
CN107273816B (en) Traffic speed limit label detection recognition methods based on vehicle-mounted forward sight monocular camera
CN104224204B (en) A kind of Study in Driver Fatigue State Surveillance System based on infrared detection technology
CN108960065A (en) A kind of driving behavior detection method of view-based access control model
CN108537197A (en) A kind of lane detection prior-warning device and method for early warning based on deep learning
CN109636924A (en) Vehicle multi-mode formula augmented reality system based on real traffic information three-dimensional modeling
CN106485233A (en) Drivable region detection method, device and electronic equipment
CN102982316A (en) Driver abnormal driving behavior recognition device and method thereof
CN105354987A (en) Vehicle fatigue driving detection and identity authentication apparatus, and detection method thereof
CN107491769A (en) Method for detecting fatigue driving and system based on AdaBoost algorithms
CN110147738B (en) Driver fatigue monitoring and early warning method and system
CN109460699A (en) A kind of pilot harness's wearing recognition methods based on deep learning
CN110103816B (en) Driving state detection method
CN108447303A (en) The periphery visual field dangerous discernment method coupled with machine vision based on human eye vision
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
CN110203202A (en) A kind of lane-change auxiliary method for early warning and device based on Driver intention recognition
CN109740477A (en) Study in Driver Fatigue State Surveillance System and its fatigue detection method
CN104881956A (en) Fatigue driving early warning system
CN101587544A (en) Automotive on-vehicle antitracking device based on computer vision
CN103324932A (en) Video-based vehicle detecting and tracking method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant