CN118220166A - Driver distraction early-warning system overcoming the problems of myopia glasses and individual eye differences - Google Patents


Info

Publication number
CN118220166A
Authority
CN
China
Prior art keywords
value
distraction
sight
driver
upper limit
Prior art date
Legal status
Pending
Application number
CN202410551460.7A
Other languages
Chinese (zh)
Inventor
余勇
赵红
张禹丰
李世伦
Current Assignee
Junjie Technology Beijing Co ltd
Original Assignee
Junjie Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Junjie Technology Beijing Co ltd filed Critical Junjie Technology Beijing Co ltd
Priority to CN202410551460.7A
Publication of CN118220166A
Legal status: Pending

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention relates to the technical field of driving monitoring and discloses a driver distraction early-warning system that overcomes the problems of myopia glasses and individual eye differences. The system collects real-time comprehensive gaze data, predicts a gaze pitch upper limit value, generates a pitch difference value from the real-time gaze pitch upper limit value and the predicted gaze pitch upper limit value, divides it into distraction early-warning levels, generates assisted-driving instructions based on the distraction early-warning level, and thereby assists the driver in driving the vehicle safely. Compared with the prior art, the system accurately predicts the real-time gaze pitch upper limit value and judges whether the driver is driving while distracted; it effectively avoids misjudging distraction caused by lens refraction when the driver wears myopia glasses, avoids errors caused by the differing vertical gaze preferences of individual drivers, and formulates specific intelligent driving-assistance measures when distracted driving occurs, thereby improving vehicle driving safety.

Description

Driver distraction early-warning system overcoming the problems of myopia glasses and individual eye differences
Technical Field
The invention relates to the technical field of driving monitoring, and in particular to a driver distraction early-warning system that overcomes the problems of myopia glasses and individual eye differences.
Background
Distracted driving refers to the phenomenon in which a driver's attention is directed to activities unrelated to normal driving, reducing the driver's operating ability. It is a leading cause of road traffic accidents, especially in the passenger and freight transportation industries: once a driver behaves distractedly while driving at high speed, the consequences range from traffic violations in mild cases to accidents with serious personal and property losses in severe cases. Distraction early warning is therefore needed to avoid the harm caused by distracted driving and to improve driving safety.
The patent application with publication number CN109584507A discloses a driving behavior monitoring method, device, system, vehicle and storage medium. It determines whether the driver's current driving state is abnormal by performing detection operations on video information collected inside the vehicle, and executes the early-warning operation corresponding to the abnormal driving state when one is detected. It thereby realizes all-round monitoring of two abnormal driving states, fatigue driving and distracted driving, and achieves targeted, efficient early warning by executing the early-warning operation corresponding to each abnormal state, reducing driving risk.
The prior art has the following defects:
Existing distracted-driving early-warning systems recognize the face position with a 2D camera, convert it into a head pose by 3D modeling, and derive the eye gaze from the head pose to monitor the driver's gaze position. This achieves a good early-warning effect when the driver does not wear myopia glasses. When the driver does wear myopia glasses, however, light refraction by the lenses causes the monitored gaze position to be lower than it would be without glasses; moreover, because individual drivers' eye habits differ, a driver's normal gaze position may overlap with the gaze position characteristic of distracted driving. This easily causes false triggering of the distracted-driving signal and reduces the accuracy of distraction early warning.
In view of the above, the present invention provides a driver distraction early-warning system that overcomes the problems of myopia glasses and individual eye differences.
Disclosure of Invention
To overcome the above defects in the prior art, the invention provides the following technical scheme: a driver distraction early-warning system overcoming the problems of myopia glasses and individual eye differences, applied to a driver monitoring system, comprising:
a data acquisition module for acquiring distracted-driving data of a driver, the distracted-driving data comprising comprehensive gaze data and a gaze pitch upper limit value;
a model training module for training a machine learning model that predicts the gaze pitch upper limit value based on the distracted-driving data;
a real-time early-warning module for collecting real-time comprehensive gaze data, inputting it into the trained machine learning model to predict the gaze pitch upper limit value, and judging whether to issue a distraction early-warning prompt;
a level division module for generating a pitch difference value from the real-time gaze pitch upper limit value and the predicted gaze pitch upper limit value and dividing it into distraction early-warning levels;
and an intelligent driving-assistance module for generating an assisted-driving instruction based on the distraction early-warning level and assisting the driver to drive the vehicle safely based on that instruction.
Further, the comprehensive gaze data comprises a head posture offset value, a pupil center fluctuation value, an illumination intensity concentration degree and a lens refraction compensation value;
The head posture offset value is acquired as follows:
taking a preset acquisition duration as the unit and the current moment as the starting point, step back i preset acquisition durations and mark i data acquisition moments;
at the i data acquisition moments, photograph the driver's head posture with a camera installed in the cab to obtain i head images;
identify, by computer vision, the inner corner point of the driver's left eye and the inner corner point of the driver's right eye in each head image, and mark them as the first point and the second point respectively;
connect the i first points with the i second points to form i horizontal dividing lines, and mark the midpoints of the i horizontal dividing lines as head center points, obtaining i head center points;
draw a horizontal standard line and a vertical standard line through the midpoints of the four boundaries of each head image, and mark the intersection of the horizontal and vertical standard lines as the standard center point;
measure in turn the distances from the i head center points to the standard center point to obtain i sub-offset values;
remove the maximum and minimum sub-offset values, accumulate the remaining i−2 sub-offset values and take their average to obtain the head posture offset value;
The head posture offset value is expressed as:
PY_zt = (Σ_{a=1}^{i−2} PY_za) / (i − 2);
where PY_zt is the head posture offset value and PY_za is the a-th of the remaining sub-offset values.
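The trimmed-mean step above can be sketched as follows. This is an illustrative sketch only: the function name `head_pose_offset`, its arguments, and the pixel-coordinate convention are assumptions, not part of the patent.

```python
import math

def head_pose_offset(head_centers, image_size):
    """Trimmed mean of distances from each frame's head center point to the
    standard center point (image center), per the PY_zt formula."""
    w, h = image_size
    sx, sy = w / 2, h / 2  # intersection of the horizontal/vertical standard lines
    # Sub-offset value: distance from a head center point to the standard center.
    subs = sorted(math.hypot(x - sx, y - sy) for x, y in head_centers)
    trimmed = subs[1:-1]  # remove the maximum and minimum sub-offset values
    return sum(trimmed) / len(trimmed)
```

For example, with four frames whose head centers lie 0, 10, 20 and 30 pixels from the image center, the trimmed mean is (10 + 20) / 2 = 15.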
Further, the pupil center fluctuation value is acquired as follows:
at the i data acquisition moments, photograph the driver's eyes with a camera installed in the cab to obtain i eye images;
in each of the i eye images, identify by computer vision the lower edge of the upper eyelid and the upper edge of the lower eyelid, and mark the area between them as the target area;
mark the pixel values of all pixels in the target area, and mark the pixels whose pixel value exceeds a preset pixel threshold as target pixels;
randomly select a target pixel as the center and draw p detection circles with a preset length as the radius;
count the total number of pixels and the number of target pixels within each of the p detection circles, and divide the number of target pixels by the total number of pixels in each circle to obtain p target pixel ratios;
The target pixel ratio is expressed as:
ZB_xsp = SL_mbp / SL_zlp;
where ZB_xsp is the p-th target pixel ratio, SL_mbp is the number of target pixels in the p-th detection circle, and SL_zlp is the total number of pixels in the p-th detection circle;
from the m target pixel ratios larger than a preset ratio threshold, select the maximum, and mark the detection circle corresponding to that maximum as the target circle;
in each of the i eye images, mark the target pixel at the center of the target circle as the pupil center point, obtaining i pupil center points;
mark n measuring points each on the lower edge of the upper eyelid and the upper edge of the lower eyelid, and measure the distance between each pair of measuring points lying on the same vertical line to obtain n distance values;
connect the two measuring points corresponding to the maximum distance value to obtain a distance line, and mark its center as the conventional center point;
measure the distances from the i pupil center points to the conventional center point to obtain i sub-fluctuation values, accumulate the i sub-fluctuation values and take their average to obtain the pupil center fluctuation value;
The pupil center fluctuation value is expressed as:
BD_zt = (Σ_{b=1}^{i} BD_zb) / i;
where BD_zt is the pupil center fluctuation value and BD_zb is the b-th sub-fluctuation value.
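The ratio and fluctuation formulas above can be written as a short sketch; the function names and argument layout are assumptions for illustration, not the patent's implementation.

```python
import math

def target_pixel_ratio(n_target, n_total):
    """ZB_xs: target pixels in a detection circle over all pixels in it."""
    return n_target / n_total

def pupil_center_fluctuation(pupil_centers, conventional_center):
    """BD_zt: mean distance from the i pupil center points to the
    conventional center point (midpoint of the widest eyelid distance line)."""
    cx, cy = conventional_center
    subs = [math.hypot(x - cx, y - cy) for x, y in pupil_centers]
    return sum(subs) / len(subs)
```

With, say, 30 target pixels among 120 pixels in a detection circle, the ratio is 0.25; two pupil centers at distances 5 and 0 from the conventional center give a fluctuation value of 2.5.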
Further, the illumination intensity concentration degree is acquired as follows:
photograph a body-position image of the driver during driving with the camera;
identify by computer vision the region of the driver's head in the body-position image and mark it as the head region;
shade the head region, divide it, using a preset length value as the side length, into s rectangular cells of equal area, and measure the shaded area value within each of the s cells;
mark the rectangular cells whose shaded area value exceeds a preset coloring lower limit as effective rectangular cells, obtaining w effective rectangular cells, and calculate the area of each effective rectangular cell by the rectangle area formula;
at the i data acquisition moments, acquire illumination intensity data of the head region with an illumination intensity sensor to obtain i illumination intensity values;
accumulate the i illumination intensity values, take their average, and divide it by the accumulated areas of the w effective rectangular cells to obtain the illumination intensity concentration degree;
The illumination intensity concentration degree is expressed as:
GZ_jj = (Σ_{c=1}^{i} GZ_zc / i) / (Σ_{d=1}^{w} MJ_gd);
where GZ_jj is the illumination intensity concentration degree, GZ_zc is the c-th illumination intensity value, and MJ_gd is the area of the d-th effective rectangular cell.
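The GZ_jj computation can be sketched as below, assuming equal-area cells; the function and argument names are illustrative, not taken from the patent.

```python
def illumination_concentration(intensity_values, shaded_areas, cell_area, lower_limit):
    """GZ_jj: mean of the i illumination intensity values divided by the total
    area of the effective rectangular cells (cells whose shaded area value
    exceeds the preset coloring lower limit)."""
    mean_intensity = sum(intensity_values) / len(intensity_values)
    # Effective cells keep their full (equal) rectangle area in the denominator.
    effective_area = sum(cell_area for a in shaded_areas if a > lower_limit)
    return mean_intensity / effective_area
```

For instance, intensity readings 100 and 200 (mean 150) over cells with shaded areas 5, 1 and 4 against a lower limit of 2 yield two effective cells of area 4 each, giving 150 / 8 = 18.75.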
Further, the lens refraction compensation value is acquired as follows:
taking the position of the camera as the starting point and the position of the myopia lens as the end point, measure the distance from the starting point to the end point with a range finder to obtain a first distance value;
at the i data acquisition moments, measure the eye-movement angle from the starting point to the end point with the eye-tracking algorithm in the data acquisition module to obtain i inclination angles;
from the first distance value and the i inclination angles, calculate the distance from the camera to the myopia lens to obtain i second distance values;
The second distance value is expressed as:
JL_d2i = JL_d1 · cos(θ_i);
where JL_d2i is the i-th second distance value, JL_d1 is the first distance value, and θ_i is the i-th inclination angle;
input the refractive index of the myopia lens in advance, and calculate the lens refraction angle by the refraction angle formula for the case where the driver's line of sight is horizontal to the myopia lens;
combining the lens refraction angle with the i second distance values, calculate the downward gaze distance to obtain i sub-compensation values;
The sub-compensation value is expressed as:
BC_zi = JL_d2i · tan(90° − θ_2);
where BC_zi is the i-th sub-compensation value and θ_2 is the lens refraction angle;
Accumulating the i sub-compensation values and then averaging to obtain a lens refraction compensation value;
The lens refraction compensation value is expressed as:
BC_zs = (Σ_{e=1}^{i} BC_ze) / i;
where BC_zs is the lens refraction compensation value and BC_ze is the e-th sub-compensation value.
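Combining the two formulas above, the averaging of the sub-compensation values can be sketched as follows; the function signature and degree-based angle convention are assumptions made for illustration.

```python
import math

def lens_refraction_compensation(first_distance, tilt_angles_deg, lens_refraction_deg):
    """BC_zs: average downward gaze distance caused by lens refraction,
    using JL_d2i = JL_d1 * cos(theta_i) and BC_zi = JL_d2i * tan(90 deg - theta_2)."""
    subs = []
    for theta_i in tilt_angles_deg:
        jl_d2 = first_distance * math.cos(math.radians(theta_i))  # second distance value
        subs.append(jl_d2 * math.tan(math.radians(90.0 - lens_refraction_deg)))
    return sum(subs) / len(subs)
```

For a first distance of 2, a single inclination angle of 0° and a lens refraction angle of 45°, the compensation value is 2 · cos 0° · tan 45° = 2.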
Further, the machine learning model for predicting the gaze pitch upper limit value is trained as follows:
convert each group of comprehensive gaze data into a corresponding feature vector, take the feature vectors as the input of the machine learning model and the gaze pitch upper limit value corresponding to each group as its output, take the gaze pitch upper limit value as the prediction target and the sum of the prediction errors over all training data as the training objective, and train the model until the sum of prediction errors converges.
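The patent does not name a concrete model, so as a hedged stand-in the training loop above can be illustrated with a toy linear regressor fitted by stochastic gradient descent on the four-feature comprehensive-gaze vectors; every name here is an assumption, not the patent's method.

```python
def train_pitch_model(features, targets, lr=0.01, epochs=2000):
    """Fit y = w.x + b by SGD; a stand-in for the patent's machine learning
    model predicting the gaze pitch upper limit value from feature vectors
    such as [head_offset, pupil_fluctuation, light_concentration, lens_comp]."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y  # prediction error
            for j, xj in enumerate(x):
                w[j] -= lr * err * xj
            b -= lr * err
    return lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
```

On a toy dataset following y = 2x + 1, the returned predictor converges to that line, mirroring "training until the sum of prediction errors reaches convergence".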
Further, whether to issue the distraction early-warning prompt is judged as follows:
collect the real-time gaze pitch upper limit value FY_ss;
compare the real-time gaze pitch upper limit value FY_ss with the predicted gaze pitch upper limit value FY_yc;
when FY_ss ≥ FY_yc, judge that the distraction early-warning prompt is issued;
when FY_ss < FY_yc, judge that the distraction early-warning prompt is not issued.
Further, the distraction early-warning levels comprise a first-level distraction level and a second-level distraction level;
the first-level and second-level distraction levels are divided as follows:
subtract the predicted gaze pitch upper limit value FY_yc from the real-time gaze pitch upper limit value FY_ss to obtain the pitch difference value;
The pitch difference value is expressed as:
FY_cz = FY_ss − FY_yc;
where FY_cz is the pitch difference value;
compare the pitch difference value FY_cz with a preset pitch difference threshold FY_yz;
when FY_cz < FY_yz, divide into the first-level distraction level;
when FY_cz ≥ FY_yz, divide into the second-level distraction level.
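The warning judgment and level division together form a small decision rule, which can be sketched as below; the return convention (None/1/2) is an illustrative assumption.

```python
def distraction_level(fy_ss, fy_yc, fy_yz):
    """Return None (no prompt), 1 (first-level) or 2 (second-level distraction)
    from the real-time limit FY_ss, predicted limit FY_yc and threshold FY_yz."""
    if fy_ss < fy_yc:
        return None           # no distraction early-warning prompt is issued
    fy_cz = fy_ss - fy_yc     # pitch difference value
    return 1 if fy_cz < fy_yz else 2
```

For example, with FY_yc = 10 and FY_yz = 3: a reading of 9 issues no prompt, 12 falls in the first level (difference 2 < 3), and 14 falls in the second level (difference 4 ≥ 3).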
Further, the assisted-driving instructions comprise a flexible intelligent driving instruction and a hard intelligent driving instruction;
the flexible and hard intelligent driving instructions are generated as follows:
when the distraction early-warning level is the first-level distraction level, generate the flexible intelligent driving instruction;
when the distraction early-warning level is the second-level distraction level, generate the hard intelligent driving instruction.
Further, the driver is assisted to drive the vehicle safely as follows:
when the flexible intelligent driving instruction is generated, the driving computer activates the on-board alarm, which flashes and buzzes continuously;
when the hard intelligent driving instruction is generated, the driving computer activates the on-board alarm, which flashes and buzzes continuously, and at the same time the driving computer controls the vehicle engine to reduce its running power.
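The instruction-to-action mapping above can be sketched as a small dispatch table; the action strings are hypothetical stand-ins for the driving computer's actual control commands.

```python
def assist_actions(level):
    """Map the distraction level to assisted-driving actions: level 1 maps to
    the flexible instruction (alarm only), level 2 to the hard instruction
    (alarm plus engine power reduction)."""
    if level == 1:    # flexible intelligent driving instruction
        return ["activate_alarm"]
    if level == 2:    # hard intelligent driving instruction
        return ["activate_alarm", "reduce_engine_power"]
    return []         # no distraction early-warning prompt
```

The hard instruction is a strict superset of the flexible one, so the escalation from level 1 to level 2 only adds the engine-power action.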
The driver distraction early-warning system overcoming the problems of myopia glasses and individual eye differences has the following technical effects and advantages:
The invention collects the driver's comprehensive gaze data and gaze pitch upper limit values, trains a machine learning model for predicting the gaze pitch upper limit value, collects real-time comprehensive gaze data, predicts the gaze pitch upper limit value with the trained model, judges whether to issue a distraction early-warning prompt, generates a pitch difference value from the real-time and predicted gaze pitch upper limit values, divides distraction early-warning levels, generates assisted-driving instructions based on the level, and assists the driver in driving the vehicle safely. Compared with the prior art, combining comprehensive gaze data with a machine learning model allows the real-time gaze pitch upper limit value to be predicted accurately and distracted driving to be judged reliably; this effectively avoids misjudging distraction caused by lens refraction when the driver wears myopia glasses and avoids errors caused by the differing vertical gaze preferences of individual drivers, so the system identifies distracted driving accurately and in time and formulates targeted intelligent driving-assistance measures when it occurs, avoiding traffic accidents and improving vehicle driving safety.
Drawings
Fig. 1 is a schematic diagram of the driver distraction early-warning system overcoming the problems of myopia glasses and individual eye differences according to embodiment 1 of the present invention;
Fig. 2 is a flow chart of the driver distraction early-warning method overcoming the problems of myopia glasses and individual eye differences according to embodiment 2 of the present invention.
Detailed Description
The embodiments of the present invention will be described below clearly and completely with reference to the accompanying drawings. The embodiments described are evidently only some, not all, embodiments of the invention; all other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Example 1: referring to fig. 1, the driver distraction early-warning system of this embodiment, which overcomes the problems of myopia glasses and individual eye differences, is applied to a driver monitoring system and comprises:
a data acquisition module for acquiring distracted-driving data of a driver, the distracted-driving data comprising comprehensive gaze data and a gaze pitch upper limit value;
The comprehensive gaze data is the diverse data, generated while the driver drives the vehicle, that can change the state of the driving gaze and thereby affect the distraction early-warning judgment; collecting it in a diversified manner provides the data base for the subsequent distraction early-warning judgment;
The comprehensive gaze data comprises a head posture offset value, a pupil center fluctuation value, an illumination intensity concentration degree and a lens refraction compensation value;
The head posture offset value is the deviation between the real-time position of the head center point and the standard center point position during driving; the larger the head posture offset value, the larger the difference between the driver's head posture and the standard posture, and the larger the variation of the gaze pitch upper limit value;
The head posture offset value is acquired as follows:
taking a preset acquisition duration as the unit and the current moment as the starting point, step back i preset acquisition durations and mark i data acquisition moments; the preset acquisition duration represents the time span between two adjacent data acquisition moments, ensuring that the span between adjacent moments is consistent and avoiding inconsistency between earlier and later data; it is obtained by collecting a large number of historical time spans between adjacent acquisition moments and selecting the maximum;
at the i data acquisition moments, photograph the driver's head posture with a camera installed in the cab to obtain i head images;
identify, by computer vision, the inner corner point of the driver's left eye and the inner corner point of the driver's right eye in each head image, and mark them as the first point and the second point respectively;
connect the i first points with the i second points to form i horizontal dividing lines, and mark the midpoints of the i horizontal dividing lines as head center points, obtaining i head center points; the marked head center point serves as the reference point for head posture comparison, reducing the large head posture area to a single point, which facilitates data acquisition and analysis and lightens the data processing burden;
draw a horizontal standard line and a vertical standard line through the midpoints of the four boundaries of each head image, and mark their intersection as the standard center point;
measure in turn the distances from the i head center points to the standard center point to obtain i sub-offset values;
remove the maximum and minimum sub-offset values, accumulate the remaining i−2 sub-offset values and take their average to obtain the head posture offset value;
The head posture offset value is expressed as:
PY_zt = (Σ_{a=1}^{i−2} PY_za) / (i − 2);
where PY_zt is the head posture offset value and PY_za is the a-th of the remaining sub-offset values;
The pupil center fluctuation value is the distance in the image between the actual position and the conventional position of the driver's pupil; the larger the fluctuation value, the farther the pupil is from its conventional position and the larger the variation of the gaze pitch upper limit value;
the pupil center fluctuation value is acquired as follows:
at the i data acquisition moments, photograph the driver's eyes with a camera installed in the cab to obtain i eye images;
in each of the i eye images, identify by computer vision the lower edge of the upper eyelid and the upper edge of the lower eyelid, and mark the area between them as the target area;
mark the pixel values of all pixels in the target area, and mark the pixels whose pixel value exceeds a preset pixel threshold as target pixels; the target pixels correspond to the pupil region of the driver's eye: because the pupil is relatively dark, the pixel values of the pupil region differ from those of the white of the eye, so the target pixels accurately represent the pupil's position;
randomly select a target pixel as the center and draw p detection circles with a preset length as the radius; the preset length limits the radius of the detection circle so that it can enclose a sufficient number of target pixels, and is obtained by measuring the maximum span of the region where the target pixels lie and optimizing it with a coefficient;
count the total number of pixels and the number of target pixels within each of the p detection circles, and divide the number of target pixels by the total number of pixels in each circle to obtain p target pixel ratios;
The target pixel ratio is expressed as:
ZB_xsp = SL_mbp / SL_zlp;
where ZB_xsp is the p-th target pixel ratio, SL_mbp is the number of target pixels in the p-th detection circle, and SL_zlp is the total number of pixels in the p-th detection circle;
from the m target pixel ratios larger than a preset ratio threshold, select the maximum, and mark the detection circle corresponding to that maximum as the target circle; the preset ratio threshold limits the minimum target pixel ratio of the target circle, ensuring the target circle contains a sufficient number of target pixels;
in each of the i eye images, mark the target pixel at the center of the target circle as the pupil center point, obtaining i pupil center points;
mark n measuring points each on the lower edge of the upper eyelid and the upper edge of the lower eyelid, and measure the distance between each pair of measuring points lying on the same vertical line to obtain n distance values;
connect the two measuring points corresponding to the maximum distance value to obtain a distance line, and mark its center as the conventional center point;
measure the distances from the i pupil center points to the conventional center point to obtain i sub-fluctuation values, accumulate the i sub-fluctuation values and take their average to obtain the pupil center fluctuation value;
The pupil center fluctuation value is expressed as:
BD_zt = (Σ_{b=1}^{i} BD_zb) / i;
where BD_zt is the pupil center fluctuation value and BD_zb is the b-th sub-fluctuation value;
The illumination intensity concentration degree is the concentration of illumination intensity inside the cab per unit time while the vehicle is driven; the higher the concentration degree, the denser the light in the cab, the greater the influence of illumination changes on the gaze pitch upper limit, and the larger its variation;
the illumination intensity concentration degree is acquired as follows:
photograph a body-position image of the driver during driving with the camera;
identify by computer vision the region of the driver's head in the body-position image and mark it as the head region;
shade the head region, divide it, using a preset length value as the side length, into s rectangular cells of equal area, and measure the shaded area value within each of the s cells; shading clearly distinguishes the head region from the rest of the body, highlighting its importance and facilitating subsequent data acquisition, calculation and analysis; the preset length value limits the side length of the smallest rectangular cell so that the cell areas are convenient to measure and the number of cells is reasonable, neither too many nor too few;
mark the rectangular cells whose shaded area value exceeds a preset coloring lower limit as effective rectangular cells, obtaining w effective rectangular cells, and calculate the area of each effective rectangular cell by the rectangle area formula; the preset coloring lower limit bounds the minimum shaded area value of an effective cell and effectively screens effective cells from ineffective ones; it is obtained by collecting many minimum shaded-area values of effective cells and optimizing them with a coefficient;
at the i data acquisition moments, acquire illumination intensity data of the head region with an illumination intensity sensor to obtain i illumination intensity values;
accumulate the i illumination intensity values, take their average, and divide it by the accumulated areas of the w effective rectangular cells to obtain the illumination intensity concentration degree;
The expression of the illumination intensity concentration degree is:
GZjj=(ΣGZzc/i)/ΣMJgd, with the sums taken over c=1,…,i and d=1,…,w;
wherein GZ jj is the illumination intensity concentration degree, GZ zc is the c-th illumination intensity value, and MJ gd is the area of the d-th effective rectangular grid;
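The grid screening and concentration computation above can be sketched as follows; this is a minimal illustrative Python sketch using the patent's notation (GZjj, GZzc, MJgd), with all function and parameter names assumed rather than taken from the source.

```python
# Illustrative sketch of the illumination intensity concentration degree
# (GZjj) computation described above. Names and thresholds are assumed,
# not taken from the patent.

def illumination_concentration(intensity_values, cell_areas, coloring_areas,
                               coloring_lower_limit):
    """GZjj = (mean of the i illumination intensity values GZzc)
             / (sum of the areas MJgd of the w effective grids)."""
    # Screen effective rectangular grids: shadow coloring area above the
    # preset coloring lower limit value.
    effective_areas = [area for area, colored in zip(cell_areas, coloring_areas)
                       if colored > coloring_lower_limit]
    mean_intensity = sum(intensity_values) / len(intensity_values)
    return mean_intensity / sum(effective_areas)
```

For example, with three intensity samples averaging 200 and two effective grids of area 4 each, the concentration is 200 / 8 = 25.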
The lens refraction compensation value refers to the magnitude of the downward deviation of the line of sight caused by the refractive index of the lens when the driver wears the near vision mirror; the larger the lens refraction compensation value, the larger this deviation and the larger the resulting change in the line-of-sight pitch upper limit value;
the method for acquiring the lens refraction compensation value comprises the following steps:
taking the position of the camera as a starting point and the position of the myopia lens as an ending point, and measuring the distance from the starting point to the ending point by a range finder to obtain a first distance value;
measuring the eye movement angles from the starting point to the end point by an eye movement tracking algorithm in the data acquisition module at i data acquisition moments to obtain i inclination angles;
based on the first distance value and the i inclination angles, calculating the distance from the camera to the myopia lens, and obtaining i second distance values;
The expression of the second distance value is:
JLd2i=JLd1*cos(θi);
Wherein JL d2i is the i-th second distance value, JL d1 is the first distance value, and θ i is the i-th inclination angle;
The refractive index of the myopia lens is input in advance, and when the eye sight of a driver and the myopia lens are in a horizontal state, the refraction angle of the lens is calculated through a refraction angle calculation formula;
Combining the refraction angle of the lens and i second distance values, calculating the downward distance of the sight line, and obtaining i sub-compensation values;
The expression of the sub-compensation value is:
BCzi=JLd2i*tan(90°-θ2);
Wherein BC zi is the ith sub-compensation value, and θ 2 is the lens refraction angle;
Accumulating the i sub-compensation values and then averaging to obtain a lens refraction compensation value;
The expression of the lens refraction compensation value is:
BCzs=(ΣBCze)/i, with the sum taken over e=1,…,i;
Wherein BC zs is the lens refraction compensation value, and BC ze is the e-th sub-compensation value;
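Under the same relations as the formulas above (JLd2i = JLd1·cos(θi) and BCzi = JLd2i·tan(90° − θ2)), the compensation value can be sketched as below; this is an illustrative Python sketch, not the patented implementation, and all names are assumptions.

```python
import math

# Illustrative sketch of the lens refraction compensation value (BCzs):
# the mean of the sub-compensation values BCzi computed from the first
# distance value JLd1, the i inclination angles θi, and the lens
# refraction angle θ2.

def lens_refraction_compensation(first_distance, tilt_angles_deg, theta2_deg):
    sub_values = []
    for theta_i in tilt_angles_deg:
        second_distance = first_distance * math.cos(math.radians(theta_i))  # JLd2i
        sub_values.append(second_distance * math.tan(math.radians(90.0 - theta2_deg)))  # BCzi
    return sum(sub_values) / len(sub_values)  # BCzs
```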
The line-of-sight pitch upper limit value refers to the distance from the upper edge of the iris to the lower edge of the upper eyelid in the driver's eye; the smaller this value, the larger the range of sight the driver can see upward and the lower the probability that the driver is distracted. The line-of-sight pitch upper limit value is obtained by tracking and identifying the position of the upper edge of the iris with the eye-tracking algorithm in the acquisition module and then measuring the distance from the upper edge of the iris to the lower edge of the upper eyelid with a scale;
The model training module is used for training a machine learning model for predicting the upper limit value of the line of sight pitching based on the distraction driving data;
The training method of the machine learning model for predicting the upper limit value of the sight pitching comprises the following steps:
converting the comprehensive sight line data into a corresponding set of feature vectors;
Taking the feature vector as the input of the machine learning model, taking the line-of-sight pitch upper limit value corresponding to each group of comprehensive line-of-sight data as the output of the machine learning model, taking the line-of-sight pitch upper limit value as the prediction target, and taking minimization of the sum of the prediction errors over all training data as the training target, the machine learning model is trained until the sum of the prediction errors converges;
Illustratively, the machine learning model is a convolutional neural network (CNN), for example AlexNet;
The calculation formula of the prediction error is:
zk=(ak-wk)^2;
Wherein zk is the prediction error, k is the group number of the feature vector, ak is the predicted state value corresponding to the k-th group of feature vectors, and wk is the actual state value corresponding to the k-th group of training data;
in the machine learning model, the feature vector is comprehensive sight line data, and the state value is a sight line pitching upper limit value;
The remaining parameters of the machine learning model, such as the target loss value, the optimization algorithm, the training/validation/test split ratio and the choice of loss function, are determined in actual engineering practice through continuous experimental optimization;
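The training target above, minimizing the sum of squared prediction errors zk, can be illustrated with a deliberately simplified stand-in model; a one-feature linear model replaces the CNN here purely for brevity, and all names are assumptions.

```python
# Sketch of the training target: minimize the sum of prediction errors
# zk = (ak - wk)^2 over all training samples. A one-feature linear model
# stands in for the CNN mentioned in the text.

def squared_error_sum(predicted, actual):
    return sum((a - w) ** 2 for a, w in zip(predicted, actual))

def train(features, targets, lr=0.01, epochs=2000):
    weight, bias = 0.0, 0.0
    for _ in range(epochs):
        for x, target in zip(features, targets):
            prediction = weight * x + bias        # ak
            grad = 2.0 * (prediction - target)    # d zk / d ak
            weight -= lr * grad * x               # gradient step on the weight
            bias -= lr * grad                     # gradient step on the bias
    return weight, bias
```

Training stops in practice when the error sum converges, as the text describes; the fixed epoch count here is a simplification.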
The real-time early warning module is used for collecting real-time comprehensive sight data, inputting the real-time comprehensive sight data into a machine learning model after training to predict the upper limit value of the sight pitch, and judging whether to send out a distraction early warning prompt;
Real-time comprehensive line-of-sight data are collected and input into the trained machine learning model to obtain the predicted line-of-sight pitch upper limit value; the measured real-time line-of-sight pitch upper limit value is then compared with this predicted value to judge whether the driver's line-of-sight pitch upper limit has changed, and this comparison serves as the basis both for judging whether the driver exhibits distracted driving and for issuing an early warning prompt;
The judging method for judging whether to send out the distraction early warning prompt comprises the following steps:
collecting a real-time sight pitching upper limit value FY ss;
Comparing the real-time gaze pitch upper limit FY ss with the predicted gaze pitch upper limit FY yc;
When FY ss is larger than or equal to FY yc, the real-time line-of-sight pitch upper limit value has reached or exceeded the predicted value, indicating that the driver is in a distracted driving state, and it is judged that a distraction early warning prompt should be issued;
When FY ss is smaller than FY yc, the real-time line-of-sight pitch upper limit value is below the predicted value, indicating that the driver is not in a distracted driving state, and it is judged that no distraction early warning prompt should be issued;
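The comparison rule above reduces to a single threshold test; a minimal illustrative sketch, with names assumed:

```python
# Sketch of the warning decision: a distraction early warning prompt is
# issued exactly when the real-time line-of-sight pitch upper limit
# (FYss) reaches or exceeds the predicted one (FYyc).

def should_warn(fy_ss, fy_yc):
    return fy_ss >= fy_yc
```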
the level dividing module is used for generating a pitching difference value based on the real-time sight pitching upper limit value and the predicted sight pitching upper limit value and dividing a distraction early warning level;
When a distraction early warning prompt is issued, the real-time line-of-sight pitch upper limit value is larger than or equal to the predicted line-of-sight pitch upper limit value, and the driver may be driving distractedly;
The distraction early warning level comprises a first-level distraction level and a second-level distraction level; the severity of the distraction driving corresponding to the first-level distraction level is low, and the severity of the distraction driving corresponding to the second-level distraction level is high;
The method for dividing the first-level distraction level and the second-level distraction level comprises:
The pitch difference value is obtained by subtracting the predicted sight pitch upper limit value FY yc from the real-time sight pitch upper limit value FY ss;
the expression of the pitch difference is:
FYcz=FYss-FYyc
Wherein FY cz is a pitch difference value;
Comparing the pitch difference value FY cz with a preset pitch difference threshold value FY yz;
When FY cz is smaller than FY yz, the pitch difference value is smaller than the preset pitch difference threshold value; the severity of distracted driving is low, and it is classified into the first-level distraction level;
When FY cz is greater than or equal to FY yz, the pitch difference value is greater than or equal to the preset pitch difference threshold value; the severity of distracted driving is high, and it is classified into the second-level distraction level;
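The level division can be sketched together with the warning condition; an illustrative Python sketch under the definitions above (FYcz = FYss − FYyc, threshold FYyz), with names assumed:

```python
# Sketch of the distraction level division: None when no warning is
# issued, 1 for the first-level distraction level (FYcz < FYyz),
# 2 for the second-level distraction level (FYcz >= FYyz).

def distraction_level(fy_ss, fy_yc, fy_yz):
    if fy_ss < fy_yc:
        return None                  # no distraction warning issued
    fy_cz = fy_ss - fy_yc            # pitch difference value FYcz
    return 1 if fy_cz < fy_yz else 2
```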
The intelligent driving assisting module is used for generating an assisting driving instruction based on the distraction early warning level and assisting a driver to safely drive the vehicle based on the assisting driving instruction;
The auxiliary driving instruction is an instruction formulated under each distraction early warning level to help the driver drive the vehicle safely;
The auxiliary driving instructions comprise a flexible intelligent driving instruction and a hard intelligent driving instruction. The flexible intelligent driving instruction provides driving assistance in a soft, non-compulsory manner, helping a driver with a low degree of distraction return quickly to a safe driving state; the hard intelligent driving instruction provides driving assistance in a hard, compulsory manner, helping a driver with a high degree of distraction take correct measures in time, avoiding safety accidents and improving driving safety;
the method for generating the flexible intelligent driving instruction and the hard intelligent driving instruction comprises the following steps:
When the distraction early warning level is a first-level distraction level, flexible auxiliary driving assistance is needed for the driver, and a flexible intelligent driving instruction is generated;
When the distraction early warning level is a secondary distraction level, hard auxiliary driving assistance is needed for the driver, and a hard intelligent driving instruction is generated;
After the flexible intelligent driving instruction or the hard intelligent driving instruction is generated, it must be applied concretely while the driver drives the vehicle, so as to assist the driver in driving safely and avoid traffic accidents caused by distracted driving;
the method for assisting the driver to safely drive the vehicle comprises the following steps:
When a flexible intelligent driving instruction is generated, the driver monitoring system sends flexible intelligent auxiliary information to the driving computer, at the moment, the driving computer controls the vehicle-mounted alarm to start, the vehicle-mounted alarm continuously blinks and buzzes to give an alarm, and the driver is reminded of safely driving the vehicle;
After the hard intelligent driving instruction is generated, the driver monitoring system sends hard intelligent auxiliary information to the driving computer, at the moment, the driving computer controls the vehicle-mounted alarm to start, the vehicle-mounted alarm continuously blinks and buzzes to give an alarm, and meanwhile, the driving computer controls the vehicle engine to reduce the running power, reduce the running speed of the vehicle and assist the driver to safely drive the vehicle.
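The mapping from early warning level to assistance actions described above could be dispatched as follows; the action identifiers are invented for illustration and do not come from the patent.

```python
# Sketch of the assistance dispatch: a flexible instruction triggers the
# on-board alarm only; a hard instruction additionally reduces engine
# power. Action identifiers are hypothetical.

def assist_actions(level):
    if level == 1:   # flexible intelligent driving instruction
        return ["alarm_flash_and_buzz"]
    if level == 2:   # hard intelligent driving instruction
        return ["alarm_flash_and_buzz", "reduce_engine_power"]
    return []
```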
In this embodiment, a machine learning model for predicting the line-of-sight pitch upper limit value is trained by collecting the driver's comprehensive line-of-sight data and line-of-sight pitch upper limit values. Real-time comprehensive line-of-sight data are collected and input into the trained model to predict the line-of-sight pitch upper limit value and judge whether to issue a distraction early warning prompt; a pitch difference value is generated from the real-time and predicted line-of-sight pitch upper limit values to divide the distraction early warning levels; and an auxiliary driving instruction is generated based on the early warning level to assist the driver in driving safely. Compared with the prior art, collecting comprehensive line-of-sight data and combining it with a machine learning model allows the real-time line-of-sight pitch upper limit value to be predicted accurately and distracted driving to be judged reliably. This effectively avoids misjudgments caused by lens refraction when the driver wears a near vision mirror, as well as errors caused by different drivers' vertical gaze preferences, so that distracted driving is identified accurately and in time and targeted intelligent driving assistance measures are formulated, avoiding traffic accidents and improving driving safety.
Example 2: referring to fig. 2, for parts not described in detail in this embodiment, refer to embodiment 1. This embodiment provides a driver distraction early warning method for overcoming the problems of the near vision mirror and eye difference, applied to a driver monitoring system and implemented based on the driver distraction early warning system described above, comprising the following steps:
S1: collecting distraction driving data of a driver, wherein the distraction driving data comprises comprehensive sight line data and a sight line pitching upper limit value;
S2: training a machine learning model for predicting the upper limit value of line of sight pitch based on the distraction driving data;
S3: collecting real-time comprehensive sight data, inputting the real-time comprehensive sight data into the trained machine learning model to predict the sight pitching upper limit value, and judging whether to send out a distraction early warning prompt; if the distraction early warning prompt is sent out, executing S4-S5; if the distraction early warning prompt is not sent out, repeating step S3;
S4: generating a pitching difference value based on the real-time line-of-sight pitching upper limit value and the predicted line-of-sight pitching upper limit value, and dividing the distraction early warning level;
S5: based on the distraction early warning level, generating an auxiliary driving instruction, and based on the auxiliary driving instruction, assisting the driver to safely drive the vehicle.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention.

Claims (10)

1. A driver distraction warning system for overcoming the problem of near vision mirror and eye difference, applied to a driver monitoring system, comprising:
the data acquisition module is used for acquiring distraction driving data of a driver, wherein the distraction driving data comprises comprehensive sight line data and sight line pitching upper limit value;
The model training module is used for training a machine learning model for predicting the upper limit value of the line of sight pitching based on the distraction driving data;
The real-time early warning module is used for collecting real-time comprehensive sight data, inputting the real-time comprehensive sight data into a machine learning model after training to predict the upper limit value of the sight pitch, and judging whether to send out a distraction early warning prompt;
the level dividing module is used for generating a pitching difference value based on the real-time sight pitching upper limit value and the predicted sight pitching upper limit value and dividing a distraction early warning level;
And the intelligent driving assisting module is used for generating an assisting driving instruction based on the distraction early warning level and assisting a driver to safely drive the vehicle based on the assisting driving instruction.
2. The driver distraction warning system for overcoming the problems of near vision mirror and eye-use difference according to claim 1, wherein the integrated gaze data comprises a head pose offset value, a pupil center fluctuation value, an illumination intensity concentration and a lens refraction compensation value;
The method for acquiring the head posture offset value comprises the following steps:
taking a preset collection time length as a standard, and taking the current moment as a starting point to push i preset collection time lengths forwards, and marking i data collection moments;
shooting head posture images of a driver through a camera installed in a cab at i data acquisition moments to obtain i head images;
identifying the point position of the inner side of the left eye angle and the point position of the inner side of the right eye angle of a driver in the head image through a computer vision technology, and marking the point positions as a first point position and a second point position respectively;
After connecting the i first point positions and the i second point positions, forming i horizontal dividing lines, and marking the midpoints of the i horizontal dividing lines as head center points to obtain i head center points;
Drawing a horizontal standard line and a vertical standard line at the middle points of four boundaries of the overhead image respectively, and marking the intersection point of the horizontal standard line and the vertical standard line as a standard center point;
sequentially measuring the distances from the i head center points to the standard center points through a scale to obtain i sub-offset values;
removing the maximum value and the minimum value of the sub-offset values, accumulating the rest i-2 sub-offset values, and then averaging to obtain a head attitude offset value;
the expression of the head pose offset value is:
PYzt=(ΣPYza)/(i-2), with the sum taken over a=1,…,i-2;
Where PY zt is the head attitude offset value, and PY za is the a-th sub offset value.
3. The driver distraction warning system for overcoming the problems of the near vision mirror and the eye difference according to claim 2, wherein the method for obtaining the pupil center fluctuation comprises the following steps:
Shooting eye images of a driver through a camera arranged in a cab at i data acquisition moments to obtain i eye shooting images;
identifying the lower edge of the upper eyelid and the upper edge of the lower eyelid of one of the i eye-shot images by a computer vision technology, and marking the area between the lower edge of the upper eyelid and the upper edge of the lower eyelid as a target area;
Marking out pixel values of all pixel points in a target area, and marking the pixel points with the pixel values larger than a preset pixel threshold value as target pixel points;
Randomly selecting a target pixel point as a circle center, and drawing p detection circles by taking a preset length as a radius;
Respectively counting the total amount of the pixel points in the p detection circles and the number of the target pixel points in the p detection circles, and comparing the number of the target pixel points in the p detection circles with the total amount of the pixel points in the p detection circles to obtain p target pixel point occupation ratios;
The expression of the target pixel point occupation ratio is:
ZBxsp=SLmbp/SLzlp;
Wherein ZB xsp is the p-th target pixel point ratio, SL mbp is the number of target pixel points in the p-th detection circle, and SL zlp is the total number of pixel points in the p-th detection circle;
Screening the maximum value from m target pixel point occupation ratios larger than a preset duty ratio threshold, and marking a detection circle corresponding to the maximum value of the target pixel point occupation ratio as a target circle;
in the i eye shooting images, marking the position of a target pixel point corresponding to the center position of a circle in a target circle as a pupil center point, and obtaining i pupil center points;
respectively marking n measuring points on the lower edge of the upper eyelid and the upper edge of the lower eyelid, measuring the distance between the two measuring points which are positioned on the same vertical line and on the lower edge of the upper eyelid and the upper edge of the lower eyelid through a scale, and obtaining n distance values;
connecting two measuring points corresponding to the maximum value of the distance values to obtain a distance line, and marking the center of the distance line as a conventional center point;
Respectively measuring the distances from the i pupil center points to the conventional center points to obtain i sub-fluctuation values, accumulating the i sub-fluctuation values, and then averaging to obtain the pupil center fluctuation value;
The expression of the pupil center fluctuation value is:
BDzt=(ΣBDzb)/i, with the sum taken over b=1,…,i;
Where BD zt is the pupil center fluctuation value, and BD zb is the b-th sub-fluctuation value.
4. A driver distraction warning system for overcoming the problems of near vision mirror and eye-use difference according to claim 3, wherein the method for obtaining the concentration of the illumination intensity comprises:
Shooting a body position image of a driver in the driving process through a camera;
identifying the region of the head of the driver in the body position image through a computer vision technology, and marking the region as a head region;
The head area is subjected to shadow coloring, a preset length value is used as a side length, the head area is divided into s rectangular grids with equal areas, and the shadow coloring area values in the s rectangular grids are measured;
Marking the rectangular cells with the shadow coloring area value larger than the preset coloring lower limit value as effective rectangular cells, obtaining w effective rectangular cells, and calculating the area of the effective rectangular cells according to a rectangular area calculation formula;
Acquiring illumination intensity data of a head area at i data acquisition moments by an illumination intensity sensor to obtain i illumination intensity values;
Accumulating the i illumination intensity values, averaging, and comparing with the accumulated values of the areas of w effective rectangular grids to obtain illumination intensity concentration degree;
The expression of the illumination intensity concentration degree is:
GZjj=(ΣGZzc/i)/ΣMJgd, with the sums taken over c=1,…,i and d=1,…,w;
Wherein GZ jj is the illumination intensity concentration degree, GZ zc is the c-th illumination intensity value, and MJ gd is the area of the d-th effective rectangular grid.
5. The driver distraction warning system for overcoming the problems of the near vision mirror and the eye difference according to claim 4, wherein the method for obtaining the lens refraction compensation value comprises:
taking the position of the camera as a starting point and the position of the myopia lens as an ending point, and measuring the distance from the starting point to the ending point by a range finder to obtain a first distance value;
measuring the eye movement angles from the starting point to the end point by an eye movement tracking algorithm in the data acquisition module at i data acquisition moments to obtain i inclination angles;
based on the first distance value and the i inclination angles, calculating the distance from the camera to the myopia lens, and obtaining i second distance values;
The expression of the second distance value is:
JLd2i=JLd1*cos(θi);
Wherein JL d2i is the i second distance value, JL d1 is the first distance value, and θ i is the i inclination angle;
The refractive index of the myopia lens is input in advance, and when the eye sight of a driver and the myopia lens are in a horizontal state, the refraction angle of the lens is calculated through a refraction angle calculation formula;
Combining the refraction angle of the lens and i second distance values, calculating the downward distance of the sight line, and obtaining i sub-compensation values;
The expression of the sub-compensation value is:
BCzi=JLd2i*tan(90°-θ2);
Wherein BC zi is the ith sub-compensation value, and θ 2 is the lens refraction angle;
Accumulating the i sub-compensation values and then averaging to obtain a lens refraction compensation value;
The expression of the lens refraction compensation value is:
BCzs=(ΣBCze)/i, with the sum taken over e=1,…,i;
Where BC zs is the lens refraction compensation value and BC ze is the e-th sub-compensation value.
6. The driver distraction warning system for overcoming the problem of the near vision mirror and eye-using difference according to claim 5, wherein the training method of the machine learning model for predicting the upper limit value of the line of sight pitch comprises:
And converting the comprehensive sight data into a corresponding group of feature vectors, taking the feature vectors as the input of a machine learning model, taking the sight pitch upper limit value corresponding to each group of comprehensive sight data as the output of the machine learning model, taking the sight pitch upper limit value as a prediction target, taking the sum of prediction errors of all training data as a training target, and training the machine learning model until the sum of the prediction errors reaches convergence.
7. The driver distraction warning system for overcoming the problems of the near vision mirror and the eye-using difference according to claim 6, wherein the judging method for judging whether to issue the distraction warning prompt comprises:
collecting a real-time sight pitching upper limit value FY ss;
Comparing the real-time gaze pitch upper limit FY ss with the predicted gaze pitch upper limit FY yc;
When FY ss is greater than or equal to FY yc, judging to send out a distraction early warning prompt;
When FY ss is smaller than FY yc, the judgment is that the distraction early warning prompt is not sent.
8. The driver distraction warning system for overcoming the problems of near vision mirror and eye-use difference of claim 7, wherein the distraction warning levels comprise a primary distraction level and a secondary distraction level;
The method for dividing the first-level distraction level and the second-level distraction level comprises:
The pitch difference value is obtained by subtracting the predicted sight pitch upper limit value FY yc from the real-time sight pitch upper limit value FY ss;
the expression of the pitch difference is:
FYcz=FYss-FYyc
Wherein FY cz is a pitch difference value;
Comparing the pitch difference value FY cz with a preset pitch difference threshold value FY yz;
When FY cz is less than FY yz, it is classified into the first-level distraction level;
When FY cz is equal to or greater than FY yz, it is classified into the second-level distraction level.
9. The driver distraction warning system for overcoming the problems of the near vision mirror and the eye difference of claim 8, wherein the auxiliary driving instructions comprise a flexible intelligent driving instruction and a hard intelligent driving instruction;
the method for generating the flexible intelligent driving instruction and the hard intelligent driving instruction comprises the following steps:
when the distraction early warning level is a first-level distraction level, generating a flexible intelligent driving instruction;
and when the distraction early warning level is a secondary distraction level, generating a hard intelligent driving instruction.
10. A driver distraction warning system for overcoming the problem of near vision mirror and eye difference as claimed in claim 9, wherein said method for assisting the driver in driving the vehicle safely comprises:
When a flexible intelligent driving instruction is generated, the driving computer controls the vehicle-mounted alarm to start, and the vehicle-mounted alarm continuously blinks and buzzes to give an alarm;
When the hard intelligent driving instruction is generated, the driving computer controls the vehicle-mounted alarm to start, the vehicle-mounted alarm continuously blinks and buzzes to give an alarm, and meanwhile, the driving computer controls the vehicle engine to reduce the running power.
CN202410551460.7A 2024-05-07 2024-05-07 Driver distraction early warning system for overcoming problems of near vision mirror and eye difference Pending CN118220166A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410551460.7A CN118220166A (en) 2024-05-07 2024-05-07 Driver distraction early warning system for overcoming problems of near vision mirror and eye difference

Publications (1)

Publication Number Publication Date
CN118220166A true CN118220166A (en) 2024-06-21

Family

ID=91510241


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014100352A1 (en) * 2013-01-18 2014-07-24 Carnegie Mellon University Method for detecting condition of viewing direction of rider of vehicle, involves estimating driver's line of sight on basis of detected location for each of eye characteristic of eyeball of rider and estimated position of head
CN113569785A (en) * 2021-08-04 2021-10-29 上海汽车集团股份有限公司 Driving state sensing method and device
CN113743471A (en) * 2021-08-05 2021-12-03 暨南大学 Driving evaluation method and system
US20220051038A1 (en) * 2020-08-17 2022-02-17 Verizon Connect Ireland Limited Systems and methods for identifying distracted driver behavior from video
CN114387587A (en) * 2022-01-14 2022-04-22 东北大学 Fatigue driving monitoring method
CN115690750A (en) * 2022-10-21 2023-02-03 浙江大学 Driver distraction detection method and device
CN115923812A (en) * 2023-02-09 2023-04-07 招商局检测车辆技术研究院有限公司 Driver distraction identification method and device and storage medium
CN116630945A (en) * 2023-06-16 2023-08-22 岚图汽车科技有限公司 Driving distraction reminding method, device, equipment and readable storage medium


Similar Documents

Publication Publication Date Title
US20230154207A1 (en) Driver fatigue detection method and system based on combining a pseudo-3d convolutional neural network and an attention mechanism
CN103714660B (en) System for achieving fatigue driving judgment on basis of image processing and fusion between heart rate characteristic and expression characteristic
US8165408B2 (en) Image recognition apparatus utilizing plurality of weak classifiers for evaluating successive sub-images extracted from an input image
CN106250801A (en) Based on Face datection and the fatigue detection method of human eye state identification
CN104200192A (en) Driver gaze detection system
CN113743471B (en) Driving evaluation method and system
CN105286802B (en) Driver Fatigue Detection based on video information
CN103714659B (en) Fatigue driving identification system based on double-spectrum fusion
CN104246850B (en) Crawl decision maker
BRPI0712837A2 (en) Method and apparatus for determining and analyzing a location of visual interest.
US7650034B2 (en) Method of locating a human eye in a video image
CN107886043A (en) The vehicle front-viewing vehicle and pedestrian anti-collision early warning system and method for visually-perceptible
CN105404862A (en) Hand tracking based safe driving detection method
CN107563346A (en) One kind realizes that driver fatigue sentences method for distinguishing based on eye image processing
CN113989788A (en) Fatigue detection method based on deep learning and multi-index fusion
CN111415524A (en) Intelligent processing method and system for fatigue driving
KR20200092739A (en) Driver status monitor method and apparatus
CN103942527A (en) Method for determining eye-off-the-road condition by using road classifier
CN118220166A (en) Driver distraction early warning system for overcoming problems of near vision mirror and eye difference
Jimenez et al. Detection of the tiredness level of drivers using machine vision techniques
CN116681722A (en) Traffic accident detection method based on isolated forest algorithm and target tracking
JP2018537787A (en) Method and apparatus for classifying at least one eye opening data of a vehicle occupant and method and apparatus for detecting drowsiness and / or microsleep of a vehicle occupant
CN113361452B (en) Driver fatigue driving real-time detection method and system based on deep learning
CN114399752A (en) Eye movement multi-feature fusion fatigue detection system and method based on micro eye jump characteristics
CN114267169A (en) Fatigue driving prevention speed limit control method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination