CN116985792A - Dangerous sensing method and device for vehicle - Google Patents

Dangerous sensing method and device for vehicle

Info

Publication number
CN116985792A
CN116985792A (application CN202310973566.1A)
Authority
CN
China
Prior art keywords
degree
time
driver
vehicle
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310973566.1A
Other languages
Chinese (zh)
Inventor
贺刚
陈星�
陈霖
朱宏海
张警吁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202310973566.1A priority Critical patent/CN116985792A/en
Publication of CN116985792A publication Critical patent/CN116985792A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0953Predicting travel path or likelihood of collision the prediction being responsive to vehicle dynamic parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0956Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0015Planning or execution of driving tasks specially adapted for safety
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/005Handover processes
    • B60W60/0051Handover processes from occupants to vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/005Handover processes
    • B60W60/0059Estimation of the risk associated with autonomous or manual driving, e.g. situation too complex, sensor failure or driver incapacity
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W2040/0818Inactivity or incapacity of driver
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/221Physiology, e.g. weight, heartbeat, health or special needs
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/26Incapacity

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a hazard perception method and device for a vehicle. The method comprises the following steps: identifying the driver's current degree of situational-awareness loss; if that degree reaches a preset level, calculating the longitudinal contact time and/or lateral contact time at which each surrounding vehicle would contact the current vehicle; determining a danger-area degree for each surrounding vehicle from one or both of the longitudinal and lateral contact times together with the total time the driver needs to complete the takeover reaction; and perceiving the collision risk between the current vehicle and the surrounding vehicles based on the danger-area degree. According to the embodiments of the application, when the driver's situational-awareness loss reaches a certain level, the longitudinal and lateral contact times between surrounding vehicles and the current vehicle are calculated and the danger degree of the surrounding vehicles is determined, so that the collision risk is perceived and the driver is reminded to perform a safe takeover, improving driving safety and the driving experience.

Description

Dangerous sensing method and device for vehicle
Technical Field
The present application relates to the technical field of automated driving, and in particular to a danger sensing method and device for a vehicle.
Background
A driver's ability to perceive, understand and predict the many factors in the traffic environment is known as situational awareness in the road-traffic field, and driver situational awareness is an important factor in driving safety. A vehicle with an automated driving mode frees the driver from some driving-related actions, but the vehicle may encounter scenes that the automation cannot recognize, at which point the driver must take over. Because the driver has been in automated mode for a long period, situational awareness is easily lost and alertness to the surrounding environment declines, making it difficult to complete the takeover at the first moment. The degree of situational-awareness loss directly determines how long the takeover takes and therefore affects safe driving.
In the related art, hazards in the vehicle's environment can be detected and recorded as hazard data, the driver's awareness can be monitored, the hazard data can be compared against the driver's gaze trail to distinguish perceived hazards from unperceived ones, and an alarm can be raised for the unperceived hazards.
However, the takeover early-warning systems in the related art follow a single design pattern: they cannot effectively perceive the collision risk between the current vehicle and surrounding vehicles, and they do not fully consider the complexity and uncertainty of the driver within the driver-vehicle-road closed-loop system of automated driving. When the driver's situational awareness is lost, it is difficult to take over the vehicle safely, the full takeover time is prolonged, driving safety is reduced, and the driver's experience suffers. These problems urgently need to be solved.
Disclosure of Invention
The application provides a hazard perception method and device for a vehicle, to solve the problems of the related art described above: the takeover early-warning system follows a single design pattern, cannot effectively perceive the collision risk between the current vehicle and surrounding vehicles, does not fully consider the complexity and uncertainty of the driver in the driver-vehicle-road closed-loop system of automated driving, makes safe takeover difficult when the driver's situational awareness is lost, prolongs the full takeover time, lowers driving safety, and degrades the driving experience.
An embodiment of the first aspect of the present application provides a hazard perception method for a vehicle, comprising the following steps: identifying the driver's current degree of situational-awareness loss based on a facial image of the driver; if the current degree of situational-awareness loss is a preset degree, calculating the longitudinal contact time and/or lateral contact time at which each surrounding vehicle would contact the current vehicle under the current vehicle's current motion-trajectory state; and determining a danger-area degree for each surrounding vehicle from one or both of the longitudinal and lateral contact times together with the total time the driver needs to complete the takeover reaction, and perceiving the collision risk between the current vehicle and the surrounding vehicles based on the danger-area degree.
By these technical means, when the driver's situational-awareness loss reaches a certain degree, the embodiment calculates the longitudinal and lateral contact times between the surrounding vehicles and the current vehicle and determines the danger degree of those vehicles, so that the collision risk is perceived and the driver is reminded to perform a safe takeover appropriate to the degree of awareness loss, improving driving safety and the driving experience.
Optionally, in one embodiment of the present application, calculating the longitudinal contact time and/or lateral contact time of each surrounding vehicle with the current vehicle under the current motion-trajectory state comprises: acquiring the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time integrated environment state; and calculating the longitudinal contact time and/or lateral contact time from that boundary information, speed difference and environment state.
By these technical means, the embodiment calculates the longitudinal and/or lateral contact time of each surrounding vehicle with the current vehicle from the lane-boundary information, the speed difference with at least one surrounding vehicle and the real-time integrated environment state, which provides the basis for determining the danger-area degree of each surrounding vehicle and further improves driving safety.
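The application does not give the contact-time formulas. Under the simplifying assumption of constant relative motion, a minimal sketch might look as follows; the function name, argument names and the closing-speed convention are illustrative, not taken from the patent:

```python
def contact_times(long_gap_m, closing_speed_mps, lat_gap_m, lat_speed_mps):
    """Estimate the longitudinal and lateral contact times (seconds) between
    the current vehicle and one surrounding vehicle.

    Assumes constant relative motion. A closing speed <= 0 means the gap on
    that axis is not shrinking, so no contact is predicted (returned as None).
    """
    t_long = long_gap_m / closing_speed_mps if closing_speed_mps > 0 else None
    t_lat = lat_gap_m / lat_speed_mps if lat_speed_mps > 0 else None
    return t_long, t_lat
```

In a real system the gaps and speed differences would come from the perception stack, and the lane-boundary information and real-time environment state would further modulate these estimates.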
Optionally, in one embodiment of the present application, identifying the driver's current degree of situational-awareness loss comprises: extracting the driver's eye-position features and facial-action features from the facial image; judging from these features whether the driver satisfies a preset surrounding-environment observation condition; when the condition is not satisfied, judging the current degree to be the awareness-lost degree; and when the condition is satisfied, judging the current degree to be the alert-awareness degree.
By these technical means, the embodiment extracts the driver's eye features and facial-action features, judges whether the driver satisfies the surrounding-environment observation condition, and classifies the driver's situational awareness as lost or alert accordingly, so that the safe takeover is performed on the basis of the degree of situational awareness, further strengthening the existing takeover-alarm mechanism.
Optionally, in one embodiment of the present application, judging whether the driver satisfies the preset surrounding-environment observation condition from the eye-position features and facial-action features comprises: calculating a plurality of positions of the midpoint of the line connecting the two eyes from the eye-position features; taking the position among them at which this midpoint dwells the longest as the reference position; calculating, from the facial-action features, the actual position of the midpoint in each frame and detecting the correspondence between the actual position and the reference position; and judging that the observation condition is satisfied when the actual position is detected to be consistent with the reference position, and not satisfied otherwise.
By these technical means, the embodiment calculates several positions of the midpoint between the eyes from the eye-position features, computes the midpoint's actual position in each frame from the facial-action features, and compares it with the reference position. Through the driver's eye-movement track, the visual focus position and dwell time can be recognized, the driver's attention to a given visual object can be judged, and the driver's attention to the surroundings can be assessed more accurately from direct attention behavior, so the degree of situational awareness during automated driving can be inferred precisely.
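The reference-position check above can be sketched as follows. The patent does not define how positions are compared, so the quantization into grid cells (and the cell size) is an assumption introduced purely to make "dwells the longest" and "consistent with" concrete:

```python
from collections import Counter


def quantize(point, cell=10):
    """Map a pixel position to a coarse grid cell so nearby positions compare equal."""
    return (round(point[0] / cell), round(point[1] / cell))


def reference_position(midpoints, cell=10):
    """The quantized midpoint of the line joining both eyes that occurs most
    often over a window of frames, i.e. the longest-dwelled position."""
    counts = Counter(quantize(p, cell) for p in midpoints)
    return counts.most_common(1)[0][0]


def observation_condition_met(actual, midpoints, cell=10):
    """True when the current frame's midpoint is consistent with the reference."""
    return quantize(actual, cell) == reference_position(midpoints, cell)
```

A production implementation would instead track gaze direction over time; this sketch only illustrates the reference-versus-actual comparison described in the text.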
Optionally, in one embodiment of the present application, before acquiring the lane-boundary information, the speed difference with at least one surrounding vehicle and the real-time integrated environment state, the method further comprises: if the current degree of situational-awareness loss is the awareness-lost degree, judging that the preset danger-alarm condition is satisfied; and if the current degree is the alert-awareness degree, acquiring the driver's continuous observation duration and judging that the preset danger-alarm condition is satisfied when that duration is shorter than a preset safety duration.
By these technical means, the embodiment judges that the danger-alarm condition is satisfied when awareness is lost, and, when the driver is alert, acquires the continuous observation duration and judges the condition satisfied when that duration falls short of the safety duration, thereby ensuring an effective takeover early-warning mechanism under different situational-awareness states and improving driving safety.
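The two-branch alarm pre-check reduces to a few lines. The 3-second safety duration is an invented placeholder; the patent only calls it a preset safety duration:

```python
def danger_alarm_needed(awareness_lost, observed_s=None, safe_s=3.0):
    """Pre-check run before lane/speed/environment data are acquired.

    Alarm when situational awareness is lost, or when an alert driver's
    continuous observation duration falls short of the preset safety
    duration (safe_s is an assumed value, not from the patent).
    """
    if awareness_lost:
        return True
    return observed_s is not None and observed_s < safe_s
```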
Optionally, in one embodiment of the present application, before determining the total time for the driver to complete the takeover reaction from the current degree of situational-awareness loss, the method further comprises: detecting the current working mode of the vehicle; and when the current working mode is detected to be the first working mode, taking the total time for the driver to complete the takeover reaction as a preset time, and otherwise determining that total time from the current degree of situational-awareness loss.
By these technical means, the embodiment detects the vehicle's current working mode, uses the preset time as the takeover total time when the first working mode is detected, and otherwise derives the total time from the current degree of situational-awareness loss, ensuring the quality of the takeover.
Optionally, in one embodiment of the present application, determining the total time for the driver to complete the takeover reaction based on the current degree of situational awareness comprises: collecting the driver's physiological-state information and emotional-state information; and matching the total time for the driver to complete the takeover reaction from a preset database according to that physiological and emotional state information.
By these technical means, the embodiment collects the driver's physiological and emotional state information and matches the takeover total time from a database accordingly, refining the estimate of the driver's situational-awareness loss so that the safe takeover is performed appropriately and the driving experience is improved.
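Combining the two preceding paragraphs, the total takeover time could be resolved as below. The mode label, state keys, and all numeric values are hypothetical; the patent only specifies the mode check and the database lookup, not their contents:

```python
def takeover_total_time(mode, physiology=None, emotion=None,
                        preset_s=2.0, lookup=None):
    """Total time (s) for the driver to complete the takeover reaction.

    In the first working mode a preset time is used; otherwise the time is
    matched from a database keyed by physiological and emotional state.
    Keys and values here are illustrative placeholders.
    """
    if mode == "first":
        return preset_s
    if lookup is None:
        lookup = {("fatigued", "calm"): 4.5, ("rested", "calm"): 2.5}
    return lookup.get((physiology, emotion), 3.5)  # conservative fallback
```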
Optionally, in one embodiment of the present application, determining the danger-area degree of each surrounding vehicle from the total takeover time and one or both of the longitudinal and lateral contact times comprises: if only longitudinal contact exists, the danger-area degree is the extremely dangerous degree when the longitudinal contact time is less than the total time, the more dangerous degree when it is greater than the total time but less than the sum of the total time and a first preset duration, and the generally dangerous degree when it exceeds that sum; if only lateral contact exists, the same classification applies with the lateral contact time and a second preset duration; and if longitudinal and lateral contact exist simultaneously, the danger-area degree is the extremely dangerous degree when either contact time is less than the total time, the more dangerous degree when either contact time is greater than the total time but less than the sum of the total time and a third preset duration, and the generally dangerous degree when both exceed that sum.
By these technical means, the embodiment judges the danger degree by comparing the longitudinal and lateral contact times against the total takeover time, and can prompt the driver to maintain situational awareness with in-vehicle warnings of different intensities for the extremely dangerous and generally dangerous areas, ensuring enough reaction time when the driver must take over manually in an emergency and improving driving safety.
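The three-branch classification above can be expressed directly. The margin values are stand-ins for the first, second and third preset durations, which the patent leaves unspecified:

```python
def danger_degree(total_s, t_long=None, t_lat=None, d1=2.0, d2=2.0, d3=2.0):
    """Classify a surrounding vehicle's danger-area degree from the predicted
    contact time(s) and the driver's total takeover time.

    d1/d2/d3 stand for the first/second/third preset durations (placeholder
    values). A contact time of None means no contact on that axis.
    """
    times = [t for t in (t_long, t_lat) if t is not None]
    if not times:
        return "no contact predicted"
    # Pick the margin for the branch: both axes -> d3, longitudinal-only -> d1,
    # lateral-only -> d2, as in the text.
    margin = d3 if len(times) == 2 else (d1 if t_long is not None else d2)
    worst = min(times)  # "either contact time" reduces to the smaller one
    if worst < total_s:
        return "extremely dangerous"
    if worst < total_s + margin:
        return "more dangerous"
    return "generally dangerous"
```

Taking the minimum of the two contact times implements the "either contact time" wording: if the smaller time already exceeds a threshold, so does the other.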
Optionally, in one embodiment of the present application, after perceiving the collision risk between the vehicle and the surrounding vehicles based on the danger-area degree, the method further comprises: matching the optimal takeover-alarm action based on the danger-area degree, and controlling the current vehicle to execute the optimal takeover-alarm action with respect to one or more surrounding vehicles.
By these technical means, after the collision risk is perceived, the optimal takeover-alarm action is matched from the danger-area degree and executed, effectively guaranteeing a safe automated-driving takeover under different driver and environment states and realizing a differentiated takeover.
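The degree-to-action matching could be as simple as a table lookup. The action descriptions are invented for illustration; the patent only states that the optimal action is matched from the danger-area degree:

```python
def takeover_alarm_action(degree):
    """Map the perceived danger-area degree to a takeover-alarm action.

    Action names are hypothetical examples of graded warnings, not taken
    from the patent.
    """
    actions = {
        "extremely dangerous": "haptic + audible alarm, immediate takeover request",
        "more dangerous": "audible alarm, prepare to take over",
        "generally dangerous": "visual cue, maintain situational awareness",
    }
    return actions.get(degree, "no action")
```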
An embodiment of the second aspect of the present application provides a hazard perception device for a vehicle, comprising: an identification module for identifying the driver's current degree of situational-awareness loss based on a facial image of the driver; a calculation module for calculating, when the current degree of situational-awareness loss is a preset degree, the longitudinal contact time and/or lateral contact time of each surrounding vehicle with the current vehicle under the current vehicle's current motion-trajectory state; and a perception module for determining the danger-area degree of each surrounding vehicle from one or both of the longitudinal and lateral contact times together with the total time the driver needs to complete the takeover reaction, and perceiving the collision risk between the current vehicle and the surrounding vehicles based on the danger-area degree.
Optionally, in one embodiment of the present application, the calculation module comprises: an acquisition unit for acquiring the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time integrated environment state; and a calculation unit for calculating the longitudinal contact time and/or lateral contact time of each surrounding vehicle with the current vehicle under the current motion-trajectory state from that boundary information, speed difference and environment state.
Optionally, in one embodiment of the present application, the identification module comprises: an extraction unit for extracting the driver's eye-position features and facial-action features from the facial image; a judging unit for judging whether the driver satisfies the preset surrounding-environment observation condition according to those features; a first determination unit for judging the current degree of situational-awareness loss to be the awareness-lost degree when the observation condition is not satisfied; and a second determination unit for judging the current degree to be the alert-awareness degree when the observation condition is satisfied.
Optionally, in one embodiment of the present application, the judging unit comprises: a calculation subunit for calculating a plurality of positions of the midpoint of the line connecting the two eyes from the eye-position features; a detection subunit for taking the position at which this midpoint dwells the longest as the reference position, calculating the actual position of the midpoint in each frame from the facial-action features, and detecting the correspondence between the actual position and the reference position; and a determination subunit for judging that the observation condition is satisfied when the actual position is detected to be consistent with the reference position, and not satisfied otherwise.
Optionally, in one embodiment of the present application, further includes: the first judging module is used for meeting the preset dangerous alarm condition when the current situation consciousness loss degree is the consciousness loss degree before acquiring boundary information of a lane where the current vehicle is located, speed difference between the current vehicle and at least one surrounding vehicle and real-time comprehensive environment state; and the second judging module is used for acquiring the continuous observation time length of the driver when the current situation awareness degree is the alertness awareness degree, and judging that the preset dangerous alarm condition is met when the continuous observation time length is smaller than the preset safety time length.
Optionally, in one embodiment of the present application, further includes: a detection module for detecting a current mode of operation of the vehicle before determining a total time for the driver to complete the take over reaction from the current degree of situational awareness loss; and the determining module is used for determining the total time for the driver to finish taking over the reaction as preset time when the current working mode is detected to be the first working mode, otherwise, determining the total time for the driver to finish taking over the reaction according to the current situation awareness loss degree.
Optionally, in one embodiment of the present application, the determining module includes: the acquisition unit is used for acquiring physiological state information and emotion state information of the driver; and the matching unit is used for matching the total time for the driver to finish taking over the reaction from a preset database according to the physiological state information and the emotional state information.
Optionally, in one embodiment of the present application, the perception module comprises: a third determination unit for the case where only longitudinal contact exists, classifying the danger-area degree as the extremely dangerous degree when the longitudinal contact time is less than the total time, the more dangerous degree when it is greater than the total time but less than the sum of the total time and the first preset duration, and the generally dangerous degree when it exceeds that sum; a fourth determination unit applying the same classification to the lateral contact time with the second preset duration when only lateral contact exists; and a fifth determination unit for the case where longitudinal and lateral contact exist simultaneously, classifying the degree as extremely dangerous when either contact time is less than the total time, more dangerous when either contact time is greater than the total time but less than the sum of the total time and the third preset duration, and generally dangerous when both exceed that sum.
Optionally, in one embodiment of the present application, further includes: the control module is used for matching the optimal takeover alarming action based on the dangerous area degree after sensing the collision danger between the vehicle and the surrounding vehicles based on the dangerous area degree, and controlling the current vehicle to execute the optimal takeover alarming action on one or more surrounding vehicles.
An embodiment of a third aspect of the present application provides a vehicle including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the method for sensing the danger of the vehicle according to the embodiment.
A fourth aspect embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the hazard sensing method of a vehicle as above.
The embodiment of the application has the beneficial effects that:
(1) According to the embodiment of the application, when the current situation awareness loss degree of the driver reaches a certain degree, the longitudinal time and the transverse time of the contact between the surrounding vehicles and the current vehicle can be calculated, and the danger degree of other surrounding vehicles is determined so as to sense the collision danger between the current vehicle and the surrounding vehicles, so that the driver is reminded of carrying out related driving safety takeover based on the situation awareness loss degree, the driving safety is improved, the safety and the practicability are improved, and the driving experience is improved.
(2) According to the embodiment of the application, the longitudinal contact time and/or the transverse contact time of each surrounding vehicle in contact with the current vehicle in the current motion track state of the current vehicle can be calculated according to the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle and the real-time comprehensive environment state, so that basis is provided for determining the dangerous area degree of each surrounding vehicle, the driving safety is further improved, and the vehicle is safer and more practical.
(3) According to the embodiment of the application, the physiological state information and the emotional state information of the driver can be acquired, the total time for the driver to finish taking over the reaction is matched from the database according to the physiological state information and the emotional state information, so that the situation awareness loss degree of the driver is further obtained, the related driving safety taking over is carried out, and the driving experience is improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for risk awareness of a vehicle according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of risk awareness of a vehicle according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a hazard sensing method of a vehicle according to one embodiment of the present application;
FIG. 4 is a schematic diagram of a risk sensing device for a vehicle according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a vehicle according to an embodiment of the present application.
Wherein, 10-a hazard perception device of the vehicle; 100-recognition module, 200-calculation module, 300-perception module; 501-memory, 502-processor and 503-communication interface.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The following describes a hazard sensing method and apparatus for a vehicle according to an embodiment of the present application with reference to the accompanying drawings. As mentioned in the background art, the takeover early-warning systems in the related art are designed in a single fixed manner, cannot effectively perceive the collision danger between the current vehicle and surrounding vehicles, and do not fully consider the complexity and uncertainty of the driver in the driver-vehicle-road closed-loop system of automated driving; as a result, the vehicle is difficult to take over safely when the driver's situational awareness is lost, the complete takeover time is prolonged, the driving safety of the vehicle is low, and the driver's driving experience is degraded. The method and apparatus described below address these problems.
Specifically, fig. 1 is a schematic flow chart of a risk sensing method for a vehicle according to an embodiment of the present application.
As shown in fig. 1, the hazard sensing method of the vehicle includes the steps of:
in step S101, the current degree of situational awareness loss of the driver is recognized based on the face image of the driver.
It can be understood that the embodiment of the application can acquire the facial image of the driver, for example, the facial image of the driver is acquired through the vision sensor, and the positions of the eyes and the facial actions of the driver can be extracted according to the facial image information, so as to identify the current situation awareness loss degree of the driver.
For example, the embodiment of the application can install the camera which is opposite to the head of the driver in the vehicle to acquire the face image of the driver, and identify the current situation awareness loss degree of the driver by extracting the positions of the eyes and the face action information of the driver in the face image, so that the current situation awareness loss degree of the driver is accurately identified, the driver is reminded of carrying out related driving safety takeover based on the situation awareness loss degree, and the driving safety is improved.
Optionally, in one embodiment of the application, identifying the current degree of situational awareness loss of the driver includes: extracting eye position features and facial action features of the driver from the facial image; judging whether the driver meets a preset ambient environment observation condition according to the eye position features and the facial action features; when the preset ambient environment observation condition is not met, judging that the current situational awareness loss degree is an awareness loss degree; and when the preset ambient environment observation condition is met, judging that the current situational awareness loss degree is an alertness awareness degree.
It can be understood that, in the embodiment of the present application, the eye position features and the facial action features of the driver can be extracted from the facial image, for example by a camera; actions that indicate the driver meets a certain ambient environment observation condition, such as observing the road ahead and the left and right rearview mirrors, can be detected from the position of the driver's eyes.
In the actual execution process, the embodiment of the application can capture an in-vehicle image through the camera and extract the eye position features and facial action features of the driver, ensuring the accuracy of these features. Whether the driver meets a certain ambient environment observation condition can then be judged from the eye position features and the facial action features: for example, when it is detected that the eye position features show the driver is observing neither the road ahead nor the left and right rearview mirrors, and the facial action features likewise show no observation of the surroundings, it is judged that the observation condition is not met and the driver's current situational awareness loss degree is the awareness loss degree; conversely, when it is detected that the driver's eyes observe the road ahead and the left and right rearview mirrors, the observation condition is met and the current situational awareness loss degree is the alertness awareness degree. In the latter case, the accumulated time during which the driver observed the vehicle's surroundings in the 30 s preceding the current moment can be counted, specifically as the time during which the position of the midpoint of the binocular connecting line corresponds to the reference position: if the accumulated time exceeds 50% of the preset window (15-30 s), the driver's alertness degree is judged to be high and the situational awareness loss degree is low; if the accumulated time is between 30% and 50% of the preset window (9-15 s), the alertness degree is judged to be medium, indicating that situational awareness is largely lost; and if the accumulated time is below 30% of the preset window (0-9 s), the alertness degree is judged to be low and the driver's situational awareness is completely lost.
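As a rough illustration of the threshold logic above, the following Python sketch maps the accumulated observation time within the 30 s window to an alertness degree; the function name and exact boundary handling are our assumptions, not part of the patent text:

```python
def classify_alertness(observed_s: float, window_s: float = 30.0) -> str:
    """Map the accumulated surround-observation time within the preceding
    window (30 s in the text) to an alertness degree.  Thresholds follow
    the text: over 50% of the window (>= 15 s) is high alertness, 30-50%
    (9-15 s) is medium, and below 30% (< 9 s) is low, i.e. situational
    awareness completely lost."""
    ratio = observed_s / window_s
    if ratio >= 0.5:
        return "high"    # low situational-awareness loss
    if ratio >= 0.3:
        return "medium"  # situational awareness largely lost
    return "low"         # situational awareness completely lost
```

A driver who watched the surroundings for 20 s of the last 30 s would thus be classified as highly alert, while 5 s of observation would indicate complete loss of situational awareness.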
The embodiment of the application can fully consider the complexity and uncertainty of the driver under the automatic driving condition, ensure to realize an effective take-over alarm mechanism through different situational awareness states of the driver, ensure take-over quality, and is safer and more practical.
It should be noted that the preset ambient environment observation condition may be set by those skilled in the art according to the actual situation, and is not particularly limited herein.
Optionally, in one embodiment of the present application, determining whether the driver meets the preset ambient environment observation condition according to the eye position features and the facial action features includes: calculating a plurality of positions of the midpoint of the binocular connecting line according to the eye position features; taking, among the plurality of positions, the position at which the midpoint of the binocular connecting line stays for the longest time as the reference position; calculating the actual position of the midpoint of the driver's binocular connecting line in each frame according to the facial action features, and detecting the correspondence between the actual position and the reference position; and when the actual position is detected to be consistent with the reference position, judging that the preset ambient environment observation condition is met, and otherwise judging that it is not met.
In some embodiments, a large number of monitoring videos of drivers performing various actions in a vehicle can first be obtained, and K-means cluster analysis can be performed on the monitoring videos corresponding to each action to obtain the position at which the midpoint of the binocular connecting line stays longest when each action is performed; that position is taken as the reference position for the action, the actual position of the midpoint of the driver's binocular connecting line in each frame is calculated according to the facial action features, and the correspondence between the actual position and the reference position is detected. Specifically, the K-means cluster analysis of the monitoring videos yields three cluster centers: the first cluster center corresponds to the driver's action when looking ahead while driving, and the correspondence between the midpoint position for this action and the first reference position is recorded as the first correspondence; the second cluster center corresponds to the action of observing the left rearview mirror, and the correspondence between the midpoint position for this action and the second reference position is recorded as the second correspondence; and the third cluster center corresponds to the action of observing the right rearview mirror, and the correspondence between the midpoint position for this action and the third reference position is recorded as the third correspondence. When the position of the midpoint of the binocular connecting line is detected not to coincide with the reference position, the driver is not observing the vehicle's surroundings, and it is judged that the ambient environment observation condition is not met.
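The dwell-based reference-position matching described above can be sketched as follows. This is a simplified stand-in (grid-based dwell counting in place of full K-means clustering), assuming per-frame eye midpoints are given as normalized (x, y) coordinates; the function names, grid size, and tolerance are illustrative assumptions:

```python
from collections import Counter

def reference_position(midpoints, grid=0.05):
    """Estimate the reference position as the grid cell in which the
    midpoint of the driver's binocular connecting line dwells longest
    (a lightweight stand-in for the K-means clustering in the text)."""
    cells = Counter((round(x / grid), round(y / grid)) for x, y in midpoints)
    (cx, cy), _ = cells.most_common(1)[0]
    return cx * grid, cy * grid

def matches_reference(actual, reference, tol=0.05):
    """Check the actual midpoint against the reference position, allowing
    the small error threshold the text suggests for natural eye sway."""
    return (abs(actual[0] - reference[0]) <= tol
            and abs(actual[1] - reference[1]) <= tol)
```

Frames whose midpoint matches a mirror-observation reference position would count toward the driver's accumulated observation time.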
According to the embodiment of the application, the visual focus position and the stay time can be effectively identified through the eye movement track of the driver, the attention condition of the driver to a certain visual object is judged, and the attention degree of the driver to the surrounding environment is accurately judged according to the direct attention behavior of the driver, so that the situation consciousness loss degree of the driver in automatic driving is accurately deduced.
It should be noted that the preset ambient environment observation condition may be set by a person skilled in the art according to the actual situation and is not limited herein; because the driver's eyes inevitably sway slightly while driving and observing the road, a certain error threshold may be set when judging whether the first, second, or third correspondence is satisfied, so as to improve detection accuracy.
In step S102, if the current situational awareness loss degree is a preset degree, a longitudinal contact time and/or a lateral contact time for each surrounding vehicle to contact the current vehicle in the current motion trajectory state of the current vehicle are calculated.
It can be understood that the longitudinal contact time in the embodiment of the present application may be the time from the start of recording until the longitudinal distance to the current vehicle shrinks to 0, i.e. until the head of a side or rear vehicle is flush with the tail of the current vehicle; the lateral contact time may be the time from the start of recording until the lateral distance to the current vehicle shrinks to 0, i.e. until the body of a side vehicle contacts the body of the current vehicle.
In some cases, the embodiment of the application can calculate the longitudinal contact time and/or the lateral contact time of each surrounding vehicle in contact with the current vehicle in the current motion trajectory state when the current situational awareness loss degree is a preset degree. For example, when the driver's situational awareness is completely lost, the time t of dangerous interaction between a rear vehicle and the current vehicle is calculated, including the longitudinal contact time t1 and the lateral contact time t2; based on these contact times, the driver is guaranteed enough reaction time to manually take over the vehicle in an emergency, improving driving safety.
It should be noted that the preset degree may be set by those skilled in the art according to actual situations, and is not particularly limited herein.
Optionally, in one embodiment of the present application, calculating the longitudinal contact time and/or the lateral contact time of each surrounding vehicle in contact with the current vehicle in the current motion trajectory state of the current vehicle includes: acquiring the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state; and calculating the longitudinal contact time and/or the lateral contact time of each surrounding vehicle in contact with the current vehicle in the current motion trajectory state of the current vehicle according to the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and the at least one surrounding vehicle, and the real-time comprehensive environment state.
In the actual execution process, the embodiment of the application can establish a vehicle trajectory prediction system based on a deep convolutional neural network and feed it the vehicle's visual scene. The input pipeline is a multi-task convolutional neural network topology: a three-channel RGB (Red, Green, Blue) image is input and, after shared convolutional feature encoding, branch decoding outputs the target detection results. Through offline model training, scene driving videos at different times, in different weather, and under different driving conditions are acquired, time-series discrete training samples are selected at fixed intervals, and training labels are generated by manual annotation. Through model compression, the parameters obtained from offline training are compressed according to the operational characteristics of the embedded platform, and the simplified model is deployed on the embedded platform after accuracy verification and retraining. The ROI (Region of Interest) part of the raw image data is cropped and scaled to the network input size of the multi-task topology, the preprocessed image is input into the compressed network, and the model outputs its scene analysis result after a forward pass. Various sensors (lidar, millimeter-wave radar, binocular camera, and the like) can be arranged to give dual 360-degree coverage around the vehicle for blind-spot detection and redundant detection. In the control system, the domain controller can judge dangerous conditions in the 360-degree surrounding environment from the target information of the millimeter-wave radar and the image information of the camera, and can link the front and rear of the vehicle: for example, the domain controller receives early-warning or braking signals from the front corner radar, the rear corner radar, and the forward-looking intelligent camera, and then performs fusion judgment processing.
For example, the embodiment of the application can acquire real-time environment attribute data to obtain the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state. Various sensors mounted outside the intelligent vehicle (lidar, millimeter-wave radar, binocular camera, and the like) identify the dynamic and static objects around the vehicle and perceive environmental information, acquiring driving environment information including but not limited to the road environment where the vehicle is located (path, lane, and the like), traffic conditions (traffic density and congestion degree, and the like), weather conditions (rain, snow, heavy fog, and the like), and the behavior of and distance to side and rear vehicles. The acquired real-time driving environment information is turned into environment attribute data, from which a condition attribute set is constructed, with a comprehensive environment state index satisfying the requirement condition serving as the decision attribute set; the index weight of each piece of environment attribute data is determined using attribute dependency and importance, and weighted summation then yields the comprehensive environment state index, thereby generating the vehicle's real-time comprehensive environment state and determining the real-time environment complexity of the vehicle's surroundings, as well as the time the driver needs to perceive that complexity. The acquired real-time environment complexity can be transmitted over the CAN (Controller Area Network) bus, so that early warning can be issued and takeover assisted before the automated driving system fails.
According to the embodiment of the application, the longitudinal contact time and the transverse contact time of each surrounding vehicle in contact with the current vehicle in the current motion track state of the current vehicle can be calculated according to the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle and the real-time comprehensive environment state, so that a basis is provided for determining the dangerous area degree of each surrounding vehicle, the driving safety is further improved, and the vehicle is safer and more practical.
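As a minimal sketch of the contact-time idea, the following assumes a constant closing speed and ignores the trajectory and environment refinements described above; the function name and units are illustrative assumptions:

```python
def contact_time(gap_m: float, closing_speed_mps: float) -> float:
    """Time (s) for a surrounding vehicle to close a longitudinal or
    lateral gap to zero at the current speed difference.  This is a
    constant-speed sketch; the patent's trajectory prediction would
    refine it with lane-boundary and real-time environment data."""
    if closing_speed_mps <= 0:
        # The gap is not closing under the current trajectories.
        return float("inf")
    return gap_m / closing_speed_mps
```

For instance, a rear vehicle 20 m behind and closing at 5 m/s yields a longitudinal contact time t1 of 4 s, which is then compared against the takeover time t0 in the next step.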
Optionally, in one embodiment of the present application, before acquiring the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state, the method further includes: if the current situational awareness loss degree is the awareness loss degree, judging that a preset dangerous alarm condition is met; and if the current situational awareness loss degree is the alertness awareness degree, acquiring the driver's continuous observation duration, and judging that the preset dangerous alarm condition is met when the continuous observation duration is less than a preset safe duration.
It may be appreciated that the preset dangerous alarm condition in the embodiment of the present application may be, but is not limited to, the driver's current situational awareness loss degree being the awareness loss degree, the duration for which the driver observes the surrounding environment being less than a safe duration, and the like.
In the actual execution process, the embodiment of the application can judge that a certain dangerous alarm condition is met when the driver's current situational awareness loss degree is the awareness loss degree; when the current situational awareness loss degree is the alertness awareness degree, the driver's continuous observation duration is acquired, and the dangerous alarm condition is judged to be met when the continuous observation duration is less than a certain safe duration. Alarming and takeover are thus triggered whenever the dangerous alarm condition is met, ensuring that an effective takeover early-warning mechanism is realized under the driver's different situational awareness states, guaranteeing takeover quality, improving driving safety, and making the vehicle safer and more practical.
It should be noted that the preset dangerous alarm condition and the preset safety duration may be set by those skilled in the art according to actual situations, and are not particularly limited herein.
Optionally, in one embodiment of the present application, before determining the total time for the driver to complete the takeover reaction from the current degree of situational awareness loss, the method further includes: detecting the current working mode of the vehicle; and when the current working mode is detected to be a first working mode, taking a preset time as the total time for the driver to complete the takeover reaction, and otherwise determining the total time for the driver to complete the takeover reaction according to the current situational awareness loss degree.
In the actual execution process, the embodiment of the application can detect the current working mode of the vehicle before determining, from the current situational awareness loss degree, the total time for the driver to complete the takeover reaction; when the current working mode is detected to be the first working mode, the total time is the preset time, and when it is not, the total time is determined from the current situational awareness loss degree, thereby guaranteeing takeover quality.
Optionally, in one embodiment of the application, determining the total time for the driver to complete the take over reaction from the current degree of situational awareness comprises: collecting physiological state information and emotional state information of a driver; and matching the total time for the driver to finish taking over the reaction from a preset database according to the physiological state information and the emotional state information.
As a possible implementation, the embodiment of the application can acquire the driver's physiological state information and emotional state information in real time during driving through a state acquisition function, and can also set up a physiological state library, an emotional state library, and a normal-situational-awareness takeover time library; the state acquisition function and the three libraries interact over a bus to generate the total time for the driver to complete the takeover reaction from the current state. The state acquisition function includes an emotion recognition function that judges the driver's emotional state from the acquired facial image information using micro-expression recognition, and also includes a sensor function that can acquire various physiological parameters of the driver characterizing the emotional state, the physiological parameters at least including the driver's body temperature, heart rate, and blood pressure.
For example, when the first sensor is a heart rate sensor disposed on the steering wheel and it detects that the driver's heart rate is between 60 and 80 beats/min, the time reflected by t0 incorporates the heart rate parameter; similarly, when the driver's emotional state is matched from the facial image information together with the heart rate and/or blood pressure, the time reflected by t0 is a comprehensive value that integrates the driver's current emotional state.
In some cases, within the state acquisition function the driver's emotion can be obtained in various ways: contact acquisition components such as heart rate, blood pressure, and pulse sensors can collect the driver's physiological information, from which the emotion is judged; non-contact acquisition components such as a camera can collect the driver's facial image information to obtain the emotion; or contact and non-contact modes can be combined to improve detection accuracy. The driver's situational awareness loss degree is thereby obtained, and the total time for the driver to complete the takeover reaction from the current state is generated, realizing differentiated takeover alarms.
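The database matching step above can be sketched as a simple lookup. The state labels and takeover times below are illustrative assumptions only; the patent specifies that t0 is matched from a preset database built over the physiological and emotional state libraries, not these particular values:

```python
# Hypothetical lookup table standing in for the preset database of
# (physiological state, emotional state) -> takeover time t0 (seconds).
TAKEOVER_TIME_DB = {
    ("normal", "calm"): 2.0,
    ("normal", "agitated"): 3.0,
    ("fatigued", "calm"): 3.5,
    ("fatigued", "agitated"): 4.5,
}

def match_takeover_time(physiological: str, emotional: str,
                        default: float = 4.5) -> float:
    """Match the total takeover-reaction time t0 from the preset
    database, falling back to a conservative default for unknown
    state combinations."""
    return TAKEOVER_TIME_DB.get((physiological, emotional), default)
```

Falling back to the largest time for unrecognized states is a conservative design choice: overestimating t0 errs toward earlier warnings.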
In step S103, the dangerous area degree of each surrounding vehicle is determined according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete the takeover reaction, and the collision danger between the current vehicle and the surrounding vehicles is perceived based on the dangerous area degree.
It will be appreciated that the dangerous area degree in the embodiment of the present application may be determined according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete the takeover reaction; the total time to complete the takeover reaction may be recorded as the takeover time t0.
In the actual execution process, the embodiment of the application can determine the dangerous area degree of each surrounding vehicle according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete the takeover reaction, so that the collision danger between the current vehicle and the surrounding vehicles is perceived based on the dangerous area degree, improving the driving safety, practicability, and driving experience.
For example, the embodiment of the application can establish a warning-area prediction system to generate a warning area surrounding the current vehicle, where the warning area is the time-prediction area in which a rear vehicle dangerously interacts with the current vehicle under its current motion trajectory. Specifically, when only longitudinal contact exists, if the longitudinal contact time t1 is less than the takeover time t0, the dangerous area degree is judged to be an extremely dangerous area; the collision danger between the current vehicle and the surrounding vehicles can then be perceived accordingly, and the collision risk is judged to be large. Further, when only lateral contact exists, if takeover time t0 < lateral contact time t2 < takeover time t0 + 3 s, the dangerous area degree is judged to be a more dangerous area, and the collision risk is judged to be moderate. Still further, when both the longitudinal contact time t1 and the lateral contact time t2 are greater than takeover time t0 + 3 s, the dangerous area degree is judged to be a general danger area, and the collision risk is judged to be small.
Optionally, in one embodiment of the present application, determining the dangerous area degree of each surrounding vehicle according to one or both of the longitudinal contact time and the transverse contact time and the total time for the driver to complete the take-over reaction includes: if only longitudinal contact exists, the dangerous area degree is an extremely dangerous area when the longitudinal contact time is less than the total time, a more dangerous area when the longitudinal contact time is greater than the total time and less than the sum of the total time and a first preset duration, and a general dangerous area when the longitudinal contact time is greater than that sum; if only transverse contact exists, the dangerous area degree is an extremely dangerous area when the transverse contact time is less than the total time, a more dangerous area when the transverse contact time is greater than the total time and less than the sum of the total time and a second preset duration, and a general dangerous area when the transverse contact time is greater than that sum; if both longitudinal contact and transverse contact exist, the dangerous area degree is an extremely dangerous area when either contact time is less than the total time, a more dangerous area when either contact time is greater than the total time and less than the sum of the total time and a third preset duration, and a general dangerous area when both contact times are greater than that sum.
It will be appreciated that the total time for the driver to complete the take-over reaction in the embodiment of the application may be referred to as the take-over time t0.
In the actual implementation process, when only longitudinal contact exists, the embodiment of the application can judge that the dangerous area degree is an extremely dangerous area when the longitudinal contact time is less than the total time, a more dangerous area when the longitudinal contact time is greater than the total time and less than the sum of the total time and the first preset duration, and a general dangerous area when the longitudinal contact time is greater than that sum. Further, when only transverse contact exists, the dangerous area degree is judged to be an extremely dangerous area when the transverse contact time is less than the total time, a more dangerous area when the transverse contact time is greater than the total time and less than the sum of the total time and the second preset duration, and a general dangerous area when the transverse contact time is greater than that sum. Further still, when both longitudinal contact and transverse contact exist, the dangerous area degree is judged to be an extremely dangerous area when either contact time is less than the total time, a more dangerous area when either contact time is greater than the total time and less than the sum of the total time and the third preset duration, and a general dangerous area when both contact times are greater than that sum.
For example, the embodiment of the application can establish a warning area pre-judging system to generate a warning area surrounding the current vehicle, where the warning area is a time-prediction area in which a rear vehicle will interact dangerously with the current vehicle under the current motion track, and the vehicle track pre-judging system, the take-over preparation system, and the warning area pre-judging system are adapted through real-time linkage. When only longitudinal contact exists, if the longitudinal contact time t1 < the take-over time t0, the dangerous area is judged to be an extremely dangerous area; if the take-over time t0 < the longitudinal contact time t1 < the take-over time t0 + 3 s, a more dangerous area; and if the longitudinal contact time t1 > the take-over time t0 + 3 s, a general dangerous area. For another example, when only transverse contact exists, if the transverse contact time t2 < the take-over time t0, the dangerous area is judged to be an extremely dangerous area; if the take-over time t0 < the transverse contact time t2 < the take-over time t0 + 3 s, a more dangerous area; and if the transverse contact time t2 > the take-over time t0 + 3 s, a general dangerous area. For another example, when both the longitudinal contact time and the transverse contact time exist, if either the longitudinal contact time t1 or the transverse contact time t2 is less than the take-over time t0, the dangerous area is judged to be an extremely dangerous area; if either contact time lies between the take-over time t0 and the take-over time t0 + 3 s, a more dangerous area; and if both the longitudinal contact time t1 and the transverse contact time t2 exceed the take-over time t0 + 3 s, a general dangerous area.
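The three-way classification above can be sketched as follows. This is an illustrative reading of the rules rather than an implementation from the patent: the function name `classify_zone` is assumed, and a uniform 3 s margin stands in for the first, second, and third preset durations mentioned in the text.

```python
# Hypothetical sketch of the danger-zone classification described above.
# The 3 s margin follows the worked example (t0 + 3 s); the patent leaves
# the preset durations open, so treating them as one constant is an
# assumption made here for brevity.

TAKEOVER_MARGIN_S = 3.0  # stands in for the first/second/third preset durations

def classify_zone(t_long, t_lat, t_takeover, margin=TAKEOVER_MARGIN_S):
    """Return 'extreme', 'high', or 'general' for one surrounding vehicle.

    t_long / t_lat: predicted longitudinal / transverse contact times in
    seconds, or None when that contact direction does not exist.
    t_takeover: total time t0 for the driver to complete the take-over.
    """
    times = [t for t in (t_long, t_lat) if t is not None]
    if not times:
        return None  # no predicted contact in either direction
    # When both contacts exist, the earliest ("any") contact time governs,
    # which reproduces the extreme > more > general precedence in the text.
    t = min(times)
    if t < t_takeover:
        return "extreme"          # extremely dangerous area
    if t < t_takeover + margin:
        return "high"             # more dangerous area
    return "general"              # general dangerous area

# Only longitudinal contact, t1 = 2 s, take-over time t0 = 3 s:
# contact occurs before the driver can take over.
print(classify_zone(2.0, None, 3.0))   # extreme
```

Using the minimum of the two contact times makes the "either contact time" conditions fall out naturally: if any time is below t0 the minimum is too, and only when both exceed t0 + 3 s does the minimum exceed it.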
According to the embodiment of the application, the longitudinal time and the transverse time at which surrounding vehicles would contact the current vehicle can be calculated based on the driver's degree of situational awareness loss, the danger degree of the surrounding vehicles can be determined, and the vehicle can be controlled to execute the optimal take-over alarm action, so that the driver is reminded to perform the relevant driving-safety take-over based on the degree of situational awareness loss, improving driving safety and practicality as well as the driving experience.
It should be noted that the first preset duration, the second preset duration, and the third preset duration may be set by those skilled in the art according to actual situations, and are not limited herein.
Optionally, in one embodiment of the present application, after perceiving the collision risk between the current vehicle and the surrounding vehicles based on the dangerous area degree, the method further includes: matching an optimal take-over alarm action based on the dangerous area degree, and controlling the current vehicle to execute the optimal take-over alarm action for one or more surrounding vehicles.
It can be understood that the optimal take-over alarm action in the embodiment of the application can be delivered in visual, auditory, and tactile modes, among others. An auditory message can quickly capture the driver's attention and provide a priority warning, and can reinforce a visual warning in a non-time-critical situation; a tactile message can quickly capture the driver's attention when an auditory message is unlikely to be effective.
For example, when the dangerous area degree is an extremely dangerous area, the embodiment of the application can match the optimal take-over alarm action and control the current vehicle to execute it for one or more surrounding vehicles: if the driver's situational awareness loss is low, the take-over alarm can be given by combining vision and hearing; if the driver's situational awareness is basically lost or completely lost, the take-over alarm can be given by combining vision, hearing, and touch. In this way, the optimal take-over alarm action is executed according to both the driver's situational awareness and the dangerous area degree. For another example, when the dangerous area degree is a more dangerous area, the embodiment of the application can match the optimal take-over alarm action: if the driver's situational awareness loss is low, the take-over alarm can be given visually; if situational awareness is basically lost, by combining vision and hearing; and if completely lost, by combining vision, hearing, and touch. For another example, when the dangerous area degree is a general dangerous area, the embodiment of the application can match the optimal take-over alarm action: if the driver's situational awareness loss is low, no take-over alarm is needed; if situational awareness is basically lost, the take-over alarm can be given visually; and if completely lost, by combining vision and hearing.
Table 1 is a differentiated take-over alarm table combining the driver's situational awareness and the dangerous area degree, as shown below:

TABLE 1

Situational awareness  | Extremely dangerous area       | More dangerous area          | General dangerous area
Low loss               | visual + auditory              | visual                       | no alarm
Basically lost         | visual + auditory + tactile    | visual + auditory            | visual
Completely lost        | visual + auditory + tactile    | visual + auditory + tactile  | visual + auditory
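The differentiated matching can be sketched as a simple lookup. The modality assignments follow the worked example in the text; the names `ALARM_TABLE`, `match_alarm`, and the zone and loss labels are illustrative assumptions, not identifiers from the patent.

```python
# Hedged sketch of the differentiated take-over alarm matching (Table 1).
# Modalities escalate with both the dangerous area degree and the driver's
# situational-awareness (SA) loss, as the example text describes.

ALARM_TABLE = {
    # (zone,     sa_loss):   alarm modalities
    ("extreme", "low"):      ("visual", "auditory"),
    ("extreme", "basic"):    ("visual", "auditory", "tactile"),
    ("extreme", "complete"): ("visual", "auditory", "tactile"),
    ("high",    "low"):      ("visual",),
    ("high",    "basic"):    ("visual", "auditory"),
    ("high",    "complete"): ("visual", "auditory", "tactile"),
    ("general", "low"):      (),                     # no alarm needed
    ("general", "basic"):    ("visual",),
    ("general", "complete"): ("visual", "auditory"),
}

def match_alarm(zone, sa_loss):
    """Return the alarm modalities for a dangerous area degree and SA-loss degree."""
    return ALARM_TABLE[(zone, sa_loss)]

print(match_alarm("extreme", "complete"))  # ('visual', 'auditory', 'tactile')
```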
According to the embodiment of the application, after the collision risk between the vehicle and the surrounding vehicles is perceived based on the dangerous area degree, the optimal take-over alarm action is matched according to the dangerous area degree and the vehicle is controlled to execute it, so that a safe take-over of automated driving is effectively ensured under different driver states and environment states, realizing differentiated take-over.
Specifically, the working principle of the danger sensing method for a vehicle according to the embodiment of the present application will be described in detail with reference to fig. 2 and fig. 3 and a specific embodiment.
As shown in fig. 2, an embodiment of the present application may include the steps of:
step S201: a camera is arranged in the vehicle and is opposite to the head of the driver.
Step S202: initialization is performed, keeping the position of the midpoint between the driver's eyes at the center of the camera picture.
Within the first two minutes after the driver starts the vehicle, the embodiment of the application can identify and collect the positions of the driver's eyes in real time, calculate the position of the midpoint of the line connecting the two eyes, take the midpoint position held for the longest time within those two minutes as the reference position, and adjust the camera angle so that the reference position stays at the center of the camera picture; the camera angle is not adjusted again before the current drive ends.
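The calibration step above can be sketched as follows. The patent only says the longest-held midpoint position becomes the reference; the function name `reference_position` and the grid quantization (which makes "same position" robust to pixel jitter) are assumptions added here for illustration.

```python
# Illustrative sketch of initialization: during the first two minutes of
# driving, the midpoint of the line between the driver's eyes is sampled
# each frame, and the position held for the longest total time becomes the
# reference position. Quantizing samples to a small grid is an assumption
# not stated in the text.

from collections import Counter

def reference_position(midpoints, grid=5):
    """midpoints: iterable of (x, y) pixel positions, one per frame.
    Returns the grid-quantized position observed for the most frames."""
    counts = Counter((round(x / grid) * grid, round(y / grid) * grid)
                     for x, y in midpoints)
    return counts.most_common(1)[0][0]

samples = [(101, 200), (99, 201), (100, 199), (150, 240)]
print(reference_position(samples))  # (100, 200), the longest-held position
```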
Step S203: in the automatic driving mode, the current degree of situational awareness loss of the driver is obtained according to the positions of the eyes of the driver.
The embodiment of the application can establish a three-dimensional spatial coordinate system with the reference position as the origin. First, a large number of monitoring videos of drivers performing various actions in the vehicle are acquired, and K-means cluster analysis is performed on the monitoring videos corresponding to each action to obtain the relationship between the position of the midpoint of the binocular connecting line and the reference position during that action, recorded as a corresponding relationship. Specifically, the K-means clustering yields a first cluster center, a second cluster center, and a third cluster center: the first cluster center corresponds to the driver's action when looking forward while driving, and the relationship between the midpoint position for that action and the reference position is recorded as a first corresponding relationship; the second cluster center corresponds to the driver's action when observing the left rearview mirror, and the relationship between the midpoint position for that action and the reference position is recorded as a second corresponding relationship; the third cluster center corresponds to the driver's action when observing the right rearview mirror, and the relationship between the midpoint position for that action and the reference position is recorded as a third corresponding relationship. Then, during automatic driving of the vehicle, the embodiment of the application can acquire in real time the video of the driver's head captured by the camera, calculate for each frame the relationship between the position of the midpoint of the driver's binocular connecting line and the reference position, and if that relationship satisfies any one of the first, second, or third corresponding relationships, the driver is considered to be observing the surrounding environment of the vehicle when making that action.
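The clustering and matching described above can be sketched in miniature. Everything here is an assumption-laden illustration: the patent does not specify the feature space, so eye-midpoint offsets from the reference position are used, the k-means is a bare stdlib implementation with naive initialization, and the tolerance check is a stand-in for the "corresponding relationship" test.

```python
# Minimal sketch: offsets of the eye midpoint from the reference position
# are clustered into three centers (forward gaze, left mirror, right
# mirror), and a live frame counts as "observing the surroundings" when
# its offset lies near any learned center.

import math

def kmeans(points, k=3, iters=50):
    centers = list(points[:k])  # naive init: first k samples (assumption)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

def observing(offset, centers, tol=10.0):
    """True if the midpoint offset matches any learned correspondence."""
    return any(math.dist(offset, c) <= tol for c in centers)

# Offsets (pixels) relative to the reference position: forward ~ (0, 0),
# left-mirror glance ~ (-40, 5), right-mirror glance ~ (45, 5).
data = [(0, 0), (1, -1), (-40, 5), (-41, 6), (45, 5), (44, 4)]
centers = kmeans(data, k=3)
print(observing((0, 1), centers))    # True: matches the forward cluster
print(observing((0, -60), centers))  # False: no learned action nearby
```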
Further, the embodiment of the application can count the cumulative time during which the driver observed the surrounding environment of the vehicle within the 30 s preceding the current moment: if the cumulative time reaches or exceeds 15 s, the driver's alertness is judged to be high; if the cumulative time is between 9 s and 15 s, the driver's alertness is judged to be medium; and if the cumulative time is less than 9 s, the driver's alertness is judged to be low.
Specifically, the embodiment of the application can count, within the 30 s preceding the current moment, the cumulative time during which the relationship between the midpoint position of the binocular connecting line and the reference position satisfies any one of the first, second, or third corresponding relationships. If the cumulative time exceeds 50% of the preset duration, i.e. 15 s to 30 s, the driver's alertness is judged to be high, indicating a low degree of situational awareness loss; if the cumulative time is between 30% and 50% of the preset duration, i.e. 9 s to 15 s, the driver's alertness is judged to be medium, indicating that situational awareness is basically lost; and if the cumulative time is less than 30% of the preset duration, i.e. 0 s to 9 s, the driver's alertness is judged to be low, indicating that situational awareness is completely lost.
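The grading above reduces to two thresholds over a 30 s window, which can be sketched directly. The function and label names are illustrative, not from the patent.

```python
# Sketch of the alertness grading: over the preceding 30 s window, sum the
# time spent observing the surroundings and grade against the 50% / 30%
# thresholds stated in the text.

WINDOW_S = 30.0  # preset observation window

def alertness(observed_s, window_s=WINDOW_S):
    """Map cumulative observation time to (alertness, SA-loss) labels."""
    ratio = observed_s / window_s
    if ratio >= 0.5:        # 15 s to 30 s observed
        return ("high", "low SA loss")
    if ratio >= 0.3:        # 9 s to 15 s observed
        return ("medium", "SA basically lost")
    return ("low", "SA completely lost")   # 0 s to 9 s observed

print(alertness(16.0))  # ('high', 'low SA loss')
print(alertness(10.0))  # ('medium', 'SA basically lost')
print(alertness(5.0))   # ('low', 'SA completely lost')
```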
Next, the operation principle of the risk sensing method of the vehicle according to the embodiment of the present application may be further described in detail through fig. 3.
As shown in fig. 3, the embodiment of the present application may use vehicle position distribution and vehicle speed information, where vehicle 1 is ahead in the right lane, traveling forward at speed V1; vehicle 2 is behind in the left lane, traveling forward at speed V2 and moving rightward at speed V3; and vehicle 3 is behind in the right lane, traveling forward at speed V2. According to the vehicle position distribution and vehicle speed information, the longitudinal time and the transverse time at which each surrounding vehicle would contact the current vehicle are calculated and the danger degree of the surrounding vehicles is determined, so as to perceive the collision risk between the current vehicle and the surrounding vehicles and improve driving safety.
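A first-order estimate of the contact times in a scenario like fig. 3 can be sketched as follows. The patent gives no formulas, so dividing each gap by the corresponding closing-speed component is an assumption, and all names and numbers are illustrative.

```python
# Hedged kinematic sketch: each contact time is the current gap divided by
# the closing-speed component in that direction; a non-positive closing
# speed means no contact is predicted in that direction.

def contact_times(gap_long_m, gap_lat_m, closing_long_mps, closing_lat_mps):
    """Return (t_longitudinal, t_transverse) in seconds; None if not closing."""
    t_long = gap_long_m / closing_long_mps if closing_long_mps > 0 else None
    t_lat = gap_lat_m / closing_lat_mps if closing_lat_mps > 0 else None
    return t_long, t_lat

# E.g. a vehicle behind in the adjacent lane: 30 m behind and closing at
# 6 m/s, while drifting 3.5 m toward our lane at 1 m/s (illustrative values).
print(contact_times(30.0, 3.5, 6.0, 1.0))  # (5.0, 3.5)
```

These times would then feed the danger-zone comparison against the take-over time t0 described earlier.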
According to the danger sensing method for a vehicle provided by the embodiment of the application, when the driver's current degree of situational awareness loss reaches a certain level, the longitudinal time and the transverse time at which surrounding vehicles would contact the current vehicle can be calculated and the danger degree of the surrounding vehicles determined, so that the collision risk between the current vehicle and the surrounding vehicles is perceived and the driver is reminded to perform the relevant driving-safety take-over based on the degree of situational awareness loss, improving driving safety, practicality, and the driving experience. This solves the problems in the related art that the take-over early-warning system is designed in a single way, cannot effectively perceive the collision risk between the current vehicle and surrounding vehicles, and does not fully consider the complexity and uncertainty of the driver in the driver-vehicle-road closed-loop system of automated driving, so that the vehicle is difficult to take over safely when the driver's situational awareness is lost, the complete take-over time is prolonged, driving safety is low, and the driver's driving experience is reduced.
Next, a risk sensing apparatus for a vehicle according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 4 is a schematic structural view of a hazard sensing apparatus of a vehicle according to an embodiment of the present application.
As shown in fig. 4, the risk sensing device 10 of the vehicle includes: an identification module 100, a calculation module 200, and a sensing module 300.
Specifically, the recognition module 100 is configured to recognize a current degree of situational awareness loss of the driver based on the facial image of the driver.
The calculating module 200 is configured to calculate a longitudinal contact time and/or a lateral contact time of each surrounding vehicle in contact with the current vehicle in the current motion trajectory state of the current vehicle when the current situational awareness loss degree is a preset degree.
The sensing module 300 is configured to determine a dangerous area degree of each surrounding vehicle according to one or both of the longitudinal contact time and/or the lateral contact time and a total time for the driver to complete the takeover reaction, and sense a collision risk of the current vehicle with the surrounding vehicle based on the dangerous area degree.
Optionally, in one embodiment of the present application, the computing module 200 includes: an acquisition unit and a calculation unit.
The acquisition unit is configured to acquire boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state.
And the calculating unit is used for calculating the longitudinal contact time and/or the transverse contact time of each surrounding vehicle in contact with the current vehicle under the current motion track state of the current vehicle according to the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle and the real-time comprehensive environment state.
Optionally, in one embodiment of the present application, the identification module 100 includes: the device comprises an extraction unit, a judging unit, a first judging unit and a second judging unit.
The extraction unit is used for extracting eye position features and facial action features of the driver in the facial image.
And the judging unit is used for judging whether the driver meets the preset ambient environment observation condition according to the eye position characteristics and the facial action characteristics.
The first judging unit is configured to judge that the current situational awareness loss degree is the awareness-lost degree when the preset surrounding-environment observation condition is not met.

The second judging unit is configured to judge that the current situational awareness degree is the alert awareness degree when the preset surrounding-environment observation condition is met.
Optionally, in one embodiment of the present application, the judging unit includes: a calculation subunit, a detection subunit and a determination subunit.
The calculating subunit is used for calculating a plurality of positions of the midpoint of the two-eye connecting line according to the eye position characteristics.
The detection subunit is configured to take, among the plurality of positions of the midpoint of the binocular connecting line, the position held for the longest time as the reference position, to calculate the actual position of the midpoint of the driver's binocular connecting line in each frame according to the facial action features, and to detect the corresponding relationship between the actual position and the reference position.
And the judging subunit is used for judging that the preset ambient environment observation condition is met when the actual position is detected to be consistent with the reference position, or else judging that the preset ambient environment observation condition is not met.
Optionally, in one embodiment of the present application, the hazard sensing device 10 of the vehicle further includes: a first decision module and a second decision module.
The first judging module is configured to judge, before the boundary information of the lane in which the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state are acquired, that the preset danger alarm condition is met when the current situational awareness loss degree is the awareness-lost degree.
And the second judging module is used for acquiring the continuous observation time length of the driver when the current situation awareness degree is the alertness awareness degree, and judging that the preset dangerous alarm condition is met when the continuous observation time length is smaller than the preset safety time length.
Optionally, in one embodiment of the present application, the hazard sensing device 10 of the vehicle further includes: a detection module and a determination module.
The detection module is configured to detect the current working mode of the vehicle before the total time for the driver to complete the take-over reaction is determined according to the current degree of situational awareness loss.
And the determining module is used for determining the total time for the driver to finish taking over the reaction as the preset time when the current working mode is detected to be the first working mode, otherwise, determining the total time for the driver to finish taking over the reaction according to the current situation awareness loss degree.
Optionally, in one embodiment of the present application, the determining module includes: an acquisition unit and a matching unit.
The acquisition unit is configured to acquire the physiological state information and the emotional state information of the driver.
And the matching unit is used for matching the total time for the driver to finish taking over the reaction from a preset database according to the physiological state information and the emotion state information.
Alternatively, in one embodiment of the present application, the sensing module 300 includes: a third judging unit, a fourth judging unit, and a fifth judging unit.
The third judging unit is used for judging that when only longitudinal contact exists, the dangerous area degree is extremely dangerous when the longitudinal contact time is smaller than the total time, the dangerous area degree is more dangerous when the longitudinal contact time is larger than the total time and smaller than the sum of the total time and the first preset time, and the dangerous area degree is general dangerous area when the longitudinal contact time is larger than the sum of the total time and the first preset time.
And a fourth determination unit configured to, when only the lateral contact is made, determine that the dangerous area level is an extremely dangerous level when the lateral contact time is less than the total time, and determine that the dangerous area level is a more dangerous level when the lateral contact time is greater than the total time and less than the sum of the total time and a second preset time period, and determine that the dangerous area level is a general dangerous area when the lateral contact time is greater than the sum of the total time and the second preset time period.
And a fifth judging unit configured to judge, when both longitudinal contact and transverse contact exist, that the dangerous area degree is an extremely dangerous area when either contact time is less than the total time, a more dangerous area when either contact time is greater than the total time and less than the sum of the total time and the third preset duration, and a general dangerous area when both contact times are greater than that sum.
Optionally, in one embodiment of the present application, further includes: and a control module.
The control module is used for matching the optimal takeover alarming action based on the dangerous area degree after sensing the collision danger between the vehicle and the surrounding vehicles based on the dangerous area degree, and controlling the current vehicle to execute the optimal takeover alarming action on one or more surrounding vehicles.
It should be noted that the foregoing explanation of the embodiment of the risk sensing method of the vehicle is also applicable to the risk sensing device of the vehicle of the embodiment, and will not be repeated here.
According to the danger sensing device for a vehicle provided by the embodiment of the application, when the driver's current degree of situational awareness loss reaches a certain level, the longitudinal time and the transverse time at which surrounding vehicles would contact the current vehicle can be calculated and the danger degree of the surrounding vehicles determined in order to perceive the collision risk between the current vehicle and the surrounding vehicles, so that the driver is reminded to perform the relevant driving-safety take-over based on the degree of situational awareness loss, improving driving safety, practicality, and the driving experience. This solves the problems in the related art that the take-over early-warning system is designed in a single way, cannot effectively perceive the collision risk between the current vehicle and surrounding vehicles, and does not fully consider the complexity and uncertainty of the driver in the driver-vehicle-road closed-loop system of automated driving, so that the vehicle is difficult to take over safely when the driver's situational awareness is lost, the complete take-over time is prolonged, driving safety is low, and the driver's driving experience is reduced.
Fig. 5 is a schematic structural diagram of a vehicle according to an embodiment of the present application. The vehicle may include:
Memory 501, processor 502, and a computer program stored on memory 501 and executable on processor 502.
The processor 502 implements the hazard sensing method of the vehicle provided in the above embodiment when executing a program.
Further, the vehicle further includes:
a communication interface 503 for communication between the memory 501 and the processor 502.
Memory 501 for storing a computer program executable on processor 502.
The memory 501 may include high-speed RAM memory and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 501, the processor 502, and the communication interface 503 are implemented independently, the communication interface 503, the memory 501, and the processor 502 may be connected to each other via a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 501, the processor 502, and the communication interface 503 are integrated on a chip, the memory 501, the processor 502, and the communication interface 503 may perform communication with each other through internal interfaces.
The processor 502 may be a central processing unit (Central Processing Unit, abbreviated as CPU) or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC) or one or more integrated circuits configured to implement embodiments of the present application.
The embodiment of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the hazard sensing method of a vehicle as above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for example by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one, or a combination, of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above-described method embodiments may be carried out by a program instructing related hardware, where the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that they are illustrative and not to be construed as limiting the application; within the scope of the application, those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments.

Claims (20)

1. A hazard perception method for a vehicle, comprising the steps of:
identifying a current degree of situational awareness loss of a driver based on a facial image of the driver;
if the current degree of situational awareness loss is a preset degree, calculating a longitudinal contact time and/or a lateral contact time at which each surrounding vehicle would come into contact with the current vehicle in the current motion trajectory state of the current vehicle; and
determining a dangerous area degree for each surrounding vehicle according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete a take-over reaction, and sensing the collision risk between the current vehicle and the surrounding vehicles based on the dangerous area degree.
2. The method according to claim 1, wherein calculating the longitudinal contact time and/or the lateral contact time at which each surrounding vehicle comes into contact with the current vehicle in the current motion trajectory state of the current vehicle comprises:
acquiring boundary information of the lane where the current vehicle is located, a speed difference between the current vehicle and at least one surrounding vehicle, and a real-time comprehensive environment state;
and calculating, according to the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and the at least one surrounding vehicle, and the real-time comprehensive environment state, the longitudinal contact time and/or the lateral contact time at which each surrounding vehicle comes into contact with the current vehicle in the current motion trajectory state of the current vehicle.
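Purely as an illustrative aside (not part of the patented disclosure), the contact times of claim 2 can be sketched as simple time-to-contact estimates under a constant-relative-speed assumption. All names below are invented, and `env_factor` is a hypothetical scalar standing in for the real-time comprehensive environment state:

```python
# Hypothetical time-to-contact sketch for claim 2; not the patented algorithm.
# gap_long: longitudinal gap to the surrounding vehicle (m)
# rel_speed_long: longitudinal closing speed (m/s, positive = closing)
# lateral_offset: lateral distance to the surrounding vehicle (m)
# rel_speed_lat: lateral closing speed (m/s, positive = closing)
# env_factor: invented scalar standing in for the comprehensive environment state

def contact_times(gap_long, rel_speed_long, lateral_offset, rel_speed_lat,
                  env_factor=1.0):
    """Return (longitudinal_ttc, lateral_ttc) in seconds; None if not closing."""
    def ttc(gap, closing_speed):
        if closing_speed <= 0:  # the gap is not shrinking in this direction
            return None
        return env_factor * gap / closing_speed
    return ttc(gap_long, rel_speed_long), ttc(lateral_offset, rel_speed_lat)
```

For example, a 30 m longitudinal gap closing at 10 m/s yields a 3 s longitudinal contact time.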
3. The method of claim 1, wherein identifying the current degree of situational awareness loss of the driver comprises:
extracting eye position features and facial action features of the driver from the facial image;
judging, according to the eye position features and the facial action features, whether the driver satisfies a preset ambient environment observation condition;
when the preset ambient environment observation condition is not satisfied, determining that the current degree of situational awareness loss is the awareness loss degree;
and when the preset ambient environment observation condition is satisfied, determining that the current degree of situational awareness is the alertness awareness degree.
4. The method of claim 3, wherein judging whether the driver satisfies the preset ambient environment observation condition based on the eye position features and the facial action features comprises:
calculating a plurality of positions of the midpoint of the line connecting the two eyes according to the eye position features;
taking, among the plurality of positions, the position at which the midpoint of the line connecting the two eyes remains for the longest time as a reference position, calculating the actual position of the midpoint of the line connecting the driver's two eyes in each frame according to the facial action features, and detecting the correspondence between the actual position and the reference position;
and when the actual position is detected to be consistent with the reference position, determining that the preset ambient environment observation condition is satisfied; otherwise, determining that the preset ambient environment observation condition is not satisfied.
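As a rough, non-authoritative sketch of the check in claim 4: the most frequently held midpoint of the line connecting the two eyes serves as the reference position, and each frame's actual midpoint is compared against it. The quantization to pixel bins, the tolerance, and all function names are assumptions introduced here:

```python
from collections import Counter

def eye_midpoint(left_eye, right_eye):
    """Midpoint of the line connecting the two eyes, from (x, y) pixel positions."""
    return ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)

def observation_condition_met(midpoints, tolerance=5.0):
    """midpoints: per-frame (x, y) midpoints, pre-quantized to pixel bins.
    The most frequent bin (the position held for the longest time) is taken as
    the reference; the latest frame is 'consistent' if within tolerance of it."""
    reference, _count = Counter(midpoints).most_common(1)[0]
    latest = midpoints[-1]
    dist = ((latest[0] - reference[0]) ** 2 + (latest[1] - reference[1]) ** 2) ** 0.5
    return dist <= tolerance
```

In this sketch a driver whose head barely moves keeps the actual midpoint near the reference, which per claim 4 counts as satisfying the observation condition.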
5. The method according to claim 3, further comprising, before acquiring the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle, and the real-time comprehensive environment state:
if the current degree of situational awareness loss is the awareness loss degree, determining that the preset dangerous alarm condition is satisfied;
and if the current degree of situational awareness is the alertness awareness degree, acquiring the continuous observation duration of the driver, and determining that the preset dangerous alarm condition is satisfied when the continuous observation duration is less than a preset safety duration.
6. The method of claim 1, further comprising, before determining the total time for the driver to complete the take-over reaction from the current degree of situational awareness loss:
detecting a current working mode of the vehicle;
and when the current working mode is detected to be a first working mode, setting the total time for the driver to complete the take-over reaction to a preset time; otherwise, determining the total time for the driver to complete the take-over reaction from the current degree of situational awareness loss.
7. The method of claim 6, wherein determining the total time for the driver to complete the take-over reaction from the current degree of situational awareness loss comprises:
collecting physiological state information and emotional state information of the driver;
and matching, from a preset database, the total time for the driver to complete the take-over reaction according to the physiological state information and the emotional state information.
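A minimal sketch of the database matching in claim 7, with the state labels and times entirely invented for illustration; a production system would query a calibrated driver-state database rather than a literal dictionary:

```python
# Invented lookup table: (physiological state, emotional state) -> seconds.
TAKEOVER_TIME_S = {
    ("alert", "calm"): 1.5,
    ("alert", "agitated"): 2.0,
    ("fatigued", "calm"): 2.5,
    ("fatigued", "agitated"): 3.0,
}

def total_takeover_time(physiological_state, emotional_state, default=3.0):
    """Match the total take-over reaction time; fall back to a conservative
    default when the state pair is not found in the preset table."""
    return TAKEOVER_TIME_S.get((physiological_state, emotional_state), default)
```

Falling back to the longest (most conservative) time for unknown states is a design choice made here, not something the claim specifies.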
8. The method according to claim 1, wherein determining the dangerous area degree of each surrounding vehicle according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete the take-over reaction comprises:
if only longitudinal contact exists: when the longitudinal contact time is less than the total time, the dangerous area degree is an extremely dangerous degree; when the longitudinal contact time is greater than the total time and less than the sum of the total time and a first preset duration, the dangerous area degree is a relatively dangerous degree; and when the longitudinal contact time is greater than the sum of the total time and the first preset duration, the dangerous area degree is a general danger degree;
if only lateral contact exists: when the lateral contact time is less than the total time, the dangerous area degree is the extremely dangerous degree; when the lateral contact time is greater than the total time and less than the sum of the total time and a second preset duration, the dangerous area degree is the relatively dangerous degree; and when the lateral contact time is greater than the sum of the total time and the second preset duration, the dangerous area degree is the general danger degree;
and if longitudinal contact and lateral contact exist simultaneously: when either contact time is less than the total time, the dangerous area degree is the extremely dangerous degree; when either contact time is greater than the total time and less than the sum of the total time and a third preset duration, the dangerous area degree is the relatively dangerous degree; and when either contact time is greater than the sum of the total time and the third preset duration, the dangerous area degree is the general danger degree.
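The three-tier rules of claim 8 reduce to threshold comparisons against the driver's total take-over time. The sketch below is an assumption-laden illustration: the function name, the margin defaults, and the use of the minimum contact time when both directions exist are choices made here, not taken from the patent:

```python
EXTREME, RELATIVE, GENERAL = "extremely dangerous", "relatively dangerous", "general danger"

def classify_danger(total_time, longitudinal_ttc=None, lateral_ttc=None,
                    margin_long=1.0, margin_lat=1.0, margin_both=1.0):
    """Classify the dangerous area degree from contact times (s) and the total
    take-over time (s). The margins play the role of the first, second, and
    third preset durations of claim 8."""
    if longitudinal_ttc is not None and lateral_ttc is not None:
        t, margin = min(longitudinal_ttc, lateral_ttc), margin_both  # most urgent
    elif longitudinal_ttc is not None:
        t, margin = longitudinal_ttc, margin_long
    elif lateral_ttc is not None:
        t, margin = lateral_ttc, margin_lat
    else:
        return None  # no predicted contact in either direction
    if t < total_time:
        return EXTREME
    if t < total_time + margin:
        return RELATIVE
    return GENERAL
```

With a 2 s take-over time and a 1 s margin, a 1.5 s contact time is extremely dangerous, 2.5 s is relatively dangerous, and 4 s is only a general danger.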
9. The method according to claim 1, further comprising, after sensing the collision risk between the current vehicle and surrounding vehicles based on the dangerous area degree:
matching an optimal take-over alarm action based on the dangerous area degree, and controlling the current vehicle to execute the optimal take-over alarm action with respect to one or more surrounding vehicles.
10. A hazard sensing device for a vehicle, comprising:
a recognition module, configured to identify the current degree of situational awareness loss of a driver based on a facial image of the driver;
a computing module, configured to calculate, when the current degree of situational awareness loss is a preset degree, the longitudinal contact time and/or the lateral contact time at which each surrounding vehicle comes into contact with the current vehicle in the current motion trajectory state of the current vehicle; and
a sensing module, configured to determine the dangerous area degree of each surrounding vehicle according to one or both of the longitudinal contact time and the lateral contact time together with the total time for the driver to complete the take-over reaction, and to sense the collision risk between the current vehicle and the surrounding vehicles based on the dangerous area degree.
11. The apparatus of claim 10, wherein the computing module comprises:
an acquisition unit, configured to acquire boundary information of the lane where the current vehicle is located, a speed difference between the current vehicle and at least one surrounding vehicle, and a real-time comprehensive environment state;
and a calculating unit, configured to calculate, according to the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and the at least one surrounding vehicle, and the real-time comprehensive environment state, the longitudinal contact time and/or the lateral contact time at which each surrounding vehicle comes into contact with the current vehicle in the current motion trajectory state of the current vehicle.
12. The apparatus of claim 10, wherein the identification module comprises:
an extraction unit, configured to extract eye position features and facial action features of the driver from the facial image;
a judging unit, configured to judge, according to the eye position features and the facial action features, whether the driver satisfies a preset ambient environment observation condition;
a first determining unit, configured to determine, when the preset ambient environment observation condition is not satisfied, that the current degree of situational awareness loss is the awareness loss degree;
and a second determining unit, configured to determine, when the preset ambient environment observation condition is satisfied, that the current degree of situational awareness is the alertness awareness degree.
13. The apparatus according to claim 12, wherein the judging unit includes:
a calculating subunit, configured to calculate a plurality of positions of the midpoint of the line connecting the two eyes according to the eye position features;
a detection subunit, configured to take, among the plurality of positions, the position at which the midpoint of the line connecting the two eyes remains for the longest time as a reference position, to calculate the actual position of the midpoint of the line connecting the driver's two eyes in each frame according to the facial action features, and to detect the correspondence between the actual position and the reference position;
and a judging subunit, configured to determine that the preset ambient environment observation condition is satisfied when the actual position is detected to be consistent with the reference position, and otherwise to determine that the preset ambient environment observation condition is not satisfied.
14. The apparatus as recited in claim 12, further comprising:
the first judging module is used for judging that the preset dangerous alarm condition is met when the current situation consciousness loss degree is the consciousness loss degree before the boundary information of the lane where the current vehicle is located, the speed difference between the current vehicle and at least one surrounding vehicle and the real-time comprehensive environment state are obtained;
And the second judging module is used for acquiring the continuous observation time length of the driver when the current situation awareness degree is the alertness awareness degree, and judging that the preset dangerous alarm condition is met when the continuous observation time length is smaller than the preset safety time length.
15. The apparatus as recited in claim 10, further comprising:
a detection module, configured to detect a current working mode of the vehicle before the total time for the driver to complete the take-over reaction is determined from the current degree of situational awareness loss;
and a determining module, configured to set the total time for the driver to complete the take-over reaction to a preset time when the current working mode is detected to be the first working mode, and otherwise to determine the total time for the driver to complete the take-over reaction according to the current degree of situational awareness loss.
16. The apparatus of claim 15, wherein the determining module comprises:
an acquisition unit, configured to collect physiological state information and emotional state information of the driver;
and a matching unit, configured to match, from a preset database, the total time for the driver to complete the take-over reaction according to the physiological state information and the emotional state information.
17. The apparatus of claim 10, wherein the sensing module comprises:
a third determination unit, configured, when only longitudinal contact exists, to determine that the dangerous area degree is an extremely dangerous degree when the longitudinal contact time is less than the total time; that the dangerous area degree is a relatively dangerous degree when the longitudinal contact time is greater than the total time and less than the sum of the total time and a first preset duration; and that the dangerous area degree is a general danger degree when the longitudinal contact time is greater than the sum of the total time and the first preset duration;
a fourth determination unit, configured, when only lateral contact exists, to determine that the dangerous area degree is the extremely dangerous degree when the lateral contact time is less than the total time; that the dangerous area degree is the relatively dangerous degree when the lateral contact time is greater than the total time and less than the sum of the total time and a second preset duration; and that the dangerous area degree is the general danger degree when the lateral contact time is greater than the sum of the total time and the second preset duration;
and a fifth determination unit, configured, when longitudinal contact and lateral contact exist simultaneously, to determine that the dangerous area degree is the extremely dangerous degree when either contact time is less than the total time; that the dangerous area degree is the relatively dangerous degree when either contact time is greater than the total time and less than the sum of the total time and a third preset duration; and that the dangerous area degree is the general danger degree when either contact time is greater than the sum of the total time and the third preset duration.
18. The apparatus as recited in claim 10, further comprising:
a control module, configured to match the optimal take-over alarm action based on the dangerous area degree after the collision risk between the vehicle and surrounding vehicles is sensed based on the dangerous area degree, and to control the current vehicle to execute the optimal take-over alarm action with respect to one or more surrounding vehicles.
19. A vehicle, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the hazard perception method for a vehicle according to any one of claims 1 to 9.
20. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the hazard perception method for a vehicle according to any one of claims 1 to 9.
CN202310973566.1A 2023-08-03 2023-08-03 Dangerous sensing method and device for vehicle Pending CN116985792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310973566.1A CN116985792A (en) 2023-08-03 2023-08-03 Dangerous sensing method and device for vehicle

Publications (1)

Publication Number Publication Date
CN116985792A true CN116985792A (en) 2023-11-03

Family

ID=88522982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310973566.1A Pending CN116985792A (en) 2023-08-03 2023-08-03 Dangerous sensing method and device for vehicle

Country Status (1)

Country Link
CN (1) CN116985792A (en)

Similar Documents

Publication Publication Date Title
EP3488382B1 (en) Method and system for monitoring the status of the driver of a vehicle
Omidyeganeh et al. Intelligent driver drowsiness detection through fusion of yawning and eye closure
EP3588372B1 (en) Controlling an autonomous vehicle based on passenger behavior
Doshi et al. A comparative exploration of eye gaze and head motion cues for lane change intent prediction
CN112289003B (en) Method for monitoring end-of-driving behavior of fatigue driving and active safety driving monitoring system
US10430677B2 (en) Method for classifying driver movements
US20240000354A1 (en) Driving characteristic determination device, driving characteristic determination method, and recording medium
Chang et al. Driver fatigue surveillance via eye detection
Bergasa et al. Visual monitoring of driver inattention
KR20190134909A (en) The apparatus and method for Driver Status Recognition based on Driving Status Decision Information
KR101500016B1 (en) Lane Departure Warning System
KR101680833B1 (en) Apparatus and method for detecting pedestrian and alert
CN107323343A (en) A kind of safe driving method for early warning and system, automobile and readable storage medium storing program for executing
CN116985792A (en) Dangerous sensing method and device for vehicle
CN116513199A (en) Driver fatigue monitoring method integrating visual recognition and vehicle signals
EP4015311B1 (en) Vehicle driver assistance system, vehicle, driver assistance method, computer program and computer-readable medium
WO2021024905A1 (en) Image processing device, monitoring device, control system, image processing method, computer program, and recording medium
WO2021262166A1 (en) Operator evaluation and vehicle control based on eyewear data
Nowosielski Vision-based solutions for driver assistance
Nair et al. Smart System for Drowsiness and Accident Detection
CN115641569B (en) Driving scene processing method, device, equipment and medium
Srivastava Driver's drowsiness identification using eye aspect ratio with adaptive thresholding
CN111152653A (en) Fatigue driving detection method based on multi-information fusion
Hijaz et al. Driver Visual Focus of Attention Estimation in Autonomous Vehicles
JP7433155B2 (en) Electronic equipment, information processing device, estimation method, and estimation program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination