CN115320626B - Danger perception capability prediction method and device based on human-vehicle state and electronic equipment - Google Patents

Danger perception capability prediction method and device based on human-vehicle state and electronic equipment Download PDF

Info

Publication number
CN115320626B
CN115320626B CN202211237709.4A CN202211237709A CN115320626B CN 115320626 B CN115320626 B CN 115320626B CN 202211237709 A CN202211237709 A CN 202211237709A CN 115320626 B CN115320626 B CN 115320626B
Authority
CN
China
Prior art keywords
vehicle
information
time
driver
danger
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211237709.4A
Other languages
Chinese (zh)
Other versions
CN115320626A (en
Inventor
何云勇
何恩怀
高建平
刘自强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Highway Planning Survey and Design Institute Ltd
Original Assignee
Sichuan Highway Planning Survey and Design Institute Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Highway Planning Survey and Design Institute Ltd filed Critical Sichuan Highway Planning Survey and Design Institute Ltd
Priority to CN202211237709.4A priority Critical patent/CN115320626B/en
Publication of CN115320626A publication Critical patent/CN115320626A/en
Application granted granted Critical
Publication of CN115320626B publication Critical patent/CN115320626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/105Speed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0043Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0052Filtering, filters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/143Alarm means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2520/00Input parameters relating to overall vehicle dynamics
    • B60W2520/10Longitudinal speed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/223Posture, e.g. hand, foot, or seat position, turned or inclined
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00Input parameters relating to occupants
    • B60W2540/229Attention level, e.g. attentive to driving, reading or sleeping

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Emergency Alarm Devices (AREA)
  • Navigation (AREA)

Abstract

The application provides a method, a device and an electronic device for predicting danger perception capability based on human-vehicle states. Since the danger sensing ability of the driver is not only related to the state of the driver, but also related to the current driving state (for example, in a continuous longitudinal slope section, especially a continuous downhill section, the danger sensing ability is related to factors of the road itself due to belonging to an accident high-incidence area, and the danger sensing ability is reduced due to easy dispersion of the attention of the driver on such a section), the manner can take into consideration the factors of the human-vehicle state, and the preset danger sensing ability prediction model can predict the danger sensing ability and detect the danger sensing ability of the driver in real time.

Description

Danger perception capability prediction method and device based on human-vehicle state and electronic equipment
Technical Field
The application relates to the technical field of auxiliary driving, in particular to a method and a device for predicting danger perception capability based on human-vehicle states and electronic equipment.
Background
Danger perception is an objective ability of a person, which can be improved and enhanced by scientific training as long as it can be correctly recognized and evaluated. The hazard-sensing capability is defined as: and identifying, judging and deciding the dangerous source in the traffic dangerous scene and driving operation by the driver. The hazard source herein refers to all objects that may cause injury to the driver in the traffic scene, including some road obstacles, suddenly started vehicles, suddenly stopped vehicles, suddenly leaped pedestrians, and the like.
The driver processes the information of the danger source through the processes of three stages of visual cognition, decision judgment and driving operation. In the visual cognition stage, the visual organs of a driver are mainly relied on, and meanwhile, the sense organs such as the auditory sense and the touch sense are combined. And in the decision-making stage, road traffic information is input, integrated and output by means of the nervous system of the brain of the driver. And finally, the vehicle is controlled through coordination action, so that a danger source is avoided.
The danger perception capability of the driver can change along with various factors of the driver in the driving process, how to monitor the danger perception capability of the driver and remind the driver when the driver is about to descend so as to effectively reduce the probability of accidents, and the problem to be solved is solved.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for predicting a risk perception capability based on a human-vehicle state, and an electronic device, so as to predict a risk perception capability of a driver, and prompt when the risk perception capability is lower than a threshold value, so as to effectively reduce the probability of an accident.
In order to achieve the above object, embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a method for predicting risk perception capability based on human-vehicle states, including: acquiring vehicle state information and a driver image in a measurement period, wherein the vehicle state information is used for reflecting a vehicle position and a vehicle state; determining a driver danger perception capability parameter based on the vehicle state information, the driver image and a preset danger perception capability prediction model; and when the danger perception capability parameter of the driver is lower than a threshold value, generating prompt information to prompt the driver.
In the embodiment of the application, by acquiring the vehicle state information and the driver image in the measurement period and using the preset danger sensing capability prediction model, the danger sensing capability parameter of the driver can be determined, and prompt information is generated to prompt the driver when the danger sensing capability parameter is lower than the threshold value. Since the danger sensing capability of the driver is not only related to the state of the driver, but also related to the current driving state (for example, in a continuous longitudinal slope section, especially a continuous downhill slope section, the road belongs to an accident-prone area, and the road is not only related to factors of the road, and in such a section, the driver's attention is easily dispersed, which is also an important reason, and the danger sensing capability is reduced), the manner can take into consideration the factors of the human-vehicle state, and the preset danger sensing capability prediction model can predict the danger sensing capability and detect the danger sensing capability of the driver in real time.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the determining a driver danger sensing capability parameter based on the vehicle state information, the driver image, and a preset danger sensing capability prediction model includes: filtering the vehicle body attitude measurement data to obtain vehicle body attitude accurate data; converting the accurate data of the vehicle body posture into a navigation coordinate system to obtain vehicle body posture information; carrying out image recognition on the driver image, and determining body posture information and face state information reflecting the real-time state of the driver; inputting the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information and the facial state information into the danger perception capability prediction model; and acquiring the driver danger perception capability parameters output by the danger perception capability prediction model.
In the implementation mode, the accurate vehicle body attitude data obtained by filtering the vehicle body attitude measurement data can be converted into the vehicle body attitude information under the navigation coordinate system, so that the noise can be effectively suppressed, and the accurate vehicle body attitude information can be obtained. The vehicle body posture information generally reflects the driving state, and whether the driver is easy to relax and the attention is reduced. Therefore, accurate vehicle body posture information is beneficial to effectively predicting the danger perception capability of the driver. And driver's health posture information and facial state information can effectively reflect driver's real-time state, judge whether driver is in the state that leads to danger perception ability decline such as lax, tired out.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the performing filtering processing on the vehicle body posture measurement data to obtain vehicle body posture accurate data includes:
through the first step
Figure 640387DEST_PATH_IMAGE001
Equation of measurement of time of day
Figure 939781DEST_PATH_IMAGE002
Measured to obtain
Figure 369495DEST_PATH_IMAGE001
Measurement vector at time:
Figure 437945DEST_PATH_IMAGE003
wherein, the first and the second end of the pipe are connected with each other,
Figure 45644DEST_PATH_IMAGE004
is as follows
Figure 583067DEST_PATH_IMAGE001
The measurement vector of the time of day,
Figure 567203DEST_PATH_IMAGE005
is as follows
Figure 490160DEST_PATH_IMAGE001
A matrix of the sensitivity of the measurement at the time,
Figure 518027DEST_PATH_IMAGE006
is as follows
Figure 792014DEST_PATH_IMAGE001
The estimated error vector for a time instant,
Figure 48683DEST_PATH_IMAGE007
is as follows
Figure 105107DEST_PATH_IMAGE001
The white noise vector is measured at a moment,
Figure 789029DEST_PATH_IMAGE008
is as follows
Figure 815891DEST_PATH_IMAGE001
The acceleration at the moment of time is,
Figure 876251DEST_PATH_IMAGE009
is as follows
Figure 757488DEST_PATH_IMAGE001
The angular velocity of the moment in time is,
Figure 877891DEST_PATH_IMAGE010
to represent
Figure 611623DEST_PATH_IMAGE009
Derivative of (1) is referred to as
Figure 210095DEST_PATH_IMAGE001
The angular acceleration at the moment in time is,
Figure 211418DEST_PATH_IMAGE011
is as follows
Figure 237142DEST_PATH_IMAGE001
A motion gesture at a time, and:
Figure 441859DEST_PATH_IMAGE012
Figure 326245DEST_PATH_IMAGE013
Figure 463965DEST_PATH_IMAGE014
measuring the vector
Figure 909859DEST_PATH_IMAGE004
Substituting into a preset improved Kalman filtering equation to calculate accurate data of the filtered vehicle body attitude
Figure 133030DEST_PATH_IMAGE015
Figure 73304DEST_PATH_IMAGE016
Respectively represent
Figure 65531DEST_PATH_IMAGE017
The filtered accurate value.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the modified kalman filter equation is:
Figure 714949DEST_PATH_IMAGE018
Figure 159837DEST_PATH_IMAGE019
wherein the content of the first and second substances,
Figure 356332DEST_PATH_IMAGE020
in order to estimate the error before the update,
Figure 203065DEST_PATH_IMAGE021
is composed of
Figure 741494DEST_PATH_IMAGE022
The state matrix of the time of day,
Figure 687060DEST_PATH_IMAGE023
first, the
Figure 969136DEST_PATH_IMAGE022
The error in the estimation of the time of day,
Figure 670376DEST_PATH_IMAGE024
is a Kalman gain matrix, an
Figure 894553DEST_PATH_IMAGE024
The following conditions are satisfied:
Figure 314033DEST_PATH_IMAGE025
Figure 399801DEST_PATH_IMAGE026
Figure 971859DEST_PATH_IMAGE027
Figure 117669DEST_PATH_IMAGE028
Figure 24445DEST_PATH_IMAGE029
wherein, the first and the second end of the pipe are connected with each other,
Figure 897592DEST_PATH_IMAGE030
is a matrix of the units,
Figure 573424DEST_PATH_IMAGE031
is composed of
Figure 890136DEST_PATH_IMAGE005
The transpose of (a) is performed,
Figure 32011DEST_PATH_IMAGE032
is a first
Figure 459581DEST_PATH_IMAGE033
The noise covariance of the uncorrelated devices at a time,
Figure 989920DEST_PATH_IMAGE034
is the covariance of zero-mean white noise,
Figure 726801DEST_PATH_IMAGE035
and
Figure 342590DEST_PATH_IMAGE036
is as follows
Figure 839430DEST_PATH_IMAGE001
A priori, a posteriori covariance matrix of the time of day,
Figure 975007DEST_PATH_IMAGE037
is as follows
Figure 633522DEST_PATH_IMAGE033
The state matrix of the time of day,
Figure 736607DEST_PATH_IMAGE038
is as follows
Figure 755248DEST_PATH_IMAGE033
The a-posteriori covariance matrix of the time of day,
Figure 525757DEST_PATH_IMAGE039
for the purpose of the updated estimation error,
Figure 89594DEST_PATH_IMAGE040
is as follows
Figure 693357DEST_PATH_IMAGE033
The estimated error after the time update.
In the implementation mode, the improved Kalman algorithm is adopted to carry out filtering processing on the vehicle body attitude measurement data, so that noise can be effectively suppressed.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the covariance of uncorrelated device noise
Figure 266421DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 625858DEST_PATH_IMAGE042
The following fitness function is satisfied:
Figure 875443DEST_PATH_IMAGE043
wherein, the first and the second end of the pipe are connected with each other,
Figure 953120DEST_PATH_IMAGE044
Figure 329875DEST_PATH_IMAGE045
in order to be an integral term, the integral term,
Figure 294551DEST_PATH_IMAGE046
Figure 731349DEST_PATH_IMAGE047
are all initial integral terms;
covariance of uncorrelated device noise
Figure 30743DEST_PATH_IMAGE041
Comprises the following steps:
Figure 211189DEST_PATH_IMAGE048
covariance of zero mean white noise
Figure 528906DEST_PATH_IMAGE042
Comprises the following steps:
Figure 871026DEST_PATH_IMAGE049
wherein the content of the first and second substances,
Figure 923296DEST_PATH_IMAGE050
representing an identity matrix.
In this implementation, the covariance of uncorrelated device noise when Kalman filtering is used
Figure 124076DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 578191DEST_PATH_IMAGE042
Often, the filter needs to be selected according to practical experience, so that it is difficult to obtain an approximate value, and the filtering precision is reduced. This approach to covariance of uncorrelated device noise
Figure 91212DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 880046DEST_PATH_IMAGE034
The two parameters construct a fitness function, so that the steady-state error can be effectively reduced, and the filtering precision is improved.
With reference to the second possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, a navigation coordinate system is taken
Figure 136715DEST_PATH_IMAGE051
The axis is directed to the true north,
Figure 179757DEST_PATH_IMAGE052
the axis is directed to the right east,
Figure 614412DEST_PATH_IMAGE053
and the vertical horizontal plane points to the ground, and the accurate data of the vehicle body posture is converted into a navigation coordinate system to obtain vehicle body posture information, wherein the method comprises the following steps:
accurately measuring the body attitude
Figure 110115DEST_PATH_IMAGE015
Substituting into a coordinate conversion equation:
Figure 419743DEST_PATH_IMAGE054
Figure 582871DEST_PATH_IMAGE056
Figure 437694DEST_PATH_IMAGE057
Figure 434076DEST_PATH_IMAGE058
Figure 32547DEST_PATH_IMAGE059
Figure 50182DEST_PATH_IMAGE060
Figure 607065DEST_PATH_IMAGE061
Figure 326628DEST_PATH_IMAGE062
wherein the content of the first and second substances,
Figure 728791DEST_PATH_IMAGE063
in order to convert the matrix, the first and second matrices,
Figure 600932DEST_PATH_IMAGE064
respectively represent
Figure 813870DEST_PATH_IMAGE065
The exact value of the filtered value is then,
Figure 37041DEST_PATH_IMAGE066
respectively represent
Figure 977315DEST_PATH_IMAGE067
The exact value of the filtered value is,
Figure 687651DEST_PATH_IMAGE068
respectively represent
Figure 320757DEST_PATH_IMAGE069
The filtered accurate value is calculated to obtain the vehicle body attitude information under the navigation coordinate system
Figure 271306DEST_PATH_IMAGE070
Figure 15271DEST_PATH_IMAGE071
And
Figure 862004DEST_PATH_IMAGE072
in the implementation mode, the accurate data of the vehicle body posture can be quickly and accurately converted into the vehicle body posture information under the navigation coordinate system.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, after the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information, and the facial state information are input into the risk perception capability prediction model, the risk perception capability prediction model performs the following processing:
determining whether the vehicle runs on a continuous longitudinal slope section or not based on the vehicle positioning information and the real-time vehicle speed information;
if yes, the posture information of the vehicle body is obtained
Figure 649701DEST_PATH_IMAGE070
Figure 581885DEST_PATH_IMAGE071
And
Figure 614694DEST_PATH_IMAGE072
body posture information
Figure 315934DEST_PATH_IMAGE073
And facial state information
Figure 25264DEST_PATH_IMAGE074
=
Figure 694011DEST_PATH_IMAGE075
Substituting into a first danger perception capability function
Figure 779779DEST_PATH_IMAGE076
Calculating
Figure 335525DEST_PATH_IMAGE076
And outputs:
Figure 229138DEST_PATH_IMAGE077
Figure 604756DEST_PATH_IMAGE078
wherein, the first and the second end of the pipe are connected with each other,
Figure 477903DEST_PATH_IMAGE079
respectively representing acceleration, angular velocity and motion attitude,
Figure 419314DEST_PATH_IMAGE080
is as follows
Figure 470447DEST_PATH_IMAGE081
A first weight value of the item information,
Figure 615252DEST_PATH_IMAGE082
is as follows
Figure 42822DEST_PATH_IMAGE081
A risk perception capability value of the item information;
if not, the vehicle body posture information is acquired
Figure 573160DEST_PATH_IMAGE070
Figure 575620DEST_PATH_IMAGE071
And
Figure 191409DEST_PATH_IMAGE072
body posture information
Figure 422671DEST_PATH_IMAGE073
And facial state information
Figure 555318DEST_PATH_IMAGE074
=
Figure 479412DEST_PATH_IMAGE075
Substituting a second danger-sensing capability function
Figure 582497DEST_PATH_IMAGE083
Calculating
Figure 617449DEST_PATH_IMAGE084
And outputs:
Figure 106068DEST_PATH_IMAGE085
Figure 201063DEST_PATH_IMAGE078
wherein the content of the first and second substances,
Figure 791445DEST_PATH_IMAGE086
is as follows
Figure 115241DEST_PATH_IMAGE081
A second weight value of the item information,
Figure 740257DEST_PATH_IMAGE082
is as follows
Figure 474995DEST_PATH_IMAGE081
A risk perception capability value of the item information.
In this implementation, because the continuous longitudinal slope section belongs to the accident-prone section, in addition to the risk factors of the road itself, when the driver drives on the continuous longitudinal slope section, the driver is more easily relaxed and tired compared with other driving scenes, thereby causing the reduction of the danger sensing capability. Whether the vehicle runs on the continuous longitudinal slope section or not is determined through the vehicle positioning information and the real-time vehicle speed information, different danger sensing capability functions are given based on the judgment result, so that differentiated danger sensing capability numerical calculation is performed on the two conditions, and whether the danger sensing capability of the driver falls or is about to fall can be effectively predicted. For the continuous longitudinal slope section, the model adopts a relatively more sensitive calculation mode, for example, higher weight is given to the body posture information and the face state information of the driver, so that when the numerical values corresponding to the information belong to low numerical values, the numerical values can be more reflected in the danger perception capability numerical values predicted by the model.
In a second aspect, an embodiment of the present application provides a device for predicting risk perception capability based on human-vehicle states, including: an information acquisition unit for acquiring vehicle state information and a driver image within a measurement period, wherein the vehicle state information is used for reflecting a vehicle position and a vehicle state; the parameter calculation unit is used for determining a driver danger perception capability parameter based on the vehicle state information, the driver image and a preset danger perception capability prediction model; and the danger prompting unit is used for generating prompting information to prompt the driver when the danger perception capability parameter of the driver is lower than a threshold value.
In a third aspect, an embodiment of the present application provides a storage medium, where the storage medium includes a stored program, where, when the program runs, a device in which the storage medium is located is controlled to execute the method for predicting human-vehicle state-based risk perception capability according to any one of the first aspect or possible implementation manners of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory is configured to store information including program instructions, and the processor is configured to control execution of the program instructions, where the program instructions are loaded and executed by the processor to implement the human-vehicle state-based risk awareness capability prediction method according to the first aspect or any one of possible implementation manners of the first aspect.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is an application scenario diagram of a risk perception capability prediction method based on a human-vehicle state according to an embodiment of the present application.
Fig. 2 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a risk perception capability prediction method based on a human-vehicle state according to an embodiment of the present application.
Fig. 4 is a block diagram of a risk perception capability prediction apparatus based on a human-vehicle state according to an embodiment of the present application.
An icon: 10-an electronic device; 11-a memory; 12-a communication module; 13-a bus; 14-a processor; 110-a camera; 120-GPS; 130-vehicle speed sensor; 141-a three-axis gyroscope; 142-a three-axis accelerometer; 143-a three-axis magnetometer; 150-intelligent terminal.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a view illustrating an application scenario of a risk perception capability prediction method based on a human-vehicle state according to an embodiment of the present application.
In this embodiment, in order to implement a method for predicting a risk perception capability based on a human-vehicle state, corresponding configuration needs to be performed: the vehicle-mounted computer carried by the vehicle can be used to cooperate with the three-axis accelerometer 142, the three-axis gyroscope 141 and the three-axis magnetometer 143 (for convenience of explanation and simplification of a data processing process, in this embodiment, the three-axis accelerometer 142, the three-axis gyroscope 141 and the three-axis magnetometer 143 are installed at the same position of the vehicle for explanation), so that vehicle body attitude measurement data (the vehicle body attitude measurement data is in a vehicle coordinate system) can be obtained in real time; and, the vehicle-mounted computer can also obtain vehicle positioning information through a positioning device (such as a GPS 120) of the vehicle-mounted computer, and obtain real-time vehicle speed information through a vehicle speed sensor 130 mounted on the vehicle. The cab is provided with a camera 110 (which is required to be able to photograph the position of the upper body and face of the driver on the seat, such as the steering wheel, the middle upper part of the windshield, etc.), and is mainly used for photographing an image of the driver so as to analyze the body posture information and the face state information of the driver in the following. Meanwhile, an intelligent terminal 150 (different from a vehicle-mounted computer) is also required to be arranged in the cockpit, and the intelligent terminal 150 can be a smart phone, which is the most convenient.
The risk perception capability prediction method based on the human-vehicle state can be operated by the electronic device 10. The electronic device 10 may be a server or an intelligent terminal 150, but no matter whether the server operates a human-vehicle state-based risk perception capability prediction method or the intelligent terminal 150 operates the intelligent terminal 150 based on a human-vehicle state risk perception capability prediction method, an intelligent terminal 150 (for example, an intelligent mobile phone) needs to be placed in a driving cabin to remind a driver (the reminding is different from the reminding given by a vehicle-mounted computer).
It should be noted that, if the electronic device 10 is a server, the vehicle-mounted computer and the camera 110 are in communication connection with the server, and the server is connected with the intelligent terminal 150 (for example, a smart phone), so that data such as vehicle positioning information, real-time vehicle speed information, vehicle body posture measurement data, and a driver image detected in real time can be transmitted to the server, and the server can obtain a driver danger sensing capability parameter by using a danger sensing capability prediction method based on a human-vehicle state through the information, and when the driver danger sensing capability parameter is lower than a threshold value, generate a prompt message to be sent to the intelligent terminal 150, so that the intelligent terminal 150 prompts the driver (for example, prompts in a voice mode, an alarm sound mode, and the like). If the electronic device 10 is an intelligent terminal 150 (e.g., a smart phone), the vehicle-mounted computer and the camera 110 can establish a communication connection with the intelligent terminal 150, so as to transmit data such as real-time detected vehicle positioning information, real-time vehicle speed information, vehicle body posture measurement data, driver images and the like to the intelligent terminal 150, the intelligent terminal 150 may obtain the driver danger sensing capability parameter by operating the danger sensing capability prediction method based on the human-vehicle state through the information, and generate the prompt information to prompt the driver (for example, prompt in a voice mode, a warning sound mode, or the like) when the driver danger sensing capability parameter is lower than the threshold value.
Referring to fig. 2, fig. 2 is a block diagram of an electronic device 10 according to an embodiment of the present disclosure.
Illustratively, the electronic device 10 may include: a communication module 12 connected to the outside world via a network, one or more processors 14 for executing program instructions, a bus 13, and a different form of memory 11, such as a disk, ROM, or RAM, or any combination thereof. And, the electronic device 10 also has a display screen on which the card may be displayed. The memory 11, the communication module 12, and the processor 14 may be connected by a bus 13.
Illustratively, the memory 11 stores a program. The processor 14 may call and execute the programs from the memory 11, so that the risk perception capability prediction method based on the human-vehicle state can be implemented by executing the programs.
In order to predict the risk perception capability of the driver, the electronic device 10 may be used to operate a risk perception capability prediction method based on the human-vehicle state.
Referring to fig. 3, fig. 3 is a flowchart of a method for predicting danger sensing capability based on human-vehicle status according to an embodiment of the present disclosure. The danger perceptibility prediction method based on the human-vehicle state may include step S10, step S20, and step S30.
First, the electronic device 10 may perform step S10.
Step S10: and acquiring vehicle state information and a driver image in the measuring period, wherein the vehicle state information is used for reflecting the vehicle position and the vehicle state.
In this embodiment, the electronic device 10 may obtain vehicle state information and a driver image in a measurement time period, where the vehicle state information may include vehicle positioning information (which may be obtained by positioning through the GPS120 and then sent to the electronic device 10 by a vehicle-mounted computer), real-time vehicle speed information (which may be obtained by processing real-time vehicle speed information obtained by detecting sensors disposed at a transmission shaft, an engine, and the like and sent to the electronic device 10 by the vehicle-mounted computer), and vehicle body attitude measurement data (which may be obtained by detecting the vehicle attitude by the three-axis accelerometer 142, the three-axis gyroscope 141, and the three-axis magnetometer 143 and then sending the vehicle attitude measurement data to the electronic device 10 by the vehicle-mounted computer). The vehicle state information can effectively reflect the vehicle position and the vehicle state. The driver image may be captured by the camera 110 and sent to the electronic device 10 (one driver image may be captured and sent, or multiple driver images may be captured and sent).
After acquiring the vehicle state information and the driver image within the measurement period, the electronic device 10 may perform step S20.
Step S20: and determining a driver danger perception capability parameter based on the vehicle state information, the driver image and a preset danger perception capability prediction model.
In this embodiment, the electronic device 10 may determine the risk perception capability parameter of the driver based on the vehicle state information, the driver image, and a preset risk perception capability prediction model.
Accurate vehicle body posture information is beneficial to effectively predicting the danger perception capability of a driver. And because cost factors need to be considered when the sensors are selected, the selected sensors often have some error characteristics, random noise in output information of the sensors is easily influenced by external environment, and the noise statistical characteristics are inaccurate. Therefore, here, the vehicle body attitude measurement data may be subjected to filtering processing:
in this embodiment, a vehicle coordinate system may be defined
Figure 801940DEST_PATH_IMAGE087
The center of the vehicle, namely the middle point of the length and the width is taken as the origin,
Figure 178695DEST_PATH_IMAGE088
the shaft points to the vehicle head,
Figure 658218DEST_PATH_IMAGE052
the shaft is directed to the right of the vehicle,
Figure 577239DEST_PATH_IMAGE053
the vertical horizontal plane is directed to the ground. Since the sensors (the three-axis accelerometer 142, the three-axis gyroscope 141 and the three-axis magnetometer 143) are all installed at the same position of the vehicle, the three-axis acceleration
Figure 142212DEST_PATH_IMAGE089
Three-axis angular velocity
Figure 791499DEST_PATH_IMAGE090
And three-axis magnetic induction
Figure 374796DEST_PATH_IMAGE091
And the position of the sensor in the vehicle coordinate system
Figure 716916DEST_PATH_IMAGE092
Related, there are:
Figure 503606DEST_PATH_IMAGE012
, (1)
Figure 972896DEST_PATH_IMAGE013
, (2)
Figure 161432DEST_PATH_IMAGE014
, (3)
wherein, the first and the second end of the pipe are connected with each other,
Figure 674453DEST_PATH_IMAGE008
is as follows
Figure 197707DEST_PATH_IMAGE001
The acceleration at the moment of time is,
Figure 719955DEST_PATH_IMAGE009
is as follows
Figure 762998DEST_PATH_IMAGE001
The angular velocity of the moment of time is,
Figure 194723DEST_PATH_IMAGE011
is as follows
Figure 956005DEST_PATH_IMAGE001
The motion attitude at the moment.
The kalman filtering algorithm may be modified to filter the measurement data obtained from the sensors (i.e., body attitude measurement data). First, the
Figure 16365DEST_PATH_IMAGE001
The measurement equation at time is:
Figure 428761DEST_PATH_IMAGE002
, (4)
wherein the content of the first and second substances,
Figure 283584DEST_PATH_IMAGE004
is as follows
Figure 532163DEST_PATH_IMAGE001
The measurement vector of the time of day,
Figure 146946DEST_PATH_IMAGE005
is a first
Figure 164581DEST_PATH_IMAGE001
A matrix of the sensitivity of the measurement at the time,
Figure 455885DEST_PATH_IMAGE006
is as follows
Figure 175448DEST_PATH_IMAGE001
The estimated error vector for a time instant,
Figure 577610DEST_PATH_IMAGE007
is as follows
Figure 449752DEST_PATH_IMAGE001
The measured white noise vector at that moment.
To obtain a
Figure 925339DEST_PATH_IMAGE001
Measurement vector at time:
Figure 617351DEST_PATH_IMAGE003
, (5)
wherein the content of the first and second substances,
Figure 88784DEST_PATH_IMAGE010
is composed of
Figure 64699DEST_PATH_IMAGE009
The first derivative of (a).
The measurement vector may then be used
Figure 697806DEST_PATH_IMAGE004
Substituting a preset improved Kalman filtering equation, wherein the improved Kalman filtering equation is as follows:
Figure 142694DEST_PATH_IMAGE018
, (6)
Figure 637391DEST_PATH_IMAGE019
, (7)
wherein the content of the first and second substances,
Figure 484124DEST_PATH_IMAGE020
in order to estimate the error before the update,
Figure 288132DEST_PATH_IMAGE021
is composed of
Figure 204005DEST_PATH_IMAGE022
The state matrix of the time of day,
Figure 751661DEST_PATH_IMAGE023
first, the
Figure 718480DEST_PATH_IMAGE022
The error in the estimation of the time of day,
Figure 175612DEST_PATH_IMAGE024
is a kalman gain matrix. And is provided with
Figure 595092DEST_PATH_IMAGE024
Satisfies the following conditions, can be solved
Figure 946439DEST_PATH_IMAGE024
,:
Figure 751453DEST_PATH_IMAGE025
, (8)
Figure 631684DEST_PATH_IMAGE026
, (9)
Figure 492455DEST_PATH_IMAGE093
, (10)
Figure 647493DEST_PATH_IMAGE094
, (11)
Figure 57746DEST_PATH_IMAGE029
, (12)
Wherein the content of the first and second substances,
Figure 640037DEST_PATH_IMAGE030
is a matrix of the units,
Figure 17798DEST_PATH_IMAGE031
is composed of
Figure 445368DEST_PATH_IMAGE005
The method (2) is implemented by the following steps,
Figure 241286DEST_PATH_IMAGE032
is a first
Figure 679964DEST_PATH_IMAGE033
The noise covariance of the uncorrelated devices at a time,
Figure 748283DEST_PATH_IMAGE034
is the covariance of zero-mean white noise,
Figure 713965DEST_PATH_IMAGE035
and
Figure 849542DEST_PATH_IMAGE036
is a first
Figure 242477DEST_PATH_IMAGE001
A priori, a posteriori covariance matrix of the time of day,
Figure 594830DEST_PATH_IMAGE037
is as follows
Figure 364203DEST_PATH_IMAGE033
The state matrix of the time of day,
Figure 603554DEST_PATH_IMAGE038
is as follows
Figure 469790DEST_PATH_IMAGE033
The a-posteriori covariance matrix of the time of day,
Figure 794592DEST_PATH_IMAGE039
for the purpose of the updated estimation error,
Figure 102076DEST_PATH_IMAGE040
is as follows
Figure 976360DEST_PATH_IMAGE033
The estimated error after the time update.
Covariance of uncorrelated device noise using Kalman filtering
Figure 976677DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 788776DEST_PATH_IMAGE034
Often, the filter needs to be selected according to practical experience, so that it is difficult to obtain an approximate value, and the filtering precision is reduced. Therefore, to overcome this drawback, the two parameters (covariance of uncorrelated device noise) can be matched
Figure 916263DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 130206DEST_PATH_IMAGE042
) The following fitness function is constructed:
Figure 301425DEST_PATH_IMAGE095
, (13)
wherein, the first and the second end of the pipe are connected with each other,
Figure 115666DEST_PATH_IMAGE044
Figure 30532DEST_PATH_IMAGE045
in order to be an integral term of the light,
Figure 630141DEST_PATH_IMAGE046
Figure 720063DEST_PATH_IMAGE047
are all initial integral terms.
Covariance of uncorrelated device noise
Figure 241174DEST_PATH_IMAGE041
May be configured such that:
Figure 959732DEST_PATH_IMAGE048
, (14)
covariance of zero mean white noise
Figure 397535DEST_PATH_IMAGE042
May be configured such that:
Figure 176135DEST_PATH_IMAGE049
, (15)
wherein the content of the first and second substances,
Figure 184542DEST_PATH_IMAGE050
representing an identity matrix.
Covariance of uncorrelated device noise in this manner
Figure 457523DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 500565DEST_PATH_IMAGE042
The two parameters construct a fitness function, so that the steady-state error can be effectively reduced, and the filtering precision is improved.
Based on this, equations (13), (14) and (15) can be substituted into equations (8) to (12), and the kalman gain matrix can be solved by a computer
Figure 450067DEST_PATH_IMAGE096
Thereby realizing the measurement of the vehicle body attitude
Figure 460617DEST_PATH_IMAGE003
Obtaining the filtered accurate data of the vehicle body posture
Figure 520977DEST_PATH_IMAGE015
In the mode, the improved Kalman algorithm is adopted to filter the vehicle body attitude measurement data, so that the noise can be effectively suppressed, and the method does not depend on artificial experience.
After the accurate data of the vehicle body posture are obtained, the accurate data of the vehicle body posture can be converted into a navigation coordinate system, and vehicle body posture information is obtained.
For example, a navigation coordinate system can be taken
Figure 684105DEST_PATH_IMAGE051
The axis is directed to the true north,
Figure 286731DEST_PATH_IMAGE052
the axis is directed to the right east,
Figure 535310DEST_PATH_IMAGE097
vertical horizontal plane directionGround, then accurately data the body attitude
Figure 399361DEST_PATH_IMAGE015
Substituting into a coordinate conversion equation:
Figure 666263DEST_PATH_IMAGE054
, (15)
Figure 691988DEST_PATH_IMAGE056
, (16)
Figure 162283DEST_PATH_IMAGE057
, (17)
Figure 580758DEST_PATH_IMAGE058
, (18)
Figure 718478DEST_PATH_IMAGE059
, (19)
Figure 915104DEST_PATH_IMAGE060
, (20)
Figure 121963DEST_PATH_IMAGE061
, (21)
Figure 327817DEST_PATH_IMAGE062
, (22)
wherein the content of the first and second substances,
Figure 320044DEST_PATH_IMAGE063
in order to convert the matrix, the first and second matrices,
Figure 435374DEST_PATH_IMAGE064
respectively represent
Figure 880261DEST_PATH_IMAGE065
The exact value of the filtered value is,
Figure 624227DEST_PATH_IMAGE066
respectively represent
Figure 720227DEST_PATH_IMAGE067
The exact value of the filtered value is,
Figure 524235DEST_PATH_IMAGE098
respectively represent
Figure 190840DEST_PATH_IMAGE069
The filtered accurate value.
Through calculation, the vehicle body attitude information under the navigation coordinate system can be obtained
Figure 489228DEST_PATH_IMAGE070
Figure 190468DEST_PATH_IMAGE071
And
Figure 165377DEST_PATH_IMAGE072
. The method can quickly and accurately convert the accurate data of the vehicle body posture into the vehicle body posture information under the navigation coordinate system.
The electronic device 10 may also perform image recognition on the driver image to determine body posture information and facial state information reflecting the real-time state of the driver.
For example, the electronic device 10 may utilize a gesture recognition algorithm and a face recognition algorithm, and since the body gesture information of the driver can be very accurately recognized (for example, a gesture recognition algorithm based on random forest, a gesture recognition algorithm based on depth science, etc.) and the face state information of the driver can be extracted (for example, an expression recognition algorithm, facial features extracted therein, such as eye movements, blinking times, eyebrows, nose and mouth micro-movements, etc. occurring in a measurement period, can be utilized), the specific process of obtaining the body gesture information and the face state information through the driver image is not repeated herein.
After obtaining the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, and the body posture information and the face state information, the information may be input to a preset risk perception capability prediction model (the trained risk perception capability prediction model is preset in the electronic device 10).
For example, the danger awareness ability prediction model may perform the following processing after receiving the input vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, and the body posture information and the face state information:
because the continuous longitudinal slope section belongs to the accident-prone section, except the dangerous factors of the road, when a driver drives on the continuous longitudinal slope section, the driver is easy to relax and fatigue compared with other driving scenes, and accordingly the danger sensing capability is reduced.
Therefore, the risk perception capability prediction model may determine whether the vehicle is traveling on a continuous longitudinal section of road based on the vehicle location information and the real-time vehicle speed information. The vehicle positioning information can determine whether the position of the vehicle belongs to a continuous longitudinal slope section, and the real-time vehicle speed information can judge whether the vehicle is in a driving state.
If the vehicle is determined to run on the continuous longitudinal slope section, the danger sensing capability prediction model can be used for predicting the vehicle body posture information
Figure 568546DEST_PATH_IMAGE070
Figure 654314DEST_PATH_IMAGE071
And
Figure 210060DEST_PATH_IMAGE072
body posture information
Figure 103673DEST_PATH_IMAGE099
And facial state information
Figure 744870DEST_PATH_IMAGE074
=
Figure 618017DEST_PATH_IMAGE100
Substituting into a first danger perception capability function
Figure 28270DEST_PATH_IMAGE076
Calculating
Figure 564556DEST_PATH_IMAGE076
And outputs:
Figure 958628DEST_PATH_IMAGE077
, (23)
Figure 386198DEST_PATH_IMAGE078
, (24)
wherein the content of the first and second substances,
Figure 165804DEST_PATH_IMAGE079
respectively representing acceleration, angular velocity and motion attitude,
Figure 387838DEST_PATH_IMAGE080
is as follows
Figure 3627DEST_PATH_IMAGE081
A first weight value of the item information,
Figure 982691DEST_PATH_IMAGE082
is as follows
Figure 101957DEST_PATH_IMAGE081
A risk perception capability value of the item information.
Danger sensing capability if it is determined that the vehicle is not traveling on a continuous longitudinal grade sectionThe force prediction model can convert the vehicle body posture information
Figure 275318DEST_PATH_IMAGE070
Figure 316086DEST_PATH_IMAGE071
And
Figure 101771DEST_PATH_IMAGE072
body posture information
Figure 606702DEST_PATH_IMAGE073
And facial state information
Figure 170538DEST_PATH_IMAGE074
=
Figure 479029DEST_PATH_IMAGE100
Substituting into a second Risk perceptibility function
Figure 52092DEST_PATH_IMAGE084
Calculating
Figure 159332DEST_PATH_IMAGE084
And outputs:
Figure 159649DEST_PATH_IMAGE085
, (25)
Figure 955436DEST_PATH_IMAGE078
, (26)
wherein the content of the first and second substances,
Figure 332191DEST_PATH_IMAGE086
is as follows
Figure 546134DEST_PATH_IMAGE081
A second weight value of the item information,
Figure 468085DEST_PATH_IMAGE082
is as follows
Figure 767479DEST_PATH_IMAGE081
A risk perception capability value of the item information.
Whether the vehicle runs on the continuous longitudinal slope section or not is determined through the vehicle positioning information and the real-time vehicle speed information, different danger sensing capability functions are given based on the judgment result, so that differentiated danger sensing capability numerical calculation is performed on the two conditions, and whether the danger sensing capability of the driver falls or is about to fall can be effectively predicted. For the continuous longitudinal slope section, the model adopts a relatively more sensitive calculation mode, for example, higher weight is given to the body posture information and the face state information of the driver, so that when the numerical values corresponding to the information belong to low numerical values, the numerical values can be more reflected in the danger perception capability numerical values predicted by the model.
The risk perception capability prediction model designed by the scheme is relatively simple, can be effectively applied to the intelligent terminal 150, and can ensure real-time performance. The training process of the model can be briefly described as follows: randomly selecting volunteer drivers (preferably distributed in all age groups), and simulating driving by using a driving simulator to obtain a data set; different danger sources are respectively arranged on the simulator, the simulation driving of the continuous longitudinal slope section is carried out, the simulation driving of the discontinuous longitudinal slope section is carried out, the vehicle information is directly obtained through the simulator, and the posture and the face state information of the driver are obtained by installing a camera above a steering wheel of the simulator. The trained danger perception capability prediction model can be obtained by training the acquired data on a computer by using a machine learning algorithm. Of course, in order to further improve the accuracy of the model, the drivers can be grouped, the volunteers of the drivers can be distinguished according to ages, sexes and the like, and the factors such as the ages and the sexes are also considered when the danger sensing capability of the drivers is predicted, so that the prediction accuracy is further improved.
After determining the driver danger perception capability parameter, the electronic device 10 may perform step S30.
Step S30: and when the danger perception capability parameter of the driver is lower than a threshold value, generating prompt information to prompt the driver.
In this embodiment, after obtaining the driver danger sensing capability parameter output by the danger sensing capability prediction model, the electronic device 10 may determine the driver danger sensing capability parameter to determine whether the driver danger sensing capability parameter is lower than a threshold value. And when the danger perception capability parameter of the driver is lower than the threshold value, generating prompt information to prompt the driver.
By acquiring vehicle state information and a driver image in a measurement period and utilizing a preset danger perception capability prediction model, a danger perception capability parameter of the driver can be determined, and prompt information is generated to prompt the driver when the danger perception capability parameter is lower than a threshold value. Since the danger sensing ability of the driver is not only related to the state of the driver, but also related to the current driving state (for example, in a continuous longitudinal slope section, especially a continuous downhill section, the danger sensing ability is related to factors of the road itself due to belonging to an accident high-incidence area, and the danger sensing ability is reduced due to easy dispersion of the attention of the driver on such a section), the manner can take into consideration the factors of the human-vehicle state, and the preset danger sensing ability prediction model can predict the danger sensing ability and detect the danger sensing ability of the driver in real time.
Referring to fig. 4, based on the same inventive concept, an embodiment of the present application further provides a risk perception capability prediction apparatus 20 based on a human-vehicle state. In this embodiment, the risk sensing capability prediction apparatus 20 may include:
an information acquisition unit 21 for acquiring vehicle state information reflecting a vehicle position and a vehicle state and a driver image in a measurement period.
And the parameter calculation unit 22 is configured to determine a driver danger sensing capability parameter based on the vehicle state information, the driver image, and a preset danger sensing capability prediction model.
And a danger prompting unit 23, configured to generate a prompting message to prompt the driver when the driver danger perceptibility parameter is lower than a threshold.
In this embodiment, the vehicle state information includes vehicle positioning information, real-time vehicle speed information, and vehicle body posture measurement data measured in a vehicle coordinate system, and the parameter calculation unit 22 is specifically configured to: filtering the vehicle body attitude measurement data to obtain vehicle body attitude accurate data; converting the accurate data of the vehicle body posture into a navigation coordinate system to obtain vehicle body posture information; carrying out image recognition on the driver image, and determining body posture information and face state information which reflect the real-time state of the driver; inputting the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information and the facial state information into the danger perception capability prediction model; and acquiring the driver danger perception capability parameters output by the danger perception capability prediction model.
In this embodiment, the parameter calculating unit 22 is specifically configured to:
through the first step
Figure 416766DEST_PATH_IMAGE081
Equation of measurement of time of day
Figure 734484DEST_PATH_IMAGE002
Measured to obtain
Figure 76604DEST_PATH_IMAGE081
Measurement vector at time:
Figure 611097DEST_PATH_IMAGE003
wherein, the first and the second end of the pipe are connected with each other,
Figure 595233DEST_PATH_IMAGE101
is a first
Figure 518190DEST_PATH_IMAGE081
The measurement vector of the time of day,
Figure 280479DEST_PATH_IMAGE102
is as follows
Figure 820044DEST_PATH_IMAGE081
A matrix of the sensitivity of the measurement at the time,
Figure 76713DEST_PATH_IMAGE006
is as follows
Figure 870488DEST_PATH_IMAGE081
The estimated error vector for a time instant,
Figure 554410DEST_PATH_IMAGE103
is a first
Figure 315693DEST_PATH_IMAGE081
The white noise vector is measured at a time,
Figure 94162DEST_PATH_IMAGE104
is as follows
Figure 991711DEST_PATH_IMAGE081
The acceleration at the moment of time is,
Figure 883354DEST_PATH_IMAGE105
is as follows
Figure 866353DEST_PATH_IMAGE081
The angular velocity of the moment in time is,
Figure 464825DEST_PATH_IMAGE010
to represent
Figure 731727DEST_PATH_IMAGE105
Derivative of (2) refers to
Figure 757452DEST_PATH_IMAGE081
The angular acceleration at the moment in time is,
Figure 447321DEST_PATH_IMAGE011
is as follows
Figure 318326DEST_PATH_IMAGE081
A motion gesture at a time, and:
Figure 439734DEST_PATH_IMAGE012
Figure 105202DEST_PATH_IMAGE013
Figure 545017DEST_PATH_IMAGE014
measuring the vector
Figure 219712DEST_PATH_IMAGE101
Substituting into a preset improved Kalman filtering equation to calculate accurate data of the filtered vehicle body attitude
Figure 664469DEST_PATH_IMAGE015
Figure 297575DEST_PATH_IMAGE016
Respectively represent
Figure DEST_PATH_IMAGE106
The filtered accurate value.
In this embodiment, the modified kalman filter equation is:
Figure 962037DEST_PATH_IMAGE018
Figure 440423DEST_PATH_IMAGE019
wherein the content of the first and second substances,
Figure 802003DEST_PATH_IMAGE020
in order to estimate the error before the update,
Figure 74853DEST_PATH_IMAGE037
is composed of
Figure 272616DEST_PATH_IMAGE033
The state matrix of the time of day,
Figure 302495DEST_PATH_IMAGE023
first, the
Figure 3735DEST_PATH_IMAGE033
The error in the estimation of the time of day,
Figure 244223DEST_PATH_IMAGE096
is a Kalman gain matrix, an
Figure 116233DEST_PATH_IMAGE024
The following conditions are satisfied:
Figure 202001DEST_PATH_IMAGE025
Figure 508480DEST_PATH_IMAGE026
Figure 857553DEST_PATH_IMAGE027
Figure 748017DEST_PATH_IMAGE028
Figure 637476DEST_PATH_IMAGE029
wherein the content of the first and second substances,
Figure 313308DEST_PATH_IMAGE030
is a matrix of the units,
Figure 643402DEST_PATH_IMAGE031
is composed of
Figure 506315DEST_PATH_IMAGE005
The transpose of (a) is performed,
Figure 199465DEST_PATH_IMAGE032
is as follows
Figure 979071DEST_PATH_IMAGE033
The noise covariance of the uncorrelated devices at a time,
Figure 466684DEST_PATH_IMAGE034
is the covariance of zero-mean white noise,
Figure 82473DEST_PATH_IMAGE035
and
Figure 64467DEST_PATH_IMAGE036
is as follows
Figure 714891DEST_PATH_IMAGE001
A priori, a posteriori covariance matrix of the time of day,
Figure 107826DEST_PATH_IMAGE037
is a first
Figure 460179DEST_PATH_IMAGE033
The state matrix of the time of day,
Figure 229552DEST_PATH_IMAGE038
is as follows
Figure 62DEST_PATH_IMAGE033
The a-posteriori covariance matrix of the time of day,
Figure 311701DEST_PATH_IMAGE039
for the purpose of the updated estimation error,
Figure 167661DEST_PATH_IMAGE040
is a first
Figure 740725DEST_PATH_IMAGE033
The estimated error after the time update.
In this embodiment, the covariance of uncorrelated device noise
Figure 349430DEST_PATH_IMAGE041
Covariance of white noise with zero mean
Figure 349747DEST_PATH_IMAGE042
The following fitness function is satisfied:
Figure 161845DEST_PATH_IMAGE043
wherein the content of the first and second substances,
Figure 289332DEST_PATH_IMAGE044
Figure 237697DEST_PATH_IMAGE045
in order to be an integral term of the light,
Figure 392604DEST_PATH_IMAGE046
Figure 691998DEST_PATH_IMAGE047
are all initial integral terms;
covariance of uncorrelated device noise
Figure 354667DEST_PATH_IMAGE041
Comprises the following steps:
Figure 688696DEST_PATH_IMAGE048
covariance of zero mean white noise
Figure 30816DEST_PATH_IMAGE042
Comprises the following steps:
Figure 66774DEST_PATH_IMAGE049
wherein the content of the first and second substances,
Figure 519752DEST_PATH_IMAGE050
representing an identity matrix.
In this embodiment, the navigation coordinate system is taken
Figure 708288DEST_PATH_IMAGE051
The axis is directed to the true north,
Figure 237620DEST_PATH_IMAGE052
the axis is directed to the right east,
Figure 511607DEST_PATH_IMAGE097
the vertical horizontal plane is directed to the ground, and the parameter calculation unit 22 is specifically configured to: accurately measuring the body attitude
Figure 768276DEST_PATH_IMAGE015
Substituting into a coordinate conversion equation:
Figure 60586DEST_PATH_IMAGE054
Figure 10087DEST_PATH_IMAGE056
Figure 505791DEST_PATH_IMAGE057
Figure 313953DEST_PATH_IMAGE058
Figure 477081DEST_PATH_IMAGE059
Figure 331905DEST_PATH_IMAGE060
Figure 95331DEST_PATH_IMAGE061
Figure 693802DEST_PATH_IMAGE062
wherein the content of the first and second substances,
Figure 445857DEST_PATH_IMAGE063
in order to convert the matrix, the first and second matrices,
Figure 753473DEST_PATH_IMAGE064
respectively represent
Figure 223769DEST_PATH_IMAGE065
The exact value of the filtered value is,
Figure 625931DEST_PATH_IMAGE066
respectively represent
Figure 481761DEST_PATH_IMAGE067
The exact value of the filtered value is,
Figure 943966DEST_PATH_IMAGE068
respectively represent
Figure 167137DEST_PATH_IMAGE069
The filtered accurate value is calculated to obtain the vehicle body attitude information under the navigation coordinate system
Figure 855214DEST_PATH_IMAGE070
Figure 581861DEST_PATH_IMAGE071
And
Figure 214968DEST_PATH_IMAGE072
in this embodiment, after the parameter calculation unit 22 inputs the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information, and the facial state information into the risk perception capability prediction model, the risk perception capability prediction model performs the following processing:
determining whether the vehicle runs on a continuous longitudinal slope section or not based on the vehicle positioning information and the real-time vehicle speed information;
if yes, the posture information of the vehicle body is obtained
Figure 643544DEST_PATH_IMAGE070
Figure 387509DEST_PATH_IMAGE071
And
Figure 499822DEST_PATH_IMAGE072
body posture information
Figure 788983DEST_PATH_IMAGE073
And facial state information
Figure 986746DEST_PATH_IMAGE074
=
Figure 268823DEST_PATH_IMAGE075
Substituting into a first danger perception capability function
Figure 953751DEST_PATH_IMAGE076
Calculating
Figure 663081DEST_PATH_IMAGE076
And outputs:
Figure 166301DEST_PATH_IMAGE077
Figure 252069DEST_PATH_IMAGE078
wherein, the first and the second end of the pipe are connected with each other,
Figure 807815DEST_PATH_IMAGE079
respectively representing acceleration, angular velocity and motion attitude,
Figure 468473DEST_PATH_IMAGE080
is a first
Figure 109670DEST_PATH_IMAGE081
A first weight value of the item information,
Figure 733549DEST_PATH_IMAGE082
is as follows
Figure 425692DEST_PATH_IMAGE081
A risk perception capability value of the item information;
if not, the vehicle body posture information is acquired
Figure 476825DEST_PATH_IMAGE070
Figure 605318DEST_PATH_IMAGE071
And
Figure 282156DEST_PATH_IMAGE072
body posture information
Figure 812494DEST_PATH_IMAGE073
And facial state information
Figure 565687DEST_PATH_IMAGE074
=
Figure 929279DEST_PATH_IMAGE075
Substituting into a second Risk perceptibility function
Figure 894961DEST_PATH_IMAGE083
Calculating
Figure 545385DEST_PATH_IMAGE084
And outputs:
Figure 453167DEST_PATH_IMAGE085
Figure 556252DEST_PATH_IMAGE078
wherein the content of the first and second substances,
Figure 591204DEST_PATH_IMAGE086
is a first
Figure 581288DEST_PATH_IMAGE081
A second weight value of the item information,
Figure 145125DEST_PATH_IMAGE082
is a first
Figure 984773DEST_PATH_IMAGE081
A danger perceptibility value of the item of information.
The embodiment of the application provides a storage medium, which comprises a stored program, wherein when the program runs, a device where the storage medium is located is controlled to execute the risk perception capability prediction method based on the human-vehicle state in the embodiment.
In summary, the embodiment of the application provides a method, a device and an electronic device for predicting danger sensing capability based on human-vehicle state, vehicle state information and a driver image in a measurement period are obtained, a preset danger sensing capability prediction model is utilized, a danger sensing capability parameter of a driver can be determined, and prompt information is generated to prompt the driver when the danger sensing capability parameter is lower than a threshold value. Since the danger sensing ability of the driver is not only related to the state of the driver, but also related to the current driving state (for example, in a continuous longitudinal slope section, especially a continuous downhill section, the danger sensing ability is related to factors of the road itself due to belonging to an accident high-incidence area, and the danger sensing ability is reduced due to easy dispersion of the attention of the driver on such a section), the manner can take into consideration the factors of the human-vehicle state, and the preset danger sensing ability prediction model can predict the danger sensing ability and detect the danger sensing ability of the driver in real time.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (6)

1. A danger perception capability prediction method based on human-vehicle states is characterized by comprising the following steps:
acquiring vehicle state information and a driver image in a measurement period, wherein the vehicle state information is used for reflecting a vehicle position and a vehicle state;
determining a driver danger perception capability parameter based on the vehicle state information, the driver image and a preset danger perception capability prediction model;
when the driver danger perception capability parameter is lower than a threshold value, generating prompt information to prompt a driver;
the vehicle state information comprises vehicle positioning information, real-time vehicle speed information and vehicle body attitude measurement data measured under a vehicle coordinate system, and the driver danger perception capability parameters are determined based on the vehicle state information, the driver image and a preset danger perception capability prediction model, and the method comprises the following steps of:
carrying out filtering processing on the vehicle body attitude measurement data to obtain vehicle body attitude accurate data; converting the accurate data of the vehicle body posture into a navigation coordinate system to obtain vehicle body posture information; carrying out image recognition on the driver image, and determining body posture information and face state information reflecting the real-time state of the driver; inputting the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information and the facial state information into the danger perception capability prediction model; acquiring a driver danger perception capability parameter output by the danger perception capability prediction model;
and carrying out filtering processing on the vehicle body attitude measurement data to obtain vehicle body attitude accurate data, and the method comprises the following steps:
through the first step
Figure 342556DEST_PATH_IMAGE002
Equation of measurement of time of day
Figure DEST_PATH_IMAGE003
Measured to obtain
Figure 853172DEST_PATH_IMAGE002
Measurement vector at time:
Figure 695226DEST_PATH_IMAGE004
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE005
is a first
Figure 952420DEST_PATH_IMAGE002
The measurement vector of the time of day,
Figure 829109DEST_PATH_IMAGE006
is a first
Figure 448309DEST_PATH_IMAGE002
A matrix of the measurement sensitivity at the time,
Figure DEST_PATH_IMAGE007
is as follows
Figure 308817DEST_PATH_IMAGE002
The estimated error vector for a time instant,
Figure 570035DEST_PATH_IMAGE008
is as follows
Figure 35651DEST_PATH_IMAGE002
The white noise vector is measured at a moment,
Figure DEST_PATH_IMAGE009
is as follows
Figure 622490DEST_PATH_IMAGE002
The acceleration at the moment of time is,
Figure 173557DEST_PATH_IMAGE010
is as follows
Figure 975816DEST_PATH_IMAGE002
The angular velocity of the moment in time is,
Figure DEST_PATH_IMAGE011
to represent
Figure 92676DEST_PATH_IMAGE010
Derivative of (1) is referred to as
Figure 319258DEST_PATH_IMAGE002
The angular acceleration at the moment in time is,
Figure 357621DEST_PATH_IMAGE012
is as follows
Figure 429482DEST_PATH_IMAGE002
A motion gesture at a time, and:
Figure DEST_PATH_IMAGE013
Figure 666429DEST_PATH_IMAGE014
Figure DEST_PATH_IMAGE015
measuring the vector
Figure 595070DEST_PATH_IMAGE005
Substituting into a preset improved Kalman filtering equation to calculate accurate data of the filtered vehicle body attitude
Figure 120730DEST_PATH_IMAGE016
Figure DEST_PATH_IMAGE017
Respectively represent
Figure 784230DEST_PATH_IMAGE018
A filtered accurate value;
the improved Kalman filtering equation is as follows:
Figure DEST_PATH_IMAGE019
Figure 875683DEST_PATH_IMAGE020
wherein, the first and the second end of the pipe are connected with each other,
Figure DEST_PATH_IMAGE021
in order to estimate the error before the update,
Figure 240806DEST_PATH_IMAGE022
is composed of
Figure DEST_PATH_IMAGE023
The state matrix of the time of day,
Figure 784919DEST_PATH_IMAGE024
first, the
Figure 464162DEST_PATH_IMAGE023
The error in the estimation of the time of day,
Figure DEST_PATH_IMAGE025
is a Kalman gain matrix, an
Figure 678631DEST_PATH_IMAGE026
The following conditions are satisfied:
Figure DEST_PATH_IMAGE027
Figure 214654DEST_PATH_IMAGE028
Figure DEST_PATH_IMAGE029
Figure 246064DEST_PATH_IMAGE030
Figure DEST_PATH_IMAGE031
wherein the content of the first and second substances,
Figure 525736DEST_PATH_IMAGE032
is a matrix of the units,
Figure DEST_PATH_IMAGE033
is composed of
Figure 591781DEST_PATH_IMAGE006
The transpose of (a) is performed,
Figure 501968DEST_PATH_IMAGE034
is as follows
Figure 492445DEST_PATH_IMAGE023
The covariance of the noise of the uncorrelated devices at the moment,
Figure DEST_PATH_IMAGE035
is the covariance of zero-mean white noise,
Figure 310228DEST_PATH_IMAGE036
and
Figure DEST_PATH_IMAGE037
is as follows
Figure 230780DEST_PATH_IMAGE002
A priori, a posteriori covariance matrix of the time of day,
Figure 311868DEST_PATH_IMAGE022
is as follows
Figure 521133DEST_PATH_IMAGE038
The state matrix of the time of day,
Figure DEST_PATH_IMAGE039
is as follows
Figure 408186DEST_PATH_IMAGE023
The a-posteriori covariance matrix of the time of day,
Figure 120927DEST_PATH_IMAGE040
for the purpose of the updated estimation error,
Figure DEST_PATH_IMAGE041
is as follows
Figure 172584DEST_PATH_IMAGE023
An estimation error after the time update;
covariance of uncorrelated device noise
Figure 869145DEST_PATH_IMAGE042
Covariance of white noise with zero mean
Figure 966414DEST_PATH_IMAGE035
The following fitness function is satisfied:
Figure DEST_PATH_IMAGE043
wherein the content of the first and second substances,
Figure 127137DEST_PATH_IMAGE044
Figure DEST_PATH_IMAGE045
in order to be an integral term, the integral term,
Figure 81186DEST_PATH_IMAGE046
Figure DEST_PATH_IMAGE047
are all initial integral terms;
covariance of uncorrelated device noise
Figure 61781DEST_PATH_IMAGE042
Comprises the following steps:
Figure 228320DEST_PATH_IMAGE048
covariance of zero mean white noise
Figure 195881DEST_PATH_IMAGE049
Comprises the following steps:
Figure DEST_PATH_IMAGE050
wherein, the first and the second end of the pipe are connected with each other,
Figure 320832DEST_PATH_IMAGE051
representing an identity matrix.
2. The method according to claim 1, wherein the navigation coordinate system is taken as a reference system
Figure DEST_PATH_IMAGE052
The axis is directed to the true north,
Figure 788722DEST_PATH_IMAGE053
the axis is directed to the right east,
Figure DEST_PATH_IMAGE054
and the vertical horizontal plane points to the ground, and the accurate data of the vehicle body posture is converted into a navigation coordinate system to obtain vehicle body posture information, wherein the method comprises the following steps:
accurately measuring the body attitude
Figure 555690DEST_PATH_IMAGE055
Substituting into a coordinate conversion equation:
Figure DEST_PATH_IMAGE056
Figure 159846DEST_PATH_IMAGE057
Figure DEST_PATH_IMAGE058
Figure 724208DEST_PATH_IMAGE059
Figure DEST_PATH_IMAGE060
Figure 413815DEST_PATH_IMAGE061
Figure DEST_PATH_IMAGE062
Figure 718894DEST_PATH_IMAGE063
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE064
in order to convert the matrix, the first and second matrices,
Figure 443137DEST_PATH_IMAGE065
respectively represent
Figure DEST_PATH_IMAGE066
The exact value of the filtered value is,
Figure 441049DEST_PATH_IMAGE067
respectively represent
Figure DEST_PATH_IMAGE068
The exact value of the filtered value is,
Figure 620882DEST_PATH_IMAGE069
respectively represent
Figure DEST_PATH_IMAGE070
The filtered accurate value is calculated to obtain the vehicle body attitude information under the navigation coordinate system
Figure 464073DEST_PATH_IMAGE071
Figure 246084DEST_PATH_IMAGE072
And
Figure DEST_PATH_IMAGE073
3. the human-vehicle state-based danger awareness ability prediction method according to claim 2, wherein after the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, and the body posture information and the facial state information are input into the danger awareness ability prediction model, the danger awareness ability prediction model performs the following processing:
determining whether the vehicle runs on a continuous longitudinal slope section or not based on the vehicle positioning information and the real-time vehicle speed information;
if yes, the posture information of the vehicle body is obtained
Figure 211635DEST_PATH_IMAGE071
Figure 344676DEST_PATH_IMAGE072
And
Figure 991558DEST_PATH_IMAGE073
body posture information
Figure 427743DEST_PATH_IMAGE074
And facial state information
Figure DEST_PATH_IMAGE075
=
Figure 767457DEST_PATH_IMAGE076
Substituting into the first danger perceptibility function
Figure DEST_PATH_IMAGE077
Calculating
Figure 184532DEST_PATH_IMAGE077
And outputs:
Figure 369526DEST_PATH_IMAGE078
Figure DEST_PATH_IMAGE079
wherein the content of the first and second substances,
Figure 391709DEST_PATH_IMAGE080
respectively representing acceleration, angular velocity and motion attitude,
Figure DEST_PATH_IMAGE081
is a first
Figure 627956DEST_PATH_IMAGE002
A first weight value of the item information,
Figure 470010DEST_PATH_IMAGE082
is as follows
Figure 661957DEST_PATH_IMAGE002
A risk perception capability value of the item information;
if not, the vehicle body posture information is acquired
Figure 7488DEST_PATH_IMAGE071
Figure 626688DEST_PATH_IMAGE072
And
Figure 690459DEST_PATH_IMAGE073
body posture information
Figure 686097DEST_PATH_IMAGE074
And facial state information
Figure 886134DEST_PATH_IMAGE075
=
Figure 676235DEST_PATH_IMAGE076
Substituting a second danger-sensing capability function
Figure DEST_PATH_IMAGE083
Calculating
Figure 24040DEST_PATH_IMAGE083
And outputs:
Figure 560719DEST_PATH_IMAGE084
Figure 880842DEST_PATH_IMAGE079
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE085
is as follows
Figure 169741DEST_PATH_IMAGE002
A second weight value of the item information,
Figure 208104DEST_PATH_IMAGE086
is as follows
Figure 811124DEST_PATH_IMAGE002
A danger perceptibility value of the item of information.
4. A danger awareness capability prediction apparatus based on a human-vehicle state, comprising:
an information acquisition unit for acquiring vehicle state information and a driver image within a measurement period, wherein the vehicle state information is used for reflecting a vehicle position and a vehicle state;
the parameter calculation unit is used for determining a driver danger perception capability parameter based on the vehicle state information, the driver image and a preset danger perception capability prediction model;
the danger prompting unit is used for generating prompt information to prompt a driver when the danger perception capability parameter of the driver is lower than a threshold value;
the vehicle state information comprises vehicle positioning information, real-time vehicle speed information and vehicle body attitude measurement data measured under a vehicle coordinate system, and the parameter calculation unit is specifically used for: carrying out filtering processing on the vehicle body attitude measurement data to obtain vehicle body attitude accurate data; converting the accurate data of the vehicle body posture into a navigation coordinate system to obtain vehicle body posture information; carrying out image recognition on the driver image, and determining body posture information and face state information reflecting the real-time state of the driver; inputting the vehicle positioning information, the real-time vehicle speed information, the vehicle body posture information, the body posture information and the facial state information into the danger perception capability prediction model; acquiring a driver danger perception capability parameter output by the danger perception capability prediction model;
the parameter calculation unit is specifically configured to:
through the first step
Figure 251332DEST_PATH_IMAGE002
Equation of measurement of time of day
Figure 383237DEST_PATH_IMAGE003
Measured to obtain
Figure 908896DEST_PATH_IMAGE002
Measurement vector at time:
Figure DEST_PATH_IMAGE087
wherein, the first and the second end of the pipe are connected with each other,
Figure 318536DEST_PATH_IMAGE088
is as follows
Figure 613251DEST_PATH_IMAGE002
The measurement vector of the time of day,
Figure 916057DEST_PATH_IMAGE006
is as follows
Figure 929012DEST_PATH_IMAGE002
A matrix of the measurement sensitivity at the time,
Figure 608255DEST_PATH_IMAGE007
is as follows
Figure 757477DEST_PATH_IMAGE002
The estimated error vector for a time instant,
Figure 496762DEST_PATH_IMAGE008
is as follows
Figure 731435DEST_PATH_IMAGE002
The white noise vector is measured at a time,
Figure 948789DEST_PATH_IMAGE009
is as follows
Figure 952517DEST_PATH_IMAGE002
The acceleration at the moment of time is,
Figure DEST_PATH_IMAGE089
is as follows
Figure 396793DEST_PATH_IMAGE002
The angular velocity of the moment in time is,
Figure 118761DEST_PATH_IMAGE011
represent
Figure 139807DEST_PATH_IMAGE089
Derivative of (1) is referred to as
Figure 794779DEST_PATH_IMAGE002
The angular acceleration at the moment in time is,
Figure 672605DEST_PATH_IMAGE012
is as follows
Figure 881870DEST_PATH_IMAGE002
A motion gesture at a time, and:
Figure 175448DEST_PATH_IMAGE013
Figure 153768DEST_PATH_IMAGE014
Figure 405758DEST_PATH_IMAGE090
measuring the vector
Figure 116967DEST_PATH_IMAGE088
Substituting into a preset improved Kalman filtering equation to calculate accurate data of the filtered vehicle body attitude
Figure 479815DEST_PATH_IMAGE016
Figure DEST_PATH_IMAGE091
Respectively represent
Figure 843800DEST_PATH_IMAGE018
A filtered accurate value;
the improved Kalman filtering equation is as follows:
Figure 266691DEST_PATH_IMAGE092
Figure 450548DEST_PATH_IMAGE020
wherein the content of the first and second substances,
Figure 85929DEST_PATH_IMAGE021
in order to estimate the error before the update,
Figure 38841DEST_PATH_IMAGE022
is composed of
Figure 632634DEST_PATH_IMAGE023
The state matrix of the time of day,
Figure 772628DEST_PATH_IMAGE093
first, the
Figure 742858DEST_PATH_IMAGE023
The error in the estimation of the time of day,
Figure 550277DEST_PATH_IMAGE026
is a Kalman gain matrix, an
Figure 317900DEST_PATH_IMAGE026
The following conditions are satisfied:
Figure 945191DEST_PATH_IMAGE027
Figure 187953DEST_PATH_IMAGE028
Figure 115458DEST_PATH_IMAGE029
Figure 51053DEST_PATH_IMAGE030
Figure DEST_PATH_IMAGE094
wherein the content of the first and second substances,
Figure 227956DEST_PATH_IMAGE032
is a matrix of the units,
Figure 8831DEST_PATH_IMAGE095
is composed of
Figure DEST_PATH_IMAGE096
The transpose of (a) is performed,
Figure 853159DEST_PATH_IMAGE097
is as follows
Figure 959655DEST_PATH_IMAGE023
The covariance of the noise of the uncorrelated devices at the moment,
Figure 564467DEST_PATH_IMAGE035
is the covariance of zero-mean white noise,
Figure 414612DEST_PATH_IMAGE036
and
Figure 51129DEST_PATH_IMAGE037
is as follows
Figure 328527DEST_PATH_IMAGE002
A priori, a posteriori covariance matrix of the time,
Figure 152126DEST_PATH_IMAGE022
is as follows
Figure 805962DEST_PATH_IMAGE038
The state matrix of the time of day,
Figure 296986DEST_PATH_IMAGE039
is as follows
Figure DEST_PATH_IMAGE098
The a-posteriori covariance matrix of the time of day,
Figure 542022DEST_PATH_IMAGE040
for the purpose of the updated estimation error,
Figure 384076DEST_PATH_IMAGE099
is as follows
Figure 576023DEST_PATH_IMAGE023
An estimation error after the time update;
covariance of uncorrelated device noise
Figure 924484DEST_PATH_IMAGE042
Covariance of white noise with zero mean
Figure 809263DEST_PATH_IMAGE049
The following fitness function is satisfied:
Figure DEST_PATH_IMAGE100
wherein the content of the first and second substances,
Figure 404193DEST_PATH_IMAGE044
Figure 134251DEST_PATH_IMAGE045
in order to be an integral term, the integral term,
Figure 599868DEST_PATH_IMAGE046
Figure 655548DEST_PATH_IMAGE047
are all initialAn integral term;
covariance of uncorrelated device noise
Figure 675457DEST_PATH_IMAGE042
Comprises the following steps:
Figure 474786DEST_PATH_IMAGE048
covariance of zero mean white noise
Figure 794908DEST_PATH_IMAGE049
Comprises the following steps:
Figure 755911DEST_PATH_IMAGE050
wherein the content of the first and second substances,
Figure 785485DEST_PATH_IMAGE051
representing an identity matrix.
5. A storage medium, characterized in that the storage medium includes a stored program, wherein when the program runs, a device in which the storage medium is located is controlled to execute the human-vehicle state-based risk perception capability prediction method according to any one of claims 1 to 3.
6. An electronic device comprising a memory for storing information including program instructions and a processor for controlling execution of the program instructions, the program instructions being loaded and executed by the processor to implement the human-vehicle state based risk awareness capability prediction method of any one of claims 1 to 3.
CN202211237709.4A 2022-10-11 2022-10-11 Danger perception capability prediction method and device based on human-vehicle state and electronic equipment Active CN115320626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211237709.4A CN115320626B (en) 2022-10-11 2022-10-11 Danger perception capability prediction method and device based on human-vehicle state and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211237709.4A CN115320626B (en) 2022-10-11 2022-10-11 Danger perception capability prediction method and device based on human-vehicle state and electronic equipment

Publications (2)

Publication Number Publication Date
CN115320626A CN115320626A (en) 2022-11-11
CN115320626B true CN115320626B (en) 2022-12-30

Family

ID=83913171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211237709.4A Active CN115320626B (en) 2022-10-11 2022-10-11 Danger perception capability prediction method and device based on human-vehicle state and electronic equipment

Country Status (1)

Country Link
CN (1) CN115320626B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109397294A (en) * 2018-12-05 2019-03-01 南京邮电大学 A kind of robot cooperated localization method based on BA-ABC converged communication algorithm
CN109471439A (en) * 2018-11-23 2019-03-15 广州小鹏汽车科技有限公司 Control method for vehicle, device, equipment, storage medium and automobile
CN112883834A (en) * 2021-01-29 2021-06-01 重庆长安汽车股份有限公司 DMS system distraction detection method, DMS system distraction detection system, DMS vehicle, and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10129373A (en) * 1996-10-30 1998-05-19 Kansei Corp Degree of danger evaluation device
US7609150B2 (en) * 2006-08-18 2009-10-27 Motorola, Inc. User adaptive vehicle hazard warning apparatuses and method
US20080042814A1 (en) * 2006-08-18 2008-02-21 Motorola, Inc. Mode sensitive vehicle hazard warning apparatuses and method
JP6241235B2 (en) * 2013-12-04 2017-12-06 三菱電機株式会社 Vehicle driving support device
CN108407815A (en) * 2018-03-31 2018-08-17 四川攸亮科技有限公司 A kind of intelligent travelling crane auxiliary system
JP6811743B2 (en) * 2018-05-15 2021-01-13 三菱電機株式会社 Safe driving support device
CN109272775B (en) * 2018-10-22 2021-07-16 华南理工大学 Highway curve safety monitoring and early warning method, system and medium
CN111137284B (en) * 2020-01-04 2021-07-23 长安大学 Early warning method and early warning device based on driving distraction state
KR20210129913A (en) * 2020-04-21 2021-10-29 주식회사 만도모빌리티솔루션즈 Driver assistance apparatus
CN113888890A (en) * 2021-09-29 2022-01-04 四川奇石缘科技股份有限公司 Electronic warning system for preventing accident on highway
CN114239423A (en) * 2022-02-25 2022-03-25 四川省公路规划勘察设计研究院有限公司 Method for constructing prediction model of danger perception capability of driver on long and large continuous longitudinal slope section

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109471439A (en) * 2018-11-23 2019-03-15 广州小鹏汽车科技有限公司 Control method for vehicle, device, equipment, storage medium and automobile
CN109397294A (en) * 2018-12-05 2019-03-01 南京邮电大学 A kind of robot cooperated localization method based on BA-ABC converged communication algorithm
CN112883834A (en) * 2021-01-29 2021-06-01 重庆长安汽车股份有限公司 DMS system distraction detection method, DMS system distraction detection system, DMS vehicle, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An Extended Kalman Filter for Low-Cost Positioning System in Agricultural Vehicles;Nenavath Ravi Kumar ET AL;《2016 International Conference on Wireless Communications》;20160915;151-157 *
基于改进粒子滤波器的移动机器人同时定位与建图方法;潘薇等;《模式识别与人工智能》;20081215(第06期);133-138 *

Also Published As

Publication number Publication date
CN115320626A (en) 2022-11-11

Similar Documents

Publication Publication Date Title
US10748446B1 (en) Real-time driver observation and progress monitoring
JP6832963B2 (en) Systems and methods for identifying dangerous driving behavior
US20220286811A1 (en) Method for smartphone-based accident detection
TWI754068B (en) Devices and methods for recognizing driving behavior based on movement data
US11055544B2 (en) Electronic device and control method thereof
US9275552B1 (en) Real-time driver observation and scoring for driver'S education
JPWO2017038166A1 (en) Information processing apparatus, information processing method, and program
Sun et al. An integrated solution for lane level irregular driving detection on highways
JP2019195377A (en) Data processing device, monitoring system, awakening system, data processing method, and data processing program
US20180204078A1 (en) System for monitoring the state of vigilance of an operator
JP2021155032A (en) Automatically estimating skill levels and confidence levels of drivers
CN112489425A (en) Vehicle anti-collision early warning method and device, vehicle-mounted terminal equipment and storage medium
JP2022033805A (en) Method, device, apparatus, and storage medium for identifying passenger's status in unmanned vehicle
KR102051136B1 (en) Artificial intelligence dashboard robot base on cloud server for recognizing states of a user
CN114764912A (en) Driving behavior recognition method, device and storage medium
JP2019195376A (en) Data processing device, monitoring system, awakening system, data processing method, and data processing program
CN115320626B (en) Danger perception capability prediction method and device based on human-vehicle state and electronic equipment
Parasana et al. A health perspective smartphone application for the safety of road accident victims
Saeed et al. A novel extension for e-Safety initiative based on developed fusion of biometric traits
JP7114953B2 (en) In-vehicle device, driving evaluation device, driving evaluation system provided with these, data transmission method, and data transmission program
JP6772775B2 (en) Driving support device and driving support method
CN114926896A (en) Control method for automatic driving vehicle
CN117058730A (en) Dataset generation and enhancement for machine learning models
CN108446644A (en) A kind of virtual display system for New-energy electric vehicle
CN113822449B (en) Collision detection method, collision detection device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant