
Driving assistance method, system, medium and terminal

Info

Publication number
CN113232668A
CN113232668A (application CN202110420338.2A; granted as CN113232668B)
Authority
CN
China
Prior art keywords: driver, perception, vehicle, data, behavior
Prior art date
Legal status
Granted
Application number
CN202110420338.2A
Other languages
Chinese (zh)
Other versions
CN113232668B (en)
Inventor
江智浩
唐韧之
Current Assignee
ShanghaiTech University
Original Assignee
ShanghaiTech University
Priority date
Filing date
Publication date
Application filed by ShanghaiTech University filed Critical ShanghaiTech University
Priority to CN202110420338.2A
Publication of CN113232668A
Application granted
Publication of CN113232668B
Legal status: Active

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of such parameters related to drivers or passengers
    • B60W40/09: Driving style or behaviour
    • B60W40/02: Estimation or calculation of such parameters related to ambient conditions
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention

Abstract

The invention provides a driving assistance method, system, medium and terminal, comprising: acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles; constructing a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles; constructing a driver behavior model based on the driver perception model and the driver operation behavior data; constructing a vehicle perception model based on the vehicle observation data and predicting a vehicle-perceived driving scene in combination with the driver behavior model; and predicting a driver-perceived driving scene based on the driver perception model and comparing it with the vehicle-perceived driving scene to acquire the driver's missing information, so as to provide assistance to the driver. By observing driver behavior, the method and system acquire information about the driver's perception and decision-making and build the driver's perception and behavior models, thereby assisting the driver accurately, promoting cooperation between the driver and the assistance system, and improving overall traffic safety.

Description

Driving assistance method, system, medium and terminal
Technical Field
The present invention relates to the field of intelligent driving technologies, and in particular, to a driving assistance method, system, medium, and terminal.
Background
Automotive driving is a social activity in which drivers must share road resources with other traffic participants (pedestrians, other vehicles, etc.). With rising resident incomes, motor vehicle ownership increases year by year, road conditions become increasingly complicated, and the number of traffic accidents also rises year by year, with 94% of traffic accidents caused by human factors. According to data from the National Bureau of Statistics, the direct property loss caused by traffic accidents exceeds 1.3 billion yuan, and casualties exceed 300,000. Traffic accidents pose a great threat to the safety of people's lives and property.
With the development of artificial intelligence and sensor technology and the great improvement in computing power, road traffic is trending toward intelligence and informatization. For example, driving assistance systems sense the driving environment through sensor devices such as radar, laser and cameras, and prompt the driver or directly control the vehicle; likewise, with the arrival of the 5G era and the development of Internet of Vehicles communication technology, vehicles can exchange information through the Internet of Vehicles, improving road traffic efficiency and driving safety.
Although automatic driving technology has advanced considerably, human-driven vehicles and automatic driving vehicles will coexist for a long time in the foreseeable future, and developing more effective intelligent driving assistance systems remains the main means of improving road safety. Existing driving assistance systems know little about the driver's characteristics and state, and therefore mostly adopt a conservative strategy during assistance. The resulting excess of alarm false positives reduces the driver's trust in the assistance system and impairs the assistance effect. In addition, false alarms increase the driver's cognitive load in complex traffic conditions, further reducing safety.
Therefore, it is highly desirable to provide a new technical solution for reducing the false positive rate of the warning in the driving assistance system and providing more accurate driving assistance to the driver.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide a driving assistance method, system, medium, and terminal, which are used to solve the technical problem of poor assistance effect of the driving assistance system in the prior art.
To achieve the above and other related objects, a first aspect of the present invention provides a driving assistance method comprising: acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles; constructing a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles; constructing a driver behavior model based on the driver perception model and the driver operation behavior data; constructing a vehicle perception model based on the vehicle observation data, and predicting a vehicle-perceived driving scene in combination with the driver behavior model; and predicting a driver-perceived driving scene based on the driver perception model, and comparing it with the vehicle-perceived driving scene to acquire the driver's missing information so as to provide assistance to the driver.
In some embodiments of the first aspect of the present invention, the method comprises: evaluating the risk level of the acquired driver missing information; providing corresponding assistance to the driver based on the risk level.
In some embodiments of the first aspect of the present invention, the predicting manner of the vehicle-perceived driving scene includes: predicting and acquiring next-moment driver operation behavior data based on the driver behavior model; and predicting and acquiring the vehicle perception driving scene of the next moment based on the vehicle perception model, the next moment driver operation behavior data and the driver behavior models of the other vehicles.
In some embodiments of the first aspect of the present invention, the driving environment includes a plurality of vehicles, and the driver perception model is constructed in a manner including: the target vehicle is in communication connection with other vehicles to obtain driver behavior models of the other vehicles; constructing suspected perception models of a plurality of target vehicle drivers based on the driver observation behavior data; reducing the number of suspected perception models of the driver of the target vehicle through a game based on the driver behavior models of other vehicles; and updating the observed behavior data of the driver and the operation behavior data of the driver, and reducing the number of the remaining suspected perception models based on the updated observed behavior data of the driver and the operation behavior data of the driver so as to acquire the perception model of the driver.
In some embodiments of the first aspect of the present invention, the driver observed behavior data comprises current driver observed behavior data and driver observed behavior data of a previous moment; the construction mode of the driver perception model comprises the following steps: acquiring a current driver observation result based on the previous driver observation behavior data and the current vehicle observation data; and constructing the driver perception model based on the current driver observation result and the current driver observation behavior data.
In some embodiments of the first aspect of the present invention, the driver operational behavior data comprises current driver operational behavior data; the method comprises the following steps: and constructing the driver behavior model based on the driver perception model and the current driver operation behavior data.
In some embodiments of the first aspect of the present invention, the driver observed behavior data comprises driver gaze data, driver posture data, and driver expression data; the method comprises the following steps: inferring a driver's field of view based on the driver gaze data and collecting driver field of view data; and acquiring the perception information of the driver to other vehicles in the observation field of the driver based on the posture data and the expression data of the driver so as to construct the driver perception model.
To achieve the above and other related objects, a second aspect of the present invention provides a driving assistance system comprising: a data acquisition module for acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles; a driver perception model building module for building a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles; a driver behavior model building module for building a driver behavior model based on the driver perception model and the driver operation behavior data; a vehicle perception driving scene prediction module for constructing a vehicle perception model based on the vehicle observation data and predicting a vehicle-perceived driving scene in combination with the driver behavior model; and a missing information acquisition module for predicting a driver-perceived driving scene based on the driver perception model and acquiring the driver's missing information after comparing it with the vehicle-perceived driving scene, so as to provide assistance to the driver.
To achieve the above and other related objects, a third aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the driving assistance method.
To achieve the above and other related objects, a fourth aspect of the present invention provides an electronic terminal, comprising: a processor and a memory; the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to cause the terminal to execute the driving assistance method.
As described above, the driving assistance method, system, medium and terminal according to the present invention have the following beneficial effects: by observing the driver's behavior, information about the driver's perception and decision-making is obtained and the driver's perception model and behavior model are constructed; the Internet of Vehicles is fully utilized to share driver behavior models, so that more accurate assistance is provided to the driver, cooperation between the driver and the assistance system is promoted, and overall traffic safety is improved.
Drawings
Fig. 1 is a flowchart illustrating a driving assistance method according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a driving assistance model according to an embodiment of the invention.
Fig. 3 is a schematic diagram of a driving scenario according to an embodiment of the invention.
Fig. 4 is a schematic diagram of another driving scenario according to an embodiment of the invention.
Fig. 5 is a schematic structural diagram of a driving assistance system according to an embodiment of the invention.
Fig. 6 is a schematic structural diagram of an electronic terminal according to an embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings which illustrate several embodiments of the present invention. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present invention. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present invention is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," "retained," and the like are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
The invention aims to provide a driving assistance method, a driving assistance system, a driving assistance medium and a driving assistance terminal, which are used for solving the technical problem that the driving assistance system in the prior art is poor in assistance effect.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example one
To facilitate understanding of the present invention, this embodiment first describes a broad application scenario: a driving environment X comprising K traffic participants x_1, x_2, …, x_K, where x_1 is the analysis target and is composed of a driver d and a vehicle c, and x_2, x_3, …, x_K are the K-1 traffic participants or traffic facilities connected to x_1.
While driving, the driver d continuously observes the driving environment X, generating observation behaviors B_d^t (e.g., checking the rear-view mirror, turning the head, leaning forward, squinting), where t denotes the current time, and thereby obtaining observation results O_d^t (e.g., the number of surrounding vehicles, their travel speeds, and their distances from the driven vehicle). The driver d then integrates the observation result O_d^t with the perception of the driving environment at the immediately preceding moment, P_d^{t-1}, to obtain the driving environment perception at the current moment, P_d^t. Based on the perception and judgment of the current driving environment (generally formed through more complex mental activities, collectively referred to here as thinking), the driver performs operational behaviors A_d^t on the vehicle (e.g., throttle, brake, steering wheel, turn signals) to influence its driving state.
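To make the relationship among observation behavior B_d^t, observation result O_d^t and perception P_d^t concrete, the following Python sketch shows one minimal way such a recursive perception update could be organized. It is a sketch under stated assumptions rather than the patent's specification: the names VehicleEstimate, DriverPerception and integrate, and the exponential confidence decay, are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleEstimate:
    """Driver's belief about one surrounding vehicle."""
    position: tuple      # (x, y) in meters, ego frame
    speed: float         # m/s
    confidence: float    # decays while the vehicle goes unobserved

@dataclass
class DriverPerception:
    """P_d^t: the driver's perception of the driving environment at time t."""
    vehicles: dict = field(default_factory=dict)   # vehicle id -> VehicleEstimate

def integrate(prev: DriverPerception, observed: dict,
              decay: float = 0.8) -> DriverPerception:
    """Fuse the previous perception P_d^{t-1} with the new observation O_d^t."""
    current = DriverPerception()
    # Carry remembered vehicles forward with decayed confidence.
    for vid, est in prev.vehicles.items():
        current.vehicles[vid] = VehicleEstimate(est.position, est.speed,
                                                est.confidence * decay)
    # Freshly observed vehicles overwrite the remembered belief.
    for vid, est in observed.items():
        current.vehicles[vid] = est
    return current
```

Under this assumption, a vehicle that has left the driver's view persists in P_d^t with shrinking confidence, which matches the later observation that drivers pre-judge unseen vehicles from historical observation and experience.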
The vehicle c likewise observes the driving environment X continuously during driving; its observation results O_c^t are generally obtained by vehicle-mounted sensors (such as lidar, cameras, etc.), and the vehicle's observation O_c^t of the driving environment X can in turn be used to construct the vehicle's perception of the driving environment, P_c^t. It is worth noting that an unmanned vehicle can also derive vehicle operation behaviors A_c^t from its perception and judgment of the driving environment; such judgment is usually obtained through artificial intelligence approaches such as deep learning and reinforcement learning after a complex, unexplainable "thinking" process, and the process is considered unexplainable because most current intelligent methods cannot yet be interpreted. The present invention is discussed only for application scenarios in which a driver participates.
In the application scenario above, the driving assistance systems currently applied and researched know little about the driver's characteristics and state, and therefore mostly adopt a conservative strategy during assistance. The resulting excess of alarm false positives reduces the driver's trust in the assistance system and impairs the assistance effect; false alarms also increase the driver's cognitive load in complex traffic conditions, further reducing safety. The primary sources of false positives in assistance-system alarms include:
1) Differences between vehicle perception and driver perception: the driver's observation of the driving environment depends mainly on vision, so there are visual blind areas in both time and space. However, the driver can anticipate the positions of surrounding vehicles from historical observation and experience, compensating for these blind areas to a certain extent. When the driver's perception state is unknown, the driving assistance system generally assumes that the driver has no perception of a dangerous condition, which results in alarm false positives.
2) The driver's experience and skill level are unknown: given the same observations, an experienced driver's ability to anticipate other vehicles' behavior and recognize impending dangerous conditions far exceeds that of a novice. When the driver's experience and level are unknown, the driving assistance system typically assumes the driver is a novice, producing many alarm false positives and lowering the driver's trust in the assistance system.
3) The intentions of other surrounding traffic participants are unknown: the driving assistance system needs to warn of dangerous driving conditions before they arise rather than merely recognize them once present, and therefore must predict future driving environment states. Since the intentions of surrounding traffic participants are unknown, the driving assistance system usually assumes they may adopt the most dangerous intentions, leading to misjudgments and false alarms.
Therefore, this embodiment provides a driving assistance method that, by enhancing the collection and analysis of driver behavior and perception data, reduces the alarm false positive rate while keeping the alarm false negative rate low, and provides the driver with accurate and interpretable driving assistance. Fig. 1 shows a flow diagram of the driving assistance method, which can be expressed as follows:
and S11, acquiring observed behavior data of the driver, operation behavior data of the driver, observed data of the vehicle and a driver behavior model of other vehicles. Specifically, vehicle-mounted sensing devices such as a camera device, an eye tracker, a radar sensor, a laser sensor and the like can be adopted to acquire observation behavior data, operation behavior data and vehicle observation data of a driver; the driver behavior models of other vehicles may be obtained in wireless communication with the other vehicles. The sensing device for collecting the driver related data is preferentially arranged in the vehicle, and the sensing device for collecting the vehicle related data is preferentially arranged outside the vehicle, such as the top and the tail of the vehicle.
S12, constructing a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles. The driver observation behavior data comprise the current driver observation behavior data B_d^t and the driver observation behavior data of the previous moment B_d^{t-1}. Specifically, the current driver observation result O_d^t is obtained based on the previous-moment driver observation behavior data and the current vehicle observation data; the driver perception model P_d^t is then constructed based on the current driver observation result and the current driver observation behavior data, its estimated value being denoted P̂_d^t.
In a preferred implementation of this embodiment, the driver observation behavior data include, but are not limited to, driver sight line data, driver posture data and driver expression data. The driver's sight line data, such as gaze direction and gaze fixation trajectory, can be acquired with an eye tracker; image information of the driver can be collected with a camera device to obtain the driver's posture data, expression data and the like.
Preferably, the driver sight line data are used to infer the driver's observation field of view and thereby collect driver field-of-view data; based on the driver posture data and driver expression data, the driver's perception information about other vehicles within the observation field, such as the number of other vehicles in the field, their travel speeds and their distances from the driver's vehicle, is then acquired so as to construct the driver perception model.
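As an illustration of the preceding paragraph, the sketch below infers which surrounding vehicles fall within a driver's observation field of view given a gaze direction; the 119-degree default mirrors the field-of-view value used in the scenario of Example 2, while the function names and angle convention are assumptions.

```python
def in_field_of_view(gaze_deg: float, bearing_deg: float,
                     fov_deg: float = 119.0) -> bool:
    """True if a vehicle's bearing lies within the driver's field of view."""
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (bearing_deg - gaze_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def visible_vehicles(gaze_deg: float, bearings: dict) -> list:
    """IDs of the surrounding vehicles the driver can currently observe."""
    return [vid for vid, b in bearings.items() if in_field_of_view(gaze_deg, b)]

# Gaze straight ahead (0 deg): vehicle 2 dead ahead is visible, while
# vehicle 3 at 70 deg to the right falls outside a 119-degree field.
print(visible_vehicles(0.0, {2: 0.0, 3: 70.0}))   # -> [2]
```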
S13, constructing a driver behavior model based on the driver perception model and the driver operation behavior data. The driver operation behavior data comprise the current driver operation behavior data A_d^t; the driver behavior model M_d is constructed based on the driver perception model P̂_d^t and the current driver operation behavior data A_d^t, the estimated value obtained being denoted M̂_d^t.
Optionally, the driver operation behavior data are obtained through vehicle-mounted sensors, such as sensors arranged on the clutch, the steering wheel, the seat and the like; they may also be obtained through a camera module comprising, for example, an image capturing device, a storage device and a processing device. The image capturing device includes, but is not limited to, cameras, video cameras, camera modules integrating an optical system with a CCD chip, camera modules integrating an optical system with a CMOS chip, and the like.
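One simple realization of the behavior model constructed in S13, offered only as a sketch under stated assumptions, is a conditional frequency table mapping a discretized perceived state to a distribution over operation behaviors; the state discretization and the Laplace smoothing below are illustrative choices, not the patent's method.

```python
from collections import Counter, defaultdict

class DriverBehaviorModel:
    """M_d: estimates P(action | perceived state) from logged pairs."""

    def __init__(self) -> None:
        self.counts = defaultdict(Counter)   # state -> Counter of actions

    def update(self, state: str, action: str) -> None:
        """Record one (perceived state, operation behavior) pair."""
        self.counts[state][action] += 1

    def predict(self, state: str, actions: list) -> dict:
        """Laplace-smoothed action distribution for a perceived state."""
        c = self.counts[state]
        total = sum(c[a] + 1 for a in actions)
        return {a: (c[a] + 1) / total for a in actions}

model = DriverBehaviorModel()
model.update("car_ahead_slowing", "brake")
model.update("car_ahead_slowing", "brake")
model.update("car_ahead_slowing", "overtake")
print(model.predict("car_ahead_slowing", ["brake", "overtake", "keep"]))
# -> {'brake': 0.5, 'overtake': 0.333..., 'keep': 0.166...}
```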
S14, constructing a vehicle perception model P_c^t based on the vehicle observation data O_c^t, and predicting the vehicle-perceived driving scene S_c^{t+1} in combination with the driver behavior model.
The vehicle observation data acquisition devices include, but are not limited to, vehicle-mounted lidar, cameras, ultrasonic sensors and the like.
In a preferred implementation of this embodiment, the prediction of the vehicle-perceived driving scene proceeds as follows: next-moment driver operation behavior data are predicted based on the driver behavior model; the next-moment vehicle-perceived driving scene is then predicted based on the vehicle perception model, the next-moment driver operation behavior data and the driver behavior models of the other vehicles.
In some examples, the prediction of the vehicle-perceived driving scene may also be expressed as follows: according to the driver behavior model, infer the m behaviors A = {a_1, a_2, …, a_m} that the driver may take at the next moment. Vehicles other than the host vehicle are assumed to keep the direction and magnitude of their velocity unchanged, and their next-moment states are estimated accordingly. According to the vehicle perception model, the m different host-vehicle states after performing the behaviors in A are considered and combined with the estimated states of the other vehicles to obtain the vehicle-perceived driving scene at the next moment.
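The enumeration just described can be sketched as follows. The constant-velocity extrapolation for other vehicles comes directly from the passage above; the state representation, the modeling of each candidate action as an acceleration pair, and all names are assumptions.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VehicleState:
    x: float    # longitudinal position, m
    y: float    # lateral position, m
    vx: float   # longitudinal speed, m/s
    vy: float   # lateral speed, m/s

def advance(s: VehicleState, dt: float) -> VehicleState:
    """Constant velocity: direction and magnitude of speed kept unchanged."""
    return replace(s, x=s.x + s.vx * dt, y=s.y + s.vy * dt)

def predict_scenes(ego: VehicleState, others: dict,
                   candidate_actions: dict, dt: float = 1.0) -> dict:
    """One predicted scene per candidate driver behavior a_1, ..., a_m.

    Each candidate action is modeled here as a (longitudinal, lateral)
    acceleration pair, an illustrative simplification.
    """
    scenes = {}
    for name, (ax, ay) in candidate_actions.items():
        ego_next = VehicleState(ego.x + ego.vx * dt, ego.y + ego.vy * dt,
                                ego.vx + ax * dt, ego.vy + ay * dt)
        scene = {0: ego_next}   # id 0: the host vehicle
        scene.update({vid: advance(s, dt) for vid, s in others.items()})
        scenes[name] = scene
    return scenes
```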
Step S15, predicting the driver-perceived driving scene S_d^{t+1} based on the driver perception model P̂_d^t, comparing it with the vehicle-perceived driving scene S_c^{t+1}, and obtaining the driver's missing information from the comparison so as to provide assistance to the driver.
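A minimal sketch of this comparison step, assuming both perceived scenes are represented as mappings from vehicle IDs to estimated positions; the representation and the distance tolerance are illustrative assumptions.

```python
def missing_information(driver_scene: dict, vehicle_scene: dict,
                        tol_m: float = 5.0) -> dict:
    """Compare S_d^{t+1} with S_c^{t+1} and report what the driver lacks.

    A vehicle is 'missing' if the driver does not perceive it at all, and
    'misplaced' if the driver's position estimate deviates beyond tol_m.
    """
    gaps = {}
    for vid, (x, y) in vehicle_scene.items():
        if vid not in driver_scene:
            gaps[vid] = "missing"
        else:
            dx, dy = driver_scene[vid][0] - x, driver_scene[vid][1] - y
            if (dx * dx + dy * dy) ** 0.5 > tol_m:
                gaps[vid] = "misplaced"
    return gaps
```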
In a preferred implementation of this embodiment, the driving environment includes a plurality of vehicles, and the driver perception model is constructed as follows: the target vehicle establishes communication connections with other vehicles to obtain the driver behavior models of those vehicles; a plurality of suspected perception models of the target vehicle's driver are constructed based on the driver observation behavior data; the number of suspected perception models is reduced through a game based on the driver behavior models of the other vehicles; and the driver observation behavior data and driver operation behavior data are updated, with the number of remaining suspected perception models further reduced based on the updated data, so as to obtain the driver perception model.
The above manner of obtaining the driver-perceived driving scene may also be expressed as follows: according to the driver perception model, estimate all m possible behaviors A = {a_1, a_2, …, a_m} of the driver at the next moment, obtaining m possible driving scenes. The perception models of other drivers are obtained through interaction with surrounding vehicles; the surrounding vehicles and the target vehicle's possible driving scenes are then played against each other in a game, reducing the number of possible driving scenes, and the most probable result is taken as the final driver-perceived driving scene.
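Under loud assumptions, the game-based pruning can be sketched as a likelihood filter: each suspected perception model is scored by how probable it makes the behavior actually observed, after one round of interaction against the surrounding drivers' behavior models, and low-scoring candidates are discarded. Everything below, including the keep ratio, is hypothetical.

```python
from typing import Callable

def prune_candidates(candidates: list, observed_action: str,
                     action_likelihood: Callable[[dict, str], float],
                     keep_ratio: float = 0.5) -> list:
    """Keep the suspected perception models most consistent with behavior.

    action_likelihood(model, action) should return P(action | model), e.g.
    obtained from the target driver's behavior model after playing out one
    step of interaction with the surrounding drivers' behavior models.
    """
    scored = sorted(candidates,
                    key=lambda m: action_likelihood(m, observed_action),
                    reverse=True)
    keep = max(1, int(len(scored) * keep_ratio))
    return scored[:keep]
```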
In a preferred embodiment of the present invention, the driving assistance method further includes: evaluating the risk level of the acquired driver missing information, and providing corresponding assistance to the driver based on the risk level. Specifically, the risk level is set by analyzing whether the current missing information may lead to a traffic accident, the severity of the accident that may occur, the probability of its occurrence, and so on. For example, the probability of a traffic accident may be set to correlate positively with the risk level, and missing information that may cause casualties may be set to a higher level (e.g., missing visual-field information on the sidewalk side, which may lead to pedestrian injury). The higher the level, the higher the priority with which the missing information should be brought to the driver's attention, and the more direct and salient the reminder should be.
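As a sketch of the grading rule just described (accident probability positively correlated with level, casualty-prone gaps such as a missing sidewalk-side view escalated), with all thresholds and weights as assumptions:

```python
def risk_level(accident_probability: float, may_cause_injury: bool,
               near_sidewalk: bool) -> int:
    """Map one piece of missing information to a risk level 0..3."""
    level = 0
    if accident_probability > 0.1:
        level = 1
    if accident_probability > 0.4:
        level = 2
    # Gaps that may injure people, e.g. a missing view toward the
    # sidewalk side, are escalated by one level.
    if may_cause_injury or near_sidewalk:
        level = min(level + 1, 3)
    return level

# A higher level implies a higher reminder priority and a more direct,
# salient reminding modality (e.g. sound plus display instead of display only).
```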
As shown in fig. 2, this embodiment proposes a schematic driving assistance model comprising a driver, a driving environment and a vehicle; the vehicle is equipped with both a conventional driving assistance system and a cognitive-model assistance system (i.e., the driving assistance system proposed by the present invention). Conventional driving assistance systems are mostly provided with radar sensors, laser sensors, camera modules and the like; with this hardware they sense the driving environment and prompt the driver or directly control the vehicle, as in Forward Collision Warning (FCW) and Lane Departure Warning (LDW) systems. The driving assistance system provided by the present invention uses the information acquired by the sensors, camera module, eye tracker and the like to construct a real-time model of the driver's perception of the surrounding driving environment, thereby providing the driver with more accurate and partially interpretable driving assistance on the basis of the current perception model. Since the objects in the model and their relationships have been described in detail above, they are not repeated here.
In some embodiments, the method may be applied to a controller, such as an ARM (Advanced RISC Machines) controller, an FPGA (Field Programmable Gate Array) controller, an SoC (System on Chip) controller, a DSP (Digital Signal Processing) controller, or an MCU (Microcontroller Unit) controller, among others. In some embodiments, the method is also applicable to computers including components such as memory, memory controllers, one or more processing units (CPUs), peripheral interfaces, RF circuits, audio circuits, speakers, microphones, input/output (I/O) subsystems, display screens, other output or control devices, and external ports; such computers include, but are not limited to, personal computers such as desktop computers, notebook computers, tablet computers, smart phones, smart televisions, Personal Digital Assistants (PDAs), and the like. In other embodiments, the method may also be applied to servers, which may be arranged on one or more physical servers or formed of a distributed or centralized server cluster, depending on factors such as function and load.
Example two
This embodiment proposes a driving assistance system and takes the driving scenario shown in figs. 3 and 4 as an example to explain the system's functional application in detail.
In fig. 3 and 4, a1 and a2 represent actual scenes, b1 and b2 represent car observations, c1 and c2 represent car estimation results, and d1 and d2 represent car perception results; e1 and e2 represent driver observations, f1 and f2 represent driver estimates, and g1 and g2 represent driver perceptions.
In this scenario, four vehicles travel on three lanes; for clarity, the vehicles are distinguished by the numbers 1 to 4. Their positions at the initial moment are shown in a1, and their positions at the next moment are shown in a2. During this period, vehicles No. 1, No. 2 and No. 4 accelerate while vehicle No. 3 decelerates. In this example, the driver's sight line is directed straight ahead, and the field of view is 119° relative to vehicle No. 1.
The driving scenario gives the following information: relative to vehicle No. 1, vehicle No. 2 has almost the same speed, vehicle No. 4 has a slightly higher speed, and vehicle No. 3 has a much lower speed. Vehicle No. 1 is performing a following action with respect to vehicle No. 2.
Data acquired by the sensors on vehicle No. 1 about the surrounding conditions show that at the initial moment, vehicles No. 2, 3 and 4 are observed within sensor range, as shown in b1; at the next moment, only vehicles No. 2 and 4 are observed, as shown in b2.
For this transition, the driving assistance system proposed in this embodiment can estimate that vehicle No. 3 is, with a large probability, behind the observation range and, with a small probability, still within it, as shown in c1 and c2, and can further obtain the vehicle's perception of the driving environment, as shown in d1 and d2.
For such a vehicle perception result, an unmanned-driving system would feed it into an end-to-end model and, through training and learning, obtain the model's output as a driving operation; the unexplainable nature of that output corresponds to the unexplainable nature of the vehicle behavior model. Conventional driving assistance systems use the perception result in one of two ways: displaying it directly to the driver, or performing risk analysis on it and reminding the driver at the moment the result is judged to be risky.
Both conventional approaches convey too much information to the driver, which can distract the driver while driving or increase distrust of the driving assistance system. In the present invention, the driver-perceived scene and the vehicle-perceived scene are obtained and compared through the constructed driver perception model, driver behavior model and vehicle perception model, so that the information the driver really needs is distilled; the driver can thus remain more attentive while driving, trust in the driving assistance system increases, and the system can provide better assistance during driving.
The system collects the driver's observation behavior information through sensors such as an eye tracker. With the driver's sight line kept straight ahead, the following two driver observations are inferred: at the initial moment, the driver's observation includes vehicle No. 2 ahead and vehicle No. 3 at the front right, as shown in e1; at the next moment, the driver's observation includes only vehicle No. 2 ahead, as shown in e2.
Observing that the driver merely keeps following the vehicle ahead up to the next moment, and combining the two inferred observations above, the driver's perception of the scene can be estimated as follows: at the initial moment, only vehicles No. 2 and No. 3 are within the line of sight, as shown in f1; at the next moment, since vehicle No. 3 has disappeared from the line of sight and only vehicle No. 2 is observed, as shown in f2, the driver's estimate of vehicle No. 3's approximate position is that it is most likely to the right rear and less likely directly behind vehicle No. 1, as shown in g1 and g2.
At this point, the driver behavior model at the current moment can be estimated from the driver perception model combined with the driver's vehicle-following behavior mentioned above and the driver behavior model of the previous moment (a driver behavior model obtained from historical data may also be taken into account). The resulting model can infer that, after the next moment, the driver will with large probability perform an overtaking maneuver and with small probability keep following, since this is a very common and reasonable strategy in real-world scenarios.
Combining the vehicle's perception with the driver behavior model mentioned above, the system derives the strategy that the driver should, with large probability, keep following and, with small probability, slow down or change lanes to the right. Since the vehicle knows that vehicle No. 4 is gradually accelerating at this time, vehicle No. 1 would likely cause a collision if a lane-change maneuver were chosen now. Comparing this strategy with the estimated driver strategy, the risk prompting unit of the system sounds an alert and displays a prompt advising the driver to keep following at this moment and prohibiting changing lanes to the left to overtake, so as to avoid an accident caused by colliding with vehicle No. 4.
If the driver does not comply with the prompt and performs an operation other than the prompted one, two situations can be distinguished: (1) the driver performs another operation that causes no danger; (2) the driver performs a dangerous operation. In case (1), the system infers and updates the driver perception model at the subsequent moment from the driver's operation and, combining the information at that moment, performs further risk analysis and possibly prompts the driver. In case (2), the system continues to prompt for a short period, e.g., 1 s; if the driver still chooses the dangerous maneuver, the system considers that the driver is aware of the risk of the impending behavior.
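The two branches can be summarized in the following sketch; the 1 s re-prompt window comes from the text above, while the function names and the is_dangerous predicate are assumptions.

```python
import time
from typing import Callable

def handle_noncompliance(driver_op: str,
                         is_dangerous: Callable[[str], bool],
                         update_perception_model: Callable[[str], None],
                         prompt: Callable[[], None],
                         window_s: float = 1.0) -> str:
    """Handle a driver operation that deviates from the system's prompt."""
    if not is_dangerous(driver_op):
        # Case (1): a safe deviation is used to update the driver
        # perception model for further risk analysis at the next moment.
        update_perception_model(driver_op)
        return "model_updated"
    # Case (2): a dangerous operation -> keep prompting for ~1 s.
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        prompt()
        time.sleep(0.1)
    # The driver persisted, so the system concludes the driver is aware
    # of the risk of the impending behavior.
    return "driver_assumed_aware"
```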
In conclusion, the driving assistance system provided by the invention greatly improves the intelligence of driving assistance, strengthens the driver's trust in the assistance system, reduces the number of times the assistance system interrupts the driver's attention, and, through the optimized assistance information, better ensures the safety of drivers, vehicles and pedestrians.
Example three
As shown in fig. 5, this embodiment proposes a driving assistance system comprising: the data acquisition module 51, used for acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles; the driver perception model building module 52, used for building a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles; the driver behavior model building module 53, used for building a driver behavior model based on the driver perception model and the driver operation behavior data; the vehicle perception driving scene prediction module 54, used for constructing a vehicle perception model based on the vehicle observation data and predicting a vehicle-perceived driving scene in combination with the driver behavior model; and the missing information acquisition module 55, used for predicting a driver-perceived driving scene based on the driver perception model and acquiring the driver's missing information after comparing it with the vehicle-perceived driving scene, so as to provide assistance to the driver.
It should be noted that the modules provided in this embodiment are similar to the methods and embodiments provided above, and therefore, the description thereof is omitted. It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the missing information acquiring module 55 may be a separate processing element, or may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and a processing element of the apparatus calls and executes the function of the missing information acquiring module 55. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Example four
The present embodiment proposes a computer-readable storage medium on which a computer program is stored which, when being executed by a processor, implements the driving assistance method described above.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Example five
As shown in fig. 6, an embodiment of the present invention provides a schematic structural diagram of an electronic terminal. The electronic terminal provided by this embodiment comprises: a processor 61, a memory 62, and a communicator 63. The memory 62 is connected to the processor 61 and the communicator 63 through a system bus, and they communicate with one another; the memory 62 is used for storing a computer program, the communicator 63 is used for communicating with other devices, and the processor 61 is used for running the computer program so that the electronic terminal executes the steps of the driving assistance method.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library and a read-only library). The Memory may include a Random Access Memory (RAM), and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The Processor may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component.
In summary, the invention provides a driving assistance method, system, medium and terminal. By observing driver behavior, information about the driver's perception and decision-making is acquired and the driver's perception model and behavior model are established; driver behavior models are shared by making full use of the Internet of Vehicles, so that more accurate assistance is provided to the driver, cooperation between the driver and the assistance system is promoted, and overall traffic safety is improved. The invention thus effectively overcomes various disadvantages of the prior art and has high industrial utility value.
The foregoing embodiments merely illustrate the principles and utility of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (10)

1. A driving assistance method characterized by comprising:
acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles;
constructing a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior model of other vehicles;
constructing a driver behavior model based on the driver perception model and the driver operation behavior data;
constructing a vehicle perception model based on the vehicle observation data, and predicting a vehicle perception driving scene by combining the driver behavior model;
and predicting a driver perception driving scene based on the driver perception model, and comparing the driver perception driving scene with the vehicle perception driving scene to acquire driver missing information so as to provide assistance for the driver.
2. The driving assistance method according to claim 1, characterized by comprising: evaluating the risk level of the acquired driver missing information; providing corresponding assistance to the driver based on the risk level.
3. The driving assistance method according to claim 1, wherein the prediction manner in which the vehicle perceives the driving scene includes:
predicting and acquiring next-moment driver operation behavior data based on the driver behavior model; and predicting and acquiring the vehicle perception driving scene of the next moment based on the vehicle perception model, the next moment driver operation behavior data and the driver behavior models of the other vehicles.
4. The driving assistance method according to claim 1, wherein a plurality of vehicles are included in a driving environment, and the driver perception model is constructed in a manner that:
the target vehicle is in communication connection with other vehicles to obtain driver behavior models of the other vehicles;
constructing suspected perception models of a plurality of target vehicle drivers based on the driver observation behavior data;
reducing the number of suspected perception models of the driver of the target vehicle through a game based on the driver behavior models of other vehicles;
and updating the observed behavior data of the driver and the operation behavior data of the driver, and reducing the number of the remaining suspected perception models based on the updated observed behavior data of the driver and the operation behavior data of the driver so as to acquire the perception model of the driver.
5. The driving assist method according to claim 1, characterized in that the driver observed behavior data includes current driver observed behavior data and driver observed behavior data of the immediately preceding moment; the construction mode of the driver perception model comprises the following steps:
acquiring a current driver observation result based on the previous driver observation behavior data and the current vehicle observation data;
and constructing the driver perception model based on the current driver observation result and the current driver observation behavior data.
6. The driving assistance method according to claim 1, characterized in that the driver operational behavior data includes current driver operational behavior data; the method comprises the following steps: and constructing the driver behavior model based on the driver perception model and the current driver operation behavior data.
7. The driving assistance method according to claim 1, characterized in that the driver observed behavior data includes driver sight line data, driver posture data, and driver expression data; the method comprises the following steps:
inferring a driver's field of view based on the driver gaze data and collecting driver field of view data; and acquiring the perception information of the driver to other vehicles in the observation field of the driver based on the posture data and the expression data of the driver so as to construct the driver perception model.
8. A driving assistance system characterized by comprising:
the data acquisition module is used for acquiring driver observation behavior data, driver operation behavior data, vehicle observation data and driver behavior models of other vehicles;
the driver perception model building module is used for building a driver perception model based on the driver observation behavior data, the vehicle observation data and the driver behavior models of other vehicles;
the driver behavior model building module is used for building a driver behavior model based on the driver perception model and the driver operation behavior data;
the vehicle perception driving scene prediction module is used for constructing a vehicle perception model based on the vehicle observation data and predicting a vehicle perception driving scene by combining the driver behavior model;
and the missing information acquisition module is used for predicting a driver perception driving scene based on the driver perception model and acquiring the driver's missing information after comparing the driver perception driving scene with the vehicle perception driving scene, so as to provide assistance for the driver.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the driving assistance method according to any one of claims 1 to 7.
10. An electronic terminal, comprising: a processor and a memory;
the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory to cause the terminal to execute the driving assistance method according to any one of claims 1 to 7.
CN202110420338.2A, filed 2021-04-19, priority 2021-04-19: Driving assistance method, system, medium and terminal. Granted as CN113232668B; status: Active.

Priority Applications (1)

CN202110420338.2A (priority date 2021-04-19, filing date 2021-04-19): Driving assistance method, system, medium and terminal; granted as CN113232668B.

Publications (2)

CN113232668A, published 2021-08-10
CN113232668B, published 2022-08-30

Family

ID: 77128582

Family Applications (1)

CN202110420338.2A (Active), granted as CN113232668B, priority date 2021-04-19, filing date 2021-04-19: Driving assistance method, system, medium and terminal

Country Status (1)

CN: CN113232668B


Also Published As

CN113232668B, published 2022-08-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant