CN113570901A - Vehicle driving assisting method and device - Google Patents

Vehicle driving assisting method and device

Info

Publication number
CN113570901A
CN113570901A
Authority
CN
China
Prior art keywords
road condition information
gazing
estimation model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010343228.6A
Other languages
Chinese (zh)
Other versions
CN113570901B (en)
Inventor
王云飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing 7Invensun Technology Co Ltd
Original Assignee
Beijing 7Invensun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing 7Invensun Technology Co Ltd
Priority to CN202010343228.6A
Publication of CN113570901A
Application granted
Publication of CN113570901B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/16 - Anti-collision systems
    • G08G 1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W 40/02 - related to ambient conditions
    • B60W 40/04 - Traffic conditions
    • B60W 40/08 - related to drivers or passengers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mechanical Engineering (AREA)
  • Transportation (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a vehicle driving assistance method and device. The method includes: acquiring road condition information of a current road condition and current gaze information to be evaluated, wherein the gaze information to be evaluated is the gaze information of the driver or mapped gaze information, and the mapped gaze information is the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle; inputting at least the road condition information of the current road condition into a pre-trained gaze estimation model, and obtaining, through the gaze estimation model, safe gaze information under the current road condition, wherein the safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information, and the gaze estimation model is trained in advance on at least the road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information; and comparing the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.

Description

Vehicle driving assisting method and device
Technical Field
The present disclosure relates to driving assistance technologies, and in particular, to a method and an apparatus for assisting vehicle driving.
Background
Because fully automatic driving remains technically difficult, one main direction of current autonomous-driving research is to organically combine the driver's behavior with the vehicle's monitoring of its surroundings, so as to provide a safe-driving guarantee for the driver.
Current intelligent driving assistance systems can monitor road conditions and give the driver prompts in certain well-defined dangerous scenarios, for example, issuing an alarm when the distance to the vehicle ahead falls below a preset safe distance.
However, this existing approach is simplistic: it does not effectively combine the driver's behavior with the vehicle's surroundings, and therefore cannot provide an effective safety guarantee for the driver.
Disclosure of Invention
In view of the above defects of the prior art, the present invention provides a vehicle driving assistance method and device, aiming to solve the problem that the prior art does not effectively combine the driver's behavior with the vehicle's surroundings and therefore cannot provide an effective safety guarantee for the driver.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a method of assisting driving of a vehicle, including:
acquiring road condition information of a current road condition and current gazing information to be evaluated, wherein the gazing information to be evaluated is gazing information of a driver or mapping gazing information, and the mapping gazing information is gazing information which maps the gazing information of the driver to a picture of a front camera of a current driving vehicle;
at least inputting the road condition information of the current road condition into a pre-trained sight estimation model, and obtaining safe watching information under the current road condition through the sight estimation model, wherein the safe watching information is full-view watching information which meets the safe driving requirement corresponding to the road condition information, and the sight estimation model is obtained by adopting at least road condition information of a plurality of different road conditions and safe watching information corresponding to each road condition information in advance for training;
and comparing the gazing information to be evaluated with the safety gazing information to obtain an evaluation result.
Optionally, in the method for assisting in driving a vehicle, the comparing the gazing information to be evaluated with the safety gazing information to obtain an evaluation result includes:
comparing the coincidence degree of the gazing information to be evaluated and the safety gazing information, and grading the gazing quality of the gazing information to be evaluated according to the coincidence degree to obtain a safety driving coefficient; wherein the higher the degree of overlap, the higher the safe driving coefficient.
Optionally, in the above vehicle driving assistance method, after the safe driving coefficient is obtained, the method further includes:
determining whether the safe driving coefficient is lower than a preset alarm coefficient;
and if the safe driving coefficient is determined to be lower than the preset alarm coefficient, sending prompt information to the driver.
Optionally, in the above vehicle driving assistance method, the training method of the gaze estimation model includes:
acquiring road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
inputting each piece of road condition information into the gaze estimation model, and calculating, through the gaze estimation model, the gaze information corresponding to that road condition information;
comparing the calculated gaze information corresponding to the road condition information with the safe gaze information corresponding to the road condition information;
if the degree of overlap between the calculated gaze information and the corresponding safe gaze information is greater than a preset threshold, determining that training of the gaze estimation model is complete;
and if the degree of overlap between the calculated gaze information and the corresponding safe gaze information is not greater than the preset threshold, adjusting the parameters of the gaze estimation model and returning to the step of inputting each piece of road condition information into the gaze estimation model and calculating the corresponding gaze information through the gaze estimation model.
Optionally, in the above vehicle driving assistance method, the obtaining of the safe gaze information corresponding to each piece of road condition information includes:
acquiring, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements;
and using the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or mapping the gaze information of the driver who meets the safety requirements into the frame captured by the front camera of the currently driven vehicle and using the mapped gaze information as the safe gaze information corresponding to that road condition information.
Optionally, in the above vehicle driving assistance method, before the inputting of the road condition information of the current road condition into the pre-trained gaze estimation model and the obtaining of the safe gaze information under the current road condition through the gaze estimation model, the method further includes:
acquiring vehicle condition information of the current vehicle;
wherein the inputting of at least the road condition information of the current road condition into the pre-trained gaze estimation model and the obtaining of the safe gaze information under the current road condition through the gaze estimation model includes:
inputting the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained gaze estimation model, and obtaining, through the gaze estimation model, the safe gaze information under the current road condition and the current vehicle condition; the gaze estimation model is trained in advance on the road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information.
A second aspect of the present application provides a vehicle driving assistance device, including:
a first acquisition unit, configured to acquire road condition information of a current road condition and current gaze information to be evaluated, wherein the gaze information to be evaluated is the gaze information of the driver or mapped gaze information, and the mapped gaze information is the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle;
an estimation unit, configured to input at least the road condition information of the current road condition into a pre-trained gaze estimation model and to obtain, through the gaze estimation model, safe gaze information under the current road condition, wherein the safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information, and the gaze estimation model is trained in advance on at least the road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
and an evaluation unit, configured to compare the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.
Optionally, in the above vehicle driving assistance device, the evaluation unit includes:
a first comparison unit, configured to compare the degree of overlap between the gaze information to be evaluated and the safe gaze information, and to score the gaze quality of the gaze information to be evaluated according to the degree of overlap to obtain a safe driving coefficient; wherein the higher the degree of overlap, the higher the safe driving coefficient.
Optionally, the vehicle driving assistance device further includes:
a judging unit, configured to determine whether the safe driving coefficient is lower than a preset alarm coefficient;
and a prompting unit, configured to send prompt information to the driver when the judging unit determines that the safe driving coefficient is lower than the preset alarm coefficient.
Optionally, the vehicle driving assistance device further includes a model training unit, wherein the model training unit includes:
a second acquisition unit, configured to acquire road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
an input unit, configured to input each piece of road condition information into the gaze estimation model and to calculate, through the gaze estimation model, the gaze information corresponding to that road condition information;
a second comparison unit, configured to compare the calculated gaze information corresponding to the road condition information with the safe gaze information corresponding to the road condition information;
a determining unit, configured to determine that training of the gaze estimation model is complete when the degree of overlap obtained by the second comparison unit is greater than a preset threshold;
and an adjusting unit, configured to adjust the parameters of the gaze estimation model when the degree of overlap obtained by the second comparison unit is not greater than the preset threshold, and to return to the step of inputting each piece of road condition information into the gaze estimation model and calculating the corresponding gaze information through the gaze estimation model.
Optionally, in the above vehicle driving assistance device, the second acquisition unit includes:
a second acquisition subunit, configured to acquire, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements;
and a mapping unit, configured to use the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or to map the gaze information of the driver who meets the safety requirements into the frame captured by the front camera of the currently driven vehicle and use the mapped gaze information as the safe gaze information corresponding to that road condition information.
Optionally, the vehicle driving assistance device further includes:
a third acquisition unit, configured to acquire vehicle condition information of the current vehicle;
wherein the estimation unit, when inputting at least the road condition information of the current road condition into the pre-trained gaze estimation model and obtaining the safe gaze information under the current road condition through the gaze estimation model, is configured to:
input the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained gaze estimation model, and obtain, through the gaze estimation model, the safe gaze information under the current road condition and the current vehicle condition; the gaze estimation model is trained in advance on the road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information.
The present application provides a vehicle driving assistance method that acquires the road condition information of the current road condition and the current gaze information to be evaluated. The gaze information to be evaluated is the driver's gaze information or mapped gaze information, the mapped gaze information being the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle. At least the road condition information of the current road condition is then input into a pre-trained gaze estimation model, and the safe gaze information under the current road condition is obtained through the model. Finally, the gaze information to be evaluated is compared with the safe gaze information to evaluate how safe the driver's gaze pattern is and to obtain an evaluation result for assisting driving. In this way the driver's behavior, namely the gaze information, is effectively combined with the surroundings, providing an effective safety guarantee for the driver.
Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a vehicle driving assistance system according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a training method of a gaze estimation model according to an embodiment of the present application;
Fig. 3 is a schematic flowchart of a method for acquiring the safe gaze information corresponding to each piece of road condition information according to another embodiment of the present application;
Fig. 4 is a schematic flowchart of a vehicle driving assistance method according to another embodiment of the present application;
Fig. 5 is a schematic flowchart of another vehicle driving assistance method according to another embodiment of the present application;
Fig. 6 is a schematic structural diagram of a vehicle driving assistance device according to another embodiment of the present application;
Fig. 7 is a schematic structural diagram of a model training unit according to another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments given herein without creative effort fall within the protection scope of the present invention.
In this application, relational terms such as first and second are used only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The present application provides a vehicle driving assistance method, aiming to solve the problem that the prior art does not effectively combine the driver's behavior with the vehicle's surroundings and therefore cannot provide an effective safety guarantee for the driver.
First, it should be noted that, to implement the vehicle driving assistance method provided by the present application, an embodiment of the present application provides a vehicle driving assistance system, as shown in Fig. 1, which specifically includes: a vehicle gaze information estimation subsystem 101, a driver gaze information estimation subsystem 102, and a driving evaluation subsystem 103.
The vehicle gaze information estimation subsystem 101 is mainly configured to obtain road condition information and vehicle condition information collected by information acquisition devices on the vehicle, such as video captured by on-board cameras and readings from various vehicle sensors, and then to input the obtained information into a pre-trained gaze estimation model to obtain full-view safe gaze information that meets the safe-driving requirements corresponding to the input information.
The driver gaze information estimation subsystem 102 is mainly used to acquire the driver's face image, eye image, and the like, and to derive the driver's gaze information, namely the gaze information to be evaluated, from them. Specifically, since the driver's gaze information is usually described in the coordinate frame of the vehicle's driver monitoring system (DMS), the driver gaze information estimation subsystem 102 may further include a gaze information mapping subsystem 104 for mapping the obtained driver gaze information into the frame captured by the front camera of the currently driven vehicle, that is, converting driver-centered gaze information into vehicle-centered gaze information.
The driving evaluation subsystem 103 is mainly used to compare the gaze information to be evaluated with the safe gaze information to obtain an evaluation result, that is, to determine how safe the driver's current gaze is, and to give a timely warning or gaze suggestion when safety is low. Specifically, the evaluation may score the gaze quality and issue a warning when the score falls below a preset value.
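To make the division of labor among the three subsystems concrete, the following is a minimal sketch of one evaluation cycle, assuming each subsystem is exposed as a simple callable. The responsibilities follow Fig. 1, but the function names, interfaces, and the example alarm value are illustrative assumptions, not part of this disclosure:

```python
def assist_driving_step(road_info, face_eye_images,
                        estimate_safe_gaze,       # subsystem 101: model inference
                        estimate_driver_gaze,     # subsystem 102: eye tracking (+ mapping 104)
                        evaluate,                 # subsystem 103: comparison and scoring
                        alarm_coefficient=0.6):   # hypothetical preset alarm coefficient
    """One cycle of the assistance system sketched in Fig. 1."""
    safe_gaze = estimate_safe_gaze(road_info)                  # subsystem 101
    gaze_to_evaluate = estimate_driver_gaze(face_eye_images)   # subsystem 102 (+104)
    coefficient = evaluate(gaze_to_evaluate, safe_gaze)        # subsystem 103
    if coefficient < alarm_coefficient:                        # warn when safety is low
        print("Warning: current gaze pattern is unsafe; "
              "suggested safe gaze:", safe_gaze)
    return coefficient
```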
Since the vehicle driving assistance method provided by the present application relies on a pre-trained gaze estimation model, another embodiment of the present application provides a training method for the gaze estimation model, as shown in Fig. 2, including:
S201: acquiring road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information.
Since the gaze estimation model in this embodiment mainly outputs the corresponding safe gaze information from at least the input road condition information, the training samples include at least the road condition information under a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information. If the gaze estimation model is to output safe gaze information from the road condition information together with other information, such as vehicle condition information, then the training samples include the road condition information, the other information, and the safe gaze information corresponding to each combination.
The safe gaze information corresponding to a piece of road condition information refers to the gaze information of a driver who can guarantee driving safety under the corresponding road condition, that is, an experienced driver who meets the safety requirements.
Optionally, another embodiment of the present application provides a specific implementation of acquiring the safe gaze information corresponding to each piece of road condition information, as shown in Fig. 3, including:
S301: acquiring, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements.
Specifically, for a number of experienced drivers who meet the safety requirements, that is, drivers whose driving experience reaches the required number of years, who have never had a safety accident, and who have good driving habits, the road condition information of the road conditions is collected while the drivers' gaze information is collected synchronously. This yields a large amount of full-view gaze information from drivers who meet the safety requirements, including not only gaze information when the driver looks ahead but also gaze information when the driver looks at the side mirrors, the rear-view mirror, and other directions.
S302: using the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or mapping it into the frame captured by the front camera of the currently driven vehicle and using the mapped gaze information as the safe gaze information corresponding to that road condition information.
It should be noted that the collected gaze information of a driver who meets the safety requirements is described in the vehicle's DMS, that is, in a coordinate system with the driver as the origin, whereas the road condition information is captured by the front camera. The driver's gaze information can therefore be mapped into the frame captured by the front camera of the currently driven vehicle and used as the safe gaze information corresponding to the road condition information; that is, vehicle-centered safe gaze information is used as the training sample, so that the safe gaze information output by the trained gaze estimation model is also vehicle-centered.
Of course, the gaze information of the driver who meets the safety requirements may also be used directly as the safe gaze information corresponding to the road condition information, and the safe gaze information output by the trained gaze estimation model can still be used to evaluate the current driver's gaze information and give a corresponding prompt. Alternatively, during use, if the acquired gaze information to be evaluated is the driver's gaze information mapped into the frame of the front camera of the currently driven vehicle, the safe gaze information output by the gaze estimation model can first be mapped into the frame captured by the front camera, and the gaze information to be evaluated is then evaluated against the mapped safe gaze information.
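As an illustration of the coordinate conversion described above, the following is a minimal sketch assuming a pinhole front-camera model; the extrinsic transform T_dms_to_cam, the intrinsic matrix K, and all numeric values are hypothetical calibration data, not specified by this disclosure:

```python
import numpy as np

def map_gaze_to_camera_frame(gaze_point_dms, T_dms_to_cam, K):
    """Map a 3D gaze point from the driver-centered DMS coordinate system
    into pixel coordinates of the front-camera frame.

    gaze_point_dms: (3,) point the driver looks at, DMS coordinates (meters).
    T_dms_to_cam:   (4, 4) homogeneous DMS-to-camera transform (assumed
                    available from extrinsic calibration).
    K:              (3, 3) front-camera intrinsic matrix.
    """
    p = np.append(gaze_point_dms, 1.0)      # homogeneous coordinates
    p_cam = T_dms_to_cam @ p                # into camera coordinates
    if p_cam[2] <= 0:                       # behind the camera: not in view
        return None
    uv = K @ p_cam[:3]                      # pinhole projection
    return uv[:2] / uv[2]                   # normalize to pixel coordinates

# Example with made-up calibration values:
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
T[:3, 3] = [0.0, -0.5, -1.2]                # hypothetical DMS-to-camera offset
print(map_gaze_to_camera_frame(np.array([0.0, 0.0, 20.0]), T, K))
```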
S202, inputting each road condition information into the sight line estimation model, and calculating by the sight line estimation model to obtain the watching information corresponding to the road condition information.
And S203, judging whether the coincidence degree of the watching information corresponding to the calculated road condition information and the safety watching information corresponding to the road condition information is greater than a preset threshold value or not.
And comparing the watching information corresponding to the calculated road condition information with the safe watching information corresponding to the road condition information in the training sample to judge whether the coincidence degree of the watching information corresponding to the calculated road condition information and the safe watching information corresponding to the road condition information is greater than a preset threshold value. If the coincidence degree of the gaze information corresponding to the calculated road condition information and the safe gaze information corresponding to the road condition information in the training sample is greater than the preset threshold value, it indicates that the sight estimation model can output the safe gaze information meeting the safety requirement according to the road condition information, so step S204 is executed at this time. If the coincidence degree of the gaze information corresponding to the calculated road condition information and the safe gaze information corresponding to the road condition information in the training sample is not greater than the preset threshold value by comparison, the sight line estimation model does not achieve the expected effect yet, and the training needs to be continued, so that step S205 is executed at this time.
And S204, finishing the training of the sight estimation model.
And S205, adjusting parameters of the sight line estimation model.
It should be noted that, after the parameters are adjusted, the sight line estimation model needs to be trained further, so that the step S202 needs to be returned to again input each road condition information into the sight line estimation model, and the gaze information corresponding to the road condition information is obtained through calculation of the sight line estimation model, so that the optimum parameters are obtained through continuous training and adjustment, and the sight line estimation model achieves the expected effect.
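The training procedure of steps S201 to S205 can be pictured with the following sketch. It only illustrates the loop structure under the assumption of a generic gradient-trained model; the PyTorch-style optimizer, the model class, and the differentiable overlap metric are assumptions, since the disclosure does not fix a model architecture or adjustment rule:

```python
import torch

def train_gaze_model(model, samples, overlap, threshold=0.9,
                     lr=1e-3, max_epochs=100):
    """Training loop mirroring S201-S205: predict gaze for each road
    condition, compare its overlap with the safe gaze label, and keep
    adjusting parameters until every overlap exceeds the threshold.

    samples: list of (road_condition_tensor, safe_gaze_tensor) pairs (S201).
    overlap: callable returning a differentiable overlap score in [0, 1].
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(max_epochs):
        scores = []
        for road_cond, safe_gaze in samples:
            pred_gaze = model(road_cond)             # S202: model output
            score = overlap(pred_gaze, safe_gaze)    # S203: degree of overlap
            (1.0 - score).backward()                 # S205: adjust parameters
            opt.step()
            opt.zero_grad()
            scores.append(score.item())
        if min(scores) > threshold:                  # S204: training complete
            return model
    return model
```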
Based on the above vehicle driving assistance system and the trained gaze estimation model, another embodiment of the present application provides a vehicle driving assistance method, as shown in Fig. 4, including:
S401: acquiring road condition information of the current road condition and the current gaze information to be evaluated, wherein the gaze information to be evaluated is the driver's gaze information or mapped gaze information.
The gaze information to be evaluated is the driver's gaze information or mapped gaze information, and may specifically include the position of the gaze point, the gaze direction, and the like. The mapped gaze information is the driver's gaze information mapped into the frame of the front camera of the currently driven vehicle.
It should be noted that the road condition information of the current road condition may specifically include route information, pedestrians and other objects at the roadside, and pedestrians, vehicles, and obstacles on the road. The road condition information is mainly acquired through the vehicle's front camera; of course, additional road condition information can be acquired through devices such as ultrasonic sensors.
Specifically, eye-tracking technology can be integrated into the vehicle's DMS: an eye image of the driver is obtained in real time, the driver's current eye features are extracted by analyzing the eye image, and the driver's gaze information is calculated from these features. The obtained gaze information is used directly as the gaze information to be evaluated, or is first mapped into the frame of the front camera of the currently driven vehicle and the mapped result is used as the gaze information to be evaluated.
S402: inputting at least the road condition information of the current road condition into the pre-trained gaze estimation model, and obtaining, through the gaze estimation model, the safe gaze information under the current road condition, wherein the safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information.
The safe gaze information can also be understood as gaze information, covering viewing angles in all directions, that meets the safe-driving requirements under the road condition corresponding to the current road condition information. The gaze estimation model is trained in advance on at least the road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information.
Optionally, in another embodiment of the application, the gaze estimation model is trained in advance on the road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information. In that case, before executing step S402, the method may further include: acquiring the vehicle condition information of the current vehicle.
The vehicle condition information corresponding to a piece of road condition information refers to the state of the vehicle when it is in the corresponding road condition, and may specifically include the engine speed, vehicle speed, gear, whether the brake is applied, and whether a turn signal is on. Under different vehicle conditions, the safe gaze information that meets the driving safety requirements differs. For example, at a higher speed the driver's gaze point should be farther away than at a lower speed, leaving enough time to react and ensuring driving safety; when the driver turns on the left turn signal, the driver usually needs to watch the area to the left of the vehicle; and when the driver brakes, the driver usually needs to watch the area within about 1 to 2 meters of the vehicle. The vehicle condition information is therefore additionally considered in this embodiment of the present application.
Accordingly, the specific implementation of step S402 in this embodiment is: inputting the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained gaze estimation model, and obtaining, through the gaze estimation model, the safe gaze information under the current road condition and the current vehicle condition.
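A minimal sketch of this inference step follows, assuming the model consumes a front-camera frame plus a small vehicle-state vector and returns recommended gaze regions; the feature layout, normalization constants, and the model interface are hypothetical, as the disclosure does not fix an input encoding:

```python
import numpy as np

def build_vehicle_state(speed_kmh, gear, braking, left_signal, right_signal):
    """Encode the vehicle condition information mentioned in S402 as a
    fixed-length feature vector (layout is an illustrative assumption)."""
    return np.array([speed_kmh / 200.0,       # normalized speed
                     gear / 6.0,              # normalized gear index
                     float(braking),
                     float(left_signal),
                     float(right_signal)], dtype=np.float32)

def estimate_safe_gaze(model, front_frame, vehicle_state):
    """Run the pre-trained gaze estimation model on the current road
    condition (front-camera frame) and vehicle condition, returning the
    safe gaze information, e.g. a set of (u, v, weight) gaze regions."""
    return model(front_frame, vehicle_state)

# Usage:
# state = build_vehicle_state(80.0, 4, braking=False,
#                             left_signal=True, right_signal=False)
# safe_gaze = estimate_safe_gaze(trained_model, frame, state)
```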
S403: comparing the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.
The evaluation result is used to help the driver drive safely. Specifically, the safety of the driver's current gaze pattern is evaluated by comparing whether the gaze information to be evaluated is consistent with the safe gaze information, and the driver is prompted or warned when safety is determined to be low, thereby assisting driving.
Optionally, in another embodiment of the present application, as shown in Fig. 5, a specific implementation of comparing the gaze information to be evaluated with the safe gaze information to obtain an evaluation result is step S501, specifically: comparing the degree of overlap between the gaze information to be evaluated and the safe gaze information, and scoring the gaze quality of the gaze information to be evaluated according to the degree of overlap to obtain a safe driving coefficient.
That is, this embodiment evaluates the gaze information to be evaluated by scoring. The higher the degree of overlap between the gaze information to be evaluated and the safe gaze information, the closer the driver's gaze information is to the safe gaze information and the safer the driving at that moment, so the score is based on the degree of overlap. Specifically, a safe driving coefficient can be preset for each degree of overlap; after the degree of overlap between the gaze information to be evaluated and the safe gaze information is obtained by comparison, the safe driving coefficient of the gaze information to be evaluated is determined from the preset correspondence.
Optionally, in this embodiment of the application, still referring to Fig. 5, after the degree of overlap between the gaze information to be evaluated and the safe gaze information is compared and the gaze quality is scored according to the degree of overlap to obtain the safe driving coefficient, steps S502 and S503 are further executed, specifically:
S502: judging whether the safe driving coefficient is lower than a preset alarm coefficient.
A safe driving coefficient lower than the preset alarm coefficient indicates that the driver's current gaze pattern carries a major potential safety hazard, so step S503 is executed when the safe driving coefficient is determined to be lower than the preset alarm coefficient.
S503: sending prompt information to the driver.
Specifically, the prompt information sent to the driver can be played as a voice prompt telling the driver that the current gaze pattern is dangerous and suggesting the correct gaze pattern, so that, prompted in this way, the driver's gaze information comes to overlap closely with the safe gaze information. Alarm information can also be displayed on the central control screen, and the correct gaze pattern can be displayed on the vehicle's front windshield. A sketch of steps S501 to S503 follows this paragraph.
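In the sketch below, the degree of overlap is computed as an intersection-over-union of gaze heatmaps and then mapped to a safe driving coefficient through a preset lookup. Both the IoU choice and the lookup values are illustrative assumptions, since the disclosure only requires that a higher overlap yield a higher coefficient:

```python
import numpy as np

# Preset correspondence between degree of overlap and safe driving
# coefficient (values are hypothetical): overlap >= key -> coefficient.
OVERLAP_TO_COEFFICIENT = [(0.8, 1.0), (0.6, 0.8), (0.4, 0.5), (0.0, 0.2)]
ALARM_COEFFICIENT = 0.6  # preset alarm coefficient (hypothetical value)

def degree_of_overlap(gaze_to_evaluate, safe_gaze):
    """S501: overlap of two gaze heatmaps (same-shape arrays in [0, 1]),
    computed here as intersection over union."""
    inter = np.minimum(gaze_to_evaluate, safe_gaze).sum()
    union = np.maximum(gaze_to_evaluate, safe_gaze).sum()
    return float(inter / union) if union > 0 else 0.0

def safe_driving_coefficient(overlap):
    """S501: score gaze quality from the degree of overlap using the
    preset correspondence; higher overlap yields a higher coefficient."""
    for threshold, coefficient in OVERLAP_TO_COEFFICIENT:
        if overlap >= threshold:
            return coefficient
    return 0.0

def check_and_prompt(gaze_to_evaluate, safe_gaze):
    """S502/S503: warn the driver when the coefficient falls below the
    preset alarm coefficient."""
    coeff = safe_driving_coefficient(
        degree_of_overlap(gaze_to_evaluate, safe_gaze))
    if coeff < ALARM_COEFFICIENT:
        print("Alarm: gaze pattern unsafe (coefficient %.2f)" % coeff)
    return coeff
```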
Optionally, if the safe driving coefficient obtained under the road condition corresponding to the current road condition information is low, routes that bypass this type of road condition can be preferred the next time a route is planned for the driver, so as to improve driving safety.
According to the vehicle driving assistance method of the embodiments of the present application, the road condition information of the current road condition and the current gaze information to be evaluated are acquired. The gaze information to be evaluated is the driver's gaze information or mapped gaze information, the mapped gaze information being the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle. At least the road condition information of the current road condition is then input into the pre-trained gaze estimation model, and the safe gaze information under the current road condition is obtained through the model. Finally, the gaze information to be evaluated is compared with the safe gaze information to evaluate how safe the driver's gaze pattern is and to obtain an evaluation result for assisting driving. In this way the driver's behavior, namely the gaze information, is effectively combined with the surroundings, providing an effective safety guarantee for the driver.
Another embodiment of the present application provides a vehicle driving assistance device, as shown in Fig. 6, including:
a first acquisition unit 601, configured to acquire the road condition information of the current road condition and the current gaze information to be evaluated.
The gaze information to be evaluated is the driver's gaze information or mapped gaze information, the mapped gaze information being the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle.
An estimation unit 602, configured to input at least the road condition information of the current road condition into a pre-trained gaze estimation model and to obtain, through the gaze estimation model, the safe gaze information under the current road condition.
The safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information, and the gaze estimation model is trained in advance on at least the road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information.
An evaluation unit 603, configured to compare the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.
For the specific working process of the above units, reference may be made to steps S401 to S403 in the foregoing method embodiment, which is not repeated here.
Optionally, in another embodiment of the present application, the evaluation unit 603 includes:
a first comparison unit, configured to compare the degree of overlap between the gaze information to be evaluated and the safe gaze information, and to score the gaze quality of the gaze information to be evaluated according to the degree of overlap to obtain the safe driving coefficient.
For the specific working process of the first comparison unit, reference may be made to step S501 in the foregoing method embodiment, which is not repeated here.
The higher the degree of overlap, the higher the safe driving coefficient.
Optionally, another embodiment of the present application provides a vehicle driving assistance device further including:
a judging unit, configured to determine whether the safe driving coefficient is lower than a preset alarm coefficient;
and a prompting unit, configured to send prompt information to the driver when the judging unit determines that the safe driving coefficient is lower than the preset alarm coefficient.
For the specific working processes of the judging unit and the prompting unit, reference may be made to steps S502 and S503 in the foregoing method embodiment, which are not repeated here.
Optionally, another embodiment of the present application provides a vehicle driving assistance device further including a model training unit. As shown in Fig. 7, the model training unit includes:
a second acquisition unit 701, configured to acquire road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
an input unit 702, configured to input each piece of road condition information into the gaze estimation model and to calculate, through the gaze estimation model, the gaze information corresponding to that road condition information;
a second comparison unit 703, configured to compare the calculated gaze information corresponding to the road condition information with the safe gaze information corresponding to the road condition information;
a determining unit 704, configured to determine that training of the gaze estimation model is complete when the degree of overlap obtained by the second comparison unit 703 is greater than a preset threshold;
and an adjusting unit 705, configured to adjust the parameters of the gaze estimation model when the degree of overlap obtained by the second comparison unit 703 is not greater than the preset threshold, and to return to the input unit 702 to input each piece of road condition information into the gaze estimation model again and calculate the corresponding gaze information through the gaze estimation model.
For the specific working process of the above units, reference may be made to steps S201 to S205 in the foregoing method embodiment, which is not repeated here.
Optionally, in another embodiment of the present application, the second acquisition unit 701 includes:
a second acquisition subunit, configured to acquire, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements;
and a mapping unit, configured to use the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or to map it into the frame captured by the front camera of the currently driven vehicle and use the mapped gaze information as the safe gaze information corresponding to that road condition information.
For the specific working processes of the second acquisition subunit and the mapping unit, reference may be made to steps S301 and S302 in the foregoing method embodiment, which are not repeated here.
Optionally, another embodiment of the present application provides a vehicle driving assistance device further including:
a third acquisition unit, configured to acquire the vehicle condition information of the current vehicle.
The estimation unit 602, when inputting at least the road condition information of the current road condition into the pre-trained gaze estimation model and obtaining the safe gaze information under the current road condition through the gaze estimation model, is configured to:
input the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained gaze estimation model, and obtain, through the gaze estimation model, the safe gaze information under the current road condition and the current vehicle condition; the gaze estimation model is trained in advance on the road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information.
According to the vehicle driving assistance device of this embodiment of the application, the first acquisition unit acquires the road condition information of the current road condition and the current gaze information to be evaluated. The gaze information to be evaluated is the driver's gaze information or mapped gaze information, the mapped gaze information being the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle. The estimation unit then inputs at least the road condition information of the current road condition into the pre-trained gaze estimation model and obtains the safe gaze information under the current road condition through the model. The safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information, so the evaluation unit compares the gaze information to be evaluated with the safe gaze information, evaluating how safe the driver's gaze pattern is and obtaining an evaluation result for assisting driving. In this way the driver's behavior, namely the gaze information, is effectively combined with the surroundings, providing an effective safety guarantee for the driver.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A vehicle driving assistance method, characterized by comprising:
acquiring road condition information of a current road condition and current gaze information to be evaluated, wherein the gaze information to be evaluated is the gaze information of the driver or mapped gaze information, and the mapped gaze information is the driver's gaze information mapped into the frame of a front camera of the currently driven vehicle;
inputting at least the road condition information of the current road condition into a pre-trained gaze estimation model, and obtaining, through the gaze estimation model, safe gaze information under the current road condition, wherein the safe gaze information is full-view gaze information that meets the safe-driving requirements corresponding to the road condition information, and the gaze estimation model is trained in advance on at least the road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
and comparing the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.
2. The assistance method according to claim 1, wherein the comparing the gaze information to be evaluated with the safe gaze information to obtain an evaluation result comprises:
comparing the degree of overlap between the gaze information to be evaluated and the safe gaze information, and scoring the gaze quality of the gaze information to be evaluated according to the degree of overlap to obtain a safe driving coefficient; wherein the higher the degree of overlap, the higher the safe driving coefficient.
3. The assistance method according to claim 2, wherein, after the safe driving coefficient is obtained, the method further comprises:
determining whether the safe driving coefficient is lower than a preset alarm coefficient;
and if the safe driving coefficient is determined to be lower than the preset alarm coefficient, sending prompt information to the driver.
4. The assistance method according to claim 1, wherein the training method of the sight-line estimation model includes:
acquiring a plurality of road condition information of a plurality of different road conditions and safety watching information corresponding to each road condition information;
inputting each piece of road condition information into the sight estimation model, and calculating by the sight estimation model to obtain watching information corresponding to the road condition information;
comparing the watching information corresponding to the road condition information obtained by calculation with the safety watching information corresponding to the road condition information;
if the coincidence degree of the gaze information corresponding to the calculated road condition information and the safe gaze information corresponding to the road condition information is greater than a preset threshold value, it is determined that the training of the sight estimation model is completed;
and if the coincidence degree of the watching information corresponding to the calculated road condition information and the safe watching information corresponding to the road condition information is not larger than a preset threshold value, adjusting parameters of the sight estimation model, returning to execute, inputting each road condition information into the sight estimation model, and calculating the watching information corresponding to the road condition information through the sight estimation model.
5. The assistance method according to claim 4, wherein obtaining the safe gaze information corresponding to each piece of road condition information comprises:
acquiring, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements;
and using the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or mapping the gaze information of the driver who meets the safety requirements onto an image captured by the front camera of the vehicle currently being driven and using the mapped gaze information as the safe gaze information corresponding to that road condition information.
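Claim 5's second alternative maps a safety-compliant driver's gaze into the front camera's image. One common way to realize such a mapping, assuming the gaze is available as a 3D ray in vehicle coordinates and the front camera is calibrated as a pinhole model with extrinsics (R_cam, t_cam) and intrinsics K, is to pick a point along the ray and project it. All of these calibration inputs are assumptions; the patent does not specify the mapping.

```python
import numpy as np

def map_gaze_to_front_camera(gaze_origin: np.ndarray,
                             gaze_dir: np.ndarray,
                             depth: float,
                             R_cam: np.ndarray,
                             t_cam: np.ndarray,
                             K: np.ndarray) -> tuple:
    """Project a gaze ray given in vehicle coordinates onto the front
    camera image, assuming a calibrated pinhole camera.

    depth is the assumed distance to the gazed point along the ray; a
    real system would take it from scene geometry or a depth sensor.
    """
    point_vehicle = gaze_origin + depth * gaze_dir / np.linalg.norm(gaze_dir)
    point_cam = R_cam @ point_vehicle + t_cam   # vehicle -> camera frame
    u_v_w = K @ point_cam                       # camera frame -> pixels
    return u_v_w[0] / u_v_w[2], u_v_w[1] / u_v_w[2]  # perspective divide
```

In practice, the projected gaze points of a safety-compliant driver would be accumulated into a gaze map over the image, which then serves as the safe gaze information for that road condition.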
6. The assistance method according to claim 1, further comprising, before inputting the road condition information of the current road condition into the pre-trained sight estimation model and obtaining the safe gaze information under the current road condition through the sight estimation model:
acquiring vehicle condition information of the current vehicle;
wherein inputting at least the road condition information of the current road condition into the pre-trained sight estimation model and obtaining the safe gaze information under the current road condition through the sight estimation model comprises:
inputting the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained sight estimation model, and obtaining, through the sight estimation model, safe gaze information under the current road condition and the current vehicle condition, wherein the sight estimation model is trained in advance on road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information.
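Claim 6 widens the model input from road condition information alone to road condition plus vehicle condition (speed and steering angle are plausible examples; the patent does not enumerate them). In a learned model this is most simply realized by concatenating the two feature vectors before inference, as in the hypothetical sketch below.

```python
import numpy as np

def build_model_input(road_condition_features: np.ndarray,
                      vehicle_condition_features: np.ndarray) -> np.ndarray:
    """Concatenate road condition features (e.g. an embedding of the
    front-camera frame) with vehicle condition features (e.g. speed,
    steering angle) into a single input vector for the sight
    estimation model."""
    return np.concatenate([road_condition_features.ravel(),
                           vehicle_condition_features.ravel()])

# Illustrative call: a 128-dim road condition embedding plus
# [speed_kmh, steering_deg] as the vehicle condition.
x = build_model_input(np.zeros(128), np.array([62.0, -3.5]))
```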
7. An assistance apparatus for vehicle driving, comprising:
a first acquisition unit, configured to acquire road condition information of a current road condition and gaze information to be evaluated, wherein the gaze information to be evaluated is either gaze information of a driver or mapped gaze information, the mapped gaze information being the driver's gaze information mapped onto an image captured by a front camera of the vehicle currently being driven;
an estimation unit, configured to input at least the road condition information of the current road condition into a pre-trained sight estimation model and obtain, through the sight estimation model, safe gaze information under the current road condition, wherein the safe gaze information is full-field-of-view gaze information that meets the safe driving requirements corresponding to the road condition information, and the sight estimation model is trained in advance on at least road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
and an evaluation unit, configured to compare the gaze information to be evaluated with the safe gaze information to obtain an evaluation result.
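Claim 7 recasts the method of claim 1 as an apparatus assembled from three units. A thin wiring sketch of that decomposition, with the unit interfaces assumed as plain callables (the patent prescribes no concrete interfaces):

```python
class AssistanceDevice:
    """Wiring of the three units of claim 7; each unit is assumed to
    be a callable supplied by the surrounding system."""

    def __init__(self, first_acquisition_unit, estimation_unit,
                 evaluation_unit):
        self.acquire = first_acquisition_unit   # road condition + gaze
        self.estimate = estimation_unit         # sight estimation model
        self.evaluate = evaluation_unit         # coincidence comparison

    def step(self):
        road_condition, gaze_to_evaluate = self.acquire()
        safe_gaze = self.estimate(road_condition)
        return self.evaluate(gaze_to_evaluate, safe_gaze)
```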
8. The assistance apparatus according to claim 7, wherein the evaluation unit comprises:
a first comparison unit, configured to compare the degree of coincidence between the gaze information to be evaluated and the safe gaze information, and to grade the gaze quality of the gaze information to be evaluated according to the degree of coincidence to obtain a safe driving coefficient, wherein the higher the degree of coincidence, the higher the safe driving coefficient.
9. The assistance apparatus according to claim 8, further comprising:
a judging unit, configured to judge whether the safe driving coefficient is lower than a preset alarm coefficient;
and a prompting unit, configured to send prompt information to the driver when the judging unit judges that the safe driving coefficient is lower than the preset alarm coefficient.
10. The assistance apparatus according to claim 7, further comprising a model training unit, wherein the model training unit comprises:
a second acquisition unit, configured to acquire road condition information of a plurality of different road conditions and the safe gaze information corresponding to each piece of road condition information;
an input unit, configured to input each piece of road condition information into the sight estimation model and to calculate, through the sight estimation model, the gaze information corresponding to that road condition information;
a second comparison unit, configured to compare the calculated gaze information with the safe gaze information corresponding to the same road condition information;
a determining unit, configured to determine that training of the sight estimation model is complete when the degree of coincidence obtained by the second comparison unit is greater than a preset threshold;
and an adjusting unit, configured to adjust parameters of the sight estimation model when the degree of coincidence obtained by the second comparison unit is not greater than the preset threshold, and to return to inputting each piece of road condition information into the sight estimation model and calculating the corresponding gaze information through the sight estimation model.
11. The assistance apparatus according to claim 10, wherein the second acquisition unit comprises:
a second acquiring subunit, configured to acquire, under the road condition corresponding to each piece of road condition information, the gaze information of at least one driver who meets the safety requirements;
and a mapping unit, configured to use the gaze information of the driver who meets the safety requirements as the safe gaze information corresponding to that road condition information, or to map the gaze information of the driver who meets the safety requirements onto an image captured by the front camera of the vehicle currently being driven and use the mapped gaze information as the safe gaze information corresponding to that road condition information.
12. The assistance apparatus according to claim 7, further comprising:
a third acquisition unit, configured to acquire vehicle condition information of the current vehicle;
wherein, when inputting at least the road condition information of the current road condition into the pre-trained sight estimation model and obtaining the safe gaze information under the current road condition through the sight estimation model, the estimation unit is configured to:
input the road condition information of the current road condition and the vehicle condition information of the current vehicle into the pre-trained sight estimation model, and obtain, through the sight estimation model, safe gaze information under the current road condition and the current vehicle condition, wherein the sight estimation model is trained in advance on road condition information of a plurality of different road conditions, the vehicle condition information corresponding to each piece of road condition information, and the safe gaze information corresponding to each pair of road condition information and vehicle condition information.
CN202010343228.6A 2020-04-27 2020-04-27 Vehicle driving assisting method and device Active CN113570901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010343228.6A CN113570901B (en) 2020-04-27 2020-04-27 Vehicle driving assisting method and device

Publications (2)

Publication Number Publication Date
CN113570901A (en) 2021-10-29
CN113570901B CN113570901B (en) 2022-09-20

Family

ID=78157661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010343228.6A Active CN113570901B (en) 2020-04-27 2020-04-27 Vehicle driving assisting method and device

Country Status (1)

Country Link
CN (1) CN113570901B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006163828A (en) * 2004-12-07 2006-06-22 Nissan Motor Co Ltd Alarm device for vehicle, and method of alarming ambient condition of vehicle
CN101512617A * 2006-09-04 2009-08-19 Panasonic Corp Travel information providing device
CN105083291A * 2014-04-25 2015-11-25 Clarion Co Ltd Driver assistance system based on sight line detection
CN105235595A * 2015-10-30 2016-01-13 Inspur Group Co Ltd Driving assistance method, automobile data recorder, display device and system
CN110254510A * 2018-03-08 2019-09-20 Steering Solutions IP Holding Corp System and method for driver readiness assessment of a vehicle
CN110348281A * 2018-04-04 2019-10-18 Aisin Seiki Co Ltd Alarm device

Similar Documents

Publication Publication Date Title
CN112965504B (en) Remote confirmation method, device and equipment based on automatic driving and storage medium
US9707971B2 (en) Driving characteristics diagnosis device, driving characteristics diagnosis system, driving characteristics diagnosis method, information output device, and information output method
CN112289003B (en) Method for monitoring end-of-driving behavior of fatigue driving and active safety driving monitoring system
US8085140B2 (en) Travel information providing device
CN114026611A (en) Detecting driver attentiveness using heatmaps
US10232772B2 (en) Driver assistance system
JP2005267108A (en) Driving support device
CN108604413B (en) Display device control method and display device
CN110544368B (en) Fatigue driving augmented reality early warning device and early warning method
JP2018127099A (en) Display control unit for vehicle
CN111366168A (en) AR navigation system and method based on multi-source information fusion
CN111351474B (en) Vehicle moving target detection method, device and system
US20230234618A1 (en) Method and apparatus for controlling autonomous vehicle
CN114872713A (en) Device and method for monitoring abnormal driving state of driver
JP6891926B2 (en) Vehicle systems, methods performed on vehicle systems, and driver assistance systems
CN111152792A (en) Device and method for determining the level of attention demand of a vehicle driver
CN214775848U (en) A-column display screen-based obstacle detection device and automobile
CN111231946B (en) Low-sight-distance vehicle safe driving control method
CN113570901B (en) Vehicle driving assisting method and device
CN112319486A (en) Driving detection method based on driving data acquisition and related device
JP2020095466A (en) Electronic device
JP7043795B2 (en) Driving support device, driving status information acquisition system, driving support method and program
CN113421402A (en) Passenger body temperature and fatigue driving behavior detection system and method based on infrared camera
CN113212451A (en) Rearview auxiliary system for intelligent driving automobile
CN109649398B (en) Navigation assistance system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant