CN112550306A - Vehicle driving assistance system, vehicle including the same, and corresponding method and medium - Google Patents
- Publication number
- CN112550306A (application number CN201910854196.3A)
- Authority
- CN
- China
- Prior art keywords
- driver
- semantic
- distraction
- vehicle
- level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W40/09—Driving style or behaviour
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/143—Alarm means
- B60W2050/146—Display means
Abstract
The invention provides a vehicle driving assistance system, a vehicle including the system, a corresponding vehicle driving assistance method, and a computer-readable storage medium. The system comprises: a voice collection unit that collects the voice of the driver in the vehicle; a semantic analysis unit that performs semantic analysis on the collected voice and classifies it into a corresponding semantic category according to the result of the analysis; a distraction evaluation unit that evaluates the driver's level of distraction based on the semantic category and determines whether that level exceeds a threshold; and a reminder unit that provides a reminder signal to the driver in response to determining that the driver's level of distraction exceeds the threshold. With the scheme of the invention, the driver can be alerted that his or her distraction may have become severe enough to pose a safety risk, prompting the driver to take corresponding action so that distraction does not compromise driving safety.
Description
Technical Field
The present invention relates to the field of vehicle technologies, and more particularly, to a vehicle driving assistance system, a vehicle including the vehicle driving assistance system, a corresponding vehicle driving assistance method, and a computer-readable storage medium.
Background
Driver attention is essential to driving safety. Traffic regulations expressly prohibit a driver from making or receiving calls while driving (and in some cases prohibit passengers from talking to the driver), yet drivers nevertheless sometimes talk with occupants or take calls at the wheel. A Bluetooth headset or an in-vehicle voice telephony system makes calling more convenient, but a call can still distract the driver, especially when its content sets the driver thinking or affects the driver's mood.
There is therefore a need in the art for a solution that alerts a driver, in real time, to the driving-safety risk that arises when making or receiving calls and/or talking with occupants in the vehicle distracts the driver.
Disclosure of Invention
Accordingly, an object of the present invention is to provide a solution that reminds a driver, in real time, of the driving-safety risk arising when the driver makes or receives calls and/or talks with passengers in the vehicle, so that the driver becomes aware that this behavior potentially constitutes a safety hazard, thereby improving driving safety.
Specifically, according to a first aspect of the present invention, there is provided a vehicle driving assist system, the system including:
a voice collecting unit configured to collect a voice of a driver in the vehicle;
a semantic analysis unit configured to perform semantic analysis on the voice collected by the voice collection unit and to classify the collected voice into a corresponding semantic category according to the result of the semantic analysis;
a distraction evaluation unit configured to evaluate the driver's level of distraction based on the classified semantic category and to determine whether that level exceeds a threshold; and
a reminder unit configured to provide a reminder signal to the driver in response to the distraction evaluation unit determining that the level of distraction of the driver exceeds a threshold.
In one embodiment, the semantic analysis unit includes a semantic classification model trained by machine learning on speech samples with class labels and configured to classify the speech collected by the voice collection unit into a corresponding one of several semantic categories.
In one embodiment, the semantic analysis unit is configured to perform one or more semantic class classifications on the captured speech over a predetermined period of time to produce a respective one or more semantic classes, wherein each semantic class corresponds to a respective distraction value.
In one embodiment, the distraction evaluation unit is further configured to perform a weighted summation of one or more distraction values corresponding to the respective one or more semantic categories over the predetermined time period, and to determine the level of distraction of the driver over the predetermined time period based on the result of the weighted summation.
In one embodiment, the vehicle driving assistance system further includes: an image acquisition unit configured to acquire an image related to an object around a vehicle, and/or a vehicle state information acquisition unit configured to acquire state information data related to driving of the vehicle.
In one embodiment, the distraction evaluation unit is further configured to detect whether there is an objective factor distracting the driver's attention based on the image acquired by the image acquisition unit and/or the state information data acquired by the vehicle state information acquisition unit, and adjust the evaluated level of distraction based on the result of the detection.
According to a second aspect of the invention, there is provided a vehicle comprising a system according to the first aspect of the invention.
According to a third aspect of the present invention, there is provided a driving assistance method for a vehicle, the method comprising:
collecting the voice of a driver in a vehicle;
performing semantic analysis on the collected voice, and classifying the collected voice into corresponding semantic categories according to the result of the semantic analysis;
evaluating a driver's distraction level based on the classified semantic categories and determining whether the driver's distraction level exceeds a threshold; and
providing a reminder signal to the driver in response to determining that the driver's level of distraction exceeds the threshold.
In one embodiment, the method further comprises: training a semantic classification model through machine learning on speech samples with class labels, and classifying the collected speech into a corresponding one of a plurality of semantic categories by means of the semantic classification model.
In one embodiment, the method further comprises: performing one or more semantic category classifications on the collected speech over a predetermined time period to produce a corresponding one or more semantic categories, wherein each semantic category corresponds to a respective distraction value.
In one embodiment, the method further comprises: performing a weighted summation of the one or more distraction values corresponding to the respective one or more semantic categories over the predetermined time period, and determining the driver's level of distraction over that period from the result of the weighted summation.
In one embodiment, the method further comprises: acquiring an image relating to objects around the vehicle, and acquiring state information data relating to the driving of the vehicle.
In one embodiment, the method further comprises: detecting, based on the acquired image and/or state information data, whether there is an objective factor distracting the driver's attention, and adjusting the evaluated level of distraction based on the result of the detection.
According to a fourth aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to the third aspect of the invention.
With the scheme of the invention, when a driver's calls and/or conversations with passengers in the vehicle distract the driver, the driver's level of distraction is objectively evaluated from the content of the call and/or conversation; when that level exceeds a threshold and may create a safety risk, the driver is reminded in real time that his or her behavior potentially constitutes a safety hazard, thereby improving driving safety.
Drawings
Non-limiting and non-exhaustive embodiments of the present invention are described by way of example with reference to the following drawings, in which:
fig. 1 is a schematic diagram of a vehicle driving assist system according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a vehicle driving assist method according to an embodiment of the invention.
Detailed Description
In order to make the above and other features and advantages of the present invention more apparent, the present invention is further described below with reference to the accompanying drawings. It is understood that the specific embodiments described herein are for purposes of illustration only and are not intended to be limiting.
The present invention provides a method of alerting a driver, during a call and/or a conversation with an occupant, that his or her distraction may have become severe enough to cause a safety risk, thereby prompting the driver to take corresponding action (e.g., end the call or conversation, or stop the vehicle) so that the distraction does not compromise driving safety.
In a first aspect of the present invention, this object is achieved by providing a vehicle driving assistance system.
Fig. 1 is a schematic diagram of a vehicle driving assistance system 100 according to an embodiment of the present invention.
As shown in fig. 1, the system 100 includes a voice acquisition unit 110, a semantic analysis unit 120, a distraction evaluation unit 130, and a reminder unit 140.
The voice collection unit 110 may be configured to collect the voice of the driver in the vehicle. In some embodiments, the voice collection unit 110 may include a microphone array disposed within the vehicle. The microphone array can perform sound source localization to recognize and collect sound waves (voice) from a specific person (here, the driver). Sound source localization is a technique that uses a microphone array to calculate the angle and distance of a target speaker so as to track the speaker and directionally pick up subsequent speech; it is an important and mature preprocessing technique in fields such as human-machine interaction and audio/video conferencing. With the microphone array, the voice collection unit 110 may focus only on (e.g., recognize, collect, and process) the driver's voice, ignoring other sound waves and/or speech from other persons in the vehicle. In some embodiments, the voice collection unit 110 may include the telephone call system the driver uses to make and receive calls (e.g., a Bluetooth call system, including a Bluetooth headset), through which the driver's voice is collected. The invention contemplates any device or system capable of collecting the voice of the driver in the vehicle.
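The patent does not specify a localization algorithm. As an illustrative sketch only, a speaker's bearing can be estimated from the time difference of arrival (TDOA) between two microphones, found as the lag that maximizes the cross-correlation of the two channels; the microphone spacing and far-field approximation below are assumptions for the example, not part of the disclosure.

```python
import numpy as np

SOUND_SPEED = 343.0   # speed of sound in air, m/s
MIC_SPACING = 0.1     # assumed distance between the two microphones, m

def estimate_bearing(sig_left, sig_right, sample_rate):
    """Estimate the bearing (degrees) of a speaker from a two-microphone
    array: the lag maximizing the cross-correlation of the channels gives
    the TDOA, and under a far-field approximation
    sin(theta) = SOUND_SPEED * tdoa / MIC_SPACING."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_right) - 1)  # lag in samples
    tdoa = lag / sample_rate                           # seconds
    sin_theta = np.clip(SOUND_SPEED * tdoa / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

A real system would run such an estimate continuously over short frames and gate the voice pipeline on bearings consistent with the driver's seat.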
The voice collection unit 110 is communicatively coupled (wired and/or wireless) to the semantic analysis unit 120 and passes the collected voice to it for analysis. The semantic analysis unit 120 may be configured to perform semantic analysis on the voice collected by the voice collection unit 110 and classify the collected voice into a corresponding semantic category according to the result of the analysis. In some embodiments, the semantic analysis unit 120 includes a semantic classification model trained by machine learning on speech samples with class labels and configured to classify the collected speech into a corresponding one of several semantic categories. Regarding speech recognition and semantic classification of speech, Chinese patent application CN104123936 discloses an automatic training method for a dialogue system, a dialogue system, and a control device for a vehicle, and Chinese patent application CN103179122 discloses a method and system for preventing telephone fraud based on analysis of the semantic content of speech; both discuss semantic recognition and classification of speech in detail, and their contents are incorporated herein by reference. In addition, any semantic analysis technique that recognizes content in speech (e.g., semantic classification based on a convolutional neural network (CNN)) may be used with the present invention. For example, a classification model/network may be trained on data samples with class labels and then used to classify one or more collected voices, and/or one or more portions of them, into the corresponding one or more most likely categories.
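The cited applications and the CNN approach are not reproduced here. As a minimal stand-in for the classification step, a hypothetical keyword-scoring classifier shows its shape; the category names and keyword lists are invented for the example, and a trained model would replace them.

```python
# Hypothetical keyword lexicon standing in for a trained semantic model.
CATEGORY_KEYWORDS = {
    "decision_making":     {"decide", "approve", "deadline", "choose"},
    "business_discussion": {"contract", "client", "budget", "quote"},
    "technical_qa":        {"error", "configure", "install", "restart"},
    "small_talk":          {"weather", "weekend", "lunch", "movie"},
}

def classify_utterance(text):
    """Return the semantic category whose keyword set best overlaps the
    transcribed utterance; defaults to small talk when nothing matches."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "small_talk"
```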
In this regard, semantic classification and analysis of speech may draw on data processing and analysis techniques such as machine learning, artificial intelligence, and cloud computing.
In some embodiments, the semantic analysis unit 120 may be configured to perform one or more semantic category classifications on the collected speech over a predetermined time period, producing a corresponding one or more semantic categories, each of which corresponds to a respective distraction value. The driver's speech over the period may contain content belonging to several semantic categories that distract the driver to different degrees, so each semantic category corresponds to/indicates/has a distraction value: the higher the value, the greater the distraction. Semantic categories may include, for example, decision-making, business discussion, and answering technical questions. Depending on how strongly such content sets the driver thinking or affects the driver's mood (and thus distracts the driver), these categories may be assigned respective distraction values: for example, 0.8 for decision-making, 0.7 for business discussion, and 0.5 for answering technical questions, indicating that decision-making may distract the driver more and answering technical questions less. In some cases, the mere presence of a particular semantic category in the collected speech may be evaluated as a greater level of distraction. In other embodiments, the driver's level of distraction may be evaluated from a composite of the individual distraction values (e.g., from a weighted summation, as described below).
The semantic analysis unit 120 is communicatively coupled (wired and/or wireless) to the distraction evaluation unit 130. In some embodiments, the distraction evaluation unit 130 may be configured to perform a weighted summation of the one or more distraction values corresponding to the respective one or more semantic categories over a predetermined time period, and to determine the driver's level of distraction over that period from the result of the weighted summation. As described above, the driver's level of distraction can be evaluated under various criteria from the distraction values of the respective semantic categories: from a single semantic category/distraction value (e.g., the mere presence of a particular category is evaluated as greater distraction); from the plain sum of the distraction values assigned to the categories; or from a weighted sum of those values. The weighted sum over the predetermined time period may indicate the total distraction caused by all semantic categories appearing in the conversation or call content during that period. It will be readily appreciated that different thresholds may be set for these different approaches. Additionally, the weighting values and thresholds may change over time. For example, as a trip wears on, the driver may grow increasingly fatigued and more easily distracted; the weighting value may then be increased and/or the threshold decreased, so that even a smaller distraction value indicates a greater level of distraction, making long-distance driving safer. Likewise, if multiple semantic classifications are observed within a short period (e.g., one minute) and the weighted sum of the distraction values exceeds the set threshold, this may indicate that the driver is being distracted frequently (possibly indicating an increased level of distraction); the weighted sum captures such a condition and helps prevent further distraction from compromising driving safety. In some embodiments, different weighting values may be set for different drivers, e.g., larger for a novice driver and smaller for a skilled one.
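Using the illustrative distraction values from the embodiment above (0.8, 0.7, 0.5), the weighted-summation evaluation might be sketched as follows; the default weight and threshold are assumptions for the example.

```python
DISTRACTION_VALUE = {          # values from the example embodiment above
    "decision_making": 0.8,
    "business_discussion": 0.7,
    "technical_qa": 0.5,
}

def distraction_level(categories, weight=1.0):
    """Weighted sum of the distraction values of the semantic categories
    observed in one evaluation window; `weight` may be raised as the trip
    lengthens or for a novice driver, as the text suggests."""
    return weight * sum(DISTRACTION_VALUE.get(c, 0.0) for c in categories)

def should_remind(categories, threshold=1.0, weight=1.0):
    """True when the evaluated level exceeds the threshold, i.e. the
    condition that triggers the reminder unit."""
    return distraction_level(categories, weight) > threshold
```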
In many cases, it is more dangerous for the driver to take calls and/or talk to occupants while negotiating certain road conditions or performing certain driving actions. The system can therefore detect whether other objective factors are distracting the driver (e.g., particular road conditions or driving actions) and adjust the evaluated level of distraction accordingly, as described below. In some embodiments, the vehicle driving assistance system 100 may further include an image acquisition unit 150 and/or a vehicle state information acquisition unit 160. The image acquisition unit 150 may be configured to acquire images of objects around the vehicle; the vehicle state information acquisition unit 160 may be configured to acquire state information data related to the driving of the vehicle. In some embodiments, the distraction evaluation unit 130 may further be configured to detect, from the image provided by the image acquisition unit 150 and/or the state information provided by the vehicle state information acquisition unit 160, whether an objective factor is distracting the driver, and to adjust the evaluated level of distraction based on the result of the detection. For example, the image acquisition unit 150 may include a camera that detects objects around the vehicle (e.g., people, traffic lights, curves, other vehicles); in the presence of such objects, driver distraction may cause a serious accident. Accordingly, upon detecting such objects, the respective distraction values may be increased and/or the threshold decreased, so that even slight distraction registers as a greater level, alerting the driver to end the call and/or conversation.
In addition, in some cases the vehicle state information acquisition unit 160 may read vehicle state information, such as steering, braking, acceleration, and lighting state, from the CAN bus. Such information may indicate that the driver is performing a particular driving action in response to a situation (e.g., a curve, a red light, overtaking, or evading a following vehicle), during which the driver can afford little distraction. Thus, in response to detecting an objective factor distracting the driver, the evaluated level of distraction may be adjusted by increasing the respective distraction values and/or decreasing the threshold.
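One way to realize the adjustment just described is to scale the evaluated level up when the camera reports risk objects and to lower the threshold while a manoeuvre is in progress on the CAN bus. The scale factors below are purely illustrative assumptions.

```python
def adjust_for_context(level, threshold, risk_objects=(), maneuver_active=False):
    """Tighten the distraction assessment in the presence of objective
    risk factors: detected objects around the vehicle (people, curves,
    other vehicles) raise the evaluated level; an active driving
    manoeuvre (steering/braking reported on the CAN bus) lowers the
    threshold. Factors 0.2 and 0.8 are invented for the example."""
    if risk_objects:
        level *= 1.0 + 0.2 * len(risk_objects)   # raise the level
    if maneuver_active:
        threshold *= 0.8                          # lower the threshold
    return level, threshold
```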
The reminder unit 140 may be configured to provide a reminder signal to the driver in response to the distraction evaluation unit 130 determining that the driver's level of distraction exceeds the threshold. The reminder signal may be a visual, audible, and/or tactile signal (e.g., a vibration). In some embodiments, the reminder signal may be persuasive (e.g., a visual and/or audible warning such as "The current conversation is distracting you; please stop the conversation!"). In some embodiments, the reminder may offer a suggestion (e.g., a visual and/or audible warning such as "The current conversation is distracting you; would you like to find a place to park?"). In such a case, upon driver confirmation (e.g., via a button, touch screen, or voice), the vehicle may provide a route to, and/or automatically navigate to, the nearest suitable parking location, e.g., via an automated driving assistance system, so that the driver may continue the call and/or conversation during and/or after the automated navigation. In some embodiments, the reminder signal may trigger certain vehicle actions, such as slowing down or switching to an autonomous driving mode.
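The escalating responses described above (spoken warning, parking suggestion, vehicle action) could be selected by how far the evaluated level exceeds the threshold; the 1.2x and 1.5x margins below are invented for the example.

```python
def reminder_action(level, threshold):
    """Map the evaluated distraction level to an escalating response,
    mirroring the embodiments above; margins are illustrative only."""
    if level <= threshold:
        return "none"
    if level <= 1.2 * threshold:
        return "audio_visual_warning"   # e.g. "please stop the conversation"
    if level <= 1.5 * threshold:
        return "suggest_parking"        # offer to navigate to a parking spot
    return "request_vehicle_action"     # e.g. slow down / autonomous mode
```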
In a second aspect of the invention, a vehicle is provided that includes the vehicle driving assistance system 100 according to the first aspect of the invention. The vehicle driving assistance system 100 may be integrated with the vehicle control system or as a separate system. Additionally, although the system 100 is described above as separate units for ease of illustration, it will be appreciated that the various units may be combined into fewer units or subdivided into more units without departing from the spirit and scope of the present invention.
In a third aspect of the invention, the foregoing object is attained by a vehicle driving assistance method.
Fig. 2 is a schematic diagram of a vehicle driving assistance method 200 according to an embodiment of the invention. The method 200 may be implemented using the vehicle driving assistance system 100 of the present invention as described above. As shown, the method 200 includes:
step S210: collecting the voice of a driver in a vehicle;
step S220: performing semantic analysis on the collected voice, and classifying the collected voice into corresponding semantic categories according to the result of the semantic analysis;
step S230: evaluating a driver's distraction level based on the classified semantic categories and determining whether the driver's distraction level exceeds a threshold; and
step S240: providing a reminder signal to the driver in response to determining that the driver's level of distraction exceeds the threshold.
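Putting steps S210-S240 together, a minimal end-to-end sketch might look as follows; the stubbed classifier stands in for steps S210-S220, and all names and values are illustrative assumptions.

```python
DISTRACTION_VALUE = {"decision_making": 0.8,
                     "business_discussion": 0.7,
                     "technical_qa": 0.5}
THRESHOLD = 1.0  # assumed threshold for the example

def classify(utterance):
    """Stub for step S220: map a transcribed utterance to a semantic
    category (a trained model would replace this)."""
    if "decide" in utterance:
        return "decision_making"
    if "contract" in utterance:
        return "business_discussion"
    return "technical_qa"

def assist(utterances):
    """Steps S230-S240 over one evaluation window: sum the distraction
    values of the classified utterances and emit a reminder when the
    level exceeds the threshold."""
    level = sum(DISTRACTION_VALUE[classify(u)] for u in utterances)
    return "remind_driver" if level > THRESHOLD else "ok"
```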
In some embodiments of the invention, the method further comprises: training a semantic classification model through machine learning on speech samples with class labels, and classifying the collected speech into a corresponding one of a plurality of semantic categories by means of the semantic classification model.
In some embodiments of the invention, the method further comprises: performing one or more semantic category classifications on the collected speech over a predetermined time period to produce a corresponding one or more semantic categories, wherein each semantic category corresponds to a respective distraction value.
In some embodiments of the invention, the method further comprises: performing a weighted summation of the one or more distraction values corresponding to the respective one or more semantic categories over the predetermined time period, and determining the driver's level of distraction over that period from the result of the weighted summation.
In some embodiments of the invention, the method further comprises: acquiring an image relating to objects around the vehicle, and acquiring state information data relating to the driving of the vehicle.
In some embodiments of the invention, the method further comprises: detecting, based on the acquired image and/or state information data, whether there is an objective factor distracting the driver's attention, and adjusting the evaluated level of distraction based on the result of the detection.
According to a fourth aspect of the invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method according to the third aspect of the invention.
It should be understood that the various units of the above-described vehicle driving assistance system 100 may be implemented in whole or in part by software, hardware, or combinations thereof. The units may be embedded in hardware form in, or be independent of, a processor of a computer device, or be stored in software form in a memory of the computer device, so that the processor can invoke them and perform their corresponding operations.
In an embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program operable on the processor; the processor implements the steps of the method of any of the above embodiments when executing the computer program. The computer device may be a server or a vehicle-mounted terminal. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium, which stores an operating system, a computer program, and a database, and an internal memory, which provides an environment for running the operating system and the computer program. The network interface communicates with external terminals over a network connection. The computer program, when executed by the processor, carries out the method of the invention.
Those skilled in the art will appreciate that the schematic diagram of the driving assistance system 100 shown in fig. 1 is merely a block diagram of a portion of the structure associated with the disclosed aspects and does not constitute a limitation on the hardware/software/firmware employed in the disclosed aspects, as a particular system may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the steps in the methods according to the above embodiments of the present invention may be performed by relevant hardware instructed by a computer program, which may be stored in a non-volatile computer-readable storage medium and which, when executed, may include the steps of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory.
With the method and device of the present invention, in situations where the driver's attention is distracted by answering a call and/or conversing with passengers in the vehicle, the driver's level of distraction can be objectively estimated from the content of the call and/or conversation. When that level exceeds a threshold and may pose a safety risk, the driver is reminded in real time that his or her behavior constitutes a potential safety hazard, thereby improving driving safety.
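The evaluation pipeline summarized above — classify in-vehicle speech into semantic categories, map each category to a distraction value, form a weighted sum over an evaluation window, and compare the result against a threshold — can be sketched as follows. All category names, distraction values, the uniform weighting, and the threshold are illustrative assumptions, not values from this disclosure:

```python
# Hypothetical mapping from semantic categories of in-vehicle speech to
# per-category distraction values (higher = more distracting).
DISTRACTION_VALUES = {
    "navigation_query": 0.2,   # short, driving-related utterance
    "casual_chat": 0.5,        # moderate-load conversation
    "heated_argument": 0.9,    # emotionally charged, high-load talk
}

def distraction_level(semantic_categories, weights=None):
    """Weighted sum of the distraction values of the semantic categories
    observed within one evaluation window (uniform weights by default)."""
    values = [DISTRACTION_VALUES[c] for c in semantic_categories]
    if weights is None:
        weights = [1.0 / len(values)] * len(values)
    return sum(w * v for w, v in zip(weights, values))

def should_remind(semantic_categories, threshold=0.6):
    """Trigger a reminder signal when the estimated level exceeds the threshold."""
    return distraction_level(semantic_categories) > threshold
```

The weights could instead reflect recency or utterance duration; only the weighted-summation-then-threshold structure is taken from the text.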
While the present invention has been described in connection with the embodiments, those skilled in the art will understand that the foregoing description and drawings are merely illustrative and not restrictive, and that the invention is not limited to the disclosed embodiments. Various modifications and variations are possible without departing from the spirit of the invention.
Claims (14)
1. A vehicle driving assist system, characterized by comprising:
a voice collecting unit configured to collect a voice of a driver in the vehicle;
a semantic analysis unit configured to perform semantic analysis on the voice collected by the voice collection unit and to classify the collected voice into a corresponding semantic category according to the result of the semantic analysis;
a distraction evaluation unit configured to evaluate the driver's level of distraction based on the classified semantic categories, and to determine whether the driver's level of distraction exceeds a threshold; and
a reminder unit configured to provide a reminder signal to the driver in response to the distraction evaluation unit determining that the driver's level of distraction exceeds the threshold.
2. The system of claim 1, wherein the semantic analysis unit comprises a semantic classification model trained by machine learning with speech samples having category labels and configured to classify speech collected by the voice collection unit into a corresponding one of a plurality of semantic categories.
3. The system according to claim 1 or 2, wherein the semantic analysis unit is configured to perform one or more semantic category classifications on the collected speech over a predetermined time period to produce a corresponding one or more semantic categories, wherein each semantic category corresponds to a respective distraction value.
4. The system of claim 3, wherein the distraction evaluation unit is further configured to perform a weighted summation of the one or more distraction values corresponding to the respective one or more semantic categories within the predetermined time period, and to determine the driver's level of distraction within the predetermined time period based on the result of the weighted summation.
5. The system according to claim 1 or 2, characterized in that the vehicle driving assist system further comprises: an image acquisition unit configured to acquire an image related to an object around a vehicle, and/or a vehicle state information acquisition unit configured to acquire state information data related to driving of the vehicle.
6. The system according to claim 5, wherein the distraction evaluation unit is further configured to detect, based on the image acquired by the image acquisition unit and/or the state information data acquired by the vehicle state information acquisition unit, whether an objective factor distracting the driver's attention is present, and to adjust the evaluated level of distraction according to the detection result.
7. A vehicle, characterized in that it comprises a system according to any one of claims 1-6.
8. A vehicle driving assist method, characterized by comprising:
collecting the voice of a driver in a vehicle;
performing semantic analysis on the collected voice, and classifying the collected voice into corresponding semantic categories according to the result of the semantic analysis;
evaluating a driver's distraction level based on the classified semantic categories and determining whether the driver's distraction level exceeds a threshold; and
in response to determining that the driver's level of distraction exceeds the threshold, providing a reminder signal to the driver.
9. The method of claim 8, further comprising: training a semantic classification model through machine learning by means of speech samples with category labels, and classifying the collected speech into a corresponding one of a plurality of semantic categories through the semantic classification model.
10. The method according to claim 8 or 9, characterized in that the method further comprises: performing one or more semantic category classifications on the collected speech over a predetermined time period to produce a corresponding one or more semantic categories, wherein each semantic category corresponds to a respective distraction value.
11. The method of claim 10, further comprising: performing a weighted summation of the one or more distraction values corresponding to the respective one or more semantic categories within the predetermined time period, and determining the driver's level of distraction within the predetermined time period according to the result of the weighted summation.
12. The method according to claim 8 or 9, characterized in that the method further comprises: acquiring an image relating to objects around the vehicle, and/or acquiring state information data relating to driving of the vehicle.
13. The method of claim 12, further comprising: detecting, based on the acquired image and/or state information data, whether an objective factor distracting the driver's attention is present, and adjusting the estimated distraction level according to the detection result.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 8 to 13.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910854196.3A CN112550306A (en) | 2019-09-10 | 2019-09-10 | Vehicle driving assistance system, vehicle including the same, and corresponding method and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112550306A | 2021-03-26
Family
ID=75028855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910854196.3A Withdrawn CN112550306A (en) | 2019-09-10 | 2019-09-10 | Vehicle driving assistance system, vehicle including the same, and corresponding method and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112550306A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113133583A (en) * | 2021-04-28 | 2021-07-20 | 重庆电子工程职业学院 | Multifunctional workbench for computer software developer |
CN113361343A (en) * | 2021-05-21 | 2021-09-07 | 上海可深信息科技有限公司 | Deep learning based call receiving and making behavior detection method |
CN113548057A (en) * | 2021-08-02 | 2021-10-26 | 四川科泰智能电子有限公司 | Safe driving assistance method and system based on driving trace |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106043311A (en) * | 2016-06-27 | 2016-10-26 | 观致汽车有限公司 | Method and system for judging whether driver is distracted or not |
CN106205052A (en) * | 2016-07-21 | 2016-12-07 | 上海仰笑信息科技有限公司 | A kind of driving recording method for early warning |
CN108146438A (en) * | 2016-12-02 | 2018-06-12 | 卢卡斯汽车股份有限公司 | For enhancing driver attention's module of driving assistance system |
CN108437999A (en) * | 2018-03-20 | 2018-08-24 | 中国计量大学 | A kind of attention auxiliary system |
CN109664891A (en) * | 2018-12-27 | 2019-04-23 | 北京七鑫易维信息技术有限公司 | Auxiliary driving method, device, equipment and storage medium |
CN109941288A (en) * | 2017-12-18 | 2019-06-28 | 现代摩比斯株式会社 | Safe driving auxiliary device and method |
CN110136464A (en) * | 2019-04-18 | 2019-08-16 | 深圳市宏电技术股份有限公司 | A kind of method, device and equipment that auxiliary drives |
CN110209791A (en) * | 2019-06-12 | 2019-09-06 | 百融云创科技股份有限公司 | It is a kind of to take turns dialogue intelligent speech interactive system and device more |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800633B (en) | Non-motor vehicle traffic violation judgment method and device and electronic equipment | |
CN112550306A (en) | Vehicle driving assistance system, vehicle including the same, and corresponding method and medium | |
CN105825621A (en) | Method and device for driving motor vehicle able to be driven in an at least partially automated manner | |
CN110395260B (en) | Vehicle, safe driving method and device | |
CN110213516A (en) | Vehicular video recording method, device, storage medium and electronic device | |
CN111277755B (en) | Photographing control method and system and vehicle | |
JP5530106B2 (en) | Driving behavior guidance system | |
US20150125126A1 (en) | Detection system in a vehicle for recording the speaking activity of a vehicle occupant | |
CN112215097A (en) | Method for monitoring driving state of vehicle, vehicle and computer readable storage medium | |
CN111489522A (en) | Method, device and system for outputting information | |
CN112820072A (en) | Dangerous driving early warning method and device, computer equipment and storage medium | |
CN111301428A (en) | Motor vehicle driver distraction detection warning method and system and motor vehicle | |
CN112633387A (en) | Safety reminding method, device, equipment, system and storage medium | |
CN113312958B (en) | Method and device for adjusting dispatch priority based on driver state | |
CN108242181A (en) | A kind of information early warning processing method and server | |
US10891496B2 (en) | Information presentation method | |
CN111862529A (en) | Alarm method and equipment | |
CN116691719A (en) | Driver early warning method, device, equipment and storage medium | |
JP2014238707A (en) | Driver state determination system | |
CN114194199A (en) | Safe driving method and device for vehicle | |
CN114332913A (en) | Pedestrian prompt tone control method and device for electric automobile and electronic equipment | |
CN114379582A (en) | Method, system and storage medium for controlling respective automatic driving functions of vehicles | |
CN113393643A (en) | Abnormal behavior early warning method and device, vehicle-mounted terminal and medium | |
CN116088790A (en) | Control method and device for multimedia volume of vehicle, vehicle and storage medium | |
CN118343135A (en) | Human-vehicle emotion interaction method, device, equipment, storage medium and computer program product |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20210326 |