CN114360241B - Vehicle interaction method, vehicle interaction device and storage medium - Google Patents


Info

Publication number
CN114360241B
CN114360241B (application CN202111505828.9A)
Authority
CN
China
Prior art keywords
data
vehicle
state
feature
driver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111505828.9A
Other languages
Chinese (zh)
Other versions
CN114360241A (en)
Inventor
王琪 (Wang Qi)
李鑫 (Li Xin)
Current Assignee
Zebred Network Technology Co Ltd
Original Assignee
Zebred Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zebred Network Technology Co Ltd filed Critical Zebred Network Technology Co Ltd
Priority to CN202111505828.9A priority Critical patent/CN114360241B/en
Publication of CN114360241A publication Critical patent/CN114360241A/en
Application granted granted Critical
Publication of CN114360241B publication Critical patent/CN114360241B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 Alarms for ensuring the safety of persons
    • G08B 21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/048 Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Time Recorders, Drive Recorders, Access Control (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle interaction method, a vehicle interaction device and a storage medium. The method comprises: acquiring first state data of a driver while the driver is driving a vehicle; acquiring environmental data of the vehicle during driving; and outputting an interaction message according to the first state data of the driver and the environmental data of the vehicle, wherein the interaction message is used to improve the driving state of the driver. By outputting the interaction message according to both the driving state data and the external environment data, the accuracy of the interaction is improved.

Description

Vehicle interaction method, vehicle interaction device and storage medium
Technical Field
The application relates to the technical field of automobiles, in particular to a vehicle interaction method, a vehicle interaction device and a storage medium.
Background
Current human-machine interaction design in the cabin can provide corresponding interaction feedback according to the occurrence of an event. For example, a vehicle-mounted DMS (Driver Monitoring System) monitors the driver's fatigue state and dangerous driving behavior around the clock during driving; after detecting improper driving states such as fatigue, yawning or squinting, the DMS analyzes the behavior in time and issues voice and light prompts, thereby warning the driver and correcting the improper driving behavior. However, interaction messages such as voice or light prompts are fixed feedback determined solely by the driver's driving state, and an interaction message output according to the driving state alone may be inaccurate.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a vehicle interaction method, a vehicle interaction device and a storage medium.
According to a first aspect of an embodiment of the present application, there is provided a vehicle interaction method, including:
acquiring first state data of a driver in the process of driving the vehicle;
acquiring environmental data of the vehicle in the running process;
outputting an interaction message according to the first state data of the driver and the environment data of the vehicle; wherein the interactive message is used for improving the driving state of the driver.
In some embodiments, the outputting an interaction message according to the first state data of the driver and the environmental data of the vehicle includes:
determining a status score based on the first status data of the driver and the environmental data of the vehicle; wherein the status score characterizes a degree of safety of the driver driving the vehicle;
and outputting the interactive message according to the state score.
In some embodiments, the outputting the interaction message according to the status score includes:
determining a state grade to which the state score belongs;
outputting the interactive message with the intensity corresponding to the state level.
In some embodiments, the determining a status score from the first status data of the driver and the environmental data of the vehicle includes:
extracting features of the environmental data of the vehicle to obtain a first feature data set;
extracting features of the first state data of the driver to obtain a second feature data set;
and aligning the first feature data set with the second feature data set, and then inputting the aligned first feature data set and second feature data set into a preset state evaluation model to obtain the state score.
In some embodiments, the method further comprises:
aligning the first feature data set and the second feature data set, and then inputting them into the preset state evaluation model to obtain the driving intention of the driver;
the outputting the interactive message according to the state score comprises the following steps:
and outputting the interaction message according to the state score and the driving intention.
In some embodiments, the environmental data of the vehicle includes: the running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
the feature extraction is performed on the environmental data of the vehicle to obtain a first feature data set, including:
extracting features of the running data of the vehicle to obtain first sub-feature data;
extracting features of the image data to obtain second sub-feature data;
and fusing the first sub-feature data and the second sub-feature data to obtain the first feature data set.
In some embodiments, the feature extraction of the image data to obtain second sub-feature data includes:
carrying out feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data; the first feature extraction module is a module suitable for extracting weather features;
carrying out feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data; the second feature extraction module is a module suitable for extracting road features;
performing feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data; the third feature extraction module is a feature extraction module that does not distinguish image content;
and fusing the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
In some embodiments, the first state data of the driver includes at least one of:
image data characterizing driving behavior of the driver;
physiological data of the driver.
In some embodiments, the method comprises:
after the interactive message is output, second state data of the driver in the process of driving the vehicle are obtained;
and adjusting the output of the interactive message according to the second state data.
According to a second aspect of embodiments of the present application, there is provided a vehicle interaction device, including:
the first acquisition module is used for acquiring first state data in the process of driving the vehicle by a driver;
the second acquisition module is used for acquiring the environmental data of the vehicle in the running process;
the output module is used for outputting an interaction message according to the first state data of the driver and the environment data of the vehicle; wherein the interactive message is used for improving the driving state of the driver.
In some embodiments, the output module is further configured to determine a status score based on the first status data of the driver and the environmental data of the vehicle; wherein the status score characterizes a degree of safety of the driver driving the vehicle; and outputting the interactive message according to the state score.
In some embodiments, the output module is further configured to determine a state level to which the state score belongs; outputting the interactive message with the intensity corresponding to the state level.
In some embodiments, the output module is further configured to perform feature extraction on the environmental data of the vehicle to obtain a first feature data set; perform feature extraction on the first state data of the driver to obtain a second feature data set; and align the first feature data set with the second feature data set and then input the aligned first and second feature data sets into a preset state evaluation model to obtain the state score.
In some embodiments, the apparatus further comprises:
the obtaining module is configured to align the first feature data set and the second feature data set and input them into the preset state evaluation model to obtain the driving intention of the driver;
the output module is further configured to output the interaction message according to the status score and the driving intention.
In some embodiments, the environmental data of the vehicle includes: the running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
The output module is further configured to extract features of the running data of the vehicle to obtain first sub-feature data; extract features of the image data to obtain second sub-feature data; and fuse the first sub-feature data and the second sub-feature data to obtain the first feature data set.
In some embodiments, the output module is further configured to perform feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data, the first feature extraction module being a module suitable for extracting weather features; perform feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data, the second feature extraction module being a module suitable for extracting road features; perform feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data, the third feature extraction module being a feature extraction module that does not distinguish image content; and fuse the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
In some embodiments, the first obtaining module is further configured to obtain second state data during driving of the vehicle by the driver after outputting the interaction message;
the output module is further configured to adjust output of the interaction message according to the second status data.
According to a third aspect of embodiments of the present application, there is provided a vehicle interaction device, including:
a memory for storing computer executable instructions;
a processor, coupled to the memory, for implementing the method of any of the above first aspects by executing the computer-executable instructions.
According to a fourth aspect of embodiments of the present application, there is provided a storage medium having stored therein computer-executable instructions that, when executed, perform the method provided in any one of the above first aspects.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
in the embodiments of the present application, the first state data of the driver during driving and the environmental data of the vehicle during driving are obtained; a state score representing how safely the driver is driving the vehicle is determined from the first state data and the environmental data; and the interaction message is then output according to the state score, so that the accuracy of the interaction can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart illustrating a method of vehicle interaction according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating a vehicle interaction method according to an exemplary embodiment;
FIG. 3 is an exemplary diagram illustrating one method of vehicle interaction, according to one exemplary embodiment;
FIG. 4 is a diagram illustrating an exemplary configuration of a vehicle interaction device, according to an exemplary embodiment;
fig. 5 is a block diagram illustrating a vehicle-mounted terminal according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a vehicle interaction method, as shown in FIG. 1, according to an exemplary embodiment, comprising the steps of:
s101, acquiring first state data in the process of driving the vehicle by a driver;
s102, acquiring environment data of the vehicle in the running process;
s103, outputting an interaction message according to the first state data of the driver and the environment data of the vehicle; wherein the interactive message is used for improving the driving state of the driver.
The method of the embodiments of the present application can be applied to the vehicle-mounted terminal of a vehicle, or to a server connected to the vehicle-mounted terminal. If the method is applied at the server side, one or more sensing devices in the vehicle collect data representing the driving state of the driver along with vehicle driving data, and an exterior camera collects environmental data during driving; these are sent to the in-vehicle head unit, which forwards the data to the server. The server determines a state score from the driving state data and the environmental data, and returns a corresponding interaction message identifier to the head unit according to the state score; the head unit then outputs the corresponding interaction message according to the identifier. In the embodiments of the present application, the head unit or the server that executes the above method may be collectively referred to as a vehicle interaction device.
In some embodiments, the first status data of the driver includes at least one of:
image data characterizing driving behavior of the driver;
physiological data of the driver.
The image data representing the driving behavior of the driver may be data acquired by one or more sensing devices in the vehicle while the driver is driving; for example, the vehicle interaction device may acquire video stream data of the driver's driving behavior through an in-vehicle camera. The physiological data of the driver may be obtained by a sensor arranged in the driver's seat (such as a pressure sensor) and/or a wearable device (such as a physiological-signal-based emotion recognition wristband or a motion monitoring watch). The physiological data include signals such as the electrocardiogram (ECG), electroencephalogram (EEG), electromyogram (EMG) and photoplethysmogram (PPG), which can reflect the driver's health condition and emotion.
In the embodiments of the present application, the environmental data of the vehicle during driving may include external environment data and driving data of the vehicle itself. The external environment data may be obtained through one or more sensing devices on the vehicle and/or external devices that can communicate with the vehicle interaction device. For example, the vehicle interaction device may acquire video stream data of the external environment, including the weather environment and/or road environment, through an exterior camera; acquire driving data of the vehicle through in-vehicle sensors such as a triaxial gyroscope, radar or GPS; acquire weather data through a third-party weather monitoring device that can communicate with the vehicle interaction device; and acquire travel data of the vehicle through a user terminal that can communicate with the vehicle interaction device. In addition, the vehicle interaction device may acquire driving data such as video signals, radar signals, brake signals, turn signals, vehicle speed, acceleration or inclination through a third-party advanced driver assistance system (ADAS), and acquire engine operation data through third-party on-board diagnostics (OBD).
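The two categories of environmental data above can be pictured as one record per sampling instant. The following sketch is purely illustrative — the patent defines no schema, and every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleEnvironmentData:
    """Illustrative grouping of the environmental data described above:
    driving data of the vehicle itself plus external environment data.
    All field names are assumptions, not part of the patent."""
    speed_mps: float                  # from GPS / vehicle speed signal
    acceleration: float               # from the triaxial gyroscope / ADAS
    steering_angle_deg: float         # from the turn/steering signal
    exterior_frames: list = field(default_factory=list)  # exterior camera video frames
    weather_report: dict = field(default_factory=dict)   # third-party weather data
```

A record like this would be populated once per sampling tick and handed to the feature-extraction stage described later.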
In the embodiment of the application, the driving state data and the environment data of the driver are combined, and the corresponding interaction message can be output. Wherein the interactive messages may be presented in different forms, such as audio, video, lights, etc. The interactive messages may also be sent by different terminals in the vehicle, such as in-vehicle displays, voice devices, in-vehicle atmosphere lights, steering wheels, etc. The interactive message may also be sent by multiple terminals in multiple forms, which is not limited in this embodiment of the present application.
The interaction message may be an early warning message, or a message intended to please the driver, and can be used to improve the driving state of the driver. For example, when the state data indicate driver fatigue and/or the environmental data indicate complex road conditions, an early warning message comprising steering wheel vibration or warning speech is output; and when the driver's emotional state is poor, an interaction message such as light music is played.
After the first state data and the environmental data of the driver are determined, the interaction message corresponding to them may be output according to a preset correspondence between the first state data, the environmental data and the interaction message. This correspondence can be configured manually from historical experience: for example, when the state data indicate that the driver is distracted and the environmental data indicate that road traffic is complex, a loud warning voice and high-frequency steering wheel vibration are output; when the state data indicate that the driver is distracted but the environmental data indicate that road conditions are simple, a warning voice of moderate volume is output; and when the state data indicate that the driver is in a bad mood, light music is output. Alternatively, the first state data and the environmental data may be input into an interaction model to obtain the interaction message. The interaction model can be trained with a deep learning network on preset combinations of driving state data and environmental data, where each preset combination corresponds to a preset interaction message identifier serving as the training label. Based on the trained interaction model, the state data and environmental data acquired while the driver drives the vehicle can be input into the model to match a corresponding interaction message identifier, and the interaction terminal outputs the corresponding interaction message according to that identifier.
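The manually configured correspondence between state data, environmental data and interaction message can be sketched as a simple lookup table. All state labels and message specifications below are illustrative assumptions; the patent names only examples such as "distracted + complex traffic":

```python
# Hypothetical correspondence table: (driver state, environment state) -> message spec.
# The label strings and message fields are assumptions for illustration only.
INTERACTION_RULES = {
    ("distracted", "complex_traffic"): {"voice_volume": "high", "wheel_vibration": "high_freq"},
    ("distracted", "simple_traffic"):  {"voice_volume": "medium", "wheel_vibration": None},
    ("bad_mood",   "any"):             {"audio": "light_music"},
}

def select_interaction(driver_state: str, env_state: str) -> dict:
    """Return the interaction message spec for the observed state pair."""
    # Try an exact (driver, environment) match first, then fall back to
    # rules that apply regardless of the environment.
    return (INTERACTION_RULES.get((driver_state, env_state))
            or INTERACTION_RULES.get((driver_state, "any"))
            or {})
```

In the model-based alternative described above, this table would be replaced by a trained network that maps the raw data combination to an interaction message identifier.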
In the related art, whether detecting fatigue or dangerous behavior, only the driver's in-cabin state during driving is considered; external factors such as the weather and the surrounding lane and vehicle environment, which also affect driving, are not integrated. By contrast, the present application outputs the interaction message according to both the first state data and the external environment data. Compared with outputting the interaction message according to the first state data alone, the current driving situation can be judged more accurately, which improves the accuracy of the output interaction message and thereby improves driving safety or the driver's mood and driving experience.
In some embodiments, the outputting an interaction message according to the first state data of the driver and the environmental data of the vehicle includes:
determining a status score based on the first status data of the driver and the environmental data of the vehicle; wherein the status score characterizes a degree of safety of the driver driving the vehicle;
and outputting the interactive message according to the state score.
In the embodiments of the present application, the driving state data and the environmental data of the driver may correspond to a state score, which characterizes how safely the driver is driving the vehicle. Taking the case where the state score is positively correlated with driving safety as an example: when the driving state indicates that the driver is attentive, in a good mood and in good health, and the environmental state indicates that the surroundings are simple and the weather is good (e.g., sunny with high visibility), the state score is high; when the driving state indicates fatigue, a bad mood or a health problem, or the environmental state indicates bad weather (e.g., heavy rain) or poor vehicle condition, the state score is low. Of course, the state score may instead be set to be inversely related to driving safety; the embodiments of the present application are not limited in this respect.
When determining the state score from the first state data and the environmental data, a corresponding state score can be obtained based on a preset state evaluation model. The state evaluation model is trained with a deep learning network on data corresponding to one or more preset driving states, where each preset driving state corresponds to a preset state score serving as the training label. Based on the trained state evaluation model, the state data and environmental data acquired while the driver drives the vehicle can be input into the model to obtain the corresponding state score.
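The interface of the state evaluation model — an aligned feature vector in, a score in [0, 1] out — can be illustrated with a minimal stand-in. The patent's model is a trained deep network; the weighted sum and sigmoid below only demonstrate the input/output contract, and the weights are hypothetical:

```python
import math

def state_score(features: list[float], weights: list[float], bias: float = 0.0) -> float:
    """Minimal stand-in for the trained state evaluation model: a weighted
    sum of the aligned feature vector squashed into [0, 1] by a sigmoid.
    This is NOT the patent's model, only a sketch of its interface."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the weights would come from training on the labeled (driving state, state score) pairs described above rather than being set by hand.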
After the state evaluation model outputs the state score representing the degree of driving safety, the vehicle interaction device outputs the corresponding interaction message. In this embodiment, the interaction message corresponds to the state score: either each distinct state score, or each interval of state scores, may be matched to a different interaction message, so that the corresponding interaction message is output according to the state score.
Assume that the state score output by the state evaluation model is positively correlated with driving safety and takes values in [0, 1]. When the detected first state data and environmental data are input into the state evaluation model and yield a state score of 0.3, the current degree of driving safety is determined to be low, and a high-frequency warning voice may be output; when the model yields a state score of 0.5, the current degree of driving safety is determined to be insufficient, and a low-frequency warning voice may be output; when the model yields a state score of 0.8, the current degree of driving safety is determined to be relatively high, and a mood-pleasing interaction message may be output.
In the embodiment of the application, the state score representing the current driving safety degree is determined together according to the first state data and the external environment data, the safety degree is quantized, the evaluation accuracy of the safety degree can be improved, and corresponding interaction information is output according to different quantized state scores, so that the interaction accuracy is improved.
In some embodiments, the outputting the interaction message according to the status score includes:
determining a state grade to which the state score belongs;
outputting the interactive message with the intensity corresponding to the state level.
In this embodiment, different state levels are preset over the range of the state score. For example, if the state score takes values in [0, 1], four levels may be preset: [0, 0.3), [0.3, 0.6), [0.6, 0.9) and [0.9, 1]. The state level to which a state score belongs is the level whose interval the score falls into. Each state level corresponds to an interaction message of a given intensity, where intensity includes, for example, the output frequency or amplitude of the message.
Taking the case where the state score is positively correlated with driving safety as an example, the higher the state level, the higher the degree of driving safety. If the state score determined from the first state data and the environmental data is 0.2, belonging to the [0, 0.3) level, the degree of driving safety is low, and a high-intensity interaction message, such as a loud warning voice, may be output. If the state score is 0.5, belonging to the [0.3, 0.6) level, the degree of driving safety is insufficient, and a medium-intensity interaction message, such as a warning voice of moderate volume, may be output. If the state score is 0.7, belonging to the [0.6, 0.9) level, the degree of driving safety is relatively high, and a low-intensity interaction message, such as a quieter warning voice, may be output. If the state score is 0.9, belonging to the [0.9, 1] level, the degree of driving safety is high, and pleasant light music may be output.
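The score-to-level-to-intensity mapping in the example above is a straightforward interval lookup. This sketch uses the four example intervals from the text; the concrete message descriptions are illustrative assumptions:

```python
import bisect

# Level boundaries from the example intervals [0,0.3), [0.3,0.6), [0.6,0.9), [0.9,1].
LEVEL_BOUNDS = [0.3, 0.6, 0.9]

# One (intensity, example message) pair per level; the messages are assumptions.
LEVEL_MESSAGES = [
    ("high",   "loud warning voice + high-frequency steering wheel vibration"),
    ("medium", "warning voice of moderate volume"),
    ("low",    "quieter warning voice"),
    ("none",   "pleasant light music"),
]

def interaction_for_score(score: float) -> tuple[str, str]:
    """Map a state score in [0, 1] to the intensity of its state level."""
    level = bisect.bisect_right(LEVEL_BOUNDS, score)  # index of the interval the score falls in
    return LEVEL_MESSAGES[level]
```

For instance, a score of 0.2 falls in the first interval and returns the high-intensity message, while 0.9 falls in the last interval and returns light music, matching the worked example in the text.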
It can be understood that outputting an interaction message of the corresponding intensity according to the state level, on the one hand, simplifies the configuration of the mapping between state scores and interaction messages, and on the other hand, lets the driver intuitively perceive his or her own state through messages of different intensities, improving the driver's perception of the interaction.
In some embodiments, the determining a status score from the first status data of the driver and the environmental data of the vehicle includes:
extracting features of the environmental data of the vehicle to obtain a first feature data set;
extracting the characteristics of the first state data of the driver to obtain a second characteristic data set;
and aligning the first characteristic data set with the second characteristic data set, and then inputting the aligned first characteristic data set and the aligned second characteristic data set into a preset state evaluation model to obtain the state score.
When feature extraction is performed on the environmental data to obtain the first feature data set, for example, the image frames in the video stream data outside the vehicle may be analyzed to obtain a road feature data set characterizing road signs, ground driving markings and the density of surrounding vehicles, and a weather feature data set characterizing sky color, ambient brightness, rain or snow, and the like; for another example, driving feature data such as speed, acceleration and steering wheel angle may be extracted from the vehicle driving data. It should be noted that the present application may also directly integrate weather data acquired by a weather monitoring device into the first feature data set, omitting the step of deriving the weather feature data set from image analysis.
When feature extraction is performed on the first state data of the driver to obtain the second feature data set, for example, the captured in-vehicle video stream data containing the driver may be analyzed to obtain a second feature data set characterizing the driver's expressions, head movements or gestures; and/or the monitored physiological data such as PPG or ECG may be analyzed to obtain a second feature data set indicating whether the pulse signal is abnormal, whether the electrocardiograph signal is abnormal, and the like.
Since the time attributes of the feature data in the first feature data set and the second feature data set may differ, both within each set and between the two sets, the first feature data set and the second feature data set need to be aligned. Here, alignment includes time alignment between the first feature data set and the second feature data set, as well as time alignment of the feature data within each set.
Taking the time alignment of the feature data in the first feature data set as an example, the data characterizing weather features and the road feature data characterizing road signs or the density of surrounding vehicles may be integrated according to the same time tag or the same preset time-span range. Road sign data may also be aligned across different time points; for example, the first and second frames of ground driving marking image data are compared to determine the position change of the driving marking in the image, yielding a feature data result for the marking's position change within the preset time range.
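One way to sketch this kind of time alignment (assuming a hypothetical record layout of `(timestamp, feature_name, value)` triples) is to bucket records into a preset time span so that features sharing the same time tag are integrated together:

```python
from collections import defaultdict

def align_by_time(records, span=1.0):
    """Group (timestamp, name, value) feature records into buckets of
    `span` seconds; features falling in the same preset time-span range
    end up integrated in the same bucket. The record layout is a
    hypothetical illustration, not the patent's data format."""
    buckets = defaultdict(dict)
    for ts, name, value in records:
        buckets[int(ts // span)][name] = value
    return dict(buckets)
```

With a one-second span, a weather feature at t=0.2 s and a road feature at t=0.7 s land in the same bucket and can be analyzed jointly.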
The time alignment of the feature data within the second feature data set, and the time alignment between the first feature data set and the second feature data set, are performed in the same way and will not be described in detail here.
In addition, aligning the first feature data set and the second feature data set further includes aligning their data formats according to the data input format of the preset state evaluation model; the present application does not limit the specific data format. It should be noted that the order of time alignment and data-format alignment may be set flexibly: for example, the two feature data sets may first be aligned in time and the time-aligned result then aligned in data format; or the data formats of the two sets may first be aligned separately, after which the format-aligned first and second feature data sets are aligned in time.
After the first feature data set and the second feature data set are aligned, the state evaluation model can be input to obtain the state score of the driver.
It can be understood that the present application aligns the first feature data set and the second feature data set, obtained by feature extraction from the environment data and the first state data respectively, before determining the state score. Since time-aligned feature data characterize the driving state and the environment state at the same moment or within the same time period more accurately, the accuracy of state evaluation can be improved.
In some embodiments, the method further comprises:
aligning the first characteristic data set and the second characteristic data set, and then inputting the first characteristic data set and the second characteristic data set into the preset state evaluation model to obtain the driving intention of the driver;
the outputting the interactive message according to the state score comprises the following steps:
and outputting the interaction message according to the state score and the driving intention.
After the first feature data set and the second feature data set are aligned as described above, they are input into the preset state evaluation model to obtain the state score; in this embodiment, the preset state evaluation model may also output the driving intention of the driver. A preset state evaluation model capable of outputting the state score and the driving intention synchronously may likewise be trained through a deep learning network on preset data comprising one or more driving states and corresponding environment states, where each preset driving state and environment state corresponds to a preset driving intention, i.e. a trained label. Based on the trained state evaluation model, after the state data of the driver and the environment data during driving are obtained, they can be input into the model to obtain the corresponding driving intention synchronously. For example, the driver's intention to turn left may be determined from the driver's leftward gaze-angle offset data (belonging to the second feature data set), together with the vehicle's leftward steering-wheel-angle offset data and the rightward offset data of the ground driving marking (belonging to the first feature data set).
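The left-turn example might be caricatured by a hand-written rule like the one below; note that in the patent the intention comes from the trained state evaluation model, and the function name, thresholds and sign conventions here are purely hypothetical:

```python
def infer_turn_intention(gaze_offset_deg, steering_offset_deg, marking_offset):
    """Toy stand-in for the learned intention output: gaze and steering
    wheel both offset to the left (negative, by assumption) while the
    ground driving marking shifts right in the image (positive) suggests
    an intended left turn. Thresholds are illustrative only."""
    if gaze_offset_deg < -5 and steering_offset_deg < -10 and marking_offset > 0:
        return "left_turn"
    if gaze_offset_deg > 5 and steering_offset_deg > 10 and marking_offset < 0:
        return "right_turn"
    return "straight"
```

The real model would combine many more aligned features, but the example shows how driver-side and environment-side features jointly determine the intention.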
After the state score and the driving intention are determined through the preset state evaluation model, the interaction message may be output according to both. One way is to combine the state score with the driving intention and output the corresponding interaction message, for example according to a preset correspondence among state score, driving intention and interaction message. Another way is to correct the state score according to the driving intention and output the interaction message according to the corrected state score.
For example, when the first state data indicates that the driving state of the driver is good and the environment data indicates that the straight path has low driving complexity, the driver's state score is 0.8; but if the driving intention shows that the driver is about to accelerate on a left-turn path with high vehicle density, the state score can be corrected in combination with the driving intention. In some embodiments, different driving behaviors representing driving intentions are preset with different weights, and the product of the preset weight and the state score may be used directly as the corrected state score. For example, if the preset weight for accelerating on a turning path is 0.5, the corrected state score is 0.4, and the vehicle interaction device may output a corresponding high-intensity interaction message according to the corrected state score of 0.4.
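A minimal sketch of the weight-based correction, assuming a hypothetical weight table (the patent only states that different intention behaviors carry different preset weights; the keys and values below are invented for illustration):

```python
# Hypothetical preset weights per driving-intention behavior.
INTENTION_WEIGHTS = {
    "accelerating_on_turning_path": 0.5,
    "straight_low_complexity": 1.0,
}

def corrected_score(state_score, intention):
    """Correct the state score by multiplying it with the preset weight
    of the recognized driving intention; unknown intentions leave the
    score unchanged."""
    return state_score * INTENTION_WEIGHTS.get(intention, 1.0)
```

With a raw score of 0.8 and the weight 0.5 for accelerating on a turning path, the corrected score becomes 0.4, matching the example above.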
In one embodiment, if the interaction message were sent according to the state score alone, then when the first state data indicates that the driving state of the driver is good and the environment data indicates that the straight path has low driving complexity, the obtained state score would be relatively high and a low-intensity interaction message would be output. However, if the driving intention is a left turn, the ground driving marking data and the surrounding-vehicle density data in the environment data show that the vehicle is very close to the vehicle ahead on the left-turn path, and the vehicle's own data show that it is accelerating, then accelerating into the left turn may be dangerous (i.e., the driving safety degree is low); in this case, outputting the interaction message according to the state score alone could still leave the driver facing dangerous driving. In this embodiment, if the driving intention is combined with the state score, for example by correcting the state score and then outputting a high-intensity interaction message according to the corrected score, the output interaction message enables the driver to anticipate the risk in advance.
It can be understood that the present application combines the driving intention with the state score to jointly determine the current driving situation and the situation that may arise, which improves the accuracy of interaction-message output. In particular, outputting an interaction message to the driver before a danger occurs can warn of the risk in advance and improve the degree of safe driving.
In some embodiments, the environmental data of the vehicle includes: the running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
the feature extraction is performed on the environmental data of the vehicle to obtain a first feature data set, including:
extracting characteristics of the running data of the vehicle to obtain first sub-characteristic data;
extracting features of the image data to obtain second sub-feature data;
and fusing the first sub-feature data and the second sub-feature data to obtain the first feature data set.
In the embodiment of the application, feature extraction is performed on the obtained driving data of the vehicle itself to obtain a driving feature data set including speed, acceleration, steering wheel angle and the like, namely the first sub-feature data. Feature extraction is performed on the obtained image data, which includes weather image data and road environment data, to obtain a data set including weather feature data and road feature data, namely the second sub-feature data. The feature extraction principle is consistent with that of obtaining the first feature data set from the environment data, and is not repeated here.
After the first sub-feature data and the second sub-feature data are obtained, they are fused to obtain the first feature data set containing both the vehicle driving data and the vehicle's external environment data. The feature data may be automatically analyzed and integrated according to a preset criterion, where the preset criterion may be integration along the time dimension, or integration according to information correlated across features (for example, a certain feature value falling within the same preset range); this embodiment does not limit this.
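Assuming each sub-feature set is keyed by time tag (one possible realization of integration along the time dimension; the dict layout is an assumption for illustration), the fusion could look like:

```python
def fuse_features(first_sub, second_sub):
    """Fuse two per-time-tag feature dicts into a single feature data
    set: entries sharing a time tag are merged into one record, so the
    result holds both vehicle driving features and external-environment
    features for each time tag."""
    fused = {}
    for ts in set(first_sub) | set(second_sub):
        fused[ts] = {**first_sub.get(ts, {}), **second_sub.get(ts, {})}
    return fused
```

For example, a speed reading and a weather feature carrying the same time tag end up in one fused record, ready for the state evaluation model.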
According to the method and the device, the driving data of the vehicle itself and the external environment data are fused after feature extraction, so that the obtained first feature data set contains both the vehicle's own data and the external environment data. This provides a more sufficient data basis for the subsequent determination of the state score and the driving intention of the driver, making the output of the interaction message more accurate.
In some embodiments, the feature extraction of the image data to obtain second sub-feature data includes:
carrying out feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data; the first feature extraction module is a module suitable for extracting weather features;
carrying out feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data; the second feature extraction module is a module suitable for extracting road features;
carrying out feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data; the third feature extraction module is a feature extraction module that does not distinguish image contents;
and fusing the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
As described above, the external environment may include a weather environment and a road environment, and the image data representing the weather environment may be input into the preset first feature extraction module to obtain the weather feature data. The first feature extraction module is a module for extracting weather features; for example, it extracts feature data representing day or night based on sky color, and extracts feature data representing rainy days based on the color features of accumulated water on the road surface.
The image data representing the road environment may be input into the preset second feature extraction module to obtain the road feature data. The second feature extraction module is a module for extracting road features; for example, it extracts feature data representing marking lines based on the shape features of road marking lines, and extracts feature data of a preceding vehicle based on the shape features of the vehicle or the predetermined pattern of its license plate number, and so on.
It should be noted that, in the embodiment of the present application, after the image data representing the weather environment and the image data representing the road environment are acquired by different cameras, the image data representing the weather environment and the image data representing the road environment may be input to the corresponding feature extraction module to perform feature extraction.
In addition, the embodiment of the application also inputs the image data representing the weather environment and the road environment into a preset third feature extraction module to perform feature extraction to obtain general feature data; for example, the third feature extraction module is a module that performs edge feature extraction for an image without distinguishing specific image contents.
After the weather feature data, the road feature data and the general feature data are obtained, these feature extraction results may be fused to obtain the second sub-feature data containing all three. Because the first and second feature extraction modules are targeted, the features they extract are more accurate; the third feature extraction module is general-purpose but supplements the refined feature results. Combining the refined and general feature extraction results therefore improves the sufficiency of the weather feature data and the road feature data, making the subsequent fusion analysis of the feature results more accurate.
As described above, in the embodiment of the present application, the first state data includes image data and physiological data that characterize the driving behavior of the driver. Similarly, the method and the device can also perform feature extraction on the image data representing the driving behavior to obtain third sub-feature data, perform feature extraction on the physiological data to obtain fourth sub-feature data, and fuse the third sub-feature data with the fourth sub-feature data to obtain a second feature data set.
When feature extraction is performed on the image data representing the driving behavior to obtain the third sub-feature data, the image data may be input into a preset fourth feature extraction module to obtain feature data of the driver's actions; the preset fourth feature extraction module is a module for extracting the driver's action features, for example extracting feature data representing the driver's hand actions based on changes in the driver's hand position. The image data representing the driving behavior may also be input into a preset fifth feature extraction module to obtain feature data representing the driver's emotion; the preset fifth feature extraction module is a module for extracting the driver's emotion features, for example extracting feature data representing the driver's emotion by detecting the shapes of the eyebrows, eyes and mouth corners in a face image. The embodiment of the application may further input the image data representing the driving behavior into a sixth feature extraction module to obtain general feature data; for example, the sixth feature extraction module is a module that performs edge feature extraction on an image without distinguishing specific image contents.
When feature extraction is performed on the physiological data to obtain the fourth sub-feature data, the physiological data of the driver, for example the ECG signal, may be processed to obtain fourth sub-feature data containing feature data of the amplitude and frequency of the driver's electrocardiograph signal.
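A simple sketch of extracting amplitude and frequency features from such a signal (illustrative only; the `ecg_features` helper and its FFT-based approach are assumptions, not the patent's method, and are far from a clinical-grade ECG analysis):

```python
import numpy as np

def ecg_features(signal, fs):
    """Return basic amplitude and dominant-frequency features for a
    physiological trace such as an ECG signal, sampled at fs Hz."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                       # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    dominant = freqs[spectrum[1:].argmax() + 1]  # skip the residual DC bin
    return {"amplitude": float(sig.max() - sig.min()),
            "dominant_hz": float(dominant)}
```

A real pipeline would detect R-peaks and derive heart-rate variability, but even this sketch shows how raw physiological samples become a small set of numeric features for the fourth sub-feature data.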
It can be understood that, since the fourth and fifth feature extraction modules are targeted, the features they extract are more accurate; the sixth feature extraction module is general-purpose but supplements the refined feature results. Combining the refined and general feature extraction results therefore improves the sufficiency of the action feature data and the emotion feature data, making the fusion analysis of the feature results more accurate.
Fig. 2 is a block diagram illustrating a vehicle interaction method according to an exemplary embodiment. As can be seen from Fig. 2, image data of the external environment, driving data of the vehicle, image data of the driver's driving behavior, and physiological data of the driver are acquired during driving; the image data of the driving behavior and the physiological data constitute the first state data of step S101, and the image data of the external environment and the driving data of the vehicle constitute the environment data of step S102.
The image data of the external environment comprises weather image data and road image data, and the image data of the external environment is respectively input into a first multi-task recognition model and a third feature extraction model. The first multi-task recognition model may include the first feature extraction module and the second feature extraction module, and the third feature extraction model includes the third feature extraction module. Weather feature data and road feature data obtained based on the first multi-task recognition model, and general data representing an external environment obtained based on the third feature extraction model belong to the second sub-feature data. The running data of the vehicle is input into a first feature extraction model, and the first sub-feature data can be obtained, wherein the first feature extraction model executes the function of extracting the features of the obtained running data of the vehicle to obtain a running feature data set comprising speed, acceleration, steering wheel angle and the like. And carrying out feature fusion on the first sub-feature data and the second sub-feature data to obtain the first feature data set.
The image data of the driving behavior may include image data of the hand movements of the driver, facial image data, and the image data of the driving behavior is input into a second multi-task recognition model according to the similar method, and the second multi-task recognition model includes the fourth feature extraction module and the fifth feature extraction module. The image data comprising the hand motions is input into a fourth feature extraction module, the feature data of the hand motions of the driver can be obtained, and the facial image data is input into a fifth feature extraction module, so that the feature data representing the emotion of the driver is obtained. And inputting the image data of the driving behavior into a sixth feature extraction model to obtain the general feature data representing the driving behavior, wherein the sixth feature extraction model comprises the sixth feature extraction module. The feature data of the hand motion obtained based on the second multitask recognition model, the feature data representing the emotion of the driver, and the general data representing the driving behavior obtained based on the sixth feature extraction model belong to the aforementioned third sub-feature data. And inputting the physiological data of the driver into a second feature extraction model to obtain the fourth sub-feature data, wherein the second feature extraction model performs feature extraction on the obtained physiological data of the driver to obtain a feature data set containing the amplitude or frequency of the electrocardiosignal of the driver. And carrying out feature fusion on the fourth sub-feature data and the third sub-feature data to obtain the second feature data set.
And (3) aligning the first characteristic data set with the second characteristic data set, and inputting the aligned first characteristic data set and the aligned second characteristic data set into the state evaluation model to obtain the state score and the driving intention.
In some embodiments, the method further comprises:
after the interactive message is output, second state data of the driver in the process of driving the vehicle are obtained;
and adjusting the output of the interactive message according to the second state data.
In this embodiment, the second state data of the driver during driving after the interaction message is output may, for example, be obtained within a preset period after the message is output; the second state data is also the driver's feedback on the interaction message.
For example, if after perceiving the interaction message the driver becomes more concentrated, less fatigued, or in a better mood, the interaction message has played its role in improving the driver's driving state. The output of the interaction message can then be adjusted according to the second state data, for example by reducing the output intensity or outputting mood-pleasing music, thereby reducing the interference of the interaction message with the driver when the driving safety degree is high. If, after perceiving the interaction message, the driver's state does not change or the driver becomes agitated, the interaction message has not produced an improvement; the output can then be adjusted according to the second state data, for example by increasing the output intensity to strengthen the warning effect on the driver, or by changing the output mode to improve the driver's perception of the interaction message.
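The feedback loop can be sketched as follows, assuming a hypothetical `improved` flag derived from the second state data and a four-level intensity scale (both are illustrative assumptions, not part of the patent):

```python
def adjust_output(current_intensity, second_state):
    """Adjust the interaction-message output according to the driver's
    second state data (feedback observed after the message was output).
    `second_state` is a hypothetical dict with an 'improved' flag."""
    levels = ["music", "low", "medium", "high"]
    i = levels.index(current_intensity)
    if second_state.get("improved"):
        # State improved: lower the intensity to reduce interference.
        return levels[max(i - 1, 0)]
    # No improvement: raise the intensity to strengthen the warning.
    return levels[min(i + 1, len(levels) - 1)]
```

A medium-intensity message thus steps down to low when the driver's state improves, and up to high when it does not.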
It can be understood that adjusting the interaction message promptly according to the driver's feedback on it can improve the driving experience.
FIG. 3 is a schematic diagram of a vehicle interaction method according to an exemplary embodiment. As can be seen from FIG. 3, external factor data including the external environment data and the vehicle driving data, and internal factor data including the driving behavior data and the physiological data of the driver, are input into a data fusion module for data fusion to obtain a fusion result data set; an interaction level is determined according to the fusion result data set, the corresponding interaction message is determined according to the correspondence between interaction levels and interaction messages in the interaction strategy, and the interaction message is output. The external factor data is the environment data mentioned in the application, the internal factor data is the first state data mentioned in the application, and the interaction level determined after fusing them is the state score of the application. In this example, the interaction message is output based on the correspondence between the state score and the interaction message, i.e., this correspondence belongs to the interaction strategy.
Fig. 4 is a block diagram of a vehicle interaction device according to an exemplary embodiment. As shown in Fig. 4, the device includes:
A first obtaining module 201, configured to obtain first state data during driving of the vehicle by a driver;
a second obtaining module 202, configured to obtain environmental data of the vehicle during a driving process;
an output module 203, configured to output an interaction message according to the first state data of the driver and the environmental data of the vehicle; wherein the interactive message is used for improving the driving state of the driver.
In some embodiments, the output module 203 is further configured to determine a status score according to the first status data of the driver and the environmental data of the vehicle; wherein the status score characterizes a degree of safety of the driver driving the vehicle; and outputting the interactive message according to the state score.
In some embodiments, the output module 203 is further configured to determine a state level to which the state score belongs; outputting the interactive message with the intensity corresponding to the state level.
In some embodiments, the output module 203 is further configured to perform feature extraction on the environmental data of the vehicle to obtain a first feature data set; extracting the characteristics of the first state data of the driver to obtain a second characteristic data set; and aligning the first characteristic data set with the second characteristic data set, and then inputting the aligned first characteristic data set and the aligned second characteristic data set into a preset state evaluation model to obtain the state score.
In some embodiments, the apparatus further comprises:
the obtaining module is used for inputting the first characteristic data set and the second characteristic data set into the preset state evaluation model after aligning to obtain the driving intention of the driver;
the output module 203 is further configured to output the interaction message according to the status score and the driving intention.
In some embodiments, the environmental data of the vehicle includes: the running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
the output module 203 is further configured to perform feature extraction on the running data of the vehicle to obtain first sub-feature data; extracting features of the image data to obtain second sub-feature data; and fusing the first sub-feature data and the second sub-feature data to obtain the first feature data set.
In some embodiments, the output module 203 is further configured to perform feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data, where the first feature extraction module is a module suitable for extracting weather features; perform feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data, where the second feature extraction module is a module suitable for extracting road features; perform feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data, where the third feature extraction module is a feature extraction module that does not distinguish image contents; and fuse the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
In some embodiments, the first obtaining module 201 is further configured to obtain, after outputting the interaction message, second status data during driving of the vehicle by the driver;
the output module 203 is further configured to adjust output of the interaction message according to the second status data.
Fig. 5 is a block diagram of a vehicle-mounted terminal according to an exemplary embodiment, and referring to fig. 5, a vehicle-mounted terminal 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the vehicle-mounted terminal 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the vehicle-mounted terminal 800. Examples of such data include instructions for any application or method operating on the vehicle-mounted terminal 800, contact data, phonebook data, messages, pictures, video, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the vehicle-mounted terminal 800. The power component 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the vehicle-mounted terminal 800.
The multimedia component 808 includes a screen that provides an output interface between the vehicle-mounted terminal 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the vehicle-mounted terminal 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the vehicle-mounted terminal 800 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the vehicle-mounted terminal 800. For example, the sensor component 814 may detect an on/off state of the vehicle-mounted terminal 800 and the relative positioning of components, such as the display and keypad of the vehicle-mounted terminal 800. The sensor component 814 may also detect a change in position of the vehicle-mounted terminal 800 or one of its components, the presence or absence of user contact with the vehicle-mounted terminal 800, the orientation or acceleration/deceleration of the vehicle-mounted terminal 800, and a change in temperature of the vehicle-mounted terminal 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the vehicle-mounted terminal 800 and other devices. The vehicle-mounted terminal 800 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the vehicle-mounted terminal 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as the memory 804 including instructions executable by the processor 820 of the vehicle-mounted terminal 800 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium stores instructions that, when executed by a processor of a vehicle interaction device, cause the vehicle interaction device to perform a vehicle interaction method, the method comprising:
acquiring first state data of a driver in the process of driving the vehicle;
acquiring environmental data of the vehicle in the running process;
outputting an interaction message according to the first state data of the driver and the environment data of the vehicle; wherein the interactive message is used for improving the driving state of the driver.
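The three steps recited above can be illustrated with a minimal, hypothetical sketch. The function names, field names, and thresholds below are assumptions for illustration only and are not part of the disclosed implementation.

```python
# Minimal sketch of the three claimed steps: acquire driver state data,
# acquire vehicle environment data, and output an interactive message.
# All names and thresholds here are illustrative assumptions.
from typing import Dict, Optional

def build_interaction_message(state_data: Dict[str, float],
                              env_data: Dict[str, float]) -> Optional[str]:
    """Combine driver state and environment into a message intended to
    improve the driving state (e.g. a fatigue warning)."""
    drowsy = state_data.get("eyes_closed_ratio", 0.0) > 0.4
    fast = env_data.get("speed_kmh", 0.0) > 60.0
    if drowsy and fast:
        return "Fatigue detected at high speed - please take a break."
    return None

# Step 1: first state data of the driver (hypothetical sensor readings).
state = {"eyes_closed_ratio": 0.55, "heart_rate": 58.0}
# Step 2: environmental data of the vehicle.
env = {"speed_kmh": 95.0, "weather_visibility": 0.6}
# Step 3: output the interactive message.
message = build_interaction_message(state, env)
```

When neither condition holds, the sketch outputs no message, mirroring the idea that interaction is only triggered when the driving state needs improving.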
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

1. A method of vehicle interaction, the method comprising:
acquiring first state data of a driver in the process of driving the vehicle; wherein the first state data includes: image data characterizing driving behavior of the driver and physiological data of the driver;
acquiring environmental data of the vehicle in the running process;
outputting an interaction message according to the first state data of the driver and the environment data of the vehicle; wherein the interactive message is used for improving the driving state of the driver;
wherein the outputting an interaction message according to the first state data of the driver and the environmental data of the vehicle comprises:
determining a state score based on the first state data of the driver and the environmental data of the vehicle; wherein the state score characterizes a degree of safety of the driver driving the vehicle;
outputting the interactive message according to the state score;
wherein the determining a state score according to the first state data of the driver and the environmental data of the vehicle comprises:
extracting features of the environmental data of the vehicle to obtain a first feature data set;
extracting features of the first state data of the driver to obtain a second feature data set;
aligning the first feature data set and the second feature data set, and then inputting the aligned first feature data set and the aligned second feature data set into a preset state evaluation model to obtain the state score and a driving intention of the driver;
wherein the outputting the interactive message according to the state score comprises:
and outputting the interaction message according to the state score and the driving intention.
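The scoring pipeline recited in claim 1 (separate feature extraction, alignment, then a preset state evaluation model yielding a state score and a driving intention) might be sketched as follows. The normalisation, truncation-based alignment, and toy model are illustrative stand-ins, not the patented model.

```python
# Hypothetical sketch of the claimed scoring pipeline. Every function
# body here is an illustrative stand-in for a learned component.
from typing import List, Tuple

def extract_env_features(env_data: List[float]) -> List[float]:
    # First feature data set: scale-normalised environmental data.
    m = max(abs(v) for v in env_data) or 1.0
    return [v / m for v in env_data]

def extract_state_features(state_data: List[float]) -> List[float]:
    # Second feature data set: same normalisation for driver state data.
    m = max(abs(v) for v in state_data) or 1.0
    return [v / m for v in state_data]

def align(a: List[float], b: List[float]) -> List[float]:
    # Align both feature sets to a common length, then concatenate.
    n = min(len(a), len(b))
    return a[:n] + b[:n]

def state_evaluation_model(features: List[float]) -> Tuple[float, str]:
    # Stand-in for the preset model: a state score (higher = safer)
    # and a coarse driving-intention label.
    score = sum(features) / len(features)
    intention = "lane_change" if features[-1] > 0.5 else "keep_lane"
    return score, intention

features = align(extract_env_features([2.0, 4.0]),
                 extract_state_features([1.0, 1.0]))
score, intention = state_evaluation_model(features)
```

A real system would replace each stand-in with a trained model; the point of the sketch is only the data flow: two feature sets, one alignment step, one joint evaluation producing both outputs.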
2. The method of claim 1, wherein the outputting the interactive message according to the state score comprises:
determining a state grade to which the state score belongs;
outputting the interactive message with the intensity corresponding to the state level.
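The grading step of claim 2 can be sketched as a simple threshold mapping from state score to state grade to output intensity; the grade names, thresholds, and intensities below are invented for illustration.

```python
# Hypothetical mapping from state score to state grade and output
# intensity. Thresholds and labels are illustrative assumptions.
def state_grade(score: float) -> str:
    if score >= 0.8:
        return "safe"
    if score >= 0.5:
        return "caution"
    return "danger"

# Each grade maps to an interaction intensity; a "safe" grade needs
# no interactive message at all.
INTENSITY = {
    "safe": None,
    "caution": "gentle voice prompt",
    "danger": "loud alert plus seat vibration",
}

def output_for(score: float):
    return INTENSITY[state_grade(score)]
```

The design choice illustrated here is that intensity grows as the score drops, so the message is proportionate to the assessed risk rather than all-or-nothing.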
3. The method of claim 1, wherein the environmental data of the vehicle comprises: running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
wherein the extracting features of the environmental data of the vehicle to obtain a first feature data set comprises:
extracting features of the running data of the vehicle to obtain first sub-feature data;
extracting features of the image data to obtain second sub-feature data;
and fusing the first sub-feature data and the second sub-feature data to obtain the first feature data set.
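The two-branch extraction and fusion of claim 3 might look like the following sketch; the normalisation constants, the brightness statistic, and the concatenation-based fusion are all assumptions for illustration.

```python
# Hypothetical sketch of claim 3: features are extracted separately from
# the vehicle's own running data and from image data of the external
# environment, then fused into the first feature data set.

def running_data_features(speed_kmh, steering_deg):
    # First sub-feature data: crude normalisation of driving signals.
    return [speed_kmh / 120.0, steering_deg / 90.0]

def image_features(pixels):
    # Second sub-feature data: one brightness statistic stands in for
    # a learned image feature extractor.
    return [sum(pixels) / len(pixels)]

def fuse(first_sub, second_sub):
    # Fusion by concatenation; a real system might weight or learn this.
    return first_sub + second_sub

first_feature_data_set = fuse(running_data_features(60.0, 9.0),
                              image_features([10, 20, 30]))
```

Concatenation is the simplest possible fusion; it preserves both modalities so the downstream state evaluation model can weigh them jointly.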
4. The method of claim 3, wherein the extracting features of the image data to obtain second sub-feature data comprises:
carrying out feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data; the first feature extraction module is a module suitable for extracting weather features;
carrying out feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data; the second feature extraction module is a module suitable for extracting road features;
carrying out feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data; the third feature extraction module is a feature extraction module which does not distinguish image contents;
and fusing the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
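Claim 4's three specialised extractors and their fusion can be sketched with toy statistics standing in for learned modules; every function below is an illustrative assumption, not the disclosed extractors.

```python
# Hypothetical sketch of claim 4: a weather-specific extractor, a
# road-specific extractor, and a general extractor that does not
# distinguish image contents, fused into the second sub-feature data.

def weather_features(pixels):
    # Mean brightness as a crude weather cue (e.g. fog or overcast sky).
    return [sum(pixels) / len(pixels)]

def road_features(pixels):
    # Brightness contrast as a crude lane-marking / road-surface cue.
    return [max(pixels) - min(pixels)]

def general_features(pixels):
    # Content-agnostic feature: here simply the image size.
    return [len(pixels)]

def fuse(*parts):
    # Fusion by concatenation of all feature vectors.
    return [v for part in parts for v in part]

image = [10, 20, 60]  # stand-in for image data of the external environment
second_sub_feature_data = fuse(weather_features(image),
                               road_features(image),
                               general_features(image))
```

The split mirrors the claim's rationale: specialised modules capture cues a generic extractor may miss, while the general module keeps content-agnostic information; fusing all three keeps both kinds.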
5. The method of claim 1, further comprising:
after the interactive message is output, second state data of the driver in the process of driving the vehicle are obtained;
and adjusting the output of the interactive message according to the second state data.
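Claim 5's feedback step, in which second state data acquired after the message is output adjusts further output, might be sketched as follows; the recovery threshold of 0.7 is an illustrative assumption.

```python
# Hypothetical sketch of claim 5: second state data acquired after the
# interactive message has been output is used to adjust further output.

def adjust_output(prev_message, second_state_score, threshold=0.7):
    """Suppress the alert once the driver's state score has recovered;
    otherwise keep repeating the previous interactive message."""
    return None if second_state_score >= threshold else prev_message

# The driver was alerted; fresh (second) state data is then evaluated.
still_alerting = adjust_output("Please take a break.", 0.35)
recovered = adjust_output("Please take a break.", 0.92)
```

This closes the loop: the message is not fire-and-forget but is re-evaluated against the driver's updated state.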
6. A vehicle interaction device, the device comprising:
the first acquisition module is used for acquiring first state data of a driver in the process of driving the vehicle; wherein the first state data includes: image data characterizing driving behavior of the driver and physiological data of the driver;
the second acquisition module is used for acquiring the environmental data of the vehicle in the running process;
the output module is used for extracting features of the environmental data of the vehicle to obtain a first feature data set; extracting features of the first state data of the driver to obtain a second feature data set; aligning the first feature data set and the second feature data set, and then inputting the aligned first feature data set and the aligned second feature data set into a preset state evaluation model to obtain a state score; wherein the state score characterizes a degree of safety of the driver driving the vehicle;
the obtaining module is used for inputting the aligned first feature data set and the aligned second feature data set into the preset state evaluation model to obtain a driving intention of the driver;
the output module is further used for outputting an interaction message according to the state score and the driving intention; wherein the interactive message is used for improving the driving state of the driver.
7. The apparatus of claim 6, wherein:
the output module is further used for determining a state grade to which the state score belongs; outputting the interactive message with the intensity corresponding to the state level.
8. The apparatus of claim 6, wherein the environmental data of the vehicle comprises: running data of the vehicle itself, and image data representing the external environment of the vehicle; wherein the external environment comprises a weather environment and a road environment;
the output module is further used for extracting features of the running data of the vehicle to obtain first sub-feature data; extracting features of the image data to obtain second sub-feature data; and fusing the first sub-feature data and the second sub-feature data to obtain the first feature data set.
9. The apparatus of claim 8, wherein:
the output module is further used for carrying out feature extraction on the image data representing the weather environment by using a preset first feature extraction module to obtain weather feature data, the first feature extraction module being a module suitable for extracting weather features; carrying out feature extraction on the image data representing the road environment by using a preset second feature extraction module to obtain road feature data, the second feature extraction module being a module suitable for extracting road features; carrying out feature extraction on the image data comprising the weather environment and the road environment by using a preset third feature extraction module to obtain general feature data, the third feature extraction module being a feature extraction module which does not distinguish image contents; and fusing the weather feature data, the road feature data and the general feature data to obtain the second sub-feature data.
10. The apparatus of claim 6, wherein:
the first acquisition module is further configured to obtain second state data of the driver in the process of driving the vehicle after the interactive message is output;
the output module is further configured to adjust the output of the interactive message according to the second state data.
11. A vehicle interaction device, the device comprising:
a memory for storing computer executable instructions;
a processor, coupled to the memory, for implementing the method of any one of claims 1 to 5 by executing the computer-executable instructions.
12. A computer-readable storage medium having stored therein computer-executable instructions configured to perform the method of any one of claims 1 to 5.
CN202111505828.9A 2021-12-10 2021-12-10 Vehicle interaction method, vehicle interaction device and storage medium Active CN114360241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111505828.9A CN114360241B (en) 2021-12-10 2021-12-10 Vehicle interaction method, vehicle interaction device and storage medium


Publications (2)

Publication Number Publication Date
CN114360241A CN114360241A (en) 2022-04-15
CN114360241B true CN114360241B (en) 2023-05-16

Family

ID=81099392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111505828.9A Active CN114360241B (en) 2021-12-10 2021-12-10 Vehicle interaction method, vehicle interaction device and storage medium

Country Status (1)

Country Link
CN (1) CN114360241B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101051419A (en) * 2006-04-05 2007-10-10 中国科学院电子学研究所 Vehicle and road interaction system and method based on radio sensor network
CN103810877A (en) * 2012-11-09 2014-05-21 无锡美新物联网科技有限公司 Automobile information interaction safety system
JP2021099877A (en) * 2020-03-17 2021-07-01 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッドBeijing Baidu Netcom Science Technology Co., Ltd. Method, device, apparatus and storage medium for reminding travel on exclusive driveway

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207895699U (en) * 2017-09-25 2018-09-21 南京富朗星科技有限公司 One kind having driver's dangerous driving behavior prior-warning device
CN108877120A (en) * 2018-08-31 2018-11-23 惠州市名商实业有限公司 On-vehicle safety drives terminal and safety driving system
DE102018217425A1 (en) * 2018-10-11 2020-04-16 Continental Automotive Gmbh Driver assistance system for a vehicle
CN110171361B (en) * 2019-06-17 2022-09-23 山东理工大学 Automobile safety early warning method based on emotion and driving tendency of driver
CN111402925B (en) * 2020-03-12 2023-10-10 阿波罗智联(北京)科技有限公司 Voice adjustment method, device, electronic equipment, vehicle-mounted system and readable medium
CN113643512B (en) * 2021-07-28 2023-07-18 北京中交兴路信息科技有限公司 Fatigue driving detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114360241A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US10908677B2 (en) Vehicle system for providing driver feedback in response to an occupant's emotion
CN112141119B (en) Intelligent driving control method and device, vehicle, electronic equipment and storage medium
CN114026611A (en) Detecting driver attentiveness using heatmaps
US10636301B2 (en) Method for assisting operation of an ego-vehicle, method for assisting other traffic participants and corresponding assistance systems and vehicles
EP3518205A1 (en) Vehicle control device, vehicle control method, and moving body
CN107097793A (en) Driver assistance and the vehicle with the driver assistance
CN106448221B (en) Pushing method of online driving assistance information pushing system and application of pushing method
JP6613623B2 (en) On-vehicle device, operation mode control system, and operation mode control method
CN102752458A (en) Driver fatigue detection mobile phone and unit
US10666901B1 (en) System for soothing an occupant in a vehicle
US10741076B2 (en) Cognitively filtered and recipient-actualized vehicle horn activation
CN114841377B (en) Federal learning model training method and recognition method applied to image target recognition
CN114987500A (en) Driver state monitoring method, terminal device and storage medium
KR20190063986A (en) Artificial intelligence dashboard robot base on cloud server for recognizing states of a user
CN112954486B (en) Vehicle-mounted video trace processing method based on sight attention
CN112667084B (en) Control method and device for vehicle-mounted display screen, electronic equipment and storage medium
CN113352989A (en) Intelligent driving safety auxiliary method, product, equipment and medium
CN114360241B (en) Vehicle interaction method, vehicle interaction device and storage medium
CN115720555A (en) Method and system for improving user alertness in an autonomous vehicle
CN105253062A (en) Automobile advanced driver assistance system-based image display system and implementation method thereof
Kashevnik et al. Context-based driver support system development: Methodology and case study
CN114013367A (en) High beam use reminding method and device, electronic equipment and storage medium
CN111815904A (en) Method and system for pushing V2X early warning information
CN112258813A (en) Vehicle active safety control method and device
CN111918461A (en) Road condition sharing method, system, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant