CN117999210A - Driving state abnormality reminding method and device, automobile and storage medium


Info

Publication number
CN117999210A
Authority
CN
China
Prior art keywords: information, vehicle, driving, data, environment
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202380013714.3A
Other languages
Chinese (zh)
Inventor
古天生
韩永刚
顾子贤
王玉玲
Current Assignee (listed assignees may be inaccurate)
Streamax Technology Co Ltd
Original Assignee
Streamax Technology Co Ltd
Application filed by Streamax Technology Co Ltd filed Critical Streamax Technology Co Ltd
Publication of CN117999210A publication Critical patent/CN117999210A/en


Landscapes

  • Traffic Control Systems (AREA)

Abstract

A driving state abnormality reminding method and device, an automobile, and a storage medium. The method comprises: acquiring driving environment information, historical dialogue information, and vehicle driving data; generating reminder content corresponding to the current environment from the driving environment information, the historical dialogue information, and the vehicle driving data in combination with a pre-trained interactive intervention model; and generating a voice reminder from the reminder content. Because the output voice reminder is associated with the vehicle driving data and driving environment information of the current environment, it is more accurate and reliable; and because the reminder is generated with reference to historical dialogue information, the service quality of the voice reminder can be effectively improved.

Description

Driving state abnormality reminding method and device, automobile and storage medium
Technical Field
The present application relates to the field of intelligent driving, and in particular to a driving state abnormality reminding method and device, an automobile, and a storage medium.
Background
With the improvement of living standards and the popularization of automobiles, the automobile has become an important means of travel, bringing great convenience to life and work. However, while an automobile is being driven, abnormal driving states such as fatigued driving or driver distraction may arise, making traffic accidents more likely and causing personal injury and property loss.
To detect an abnormal driver state in time, a driving assistance system based on visual information may be employed, using the driver's state information as input to a state monitoring model. When the driver's state information is abnormal, the state monitoring model issues a voice broadcast prompt or a manual warning. However, driver state recognition is easily interfered with by environmental information, so the accuracy of state recognition is not high, which is not conducive to high-quality intelligent intervention.
Disclosure of Invention
In view of the above, embodiments of the present application provide a driving state abnormality reminding method and device, an automobile, and a storage medium, so as to solve the prior-art problem that driver state recognition is easily interfered with by environmental information, resulting in low recognition accuracy, which is not conducive to providing high-quality intelligent intervention.
A first aspect of an embodiment of the present application provides a driving state abnormality reminding method, the method comprising:
acquiring driving environment information, historical dialogue information, and vehicle driving data;
generating reminder content corresponding to the current environment according to the driving environment information, the historical dialogue information, and the vehicle driving data, in combination with a pre-trained interactive intervention model; and
generating a voice reminder according to the reminder content.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the driving environment information includes in-vehicle environment information and out-of-vehicle environment information;
generating reminder content corresponding to the current environment according to the driving environment information, the historical dialogue information, and the vehicle driving data, in combination with the pre-trained interactive intervention model, comprises:
Inputting the in-vehicle environment information into a pre-trained state monitoring model to obtain the state information of the driver;
And inputting the in-vehicle environment information, the out-vehicle environment information, the historical dialogue information, the vehicle driving data and the state information of the driver into the interactive intervention model to generate reminding content corresponding to the current environment.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, acquiring driving environment information includes:
acquiring, through an automatic data acquisition system, at least one of lane line information, pedestrian information, surrounding vehicle information, obstacle information, and traffic signal information of the scene where the vehicle is located; and
At least one of face orientation information, eye information, mouth information and hand information of a driver is acquired by a driver monitoring system.
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, before generating, according to the driving environment information, the historical dialogue information, and the vehicle driving data, the reminder content corresponding to the current environment in combination with the pre-trained interaction intervention model, the method further includes:
Acquiring sound information of a microphone in a current scene;
generating reminder content corresponding to the current environment according to the driving environment information, the historical dialogue information, and the vehicle driving data, in combination with the pre-trained interactive intervention model, comprises:
Inputting the in-vehicle environment information into a pre-trained state monitoring model to obtain the state information of the driver;
And inputting the driving environment information, the historical dialogue information, the vehicle running data, the state information of the driver and the sound information into a pre-trained interaction intervention model, and generating reminding content corresponding to the current environment.
With reference to the first possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, inputting the driving environment information, the historical dialogue information, the vehicle driving data, the state information of the driver, and the sound information into a pre-trained interactive intervention model includes:
Extracting image features in an image through a feature extraction model, wherein the image comprises an in-vehicle environment image and an out-of-vehicle environment image in the driving environment information;
identifying text information in the historical dialogue information and the sound information through a voice recognition model, and extracting feature vectors in the text information according to a pre-trained vector extraction model;
and concatenating the image features, the feature vectors, the driver's state information, and the vehicle driving data with preset markers, and inputting the marker-concatenated data into the interactive intervention model.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, before generating, according to the driving environment information, the historical dialogue information, and the vehicle driving data, the reminder content corresponding to the current environment in combination with the pre-trained interactive intervention model, the method further includes:
Acquiring sample data, wherein the sample data comprises driving environment information, historical dialogue information, vehicle driving data and standard content;
driving environment information, historical dialogue information and vehicle driving data in the sample data are input into the interactive intervention model, and reminding content is obtained through calculation;
And determining the difference between the reminding content and the standard content, and adjusting the parameters of the interaction intervention model according to the difference until the difference meets the preset requirement.
With reference to the fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, determining a difference between the reminder content and the standard content includes:
And calculating word shift distance between the reminding content and the standard content, and determining the difference between the reminding content and the standard content according to the word shift distance.
A second aspect of an embodiment of the present application provides an abnormality alert device for a driving state, the device including:
An information acquisition unit configured to acquire driving environment information, history dialogue information, and vehicle travel data;
The reminding content generation unit is used for generating reminding content corresponding to the current environment according to the driving environment information, the historical dialogue information and the vehicle driving data and combining a pre-trained interaction intervention model;
and the voice reminding unit is used for generating voice reminding according to the reminding content.
A third aspect of an embodiment of the present application provides an automobile comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspects when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: driving environment information, historical dialogue information, and vehicle driving data are acquired; reminder content corresponding to the current environment is calculated from these inputs in combination with the interactive intervention model; and a voice reminder is generated from the reminder content. The voice reminder is thus associated with the vehicle driving data and driving environment information of the current environment, so the output voice reminder is more accurate and reliable; and because it is generated based on historical dialogue information, the service quality of the voice reminder can be effectively improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an implementation scenario of a driving state abnormality reminding method according to an embodiment of the present application;
Fig. 2 is a schematic implementation flow chart of a driving state abnormality reminding method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of an implementation of a method for generating reminder content according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of an implementation of a training interaction intervention model according to an embodiment of the present application;
Fig. 5 is a schematic diagram of an abnormality alert device for driving status according to an embodiment of the present application;
fig. 6 is a schematic diagram of an automobile according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
To avoid traffic accidents caused by fatigued or distracted driving, a driver's state data is generally collected and analyzed to determine whether an abnormal state such as fatigued or distracted driving has arisen. If the driver is in an abnormal state, an intervention reminder may be sent to the driver. However, the intervention effect is often poor: the driver may revert to the original state some time after the reminder, so fatigued or distracted driving cannot be continuously and effectively prevented.
Fig. 1 is a schematic diagram of an implementation scenario of a driving state abnormality reminding method according to an embodiment of the present application. As shown in fig. 1, the implementation scenario includes an ADAS (Automatic Data Acquisition System), a DMS (Driver Monitoring System), a microphone, a speaker, an ECU (Electronic Control Unit), and an interactive intervention system.
The ADAS is used to collect the out-of-vehicle environment information in the driving environment information. The out-of-vehicle environment information includes one or more of lane line information, pedestrian information, surrounding vehicle information, obstacle information, and traffic signal information around the vehicle. The surrounding vehicle information may include the distance between the own vehicle and other vehicles, and the speed and acceleration of other vehicles.
The DMS is used to collect in-vehicle environment information in the driving environment information. In-vehicle environmental information may be acquired through in-vehicle environmental images. The in-vehicle environment information may include, for example, at least one of face orientation information of the driver, eye information of the driver, mouth information of the driver, and hand information of the driver.
The microphone may be used to collect sound information in the current scene, including sound information of the driver and/or passengers. The sound information may include, but is not limited to, the content of speech, modal particles, and the like.
The ECU may be used to obtain travel data of the vehicle including, but not limited to, the speed of the vehicle (own vehicle), acceleration, distance from a preceding vehicle or obstacle, and the like.
The loudspeaker is used for playing the calculated reminding content, so that a driver can timely know the current abnormal state.
The interactive intervention system can process the driving environment information collected by the ADAS and the DMS, the vehicle driving data transmitted by the ECU, and the voice information collected by the microphone, combining them with preset historical dialogue information to calculate the reminder content for output. For example, the in-vehicle environment information may be processed by a state monitoring model to determine the driver's state information, including whether the driver is in a fatigued driving state or a distracted driving state.
After the driver's state information is determined, the in-vehicle environment information, the historical dialogue information, the sound information in the current scene, the vehicle driving data, and the driver's state information can be input into the interactive intervention model to generate reminder content corresponding to the current environment.
Alternatively, the in-vehicle environment information, the out-of-vehicle environment information, the historical dialogue information, the sound information in the current scene, and the vehicle driving data can be input directly into the interactive intervention model to generate the reminder content corresponding to the current environment.
The driver's state information is predicted by the state monitoring model. Because this task scenario is relatively simple, the accuracy of the predicted state information can be effectively improved. Combining the state monitoring model with the interactive intervention model increases the complexity of the data and the difficulty of the task; however, the driver state information output by the state monitoring model serves as a redundant input to the interactive intervention model, which improves the latter's learning ability: the interactive intervention model not only sees the predicted driver state but can also learn other behavioral features from the driving environment information (such as the in-vehicle and/or out-of-vehicle environment images), improving the accuracy of the output reminder content.
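As a rough illustration of this two-stage design, the sketch below replaces both neural networks with rule-based stubs (all function names, thresholds, and inputs are hypothetical) so that the data flow, with the state monitor's prediction fed into the reminder generator as a redundant input alongside the raw driving data, is visible:

```python
# Rule-based stubs standing in for the two models; thresholds and
# field names are invented for illustration only.

def monitor_state(eye_closure_ratio, yawn_rate):
    """Stub for the pre-trained state monitoring model (DMS input)."""
    if eye_closure_ratio > 0.5 or yawn_rate > 0.3:
        return "fatigued"
    return "normal"

def generate_reminder(driver_state, speed_kmh, follow_distance_m):
    """Stub for the interactive intervention model: the predicted
    driver state arrives as a redundant input alongside raw driving
    data, mirroring the architecture described in the text."""
    problems = []
    if driver_state == "fatigued":
        problems.append("you appear fatigued")
    if speed_kmh > 120:
        problems.append("the vehicle is over the speed limit")
    if follow_distance_m < 20:
        problems.append("you are following too closely")
    if not problems:
        return "Driving state normal."
    return "Attention: " + "; ".join(problems) + "."

state = monitor_state(eye_closure_ratio=0.7, yawn_rate=0.1)
print(generate_reminder(state, speed_kmh=130.0, follow_distance_m=15.0))
```

The real interactive intervention model is generative; the fixed phrases here merely stand in for the diversified reminder content described later.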
Fig. 2 is a schematic implementation flow chart of a driving state abnormality reminding method according to an embodiment of the present application, which is described in detail below:
In S201, driving environment information, history dialogue information, and vehicle running data are acquired.
The driving environment information may include in-vehicle environment information and out-of-vehicle environment information. The in-vehicle environment information can be acquired through the DMS, and the out-of-vehicle environment information through the ADAS. For example, the in-vehicle environment information may be derived from an in-vehicle environment image, and the out-of-vehicle environment information from an out-of-vehicle environment image.
The in-vehicle environment information may include, for example, at least one of face orientation information of the driver, eye information of the driver, mouth information of the driver, and hand information of the driver. The vehicle exterior environment information includes one or more of lane line information around the vehicle, pedestrian information around the vehicle, vehicle information around the vehicle, obstacle information around the vehicle, traffic signal information around the vehicle. The vehicle information around the vehicle may include information on a distance between the own vehicle and other vehicles, a speed of other vehicles, acceleration, and the like.
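For concreteness, the inputs enumerated above might be grouped into a single structure along these lines (a hypothetical sketch; the field names are illustrative and not part of the application):

```python
from dataclasses import dataclass, field

@dataclass
class DrivingEnvironment:
    """Hypothetical container for driving environment information;
    field names are illustrative only."""
    face_orientation: str = "forward"       # from DMS
    eyes_closed_ratio: float = 0.0          # from DMS
    lane_lines: list = field(default_factory=list)       # from ADAS
    nearby_vehicles: list = field(default_factory=list)  # (distance_m, speed_kmh)
    traffic_signals: list = field(default_factory=list)  # from ADAS

env = DrivingEnvironment(eyes_closed_ratio=0.6,
                         nearby_vehicles=[(12.0, 95.0)])
print(env.eyes_closed_ratio)
```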
The historical dialogue information may include the driver's historical dialogue content. Dialogue content generated by the user during past driving can be recorded through the microphone, including conversations between the driver and passengers, or conversations the driver holds over communication tools such as telephones and intercoms. Alternatively, historical dialogue information can be delivered through a cloud platform.
In a possible implementation manner, the application scenario of the driving state abnormality reminding method may further include a microphone, which is used to collect sound information of the driver's current environment. The sound information is input into the interactive intervention model to obtain reminder content better matched to the current environment.
In S202, according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with the interaction intervention model trained in advance, a reminder content corresponding to the current environment is generated.
The interactive intervention model may comprise a model of a neural network structure. To improve the learning ability of the interactive intervention model, when generating reminder content using the interactive intervention model, it may include, as shown in fig. 3:
in S301, the in-vehicle environment information is input into a pre-trained state monitoring model, and the state information of the driver is obtained.
The in-vehicle environment image acquired by the DMS can be input into a pre-trained state monitoring model to obtain the state information of the driver in the current environment.
The state monitoring model can be obtained by training on preset sample data. For example, the sample data may include sample images and standard state information. A sample image is input into the state monitoring model to be trained, which calculates and outputs state information corresponding to the sample image; the calculated state information is compared with the standard state information, and the parameters of the state monitoring model are adjusted according to the difference between them until the difference meets a preset requirement.
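The compute-compare-adjust loop just described can be illustrated in miniature. In the toy sketch below the "model" is a single eye-closure threshold rather than a neural network; the samples, labels, and learning rate are all invented for illustration:

```python
def train_threshold(samples, lr=0.05, max_iter=200):
    """Toy stand-in for the train/compare/adjust loop: fit a single
    eye-closure threshold so that predicted fatigue labels match the
    standard labels. Only the loop structure (compute output, compare
    with the standard, adjust the parameter until the difference meets
    a requirement) mirrors the text; everything else is invented."""
    threshold = 0.0
    for _ in range(max_iter):
        error = 0
        for ratio, label in samples:  # label: 1 = fatigued, 0 = normal
            pred = 1 if ratio > threshold else 0
            error += abs(pred - label)
            # a false positive pushes the threshold up, a miss pulls it down
            threshold += lr * (pred - label)
        if error == 0:  # the difference meets the preset requirement
            break
    return threshold

samples = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
print(round(train_threshold(samples), 2))
```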
In S302, the in-vehicle environment information, the history dialogue information, the vehicle driving data, and the state information of the driver are input to the interaction intervention model, and a reminder corresponding to the current environment is generated.
Because the interactive intervention model receives a large volume of input data, its task is difficult to compute. Inputting the driver state information predicted from the data collected by the DMS adds a redundant input to the interactive intervention model, which improves the model's learning ability: the model receives the state prediction and can also learn other behavioral features from the image information collected by the DMS, such as the in-vehicle environment image, improving the accuracy of the output reminder content.
When the application scene comprises the microphone, driving environment information, historical dialogue information, vehicle driving data, state information of a driver and sound information in the current environment collected by the microphone can be input into an interaction intervention model trained in advance to generate reminding content corresponding to the current environment when the reminding content is calculated.
In the embodiment of the present application, when the driving environment information, the historical dialogue information, the vehicle driving data, the driver's state information, and the sound information collected by the microphone in the current environment are input into the pre-trained interactive intervention model, the in-vehicle and out-of-vehicle environment images in the driving environment information can be flattened into two-dimensional data, spliced with the other two-dimensional data, and the spliced data input into the interactive intervention model. Through the attention mechanism, the model focuses on the abnormal points in the input, including abnormal driver state information accompanied by problems such as overspeed or following the vehicle ahead too closely, and generates reminder content corresponding to the current environment, such as overspeed reminders and too-close-following reminders. This effectively reduces the problem of overly uniform reminder content, so the driver can more effectively notice the problems caused by the state abnormality.
When data is input into the interactive intervention model, different data may be represented in the same data form. For example, the environment image information obtained by the ADAS and the DMS may undergo image size conversion, e.g., to a resolution of 224×224. The size-transformed images may be used as input to a pre-trained CLIP ViT-L/14 model (a neural network model) for extracting image features from the environment image information, including image features of the in-vehicle and out-of-vehicle environment images. Downsampling extracts the key content of the environment images while reducing the data volume.
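The resizing step can be sketched as follows. This toy nearest-neighbour resizer (the function name and the nested-list image representation are illustrative) stands in for the library resizer a real pipeline would use before handing the image to a vision encoder such as CLIP:

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize of an image stored as a nested list
    (rows of pixel values), standing in for the 224x224 preprocessing
    step before feature extraction. Real pipelines use library
    resizers (e.g. bilinear interpolation) and then a vision encoder."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

# 8x8 toy "image" downsampled to 4x4, keeping every other pixel.
img = [[r * 10 + c for c in range(8)] for r in range(8)]
small = resize_nearest(img, 4, 4)
print(len(small), len(small[0]))  # 4 4
```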
In addition, for the voice information converted from the microphone input (including the recognized text) and the past historical dialogue information (including the text of historical dialogues), a pre-trained bert-large-uncased model (a neural network model) may be used to convert the text information into feature vectors. The vehicle driving data and additional data obtained from the ADAS and DMS (such as lane lines, pedestrians, face orientation, etc.) serve as inputs to a learnable embedding layer, so that the interactive intervention model learns their features itself. All of the above features are flattened into two-dimensional data and connected by special markers.
For example, <s> and </s> may be used to represent the beginning and end of a sequence. The special markers <text> and </text> represent the beginning and end of the text embedding converted from the driver's speech data. The special markers <image> and </image> represent the beginning and end of the encoded image embedding. The special markers <history> and </history> represent the beginning and end of the historical dialogue embedding. The special markers <other> and </other> represent the beginning and end of the embedding of the remaining data.
For example, "<s><image>image_from_ADAS</image><image>image_from_DMS</image><other>other_embeddings</other><history>dialogue_content</history><text>current_speech_text</text></s>" is a multimodal input to the interactive intervention model. On such multimodal inputs, the model, i.e., the interactive intervention model, is trained on a next-marker prediction task: learning to generate the next marker from the preceding context. The training objective may include maximizing the log-likelihood of the markers in the example.
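The marker layout above can be assembled as a plain string. In this sketch the embedding payloads are represented by placeholder strings; a real implementation would splice embedding vectors, not text:

```python
def build_multimodal_sequence(adas_img, dms_img, other, history, speech):
    """Assemble the marker-delimited multimodal input described in the
    text. Each argument is a placeholder for an embedding payload."""
    return ("<s>"
            f"<image>{adas_img}</image>"
            f"<image>{dms_img}</image>"
            f"<other>{other}</other>"
            f"<history>{history}</history>"
            f"<text>{speech}</text>"
            "</s>")

seq = build_multimodal_sequence("image_from_ADAS", "image_from_DMS",
                                "other_embeddings", "dialogue_content",
                                "current_speech_text")
print(seq)
```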
In the embodiment of the application, before the calculation of the reminding content by using the interactive intervention model, training of the interactive intervention model can be further included, as shown in fig. 4, and the training process can include:
in S401, sample data including driving environment information, history dialogue information, vehicle running data, and standard contents is acquired.
The sample data in the embodiment of the present application is data in which a correspondence between driving information and reminder content is preset. The driving information includes driving environment information, historical dialogue information, vehicle driving data, and sound information of the current environment. Different driving information may correspond to different standard content. Generating diversified standard content from different driving information increases the diversity of the reminder content, and relating the reminder content to the dialogue content makes the reminder more pertinent to what the driver is saying.
In S402, driving environment information, history dialogue information, and vehicle driving data in the sample data are input to the interactive intervention model, and reminder content is calculated.
The environment information in the sample data, including the driving environment information, the historical dialogue information, and the vehicle driving data, and optionally the sound information of the current environment in the sample data, is input into the interactive intervention model, and the reminder content generated by the interactive intervention model is calculated according to its current parameters.
In S403, a difference between the reminding content and the standard content is determined, and parameters of the interactive intervention model are adjusted according to the difference until the difference meets a predetermined requirement.
The reminder content calculated by the interactive intervention model can be compared with the standard content, and the accuracy of the current interactive intervention model determined from the calculated difference. The parameters of the interactive intervention model are adjusted according to the difference until the difference is reduced to meet the preset requirement, thereby completing the training of the interactive intervention model.
When determining the difference between the reminder content calculated by the interactive intervention model and the standard content, the word shift distance between them can be calculated, and the size of the difference determined from the size of the word shift distance: the larger the word shift distance, the larger the difference between the standard content and the reminder content; the smaller the word shift distance, the smaller the difference. To determine the word shift distance, the word vectors in the standard content and the word vectors in the reminder content are determined, and the distance is computed from these word vectors. Determining the difference through the word shift distance takes into account the semantic relevance between the manually annotated answer and the generated reminder content without constraining the specific wording of the sentences, which improves the diversity of the reminder content.
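A greatly simplified version of the word shift distance can illustrate the idea. The real word mover's distance solves an optimal-transport problem over word vectors; the sketch below (all vectors are toy values) merely averages each generated word's distance to its nearest counterpart in the standard content:

```python
import math

def word_shift_distance(content_vecs, standard_vecs):
    """Simplified word-shift (word mover's) distance: for each word
    vector in the generated reminder, take the Euclidean distance to
    its nearest word vector in the standard content, then average.
    This greedy version only illustrates the idea of semantic rather
    than literal comparison; it is not a true optimal-transport WMD."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(c, s) for s in standard_vecs)
               for c in content_vecs) / len(content_vecs)

# Identical vector sets -> zero distance (contents semantically match).
same = [(1.0, 0.0), (0.0, 1.0)]
print(word_shift_distance(same, same))  # 0.0
```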
In S203, a voice reminder is generated from the reminding content.
The reminding content can be converted into speech by a speech module, and the speech played through a loudspeaker, giving the driver diversified voice reminders.
In this method, driving environment information, historical dialogue information and vehicle driving data are acquired; reminding content corresponding to the current environment is calculated from this data in combination with the interaction intervention model; and a voice reminder is generated from the reminding content. Because the voice reminder is associated with the vehicle driving data and the driving environment information of the current environment, it is more accurate and reliable, and because it is generated on the basis of the historical dialogue information, the service quality of the voice reminder is effectively improved.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 5 is a schematic diagram of an abnormality alert device for driving status according to an embodiment of the present application. As shown in fig. 5, the apparatus includes:
An information acquisition unit 501, configured to acquire driving environment information, historical dialogue information and vehicle driving data.
A reminding content generating unit 502, configured to generate, according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with the pre-trained interaction intervention model, reminding content corresponding to the current environment.
A voice reminding unit 503, configured to generate a voice reminder according to the reminding content.
The abnormality alert device for the driving state shown in fig. 5 corresponds to the abnormality alert method for the driving state shown in fig. 2.
Fig. 6 is a schematic diagram of an automobile according to an embodiment of the present application. As shown in fig. 6, the automobile 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60, such as a driving-state abnormality alert program. When executing the computer program 62, the processor 60 implements the steps in the embodiments of the driving-state abnormality alert method described above, or the functions of the modules/units in the apparatus embodiments described above.
Illustratively, the computer program 62 may be partitioned into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program 62 in the automobile 6.
The automobile may include, but is not limited to, the processor 60 and the memory 61. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the automobile 6 and does not limit it; the automobile may include more or fewer components than shown, combine certain components, or use different components. For example, the automobile may further include input and output devices, network access devices, buses, and so on.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the automobile 6, such as a hard disk or memory of the automobile 6. The memory 61 may also be an external storage device of the automobile 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the automobile 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the automobile 6. The memory 61 is used to store the computer program as well as other programs and data required by the automobile, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the functional units and modules described above is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and do not limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the procedures in the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of the respective method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. An abnormality alert method of a driving state, the method comprising:
acquiring driving environment information, historical dialogue information and vehicle driving data;
generating reminding content corresponding to the current environment according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with a pre-trained interaction intervention model; and
generating a voice reminder according to the reminding content.
2. The method of claim 1, wherein the driving environment information includes in-vehicle environment information and out-of-vehicle environment information; and
generating reminding content corresponding to the current environment according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with the pre-trained interaction intervention model, comprises:
inputting the in-vehicle environment information into a pre-trained state monitoring model to obtain state information of the driver; and
inputting the in-vehicle environment information, the out-of-vehicle environment information, the historical dialogue information, the vehicle driving data and the state information of the driver into the interaction intervention model to generate the reminding content corresponding to the current environment.
3. The method according to claim 1 or 2, wherein acquiring driving environment information comprises:
acquiring at least one of lane line information, pedestrian information, surrounding vehicle information, obstacle information and traffic signal information of the scene where the vehicle is located through an automatic data acquisition system; and
acquiring at least one of face orientation information, eye information, mouth information and hand information of the driver through a driver monitoring system.
4. The method of claim 2, wherein, before generating reminding content corresponding to the current environment according to the driving environment information, historical dialogue information and vehicle driving data in combination with the pre-trained interaction intervention model, the method further comprises:
acquiring sound information from a microphone in the current scene; and
generating reminding content corresponding to the current environment according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with the pre-trained interaction intervention model, comprises:
inputting the in-vehicle environment information into a pre-trained state monitoring model to obtain state information of the driver; and
inputting the driving environment information, the historical dialogue information, the vehicle driving data, the state information of the driver and the sound information into the pre-trained interaction intervention model to generate the reminding content corresponding to the current environment.
5. The method of claim 2, wherein inputting the driving environment information, historical dialogue information, vehicle driving data, state information of the driver and the sound information into the pre-trained interaction intervention model comprises:
extracting image features from images through a feature extraction model, wherein the images include an in-vehicle environment image and an out-of-vehicle environment image in the driving environment information;
recognizing text information in the historical dialogue information and the sound information through a speech recognition model, and extracting feature vectors from the text information according to a pre-trained vector extraction model; and
connecting the image features, the feature vectors, the state information of the driver and the vehicle driving data with preset markers, and inputting the marker-connected data into the interaction intervention model.
6. The method of claim 1, wherein, before generating reminding content corresponding to the current environment according to the driving environment information, historical dialogue information and vehicle driving data in combination with the pre-trained interaction intervention model, the method further comprises:
acquiring sample data, wherein the sample data includes driving environment information, historical dialogue information, vehicle driving data and standard content;
inputting the driving environment information, historical dialogue information and vehicle driving data in the sample data into the interaction intervention model, and calculating reminding content; and
determining a difference between the reminding content and the standard content, and adjusting parameters of the interaction intervention model according to the difference until the difference meets a predetermined requirement.
7. The method of claim 6, wherein determining the difference between the reminding content and the standard content comprises:
calculating a word shift distance between the reminding content and the standard content, and determining the difference between the reminding content and the standard content according to the word shift distance.
8. An abnormality alert device for a driving state, the device comprising:
an information acquisition unit, configured to acquire driving environment information, historical dialogue information and vehicle driving data;
a reminding content generation unit, configured to generate reminding content corresponding to the current environment according to the driving environment information, the historical dialogue information and the vehicle driving data, in combination with a pre-trained interaction intervention model; and
a voice reminding unit, configured to generate a voice reminder according to the reminding content.
9. An automobile comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202380013714.3A 2023-06-25 2023-06-25 Driving state abnormality reminding method and device, automobile and storage medium Pending CN117999210A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2023102195 2023-06-25

Publications (1)

Publication Number Publication Date
CN117999210A 2024-05-07

Family

ID=90891097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202380013714.3A Pending CN117999210A (en) 2023-06-25 2023-06-25 Driving state abnormality reminding method and device, automobile and storage medium

Country Status (1)

Country Link
CN (1) CN117999210A (en)

Similar Documents

Publication Publication Date Title
US10931772B2 (en) Method and apparatus for pushing information
CN108944939B (en) Method and system for providing driving directions
Malta et al. A study of driver behavior under potential threats in vehicle traffic
US20190120649A1 (en) Dialogue system, vehicle including the dialogue system, and accident information processing method
CN111402925B (en) Voice adjustment method, device, electronic equipment, vehicle-mounted system and readable medium
JP6613623B2 (en) On-vehicle device, operation mode control system, and operation mode control method
CN111415654B (en) Audio recognition method and device and acoustic model training method and device
CN108492819A (en) Language exercise method, apparatus, intelligent vehicle mounted terminal and storage medium
CN110855934A (en) Fatigue driving identification method, device and system, vehicle-mounted terminal and server
CN110533941B (en) Vehicle communication method, device, electronic equipment and computer medium
CN117999210A (en) Driving state abnormality reminding method and device, automobile and storage medium
CN110263664A (en) A kind of more occupant lanes are broken rules and regulations recognition methods and device
CN110826433A (en) Method, device and equipment for processing emotion analysis data of pilot driving user and storage medium
CN115544232A (en) Vehicle-mounted intelligent question answering and information recommending method and device
CN115431995A (en) Equipment control method and device based on different levels of assistant driving
US11869488B2 (en) Agent device, agent system, and computer-readable storage medium
CN113393643B (en) Abnormal behavior early warning method and device, vehicle-mounted terminal and medium
CN117115788B (en) Intelligent interaction method for vehicle, back-end server and front-end equipment
CN115158210B (en) Method, device, equipment and storage medium for monitoring extension of object from vehicle window
CN113409776B (en) Voice recognition method and device, electronic equipment and storage medium
Sathyanarayana et al. Automatic maneuver boundary detection system for naturalistic driving massive corpora
CN113844456B (en) ADAS automatic opening method and device
CN111008586A (en) Data processing method, device, equipment and storage medium for passenger car conflict detection
CN115859219A (en) Multi-modal interaction method, device, equipment and storage medium
CN117116000A (en) Driving early warning method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination