CN111652065B - Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing - Google Patents

Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing

Info

Publication number
CN111652065B
CN111652065B (application CN202010361618.6A)
Authority
CN
China
Prior art keywords
vehicle
data
equipment
neural network
sent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010361618.6A
Other languages
Chinese (zh)
Other versions
CN111652065A (en)
Inventor
孙善宝
罗清彩
金长新
于�玲
于晓艳
张鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Scientific Research Institute Co Ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd filed Critical Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN202010361618.6A
Publication of CN111652065A
Application granted
Publication of CN111652065B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 20/00 Scenes; Scene-specific elements
                    • G06V 20/50 Context or environment of the image
                        • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
                            • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00 Pattern recognition
                    • G06F 18/20 Analysing
                        • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
                        • G06F 18/25 Fusion techniques
                            • G06F 18/253 Fusion techniques of extracted features
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00 Computing arrangements based on biological models
                    • G06N 3/02 Neural networks
                        • G06N 3/04 Architecture, e.g. interconnection topology
                            • G06N 3/044 Recurrent networks, e.g. Hopfield networks
                            • G06N 3/045 Combinations of networks
                            • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The invention relates to a multi-modal safe driving method, device and system based on vehicle perception and intelligent wearables, wherein the method comprises the following steps: respectively receiving data sent by the vehicle-mounted sensing device, the vehicle central control system, the Internet of Vehicles system and the intelligent wearable device; fusing the data through a control gate of a fusion neural network; and judging the fused data through a judgment module of the fusion neural network, and controlling or assisting the driving of the vehicle based on the judgment result. The embodiment of the invention uses deep learning to fuse the data and outputs a judgment result through the neural network to remind the driver or directly control the vehicle, thereby ensuring driving safety.

Description

Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing
Technical Field
The invention relates to the technical fields of intelligent networking, multi-modal fusion and deep learning, and in particular to a multi-modal safe driving method, device and system based on vehicle perception and intelligent wearables.
Background
In recent years, artificial intelligence technology has developed rapidly, its commercialization has outpaced expectations, and it is set to bring disruptive changes to society as a whole and to become an important national development strategy in the years ahead. In particular, algorithms with deep learning at their core have shown remarkable capacity to evolve: supported by big data, large-scale neural networks loosely modelled on the structure of the human brain can be trained to solve a wide variety of problems. Because such networks combine many complex factors in a non-linear way, feature learning is especially important, and the availability of massive training data has greatly alleviated the problem of overfitting. Deep learning from big data has achieved strong practical results in computer vision, audio processing and natural language processing through neural networks, breaking through traditional pattern-recognition approaches and producing disruptive changes in many fields.
Disclosure of Invention
The embodiments of the invention provide a multi-modal safe driving method, device and system based on vehicle perception and intelligent wearables, aiming to solve, at least to some extent, the following technical problem:
how to effectively fuse the information from the external environment sensing devices, the internal sensing devices and the intelligent driver-sensing devices so as to improve the driver's safety.
The first aspect of the embodiment of the invention provides a multi-modal safe driving method based on vehicle perception and intelligent wearing, which comprises the following steps:
respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment;
fusing the data through a control gate of a fusion neural network;
and judging the fused data through a judgment module of the fusion neural network, and controlling or assisting the driving of the vehicle based on the judgment result.
In one example, after the data sent by the vehicle-mounted sensing device, the vehicle central control system, the Internet of Vehicles system and the intelligent wearable device is received, and before the data is fused through the control gate of the fusion neural network, the method further includes:
sending the data from the vehicle-mounted sensing device, the vehicle central control system, the Internet of Vehicles system and the intelligent wearable device to the corresponding data processing units respectively, so that each data processing unit preprocesses the data from its own source.
In one example, the receiving of data sent by the vehicle-mounted sensing device, the vehicle central control system, the internet of vehicles system and the intelligent wearable device respectively includes:
receiving driver video data sent by an image sensor in the vehicle-mounted sensing equipment;
performing feature extraction on the driver video data through a first feature extraction neural network to obtain action features and face features;
and sending the facial features to a facial recognition neural network, and sending the action features to an action recognition neural network to obtain first driving state data of the driver.
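As a rough illustration of this two-branch pipeline (shared feature extraction feeding separate face- and action-recognition networks), the sketch below uses trivial stand-in functions; the real networks are learned models, and every function name, the split of the feature vector, and the score semantics here are illustrative assumptions, not the patent's actual implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def extract_features(frame):
    # Hypothetical stand-in for the first feature-extraction network:
    # split a flattened frame vector into facial and action feature parts.
    mid = len(frame) // 2
    return frame[:mid], frame[mid:]

def face_recognition(face_features):
    # Placeholder face-recognition head: a single identity score in (0, 1).
    return sigmoid(sum(face_features))

def action_recognition(action_features):
    # Placeholder action-recognition head: e.g. a drowsiness score in (0, 1).
    return sigmoid(sum(action_features))

def first_driving_state(frame):
    # Route the shared features to the two recognition networks,
    # producing the "first driving state data" of the driver.
    face_feat, action_feat = extract_features(frame)
    return {
        "identity_score": face_recognition(face_feat),
        "drowsiness_score": action_recognition(action_feat),
    }

state = first_driving_state([0.2, -0.1, 0.4, 0.3])
```

The point of the structure is that one feature extractor is shared, while each recognition task gets its own downstream network.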
In one example, the receiving of data sent by the vehicle-mounted sensing device, the vehicle central control system, the internet of vehicles system and the intelligent wearable device respectively comprises:
receiving voice data sent by a sound sensor in the vehicle-mounted sensing equipment;
and after the voice data is subjected to feature extraction through the second feature extraction neural network, recognizing the voice instruction data of the driver through the voice recognition neural network.
In one example, the receiving of data sent by the vehicle-mounted sensing device, the vehicle central control system, the internet of vehicles system and the intelligent wearable device respectively includes:
receiving data sent by at least one of the following sensors in the intelligent wearable device, where the intelligent wearable device includes a gyroscope, a level sensor, an accelerometer, a heartbeat detection device and a blood pressure monitoring device.
In one example, the receiving of data sent by the vehicle-mounted sensing device, the vehicle central control system, the internet of vehicles system and the intelligent wearable device respectively includes:
receiving vehicle running data through the vehicle central control system, where the vehicle running data includes at least one of vehicle speed and steering angle;
and receiving data sent by the cloud and the roadside through the Internet of Vehicles system, so as to acquire data on road conditions and other vehicles.
In one example, the training method of the converged neural network comprises the following steps:
establishing a control gate of the converged neural network based on prior knowledge;
and training the fusion neural network on pre-acquired data, taking the time-sequence relation of the data into account.
A second aspect of the embodiments of the present invention provides a multi-modal safe driving device based on vehicle perception and intelligent wearables, including: at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment;
fusing the data through a control gate of a fusion neural network;
and judging the fused data through a judgment module of the fusion neural network, and controlling the driving of the vehicle based on the judgment result.
A third aspect of the embodiments of the present invention provides a multimodal safe driving system based on vehicle sensing and intelligent wearing, including:
the vehicle-mounted sensing equipment, which comprises external sensing devices and internal sensing devices, wherein the external sensing devices are used for collecting environmental data, and the internal sensing devices are used for collecting the driver's facial features and voice instructions;
the vehicle central control system is used for receiving vehicle running data;
the vehicle networking system is used for receiving data of a road base side and a cloud end;
the data computing center is used for respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment; and fusing the data through a fusion neural network, judging the fused data, and controlling or assisting the driving of the vehicle based on the judgment result.
In one example, the system further comprises:
an intelligent wearable device arranged on the vehicle, which the driver wears while driving and which monitors the driver's physical condition.
Beneficial effects:
The embodiment of the invention takes the vehicle-mounted data computing center (the fusion neural network) as its core and fuses multi-modal data such as cloud data, environment perception data from outside the vehicle, vehicle driving data, roadside and cooperative-driving data, the driver's emotion and behaviour data captured by an in-vehicle camera, collected voice data, and body-condition data gathered by the intelligent wearable device. Compared with the traditional approach of designing a multi-modal data fusion scheme by hand from experience, designing the data-fusion control gate and letting the neural network learn makes more reasonable use of the data, thereby ensuring driving safety. A long short-term memory network is adopted to account for the continuity and time-sequence relations of the data, and information such as wearable data and data detected by the roadside and by external sensors is added to the multi-modal data, improving the diversity and completeness of the data and hence the driver's safety. The neural network outputs a judgment result that generates various instructions to remind the driver or directly control the vehicle; the personalized scene mode is friendlier to the driver and likewise improves driving safety. In addition, the cloud continuously collects user data, so the personalization and accuracy of the model are continuously improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIG. 2 is a schematic flow chart provided by a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a system framework provided in accordance with an embodiment of the present invention;
fig. 4 is a schematic diagram of an apparatus framework according to an embodiment of the present invention.
Detailed Description
In order to more clearly explain the overall concept of the present application, the following detailed description is given by way of example in conjunction with the accompanying drawings.
Autonomous driving technology is developing rapidly: traditional vehicles are being retrofitted with core sensors such as high-definition cameras, lidar and high-precision positioning devices, and through real-time sensor data collection combined with high-precision maps, autonomous driving in simple environments has been achieved. Fully driverless operation, however, is still a long way off; most driving still depends on the driver's judgment and operation, so the driver's state must be monitored to avoid fatigue and improper operation. With the development of vehicle wireless communication technology (V2X), vehicle-road cooperation has become possible, providing more environmental data to support vehicle perception; at the same time, intelligent wearable devices are increasingly popular. Under these circumstances, how to effectively fuse the information from the external environment sensing devices, the internal sensing devices and the intelligent driver-sensing devices so as to improve the driver's safety has become a problem in urgent need of a solution.
The embodiment of the invention provides a multi-modal safe driving method based on vehicle perception and intelligent wearables. With the vehicle-mounted data computing center as its core, it combines environment perception data from outside the vehicle, vehicle driving-state data, V2X roadside and cooperative-driving data, the driver's emotion and behaviour data captured by an in-vehicle camera, collected voice data, and body-condition data collected by the intelligent wearable device; it fuses these data using deep learning and outputs a judgment result through a neural network to remind the driver or directly control the vehicle, thereby ensuring driving safety.
Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings.
According to a first aspect of the embodiments of the present invention, the present invention provides a multi-modal safe driving method based on vehicle sensing and intelligent wearing, and fig. 1 is a schematic flow chart of the method provided by the embodiments of the present invention, as shown in the figure, including:
s101, receiving data sent by vehicle-mounted sensing equipment, a vehicle central control system, a vehicle networking system and intelligent wearable equipment respectively;
s102, fusing the data through a control gate of a fusion neural network;
s103, the fused data are judged through a judgment module of the fused neural network, and the driving of the vehicle is controlled or assisted based on the judgment result.
In the embodiment of the present invention, step S101 receives data sent by the vehicle-mounted sensing device, the vehicle central control system, the Internet of Vehicles system and the intelligent wearable device. Here "data" is a general term: it includes, but is not limited to, the various data obtained by the above devices or systems; in other words, each kind of data obtained by those devices or systems is a subset of the data.
In some preferred embodiments of the present invention, after step S101 and before step S102, the following steps are further included:
the data sent by the vehicle-mounted sensing device, the vehicle central control system, the Internet of Vehicles system and the intelligent wearable device is sent to the corresponding data processing units respectively, so that each data processing unit preprocesses the data from its own source. Processing the different kinds of data through multiple data processing units avoids, as far as possible, errors caused by format differences among the data.
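A minimal sketch of this per-source dispatch follows: each "data processing unit" normalizes its own source's payload into one common record shape before anything reaches the fusion stage. The handler names, the record layout, and the scaling choices are all illustrative assumptions.

```python
# Each handler plays the role of one data processing unit (DM1, DM2, DM7...),
# converting a source-specific payload into a common record so that format
# differences between sources cannot leak into the fusion stage.

def dm_environment(payload):
    # DM1: external environment sensors deliver a raw list of readings.
    return {"source": "DS1", "features": [float(v) for v in payload]}

def dm_central_control(payload):
    # DM2: the central control system reports named driving quantities.
    return {"source": "DS2",
            "features": [float(payload["speed"]), float(payload["angle"])]}

def dm_wearable(payload):
    # DM7: wearable body data, scaled to a rough [0, 1] range (assumed max 200 bpm).
    return {"source": "DS7", "features": [float(payload["heart_rate"]) / 200.0]}

HANDLERS = {"DS1": dm_environment, "DS2": dm_central_control, "DS7": dm_wearable}

def preprocess(source_id, payload):
    # Dispatch raw data to the processing unit that matches its source.
    return HANDLERS[source_id](payload)

rec = preprocess("DS2", {"speed": 60.0, "angle": 5.0})
```

Keeping one handler per source means a format change in any single device only touches its own unit, not the fusion network.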
Fig. 2 is a schematic flow chart provided by a preferred embodiment of the present invention, and the method of the embodiment of the present invention is described in detail below with reference to fig. 2.
First, the data for training the neural network is prepared: massive data is collected through various channels, such as automobile manufacturers' databases, a data standard is defined, and the data is labeled;
the control gate Fuc of the fusion neural network and the judgment module Fuf of the fusion neural network form one large network. The control gate (the data fusion scheme) is designed according to prior knowledge, completely irrelevant data associations are pruned, and on this basis the fusion neural network is trained with the time-sequence relation of the data taken into account.
in some preferred embodiments of the present invention, the converged neural network is trained from a Long Short-Term Memory network (LSTM), and the implementation of the neural network control gate in the embodiments of the present invention is similar to the input gate of the Long Short-Term Memory network.
The cloud-trained model is then downloaded to the vehicle-mounted data computing center. The data computing center is a vehicle-side data center composed of computing, storage and network resources, providing edge-side services such as infrastructure and AI acceleration. Through this data computing center, the driving-related data sent by the different devices is received, specifically including:
the external sensing device of the vehicle environment sensing device collects environment data DS1, and inputs the environment data to a data Fusion module Fusion (trained Fusion neural network) of the data computing center through a vehicle CAN bus and a data processing unit DM 1.
Specifically, the external sensing device includes an external acquisition device such as a laser radar, a millimeter wave radar, or a camera, and the current driving condition can be determined from the data.
The vehicle central control system inputs the collected running condition data DS2 to a data Fusion module Fusion of a data calculation center through a vehicle CAN bus and a data processing unit DM 2.
Specifically, the vehicle central control system feeds back current running data DS2 of the vehicle, including data such as vehicle speed, turning angle and accelerator, to the data calculation center through a vehicle CAN bus.
The Internet of Vehicles system receives data DS3 from the cloud data center, which is input to the data fusion module Fusion through the data processing unit DM3; it also receives data DS4 from the V2X roadside and from other vehicles, which is input to the data fusion module Fusion through the data processing unit DM4. Preferably, the Internet of Vehicles system is realized through a vehicle Telematics BOX (vehicle T-BOX for short).
Among the internal sensing devices of the vehicle environment sensing equipment, the in-vehicle camera acquires driver video DS5, which is input to the data fusion module Fusion through the feature-extraction neural network Ef1, the face-recognition neural network Sr and the action-recognition neural network Mr, and then through the data processing unit DM5. The voice input device among the internal sensing devices collects voice data DS6, which is analysed through the feature-extraction neural network Ef2 and the voice-recognition neural network Ar and input to the data fusion module Fusion through the data processing unit DM6.
The intelligent wearable device collects the driver's physical-condition and action information DS7, which is input to the data fusion module Fusion through the data processing unit DM7; this information includes data generated by the gyroscope, level sensor, accelerometer, heartbeat detection and blood-pressure monitoring in the intelligent wearable device.
The data fusion module Fusion receives the data. As mentioned above, its main body is a long short-term memory neural network comprising two sub-networks: the neural-network control gate Fuc and the neural-network fusion judgment Fuf. The control gate Fuc receives the data structured by the data processing units DMi and decides how the data is fused; the fusion judgment Fuf determines an output result from the fused data, including instructions such as warning, braking and deceleration, and then hands the result to the vehicle execution module Em, which executes the corresponding instruction.
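The judgment-then-execution handoff can be sketched as follows. The patent names warning, deceleration and braking instructions but does not specify how they are selected; the single fused risk score, the thresholds, and the instruction names below are all illustrative assumptions standing in for the Fuf sub-network and the Em module.

```python
def fusion_judgment(fused_risk):
    # Stand-in for the fusion judgment sub-network Fuf: map a fused risk
    # score in [0, 1] to an output instruction. Thresholds are hypothetical.
    if fused_risk >= 0.9:
        return "brake"
    if fused_risk >= 0.6:
        return "decelerate"
    if fused_risk >= 0.3:
        return "warn"
    return "no_action"

def execute(instruction):
    # Stand-in for the vehicle execution module Em, which carries out
    # the instruction produced by the judgment module.
    return "Em executing: " + instruction

result = execute(fusion_judgment(0.75))
```

In the actual system Fuf is a learned network producing these instructions from the fused LSTM state, not a fixed threshold rule.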
The cloud continuously collects information on how the model performs in use, forms a model targeted at the individual user, and continuously optimizes the model in the cloud.
Fig. 3 is a schematic diagram of a system framework provided by an embodiment of the present invention, and as shown in fig. 3, the multimodal safe driving system based on vehicle sensing and smart wearing includes:
the vehicle-mounted sensing equipment, which comprises external sensing devices and internal sensing devices, wherein the external sensing devices are used for collecting environmental data and can be environmental sensors and the like, and the internal sensing devices are used for collecting the driver's facial features and voice instructions and include an in-vehicle camera, an in-vehicle microphone and the like;
the vehicle central control system is used for receiving vehicle running data;
the vehicle networking system is used for receiving data of a road base side and a cloud end;
the data computing center is used for respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment; and fusing the data through a fusion neural network, judging the fused data, and controlling or assisting the driving of the vehicle based on the judgment result.
According to the specific embodiment of the invention, the system further comprises an intelligent wearable device arranged on the vehicle, and the intelligent wearable device can be realized through a sensor arranged on the safety belt or other intelligent wearable devices physically connected with the vehicle.
Fig. 4 is a schematic structural diagram of a multi-modal safe driving device based on vehicle sensing and smart wearing, corresponding to fig. 1, provided in an embodiment of the present application, where the device includes:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to cause the at least one processor to:
respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment;
fusing the data through a control gate of a fusion neural network;
and judging the fused data through a judging module of the fused neural network, and controlling or assisting the driving of the vehicle based on a judgment result.
The embodiments in the present application are described in a progressive manner; the same or similar parts among the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the device and media embodiments are substantially similar to the method embodiments, their description is relatively brief, and for relevant points reference may be made to the corresponding parts of the method embodiments.
The device and the system provided by the embodiments of the application correspond one to one with the method, so they have beneficial technical effects similar to those of the corresponding method. Since the beneficial technical effects of the method have been explained in detail above, they are not repeated here for the device and the system.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (4)

1. A multi-modal safe driving method based on vehicle perception and intelligent wearing is characterized by comprising the following steps:
respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment;
fusing the data through a control gate of a fusion neural network;
judging the fused data through a judgment module of the fusion neural network, and controlling or assisting the driving of the vehicle based on a judgment result;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving driver video data sent by an image sensor in the vehicle-mounted sensing equipment;
performing feature extraction on the driver video data through a first feature extraction neural network to obtain action features and face features;
sending the facial features to a facial recognition neural network, and sending the action features to an action recognition neural network to obtain first driving state data of a driver;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving voice data sent by a sound sensor in the vehicle-mounted sensing equipment;
after performing feature extraction on the voice data through a second feature extraction neural network, recognizing voice instruction data of the driver through a voice recognition neural network;
wherein after respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment, and before fusing the data through the control gate of the fusion neural network, the method further comprises:
respectively sending the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment to corresponding data processing units, so that the corresponding data processing units respectively preprocess the data from the plurality of sources;
the training method of the fusion neural network comprises the following steps:
establishing a control gate of the fusion neural network based on prior knowledge;
training the fusion neural network with pre-acquired data based on the time-sequence relation of the data;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving data sent by at least one of the following devices in the intelligent wearable equipment, wherein the intelligent wearable equipment comprises a gyroscope, a level meter, an accelerometer, a heartbeat detection device and a blood pressure monitoring device;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving vehicle running data through a vehicle central control system, wherein the vehicle running data at least comprises one of vehicle speed and turning angle;
and receiving data sent by the cloud and the roadside through the vehicle networking system, so as to acquire data of road conditions and other vehicles.
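Claim 1 does not disclose the internal structure of the fusion neural network. Purely as an illustrative sketch — all class names, feature dimensions and weights below are invented placeholders (random, not trained values), not the patent's disclosed implementation — a control gate that weights each data source before a judgment module scores the fused vector could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class GatedFusion:
    """Illustrative control-gate fusion over per-source feature vectors."""

    def __init__(self, dims, common=8, n_states=3):
        # one projection per data source into a shared feature space
        self.proj = [rng.normal(size=(d, common)) for d in dims]
        # control gate: maps the concatenated features to one weight per source
        self.gate_w = rng.normal(size=(common * len(dims), len(dims)))
        # judgment module: scores the fused vector into driving-state probabilities
        self.judge_w = rng.normal(size=(common, n_states))

    def forward(self, features):
        h = [f @ W for f, W in zip(features, self.proj)]   # project each modality
        gates = softmax(np.concatenate(h) @ self.gate_w)   # control gate weights
        fused = sum(g * hi for g, hi in zip(gates, h))     # gated sum = fused vector
        return softmax(fused @ self.judge_w)               # judgment result

# hypothetical feature sizes: video, vehicle bus, vehicle networking, wearable
dims = [16, 4, 6, 5]
fusion = GatedFusion(dims)
probs = fusion.forward([rng.normal(size=d) for d in dims])
```

The gate emits one scalar weight per source, so an unreliable modality (e.g. noisy wearable readings) can be down-weighted before judgment; this is one common reading of a "control gate", not necessarily the patent's.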
2. A multi-modal safe driving device based on vehicle perception and intelligent wearing, characterized by comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment;
fusing the data through a control gate of a fusion neural network;
judging the fused data through a judgment module of the fusion neural network, and controlling the driving of the vehicle based on a judgment result;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving driver video data sent by an image sensor in the vehicle-mounted sensing equipment;
performing feature extraction on the driver video data through a first feature extraction neural network to obtain action features and face features;
sending the facial features to a facial recognition neural network, and sending the action features to an action recognition neural network to obtain first driving state data of a driver;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving voice data sent by a sound sensor in the vehicle-mounted sensing equipment;
after performing feature extraction on the voice data through a second feature extraction neural network, recognizing voice instruction data of the driver through a voice recognition neural network;
wherein after respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment, and before fusing the data through the control gate of the fusion neural network, the operations further comprise:
respectively sending the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment to corresponding data processing units, so that the corresponding data processing units respectively preprocess the data from the plurality of sources;
the training method of the fusion neural network comprises the following steps:
establishing a control gate of the fusion neural network based on prior knowledge;
training the fusion neural network with pre-acquired data based on the time-sequence relation of the data;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving data sent by at least one of the following devices in the intelligent wearable equipment, wherein the intelligent wearable equipment comprises a gyroscope, a level meter, an accelerometer, a heartbeat detection device and a blood pressure monitoring device;
wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises:
receiving vehicle running data through a vehicle central control system, wherein the vehicle running data at least comprises one of vehicle speed and turning angle;
and receiving data sent by the cloud and the roadside through the vehicle networking system, so as to acquire data of road conditions and other vehicles.
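Claims 1 and 2 require training the fusion neural network with pre-acquired data "based on the time-sequence relation of the data". The patent does not disclose how that relation is enforced; one assumed interpretation, shown only as a sketch with synthetic stand-in data, is to build time-ordered training windows and split them chronologically so training never sees future samples:

```python
import numpy as np

def temporal_windows(series: np.ndarray, width: int) -> np.ndarray:
    """Build overlapping, time-ordered windows over a (T, D) feature series."""
    return np.stack([series[i:i + width] for i in range(len(series) - width + 1)])

# 10 time steps of 4-dimensional fused features (synthetic stand-in data)
data = np.arange(40, dtype=float).reshape(10, 4)
windows = temporal_windows(data, width=3)   # shape (8, 3, 4)

# chronological split: earlier windows for training, later ones for validation,
# preserving the time-sequence relation instead of shuffling across time
split = int(0.75 * len(windows))
train, val = windows[:split], windows[split:]
```

Each window could then be fed to a sequence model; the window width and split ratio here are arbitrary example values.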
3. A multi-modal safe driving system based on vehicle perception and intelligent wearing, characterized by comprising:
the vehicle-mounted sensing equipment comprises external sensing equipment and internal sensing equipment, wherein the vehicle-mounted external sensing equipment is used for acquiring environmental data, and the internal sensing equipment is used for acquiring facial features and voice instructions of a driver;
the vehicle central control system is used for receiving vehicle running data;
the vehicle networking system is used for receiving data from the roadside and the cloud;
the data computing center is used for respectively receiving data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment; fusing the data through a fusion neural network, judging the fused data, and controlling or assisting the driving of the vehicle based on a judgment result; wherein respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment comprises: receiving driver video data sent by an image sensor in the vehicle-mounted sensing equipment; performing feature extraction on the driver video data through a first feature extraction neural network to obtain action features and facial features; sending the facial features to a facial recognition neural network, and sending the action features to an action recognition neural network, so as to obtain first driving state data of the driver; receiving voice data sent by a sound sensor in the vehicle-mounted sensing equipment; and after performing feature extraction on the voice data through a second feature extraction neural network, recognizing voice instruction data of the driver through a voice recognition neural network; wherein after respectively receiving the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment, and before fusing the data through the control gate of the fusion neural network, the system further: respectively sends the data sent by the vehicle-mounted sensing equipment, the vehicle central control system, the vehicle networking system and the intelligent wearable equipment to corresponding data processing units, so that the corresponding data processing units respectively preprocess the data from the plurality of sources; wherein the training method of the fusion neural network comprises: establishing a control gate of the fusion neural network based on prior knowledge; and training the fusion neural network with pre-acquired data based on the time-sequence relation of the data; wherein receiving the data sent by the intelligent wearable equipment comprises receiving data sent by at least one of the following devices in the intelligent wearable equipment: a gyroscope, a level meter, an accelerometer, a heartbeat detection device and a blood pressure monitoring device; wherein receiving the data sent by the vehicle central control system comprises receiving vehicle running data, the vehicle running data comprising at least one of a vehicle speed and a turning angle; and receiving data sent by the cloud and the roadside through the vehicle networking system, so as to acquire data of road conditions and other vehicles.
4. The system of claim 3, further comprising:
an intelligent wearable device arranged in the vehicle, wherein the intelligent wearable device is worn by the driver while driving to detect the driver's physical condition.
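The claims repeatedly require that each source's raw data be routed to its own data processing unit for preprocessing before fusion. A minimal illustrative dispatch is sketched below; every function name, field name and scaling constant is invented for the example, since the patent does not specify any preprocessing details:

```python
from typing import Callable, Dict, List

def preprocess_video(frames: List[float]) -> List[float]:
    # placeholder: scale raw frame statistics into [0, 1]
    m = max(frames) or 1.0
    return [f / m for f in frames]

def preprocess_central_control(record: Dict[str, float]) -> List[float]:
    # vehicle running data: speed and turning angle, per the claims
    return [record.get("speed_kmh", 0.0) / 120.0, record.get("turn_deg", 0.0) / 90.0]

def preprocess_wearable(vitals: Dict[str, float]) -> List[float]:
    # heartbeat and blood-pressure readings from the wearable device
    return [vitals.get("heart_rate", 0.0) / 200.0, vitals.get("systolic", 0.0) / 180.0]

# one preprocessing unit per data source
UNITS: Dict[str, Callable] = {
    "vehicle_sensing": preprocess_video,
    "central_control": preprocess_central_control,
    "wearable": preprocess_wearable,
}

def dispatch(batch: Dict[str, object]) -> Dict[str, List[float]]:
    """Route each source's raw data to its own preprocessing unit."""
    return {src: UNITS[src](raw) for src, raw in batch.items() if src in UNITS}

features = dispatch({
    "vehicle_sensing": [10.0, 20.0, 40.0],
    "central_control": {"speed_kmh": 60.0, "turn_deg": 9.0},
    "wearable": {"heart_rate": 80.0, "systolic": 120.0},
})
```

The normalized per-source outputs would then be the inputs to the fusion network's control gate.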
CN202010361618.6A 2020-04-30 2020-04-30 Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing Active CN111652065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010361618.6A CN111652065B (en) 2020-04-30 2020-04-30 Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing


Publications (2)

Publication Number Publication Date
CN111652065A CN111652065A (en) 2020-09-11
CN111652065B true CN111652065B (en) 2022-11-15

Family

ID=72345981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010361618.6A Active CN111652065B (en) 2020-04-30 2020-04-30 Multi-mode safe driving method, equipment and system based on vehicle perception and intelligent wearing

Country Status (1)

Country Link
CN (1) CN111652065B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113442950B (en) * 2021-08-31 2021-11-23 国汽智控(北京)科技有限公司 Automatic driving control method, device and equipment based on multiple vehicles

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10471963B2 (en) * 2017-04-07 2019-11-12 TuSimple System and method for transitioning between an autonomous and manual driving mode based on detection of a drivers capacity to control a vehicle
CN108407808A (en) * 2018-04-23 2018-08-17 安徽车鑫保汽车销售有限公司 A kind of running car intelligent predicting system
CN110736460B (en) * 2018-07-19 2023-08-04 博泰车联网科技(上海)股份有限公司 Position fusion method and system based on neural network and vehicle-mounted terminal
CN110329268B (en) * 2019-03-22 2021-04-06 中国人民财产保险股份有限公司 Driving behavior data processing method, device, storage medium and system

Also Published As

Publication number Publication date
CN111652065A (en) 2020-09-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220914

Address after: 250101 building S02, 1036 Chaochao Road, high tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Inspur Scientific Research Institute Co.,Ltd.

Address before: Floor 6, Chaochao Road, Shandong Province

Applicant before: JINAN INSPUR HIGH-TECH TECHNOLOGY DEVELOPMENT Co.,Ltd.

GR01 Patent grant