CN115512511A - Early warning method, early warning device, mobile terminal and readable storage medium - Google Patents

Info

Publication number
CN115512511A
Authority
CN
China
Prior art keywords
information, driver, driving, vehicle, road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110631725.0A
Other languages
Chinese (zh)
Inventor
谢仕云
阮祥兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile IoT Co Ltd
Priority to CN202110631725.0A
Publication of CN115512511A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 - Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0208 - Combination with audio or video communication, e.g. combination with "baby phone" function
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0205 - Specific application combined with child monitoring using a transmitter-receiver system
    • G08B21/0211 - Combination with medical sensor, e.g. for measuring heart rate, temperature
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/0233 - System arrangements with pre-alarms, e.g. when a first distance is exceeded
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 - Alarms for ensuring the safety of persons
    • G08B21/0202 - Child monitoring systems using a transmitter-receiver system carried by the parent and the child
    • G08B21/028 - Communication between parent and child units via remote transmission means, e.g. satellite network
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G08G1/0129 - Traffic data processing for creating historical data or processing based on historical data
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/048 - Detecting movement of traffic to be counted or controlled with provision for compensation of environmental or other condition, e.g. snow, vehicle stopped at detector
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/052 - Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed

Abstract

The invention provides an early warning method, an early warning device, a mobile terminal and a readable storage medium, relating to the technical field of intelligent early warning for safe driving. The method comprises the following steps: receiving fusion perception information of a target vehicle sent by a vehicle-mounted terminal, wherein the fusion perception information is information related to the vehicle and/or the driver; and, according to the fusion perception information, sending early warning information to the vehicle-mounted terminal and/or roadside equipment using a driving model corresponding to the driver currently driving the target vehicle, wherein the driving model is established from the driver's historical fusion perception information. The scheme of the invention solves the problem in the prior art that real driving data and driver state data are difficult to acquire, which leaves potential safety hazards in vehicle driving.

Description

Early warning method, early warning device, mobile terminal and readable storage medium
Technical Field
The invention relates to the technical field of intelligent early warning of safe driving, in particular to an early warning method, an early warning device, a mobile terminal and a readable storage medium.
Background
With the widespread adoption of automobiles, driving safety has always been a major concern. Driver fatigue and emotional driving are among the most common causes of traffic safety accidents.
Current driver state detection and early warning methods mainly rely on an Internet of Vehicles terminal collecting the vehicle's real-time track, a camera collecting the driver's facial image information, and wearable devices detecting the driver's physiological state information.
However, the methods and systems of the prior art suffer from several significant disadvantages:
First, because several drivers may share one vehicle, the vehicle's driving data cannot reflect the real state of the current driver; the prior art ignores the matching between the vehicle and the actual driver, whereas the data relationship should be established from the driver's perspective.
Second, the driver's physiological characteristics fluctuate across different time periods, and different driving environment factors interfere with driving behavior. The prior art establishes threshold characteristics from the driver's physiological characteristics or driving behavior alone, ignoring the influence of time and environmental factors.
Third, real-time warning computation and notification transmission incur large delays, so the timeliness of the early warning cannot meet requirements.
In conclusion, existing driving early warning methods lack real driving data and driver state data, so the early warning is inaccurate, creating potential safety hazards for vehicle driving.
Disclosure of Invention
The object of the invention is to provide an early warning method, an early warning device, a mobile terminal and a readable storage medium, so as to solve the problem in the prior art that real driving data and driver state data are difficult to acquire, which leaves potential safety hazards in vehicle driving.
In order to achieve the above object, an embodiment of the present invention provides an early warning method applied to an edge server, including:
receiving fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
according to the fusion perception information, early warning information is sent to the vehicle-mounted terminal and/or the road side equipment by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
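The two steps above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the names (FusedPerception, DrivingModel, handle_perception) and the toy per-driver thresholds are assumptions introduced purely to show the shape of the edge-server flow.

```python
from dataclasses import dataclass

@dataclass
class FusedPerception:
    """One fused-perception record from the vehicle-mounted terminal (toy subset)."""
    driver_id: str
    speed_kmh: float
    heart_rate_bpm: float

class DrivingModel:
    """Toy per-driver model built from that driver's historical fused data."""
    def __init__(self, typical_speed_kmh, typical_heart_rate_bpm):
        self.typical_speed = typical_speed_kmh
        self.typical_hr = typical_heart_rate_bpm

    def assess(self, p):
        # Flag large deviations from this driver's own historical baseline.
        if p.speed_kmh > 1.5 * self.typical_speed or p.heart_rate_bpm > 1.4 * self.typical_hr:
            return "high_risk"
        if p.speed_kmh > 1.2 * self.typical_speed:
            return "low_risk"
        return "safe"

def handle_perception(models, p):
    """Return a warning for the terminal/roadside unit, or None when safe."""
    state = models[p.driver_id].assess(p)
    if state in ("high_risk", "low_risk"):
        return "warning:" + state
    return None
```

Because the model is keyed by driver rather than by vehicle, the same vehicle driven by two different drivers is judged against two different baselines, which is the core point of the claim.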
Optionally, the fusion perception information comprises at least one of:
vehicle travel information; the vehicle running information comprises at least one item of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, alarm information and vehicle Controller Area Network (CAN) information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the fusion perception information includes facial image information of the driver;
before sending early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information, the early warning method further comprises the following steps:
acquiring the identity identification information of the driver according to the face image information;
and acquiring a driving model corresponding to the identity information of the driver.
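A minimal sketch of this identification-and-lookup step, assuming the facial images have already been reduced to embedding vectors by some upstream face recognizer; the cosine-similarity matching and the 0.9 threshold are illustrative assumptions, not details given by the patent.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_driver(embedding, enrolled, threshold=0.9):
    """Return the enrolled driver ID whose reference embedding is most
    similar to the observed face, or None if no match clears the threshold."""
    best_id, best_sim = None, threshold
    for driver_id, ref in enrolled.items():
        sim = cosine_similarity(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = driver_id, sim
    return best_id

def model_for_face(embedding, enrolled, models):
    """Identity step, then model-lookup step, as the two claims describe."""
    driver_id = identify_driver(embedding, enrolled)
    return models.get(driver_id) if driver_id else None
```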
Optionally, the sending, according to the fused sensing information, early warning information to the vehicle-mounted terminal and/or the roadside device by using a driving model corresponding to a driver currently driving the target vehicle includes:
acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
Optionally, after receiving the fusion perception information of the target vehicle sent by the vehicle-mounted terminal, the early warning method further includes:
sending the fusion perception information to a central cloud server;
and receiving a driving model updated by the central cloud server according to the fusion perception information.
In order to achieve the above object, an embodiment of the present invention provides an early warning method, which is applied to a central cloud server, and includes:
acquiring historical fusion perception information of a target vehicle; wherein the history fused perception information is information related to a vehicle and/or a driver;
determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
sending the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, the determining, according to the history fusion perception information, a driving model corresponding to a driver currently driving the target vehicle includes:
obtaining characteristic information corresponding to the target vehicle according to the historical fusion perception information;
according to the characteristic information, establishing a fusion perception information database of a driver currently driving the target vehicle;
determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
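The per-driver database described above can be sketched as a grouping of feature rows keyed by driver identity; the field names below are illustrative assumptions matching the categories the claim lists, not a schema from the patent.

```python
from collections import defaultdict

def build_feature_database(records):
    """Group feature rows by driver so each driver accumulates their own
    history. Each record is a dict carrying driver_id plus the categorical
    features named in the claim (time period, weather, road condition,
    driving emotion, physiology)."""
    db = defaultdict(list)
    for r in records:
        row = {
            "time_period": r.get("time_period"),        # e.g. "morning"
            "weather": r.get("weather"),                # e.g. "rain"
            "road_condition": r.get("road_condition"),  # e.g. "ordinary"
            "emotion": r.get("emotion"),                # e.g. "good"
            "physiology": r.get("physiology"),          # e.g. "general"
        }
        db[r["driver_id"]].append(row)
    return dict(db)
```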
Optionally, the history fused perception information comprises at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
and the road grade comprises at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, when the history fused perception information includes vehicle driving information, the obtaining, according to the history fused perception information, feature information corresponding to the target vehicle includes:
and determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle driving information.
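Deriving the driving time-period feature from the collection timestamp can be sketched as a simple bucketing of the hour of day; the period names and boundaries below are illustrative assumptions, since the patent does not define them.

```python
from datetime import datetime

def time_period_feature(ts):
    """Bucket a collection timestamp into a driving time-period feature."""
    h = ts.hour
    if 6 <= h < 12:
        return "morning"
    if 12 <= h < 18:
        return "afternoon"
    if 18 <= h < 23:
        return "evening"
    return "late_night"  # the period typically associated with fatigue risk
```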
Optionally, when the history fused perception information includes road environment information, the obtaining, according to the history fused perception information, feature information corresponding to the target vehicle includes:
determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snowy, rainy, sunny and foggy conditions;
the road condition characteristic information includes at least one of good, ordinary, and bad.
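Mapping raw roadside measurements onto the good/ordinary/bad road-condition categories above might look like the following; the choice of a road-surface friction coefficient as input, and the bucket boundaries, are assumptions for illustration only.

```python
def road_condition_feature(friction_coefficient):
    """Bucket a measured road-surface friction coefficient into the
    categorical road-condition feature (good / ordinary / bad)."""
    if friction_coefficient >= 0.7:
        return "good"      # dry asphalt territory
    if friction_coefficient >= 0.4:
        return "ordinary"  # wet or worn surface
    return "bad"           # snow, ice, or mud
```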
Optionally, when the history fused perception information includes face image information, the obtaining, according to the history fused perception information, feature information corresponding to the target vehicle includes:
obtaining driving emotion characteristic information corresponding to the face image information according to the face image information;
wherein the driving emotional characteristic information includes at least one of good, general, and bad.
Optionally, when the historical fused sensing information includes physiological state information, the obtaining, according to the historical fused sensing information, feature information corresponding to the target vehicle includes:
according to the acquisition time period of the physiological state information, determining physiological characteristic information of the driver corresponding to the acquisition time period;
wherein the driver physiological characteristic information includes at least one of good, general, and poor.
Optionally, the determining, according to the fusion perception information database, a driving model corresponding to a driver currently driving the target vehicle includes:
establishing an initial driving model according to the characteristic information in the fusion perception information database, and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and training the initial driving model by using the characteristic information to obtain the driving model.
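The patent names a Bayesian optimization algorithm for tuning the model parameters. To keep the sketch below self-contained, random search stands in for the optimizer and a one-parameter threshold classifier over a toy risk score stands in for the driving model; a real implementation would instead fit a surrogate (e.g. a Gaussian process) over the hyper-parameter space and pick candidates via an acquisition function.

```python
import random

def train_threshold(samples, threshold):
    """Fraction of (score, is_risky) samples classified correctly by
    predicting 'risky' whenever score > threshold."""
    correct = sum((score > threshold) == label for score, label in samples)
    return correct / len(samples)

def tune(samples, n_trials=50, seed=0):
    """Stand-in for the Bayesian-optimization step: search the threshold
    hyper-parameter and keep the best-performing candidate."""
    rng = random.Random(seed)
    best_t, best_acc = None, -1.0
    for _ in range(n_trials):
        t = rng.uniform(0.0, 1.0)          # candidate hyper-parameter
        acc = train_threshold(samples, t)  # evaluate the objective
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```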
Optionally, the early warning method further includes:
receiving fusion perception information sent by the edge server;
and updating the driving model according to the fusion perception information.
In order to achieve the above object, an embodiment of the present invention provides an early warning method applied to a vehicle-mounted terminal, including:
acquiring fusion perception information of a target vehicle where the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
sending the fusion perception information to an edge server; the fusion perception information is used for the central cloud server to establish a driving model, and the driving model is used for predicting the driving state of a driver;
receiving early warning information sent by the edge server according to the driving model;
and carrying out driving safety prompt on the driver of the target vehicle according to the early warning information.
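One cycle of the vehicle-mounted terminal described above can be sketched as: gather the four information sources, ship the fused record to the edge server, and surface any returned warning to the driver. Transport details (protocol, message format) and every function name here are assumptions; the sources are injected as callables so the sketch stays self-contained.

```python
def collect_fused_perception(can_bus, roadside, camera, wearable):
    """Assemble one fused-perception record from the four sources the
    claim lists: vehicle data, road environment, face image, physiology."""
    return {
        "vehicle": can_bus(),      # position, speed, rpm, CAN data
        "road": roadside(),        # weather and road information from RSU
        "face": camera(),          # driver facial image (bytes/array)
        "physiology": wearable(),  # heart rate, blood pressure, SpO2
    }

def terminal_cycle(sources, send_to_edge, notify_driver):
    """One acquire -> upload -> (maybe) prompt cycle of the terminal."""
    record = collect_fused_perception(**sources)
    warning = send_to_edge(record)  # edge replies with a warning or None
    if warning is not None:
        notify_driver(warning)      # e.g. an audible driving-safety prompt
    return record
```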
Optionally, the obtaining of the fusion perception information of the target vehicle where the vehicle-mounted terminal is located includes at least one of the following:
collecting vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
receiving road environment information of a current running road of the target vehicle, which is sent by road side equipment; wherein the road environment information comprises road weather information and/or road information;
acquiring the facial image information of the driver through a camera of the target vehicle;
acquiring physiological state information of the driver through wearable equipment worn by the driver; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
To achieve the above object, an embodiment of the present invention provides an edge server, including a processor and a transceiver, wherein,
the transceiver is used for receiving fusion perception information of a target vehicle sent by the vehicle-mounted terminal; wherein the fusion awareness information is information related to a vehicle and/or a driver;
the processor is used for sending early warning information to the vehicle-mounted terminal and/or the road side equipment by utilizing a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information; wherein the driving model is established according to historical fusion perception information of the driver.
Optionally, the fusion awareness information includes at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
Optionally, the fusion perception information includes facial image information of the driver;
before sending early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information, the processor is further configured to:
acquiring the identity identification information of the driver according to the face image information;
and acquiring a driving model corresponding to the identity identification information of the driver.
Optionally, when the processor sends the warning information to the vehicle-mounted terminal and/or the roadside device by using the driving model corresponding to the driver currently driving the target vehicle according to the fusion perception information, the processor is further specifically configured to:
acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
Optionally, the transceiver is further configured to:
sending the fusion perception information to a central cloud server;
and receiving a driving model updated by the central cloud server according to the fusion perception information.
To achieve the above object, an embodiment of the present invention provides a central cloud server, including a processor and a transceiver, wherein,
the processor is used for acquiring historical fusion perception information of the target vehicle; wherein the historical fused awareness information is information relating to a vehicle and/or a driver;
the processor is further used for determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
the transceiver is configured to send the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, when the processor is configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle, the processor is specifically configured to:
obtaining characteristic information corresponding to the target vehicle according to the historical fusion perception information;
according to the characteristic information, establishing a fusion perception information database of a driver currently driving the target vehicle;
determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
Optionally, the historical fused perception information includes at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
the road grade comprises at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, when the historical fused perception information includes vehicle driving information, the processor is specifically configured to, when being configured to obtain the feature information corresponding to the target vehicle according to the historical fused perception information:
and determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle running information.
Optionally, when the history fused perception information includes road environment information, the processor is specifically configured to, when being configured to obtain feature information corresponding to the target vehicle according to the history fused perception information:
determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snowy, rainy, sunny and foggy conditions;
the road condition characteristic information includes at least one of good, ordinary and bad.
Optionally, when the history fused sensing information includes face image information, the processor is specifically configured to, when being configured to obtain feature information corresponding to the target vehicle according to the history fused sensing information:
obtaining driving emotion feature information corresponding to the face image information according to the face image information;
wherein the driving emotional characteristic information includes at least one of good, general, and bad.
Optionally, when the historical fused sensing information includes physiological state information, the processor is specifically configured to, when being configured to obtain feature information corresponding to the target vehicle according to the historical fused sensing information:
according to the acquisition time period of the physiological state information, determining physiological characteristic information of the driver corresponding to the acquisition time period;
wherein the driver physiological characteristic information includes at least one of good, general, and poor.
Optionally, when the processor is configured to determine, according to the fusion perception information database, a driving model corresponding to a driver currently driving the target vehicle, the processor is specifically configured to:
establishing an initial driving model according to the characteristic information in the fusion perception information database, and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and training the initial driving model by using the characteristic information to obtain the driving model.
Optionally, the transceiver is further configured to receive fusion awareness information sent by the edge server;
the processor is further configured to update the driving model according to the fusion perception information.
To achieve the above object, an embodiment of the present invention provides a vehicle-mounted terminal, which includes a processor and a transceiver, wherein,
the processor is used for acquiring fusion perception information of a target vehicle where the vehicle-mounted terminal is located; wherein the fusion awareness information is information related to a vehicle and/or a driver;
the transceiver is used for sending the fusion perception information to an edge server; the fusion perception information is used for the central cloud server to establish a driving model, and the driving model is used for predicting the driving state of a driver;
the transceiver is also used for receiving early warning information sent by the edge server according to the driving model;
and the processor is also used for carrying out driving safety prompt on the driver of the target vehicle according to the early warning information.
Optionally, when acquiring the fusion perception information of the target vehicle in which the vehicle-mounted terminal is located, the processor is specifically configured to:
collecting vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
receiving road environment information of a current running road of the target vehicle, which is sent by road side equipment; wherein the road environment information comprises road weather information and/or road information;
acquiring the facial image information of the driver through a camera of the target vehicle;
acquiring physiological state information of the driver through wearable equipment worn by the driver; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
In order to achieve the above object, an embodiment of the present invention provides an early warning apparatus, applied to an edge server, including:
the first receiving module is used for receiving fusion perception information of a target vehicle, which is sent by the vehicle-mounted terminal; wherein the fusion awareness information is information related to a vehicle and/or a driver;
the first processing module is used for sending early warning information to the vehicle-mounted terminal and/or the road side equipment by utilizing a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information; wherein the driving model is established according to historical fusion perception information of the driver.
Optionally, the fusion perception information includes at least one of:
vehicle running information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, engine speed information, four-harsh operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the fusion perception information includes face image information of the driver; the early warning apparatus further includes:
the fourth processing module is used for acquiring the identity identification information of the driver according to the face image information;
and the model acquisition module is used for acquiring the driving model corresponding to the identity identification information of the driver.
Optionally, the first processing module includes:
the first processing unit is used for acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and the second processing unit is used for sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
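The decision rule of the first and second processing units can be sketched as a small function. The routing policy below (notifying the roadside unit only for the high risk state) is an assumption added for illustration; the patent says only that warnings go to the vehicle-mounted terminal and/or the roadside equipment.

```python
# Minimal sketch of the warning-dispatch rule, assuming the driving model
# outputs one of the three states named in the claims. The policy of which
# targets receive the warning is an illustrative assumption.

HIGH_RISK, LOW_RISK, SAFE = "high_risk", "low_risk", "safe"

def dispatch_warning(driving_state: str) -> list:
    """Return the targets that should receive early warning information."""
    if driving_state in (HIGH_RISK, LOW_RISK):
        targets = ["vehicle_terminal"]       # always alert the driver
        if driving_state == HIGH_RISK:
            targets.append("roadside_unit")  # also warn nearby vehicles
        return targets
    return []  # safe state: no warning is sent
```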
Optionally, the early warning device further includes:
the third sending module is used for sending the fusion perception information to a central cloud server;
and the third receiving module is used for receiving the driving model updated by the central cloud server according to the fusion perception information.
In order to achieve the above object, an embodiment of the present invention provides an early warning device, which is applied to a central cloud server, and includes:
the first acquisition module is used for acquiring historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
the second processing module is used for determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
the first sending module is used for sending the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, the second processing module includes:
the third processing unit is used for acquiring the characteristic information corresponding to the target vehicle according to the historical fusion perception information;
the fourth processing unit is used for establishing a fusion perception information database of a driver currently driving the target vehicle according to the characteristic information;
the fifth processing unit is used for determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
Optionally, the historical fusion perception information comprises at least one of:
vehicle running information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, engine speed information, four-harsh operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, the third processing unit includes:
the first processing subunit is used for determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle running information.
Optionally, the third processing unit includes:
the second processing subunit is used for determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snow, rain, sunny and foggy days;
the road condition characteristic information includes at least one of good, ordinary, and poor.
Optionally, the third processing unit includes:
the third processing subunit is used for obtaining driving emotion characteristic information corresponding to the face image information according to the face image information;
wherein the driving emotion characteristic information includes at least one of good, ordinary, and poor.
Optionally, the third processing unit includes:
the fourth processing subunit is used for determining the physiological characteristic information of the driver corresponding to the acquisition time period according to the acquisition time period of the physiological state information;
wherein the driver physiological characteristic information includes at least one of good, ordinary, and poor.
Optionally, the fifth processing unit includes:
the fifth processing subunit is used for establishing an initial driving model according to the characteristic information in the fusion perception information database and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and the sixth processing subunit is configured to train the initial driving model by using the feature information, so as to obtain the driving model.
Optionally, the early warning device further includes:
the fourth receiving module is used for receiving the fusion perception information sent by the edge server;
and the model updating module is used for updating the driving model according to the fusion perception information.
In order to achieve the above object, an embodiment of the present invention provides an early warning device, which is applied to a central cloud server, and includes:
the first acquisition module is used for acquiring historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
the second processing module is used for determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
the first sending module is used for sending the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, the first acquisition module includes at least one of:
the first acquisition unit is used for acquiring vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, engine speed information, four-harsh operation information, warning information and vehicle CAN information;
the second acquisition unit is used for receiving road environment information of the current running road of the target vehicle, which is sent by road side equipment; wherein the road environment information comprises road weather information and/or road information;
the third acquisition unit is used for acquiring the facial image information of the driver through a camera of the target vehicle;
a fourth acquisition unit, configured to acquire physiological state information of the driver through a wearable device worn by the driver; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
To achieve the above object, an embodiment of the present invention provides an edge server, including a transceiver, a processor, a memory, and a program or instructions stored on the memory and executable on the processor; the processor, when executing the program or instructions, implements the early warning method as described above.
To achieve the above object, an embodiment of the present invention provides a central cloud server, which includes a transceiver, a processor, a memory, and a program or instructions stored in the memory and executable on the processor; the processor, when executing the program or instructions, implements the early warning method as described above.
In order to achieve the above object, an embodiment of the present invention provides an in-vehicle terminal, including a transceiver, a processor, a memory, and a program or an instruction stored in the memory and executable on the processor; the processor, when executing the program or instructions, implements the early warning method as described above.
To achieve the above object, an embodiment of the present invention provides a readable storage medium, on which a program or instructions are stored, the program or instructions implementing the steps in the warning method as described above when executed by a processor.
The technical scheme of the invention has the following beneficial effects:
according to the method provided by the embodiment of the invention, after the edge server receives the fusion perception information, it can use the driving model corresponding to the driver to evaluate the driver's driving state and raise warnings in real time, so that when a driving risk is predicted, early warning information can be sent promptly to alert the driver and avert traffic danger. Because the driving model is established per driver, it can objectively and accurately reflect the driver's real state, which effectively solves the problem of inaccurate driver data caused by different drivers driving the same vehicle.
Drawings
FIG. 1 is a flow chart of an early warning method according to an embodiment of the present invention;
FIG. 2 is a system interaction diagram of an embodiment of the invention;
FIG. 3 is a flow chart of a warning method according to another embodiment of the present invention;
FIG. 4 is a flow chart of a warning method according to another embodiment of the present invention;
FIG. 5 is a block diagram of an edge server according to an embodiment of the present invention;
fig. 6 is a structural diagram of a vehicle-mounted terminal of the embodiment of the invention;
FIG. 7 is a block diagram of an early warning device according to an embodiment of the present invention;
FIG. 8 is a block diagram of an early warning device according to another embodiment of the present invention;
FIG. 9 is a block diagram of an early warning device according to another embodiment of the present invention;
FIG. 10 is a block diagram of an edge server according to another embodiment of the present invention;
fig. 11 is a configuration diagram of an in-vehicle terminal according to another embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by the function and the inherent logic of the process, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In addition, the terms "system" and "network" are often used interchangeably herein.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, from which B can be determined. It should also be understood that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
As shown in fig. 1, an early warning method according to an embodiment of the present invention is applied to an edge server, and includes:
step 101, receiving fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
step 102, sending early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information; wherein the driving model is established according to the historical fusion perception information of the driver.
In this embodiment, after the edge server (i.e., the edge server node) receives the fusion perception information, it can use the driving model corresponding to the driver to evaluate the driving state (i.e., the driver's safe driving state) and issue warnings in real time, so that when a driving risk is predicted, early warning information can be sent promptly to alert the driver and avert traffic danger.
Optionally, the fusion perception information includes at least one of:
vehicle running information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, engine speed information, four-harsh operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
Optionally, the fusion perception information includes facial image information of the driver;
before sending early warning information to the vehicle-mounted terminal and/or the road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information, the early warning method further comprises the following steps:
acquiring the identity identification information of the driver according to the face image information;
and acquiring a driving model corresponding to the identity information of the driver.
In this embodiment, the edge server can recognize the driver's identity (i.e., obtain the driver's identification information) and the fatigue state of the driver's face from the facial image information. By identifying the owner of the data (namely the fusion perception information) through the driver's face image, the problem of inaccurate driver data caused by multiple people driving the same vehicle can be effectively solved.
Specifically, the edge server may identify driver identity information (i.e., identity identification information of the driver may be obtained) according to the face image information of the driver collected in real time, so as to retrieve a driving model corresponding to the driver, and predict the driving state in real time by using real-time fusion sensing information (i.e., driver fusion sensing real-time data) as input information of the driving model (i.e., driver driving safety state learning model).
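The retrieve-and-predict flow just described can be sketched as follows. The face-recognition step and the per-driver models are trivial stand-ins (a byte-keyed lookup and plain callables), used only to show the control flow; none of these names come from the patent.

```python
# Sketch of the edge-server flow: identify the driver from the face image,
# retrieve that driver's driving model, then predict the driving state from
# real-time fusion perception features. All names are illustrative.

def identify_driver(face_image: bytes) -> str:
    # A real system would run face recognition here; keying on the raw
    # bytes is purely for illustration.
    registry = {b"face-001": "driver-001", b"face-002": "driver-002"}
    return registry.get(face_image, "unknown")

driving_models = {
    # Each "model" is a callable from feature dict to a driving state.
    "driver-001": lambda feats: "high_risk" if feats["harsh_ops"] > 2 else "safe",
    "driver-002": lambda feats: "safe",
}

def predict_state(face_image: bytes, features: dict) -> str:
    driver_id = identify_driver(face_image)
    model = driving_models.get(driver_id)
    if model is None:
        return "unknown"      # no per-driver model available yet
    return model(features)
```

Keying the model on the recognized driver rather than on the vehicle is what lets the system stay accurate when several people drive the same vehicle.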
Optionally, the sending, according to the fused sensing information, early warning information to the vehicle-mounted terminal and/or the roadside device by using a driving model corresponding to a driver currently driving the target vehicle includes:
acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state includes at least one of a high risk state, a low risk state, and a safe state.
And sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
In this step, when the prediction result indicates a driving safety risk (namely the driving state is a high risk state or a low risk state), the edge server can issue early warning information (namely driving safety early warning information) in real time to the 5G vehicle-mounted terminal of the driver driving the vehicle to notify the driver; a safety early warning message can also be sent to nearby vehicles through the road side equipment, namely a Road Side Unit (RSU).
Optionally, after receiving the fusion perception information of the target vehicle sent by the vehicle-mounted terminal, the early warning method further includes: sending the fusion perception information to a central cloud server; and receiving a driving model updated by the central cloud server according to the fusion perception information.
In this embodiment, as shown in fig. 2, after receiving the fusion sensing information, the edge server may report (i.e., send) the fusion sensing information to the central cloud server, that is, the edge server may synchronize the fusion sensing information to the central cloud server in real time. The driving model is updated by the central cloud server (namely the central cloud server node) based on the fusion perception information reported by the edge server, and the edge server can ensure that the driving model is more accurate by receiving the updated driving model.
According to the method provided by the embodiment of the invention, the fusion perception information (namely the raw fusion perception data of the driver's driving scene) sent by the vehicle-mounted terminal is received and can be synchronized to the central cloud server, where the driving model is established according to the fusion perception information. Because the driving model reflects the driver's real state more objectively and accurately, the edge cloud can accurately predict the driver's driving state according to the driving model, avoiding the problem of inaccurate driver data caused by different drivers driving the same vehicle. Moreover, the 5G network and edge computing can meet the computation and low-latency notification requirements of early warning in the driving environment.
As shown in fig. 3, an early warning method according to an embodiment of the present invention is applied to a central cloud server, and includes:
step 301, acquiring historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
step 302, determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
step 303, sending the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
In this embodiment, the central cloud server can determine the driving model according to historically acquired fusion perception information (namely the historical fusion perception information). Based on the driver's historical fusion perception information, an initial driving model (namely a driving-state deep learning model) and a safety early warning processing algorithm are established, so that the driver's real state can be reflected more objectively and accurately; the latest driving model and safety early warning processing algorithm are then issued to each edge server.
Optionally, the determining, according to the history fusion perception information, a driving model corresponding to a driver currently driving the target vehicle includes:
the method comprises the following steps: and obtaining the characteristic information corresponding to the target vehicle according to the historical fusion perception information.
In the step, the central cloud server can perform calculation processing on the original fusion perception data (namely historical fusion perception information) to extract new features (namely feature information).
Step two: according to the characteristic information, establishing a fusion perception information database of a driver currently driving the target vehicle;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
In this step, the central cloud server may perform feature processing on the raw data (i.e., the historical fusion perception information) to obtain feature information, and store the feature information, thereby establishing a fusion perception driving database (i.e., a fusion perception information database) for the driver.
The fusion perception information database may include driver identification information (i.e., unique driver identification information), facial image information, driving time period characteristic data (i.e., driving time period characteristic information), driving road weather (i.e., weather characteristic information), road condition characteristic information, driving emotion characteristic information, and driver physiological characteristic data (i.e., driver physiological characteristic information), and may further include historical fusion perception information, such as driving behavior data (i.e., vehicle driving information), driving road information (i.e., road environment information), and the like.
Step three: and determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database.
Optionally, the historical fusion perception information comprises at least one of:
(I) vehicle running information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, engine speed information, four-harsh operation information, warning information and vehicle CAN information;
(II) road environment information of the current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
(III) face image information of the driver;
(IV) physiological state information of the driver; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, when the historical fusion perception information includes vehicle driving information, the obtaining, according to the historical fusion perception information, feature information corresponding to the target vehicle includes:
and determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle driving information.
In this embodiment, the edge server may perform driving trip segmentation on the vehicle driving information acquired by the vehicle-mounted terminal according to the acquisition time sequence of the GPS points (i.e., according to the acquisition time sequence of the vehicle driving information), and store the driving time period (i.e., the acquisition time period) and the corresponding vehicle driving information in segments, thereby determining the driving time period characteristic information corresponding to the acquisition time period.
For example, the driving state of the driver may be scored per collection time period according to the vehicle driving information, so as to obtain the driving period characteristic information corresponding to each collection time period. For example, for collection time periods of 9 to 10 am, 10 to 11 am, and 11 to 12 am, the corresponding driving period characteristic information may be scores of 90, 85, and 50, respectively. It will be appreciated that the driving period characteristic information may take other forms and is not limited to the scoring form in this example.
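The per-period scoring example above can be sketched as follows. The record layout and the penalty weights (5 points per harsh operation, 10 per warning) are assumptions; the patent does not define a scoring formula.

```python
from collections import defaultdict

# Sketch of per-collection-period scoring: driving records are grouped by
# the hour in which they were collected, and each period's score starts at
# 100 with deductions for harsh operations and warnings. The weights are
# illustrative assumptions.

def score_periods(records):
    """records: iterable of (hour, harsh_ops, warning_count) tuples."""
    by_hour = defaultdict(lambda: [0, 0])
    for hour, harsh_ops, warnings in records:
        by_hour[hour][0] += harsh_ops
        by_hour[hour][1] += warnings
    return {
        hour: max(0, 100 - 5 * harsh - 10 * warn)
        for hour, (harsh, warn) in by_hour.items()
    }

scores = score_periods([(9, 1, 0), (9, 1, 0), (10, 3, 0), (11, 6, 2)])
```

With this toy input the 9, 10, and 11 o'clock periods score 90, 85, and 50, matching the figures in the example.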
Optionally, when the historical fusion perception information includes road environment information, the obtaining, according to the historical fusion perception information, feature information corresponding to the target vehicle includes:
determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snow, rain, sunny and foggy days;
the road condition characteristic information includes at least one of good, ordinary, and poor.
In this embodiment, the road environment characteristic information may be added per driving trip segment (namely according to the collection period of the road environment information). Specifically, the road condition characteristic data (namely the road condition characteristic information) can be obtained by evaluating the road type and the road grade.
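One way to evaluate road condition characteristic information from the road type and road grade is sketched below. The patent only states that type and grade are evaluated; the particular score table and thresholds are assumptions made for illustration.

```python
# Illustrative mapping from (road type, road grade) to the road condition
# characteristic information. Both score tables are assumptions; the patent
# does not specify how the evaluation is performed.

GRADE_SCORE = {"expressway": 3, "first-level": 3, "second-level": 2,
               "third-level": 1, "fourth-level": 0}
TYPE_SCORE = {"national": 2, "provincial": 2, "county": 1,
              "rural": 0, "special": 1}

def road_condition(road_type: str, road_grade: str) -> str:
    score = GRADE_SCORE.get(road_grade, 0) + TYPE_SCORE.get(road_type, 0)
    if score >= 4:
        return "good"
    if score >= 2:
        return "ordinary"
    return "poor"
```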
As an alternative embodiment of the present invention, the weather characteristic information includes snow (namely snowy weather), rain (namely rainy weather), sunny weather and fog (namely foggy weather). The rainy-day information may be further divided into light rain, moderate rain and heavy rain, and the foggy-day information into strong dense fog, dense fog, heavy fog, fog and light fog.
Optionally, when the historical fusion perception information includes face image information, the obtaining, according to the historical fusion perception information, feature information corresponding to the target vehicle includes:
obtaining driving emotion characteristic information corresponding to the face image information according to the face image information;
wherein the driving emotion characteristic information includes at least one of good, ordinary, and poor.
In this embodiment, the edge server can process the face image information and identify the driver's identity information from it; the face image can also serve as the driver's portrait data and be added to the driver's profile. The driver's fatigue and emotional state can be analyzed from the face image information, and the driving emotion characteristic information can be evaluated per driving trip segment.
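Mapping a per-segment emotion analysis result onto the three driving-emotion categories can be sketched as a simple thresholding step. The numeric "negative-emotion ratio" input and the thresholds are assumptions standing in for the output of a real facial-analysis model.

```python
# Hypothetical classification of driving emotion characteristic information
# per trip segment. The input ratio and the cut-offs are assumptions.

def emotion_feature(negative_ratio: float) -> str:
    """negative_ratio: fraction of frames showing fatigue or negative emotion."""
    if negative_ratio < 0.1:
        return "good"
    if negative_ratio < 0.4:
        return "ordinary"
    return "poor"
```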
Optionally, when the historical fusion perception information includes physiological state information, the obtaining, according to the historical fusion perception information, feature information corresponding to the target vehicle includes:
according to the acquisition time period of the physiological state information, determining physiological characteristic information of the driver corresponding to the acquisition time period;
wherein the driver physiological characteristic information includes at least one of good, ordinary, and poor.
In this embodiment, the central cloud server may perform feature extraction on the physiological state data (namely the physiological state information) to obtain the driver physiological characteristic information. For example, according to the fluctuation pattern of the driver's physiological state, pattern characteristic data of the driver's normal physiological-state periods can be established, which may comprise good periods, ordinary periods and poor periods; then, according to the collection time period of the physiological state information and the pattern characteristic data, the driver's physiological state can be evaluated per driving trip segment (namely per collection time period), and the driver physiological characteristic information corresponding to each collection time period can be determined.
Of course, the driver's physiological state may also be evaluated directly from the collection time periods and the physiological state information corresponding to each collection time period, so as to determine the driver physiological characteristic information corresponding to each collection time period.
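The per-period physiological evaluation described above can be sketched as follows. The heart-rate thresholds are illustrative assumptions; a real system would combine heart rate, blood pressure, and blood oxygen against the driver's own baseline pattern.

```python
import statistics

# Sketch of classifying one collection period's physiological state from its
# heart-rate samples. The bpm ranges are illustrative assumptions only.

def physiological_feature(heart_rates: list) -> str:
    """Classify one collection period from its heart-rate samples (bpm)."""
    mean_hr = statistics.mean(heart_rates)
    if 60 <= mean_hr <= 90:
        return "good"
    if 50 <= mean_hr <= 105:
        return "ordinary"
    return "poor"
```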
Optionally, the determining, according to the fusion perception information database, a driving model corresponding to a driver currently driving the target vehicle includes:
establishing an initial driving model according to the characteristic information in the fusion perception information database, and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm; and
training the initial driving model with the characteristic information to obtain the driving model.
In this embodiment, an initial driving model of the driver (i.e., a safe-driving-state LightGBM model) may be established based on the characteristic information. From the driver's historical driving big data (namely, the historical fusion perception information and the characteristic information), the initial driving model learns the mapping between the road condition characteristic information (road), the weather characteristic information (weather), the driving time period characteristic information (time), the driver physiological characteristic information (physiology), the driving emotion characteristic information (score) and the driving state of the driver (Safe-status), where Safe-status = f(road, weather, time, physiology, score).
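The feature encoding behind Safe-status = f(road, weather, time, physiology, score) can be sketched as follows. The category values are taken from the characteristic information described in this application; the nearest-neighbour lookup is a deliberately simple stand-in for the LightGBM model tuned by Bayesian optimization, used here only so the encoding is runnable without external libraries:

```python
# Integer encodings for the categorical characteristic information
# (the exact encodings are an assumption for illustration).
ROAD = {"good": 0, "general": 1, "bad": 2}
WEATHER = {"sunny": 0, "rainy": 1, "snowy": 2, "foggy": 3}
PHYSIOLOGY = {"good": 0, "general": 1, "poor": 2}

def encode(road, weather, hour, physiology, score):
    """Encode one historical driving record as a feature vector
    for Safe-status = f(road, weather, time, physiology, score)."""
    return [ROAD[road], WEATHER[weather], hour, PHYSIOLOGY[physiology], score]

def predict_safe_status(history, record):
    """history: list of (feature_vector, safe_status) pairs built from the
    historical fusion perception information. Stand-in for the trained model:
    return the label of the closest historical record."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda h: dist(h[0], record))[1]
```

In the actual scheme the encoded vectors would instead be fed to a LightGBM classifier whose hyperparameters are tuned by Bayesian optimization, as described above.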
Optionally, the early warning method further includes:
receiving fusion perception information sent by the edge server;
and updating the driving model according to the fusion perception information.
According to the method, the central cloud server can receive and process the historical fusion perception information (namely, the raw fusion perception data of the driver's driving), extract characteristic information from it, and establish a fusion perception information database of the driver (namely, a driving scene fusion perception database). On this basis, it establishes an initial driving model and a safety early warning processing algorithm, and trains the initial driving model to obtain the driving model. It can also update the driving model according to the fusion perception information sent by the edge servers and issue the latest driving model and safety early warning processing algorithm to each edge server, so that the accuracy of the driving model is ensured.
As shown in fig. 4, an early warning method according to an embodiment of the present invention is applied to a vehicle-mounted terminal, and includes:
step 401, acquiring fusion perception information of a target vehicle where the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
step 402, sending the fusion perception information to an edge server; the fusion perception information is used for the central cloud server to establish a driving model, and the driving model is used for predicting the driving state of a driver.
In this step, the vehicle-mounted terminal (for example, a 5G vehicle-mounted terminal) may report the acquired fusion perception information to the edge server in real time, so that the edge server can predict the driving state of the driver according to the fusion perception information.
Step 403, receiving early warning information sent by the edge server according to the driving model;
step 404, performing driving safety prompt on the driver of the target vehicle according to the early warning information.
In this embodiment, the fusion perception information related to the vehicle and/or the driver can be acquired and sent to the edge server, so that the edge server can predict the driving state of the driver according to the fusion perception information; a driving safety prompt is then given to the driver of the target vehicle according to the early warning information sent by the edge server, thereby avoiding accidents.
Optionally, the obtaining of the fusion perception information of the target vehicle where the vehicle-mounted terminal is located includes at least one of the following:
(I) acquiring vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information.
For example, vehicle running information (i.e., vehicle driving data), such as the vehicle position or vehicle condition data (e.g., vehicle speed information, average speed information, rotating speed information, etc.), may be collected in real time by the vehicle-mounted terminal (e.g., a vehicle-mounted positioning terminal or an On-Board Unit (OBU) with a 5G networking function) and reported to the cloud (i.e., the edge server).
(II) receiving road environment information of the current running road of the target vehicle, which is sent by a road side device (RSU); wherein the road environment information includes road weather information and/or road information.
In this embodiment, the RSU may collect the weather information and road information of the road on which the vehicle is driving in real time and broadcast them to nearby vehicles in real time. The vehicle-mounted terminal may obtain the weather (weather information) and road-related data (road information) of the driving road from the roadside unit, and after receiving the road environment information, may send it to the edge server; for example, the vehicle-mounted terminal may report the road environment information to the edge server in real time through a vehicle state packet.
(III) acquiring the facial image information of the driver through a camera of the target vehicle.
For example, an in-vehicle camera (for example, a built-in camera) acquires the facial image information of the driver and synchronizes it to the vehicle-mounted terminal in real time, and the vehicle-mounted terminal reports the facial image information to the edge server, so that the edge server can identify the identity of the driver and the facial fatigue state of the driver from the facial image information.
(IV) acquiring physiological state information of the driver through wearable equipment worn by the driver; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Here, the physiological state information of the driver (i.e., physiological state data, which may include, for example, heart rate, blood pressure and blood oxygen content) may be collected in real time by a wearable device (e.g., a wearable body-sensing device) worn by the driver. Specifically, the wearable device can synchronize the physiological state data to the vehicle-mounted terminal in real time through a short-range wireless communication network such as Bluetooth or Wi-Fi, and the vehicle-mounted terminal reports the physiological state data to the edge server.
The current physiological state of the driver can be known from the physiological state information, so the physiological state information can be provided to the edge server, which can judge the driving state of the driver in combination with the other information related to the vehicle and/or the driver (i.e., the fusion perception information) to predict whether a driving risk exists.
It should be noted that historical physiological state information can be formed from the physiological state information collected at different times, and the historical physiological state information can be synchronized to the central cloud server by the edge server, so that the central cloud server can construct a driving model corresponding to the driver by using the historical physiological state information.
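The reporting described in (I) to (IV) could be sketched as a single vehicle state packet sent by the vehicle-mounted terminal to the edge server. All field names here are hypothetical; the application does not define a packet format:

```python
import json
import time

# Hypothetical structure of the vehicle state packet through which the
# vehicle-mounted terminal reports fusion perception information to the
# edge server; field names are illustrative, not defined by the patent.
def build_report(position, speed_kmh, road_env, face_frame_id, physiology):
    return json.dumps({
        "timestamp": int(time.time()),
        "vehicle": {"position": position, "speed_kmh": speed_kmh},  # (I) vehicle running information
        "road_environment": road_env,    # (II) from the RSU broadcast
        "face_image_ref": face_frame_id, # (III) from the in-vehicle camera
        "physiology": physiology,        # (IV) from the wearable device
    })
```

On the edge server side, such a packet would be parsed into the fusion perception information used as input to the driving model.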
According to the method provided by the embodiment of the invention, the fusion perception information related to the vehicle and/or the driver can be acquired and sent to the edge server, so that the edge server can predict the driving state of the driver according to the fusion perception information; a driving safety prompt is then given to the driver of the target vehicle according to the early warning information sent by the edge server, thereby avoiding accidents.
The early warning method provided by the embodiment of the invention is applied to road side equipment (RSU), and comprises the following steps: receiving early warning information sent by an edge server; and sending the early warning information to vehicles within a preset range.
Optionally, the early warning method further includes: collecting road environment information of the road in the preset range; wherein the road environment information comprises road weather information and/or road information; and transmitting the road environment information to the vehicles within the preset range in a broadcasting mode.
In the embodiment, the RSU can collect the weather information and the road information of the running road section of the vehicle in real time, and the RSU can broadcast the weather information and the road information to nearby vehicles in real time.
Optionally, the road information comprises a road type and/or a road grade; wherein the road type comprises at least one of a national road, a provincial road, a county road and a special road; the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
As shown in fig. 5, an edge server 500 according to an embodiment of the present invention includes a processor 510 and a transceiver 520, wherein,
the transceiver 520 is configured to receive fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
the processor 510 is configured to send early warning information to the vehicle-mounted terminal and/or the road side device according to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
Optionally, the fusion awareness information includes at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the fusion perception information includes facial image information of the driver;
before sending the warning information to the vehicle-mounted terminal and/or the road side device by using the driving model corresponding to the driver currently driving the target vehicle according to the fusion perception information, the processor 510 is further configured to:
acquiring the identity identification information of the driver according to the face image information;
and acquiring a driving model corresponding to the identity information of the driver.
Optionally, when sending the early warning information to the vehicle-mounted terminal and/or the roadside device according to the fusion perception information by using the driving model corresponding to the driver currently driving the target vehicle, the processor 510 is specifically configured to:
acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
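A minimal sketch of this decision step is given below. The warning payloads are hypothetical placeholders; the application only specifies that early warning information is sent for the high-risk and low-risk states and not for the safe state:

```python
# Edge-server decision: the driving state predicted by the driving model
# triggers early warning information only for risk states.
def make_warning(driving_state):
    """Return warning info for the vehicle-mounted terminal and/or roadside
    device, or None when the driving state is safe (payloads are illustrative)."""
    if driving_state == "high risk":
        return {"level": "high", "action": "alert driver and nearby vehicles"}
    if driving_state == "low risk":
        return {"level": "low", "action": "alert driver"}
    return None  # safe state: no early warning information is sent
```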
Optionally, the transceiver 520 is further configured to:
sending the fusion perception information to a central cloud server;
and receiving a driving model updated by the central cloud server according to the fusion perception information.
According to the edge server provided by this embodiment, the fusion perception information (namely, the raw fusion perception data of the driver's driving scene) sent by the vehicle-mounted terminal is received and can be synchronized to the central cloud server, and a driving model can then be established according to the fusion perception information. Because the driving model reflects the real state of the driver more objectively and accurately, the edge cloud can accurately predict the driving state of the driver according to the driving model, which avoids the inaccuracy of driver data caused by different drivers driving the same vehicle.
A central cloud server 500 according to an embodiment of the present invention has a structure similar to that of the edge server shown in fig. 5, and includes a processor 510 and a transceiver 520, wherein,
the processor 510 is configured to obtain historical fusion perception information of the target vehicle; wherein the history fused perception information is information related to a vehicle and/or a driver;
the processor 510 is further configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle;
the transceiver 520 is configured to send the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, when the processor 510 is configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle, specifically, the processor is configured to:
obtaining characteristic information corresponding to the target vehicle according to the historical fusion perception information;
according to the characteristic information, establishing a fusion perception information database of a driver currently driving the target vehicle;
determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
Optionally, the historical fused perception information includes at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver physiological status information; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, when the history fused perception information includes vehicle driving information, the processor 510 is specifically configured to, when being configured to obtain feature information corresponding to the target vehicle according to the history fused perception information:
and determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle running information.
Optionally, in a case that the history fused perception information includes road environment information, when the processor 510 is configured to obtain the feature information corresponding to the target vehicle according to the history fused perception information, specifically, the processor is configured to:
determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snowy days, rainy days, sunny days and foggy days;
the road condition characteristic information includes at least one of good, common and bad.
Optionally, in a case that the history fused perception information includes face image information, when the processor 510 is configured to obtain feature information corresponding to the target vehicle according to the history fused perception information, specifically, the processor is configured to:
obtaining driving emotion characteristic information corresponding to the face image information according to the face image information;
wherein the driving emotional characteristic information includes at least one of good, general, and bad.
Optionally, in a case that the historical fusion perception information includes physiological state information, when the processor 510 is configured to obtain the feature information corresponding to the target vehicle according to the historical fusion perception information, the processor is specifically configured to:
according to the acquisition time period of the physiological state information, determining physiological characteristic information of the driver corresponding to the acquisition time period;
wherein the driver physiological characteristic information includes at least one of good, general, and poor.
Optionally, when the processor 510 is configured to determine, according to the fusion perception information database, a driving model corresponding to a driver currently driving the target vehicle, specifically, the processor is configured to:
establishing an initial driving model according to the characteristic information in the fusion perception information database, and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and training the initial driving model by using the characteristic information to obtain the driving model.
Optionally, the transceiver 520 is further configured to receive fusion awareness information sent by the edge server;
the processor 510 is further configured to update the driving model according to the fused perception information.
In this embodiment, the central cloud server can receive and process the historical fusion perception information, extract characteristic information from it, establish a fusion perception information database of the driver, establish an initial driving model and a safety early warning processing algorithm, and train the initial driving model to obtain the driving model. It can also update the driving model according to the fusion perception information sent by the edge servers and issue the latest driving model and safety early warning processing algorithm to each edge server, so that the accuracy of the driving model is ensured.
As shown in fig. 6, a vehicle-mounted terminal 600 according to an embodiment of the present invention includes a processor 610 and a transceiver 620, wherein,
the processor 610 is configured to obtain fusion perception information of a target vehicle where the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
the transceiver 620 is configured to send the fusion awareness information to an edge server; the fusion perception information is used for the central cloud server to establish a driving model, and the driving model is used for predicting the driving state of a driver;
the transceiver 620 is further configured to receive warning information sent by the edge server according to the driving model;
the processor 610 is further configured to perform driving safety prompting on a driver of the target vehicle according to the early warning information.
Optionally, when acquiring the fusion perception information of the target vehicle in which the vehicle-mounted terminal is located, the processor 610 is specifically configured to:
collecting vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
receiving road environment information of a current running road of the target vehicle, which is sent by road side equipment; wherein the road environment information comprises road weather information and/or road information;
acquiring the facial image information of the driver through a camera of the target vehicle;
acquiring physiological state information of the driver through wearable equipment worn by the driver; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
The vehicle-mounted terminal of this embodiment can acquire the fusion perception information related to the vehicle and/or the driver and send it to the edge server, so that the edge server can predict the driving state of the driver according to the fusion perception information; a driving safety prompt is then given to the driver of the target vehicle according to the early warning information sent by the edge server, thereby avoiding accidents.
As shown in fig. 7, an embodiment of the present invention provides an early warning apparatus applied to an edge server, including:
the first receiving module 710 is configured to receive fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
the first processing module 720 is configured to send early warning information to the vehicle-mounted terminal and/or the roadside device according to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
Optionally, the fusion awareness information includes at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the fusion perception information includes facial image information of the driver;
the early warning device still includes:
the fourth processing module is used for acquiring the identity identification information of the driver according to the face image information;
and the model acquisition module is used for acquiring a driving model corresponding to the identity identification information of the driver.
Optionally, the first processing module 720 includes:
the first processing unit is used for acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and the second processing unit is used for sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
Optionally, the early warning device further includes:
the third sending module is used for sending the fusion perception information to the central cloud server;
and the third receiving module is used for receiving the driving model updated by the central cloud server according to the fusion perception information.
According to the early warning device, the fusion perception information (namely, the raw fusion perception data of the driver's driving scene) sent by the vehicle-mounted terminal is received and can be synchronized to the central cloud server, and a driving model can then be established according to the fusion perception information. Because the driving model reflects the real state of the driver more objectively and accurately, the edge cloud can accurately predict the driving state of the driver according to the driving model, which avoids the inaccuracy of driver data caused by different drivers driving the same vehicle.
As shown in fig. 8, an embodiment of the present invention provides an early warning apparatus, which is applied to a central cloud server, and includes:
a first obtaining module 810, configured to obtain historical fusion perception information of a target vehicle; wherein the history fused perception information is information related to a vehicle and/or a driver;
the second processing module 820 is configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle;
a first sending module 830, configured to send the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
Optionally, the second processing module 820 includes:
the third processing unit is used for acquiring the characteristic information corresponding to the target vehicle according to the historical fusion perception information;
the fourth processing unit is used for establishing a fusion perception information database of a driver currently driving the target vehicle according to the characteristic information;
the fifth processing unit is used for determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
Optionally, the history fused perception information comprises at least one of:
vehicle travel information; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
Optionally, the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of national road, provincial road, county road and special road;
the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
Optionally, the third processing unit includes:
and the first processing subunit is used for determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle running information.
Optionally, the third processing unit includes:
the second processing subunit is used for determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snowy days, rainy days, sunny days and foggy days;
the road condition characteristic information includes at least one of good, common and bad.
Optionally, the third processing unit includes:
the third processing subunit is used for obtaining driving emotion characteristic information corresponding to the face image information according to the face image information;
wherein the driving emotional characteristic information includes at least one of good, general, and bad.
Optionally, the third processing unit includes:
the fourth processing subunit is used for determining the physiological characteristic information of the driver corresponding to the acquisition time period according to the acquisition time period of the physiological state information;
wherein the driver physiological characteristic information includes at least one of good, general, and poor.
Optionally, the fifth processing unit includes:
the fifth processing subunit is used for establishing an initial driving model according to the characteristic information in the fusion perception information database and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and the sixth processing subunit is used for training the initial driving model by using the characteristic information to obtain the driving model.
Optionally, the early warning device further includes:
the fourth receiving module is used for receiving the fusion perception information sent by the edge server;
and the model updating module is used for updating the driving model according to the fusion perception information.
In the early warning device of this embodiment, the central cloud server can receive and process the historical fusion perception information, extract characteristic information from it, establish a fusion perception information database of the driver, establish an initial driving model and a safety early warning processing algorithm, and train the initial driving model to obtain the driving model. It can also update the driving model according to the fusion perception information sent by the edge servers and issue the latest driving model and safety early warning processing algorithm to each edge server, so that the accuracy of the driving model is ensured.
As shown in fig. 9, an embodiment of the present invention provides an early warning apparatus, applied to a central cloud server, which includes:
a first acquisition module, configured to acquire historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to the vehicle and/or the driver;
a second processing module, configured to determine, according to the historical fusion perception information, a driving model corresponding to the driver currently driving the target vehicle;
and a first sending module, configured to send the driving model to an edge server; wherein the driving model is used by the edge server to predict the driving state of the driver.
Optionally, the first acquisition module includes at least one of:
a first acquisition unit, configured to collect vehicle running information of the target vehicle; wherein the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information, and vehicle CAN information;
a second acquisition unit, configured to receive the road environment information, sent by road side equipment, of the road on which the target vehicle is currently travelling; wherein the road environment information comprises road weather information and/or road information;
a third acquisition unit, configured to acquire the facial image information of the driver through a camera of the target vehicle;
and a fourth acquisition unit, configured to acquire the physiological state information of the driver through a wearable device worn by the driver; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
The early warning device of this embodiment can acquire fusion perception information related to the vehicle and/or the driver and send it to the edge server. The edge server can then predict the driving state of the driver from that information, and the driver of the target vehicle is given a driving safety prompt according to the early warning information sent back by the edge server, helping to avoid accidents.
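As a rough illustration of this predict-and-prompt flow, the sketch below maps a fused perception record onto the three driving states named elsewhere in the text (high risk, low risk, safe) and emits a warning only in the two risk states. The record fields, thresholds, and scoring rule are all invented for illustration and are not the patented driving model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FusedPerception:
    """One fused perception record; the fields are illustrative only."""
    speed_kmh: float
    heart_rate_bpm: float
    road_condition: str  # "good" / "average" / "poor"

def predict_state(rec: FusedPerception) -> str:
    """Toy stand-in for the driving model: score a record and map the
    score to the three driving states named in the text."""
    risk = 0
    if rec.speed_kmh > 120:
        risk += 2
    elif rec.speed_kmh > 100:
        risk += 1
    if rec.heart_rate_bpm > 110 or rec.heart_rate_bpm < 50:
        risk += 1
    if rec.road_condition == "poor":
        risk += 1
    if risk >= 3:
        return "high risk"
    return "low risk" if risk >= 1 else "safe"

def warning(rec: FusedPerception) -> Optional[str]:
    """Warn the vehicle-mounted terminal only in the two risk states."""
    state = predict_state(rec)
    return None if state == "safe" else f"warning: {state}"
```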
An edge server according to another embodiment of the present invention, as shown in fig. 10, includes a transceiver 1010, a processor 1000, a memory 1020, and a program or instructions stored in the memory 1020 and executable on the processor 1000; the processor 1000 implements the above-mentioned early warning method applied to the edge server when executing the program or the instructions.
The transceiver 1010 is used for receiving and transmitting data under the control of the processor 1000.
In fig. 10, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 1000, and various circuits, represented by memory 1020. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1010 may comprise a number of elements, including a transmitter and a receiver, that provide a means for communicating with various other apparatus over a transmission medium. The processor 1000 is responsible for managing the bus architecture and general processing, and the memory 1020 may store data used by the processor 1000 in performing operations.
A central cloud server according to another embodiment of the present invention adopts the same structure as the edge server shown in fig. 10 and includes a transceiver 1010, a processor 1000, a memory 1020, and a program or instructions stored in the memory 1020 and executable on the processor 1000; the processor 1000 implements the above-described early warning method applied to the central cloud server when executing the program or instructions.
The transceiver 1010 is used for receiving and transmitting data under the control of the processor 1000.
In fig. 10, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 1000, and various circuits, represented by memory 1020. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1010 may comprise a number of elements, including a transmitter and a receiver, that provide a means for communicating with various other apparatus over a transmission medium. The processor 1000 is responsible for managing the bus architecture and general processing, and the memory 1020 may store data used by the processor 1000 in performing operations.
A vehicle-mounted terminal according to another embodiment of the present invention, as shown in fig. 11, includes a transceiver 1110, a processor 1100, a memory 1120, and a program or instructions stored in the memory 1120 and executable on the processor 1100; the processor 1100 implements the above-described early warning method applied to the vehicle-mounted terminal when executing the program or instructions.
The transceiver 1110 is used for receiving and transmitting data under the control of the processor 1100.
In fig. 11, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 1100, and various circuits, represented by memory 1120. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 1110 may comprise a number of elements, including a transmitter and a receiver, that provide a means for communicating with various other apparatus over a transmission medium. For different user devices, the user interface 1130 may also be an interface capable of connecting to the desired devices, including but not limited to a keypad, display, speaker, microphone, and joystick.
The processor 1100 is responsible for managing the bus architecture and general processing, and the memory 1120 may store data used by the processor 1100 in performing operations.
The readable storage medium of the embodiment of the present invention stores a program or instructions thereon; when executed by a processor, the program or instructions implement the steps of the above-described early warning method and achieve the same technical effects, which are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is further noted that the terminals described in this specification include, but are not limited to, smart phones, tablets, etc., and that many of the functional components described are referred to as modules in order to more particularly emphasize their implementation independence.
In embodiments of the present invention, modules may be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of executable code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Likewise, operational data may be identified within the modules and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network.
Where a module can be implemented in software, then, given the level of existing hardware technology and setting cost aside, a corresponding hardware circuit can be built to implement the same function; such a hardware circuit may include conventional Very Large Scale Integration (VLSI) circuits or gate arrays, as well as existing semiconductor devices such as logic chips and transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, or programmable logic devices.
The exemplary embodiments above are described with reference to the drawings. Many different forms and embodiments of the invention may be made without departing from its spirit and teaching; therefore, the invention is not to be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. In the drawings, the size and relative sizes of elements may be exaggerated for clarity. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless otherwise indicated, a stated range of values includes the upper and lower limits of the range and any subranges therebetween.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (27)

1. An early warning method applied to an edge server, characterized by comprising:
receiving fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
sending, according to the fusion perception information, early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
2. The early warning method according to claim 1, wherein the fusion perception information comprises at least one of:
vehicle running information; wherein the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information, and vehicle Controller Area Network (CAN) information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
3. The early warning method according to claim 1, wherein the fusion perception information comprises facial image information of a driver;
before sending early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information, the early warning method further comprises the following steps:
acquiring identity identification information of the driver according to the face image information;
and acquiring a driving model corresponding to the identity identification information of the driver.
4. The early warning method according to claim 1, wherein sending early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information comprises:
acquiring a driving state corresponding to the fusion perception information by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving state comprises at least one of a high risk state, a low risk state, and a safe state;
and sending early warning information to the vehicle-mounted terminal and/or the road side equipment under the condition that the driving state is a high risk state or a low risk state.
5. The early warning method according to claim 1, wherein after receiving the fusion perception information of the target vehicle sent by the vehicle-mounted terminal, the early warning method further comprises:
sending the fusion perception information to a central cloud server;
and receiving a driving model updated by the central cloud server according to the fusion perception information.
6. An early warning method applied to a central cloud server, characterized by comprising:
acquiring historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information;
sending the driving model to an edge server; wherein the driving model is used for the edge server to predict the driving state of the driver.
7. The early warning method according to claim 6, wherein determining a driving model corresponding to a driver currently driving the target vehicle according to the historical fusion perception information comprises:
obtaining characteristic information corresponding to the target vehicle according to the historical fusion perception information;
according to the characteristic information, establishing a fusion perception information database of a driver currently driving the target vehicle;
determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database;
the fusion perception information database comprises at least one of driver identity identification information, face image information, driving time period characteristic information, weather characteristic information, road condition characteristic information, driving emotion characteristic information and driver physiological characteristic information.
8. The early warning method according to claim 7, wherein the historical fusion perception information comprises at least one of:
vehicle running information; wherein the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information, and vehicle CAN information;
road environment information of a current driving road of the target vehicle; wherein the road environment information comprises road weather information and/or road information;
face image information of the driver;
driver's physiological state information; wherein the physiological state information includes at least one of heart rate information, blood pressure information, and blood oxygen content information.
9. The early warning method according to claim 8, wherein the road information comprises a road type and/or a road grade;
wherein the road type comprises at least one of a national road, a provincial road, a county road, a rural road and a special highway;
the road grade includes at least one of an expressway, a first-level highway, a second-level highway, a third-level highway and a fourth-level highway.
10. The early warning method according to claim 7, wherein, when the historical fusion perception information includes vehicle running information, obtaining the feature information corresponding to the target vehicle according to the historical fusion perception information comprises:
and determining driving time period characteristic information corresponding to the acquisition time period according to the acquisition time period of the vehicle running information.
11. The early warning method according to claim 7, wherein, when the historical fusion perception information includes road environment information, obtaining the feature information corresponding to the target vehicle according to the historical fusion perception information comprises:
determining road environment characteristic information corresponding to the acquisition time period according to the acquisition time period of the road environment information;
the road environment characteristic information comprises weather characteristic information and/or road condition characteristic information;
the weather characteristic information comprises at least one of snowy, rainy, sunny, and foggy;
and the road condition characteristic information includes at least one of good, average, and poor.
12. The early warning method according to claim 7, wherein, when the historical fusion perception information includes face image information, obtaining the feature information corresponding to the target vehicle according to the historical fusion perception information comprises:
obtaining driving emotion feature information corresponding to the face image information according to the face image information;
wherein the driving emotion characteristic information includes at least one of good, average, and poor.
13. The early warning method according to claim 7, wherein, when the historical fusion perception information includes physiological state information, obtaining the feature information corresponding to the target vehicle according to the historical fusion perception information comprises:
according to the acquisition time period of the physiological state information, determining physiological characteristic information of the driver corresponding to the acquisition time period;
wherein the driver physiological characteristic information includes at least one of good, average, and poor.
14. The early warning method according to claim 7, wherein determining a driving model corresponding to a driver currently driving the target vehicle according to the fusion perception information database comprises:
establishing an initial driving model according to the characteristic information in the fusion perception information database, and adjusting the parameters of the initial driving model by using a Bayesian optimization algorithm;
and training the initial driving model by using the characteristic information to obtain the driving model.
15. The early warning method according to claim 6, further comprising:
receiving fusion perception information sent by the edge server;
and updating the driving model according to the fusion perception information.
16. An early warning method applied to a vehicle-mounted terminal, characterized by comprising:
acquiring fusion perception information of a target vehicle where the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
sending the fusion perception information to an edge server; the fusion perception information is used for the central cloud server to establish a driving model, and the driving model is used for predicting the driving state of a driver;
receiving early warning information sent by the edge server according to the driving model;
and carrying out driving safety prompt on the driver of the target vehicle according to the early warning information.
17. The early warning method according to claim 16, wherein acquiring the fusion perception information of the target vehicle in which the vehicle-mounted terminal is located includes at least one of:
collecting vehicle running information of the target vehicle; the vehicle running information comprises at least one of vehicle position information, vehicle speed information, average speed information, rotating speed information, emergency operation information, warning information and vehicle CAN information;
receiving road environment information of a current running road of the target vehicle, which is sent by road side equipment; wherein the road environment information comprises road weather information and/or road information;
acquiring the facial image information of the driver through a camera of the target vehicle;
acquiring physiological state information of the driver through wearable equipment worn by the driver; wherein the physiological status information comprises at least one of heart rate information, blood pressure information and blood oxygen content information.
18. An early warning device applied to an edge server, characterized by comprising:
a first receiving module, configured to receive fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
and a first processing module, configured to send, according to the fusion perception information, early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
19. An early warning device applied to a central cloud server, characterized by comprising:
a first acquisition module, configured to acquire historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
a second processing module, configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle;
and a first sending module, configured to send the driving model to an edge server; wherein the driving model is used by the edge server to predict the driving state of the driver.
20. An early warning device applied to a vehicle-mounted terminal, characterized by comprising:
a second acquisition module, configured to acquire fusion perception information of a target vehicle in which the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
a second sending module, configured to send the fusion perception information to an edge server; wherein the fusion perception information is used by a central cloud server to establish a driving model, and the driving model is used to predict the driving state of a driver;
a second receiving module, configured to receive early warning information sent by the edge server according to the driving model;
and a third processing module, configured to give a driving safety prompt to the driver of the target vehicle according to the early warning information.
21. An edge server, comprising: a transceiver and a processor;
the transceiver is configured to receive fusion perception information of a target vehicle sent by a vehicle-mounted terminal; wherein the fusion perception information is information related to a vehicle and/or a driver;
and the processor is configured to send, according to the fusion perception information, early warning information to the vehicle-mounted terminal and/or road side equipment by using a driving model corresponding to a driver currently driving the target vehicle; wherein the driving model is established according to historical fusion perception information of the driver.
22. A central cloud server, comprising: a transceiver and a processor;
the processor is configured to acquire historical fusion perception information of a target vehicle; wherein the historical fusion perception information is information related to a vehicle and/or a driver;
the processor is further configured to determine, according to the historical fusion perception information, a driving model corresponding to a driver currently driving the target vehicle;
and the transceiver is configured to send the driving model to an edge server; wherein the driving model is used by the edge server to predict the driving state of the driver.
23. A vehicle-mounted terminal characterized by comprising: a transceiver and a processor;
the processor is configured to acquire fusion perception information of a target vehicle in which the vehicle-mounted terminal is located; wherein the fusion perception information is information related to a vehicle and/or a driver;
the transceiver is configured to send the fusion perception information to an edge server; wherein the fusion perception information is used by a central cloud server to establish a driving model, and the driving model is used to predict the driving state of a driver;
the transceiver is further configured to receive early warning information sent by the edge server according to the driving model;
and the processor is further configured to give a driving safety prompt to the driver of the target vehicle according to the early warning information.
24. An edge server, comprising: a transceiver, a processor, a memory, and a program or instructions stored on the memory and executable on the processor; characterized in that the processor implements the early warning method according to any one of claims 1 to 5 when executing the program or instructions.
25. A central cloud server, comprising: a transceiver, a processor, a memory, and a program or instructions stored on the memory and executable on the processor; characterized in that the processor implements the early warning method according to any one of claims 6 to 15 when executing the program or instructions.
26. A vehicle-mounted terminal, comprising: a transceiver, a processor, a memory, and a program or instructions stored on the memory and executable on the processor; characterized in that the processor implements the early warning method according to claim 16 or 17 when executing the program or instructions.
27. A readable storage medium having a program or instructions stored thereon, wherein the program or instructions, when executed by a processor, perform the steps of the early warning method according to any one of claims 1 to 5, the steps of the early warning method according to any one of claims 6 to 15, or the steps of the early warning method according to claim 16 or 17.
CN202110631725.0A 2021-06-07 2021-06-07 Early warning method, early warning device, mobile terminal and readable storage medium Pending CN115512511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110631725.0A CN115512511A (en) 2021-06-07 2021-06-07 Early warning method, early warning device, mobile terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110631725.0A CN115512511A (en) 2021-06-07 2021-06-07 Early warning method, early warning device, mobile terminal and readable storage medium

Publications (1)

Publication Number Publication Date
CN115512511A true CN115512511A (en) 2022-12-23

Family

ID=84499887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110631725.0A Pending CN115512511A (en) 2021-06-07 2021-06-07 Early warning method, early warning device, mobile terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN115512511A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129641A (en) * 2023-02-13 2023-05-16 中南大学 Vehicle security situation calculation method and system based on multi-terminal collaborative identification

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108423006A (en) * 2018-02-02 2018-08-21 辽宁友邦网络科技有限公司 A kind of auxiliary driving warning method and system
CN109460780A (en) * 2018-10-17 2019-03-12 深兰科技(上海)有限公司 Safe driving of vehicle detection method, device and the storage medium of artificial neural network
CN110406541A (en) * 2019-06-12 2019-11-05 天津五八到家科技有限公司 Driving data processing method, equipment, system and storage medium
CN111274881A (en) * 2020-01-10 2020-06-12 中国平安财产保险股份有限公司 Driving safety monitoring method and device, computer equipment and storage medium
CN111291916A (en) * 2018-12-10 2020-06-16 北京嘀嘀无限科技发展有限公司 Driving behavior safety prediction method and device, electronic equipment and storage medium
CN111325872A (en) * 2020-01-21 2020-06-23 和智信(山东)大数据科技有限公司 Driver driving abnormity detection equipment and detection method based on computer vision
CN111709542A (en) * 2020-06-12 2020-09-25 浪潮集团有限公司 Vehicle prediction diagnosis method based on fog computing environment
CN111739191A (en) * 2020-05-29 2020-10-02 北京梧桐车联科技有限责任公司 Violation early warning method, device, equipment and storage medium
CN112634607A (en) * 2019-09-24 2021-04-09 福特全球技术公司 Real-time vehicle accident risk prediction based on vehicle to outside world (V2X)
CN112677983A (en) * 2021-01-07 2021-04-20 浙江大学 System for recognizing driving style of driver


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129641A (en) * 2023-02-13 2023-05-16 Central South University Vehicle safety situation calculation method and system based on multi-terminal collaborative identification

Similar Documents

Publication Publication Date Title
CN111524357B (en) Method for fusing multiple types of data required for safe vehicle driving
US11100795B2 (en) Driving service active sensing system and method in internet of vehicles environment
CN112660157B (en) Multifunctional remote monitoring and driving assistance system for accessible vehicles
EP2876620B1 (en) Driving assistance system and driving assistance method
WO2018058958A1 (en) Road vehicle traffic alarm system and method therefor
CN106781485B (en) Road congestion identification method, V2X vehicle-mounted terminal and Internet of vehicles system
CN109410567B (en) Intelligent analysis system and method for accident-prone road based on Internet of vehicles
CN111052202A (en) System and method for safe autonomous driving based on relative positioning
CN106448263B (en) Vehicle driving safety management system and method
CN105577755A (en) Internet of Vehicles terminal service system
CN109993944B (en) Danger early warning method, mobile terminal and server
US10369995B2 (en) Information processing device, information processing method, control device for vehicle, and control method for vehicle
CN111465972B (en) System for calculating error probability of vehicle sensor data
CN205038808U (en) Position monitoring system
CN109272775A (en) Expressway curve safety monitoring and early warning method, system and medium
CN109360417B (en) Dangerous driving behavior identification and pushing method and system based on block chain
CN107204055A (en) Intelligent connected driving recorder
CN111951548B (en) Vehicle driving risk determination method, device, system and medium
CN111681454A (en) Vehicle-vehicle cooperative anti-collision early warning method based on driving behaviors
CN110576808B (en) Vehicle, in-vehicle head unit, and artificial-intelligence-based scene information pushing method
CN113837127A (en) Map and V2V data fusion model, method, system and medium
CN110793537A (en) Navigation path recommendation method, in-vehicle head unit, and vehicle
CN103810877A (en) Automobile information interaction safety system
CN111882924A (en) Vehicle testing system, driving behavior judgment control method and accident early warning method
CN115797403A (en) Traffic accident prediction method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination