CN110782670A - Scene restoration method based on data fusion, vehicle cloud platform and storage medium - Google Patents

Info

Publication number
CN110782670A
CN110782670A (application CN201911071145.XA)
Authority
CN
China
Prior art keywords
information
data
scene
accident
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911071145.XA
Other languages
Chinese (zh)
Inventor
孙学龙
陈新
孙靓
许永在
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BAIC Motor Co Ltd
Beijing Automotive Group Co Ltd
Beijing Automotive Research Institute Co Ltd
Original Assignee
BAIC Motor Co Ltd
Beijing Automotive Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BAIC Motor Co Ltd, Beijing Automotive Research Institute Co Ltd filed Critical BAIC Motor Co Ltd
Priority to CN201911071145.XA priority Critical patent/CN110782670A/en
Publication of CN110782670A publication Critical patent/CN110782670A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137: Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network

Landscapes

  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a scene restoration method based on data fusion, a vehicle cloud platform, and a storage medium. The scene restoration method based on data fusion includes the following steps: when an accident restoration request for a target vehicle is received, acquiring road condition data from a traffic cloud platform; fusing the road condition data with the driving data of the target vehicle to obtain accident scene information; and constructing an accident scene condition according to the accident scene information. By fusing the road condition data with the driving data of the target vehicle to obtain accident scene information, the method can quickly construct and restore the accident scene condition from that information, so that relevant personnel can clearly, truly, and completely understand the actual situation of the target vehicle at the time of the accident, determine accident responsibility quickly and accurately, and also check whether the target vehicle had a fault.

Description

Scene restoration method based on data fusion, vehicle cloud platform and storage medium
Technical Field
The application relates to the technical field of automatic driving, in particular to a scene restoration method based on data fusion, a vehicle cloud platform and a storage medium.
Background
At present, development of the autonomous driving industry continues to accelerate, and research on autonomous vehicles is increasingly active. At the same time, accidents during autonomous driving tests are difficult to avoid, and the responsibility for such accidents is hard to determine, so the accident scene needs to be quickly restored to support responsibility determination.
Disclosure of Invention
The embodiment of the application aims to disclose a scene restoration method based on data fusion and a vehicle cloud platform, which are used for restoring an accident scene so that accident responsibility can be determined.
In a first aspect of the present application, a scene restoration method based on data fusion is applied to a vehicle cloud platform, and the method includes:
when an accident restoration request for a target vehicle is received, acquiring road condition data from a traffic cloud platform;
fusing road condition data and driving data of a target vehicle to obtain accident scene information;
and constructing an accident scene condition according to the accident scene information.
According to the embodiment of the application, the accident scene information is obtained by fusing the road condition data with the driving data of the target vehicle, and the accident scene condition can then be quickly constructed and restored from that information, so that relevant personnel can clearly, truly, and completely understand the actual situation of the target vehicle at the time of the accident, determine accident responsibility quickly and accurately, and also check whether the target vehicle had a fault.
In some optional embodiments, fusing the road condition data and the driving data of the target vehicle and obtaining the accident scene information includes:
normalizing the road condition data, so that the space-time information of different objects in the road condition data is converted into the same data coordinate system and imported into a scene simulation model;
and importing the driving data of the target vehicle into the scene simulation model to obtain the accident scene information.
In the optional embodiment, the road condition data is normalized, so that the road condition data can be converted into a dimensionless expression to become a scalar, and the scene simulation model can obtain the accident scene information according to the scalar.
In some optional embodiments, before constructing the incident scene status from the incident scene information, the method further comprises:
and constructing a scene simulation model according to at least one training sample, wherein the scene simulation model comprises at least one simulation index.
In this optional embodiment, the training samples are collected from real samples, and the training samples may be training samples under different conditions, so that the scene simulation model constructed based on the training samples can accurately generate accident scene information.
In some optional embodiments, the obtaining the road condition data from the traffic cloud platform includes:
sending a data pulling request to a traffic cloud platform, wherein the data pulling request comprises accident occurrence time period information and accident occurrence position information;
and receiving road condition data screened from the road condition database by the traffic cloud platform according to the accident occurrence time period information and the accident occurrence position.
In the optional embodiment, the road condition data screened from the road condition database according to the accident occurrence time period information and the accident occurrence position can reduce the simulation calculation amount of the scene simulation model, and meanwhile, the accident scene conditions of the scene simulation model construction and restoration can be better matched with the accident scene, so that the accuracy of accident responsibility judgment can be further improved.
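As a rough sketch of this screening step, the helper below (all names are hypothetical, not from the patent) keeps only roadside records that fall inside the accident time window and within a simple coordinate box around the accident position:

```python
from dataclasses import dataclass

@dataclass
class RoadRecord:
    timestamp: float  # seconds since epoch
    location: tuple   # (lat, lon)
    payload: dict     # e.g. surrounding vehicle / pedestrian / obstacle info

def screen_records(records, t_start, t_end, accident_pos, radius_deg=0.01):
    """Return only the records inside the accident time period and near
    the accident position, reducing the data fed to the scene
    simulation model."""
    def near(pos):
        return (abs(pos[0] - accident_pos[0]) <= radius_deg and
                abs(pos[1] - accident_pos[1]) <= radius_deg)
    return [r for r in records
            if t_start <= r.timestamp <= t_end and near(r.location)]
```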
In some optional embodiments, the road condition data includes at least one of surrounding vehicle information, surrounding pedestrian information, and surrounding obstacle information. In the optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the peripheral vehicle information, the peripheral pedestrian information and the peripheral obstacle information.
In some optional embodiments, the nearby vehicle information includes at least one of position information of the nearby vehicle, shape information of the nearby vehicle, traveling speed information of the nearby vehicle, and traveling direction information of the nearby vehicle. In this optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the position information of the surrounding vehicle, the shape information of the surrounding vehicle, and the running speed information of the surrounding vehicle.
In some optional embodiments, the accident scene condition includes the traveling track information of the surrounding vehicles and the traveling track information of the surrounding pedestrians. In the optional embodiment, the running track information of the surrounding vehicles and the running track information of the surrounding pedestrians can visually display the accident scene to related personnel, so that the accuracy of accident responsibility judgment is improved.
In some optional embodiments, the driving data includes at least one of traveling speed information of the target vehicle, traveling acceleration information of the target vehicle, traveling route decision information of the target vehicle, and traveling control information of the target vehicle. In this alternative embodiment, the running speed information of the target vehicle, the running acceleration information of the target vehicle, the running route decision information of the target vehicle, and the running control information of the target vehicle can further improve the accuracy of the accident scene condition.
A second aspect of the present application discloses a vehicle cloud platform, the vehicle cloud platform including:
the acquiring unit is used for acquiring road condition data from the traffic cloud platform when an accident restoration request for a target vehicle is received;
the fusion unit is used for fusing the road condition data and the driving data of the target vehicle to obtain accident scene information;
and the construction unit is used for constructing the accident scene condition according to the accident scene information.
By executing the scene restoration method based on data fusion, the vehicle cloud platform of the embodiment of the application can fuse road condition data with the driving data of a target vehicle to obtain accident scene information, and can then quickly construct and restore the accident scene condition from that information, so that relevant personnel can clearly, truly, and completely understand the actual situation of the target vehicle at the time of the accident, determine accident responsibility quickly and accurately, and also check whether the target vehicle had a fault.
The third aspect of the present application further discloses a vehicle cloud platform, wherein the vehicle cloud platform includes:
a processor; and
a memory configured to store machine readable instructions, which when executed by the processor, cause the processor to perform the method for scene restoration based on data fusion of the first aspect of the present application.
The fourth aspect of the present application also discloses a computer-readable storage medium, in which a computer program is stored, and the computer program is executed by a processor to perform the scene restoration method based on data fusion of the first aspect of the present application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic flowchart of a scene restoration method based on data fusion disclosed in an embodiment of the present application;
fig. 2 is a schematic flowchart of a scene restoration method based on data fusion disclosed in the second embodiment of the present application;
FIG. 3 is a schematic flow chart illustrating the sub-steps of step 203 in FIG. 2;
fig. 4 is a schematic flowchart of a scene restoration method based on data fusion disclosed in the third embodiment of the present application;
fig. 5 is a schematic structural diagram of a vehicle cloud platform disclosed in the fourth embodiment of the present application;
fig. 6 is a schematic structural diagram of a vehicle cloud platform disclosed in the fifth embodiment of the present application;
fig. 7 is a schematic structural diagram of a vehicle cloud platform disclosed in the sixth embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In this application, both the vehicle cloud platform and the traffic cloud platform may be formed by one or more servers, and when a plurality of servers form the vehicle cloud platform and the traffic cloud platform, the plurality of servers may be deployed in a distributed manner.
In the present application, the target vehicle may be an autonomous vehicle, wherein the autonomous vehicle is mounted with an in-vehicle terminal capable of communicating with a vehicle cloud platform.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a scene restoration method based on data fusion, which is disclosed in an embodiment of the present application and is applied to a vehicle cloud platform. As shown in fig. 1, the method comprises the steps of:
101. and when an accident restoration request aiming at the target vehicle is received, acquiring road condition data from the traffic cloud platform.
In the embodiment of the application, when relevant personnel need to determine the accident responsibility of a target vehicle involved in a traffic accident, they can send an accident restoration request to the vehicle cloud platform through the vehicle-mounted terminal installed on the target vehicle.
In the embodiment of the application, the accident restoration request at least comprises the unique identification code of the target vehicle, so that the vehicle cloud platform can acquire the road condition data associated with the unique identification code of the target vehicle from the traffic cloud platform.
In this embodiment of the application, optionally, the accident restoration request may carry driving data of the target vehicle, where the driving data includes at least one of driving speed information of the target vehicle, driving acceleration information of the target vehicle, driving route decision information of the target vehicle, and driving control information of the target vehicle. In this alternative embodiment, the running speed information of the target vehicle, the running acceleration information of the target vehicle, the running route decision information of the target vehicle, and the running control information of the target vehicle can further improve the accuracy of the accident scene condition.
In the embodiment of the present application, further optionally, the driving data of the target vehicle may further include environmental data that can be detected by the target vehicle, such as surrounding vehicle information that can be detected by the target vehicle, surrounding pedestrian information that can be detected by the target vehicle, obstacle information that can be detected by the target vehicle, and the like. Further, the nearby vehicle information that can be detected by the target vehicle includes at least the position, shape, size, traveling direction, traveling speed of the vehicle; the peripheral pedestrian information which can be detected by the target vehicle at least comprises the position, the size, the driving direction and the driving speed of the pedestrian; the obstacle information that can be detected by the target vehicle includes at least the position, size, shape of the obstacle.
In this embodiment, optionally, the target vehicle may detect the environmental data through a camera or a radar carried by the target vehicle.
In the embodiment of the application, optionally, the target vehicle may upload the driving data to the traffic cloud platform in real time along its driving route, or may first store the data corresponding to each time node in the vehicle-mounted terminal and upload the driving data to the traffic cloud platform once a preset upload trigger threshold is met.
Illustratively, at eight o'clock every night, the vehicle-mounted terminal executes a data upload command to upload the driving data generated by the target vehicle that day to the traffic cloud platform.
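A minimal sketch of such threshold-triggered uploading (the class and parameter names are illustrative, not from the patent; here the trigger threshold is a buffer size rather than a clock time):

```python
class InVehicleTerminal:
    """Buffers per-time-node driving data in the vehicle-mounted terminal
    and flushes it to the traffic cloud platform when a preset trigger
    threshold is met (here: number of buffered records)."""

    def __init__(self, upload_fn, threshold=3):
        self.buffer = []           # data stored per time node before upload
        self.upload_fn = upload_fn # sends a batch to the traffic cloud platform
        self.threshold = threshold # preset upload trigger threshold

    def record(self, time_node, data):
        self.buffer.append((time_node, data))
        if len(self.buffer) >= self.threshold:  # trigger threshold met
            self.upload_fn(self.buffer)         # upload the whole batch
            self.buffer = []                    # start a fresh buffer
```

A scheduled nightly upload, as in the example above, would simply call `upload_fn` from a timer instead of the size check.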
In this embodiment of the application, the road condition data is collected by at least one roadside unit, and multiple roadside units may be arranged at intervals; for example, along the driving route of the target vehicle, one roadside unit is arranged every 50 meters to monitor the road conditions along the route and generate road condition data.
Specifically, each roadside unit is equipped with at least a rotatable high-definition camera and a remote communication device, where the rotatable high-definition camera is used to monitor the road conditions and generate road condition data, and the remote communication device is used to send the road condition data to the traffic cloud platform. Optionally, the remote communication device may be a 4G or 5G communication module; the 5G communication module is preferred in this embodiment because it improves the data transmission speed.
In this embodiment of the application, optionally, after the traffic cloud platform receives the road condition data, the traffic cloud platform classifies the road condition data according to a preset data classification processing rule to store the road condition data in a classified manner.
Illustratively, the traffic cloud platform first classifies the road condition data according to time nodes and then secondarily classifies it according to location. By classifying and storing the road condition data in this way, the traffic cloud platform can return data within a specified range to the vehicle cloud platform.
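The two-level classified storage described above might look like the following sketch (the keys and record shapes are assumptions for illustration, not specified by the patent):

```python
from collections import defaultdict

def classify(records):
    """Two-level classified storage on the traffic cloud platform:
    the primary key is the time node, the secondary key is the location,
    so a query for a given time window and place touches only the
    data in the specified range."""
    store = defaultdict(lambda: defaultdict(list))
    for rec in records:
        store[rec["time_node"]][rec["location"]].append(rec)
    return store
```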
In this embodiment of the application, optionally, the traffic cloud platform further receives the road condition data, and verifies the road condition data according to a preset data verification rule, where verifying the road condition data according to the preset data verification rule at least includes performing data consistency verification on the road condition data and performing data integrity verification on the road condition data.
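One plausible shape for these two checks, assuming the roadside unit sends a digest alongside each record (the digest mechanism and field names are illustrative assumptions, not specified by the patent):

```python
import hashlib
import json

# Assumed schema: the fields every road condition record must carry.
REQUIRED_FIELDS = {"time_node", "location", "payload"}

def verify(record, expected_digest):
    """Data consistency check: the record's hash must match the digest
    transmitted with it (i.e. the data was not altered in transit).
    Data integrity check: all required fields must be present."""
    blob = json.dumps(record, sort_keys=True).encode()
    consistent = hashlib.sha256(blob).hexdigest() == expected_digest
    complete = REQUIRED_FIELDS <= record.keys()
    return consistent and complete
```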
In the embodiment of the present application, optionally, the road condition data includes at least one of surrounding vehicle information, surrounding pedestrian information, and surrounding obstacle information. In the optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the peripheral vehicle information, the peripheral pedestrian information and the peripheral obstacle information.
In the embodiment of the present application, optionally, the nearby vehicle information includes at least one of position information of the nearby vehicle, shape information of the nearby vehicle, traveling speed information of the nearby vehicle, and traveling direction information of the nearby vehicle. In this optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the position information of the surrounding vehicle, the shape information of the surrounding vehicle, and the running speed information of the surrounding vehicle.
102. And fusing the road condition data and the driving data of the target vehicle to obtain accident scene information.
103. And constructing an accident scene condition according to the accident scene information.
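Steps 101 to 103 can be sketched as a small pipeline; the fusion and construction functions below are trivial stand-ins for the scene simulation model, with all names hypothetical:

```python
def fuse(road_data, driving_data):
    """Step 102: merge roadside observations with the target vehicle's
    own driving records into one accident scene information structure."""
    return {"road": road_data, "vehicle": driving_data}

def build_scene(scene_info):
    """Step 103: construct an accident scene condition (a textual
    stand-in here for a 2D/3D reconstruction)."""
    return f"scene with {len(scene_info['road'])} roadside records"

def restore_scene(road_data, driving_data):
    """Steps 102-103 chained, after the road condition data has been
    pulled from the traffic cloud platform (step 101)."""
    return build_scene(fuse(road_data, driving_data))
```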
In the embodiment of the application, optionally, the accident scene condition may be displayed by using a 3D model or a 2D model, wherein, preferably, the accident scene condition is displayed by using the 3D model, so that related personnel can intuitively know the accident scene condition.
According to the embodiment of the application, the accident scene information is obtained by fusing the road condition data with the driving data of the target vehicle, and the accident scene condition can then be quickly constructed and restored from that information, so that relevant personnel can clearly, truly, and completely understand the actual situation of the target vehicle at the time of the accident, determine accident responsibility quickly and accurately, and also check whether the target vehicle had a fault.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a scene restoration method based on data fusion according to an embodiment of the present application. As shown in fig. 2, the scene restoration method based on data fusion includes:
201. when an accident recovery request for a target vehicle is received, acquiring road condition data from a traffic cloud platform;
202. constructing a scene simulation model according to at least one training sample, wherein the scene simulation model comprises at least one simulation index;
203. fusing road condition data and driving data of a target vehicle to obtain accident scene information;
204. and constructing an accident scene condition according to the accident scene information.
In the embodiment of the present application, as shown in fig. 3, step 203 includes the sub-steps of:
2031. and normalizing the road condition data to convert the space-time information of different objects in the road condition data into the same data coordinate system and introduce the space-time information into the scene simulation model.
In the embodiment of the present application, as an optional implementation manner, normalizing the road condition data includes processing the road condition data with either a dispersion (min-max) normalization algorithm or a mean normalization algorithm.
In this optional embodiment, the dispersion normalization algorithm and the mean normalization algorithm map data with different units and different numerical ranges onto dimensionless scalars in a fixed interval (0 to 1 for the dispersion algorithm), making it possible to process multi-source, multi-dimensional road condition data, for example by transforming the spatio-temporal information of different objects in the road condition data into the same data coordinate system.
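Both algorithms can be sketched in a few lines. The patent does not give formulas, so these are the standard textbook definitions; note that mean normalization centers values around zero rather than mapping strictly into 0 to 1:

```python
def min_max_normalize(values):
    """Dispersion (min-max) normalization: maps values with arbitrary
    units and ranges onto dimensionless scalars in [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def mean_normalize(values):
    """Mean normalization: centers values on their mean, scaled by the
    range, yielding dimensionless scalars in [-0.5, 0.5]."""
    lo, hi = min(values), max(values)
    mean = sum(values) / len(values)
    return [(v - mean) / (hi - lo) for v in values]
```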
In the embodiment of the present application, the object in the traffic data refers to a data object encapsulated by a plurality of related data, for example, the speed and position information of a pedestrian is encapsulated into a data object named as a person. Data objects are object-oriented data representations that facilitate persistent storage of data. In the embodiment of the application, the object-oriented data encapsulation form is adopted, so that the persistent storage of the road condition data can be facilitated.
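A minimal illustration of such a data object, using the patent's "person" example (the field names and serialization choice are assumptions made for the sketch):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Person:
    """Pedestrian data object: related raw measurements (speed and
    position) encapsulated together, in an object-oriented form that is
    convenient for persistent storage."""
    speed: float     # pedestrian speed in the common coordinate system
    position: tuple  # (x, y) in the common data coordinate system

    def to_json(self):
        # Serialize for persistent storage on the cloud platform.
        return json.dumps(asdict(self))
```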
In the optional embodiment, the road condition data is normalized, so that the road condition data can be converted into a dimensionless expression to become a scalar, and the scene simulation model can obtain the accident scene information according to the scalar.
2032. And importing the driving data of the target vehicle into the scene simulation model to obtain the accident scene information.
In the embodiment of the application, the scene simulation model can construct the accident scene information according to the road condition data and the driving data, so that the accident scene information contains both the data which can be detected by the target vehicle and the data which cannot be detected by the target vehicle, and the actual accident scene can be more accurately and truly mapped.
In the embodiment of the application, the scene simulation model includes at least one index, such as a pedestrian speed index. In the process of importing the driving data and the normalized road condition data into the scene simulation model, the vehicle cloud platform sequentially associates the driving data and the normalized road condition data with the indexes in the scene simulation model. For example, assuming the scene simulation model has a pedestrian speed index and the road condition data has a data item named "pedestrian speed" with the value "10", then when the road condition data is imported into the scene simulation model, the value of the pedestrian speed index becomes "10".
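This name-based association of imported data items with simulation indices might be sketched as follows (all identifiers are hypothetical):

```python
def bind_indices(model_indices, road_condition_data):
    """Associate each imported data item with the simulation index of
    the same name; items without a matching index are ignored, and
    unmatched indices are simply left unset."""
    return {idx: road_condition_data[idx]
            for idx in model_indices
            if idx in road_condition_data}
```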
In the embodiment of the present application, optionally, the scene simulation model may be constructed based on one basic simulation model and a large number of training samples, wherein the basic simulation model may be trained by a machine learning algorithm based on the training samples. Further, the training samples may be training samples in different scenes, for example, training samples in rainy or sunny days. Therefore, the scene simulation model can construct accident scene conditions according to different scenes, and the simulation precision of the scene simulation model can be improved.
It should be noted that step 202 may be executed after step 201 and before step 203, or may be executed before step 201, which is not limited in this embodiment of the present application.
Please refer to detailed descriptions of step 101 and step 103 in the first embodiment of the present application for detailed descriptions of step 201 and step 204, which are not repeated herein.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a scene restoration method based on data fusion according to an embodiment of the present application. As shown in fig. 4, in the embodiment of the present application, the method includes the steps of:
301. when an accident recovery request aiming at a target vehicle is received, sending a data pulling request to a traffic cloud platform, wherein the data pulling request comprises accident occurrence time period information and accident occurrence position information;
302. receiving road condition data screened from a road condition database by a traffic cloud platform according to the accident occurrence time period information and the accident occurrence position;
303. constructing a scene simulation model according to at least one training sample, wherein the scene simulation model comprises at least one simulation index;
304. normalization processing is carried out on the road condition data, so that the space-time information of different objects in the road condition data is converted into the same data coordinate system and is led into a scene simulation model;
305. importing the driving data of the target vehicle into the scene simulation model to obtain accident scene information;
306. and constructing an accident scene condition according to the accident scene information.
In the embodiment of the application, the road condition data screened from the road condition database according to the accident occurrence time period information and the accident occurrence position can reduce the simulation calculation amount of the scene simulation model, and meanwhile, the accident scene conditions of the scene simulation model construction and restoration can be better matched with the accident scene, so that the accuracy of accident responsibility judgment can be further improved.
It should be noted that step 303 may be executed after step 302 and before step 304, or may be executed before step 302, which is not limited in this embodiment of the application.
Please refer to detailed descriptions of step 202, step 203, step 204, sub-step 2031, and sub-step 2032 in the second embodiment of the present application for detailed descriptions of step 303, step 304, step 305, and step 306, which are not described herein again.
In the embodiment of the present application, the accident scene condition includes the traveling track information of the surrounding vehicles and the traveling track information of the surrounding pedestrians. In the optional embodiment, the running track information of the surrounding vehicles and the running track information of the surrounding pedestrians can visually display the accident scene to related personnel, so that the accuracy of accident responsibility judgment is improved.
Example four
Referring to fig. 5, fig. 5 is a schematic structural diagram of a vehicle cloud platform disclosed in the embodiment of the present application. As shown in fig. 5, in the embodiment of the present application, the vehicle cloud platform includes an obtaining unit 401, a fusing unit 402, and a constructing unit 403, where:
the acquiring unit 401 is configured to acquire road condition data from the traffic cloud platform when an accident recovery request for the target vehicle is received.
In the embodiment of the application, when relevant personnel need to determine the accident responsibility of a target vehicle involved in a traffic accident, they can send an accident restoration request to the vehicle cloud platform through the vehicle-mounted terminal installed on the target vehicle.
In the embodiment of the application, the accident restoration request at least comprises the unique identification code of the target vehicle, so that the vehicle cloud platform can acquire the road condition data associated with the unique identification code of the target vehicle from the traffic cloud platform.
In this embodiment of the application, optionally, the accident restoration request may carry driving data of the target vehicle, where the driving data includes at least one of driving speed information of the target vehicle, driving acceleration information of the target vehicle, driving route decision information of the target vehicle, and driving control information of the target vehicle. In this alternative embodiment, the running speed information of the target vehicle, the running acceleration information of the target vehicle, the running route decision information of the target vehicle, and the running control information of the target vehicle can further improve the accuracy of the accident scene condition.
In this embodiment of the present application, further optionally, the driving data of the target vehicle may further include environmental data that can be detected by the target vehicle, such as surrounding vehicle information, surrounding pedestrian information, and obstacle information detectable by the target vehicle. Further, the surrounding vehicle information detectable by the target vehicle includes at least the position, shape, size, traveling direction, and traveling speed of each vehicle; the surrounding pedestrian information detectable by the target vehicle includes at least the position, size, moving direction, and moving speed of each pedestrian; and the obstacle information detectable by the target vehicle includes at least the position, size, and shape of each obstacle.
In this embodiment, optionally, the target vehicle may detect the environmental data through a camera or a radar carried by the target vehicle.
In this embodiment of the application, optionally, the target vehicle may upload the driving data to the traffic cloud platform in real time along its driving route, or may first store the data corresponding to each time node in the vehicle-mounted terminal and, when a preset upload trigger threshold is met, have the vehicle-mounted terminal upload the driving data to the traffic cloud platform.
Illustratively, at eight o'clock every night, the vehicle-mounted terminal executes a data upload command to upload the driving data generated by the target vehicle that day to the traffic cloud platform.
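The threshold-triggered batch upload described above can be sketched as follows; all names, the record fields, and the count-based trigger threshold are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DrivingRecord:
    # hypothetical per-time-node driving data fields
    timestamp: float
    speed: float
    acceleration: float

@dataclass
class OnboardBuffer:
    """Buffers driving data on the vehicle-mounted terminal and uploads a
    batch once a preset trigger threshold (here: a record count) is met."""
    upload: Callable[[List[DrivingRecord]], None]  # sends a batch to the traffic cloud platform
    threshold: int = 100
    records: List[DrivingRecord] = field(default_factory=list)

    def add(self, record: DrivingRecord) -> None:
        self.records.append(record)
        if len(self.records) >= self.threshold:
            self.upload(self.records)  # batch upload, then clear the buffer
            self.records = []

# usage: 5 records with a threshold of 3 trigger one batch upload of 3
uploaded: List[DrivingRecord] = []
buf = OnboardBuffer(upload=uploaded.extend, threshold=3)
for t in range(5):
    buf.add(DrivingRecord(timestamp=float(t), speed=10.0 + t, acceleration=0.1))
```

A time-based trigger (such as the eight-o'clock example) would simply call the same upload path from a scheduler instead of from `add`.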
In this embodiment of the present application, the road condition data is collected by at least one roadside unit; for example, one roadside unit is arranged every 50 meters along the driving route of the target vehicle to monitor the road conditions along the route and generate the road condition data.
Specifically, each roadside unit is equipped with at least a rotatable high-definition camera and a remote communication device, where the rotatable high-definition camera is used to monitor road conditions and generate road condition data, and the remote communication device is used to send the road condition data to the traffic cloud platform. Optionally, the remote communication device may be a 4G communication module or a 5G communication module; the 5G communication module is preferred in this embodiment because it improves the data transmission speed.
In this embodiment of the application, optionally, after the traffic cloud platform receives the road condition data, the traffic cloud platform classifies the road condition data according to a preset data classification processing rule to store the road condition data in a classified manner.
Illustratively, the traffic cloud platform first classifies the road condition data according to time nodes and then secondarily classifies it according to location. By classifying and storing the road condition data in this way, the traffic cloud platform can return data within a specified range to the vehicle cloud platform.
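A minimal sketch of this two-level classified storage follows; the class name, key formats, and record schema are illustrative, since the patent does not specify them:

```python
from collections import defaultdict

class RoadConditionStore:
    """Stores road condition records keyed first by time node, then by
    place, so that data within a specified range can be returned."""

    def __init__(self) -> None:
        # time node -> place -> list of records
        self._by_time = defaultdict(lambda: defaultdict(list))

    def put(self, time_node: str, place: str, record: dict) -> None:
        self._by_time[time_node][place].append(record)

    def query(self, time_nodes, place) -> list:
        """Return every record at `place` during any of `time_nodes`."""
        return [r for t in time_nodes for r in self._by_time[t][place]]

store = RoadConditionStore()
store.put("2019-11-05T08", "intersection_A", {"vehicles": 12})
store.put("2019-11-05T09", "intersection_A", {"vehicles": 7})
store.put("2019-11-05T08", "intersection_B", {"vehicles": 3})
hits = store.query(["2019-11-05T08", "2019-11-05T09"], "intersection_A")
# hits contains the two intersection_A records
```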
In this embodiment of the application, optionally, after receiving the road condition data, the traffic cloud platform verifies it according to a preset data verification rule, where the verification at least includes performing a data consistency check and a data integrity check on the road condition data.
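The patent names the two checks but not their rules; the sketch below assumes one plausible rule for each (required non-empty fields for integrity, non-decreasing timestamps and a single place for consistency):

```python
def verify_integrity(record: dict, required=("time", "place", "payload")) -> bool:
    """Data integrity check: every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in required)

def verify_consistency(records: list) -> bool:
    """Data consistency check: timestamps are non-decreasing and all
    records refer to the same place."""
    times = [r["time"] for r in records]
    return times == sorted(times) and len({r["place"] for r in records}) == 1

ok = [{"time": 1, "place": "A", "payload": "x"},
      {"time": 2, "place": "A", "payload": "y"}]
# verify_integrity(ok[0]) and verify_consistency(ok) both hold;
# a record missing its place, or out-of-order timestamps, would fail
```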
In the embodiment of the present application, optionally, the road condition data includes at least one of surrounding vehicle information, surrounding pedestrian information, and surrounding obstacle information. In the optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the peripheral vehicle information, the peripheral pedestrian information and the peripheral obstacle information.
In the embodiment of the present application, optionally, the nearby vehicle information includes at least one of position information of the nearby vehicle, shape information of the nearby vehicle, traveling speed information of the nearby vehicle, and traveling direction information of the nearby vehicle. In this optional embodiment, the matching degree between the accident scene condition and the accident scene can be further improved by the position information of the surrounding vehicle, the shape information of the surrounding vehicle, and the running speed information of the surrounding vehicle.
A fusion unit 402, configured to fuse the road condition data and the driving data of the target vehicle to obtain accident scene information;
and a construction unit 403, configured to construct an accident scene status according to the accident scene information.
In this embodiment of the application, optionally, the accident scene condition may be displayed using a 3D model or a 2D model; the 3D model is preferred, so that related personnel can intuitively understand the accident scene condition.
According to the embodiment of the application, the accident scene information is obtained by fusing the road condition data and the driving data of the target vehicle, and the accident scene condition can then be quickly constructed and restored from that information, so that related personnel can understand the actual situation of the target vehicle at the time of the accident more clearly, truthfully, and completely, determine accident responsibility conveniently, quickly, and accurately, and also find out from the accident scene condition whether the target vehicle had a fault.
It should be noted that the vehicle cloud platform and the traffic cloud platform in the present application may each be formed by one or more servers; when multiple servers form a platform, they may be deployed in a distributed manner.
Example five
Referring to fig. 6, fig. 6 is a schematic structural diagram of a vehicle cloud platform disclosed in the embodiment of the present application. As shown in fig. 6, in the embodiment of the present application, the vehicle cloud platform includes, in addition to an obtaining unit 401, a fusing unit 402, and a constructing unit 403:
a modeling unit 404, configured to construct a scene simulation model according to the at least one training sample, where the scene simulation model includes at least one simulation indicator.
In this optional embodiment, the training samples are collected from real samples, and the training samples may be training samples under different conditions, so that the scene simulation model constructed based on the training samples can accurately generate accident scene information.
In this embodiment of the present application, as an optional implementation manner, the obtaining unit 401 includes:
the sending sub-unit 4011 is configured to send a data pulling request to the traffic cloud platform, where the data pulling request includes accident occurrence time period information and accident occurrence position information;
the receiving sub-unit 4012 is configured to receive the road condition data screened from the road condition database by the traffic cloud platform according to the accident occurrence time period information and the accident occurrence position.
In this optional embodiment, screening the road condition data from the road condition database according to the accident occurrence time period information and the accident occurrence position reduces the simulation workload of the scene simulation model, and also allows the accident scene conditions constructed and restored by the scene simulation model to better match the actual accident scene, thereby further improving the accuracy of accident responsibility determination.
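One way this screening could work, sketched over an in-memory record list with a time window and a planar distance test; the field names and the `radius` parameter are assumptions, not specified by the patent:

```python
def screen_road_conditions(database, start, end, position, radius):
    """Keep records whose timestamp lies in [start, end] and whose
    location is within `radius` of the accident position."""
    px, py = position
    return [
        r for r in database
        if start <= r["time"] <= end
        and (r["x"] - px) ** 2 + (r["y"] - py) ** 2 <= radius ** 2
    ]

db = [
    {"time": 100, "x": 0.0, "y": 0.0},    # in window, at the accident position
    {"time": 100, "x": 500.0, "y": 0.0},  # in window, but too far away
    {"time": 999, "x": 1.0, "y": 1.0},    # outside the time window
]
hits = screen_road_conditions(db, 90, 110, (0.0, 0.0), radius=50.0)
# only the first record survives the screening
```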
In the embodiment of the present application, as an optional implementation manner, the fusion unit 402 includes:
the normalization processing subunit 4021 is configured to perform normalization processing on the traffic data, so that the spatiotemporal information of different objects in the traffic data is converted into the same data coordinate system;
an importing subunit 4022, configured to import the road condition data after the normalization processing into the scene simulation model;
the importing subunit 4022 is further configured to import the driving data of the target vehicle into the scene simulation model to obtain the accident scene information.
In this embodiment of the present application, as an optional implementation manner, normalizing the road condition data includes one of processing the road condition data with a deviation (min-max) normalization algorithm and processing it with a mean normalization algorithm.
In this optional embodiment, the deviation (min-max) normalization algorithm and the mean normalization algorithm map data with different units and different numerical ranges onto a common dimensionless scale (for min-max normalization, the interval 0 to 1), making it possible to process multi-source, multi-dimensional road condition data, for example by transforming the spatiotemporal information of different objects in the road condition data into the same data coordinate system.
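As a sketch, the two normalization algorithms named here can be written as follows; note that mean normalization generally lands in [-1, 1] rather than [0, 1]:

```python
def min_max_normalize(values):
    """Deviation (min-max) normalization: maps values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def mean_normalize(values):
    """Mean normalization: (x - mean) / (max - min)."""
    lo, hi = min(values), max(values)
    m = sum(values) / len(values)
    return [(v - m) / (hi - lo) for v in values]

speeds = [0.0, 5.0, 10.0]  # e.g. raw speeds with an arbitrary range
# min_max_normalize(speeds) → [0.0, 0.5, 1.0]
# mean_normalize(speeds)    → [-0.5, 0.0, 0.5]
```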
In this embodiment of the present application, an object in the road condition data refers to a data object that encapsulates associated data; for example, the speed and position information of a pedestrian is encapsulated into a data object named "person". A data object is an object-oriented data representation that facilitates persistent storage of the data. Adopting this object-oriented form of data encapsulation makes it easier to persistently store the road condition data.
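A sketch of such a data object and one possible persistent (serialized) form, using a hypothetical `Person` object matching the example above; the fields and JSON serialization are assumptions:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Person:
    """Data object encapsulating a pedestrian's associated data."""
    speed: float
    x: float
    y: float

p = Person(speed=1.4, x=10.0, y=-3.5)
stored = json.dumps(asdict(p))           # persistent storage form
restored = Person(**json.loads(stored))  # rebuilt from storage
# restored is equal to the original object p
```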
In this embodiment of the application, the scene simulation model constructs the accident scene information from both the road condition data and the driving data, so the accident scene information contains both data that the target vehicle can detect and data that it cannot, mapping the actual accident scene more accurately and truthfully.
In this embodiment of the application, the scene simulation model includes at least one index, such as a pedestrian speed index. In the process of importing the driving data and the normalized road condition data into the scene simulation model, the vehicle cloud platform sequentially associates them with the indices in the scene simulation model. For example, assuming the scene simulation model has an index representing pedestrian speed, and the road condition data contains a data item named "pedestrian speed" whose value is "10", then when the road condition data is imported into the scene simulation model, the value of the pedestrian speed index in the scene simulation model becomes "10".
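The name-based association just described might look like this sketch; representing the model's indices as a dictionary is an assumption for illustration:

```python
def import_into_model(model_indices: dict, data: dict) -> dict:
    """Associate each imported data item with the simulation index of the
    same name; items with no matching index are ignored."""
    for name, value in data.items():
        if name in model_indices:
            model_indices[name] = value
    return model_indices

model = {"pedestrian speed": None, "vehicle speed": None}
import_into_model(model, {"pedestrian speed": "10", "unrelated item": "99"})
# model["pedestrian speed"] is now "10"; "unrelated item" was not added
```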
In this embodiment of the present application, optionally, the scene simulation model may be constructed from one basic simulation model and a large number of training samples, where differently trained simulation models can be derived from the basic simulation model through machine learning on the training samples. Further, the training samples may come from different scenes, for example rainy days or sunny days, so that the scene simulation model can construct accident scene conditions according to the specific scene, which improves the simulation precision of the scene simulation model.
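As an illustrative sketch only (the patent does not disclose its actual learning procedure), a per-scene correction can be learned from training samples on top of a single basic model:

```python
class SceneSimulationModel:
    """One basic model plus a per-scene correction learned from samples."""

    def __init__(self):
        self.scene_bias = {}

    def train(self, samples):
        """samples: (scene, observed_value, basic_model_prediction) triples."""
        diffs = {}
        for scene, observed, predicted in samples:
            diffs.setdefault(scene, []).append(observed - predicted)
        # average residual per scene becomes that scene's correction
        self.scene_bias = {s: sum(d) / len(d) for s, d in diffs.items()}

    def simulate(self, scene, basic_prediction):
        """Apply the learned correction; unknown scenes fall back to the basic model."""
        return basic_prediction + self.scene_bias.get(scene, 0.0)

m = SceneSimulationModel()
m.train([("rainy", 8.0, 10.0), ("rainy", 9.0, 10.0), ("sunny", 10.5, 10.0)])
# learned corrections: rainy -1.5, sunny +0.5
```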
In the embodiment of the present application, the accident scene condition includes the traveling track information of the surrounding vehicles and the traveling track information of the surrounding pedestrians. In the optional embodiment, the running track information of the surrounding vehicles and the running track information of the surrounding pedestrians can visually display the accident scene to related personnel, so that the accuracy of accident responsibility judgment is improved.
Example six
Referring to fig. 7, fig. 7 is a schematic structural diagram of a vehicle cloud platform disclosed in the embodiment of the present application. As shown in fig. 7, the vehicle cloud platform includes:
a processor 502; and
the memory 501 is configured to store machine-readable instructions, which when executed by the processor 502, cause the processor 502 to perform the steps of the scene restoration method based on data fusion according to any one of the first to third embodiments of the present application.
By executing the scene restoration method based on data fusion, the vehicle cloud platform of this embodiment can fuse road condition data and driving data of a target vehicle to obtain accident scene information, and then quickly construct and restore the accident scene condition from that information, so that related personnel can understand the actual situation of the target vehicle at the time of the accident more clearly, truthfully, and completely, determine accident responsibility conveniently, quickly, and accurately, and also investigate whether the target vehicle had a fault.
Example seven
The embodiment of the application discloses a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program is executed by a processor to perform the steps in the scene restoration method based on data fusion according to any one of the first embodiment to the third embodiment of the application.
By executing the scene restoration method based on data fusion, the computer-readable storage medium of this embodiment can fuse road condition data and driving data of a target vehicle to obtain accident scene information, and then quickly construct and restore the accident scene condition from that information, so that related personnel can understand the actual situation of the target vehicle at the time of the accident more clearly, truthfully, and completely, determine accident responsibility conveniently, quickly, and accurately, and also investigate whether the target vehicle had a fault.
Example eight
The embodiment of the application discloses a computer program product, which comprises a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to make a computer execute the steps in the scene restoration method based on data fusion according to any one of the first to third embodiments of the application.
By executing the scene restoration method based on data fusion, the computer program product of this embodiment can fuse road condition data and driving data of a target vehicle to obtain accident scene information, and then quickly construct and restore the accident scene condition from that information, so that related personnel can understand the actual situation of the target vehicle at the time of the accident more clearly, truthfully, and completely, determine accident responsibility conveniently, quickly, and accurately, and also investigate whether the target vehicle had a fault.
In the embodiments disclosed in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A scene restoration method based on data fusion is applied to a vehicle cloud platform, and comprises the following steps:
when an accident restoration request for a target vehicle is received, acquiring road condition data from a traffic cloud platform;
fusing the road condition data and the driving data of the target vehicle to obtain accident scene information;
and constructing an accident scene condition according to the accident scene information.
2. The scene restoration method based on data fusion of claim 1, wherein fusing the road condition data and the driving data of the target vehicle to obtain accident scene information comprises:
normalizing the road condition data to convert the time-space information of different objects in the road condition data into the same data coordinate system and introduce the time-space information into a scene simulation model;
and importing the driving data of the target vehicle into the scene simulation model to obtain the accident scene information.
3. The data fusion-based scene restoration method according to claim 2, wherein before constructing an accident scene condition from the accident scene information, the method further comprises:
and constructing the scene simulation model according to at least one training sample, wherein the scene simulation model comprises at least one simulation index.
4. The scene restoration method based on data fusion of claim 1, wherein obtaining the road condition data from the traffic cloud platform comprises:
sending a data pulling request to the traffic cloud platform, wherein the data pulling request comprises accident occurrence time period information and accident occurrence position information;
and receiving road condition data screened from a road condition database by the traffic cloud platform according to the accident occurrence time period information and the accident occurrence position.
5. The scene restoration method based on data fusion of any one of claims 1-4, wherein the road condition data comprises at least one of surrounding vehicle information, surrounding pedestrian information, and surrounding obstacle information;
and the nearby vehicle information includes at least one of position information of a nearby vehicle, shape information of the nearby vehicle, traveling speed information of the nearby vehicle, and traveling direction information of the nearby vehicle.
6. The data fusion-based scene restoration method according to claim 5, wherein the accident scene condition includes travel track information of the surrounding vehicle and travel track information of the surrounding pedestrian.
7. The data fusion-based scene restoration method according to claim 1, wherein the driving data includes at least one of driving speed information of the target vehicle, driving acceleration information of the target vehicle, driving route decision information of the target vehicle, and driving control information of the target vehicle.
8. A vehicle cloud platform, comprising:
the acquiring unit is used for acquiring road condition data from the traffic cloud platform when an accident restoration request for a target vehicle is received;
the fusion unit is used for fusing the road condition data and the driving data of the target vehicle to obtain accident scene information;
and the construction unit is used for constructing an accident scene condition according to the accident scene information.
9. A vehicle cloud platform, comprising:
a processor; and
a memory configured to store machine readable instructions that, when executed by the processor, cause the processor to perform the data fusion based scene restoration method of any of claims 1-7.
10. A computer storage medium, characterized in that the computer storage medium stores a computer program, which is executed by a processor to perform the data fusion based scene restoration method according to any one of claims 1 to 7.
CN201911071145.XA 2019-11-05 2019-11-05 Scene restoration method based on data fusion, vehicle cloud platform and storage medium Pending CN110782670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911071145.XA CN110782670A (en) 2019-11-05 2019-11-05 Scene restoration method based on data fusion, vehicle cloud platform and storage medium


Publications (1)

Publication Number Publication Date
CN110782670A true CN110782670A (en) 2020-02-11

Family

ID=69389193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911071145.XA Pending CN110782670A (en) 2019-11-05 2019-11-05 Scene restoration method based on data fusion, vehicle cloud platform and storage medium

Country Status (1)

Country Link
CN (1) CN110782670A (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034013A (en) * 2010-12-30 2011-04-27 长安大学 Automobile oblique collision accident analytic computation and simulative reappearance computer system
CN102034012A (en) * 2010-12-30 2011-04-27 长安大学 Computer system for analyzing, calculating, simulating and reconstructing vehicle-pedestrian collision accident
CN103377494A (en) * 2012-04-25 2013-10-30 财团法人工业技术研究院 Cooperative driving recording system and method
CN104091441A (en) * 2014-05-29 2014-10-08 吉林大学 Motor vehicle hit-and-run path tracking system based on wireless communication technology and vehicle collision information acquiring and transmitting method
CN104463842A (en) * 2014-10-23 2015-03-25 燕山大学 Automobile accident process reappearing method based on motion vision
CN104820763A (en) * 2015-05-25 2015-08-05 西华大学 Traffic accident three-dimensional simulation method based on microscopic traffic simulation software (VISSIM)
CN105828020A (en) * 2015-01-04 2016-08-03 中国移动通信集团辽宁有限公司 Accident reduction control method and accident reduction control system based on Internet of vehicles
CN105975721A (en) * 2016-05-27 2016-09-28 大连楼兰科技股份有限公司 Accident recurrence collision simulation establishing method and accident recurrence collision simulation method based on vehicle real-time motion state
CN107195024A (en) * 2016-06-08 2017-09-22 南京航空航天大学 Universal vehicle operation data record system and processing method
CN108846491A (en) * 2018-08-08 2018-11-20 中链科技有限公司 Car accident processing method and processing device based on block chain
CN108860166A (en) * 2018-05-21 2018-11-23 温州中佣科技有限公司 Processing system and processing method occur for pilotless automobile accident
CN110009903A (en) * 2019-03-05 2019-07-12 同济大学 A kind of scene of a traffic accident restoring method


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563313A (en) * 2020-03-18 2020-08-21 交通运输部公路科学研究所 Driving event simulation reproduction method, system, equipment and storage medium
CN111399481B (en) * 2020-03-30 2022-02-01 东风汽车集团有限公司 Automatic driving scene information collection and remote upgrading method and system
CN112132993A (en) * 2020-08-07 2020-12-25 南京市德赛西威汽车电子有限公司 Traffic accident scene restoration method based on V2X
CN113222331A (en) * 2021-03-29 2021-08-06 北京中交兴路信息科技有限公司 Method, device, equipment and storage medium for identifying authenticity of vehicle accident
CN113222331B (en) * 2021-03-29 2024-03-05 北京中交兴路信息科技有限公司 Method, device, equipment and storage medium for identifying authenticity of vehicle accident
CN113302614A (en) * 2021-04-25 2021-08-24 华为技术有限公司 Data management method and device and terminal equipment
CN113302614B (en) * 2021-04-25 2023-02-03 华为技术有限公司 Data management method and device and terminal equipment

Similar Documents

Publication Publication Date Title
CN110782670A (en) Scene restoration method based on data fusion, vehicle cloud platform and storage medium
CN108334055B (en) Method, device and equipment for checking vehicle automatic driving algorithm and storage medium
CN111179585B (en) Site testing method and device for automatic driving vehicle
Essa et al. Simulated traffic conflicts: do they accurately represent field-measured conflicts?
JP7371157B2 (en) Vehicle monitoring method, device, electronic device, storage medium, computer program, cloud control platform and vehicle-road coordination system
CN110796007B (en) Scene recognition method and computing device
CN107782564A (en) Automatic driving vehicle evaluation system and method
CN112740188A (en) Log-based simulation using biases
CN111178454A (en) Automatic driving data labeling method, cloud control platform and storage medium
CN109510851A (en) The construction method and equipment of map datum
FR3020616A1 (en) DEVICE FOR SIGNALING OBJECTS TO A NAVIGATION MODULE OF A VEHICLE EQUIPPED WITH SAID DEVICE
CN104670155A (en) VSAS (Vehicle Security Alarm System) based on cloud vehicle networking
CN114077541A (en) Method and system for validating automatic control software for an autonomous vehicle
CN114120650B (en) Method and device for generating test results
CN113665570A (en) Method and device for automatically sensing driving signal and vehicle
CN113674523A (en) Traffic accident analysis method, device and equipment
CN113936465A (en) Traffic incident detection method and device
CN114783188A (en) Inspection method and device
CN117056153A (en) Methods, systems, and computer program products for calibrating and verifying driver assistance systems and/or autopilot systems
CN115635961A (en) Sample data generation method and trajectory prediction method and device applying same
CN115165398A (en) Vehicle driving function test method and device, computing equipment and medium
CN114771548A (en) Data logging for advanced driver assistance system testing and verification
CN205264045U (en) Vehicle management system
CN116583891A (en) Critical scene identification for vehicle verification and validation
Li A scenario-based development framework for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211