CN113778108B - Data acquisition system and data processing method based on road side sensing unit - Google Patents


Info

Publication number
CN113778108B
CN113778108B (application CN202111177026.XA)
Authority
CN
China
Prior art keywords
data
target
collision
vehicle
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111177026.XA
Other languages
Chinese (zh)
Other versions
CN113778108A (en
Inventor
曾杰
丁雪聪
张迪思
廖伟
李慎言
胡雄
范立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Testing Vehicle Technology Research Institute Co Ltd
Original Assignee
China Merchants Testing Vehicle Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Testing Vehicle Technology Research Institute Co Ltd filed Critical China Merchants Testing Vehicle Technology Research Institute Co Ltd
Priority to CN202111177026.XA priority Critical patent/CN113778108B/en
Publication of CN113778108A publication Critical patent/CN113778108A/en
Application granted granted Critical
Publication of CN113778108B publication Critical patent/CN113778108B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention belongs to the technical field of data acquisition, and in particular relates to a data acquisition system based on a roadside sensing unit. The roadside sensing unit is deployed on a preset road section and collects the motion data, video stream data and environmental condition data of the traffic participants on that section. A server connected to the roadside sensing unit receives the road-section traffic participant motion data, video stream data and environmental condition data collected by the unit, and processes them with a perception algorithm to obtain a perception data set. A data processing device connected to the server acquires the perception data set and performs data processing on it. Right-of-way game scenes and dangerous scenes between traffic participants are automatically identified and extracted from massive traffic data, realising efficient accumulation of autonomous-driving test scenes.

Description

Data acquisition system and data processing method based on road side sensing unit
Technical Field
The invention belongs to the technical field of the Internet, and in particular relates to a data acquisition system and a data processing method based on a roadside sensing unit.
Background
Acquiring sufficiently reliable road-test data is one of the key challenges facing the advancement of autonomous driving technology. Autonomous-driving test technology offers a key solution to the shortage of such test data. A rich autonomous-driving test scene database can enrich the scenarios used in closed-course real-vehicle tests, and can also be applied to simulation tests of the perception, decision and control algorithms of an autonomous driving system; by further bringing sensors, real vehicles and other hardware into the loop, most of the omissions and defects in the perception-decision-control software and hardware of an autonomous driving system can be exposed, greatly improving test efficiency.
Building the test scene library is therefore a key link in autonomous-driving test technology. The test scenario is the starting point of the simulation hierarchy and plays an extremely important role in it. However, test scenes cannot be fabricated out of thin air: only a normative, reasonable test scene data set that is actually generated, composed of genuinely collected data together with derivative data obtained by reasonably varying and combining its parameters, can serve as input for autonomous-driving tests. The accumulation of scene data must therefore be accomplished in an efficient, fast and reliable way.
The method mainly used by industry today to acquire autonomous-driving simulation test scenes is as follows: specially trained collectors drive a scene-collection vehicle fitted with a redundant perception system and record the right-of-way game scenes encountered during driving; usable scene data are then produced through manual cleaning and checking.
This collection mode has the following disadvantages. 1. High capital expenditure: accumulating a large number of scenes requires purchasing multiple collection vehicles and perception systems, and consumes considerable labour cost. 2. Low collection efficiency: right-of-way game scenes and dangerous scenes occur with low probability, so even after tens of thousands of kilometres of driving only a few genuinely valuable scenes are likely to be captured, and out of concern for the collectors' safety, data on dangerous scenes, accident scenes and the like cannot be collected at all. 3. A single host-vehicle type: because configuring a collection vehicle and its system is costly, the fleet cannot cover many vehicle types, and since the number of specially trained collectors is limited, their driving habits are uniform and fixed. This collection mode therefore cannot accumulate rich and varied scenes with vehicles of diverse types and diverse driving habits as the host vehicle.
Disclosure of Invention
In order to solve the problem that the data acquisition mode of the prior art cannot accumulate rich and varied scenes with vehicles of diverse types and diverse driving habits as the host vehicle, embodiments of the invention provide the following technical solution:
a data acquisition system based on a roadside awareness unit, the system comprising:
the road side sensing unit is arranged on a preset road section and is used for collecting movement data, video stream data and environmental condition data of traffic participants on the preset road section;
the server is connected with the road side sensing unit and is used for receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit; processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set;
the data processing device is connected with the server and used for acquiring the perception data set and carrying out data processing on the perception data set.
Further, the road side sensing unit comprises a collector, a laser radar, a camera and a rainfall illumination sensor, wherein the collector is connected with the laser radar, the camera and the rainfall illumination sensor respectively.
Further,
the laser radar is connected with the collector through a network port;
the camera is connected with the collector through a network port;
the rainfall illumination sensor is connected with the collector through a serial port.
Further, the server is further configured to send a request to the road side awareness unit to obtain the road segment traffic participant movement data, video stream data, and environmental condition data.
Further, the data processing device comprises a data processing unit for processing the perceived dataset.
Further, the data processing device further comprises a data storage unit;
the data storage unit is used for storing the perception data set according to a preset time interval and a file format.
In a second aspect, a data processing method based on a roadside sensing unit comprises: sending a data acquisition request to the roadside sensing unit, the request covering the motion data, video stream data and environmental condition data of traffic participants on a preset road section;
receiving the road section traffic participant motion data, video stream data and environmental condition data acquired by the road side sensing unit;
processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set;
the perceived data set is sent to a data processing device for data processing by the processing device.
Further, sending to the roadside sensing unit a data acquisition request covering the road-section traffic participant motion data, video stream data and environmental condition data comprises:
and sending a request for acquiring data to the road side sensing unit in a release/subscription decoupling mode according to a preset time interval.
Further, receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit, and writing the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit into a scene database.
Further, sending the perceived dataset to a data processing apparatus for data processing by the processing apparatus on the perceived dataset, comprising:
the data processing device marks, extracts and mines the data acquired by the sensing layer to obtain a scene data set.
The invention has the following beneficial effects:
the embodiment of the invention provides a data acquisition system based on a road side sensing unit, which comprises: the road side sensing unit is arranged on a preset road section and is used for collecting movement data, video stream data and environmental condition data of traffic participants on the preset road section; the server is connected with the road side sensing unit and is used for receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit; processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set; the data processing device is connected with the server and used for acquiring the perception data set and carrying out data processing on the perception data set. The original data collected by the road side sensing unit is processed by the server to become a sensing data set taking the target list as a core, then the sensing data set is transmitted to data storage software deployed at a far end, the data storage software is input into automatic scene recognition conversion software, target motion data are automatically processed and analyzed, dangerous driving scenes are cleaned and intercepted, a large number of automatic driving test dangerous scene data sets are efficiently obtained, and efficient accumulation of automatic driving test scenes is realized.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a data acquisition system based on a road side sensing unit according to an exemplary embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps of a data processing method based on a road side sensing unit according to an exemplary embodiment of the present invention.
Fig. 3 is a flowchart illustrating a dangerous scene recognition according to another exemplary embodiment of the present invention.
Fig. 4 is a diagram illustrating a movement relationship between a host vehicle and a target object according to another exemplary embodiment of the present invention.
Fig. 5 is a target velocity projection diagram illustrating another exemplary embodiment of the present invention.
Fig. 6 is a perspective view of a distance between a host vehicle and a target, according to another exemplary embodiment of the present invention.
Fig. 7 is a diagram showing a definition of a motion relationship parameter between a host vehicle and a target according to another exemplary embodiment of the present invention.
Fig. 8 is a diagram showing a motion relationship between a host vehicle and a target at time t according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of protection of the invention.
The conventional roadside units widely deployed today generally contain only a camera, or a camera combined with millimetre-wave radar speed measurement, and cannot accurately and completely acquire all the motion information required to form an autonomous-driving scene.
Referring to fig. 1, fig. 1 is a block diagram of a data acquisition system based on a road side sensing unit according to an exemplary embodiment of the present invention, as shown in fig. 1, the system includes:
the road side sensing unit 11 is arranged on a preset road section and is used for collecting the movement data, the video stream data and the environmental condition data of traffic participants on the preset road section;
in one embodiment, a road side sensing unit is arranged on the road side of a road section with dense traffic flow, and the motion data and the environment data of traffic participants, the video flow data and the environment condition data of the road section are recorded continuously for twenty-four hours and day and night.
The roadside sensing unit comprises a collector, a lidar, a camera, a rainfall sensor and an illumination sensor, the collector being connected to each of them. The lidar and the camera are connected to the collector through network ports; the rainfall-illumination sensor is connected through a serial port. The lidar accurately senses traffic-participant information, and radar splicing and synchronisation technology ensures that each identified object keeps a unique ID throughout the sensing range. The camera collects the video stream files. The rainfall-illumination sensor collects rainfall and ambient illumination data.
Radar splicing and synchronisation are prior art; this application does not improve on them.
The server 12 is connected with the road side sensing unit and is used for receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit; processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set;
specifically, the original data collected by the laser radar are point cloud data, and after the server receives the point cloud data, the point cloud data is processed by a sensing algorithm to form target list data. The target list data is a list of motion parameters of targets detected by the lidar at different times. The method comprises the steps of including the longitude and latitude position, the local coordinate system position, the speed, the acceleration, the course angle and other motion information of all targets entering the detection range, and is key data for reproducing dynamic elements of a scene and judging whether the scene is valuable. And the video stream file collected by the camera provides necessary supplementary checking information for the target list. The rainfall illumination sensor is used for collecting rainfall and environmental illumination data, providing natural environment information for a scene, and is also one of important elements for forming the scene.
And the data processing device 13 is connected with the server and is used for acquiring the perception data set and performing data processing on the perception data set.
In some embodiments, the data processing apparatus comprises a data processing unit; the data processing unit is used for processing the perception data set.
In some embodiments, the data processing apparatus further comprises a data storage unit for storing the perceived data set at preset time intervals and file formats.
In one embodiment, the data storage unit remotely acquires from the edge server, at a time interval and in a file format set by the user, the perception data set collected by the roadside sensing system during that period and processed by the perception algorithm, comprising the lidar target list, the video stream and the rainfall-illumination data.
The perception data set is then input into the automatic scene-recognition and data-conversion software to realise automatic data processing.
It can be appreciated that the present application provides a data acquisition system based on a roadside sensing unit, the system comprising: the roadside sensing unit, deployed on a preset road section, which collects the motion data, video stream data and environmental condition data of the traffic participants on that section; the server, connected to the roadside sensing unit, which receives these data and processes them with a perception algorithm to obtain a perception data set; and the data processing device, connected to the server, which acquires the perception data set and performs data processing on it. The system uses the continuous perception capability formed by splicing multiple lidars to acquire accurate traffic-participant information such as type, size, longitude and latitude, speed, acceleration and heading angle as the main criterion for scene recognition and judgment, uses the video collected by the camera as checking evidence, and uses the data collected by the rainfall-illumination sensor to supplement the necessary environmental parameters. It is therefore more accurate and complete than the traditional mode, and can meet the reconstruction and generalisation requirements of autonomous-driving simulation test scenes.
Deployed on a road section with dense traffic flow, the roadside sensing unit replaces the traditional mode of driving collection vehicles on the road and can continuously collect, twenty-four hours a day, the motion information, video streams and environmental condition information of the traffic participants on that section. Unlike recording data with collection vehicles, no team of professional collectors needs to invest large amounts of time, energy and operational supervision, saving labour cost. Collection efficiency is high: uninterrupted twenty-four-hour acquisition efficiently accumulates a massive perception data set, from which valuable scene data are intercepted automatically, including the dangerous scenes and accident scenes that are rarely captured by traditional collection modes.
In some embodiments, the server is further configured to send a request to the road side awareness unit to obtain the road segment traffic participant movement data, video stream data, and environmental condition data.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a data processing method based on a road side sensing unit according to an exemplary embodiment of the present invention, as shown in fig. 2, the method includes:
step S11, sending a data acquisition request to a road side sensing unit, wherein the data acquisition request comprises acquisition of movement data, video stream data and environmental condition data of traffic participants in a preset road section;
step S12, receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit;
step S13, the motion data, the video stream data and the environmental condition data of the road traffic participants are processed by a perception algorithm to obtain a perception data set;
step S14, the perception data set is sent to a data processing device, so that the processing device can process the perception data set.
The traditional scene-recognition mode is for a professional engineer to manually inspect massive data, find and intercept the valuable scene fragments, and delete the worthless ones. Since genuinely valuable fragments make up only a small fraction of the massive data, automatically, efficiently and accurately identifying and intercepting dangerous scenes from the large traffic data set recorded by the roadside sensing unit is also a key point of the system.
As shown in fig. 3, fig. 3 is a flowchart illustrating a dangerous scene recognition according to another exemplary embodiment of the present invention.
Since the road-end perception data set records, from a god's-eye view, the motion data of all traffic participants passing through the section, the road-end viewpoint must first be projected into the viewpoint of some host vehicle through coordinate conversion before the motion relation between that host vehicle and the other targets can be judged.
First, all targets in a complete data segment are traversed and the vehicle targets are screened out; the segment is then divided into several data segments according to the time each vehicle target spends within the detection range. Each vehicle target is taken in turn as the host vehicle, and the motions of the other targets are projected onto it. The projection method is as follows:
as shown in fig. 4, fig. 4 is a diagram illustrating a movement relationship between a host vehicle and a target object according to another exemplary embodiment of the present invention.
As shown in fig. 5, fig. 5 is a target velocity projection diagram illustrating another exemplary embodiment of the present invention.
According to the absolute speed and heading-angle information of the host vehicle SV and the target TV in the lidar target list data, and taking the SV coordinate system as reference, the absolute speed V_tv of the TV is projected into the host-vehicle coordinate system to obtain the longitudinal speed V_tvx and the lateral speed V_tvy of the target TV.
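A minimal sketch of this projection, assuming a planar frame in which the relative heading alone determines the decomposition (the sign convention, lateral positive to the host's left, is an assumption):

```python
import math

def project_velocity(v_tv, heading_tv_deg, heading_sv_deg):
    """Project the target's absolute speed into the host (SV) frame.

    Returns (v_tvx, v_tvy): the components along and across the
    host vehicle's heading.
    """
    rel = math.radians(heading_tv_deg - heading_sv_deg)
    v_tvx = v_tv * math.cos(rel)  # longitudinal component
    v_tvy = v_tv * math.sin(rel)  # lateral component
    return v_tvx, v_tvy
```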
As shown in fig. 6, fig. 6 is a perspective view showing a distance between a host vehicle and a target according to another exemplary embodiment of the present invention.
Based on the longitude data (x in the figure) and latitude data (y in the figure) of the host vehicle SV and the target TV in the lidar target list, and taking the SV coordinate system as reference, the distance between the two is projected into the host-vehicle coordinate system to obtain the longitudinal distance R_lon and the lateral distance R_lat.
As shown in fig. 7, fig. 7 is a diagram showing the definition of the motion-relation parameters between a host vehicle and a target according to another exemplary embodiment of the present invention. Through this viewpoint conversion, any qualifying traffic participant can be selected as the host vehicle, avoiding the limitation of the traditional collection mode to one host-vehicle type and the behaviour habits of a small number of drivers, and improving the richness and complexity of the scenes.
After the viewpoint conversion, dangerous scenes can be judged and identified. The motion-relation parameters between the current host vehicle and the target vehicle are defined as follows.
As shown in fig. 7, t denotes the time at which the host vehicle and the target vehicle reach the same longitudinal point (i.e., the time required for the longitudinal distance to become 0), calculated as:

t = D_f0 / (V_Ef - V_Of)

where D_f0 is the initial longitudinal distance between the two vehicles and D_ft is the longitudinal distance at time t, with D_ft = 0. V_Ef and V_El are the longitudinal and lateral speeds of the host vehicle; V_Of and V_Ol are the longitudinal and lateral speeds of the target vehicle after projection into the host-vehicle viewpoint.
As shown in fig. 8, fig. 8 is a diagram showing the motion relation between the host vehicle and the target at time t according to another exemplary embodiment of the present invention. Given the initial lateral distance D_l0 between the two vehicles, the lateral distance D_lt at time t is obtained by subtracting from D_l0 the distance by which the two vehicles approach each other during the period t:

D_lt = D_l0 - t × V_Ol
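The two formulas above translate directly to code; the guard against a non-positive closing speed is an added assumption, since t is undefined when the host never catches the target:

```python
def time_to_same_longitudinal_point(d_f0, v_ef, v_of):
    """t = D_f0 / (V_Ef - V_Of): the time for the longitudinal
    distance to shrink from D_f0 to 0."""
    closing = v_ef - v_of
    if closing <= 0:
        return None  # host never reaches the target's longitudinal point
    return d_f0 / closing

def lateral_distance_at_t(d_l0, t, v_ol):
    """D_lt = D_l0 - t * V_Ol."""
    return d_l0 - t * v_ol
```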
The lateral distance between the two vehicles at time t is now known; to judge whether the scene is dangerous, the extent of the collision crisis region must be calculated. The width W of the collision region (the lateral collision crisis range) is calculated as:

W = W_E × V_Ef × Q

where W_E is the width of the host vehicle itself and Q is an empirical parameter used to tune the collision-crisis width region. The collision-crisis width range thus varies with the current speed of the host vehicle.
The longitudinal collision crisis range L is calculated as:

L = (V_Ef - V_Of) × TTC

where V_Ef - V_Of is the longitudinal relative speed of the two vehicles and TTC is an empirical time-to-collision threshold, whose specific value can be set according to empirical data obtained from statistics over a large body of traffic-accident data.
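Both crisis-range formulas are straightforward; the numeric values passed for Q and TTC in the usage below are placeholders, since the text says they are empirical and to be set from accident statistics:

```python
def collision_width(w_e, v_ef, q):
    """W = W_E * V_Ef * Q: lateral collision crisis range, growing
    with the host vehicle's current speed."""
    return w_e * v_ef * q

def collision_length(v_ef, v_of, ttc):
    """L = (V_Ef - V_Of) * TTC: longitudinal collision crisis range."""
    return (v_ef - v_of) * ttc
```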
The distance between the two vehicles calculated at time t is then compared with the collision crisis region to see whether the target vehicle lies inside it at that moment. The result of this comparison is defined as the collision crisis judgment parameter I_C.

If I_C ≤ 0, the target vehicle is in the collision crisis region and a collision crisis exists between the two vehicles.

If I_C > 0, the target vehicle is outside the collision crisis range and no collision crisis exists between the two vehicles.
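The text defines only the sign convention of I_C, not its formula; one plausible construction (an assumption, not stated in the text) compares the lateral offset at time t against half the crisis width:

```python
def collision_crisis(d_lt, w):
    """Illustrative judgment parameter: I_C = |D_lt| - W/2.

    I_C <= 0 means the target vehicle lies inside the collision
    crisis region at time t (a collision crisis exists); I_C > 0
    means it does not. The construction of I_C itself is assumed.
    """
    i_c = abs(d_lt) - w / 2.0
    return i_c, i_c <= 0
```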
In this way the dangerous-scene recognition algorithm evaluates the collision crisis between the current host vehicle and every other traffic participant over the whole course; whenever a collision crisis is found, the data segments 7 seconds before and after the crisis moment are intercepted as one dangerous-scene fragment.
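The interception of the 7-second windows either side of a crisis moment can be sketched as follows (the record layout is an assumption):

```python
def extract_scene_clip(records, crisis_time, window=7.0):
    """Cut one dangerous-scene fragment: all records whose timestamp
    lies within `window` seconds before or after the crisis moment.
    `records` is a time-sorted list of (timestamp, payload) tuples.
    """
    lo, hi = crisis_time - window, crisis_time + window
    return [r for r in records if lo <= r[0] <= hi]
```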
For the video stream and the rainfall-illumination data, time-matched segments are intercepted using the target-list data segment as reference.
The method uses automatic scene-recognition and data-conversion software to identify dangerous scenes. The overall idea is to take the target list data acquired by the lidar as the core; select qualifying host vehicles and convert the motion parameters into their viewpoints; analyse and judge the motion relations between targets using the motion parameters in the target list; evaluate the collision risk between the current host vehicle (SV) and each target vehicle (TV); intercept the target-list fragments containing a crisis; and simultaneously perform matched-segment extraction on the camera video and the rainfall and illumination sensor data, as supplementary information for later scene checking and reproduction. With the motion-projection viewpoint-conversion algorithm, a large number of vehicles of different types driven with different behaviour habits can serve as the host-vehicle viewpoint, so rich and varied scenes can be extracted. Data processing is automated, replacing the original manual auditing mode and obtaining scenes efficiently.
In some embodiments, sending a data acquisition request to the road side sensing unit, the request covering the road segment traffic participant motion data, the video stream data and the environmental condition data, includes:
sending the data acquisition request to the road side sensing unit at a preset time interval in a decoupled publish/subscribe mode.
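The decoupled publish/subscribe request pattern can be illustrated with a tiny in-process broker; a real deployment would use a middleware such as MQTT or DDS, and the topic name and message fields here are made up for the sketch:

```python
import time

class Broker:
    """Minimal in-process publish/subscribe broker: publishers and
    subscribers are decoupled through named topics."""
    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self._subs.get(topic, []):
            cb(message)

def request_loop(broker, interval, n_requests):
    """Publish a data-acquisition request every `interval` seconds
    (hypothetical topic and payload layout)."""
    for i in range(n_requests):
        broker.publish("rsu/request",
                       {"seq": i,
                        "want": ["targets", "video", "environment"]})
        time.sleep(interval)
```

The server only publishes to the request topic; the road side sensing unit subscribes and answers on its own schedule, which is what decouples the two sides.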
In some embodiments, the receiving the road segment traffic participant motion data, the video stream data, and the environmental condition data collected by the road side sensing unit further includes writing the road segment traffic participant motion data, the video stream data, and the environmental condition data collected by the road side sensing unit into a scene database.
In some embodiments, transmitting the perceived data set to a data processing apparatus for data processing by the processing apparatus, comprises:
the data processing device marks, extracts and mines the data acquired by the sensing layer to obtain a scene data set.
It is to be understood that the same or similar parts of the above embodiments may refer to one another; what is not described in detail in one embodiment may refer to the same or similar description in another embodiment.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method description in a flow chart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those skilled in the art to which the embodiments of the present application pertain.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (10)

1. A data acquisition system based on a roadside awareness unit, the system comprising:
the road side sensing unit is arranged on a preset road section and is used for collecting movement data, video stream data and environmental condition data of traffic participants on the preset road section;
the server is connected with the road side sensing unit and is used for receiving the road section traffic participant motion data, the video stream data and the environmental condition data acquired by the road side sensing unit; processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set;
the data processing device is connected with the server and is used for acquiring the perception data set and carrying out data processing on the perception data set;
the data processing includes: projecting the road end view angle of the road section traffic participant motion data in the perception data set to a certain main vehicle view angle through coordinate conversion; judging the motion relation between the current host vehicle and other targets, and judging whether the collision crisis exists between the host vehicle and the targets, wherein the data processing process is as follows:
coordinate conversion:
firstly, traversing all target objects in a complete data segment, screening out vehicle targets, dividing the data segment into a plurality of data segments according to the time of each vehicle target in a detection range, and sequentially taking the vehicle targets as a main vehicle to realize the motion projection of other target objects, wherein the projection method comprises the following steps:
according to the absolute speed information and course angle information of the main vehicle and the target in the laser radar target list data, taking a main vehicle coordinate system as a reference, projecting the absolute speed of the target to the main vehicle coordinate system, and obtaining the longitudinal speed and the transverse speed of the target; wherein the lidar target list data is obtained by the server;
according to longitude data and latitude data of a host vehicle and a target in the laser radar target list, taking a host vehicle coordinate system as a reference, projecting the distance between the host vehicle and the target to the host vehicle coordinate system to obtain a longitudinal distance and a transverse distance;
judging the motion relation between the current main vehicle and other targets:
calculating the time at which the host vehicle and the target longitudinally reach the same point, wherein the calculation formula is:
t = D_f0 / (V_Ef - V_Of)
wherein D_f0 is the initial longitudinal distance between the two vehicles, D_ft is the longitudinal distance between the two vehicles at time t, and D_ft = 0; V_Ef and V_El are the longitudinal and lateral speeds of the host vehicle, respectively; V_Of and V_Ol are the longitudinal and lateral speeds of the target after projection to the host vehicle view angle, respectively;
calculating the motion relation between the host vehicle and the target at time t, wherein the calculation formula is:
D_lt = D_l0 - t × V_Ol
wherein D_l0 is the initial lateral distance between the two vehicles and D_lt is the lateral distance between the two vehicles at time t;
calculating the range of the collision crisis area, wherein the lateral collision crisis range calculation formula is:
W = W_E × V_Ef × Q
wherein W_E is the width of the host vehicle and Q is an empirical parameter for adjusting the width of the collision crisis area;
the longitudinal collision crisis range calculation formula is:
L = (V_Ef - V_Of) × TTC
wherein V_Ef - V_Of is the calculated longitudinal relative speed of the two vehicles, and TTC is an empirical longitudinal time-to-collision threshold obtained from statistics over a large amount of traffic accident data;
calculating a collision crisis judgment parameter, wherein the collision crisis judgment parameter is used for judging whether the target vehicle is inside the collision crisis area at that moment:
if I_C ≤ 0, the target is inside the collision crisis area, and a collision crisis exists between the two vehicles; if I_C > 0, the target is not inside the collision crisis area, and no collision crisis exists between the two vehicles.
2. The data acquisition system of claim 1, wherein the roadside sensing unit comprises a collector, a laser radar, a camera, and a rain illumination sensor, the collector being connected to the laser radar, the camera, and the rain illumination sensor, respectively.
3. The data acquisition system of claim 2, wherein,
the laser radar is connected with the collector through a network port;
the camera is connected with the collector through a network port;
the rainfall illumination sensor is connected with the collector through a serial port.
4. The data acquisition system of claim 1 wherein the server is further configured to send a request to the road side awareness unit to obtain the road segment traffic participant movement data, video stream data, and environmental condition data.
5. The data acquisition system of claim 1, wherein the data processing device comprises a data processing unit for processing the perceived data set.
6. The data acquisition system of claim 5 wherein the data processing device further comprises a data storage unit for storing the perceived data set at a predetermined time interval and file format.
7. A data processing method based on a road side sensing unit is characterized in that,
sending a data acquisition request to a road side sensing unit, wherein the data acquisition request comprises acquisition of movement data, video stream data and environmental condition data of traffic participants in a preset road section;
receiving the road section traffic participant motion data, video stream data and environmental condition data acquired by the road side sensing unit;
processing the road section traffic participant motion data, the video stream data and the environmental condition data through a perception algorithm to obtain a perception data set;
transmitting the perceived data set to a data processing device so that the processing device performs data processing on the perceived data set;
the data processing includes: projecting the road end view angle of the road section traffic participant motion data in the perception data set to a certain main vehicle view angle through coordinate conversion; judging the motion relation between the current host vehicle and other targets, and judging whether the collision crisis exists between the host vehicle and the targets, wherein the data processing process is as follows:
coordinate conversion:
firstly, traversing all target objects in a complete data segment, screening out vehicle targets, dividing the data segment into a plurality of data segments according to the time of each vehicle target in a detection range, and sequentially taking the vehicle targets as a main vehicle to realize the motion projection of other target objects, wherein the projection method comprises the following steps:
according to the absolute speed information and course angle information of the main vehicle and the target in the laser radar target list data, taking a main vehicle coordinate system as a reference, projecting the absolute speed of the target to the main vehicle coordinate system, and obtaining the longitudinal speed and the transverse speed of the target; the laser radar target list data are obtained by a server;
according to longitude data and latitude data of a host vehicle and a target in the laser radar target list, taking a host vehicle coordinate system as a reference, projecting the distance between the host vehicle and the target to the host vehicle coordinate system to obtain a longitudinal distance and a transverse distance;
judging the motion relation between the current main vehicle and other targets:
calculating the time at which the host vehicle and the target longitudinally reach the same point, wherein the calculation formula is:
t = D_f0 / (V_Ef - V_Of)
wherein D_f0 is the initial longitudinal distance between the two vehicles, D_ft is the longitudinal distance between the two vehicles at time t, and D_ft = 0; V_Ef and V_El are the longitudinal and lateral speeds of the host vehicle, respectively; V_Of and V_Ol are the longitudinal and lateral speeds of the target after projection to the host vehicle view angle, respectively;
calculating the motion relation between the host vehicle and the target at time t, wherein the calculation formula is:
D_lt = D_l0 - t × V_Ol
wherein D_l0 is the initial lateral distance between the two vehicles and D_lt is the lateral distance between the two vehicles at time t;
calculating the range of the collision crisis area, wherein the lateral collision crisis range calculation formula is:
W = W_E × V_Ef × Q
wherein W_E is the width of the host vehicle and Q is an empirical parameter for adjusting the width of the collision crisis area;
the longitudinal collision crisis range calculation formula is:
L = (V_Ef - V_Of) × TTC
wherein V_Ef - V_Of is the calculated longitudinal relative speed of the two vehicles, and TTC is an empirical longitudinal time-to-collision threshold obtained from statistics over a large amount of traffic accident data;
calculating a collision crisis judgment parameter, wherein the collision crisis judgment parameter is used for judging whether the target vehicle is inside the collision crisis area at that moment:
if I_C ≤ 0, the target is inside the collision crisis area, and a collision crisis exists between the two vehicles; if I_C > 0, the target is not inside the collision crisis area, and no collision crisis exists between the two vehicles.
8. The method of claim 7, wherein sending a data acquisition request to the road side sensing unit, the request covering the road segment traffic participant motion data, video stream data, and environmental condition data, comprises:
sending the data acquisition request to the road side sensing unit at a preset time interval in a decoupled publish/subscribe mode.
9. The method of claim 7, wherein receiving the road segment traffic participant motion data, video stream data, and environmental condition data collected by the road side awareness unit further comprises writing the road segment traffic participant motion data, video stream data, and environmental condition data collected by the road side awareness unit to a scene database.
10. The method of claim 7, wherein transmitting the perceived dataset to a data processing device for data processing by the processing device comprises:
the data processing device marks, extracts and mines the data acquired by the sensing layer to obtain a scene data set.
CN202111177026.XA 2021-10-09 2021-10-09 Data acquisition system and data processing method based on road side sensing unit Active CN113778108B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111177026.XA CN113778108B (en) 2021-10-09 2021-10-09 Data acquisition system and data processing method based on road side sensing unit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111177026.XA CN113778108B (en) 2021-10-09 2021-10-09 Data acquisition system and data processing method based on road side sensing unit

Publications (2)

Publication Number Publication Date
CN113778108A CN113778108A (en) 2021-12-10
CN113778108B true CN113778108B (en) 2023-07-21

Family

ID=78855247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111177026.XA Active CN113778108B (en) 2021-10-09 2021-10-09 Data acquisition system and data processing method based on road side sensing unit

Country Status (1)

Country Link
CN (1) CN113778108B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114089773B (en) * 2022-01-11 2022-05-27 深圳佑驾创新科技有限公司 Test method, device, equipment and storage medium for automatic driving vehicle
CN114550450A (en) * 2022-02-15 2022-05-27 云控智行科技有限公司 Method and device for verifying perception accuracy of roadside sensing equipment and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101497330A (en) * 2008-01-29 2009-08-05 福特全球技术公司 A system for collision course prediction
CN109360445A (en) * 2018-07-09 2019-02-19 重庆大学 A kind of high speed lane-change risk checking method based on the distribution of laterally and longitudinally kinematics character
CN112347567A (en) * 2020-11-27 2021-02-09 青岛莱吉传动系统科技有限公司 Vehicle intention and track prediction method
CN112896190A (en) * 2018-03-20 2021-06-04 御眼视觉技术有限公司 System, method and computer readable medium for navigating a host vehicle
CN113246974A (en) * 2021-04-12 2021-08-13 南京航空航天大学 Risk avoidance/loss reduction control method in unmanned emergency scene, storage medium and electronic device
CN113327458A (en) * 2021-07-08 2021-08-31 潍柴动力股份有限公司 Vehicle collision prediction method, vehicle collision prediction system, and electronic device
KR20210120384A (en) * 2020-03-26 2021-10-07 현대모비스 주식회사 Collision distance estimation device and advanced driver assistance system using the same

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108604292B (en) * 2015-11-26 2023-10-13 御眼视觉技术有限公司 Automatic prediction and lithe response of vehicles to cut lanes
CN106564495B (en) * 2016-10-19 2018-11-06 江苏大学 The intelligent vehicle safety for merging space and kinetic characteristics drives envelope reconstructing method
US10403145B2 (en) * 2017-01-19 2019-09-03 Ford Global Technologies, Llc Collison mitigation and avoidance
CN108010360A (en) * 2017-12-27 2018-05-08 中电海康集团有限公司 A kind of automatic Pilot context aware systems based on bus or train route collaboration
CN108182817A (en) * 2018-01-11 2018-06-19 北京图森未来科技有限公司 Automatic Pilot auxiliary system, trackside end auxiliary system and vehicle-mounted end auxiliary system
CN109657355B (en) * 2018-12-20 2021-05-11 安徽江淮汽车集团股份有限公司 Simulation method and system for vehicle road virtual scene
CN109835348B (en) * 2019-01-25 2021-04-23 中国汽车技术研究中心有限公司 Screening method and device for road traffic dangerous scene
CN110009765B (en) * 2019-04-15 2021-05-07 合肥工业大学 Scene format conversion method of automatic driving vehicle scene data system
CN112069643B (en) * 2019-05-24 2023-10-10 北京车和家信息技术有限公司 Automatic driving simulation scene generation method and device
CN111123920A (en) * 2019-12-10 2020-05-08 武汉光庭信息技术股份有限公司 Method and device for generating automatic driving simulation test scene
WO2021189275A1 (en) * 2020-03-25 2021-09-30 华为技术有限公司 Vehicle lighting control method and apparatus
CN111583690B (en) * 2020-04-15 2021-08-20 北京踏歌智行科技有限公司 Curve collaborative perception method of 5G-based unmanned transportation system in mining area
CN112115819B (en) * 2020-09-03 2022-09-20 同济大学 Driving danger scene identification method based on target detection and TET (transient enhanced test) expansion index
WO2022082476A1 (en) * 2020-10-21 2022-04-28 华为技术有限公司 Simulated traffic scene file generation method and apparatus
CN112287566A (en) * 2020-11-24 2021-01-29 北京亮道智能汽车技术有限公司 Automatic driving scene library generation method and system and electronic equipment
CN112816954B (en) * 2021-02-09 2024-03-26 中国信息通信研究院 Road side perception system evaluation method and system based on true value
CN112991764B (en) * 2021-04-26 2021-08-06 中汽研(天津)汽车工程研究院有限公司 Overtaking scene data acquisition, identification and extraction system based on camera
CN113485319A (en) * 2021-06-08 2021-10-08 中兴智能汽车有限公司 Automatic driving system based on 5G vehicle-road cooperation
CN113378305B (en) * 2021-08-12 2022-02-01 深圳市城市交通规划设计研究中心股份有限公司 Driverless trolley-based vehicle-road cooperative testing method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101497330A (en) * 2008-01-29 2009-08-05 福特全球技术公司 A system for collision course prediction
CN112896190A (en) * 2018-03-20 2021-06-04 御眼视觉技术有限公司 System, method and computer readable medium for navigating a host vehicle
CN109360445A (en) * 2018-07-09 2019-02-19 重庆大学 A kind of high speed lane-change risk checking method based on the distribution of laterally and longitudinally kinematics character
KR20210120384A (en) * 2020-03-26 2021-10-07 현대모비스 주식회사 Collision distance estimation device and advanced driver assistance system using the same
CN112347567A (en) * 2020-11-27 2021-02-09 青岛莱吉传动系统科技有限公司 Vehicle intention and track prediction method
CN113246974A (en) * 2021-04-12 2021-08-13 南京航空航天大学 Risk avoidance/loss reduction control method in unmanned emergency scene, storage medium and electronic device
CN113327458A (en) * 2021-07-08 2021-08-31 潍柴动力股份有限公司 Vehicle collision prediction method, vehicle collision prediction system, and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Collision Avoidance/Mitigation System: Motion Planning of Autonomous Vehicle via Predictive Occupancy Map;Kibeom Lee,等;《IEEE Access》;第7卷;全文 *
Vehicle target detection with millimeter-wave radar based on machine learning; Wei Tao, et al.; Bus & Coach Technology and Research; Vol. 41 (No. 05); full text *

Also Published As

Publication number Publication date
CN113778108A (en) 2021-12-10

Similar Documents

Publication Publication Date Title
CN109657355B (en) Simulation method and system for vehicle road virtual scene
Krajewski et al. The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems
CN113778108B (en) Data acquisition system and data processing method based on road side sensing unit
WO2022141506A1 (en) Method for constructing simulation scene, simulation method and device
Semertzidis et al. Video sensor network for real-time traffic monitoring and surveillance
US20150264296A1 (en) System and method for selection and viewing of processed video
Chen et al. Architecture of vehicle trajectories extraction with roadside LiDAR serving connected vehicles
CN111028529A (en) Vehicle-mounted device installed in vehicle, and related device and method
CN202422425U (en) Video-detection-based intelligent signal control system for crossing
CN113963539B (en) Highway traffic accident identification method, module and system
CN112633120B (en) Model training method of intelligent roadside sensing system based on semi-supervised learning
Gloudemans et al. I-24 MOTION: An instrument for freeway traffic science
CN105138525A (en) Traffic video processing device and method, and retrieval device and method
CN116824859B (en) Intelligent traffic big data analysis system based on Internet of things
Wang et al. Realtime wide-area vehicle trajectory tracking using millimeter-wave radar sensors and the open TJRD TS dataset
US20220237919A1 (en) Method, Apparatus, and Computing Device for Lane Recognition
CN114771548A (en) Data logging for advanced driver assistance system testing and verification
Luo et al. Traffic signal transition time prediction based on aerial captures during peak hours
Ke et al. Lightweight edge intelligence empowered near-crash detection towards real-time vehicle event logging
KR102484789B1 (en) Intelligent crossroad integration management system with unmanned control and traffic information collection function
CN116242833A (en) Airport runway disease detection and early warning system
Bäumler et al. ‘Generating representative test scenarios: The fuse for representativity (Fuse4Rep) process model for collecting and analysing traffic observation data
US20220172606A1 (en) Systems and Methods for Extracting Data From Autonomous Vehicles
CN115188187A (en) Roadside perception data quality monitoring system and method based on vehicle-road cooperation
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant