CN113778108A - Data acquisition system and data processing method based on road side sensing unit
- Publication number: CN113778108A (application CN202111177026.XA)
- Authority: CN (China)
- Prior art keywords: data, sensing unit, road, video stream, environmental condition
- Legal status: Granted
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention belongs to the technical field of data acquisition and particularly relates to a data acquisition system based on a roadside sensing unit. The roadside sensing unit is arranged on a preset road section and is used for acquiring the motion data of traffic participants, video stream data and environmental condition data on the preset road section. The server is connected with the roadside sensing unit and is used for receiving the road section traffic participant motion data, video stream data and environmental condition data collected by the roadside sensing unit, and for processing them through a perception algorithm to obtain a perception data set. The data processing device is connected with the server and is used for acquiring the perception data set and processing it. Scenes in which a right-of-way contention occurs among traffic participants, as well as dangerous scenes, are automatically identified and extracted from massive traffic data, realizing the efficient accumulation of automated driving test scenarios.
Description
Technical Field
The invention belongs to the technical field of the Internet, and particularly relates to a data acquisition system and a data processing method based on a roadside sensing unit.
Background
The acquisition of sufficient and reliable road test data is one of the key challenges on the road ahead for the development of autonomous driving technology. Automated driving test technology provides a key solution to the problem of insufficient automated driving test data. A rich automated driving test scenario database can enrich the scenes of real-vehicle tests on a proving ground, and can be used for simulation tests of the perception, decision and control algorithms of an automated driving system; hardware systems such as sensors and real vehicles can further be added to the closed loop, eliminating most of the oversights and problems of the perception and decision software and hardware before road testing and greatly improving test efficiency.
Therefore, the construction of the test scenario library is a key link in automated driving test technology. The test scenario is the starting point of the simulation system and plays an extremely important role in the whole system. However, test scenarios cannot be created out of thin air: a standardized and reasonable test scenario data set, composed of real scene data and of the derivative data obtained by reasonably varying and combining its parameters, is required as the input of the automated driving test. The accumulation of scene data must be achieved in an efficient, fast and reliable way.
The main method currently adopted by the industry for collecting automated driving simulation test scenarios is as follows: professionally trained acquisition personnel drive a scene collection vehicle equipped with a redundant perception system and record the right-of-way contention scenes encountered during driving; usable scene data are then formed through manual cleaning and checking.
This acquisition mode has the following disadvantages: 1. The capital cost is high: accumulating a large number of scenes requires purchasing several collection vehicles and perception systems and consumes a large amount of human resources. 2. The collection efficiency is low: right-of-way contention scenes and dangerous scenes occur with low probability, so among the tens of thousands of miles driven during collection, truly valuable scenes are few, and data such as dangerous scenes and accident scenes cannot be collected out of concern for the safety of the acquisition personnel. 3. The host vehicle type is single: because the collection vehicle and its system configuration require a high investment, the fleet cannot cover many vehicle types, and the limited number of professionally trained acquisition personnel makes the recorded driving habits few and fixed. This acquisition mode therefore cannot accumulate rich and variable scenes with vehicles of various types and driving habits as the host vehicle.
Disclosure of Invention
In order to solve the problem that the data acquisition modes in the prior art cannot accumulate rich and variable scenes with vehicles of various types and driving habits as the host vehicle, the embodiments of the invention provide the following technical solutions:
a roadside sensing unit-based data acquisition system, the system comprising:
the road side sensing unit is arranged on a preset road section and used for collecting motion data, video stream data and environmental condition data of traffic participants on the preset road section;
the server is connected with the road side sensing unit and used for receiving the road section traffic participant movement data, the video stream data and the environmental condition data which are collected by the road side sensing unit; processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set;
and the data processing device is connected with the server and is used for acquiring the perception data set and performing data processing on it.
Furthermore, the roadside sensing unit comprises a collector, a laser radar, a camera and a rainfall illumination sensor, wherein the collector is respectively connected with the laser radar, the camera and the rainfall illumination sensor.
Further, the air conditioner is provided with a fan,
the laser radar is connected with the collector through a network port;
the camera is connected with the collector through a network port;
the rainfall illumination sensor is connected with the collector through a serial port.
Further, the server is further configured to send a request for acquiring the motion data of the road segment traffic participant, the video stream data, and the environmental condition data to the roadside sensing unit.
Further, the data processing apparatus comprises a data processing unit for processing the perception data set.
Further, the data processing apparatus further includes a data storage unit;
and the data storage unit is used for storing the perception data set according to a preset time interval and file format.
In a second aspect, a data processing method based on a roadside sensing unit is provided, comprising: sending a data acquisition request to the roadside sensing unit, wherein the data acquisition request covers the acquisition of the motion data of traffic participants, video stream data and environmental condition data on a preset road section;
receiving the road traffic participant motion data, the video stream data and the environmental condition data which are collected by the road side sensing unit;
processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set;
and sending the perception data set to a data processing device so that the processing device can perform data processing on the perception data set.
Further, sending the data acquisition request to the roadside sensing unit, where the request covers the road section traffic participant motion data, video stream data and environmental condition data, includes:
and sending a request for acquiring data to the roadside sensing unit according to a preset time interval and in a release/subscription decoupling mode.
Further, the method includes: after receiving the road section traffic participant motion data, video stream data and environmental condition data collected by the roadside sensing unit, writing them into a scene database.
Further, sending the sensing data set to a data processing device so that the processing device performs data processing on the sensing data set, including:
and the data processing device marks, extracts and excavates the data acquired by the sensing layer to obtain a scene data set.
The invention has the following beneficial effects:
the embodiment of the invention provides a data acquisition system based on a roadside sensing unit, which comprises: the road side sensing unit is arranged on a preset road section and used for collecting motion data, video stream data and environmental condition data of traffic participants on the preset road section; the server is connected with the road side sensing unit and used for receiving the road section traffic participant movement data, the video stream data and the environmental condition data which are collected by the road side sensing unit; processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set; and the data processing device is connected with the server and is used for acquiring the sensing data set and processing the sensing data set. The method comprises the steps that original data collected by a roadside sensing unit are processed into a sensing data set with a target list as a core through a server, then the sensing data set is transmitted to data storage software deployed at a far end, the sensing data set is input into automatic scene recognition conversion software, target motion data are automatically processed and analyzed, dangerous driving scenes are cleaned and intercepted, massive automatic driving test dangerous scene data sets are efficiently obtained, and efficient accumulation of the automatic driving test scenes is achieved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a structural diagram of a data acquisition system based on a roadside sensing unit according to an exemplary embodiment of the present invention.
Fig. 2 is a flowchart illustrating steps of a data processing method based on a roadside sensing unit according to an exemplary embodiment of the invention.
Fig. 3 is a flowchart illustrating a dangerous scene recognition according to another exemplary embodiment of the present invention.
FIG. 4 is a diagram illustrating a relationship between movement of a host vehicle and movement of a target object in accordance with another exemplary embodiment of the present invention.
FIG. 5 is a projection of object velocity shown in accordance with another exemplary embodiment of the present invention.
FIG. 6 is a projection of the distance between a host vehicle and a target, as shown in another exemplary embodiment of the present invention.
Fig. 7 is a diagram illustrating a kinematic relationship parameter definition between a host vehicle and a target according to another exemplary embodiment of the present invention.
FIG. 8 is a diagram illustrating a kinematic relationship between a host and a target at time t according to another exemplary embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the examples given herein without any inventive step, are within the scope of the present invention.
At present, the widely used traditional roadside unit generally comprises only a camera, or a camera combined with a millimeter-wave radar for speed measurement, and cannot accurately and completely acquire all the motion information required to compose an automated driving scene.
Referring to fig. 1, fig. 1 is a structural diagram of a data acquisition system based on a roadside sensing unit according to an exemplary embodiment of the invention, and as shown in fig. 1, the system includes:
the road side sensing unit 11 is arranged on a preset road section and used for collecting motion data, video stream data and environmental condition data of traffic participants on the preset road section;
in one embodiment, a road side sensing unit is arranged on the road side of a road section with dense traffic flow, and the motion data and the environment data of traffic participants, the video stream data and the environment condition data passing through the road section are recorded in twenty-four hours day and night without interruption.
The roadside sensing unit comprises a collector, a laser radar, a camera and a rainfall illumination sensor, the collector being connected to the laser radar, the camera and the rainfall illumination sensor respectively. The laser radar is connected with the collector through a network port; the camera is connected with the collector through a network port; the rainfall illumination sensor is connected with the collector through a serial port. The laser radar can accurately sense the information of the traffic participants, and radar splicing and synchronization technology ensures that the ID of each identified object is unique within the sensing range of the laser radar. The camera is used to collect video stream files. The rainfall illumination sensor is used to collect rainfall and ambient illumination data.
Radar splicing and synchronization technology is prior art; the present application makes no improvement to it.
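As an illustration of these three links, a minimal collector-wiring sketch follows. The addresses, the port numbers, the baud rate and the use of pyserial are assumptions made for the example; the patent specifies only that the laser radar and camera use a network port and the rainfall illumination sensor a serial port.

```python
# Minimal sketch of the collector's three sensor links (all endpoints assumed).
import socket

import serial  # pyserial; an assumed transport for the serial-port link

LIDAR_ADDR = ("192.168.1.10", 2368)   # hypothetical lidar network endpoint
CAMERA_ADDR = ("192.168.1.11", 554)   # hypothetical camera network endpoint
RAIN_LIGHT_DEV = "/dev/ttyUSB0"       # hypothetical serial device node

def open_sensor_links():
    """Open the links described above: lidar and camera over the network
    port, rainfall illumination sensor over the serial port."""
    lidar = socket.create_connection(LIDAR_ADDR, timeout=5)
    camera = socket.create_connection(CAMERA_ADDR, timeout=5)
    rain_light = serial.Serial(RAIN_LIGHT_DEV, baudrate=9600, timeout=1)
    return lidar, camera, rain_light
```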
The server 12 is connected with the roadside sensing unit and used for receiving the road section traffic participant motion data, the video stream data and the environmental condition data which are collected by the roadside sensing unit; processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set;
specifically, original data acquired by the laser radar are point cloud data, and after the server receives the point cloud data, the point cloud data are processed through a perception algorithm to form target list data. The target list data is a list of motion parameters of the targets detected by the lidar at different times. The motion information of the longitude and latitude positions, the position of a local coordinate system, the speed, the acceleration, the course angle and the like of all the target objects entering the detection range is included, and the motion information is key data for reproducing scene dynamic elements and judging whether the scene is valuable or not. And the video stream file collected by the camera provides necessary supplementary checking information for the target list. The rainfall illumination sensor collects rainfall and environmental illumination data, provides natural environment information for a scene, and is one of important elements forming the scene.
The data processing device 13 is connected with the server and is used for acquiring the perception data set and processing it.
In some embodiments, the data processing apparatus comprises a data processing unit; the data processing unit is used for processing the perception data set.
In some embodiments, the data processing apparatus further comprises a data storage unit for storing the perception data set according to a preset time interval and file format.
In one embodiment, the data storage unit remotely obtains from the edge server, at a time interval and in a file format set by the user, the perception data set collected by the roadside sensing system and processed by the perception algorithm during that period, comprising the laser radar target list, the video stream, and the rainfall and illumination data.
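A minimal sketch of such a fetch-and-store loop follows, assuming a fetch_dataset(start, end) callable exposed by the edge server and JSON as the user-chosen file format; both names are illustrative, since the patent leaves the interval and the format to the user.

```python
import json
import time
from datetime import datetime, timezone

def store_forever(fetch_dataset, interval_s=600, out_dir="."):
    """Pull the perception data set at a fixed interval and write one file each time."""
    while True:
        end = time.time()
        # Target list, video references and rain/illumination data for the period.
        dataset = fetch_dataset(end - interval_s, end)
        stamp = datetime.fromtimestamp(end, tz=timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        with open(f"{out_dir}/perception_{stamp}.json", "w") as f:
            json.dump(dataset, f)
        time.sleep(interval_s)
```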
The perception data set is then input into the automatic scene recognition and data conversion software to realize automatic data processing.
It can be understood that the present application provides a data acquisition system based on a roadside sensing unit, comprising the roadside sensing unit, the server and the data processing device described above. The system uses the continuous sensing capability formed by splicing multiple laser radars to acquire accurate information about the traffic participants, such as type, size, longitude and latitude, speed, acceleration and heading angle, as the main criterion for scene identification and judgment; the video collected by the camera serves as the checking basis, and the data collected by the rainfall illumination sensor supplement the necessary environmental parameters. This is more accurate and complete than the traditional mode and can meet the requirements of scene reconstruction and generalization for automated driving simulation tests. Arranged on a road section with dense traffic flow, the roadside sensing unit replaces the traditional mode of collection by driving vehicles along the road: the motion information, video streams and environmental condition information of the traffic participants on the road section are collected without interruption twenty-four hours a day, requiring neither the operation and supervision of large numbers of professionals nor large investments of their time and energy, which saves labor cost. The collection efficiency is high: uninterrupted twenty-four-hour data acquisition efficiently accumulates massive perception data sets, valuable scene data are intercepted automatically, and scenes that the traditional mode can rarely collect, such as dangerous scenes and accident scenes, can be captured.
In some embodiments, the server is further configured to send a request for obtaining the movement data of the road segment traffic participants, the video stream data, and the environmental condition data to the roadside sensing unit.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing data based on a roadside sensing unit according to an exemplary embodiment of the invention, and as shown in fig. 2, the method includes:
step S11, sending a data acquisition request to a road side sensing unit, wherein the data acquisition request comprises the acquisition of motion data, video stream data and environmental condition data of traffic participants on a preset road section;
step S12, receiving the movement data, the video stream data and the environmental condition data of the road traffic participants collected by the road side sensing unit;
step S13, processing the movement data, the video stream data and the environmental condition data of the road section traffic participants through a perception algorithm to obtain a perception data set;
and step S14, sending the perception data set to a data processing device so that the processing device can process the perception data set.
In the traditional scene recognition mode, professional engineers manually review massive data, find and intercept the valuable scene segments, and delete the remaining valueless segments. In the massive data, however, the truly valuable scene segments account for only a very small part, so the key of the system is to identify and intercept dangerous scenes automatically, efficiently and accurately from the large amount of traffic data recorded by the roadside sensing unit.
As shown in fig. 3, fig. 3 is a flowchart illustrating a dangerous scene recognition according to another exemplary embodiment of the present invention.
Since the road-end perception data set records the motion data of all traffic participants passing through the road section from a "god's-eye view", the road-end view must first be projected onto the view of a selected host vehicle through coordinate transformation before the motion relation between that host vehicle and the other targets can be judged.
Firstly, all targets in the complete data segment are traversed and the vehicle targets are screened out; the data segment is divided into a number of segments according to the period during which each vehicle target exists within the detection range. The vehicle targets are then taken in turn as the host vehicle and the motion of the other targets is projected accordingly, the projection method being as follows:
FIG. 4 is a diagram illustrating a relationship between movement of a host vehicle and movement of a target object according to another exemplary embodiment of the present invention, as shown in FIG. 4.
As shown in fig. 5, fig. 5 is a projection view of the velocity of an object according to another exemplary embodiment of the present invention.
According to the absolute speed information and heading angle information of the SV (host vehicle) and the TV (target vehicle) in the laser radar target list data, the absolute velocity V_tv of the TV is projected into the coordinate system of the host SV, taking the SV coordinate system as reference, to obtain the longitudinal velocity V_tvx and the lateral velocity V_tvy of the target TV.
As shown in fig. 6, fig. 6 is a projection view of the distance between the host vehicle and the target, according to another exemplary embodiment of the present invention.
According to the longitude data (x in the figure) and latitude data (y in the figure) of the host SV and the target TV in the laser radar target list, the distance between the host SV and the target TV is projected into the host coordinate system, taking the SV coordinate system as reference, to obtain the longitudinal distance R_lon and the lateral distance R_lat.
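The two projections can be sketched as follows, assuming heading angles measured in radians in a common ground frame and positions already converted to local metric coordinates; this is the ordinary ground-frame-to-host-frame change of basis, written out for illustration rather than taken from the patent.

```python
import math

def project_velocity(v_tv, heading_tv, heading_sv):
    """Project the target's absolute velocity into the host (SV) frame,
    returning (V_tvx longitudinal, V_tvy lateral)."""
    rel = heading_tv - heading_sv
    return v_tv * math.cos(rel), v_tv * math.sin(rel)

def project_distance(x_sv, y_sv, x_tv, y_tv, heading_sv):
    """Project the SV-to-TV displacement into the host frame,
    returning (R_lon longitudinal, R_lat lateral)."""
    dx, dy = x_tv - x_sv, y_tv - y_sv
    r_lon = dx * math.cos(heading_sv) + dy * math.sin(heading_sv)
    r_lat = -dx * math.sin(heading_sv) + dy * math.cos(heading_sv)
    return r_lon, r_lat
```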
As shown in fig. 7, fig. 7 is a diagram illustrating the definition of the kinematic relationship parameters between a host vehicle and a target according to another exemplary embodiment of the present invention. Through this view-angle conversion, any traffic participant meeting the requirements can be selected as the host vehicle, removing the limitation of the traditional acquisition mode to a single host vehicle category and to the behavior habits of a limited number of drivers, and improving the richness and complexity of the scenes.
After the view-angle conversion is realized, dangerous scenes can be judged and identified from the motion relation parameters between the current host vehicle and a target vehicle, which are set as follows.
As shown in fig. 7, the time for the host vehicle and the target vehicle to reach the same longitudinal position (i.e., the time required for the longitudinal distance to become 0) is denoted t and is calculated as:

t = (D_f0 - D_ft) / (V_Ef - V_Of) = D_f0 / (V_Ef - V_Of)

where D_f0 is the initial longitudinal distance between the two vehicles and D_ft is the longitudinal distance between the two vehicles at time t, with D_ft = 0; V_Ef and V_El are respectively the longitudinal and lateral velocities of the host vehicle; V_Of and V_Ol are respectively the longitudinal and lateral velocities of the target vehicle projected onto the host vehicle's view.
As shown in fig. 8, fig. 8 illustrates the kinematic relationship between the host vehicle and the target at time t. With the initial lateral distance between the two vehicles being D_l0, the lateral distance D_lt between the two vehicles at time t is the initial lateral distance D_l0 minus the distance by which the two vehicles approach each other laterally during the period t:

D_lt = D_l0 - t × V_Ol
With the lateral distance between the two vehicles at that moment obtained, the extent of the collision crisis region is calculated to judge whether the scene is a dangerous scene. The width W of the collision region (the lateral collision crisis range) is calculated as follows:

W = W_E × V_Ef × Q

where W_E is the width of the host vehicle itself and Q is an empirical parameter for adjusting the width of the collision crisis region. It follows that the collision crisis width range varies with the current speed of the host vehicle.
The longitudinal collision crisis range L is calculated as follows:

L = (V_Ef - V_Of) × TTC

where V_Ef - V_Of is the longitudinal relative velocity of the two vehicles and TTC is an empirical longitudinal time-to-collision threshold, whose specific value can be set according to empirical data obtained from the statistics of a large number of traffic accidents.
The distance between the two vehicles calculated for time t is compared with the collision crisis region to see whether the target vehicle lies within it at that moment. The result of the comparison is defined as the collision crisis judgment parameter I_C.

If I_C ≤ 0, the target vehicle lies within the collision crisis region and a collision crisis exists between the two vehicles.

If I_C > 0, the target vehicle lies outside the collision crisis range and no collision crisis exists between the two vehicles.
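Assembling the formulas above gives the following sketch of the judgment. Since the patent defines I_C only as the result of comparing the distance at time t with the collision crisis region, the concrete expression used here (the lateral slack |D_lt| - W, gated by the longitudinal crisis range L) is an assumption made for illustration.

```python
def collision_crisis(d_f0, d_l0, v_ef, v_of, v_ol, w_e, q, ttc):
    """Return (crisis?, t) for one host/target pair using the formulas above."""
    closing = v_ef - v_of          # longitudinal relative (closing) speed
    if closing <= 0:
        return False, None         # not closing longitudinally: no crisis
    t = d_f0 / closing             # time for the longitudinal distance to reach 0
    L = closing * ttc              # longitudinal collision crisis range
    if d_f0 > L:                   # equivalently t > TTC: too far away to matter
        return False, t
    d_lt = d_l0 - t * v_ol         # lateral distance between the vehicles at time t
    W = w_e * v_ef * q             # lateral collision crisis width
    i_c = abs(d_lt) - W            # assumed concrete form of the judgment parameter
    return i_c <= 0, t
```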
The dangerous scene recognition algorithm evaluates the collision crisis between the current host vehicle and all other traffic participants over the whole trace; whenever the calculation judges that a collision crisis exists, the moment of the crisis is taken as the core and the data segments 7 seconds before and after it are intercepted as a dangerous scene segment.
For the video stream and the rainfall illumination data, the time-matched segments are intercepted taking the target list data segment as reference.
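The interception itself reduces to the same time-window cut applied to each stream around the crisis moment, as in the sketch below (records carrying a timestamp attribute is an assumption):

```python
def intercept(records, t_c, half_window=7.0):
    """Keep the records within 7 seconds before and after the crisis moment t_c."""
    lo, hi = t_c - half_window, t_c + half_window
    return [r for r in records if lo <= r.timestamp <= hi]

# The same window cuts all three streams against the target list reference:
# scene = intercept(target_list, t_c)
# clip  = intercept(video_frames, t_c)
# env   = intercept(rain_light_log, t_c)
```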
The general idea of the automatic scene recognition and data conversion software used in the method is as follows: taking the target list data collected by the laser radar as the core, a host vehicle meeting the conditions is selected and the view-angle conversion of the motion parameters is realized; the motion parameters in the target list data are used to analyze and judge the motion relation between targets and to evaluate the collision risk between the current host vehicle (SV) and a target vehicle (TV); the target list data segments in which a crisis occurs are intercepted, and the simultaneous segments of the camera video and of the rainfall and illumination sensor data are matched to serve as supplementary information elements in the subsequent checking and reproduction of the scene. By means of the motion projection view-angle transformation algorithm, vehicles of different types driven by drivers of different behavior habits can all serve as the host vehicle view, and rich and variable scenes can be extracted. Data processing is automated, replacing the original manual review mode and acquiring scenes efficiently.
In some embodiments, sending the data acquisition request to the roadside sensing unit, where the request covers the road section traffic participant motion data, video stream data and environmental condition data, includes:
and sending a request for acquiring data to the roadside sensing unit according to a preset time interval and in a release/subscription decoupling mode.
In some embodiments, receiving the road segment traffic participant movement data, the video stream data, and the environmental condition data collected by the roadside sensing unit further includes writing the road segment traffic participant movement data, the video stream data, and the environmental condition data collected by the roadside sensing unit into a scene database.
In some embodiments, sending the sensing data set to a data processing device for data processing of the sensing data set by the processing device includes:
and the data processing device marks, extracts and excavates the data acquired by the sensing layer to obtain a scene data set.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (10)
1. A roadside sensing unit-based data acquisition system, the system comprising:
the road side sensing unit is arranged on a preset road section and used for collecting motion data, video stream data and environmental condition data of traffic participants on the preset road section;
the server is connected with the road side sensing unit and used for receiving the road section traffic participant movement data, the video stream data and the environmental condition data which are collected by the road side sensing unit; processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set;
and the data processing device is connected with the server and is used for acquiring the perception data set and processing the perception data set.
2. The data acquisition system according to claim 1, wherein the roadside sensing unit comprises a collector, a laser radar, a camera and a rainfall illumination sensor, and the collector is connected to the laser radar, the camera and the rainfall illumination sensor respectively.
3. The data acquisition system of claim 2,
the laser radar is connected with the collector through a network port;
the camera is connected with the collector through a network port;
the rainfall illumination sensor is connected with the collector through a serial port.
4. The data acquisition system of claim 1, wherein the server is further configured to send a request for obtaining the road segment traffic participant movement data, the video stream data, and the environmental condition data to the roadside sensing unit.
5. The system of claim 1, wherein the data processing device comprises a data processing unit configured to process the perception data set.
6. The data acquisition system of claim 5, wherein the data processing device further comprises a data storage unit for storing the perception data set in a preset time interval and file format.
7. A data processing method based on a road side sensing unit is characterized in that,
sending a data acquisition request to a roadside sensing unit, wherein the data acquisition request comprises the acquisition of motion data, video stream data and environmental condition data of traffic participants on a preset road section;
receiving the road traffic participant motion data, the video stream data and the environmental condition data which are collected by the road side sensing unit;
processing the motion data, the video stream data and the environmental condition data of the road traffic participants through a perception algorithm to obtain a perception data set;
and sending the perception data set to a data processing device so that the processing device can perform data processing on the perception data set.
8. The method of claim 7, wherein sending a get data request to the roadside sensing unit, the get data request including the road segment traffic participant movement data, video stream data, and environmental condition data, comprises:
and sending a request for acquiring data to the roadside sensing unit according to a preset time interval and in a release/subscription decoupling mode.
9. The method of claim 7, wherein receiving the road segment traffic participant movement data, video stream data and environmental condition data collected by the roadside sensing unit further comprises writing the road segment traffic participant movement data, video stream data and environmental condition data collected by the roadside sensing unit into a scene database.
10. The method of claim 7, wherein sending the perception data set to a data processing device for data processing of the perception data set by the processing device comprises:
and the data processing device marks, extracts and excavates the data acquired by the sensing layer to obtain a scene data set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111177026.XA CN113778108B (en) | 2021-10-09 | 2021-10-09 | Data acquisition system and data processing method based on road side sensing unit |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111177026.XA CN113778108B (en) | 2021-10-09 | 2021-10-09 | Data acquisition system and data processing method based on road side sensing unit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113778108A true CN113778108A (en) | 2021-12-10 |
CN113778108B CN113778108B (en) | 2023-07-21 |
Family
ID=78855247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111177026.XA Active CN113778108B (en) | 2021-10-09 | 2021-10-09 | Data acquisition system and data processing method based on road side sensing unit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113778108B (en) |
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101497330A (en) * | 2008-01-29 | 2009-08-05 | 福特全球技术公司 | A system for collision course prediction |
US20200142411A1 (en) * | 2015-11-26 | 2020-05-07 | Mobileye Vision Technologies Ltd. | Predicting and responding to cut in vehicles and altruistic responses |
US20190263399A1 (en) * | 2016-10-19 | 2019-08-29 | Jiangsu University | Intelligent vehicle safety driving envelope reconstruction method based on integrated spatial and dynamic characteristics |
US20180204460A1 (en) * | 2017-01-19 | 2018-07-19 | Ford Global Technologies, Llc | Collision mitigation and avoidance |
CN108010360A (en) * | 2017-12-27 | 2018-05-08 | 中电海康集团有限公司 | A kind of automatic Pilot context aware systems based on bus or train route collaboration |
CN108182817A (en) * | 2018-01-11 | 2018-06-19 | 北京图森未来科技有限公司 | Automatic Pilot auxiliary system, trackside end auxiliary system and vehicle-mounted end auxiliary system |
CN112896190A (en) * | 2018-03-20 | 2021-06-04 | 御眼视觉技术有限公司 | System, method and computer readable medium for navigating a host vehicle |
CN109360445A (en) * | 2018-07-09 | 2019-02-19 | 重庆大学 | A kind of high speed lane-change risk checking method based on the distribution of laterally and longitudinally kinematics character |
CN109657355A (en) * | 2018-12-20 | 2019-04-19 | 安徽江淮汽车集团股份有限公司 | A kind of emulation mode and system of road vehicle virtual scene |
CN109835348A (en) * | 2019-01-25 | 2019-06-04 | 中国汽车技术研究中心有限公司 | A kind of screening technique and device of road traffic danger scene |
CN110009765A (en) * | 2019-04-15 | 2019-07-12 | 合肥工业大学 | A kind of automatic driving vehicle contextual data system and scene format method for transformation |
CN112069643A (en) * | 2019-05-24 | 2020-12-11 | 北京车和家信息技术有限公司 | Automatic driving simulation scene generation method and device |
CN111123920A (en) * | 2019-12-10 | 2020-05-08 | 武汉光庭信息技术股份有限公司 | Method and device for generating automatic driving simulation test scene |
WO2021189275A1 (en) * | 2020-03-25 | 2021-09-30 | 华为技术有限公司 | Vehicle lighting control method and apparatus |
KR20210120384A (en) * | 2020-03-26 | 2021-10-07 | 현대모비스 주식회사 | Collision distance estimation device and advanced driver assistance system using the same |
CN111583690A (en) * | 2020-04-15 | 2020-08-25 | 北京踏歌智行科技有限公司 | Curve collaborative perception method of 5G-based unmanned transportation system in mining area |
CN112115819A (en) * | 2020-09-03 | 2020-12-22 | 同济大学 | Driving danger scene identification method based on target detection and TET (transient enhanced test) expansion index |
CN112567374A (en) * | 2020-10-21 | 2021-03-26 | 华为技术有限公司 | Simulated traffic scene file generation method and device |
CN112287566A (en) * | 2020-11-24 | 2021-01-29 | 北京亮道智能汽车技术有限公司 | Automatic driving scene library generation method and system and electronic equipment |
CN112347567A (en) * | 2020-11-27 | 2021-02-09 | 青岛莱吉传动系统科技有限公司 | Vehicle intention and track prediction method |
CN112816954A (en) * | 2021-02-09 | 2021-05-18 | 中国信息通信研究院 | Road side perception system evaluation method and system based on truth value |
CN113246974A (en) * | 2021-04-12 | 2021-08-13 | 南京航空航天大学 | Risk avoidance/loss reduction control method in unmanned emergency scene, storage medium and electronic device |
CN112991764A (en) * | 2021-04-26 | 2021-06-18 | 中汽研(天津)汽车工程研究院有限公司 | Overtaking scene data acquisition, identification and extraction system based on camera |
CN113485319A (en) * | 2021-06-08 | 2021-10-08 | 中兴智能汽车有限公司 | Automatic driving system based on 5G vehicle-road cooperation |
CN113327458A (en) * | 2021-07-08 | 2021-08-31 | 潍柴动力股份有限公司 | Vehicle collision prediction method, vehicle collision prediction system, and electronic device |
CN113378305A (en) * | 2021-08-12 | 2021-09-10 | 深圳市城市交通规划设计研究中心股份有限公司 | Driverless trolley-based vehicle-road cooperative testing method and device |
Non-Patent Citations (2)
Title |
---|
KIBEOM LEE, et al.: "Collision Avoidance/Mitigation System: Motion Planning of Autonomous Vehicle via Predictive Occupancy Map", IEEE Access, vol. 7 |
WEI Tao, et al.: "Millimeter-wave radar vehicle target detection based on machine learning", Bus & Coach Technology and Research, vol. 41, no. 05 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114089773A (en) * | 2022-01-11 | 2022-02-25 | 深圳佑驾创新科技有限公司 | Test method, device, equipment and storage medium for automatic driving vehicle |
CN114550450A (en) * | 2022-02-15 | 2022-05-27 | 云控智行科技有限公司 | Method and device for verifying perception accuracy of roadside sensing equipment and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113778108B (en) | 2023-07-21 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |