CN110569602A - Data acquisition method and system for unmanned vehicle - Google Patents

Data acquisition method and system for unmanned vehicle

Info

Publication number
CN110569602A
CN110569602A (Application CN201910856083.7A)
Authority
CN
China
Prior art keywords
vehicle
information
state information
control information
data acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910856083.7A
Other languages
Chinese (zh)
Other versions
CN110569602B (en)
Inventor
秦家虎
王帅
高炤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910856083.7A priority Critical patent/CN110569602B/en
Publication of CN110569602A publication Critical patent/CN110569602A/en
Application granted granted Critical
Publication of CN110569602B publication Critical patent/CN110569602B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

A data collection method for an unmanned vehicle, comprising: simulating an unmanned simulation environment, and controlling the relative speed of the data acquisition vehicle and the target vehicle in the unmanned simulation environment so that the relative distance between the two vehicles stays within a preset range; acquiring state information and control information of the data acquisition vehicle as well as image information, state information and control information of the target vehicle; and judging whether the differences among the timestamps corresponding to the state information, the control information and the image information are within a preset difference range, and if so, storing the corresponding information. The method can simulate various driving environments, autonomously acquire vehicle data in real time, reduce data redundancy, increase the diversity and accuracy of unmanned driving data, greatly reduce time and labor costs, and collect data at large scale.

Description

Data acquisition method and system for unmanned vehicle
Technical Field
The invention relates to the field of unmanned vehicles, in particular to a data acquisition method and system for an unmanned vehicle.
Background
In the problem of environment perception for unmanned driving, most current research concerns target detection: important targets such as vehicles, pedestrians and traffic signs are detected and identified in real time in the front or surround-view images acquired by a vehicle-mounted camera, and their coordinate positions in the images are located, so that collisions with vehicles or pedestrians can be avoided. However, since a vehicle's braking distance is determined by its speed, the safe braking distance required differs with vehicle speed (both the unmanned vehicle's own speed and the speeds of surrounding vehicles), and merely detecting the pixel coordinates of the vehicles and pedestrians ahead is far from sufficient to guarantee driving safety. Estimating the traveling speed of the vehicle ahead in real time will therefore be a key topic in future research on unmanned environment perception.
In the past decade, the explosive development of deep learning techniques has relied primarily on dramatic increases in data and computing power. Acquiring large-scale, accurately labeled and richly varied standard datasets is therefore key to solving problems in related fields with deep learning. Given the current state of research, solving the speed estimation problem with deep neural networks is a feasible approach, but the publicly available speed estimation datasets are few in number, low in precision and small in scale, and cannot be used to train large-scale deep neural networks.
Disclosure of Invention
Technical problem to be solved
The invention provides a data acquisition method and a data acquisition system for an unmanned vehicle, which are used to at least partially solve the above technical problems.
(II) technical scheme
To achieve the above object, one aspect of the present invention provides a data acquisition method for an unmanned vehicle, comprising:
S1, simulating an unmanned simulation environment, and controlling the relative speed of the data acquisition vehicle and the target vehicle in the unmanned simulation environment so that the relative distance between them is within a preset range;
S2, collecting first state information and first control information of the data acquisition vehicle, wherein each piece of first state information and each piece of first control information corresponds to a timestamp; and acquiring image information, second state information and second control information of the target vehicle, wherein each piece of image information, second state information and second control information corresponds to a timestamp;
S3, judging whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference range, and whether the pairwise differences among the timestamps corresponding to the image information, the second state information and the second control information are within the first preset difference range; if so, executing S4;
S4, judging whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference range, and whether the difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference range; if so, executing S5;
S5, storing the first state information, first control information, image information, second state information and second control information satisfying S4.
Optionally, in step S1, simulating the unmanned simulation environment includes: simulating the unmanned simulation environment using the open-source physics simulation software Gazebo.
Optionally, in step S1, controlling the relative speed of the data acquisition vehicle and the target vehicle includes: controlling the relative speed of the data acquisition vehicle and the target vehicle through a lane line detection algorithm and a target tracking algorithm based on an open-source robot operating system.
Optionally, the first status information comprises a first instantaneous speed of the data collection vehicle and the second status information comprises a second instantaneous speed of the target vehicle.
Optionally, each frame of image information is labeled with the second instantaneous speed value corresponding to its acquisition time.
Optionally, in step S1, controlling the relative speed of the data acquisition vehicle and the target vehicle includes: in the manual mode, issuing a control instruction to the data acquisition vehicle through a manual control handle so that the data acquisition vehicle and the target vehicle maintain a relative speed; or, in the automatic mode, making the data acquisition vehicle track the target vehicle by setting a constant mode, a sine curve mode, a cosine curve mode or a step mode.
Optionally, the method further comprises: S0, configuring the parameters needed for simulating the unmanned simulation environment.
The present invention also provides a data acquisition system for an unmanned vehicle, comprising:
a simulation module for simulating an unmanned simulation environment;
a control module for controlling the relative speed of the data acquisition vehicle and the target vehicle in the simulated unmanned simulation environment, so that the relative distance between them is within a preset range;
an acquisition module for collecting first state information and first control information of the data acquisition vehicle, each corresponding to a timestamp, and for acquiring image information, second state information and second control information of the target vehicle, each corresponding to a timestamp;
a first judging module for judging whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference range, and whether the pairwise differences among the timestamps corresponding to the image information, the second state information and the second control information are within the first preset difference range, and if so, passing to the second judging module;
a second judging module for judging whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference range, and whether the difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference range, and if so, passing to the storage module;
and a storage module for storing the first state information, first control information, image information, second state information and second control information satisfying the second judging module.
Optionally, the simulation module is designed based on open-source physical simulation software Gazebo, and the control module and the acquisition module are designed based on an open-source robot operating system.
Optionally, the open-source robot operating system sends control instructions to the open-source physics simulation software Gazebo through a drive-by-wire Controller Area Network (CAN) bus to control the relative speed of the data acquisition vehicle and the target vehicle; and the open-source robot operating system acquires the information corresponding to the data acquisition vehicle and the target vehicle in the open-source physics simulation software Gazebo through an internal node network.
(III) advantageous effects
1. The unmanned simulation environment is built with Gazebo, and the data acquisition system can operate autonomously without manual speed labeling, reducing the time and economic cost of manually collecting speed datasets in real environments and making the large-scale collection of speed estimation data possible.
2. The vehicle speed dataset collected in the simulation environment avoids manual labeling errors, reducing the interference with algorithm evaluation caused by inaccurately labeled speed data in real environments.
3. Every frame of image collected in the simulation environment can be completely labeled, and the recorded speed is the vehicle's instantaneous speed rather than an average speed, which improves the diversity and accuracy of the unmanned vehicle speed dataset.
Drawings
FIG. 1 schematically illustrates a flow chart of a data collection method for an unmanned vehicle provided in a first embodiment of the present invention;
FIG. 2 schematically illustrates a flow chart of a simulated unmanned simulation environment provided in a first embodiment of the present invention;
Fig. 3(a) schematically shows the road image captured while the target vehicle travels, provided in the first embodiment of the present invention;
Fig. 3(b) schematically shows the target vehicle's lane line detection effect diagram provided in the first embodiment of the present invention;
Fig. 3(c) schematically shows the effect when the target vehicle travels along the lane lines, provided in the first embodiment of the present invention;
FIG. 4 is a flow chart that schematically illustrates a method of controlling the relative speed of a data-gathering vehicle and a target vehicle, as provided in a first embodiment of the present invention;
FIG. 5 schematically illustrates the distribution of the information collected by the data acquisition vehicle, as provided in the first embodiment of the present invention;
FIG. 6 schematically illustrates a block diagram of a data acquisition system for an unmanned vehicle provided in a second embodiment of the present invention;
Fig. 7 schematically shows a schematic diagram of the control data collection vehicle provided in the first embodiment of the invention when it tracks a target vehicle.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 schematically shows a flowchart of a data collection method for an unmanned vehicle provided in a first embodiment of the present invention, the collection method comprising:
And S1, simulating an unmanned simulation environment, and controlling the relative speed of the data acquisition vehicle and the target vehicle in the unmanned simulation environment to enable the relative distance between the data acquisition vehicle and the target vehicle to be within a preset range.
In this embodiment, the unmanned simulation environment is simulated using the open-source physics simulation software Gazebo.
The open-source physics simulation software Gazebo can build an unmanned simulation environment containing various driving environments, road conditions and running vehicles; the data acquisition vehicle in the simulation environment carries a camera simulation plug-in for acquiring image information of surrounding vehicles and the surrounding road environment in real time.
In the embodiment of the invention, simulating the unmanned simulation environment covers the road environment, the vehicle parameters and the simulated sensor parameters. The simulation process is as follows:
Configure the parameters required for the desired unmanned simulation environment, including a lane line detection program, a predicted trajectory generation program and a trajectory tracking program. The lane line detection algorithm used in the lane line detection program, the lane keeping algorithm used in the predicted trajectory generation program and the vehicle tracking algorithm used in the trajectory tracking program together control the target vehicle so that it can drive autonomously along the road.
FIG. 2 schematically illustrates a flow chart of a simulated unmanned simulation environment provided in a first embodiment of the invention, comprising:
Simulate the road environment, i.e., the virtual driving environment in which the unmanned vehicle is located; in this embodiment the road environment is described in a .world file. Compared with a real road environment, the simulated road environment can freely define various complex scenes within a route of preset length, reducing the redundancy of the acquired data.
A three-dimensional urban road simulation environment is built with Gazebo. The road environment in this embodiment may include right-turn lanes, straight lanes, branch lanes, dedicated bus lanes, an underground tunnel, a roundabout, an overpass, continuously curved roads and at least one signalized intersection, with lanes marked by both solid and dashed lines; the road traffic signs may include turn signs, straight-ahead signs, speed limit signs and pedestrian crossings; the buildings may include a village and a school.
Simulate the vehicle parameters. Taking the URDF file format as an example, the real-time position, initial position, speed, number, vehicle type and color of the simulated vehicles are set, and the exact positions of the simulated vehicles are published in real time during motion;
In the simulation, for example, 30 passenger cars with fewer than seven seats and 10 medium or large buses with seven or more seats can be placed; all vehicles can travel on the road at speeds of 10 km/h to 60 km/h, and the initial positions of the data acquisition vehicle and the target vehicle are arbitrary roadside temporary stopping points.
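By way of illustration only (the patent gives no code), the fleet described above could be parameterized before spawning in Gazebo roughly as in the following Python sketch; the VehicleConfig fields, color choices and spawn coordinates are assumptions, not values from the patent:

```python
import random
from dataclasses import dataclass

@dataclass
class VehicleConfig:
    vehicle_type: str    # "car" (< 7 seats) or "bus" (>= 7 seats)
    color: str
    speed_kmh: float     # cruising speed, 10-60 km/h per this embodiment
    spawn_xy_yaw: tuple  # roadside temporary stopping point (x, y, yaw)

# Placeholder roadside points; real values would come from the .world file.
ROADSIDE_POINTS = [(5.0, 2.0, 0.0), (40.0, -3.5, 1.57), (80.0, 2.0, 0.0)]

def make_fleet(n_cars=30, n_buses=10):
    """Generate configs for 30 cars and 10 buses, as in the example above."""
    fleet = []
    for i in range(n_cars + n_buses):
        fleet.append(VehicleConfig(
            vehicle_type="car" if i < n_cars else "bus",
            color=random.choice(["red", "white", "black", "blue"]),
            speed_kmh=random.uniform(10.0, 60.0),
            spawn_xy_yaw=random.choice(ROADSIDE_POINTS),
        ))
    return fleet

if __name__ == "__main__":
    for cfg in make_fleet()[:3]:
        print(cfg)
```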
Simulate the sensor parameters, e.g., the camera's shooting attitude angle, or make the acquired virtual environment images closer to real images by adding image noise and the like;
the data acquisition vehicle is provided with a camera, and the attitude angle of the camera is within a preset range so as to clearly shoot the target vehicle and the front road environment.
Controlling the relative speed of the data acquisition vehicle and the target vehicle includes: controlling the relative speed of the two vehicles through a lane line detection algorithm and a target tracking algorithm based on the open-source Robot Operating System (ROS). Most robots can be controlled by installing ROS together with related software packages, and the accurate position and speed information of all vehicles in the simulation environment is published in real time through ROS.
The ROS-based method of controlling the relative speed of the data acquisition vehicle and the target vehicle through the lane line detection algorithm and the target tracking algorithm includes: starting the lane line detection and keeping node in the ROS system, which mainly comprises the lane line detection program, the predicted trajectory generation program and the trajectory tracking program; once the system is started, each vehicle can drive autonomously along the road by means of this node.
Within the lane line detection and keeping node, the lane line detection program detects the lane line information, and the predicted trajectory generation program fits the middle line of two lane lines as the vehicle's expected travel trajectory. See Figs. 3(a) and 3(b): Fig. 3(a) schematically shows the road image captured by the target vehicle's onboard camera while traveling, before the lane line detection and keeping node is started; no lane lines are detected at this time, and the image is no different from a real road scene. Fig. 3(b) shows the effect after ROS starts the lane line detection and keeping node: lane lines α, β and γ are detected, and a lane middle line is fitted from them as the target vehicle's driving route, namely the path between lane lines β and γ in Fig. 3(b).
The trajectory tracking program then controls the data acquisition vehicle to track the target vehicle along the path fitted between lane lines β and γ, so that both the target vehicle and the data acquisition vehicle drive autonomously along the lane lines. See Fig. 3(c), which shows the effect when the target vehicle travels along the lane lines: given the detected lane lines α, β and γ, and following the keep-right rule used in mainland China, the target vehicle drives along the fitted path between lane lines β and γ while sending its real-time position to the data acquisition vehicle; the ROS system then controls the data acquisition vehicle to follow the target vehicle ahead along the fitted path via a target tracking algorithm.
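A minimal sketch, under stated assumptions, of fitting the lane middle line between two detected boundaries (e.g., β and γ) for use as the desired path; representing each boundary as a polynomial in the image row coordinate is an illustrative choice, not the patent's specified algorithm:

```python
import numpy as np

def fit_lane_polynomial(points_xy: np.ndarray, degree: int = 2) -> np.ndarray:
    """Fit x = f(y) through detected lane-line pixels (column x, row y)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    return np.polyfit(y, x, degree)

def fit_midline(beta_coeffs, gamma_coeffs, y_samples):
    """Average the two boundary curves to obtain the lane middle line."""
    x_mid = 0.5 * (np.polyval(beta_coeffs, y_samples)
                   + np.polyval(gamma_coeffs, y_samples))
    return np.stack([x_mid, y_samples], axis=1)

if __name__ == "__main__":
    y = np.linspace(200, 480, 8)                       # sampled image rows
    beta = fit_lane_polynomial(np.stack([220 + 0.10 * y, y], axis=1))
    gamma = fit_lane_polynomial(np.stack([420 - 0.05 * y, y], axis=1))
    print(fit_midline(beta, gamma, y)[:3])             # first midline points
```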
Most existing unmanned driving experimental platforms are built by modifying production vehicles, and only vehicles that support Drive-By-Wire (DBW) can be driven by a computer program. While a vehicle is driving, ROS sends control instructions over the Drive-By-Wire CAN (Controller Area Network) bus to the virtual vehicle in the unmanned simulation environment simulated by the open-source physics simulation software Gazebo; ROS converts the digital signals into the actual physical quantities that control the vehicle's throttle, brake, steering and gear. In the Gazebo-simulated road environment, the simulated vehicle is controlled to detect lane lines, generate predicted trajectories and track those trajectories, so that it drives autonomously along the road while the data acquisition vehicle is controlled to track the target vehicle.
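By way of illustration only, a ROS node in the following style could publish the four command channels toward the simulated drive-by-wire interface. The topic names mirror the /mkz1/... topics described later in this document; the Float64 message type and the command values are placeholders assumed for illustration (a real DBW stack would use dedicated command message types):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("mkz1_driving_sketch")
    pubs = {
        "throttle": rospy.Publisher("/mkz1/throttle_cmd", Float64, queue_size=1),
        "brake":    rospy.Publisher("/mkz1/brake_cmd", Float64, queue_size=1),
        "steering": rospy.Publisher("/mkz1/steering_cmd", Float64, queue_size=1),
        "gear":     rospy.Publisher("/mkz1/gear_cmd", Float64, queue_size=1),
    }
    rate = rospy.Rate(50)  # 50 Hz control loop (assumed)
    while not rospy.is_shutdown():
        pubs["throttle"].publish(Float64(0.2))  # constant demo command
        pubs["brake"].publish(Float64(0.0))
        pubs["steering"].publish(Float64(0.0))
        pubs["gear"].publish(Float64(4.0))      # e.g. "drive"
        rate.sleep()

if __name__ == "__main__":
    main()
```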
All vehicles drive in the simulated road environment of the Gazebo unmanned simulation environment, which, as above, may include right-turn lanes, straight lanes, branch lanes, dedicated bus lanes, an underground tunnel, a roundabout, an overpass, continuously curved roads and at least one signalized intersection, with lanes marked by solid and dashed lines. Road traffic markings may include, for example, turn signs, straight-ahead signs, speed limit signs and pedestrian crossings; the buildings may include a village and a school. The target vehicle detects the lane lines on the road ahead via the lane line detection and keeping node in the ROS system and the lane line detection algorithm, fits a driving path from the detected lane lines, and then drives on the right along that path. Meanwhile, the target vehicle sends its real-time position to the data acquisition vehicle, so that the ROS system controls the data acquisition vehicle to follow it via the target tracking algorithm. The data acquisition vehicle can follow a target vehicle whose real-time speed is 10-60 km/h at a matching speed of 10-60 km/h, adjusting the relative driving speed in real time so that the relative distance stays within the preset range and the target vehicle always remains within the shooting range of the onboard camera. The data acquisition vehicle starts the automatic mode to track the target vehicle; the set tracking distance can be within 30 meters, and the onboard camera photographs and records the target vehicle as it drives.
Fig. 4 schematically shows a flowchart of a method for controlling the relative speed of a data collection vehicle and a target vehicle so that the relative distance between the data collection vehicle and the target vehicle is within a preset range, the method comprising:
In the manual mode, a control instruction is issued to the data acquisition vehicle through a manual control handle, so that the data acquisition vehicle and the target vehicle maintain a relative speed; or
in the automatic mode, the data acquisition vehicle tracks the target vehicle by setting a constant mode, a sine curve mode, a cosine curve mode or a step mode, as sketched below.
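A minimal sketch of how the automatic mode's four profiles could generate the commanded speed over time; the base speed, amplitude and period values are assumptions for illustration:

```python
import math

def commanded_speed(mode: str, t: float, base: float = 8.0,
                    amplitude: float = 3.0, period: float = 20.0) -> float:
    """Return the commanded speed (m/s) at time t (s) for the given mode."""
    if mode == "constant":
        return base
    if mode == "sine":
        return base + amplitude * math.sin(2 * math.pi * t / period)
    if mode == "cosine":
        return base + amplitude * math.cos(2 * math.pi * t / period)
    if mode == "step":
        # Alternate between high and low speed every half period.
        return base + (amplitude if (t // (period / 2)) % 2 == 0 else -amplitude)
    raise ValueError(f"unknown mode: {mode}")

if __name__ == "__main__":
    for t in (0.0, 5.0, 10.0, 15.0):
        print(t, commanded_speed("sine", t))
```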
S2, collecting first state information and first control information of the data collection vehicle, wherein each piece of first state information and each piece of first control information corresponds to a timestamp; and acquiring image information, second state information and second control information of the target vehicle, wherein each piece of image information, second state information and second control information corresponds to a timestamp.
Fig. 5 schematically shows a data distribution diagram of information collected by a data collection vehicle provided in a first embodiment of the present invention. The first state information of the data acquisition vehicle comprises a first instantaneous speed, a first acceleration and first global positioning system information; the second state information of the target vehicle includes a second instantaneous speed, a second acceleration, and second global positioning system information.
The first control information includes a first throttle, a first brake, a first gear and a first steering angle of the data acquisition vehicle; the second control information includes a second throttle, a second brake, a second gear and a second steering angle of the target vehicle. Multiple items of first state information, first control information, image information, second state information and second control information are collected, as shown in Table 1.
TABLE 1
Each frame of image information is labeled with the second instantaneous speed corresponding to its acquisition time, so that the instantaneous speed of the target vehicle, rather than an average speed, is obtained, increasing the diversity and accuracy of the samples.
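For illustration, a sketch of this frame-labeling step under an assumed file layout and CSV schema (neither is specified by the patent): each saved image file is paired with the target vehicle's instantaneous speed at the frame's acquisition time.

```python
import csv
import os

def label_frames(records, out_dir="dataset"):
    """records: iterable of (image_filename, timestamp_s, instant_speed_mps)."""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "labels.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "timestamp_s", "speed_mps"])
        for image, stamp, speed in records:
            writer.writerow([image, f"{stamp:.6f}", f"{speed:.3f}"])

if __name__ == "__main__":
    label_frames([("frame_000001.png", 12.345678, 11.12),
                  ("frame_000002.png", 12.378912, 11.09)])
```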
S3, judging whether the difference value between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference value range, judging whether the difference value between every two of the timestamp corresponding to the image information, the timestamp corresponding to the second state information and the timestamp corresponding to the second control information is within the first preset difference value range, and if yes, executing S4.
In this embodiment, the first preset difference range may be set to 1 millisecond, for example; the invention is not particularly limited in this respect. If the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within 1 millisecond, and the pairwise differences among the timestamps corresponding to the target vehicle's image information, second state information and second control information are also within 1 millisecond, the matching is regarded as successful and S4 is executed;
if the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is not within 1 millisecond, and/or any pairwise difference among the timestamps corresponding to the target vehicle's image information, second state information and second control information is not within 1 millisecond, the first state information, first control information, image information, second state information and second control information are deleted.
S4, judging whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference range, and judging whether the difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference range; if so, executing S5.
The second preset difference range may be set to 0.15 milliseconds, for example, and the invention is not limited thereto. If the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within 0.15 milliseconds, and the difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is also within 0.15 milliseconds, the matching is considered successful and S5 is executed; if the difference between the timestamps of the first and second state information is not within 0.15 milliseconds, and/or the difference between the timestamps of the first and second control information is not within 0.15 milliseconds, the first state information, second state information, first control information and second control information are deleted.
S5, storing the first state information, the first control information, the image information, the second state information and the second control information satisfying S4.
In this embodiment, taking the second preset difference range of S4 as 0.15 milliseconds, the information whose timestamp differences satisfy S4 is stored: the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within 0.15 milliseconds, the difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within 0.15 milliseconds, and each of the first state information, second state information, first control information and second control information carries its own timestamp.
The timestamped first control information and first state information of the data acquisition vehicle, together with the timestamped second control information, second state information and image information of the target vehicle, are regarded as one storage record.
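A minimal Python sketch of the two-stage timestamp matching and storage logic of S3-S5, using the example thresholds above (1 ms and 0.15 ms); the record structure is illustrative, not the patent's storage format:

```python
FIRST_RANGE_S = 1e-3      # first preset range: 1 millisecond (example above)
SECOND_RANGE_S = 0.15e-3  # second preset range: 0.15 milliseconds (example above)

def within(ts_a, ts_b, bound):
    return abs(ts_a - ts_b) <= bound

def match_and_store(first_state, first_ctrl, image, second_state, second_ctrl,
                    storage):
    """Each argument is a (timestamp_s, payload) tuple; storage is a list."""
    # S3: intra-vehicle consistency checks.
    if not within(first_state[0], first_ctrl[0], FIRST_RANGE_S):
        return False  # discard, per the embodiment
    target_stamps = [image[0], second_state[0], second_ctrl[0]]
    if any(not within(a, b, FIRST_RANGE_S)
           for i, a in enumerate(target_stamps)
           for b in target_stamps[i + 1:]):
        return False
    # S4: cross-vehicle consistency checks.
    if not (within(first_state[0], second_state[0], SECOND_RANGE_S)
            and within(first_ctrl[0], second_ctrl[0], SECOND_RANGE_S)):
        return False
    # S5: store the five matched items as one record.
    storage.append({"first_state": first_state, "first_ctrl": first_ctrl,
                    "image": image, "second_state": second_state,
                    "second_ctrl": second_ctrl})
    return True

if __name__ == "__main__":
    db = []
    ok = match_and_store((1.00000, "v1"), (1.00010, "c1"), (1.00005, "img"),
                         (1.00008, "v2"), (1.00015, "c2"), db)
    print(ok, len(db))  # True 1
```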
In this embodiment, in the unmanned simulation environment built with Gazebo, control instructions are sent via ROS to the virtual vehicles in the simulated environment so that the data acquisition vehicle tracks the target vehicle. This reduces the time and economic cost of manually collecting speed datasets in real environments and makes the collection of large-scale speed estimation data possible.
A second embodiment of the present invention illustrates a data acquisition system for an unmanned vehicle, wherein fig. 6 schematically illustrates a block diagram of the data acquisition system for the unmanned vehicle provided in the second embodiment of the present invention, and the data acquisition system 500 includes a simulation module 510, a control module 520, an acquisition module 530, a first judgment module 540, a second judgment module 550, and a storage module 560.
A simulation module 510 for simulating an unmanned simulation environment.
The simulation module 510 is designed based on the open-source physics simulation software Gazebo.
The simulation module 510 simulates the roads, buildings and traffic signs of a road environment; compared with a real road environment, the simulated road environment can freely define various complex scenes within a route of preset length, reducing the redundancy of the collected data.
Simulating vehicle parameters: defining the real-time position, initial position, speed, number, vehicle type and color of the simulated vehicles, and publishing the exact positions of the simulated vehicles in real time during motion;
Simulating sensor parameters: simulating the camera's shooting attitude angle, or making the acquired virtual environment images closer to real images by adding image noise and the like.
The control module 520 controls the relative speed of the data acquisition vehicle and the target vehicle in the simulated unmanned simulation environment, so that the relative distance between them stays within a preset range.
The control module 520 is designed based on the open-source Robot Operating System (ROS) and controls the relative speed of the data acquisition vehicle and the target vehicle through a lane line detection algorithm and a target tracking algorithm.
ROS sends control instructions over the Drive-By-Wire (DBW) Controller Area Network (CAN) to the virtual vehicle in the unmanned simulation environment simulated by the open-source physics simulation software Gazebo; ROS converts the digital signals into the actual physical quantities that control the vehicle's throttle, brake, steering and gear. In the Gazebo-simulated road environment, the simulated vehicle is controlled to detect lane lines and drive autonomously along the road while the data acquisition vehicle is controlled to track the target vehicle.
The method for controlling the relative speed of the data acquisition vehicle and the target vehicle to enable the relative distance between the data acquisition vehicle and the target vehicle to be within a preset range comprises the following steps:
In the manual mode, a control instruction is issued to the data acquisition vehicle through a manual control handle, so that the data acquisition vehicle and the target vehicle maintain a relative speed; or
in the automatic mode, the data acquisition vehicle tracks the target vehicle by setting a constant mode, a sine curve mode, a cosine curve mode or a step mode.
The open-source robot operating system sends control instructions to the open-source physics simulation software Gazebo through the drive-by-wire Controller Area Network (CAN) bus to control the relative speed of the data acquisition vehicle and the target vehicle; it also acquires the information corresponding to the data acquisition vehicle and the target vehicle in Gazebo through its internal node network.
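A hedged sketch of one workable route for that acquisition step: the gazebo_ros interface publishes the pose and twist of every model on /gazebo/model_states, which a ROS node can subscribe to. Using this topic here is our assumption, and the model names mkz1/mkz2 follow the naming of Fig. 7:

```python
#!/usr/bin/env python
import rospy
from gazebo_msgs.msg import ModelStates

def on_states(msg):
    # Report the instantaneous planar speed of the two vehicles of interest.
    for name, twist in zip(msg.name, msg.twist):
        if name in ("mkz1", "mkz2"):
            speed = (twist.linear.x ** 2 + twist.linear.y ** 2) ** 0.5
            rospy.loginfo("%s instantaneous speed: %.2f m/s", name, speed)

def main():
    rospy.init_node("state_acquisition_sketch")
    rospy.Subscriber("/gazebo/model_states", ModelStates, on_states)
    rospy.spin()

if __name__ == "__main__":
    main()
```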
FIG. 7 schematically illustrates the control of the data collection vehicle tracking the target vehicle, as provided in the first embodiment of the present invention. Referring to Fig. 7, in the control module, mkz1 is the target vehicle control system, mkz2 is the data acquisition vehicle control system, and gazebo provides the underlying platform on which the whole system runs. Ellipses in the figure represent control nodes, which send control commands and receive state information; small boxes represent topics, which receive control instructions or publish state information; arrows indicate the direction of data transmission between nodes and topics. The state information includes the first state information and the second state information.
In the mkz1 system, the /mkz1/driving node sends steering, brake, gear and throttle control commands to the four topics /mkz1/steering_cmd, /mkz1/brake_cmd, /mkz1/gear_cmd and /mkz1/throttle_cmd, respectively, so that autonomous driving of the target vehicle is realized through the combined action of the four commands. These four topics forward the control instructions to the /mkz1/dbw_node node, which converts them into torque control instructions that the simulated vehicle can recognize and sends them to the /gazebo topic, finally achieving concrete control of the vehicle in gazebo. In addition, /mkz1/driving simultaneously publishes its own position information to the /tf topic for the mkz2 system to use.
In the mkz2 system, the nodes and topics work essentially the same as in the mkz1 system, except that the control objective is to track the target vehicle; the /mkz2/tracking node therefore also receives mkz1's position information from the /tf topic to compute the target tracking control command.
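A hedged sketch of the /mkz2/tracking idea just described: look up the target vehicle's pose from /tf and regulate the following speed so the gap stays near a desired distance within the 30 m bound mentioned earlier. The frame names, gain and command topic are assumptions for illustration, not the patent's actual interface:

```python
#!/usr/bin/env python
import math
import rospy
import tf
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("mkz2_tracking_sketch")
    listener = tf.TransformListener()
    cmd_pub = rospy.Publisher("/mkz2/cmd_vel", Twist, queue_size=1)
    desired_gap = 20.0  # meters, assumed setpoint within the 30 m bound
    k_gap = 0.5         # proportional gain on the gap error (assumed)
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        try:
            # Target vehicle pose expressed in the follower's frame.
            (trans, _) = listener.lookupTransform("mkz2/base_link",
                                                  "mkz1/base_link",
                                                  rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            rate.sleep()
            continue
        gap = math.hypot(trans[0], trans[1])
        cmd = Twist()
        cmd.linear.x = max(0.0, k_gap * (gap - desired_gap))  # slow when close
        cmd_pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```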
The acquisition module 530 acquires first state information and first control information of a data acquisition vehicle, wherein the first state information and the first control information respectively correspond to a timestamp; and acquiring image information, second state information and second control information of the target vehicle, wherein the image information, the second state information and the second control information respectively correspond to a timestamp.
The acquisition module 530 is based on an open source robotic operating system design.
Wherein the first status information comprises a first instantaneous speed, a first acceleration, and first global positioning system information of the data collection vehicle; the second status information includes a second instantaneous speed, a second acceleration, and second global positioning system information of the target vehicle.
The first control information includes the data acquisition vehicle's first throttle, first brake, first gear and first steering angle; the second control information includes the target vehicle's second throttle, second brake, second gear and second steering angle.
The first judging module 540 judges whether the difference between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference range, and whether every pairwise difference among the timestamp corresponding to the image information, the timestamp corresponding to the second state information and the timestamp corresponding to the second control information is within the first preset difference range; if so, the flow passes to the second judging module.
The second determining module 550 is configured to determine whether a difference between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference range, determine whether a difference between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference range, and if so, enter the storage module.
The storage module 560 stores the first state information, first control information, image information, second state information and second control information satisfying the second judging module.
The above-mentioned embodiments further illustrate the objects, technical solutions and advantages of the present invention in detail. It should be understood that they are only exemplary embodiments of the present invention and are not intended to limit it; any modifications, equivalents, improvements and the like made within the spirit and principles of the present invention shall be included in its protection scope.

Claims (10)

1. A data collection method for an unmanned vehicle, comprising:
S1, simulating an unmanned simulation environment, and controlling the relative speed of a data acquisition vehicle and a target vehicle in the unmanned simulation environment to enable the relative distance between the data acquisition vehicle and the target vehicle to be within a preset range;
S2, collecting first state information and first control information of a data collection vehicle, wherein each piece of first state information and each piece of first control information correspond to a time stamp respectively; acquiring image information, second state information and second control information of a target vehicle, wherein each piece of image information, each piece of second state information and each piece of second control information respectively correspond to a timestamp;
S3, judging whether the difference value between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference value range, judging whether the difference value between every two of the timestamp corresponding to the image information, the timestamp corresponding to the second state information and the timestamp corresponding to the second control information is within a first preset difference value range, and if yes, executing S4;
S4, judging whether the difference value between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference value range, judging whether the difference value between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference value range, and if yes, executing S5;
S5, storing the first state information, the first control information, the image information, the second state information, and the second control information satisfying S4.
2. The method of claim 1, wherein in step S1, the simulating an unmanned simulation environment comprises:
simulating the unmanned simulation environment using the open-source physical simulation software Gazebo.
3. The method of claim 1, wherein in step S1, the controlling the relative speed of the data-gathering vehicle and the target vehicle comprises:
controlling the relative speed of the data acquisition vehicle and the target vehicle through a lane line detection algorithm and a target tracking algorithm based on an open-source robot operating system.
4. The method of claim 1, wherein the first status information includes a first instantaneous speed of the data collection vehicle and the second status information includes a second instantaneous speed of the target vehicle.
5. The method according to claim 4, wherein each frame of image information is labeled with the second instantaneous speed value corresponding to its acquisition time.
6. The method of claim 1, wherein in step S1, the controlling the relative speed of the data acquisition vehicle and the target vehicle comprises:
In the manual mode, a control instruction is issued to the data acquisition vehicle through a manual control handle, so that the data acquisition vehicle and the target vehicle keep relative speed; or
in the automatic mode, the data acquisition vehicle tracks the target vehicle by setting a constant mode, a sine curve mode, a cosine curve mode or a step mode.
7. The method of claim 1, further comprising:
S0, configuring parameters needed for simulating the unmanned simulation environment.
8. A data acquisition system for an unmanned vehicle, comprising:
the simulation module is used for simulating an unmanned simulation environment;
the control module is used for controlling the relative speed of a data acquisition vehicle and a target vehicle in the simulated unmanned simulation environment, so that the relative distance between the data acquisition vehicle and the target vehicle is within a preset range;
the acquisition module acquires first state information and first control information of a data acquisition vehicle, wherein the first state information and the first control information each correspond to a timestamp; and acquires image information, second state information and second control information of the target vehicle, wherein the image information, the second state information and the second control information each correspond to a timestamp;
The first judging module is used for judging whether the difference value between the timestamp corresponding to the first state information and the timestamp corresponding to the first control information is within a first preset difference value range or not, judging whether the difference value between every two of the timestamp corresponding to the image information, the timestamp corresponding to the second state information and the timestamp corresponding to the second control information is within the first preset difference value range or not, and if so, entering the second judging module;
The second judging module is used for judging whether the difference value between the timestamp corresponding to the first state information and the timestamp corresponding to the second state information is within a second preset difference value range or not, judging whether the difference value between the timestamp corresponding to the first control information and the timestamp corresponding to the second control information is within the second preset difference value range or not, and if yes, entering the storage module;
and the storage module stores the first state information, the first control information, the image information, the second state information and the second control information which meet the requirement of the second judging module.
9. The system of claim 8, wherein the simulation module is based on the design of open source physics simulation software Gazebo, and the control module and the acquisition module are based on the open source robot operating system design.
10. The system of claim 9, wherein the open-source robot operating system sends a control instruction to the open-source physical simulation software Gazebo through a drive-by-wire Controller Area Network (CAN) bus to control the relative speed of the data acquisition vehicle and the target vehicle;
and the open-source robot operating system acquires information corresponding to the data acquisition vehicle and the target vehicle in the open-source physical simulation software Gazebo through an internal node network.
CN201910856083.7A 2019-09-10 2019-09-10 Data acquisition method and system for unmanned vehicle Active CN110569602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910856083.7A CN110569602B (en) 2019-09-10 2019-09-10 Data acquisition method and system for unmanned vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910856083.7A CN110569602B (en) 2019-09-10 2019-09-10 Data acquisition method and system for unmanned vehicle

Publications (2)

Publication Number Publication Date
CN110569602A true CN110569602A (en) 2019-12-13
CN110569602B CN110569602B (en) 2022-04-19

Family

ID=68779282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910856083.7A Active CN110569602B (en) 2019-09-10 2019-09-10 Data acquisition method and system for unmanned vehicle

Country Status (1)

Country Link
CN (1) CN110569602B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105679093A (en) * 2016-02-23 2016-06-15 江苏大学 Multi-vehicle coordination collision avoidance system and method based on vehicle-vehicle communication
US9953535B1 (en) * 2016-06-27 2018-04-24 Amazon Technologies, Inc. Annotated virtual track to inform autonomous vehicle control
CN108765926A (en) * 2018-05-29 2018-11-06 重庆大学 A kind of vehicle collaboration follower method based on truck traffic
CN109450582A (en) * 2018-11-01 2019-03-08 百度在线网络技术(北京)有限公司 Sensor time stabs synchronous detecting method, device, equipment, medium and vehicle
CN109552326A (en) * 2018-11-05 2019-04-02 浙江工业大学 A kind of vehicle speed variation cruise control method under radio communication channel decaying
CN109946995A (en) * 2019-03-26 2019-06-28 湖北亿咖通科技有限公司 Emulation test method, device and the intelligent terminal of automatic Pilot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宋威龙: "Research on Behavioral Decision-Making of Intelligent Vehicles in Dynamic Urban Environments" (城区动态环境下智能车辆行为决策研究), China Doctoral Dissertations Full-text Database, Engineering Science and Technology II *
黄武陵: "Building an Environment Perception System for Unmanned Vehicles Based on ROS" (基于ROS构建无人驾驶车辆环境感知系统), Microcontrollers & Embedded Systems *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523478A (en) * 2020-04-24 2020-08-11 中山大学 Pedestrian image detection method acting on target detection system
CN111523478B (en) * 2020-04-24 2023-04-28 中山大学 Pedestrian image detection method acting on target detection system
CN111624894A (en) * 2020-04-28 2020-09-04 东风汽车集团有限公司 Simulation test method and system for parallel driving
CN111624894B (en) * 2020-04-28 2022-03-01 东风汽车集团有限公司 Simulation test method and system for parallel driving
CN112256538A (en) * 2020-09-01 2021-01-22 北京航天控制仪器研究所 Unmanned ship equipment information acquisition processing and control method
CN112348879A (en) * 2020-10-30 2021-02-09 深圳市优必选科技股份有限公司 Vehicle operation control method and device, electronic equipment and storage medium
CN112348879B (en) * 2020-10-30 2023-12-19 深圳市优必选科技股份有限公司 Vehicle operation control method and device, electronic equipment and storage medium
CN113610792A (en) * 2021-07-30 2021-11-05 杭州申昊科技股份有限公司 Track fastener detection method, device and readable storage medium
RU2817392C1 (en) * 2023-09-19 2024-04-16 Артем Анатольевич Задорожный Method for testing electronic countermeasures systems of unmanned aerial vehicles

Also Published As

Publication number Publication date
CN110569602B (en) 2022-04-19

Similar Documents

Publication Publication Date Title
CN110569602B (en) Data acquisition method and system for unmanned vehicle
CN111788102B (en) Odometer system and method for tracking traffic lights
JP2021049969A (en) Systems and methods for calibrating steering wheel neutral position
CN109062209A (en) A kind of intelligently auxiliary Ride Control System and its control method
JP2020087464A (en) System and method for registering 3d data in 2d image data
CN106114217A (en) Travel controlling system
BR112019027564A2 (en) vehicle information storage method, vehicle travel control method, and vehicle information storage device
CN110546696A (en) method for automatically generating and updating data sets for autonomous vehicles
CN105976606A (en) Intelligent urban traffic management platform
CN107664993A (en) A kind of paths planning method
CN107664504A (en) A kind of path planning apparatus
US11657625B2 (en) System and method for determining implicit lane boundaries
Zeilinger et al. Design of an autonomous race car for the formula student driverless (fsd)
US11556127B2 (en) Static obstacle map based perception system
EP3825958B1 (en) A new way to generate tight 2d bounding boxes for autonomous driving labeling
US20230150549A1 (en) Hybrid log simulated driving
EP4055460B1 (en) Method and system for controlling vehicle
US20240083458A1 (en) Using simulations to identify differences between behaviors of manually-driven and autonomous vehicles
WO2024006115A1 (en) Determining right of way
US11872981B2 (en) Operating a motor vehicle with onboard and cloud-based data
WO2021241834A1 (en) Virtual lane generation apparatus and method based on traffic flow information perception for autonomous driving in adverse weather conditions
CN113085868A (en) Method, device and storage medium for operating an automated vehicle
JP7260575B2 (en) map generator
GB2610446A (en) Navigation with drivable area detection
JP2022128712A (en) Road information generation device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant