WO2020233436A1 - Vehicle speed determination method, and vehicle - Google Patents

Vehicle speed determination method, and vehicle

Info

Publication number
WO2020233436A1
WO2020233436A1 · PCT/CN2020/089606 · CN2020089606W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
dimensional
data
point cloud
training
Prior art date
Application number
PCT/CN2020/089606
Other languages
French (fr)
Chinese (zh)
Inventor
苗振伟
黄庆乐
王兵
王刚
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Publication of WO2020233436A1 publication Critical patent/WO2020233436A1/en

Classifications

    • G: PHYSICS
        • G08: SIGNALLING
            • G08G: TRAFFIC CONTROL SYSTEMS
                • G08G1/00: Traffic control systems for road vehicles
                    • G08G1/01: Detecting movement of traffic to be counted or controlled
                        • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
                        • G08G1/052: Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00: Image analysis
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T10/00: Road transport of goods or passengers
                    • Y02T10/10: Internal combustion engine [ICE] based vehicles
                        • Y02T10/40: Engine management systems

Definitions

  • This application relates to the field of autonomous driving technology, and in particular to a method for determining vehicle speed and to a vehicle.
  • Estimating the speed of other vehicles while a vehicle is driving is key to road traffic safety and a prerequisite for autonomous driving: it helps a self-driving vehicle predict the future trajectories of surrounding vehicles in the driving scene and avoid possible collisions.
  • Autonomous vehicles are usually equipped with a variety of sensors, and the data from these sensors has the potential to estimate the vehicle speed.
  • The following describes three commonly used methods for determining vehicle speed and their respective problems.
  • Vehicle speed determination based on millimeter-wave radar data. With the help of the Doppler effect, this method can provide accurate speed measurements for other vehicles. However, it places high requirements on the position and heading of those vehicles: for vehicles outside the millimeter-wave propagation area, or whose direction of motion is not parallel to the propagation direction, the measured speed often has a large error.
  • Vehicle speed determination based on RGB images collected by a camera. This method uses deep learning technology, especially optical flow estimation technology such as FlowNet, to estimate the speed of objects in the images.
  • However, ordinary RGB cameras have an obvious defect: they are almost unusable at night.
  • In view of this, the present application provides a method for determining vehicle speed, to solve the problem of low speed-estimation accuracy in the prior art.
  • This application additionally provides a vehicle speed determination device, a method and device for constructing a vehicle speed prediction model, electronic equipment, roadside sensing equipment, and a vehicle.
  • This application provides a method for determining vehicle speed, including:
  • the vehicle speed is determined according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • The determining of the vehicle speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data includes:
  • the vehicle speed is determined according to the two-dimensional vehicle position offset data and the time interval.
  • the two-dimensional vehicle position offset data is determined according to the at least two two-dimensional vehicle images through a vehicle speed prediction model.
  • Optionally, the method also includes:
  • the vehicle speed prediction model is learned from a training data set; the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data.
  • the vehicle speed prediction model is determined using the following steps:
  • the vehicle speed prediction model is learned from the training data set.
  • the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature upsampling layer.
  • the two-dimensional vehicle position offset truth data includes a two-dimensional vehicle position offset truth map having the same image size as the training two-dimensional vehicle image.
  • the training data set is determined using the following steps:
  • the center point offset of the three-dimensional vehicle bounding box of the same vehicle across the preset two frames is taken as the true value of the three-dimensional vehicle position offset;
  • the two-dimensional vehicle position offset truth map is formed according to the true two-dimensional vehicle position offsets; and at least two training two-dimensional vehicle images are generated based on the vehicle point cloud data in the at least two frames of environmental point cloud data for training.
  • the training data set is determined using the following steps:
  • the center point offset of the three-dimensional vehicle bounding box of the same vehicle across the preset two frames is taken as the true value of the three-dimensional vehicle position offset;
  • according to the vehicle point cloud data in the at least two frames of environmental point cloud data for training, at least two training two-dimensional vehicle images are generated.
  • the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle image;
  • the determining the vehicle speed according to the two-dimensional vehicle position offset data and the time interval includes:
  • the ratio of the average abscissa offset component and the average ordinate offset component, taken over the pixels corresponding to each vehicle in the two-dimensional vehicle position offset data map, to the time interval is used as the traveling speed of that vehicle.
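The per-vehicle averaging described above can be sketched as follows (a minimal NumPy illustration; the function name, the 2-channel array layout, and the metre units are assumptions, not details from the application):

```python
import numpy as np

def vehicle_speed(offset_map, vehicle_mask, dt):
    """Estimate one vehicle's velocity from a 2-channel position-offset map.

    offset_map:   (2, H, W) array; channels hold the x and y offsets (metres)
                  accumulated over the interval dt.
    vehicle_mask: (H, W) boolean array marking the pixels of one vehicle.
    dt:           time interval between the two frames, in seconds.
    Returns (vx, vy) in metres per second.
    """
    dx = offset_map[0][vehicle_mask].mean()  # average abscissa offset
    dy = offset_map[1][vehicle_mask].mean()  # average ordinate offset
    return dx / dt, dy / dt
```

For example, if every pixel of a vehicle carries an offset of 1.0 m in x over a 0.1 s interval, the estimated x-velocity is 10 m/s.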
  • the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle image;
  • the determining the vehicle speed according to the two-dimensional vehicle position offset data and the time interval includes:
  • the ratio of the average abscissa offset component and the average ordinate offset component of the spatial points corresponding to the vehicle to the time interval is used as the vehicle's traveling speed.
  • the two-dimensional vehicle image includes a two-dimensional vehicle image from a top view angle.
  • the generating at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data includes:
  • determining the attitude data of the own vehicle; converting, according to the attitude data, the vehicle point cloud data of the frames before the last frame into vehicle point cloud data in the point cloud coordinate system of the last frame; and
  • generating, according to the converted vehicle point cloud data, the two-dimensional vehicle images corresponding to the vehicle point cloud data before the last frame.
  • Optionally, the method also includes:
  • the vehicle point cloud data is extracted from the road environment point cloud data through a vehicle detection model.
  • This application also provides a method for constructing a vehicle speed prediction model, including:
  • the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data;
  • the vehicle speed prediction model is learned from the training data set.
  • the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature upsampling layer.
  • the two-dimensional vehicle position offset truth data includes a two-dimensional vehicle position offset truth map having the same image size as the training two-dimensional vehicle image.
  • the training data set is determined using the following steps:
  • the center point offset of the three-dimensional vehicle bounding box of the same vehicle across the preset two frames is taken as the true value of the three-dimensional vehicle position offset;
  • the two-dimensional vehicle position offset truth map is formed according to the true two-dimensional vehicle position offsets; and at least two training two-dimensional vehicle images are generated based on the vehicle point cloud data in the at least two frames of environmental point cloud data for training.
  • the training data set is determined using the following steps:
  • the center point offset of the three-dimensional vehicle bounding box of the same vehicle across the preset two frames is taken as the true value of the three-dimensional vehicle position offset;
  • according to the vehicle point cloud data in the at least two frames of environmental point cloud data for training, at least two training two-dimensional vehicle images are generated.
  • the application also provides a vehicle speed determining device, including:
  • An image generating unit configured to generate at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data;
  • the speed determining unit is configured to determine the vehicle traveling speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • This application also provides a vehicle speed prediction model construction device, including:
  • a data determining unit, configured to determine a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data;
  • the network construction unit is used to construct the network structure of the vehicle speed prediction model
  • the model training unit is used to learn the vehicle speed prediction model from the training data set.
  • This application also provides a vehicle, including:
  • the memory is used to store the program implementing the vehicle speed determination method. After the device is powered on and the program is run by the processor, the following steps are executed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of the road environment point cloud data; and determining the vehicle speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • This application also provides a roadside sensing device, including:
  • the memory is used to store the program implementing the vehicle speed determination method. After the device is powered on and the program is run by the processor, the following steps are executed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of the road environment point cloud data; and determining the vehicle speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • This application also provides an electronic device, including:
  • the memory is used to store the program implementing the vehicle speed determination method. After the device is powered on and the program is run by the processor, the following steps are executed: generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data; and determining the vehicle speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • This application also provides an electronic device, including:
  • the memory is used to store the program implementing the method for constructing a vehicle speed prediction model. After the device is powered on and the program is run by the processor, the following steps are performed: determining a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data; constructing a network structure of the vehicle speed prediction model; and learning the vehicle speed prediction model from the training data set.
  • the present application also provides a computer-readable storage medium having instructions stored in the computer-readable storage medium, which when run on a computer, cause the computer to execute the above-mentioned various methods.
  • the present application also provides a computer program product including instructions, which when run on a computer, causes the computer to execute the above-mentioned various methods.
  • At least two two-dimensional vehicle images are generated based on the vehicle point cloud data in at least two frames of road environment point cloud data, and the vehicle speed is determined according to the at least two two-dimensional vehicle images and the time interval between any two frames of the at least two frames of road environment point cloud data; this processing method makes it possible to determine the vehicle speed from two-dimensional vehicle images generated from the vehicle point cloud data in the at least two frames of road environment point cloud data.
  • The vehicle speed prediction model construction method determines a training data set, in which the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data, constructs the network structure of the vehicle speed prediction model, and learns the model from the training data set. This processing method allows a model that predicts vehicle displacement from at least two two-dimensional vehicle images to be learned from a large amount of training data, and can therefore effectively improve the accuracy of the vehicle speed prediction model.
  • FIG. 1 is a flowchart of an embodiment of a method for determining vehicle speed provided by the present application;
  • FIG. 2 is a specific flowchart of an embodiment of a method for determining vehicle speed provided by the present application;
  • FIG. 3 is a schematic diagram of a network structure of a vehicle speed prediction model in an embodiment of a vehicle speed determination method provided by the present application;
  • FIG. 4 is a schematic diagram of an embodiment of a vehicle speed determining device provided by the present application;
  • FIG. 5 is a schematic diagram of an embodiment of a vehicle provided by the present application;
  • FIG. 6 is a schematic diagram of an embodiment of a roadside sensing device provided by the present application;
  • FIG. 7 is a schematic diagram of an embodiment of an electronic device provided by the present application;
  • FIG. 8 is a flowchart of an embodiment of a method for constructing a vehicle speed prediction model provided by the present application;
  • FIG. 9 is a schematic diagram of an embodiment of a vehicle speed prediction model construction device provided by the present application;
  • FIG. 10 is a schematic diagram of an embodiment of an electronic device provided by the present application.
  • FIG. 1 is a flowchart of an embodiment of a method for determining vehicle speed provided by this application.
  • The execution subject of the method may be an unmanned vehicle, a roadside sensing device, a server, or the like.
  • the following uses an unmanned vehicle as an example to describe a method for determining a vehicle speed provided in this application.
  • a method for determining vehicle speed provided by this application includes:
  • Step S101 Generate at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data.
  • In this step, the spatial coordinates of sampling points on the surfaces of objects in the environment of the road on which the vehicle travels can be obtained through a three-dimensional space scanning device installed on the vehicle, yielding a collection of points; this massive point data is called road environment point cloud (Point Cloud) data.
  • the scanned object surface is recorded in the form of points.
  • Each point contains three-dimensional coordinates, and some points may also contain color information (RGB) or reflection intensity information (Intensity).
  • The three-dimensional space scanning device may be a lidar (Light Detection and Ranging), which performs laser detection and measurement by laser scanning to obtain information about obstacles in the surrounding environment, such as buildings, trees, people, and vehicles.
  • The measured data is a discrete point representation of the Digital Surface Model (DSM).
  • 16-line, 32-line, 64-line and other multi-line lidars can be used. Lidars with different numbers of laser beams collect point cloud data at different frame rates; for example, 16-line and 32-line lidars generally collect 10 frames of point cloud data per second.
  • the three-dimensional space scanning device may also be equipment such as a three-dimensional laser scanner or a photographic scanner.
  • the own vehicle can generate at least two two-dimensional vehicle images based on the vehicle point cloud data in at least two frames of road environment point cloud data.
  • the road environment point cloud data may include point cloud data of various objects in the road environment space, and these objects may be trees, buildings, pedestrians and other vehicles on the road, and so on.
  • the method provided in the embodiment of the present application determines the driving speed of other vehicles on the road based on the vehicle point cloud data in at least two frames of road environment point cloud data.
  • The at least two frames of road environment point cloud data may be two or more frames of environmental point cloud data recently collected by the current vehicle (the own vehicle); for example, the current vehicle collects a total of δ+1 frames of environmental point cloud data at the δ+1 times t_{n-δ}, ..., t_{n-1}, t_n while driving.
  • Each frame of environmental point cloud data may include point cloud data of multiple vehicles; therefore, the method provided in the embodiments of the present application can determine the driving speeds of multiple vehicles from the δ+1 frames of environmental point cloud data.
  • In this embodiment, the vehicle point cloud data is extracted from the road environment point cloud data through a vehicle detection model. After the lidar on the vehicle obtains a frame of environmental point cloud data by scanning, the environmental point cloud data can be passed to the vehicle detection model, which detects the vehicles and their three-dimensional position data in the environmental point cloud data, that is, determines the vehicle point cloud data in the environmental point cloud data.
  • the three-dimensional position data may be vertex coordinate data of the rectangular cube bounding box of the vehicle, and so on.
  • The vehicle detection model can use the deep-learning-based RefineDet method, which combines the fast running speed of single-stage methods such as SSD with the high accuracy of two-stage methods such as Faster R-CNN, and therefore achieves high vehicle detection accuracy.
  • After the method detects the vehicle point cloud data in the environmental point cloud data, it obtains the bounding box coordinates of each vehicle, that is, the position data of the vehicle point cloud data within the environmental point cloud data.
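As an illustrative sketch of using such bounding-box coordinates to pick out a vehicle's points (assuming, for simplicity, an axis-aligned box; a real detector's boxes may be oriented, and all names here are hypothetical):

```python
import numpy as np

def points_in_box(points, box_min, box_max):
    """Return the points lying inside an axis-aligned 3-D bounding box.

    points:  (N, 3) array of x, y, z coordinates from one point-cloud frame.
    box_min, box_max: (3,) arrays, two opposite corners of the box, as could
    be derived from a detector's vertex coordinates.
    """
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]
```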
  • Step S101 generates at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of environmental point cloud data.
  • The two-dimensional vehicle image may be a two-dimensional environmental image (that is, a two-dimensional image of the three-dimensional scene constructed from the environmental point cloud data) from which the images of objects other than vehicles have been removed; in other words, a two-dimensional vehicle image may be a two-dimensional environment image that includes only vehicle images.
  • the two-dimensional vehicle image may be a two-dimensional environment image including only the vehicle image in a top view perspective.
  • In this way, the two-dimensional vehicle image can include as many two-dimensional projection points of each vehicle as possible, and a vehicle speed determined from a more complete set of vehicle points will be more accurate.
  • two-dimensional vehicle images from other viewing angles such as left view, right view, front view, etc., can also be used.
  • For example, the environmental point cloud data of two adjacent frames can be taken (the earlier frame recorded as frame 0 and the later frame as frame 1), and the vehicle point clouds of the two frames processed separately from the top-view perspective.
  • the range of the two-dimensional vehicle image can cover a certain area near the own vehicle.
  • Since the own vehicle may move between frames, frame 0 is first projected into the coordinate system of the frame-1 point cloud, and the two-dimensional vehicle image is then generated.
  • As another example, multiple frames of environmental point cloud data, such as frame 0, frame 1, ..., frame 10, can be taken, the vehicle point clouds of all frames processed separately from the top-view perspective, and multiple (for example, 10) corresponding multi-channel two-dimensional vehicle images generated (the channels including the density of vehicle points, the number of points, and so on). The range of these two-dimensional vehicle images can likewise cover a certain area near the own vehicle.
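The top-view image generation described above might be sketched as follows (the covered range, the grid resolution, and the two channels shown are illustrative choices, not values specified by the application):

```python
import numpy as np

def bev_image(points, x_range=(-40.0, 40.0), y_range=(-40.0, 40.0), res=0.25):
    """Rasterise vehicle points into a top-view (bird's-eye-view) image.

    Returns a 2-channel image: channel 0 holds the point count per cell,
    channel 1 a density value in [0, 1] (count clipped and normalised).
    """
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((2, h, w), dtype=np.float32)
    # Keep only points inside the covered area near the own vehicle.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[keep]
    cols = ((pts[:, 0] - x_range[0]) / res).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / res).astype(int)
    np.add.at(img[0], (rows, cols), 1.0)    # point-count channel
    img[1] = np.minimum(img[0] / 8.0, 1.0)  # normalised density channel
    return img
```

Images for the frames before the last one would be rasterised the same way after the coordinate-system conversion described below.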
  • In specific implementation, step S101 may include the following sub-steps: 1) determine the attitude data of the own vehicle; 2) according to the attitude data, convert the vehicle point cloud data of the frames before the last frame (for example, frame 10) into vehicle point cloud data in the point cloud coordinate system of the last frame; 3) according to the converted vehicle point cloud data, generate the two-dimensional vehicle images corresponding to the frames before the last frame.
  • For frames 0 to 9, since the own vehicle may move, it is necessary to synchronize the point cloud coordinate systems in time.
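The coordinate-system synchronization can be sketched as a rigid transform of the earlier frames' points into the last frame, assuming the relative rotation and translation between the two ego poses have been derived from the attitude data (all names are illustrative):

```python
import numpy as np

def to_last_frame(points, R_rel, t_rel):
    """Transform points captured in an earlier frame into the coordinate
    system of the last frame.

    points: (N, 3) array of point coordinates in the earlier frame.
    R_rel:  (3, 3) rotation and t_rel: (3,) translation taking earlier-frame
            coordinates into last-frame coordinates, derivable from the own
            vehicle's pose at the two timestamps.
    """
    return points @ R_rel.T + t_rel
```

For a stationary sensor (the roadside case discussed next), R_rel is the identity and t_rel is zero, so this step can be skipped.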
  • If the execution subject of the method provided in the embodiments of the present application is a roadside sensing device, the location of the device is fixed, so when generating the two-dimensional vehicle images for the frames before the last frame there is no need to determine the attitude data of the roadside sensing device, nor to convert the earlier vehicle point cloud data into the point cloud coordinate system of the last frame according to such attitude data.
  • Step S103: Determine the driving speed of the vehicle according to the at least two two-dimensional vehicle images and the time interval between any two frames of the at least two frames of road environment point cloud data.
  • Given the two-dimensional vehicle images and the time interval between the corresponding frames of data, the speed of the vehicle can be determined.
  • The vehicle speed is determined according to the position of each vehicle in each frame image and the time interval. For example, suppose that in frame 0 vehicle A is in front of vehicle B, and in frame 1 vehicle B has overtaken vehicle A and is now in front of it; this means that the speed of vehicle B is higher than that of vehicle A. From the positions of vehicle A and vehicle B and the time interval between the two frames of data, the speeds of vehicle A and vehicle B can be determined.
  • step S103 may include the following sub-steps:
  • Step S1031 Determine the two-dimensional vehicle position offset data corresponding to the time interval according to the at least two two-dimensional vehicle images.
  • the two-dimensional vehicle position offset data may include the abscissa position offset and the ordinate position offset of the vehicle in a two-frame time interval.
  • the two-frame time interval may be the time interval of any two frames of data in the at least two frames of road environment point cloud data.
  • For example, the own vehicle collects δ+1 frames of environmental point cloud data at times t_{n-δ}, ..., t_{n-1}, t_n while driving. The method provided in the embodiments of this application can determine the distance other vehicles move between time t_{n-1} and time t_n, that is, their position offsets along the ground abscissa and ordinate; the time interval in this case is t_n - t_{n-1}. In specific implementation, the distance moved from t_{n-2} to t_{n-1} may also be determined, in which case the time interval is t_{n-1} - t_{n-2}; or the distance moved between t_{n-δ} and t_{n-3}, in which case the time interval is t_{n-3} - t_{n-δ}.
  • In this embodiment, a vehicle speed prediction model determines the two-dimensional vehicle position offset data of the vehicle over a two-frame time interval.
  • The vehicle speed prediction model can be learned from a large training data set in which at least two two-dimensional vehicle images are annotated with two-dimensional vehicle position offset true value data; that is, each item of training data includes at least two training two-dimensional vehicle images and the corresponding two-dimensional vehicle position offset true value data.
  • the two-dimensional vehicle position offset true value data may be the two-dimensional vehicle position offset true value of the vehicle in the last two frame time interval, or the two-dimensional vehicle position offset true value of the vehicle in any two frame time interval.
  • The two-dimensional vehicle position offset truth data may include a two-dimensional vehicle position offset truth map with the same image size as the training two-dimensional vehicle image, or a truth map with a smaller image size than the training image. It may even include only a very small number of true offset values; in the extreme case it includes only two values, namely the true displacement offset of the vehicle's abscissa and the true displacement offset of the vehicle's ordinate.
  • FIG. 2 is a specific flowchart of the method provided in an embodiment of this application.
  • the method may further include the following steps:
  • Step S201 learn from the training data set to obtain a vehicle speed prediction model.
  • the training data set includes a large amount of training data, that is, training samples. It should be noted that the number of two-dimensional vehicle images included in the training data during model training should be the same as the number of two-dimensional vehicle images input by the model when the model is used for speed prediction.
  • step S201 may include the following sub-steps:
  • Step S2011 Determine the training data set.
  • In this embodiment, the training data set is determined using the following steps: 1) obtain at least two frames of environmental point cloud data for training, annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center-point offset of the three-dimensional vehicle bounding box of the same vehicle across the preset two frames as the true value of the three-dimensional vehicle position offset; 3) project the true three-dimensional vehicle position offset into the top-view coordinate system to obtain the true two-dimensional vehicle position offset; 4) form the two-dimensional vehicle position offset truth map from the true two-dimensional vehicle position offsets, and generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environmental point cloud data.
  • To prepare such data, the model network needs annotation data that provides 3-D rectangular bounding boxes (enclosing the vehicle point cloud data) for consecutive frames (two or more) together with the correspondence of objects (vehicles) between frames, so that the same vehicle can be extracted from adjacent frames and at least two training two-dimensional vehicle images can be generated.
  • The offset of the center point of the same vehicle's 3-D rectangular bounding box between adjacent frames is used as the true offset value for the network regression; it is projected into the top-view coordinate system, and the true center-point offset is filled into the corresponding positions of the truth map.
  • Table 1 shows the annotation data used to determine the training data in this embodiment.
  • the annotation data in Table 1 provide 3D rectangular bounding boxes for vehicles in n consecutive frames together with the correspondence between frames, so that the same vehicle can be extracted across the n consecutive frames.
  • Step S2013 construct a network structure of the prediction model.
  • FIG. 3 is a schematic diagram of the network structure of the prediction model of the method provided in an embodiment of the application.
  • the model network structure of this embodiment is a convolutional neural network, which may include multiple convolutional layers and multiple deconvolutional layers.
  • the two-dimensional vehicle position offset data map output by the vehicle speed prediction model has the same image size as the two-dimensional vehicle images for training.
  • the network concatenates the two-dimensional vehicle images generated from the point clouds of the preceding and following frames along the channel dimension as the input data of the model.
  • the model outputs a two-channel, two-dimensional vehicle position offset data map with width and height equal to the input image size.
  • since the model output map includes two-dimensional vehicle position offset data, which reflects the speed information of the vehicle, it can also be called a speed map.
  • the two channels respectively give, for the point cloud present at the corresponding pixel position, the offset components in the x and y directions of the image coordinate system.
  • through several convolutional layers, high-dimensional vehicle displacement features with a smaller feature map size are extracted; several deconvolution layers then restore the feature map to the size of the original input image, and the model output map includes the two-dimensional vehicle position offset, over the two-frame time interval, of each pixel of each vehicle in the two-dimensional vehicle image.
  • the convolutional layer used to extract higher-dimensional vehicle displacement features with a smaller feature map size from the input feature map is called the vehicle displacement feature extraction layer; in a specific implementation, the network may include multiple vehicle displacement feature extraction layers.
  • the deconvolution layer used to upsample the input feature map into a vehicle displacement feature with a larger feature map size is referred to as the vehicle displacement feature upsampling layer; the last deconvolution layer upsamples to produce a two-dimensional vehicle position offset data map with the same size as the original input image.
  • it may include multiple vehicle displacement feature up-sampling layers.
  • the input data of a vehicle displacement feature upsampling layer may include the output feature map of the adjacent preceding vehicle displacement feature upsampling layer, and may also include the output feature map of the vehicle displacement feature extraction layer with the same image size.
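The size bookkeeping described above (each stride-2 convolution halving the feature map, each stride-2 deconvolution doubling it back toward the input size) can be traced with a small helper; the layer counts here are illustrative assumptions:

```python
def trace_shapes(h, w, n_down=3, n_up=3):
    """Trace feature-map sizes through the network sketched above:
    each vehicle displacement feature extraction layer (stride-2 conv)
    halves H and W; each vehicle displacement feature upsampling layer
    (stride-2 deconv) doubles them. With n_up == n_down the output map
    regains the input image size."""
    shapes = [(h, w)]
    for _ in range(n_down):
        h, w = h // 2, w // 2   # extraction layer: smaller feature map
        shapes.append((h, w))
    for _ in range(n_up):
        h, w = h * 2, w * 2     # upsampling layer: larger feature map
        shapes.append((h, w))
    return shapes
```

For a 64x64 input and three layers of each kind, the trace returns to 64x64, matching the claim that the output offset map has the input image size.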
  • the model network structure may not include the deconvolution layer, that is, it does not include the vehicle displacement feature upsampling layer.
  • in that case, the two-dimensional vehicle position offset data map output by the vehicle speed prediction model may have a different image size from the two-dimensional vehicle images for training.
  • the training data set can be determined by the following steps: 1) obtain at least two frames of environmental point cloud data for training, annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as the true value of the three-dimensional vehicle position offset; 3) project the true value of the three-dimensional vehicle position offset into the top-view coordinate system to obtain the true value of the two-dimensional vehicle position offset; 4) generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of environmental point cloud data for training.
  • this processing method omits the step of forming the two-dimensional vehicle position offset truth map from the two-dimensional vehicle position offset true values, so the processing speed can be effectively improved.
  • however, the accuracy of a model obtained by this processing method will be lower than that of the above-described model, which includes vehicle displacement feature upsampling layers and outputs a two-dimensional vehicle position offset data map with the same image size as the two-dimensional vehicle images input to the model.
  • Step S2015 learning the prediction model from the training data set.
  • the weights in the network can be trained according to the training data set.
  • the weights in the network are trained so that the deviation between the two-dimensional vehicle position offset data map output by the model and the two-dimensional vehicle position offset truth map is as small as possible; once the loss converges, model training can be stopped.
  • a mask image is used.
  • the mask map corresponds to the two-dimensional vehicle image of the first frame (the other frame being the zeroth frame): at pixels of the first-frame two-dimensional vehicle image where a vehicle is present, the mask value is 1; at all other positions, the mask value is 0.
  • in the loss function, only pixels with a mask value of 1 participate in the calculation of the loss.
  • the model network of this embodiment adopts a multi-scale approach, computing the loss function on the outputs of multiple deconvolution layers to help the network converge. Since the output feature maps of the deconvolution layers differ in size, the truth map and the mask map must be downsampled to the corresponding size before the loss is calculated.
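A minimal sketch of the masked loss and of downsampling the truth/mask maps for the multi-scale outputs; an L2 loss is assumed here for illustration, as the application does not name a specific loss function:

```python
import numpy as np

def masked_l2_loss(pred, truth, mask):
    """pred/truth are (2, H, W) offset maps, mask is (H, W) with value 1
    only at first-frame vehicle pixels. Only masked pixels contribute."""
    m = mask.astype(bool)
    if not m.any():
        return 0.0
    diff = (pred - truth)[:, m]     # select masked pixels in both channels
    return float(np.mean(diff ** 2))

def downsample(arr, factor):
    """Nearest-neighbour downsampling of a truth or mask map so the loss
    can also be computed on smaller deconvolution-layer outputs."""
    return arr[..., ::factor, ::factor]
```

In a multi-scale setup, `masked_l2_loss` would be evaluated once per deconvolution output, each time with truth and mask passed through `downsample` to the matching size.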
  • Step S1033 Determine the driving speed of the vehicle according to the two-dimensional vehicle position offset data and the time interval.
  • the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle image; correspondingly, step S1033 may include the following sub-steps: 1) convert the two-dimensional vehicle position offset data of each vehicle into three-dimensional vehicle position offset data in the point cloud coordinate system; 2) for each vehicle, take the ratio of the average abscissa offset component and the average ordinate offset component of the spatial points corresponding to the vehicle to the time interval as the driving speed of the vehicle.
  • when the driving speed of other vehicles on the road is determined from the last two frames of environmental point cloud data collected by the vehicle, the point cloud data of each vehicle in the last frame can be projected onto the speed map; the two-dimensional vehicle offset components at the pixel onto which each point projects are extracted and converted back into the point cloud coordinate system as the offset components of that point in the x and y directions of three-dimensional space.
  • the average value of the three-dimensional vehicle offset components of all points in the same vehicle is used as the three-dimensional vehicle position offset component of the vehicle.
  • dividing the three-dimensional vehicle position offset component by the known two-frame time interval gives the driving speed of the vehicle.
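The conversion from per-point offsets to a driving speed described in the steps above can be sketched as follows; the data layout is an assumption for illustration:

```python
def vehicle_speed(offsets_xy, dt):
    """offsets_xy: per-point (dx, dy) offsets in meters, looked up from the
    speed map for the points of one vehicle; dt: two-frame time interval in
    seconds. Averages the per-point offsets over the vehicle, then divides
    by dt, returning the speed components (vx, vy) in m/s."""
    n = len(offsets_xy)
    mean_dx = sum(dx for dx, _ in offsets_xy) / n
    mean_dy = sum(dy for _, dy in offsets_xy) / n
    return mean_dx / dt, mean_dy / dt
```

For instance, an average offset of (10 m, 5 m) over 0.5 s yields (20, 10) m/s, and 20 m/s corresponds to 72 km/h along the abscissa.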
  • two two-dimensional vehicle images are generated from the vehicle point cloud data in the two frames of environmental point cloud data; the two images are used as the input data of the prediction model to generate a two-dimensional vehicle position offset data map with the same image size as the two-dimensional vehicle images; the map includes an abscissa offset map and an ordinate offset map; for each vehicle in the two-dimensional vehicle images, the driving speed is determined from the abscissa offset map, the ordinate offset map, and the time interval.
  • the vehicle driving speed can be determined directly from the two-dimensional vehicle position offset data corresponding to each point of the vehicle and the time interval; therefore, the speed of the estimation itself can be effectively improved.
  • for example, if the abscissa offset component of vehicle 1 from frame-0 time t0 to frame-1 time t1 is 10 meters, the ordinate offset component is 5 meters, and t0 and t1 are 500 milliseconds apart, the driving speed of vehicle 1 is 72 km/h; if the abscissa offset component of vehicle 2 from frame-0 time t0 to frame-1 time t1 is 15 meters, the ordinate offset component is 5 meters, and t0 and t1 are 500 milliseconds apart, the driving speed of vehicle 2 is 108 km/h.
  • alternatively, the ratio of the average abscissa offset component and the average ordinate offset component of the pixels corresponding to the vehicle to the time interval may be used as the vehicle driving speed. In this way, the position offsets of all points of the vehicle are taken into account; therefore, the accuracy of vehicle speed estimation can be effectively improved.
  • the vehicle speed prediction model can be used to determine the position offsets of points of the vehicle (such as all points or some points), and the driving speed of the vehicle is determined from these position offsets; therefore, the accuracy of the estimated vehicle speed can be effectively improved.
  • the method for determining vehicle speed provided by the embodiments of the present application generates at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data, and determines the vehicle driving speed from these images and the time interval between any two frames of the at least two frames of road environment point cloud data; therefore, the accuracy of the vehicle speed can be effectively improved, thereby improving road traffic safety.
  • this application also provides a device for determining a vehicle speed.
  • This device corresponds to the embodiment of the above method.
  • FIG. 4 is a schematic diagram of an embodiment of the vehicle speed determining device of this application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • This application additionally provides a vehicle speed determination device, including:
  • the image generating unit 401 is configured to generate at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data;
  • the speed determining unit 403 is configured to determine the driving speed of the vehicle according to the time interval between the at least two two-dimensional vehicle images and the at least two frames of road environment point cloud data.
  • a method for determining the speed of a vehicle is provided.
  • this application also provides a vehicle. This vehicle corresponds to the embodiment of the above method.
  • FIG. 5 is a schematic diagram of an embodiment of the vehicle of this application. Since the vehicle embodiment is basically similar to the method embodiment, the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.
  • the vehicle embodiments described below are merely illustrative.
  • the application additionally provides a vehicle, including: a three-dimensional space scanning device 500; a processor 501; and a memory 502, which is used to store a program for realizing a method for determining a vehicle speed.
  • after the device is powered on and the program of the method is run through the processor, the following steps are performed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and determining the driving speed of a vehicle from the at least two two-dimensional vehicle images and the time interval between two frames of the road environment point cloud data.
  • FIG. 6 is a schematic diagram of an embodiment of the road test sensing device of this application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple; for related parts, please refer to the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • the electronic device includes: a three-dimensional space scanning device 600, a processor 601, and a memory 602; the memory is used to store a program implementing the method, and after the device is powered on and runs the program through the processor, the following steps are performed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and determining the driving speed of a vehicle from the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
  • FIG. 7 is a schematic diagram of an embodiment of the electronic device of this application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • the electronic device includes: a processor 701 and a memory 702; the memory is used to store a program for implementing the method. After the device is powered on and runs the program of the method through the processor, it executes The following steps: generating at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data; according to the at least two two-dimensional vehicle images and the at least two frames of road environment point cloud data The time interval between two frames of data in the middle determines the speed of the vehicle.
  • this application also provides a method for constructing a vehicle speed prediction model. This method corresponds to the embodiment of the above method.
  • FIG. 8 is a flowchart of an embodiment of a method for constructing a vehicle speed prediction model of this application. Since the method embodiment is basically similar to the method embodiment 1, the description is relatively simple, and the relevant part can refer to the part of the description of the method embodiment 1. The method embodiments described below are only illustrative.
  • Step S801 Determine the training data set.
  • the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data.
  • the two-dimensional vehicle position offset truth data may be a two-dimensional vehicle position offset truth map having the same image size as the training two-dimensional vehicle image, or a two-dimensional vehicle position offset truth map having a different image size from the training two-dimensional vehicle image, etc.
  • the two-dimensional vehicle position offset truth data is a two-dimensional vehicle position offset truth map whose image size is the same as or different from that of the training two-dimensional vehicle images; correspondingly, the training data set can be determined by the following steps: 1) obtain at least two frames of environmental point cloud data for training, annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as the true value of the three-dimensional vehicle position offset; 3) project the true value of the three-dimensional vehicle position offset into the top-view coordinate system to obtain the true value of the two-dimensional vehicle position offset; 4) form the two-dimensional vehicle position offset truth map from the two-dimensional vehicle position offset true values; and generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of environmental point cloud data for training.
  • the two-dimensional vehicle position offset truth data may include only a very small number of two-dimensional vehicle position offset true values; in the extreme case, it may include only one abscissa displacement offset true value and one ordinate displacement offset true value, that is, the displacement offset truth for a vehicle in the training two-dimensional vehicle image consists of just two values: the true abscissa displacement offset of the vehicle and the true ordinate displacement offset of the vehicle.
  • the training data set can be determined by the following steps: 1) obtain at least two frames of environmental point cloud data for training, annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as the true value of the three-dimensional vehicle position offset; 3) project the true value of the three-dimensional vehicle position offset into the top-view coordinate system to obtain the true value of the two-dimensional vehicle position offset; 4) generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of environmental point cloud data for training.
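Step 4 above, generating a two-dimensional vehicle image from vehicle point cloud data, could look like the following sketch; the top-view grid parameters and the use of the maximum point height as the pixel value are assumptions, as the application does not fix the exact image encoding:

```python
import numpy as np

def points_to_bev_image(points, grid=(400, 400), res=0.2):
    """Project vehicle points (a list of (x, y, z) tuples, in meters) into
    a top-view (bird's-eye-view) image. Each occupied cell stores the
    maximum point height, a common BEV encoding chosen here for
    illustration; grid size and resolution are also assumptions."""
    img = np.zeros(grid, dtype=np.float32)
    for x, y, z in points:
        u = int(x / res) + grid[0] // 2   # center the grid on the sensor
        v = int(y / res) + grid[1] // 2
        if 0 <= u < grid[0] and 0 <= v < grid[1]:
            img[u, v] = max(img[u, v], z)
    return img
```

Two such images (one per frame) would then be concatenated along the channel dimension as the model input.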
  • Step S803 Construct a network structure of the vehicle speed prediction model.
  • the network structure may include at least one vehicle displacement feature extraction layer and at least one vehicle displacement feature up-sampling layer, or may only include a vehicle displacement feature extraction layer.
  • the vehicle displacement feature extraction layer may be implemented based on a convolution operation
  • the vehicle displacement feature upsampling layer may be implemented based on a deconvolution operation.
  • Step S805 Obtain the vehicle speed prediction model from the training data set.
  • the vehicle speed prediction model construction method determines the training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset truth data; constructs the network structure of the vehicle speed prediction model; and learns the vehicle speed prediction model from the training data set. This processing makes it possible to learn, from a large amount of training data, vehicle displacement features based on at least two two-dimensional vehicle images; therefore, the accuracy of the vehicle speed prediction model can be effectively improved.
  • FIG. 9 is a schematic diagram of an embodiment of the vehicle speed prediction model construction device of this application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • the data determining unit 901 is configured to determine a training data set; the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data;
  • the network construction unit 903 is used to construct the network structure of the vehicle speed prediction model
  • the model training unit 905 is configured to learn the vehicle speed prediction model from the training data set.
  • FIG. 10 is a schematic diagram of an embodiment of the electronic device of this application. Since the device embodiment is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment.
  • the device embodiments described below are merely illustrative.
  • the electronic device includes: a processor 1001 and a memory 1002; the memory is used to store a program that implements the method. After the device is powered on and runs the program of the method through the processor, it executes The following steps: determining a training data set; the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true value data; constructing a network structure of a vehicle speed prediction model; learning from the training data set Obtain the vehicle speed prediction model.
  • the computing device includes one or more processors (CPU), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent storage in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • computer-readable media include permanent and non-permanent, removable and non-removable media; information storage can be implemented by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • computer-readable media, as defined herein, do not include transitory media, such as modulated data signals and carrier waves.
  • this application can be provided as methods, systems or computer program products. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.

Abstract

A vehicle speed determination method, and a vehicle. The vehicle speed determination method comprises: generating, according to vehicle point cloud data from among at least two frames of road environment point cloud data, at least two two-dimensional vehicle images (S101); and determining the running speed of a vehicle according to the at least two two-dimensional vehicle images and a time interval between any two frames of data from among the at least two frames of road environment point cloud data (S102). By using such a processing means, at least two two-dimensional vehicle images corresponding to vehicle point cloud data from among at least two frames of road environment point cloud data are generated, and the running speed of a vehicle is determined according to these images and a time interval between any two frames of data from among the at least two frames of data; therefore, the accuracy of a vehicle speed can be effectively improved, thereby improving road traffic safety.

Description

Method for determining vehicle speed and vehicle

This application claims the priority of the Chinese patent application with application number 201910431632.6, titled "Vehicle Speed Determination Method and Vehicle", filed on May 22, 2019, the entire content of which is incorporated into this application by reference.

Technical field

This application relates to the field of automatic driving technology, and in particular to a method for determining vehicle speed and a vehicle.

Background
Estimating the speed of other vehicles while driving is key to road traffic safety and to automatic driving: it helps an autonomous vehicle predict the future trajectories of surrounding vehicles in the driving scene and avoid possible collisions.

Autonomous vehicles are usually equipped with a variety of sensors, and the data from each of these sensors has the potential to be used to estimate vehicle speed. Three commonly used vehicle speed determination methods and their problems are described below.

1) Vehicle speed determination based on millimeter-wave radar data. Using the Doppler effect, this method can provide a fairly accurate speed measurement for other vehicles. However, to give an accurate measurement it places high requirements on the position and direction of travel of the other vehicles: for vehicles that are not in the millimeter-wave propagation area or whose direction of motion is not parallel to the millimeter-wave propagation direction, the speed measurement often has a large error.

2) Vehicle speed determination based on camera data. This method uses deep learning, especially optical flow estimation (e.g., FlowNet), on RGB images captured by a camera to estimate the speed of objects in the image. However, ordinary RGB cameras have an obvious defect: they are almost unusable at night.

3) Vehicle speed determination based on lidar data. Estimating speed from lidar point clouds effectively overcomes the night-time problem. The processing is as follows: from the convex hulls detected by a point cloud detection algorithm, compute the offset between the convex hull centers of the same object detected in two frames, and divide by the two-frame time interval to obtain the object's speed. However, such methods are affected by the shape of the detected convex hull; its center is often not the true geometric center of the object, so the estimated speed is noisy.

In summary, the prior art suffers from low vehicle speed estimation accuracy; how to accurately determine the speed of other vehicles is a problem that those skilled in the art urgently need to solve.
Summary of the invention

This application provides a vehicle speed determination method to solve the problem of low speed estimation accuracy in the prior art. This application additionally provides a vehicle speed determination device, a vehicle speed prediction model construction method and device, an electronic device, a road test sensing device, and a vehicle.

This application provides a vehicle speed determination method, including:

generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data;

determining the vehicle driving speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
Optionally, determining the vehicle driving speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data includes:

determining the two-dimensional vehicle position offset data corresponding to the time interval according to the at least two two-dimensional vehicle images;

determining the vehicle driving speed according to the two-dimensional vehicle position offset data and the time interval.

Optionally, the two-dimensional vehicle position offset data is determined from the at least two two-dimensional vehicle images through a vehicle speed prediction model.

Optionally, the method further includes:

learning the vehicle speed prediction model from a training data set; the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset truth data.

Optionally, the vehicle speed prediction model is determined using the following steps:

determining the training data set;

constructing the network structure of the vehicle speed prediction model;

learning the vehicle speed prediction model from the training data set.
可选的,所述网络结构包括车辆位移特征提取层和车辆位移特征上采样层。Optionally, the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature upsampling layer.
可选的,所述二维车辆位置偏移真值数据包括与所述训练用二维车辆图像具有相同图像尺寸的二维车辆位置偏移真值图。Optionally, the two-dimensional vehicle position offset truth data includes a two-dimensional vehicle position offset truth map having the same image size as the training two-dimensional vehicle image.
Optionally, the training data set is determined by the following steps:
acquiring at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset ground truth;
projecting the three-dimensional vehicle position offset ground truth onto a top-view coordinate system to obtain a two-dimensional vehicle position offset ground truth; and
forming the two-dimensional vehicle position offset ground-truth map from the two-dimensional vehicle position offset ground truth, and generating at least two training two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of training environment point cloud data.
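The ground-truth generation steps above can be sketched as follows. This is a minimal pure-Python illustration, not the patented implementation: it assumes, hypothetically, that each annotated three-dimensional bounding box is given as a list of (x, y, z) corner points keyed by a vehicle identifier, and that projecting onto the top-view coordinate system amounts to dropping the z component.

```python
def bbox_center(corners):
    """Center of a 3D bounding box given as a list of (x, y, z) corners."""
    n = len(corners)
    return tuple(sum(c[i] for c in corners) / n for i in range(3))

def offset_ground_truth(boxes_frame_a, boxes_frame_b):
    """Per-vehicle 2D position-offset ground truth between two annotated frames.

    boxes_frame_* map a vehicle identifier to the corner list of its annotated
    3D bounding box.  The 3D center-point offset of the same vehicle across the
    two frames is projected to the top view by dropping the z component.
    """
    truth = {}
    for vid, corners_a in boxes_frame_a.items():
        if vid not in boxes_frame_b:   # vehicle not present in the other frame
            continue
        ca = bbox_center(corners_a)
        cb = bbox_center(boxes_frame_b[vid])
        dx, dy = cb[0] - ca[0], cb[1] - ca[1]
        truth[vid] = (dx, dy)          # 2D position-offset ground truth
    return truth
```

For instance, a unit box whose corners all shift by (+2, +0.5, 0) between the two frames yields a two-dimensional offset ground truth of (2.0, 0.5) for that vehicle identifier.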
Optionally, the training data set is determined by the following steps:
acquiring at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset ground truth;
projecting the three-dimensional vehicle position offset ground truth onto a top-view coordinate system to obtain a two-dimensional vehicle position offset ground truth; and
generating at least two training two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of training environment point cloud data.
Optionally, the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle images;
and determining the vehicle travel speed according to the two-dimensional vehicle position offset data and the time interval includes:
taking, for each vehicle, the ratio of the average of the abscissa offset components of the pixels corresponding to that vehicle in the two-dimensional vehicle position offset data map to the time interval, and the ratio of the average of the ordinate offset components of those pixels to the time interval, as the vehicle travel speed.
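The per-vehicle averaging just described can be sketched as follows. This is an illustrative assumption-laden sketch: it supposes the offset map stores metric (dx, dy) offsets per pixel and that the set of pixels belonging to each vehicle is already known from detection; neither representation is prescribed by this passage.

```python
def vehicle_speed_from_offset_map(offset_map, vehicle_pixels, dt):
    """Speed of one vehicle from a dense 2D position-offset map.

    offset_map[y][x] is an assumed (dx, dy) offset in metres for each pixel;
    vehicle_pixels is the set of (x, y) pixels belonging to the vehicle;
    dt is the time interval between the two frames in seconds.
    Returns (vx, vy): the averaged offset components divided by dt.
    """
    n = len(vehicle_pixels)
    mean_dx = sum(offset_map[y][x][0] for x, y in vehicle_pixels) / n
    mean_dy = sum(offset_map[y][x][1] for x, y in vehicle_pixels) / n
    return (mean_dx / dt, mean_dy / dt)
```

With a 0.1 s frame interval, a vehicle whose pixels all carry an offset of (1.0, 0.5) metres comes out at (10.0, 5.0) m/s.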
Optionally, the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle images;
and determining the vehicle travel speed according to the two-dimensional vehicle position offset data and the time interval includes:
converting the two-dimensional vehicle position offset data of each vehicle into three-dimensional vehicle position offset data in the point cloud coordinate system; and
taking the ratio of the average of the abscissa offset components of the spatial points corresponding to the vehicle to the time interval, and the ratio of the average of the ordinate offset components of those points to the time interval, as the vehicle travel speed.
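A minimal sketch of the conversion step, under the assumption (not stated in this passage) that the top-view image uses a fixed grid resolution, so a pixel-space offset converts to point-cloud-frame metres by a simple scale factor:

```python
def pixel_offsets_to_metric(pixel_offsets, resolution):
    """Convert per-pixel (dx, dy) offsets, in BEV pixels, to metres.

    resolution is the assumed size of one top-view pixel in metres.
    """
    return [(dx * resolution, dy * resolution) for dx, dy in pixel_offsets]

def vehicle_speed(pixel_offsets, resolution, dt):
    """Average the metric offsets of a vehicle's points and divide by dt."""
    metric = pixel_offsets_to_metric(pixel_offsets, resolution)
    n = len(metric)
    vx = sum(dx for dx, _ in metric) / n / dt
    vy = sum(dy for _, dy in metric) / n / dt
    return (vx, vy)
```

At an assumed 0.2 m/pixel resolution and a 0.1 s interval, a uniform 2-pixel abscissa offset corresponds to roughly 4 m/s along that axis.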
Optionally, the two-dimensional vehicle images include two-dimensional vehicle images from a top-view perspective.
Optionally, generating at least two two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of road environment point cloud data includes:
determining pose data of the vehicle speed determination device;
converting, according to the pose data, the vehicle point cloud data of the frames preceding the last frame into vehicle point cloud data in the point cloud coordinate system of the last frame; and
generating, from the coordinate-converted vehicle point cloud data of the frames preceding the last frame, the two-dimensional vehicle images corresponding to that data.
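The coordinate conversion step can be illustrated with a simplified planar ego-motion model; this is an assumption made for the sketch, as a real system would use the full pose from the positioning sensors. If the device rotated by dtheta and translated by (tx, ty) between an earlier frame and the last frame, an earlier-frame point p maps to R(-dtheta) applied to (p - t) in the last frame's coordinate system.

```python
import math

def transform_points(points, dtheta, tx, ty):
    """Re-express earlier-frame (x, y) points in the last frame's coordinates.

    Assumes planar ego motion: the device rotated by dtheta (radians) and
    translated by (tx, ty) between the earlier frame and the last frame, so an
    earlier-frame point p becomes R(-dtheta) @ (p - t) in the last frame.
    """
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    out = []
    for x, y in points:
        x0, y0 = x - tx, y - ty
        out.append((c * x0 - s * y0, s * x0 + c * y0))
    return out
```

For example, with no rotation and a 1 m forward translation, a point 3 m ahead in the earlier frame appears 2 m ahead in the last frame, which matches the intuition that the device has closed part of the gap.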
Optionally, the method further includes:
extracting the vehicle point cloud data from the road environment point cloud data by a vehicle detection model.
Optionally, the method further includes:
collecting the road environment point cloud data.
The present application further provides a method for constructing a vehicle speed prediction model, including:
determining a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset ground-truth data;
constructing a network structure of the vehicle speed prediction model; and
learning the vehicle speed prediction model from the training data set.
Optionally, the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature upsampling layer.
Optionally, the two-dimensional vehicle position offset ground-truth data includes a two-dimensional vehicle position offset ground-truth map having the same image size as the training two-dimensional vehicle images.
Optionally, the training data set is determined by the following steps:
acquiring at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset ground truth;
projecting the three-dimensional vehicle position offset ground truth onto a top-view coordinate system to obtain a two-dimensional vehicle position offset ground truth; and
forming the two-dimensional vehicle position offset ground-truth map from the two-dimensional vehicle position offset ground truth, and generating at least two training two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of training environment point cloud data.
Optionally, the training data set is determined by the following steps:
acquiring at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset ground truth;
projecting the three-dimensional vehicle position offset ground truth onto a top-view coordinate system to obtain a two-dimensional vehicle position offset ground truth; and
generating at least two training two-dimensional vehicle images according to the vehicle point cloud data in the at least two frames of training environment point cloud data.
The present application further provides a vehicle speed determination apparatus, including:
an image generation unit configured to generate at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data; and
a speed determination unit configured to determine a vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
The present application further provides an apparatus for constructing a vehicle speed prediction model, including:
a data determination unit configured to determine a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset ground-truth data;
a network construction unit configured to construct a network structure of the vehicle speed prediction model; and
a model training unit configured to learn the vehicle speed prediction model from the training data set.
The present application further provides a vehicle, including:
a three-dimensional space scanning device;
a processor; and
a memory for storing a program implementing the vehicle speed determination method, where after the device is powered on and the program of the method is run by the processor, the following steps are performed: collecting road environment point cloud data with the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of the road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
The present application further provides a roadside sensing device, including:
a three-dimensional space scanning device;
a processor; and
a memory for storing a program implementing the vehicle speed determination method, where after the device is powered on and the program of the method is run by the processor, the following steps are performed: collecting road environment point cloud data with the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of the road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
The present application further provides an electronic device, including:
a processor; and
a memory for storing a program implementing the vehicle speed determination method, where after the device is powered on and the program of the method is run by the processor, the following steps are performed: generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
The present application further provides an electronic device, including:
a processor; and
a memory for storing a program implementing the method for constructing a vehicle speed prediction model, where after the device is powered on and the program of the method is run by the processor, the following steps are performed: determining a training data set, the training data including at least two training two-dimensional vehicle images and two-dimensional vehicle position offset ground-truth data; constructing a network structure of the vehicle speed prediction model; and learning the vehicle speed prediction model from the training data set.
The present application further provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to execute the various methods described above.
The present application further provides a computer program product including instructions that, when run on a computer, cause the computer to execute the various methods described above.
Compared with the prior art, the present application has the following advantages:
In the vehicle speed determination method provided by the embodiments of the present application, at least two two-dimensional vehicle images are generated according to the vehicle point cloud data in at least two frames of road environment point cloud data, and the vehicle travel speed is determined according to the at least two two-dimensional vehicle images and the time interval between any two frames of the at least two frames of road environment point cloud data. This approach generates at least two two-dimensional vehicle images corresponding to the vehicle point cloud data in the at least two frames of road environment point cloud data and determines the travel speed of a vehicle from these images and the time interval between any two of the frames; it can therefore effectively improve the accuracy of the determined vehicle speed and thereby improve road traffic safety.
In the method for constructing a vehicle speed prediction model provided by the embodiments of the present application, a training data set is determined, the training data including at least two training two-dimensional vehicle images and two-dimensional vehicle position offset ground-truth data; a network structure of the vehicle speed prediction model is constructed; and the vehicle speed prediction model is learned from the training data set. This approach learns, from a large amount of training data, a model that can predict vehicle displacement from at least two two-dimensional vehicle images; it can therefore effectively improve the accuracy of the vehicle speed prediction model.
Description of the drawings
Fig. 1 is a flowchart of an embodiment of a vehicle speed determination method provided by the present application;
Fig. 2 is a detailed flowchart of an embodiment of a vehicle speed determination method provided by the present application;
Fig. 3 is a schematic diagram of the network structure of the vehicle speed prediction model in an embodiment of a vehicle speed determination method provided by the present application;
Fig. 4 is a schematic diagram of an embodiment of a vehicle speed determination apparatus provided by the present application;
Fig. 5 is a schematic diagram of an embodiment of a vehicle provided by the present application;
Fig. 6 is a schematic diagram of an embodiment of a roadside sensing device provided by the present application;
Fig. 7 is a schematic diagram of an embodiment of an electronic device provided by the present application;
Fig. 8 is a flowchart of an embodiment of a method for constructing a vehicle speed prediction model provided by the present application;
Fig. 9 is a schematic diagram of an embodiment of an apparatus for constructing a vehicle speed prediction model provided by the present application;
Fig. 10 is a schematic diagram of an embodiment of an electronic device provided by the present application.
Detailed description
In the following description, many specific details are set forth to facilitate a full understanding of the present application. However, the present application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the substance of the present application; the present application is therefore not limited to the specific implementations disclosed below.
The present application provides a vehicle speed determination method and a vehicle. The various solutions are described in detail one by one in the following embodiments.
First embodiment
Please refer to Fig. 1, which is a flowchart of an embodiment of a vehicle speed determination method provided by the present application. The subject executing the method may be an unmanned vehicle, a roadside sensing device, a server, and so on. The vehicle speed determination method provided by the present application is described below taking an unmanned vehicle as an example. The method includes:
Step S101: generate at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data.
In the method provided by the embodiments of the present application, while a vehicle (hereinafter referred to as the ego vehicle) is driving, a three-dimensional space scanning device installed on the vehicle can acquire the spatial coordinates of each sampling point on the surfaces of objects in the surrounding road environment, yielding a set of points; this massive point data is called road environment point cloud data. With road environment point cloud data, scanned object surfaces are recorded in the form of points, each point containing three-dimensional coordinates and possibly color information (RGB) or reflection intensity information. With point cloud data, the target space can be expressed in a single spatial reference system.
The three-dimensional space scanning device may be a lidar (Light Detection And Ranging) unit, which performs laser detection and ranging by laser scanning to obtain information about obstacles in the surrounding environment, such as buildings, trees, people, vehicles, and so on; the measured data is a discrete-point representation of a Digital Surface Model (DSM). In specific implementations, 16-beam, 32-beam, 64-beam and other multi-beam lidars can be used. Lidars with different numbers of laser beams collect point cloud data at different frame rates; for example, 16-beam and 32-beam lidars generally collect 10 frames of point cloud data per second. The three-dimensional space scanning device may also be a three-dimensional laser scanner, a photographic scanner, or similar equipment.
After collecting road environment point cloud data with the three-dimensional space scanning device, the ego vehicle in this embodiment can generate at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data.
The road environment point cloud data may include point cloud data of various objects in the road environment, such as trees, buildings, pedestrians, and other vehicles. The method provided in the embodiments of the present application determines the travel speed of other vehicles on the road based on the vehicle point cloud data in at least two frames of road environment point cloud data.
The at least two frames of road environment point cloud data may be the two or more frames of environment point cloud data most recently collected by the current vehicle (the ego vehicle). For example, the current vehicle collects τ+1 frames of environment point cloud data at the τ+1 moments t_{n-τ}, …, t_{n-1}, t_n while driving. Each frame of environment point cloud data may include the point cloud data of multiple vehicles, so the method provided in the embodiments of the present application can determine the travel speeds of multiple vehicles from these τ+1 frames of environment point cloud data.
The vehicle point cloud data can be extracted from the road environment point cloud data by a vehicle detection model. After the lidar mounted on the vehicle scans a frame of environment point cloud data, the frame can be passed to the vehicle detection model, which detects the vehicles and their three-dimensional position data within the environment point cloud data, that is, identifies the vehicle point cloud data in the environment point cloud data. The three-dimensional position data may be, for example, the vertex coordinates of the rectangular-cuboid bounding box of a vehicle.
In this embodiment, the vehicle detection model may use the deep-learning-based RefineDet method, which builds on the fast running speed of single-stage methods such as SSD while also drawing on two-stage methods such as Faster R-CNN, and therefore offers high vehicle detection accuracy. When this method detects the vehicle point cloud data in the environment point cloud data, it also yields the bounding box coordinates of the vehicle, that is, the position data of the vehicle point cloud data within the environment point cloud data.
Step S101 generates at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of environment point cloud data. A two-dimensional vehicle image may be a two-dimensional environment image (that is, a two-dimensional image of the three-dimensional scene constructed from the environment point cloud data) with all objects other than vehicles removed; in other words, a two-dimensional vehicle image may be a two-dimensional environment image that includes only vehicle images.
The two-dimensional vehicle image may be a vehicle-only two-dimensional environment image from a top-view perspective. With this approach, the two-dimensional vehicle image can include as many two-dimensional projection points of each vehicle as possible, and a vehicle travel speed determined from a more complete set of vehicle points is more accurate. In specific implementations, two-dimensional vehicle images from other perspectives, such as a left view, a right view, or a front view, can also be used.
In one example, the environment point cloud data of two adjacent frames (the earlier frame denoted frame 0 and the later frame denoted frame 1) is collected, and the vehicle point clouds of the two frames are each processed from a top-view perspective to generate two corresponding multi-channel two-dimensional vehicle images (the channels including a vehicle-point density channel, a point-count channel, and so on); the extent of the two-dimensional vehicle images may cover a certain region around the ego vehicle. In this process, because the ego vehicle may itself be moving, the point cloud coordinate systems must be synchronized in time for frame 0: according to the pose information given by the ego vehicle's positioning sensors, frame 0 is projected into the coordinate system of the frame-1 point cloud before the two-dimensional vehicle image is generated.
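The top-view rasterization in this example can be sketched as follows. The grid extent, the resolution, and the choice of a raw point-count channel plus a normalized density channel are illustrative assumptions for the sketch, not the patent's exact channel definitions.

```python
def bev_image(points, x_range, y_range, resolution):
    """Rasterize vehicle points into a top-view grid.

    points are (x, y, z) vehicle points in the frame's coordinate system;
    x_range and y_range are the assumed (min, max) extents of the grid in
    metres; resolution is the assumed cell size in metres.  Returns a raw
    point-count channel and a density channel normalized by the peak count.
    """
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    count = [[0] * w for _ in range(h)]
    for x, y, _z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            col = int((x - x_range[0]) / resolution)
            row = int((y - y_range[0]) / resolution)
            count[row][col] += 1
    peak = max(max(row) for row in count) or 1   # avoid division by zero
    density = [[c / peak for c in row] for row in count]
    return count, density
```

A real pipeline would stack such channels per frame into the multi-channel image fed to the prediction model; here the two channels are returned separately for clarity.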
In another example, the environment point cloud data of multiple frames (e.g., frame 0, frame 1, …, frame 10) is collected, and the vehicle point clouds of all frames are each processed from a top-view perspective to generate multiple (e.g., 10) corresponding multi-channel two-dimensional vehicle images (the channels including a vehicle-point density channel, a point-count channel, and so on); again, the extent of the two-dimensional vehicle images may cover a certain region around the ego vehicle. In this case, step S101 may include the following sub-steps: 1) determine the pose data of the ego vehicle; 2) according to the pose data, convert the vehicle point cloud data of the frames preceding the last frame (e.g., frame 10) into vehicle point cloud data in the point cloud coordinate system of the last frame; 3) from the coordinate-converted vehicle point cloud data of the frames preceding the last frame, generate the two-dimensional vehicle images corresponding to that data. In this process, for frames 0 through 9, because the ego vehicle may itself be moving, the point cloud coordinate systems must be synchronized in time: according to the pose information given by the ego vehicle's positioning sensors, frames 0 through 9 are projected into the coordinate system of the frame-10 point cloud before the two-dimensional vehicle images are generated.
It should be noted that if the subject executing the method provided in the embodiments of the present application is a roadside sensing device, the position of that device is fixed; therefore, when generating the two-dimensional vehicle images for the frames preceding the last frame, there is no need to determine the pose data of the roadside sensing device, nor to convert the vehicle point cloud data of the frames preceding the last frame into vehicle point cloud data in the point cloud coordinate system of the last frame according to such pose data.
Step S103: determine the vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between any two frames of the at least two frames of road environment point cloud data.
After the at least two two-dimensional vehicle images corresponding to the vehicle point cloud data in the at least two frames of road environment point cloud data are obtained, the vehicle travel speed can be determined according to the at least two two-dimensional vehicle images and the time interval between any two frames of the at least two frames of road environment point cloud data.
In one example, the vehicle travel speed is determined according to the position of the vehicle in each frame of the images and the time interval. For example, in frame 0 vehicle A is in front of vehicle B, while in frame 1 vehicle B has overtaken vehicle A and is now in front of it; this indicates that the speed of vehicle B is higher than the speed of vehicle A. From the positions of the ego vehicle, vehicle A, and vehicle B and the time interval between the two frames of data, the travel speeds of vehicle A and vehicle B can be determined.
In another example, step S103 may include the following sub-steps:
Step S1031: determine, according to the at least two two-dimensional vehicle images, the two-dimensional vehicle position offset data corresponding to the time interval.
The two-dimensional vehicle position offset data may include the abscissa position offset and the ordinate position offset of a vehicle over the time interval between two frames. The two-frame time interval may be the time interval between any two frames of the at least two frames of road environment point cloud data. For example, the ego vehicle collects τ+1 frames of environment point cloud data at the τ+1 moments t_{n-τ}, …, t_{n-1}, t_n while driving; the method provided in the embodiments of the present application can determine the distance other vehicles move between moment t_{n-1} and moment t_n, that is, their position offsets along the ground-plane abscissa and ordinate, in which case the time interval is t_n - t_{n-1}. In specific implementations, the distance moved between t_{n-2} and moment t_{n-1} may be determined instead, in which case the time interval is t_{n-1} - t_{n-2}; or the distance moved between t_{n-τ} and moment t_{n-3}, in which case the time interval is t_{n-3} - t_{n-τ}.
It should be noted that since the multiple vehicles in a two-dimensional vehicle image usually travel at different speeds, the two-dimensional position offset data over a two-frame interval usually differs from vehicle to vehicle.

In the method provided by the embodiments of this application, a vehicle speed prediction model determines the vehicle's two-dimensional position offset data over a two-frame interval. The vehicle speed prediction model can be learned from a large training data set in which each sample contains at least two two-dimensional vehicle images annotated with ground-truth two-dimensional vehicle position offsets; that is, the training data includes at least two training two-dimensional vehicle images and ground-truth two-dimensional vehicle position offset data.

In the time dimension, the ground-truth two-dimensional vehicle position offset data may be the vehicle's offset over the last two-frame interval, or over any two-frame interval.

In the data-granularity dimension, the ground-truth two-dimensional vehicle position offset data may be a ground-truth offset map with the same image size as the training two-dimensional vehicle image, or one smaller than the training image; it may also include only a very small number of ground-truth offsets — in the extreme case just one abscissa displacement value and one ordinate displacement value. That is, the ground-truth displacement of a vehicle in a training image may consist of only two values: the vehicle's abscissa displacement and its ordinate displacement.
Please refer to FIG. 2, which is a specific flowchart of the method provided in an embodiment of this application. In this embodiment, the method may further include the following steps:

Step S201: learn the vehicle speed prediction model from the training data set.

The training data set includes a large number of training data, i.e., training samples. It should be noted that the number of two-dimensional vehicle images in each training sample should equal the number of two-dimensional vehicle images input to the model when the model is used for speed prediction.

In this embodiment, step S201 may include the following sub-steps:

Step S2011: determine the training data set.
In this embodiment, the training data set is determined by the following steps: 1) obtain at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center-point offset between the three-dimensional bounding boxes of the same vehicle in two preset frames as the ground-truth three-dimensional vehicle position offset; 3) project the ground-truth three-dimensional offset into the bird's-eye-view coordinate system to obtain the ground-truth two-dimensional offset; 4) form the ground-truth two-dimensional vehicle position offset map from the ground-truth two-dimensional offsets; and generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.

When training the vehicle speed prediction model, the network requires annotation data that provides, for consecutive frames (two or more), the 3D rectangular bounding boxes of the tracked objects (vehicles), including their vehicle point cloud data, together with the frame-to-frame correspondence of these boxes, so that the same vehicle can be extracted from adjacent frames and at least two training two-dimensional vehicle images can be generated. In this embodiment, the center-point offset of the 3D bounding box of the same vehicle between adjacent frames is used as the ground-truth offset for network regression; it is projected into the bird's-eye-view coordinate system, and the ground-truth offset is filled in at the position corresponding to that 3D box in frame 1, forming the ground-truth two-dimensional vehicle position offset map over the two-frame interval. Table 1 shows the annotation data used to determine the training data in this embodiment.
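The ground-truth construction described above can be sketched as follows (a simplified illustration; the grid layout, cell resolution, and function name are assumptions, not the patent's implementation):

```python
def make_offset_truth_map(boxes_t0, boxes_t1, grid, res):
    """Build a 2-channel ground-truth offset map in the bird's-eye view.

    boxes_t0 / boxes_t1: dict mapping vehicle id -> 3D box center (x, y, z)
    in meters for two consecutive annotated frames.
    grid: (H, W) size of the BEV map; res: meters per cell.
    The cell onto which a frame-1 box center projects is filled with that
    vehicle's (dx, dy) center offset between the frames; elsewhere zero.
    """
    H, W = grid
    truth = [[[0.0, 0.0] for _ in range(W)] for _ in range(H)]
    for vid, (x1, y1, _) in boxes_t1.items():
        if vid not in boxes_t0:
            continue  # correspondence across frames is required
        x0, y0, _ = boxes_t0[vid]
        dx, dy = x1 - x0, y1 - y0  # 3D center offset, projected to BEV (z dropped)
        col, row = int(x1 / res), int(y1 / res)
        if 0 <= row < H and 0 <= col < W:
            truth[row][col] = [dx, dy]
    return truth

t0 = {"car7": (10.0, 4.0, 0.5)}
t1 = {"car7": (12.0, 4.5, 0.5)}
m = make_offset_truth_map(t0, t1, (50, 50), 0.5)
print(m[9][24])  # [2.0, 0.5]
```

A real pipeline would fill every cell covered by the 3D box footprint rather than only its center cell; the single-cell version keeps the sketch short.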
Table 1. Annotation data used to determine the training data

The annotation data in Table 1 provides the 3D rectangular bounding boxes of vehicles in n consecutive frames and the correspondence of these boxes across frames, so that the same vehicle can be extracted from the n consecutive frames.

Step S2013: construct the network structure of the prediction model.
Please refer to FIG. 3, a schematic diagram of the network structure of the prediction model in an embodiment of this application. As shown in FIG. 3, the network of this embodiment is a convolutional neural network that may include multiple convolutional layers and multiple deconvolutional layers; the two-dimensional vehicle position offset data map output by the vehicle speed prediction model has the same image size as the training two-dimensional vehicle images. The network concatenates the two-dimensional vehicle images generated from the point clouds of the two frames along the channel dimension as the model input, and outputs a two-channel two-dimensional vehicle position offset data map whose width and height equal those of the input image. Since the output map contains two-dimensional vehicle position offset data and thus reflects vehicle speed information, it may also be called a speed map. The two channels give, for the point cloud present at each pixel position, the offset components in the x and y directions of the image coordinate system.
In this embodiment, for the merged input two-dimensional vehicle image, several successive convolutional and max-pooling layers first extract high-dimensional vehicle-displacement features with a smaller feature-map size, and several deconvolutional layers then restore the size of the original input image; the model output map contains, for each pixel of each vehicle in the two-dimensional vehicle image, the two-dimensional position offset over the two-frame interval. In this embodiment, a convolutional layer that extracts higher-dimensional, smaller-feature-map vehicle-displacement features from its input feature map is called a vehicle-displacement feature extraction layer; a specific implementation may include multiple such layers. Correspondingly, a deconvolutional layer that upsamples larger-feature-map vehicle-displacement features from its input feature map is called a vehicle-displacement feature upsampling layer, up to the last deconvolutional layer, which upsamples a two-dimensional vehicle position offset data map of the same size as the original input image; a specific implementation may include multiple such layers. With this design, the two-dimensional position offset of each vehicle in the input image can be read directly from the offset data map at the vehicle's two-dimensional position; the accuracy of the estimated vehicle speed is thereby effectively improved, and processing speed is improved as well.
As shown in FIG. 3, the input to a vehicle-displacement feature upsampling layer may include the output feature map of the immediately preceding upsampling layer, and may also include the output feature map of an earlier vehicle-displacement feature extraction layer whose image size matches that output. This design retains richer speed-related feature data and upsamples the two-dimensional position offsets from that richer data, which effectively improves the accuracy of the estimated vehicle speed.
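The size bookkeeping behind this encoder-decoder layout with skip connections can be sketched as follows (a toy illustration assuming exact halving/doubling per stage; the function name is hypothetical):

```python
def feature_map_sizes(input_size, n_down):
    """Feature-map sizes for n_down stride-2 downsampling stages
    (convolution + max-pooling) followed by n_down stride-2 deconvolution
    stages, as in an encoder-decoder network whose output matches the input
    size. Returns the encoder sizes, the decoder sizes, and the
    (decoder, encoder) pairs of equal size that can be skip-connected."""
    enc = [input_size]
    for _ in range(n_down):
        enc.append(enc[-1] // 2)  # each extraction stage halves the map
    dec = [enc[-1]]
    for _ in range(n_down):
        dec.append(dec[-1] * 2)   # each upsampling stage doubles it
    skips = [(d, e) for d, e in zip(dec[1:], reversed(enc[:-1]))]
    return enc, dec, skips

enc, dec, skips = feature_map_sizes(512, 3)
print(enc)    # [512, 256, 128, 64]
print(dec)    # [64, 128, 256, 512]
print(skips)  # [(128, 128), (256, 256), (512, 512)]
```

The last decoder size equals the input size, matching the statement that the output offset map has the same width and height as the input image; the equal-size pairs are exactly where an upsampling layer can also take an extraction layer's output as input.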
In another example, the network structure may omit the deconvolutional layers, i.e., the vehicle-displacement feature upsampling layers; in that case, the two-dimensional vehicle position offset data map output by the vehicle speed prediction model may differ in image size from the training two-dimensional vehicle images.

When the input and output image sizes of the vehicle speed prediction model differ, the training data set may be determined by the following steps: 1) obtain at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center-point offset between the three-dimensional bounding boxes of the same vehicle in two preset frames as the ground-truth three-dimensional vehicle position offset; 3) project the ground-truth three-dimensional offset into the bird's-eye-view coordinate system to obtain the ground-truth two-dimensional offset; 4) generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data. This omits the step of forming the ground-truth two-dimensional vehicle position offset map from the ground-truth two-dimensional offsets, and thus effectively improves processing speed. However, the model obtained this way will be less accurate than the model described above, which has vehicle-displacement feature upsampling layers and whose output offset data map has the same image size as the two-dimensional vehicle images input to the model.
Step S2015: learn the prediction model from the training data set.

After the training data set is obtained and the network structure is constructed, the weights of the network can be trained on the training data set; model training can stop when the weights bring the difference between the model's output two-dimensional vehicle position offset data map and the ground-truth map to the optimization target.

In this embodiment, to achieve better convergence, the following two techniques may also be applied during model training:

1) A mask map is used when computing the loss function during training. When the model input is two two-dimensional vehicle images, the mask map corresponds to the frame-1 image (the other frame being frame 0): a mask pixel is 1 only where the frame-1 two-dimensional vehicle image contains a vehicle, and 0 elsewhere. When computing the loss function, only pixels whose mask value is 1 contribute to the loss.

2) The network of this embodiment adopts a multi-scale design and computes the loss function on the outputs of multiple deconvolutional layers to help the network converge. Since the output feature maps of the deconvolutional layers differ in size, the ground-truth map and the mask map must be downsampled to the corresponding size before the loss is computed.
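The masked loss and the downsampling of the truth/mask maps can be sketched with plain lists standing in for tensors (an L1 loss and nearest-neighbour downsampling are assumptions for illustration; the patent does not specify the loss form):

```python
def masked_l1_loss(pred, truth, mask):
    """Mean L1 loss over the two offset channels, counting only pixels whose
    mask value is 1 (pixels of the frame-1 image that contain a vehicle)."""
    total, count = 0.0, 0
    for p_row, t_row, m_row in zip(pred, truth, mask):
        for (px, py), (tx, ty), m in zip(p_row, t_row, m_row):
            if m == 1:
                total += abs(px - tx) + abs(py - ty)
                count += 1
    return total / count if count else 0.0

def downsample2(grid):
    """Nearest-neighbour 2x downsampling, used to match the truth and mask
    maps to the smaller outputs of intermediate deconvolutional layers."""
    return [row[::2] for row in grid[::2]]

pred  = [[(1.0, 0.0), (0.0, 0.0)], [(0.0, 0.0), (3.0, 4.0)]]
truth = [[(2.0, 0.0), (9.0, 9.0)], [(0.0, 0.0), (1.0, 1.0)]]
mask  = [[1, 0], [0, 1]]
print(masked_l1_loss(pred, truth, mask))  # 3.0
```

The large error at the masked-out pixel (9.0, 9.0) does not affect the loss, which is the point of the mask: background pixels carry no meaningful offset target.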
Step S1033: determine the vehicle traveling speed from the two-dimensional vehicle position offset data and the time interval.

In one example, the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map of the same image size as the two-dimensional vehicle image; accordingly, step S1033 may include the following sub-steps: 1) convert each vehicle's two-dimensional position offset data into three-dimensional position offset data in the point cloud coordinate system; 2) for each vehicle, take the ratios of the averaged abscissa offset components and the averaged ordinate offset components of the vehicle's spatial points to the time interval as the vehicle's traveling speed.

In this embodiment, the traveling speeds of other vehicles on the road are determined from the last two frames of environment point cloud data collected by the ego vehicle. The point cloud of each vehicle in the last frame is projected onto the speed map; the two-dimensional offset components at the pixel onto which each point projects are extracted and converted back into the point cloud coordinate system as that point's offset components in the x and y directions of three-dimensional space. The mean of the three-dimensional offset components over all points of the same vehicle is then taken as the vehicle's three-dimensional position offset. Finally, dividing this three-dimensional position offset by the known two-frame time interval yields the vehicle's traveling speed. This approach determines the speed from the position offsets of all of the vehicle's points taken together, which effectively improves the accuracy of the speed estimate.
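The per-point averaging just described can be sketched as follows (the map layout, cell resolution, and function name are assumptions for illustration):

```python
def vehicle_speed_from_points(points, speed_map, res, dt):
    """Average the per-point 2D offsets that a speed map assigns to one
    vehicle's point cloud, then divide by the frame interval dt to obtain
    the velocity components (vx, vy) in m/s.

    points: list of (x, y) coordinates (meters) of one vehicle's points.
    speed_map: speed_map[row][col] -> (dx, dy) offset in meters.
    res: meters per map cell.
    """
    sx = sy = 0.0
    for x, y in points:
        dx, dy = speed_map[int(y / res)][int(x / res)]  # project point to map
        sx += dx
        sy += dy
    n = len(points)
    return (sx / n / dt, sy / n / dt)

# Two points of the same vehicle; 0.5 s between frames, 1 m cells.
smap = [[(0.0, 0.0), (10.0, 5.0)],
        [(0.0, 0.0), (10.0, 5.0)]]
pts = [(1.2, 0.3), (1.4, 1.1)]  # both fall into column 1
print(vehicle_speed_from_points(pts, smap, 1.0, 0.5))  # (20.0, 10.0)
```

Averaging over all the vehicle's points before dividing by the interval smooths out per-pixel prediction noise, which is the accuracy benefit the embodiment describes.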
In another example, two two-dimensional vehicle images are generated from the vehicle point cloud data in two frames of environment point cloud data; the two images are used as the input data of the prediction model, which generates a two-dimensional vehicle position offset data map of the same image size as the input images. The offset data map includes an abscissa offset map and an ordinate offset map; for each vehicle in the two-dimensional vehicle image, the traveling speed is determined from the abscissa offset map, the ordinate offset map, and the time interval. This approach determines the vehicle speed directly from the two-dimensional offset data of the vehicle's points and the time interval, which effectively speeds up the estimation.

For example, if vehicle 1's abscissa offset component from frame-0 time t_0 to frame-1 time t_1 is 10 meters and its ordinate offset component is 5 meters, and t_0 and t_1 are 500 milliseconds apart, vehicle 1's traveling speed is 72 km/h; if vehicle 2's abscissa offset component from t_0 to t_1 is 15 meters and its ordinate offset component is 5 meters, with the same 500-millisecond interval, vehicle 2's traveling speed is 108 km/h.

In a specific implementation, the ratios of the averaged abscissa offset components and the averaged ordinate offset components of the vehicle's pixels to the time interval may also be taken as the vehicle's traveling speed. In this way, the position offsets of all the vehicle's points are considered jointly, which effectively improves the accuracy of the speed estimate.

Through the processing of steps S1031 and S1033 described above, this embodiment uses the vehicle speed prediction model to determine the position offsets of points of the vehicle (e.g., all or some of its points) and determines the vehicle's traveling speed from these offsets; the accuracy of the estimated vehicle speed is thereby effectively improved.
As can be seen from the above embodiments, the vehicle speed determination method provided by the embodiments of this application generates at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data, and determines the vehicle traveling speed from the at least two two-dimensional vehicle images and the time interval between any two of the at least two frames of road environment point cloud data. This processing generates two-dimensional vehicle images corresponding to the vehicle point cloud data in the frames and determines the vehicle's traveling speed from these images and the inter-frame time interval; it therefore effectively improves the accuracy of the vehicle speed, and thereby road traffic safety.
Second Embodiment

The above embodiment provides a vehicle speed determination method; correspondingly, this application also provides a vehicle speed determination device. The device corresponds to the embodiment of the above method.

Please refer to FIG. 4, a schematic diagram of an embodiment of the vehicle speed determination device of this application. Since the device embodiment is substantially similar to the method embodiment, it is described briefly; for relevant details, refer to the description of the method embodiment. The device embodiment described below is merely illustrative.

This application further provides a vehicle speed determination device, including:

an image generation unit 401, configured to generate at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and

a speed determination unit 403, configured to determine the vehicle traveling speed from the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
Third Embodiment

The above embodiment provides a vehicle speed determination method; correspondingly, this application also provides a vehicle. The vehicle corresponds to the embodiment of the above method.

Please refer to FIG. 5, a schematic diagram of an embodiment of the vehicle of this application. Since the vehicle embodiment is substantially similar to the method embodiment, it is described briefly; for relevant details, refer to the description of the method embodiment. The vehicle embodiment described below is merely illustrative.

This application further provides a vehicle, including: a three-dimensional space scanning device 500; a processor 501; and a memory 502 for storing a program implementing the vehicle speed determination method. After the device is powered on and the program of the method is run by the processor, the following steps are performed: collect road environment point cloud data with the three-dimensional space scanning device; generate at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and determine the vehicle traveling speed from the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
Fourth Embodiment

Please refer to FIG. 6, a schematic diagram of an embodiment of the roadside sensing device of this application. Since the device embodiment is substantially similar to the method embodiment, it is described briefly; for relevant details, refer to the description of the method embodiment. The device embodiment described below is merely illustrative.

A roadside sensing device of this embodiment includes: a three-dimensional space scanning device 600; a processor 601; and a memory 602 for storing a program implementing the method. After the device is powered on and the program of the method is run by the processor, the following steps are performed: collect road environment point cloud data with the three-dimensional space scanning device; generate at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and determine the vehicle traveling speed from the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
Fifth Embodiment

Please refer to FIG. 7, a schematic diagram of an embodiment of the electronic device of this application. Since the device embodiment is substantially similar to the method embodiment, it is described briefly; for relevant details, refer to the description of the method embodiment. The device embodiment described below is merely illustrative.

An electronic device of this embodiment includes: a processor 701 and a memory 702, the memory storing a program implementing the method. After the device is powered on and the program of the method is run by the processor, the following steps are performed: generate at least two two-dimensional vehicle images from the vehicle point cloud data in at least two frames of road environment point cloud data; and determine the vehicle traveling speed from the at least two two-dimensional vehicle images and the time interval between two frames of the at least two frames of road environment point cloud data.
Sixth Embodiment

The above embodiment provides a vehicle speed determination method; correspondingly, this application also provides a vehicle speed prediction model construction method. The method corresponds to the embodiment of the above method.

Please refer to FIG. 8, a flowchart of an embodiment of the vehicle speed prediction model construction method of this application. Since this method embodiment is substantially similar to the first method embodiment, it is described briefly; for relevant details, refer to the description of the first method embodiment. The method embodiment described below is merely illustrative.

A vehicle speed prediction model construction method of this embodiment includes:

Step S801: determine the training data set.

The training data includes at least two training two-dimensional vehicle images and ground-truth two-dimensional vehicle position offset data. The ground-truth two-dimensional vehicle position offset data may be a ground-truth offset map with the same image size as the training two-dimensional vehicle image, or a ground-truth offset map with a different image size, and so on.
In one example, the ground-truth two-dimensional vehicle position offset data is a ground-truth offset map with the same image size as the training two-dimensional vehicle image, or with a different image size; accordingly, the training data set may be determined by the following steps: 1) obtain at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center-point offset between the three-dimensional bounding boxes of the same vehicle in two preset frames as the ground-truth three-dimensional vehicle position offset; 3) project the ground-truth three-dimensional offset into the bird's-eye-view coordinate system to obtain the ground-truth two-dimensional offset; 4) form the ground-truth two-dimensional vehicle position offset map from the ground-truth two-dimensional offsets; and generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.

In another example, the ground-truth two-dimensional vehicle position offset data includes only a very small number of ground-truth offsets — in the extreme case, just one abscissa displacement value and one ordinate displacement value; that is, the ground-truth displacement of a vehicle in a training image consists of only two values: the vehicle's abscissa displacement and its ordinate displacement. Accordingly, the training data set may be determined by the following steps: 1) obtain at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers; 2) according to the annotation data, take the center-point offset between the three-dimensional bounding boxes of the same vehicle in two preset frames as the ground-truth three-dimensional vehicle position offset; 3) project the ground-truth three-dimensional offset into the bird's-eye-view coordinate system to obtain the ground-truth two-dimensional offset; 4) generate at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.
Step S803: Construct the network structure of the vehicle speed prediction model.
The network structure may include at least one vehicle displacement feature extraction layer and at least one vehicle displacement feature up-sampling layer, or may include only the vehicle displacement feature extraction layer. The vehicle displacement feature extraction layer may be implemented with convolution operations, and the vehicle displacement feature up-sampling layer with deconvolution (transposed convolution) operations.
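The resolution contract between the two layer types can be illustrated as follows. The pooling and nearest-neighbour operations below are simple stand-ins for the learned convolution and deconvolution layers, which the application does not specify in detail:

```python
import numpy as np

def extract_features(img):
    """Stand-in for a stride-2 convolutional feature-extraction layer:
    halves the spatial resolution (here via 2x2 average pooling)."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def upsample_features(feat):
    """Stand-in for a deconvolution (transposed-convolution) up-sampling layer:
    doubles the spatial resolution by nearest-neighbour repetition."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

bev_image = np.random.rand(64, 64)       # a top-view (BEV) vehicle image
feat = extract_features(bev_image)       # 64x64 -> 32x32
offset_map = upsample_features(feat)     # 32x32 -> 64x64

# The up-sampling path restores the input resolution, so an offset true-value
# map with the same image size as the input can supervise every pixel.
print(feat.shape, offset_map.shape)
```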
Step S805: Learn the vehicle speed prediction model from the training data set.
For this step, refer to the description of step S2015 in the first method embodiment; it is not repeated here.
As can be seen from the above embodiments, the vehicle speed prediction model construction method provided by the embodiments of this application determines a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data; constructs the network structure of the vehicle speed prediction model; and learns the vehicle speed prediction model from the training data set. This processing allows a model that predicts vehicle displacement from at least two two-dimensional vehicle images to be learned from a large amount of training data, and can therefore effectively improve the accuracy of the vehicle speed prediction model.
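As a minimal, self-contained illustration of "learning the model from the training data set", the sketch below fits a single linear layer to synthetic image/offset pairs by gradient descent on squared error. The data, the linear layer, and the optimizer are all stand-in assumptions, far simpler than the convolutional model described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in training set (assumed, for illustration only): each sample
# stacks two flattened 8x8 BEV vehicle images; the label is the (dx, dy)
# two-dimensional vehicle position offset true value.
n, d = 200, 2 * 8 * 8
X = rng.normal(size=(n, d))
true_w = rng.normal(size=(d, 2))
Y = X @ true_w                       # offset true values, one (dx, dy) per sample

# "Network structure": a single linear layer standing in for the convolutional
# model; "learning from the training data set": gradient descent on squared error.
W = np.zeros((d, 2))
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n     # gradient of the mean squared error
    W -= 0.01 * grad

mse = float(np.mean((X @ W - Y) ** 2))
print("final training MSE:", mse)
```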
Seventh embodiment
Please refer to FIG. 9, which is a schematic diagram of an embodiment of the vehicle speed prediction model construction device of this application. Since the device embodiments are substantially similar to the method embodiments, they are described relatively simply; for related details, refer to the corresponding parts of the method embodiments. The device embodiments described below are merely illustrative.
A vehicle speed prediction model construction device of this embodiment includes:
a data determining unit 901, configured to determine a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data;
a network construction unit 903, configured to construct the network structure of a vehicle speed prediction model; and
a model training unit 905, configured to learn the vehicle speed prediction model from the training data set.
Eighth embodiment
Please refer to FIG. 10, which is a schematic diagram of an embodiment of the electronic device of this application. Since the device embodiments are substantially similar to the method embodiments, they are described relatively simply; for related details, refer to the corresponding parts of the method embodiments. The device embodiments described below are merely illustrative.
An electronic device of this embodiment includes a processor 1001 and a memory 1002. The memory stores a program implementing the method; after the device is powered on and the program is run by the processor, the following steps are performed: determining a training data set, where the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data; constructing the network structure of a vehicle speed prediction model; and learning the vehicle speed prediction model from the training data set.
Although this application is disclosed above with preferred embodiments, they are not intended to limit the application. Any person skilled in the art may make possible changes and modifications without departing from the spirit and scope of this application; therefore, the protection scope of this application shall be subject to the scope defined by the claims of this application.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent storage in computer-readable media, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
1. Computer-readable media include persistent and non-persistent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, and may be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
2. Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system, or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.

Claims (26)

  1. A method for determining a vehicle speed, comprising:
    generating at least two two-dimensional vehicle images according to vehicle point cloud data in at least two frames of road environment point cloud data; and
    determining a vehicle travel speed according to the at least two two-dimensional vehicle images and a time interval between two of the at least two frames of road environment point cloud data.
  2. The method according to claim 1, wherein determining the vehicle travel speed according to the at least two two-dimensional vehicle images and the time interval between two of the at least two frames of road environment point cloud data comprises:
    determining, according to the at least two two-dimensional vehicle images, two-dimensional vehicle position offset data corresponding to the time interval; and
    determining the vehicle travel speed according to the two-dimensional vehicle position offset data and the time interval.
  3. The method according to claim 2, wherein
    the two-dimensional vehicle position offset data is determined from the at least two two-dimensional vehicle images by a vehicle speed prediction model.
  4. The method according to claim 3, further comprising:
    learning the vehicle speed prediction model from a training data set, wherein the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data.
  5. The method according to claim 4, wherein the vehicle speed prediction model is determined through the following steps:
    determining the training data set;
    constructing a network structure of the vehicle speed prediction model; and
    learning the vehicle speed prediction model from the training data set.
  6. The method according to claim 5, wherein
    the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature up-sampling layer.
  7. The method according to claim 6, wherein
    the two-dimensional vehicle position offset true-value data includes a two-dimensional vehicle position offset true-value map having the same image size as the training two-dimensional vehicle images.
  8. The method according to claim 7, wherein the training data set is determined through the following steps:
    obtaining at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
    taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset true value;
    projecting the three-dimensional vehicle position offset true value into a top-view coordinate system to obtain a two-dimensional vehicle position offset true value; and
    forming the two-dimensional vehicle position offset true-value map from the two-dimensional vehicle position offset true values, and generating at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.
  9. The method according to claim 5, wherein the training data set is determined through the following steps:
    obtaining at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
    taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset true value;
    projecting the three-dimensional vehicle position offset true value into a top-view coordinate system to obtain a two-dimensional vehicle position offset true value; and
    generating at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.
  10. The method according to claim 2, wherein
    the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle images; and
    determining the vehicle travel speed according to the two-dimensional vehicle position offset data and the time interval comprises:
    taking, for each vehicle, the ratios of the average abscissa offset component and the average ordinate offset component of the pixels corresponding to that vehicle in the two-dimensional vehicle position offset data map to the time interval as the vehicle travel speed.
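The averaging rule of claim 10 can be sketched as follows; the BEV grid resolution used to convert pixel offsets to metres and the per-vehicle pixel mask are assumed for illustration:

```python
import numpy as np

METERS_PER_PIXEL = 0.1   # assumed BEV resolution: pixel offsets -> metres
dt = 0.1                 # time interval between the two frames, in seconds

# Assumed shapes: offset_map[..., 0] / offset_map[..., 1] hold the per-pixel
# abscissa/ordinate offset components; vehicle_mask marks one vehicle's pixels.
offset_map = np.zeros((64, 64, 2))
vehicle_mask = np.zeros((64, 64), dtype=bool)
vehicle_mask[10:14, 20:26] = True
offset_map[vehicle_mask] = [8.0, -2.0]   # this vehicle moved 8 px along x, -2 px along y

def vehicle_speed(offset_map, mask, dt, m_per_px=METERS_PER_PIXEL):
    """Average the offset components over the vehicle's pixels, then divide by dt."""
    mean_dx = offset_map[mask, 0].mean() * m_per_px
    mean_dy = offset_map[mask, 1].mean() * m_per_px
    return mean_dx / dt, mean_dy / dt     # (vx, vy) in m/s

vx, vy = vehicle_speed(offset_map, vehicle_mask, dt)
print(vx, vy)   # 8 px * 0.1 m/px / 0.1 s = 8.0 m/s along x, -2.0 m/s along y
```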
  11. The method according to claim 2, wherein
    the two-dimensional vehicle position offset data includes a two-dimensional vehicle position offset data map having the same image size as the two-dimensional vehicle images; and
    determining the vehicle travel speed according to the two-dimensional vehicle position offset data and the time interval comprises:
    converting the two-dimensional vehicle position offset data of each vehicle into three-dimensional vehicle position offset data in the point cloud coordinate system; and
    taking the ratios of the average abscissa offset component and the average ordinate offset component of the spatial points corresponding to the vehicle to the time interval as the vehicle travel speed.
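The variant of claim 11 differs from claim 10 only in averaging after converting back to the point cloud coordinate system. A sketch, assuming the BEV image axes are aligned with the cloud's x/y axes and a fixed grid resolution (both assumptions are not prescribed by the claims):

```python
import numpy as np

METERS_PER_PIXEL = 0.1   # assumed BEV grid resolution
dt = 0.1                 # frame time interval in seconds

# Per-pixel 2D offsets (in BEV pixels) for one vehicle's pixels;
# the shapes and values are illustrative assumptions.
pixel_offsets = np.array([[8.0, -2.0], [8.2, -1.8], [7.8, -2.2]])

def to_point_cloud_offsets(pixel_offsets, m_per_px=METERS_PER_PIXEL):
    """Scale pixel offsets back to metric offsets in the point cloud coordinate
    system (BEV axes assumed axis-aligned with the cloud's x/y axes)."""
    return pixel_offsets * m_per_px

def speed_from_points(point_offsets, dt):
    # Average the spatial points' offset components, then divide by dt.
    vx = point_offsets[:, 0].mean() / dt
    vy = point_offsets[:, 1].mean() / dt
    return vx, vy

vx, vy = speed_from_points(to_point_cloud_offsets(pixel_offsets), dt)
print(round(vx, 3), round(vy, 3))
```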
  12. The method according to claim 1, wherein the two-dimensional vehicle images include two-dimensional vehicle images from a top-view angle.
  13. The method according to claim 1, wherein generating at least two two-dimensional vehicle images according to the vehicle point cloud data in at least two frames of road environment point cloud data comprises:
    determining pose data of a vehicle speed determination device;
    converting, according to the pose data, the vehicle point cloud data of frames before the last frame into vehicle point cloud data in the point cloud coordinate system of the last frame; and
    generating, from the converted vehicle point cloud data of the frames before the last frame, the two-dimensional vehicle images corresponding to those frames.
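The coordinate-system conversion in claim 13 can be sketched with a simplified 2D pose (yaw plus translation). A real device pose would be a full 6-DoF 4x4 transform, and the pose values below are assumed:

```python
import numpy as np

def pose_to_matrix(yaw, tx, ty):
    """Assumed minimal 2D pose (yaw + translation) -> 3x3 homogeneous matrix."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

def to_last_frame(points_xy, pose_prev, pose_last):
    """Map points from an earlier frame's coordinate system into the last
    frame's: into the world frame via pose_prev, then back via the inverse
    of pose_last."""
    T = np.linalg.inv(pose_to_matrix(*pose_last)) @ pose_to_matrix(*pose_prev)
    homog = np.column_stack([points_xy, np.ones(len(points_xy))])
    return (homog @ T.T)[:, :2]

# Device moved 1 m along x between the frames, no rotation (assumed values):
pts_prev = np.array([[10.0, 2.0], [11.0, 2.5]])
pts_in_last = to_last_frame(pts_prev, pose_prev=(0.0, 0.0, 0.0),
                            pose_last=(0.0, 1.0, 0.0))
print(pts_in_last)   # x coordinates shift by -1 in the last frame's system
```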
  14. The method according to claim 1, further comprising:
    extracting the vehicle point cloud data from the road environment point cloud data by a vehicle detection model.
  15. The method according to claim 1, further comprising:
    collecting the road environment point cloud data.
  16. A method for constructing a vehicle speed prediction model, comprising:
    determining a training data set, wherein the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data;
    constructing a network structure of the vehicle speed prediction model; and
    learning the vehicle speed prediction model from the training data set.
  17. The method according to claim 16, wherein
    the network structure includes a vehicle displacement feature extraction layer and a vehicle displacement feature up-sampling layer.
  18. The method according to claim 17, wherein
    the two-dimensional vehicle position offset true-value data includes a two-dimensional vehicle position offset true-value map having the same image size as the training two-dimensional vehicle images.
  19. The method according to claim 18, wherein the training data set is determined through the following steps:
    obtaining at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
    taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset true value;
    projecting the three-dimensional vehicle position offset true value into a top-view coordinate system to obtain a two-dimensional vehicle position offset true value; and
    forming the two-dimensional vehicle position offset true-value map from the two-dimensional vehicle position offset true values, and generating at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.
  20. The method according to claim 16, wherein the training data set is determined through the following steps:
    obtaining at least two frames of training environment point cloud data annotated with three-dimensional vehicle bounding boxes and vehicle identifiers;
    taking, according to the annotation data, the center-point offset between the three-dimensional vehicle bounding boxes of the same vehicle in two preset frames as a three-dimensional vehicle position offset true value;
    projecting the three-dimensional vehicle position offset true value into a top-view coordinate system to obtain a two-dimensional vehicle position offset true value; and
    generating at least two training two-dimensional vehicle images from the vehicle point cloud data in the at least two frames of training environment point cloud data.
  21. A vehicle speed determination device, comprising:
    an image generation unit, configured to generate at least two two-dimensional vehicle images according to vehicle point cloud data in at least two frames of road environment point cloud data; and
    a speed determination unit, configured to determine a vehicle travel speed according to the at least two two-dimensional vehicle images and a time interval between two of the at least two frames of road environment point cloud data.
  22. A vehicle speed prediction model construction device, comprising:
    a data determining unit, configured to determine a training data set, wherein the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data;
    a network construction unit, configured to construct a network structure of a vehicle speed prediction model; and
    a model training unit, configured to learn the vehicle speed prediction model from the training data set.
  23. A vehicle, comprising:
    a three-dimensional space scanning device;
    a processor; and
    a memory storing a program implementing a vehicle speed determination method, wherein after the vehicle is powered on and the program is run by the processor, the following steps are performed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to vehicle point cloud data in at least two frames of the road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and a time interval between two of the at least two frames of road environment point cloud data.
  24. A road-test sensing device, comprising:
    a three-dimensional space scanning device;
    a processor; and
    a memory storing a program implementing a vehicle speed determination method, wherein after the device is powered on and the program is run by the processor, the following steps are performed: collecting road environment point cloud data through the three-dimensional space scanning device; generating at least two two-dimensional vehicle images according to vehicle point cloud data in at least two frames of the road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and a time interval between two of the at least two frames of road environment point cloud data.
  25. An electronic device, comprising:
    a processor; and
    a memory storing a program implementing a vehicle speed determination method, wherein after the device is powered on and the program is run by the processor, the following steps are performed: generating at least two two-dimensional vehicle images according to vehicle point cloud data in at least two frames of road environment point cloud data; and determining a vehicle travel speed according to the at least two two-dimensional vehicle images and a time interval between two of the at least two frames of road environment point cloud data.
  26. An electronic device, comprising:
    a processor; and
    a memory storing a program implementing a vehicle speed prediction model construction method, wherein after the device is powered on and the program is run by the processor, the following steps are performed: determining a training data set, wherein the training data includes at least two training two-dimensional vehicle images and two-dimensional vehicle position offset true-value data; constructing a network structure of the vehicle speed prediction model; and learning the vehicle speed prediction model from the training data set.
PCT/CN2020/089606 2019-05-22 2020-05-11 Vehicle speed determination method, and vehicle WO2020233436A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910431632.6A CN111986472B (en) 2019-05-22 2019-05-22 Vehicle speed determining method and vehicle
CN201910431632.6 2019-05-22

Publications (1)

Publication Number Publication Date
WO2020233436A1 true WO2020233436A1 (en) 2020-11-26

Family

ID=73436392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/089606 WO2020233436A1 (en) 2019-05-22 2020-05-11 Vehicle speed determination method, and vehicle

Country Status (2)

Country Link
CN (1) CN111986472B (en)
WO (1) WO2020233436A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049698B (en) * 2022-08-17 2022-11-04 杭州兆华电子股份有限公司 Cloud picture display method and device of handheld acoustic imaging equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN108460791A (en) * 2017-12-29 2018-08-28 百度在线网络技术(北京)有限公司 Method and apparatus for handling point cloud data
US20180357773A1 (en) * 2017-06-13 2018-12-13 TuSimple Sparse image point correspondences generation and correspondences refinement system for ground truth static scene sparse flow generation
CN109271880A (en) * 2018-08-27 2019-01-25 深圳清创新科技有限公司 Vehicle checking method, device, computer equipment and storage medium
CN109376664A (en) * 2018-10-29 2019-02-22 百度在线网络技术(北京)有限公司 Machine learning training method, device, server and medium

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101778623B1 (en) * 2016-01-06 2017-09-27 (주)안세기술 Methof for safely guiding an airplane to a parking ramp by using scanner including 2D laser scanner and motor
CN107193011A (en) * 2016-03-15 2017-09-22 山东理工大学 A kind of method for being used to quickly calculate car speed in automatic driving car area-of-interest
JPWO2018020680A1 (en) * 2016-07-29 2019-05-16 パイオニア株式会社 Measuring device, measuring method, and program
CN106205126B (en) * 2016-08-12 2019-01-15 北京航空航天大学 Large-scale Traffic Network congestion prediction technique and device based on convolutional neural networks
US10724848B2 (en) * 2016-08-29 2020-07-28 Beijing Qingying Machine Visual Technology Co., Ltd. Method and apparatus for processing three-dimensional vision measurement data
EP3324209A1 (en) * 2016-11-18 2018-05-23 Dibotics Methods and systems for vehicle environment map generation and updating
CN106951847B (en) * 2017-03-13 2020-09-29 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device and storage medium
CN107194957B (en) * 2017-04-17 2019-11-22 武汉光庭科技有限公司 The method that laser radar point cloud data is merged with information of vehicles in intelligent driving
CN106872722B (en) * 2017-04-25 2019-08-06 北京精英智通科技股份有限公司 A kind of measurement method and device of speed
CN109425365B (en) * 2017-08-23 2022-03-11 腾讯科技(深圳)有限公司 Method, device and equipment for calibrating laser scanning equipment and storage medium
CN108470159B (en) * 2018-03-09 2019-12-20 腾讯科技(深圳)有限公司 Lane line data processing method and device, computer device and storage medium
CN108985171B (en) * 2018-06-15 2023-04-07 上海仙途智能科技有限公司 Motion state estimation method and motion state estimation device
CN109345510A (en) * 2018-09-07 2019-02-15 百度在线网络技术(北京)有限公司 Object detecting method, device, equipment, storage medium and vehicle
CN109345829B (en) * 2018-10-29 2021-11-23 百度在线网络技术(北京)有限公司 Unmanned vehicle monitoring method, device, equipment and storage medium
CN109631915B (en) * 2018-12-19 2021-06-29 百度在线网络技术(北京)有限公司 Trajectory prediction method, apparatus, device and computer readable storage medium
CN109683170B (en) * 2018-12-27 2021-07-02 驭势科技(北京)有限公司 Image driving area marking method and device, vehicle-mounted equipment and storage medium
CN109726692A (en) * 2018-12-29 2019-05-07 重庆集诚汽车电子有限责任公司 High-definition camera 3D object detection system based on deep learning
CN109782015A (en) * 2019-03-21 2019-05-21 同方威视技术股份有限公司 Laser velocimeter method, control device and laser velocimeter

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634343A (en) * 2020-12-23 2021-04-09 北京百度网讯科技有限公司 Training method of image depth estimation model and processing method of image depth information
CN112652016A (en) * 2020-12-30 2021-04-13 北京百度网讯科技有限公司 Point cloud prediction model generation method, pose estimation method and device
CN112652016B (en) * 2020-12-30 2023-07-28 北京百度网讯科技有限公司 Point cloud prediction model generation method, pose estimation method and pose estimation device
CN114648886A (en) * 2022-03-07 2022-06-21 深圳市腾运发电子有限公司 New energy automobile control method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111986472B (en) 2023-04-28
CN111986472A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
WO2020233436A1 (en) Vehicle speed determination method, and vehicle
CN111024040B (en) Distance estimation method and device
EP3745158B1 (en) Methods and systems for computer-based determining of presence of dynamic objects
JP2021523443A (en) Association of lidar data and image data
JP2023523243A (en) Obstacle detection method and apparatus, computer device, and computer program
US11780465B2 (en) System and method for free space estimation
KR20160123668A (en) Device and method for recognition of obstacles and parking slots for unmanned autonomous parking
CN111209825B (en) Method and device for dynamic target 3D detection
WO2021027710A1 (en) Method, device, and equipment for object detection
KR101864127B1 (en) Apparatus and method for environment mapping of an unmanned vehicle
CN112184799A (en) Lane line space coordinate determination method and device, storage medium and electronic equipment
EP4213128A1 (en) Obstacle detection device, obstacle detection system, and obstacle detection method
McManus et al. Distraction suppression for vision-based pose estimation at city scales
CN113449692A (en) Map lane information updating method and system based on unmanned aerial vehicle
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN114519772A (en) Three-dimensional reconstruction method and system based on sparse point cloud and cost aggregation
CN114648639B (en) Target vehicle detection method, system and device
Hirata et al. Real-time dense depth estimation using semantically-guided LIDAR data propagation and motion stereo
US20220245831A1 (en) Speed estimation systems and methods without camera calibration
Teutsch et al. 3d-segmentation of traffic environments with u/v-disparity supported by radar-given masterpoints
CN112183378A (en) Road slope estimation method and device based on color and depth image
CN114170267A (en) Target tracking method, device, equipment and computer readable storage medium
US20230102186A1 (en) Apparatus and method for estimating distance and non-transitory computer-readable medium containing computer program for estimating distance
WO2020135325A1 (en) Mobile device positioning method, device and system, and mobile device
Lin et al. Object Recognition with Layer Slicing of Point Cloud

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20810685

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20810685

Country of ref document: EP

Kind code of ref document: A1