WO2020073268A1 - Snapshot image to train roadmodel - Google Patents
- Publication number
- WO2020073268A1 (PCT/CN2018/109796)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sensor data
- sensors
- data
- training
- vehicle
- Prior art date
Classifications
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G01S13/865—Combination of radar systems with lidar systems
- G01S13/867—Combination of radar systems with cameras
- G01S13/89—Radar or analogous systems specially adapted for mapping or imaging
- G01S13/931—Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G06F18/256—Fusion techniques of classification results relating to different input data, e.g. multimodal recognition
- G01S2013/9323—Alternative operation using light waves
Definitions
- The present disclosure relates in general to automated driving vehicles and, more particularly, to training roadmodels with snapshot images.
- An automated driving vehicle (ADV), also known as a driverless car, self-driving car, or robotic car, is a vehicle capable of sensing its environment and navigating with reduced or no human input.
- ADVs use a variety of techniques to detect their surroundings, such as radar, laser light, GPS, odometry and computer vision.
- Advanced control systems interpret this sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
- An ADV collects sensor data from a variety of on-board sensors, such as cameras, Lidar, radar, etc. Based on the sensor data, the ADV can construct a real-time roadmodel of its surroundings.
- Roadmodels may include a variety of information including, but not limited to, lanemarkings, traffic lights, traffic signs, road boundaries, etc.
- the constructed roadmodel is compared to the pre-installed roadmodels, such as those provided by high definition (HD) map providers, so that the ADV may more accurately determine its location in the HD map.
- the ADV may also identify objects around it, such as vehicles and pedestrians based on the sensor data.
- The ADV can make appropriate driving decisions, such as lane changes, acceleration or braking, based on the determined roadmodel and the identified surrounding objects.
- In traditional approaches, each type of sensor data has to be separately processed, and one or more models for object identification have to be established.
- Raw sensor data may also have drawbacks when used to train a target model. For example, if images obtained directly from a camera are used to train a model, the drawbacks may include: (1) no classification of the elements in the image; (2) the images may be in an arbitrary perspective; and (3) a huge number of sample images is required to train the target model. Similar drawbacks may exist for other types of sensors. Therefore, an improved solution for training a roadmodel is desired.
- The present disclosure aims to provide a method and a system for training roadmodels with snapshot images, which has at least the following advantages over traditional roadmodel training: (1) a smaller number of training samples is needed due to the unified data representation; (2) the sample data can contain various types of information, including historical information; (3) it is computationally efficient; and (4) it enables the use of existing data in different formats.
- In a first exemplary embodiment, a method for creating snapshot images of traffic scenarios comprises: obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times; obtaining the position of each of the at least two sensors; transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and plotting all of the transformed sensor data onto an image to form a snapshot image.
- In a second exemplary embodiment, a method for training a road model with snapshot images comprises: obtaining an existing road model of a road scene; obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times; for each of the at least two frames, creating a snapshot image from the obtained sensor data according to the method of the first embodiment; associating the existing road model with each of the snapshot images as training data; and training a new road model using the training data.
- In a third exemplary embodiment, an apparatus for creating snapshot images of traffic scenarios comprises: a sensor data obtaining module configured for obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times; a sensor position obtaining module configured for obtaining the positions of the at least two sensors; a transforming module configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and a plotting module configured for plotting all of the transformed sensor data onto an image to form a snapshot image.
- In a fourth exemplary embodiment, a vehicle including at least two sensors and the apparatus of the third exemplary embodiment is provided.
- In a fifth exemplary embodiment, a system for training a road model with snapshot images may comprise: at least two sensors configured for collecting sensor data of a road scene; and a processing unit configured for: obtaining an existing road model of the road scene; obtaining at least two frames of sensor data of the road scene from the at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times; for each of the at least two frames, creating a snapshot image from the obtained sensor data according to the method of the first embodiment; associating the existing road model with each of the snapshot images as training data; and training a new road model using the training data.
- Fig. 1 illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to an embodiment of the present invention.
- Fig. 2 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to an embodiment of the present invention.
- Fig. 3 illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to another embodiment of the present invention.
- Fig. 4 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to another embodiment of the present invention.
- Fig. 5 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to yet another embodiment of the present invention.
- Fig. 6 is a flowchart of an exemplary method for training a road model with snapshot images according to an embodiment of the present invention.
- Fig. 7 is a flowchart of an exemplary method for training an event detector with snapshot images according to an embodiment of the present invention.
- Fig. 8 is a flowchart of an exemplary method implemented on a vehicle for detecting events according to an embodiment of the present invention.
- Fig. 9 illustrates an exemplary apparatus for creating snapshot images of traffic scenarios according to an embodiment of the invention.
- Fig. 10 illustrates an exemplary vehicle according to an embodiment of the present invention.
- Fig. 11 illustrates an exemplary apparatus for creating snapshot images of traffic scenarios according to another embodiment of the invention.
- Fig. 12 illustrates an exemplary vehicle according to another embodiment of the present invention.
- Fig. 13 illustrates an exemplary apparatus for creating snapshot images of traffic scenarios according to yet another embodiment of the invention.
- Fig. 14 illustrates an exemplary vehicle according to yet another embodiment of the present invention.
- Fig. 15 illustrates an exemplary system for training a road model with snapshot images according to an embodiment of the present invention.
- Fig. 16 illustrates an exemplary system for training an event detector with snapshot images according to an embodiment of the present invention.
- Fig. 17 illustrates an apparatus on a vehicle for detecting events according to an embodiment of the present invention.
- Fig. 18 illustrates an exemplary vehicle according to an embodiment of the present invention.
- Fig. 19 illustrates a general hardware environment wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
- The term “vehicle” used throughout the specification refers to a car, an airplane, a helicopter, a ship, or the like.
- The invention is described with respect to “car”, but the embodiments described herein are not limited to cars only and are applicable to other kinds of vehicles.
- The term “A or B” used throughout the specification refers to “A and B” as well as “A or B”, rather than meaning that A and B are mutually exclusive, unless otherwise specified.
- The present invention provides a method capable of efficiently integrating various types of sensor data on a vehicle in a unified manner, so as to present the information of the traffic scenario around the vehicle in an integrated way.
- The method is to some extent like taking a picture of a scene, so it is hereinafter called a “snapshot”, and the data of the snapshots are called “snapshot images”.
- In a first embodiment, a snapshot may be constructed from sensor data captured by multiple sensors at the same time.
- The sensors may include, for example, Lidar, radar and cameras.
- Each sensor records its own sensor data and provides it to the central processing unit of the vehicle.
- The formats of sensor data provided by different types or different manufacturers of sensors are typically different. Therefore, the central processing unit needs to be able to read and recognize each of the various types of sensor data and use them separately, which consumes a lot of resources and is very inefficient.
- the present invention integrates sensor data from multiple sensors in the form of a snapshot.
- the multiple sensors may be of the same type of sensors, but may also be of different types.
- the reference coordinate system of the present invention may be a two-dimensional plane parallel to the ground.
- the origin of the reference coordinate system may be the midpoint of the rear axle of the vehicle, for example.
- the origin may be the position of any one of the sensors, such as the geometric center of the sensor, or the origin of a local coordinate system used by the sensor.
- the origin can also be any point on a vehicle.
- the midpoint of the rear axle of the car is selected as the origin in this embodiment.
- one axis of the reference coordinate system can be parallel to the rear axle of the vehicle, and the other axis can be perpendicular to the rear axle of the vehicle.
- Figure 1 illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to an embodiment of the present invention.
- the x-axis is perpendicular to the rear axle of the vehicle, with the positive half of the x-axis representing positions in front of the direction of travel of the vehicle, and the negative half of the x-axis representing positions behind the direction of travel of the vehicle.
- the y-axis is parallel to the rear axle of the vehicle.
- the positive half of the y-axis can represent positions on the left side of the direction of travel of the vehicle, and the negative half-axis of the y-axis can represent positions on the right side of the direction of travel of the vehicle.
- the size of the reference coordinate system can be pre-determined so as to limit the data amount.
- The x-axis and the y-axis may be defined to have a size of -50 to +50 meters, or -100 to +100 meters, or the like.
- Alternatively, the extents of the x-axis and the y-axis may be determined by the maximum sensing range of the sensors mounted on the vehicle.
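The size limit described above can be sketched in Python; note that the constant, the `within_bounds` helper and the sample points are hypothetical, and only the idea of a pre-determined coordinate extent (e.g., -50 to +50 meters) comes from the description:

```python
# Hedged sketch: HALF_SIZE, within_bounds and the sample points are
# invented for illustration; only the pre-determined extent of the
# reference coordinate system follows the text above.
HALF_SIZE = 50.0  # meters; could instead be the sensors' maximum range

def within_bounds(x, y, half_size=HALF_SIZE):
    """Return True if (x, y) lies inside the pre-determined snapshot area."""
    return -half_size <= x <= half_size and -half_size <= y <= half_size

points = [(3.0, -12.5), (80.0, 4.0), (-49.9, 49.9)]
kept = [p for p in points if within_bounds(*p)]  # (80.0, 4.0) is dropped
```

Limiting the extent in this way bounds the amount of data a single snapshot can hold, as the description notes.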
- The data produced by the various sensors used in vehicles typically includes at least a binary set of location information and value information, such as {(x, y), d}, which represents that the readout of the sensor at position (x, y) is d.
- the location information is in the local coordinate system of the sensor.
- the sensor data for each sensor can be transformed from its respective local coordinate system to the reference coordinate system.
- the position at which a sensor is mounted on the vehicle is known, so the corresponding position in the reference coordinate system can be determined.
- Assume the relative position between the local coordinate system of a first sensor and the reference coordinate system is (xc1, yc1), i.e., the origin of the local coordinate system of the first sensor is located at (xc1, yc1) in the reference coordinate system.
- A given position (xs1, ys1) in that local coordinate system can then be transformed to (xs1 - xc1, ys1 - yc1).
- Similarly, assume the relative position between the local coordinate system of a second sensor and the reference coordinate system is (xc2, yc2), i.e., the origin of the local coordinate system of the second sensor is located at (xc2, yc2) in the reference coordinate system.
- A given position (xs2, ys2) in that local coordinate system can be transformed to (xs2 - xc2, ys2 - yc2).
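The translation rule above can be sketched as follows; the sensor offsets and sample positions are invented, and only the subtraction convention (xs - xc, ys - yc) follows the description:

```python
# Hedged sketch: to_reference and all numeric values are illustrative.
def to_reference(point, sensor_origin):
    """Translate a local-coordinate position into the reference
    coordinate system using (xs, ys) -> (xs - xc, ys - yc)."""
    xs, ys = point
    xc, yc = sensor_origin
    return (xs - xc, ys - yc)

# First sensor with local origin at (xc1, yc1) = (1.0, 0.5):
p1 = to_reference((10.0, 4.5), (1.0, 0.5))
# Second sensor with local origin at (xc2, yc2) = (-2.0, 0.0):
p2 = to_reference((7.0, 4.0), (-2.0, 0.0))
# Both readings land at the same reference position, (9.0, 4.0).
```

With the chosen numbers, the two sensors' readings map to the same spot in the reference coordinate system, which is what later allows their values to be merged into one snapshot record.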
- some sensors may use a three-dimensional local coordinate system, for example, point cloud data of Lidar is three-dimensional.
- Such three-dimensional coordinate systems can be projected onto the two-dimensional reference coordinate system. More specifically, such three-dimensional coordinate systems are generally represented by x, y and z axes, wherein the plane formed by two of the three axes (say, the x and y axes) is typically parallel to the ground as well, and thus parallel to the x-y plane in the reference coordinate system of the present invention. Therefore, the x-y coordinates can be similarly transformed into coordinates in the reference coordinate system by translation. The z coordinates do not need to be transformed and can be retained as additional information in the snapshot image data.
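The projection of a three-dimensional point might look like the following sketch; the `project_point` helper and its values are hypothetical, and only the rule (translate x-y by the sensor offset, retain z as extra information) comes from the text:

```python
# Hedged sketch: projecting a 3-D Lidar point onto the 2-D reference
# plane. The record layout is one possible realisation, not the
# patent's actual data format.
def project_point(point3d, sensor_origin):
    x, y, z = point3d
    xc, yc = sensor_origin
    return {"pos": (x - xc, y - yc), "z": z}  # z kept, not transformed

record = project_point((5.0, 2.0, 1.8), (1.0, 1.0))
```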
- The snapshot image provided by the present invention, if visually displayed, may look similar to a top view of the scenario.
- data provided by different sensors may have different data formats.
- the degrees of processing of the data may vary.
- some sensors can only provide raw data, while some sensors provide data that has been processed, for example, data with recognitions to some extent.
- For example, some Lidars can provide further information about scenarios based on the point cloud data, such as segmentation or recognition of some objects (e.g., street signs).
- Some cameras may also provide similar recognition, such as identifying lanemarkings in captured images.
- Regardless of the degree of processing, the data output by sensors always contains pairs of position data and values; in other words, a sensor's output always tells what information is present at what positions. Therefore, for creating snapshots according to the present invention, it is only necessary to record all of the correspondences between positions and data in a single snapshot, so that the snapshot of the present invention can be made compatible with all sensors while containing all the original information of each sensor.
- the same object in the scenario may be sensed by different sensors.
- For example, a Lidar, a radar and a camera may all have sensed the same tree, and respectively provide sensor data corresponding to the tree represented in their own local coordinate systems, such as {(xs1, ys1), ds1} from a first sensor and {(xs2, ys2), ds2} from a second sensor.
- After transformation, the positions given by the two sensors will map to the same spot in the reference coordinate system, i.e., (x1, y1).
- The sensor data given by the two sensors can then both be added at (x1, y1), for example as {(x1, y1), ds1, ds2}.
- the data format described herein is merely exemplary, and any suitable data format that reflects the relationships between positions and readout values can be used in recording of snapshot image data according to the present invention.
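One way such position/value correspondences could be accumulated is sketched below, under the assumption of a simple dict-of-lists layout; the `snapshot` variable, the `add_reading` helper and the string values are hypothetical, not the patent's format:

```python
# Hedged sketch: a dict keyed by reference-frame position, each entry
# collecting every sensor value reported for that spot.
snapshot = {}

def add_reading(pos, value):
    snapshot.setdefault(pos, []).append(value)

add_reading((9.0, 4.0), "lidar:tree")   # first sensor's readout
add_reading((9.0, 4.0), "radar:tree")   # second sensor, same reference spot
```

Any structure that preserves the relationship between positions and readout values would serve equally well, as the preceding bullet notes.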
- Fig. 2 is a flowchart of an exemplary method 200 for creating snapshot images of traffic scenarios according to an embodiment of the present invention.
- the method 200 starts at step 202, where sensor data of at least two sensors installed on a vehicle may be obtained.
- In this embodiment, the sensor data is collected at substantially the same time (or has the same timestamp).
- positions of each of the sensors may be obtained.
- the positions of each of the sensors are the relative position of each sensor in the reference coordinate system.
- the sensor data of each of the at least two sensors may be transformed into a reference coordinate system based on the obtained positions of the sensors.
- all of the transformed sensor data may be plotted onto an image to form a snapshot image.
- An optional “fusing” step may be performed on the sensor data. Since a plurality of sensors are used, the sensor data from different sensors can be used to enhance the reliability and confidence of the sensor data. For example, if a Lidar senses a traffic sign and gives a recognized result indicating that the object is a traffic sign, and a camera also captures the scene and recognizes the traffic sign, then the recognition of the traffic sign has almost 100% confidence. On the other hand, if the sensor data given by the Lidar is not quite sure what the object is (e.g., a traffic sign with 50% confidence), then with the sensor data from the camera, the confidence can likewise be increased to almost 100%.
- Another case showing the advantage of using multiple sensors is that a part of a lanemarking may be temporarily blocked by an object, such as a car, so the blocked part may not be sensed by sensor A. However, with reference to sensor data from sensor B, such as an image captured by a camera showing clearly that the lanemarking is there and merely blocked, the raw data given by sensor A can be processed so as to replace the raw data with the corresponding lanemarking data, as if no object were blocking that part of the lanemarking.
- Snapshot data does not have to be drawn as a visible image. Instead, as aforementioned, snapshots or snapshot images are only representations recording the sensor data of a surrounding scenario at a particular moment or moments. Therefore, “plotting data onto an image” in step 208 does not mean that the data is visually presented as an image; it refers to integrating the transformed sensor data from the various sensors into a unified data structure based on the coordinate positions in the reference coordinate system. This data structure is called a “snapshot”, “snapshot image” or “snapshot image data”. Of course, since the position information and the data values associated with the positions are completely retained in the snapshot image data, it can be visually rendered as an image by specialized software if necessary, for example for human understanding.
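The flow of method 200 can be sketched end to end as follows; the frame layout, sensor names, mounting offsets and readings are all invented for illustration, and only the obtain/transform/integrate sequence follows the text:

```python
# Hedged sketch of steps 202-208 of method 200.
def create_snapshot(frames, sensor_positions):
    """frames: {sensor_id: [((x, y), value), ...]} captured at the same time.
    sensor_positions: {sensor_id: (xc, yc)} in the reference system."""
    snapshot = {}
    for sensor_id, readings in frames.items():          # step 202: obtain data
        xc, yc = sensor_positions[sensor_id]            # step 204: positions
        for (xs, ys), value in readings:
            pos = (xs - xc, ys - yc)                    # step 206: transform
            snapshot.setdefault(pos, {})[sensor_id] = value  # step 208: plot
    return snapshot

snap = create_snapshot(
    {"lidar": [((10.0, 4.5), "sign")], "camera": [((7.0, 4.0), "sign")]},
    {"lidar": (1.0, 0.5), "camera": (-2.0, 0.0)},
)
```

In this toy run, both sensors' readings land on the same reference position, so the resulting snapshot holds both values at that one spot.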
- In a second embodiment, the snapshot may be constructed from sensor data of one single sensor, captured at different times.
- the difference from the previously described embodiment is that the first embodiment records a snapshot of multiple sensors at the same time, while the second embodiment records a snapshot of one single sensor at different times.
- a reference coordinate system may be established first. Assume that it is still a two-dimensional coordinate system parallel to the ground. As an example, the midpoint of the rear axle of the car is again selected as the origin of the reference coordinate system.
- the x-axis is perpendicular to the rear axle of the vehicle, with the positive half and the negative half of the x-axis representing positions in front of and behind the direction of travel of the vehicle, respectively.
- the y-axis is parallel to the rear axle of the vehicle, with the positive half and the negative half of the y-axis can represent positions on the left side and the right side of the direction of travel of the vehicle, respectively.
- Sensor data captured by a sensor at a single time spot may be referred to as a sensor data frame.
- The n frames may be a series of successive data frames of a sensor.
- the n frames of data can be sequentially taken using the sampling interval of the sensor itself.
- the n sensor data frames may be captured at a regular interval.
- Alternatively, an interval larger than the sampling interval of the sensor itself may be selected as appropriate.
- For example, the sampling frequency of the sensor itself may be 100 Hz, but one frame may be selected out of every 10 frames as a snapshot data frame.
- The sampling interval may be selected based on the speed of movement of the vehicle, for example, such that the sensor data exhibits a relatively significant difference between frames even when the vehicle is not moving too fast.
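The subsampling described above can be sketched as follows; the frame numbering and the 10-frame interval are illustrative only:

```python
# Hedged sketch: taking every 10th frame of a 100 Hz sensor stream as a
# snapshot data frame, i.e. an effective 10 Hz sampling rate.
def subsample(frames, every=10):
    return frames[::every]

raw_frames = list(range(100))        # one second of data at 100 Hz
snapshot_frames = subsample(raw_frames)
```

The `every` parameter is where a speed-dependent policy could plug in, e.g. a larger stride for a slow-moving vehicle.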
- the n frames of sensor data may be transformed to snapshot data.
- sensor data also typically contains a timestamp that records the time at which the data was captured.
- a particular time spot may be selected as the reference time, or reference timestamp.
- The acquisition time of the first frame, the last frame, or any other frame among the n frames may be taken as the reference time t0. It is assumed herein that the time of the first frame is taken as the reference time t0, and the subsequent second to nth frames are denoted as times t1, ..., tn-1.
- The times t1, ..., tn-1 herein are also called the timestamps or ages of the frames.
- each frame of sensor data may be transformed into data in the reference coordinate system.
- the transformation may include the transformation of the positions between the reference coordinate system and the local coordinate system of the sensor. Similar to the first embodiment, the position of the sensor on the vehicle is known, so the relative positional relationship between the origin of its local coordinate system and the origin of the reference coordinate system is known. Thus, the coordinates can be transformed by translation.
- The positional movement of the vehicle may be determined from the time interval between t0 and t1 and the speed of the car during this period, or from an odometer or some other sensor data.
- For example, {(x1, y1), d1, t1} in the second frame of data can be transformed into {(x1 - xc - dx1, y1 - yc - dy1), d1, t1}, wherein t1 represents the time of capturing the second data frame.
- The same transformation can be performed on the subsequent frames as well.
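That per-frame transformation might be sketched as follows; the reading format and all numbers are invented, and only the rule of subtracting both the sensor offset and the vehicle's movement since t0 follows the text:

```python
# Hedged sketch: shift a reading by the sensor offset (xc, yc) and by
# the vehicle's accumulated movement (dx, dy) since the reference time t0.
def transform_frame(reading, sensor_origin, ego_motion):
    (x, y), value, t = reading
    xc, yc = sensor_origin
    dx, dy = ego_motion                  # movement of the car since t0
    return ((x - xc - dx, y - yc - dy), value, t)

# Second frame (t1 = 1): sensor origin (1.0, 0.0), car moved (2.0, 0.5).
out = transform_frame(((10.0, 3.0), "lane", 1), (1.0, 0.0), (2.0, 0.5))
```

Compensating for the ego-motion in this way is what makes stationary objects coincide across frames while moving objects trace out a trajectory, as described below for Fig. 3.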
- all of the n frames of transformed sensor data may be integrated to form a snapshot based on the transformed positions in the reference coordinate system.
- If the snapshot data is visually rendered as an image, it may look like Fig. 3, which illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to another embodiment of the present invention.
- As shown, the data from time t0 to time tn-1 is integrated into one single coordinate system.
- a stationary object in the scenario still appears to be stationary, but a moving object may appear as a motion trajectory.
- Take the building 102 in Fig. 1 as an example: since it is stationary, its coordinates in the frames of sensor data will coincide with each other after being transformed into the reference coordinate system. Thus, it is still shown as stationary in Fig. 3, at the same position as in Fig. 1.
- In contrast, the moving car 103 in Fig. 1 will appear in Fig. 3 as traveling straight along the lane first and then performing a lane change.
- Fig. 4 is a flowchart of an exemplary method 400 for creating snapshot images of traffic scenarios according to another embodiment of the present invention.
- the method 400 starts at step 402, where at least two frames of sensor data of a sensor installed on a vehicle are obtained.
- The at least two frames of sensor data may be sequentially collected at different times.
- a position of the sensor is obtained.
- the position of the sensor is the relative position of the sensor in the reference coordinate system.
- each frame of the sensor data may be transformed into a current reference coordinate system based on the obtained position of the sensor.
- The relative movement of the vehicle between frames should also be considered during the transformation.
- At step 408, the transformed frames of sensor data are plotted onto an image to form a snapshot image.
- Note that the sensor data captured at different timestamps can also be used to enhance the reliability and confidence of the sensor data. For example, at one timestamp a sensor may sense an object but not be sure what it is. Several frames later, as the vehicle gets closer to the object, the sensor can clearly figure out what the object is. The previous data can then be processed or fused with the newer data.
- the snapshot may be constructed by sensor data from multiple sensors at different times.
- the third embodiment is similar in many aspects to the previously described second embodiment, except that only one sensor is used in the second embodiment, while a plurality of sensors are used in the third embodiment.
- a snapshot is created with multiple sensors over multiple time spots. As in the first embodiment, and building on the second embodiment's recording of n frames of sensor data, the data from the multiple sensors can undergo a coordinate system transformation, and a snapshot can be formed from the resulting coordinates.
- the relative position between the local coordinate system of a first sensor (e.g., Lidar) and the reference coordinate system is (x_c1, y_c1), i.e., the origin of the local coordinate system is located at (x_c1, y_c1) in the reference coordinate system
- the relative position between the local coordinate system of a second sensor (e.g., radar) and the reference coordinate system is (x_c2, y_c2), i.e., the origin of the local coordinate system is located at (x_c2, y_c2) in the reference coordinate system
- the positional movement of the car during t_0 to t_1 is (d_x1, d_y1)
- {(x_s1, y_s1), d_1, t_1} in the second frame of data of the first sensor can be transformed into {(x_s1 - x_c1 - d_x1, y_s1 - y_c1 - d_y1), d_1, t_1}
- {(x_s2, y_s2), d_2, t_1} in the second frame of data of the second sensor can be transformed into {(x_s2 - x_c2 - d_x1, y_s2 - y_c2 - d_y1), d_2, t_1}
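The per-sample transformation above reduces to subtracting the sensor-origin offset and the ego motion from each coordinate. A sketch, with rotation ignored and all numeric values invented for illustration:

```python
def to_reference(sample, sensor_origin, ego_motion):
    """Transform one sensor sample {(x_s, y_s), d, t} into the reference system.

    sensor_origin: (x_c, y_c), the sensor's local origin in the reference system
    ego_motion: (d_x, d_y), the vehicle's movement since the reference time
    """
    (x_s, y_s), d, t = sample
    x_c, y_c = sensor_origin
    d_x, d_y = ego_motion
    # {(x_s - x_c - d_x, y_s - y_c - d_y), d, t}, as in the text.
    return ((x_s - x_c - d_x, y_s - y_c - d_y), d, t)

# A hypothetical second-frame Lidar sample:
transformed = to_reference(((12.0, 4.0), 9.5, 1), (2.0, 0.0), (3.0, 0.5))
```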
- each frame of transformed data of each sensor is integrated into a snapshot under the reference coordinate system.
- the snapshot formed according to the third embodiment looks like a combination of the snapshot data formats of the first embodiment and the second embodiment, and can be generally represented as {(x, y), d_s1, d_s2, ..., d_sn, t_(n-1)} to represent multiple sensor data values at (x, y) in the reference coordinate system with timestamps.
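This combined record could be assembled by grouping transformed samples by their reference coordinate, with one slot per sensor. The tuple layout and the use of None for a sensor with no reading at a coordinate are assumptions for illustration.

```python
def merge_by_coordinate(samples, n_sensors):
    """samples: list of (sensor_index, (x, y), value, timestamp).
    Returns {(x, y): (d_s1, ..., d_sn, t)} with t the latest timestamp seen."""
    merged = {}
    for idx, xy, value, t in samples:
        record = merged.setdefault(xy, [None] * n_sensors + [t])
        record[idx] = value
        record[-1] = max(record[-1], t)  # keep the latest timestamp
    return {xy: tuple(rec) for xy, rec in merged.items()}

# Lidar (index 0) and radar (index 1) both observed (7.0, 3.5) at t=1;
# only Lidar observed (2.0, 1.0) at t=0.
snap = merge_by_coordinate(
    [(0, (7.0, 3.5), 9.5, 1), (1, (7.0, 3.5), 9.2, 1), (0, (2.0, 1.0), 4.0, 0)],
    n_sensors=2,
)
```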
- when the snapshot data of the third embodiment is visually rendered as an image, the image should appear similar to that of the second embodiment, reflecting the dynamic changes of the scenario.
- Fig. 5 is a flowchart of an exemplary method 500 for creating snapshot images of traffic scenarios according to an embodiment of the present invention.
- the method starts at step 502, where at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle are obtained.
- the at least two frames of sensor data may be sequentially collected at different times.
- positions of each of the at least two sensors are obtained.
- each frame of the sensor data is transformed into a current reference coordinate system based on the obtained positions of the at least two sensors. As in the second embodiment, the relative movements of the vehicle between frames should also be considered during the transformation.
- all of the transformed sensor data may be plotted onto an image to form a snapshot image. There may also be an optional fusing step in this embodiment, for example to fuse sensor data that has overlapping positions in the reference coordinate system.
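The optional fusing step might, as one simple possibility, average measurements that fall into the same reference-grid cell. The cell size and the averaging rule are assumptions for illustration; weighted fusion by sensor confidence would be an alternative.

```python
from collections import defaultdict

def fuse_overlaps(samples, cell=0.5):
    """samples: list of ((x, y), value) in the reference coordinate system.
    Returns {grid_cell: mean value of all samples falling into that cell}."""
    bins = defaultdict(list)
    for (x, y), value in samples:
        # Quantize to a grid cell so nearby samples from different sensors
        # are treated as overlapping.
        bins[(round(x / cell), round(y / cell))].append(value)
    return {key: sum(vs) / len(vs) for key, vs in bins.items()}

# Two sensors measure nearly the same position; their values are averaged.
fused = fuse_overlaps([((1.0, 2.0), 10.0), ((1.1, 2.1), 14.0), ((5.0, 5.0), 3.0)])
```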
- for an automated driving (AD) vehicle, real-time driving decisions are made based on HD maps and a variety of sensor data.
- the AD vehicle must first determine its exact position on the road, and then decide how to drive (steering, accelerating, etc.).
- the AD vehicle identifies objects based on real-time sensor data, such as Lidar and camera data. It then compares the identified objects to the roadmodel contained in the HD map, thereby determining its position on the road.
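As a toy illustration of this map-matching idea: if the correspondence between sensed objects and map landmarks is known (a strong simplifying assumption), the vehicle's position in the map can be estimated from the average offset between sensed and mapped landmark coordinates. All numbers are invented.

```python
def estimate_position(sensed, mapped):
    """Estimate the vehicle's map position from landmark correspondences.

    sensed: landmark (x, y) positions relative to the vehicle
    mapped: the same landmarks' (x, y) positions in map coordinates
    """
    offsets = [(mx - sx, my - sy) for (sx, sy), (mx, my) in zip(sensed, mapped)]
    n = len(offsets)
    # The mean offset is the vehicle's estimated position in the map.
    return (sum(dx for dx, _ in offsets) / n,
            sum(dy for _, dy in offsets) / n)

pos = estimate_position([(5.0, 0.0), (0.0, 3.0)],
                        [(105.0, 50.0), (100.0, 53.0)])
```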
- Fig. 6 is a flowchart of an exemplary method 600 for training a road model with snapshot images according to an embodiment of the present invention.
- the method starts at step 602, where an existing road model of a road scene is obtained.
- at step 604, at least two frames of sensor data of the road scene are obtained from at least two sensors installed on a vehicle; the at least two frames of sensor data are sequentially collected at different times.
- at step 606, for each of the at least two frames, a snapshot image is created with the obtained sensor data.
- at the next step, a new road model is trained using the training data. As an example, the training may be based on machine learning techniques.
- the snapshot images and the known elements from existing road models are paired, i.e., labeled, so as to be used as training data.
- the desired model can be trained.
- although the amount of training data used for training a model is still large, it is significantly less than that needed to train the model separately with each type of sensor data.
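The pair-then-train step can be sketched with a deliberately simple learner. The feature vectors, class names, and the nearest-centroid technique are all illustrative assumptions; the method itself leaves the machine learning technique open.

```python
def train_centroids(pairs):
    """pairs: list of (feature_vector, label). Returns {label: centroid}."""
    sums, counts = {}, {}
    for vec, label in pairs:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in acc)
            for lbl, acc in sums.items()}

def classify(model, vec):
    # Pick the label whose centroid is closest in squared distance.
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], vec)))

# Snapshot-derived feature vectors labeled with road-model elements
# (entirely hypothetical features and labels):
training_data = [
    ([1.0, 0.0], "lane_marking"), ([0.9, 0.1], "lane_marking"),
    ([0.0, 1.0], "traffic_sign"), ([0.1, 0.9], "traffic_sign"),
]
model = train_centroids(training_data)
```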
- the snapshot of the present invention may contain data collected by one or more sensors at multiple times, and thus can reflect dynamic information of objects in the scenarios.
- This feature is also useful in training an ADV to identify motion states (also known as events) of objects that occur in real time in the scenarios.
- the car in Fig. 3 changes from the lane to the left of the current lane (the lane of the vehicle on which the sensors are mounted) into the current lane, which is a commonly seen lane change on the road, also known as a “cut in” .
- similar events include, but are not limited to: lane change; overtaking; steering; braking; collision; and loss of control.
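A snapshot trajectory makes such events visible as geometry. As a sketch, a “cut in” could be flagged when another car's lateral coordinate moves from outside the ego lane into it; the lane boundaries and trajectory values below are invented for illustration.

```python
def detect_cut_in(trajectory, ego_lane=(0.0, 3.5)):
    """trajectory: time-ordered (x, y) positions of another vehicle in the
    reference system; ego_lane: (y_min, y_max) of the ego vehicle's lane."""
    lo, hi = ego_lane
    started_outside = not (lo <= trajectory[0][1] < hi)
    ends_inside = lo <= trajectory[-1][1] < hi
    # A cut-in: the car starts outside the ego lane and ends inside it.
    return started_outside and ends_inside

# A car drifts from the left lane (y ~ 4.5) into the ego lane (y ~ 1.8):
event = detect_cut_in([(0.0, 4.5), (5.0, 4.0), (10.0, 2.5), (15.0, 1.8)])
```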
- Fig. 7 is a flowchart of an exemplary method 700 for training an event detector with snapshot images according to an embodiment of the present invention.
- the method 700 starts at step 702, where at least two frames of sensor data from at least one sensor installed on a vehicle are obtained.
- the at least two frames of sensor data may be sequentially collected at different times.
- results of events that are occurring while the sensor data are obtained may also be obtained. These results may come from humans. For example, engineers may watch a video corresponding to the sensor data frames and identify events in the video.
- a snapshot image may be created with the obtained sensor data, such as via the methods 200, 400 or 500 of creating snapshot images described in Figs. 2, 4 and 5.
- at step 708, the obtained results of events are associated with corresponding snapshot images as training data.
- at the next step, an event detector is trained using the training data. The training may be based on machine learning techniques.
- the snapshot images and the known events are paired or labeled so as to be used as training data.
- the desired event detector can be trained. Although the amount of training data used for training the event detector is still large, it is significantly less than that needed to train the event detector separately with each type of sensor data.
- Fig. 8 is a flowchart of an exemplary method 800 on a vehicle for detecting events.
- the method 800 starts at step 802, where an event detector, such as the event detector trained via the method 700, may be obtained.
- at least one frame of sensor data from at least one sensor installed on a vehicle may be obtained.
- a snapshot image may be created with the obtained sensor data.
- events may be detected with the event detector based on the created snapshot image. More specifically, this step may include inputting the created snapshot image to the event detector, which then outputs detected events based on the input snapshot image.
- the results, i.e., the detected events, may be output with probabilities or confidences.
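If the detector produces a raw score per event class, confidences could be obtained with a softmax over those scores. The class names and score values below are invented for illustration.

```python
import math

def event_confidences(scores):
    """scores: dict of event name -> raw detector score.
    Returns a probability distribution over the event classes."""
    exps = {name: math.exp(s) for name, s in scores.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

conf = event_confidences({"cut_in": 2.0, "overtaking": 0.0, "braking": -1.0})
```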
- Fig. 9 illustrates an exemplary apparatus 900 for creating snapshot images of traffic scenarios according to an embodiment of the invention.
- the apparatus 900 may comprise a sensor data obtaining module 902, a sensor position obtaining module 904, a transforming module 906, and a plotting module 908.
- the sensor data obtaining module 902 may be configured for obtaining sensor data of at least two sensors installed on a vehicle.
- the sensor position obtaining module 904 may be configured for obtaining positions of each of the sensors.
- the transforming module 906 may be configured for transforming the sensor data of each of the at least two sensors into a reference coordinate system based on the obtained positions of the sensors.
- the plotting module 908 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
- Fig. 10 illustrates an exemplary vehicle 1000 according to an embodiment of the present invention.
- the vehicle 1000 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 900 in Fig. 9. Like normal vehicles, the vehicle 1000 may further comprise at least two sensors 1002 for collecting sensor data of traffic scenarios.
- the sensors 1002 may be of different types, including but not limited to Lidars, radars and cameras.
- Fig. 11 illustrates an exemplary apparatus 1100 for creating snapshot images of traffic scenarios according to an embodiment of the invention.
- the apparatus 1100 may comprise a sensor data obtaining module 1102, a sensor position obtaining module 1104, a transforming module 1106, and a plotting module 1108.
- the sensor data obtaining module 1102 may be configured for obtaining at least two frames of sensor data of a sensor installed on a vehicle.
- the sensor position obtaining module 1104 may be configured for obtaining a position of the sensor.
- the transforming module 1106 may be configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained position of the sensor.
- the plotting module 1108 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
- Fig. 12 illustrates an exemplary vehicle 1200 according to an embodiment of the present invention.
- the vehicle 1200 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 1100 in Fig. 11.
- the vehicle 1200 may further comprise at least one sensor 1202 for collecting sensor data of traffic scenarios.
- the at least one sensor 1202 may be of different types, including but not limited to Lidars, radars and cameras.
- Fig. 13 illustrates an exemplary apparatus 1300 for creating snapshot images of traffic scenarios according to an embodiment of the invention.
- the apparatus 1300 may comprise a sensor data obtaining module 1302, a sensor position obtaining module 1304, a transforming module 1306, and a plotting module 1308.
- the sensor data obtaining module 1302 may be configured for obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle.
- the sensor position obtaining module 1304 may be configured for obtaining positions of each of the at least two sensors.
- the transforming module 1306 may be configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors.
- the plotting module 1308 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
- Fig. 14 illustrates an exemplary vehicle 1400 according to an embodiment of the present invention.
- the vehicle 1400 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 1300 in Fig. 13.
- the vehicle 1400 may further comprise at least two sensors 1402 for collecting sensor data of traffic scenarios.
- the at least two sensors 1402 may be of different types, including but not limited to Lidars, radars and cameras.
- Fig. 15 illustrates an exemplary system 1500 for training a road model with snapshot images according to an embodiment of the present invention.
- the system 1500 may comprise at least two sensors 1502 configured for collecting sensor data of a road scene, and a processing unit 1504.
- the processing unit 1504 is configured to perform the method of training a road model with snapshot images, such as the method 600 described in Fig. 6.
- Fig. 16 illustrates an exemplary system 1600 for training an event detector with snapshot images.
- the system 1600 may comprise a sensor data obtaining module 1602, an event result obtaining module 1604, a snapshot image creating module 1606, an associating module 1608 and a training module 1610.
- the sensor data obtaining module 1602 may be configured for obtaining at least two frames of sensor data from at least one sensor installed on a vehicle.
- the event result obtaining module 1604 may be configured for obtaining results of events that are occurring while the sensor data are obtained.
- the snapshot image creating module 1606 may be configured for, for each of the at least two frames, creating a snapshot image with the obtained sensor data.
- the associating module 1608 may be configured for associating the obtained results of events with corresponding snapshot images as training data.
- the training module 1610 may be configured for training an event detector using the training data.
- Fig. 17 illustrates an apparatus 1700 on a vehicle for detecting events according to an embodiment of the present invention.
- the apparatus 1700 may comprise a detector obtaining module 1702, a sensor data obtaining module 1704, a snapshot image creating module 1706, and an event detecting module 1708.
- the detector obtaining module 1702 may be configured for obtaining a trained event detector, such as one trained via the method 700 described with regard to Fig. 7.
- the sensor data obtaining module 1704 may be configured for obtaining at least two frames of sensor data from at least one sensor installed on a vehicle.
- the snapshot image creating module 1706 may be configured for, for each of the at least two frames, creating a snapshot image with the obtained sensor data.
- the event detecting module 1708 may be configured for detecting events with the event detector based on the created snapshot image.
- Fig. 18 illustrates an exemplary vehicle 1800 according to an embodiment of the present invention.
- the vehicle 1800 may comprise an apparatus for detecting events, such as the apparatus 1700 in Fig. 17.
- the vehicle 1800 may further comprise at least one sensor 1802 for collecting sensor data of traffic scenarios.
- the sensor 1802 may be of different types, including but not limited to Lidars, radars and cameras.
- Fig. 19 illustrates a general hardware environment 1900 wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
- the computing device 1900 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer or any combination thereof.
- the aforementioned system may be wholly or at least partially implemented by the computing device 1900 or a similar device or system.
- the computing device 1900 may comprise elements that are connected with or in communication with a bus 1902, possibly via one or more interfaces.
- the computing device 1900 may comprise the bus 1902, and one or more processors 1904, one or more input devices 1906 and one or more output devices 1908.
- the one or more processors 1904 may be any kind of processor, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips).
- the input devices 1906 may be any kind of device that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control.
- the output devices 1908 may be any kind of device that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator and/or a printer.
- the computing device 1900 may also comprise or be connected with non-transitory storage devices 1910, which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory), a RAM (Random Access Memory), a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code.
- the non-transitory storage devices 1910 may be detachable from an interface.
- the non-transitory storage devices 1910 may have data/instructions/code for implementing the methods and steps which are described above.
- the computing device 1900 may also comprise a communication device 1912.
- the communication device 1912 may be any kind of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but is not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth TM device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities and/or the like.
- when the computing device 1900 is used as an on-vehicle device, it may also be connected to external devices, for example a GPS receiver, or sensors for sensing different environmental data such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device 1900 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle.
- the computing device 1900 may also be connected with other facilities, such as an engine system, a wiper, an anti-lock braking system or the like.
- non-transitory storage device 1910 may have map information and software elements so that the processor 1904 may perform route guidance processing.
- the output devices 1908 may comprise a display for displaying the map, the location mark of the vehicle and also images indicating the travelling situation of the vehicle.
- the output devices 1908 may also comprise a speaker or an interface with an earphone for audio guidance.
- the bus 1902 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 1902 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
- the computing device 1900 may also comprise a working memory 1914, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 1904, and may comprise but is not limited to a random access memory and/or a read-only memory device.
- Software elements may be located in the working memory 1914, including but not limited to an operating system 1916, one or more application programs 1918, drivers and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 1918, and the modules of the aforementioned apparatuses may be implemented by the processor 1904 reading and executing the instructions of the one or more application programs 1918.
- the executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device (s) 1910 described above, and may be read into the working memory 1914 possibly with compilation and/or installation.
- the executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location.
- the present disclosure may be implemented by software with necessary hardware, or by hardware, firmware and the like. Based on such understanding, the embodiments of the present disclosure may be embodied in part in a software form.
- the computer software may be stored in a readable storage medium such as a floppy disk, a hard disk, an optical disk or a flash memory of the computer.
- the computer software comprises a series of instructions to make the computer (e.g., a personal computer, a service station or a network terminal) execute the method, or a part thereof, according to respective embodiments of the present disclosure.
Abstract
Examples of the present disclosure describe a method and a system for training a road model with snapshot images. The method comprises: obtaining an existing road model of a road scene; obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle, wherein the at least two frames of sensor data are sequentially collected at different times; for each of the at least two frames, creating a snapshot image with the obtained sensor data; associating the existing road model with each of the snapshot images as training data; and training a new road model using the training data.
Description
The present disclosure relates in general to automated driving vehicles, and more particularly, to training roadmodels with snapshot images.
An automated driving vehicle (also known as a driverless car, self-driving car or robotic car) is a kind of vehicle that is capable of sensing its environment and navigating without human input. Automated driving vehicles (hereinafter referred to as ADVs) use a variety of techniques to detect their surroundings, such as radar, laser light, GPS, odometry and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
More specifically, an ADV collects sensor data from a variety of on-board sensors, such as cameras, Lidar, radar, etc. Based on the sensor data, the ADV can construct a real-time roadmodel around it. Roadmodels may include a variety of information including, but not limited to, lanemarkings, traffic lights, traffic signs, road boundaries, etc. The constructed roadmodel is compared to the pre-installed roadmodels, such as those provided by high definition (HD) map providers, so that the ADV may more accurately determine its location in the HD map. In the meantime, the ADV may also identify objects around it, such as vehicles and pedestrians, based on the sensor data. The ADV can make appropriate driving decisions, such as lane changes, acceleration, braking, etc., based on the determined roadmodel and identified surrounding objects.
As known in the art, different sensors produce different forms or formats of data. For example, cameras provide images, while Lidars provide point clouds. In processing such sensor data from different sensors, each type of sensor data has to be processed separately. Thus, for each type of sensor, one or more models for object identification have to be established. In addition, any particular type of sensor may have drawbacks when being used to train a target model. For example, if images directly obtained by a camera are used to train a model, the drawbacks may include: (1) no classification of elements in the image; (2) the images may be in arbitrary perspective; and (3) a huge number of sample images is required to train a target model. Similar drawbacks may exist for other types of sensors. Therefore, an improved solution for training a roadmodel is desired.
SUMMARY OF THE INVENTION
The present disclosure aims to provide a method and a system for training roadmodels with snapshot images, which have at least the following advantages over traditional roadmodel training: (1) a smaller number of training samples is used due to the unified data representation; (2) the sample data can contain various types of information, which can also include historical information; (3) it is computationally efficient; and (4) it enables the use of existing data of different formats.
In accordance with a first exemplary embodiment of the present disclosure, a method for creating snapshot images of traffic scenarios is provided. The method comprises: obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, wherein the at least two frames of sensor data are sequentially collected at different times; obtaining positions of each of the at least two sensors; transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and plotting all of the transformed sensor data onto an image to form a snapshot image.
In accordance with a second exemplary embodiment of the present disclosure, a method for training a road model with snapshot images is provided. The method comprises: obtaining an existing road model of a road scene; obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle, wherein the at least two frames of sensor data are sequentially collected at different times; for each of the at least two frames, creating a snapshot image with the obtained sensor data according to the method of the first embodiment; associating the existing road model with each of the snapshot images as training data; and training a new road model using the training data.
In accordance with a third exemplary embodiment of the present disclosure, an apparatus for creating snapshot images of traffic scenarios is provided. The apparatus comprises: a sensor data obtaining module configured for obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, wherein the at least two frames of sensor data are sequentially collected at different times; a sensor position obtaining module configured for obtaining positions of the at least two sensors; a transforming module configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and a plotting module configured for plotting all of the transformed sensor data onto an image to form a snapshot image.
In accordance with a fourth exemplary embodiment of the present disclosure, a vehicle including at least two sensors and the apparatus of the third exemplary embodiment is provided.
In accordance with a fifth exemplary embodiment of the present disclosure, a system for training a road model with snapshot images is provided. The system may comprise: at least two sensors configured for collecting sensor data of a road scene; and a processing unit configured for: obtaining an existing road model of the road scene; obtaining at least two frames of sensor data of the road scene from the at least two sensors installed on a vehicle, wherein the at least two frames of sensor data are sequentially collected at different times; for each of the at least two frames, creating a snapshot image with the obtained sensor data according to the method of the first embodiment; associating the existing road model with each of the snapshot images as training data; and training a new road model using the training data.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Additional aspects, features, and/or advantages of examples will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the disclosure.
The above and other aspects and advantages of the present disclosure will become apparent from the following detailed description of exemplary embodiments taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the present disclosure. Note that the drawings are not necessarily drawn to scale.
Fig. 1 illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to an embodiment of the present invention.
Fig. 2 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to an embodiment of the present invention.
Fig. 3 illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to another embodiment of the present invention.
Fig. 4 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to another embodiment of the present invention.
Fig. 5 is a flowchart of an exemplary method for creating snapshot images of traffic scenarios according to yet another embodiment of the present invention.
Fig. 6 is a flowchart of an exemplary method for training a road model with snapshot images according to an embodiment of the present invention.
Fig. 7 is a flowchart of an exemplary method for training an event detector with snapshot images according to an embodiment of the present invention.
Fig. 8 is a flowchart of an exemplary method implemented on a vehicle for detecting events according to an embodiment of the present invention.
Fig. 9 illustrates an exemplary apparatus for creating snapshot images of traffic scenario according to an embodiment of the invention.
Fig. 10 illustrates an exemplary vehicle according to an embodiment of the present invention.
Fig. 11 illustrates an exemplary apparatus for creating snapshot images of traffic scenario according to another embodiment of the invention.
Fig. 12 illustrates an exemplary vehicle according to another embodiment of the present invention.
Fig. 13 illustrates an exemplary apparatus for creating snapshot images of traffic scenario according to yet another embodiment of the invention.
Fig. 14 illustrates an exemplary vehicle according to yet another embodiment of the present invention.
Fig. 15 illustrates an exemplary system for training a road model with snapshot images according to an embodiment of the present invention.
Fig. 16 illustrates an exemplary system for training an event detector with snapshot images according to an embodiment of the present invention.
Fig. 17 illustrates an apparatus on a vehicle for detecting events according to an embodiment of the present invention.
Fig. 18 illustrates an exemplary vehicle according to an embodiment of the present invention.
Fig. 19 illustrates a general hardware environment wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the described exemplary embodiments. It will be apparent, however, to one skilled in the art that the described embodiments can be practiced without some or all of these specific details. In other exemplary embodiments, well known structures or process steps have not been described in detail in order to avoid unnecessarily obscuring the concept of the present disclosure.
The term “vehicle” used throughout the specification refers to a car, an airplane, a helicopter, a ship, or the like. For simplicity, the invention is described with respect to a “car”, but the embodiments described herein are not limited to cars and are applicable to other kinds of vehicles. The term “A or B” used throughout the specification refers to “A and B” as well as “A or B”, rather than meaning that A and B are exclusive, unless otherwise specified.
I. Snapshot
The present invention provides a method capable of efficiently integrating various types of sensor data on a vehicle in a unified manner so as to integrally exhibit information of the traffic scenarios around the vehicle. The method is, to some extent, like taking a picture of a scene, so it is hereinafter called a “snapshot”, and the data of the snapshots are called “snapshot images”.
1. Multiple Sensors, One Timestamp
As a first embodiment of the present invention, a snapshot may be constructed by sensor data from multiple sensors captured at the same time.
As mentioned above, vehicles (especially ADVs) are equipped with different types of sensors, such as Lidar, Radar and cameras. Each sensor records its own sensor data and provides it to the central processing unit of the vehicle. The formats of sensor data provided by various types or various manufacturers of sensors are typically different. Therefore, the central processing unit needs the ability to read and recognize each of the various types of sensor data and use them separately, which consumes a lot of resources and is very inefficient.
The present invention integrates sensor data from multiple sensors in the form of a snapshot. The multiple sensors may be of the same type of sensors, but may also be of different types.
To perform a unified integration, a unified reference coordinate system is established. According to one embodiment of the present invention, the reference coordinate system of the present invention may be a two-dimensional plane parallel to the ground. The origin of the reference coordinate system may be the midpoint of the rear axle of the vehicle, for example. Alternatively, the origin may be the position of any one of the sensors, such as the geometric center of the sensor, or the origin of a local coordinate system used by the sensor. Of course, the origin can also be any point on a vehicle. For convenience of explanation, the midpoint of the rear axle of the car is selected as the origin in this embodiment.
Accordingly, one axis of the reference coordinate system can be parallel to the rear axle of the vehicle, and the other axis can be perpendicular to the rear axle of the vehicle. Thus, as shown in Fig. 1, which illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to an embodiment of the present invention, the x-axis is perpendicular to the rear axle of the vehicle, with the positive half of the x-axis representing positions in front of the direction of travel of the vehicle and the negative half representing positions behind it. The y-axis is parallel to the rear axle of the vehicle. The positive half of the y-axis can represent positions on the left side of the direction of travel of the vehicle, and the negative half can represent positions on the right side. Optionally, the size of the reference coordinate system can be pre-determined so as to limit the data amount. As an example, the x-axis and the y-axis may be defined to have a range of -50 to +50 meters, or -100 to +100 meters, or the like. In another example, the extents of the x-axis and the y-axis may be determined by the maximum sensing range of the sensors mounted on the vehicle.
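A minimal sketch of limiting snapshot data to a pre-determined reference extent, assuming an illustrative ±50 m bound (the helper name and value are examples, not from the patent):

```python
# Sketch: keep only points inside a pre-determined reference extent.
# The +/-50 m half-width is an assumed example value from the text.
EXTENT = 50.0  # meters, half-width of the square reference area

def within_extent(x, y, extent=EXTENT):
    """Return True if (x, y) lies inside the square reference area
    centered on the origin (e.g. the midpoint of the rear axle)."""
    return -extent <= x <= extent and -extent <= y <= extent
```

Points outside the extent would simply be dropped before being recorded in the snapshot, bounding the data amount regardless of sensor range.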
The various sensors used in vehicles, regardless of the data format they employ, typically provide at least a pair of location information and value information, such as {(x, y), d}, which represents that the readout of the sensor at position (x, y) is d. The location information is expressed in the local coordinate system of the sensor. Thus, after the reference coordinate system is determined, the sensor data of each sensor can be transformed from its respective local coordinate system into the reference coordinate system. The position at which a sensor is mounted on the vehicle is known, so the corresponding position in the reference coordinate system can be determined. For example, assume that the relative position between the local coordinate system of a first sensor and the reference coordinate system is (x_c1, y_c1), i.e., the origin of the local coordinate system of the first sensor is located at (x_c1, y_c1) in the reference coordinate system. Then, a given position (x_s1, y_s1) in the local coordinate system can be transformed to (x_s1-x_c1, y_s1-y_c1). Similarly, assume that the relative position between the local coordinate system of a second sensor and the reference coordinate system is (x_c2, y_c2), i.e., the origin of the local coordinate system of the second sensor is located at (x_c2, y_c2) in the reference coordinate system. Then, a given position (x_s2, y_s2) in the local coordinate system can be transformed to (x_s2-x_c2, y_s2-y_c2).
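The per-sensor translation described above can be sketched as follows, following the text's convention that a local position (x_s, y_s) maps to (x_s - x_c, y_s - y_c), where (x_c, y_c) is the sensor's offset relative to the reference frame (function and variable names are illustrative):

```python
def to_reference(point, sensor_offset):
    """Translate a point from a sensor's local frame into the
    reference frame, using the (x_s - x_c, y_s - y_c) convention
    given in the text."""
    xs, ys = point
    xc, yc = sensor_offset
    return (xs - xc, ys - yc)
```

The same function serves every mounted sensor; only the offset differs per installation position.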
In addition, some sensors may use a three-dimensional local coordinate system; for example, the point cloud data of a Lidar is three-dimensional. Such three-dimensional coordinate systems can be projected onto the two-dimensional reference coordinate system. More specifically, such three-dimensional coordinate systems are generally represented by x, y and z axes, wherein the plane formed by two of the three axes (say, the x and y axes) is typically parallel to the ground as well, and thus parallel to the x-y plane in the reference coordinate system of the present invention. Therefore, the x-y coordinates can be similarly transformed into coordinates in the reference coordinate system by translation. The z coordinates do not need to be transformed and can be retained as additional information in the snapshot image data. Through the three-dimensional to two-dimensional transformation, the snapshot image provided by the present invention, if visually displayed, may look similar to a top view of the scenario.
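A hedged sketch of this 3-D to 2-D projection, assuming the sensor's x-y plane is parallel to the reference plane as the text describes (the record layout is illustrative):

```python
def project_point(p3d, sensor_offset):
    """Project a 3-D Lidar point onto the 2-D reference plane.
    The x-y coordinates are translated into the reference frame;
    the z coordinate is kept as additional snapshot information."""
    x, y, z = p3d
    xc, yc = sensor_offset
    return {"pos": (x - xc, y - yc), "z": z}
```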
In addition, as aforementioned, data provided by different sensors may have different data formats. Beyond the data format, the degree of processing of the data may also vary. For example, some sensors can only provide raw data, while other sensors provide data that has been processed, for example, data with recognitions to some extent. Some Lidars can provide further information about scenarios based on the point cloud data, such as segmentations or recognitions of some objects (e.g., street signs, etc.). Some cameras may also provide similar recognitions, such as identifying lane markings in captured images. Regardless of the extent to which the data is processed, the data output by sensors always contains pairs of position data and values; in other words, the output of a sensor always tells what information occurs at what positions. Therefore, for creating snapshots according to the present invention, it is only necessary to record all of the correspondences between positions and data in a single snapshot, so that the snapshot of the present invention can be made compatible with all sensors while containing all the original information of each sensor.
It can be contemplated that, since multiple sensors are used to sense the same scenario, the same object in the scenario may be sensed by different sensors. For example, as shown in Fig. 1, there is a building 102 at a particular position, say (x_1, y_1) in the reference coordinate system. Thus, the Lidar, Radar, and camera may all have sensed the building, and respectively provide sensor data corresponding to the building represented in their own local coordinate systems, such as {(x_s1, y_s1), d_s1} by a first sensor and {(x_s2, y_s2), d_s2} by a second sensor. Obviously, after transformation to the reference coordinate system, the positions given by the two sensors will be the same spot in the reference coordinate system, i.e., (x_1, y_1). In other words, (x_s1-x_c1, y_s1-y_c1) = (x_s2-x_c2, y_s2-y_c2) = (x_1, y_1). When creating a snapshot, the sensor data given by the two sensors can both be added to (x_1, y_1), as {(x_1, y_1), d_s1, d_s2}, for example. Those skilled in the art will appreciate that the data format described herein is merely exemplary, and any suitable data format that reflects the relationships between positions and readout values can be used in recording snapshot image data according to the present invention.
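The accumulation of several sensors' readings at one reference position can be sketched as a mapping keyed by position, producing records shaped like {(x_1, y_1), d_s1, d_s2} (a minimal sketch; names are illustrative):

```python
from collections import defaultdict

def merge_readings(transformed):
    """Merge readings that fall on the same reference position.
    `transformed` is an iterable of ((x, y), value) pairs already
    expressed in the reference coordinate system; the result maps
    each position to the list of values sensed there."""
    snapshot = defaultdict(list)
    for pos, value in transformed:
        snapshot[pos].append(value)
    return dict(snapshot)
```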
Fig. 2 is a flowchart of an exemplary method 200 for creating snapshot images of traffic scenarios according to an embodiment of the present invention. The method 200 starts at step 202, where sensor data of at least two sensors installed on a vehicle may be obtained. The sensor data is collected at substantially the same time (or has the same timestamp). Then, at step 204, positions of each of the sensors may be obtained. As aforementioned, the position of each sensor is its relative position in the reference coordinate system. Thereafter, at step 206, the sensor data of each of the at least two sensors may be transformed into a reference coordinate system based on the obtained positions of the sensors. Finally, at step 208, all of the transformed sensor data may be plotted onto an image to form a snapshot image.
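The steps of method 200 can be sketched end-to-end as follows; all identifiers are illustrative, not from the patent, and the sensor data is assumed to already be pairs of local position and value:

```python
def create_snapshot(sensor_frames, sensor_offsets):
    """Sketch of method 200: build a snapshot from same-timestamp
    data of several sensors. `sensor_frames` maps a sensor id to a
    list of ((x, y), value) readings in that sensor's local frame;
    `sensor_offsets` maps the sensor id to its position in the
    reference frame."""
    snapshot = {}
    for sid, frame in sensor_frames.items():
        xc, yc = sensor_offsets[sid]                 # step 204
        for (xs, ys), d in frame:                    # step 206
            pos = (xs - xc, ys - yc)
            snapshot.setdefault(pos, []).append(d)   # step 208
    return snapshot
```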
Before plotting the sensor data onto the snapshot image, an optional “fusing” step may be performed on the sensor data. Since a plurality of sensors are used, the sensor data from different sensors can be used to enhance the reliability and confidence of the sensor data. For example, if a Lidar senses a traffic sign and gives a recognized result indicating that the object is a traffic sign, and a camera also captures and recognizes the traffic sign, then the recognition of the traffic sign has an almost 100% confidence. On the other hand, if the sensor data given by the Lidar is not quite sure what the object is (say, a traffic sign with 50% confidence), then with the sensor data from the camera the confidence can also be increased to almost 100%. Another case showing the advantage of using multiple sensors is that a part of a lane marking may be temporarily blocked by an object, such as a car, so the blocked part may not be sensed by sensor A; but with reference to sensor data from sensor B, such as an image captured by a camera clearly showing that the lane marking is there and merely blocked, the raw data given by sensor A can be processed so that the blocked part is replaced with lane-marking data, as if no object were blocking that part of the lane marking.
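One possible fusion rule for the confidence-raising example above, assuming independent sensor detections of the same object class (a toy sketch; the patent does not specify the combination formula):

```python
def fuse_confidence(confidences):
    """Combine per-sensor confidences for the same detection under
    an independence assumption: 1 - prod(1 - c_i). Two mediocre
    detections thus yield a high fused confidence."""
    remaining_doubt = 1.0
    for c in confidences:
        remaining_doubt *= (1.0 - c)
    return 1.0 - remaining_doubt
```

For instance, a 50% Lidar detection fused with a 90% camera detection gives 95% fused confidence, matching the text's intuition that a second sensor pushes confidence toward 100%.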
It should be noted that although terms “snapshot” , “snapshot image” and “plotting” and the like are used in the present disclosure, the recorded snapshot data does not have to be drawn as visible images. Instead, as aforementioned, snapshots or snapshot images are only representatives of recording sensor data of a surrounding scenario at a particular moment or moments. Therefore, the “plotting data onto an image” in step 208 does not mean that the data is visually presented as an image, but refers to integrating the transformed sensor data from various sensors into a unified data structure based on the coordinate positions in the reference coordinate system. This data structure is called a “snapshot” , “snapshot image” or “snapshot image data” . Of course, since the position information and the data values associated with the positions are completely retained in the snapshot image data, it can be visually rendered as an image by some specialized software if necessary, for example for human understanding.
By transforming various sensor data into unified snapshots, vehicles do not have to separately record and use various types of sensor data, which greatly reduces the burdens of on-board systems. Meanwhile, the unified format of sensor data makes it unnecessary to separately train various models based on different sensors, which significantly reduces the amount of computation in the training progress and notably increases the training efficiency.
2. One Sensor, Multiple Timestamps
As a second embodiment of “snapshot” , the snapshot may be constructed by sensor data from one single sensor, but captured at different times. As could be appreciated, the difference from the previously described embodiment is that the first embodiment records a snapshot of multiple sensors at the same time, while the second embodiment records a snapshot of one single sensor at different times.
Similar to the first embodiment, a reference coordinate system may be established first. Assume that it is still a two-dimensional coordinate system parallel to the ground. As an example, the midpoint of the rear axle of the car is again selected as the origin of the reference coordinate system. In a same manner, the x-axis is perpendicular to the rear axle of the vehicle, with the positive half and the negative half of the x-axis representing positions in front of and behind the direction of travel of the vehicle, respectively. The y-axis is parallel to the rear axle of the vehicle, with the positive half and the negative half of the y-axis can represent positions on the left side and the right side of the direction of travel of the vehicle, respectively.
Sensor data captured by a sensor at a single time spot may be referred to as a sensor data frame. As an example, the number n of sensor data frames included in one snapshot may be preset, wherein n is a positive integer greater than or equal to 2, such as n=10. In one embodiment, the n frames may be a series of successive data frames of a sensor. For example, the n frames of data can be taken sequentially using the sampling interval of the sensor itself. Alternatively, the n sensor data frames may be captured at a regular interval. In another example, an interval larger than the sampling interval of the sensor itself may be appropriately selected. For example, the sampling frequency of the sensor itself may be 100 Hz, but one frame may be selected every 10 frames as a snapshot data frame. The sampling interval may be selected based on the speed of movement of the vehicle, for example, such that the data of the sensor can show a relatively significant difference even when the vehicle is not moving too fast.
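The frame-selection scheme above (e.g. every 10th frame of a 100 Hz sensor until n frames are collected) can be sketched in one line; names are illustrative:

```python
def pick_frames(frames, step, n):
    """Select every `step`-th frame from a stream until `n`
    snapshot data frames are collected (e.g. step=10 for a
    100 Hz sensor, n=10 frames per snapshot)."""
    return frames[::step][:n]
```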
After taking n frames of sensor data, the n frames of sensor data may be transformed to snapshot data. In addition to position information and readout values, sensor data also typically contains a timestamp that records the time at which the data was captured. When creating a snapshot, in addition to establishing a reference coordinate system, a particular time spot may be selected as the reference time, or reference timestamp. For example, the acquisition time of the first frame, the last frame, or any one of the n frames may be taken as the reference time t_0. It is assumed herein that the time of the first frame is taken as the reference time t_0, and the subsequent second to n-th frames can be denoted as times t_1, …, t_{n-1}. The times t_1, …, t_{n-1} herein are also called the timestamps or ages of the frames.
Subsequently, each frame of sensor data may be transformed into data in the reference coordinate system. For the first frame of data, the transformation may include the transformation of the positions between the reference coordinate system and the local coordinate system of the sensor. Similar to the first embodiment, the position of the sensor on the vehicle is known, so the relative positional relationship between the origin of its local coordinate system and the origin of the reference coordinate system is known. Thus, the coordinates can be transformed by translation.
Next, for the second frame of data, in addition to the transformation of positions between the reference coordinate system and the local coordinate system, it is also necessary to consider the movement of the vehicle itself during time t_0 to time t_1. As an example, the positional movement of the vehicle may be determined from the time interval between t_0 and t_1 and the speed of the car during this period, or from an odometer or some other sensor data. Assuming that the relative position between the local coordinate system and the reference coordinate system is (x_c, y_c), i.e., the origin of the local coordinate system is located at (x_c, y_c) in the reference coordinate system, and the positional movement of the car during t_0 to t_1 is (d_x1, d_y1), then {(x_1, y_1), d_1, t_1} in the second frame of data can be transformed into {(x_1-x_c-d_x1, y_1-y_c-d_y1), d_1, t_1}, wherein t_1 represents the time of capturing the second data frame. The same transformation can similarly be performed for subsequent frames. In the end, all of the n frames of transformed sensor data may be integrated to form a snapshot based on the transformed positions in the reference coordinate system.
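The motion-compensated transformation for a later frame can be sketched as follows, mirroring the (x - x_c - d_x1, y - y_c - d_y1) formula in the text (identifiers are illustrative):

```python
def compensate(point, sensor_offset, ego_motion):
    """Transform a later-frame reading into the reference frame,
    subtracting both the sensor offset (x_c, y_c) and the vehicle's
    own movement (d_x, d_y) since the reference time t_0."""
    x, y = point
    xc, yc = sensor_offset
    dx, dy = ego_motion
    return (x - xc - dx, y - yc - dy)
```

With zero ego-motion this reduces to the single-timestamp transformation of the first embodiment, which is the expected consistency check.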
It may be contemplated that, assuming the snapshot data is visually rendered as an image, such as in Fig. 3, which illustrates an exemplary diagram generated from a snapshot image of a traffic scenario according to another embodiment of the present invention, the data from time t_0 to time t_{n-1} is integrated into one single coordinate system. In such an image, a stationary object in the scenario still appears to be stationary, but a moving object may appear as a motion trajectory. Taking the building 102 in Fig. 1 as an example, since it is stationary, its coordinates in the frames of sensor data, after being transformed into the reference coordinate system, will coincide with each other. Thus, it is still shown as stationary in Fig. 3 at the same position as in Fig. 1. In contrast, the moving car 103 in Fig. 1 will appear in Fig. 3 as first traveling straight along the lane and then performing a lane change.
By combining the multiple frames of data into one single snapshot, it is possible to clearly reflect the dynamics of the scenario over a period of time, which is suitable for subsequent model trainings and will be described in further detail below.
Fig. 4 is a flowchart of an exemplary method 400 for creating snapshot images of traffic scenarios according to another embodiment of the present invention. The method 400 starts at step 402, where at least two frames of sensor data of a sensor installed on a vehicle are obtained. The at least two frames of sensor data may be sequentially collected at different times. Thereafter, at step 404, a position of the sensor is obtained. As in the first embodiment, the position of the sensor is the relative position of the sensor in the reference coordinate system. At step 406, each frame of the sensor data may be transformed into a current reference coordinate system based on the obtained position of the sensor. As previously mentioned, the relative movement of the vehicle between frames should also be considered during the transformation. After all of the frames of sensor data are transformed, at step 408, the transformed frames of sensor data are plotted onto an image to form a snapshot image.
There may also be an optional fusing step in this embodiment. Although only one sensor is used, the sensor data captured at different timestamps can also be used to enhance the reliability and confidence of the sensor data. For example, at one timestamp, a sensor may sense an object but not be sure what it is. Several frames later, as the vehicle gets closer to the object, the sensor may clearly figure out what it is. Then, the previous data can be processed or fused with the newer data.
3. Multiple Sensors, Multiple Timestamps
As a third embodiment of “snapshot”, the snapshot may be constructed by sensor data from multiple sensors at different times. The third embodiment is similar in many aspects to the previously described second embodiment, except that only one sensor is used in the second embodiment, while a plurality of sensors are used in the third embodiment. In the previously described first embodiment, it is described that a snapshot is created with multiple sensors at a single time spot. Similar to the first embodiment, on the basis of the n frames of sensor data recorded as in the second embodiment, the data from the multiple sensors can undergo a coordinate system transformation, and a snapshot can be formed based on the resulting coordinates.
As an example, assume that the relative position between the local coordinate system of a first sensor (e.g., Lidar) and the reference coordinate system is (x_c1, y_c1), i.e., the origin of that local coordinate system is located at (x_c1, y_c1) in the reference coordinate system; that the relative position between the local coordinate system of a second sensor (e.g., radar) and the reference coordinate system is (x_c2, y_c2), i.e., the origin of that local coordinate system is located at (x_c2, y_c2) in the reference coordinate system; and that the positional movement of the car during t_0 to t_1 is (d_x1, d_y1). Then {(x_s1, y_s1), d_1, t_1} in the second frame of data of the first sensor can be transformed into {(x_s1-x_c1-d_x1, y_s1-y_c1-d_y1), d_1, t_1}, wherein t_1 represents the time of capturing the second data frame. Similarly, {(x_s2, y_s2), d_2, t_1} in the second frame of data of the second sensor can be transformed into {(x_s2-x_c2-d_x1, y_s2-y_c2-d_y1), d_2, t_1}. Further, if there is an n-th sensor, e.g., a camera, then {(x_sn, y_sn), d_n, t_1} in the second frame of data of the n-th sensor can be transformed into {(x_sn-x_cn-d_x1, y_sn-y_cn-d_y1), d_n, t_1}. As previously described, each frame of transformed data of each sensor is integrated into a snapshot under the reference coordinate system. In terms of data structure, the snapshot formed according to the third embodiment looks like a combination of the snapshot data formats of the first and second embodiments, and can be generally represented as {(x, y), d_s1, d_s2, …, d_sn, t_{n-1}}, representing multiple sensor data values at (x, y) in the reference coordinate system with timestamps. Assuming that the snapshot data of the third embodiment is visually rendered as an image, the image should appear similar to that of the second embodiment, reflecting the dynamic changes of the scenario.
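The combined record format {(x, y), d_s1, …, d_sn, t} can be sketched as a small helper that appends readings per reference position; the dictionary layout is one possible illustration, not the patent's prescribed structure:

```python
def add_record(snapshot, pos, value, timestamp):
    """Append one transformed reading to a snapshot entry shaped
    like {(x, y): {'values': [...], 't': timestamp}}, mirroring the
    {(x, y), d_s1, ..., d_sn, t} record described in the text."""
    entry = snapshot.setdefault(pos, {"values": [], "t": timestamp})
    entry["values"].append(value)
    return snapshot
```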
Fig. 5 is a flowchart of an exemplary method 500 for creating snapshot images of traffic scenarios according to yet another embodiment of the present invention. The method starts at step 502, where at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle are obtained. The at least two frames of sensor data may be sequentially collected at different times. At step 504, positions of each of the at least two sensors are obtained. Thereafter, at step 506, each frame of the sensor data is transformed into a current reference coordinate system based on the obtained positions of the at least two sensors. Similar to the second embodiment, the relative movement of the vehicle between frames should also be considered during the transformation. At step 508, all of the transformed sensor data may be plotted onto an image to form a snapshot image. There may also be an optional fusing step in this embodiment, such as fusing the sensor data that has overlapping positions in the reference coordinate system.
II. Training a Roadmodel
An automated driving (AD) vehicle makes real-time driving decisions based on HD maps and a variety of sensor data. In general, the AD vehicle must first determine its exact position on the road, and then decide how to drive (steering, accelerating, etc.). More specifically, the AD vehicle identifies objects based on real-time sensor data, such as Lidar and camera data. Then, it compares the identified objects to the road model contained in the HD map, thereby determining its position on the road.
A substantial portion of existing road models are actually constructed based on sensor data collected on the road by sensors mounted on map-information collection vehicles. It can be understood that, at the beginning, such data depends on human judgment to identify objects. Gradually, through data accumulation, rules are formed and the objects can be recognized automatically by computers. The ultimate goal is to have a mature model that allows various objects to be identified and a road model to be generated by simply entering acquired sensor data. However, existing road-model construction requires the use of various sensors that work independently of each other. Therefore, to train a certain model, one must train the model separately for each sensor. This is obviously inefficient and computationally intensive.
This problem can be solved by using the snapshot technique proposed by the present invention. According to the snapshot technique of the present invention, the data of various sensors is integrated into a unified data structure. Thus, only one training on such unified data is required.
Fig. 6 is a flowchart of an exemplary method 600 for training a road model with snapshot images according to an embodiment of the present invention. The method starts at step 602, obtaining an existing road model of a road scene. At step 604, at least two frames of sensor data of the road scene are obtained from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times. At step 606, for each of the at least two frames, a snapshot image is created with the obtained sensor data. At step 608, the existing road model is associated with each of the snapshot images as training data. At step 610, a new road model is trained using the training data. As an example, the training may be based on machine learning techniques. The snapshot images and the known elements from existing road models are paired, or labeled, so as to be used as training data. With a large amount of training data, the desired model can be trained. Although the amount of training data used for training a model is still large, it will be significantly less than training the model separately with each type of sensor data.
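The pairing of step 608 can be sketched as building labeled examples for a single unified training run; this is a hypothetical sketch of the data preparation only, since the patent leaves the learning algorithm open:

```python
def build_training_set(road_model_elements, snapshots):
    """Pair each snapshot image with the known road-model elements
    of its scene (step 608), producing labeled examples that one
    machine-learning training run can consume (step 610)."""
    return [{"input": snap, "label": road_model_elements}
            for snap in snapshots]
```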
III. Training an Event Detector
As aforementioned, the snapshot of the present invention may contain data collected by one or more sensors at multiple times, and thus can reflect dynamic information of objects in the scenarios. This feature is also useful in training an ADV to identify motion states (also known as events) of objects that occur in real time in the scenarios. For example, the car in Fig. 3 changes from the lane to the left of the lane in which the sensor-equipped vehicle is traveling into that current lane, which is a commonly-seen lane change on the road, also known as a “cut in”. Similar events include but are not limited to: lane changes; overtaking; steering; braking; collisions; and loss of control.
Fig. 7 is a flowchart of an exemplary method 700 for training an event detector with snapshot images according to an embodiment of the present invention. The method 700 starts at step 702, where at least two frames of sensor data from at least one sensor installed on a vehicle are obtained. The at least two frames of sensor data may be sequentially collected at different times. At step 704, results of events that occurred while the sensor data was collected may be obtained. These results may come from humans. For example, engineers may watch a video corresponding to the sensor data frames and identify events in the video. At step 706, for each of the at least two frames, a snapshot image may be created with the obtained sensor data, such as via the methods 200, 400 or 500 of creating snapshot images described in Figs. 2, 4 and 5. At step 708, the obtained results of events are associated with the corresponding snapshot images as training data. At step 710, an event detector is trained using the training data. As an example, the training may be based on machine learning techniques. The snapshot images and the known events are paired, or labeled, so as to be used as training data. With a large amount of training data, the desired event detector can be trained. Although the amount of training data used for training the event detector is still large, it will be significantly less than training the event detector separately with each type of sensor data.
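The association of step 708 can be sketched as attaching the human-identified event names to their snapshots; the mapping shape and the "none" default are illustrative assumptions, not from the patent:

```python
def label_events(snapshots, human_labels):
    """Associate human-identified events (step 704) with their
    snapshot images (step 708). `human_labels` maps a snapshot
    index to an event name such as 'cut in'; snapshots without a
    label are assumed to contain no event of interest."""
    return [{"input": snap, "event": human_labels.get(i, "none")}
            for i, snap in enumerate(snapshots)]
```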
Fig. 8 is a flowchart of an exemplary method 800 implemented on a vehicle for detecting events. The method 800 starts at step 802, where an event detector, such as the event detector trained via the method 700, may be obtained. At step 804, at least one frame of sensor data from at least one sensor installed on a vehicle may be obtained. At step 806, for each of the at least one frame, a snapshot image may be created with the obtained sensor data. At step 808, events may be detected with the event detector based on the created snapshot image. More specifically, this step may include inputting the created snapshot image to the event detector, and the event detector then outputting detected events based on the input snapshot image. Preferably, the results, i.e., the detected events, may be output with probabilities or confidences.
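Step 808's probabilistic output can be sketched as follows, treating the trained detector as any callable that scores a snapshot (a hedged sketch; the detector interface is assumed, not specified by the patent):

```python
def detect_events(detector, snapshot):
    """Run a trained event detector on one snapshot and return
    (event, confidence) pairs sorted by confidence, matching the
    preferred probabilistic output of step 808. `detector` is any
    callable returning a {event_name: probability} mapping."""
    scores = detector(snapshot)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```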
Fig. 9 illustrates an exemplary apparatus 900 for creating snapshot images of traffic scenarios according to an embodiment of the invention. The apparatus 900 may comprise a sensor data obtaining module 902, a sensor position obtaining module 904, a transforming module 906, and a plotting module 908. The sensor data obtaining module 902 may be configured for obtaining sensor data of at least two sensors installed on a vehicle. The sensor position obtaining module 904 may be configured for obtaining positions of each of the sensors. The transforming module 906 may be configured for transforming the sensor data of each of the at least two sensors into a reference coordinate system based on the obtained positions of the sensors. The plotting module 908 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
Fig. 10 illustrates an exemplary vehicle 1000 according to an embodiment of the present invention. The vehicle 1000 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 900 in Fig. 9. Like normal vehicles, the vehicle 1000 may further comprise at least two sensors 1002 for collecting sensor data of traffic scenarios. The sensors 1002 may be of different types, including but not limited to Lidars, radars and cameras.
Fig. 11 illustrates an exemplary apparatus 1100 for creating snapshot images of traffic scenarios according to another embodiment of the invention. The apparatus 1100 may comprise a sensor data obtaining module 1102, a sensor position obtaining module 1104, a transforming module 1106, and a plotting module 1108. The sensor data obtaining module 1102 may be configured for obtaining at least two frames of sensor data of a sensor installed on a vehicle. The sensor position obtaining module 1104 may be configured for obtaining a position of the sensor. The transforming module 1106 may be configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained position of the sensor. The plotting module 1108 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
Fig. 12 illustrates an exemplary vehicle 1200 according to an embodiment of the present invention. The vehicle 1200 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 1100 in Fig. 11. Like normal vehicles, the vehicle 1200 may further comprise at least one sensor 1202 for collecting sensor data of traffic scenarios. The at least one sensor 1202 may be of any of various types, including but not limited to Lidars, radars and cameras.
Fig. 13 illustrates an exemplary apparatus 1300 for creating snapshot images of traffic scenarios according to an embodiment of the invention. The apparatus 1300 may comprise a sensor data obtaining module 1302, a sensor position obtaining module 1304, a transforming module 1306, and a plotting module 1308. The sensor data obtaining module 1302 may be configured for obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle. The sensor position obtaining module 1304 may be configured for obtaining positions of each of the at least two sensors. The transforming module 1306 may be configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors. The plotting module 1308 may be configured for plotting the transformed sensor data onto an image to form a snapshot image.
Fig. 14 illustrates an exemplary vehicle 1400 according to an embodiment of the present invention. The vehicle 1400 may comprise an apparatus for creating snapshot images of traffic scenarios, such as the apparatus 1300 in Fig. 13. Like normal vehicles, the vehicle 1400 may further comprise at least two sensors 1402 for collecting sensor data of traffic scenarios. The at least two sensors 1402 may be of different types, including but not limited to Lidars, radars and cameras.
Fig. 15 illustrates an exemplary system 1500 for training a road model with snapshot images according to an embodiment of the present invention. The system 1500 may comprise at least two sensors 1502 configured for collecting sensor data of a road scene, and a processing unit 1504. The processing unit 1504 is configured to perform the method of training a road model with snapshot images, such as the method 600 described in Fig. 6.
Fig. 16 illustrates an exemplary system 1600 for training an event detector with snapshot images. The system 1600 may comprise a sensor data obtaining module 1602, an event result obtaining module 1604, a snapshot image creating module 1606, an associating module 1608 and a training module 1610. The sensor data obtaining module 1602 may be configured for obtaining at least two frames of sensor data from at least one sensor installed on a vehicle. The event result obtaining module 1604 may be configured for obtaining results of events that are occurring while the sensor data are obtained. The snapshot image creating module 1606 may be configured for, for each of the at least two frames, creating a snapshot image with the obtained sensor data. The associating module 1608 may be configured for associating the obtained results of events with corresponding snapshot images as training data. The training module 1610 may be configured for training an event detector using the training data.
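The association and training performed by modules 1608 and 1610 can be sketched as a supervised-learning loop over (snapshot, event label) pairs. The disclosure does not specify a particular model, so the sketch below stands in a simple logistic-regression detector; all names, the toy images, and the 0/1 event labels are assumptions made for illustration:

```python
import numpy as np

def train_event_detector(images, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression event detector on flattened snapshot
    images paired with event labels (1 = event occurred, 0 = no event)."""
    X = np.array([img.ravel() for img in images], dtype=float)
    y = np.array(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted event probability
        grad = p - y                            # cross-entropy gradient on logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def detect_event(image, w, b):
    """Apply the trained detector to one snapshot image."""
    return 1.0 / (1.0 + np.exp(-(image.ravel() @ w + b))) > 0.5

# Toy training data: the "event" lights up the snapshot.
images = [np.zeros((4, 4)), np.ones((4, 4))]
w, b = train_event_detector(images, [0, 1])
```

In a real system, the labels would come from the event result obtaining module 1604 (results recorded while the sensor data were collected), and the images from the snapshot image creating module 1606.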
Fig. 17 illustrates an apparatus 1700 on a vehicle for detecting events according to an embodiment of the present invention. The apparatus 1700 may comprise a detector obtaining module 1702, a sensor data obtaining module 1704, a snapshot image creating module 1706, and an event detecting module 1708. The detector obtaining module 1702 may be configured for obtaining an event detector trained by a method such as the method 800 described with regard to Fig. 8. The sensor data obtaining module 1704 may be configured for obtaining at least two frames of sensor data from at least one sensor installed on a vehicle. The snapshot image creating module 1706 may be configured for, for each of the at least two frames, creating a snapshot image with the obtained sensor data. The event detecting module 1708 may be configured for detecting events with the event detector based on the created snapshot image.
Fig. 18 illustrates an exemplary vehicle 1800 according to an embodiment of the present invention. The vehicle 1800 may comprise an apparatus for detecting events, such as the apparatus 1700 in Fig. 17. Like normal vehicles, the vehicle 1800 may further comprise at least one sensor 1802 for collecting sensor data of traffic scenarios. The at least one sensor 1802 may be of any of various types, including but not limited to Lidars, radars and cameras.
Fig. 19 illustrates a general hardware environment 1900 wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
With reference to Fig. 19, a computing device 1900, which is an example of the hardware device that may be applied to the aspects of the present disclosure, will now be described. The computing device 1900 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer or any combination thereof. The aforementioned systems and apparatuses may be wholly or at least partially implemented by the computing device 1900 or a similar device or system.
The computing device 1900 may comprise elements that are connected with or in communication with a bus 1902, possibly via one or more interfaces. For example, the computing device 1900 may comprise the bus 1902, one or more processors 1904, one or more input devices 1906 and one or more output devices 1908. The one or more processors 1904 may be any kind of processor, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips). The input devices 1906 may be any kind of device that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control. The output devices 1908 may be any kind of device that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator and/or a printer. The computing device 1900 may also comprise or be connected with non-transitory storage devices 1910, which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory), a RAM (Random Access Memory), a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code. The non-transitory storage devices 1910 may be detachable from an interface. The non-transitory storage devices 1910 may have data/instructions/code for implementing the methods and steps which are described above. The computing device 1900 may also comprise a communication device 1912.
The communication device 1912 may be any kind of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but is not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities and/or the like.
When the computing device 1900 is used as an on-vehicle device, it may also be connected to external devices, for example, a GPS receiver and sensors for sensing different environmental data, such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device 1900 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle. When the computing device 1900 is used as an on-vehicle device, it may also be connected to other facilities (such as an engine system, a wiper, an anti-lock braking system or the like) for controlling the traveling and operation of the vehicle.
In addition, the non-transitory storage device 1910 may have map information and software elements so that the processor 1904 may perform route guidance processing. Furthermore, the output devices 1908 may comprise a display for displaying the map, the location mark of the vehicle and also images indicating the travelling situation of the vehicle. The output devices 1908 may also comprise a speaker or an interface with an earphone for audio guidance.
The bus 1902 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 1902 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
The computing device 1900 may also comprise a working memory 1914, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 1904, and may comprise but is not limited to a random access memory and/or a read-only memory device.
Software elements may be located in the working memory 1914, including but not limited to an operating system 1916, one or more application programs 1918, drivers and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 1918, and the modules of the aforementioned apparatuses may be implemented by the processor 1904 reading and executing the instructions of the one or more application programs 1918. The executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device(s) 1910 described above, and may be read into the working memory 1914, possibly with compilation and/or installation. The executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location.
Those skilled in the art will clearly appreciate from the above embodiments that the present disclosure may be implemented by software with the necessary hardware, or by hardware, firmware and the like. Based on such understanding, the embodiments of the present disclosure may be embodied in part in a software form. The computer software may be stored in a readable storage medium such as a floppy disk, a hard disk, an optical disk or a flash memory of the computer. The computer software comprises a series of instructions that make the computer (e.g., a personal computer, a service station or a network terminal) execute the method, or a part thereof, according to the respective embodiments of the present disclosure.
Reference has been made throughout this specification to “one example” or “an example”, meaning that a particular described feature, structure, or characteristic is included in at least one example. Thus, usage of such phrases may refer to more than just one example. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more examples.
One skilled in the relevant art will recognize, however, that the examples may be practiced without one or more of the specific details, or with other methods, resources, materials, etc. In other instances, well-known structures, resources, or operations have not been shown or described in detail merely to avoid obscuring aspects of the examples.
While sample examples and applications have been illustrated and described, it is to be understood that the examples are not limited to the precise configuration and resources described above. Various modifications, changes, and variations apparent to those skilled in the art may be made in the arrangement, operation, and details of the methods and systems disclosed herein without departing from the scope of the claimed examples.
Claims (15)
- A computer-implemented method for creating snapshot images of traffic scenarios, characterized in comprising:
obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times;
obtaining positions of each of the at least two sensors;
transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and
plotting all of the transformed sensor data onto an image to form a snapshot image.
- The method according to claim 1, wherein the at least two sensors are of different types and are selected from the following sensors:
Lidar;
Radar; and
camera.
- The method according to any one of the preceding claims, wherein the reference coordinate system is a two-dimensional coordinate system parallel to the ground.
- The method according to claim 3, wherein the origin of the reference coordinate system is the center point of the rear axle shaft of the vehicle or the centroid of any one of the at least two sensors.
- The method according to any one of the preceding claims, wherein the method further comprises:
determining a reference timestamp of the snapshot image.
- The method according to claim 5, wherein the method further comprises:
labeling the age of each frame of sensor data relative to the reference timestamp.
- The method according to any one of the preceding claims, wherein the method further comprises:
fusing the sensor data which have overlapping positions in the reference coordinate system.
- A computer-implemented method for training a road model with snapshot images, wherein the method comprises:
obtaining an existing road model of a road scene;
obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times;
for each of the at least two frames, creating a snapshot image with the obtained sensor data according to the method of any one of claims 1-7;
associating the existing road model with each of the snapshot images as training data; and
training a new road model using the training data.
- The method according to claim 8, wherein training a new road model using the training data comprises:
training the new road model based on machine learning using the training data as labels.
- An apparatus for creating snapshot images of traffic scenarios, characterized in comprising:
a sensor data obtaining module configured for obtaining at least two frames of sensor data of a road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times;
a sensor position obtaining module configured for obtaining positions of the at least two sensors;
a transforming module configured for transforming each frame of the sensor data into a current reference coordinate system based on the obtained positions of the at least two sensors; and
a plotting module configured for plotting all of the transformed sensor data onto an image to form a snapshot image.
- A vehicle, characterized in comprising:
at least two sensors; and
the apparatus of claim 10.
- The vehicle according to claim 11, wherein the at least two sensors are of different types and are selected from the following sensors:
Lidar;
Radar; and
camera.
- The vehicle according to any one of claims 11-12, wherein the reference coordinate system is a two-dimensional coordinate system parallel to the ground, and the origin of the reference coordinate system is the center point of the rear axle shaft of the vehicle or the centroid of any one of the at least two sensors.
- A system for training a road model with snapshot images, wherein the system comprises:
at least two sensors configured for collecting sensor data of a road scene; and
a processing unit configured for:
obtaining an existing road model of a road scene;
obtaining at least two frames of sensor data of the road scene from at least two sensors installed on a vehicle, the at least two frames of sensor data being sequentially collected at different times;
for each of the at least two frames, creating a snapshot image with the obtained sensor data according to the method of any one of claims 1-7;
associating the existing road model with each of the snapshot images as training data; and
training a new road model using the training data.
- The system according to claim 14, wherein the new road model is trained based on machine learning using the training data as labels.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18936480.5A EP3864579A4 (en) | 2018-10-11 | 2018-10-11 | Snapshot image to train roadmodel |
CN201880098535.3A CN112889070A (en) | 2018-10-11 | 2018-10-11 | Snapshot images for training road models |
PCT/CN2018/109796 WO2020073268A1 (en) | 2018-10-11 | 2018-10-11 | Snapshot image to train roadmodel |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020073268A1 true WO2020073268A1 (en) | 2020-04-16 |
Family
ID=70164293
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3864579A4 (en) |
CN (1) | CN112889070A (en) |
WO (1) | WO2020073268A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559791A (en) * | 2013-10-31 | 2014-02-05 | 北京联合大学 | Vehicle detection method fusing radar and CCD camera signals |
US20150019088A1 (en) * | 2013-07-10 | 2015-01-15 | Kia Motors Corporation | Apparatus and method of processing road data |
CN104637059A (en) * | 2015-02-09 | 2015-05-20 | 吉林大学 | Night preceding vehicle detection method based on millimeter-wave radar and machine vision |
CN106908783A (en) * | 2017-02-23 | 2017-06-30 | 苏州大学 | Obstacle detection method based on multi-sensor information fusion |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8812226B2 (en) * | 2009-01-26 | 2014-08-19 | GM Global Technology Operations LLC | Multiobject fusion module for collision preparation system |
US20180150695A1 (en) * | 2015-11-30 | 2018-05-31 | Seematics Systems Ltd | System and method for selective usage of inference models based on visual content |
US10474964B2 (en) * | 2016-01-26 | 2019-11-12 | Ford Global Technologies, Llc | Training algorithm for collision avoidance |
US20180150697A1 (en) * | 2017-01-09 | 2018-05-31 | Seematics Systems Ltd | System and method for using subsequent behavior to facilitate learning of visual event detectors |
US10445928B2 (en) * | 2017-02-11 | 2019-10-15 | Vayavision Ltd. | Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types |
CN107463907B (en) * | 2017-08-08 | 2021-06-25 | 东软集团股份有限公司 | Vehicle collision detection method and device, electronic equipment and vehicle |
CN107862293B (en) * | 2017-09-14 | 2021-05-04 | 北京航空航天大学 | Radar color semantic image generation system and method based on countermeasure generation network |
Non-Patent Citations (1)
Title |
---|
See also references of EP3864579A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3864579A4 (en) | 2022-06-08 |
CN112889070A (en) | 2021-06-01 |
EP3864579A1 (en) | 2021-08-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18936480; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2018936480; Country of ref document: EP; Effective date: 20210511 |