WO2020248614A1 - Map generation method, driving control method, device, electronic equipment and system - Google Patents
- Publication number
- WO2020248614A1 (PCT/CN2020/075083)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: information, point cloud, coordinate system, area, pixel
Classifications
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
- G06T7/70: Image analysis; determining position or orientation of objects or cameras
- G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
- G06T2200/32: Indexing scheme for image data processing or generation involving image mosaicing
- G06T2207/10028: Range image; depth image; 3D point clouds
Definitions
- the embodiments of the present disclosure relate to intelligent driving technology, and in particular, to a map generation method, driving control method, device, electronic equipment and system.
- High-precision maps play an important role in the field of intelligent driving and are an essential part of it.
- Vehicle-oriented maps are usually called high-precision maps, to distinguish them from human-oriented maps (navigation maps).
- High-precision maps include rich semantic information and driving assistance information. For example, a high-precision map not only depicts the road itself, but also records information about the road markings on it, such as their location and type. Based on such a high-precision map, vehicle positioning and driving control can be realized.
- the embodiments of the present disclosure provide a map generation method, driving control method, device, electronic equipment and system.
- embodiments of the present disclosure provide a map generation method, including:
- acquiring image information of at least a partial area of the road environment where the vehicle is located via a vehicle-mounted camera, and synchronously and correspondingly acquiring point cloud information of the area via a vehicle-mounted radar sensor;
- performing semantic segmentation processing on the image information to obtain first semantic information of road elements in the area, where the first semantic information includes two-dimensional position information and attribute information of the road elements;
- performing matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain second semantic information of the road elements in the area, where the second semantic information includes three-dimensional position information and attribute information of the road elements;
- based on the second semantic information, generating a map or updating a part of the map corresponding to the area.
- an embodiment of the present disclosure provides a map generating device, including:
- an acquisition module, configured to acquire image information of at least a partial area of the road environment where the vehicle is located via a vehicle-mounted camera, and to synchronously and correspondingly acquire point cloud information of the area via a vehicle-mounted radar sensor;
- a segmentation module, configured to perform semantic segmentation processing on the image information to obtain first semantic information of road elements in the area, where the first semantic information includes two-dimensional position information and attribute information of the road elements;
- a matching module, configured to perform matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain second semantic information of the road elements in the area, where the second semantic information includes three-dimensional position information and attribute information of the road elements;
- a generating module, configured to generate a map or update a part of the map corresponding to the area based on the second semantic information.
- embodiments of the present disclosure provide a driving control method, including:
- the driving control device obtains map information of at least a partial area of the road environment where the vehicle is located, and the map information is obtained using the map generation method described in the first aspect;
- the driving control device performs intelligent driving control of the vehicle according to the map information.
- Embodiments of the present disclosure provide a driving control device, including:
- an acquiring module, configured to acquire map information of at least a partial area of the road environment where the vehicle is located, the map information being obtained using the map generation method described in the first aspect;
- a driving control module, configured to perform intelligent driving control of the vehicle according to the map information.
- an electronic device including:
- a memory, configured to store program instructions;
- a processor, configured to call and execute the program instructions in the memory to perform the method steps described in the first aspect.
- Embodiments of the present disclosure provide an intelligent driving system, including: a communicatively connected sensor, the electronic device according to the fifth aspect, and the driving control device according to the fourth aspect, where the sensor is configured to collect image information and point cloud information of at least a partial area of the road environment where the vehicle is located.
- Embodiments of the present disclosure provide a readable storage medium in which a computer program is stored, where the computer program is used to execute the method steps described in the first aspect, or to execute the method steps described in the third aspect.
- In the embodiments of the present disclosure, image information of at least a partial area of the road environment is collected by a vehicle-mounted camera, and semantic segmentation is performed on the image information to obtain the two-dimensional position information and attribute information of the road elements in the area; at the same time, the point cloud information of the area is synchronously collected by the vehicle-mounted radar sensor. The first semantic information and the point cloud information are then matched to obtain the three-dimensional position information and attribute information of the road elements in the area.
- The vehicle-mounted camera and the vehicle-mounted radar sensor work together as different types of sensors installed on the vehicle.
- Matching processing based on the information collected by these different types of sensors can directly generate or update the map of the area, which can reduce or even eliminate manual operations in the process of map generation or update, making map construction highly automated and greatly improving the efficiency of high-precision map construction.
- In addition, various types of information used to characterize road elements can be effectively integrated, thereby greatly improving the accuracy of the road elements in the map.
- FIG. 1 is a first schematic flowchart of a map construction method provided by an embodiment of the disclosure;
- FIG. 2 is a second schematic flowchart of the map construction method provided by an embodiment of the disclosure;
- FIG. 3 is a third schematic flowchart of the map construction method provided by an embodiment of the disclosure;
- FIG. 4 is a fourth schematic flowchart of the map construction method provided by an embodiment of the disclosure;
- FIG. 5 is a fifth schematic flowchart of the map construction method provided by an embodiment of the disclosure;
- FIG. 6 is a first module structure diagram of a map generating device provided by an embodiment of the disclosure;
- FIG. 7 is a second module structure diagram of the map generating device provided by an embodiment of the disclosure;
- FIG. 8 is a third module structure diagram of the map generating device provided by an embodiment of the disclosure;
- FIG. 9 is a fourth module structure diagram of the map generating device provided by an embodiment of the disclosure;
- FIG. 10 is a fifth module structure diagram of the map generating device provided by an embodiment of the disclosure;
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure;
- FIG. 12 is a schematic flowchart of a driving control method provided by an embodiment of the present disclosure;
- FIG. 13 is a schematic structural diagram of a driving control device provided by an embodiment of the present disclosure;
- FIG. 14 is a schematic structural diagram of an intelligent driving system provided by an embodiment of the disclosure.
- FIG. 1 is a first schematic flowchart of a map construction method provided by an embodiment of the present disclosure.
- the execution subject of the method may be an electronic device with data calculation and processing capabilities. As shown in Figure 1, the method includes:
- S101 Acquire image information of at least a part of the road environment where the vehicle is located via the on-board camera, and acquire the point cloud information of at least a part of the road environment where the vehicle is located synchronously and correspondingly via the on-board radar sensor.
- the map generated by the embodiment of the present disclosure is a vehicle-oriented map, that is, a high-precision map.
- the embodiments of the present disclosure can be applied to a scene where a high-precision map is generated or a partial area in the high-precision map is updated.
- Before generating a high-precision map of a certain area or updating at least a part of the high-precision map, the vehicle can first be driven in the area, and multiple sensors on the vehicle can simultaneously collect information about the area; the collected information is then processed to obtain a map of the area.
- the driving mode of the vehicle may be a manual driving mode or an unmanned driving mode, which is not specifically limited in the embodiment of the present disclosure.
- For example, before constructing a map of a certain city, the vehicle can be driven on each road in the city.
- While the vehicle is driving, the image information of the surrounding environment of the vehicle is collected by the vehicle-mounted camera, and the point cloud information of the surrounding environment of the vehicle is collected by the vehicle-mounted radar sensor.
- a map of at least part of the area in the road environment where the vehicle is located can be generated or updated through the processing of the following steps.
- In addition to the vehicle-mounted camera and vehicle-mounted radar sensor, the vehicle may also be equipped with a Global Positioning System (GPS) receiver and an Inertial Measurement Unit (IMU).
- The vehicle-mounted camera, vehicle-mounted radar sensor, GPS, IMU, etc. are different types of sensors installed on the vehicle, and these sensors work synchronously.
- the synchronization of different sensors in the embodiments of the present disclosure can be achieved by hardware devices simultaneously triggering data collection, or by using software methods to time-align the data collected by each sensor separately.
- The embodiments of the present disclosure do not limit this.
- each sensor collects information according to a certain period, and the time of the information collected by each sensor in each period is aligned.
- For example, at time A, the vehicle-mounted camera collects a frame of image information in front of the vehicle, the vehicle-mounted radar sensor collects the point cloud data corresponding to that frame of image information, and the GPS and IMU respectively acquire the vehicle's pose information at time A.
- In this way, the image information and the point cloud information used to construct or update the map are synchronized in time, which ensures that information about the same object is collected and thus guarantees the correctness of the subsequent matching results.
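- As an illustrative sketch (not part of the patent), software time alignment can be as simple as pairing each camera frame with the lidar scan nearest in time; the function name and tolerance below are assumptions for illustration:

```python
import bisect

def align_by_timestamp(camera_frames, lidar_scans, tolerance_s=0.05):
    """Pair each camera frame with the lidar scan closest in time.

    camera_frames / lidar_scans: lists of (timestamp_s, data) tuples,
    sorted by timestamp. Pairs farther apart than tolerance_s are
    dropped rather than matched incorrectly.
    """
    lidar_times = [t for t, _ in lidar_scans]
    pairs = []
    for t_img, img in camera_frames:
        i = bisect.bisect_left(lidar_times, t_img)
        # Candidate neighbours on both sides of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_scans)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(lidar_times[k] - t_img))
        if abs(lidar_times[j] - t_img) <= tolerance_s:
            pairs.append((img, lidar_scans[j][1]))
    return pairs
```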
- In the embodiments of the present disclosure, sensors such as the vehicle-mounted camera, vehicle-mounted radar sensor, GPS, and IMU are fused with each other to complete map generation or update, since every individual sensor has performance boundaries.
- Performance boundaries include, but are not limited to: limited detection range, perception limitations, prior-information limitations, and so on.
- the limited detection range means that the sensor has a fixed range for detecting the surrounding environment.
- the long-distance millimeter wave radar has a detection range of 1 meter (m)-280m
- the infrared sensor has a detection range of 0.2m-120m.
- A perception limitation means that a sensor can only perceive effectively under the environmental conditions it is designed for, and only for certain kinds of information.
- a high-resolution camera can detect objects in an image, and a camera with a narrow field of view can detect distant objects.
- Prior information refers to information that can be collected in advance and will not change in a short time.
- A prior-information limitation refers to the inability to collect prior information through sensors.
- the sensor cannot perceive whether the vehicle is currently on a highway.
- Therefore, the sensors are fused so that the information collected by each sensor at the same moment is obtained together. Different types of information can thus be collected, and generating or updating the map based on these different types of information can greatly reduce or even eliminate problems such as missing information caused by the performance boundary of a single sensor.
- S102 Perform semantic segmentation processing on the image information to obtain first semantic information of a road element in the area, where the first semantic information includes two-dimensional position information and attribute information of the road element.
- The aforementioned road elements may be any type of object that appears in the road environment.
- the road element may be one or more of the following: road marking lines, traffic lights, traffic signs, roadblocks, street lights on both sides of the road, trees on both sides of the road, buildings on both sides of the road, etc.
- The road marking line can be a lane line, a stop line, a pedestrian crossing, a turning line, and the like.
- the embodiment of the present disclosure does not limit the specific form of the road element.
- As noted above, the first semantic information includes the two-dimensional position information and attribute information of the road element.
- the two-dimensional position information may include, for example, two-dimensional coordinate values of each point of the road element.
- the attribute information in the above-mentioned first semantic information is used to describe the attribute of the road element.
- For different road elements, the attribute information in the first semantic information may have different meanings.
- For a lane line, the above attribute information refers to the types of the lane line in different dimensions, such as solid versus dashed in the line-type dimension, and white versus yellow in the color dimension.
- For a stop line, the above attribute information may be a fixed value corresponding to the stop line. The embodiments of the present disclosure do not specifically limit the above attribute information.
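- As a hypothetical illustration (the patent does not prescribe a data format), the first semantic information for one road element might be held as follows:

```python
from dataclasses import dataclass, field

@dataclass
class RoadElement2D:
    """First semantic information of one road element (illustrative layout)."""
    element_type: str                                # e.g. "lane_line", "stop_line"
    pixels: list = field(default_factory=list)       # two-dimensional position info: (u, v) pixels
    attributes: dict = field(default_factory=dict)   # attribute info, e.g. line type and color

# Example: a dashed yellow lane line segmented from one image frame.
lane = RoadElement2D(
    element_type="lane_line",
    pixels=[(412, 880), (415, 840), (418, 800)],
    attributes={"line_type": "dashed", "color": "yellow"},
)
```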
- S103 Perform matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain second semantic information of the road elements in the area.
- Optionally, the vehicle-mounted radar sensor may first collect original three-dimensional point cloud information, and filtering and other processing may then be performed based on the pose information of the vehicle at the time of collection to obtain the above-mentioned point cloud information.
- the above-mentioned first semantic information includes the two-dimensional position information of the road element, and the above-mentioned point cloud information can represent the three-dimensional position information of a point in space, including the three-dimensional position information of the road element.
- Therefore, by matching the first semantic information of the road element with the point cloud information, the three-dimensional position information and attribute information of the road element can be obtained.
- the road elements in the map thus obtained have three-dimensional location information and attribute information, which greatly improves the accuracy of the road elements in the map.
- S104 Generate a map based on the above second semantic information or update a part of the map corresponding to the area.
- As mentioned above, the embodiments of the present disclosure can be applied to the scene of generating a high-precision map, and can also be applied to the scene of updating a part of a high-precision map.
- Specifically, a map of the area can be constructed based on the three-dimensional position information included in the second semantic information, in which the three-dimensional position information of the road elements in the area is marked in detail, and the attribute information of the road elements can also be marked in detail.
- In this embodiment, the vehicle-mounted camera collects image information of at least a partial area of the road environment, and semantic segmentation processing is performed on the image information to obtain the two-dimensional position information and attribute information of the road elements in the area; at the same time, the vehicle-mounted radar sensor synchronously collects the point cloud information of the area. Then, by matching the first semantic information with the point cloud information, the three-dimensional position information and attribute information of the road elements in the area can be obtained.
- The vehicle-mounted camera and the vehicle-mounted radar sensor work as different types of sensors installed on the vehicle. Matching processing based on the information collected by these different types of sensors can directly generate or update the map of the area, which can reduce or even eliminate manual operations during map generation or update.
- the method of this embodiment enables a high degree of automation of map construction, and greatly improves the construction efficiency of high-precision maps.
- the method of this embodiment enables various types of information used to characterize road elements to be effectively integrated by matching the first semantic information and point cloud information, thereby greatly improving the accuracy of the road elements in the map.
- the above-mentioned first semantic information and point cloud information may be matched with each other in the following manner.
- FIG. 2 is a schematic diagram of the second flow of the map construction method provided by an embodiment of the present disclosure. As shown in FIG. 2, an optional implementation process of the foregoing step S103 includes:
- S201 Perform coordinate system conversion from a three-dimensional coordinate system to a two-dimensional coordinate system on the point cloud information to obtain the two-dimensional point cloud information of the point cloud information in the coordinate system where the first semantic information is located.
- the original point cloud information collected by the vehicle-mounted radar sensor is the data in the radar coordinate system.
- After processing, data in the North East Down (NED) coordinate system can be obtained; that is, optionally, the above-mentioned point cloud information may be point cloud information in the NED coordinate system.
- The NED coordinate system includes the north axis, the east axis, and the down axis: the north axis points to the north of the earth, the east axis points to the east of the earth, and the down axis is perpendicular to the earth's surface and points downward.
- the data collected by the vehicle-mounted camera is data in a pixel coordinate system.
- the pixel coordinate system may also be referred to as an image coordinate system.
- the pixel coordinate system is a two-dimensional coordinate system whose origin is the upper left corner of the image collected by the vehicle-mounted camera.
- Since the point cloud information is data in the NED coordinate system while the first semantic information consists of coordinates in the pixel coordinate system, the point cloud information can be converted from the NED coordinate system to the pixel coordinate system to obtain the two-dimensional point cloud information in the pixel coordinate system. After this process, the point cloud information has been projected from the NED coordinate system to the pixel coordinate system.
- any one of the following methods can be used to convert the above point cloud information from the NED coordinate system to the pixel coordinate system:
- In the first method, the point cloud information is first converted from the NED coordinate system to the IMU coordinate system to obtain the point cloud information in the IMU coordinate system; then, according to the rotation and translation matrix between the IMU coordinate system and the camera coordinate system, the point cloud information in the IMU coordinate system is converted to the camera coordinate system to obtain the point cloud information in the camera coordinate system; finally, according to the camera parameter matrix, the point cloud information in the camera coordinate system is converted to the pixel coordinate system to obtain the two-dimensional point cloud information in the pixel coordinate system.
- In the second method, the point cloud information is first converted from the NED coordinate system to the radar coordinate system to obtain the point cloud information in the radar coordinate system; then, according to the rotation and translation matrix between the radar coordinate system and the camera coordinate system, the point cloud information in the radar coordinate system is converted to the camera coordinate system to obtain the point cloud information in the camera coordinate system; finally, according to the camera parameter matrix, the point cloud information in the camera coordinate system is converted to the pixel coordinate system to obtain the two-dimensional point cloud information in the pixel coordinate system.
- the camera coordinate system mentioned in the above two methods refers to a coordinate system formed by taking the focus center of the vehicle-mounted camera as the origin and the optical axis as the Z axis.
- the parameter matrix of the camera described in the above second mode refers to the parameter matrix of the above-mentioned vehicle-mounted camera.
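- The projection common to both methods can be sketched as follows (a minimal illustration, assuming the 4x4 rigid transform into the camera frame and the 3x3 intrinsic parameter matrix are known from calibration; the patent itself only specifies the chain of coordinate systems):

```python
import numpy as np

def project_ned_to_pixels(points_ned, T_cam_from_ned, K):
    """Project an N x 3 array of NED points into the pixel coordinate system.

    T_cam_from_ned: 4x4 rotation-and-translation matrix taking NED
        coordinates to the camera frame; in practice it is composed from
        the NED->IMU (or NED->radar) transform and the sensor-to-camera
        extrinsic calibration.
    K: 3x3 camera parameter (intrinsic) matrix.
    Returns M x 2 pixel coordinates and the indices of the points kept.
    """
    n = points_ned.shape[0]
    homo = np.hstack([points_ned, np.ones((n, 1))])   # N x 4 homogeneous points
    cam = (T_cam_from_ned @ homo.T).T[:, :3]          # points in the camera frame
    in_front = cam[:, 2] > 0                          # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T                     # perspective projection
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, np.nonzero(in_front)[0]
```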
- the point cloud information may also be information in other three-dimensional coordinate systems.
- the first semantic information may also be information in other two-dimensional coordinate systems. Through coordinate system conversion, the point cloud information may also be projected to the coordinate system where the first semantic information is located.
- S202 Perform matching processing on the two-dimensional point cloud information and the two-dimensional position information of the road element to obtain second semantic information of the road element in the map of the area.
- After the above conversion, the two-dimensional point cloud information and the two-dimensional position information in the first semantic information are located in the same coordinate system, so the matching processing between them can be performed in that same coordinate system to obtain the second semantic information of the road element.
- In this embodiment, the point cloud information is projected into the coordinate system of the first semantic information through coordinate transformation, and the two-dimensional point cloud information is then matched against the two-dimensional position information of the first semantic information in that same coordinate system, which greatly improves the accuracy of the matching result.
- the above-mentioned two-dimensional point cloud information and the two-dimensional position information in the first semantic information may be matched in the following manner.
- FIG. 3 is a third schematic flowchart of the map construction method provided by an embodiment of the disclosure. As shown in FIG. 3, an optional implementation of the matching processing between the two-dimensional point cloud information and the two-dimensional position information in the first semantic information in step S202 includes:
- S301 Determine whether each pixel in the two-dimensional point cloud information belongs to the road element according to the two-dimensional position information of the road element.
- As mentioned above, the first semantic information of the road element can be obtained by processing the images collected by the vehicle-mounted camera. During processing, one or more frames of image can be selected in a specific way, and for each frame of image, the road elements are segmented from the image through semantic segmentation. After segmentation, it is known whether each pixel in the image belongs to a road element, and each pixel belonging to a road element has specific two-dimensional position information, which may be a two-dimensional coordinate value.
- In the two-dimensional point cloud information, each pixel likewise has specific two-dimensional position information.
- Therefore, each pixel in the two-dimensional point cloud information can be traversed pixel by pixel to determine whether it is a pixel of the road element; if so, that pixel of the two-dimensional point cloud information is determined to belong to the road element.
- For example, for a pixel (x1, y1) in the two-dimensional point cloud information, it is determined whether, in the frame of image collected by the vehicle-mounted camera corresponding to the two-dimensional point cloud information, the pixel (x1, y1) is a pixel of the road element; if so, it can be determined that the pixel (x1, y1) in the two-dimensional point cloud information belongs to the above-mentioned road element.
- It should be noted that the first semantic information of the road element and the point cloud information to be matched are information about the same physical location, for example, both characterize a specific road in a city. Therefore, as an optional implementation, each sensor can collect data at the same time and add time stamp information to the collected data; when the data collected by each sensor is processed, the time stamp information is retained in the processing result. When performing the matching of the embodiments of the present disclosure, the point cloud information and the first semantic information with the same time stamp are selected for the above-mentioned matching process. For example, in the above example, the frame of image corresponding to the two-dimensional point cloud information may be the frame whose time stamp is the same as that of the point cloud information to which the two-dimensional point cloud information belongs.
- the first pixel point may refer to any pixel point in the two-dimensional information of the point cloud.
- The first pixel is a pixel obtained by projecting from the three-dimensional coordinate system of the point cloud information to the two-dimensional coordinate system of the first semantic information; therefore, the first pixel uniquely corresponds to a piece of three-dimensional position information in that three-dimensional coordinate system. Consequently, when it is determined that the first pixel belongs to the aforementioned road element, the three-dimensional position information and attribute information of the first pixel can be obtained at the same time. After this process, each point of the road element in the map has both three-dimensional position information and attribute information, so that high-precision road element information in the map is obtained.
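- A minimal sketch of this matching step, assuming a per-pixel instance mask from semantic segmentation and the projection sketched earlier (all names are illustrative, not from the patent):

```python
import numpy as np

def match_points_to_elements(pixels, point_indices, points_ned, label_mask, attr_lookup):
    """Attach 3D position and attribute information to projected points.

    pixels: M x 2 projected pixel coordinates of the point cloud.
    point_indices: indices of those pixels back into the NED point cloud.
    label_mask: H x W integer mask; 0 = background, other values identify
        a segmented road-element instance.
    attr_lookup: dict mapping an instance id to its attribute information.
    """
    h, w = label_mask.shape
    matched = []
    for (u, v), idx in zip(pixels.astype(int), point_indices):
        if not (0 <= u < w and 0 <= v < h):
            continue                                 # projected outside the image
        instance_id = label_mask[v, u]
        if instance_id == 0:
            continue                                 # not a road-element pixel
        matched.append({
            "xyz_ned": points_ned[idx],              # three-dimensional position information
            "attributes": attr_lookup[instance_id],  # attribute information
        })
    return matched
```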
- That is, the high-precision information of the road elements in the map is obtained at the same time.
- In practice, the vehicle-mounted radar sensor collects data at multiple moments, and at each collection moment various kinds of point cloud information in the surrounding environment of the vehicle may be collected, such as point cloud information of the road ahead, of the surrounding trees, of houses, etc.
- Therefore, the point cloud information of the above-mentioned area is screened out from the point cloud information set composed of the point cloud information collected by the radar sensor.
- the vehicle has a specific pose at each moment when the vehicle's sensors collect information.
- the position and attitude of the vehicle can be obtained through GPS, IMU and other sensors.
- the time stamp corresponding to the two-dimensional position information in the first semantic information can be used as a reference to search for the pose of the vehicle at the time corresponding to the time stamp.
- From the pose, the heading of the vehicle at that moment can be known.
- Furthermore, only the point cloud information within a preset range in front of the vehicle along that heading may be selected from the point cloud information set.
- the preset range may be, for example, a rectangular frame with a preset size, and the preset range may include point cloud information of the road surface in front of the vehicle, for example.
- the aforementioned point cloud information set may be a set of point cloud information formed after processing a large amount of point cloud information collected by a vehicle-mounted radar sensor.
- the point cloud information in the set includes point cloud information of road surfaces in multiple areas. It can also include point cloud information of trees, houses, viaducts and other environments around vehicles in multiple areas.
- In this embodiment, the point cloud information is screened according to the vehicle pose, and the subsequent matching process uses only the screened point cloud information, which greatly reduces the processing time of matching, avoids invalid matching, and greatly improves the efficiency of the matching processing.
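- A pose-based screening sketch (the rectangle dimensions below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def filter_points_ahead(points_ned, vehicle_pos_ned, yaw_rad,
                        ahead_m=50.0, half_width_m=10.0):
    """Keep only the points inside a rectangle in front of the vehicle.

    yaw_rad: vehicle heading in the NED horizontal plane (0 = north).
    The rectangle extends ahead_m forward and half_width_m to each side.
    """
    rel = points_ned[:, :2] - vehicle_pos_ned[:2]
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    forward = rel @ np.array([c, s])    # distance along the heading
    lateral = rel @ np.array([-s, c])   # distance across the heading
    keep = (forward > 0) & (forward < ahead_m) & (np.abs(lateral) < half_width_m)
    return points_ned[keep]
```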
- Optionally, the point cloud information corresponding to multiple areas may be filtered according to the information of the drivable area (free space) and spliced to obtain the above-mentioned point cloud information set.
- the above-mentioned drivable area information can be obtained by performing segmentation processing on an image collected at the same time as the point cloud information.
- Specifically, drivable-area detection can be performed on the collected image information to obtain the information of the drivable area in the area.
- the point cloud information of multiple areas collected by the vehicle-mounted radar sensor not only includes the point cloud information of the drivable area, but may also include the point cloud information of the environment such as trees, houses, and viaducts around the drivable area.
- the following describes the process of filtering and splicing point cloud information based on the information of the drivable area.
- Fig. 4 is a schematic flowchart four of the map construction method provided by an embodiment of the present disclosure. As shown in Fig. 4, the process of filtering and splicing point cloud information according to the information of the drivable area includes:
- S401 Detect the drivable area on the collected image information of each area, and obtain the information of the drivable area in each area.
- Here, all the point cloud information collected by the vehicle-mounted radar sensor refers to the point cloud information collected at multiple times. At each time, the camera collects a frame of image, the vehicle-mounted radar sensor collects the corresponding point cloud information, and the GPS and/or IMU obtains the pose information of the vehicle.
- the information collected at multiple times includes the information of multiple regions.
- S402 Filter out the point cloud information of the drivable area in each area from the point cloud information of each area collected by the vehicle-mounted radar sensor.
- Specifically, the point cloud information collected at the same time as the image is projected into the coordinate system of the image through coordinate conversion; it is then determined which of the projected two-dimensional points belong to the drivable area, and only the point cloud information corresponding to the two-dimensional points belonging to the drivable area is retained.
- That is, the point cloud information is filtered according to the drivable area, and only the point cloud information corresponding to the drivable area is retained.
- the point cloud information of the drivable areas in multiple areas can be obtained.
- In specific implementation, the foregoing steps S402-S403 may be performed frame by frame according to the image frames, filtering and saving the point cloud information as each frame is processed.
- the point cloud information collected at the time when the image is collected is filtered to obtain the point cloud information of the corresponding area at that time.
- the obtained point cloud information is converted from the radar coordinate system to the NED coordinate system.
- the converted point cloud information is indexed and stored. After obtaining the next point cloud information of the corresponding area at the next time, index and store the next point cloud information.
- the result is the point cloud information set.
- the indexes of two continuous regions can be continuous.
- In this way, the splicing of the point cloud information is completed as it is stored; the point cloud information of all regions is thus obtained, and together these form the point cloud information set.
- In this embodiment, the point cloud information is first filtered according to the drivable area, so that the stored point cloud information set only contains the point cloud information of the drivable area, which reduces the storage of point cloud information and improves the efficiency of the matching processing performed on this basis.
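- A per-frame filter-and-splice sketch (the two conversion callables stand in for the calibration pipeline and are assumptions, as is the frame layout):

```python
import numpy as np

def build_point_cloud_set(frames, project_to_image, radar_to_ned):
    """Filter each scan by its drivable-area mask, convert it to NED, and
    store it under a consecutive index, splicing the set as it is stored.

    frames: iterable of (scan_radar, drivable_mask, pose) triples collected
        at the same moments; project_to_image returns (pixels, indices) and
        radar_to_ned converts radar-frame points to the NED frame.
    """
    point_cloud_set = {}
    for index, (scan_radar, drivable_mask, pose) in enumerate(frames):
        pixels, idxs = project_to_image(scan_radar)
        h, w = drivable_mask.shape
        u = pixels[:, 0].astype(int)
        v = pixels[:, 1].astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        on_road = np.zeros(len(pixels), dtype=bool)
        on_road[inside] = drivable_mask[v[inside], u[inside]]
        # Retain only the points whose projections fall in the drivable area.
        point_cloud_set[index] = radar_to_ned(scan_radar[idxs[on_road]], pose)
    return point_cloud_set
```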
- In another case, the point cloud information in the point cloud information set may characterize not only the information of the drivable area but also information outside the drivable area, such as the trees, houses, and viaducts around the vehicle.
- the point cloud information collected in the above step S101 can represent the above two types of information.
- the processing can be performed according to the following process:
- the above-mentioned second pixel point may be any pixel point in the two-dimensional information of the point cloud.
- the point cloud information in the point cloud information set is the information including the drivable area and other objects in the surrounding environment of the vehicle.
- the point cloud two-dimensional information of the point cloud information is filtered according to the drivable area.
- The information of the drivable area may be obtained by semantically segmenting a frame of image corresponding to the area.
- That is, the semantic segmentation result of a frame of image corresponding to the area may include not only the above-mentioned first semantic information, i.e., the two-dimensional position information and attribute information of the road elements, but also the information of the drivable area.
- FIG. 5 is a schematic flowchart five of the map construction method provided by an embodiment of the disclosure. As shown in FIG. 5, the process of segmenting the image to obtain the first semantic information and the information of the drivable area includes:
- S501 Perform frame splitting processing on the image information collected by the vehicle-mounted camera to obtain multiple frames of images.
- S502 Perform semantic segmentation processing on each frame of image to obtain segmentation results of road elements, attribute information of road elements, and segmentation results of the drivable area.
- the segmentation result of the road element includes whether each pixel in a frame of image belongs to the road element, and the attribute information of the road element includes the attribute information of the pixel belonging to the road element.
- the attribute information of a pixel belonging to the road element may include information such as color and/or line type, such as a white solid line, etc.
- The segmentation result of the drivable area indicates whether each pixel of a frame of image belongs to the drivable area.
- the neural network may be, for example, a convolutional neural network, a non-convolutional multilayer neural network, or the like.
- the above neural network has the ability to segment the corresponding elements.
- Specifically, a training sample image set including the corresponding element label information can be used in advance to train the neural network in a supervised or semi-supervised manner.
- S503 Perform clustering processing on the segmentation results of the above-mentioned road elements to obtain information of each road element in each frame of image.
- the obtained information of each road element includes the two-dimensional position information and attribute information of each pixel in the road element.
- Based on this information, the processing procedure described in the foregoing embodiments can be executed.
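- One simple way to implement the clustering of step S503 is connected-component labeling of each per-class segmentation mask (the patent does not prescribe a specific clustering algorithm; this sketch uses SciPy):

```python
import numpy as np
from scipy import ndimage

def cluster_road_elements(class_mask):
    """Group the pixels of one road-element class into individual elements.

    class_mask: H x W boolean mask for a single class (e.g. lane lines),
    as produced by semantic segmentation of one frame.
    """
    labels, n = ndimage.label(class_mask)   # connected components
    elements = []
    for instance_id in range(1, n + 1):
        v, u = np.nonzero(labels == instance_id)
        elements.append(np.stack([u, v], axis=1))   # (u, v) pixels of one element
    return elements
```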
- the road elements in the map of the area are filtered according to the preset feature information corresponding to the above-mentioned road elements to obtain the filtered road elements.
- The aforementioned preset feature information describes the inherent features of a road element. After obtaining the three-dimensional position information and attribute information of a road element through the foregoing process, it can be judged whether they conform to the preset features of that road element; if not, the road element can be deleted from the map. After this process, erroneous road elements are eliminated, and the accuracy of the map is further improved.
- fitting processing is performed on the points in the road elements represented by the above-mentioned three-dimensional position information to obtain the road elements represented by the line parameters.
- the road elements obtained through the foregoing processing can be characterized by a large number of points, and each point has three-dimensional position information and attribute information.
- That is, by fitting these points, road elements represented by line parameters can be obtained.
- the line parameter may include the equation of the line, the position of the starting point of the line, and the position of the ending point of the line.
- the attribute information of points belonging to the same road element is the same, and the attribute information of the fitted road element may be the attribute information of any one of the points before fitting.
- After this processing, the high-precision road elements in the map can be represented by only a few line parameters, which greatly reduces the storage required for road elements and greatly improves operational efficiency when the map is used for driving control and other operations.
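- A fitting sketch (this assumes the element is roughly monotone along one axis; a real pipeline would choose a better parameterisation, and the polynomial degree is an illustrative choice):

```python
import numpy as np

def fit_line_parameters(points_xyz, degree=3):
    """Replace a dense 3D point set with compact line parameters.

    Fits y and z as polynomials of x and keeps the coefficients together
    with the start and end positions of the line.
    """
    p = points_xyz[np.argsort(points_xyz[:, 0])]
    return {
        "coeff_y": np.polyfit(p[:, 0], p[:, 1], degree),  # equation of the line, y(x)
        "coeff_z": np.polyfit(p[:, 0], p[:, 2], degree),  # equation of the line, z(x)
        "start": p[0].tolist(),                           # position of the starting point
        "end": p[-1].tolist(),                            # position of the ending point
    }
```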
- the above-mentioned road elements are sampled to obtain the sampled road elements.
- the number of points that make up the road elements is huge.
- Therefore, the road elements can be sampled. After sampling, while the accuracy still meets the requirements of the scene, the number of points in the road elements is greatly reduced, which greatly improves processing speed when the map is used for driving control and other operations.
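- In the simplest case, sampling can keep every k-th point (the rate below is an illustrative assumption):

```python
def downsample(points, step=10):
    """Uniform sampling: keep every `step`-th point of a road element."""
    return points[::step]
```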
- In addition, when the point cloud information is information in the NED coordinate system, the three-dimensional position information of the road elements in the obtained map is also position information in the NED coordinate system. If there is another map whose position information is in the World Geodetic System 1984 (WGS84) coordinate system, the two maps cannot be directly integrated. Therefore, optionally, the three-dimensional position information obtained through the foregoing process can be converted from the NED coordinate system to the target coordinate system corresponding to the other map, to obtain the position information in the target coordinate system. After this processing, the fusion of different maps can be realized.
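- A simplified NED-to-WGS84 sketch under a flat-earth approximation around the NED origin (a production system would use a full geodetic transform; the origin coordinates are assumed known):

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 semi-major axis

def ned_to_wgs84(north_m, east_m, down_m, lat0_deg, lon0_deg, alt0_m):
    """Convert a local NED position to approximate WGS84 lat/lon/alt.

    (lat0_deg, lon0_deg, alt0_m) is the geodetic position of the NED origin.
    Valid only for small areas around the origin.
    """
    lat = lat0_deg + math.degrees(north_m / EARTH_RADIUS_M)
    lon = lon0_deg + math.degrees(
        east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat0_deg))))
    alt = alt0_m - down_m   # NED "down" is positive downward
    return lat, lon, alt
```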
- FIG. 6 is the first module structure diagram of the map generating device provided by the embodiment of the disclosure. As shown in FIG. 6, the device includes:
- the acquisition module 601 is configured to acquire image information of at least a partial area of the road environment where the vehicle is located via a vehicle-mounted camera, and to synchronously and correspondingly acquire point cloud information of the area via the vehicle-mounted radar sensor;
- the segmentation module 602 is configured to perform semantic segmentation processing on the image information to obtain first semantic information of road elements in the area, where the first semantic information includes two-dimensional position information and attribute information of the road elements;
- the matching module 603 is configured to perform matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain the second semantic information of the road elements in the area, the second semantic information including the three-dimensional position information and attribute information of the road elements;
- the generating module 604 is configured to generate a map or update a part of the map corresponding to the region based on the second semantic information.
- the device is used to implement the foregoing method embodiments, and its implementation principles and technical effects are similar, and will not be repeated here.
- FIG. 7 is the second module structure diagram of the map generating apparatus provided by the embodiment of the disclosure.
- the matching module 603 includes:
- the conversion unit 6031 is configured to convert the point cloud information from a three-dimensional coordinate system to a two-dimensional coordinate system to obtain the two-dimensional point cloud information of the point cloud information in the coordinate system where the two-dimensional position information is located;
- the matching unit 6032 is configured to perform matching processing on the two-dimensional point cloud information and the two-dimensional position information of the road element to obtain the second semantic information of the road element in the area.
- Optionally, the matching unit 6032 is configured to: determine, according to the two-dimensional position information of the road element, whether each pixel in the two-dimensional point cloud information belongs to the road element; and, in response to a first pixel in the two-dimensional point cloud information belonging to the road element, acquire the three-dimensional position information of the first pixel from the point cloud information and the attribute information of the first pixel from the first semantic information, to obtain the three-dimensional position information and attribute information of the first pixel; where the first pixel is any pixel in the two-dimensional point cloud information.
- Optionally, the conversion unit 6031 is configured to convert the point cloud information from the NED coordinate system to the pixel coordinate system to obtain the two-dimensional point cloud information in the pixel coordinate system, where the two-dimensional position information is information in the pixel coordinate system.
- Optionally, the conversion unit 6031 is configured to: convert the point cloud information from the NED coordinate system to the inertial measurement unit coordinate system to obtain the point cloud information in the inertial measurement unit coordinate system; convert the point cloud information from the inertial measurement unit coordinate system to the camera coordinate system according to the rotation and translation matrix between the inertial measurement unit coordinate system and the camera coordinate system, to obtain the point cloud information in the camera coordinate system; and convert the point cloud information from the camera coordinate system to the pixel coordinate system according to the camera parameter matrix, to obtain the two-dimensional point cloud information in the pixel coordinate system.
- Alternatively, the conversion unit 6031 is configured to: convert the point cloud information from the NED coordinate system to the radar coordinate system to obtain the point cloud information in the radar coordinate system; convert the point cloud information from the radar coordinate system to the camera coordinate system according to the rotation and translation matrix between the radar coordinate system and the camera coordinate system, to obtain the point cloud information in the camera coordinate system; and convert the point cloud information from the camera coordinate system to the pixel coordinate system according to the camera parameter matrix, to obtain the two-dimensional point cloud information in the pixel coordinate system.
- Optionally, the matching unit 6032 is configured to: detect the drivable area in the image information to obtain information about the drivable area in the area; and, in response to a second pixel in the two-dimensional point cloud information being a pixel in the drivable area of the area, match the second pixel against the two-dimensional position information of the road element to obtain the second semantic information of the road element in the map of the area.
- FIG. 8 is the third module structure diagram of the map generating device provided by the embodiment of the disclosure. As shown in FIG. 8, the device further includes:
- the obtaining module 605 is configured to obtain the pose information of the vehicle via the car navigation system while collecting the image information and the point cloud information of the area;
- the first screening module 606 is configured to screen out the point cloud information of the area from the point cloud information set composed of the point cloud information collected by the on-board radar sensor according to the pose information of the vehicle.
- FIG. 9 is the fourth module structure diagram of the map generating device provided by the embodiment of the disclosure. As shown in FIG. 9, the device further includes:
- the detection module 607 is configured to detect the drivable area of the collected image information of each area, and obtain the information of the drivable area in each area;
- the second screening module 608 is used to screen out the point cloud information of the drivable area in each area from the point cloud information of each area collected by the vehicle-mounted radar sensor;
- the splicing module 609 is used to splice the point cloud information of the drivable areas in each area to obtain the point cloud information set.
- the car navigation system includes a global positioning system and/or an inertial measurement unit.
- FIG. 10 is the module structure diagram 5 of the map generating apparatus provided by the embodiment of the disclosure. As shown in FIG. 10, the generating module 604 includes:
- the processing unit 6041 is configured to use one or more of the following processing methods to process the road elements:
- screening the road elements in the map of the area to obtain the screened road elements; performing fitting processing on the points of the road elements represented by the three-dimensional position information to obtain road elements represented by line parameters; sampling the road elements to obtain the sampled road elements; and converting the three-dimensional position information from the NED coordinate system to the target coordinate system to obtain the position information of the three-dimensional position information in the target coordinate system.
- the generating unit 6042 is configured to generate a map or update a part of the map corresponding to the region based on the second semantic information of the road element obtained by the processing.
- the division of the various modules of the above device is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
- These modules may all be implemented in the form of software called by a processing element, or all in the form of hardware; alternatively, some modules may be implemented in the form of software called by a processing element and other modules in the form of hardware.
- For example, the determining module may be a separately established processing element, or it may be integrated into a certain chip of the above-mentioned device; it may also be stored in the memory of the above-mentioned device in the form of program code, which is called and executed by a certain processing element of the above-mentioned device to perform the function of the module.
- each step of the above method or each of the above modules can be completed by hardware integrated logic circuits in the processor element or instructions in the form of software.
- the above modules may be one or more integrated circuits configured to implement the above methods, such as: one or more application specific integrated circuits (ASIC), or one or more microprocessors (Digital Signal Processor, DSP), or, one or more Field Programmable Gate Array (Field Programmable Gate Array, FPGA), etc.
- the processing element may be a general-purpose processor, such as a central processing unit (CPU) or other processors that can call program codes.
- these modules can be integrated together and implemented in the form of a System-On-a-Chip (SOC).
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
- the electronic device 1100 may include: a processor 111, a memory 112, a communication interface 113, and a system bus 114.
- The memory 112 and the communication interface 113 are connected to the processor 111 through the system bus 114 and communicate with each other; the memory 112 is used to store computer-executable instructions, the communication interface 113 is used to communicate with other devices, and the processor 111, when executing the computer program, implements the scheme of any of the embodiments shown in FIGS. 1 to 5.
- the system bus mentioned in FIG. 11 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus.
- the system bus can be divided into address bus, data bus, control bus, etc. For ease of representation, only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
- the communication interface is used to realize the communication between the database access device and other devices (such as client, read-write library and read-only library).
- the memory may include random access memory (RAM), and may also include non-volatile memory (NVM), such as at least one magnetic disk memory.
- the above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- FIG. 12 is a schematic flowchart of a driving control method provided by an embodiment of the present disclosure.
- an embodiment of the present disclosure also provides a driving control method, including:
- the driving control device obtains map information of at least a part of the road environment where the vehicle is located, and the map information is obtained by using the map generation method provided by the embodiment of the present disclosure;
- the driving control device performs intelligent driving control of the vehicle according to the map information.
- the execution subject of this embodiment is the driving control device.
- the driving control device of this embodiment and the electronic equipment described in the foregoing embodiments may be located in the same device, or may be separately deployed in different devices.
- the driving control device of this embodiment establishes a communication connection with the above-mentioned electronic equipment.
- map information is obtained by using the method of the foregoing embodiment, and for the specific process, refer to the description of the foregoing embodiment, which will not be repeated here.
- the electronic device executes the above-mentioned map generation method, obtains map information of at least a part of the road environment where the vehicle is located, and outputs map information of at least a part of the road environment where the vehicle is located.
- the driving control device acquires map information of at least a part of the road environment where the vehicle is located, and performs intelligent driving control on the vehicle according to the map information of at least a part of the road environment where the vehicle is located.
- the intelligent driving in this embodiment includes at least one of assisted driving, automatic driving, and driving mode switching between assisted driving and automatic driving.
- the above-mentioned intelligent driving control may include at least one of the following: braking, changing the driving speed, changing the driving direction, keeping to the lane line, changing the state of the lights, switching the driving mode, etc., where the driving mode switching may be switching between assisted driving and automatic driving, for example, switching from assisted driving to automatic driving.
- the driving control device obtains map information of at least part of the road environment where the vehicle is located, and performs intelligent driving control according to that map information, thereby improving the safety and reliability of intelligent driving.
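- the control outputs listed above can be sketched as a small action set; the planning rule below is an invented placeholder and the map fields are hypothetical, chosen only to show how map information could drive the control decision:

```python
from enum import Enum, auto

class ControlAction(Enum):
    BRAKE = auto()
    CHANGE_SPEED = auto()
    CHANGE_DIRECTION = auto()
    KEEP_LANE = auto()
    CHANGE_LIGHT_STATE = auto()
    SWITCH_DRIVING_MODE = auto()  # e.g. from assisted driving to automatic driving

def plan_actions(map_info: dict) -> list[ControlAction]:
    """Toy planner: derive control actions from map information of the road area."""
    actions = [ControlAction.KEEP_LANE]        # default behaviour: hold the lane line
    if map_info.get("curvature", 0.0) > 0.05:  # hypothetical field: sharp curve ahead
        actions.append(ControlAction.CHANGE_SPEED)
    if map_info.get("lane_ends", False):       # hypothetical field: mapped lane terminates
        actions.append(ControlAction.CHANGE_DIRECTION)
    return actions
```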
- FIG. 13 is a schematic structural diagram of a driving control device provided by an embodiment of the present disclosure.
- the driving control device 1300 of the embodiment of the present disclosure includes:
- the obtaining module 1301 is configured to obtain map information of at least a part of the road environment where the vehicle is located, and the map information is obtained by using the above-mentioned map generation method.
- the driving control module 1302 is used for intelligent driving control of the vehicle according to the map information.
- the driving control device of the embodiment of the present disclosure may be used to execute the technical solution of the method embodiment shown above, and its implementation principles and technical effects are similar, and will not be repeated here.
- FIG. 14 is a schematic diagram of an intelligent driving system provided by an embodiment of the disclosure.
- the intelligent driving system 1400 of this embodiment includes a sensor 1401, an electronic device 1100, and a driving control device 1300 that are connected in communication; the electronic device 1100 is as shown in FIG. 11, and the driving control device 1300 is as shown in FIG. 13.
- the sensor 1401 may include sensors such as a vehicle-mounted camera, a vehicle-mounted radar sensor, GPS, and IMU.
- the sensor 1401 collects image information, point cloud information, and pose information of the vehicle in at least a part of the road environment where the vehicle is located, and sends this information to the electronic device 1100; after receiving the information, the electronic device 1100 generates a map, or updates the corresponding area in the map, according to the above-mentioned map generation method; the electronic device 1100 then sends the generated or updated map to the driving control device 1300, and the driving control device 1300 performs intelligent driving control of the vehicle according to that map.
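- put together, one cycle of the system might look like the sketch below; the component interfaces (capture_image, generate_or_update_map, perform_control, and so on) are assumed for illustration and are not named in the disclosure:

```python
def run_cycle(sensor, electronic_device, driving_control_device):
    """One illustrative sensing -> mapping -> control cycle of system 1400."""
    image = sensor.capture_image()          # vehicle-mounted camera
    points = sensor.capture_point_cloud()   # vehicle-mounted radar sensor
    pose = sensor.read_pose()               # GPS / IMU pose information
    # electronic device 1100: generate the map or update the corresponding area
    map_info = electronic_device.generate_or_update_map(image, points, pose)
    # driving control device 1300: control the vehicle according to that map
    driving_control_device.perform_control(map_info)
    return map_info
```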
- an embodiment of the present disclosure further provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the method of any one of the embodiments shown in FIGS. 1 to 5 above is implemented.
- an embodiment of the present disclosure further provides a chip for executing instructions, the chip being used to execute the method of any one of the embodiments shown in FIGS. 1 to 5, or to execute the method of the embodiment shown in FIG. 12.
- the embodiments of the present disclosure also provide a program product, the program product including a computer program stored in a storage medium; at least one processor can read the computer program from the storage medium, and when the at least one processor executes the computer program, the method of any one of the embodiments shown in FIGS. 1 to 5, or the method of the embodiment shown in FIG. 12, can be implemented.
- "at least one" refers to one or more, and "multiple" refers to two or more.
- "and/or" describes an association relationship between associated objects and indicates that three relationships are possible: for example, "A and/or B" can mean that A exists alone, that both A and B exist, or that B exists alone, where A and B may be singular or plural.
- the character "/" generally indicates that the associated objects before and after it are in an "or" relationship; in a formula, the character "/" indicates that the associated objects before and after it are in a "division" relationship.
- "at least one of the following item(s)" or similar expressions refers to any combination of these items, including any combination of a single item or plural items.
- for example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where each of a, b, and c may be singular or plural.
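- the combination rule can be made concrete in a few lines of Python; this is a throwaway illustration of the convention, not part of the disclosure:

```python
from itertools import combinations

def at_least_one(items):
    """Every combination meant by "at least one of a, b, or c":
    all non-empty subsets of the items."""
    return [s for r in range(1, len(items) + 1) for s in combinations(items, r)]

print(at_least_one(["a", "b", "c"]))
# seven combinations in total: a, b, c, ab, ac, bc, abc
```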
- the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Navigation (AREA)
- Traffic Control Systems (AREA)
- Instructional Devices (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (27)
- A map generation method, comprising: acquiring, via a vehicle-mounted camera, image information of at least a partial area of the road environment where a vehicle is located, and synchronously and correspondingly acquiring, via a vehicle-mounted radar sensor, point cloud information of the at least partial area of the road environment where the vehicle is located; performing semantic segmentation processing on the image information to obtain first semantic information of road elements in the area, the first semantic information including two-dimensional position information and attribute information of the road elements; performing matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain second semantic information of the road elements in the area, the second semantic information including three-dimensional position information and attribute information of the road elements; and generating a map, or updating the part of a map corresponding to the area, based on the second semantic information.
- The method according to claim 1, wherein performing matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain the second semantic information of the road elements in the area comprises: performing coordinate system conversion of the point cloud information from a three-dimensional coordinate system to a two-dimensional coordinate system to obtain point cloud two-dimensional information of the point cloud information in the coordinate system of the two-dimensional position information; and performing matching processing on the point cloud two-dimensional information and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in the area.
- The method according to claim 2, wherein performing matching processing on the point cloud two-dimensional information and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in the area comprises: determining, according to the two-dimensional position information of the road elements, whether each pixel in the point cloud two-dimensional information belongs to the road elements; and in response to a first pixel in the point cloud two-dimensional information belonging to the road elements, acquiring three-dimensional position information of the first pixel in the point cloud information and attribute information of the first pixel in the first semantic information, to obtain the three-dimensional position information and attribute information of the first pixel; wherein the first pixel is any pixel in the point cloud two-dimensional information. (An illustrative sketch of this matching appears after the claims.)
- The method according to claim 2 or 3, wherein performing coordinate system conversion of the point cloud information from the three-dimensional coordinate system to the two-dimensional coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the coordinate system of the two-dimensional position information comprises: converting the point cloud information from a north-east-down coordinate system to a pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system, the two-dimensional position information being information in the pixel coordinate system.
- The method according to claim 4, wherein converting the point cloud information from the north-east-down coordinate system to the pixel coordinate system comprises: converting the point cloud information from the north-east-down coordinate system to an inertial measurement unit coordinate system to obtain information of the point cloud information in the inertial measurement unit coordinate system; converting, according to a rotation-translation matrix between the inertial measurement unit coordinate system and a camera coordinate system, the information of the point cloud information in the inertial measurement unit coordinate system into the camera coordinate system to obtain information of the point cloud information in the camera coordinate system; and converting, according to a parameter matrix of the camera, the information of the point cloud information in the camera coordinate system into the pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system. (A sketch of this projection chain appears after the claims.)
- The method according to claim 4, wherein converting the point cloud information from the north-east-down coordinate system to the pixel coordinate system comprises: converting the point cloud information from the north-east-down coordinate system to a radar coordinate system to obtain information of the point cloud information in the radar coordinate system; converting, according to a rotation-translation matrix between the radar coordinate system and a camera coordinate system, the information of the point cloud information in the radar coordinate system into the camera coordinate system to obtain information of the point cloud information in the camera coordinate system; and converting, according to a parameter matrix of the camera, the information of the point cloud information in the camera coordinate system into the pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system.
- The method according to any one of claims 2-6, wherein performing matching processing on the point cloud two-dimensional information and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in the area comprises: performing drivable area detection on the image information to obtain information of a drivable area within the area; and in response to a second pixel in the point cloud two-dimensional information being a pixel in the drivable area within the area, performing matching processing on the second pixel and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in a map of the area; wherein the second pixel is any pixel in the point cloud two-dimensional information.
- The method according to any one of claims 1-6, wherein before performing the matching processing on the first semantic information of the road elements in the area and the point cloud information of the area, the method further comprises: acquiring, via a vehicle-mounted navigation system, pose information of the vehicle while the image information and the point cloud information of the area are collected; and screening out, according to the pose information of the vehicle, the point cloud information of the area from a point cloud information set composed of the point cloud information collected by the vehicle-mounted radar sensor.
- The method according to claim 8, wherein before screening out the point cloud information of the area from the point cloud information set composed of the collected point cloud information, the method further comprises: performing drivable area detection on the collected image information of each area to obtain information of the drivable area within each area; screening out the point cloud information of the drivable area within each area from the point cloud information of each area collected by the vehicle-mounted radar sensor; and splicing the point cloud information of the drivable areas within the respective areas to obtain the point cloud information set. (A sketch of this filtering and splicing appears after the claims.)
- The method according to claim 8 or 9, wherein the vehicle-mounted navigation system comprises a global positioning system and/or an inertial measurement unit.
- The method according to any one of claims 1-10, wherein generating a map, or updating the part of the map corresponding to the area, based on the second semantic information comprises: processing the road elements using one or more of the following processing methods: screening the road elements in the map of the area according to preset feature information corresponding to the road elements to obtain screened road elements; performing fitting processing on points in the road elements represented by the three-dimensional position information to obtain the road elements represented by line parameters; performing sampling processing on the road elements to obtain sampled road elements; and converting the three-dimensional position information from the north-east-down coordinate system to a target coordinate system to obtain position information of the three-dimensional position information in the target coordinate system; and generating a map, or updating the part of the map corresponding to the area, based on the second semantic information of the processed road elements. (A sketch of the fitting and sampling appears after the claims.)
- A map generation apparatus, comprising: a collection module configured to acquire, via a vehicle-mounted camera, image information of at least a partial area of the road environment where a vehicle is located, and to synchronously and correspondingly acquire, via a vehicle-mounted radar sensor, point cloud information of the at least partial area of the road environment where the vehicle is located; a segmentation module configured to perform semantic segmentation processing on the image information to obtain first semantic information of road elements in the area, the first semantic information including two-dimensional position information and attribute information of the road elements; a matching module configured to perform matching processing on the first semantic information of the road elements in the area and the point cloud information of the area to obtain second semantic information of the road elements in the area, the second semantic information including three-dimensional position information and attribute information of the road elements; and a generation module configured to generate a map, or update the part of a map corresponding to the area, based on the second semantic information.
- The apparatus according to claim 12, wherein the matching module comprises: a conversion unit configured to perform coordinate system conversion of the point cloud information from a three-dimensional coordinate system to a two-dimensional coordinate system to obtain point cloud two-dimensional information of the point cloud information in the coordinate system of the two-dimensional position information; and a matching unit configured to perform matching processing on the point cloud two-dimensional information and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in the area.
- The apparatus according to claim 13, wherein the matching unit is configured to: determine, according to the two-dimensional position information of the road elements, whether each pixel in the point cloud two-dimensional information belongs to the road elements; and in response to a first pixel in the point cloud two-dimensional information belonging to the road elements, acquire three-dimensional position information of the first pixel in the point cloud information and attribute information of the first pixel in the first semantic information, to obtain the three-dimensional position information and attribute information of the first pixel; wherein the first pixel is any pixel in the point cloud two-dimensional information.
- The apparatus according to claim 13 or 14, wherein the conversion unit is configured to convert the point cloud information from a north-east-down coordinate system to a pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system, the two-dimensional position information being information in the pixel coordinate system.
- The apparatus according to claim 15, wherein the conversion unit is configured to: convert the point cloud information from the north-east-down coordinate system to an inertial measurement unit coordinate system to obtain information of the point cloud information in the inertial measurement unit coordinate system; convert, according to a rotation-translation matrix between the inertial measurement unit coordinate system and a camera coordinate system, the information of the point cloud information in the inertial measurement unit coordinate system into the camera coordinate system to obtain information of the point cloud information in the camera coordinate system; and convert, according to a parameter matrix of the camera, the information of the point cloud information in the camera coordinate system into the pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system.
- The apparatus according to claim 15, wherein the conversion unit is configured to: convert the point cloud information from the north-east-down coordinate system to a radar coordinate system to obtain information of the point cloud information in the radar coordinate system; convert, according to a rotation-translation matrix between the radar coordinate system and a camera coordinate system, the information of the point cloud information in the radar coordinate system into the camera coordinate system to obtain information of the point cloud information in the camera coordinate system; and convert, according to a parameter matrix of the camera, the information of the point cloud information in the camera coordinate system into the pixel coordinate system to obtain the point cloud two-dimensional information of the point cloud information in the pixel coordinate system.
- The apparatus according to any one of claims 13-17, wherein the matching unit is configured to: perform drivable area detection on the image information to obtain information of a drivable area within the area; and in response to a second pixel in the point cloud two-dimensional information being a pixel in the drivable area within the area, perform matching processing on the second pixel and the two-dimensional position information of the road elements to obtain the second semantic information of the road elements in a map of the area; wherein the second pixel is any pixel in the point cloud two-dimensional information.
- The apparatus according to any one of claims 12-17, further comprising: an acquisition module configured to acquire, via a vehicle-mounted navigation system, pose information of the vehicle while the image information and the point cloud information of the area are collected; and a first screening module configured to screen out, according to the pose information of the vehicle, the point cloud information of the area from a point cloud information set composed of the point cloud information collected by the vehicle-mounted radar sensor.
- The apparatus according to claim 19, further comprising: a detection module configured to perform drivable area detection on the collected image information of each area to obtain information of the drivable area within each area; a second screening module configured to screen out the point cloud information of the drivable area within each area from the point cloud information of each area collected by the vehicle-mounted radar sensor; and a splicing module configured to splice the point cloud information of the drivable areas within the respective areas to obtain the point cloud information set.
- The apparatus according to claim 20, wherein the vehicle-mounted navigation system comprises a global positioning system and/or an inertial measurement unit.
- The apparatus according to any one of claims 12-21, wherein the generation module comprises: a processing unit configured to process the road elements using one or more of the following processing methods: screening the road elements in the map of the area according to preset feature information corresponding to the road elements to obtain screened road elements; performing fitting processing on points in the road elements represented by the three-dimensional position information to obtain the road elements represented by line parameters; performing sampling processing on the road elements to obtain sampled road elements; and converting the three-dimensional position information from the north-east-down coordinate system to a target coordinate system to obtain position information of the three-dimensional position information in the target coordinate system; and a generation unit configured to generate a map, or update the part of the map corresponding to the area, based on the second semantic information of the processed road elements.
- A driving control method, comprising: obtaining, by a driving control device, map information of at least a partial area of the road environment where a vehicle is located, the map information being obtained by using the map generation method according to any one of claims 1-11; and performing, by the driving control device, intelligent driving control of the vehicle according to the map information.
- A driving control apparatus, comprising: an obtaining module configured to obtain map information of at least a partial area of the road environment where a vehicle is located, the map information being obtained by using the map generation method according to any one of claims 1-11; and a driving control module configured to perform intelligent driving control of the vehicle according to the map information.
- An electronic device, comprising: a memory configured to store program instructions; and a processor configured to call and execute the program instructions in the memory to perform the method steps according to any one of claims 1-11.
- An intelligent driving system, comprising: a sensor, the electronic device according to claim 25, and the driving control apparatus according to claim 24, which are connected in communication, the sensor being configured to collect image information and point cloud information of at least a partial area of the road environment where a vehicle is located.
- A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the method steps according to any one of claims 1-11 are implemented; or, when the computer program is executed by a processor, the method steps according to claim 23 are implemented.
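The sketches below are illustrative readings of several claimed steps, not the patented implementation: the helper names, array layouts, and the pinhole-camera assumption are ours. The first block imports numpy; the later blocks reuse that import.

A minimal sketch of the matching in claim 3, assuming the point cloud has already been projected to integer pixel coordinates and the first semantic information is given as a per-pixel road element mask plus an attribute map:

```python
import numpy as np

def match_pixels_to_elements(points_2d, points_3d, element_mask, element_attrs):
    """Attach 3-D positions and attributes to the cloud points whose projected
    pixel falls on a road element (claim 3; array layout is assumed).

    points_2d:     N x 2 integer pixel coordinates of the projected cloud points
    points_3d:     N x 3 original 3-D coordinates, row-aligned with points_2d
    element_mask:  H x W boolean mask of the road elements (2-D position information)
    element_attrs: H x W per-pixel attribute labels (first semantic information)
    """
    u, v = points_2d[:, 0], points_2d[:, 1]
    h, w = element_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)   # keep points inside the image
    hits = np.zeros(len(points_2d), dtype=bool)
    hits[inside] = element_mask[v[inside], u[inside]]  # pixel belongs to a road element?
    # second semantic information: 3-D position + attribute of every matched pixel
    return points_3d[hits], element_attrs[v[hits], u[hits]]
```

The projection chain of claim 5 (north-east-down to inertial measurement unit to camera to pixel coordinates) might look as follows under a standard pinhole-camera assumption; the radar-based chain of claim 6 is the same with the radar extrinsics substituted for the IMU ones:

```python
def ned_to_pixel(points_ned, T_ned_to_imu, R_imu_to_cam, t_imu_to_cam, K):
    """Project N x 3 north-east-down points to N x 2 pixel coordinates (claim 5)."""
    homo = np.hstack([points_ned, np.ones((len(points_ned), 1))])  # homogeneous N x 4
    pts_imu = (T_ned_to_imu @ homo.T).T[:, :3]                     # NED -> IMU coordinate system
    pts_cam = (R_imu_to_cam @ pts_imu.T).T + t_imu_to_cam          # rotation-translation to camera
    uvw = (K @ pts_cam.T).T                                        # camera parameter (intrinsic) matrix
    return uvw[:, :2] / uvw[:, 2:3]                                # perspective division -> pixels
```

One plausible reading of the filtering and splicing in claim 9 is to keep, for each collected area, only the points that fall in that area's detected drivable region, then stack the results into the point cloud information set:

```python
def build_point_cloud_set(per_area_points, per_area_drivable_masks):
    """Splice the drivable-region points of all areas into one set (claim 9)."""
    kept = [pts[mask] for pts, mask in zip(per_area_points, per_area_drivable_masks)]
    return np.vstack(kept)
```

And for two of the processing modes in claim 11, fitting a road element's points to line parameters and resampling the fit might be sketched as below, assuming the element runs roughly along the x axis:

```python
def fit_and_sample_element(points_3d, degree=3, num_samples=20):
    """Fit one road element to polynomial line parameters and sample it (claim 11)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    cy, cz = np.polyfit(x, y, degree), np.polyfit(x, z, degree)  # line parameters
    xs = np.linspace(x.min(), x.max(), num_samples)
    sampled = np.column_stack([xs, np.polyval(cy, xs), np.polyval(cz, xs)])
    return (cy, cz), sampled                                     # parameters + sampled element
```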
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021531066A JP2022509302A (ja) | 2019-06-10 | 2020-02-13 | 地図生成方法、運転制御方法、装置、電子機器及びシステム |
KR1020217015319A KR20210082204A (ko) | 2019-06-10 | 2020-02-13 | 지도 생성 방법, 운전 제어 방법, 장치, 전자 기기 및 시스템 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910496345.3 | 2019-06-10 | ||
CN201910496345.3A CN112069856B (zh) | 2019-06-10 | 2019-06-10 | 地图生成方法、驾驶控制方法、装置、电子设备及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020248614A1 true WO2020248614A1 (zh) | 2020-12-17 |
Family
ID=73658193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/075083 WO2020248614A1 (zh) | 2019-06-10 | 2020-02-13 | 地图生成方法、驾驶控制方法、装置、电子设备及系统 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2022509302A (zh) |
KR (1) | KR20210082204A (zh) |
CN (1) | CN112069856B (zh) |
WO (1) | WO2020248614A1 (zh) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633722A (zh) * | 2020-12-29 | 2021-04-09 | 交通运输部公路科学研究所 | 车载道路安全风险评估系统及方法 |
CN112764004A (zh) * | 2020-12-22 | 2021-05-07 | 中国第一汽车股份有限公司 | 一种点云处理方法、装置、设备及存储介质 |
CN112802126A (zh) * | 2021-02-26 | 2021-05-14 | 上海商汤临港智能科技有限公司 | 一种标定方法、装置、计算机设备和存储介质 |
CN112862881A (zh) * | 2021-02-24 | 2021-05-28 | 清华大学 | 基于众包多车摄像头数据的道路地图构建与融合的方法 |
CN112907760A (zh) * | 2021-02-09 | 2021-06-04 | 浙江商汤科技开发有限公司 | 三维对象的标注方法及装置、工具、电子设备和存储介质 |
CN112907746A (zh) * | 2021-03-25 | 2021-06-04 | 上海商汤临港智能科技有限公司 | 电子地图的生成方法、装置、电子设备及存储介质 |
CN112966059A (zh) * | 2021-03-02 | 2021-06-15 | 北京百度网讯科技有限公司 | 针对定位数据的数据处理方法、装置、电子设备和介质 |
CN112967398A (zh) * | 2021-03-01 | 2021-06-15 | 北京奇艺世纪科技有限公司 | 一种三维数据重建方法、装置及电子设备 |
CN113052839A (zh) * | 2021-04-28 | 2021-06-29 | 闫丹凤 | 一种地图检测方法及装置 |
CN113189610A (zh) * | 2021-04-28 | 2021-07-30 | 中国科学技术大学 | 地图增强的自动驾驶多目标追踪方法和相关设备 |
CN113191323A (zh) * | 2021-05-24 | 2021-07-30 | 上海商汤临港智能科技有限公司 | 一种语义元素处理的方法、装置、电子设备及存储介质 |
CN113343858A (zh) * | 2021-06-10 | 2021-09-03 | 广州海格通信集团股份有限公司 | 路网地理位置识别方法、装置、电子设备及存储介质 |
CN113340314A (zh) * | 2021-06-01 | 2021-09-03 | 苏州天准科技股份有限公司 | 局部代价地图的生成方法、存储介质和智能无人巡检车 |
CN113421327A (zh) * | 2021-05-24 | 2021-09-21 | 郭宝宇 | 一种三维模型的构建方法、构建装置以及电子设备 |
CN113420805A (zh) * | 2021-06-21 | 2021-09-21 | 车路通科技(成都)有限公司 | 视频和雷达的动态轨迹图像融合方法、装置、设备及介质 |
CN113435392A (zh) * | 2021-07-09 | 2021-09-24 | 阿波罗智能技术(北京)有限公司 | 应用于自动泊车的车辆定位方法、装置及车辆 |
CN113688935A (zh) * | 2021-09-03 | 2021-11-23 | 阿波罗智能技术(北京)有限公司 | 高精地图的检测方法、装置、设备以及存储介质 |
CN113762413A (zh) * | 2021-09-30 | 2021-12-07 | 智道网联科技(北京)有限公司 | 点云数据与图像数据融合方法及存储介质 |
CN113807435A (zh) * | 2021-09-16 | 2021-12-17 | 中国电子科技集团公司第五十四研究所 | 一种基于多传感器的遥感图像特征点高程获取方法 |
CN114061564A (zh) * | 2021-11-01 | 2022-02-18 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114088082A (zh) * | 2021-11-01 | 2022-02-25 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114111758A (zh) * | 2021-11-01 | 2022-03-01 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114120631A (zh) * | 2021-10-28 | 2022-03-01 | 新奇点智能科技集团有限公司 | 构建动态高精度地图的方法、装置及交通云控平台 |
CN114141010A (zh) * | 2021-11-08 | 2022-03-04 | 南京交通职业技术学院 | 一种基于云平台数据的共享式交通控制方法 |
CN114356078A (zh) * | 2021-12-15 | 2022-04-15 | 之江实验室 | 一种基于注视目标的人物意图检测方法、装置及电子设备 |
CN114374723A (zh) * | 2022-01-17 | 2022-04-19 | 长春师范大学 | 一种计算机控制的智能监控系统 |
CN114413881A (zh) * | 2022-01-07 | 2022-04-29 | 中国第一汽车股份有限公司 | 高精矢量地图的构建方法、装置及存储介质 |
CN114425774A (zh) * | 2022-01-21 | 2022-05-03 | 深圳优地科技有限公司 | 机器人行走道路的识别方法、识别设备以及存储介质 |
CN114445802A (zh) * | 2022-01-29 | 2022-05-06 | 北京百度网讯科技有限公司 | 点云处理方法、装置及车辆 |
CN114445415A (zh) * | 2021-12-14 | 2022-05-06 | 中国科学院深圳先进技术研究院 | 可行驶区域的分割方法以及相关装置 |
CN114494267A (zh) * | 2021-11-30 | 2022-05-13 | 北京国网富达科技发展有限责任公司 | 一种变电站和电缆隧道场景语义构建系统和方法 |
CN114511600A (zh) * | 2022-04-20 | 2022-05-17 | 北京中科慧眼科技有限公司 | 基于点云配准的位姿计算方法和系统 |
CN114526721A (zh) * | 2021-12-31 | 2022-05-24 | 易图通科技(北京)有限公司 | 地图对齐处理方法、装置及可读存储介质 |
CN114581621A (zh) * | 2022-03-07 | 2022-06-03 | 北京百度网讯科技有限公司 | 地图数据处理方法、装置、电子设备和介质 |
CN114581287A (zh) * | 2022-02-18 | 2022-06-03 | 高德软件有限公司 | 数据处理方法以及装置 |
CN114620055A (zh) * | 2022-03-15 | 2022-06-14 | 阿波罗智能技术(北京)有限公司 | 道路数据处理方法、装置、电子设备及自动驾驶车辆 |
CN114754779A (zh) * | 2022-04-27 | 2022-07-15 | 镁佳(北京)科技有限公司 | 一种定位与建图方法、装置及电子设备 |
CN114782342A (zh) * | 2022-04-12 | 2022-07-22 | 北京瓦特曼智能科技有限公司 | 城市硬件设施缺陷的检测方法及装置 |
CN115290104A (zh) * | 2022-07-14 | 2022-11-04 | 襄阳达安汽车检测中心有限公司 | 仿真地图生成方法、装置、设备及可读存储介质 |
CN115435773A (zh) * | 2022-09-05 | 2022-12-06 | 北京远见知行科技有限公司 | 室内停车场高精度地图采集装置 |
CN115523929A (zh) * | 2022-09-20 | 2022-12-27 | 北京四维远见信息技术有限公司 | 一种基于slam的车载组合导航方法、装置、设备及介质 |
CN116027375A (zh) * | 2023-03-29 | 2023-04-28 | 智道网联科技(北京)有限公司 | 自动驾驶车辆的定位方法、装置及电子设备、存储介质 |
CN116030212A (zh) * | 2023-03-28 | 2023-04-28 | 北京集度科技有限公司 | 一种建图方法、设备、车辆及程序产品 |
CN116295463A (zh) * | 2023-02-27 | 2023-06-23 | 北京辉羲智能科技有限公司 | 一种导航地图元素的自动标注方法 |
WO2023123837A1 (zh) * | 2021-12-30 | 2023-07-06 | 广州小鹏自动驾驶科技有限公司 | 地图的生成方法、装置、电子设备及存储介质 |
CN116821854A (zh) * | 2023-08-30 | 2023-09-29 | 腾讯科技(深圳)有限公司 | 一种目标投影的匹配融合方法及相关装置 |
CN117315176A (zh) * | 2023-10-07 | 2023-12-29 | 北京速度时空信息有限公司 | 一种高精度地图生成方法及系统 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112667837A (zh) * | 2019-10-16 | 2021-04-16 | 上海商汤临港智能科技有限公司 | 图像数据自动标注方法及装置 |
CN118200353A (zh) * | 2020-12-28 | 2024-06-14 | 华为技术有限公司 | 用于车联网的数据传输方法、装置、存储介质和系统 |
JP2022137534A (ja) * | 2021-03-09 | 2022-09-22 | 本田技研工業株式会社 | 地図生成装置および車両位置認識装置 |
CN112960000A (zh) * | 2021-03-15 | 2021-06-15 | 新石器慧义知行智驰(北京)科技有限公司 | 高精地图更新方法、装置、电子设备和存储介质 |
CN113066009B (zh) * | 2021-03-24 | 2023-08-25 | 北京斯年智驾科技有限公司 | 港口高精度地图集的构建方法、装置、系统和存储介质 |
CN113034566B (zh) * | 2021-05-28 | 2021-09-24 | 湖北亿咖通科技有限公司 | 高精度地图构建方法、装置、电子设备及存储介质 |
CN113701770A (zh) * | 2021-07-16 | 2021-11-26 | 西安电子科技大学 | 一种高精地图生成方法及系统 |
US11608084B1 (en) * | 2021-08-27 | 2023-03-21 | Motional Ad Llc | Navigation with drivable area detection |
CN113822932B (zh) * | 2021-08-30 | 2023-08-18 | 亿咖通(湖北)技术有限公司 | 设备定位方法、装置、非易失性存储介质及处理器 |
CN113836251B (zh) * | 2021-09-17 | 2024-09-17 | 中国第一汽车股份有限公司 | 一种认知地图构建方法、装置、设备及介质 |
CN114111813B (zh) * | 2021-10-18 | 2024-06-18 | 阿波罗智能技术(北京)有限公司 | 高精地图元素更新方法、装置、电子设备及存储介质 |
CN114185613A (zh) * | 2021-11-30 | 2022-03-15 | 广州景骐科技有限公司 | 一种语义地图分块方法、装置、交通工具及存储介质 |
CN114440856A (zh) * | 2022-01-21 | 2022-05-06 | 北京地平线信息技术有限公司 | 一种构建语义地图的方法及装置 |
CN115527028A (zh) * | 2022-08-16 | 2022-12-27 | 北京百度网讯科技有限公司 | 地图数据处理方法及装置 |
CN116182831A (zh) * | 2022-12-07 | 2023-05-30 | 北京斯年智驾科技有限公司 | 车辆定位方法、装置、设备、介质及车辆 |
CN115861561B (zh) * | 2023-02-24 | 2023-05-30 | 航天宏图信息技术股份有限公司 | 一种基于语义约束的等高线生成方法和装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9910441B2 (en) * | 2015-11-04 | 2018-03-06 | Zoox, Inc. | Adaptive autonomous vehicle planner logic |
CN109410301A (zh) * | 2018-10-16 | 2019-03-01 | 张亮 | 面向无人驾驶汽车的高精度语义地图制作方法 |
CN109461211A (zh) * | 2018-11-12 | 2019-03-12 | 南京人工智能高等研究院有限公司 | 基于视觉点云的语义矢量地图构建方法、装置和电子设备 |
CN109829386A (zh) * | 2019-01-04 | 2019-05-31 | 清华大学 | 基于多源信息融合的智能车辆可通行区域检测方法 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10066946B2 (en) * | 2016-08-26 | 2018-09-04 | Here Global B.V. | Automatic localization geometry detection |
CN107818288B (zh) * | 2016-09-13 | 2019-04-09 | 腾讯科技(深圳)有限公司 | 标志牌信息获取方法及装置 |
US11761790B2 (en) * | 2016-12-09 | 2023-09-19 | Tomtom Global Content B.V. | Method and system for image-based positioning and mapping for a road network utilizing object detection |
US10657390B2 (en) * | 2017-11-27 | 2020-05-19 | Tusimple, Inc. | System and method for large-scale lane marking detection using multimodal sensor data |
CN109117718B (zh) * | 2018-07-02 | 2021-11-26 | 东南大学 | 一种面向道路场景的三维语义地图构建和存储方法 |
CN109064506B (zh) * | 2018-07-04 | 2020-03-13 | 百度在线网络技术(北京)有限公司 | 高精度地图生成方法、装置及存储介质 |
-
2019
- 2019-06-10 CN CN201910496345.3A patent/CN112069856B/zh active Active
-
2020
- 2020-02-13 KR KR1020217015319A patent/KR20210082204A/ko not_active Application Discontinuation
- 2020-02-13 WO PCT/CN2020/075083 patent/WO2020248614A1/zh active Application Filing
- 2020-02-13 JP JP2021531066A patent/JP2022509302A/ja active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9910441B2 (en) * | 2015-11-04 | 2018-03-06 | Zoox, Inc. | Adaptive autonomous vehicle planner logic |
CN109410301A (zh) * | 2018-10-16 | 2019-03-01 | 张亮 | 面向无人驾驶汽车的高精度语义地图制作方法 |
CN109461211A (zh) * | 2018-11-12 | 2019-03-12 | 南京人工智能高等研究院有限公司 | 基于视觉点云的语义矢量地图构建方法、装置和电子设备 |
CN109829386A (zh) * | 2019-01-04 | 2019-05-31 | 清华大学 | 基于多源信息融合的智能车辆可通行区域检测方法 |
Cited By (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112764004A (zh) * | 2020-12-22 | 2021-05-07 | 中国第一汽车股份有限公司 | 一种点云处理方法、装置、设备及存储介质 |
CN112764004B (zh) * | 2020-12-22 | 2024-05-03 | 中国第一汽车股份有限公司 | 一种点云处理方法、装置、设备及存储介质 |
CN112633722B (zh) * | 2020-12-29 | 2024-01-12 | 交通运输部公路科学研究所 | 车载道路安全风险评估系统及方法 |
CN112633722A (zh) * | 2020-12-29 | 2021-04-09 | 交通运输部公路科学研究所 | 车载道路安全风险评估系统及方法 |
CN112907760A (zh) * | 2021-02-09 | 2021-06-04 | 浙江商汤科技开发有限公司 | 三维对象的标注方法及装置、工具、电子设备和存储介质 |
CN112907760B (zh) * | 2021-02-09 | 2023-03-24 | 浙江商汤科技开发有限公司 | 三维对象的标注方法及装置、工具、电子设备和存储介质 |
CN112862881B (zh) * | 2021-02-24 | 2023-02-07 | 清华大学 | 基于众包多车摄像头数据的道路地图构建与融合的方法 |
CN112862881A (zh) * | 2021-02-24 | 2021-05-28 | 清华大学 | 基于众包多车摄像头数据的道路地图构建与融合的方法 |
CN112802126A (zh) * | 2021-02-26 | 2021-05-14 | 上海商汤临港智能科技有限公司 | 一种标定方法、装置、计算机设备和存储介质 |
CN112967398A (zh) * | 2021-03-01 | 2021-06-15 | 北京奇艺世纪科技有限公司 | 一种三维数据重建方法、装置及电子设备 |
CN112967398B (zh) * | 2021-03-01 | 2023-07-25 | 北京奇艺世纪科技有限公司 | 一种三维数据重建方法、装置及电子设备 |
CN112966059A (zh) * | 2021-03-02 | 2021-06-15 | 北京百度网讯科技有限公司 | 针对定位数据的数据处理方法、装置、电子设备和介质 |
CN112966059B (zh) * | 2021-03-02 | 2023-11-24 | 北京百度网讯科技有限公司 | 针对定位数据的数据处理方法、装置、电子设备和介质 |
CN112907746A (zh) * | 2021-03-25 | 2021-06-04 | 上海商汤临港智能科技有限公司 | 电子地图的生成方法、装置、电子设备及存储介质 |
CN113189610A (zh) * | 2021-04-28 | 2021-07-30 | 中国科学技术大学 | 地图增强的自动驾驶多目标追踪方法和相关设备 |
CN113052839A (zh) * | 2021-04-28 | 2021-06-29 | 闫丹凤 | 一种地图检测方法及装置 |
CN113421327A (zh) * | 2021-05-24 | 2021-09-21 | 郭宝宇 | 一种三维模型的构建方法、构建装置以及电子设备 |
CN113191323A (zh) * | 2021-05-24 | 2021-07-30 | 上海商汤临港智能科技有限公司 | 一种语义元素处理的方法、装置、电子设备及存储介质 |
CN113340314A (zh) * | 2021-06-01 | 2021-09-03 | 苏州天准科技股份有限公司 | 局部代价地图的生成方法、存储介质和智能无人巡检车 |
CN113343858A (zh) * | 2021-06-10 | 2021-09-03 | 广州海格通信集团股份有限公司 | 路网地理位置识别方法、装置、电子设备及存储介质 |
CN113343858B (zh) * | 2021-06-10 | 2024-03-12 | 广州海格通信集团股份有限公司 | 路网地理位置识别方法、装置、电子设备及存储介质 |
CN113420805A (zh) * | 2021-06-21 | 2021-09-21 | 车路通科技(成都)有限公司 | 视频和雷达的动态轨迹图像融合方法、装置、设备及介质 |
CN113435392A (zh) * | 2021-07-09 | 2021-09-24 | 阿波罗智能技术(北京)有限公司 | 应用于自动泊车的车辆定位方法、装置及车辆 |
CN113688935A (zh) * | 2021-09-03 | 2021-11-23 | 阿波罗智能技术(北京)有限公司 | 高精地图的检测方法、装置、设备以及存储介质 |
CN113807435A (zh) * | 2021-09-16 | 2021-12-17 | 中国电子科技集团公司第五十四研究所 | 一种基于多传感器的遥感图像特征点高程获取方法 |
CN113762413A (zh) * | 2021-09-30 | 2021-12-07 | 智道网联科技(北京)有限公司 | 点云数据与图像数据融合方法及存储介质 |
CN113762413B (zh) * | 2021-09-30 | 2023-12-26 | 智道网联科技(北京)有限公司 | 点云数据与图像数据融合方法及存储介质 |
CN114120631A (zh) * | 2021-10-28 | 2022-03-01 | 新奇点智能科技集团有限公司 | 构建动态高精度地图的方法、装置及交通云控平台 |
CN114111758A (zh) * | 2021-11-01 | 2022-03-01 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114088082B (zh) * | 2021-11-01 | 2024-04-16 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114061564B (zh) * | 2021-11-01 | 2022-12-13 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114061564A (zh) * | 2021-11-01 | 2022-02-18 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114088082A (zh) * | 2021-11-01 | 2022-02-25 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114111758B (zh) * | 2021-11-01 | 2024-06-04 | 广州小鹏自动驾驶科技有限公司 | 一种地图数据的处理方法和装置 |
CN114141010A (zh) * | 2021-11-08 | 2022-03-04 | 南京交通职业技术学院 | 一种基于云平台数据的共享式交通控制方法 |
CN114494267A (zh) * | 2021-11-30 | 2022-05-13 | 北京国网富达科技发展有限责任公司 | 一种变电站和电缆隧道场景语义构建系统和方法 |
CN114445415A (zh) * | 2021-12-14 | 2022-05-06 | 中国科学院深圳先进技术研究院 | 可行驶区域的分割方法以及相关装置 |
CN114356078B (zh) * | 2021-12-15 | 2024-03-19 | 之江实验室 | 一种基于注视目标的人物意图检测方法、装置及电子设备 |
CN114356078A (zh) * | 2021-12-15 | 2022-04-15 | 之江实验室 | 一种基于注视目标的人物意图检测方法、装置及电子设备 |
WO2023123837A1 (zh) * | 2021-12-30 | 2023-07-06 | 广州小鹏自动驾驶科技有限公司 | 地图的生成方法、装置、电子设备及存储介质 |
CN114526721B (zh) * | 2021-12-31 | 2024-05-24 | 易图通科技(北京)有限公司 | 地图对齐处理方法、装置及可读存储介质 |
CN114526721A (zh) * | 2021-12-31 | 2022-05-24 | 易图通科技(北京)有限公司 | 地图对齐处理方法、装置及可读存储介质 |
CN114413881A (zh) * | 2022-01-07 | 2022-04-29 | 中国第一汽车股份有限公司 | 高精矢量地图的构建方法、装置及存储介质 |
CN114413881B (zh) * | 2022-01-07 | 2023-09-01 | 中国第一汽车股份有限公司 | 高精矢量地图的构建方法、装置及存储介质 |
CN114374723A (zh) * | 2022-01-17 | 2022-04-19 | 长春师范大学 | 一种计算机控制的智能监控系统 |
CN114425774B (zh) * | 2022-01-21 | 2023-11-03 | 深圳优地科技有限公司 | 机器人行走道路的识别方法、识别设备以及存储介质 |
CN114425774A (zh) * | 2022-01-21 | 2022-05-03 | 深圳优地科技有限公司 | 机器人行走道路的识别方法、识别设备以及存储介质 |
CN114445802A (zh) * | 2022-01-29 | 2022-05-06 | 北京百度网讯科技有限公司 | 点云处理方法、装置及车辆 |
CN114581287A (zh) * | 2022-02-18 | 2022-06-03 | 高德软件有限公司 | 数据处理方法以及装置 |
CN114581621A (zh) * | 2022-03-07 | 2022-06-03 | 北京百度网讯科技有限公司 | 地图数据处理方法、装置、电子设备和介质 |
CN114620055B (zh) * | 2022-03-15 | 2022-11-25 | 阿波罗智能技术(北京)有限公司 | 道路数据处理方法、装置、电子设备及自动驾驶车辆 |
CN114620055A (zh) * | 2022-03-15 | 2022-06-14 | 阿波罗智能技术(北京)有限公司 | 道路数据处理方法、装置、电子设备及自动驾驶车辆 |
CN114782342B (zh) * | 2022-04-12 | 2024-02-09 | 北京瓦特曼智能科技有限公司 | 城市硬件设施缺陷的检测方法及装置 |
CN114782342A (zh) * | 2022-04-12 | 2022-07-22 | 北京瓦特曼智能科技有限公司 | 城市硬件设施缺陷的检测方法及装置 |
CN114511600A (zh) * | 2022-04-20 | 2022-05-17 | 北京中科慧眼科技有限公司 | 基于点云配准的位姿计算方法和系统 |
CN114754779A (zh) * | 2022-04-27 | 2022-07-15 | 镁佳(北京)科技有限公司 | 一种定位与建图方法、装置及电子设备 |
CN115290104A (zh) * | 2022-07-14 | 2022-11-04 | 襄阳达安汽车检测中心有限公司 | 仿真地图生成方法、装置、设备及可读存储介质 |
CN115435773A (zh) * | 2022-09-05 | 2022-12-06 | 北京远见知行科技有限公司 | 室内停车场高精度地图采集装置 |
CN115435773B (zh) * | 2022-09-05 | 2024-04-05 | 北京远见知行科技有限公司 | 室内停车场高精度地图采集装置 |
CN115523929A (zh) * | 2022-09-20 | 2022-12-27 | 北京四维远见信息技术有限公司 | 一种基于slam的车载组合导航方法、装置、设备及介质 |
CN116295463A (zh) * | 2023-02-27 | 2023-06-23 | 北京辉羲智能科技有限公司 | 一种导航地图元素的自动标注方法 |
CN116030212A (zh) * | 2023-03-28 | 2023-04-28 | 北京集度科技有限公司 | 一种建图方法、设备、车辆及程序产品 |
CN116027375A (zh) * | 2023-03-29 | 2023-04-28 | 智道网联科技(北京)有限公司 | 自动驾驶车辆的定位方法、装置及电子设备、存储介质 |
CN116821854A (zh) * | 2023-08-30 | 2023-09-29 | 腾讯科技(深圳)有限公司 | 一种目标投影的匹配融合方法及相关装置 |
CN116821854B (zh) * | 2023-08-30 | 2023-12-08 | 腾讯科技(深圳)有限公司 | 一种目标投影的匹配融合方法及相关装置 |
CN117315176A (zh) * | 2023-10-07 | 2023-12-29 | 北京速度时空信息有限公司 | 一种高精度地图生成方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
JP2022509302A (ja) | 2022-01-20 |
CN112069856A (zh) | 2020-12-11 |
CN112069856B (zh) | 2024-06-14 |
KR20210082204A (ko) | 2021-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020248614A1 (zh) | 地图生成方法、驾驶控制方法、装置、电子设备及系统 | |
US11105638B2 (en) | Method, apparatus, and computer readable storage medium for updating electronic map | |
KR102266830B1 (ko) | 차선 결정 방법, 디바이스 및 저장 매체 | |
CN110160502B (zh) | 地图要素提取方法、装置及服务器 | |
CN111999752B (zh) | 确定道路信息数据的方法、装置和计算机存储介质 | |
WO2020098316A1 (zh) | 基于视觉点云的语义矢量地图构建方法、装置和电子设备 | |
WO2020052530A1 (zh) | 一种图像处理方法、装置以及相关设备 | |
CN111582189B (zh) | 交通信号灯识别方法、装置、车载控制终端及机动车 | |
US11590989B2 (en) | Training data generation for dynamic objects using high definition map data | |
WO2020043081A1 (zh) | 定位技术 | |
WO2021051344A1 (zh) | 高精度地图中车道线的确定方法和装置 | |
WO2021253245A1 (zh) | 识别车辆变道趋势的方法和装置 | |
WO2023123837A1 (zh) | 地图的生成方法、装置、电子设备及存储介质 | |
WO2020156923A2 (en) | Map and method for creating a map | |
CN113286081B (zh) | 机场全景视频的目标识别方法、装置、设备及介质 | |
CN115164918B (zh) | 语义点云地图构建方法、装置及电子设备 | |
WO2022166606A1 (zh) | 一种目标检测方法及装置 | |
CN115376109B (zh) | 障碍物检测方法、障碍物检测装置以及存储介质 | |
WO2023155580A1 (zh) | 一种对象识别方法和装置 | |
US20220197893A1 (en) | Aerial vehicle and edge device collaboration for visual positioning image database management and updating | |
CN113378605A (zh) | 多源信息融合方法及装置、电子设备和存储介质 | |
CN116997771A (zh) | 车辆及其定位方法、装置、设备、计算机可读存储介质 | |
CN115344655A (zh) | 地物要素的变化发现方法、装置及存储介质 | |
CN112099481A (zh) | 用于构建道路模型的方法和系统 | |
US20220281459A1 (en) | Autonomous driving collaborative sensing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20823591; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 20217015319; Country of ref document: KR; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 2021531066; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 01.02.2022) |
122 | Ep: pct application non-entry in european phase | Ref document number: 20823591; Country of ref document: EP; Kind code of ref document: A1 |