WO2022257358A1 - Method, apparatus, device and computer storage medium for producing a high-precision map - Google Patents
Method, apparatus, device and computer storage medium for producing a high-precision map
- Publication number
- WO2022257358A1 (PCT/CN2021/131180)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- point cloud
- point
- registration
- sequence
- data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/38—Registration of image sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Definitions
- the present disclosure relates to the field of computer application technology, in particular to automatic driving and deep learning technology in the field of artificial intelligence technology.
- a high-precision map is one of the key factors promoting the development of autonomous driving.
- Traditional maps have low accuracy and can only provide road-level route planning.
- high-precision maps make it possible to know location information in advance, accurately plan driving routes, predict complex road-surface conditions, and better avoid potential risks. Therefore, how to realize the production of high-precision maps has become an urgent problem to be solved.
- the present disclosure provides a method, apparatus, device and computer storage medium for producing a high-precision map.
- a method for producing a high-precision map including:
- acquiring point cloud data and front-view image data respectively collected by a collection device at each position point, to obtain a point cloud sequence and a front-view image sequence; performing registration between the front-view images and the point cloud data of the point cloud sequence and the front-view image sequence; converting the front-view image sequence into a top view according to the registration result and determining the coordinate information of each pixel in the top view; and identifying map elements on the top view to obtain high-precision map data.
- a high-precision map production device including:
- the acquiring unit is used to acquire the point cloud data and the front view image data respectively collected by the acquisition device at each position point, so as to obtain the point cloud sequence and the front view image sequence;
- a registration unit configured to perform registration between the front-view images and the point cloud data of the point cloud sequence and the front-view image sequence;
- a conversion unit configured to convert the front-view image sequence into a top view according to the registration result and determine the coordinate information of each pixel in the top view;
- the identification unit is configured to identify map elements on the top view to obtain high-precision map data.
- an electronic device including:
- at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method as described above.
- a non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause the computer to execute the method as described above.
- a computer program product comprises a computer program which, when executed by a processor, implements the method as described above.
- FIG. 1 is a flowchart of a production method of a high-precision map provided by an embodiment of the present disclosure
- FIG. 2 is a flow chart of a preferred registration process provided by an embodiment of the present disclosure
- FIG. 3 is a flow chart of a method for frame-by-frame registration of point cloud data provided by an embodiment of the present disclosure
- FIG. 4a and FIG. 4b are example diagrams of a front-view image and a top view, respectively;
- FIG. 5 is a structural diagram of a high-precision map production device provided by an embodiment of the present disclosure.
- FIG. 6 is a block diagram of an electronic device used to implement an embodiment of the present disclosure.
- although there are already some approaches to producing high-precision maps, they are mainly based on point cloud technology. That is, a large amount of dense point cloud data is collected by a lidar device; after the point cloud data are processed and recognized, information such as roads and ground markings is obtained; these data are then corrected manually to finally generate the high-precision map data.
- this traditional approach highly relies on point cloud data.
- due to the complex spatial structure of urban roads, a large amount of manpower is required for registration in order to ensure the accuracy of high-precision maps, resulting in low production efficiency, high labor costs, and high requirements on the professional skills of operators, which ultimately hinders the large-scale production of high-precision maps.
- the present disclosure provides a high-precision map production method that differs from the above-mentioned traditional methods.
- the method provided by the present disclosure will be described in detail below with reference to the embodiments.
- Fig. 1 is a flowchart of a production method of a high-precision map provided by an embodiment of the present disclosure.
- the execution body of the method may be an apparatus, which may be an application located on a local terminal, a plug-in or a functional unit such as a software development kit (SDK) in an application located on a local terminal, or may be located on the server side; this is not particularly limited in the embodiments of the present disclosure.
- the method may include:
- in step 101, the point cloud data and front-view image data respectively collected by the acquisition device at each position point are obtained, to obtain a point cloud sequence and a front-view image sequence.
- in step 102, the point cloud sequence and the front-view image sequence are registered between front-view images and point cloud data.
- in step 103, the front-view image sequence is converted into a top view according to the registration result, and the coordinate information of each pixel in the top view is determined.
- in step 104, map elements are identified on the top view to obtain high-precision map data.
- the idea of the present disclosure is to fuse the image data collected by an image acquisition device with the point cloud data collected by a lidar device, realizing automatic registration through mutual fusion, and to generate the final high-precision map based on the registration result.
- this method does not require a large amount of extra manpower for manual registration; it improves production efficiency, reduces labor costs and the professional skills required of operators, and lays a foundation for the large-scale production of high-precision maps.
- step 101, that is, "obtaining point cloud data and front-view image data respectively collected by the acquisition device at each position point, to obtain a point cloud sequence and a front-view image sequence", will be described in detail below in conjunction with an embodiment.
- the acquisition equipment involved in this step mainly includes the following two types:
- image acquisition devices, such as cameras and video cameras, can capture images at regular intervals or when triggered.
- the lidar device can acquire, by emitting laser scans at regular intervals or when triggered, the set of reflection points on the surfaces of the surrounding environment, that is, point cloud data.
- the point cloud data include coordinate information of the points; usually this coordinate information is expressed in the coordinate system of the lidar device.
- in addition, a position acquisition device, such as a GNSS (Global Navigation Satellite System) receiver, may be included, and the acquisition devices may be mounted on a movable device such as a collection vehicle.
- each acquisition device performs data collection at a certain frequency, or is triggered to perform data collection at the same position points.
- the front-view images collected by the image acquisition device at a certain acquisition frequency constitute a front-view image sequence $\{I_1, I_2, \ldots, I_N\}$, where $I_i$ is the frame of front-view image collected at time $t_i$.
- the point cloud data collected by the lidar device at a certain acquisition frequency constitute a point cloud sequence $\{P_1, P_2, \ldots, P_N\}$, where $P_i$ is the frame of point cloud data collected at time $t_i$.
- the position data collected by the position acquisition device at a certain acquisition frequency constitute a position sequence $\{L_1, L_2, \ldots, L_N\}$, where $L_i$ is the position data collected at time $t_i$.
- the above N is the number of data collections performed by the collection device, that is, the amount of data obtained by each collection device.
- clock synchronization and/or joint calibration can be performed on the acquisition device in advance.
- the specific synchronization method can be the GPS-based "PPS (pulse per second) + NMEA (National Marine Electronics Association)" scheme, or the Ethernet-based IEEE 1588 (or IEEE 802.1AS) clock synchronization protocol.
- the joint calibration of the acquisition devices mainly aims to obtain the intrinsic and extrinsic parameter information of the image acquisition device, the extrinsic parameter information of the lidar device, the rotation-and-translation matrix $M_1$ from the lidar coordinate system to the image acquisition device coordinate system, and the intrinsic parameter matrix $M_2$ of the image acquisition device.
- joint calibration is mainly performed by presetting a calibration board and adjusting the lidar device and the image acquisition device so that they photograph and scan the calibration board. At least three corresponding two-dimensional points on the image and three-dimensional points in the point cloud are then found, forming at least three point pairs. Solving a PnP (perspective-n-point) problem with these point pairs yields the transformation relationship between the lidar coordinate system and the image acquisition device coordinate system.
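- as a rough illustration of this calibration step, the following Python sketch uses OpenCV's PnP solver. The board corners, pixel positions and intrinsic matrix are invented placeholders rather than values from this patent, and OpenCV's iterative solver wants at least four coplanar point pairs (one more than the minimum of three mentioned above):

```python
# Hedged sketch of the joint-calibration PnP solve. All numeric values
# are illustrative assumptions, not data from the patent.
import numpy as np
import cv2

# 3D corners of the calibration board in the lidar coordinate system,
# and the corresponding 2D pixels observed in the camera image.
object_points = np.array([[1.2,  0.4, 0.0],
                          [1.2, -0.4, 0.0],
                          [1.8,  0.4, 0.0],
                          [1.8, -0.4, 0.0]], dtype=np.float64)
image_points = np.array([[410.0, 260.0],
                         [585.0, 262.0],
                         [438.0, 310.0],
                         [560.0, 311.0]], dtype=np.float64)

# Intrinsic parameter matrix M2 of the image acquisition device (assumed),
# with an assumption of negligible lens distortion.
M2 = np.array([[1000.0,    0.0, 640.0],
               [   0.0, 1000.0, 360.0],
               [   0.0,    0.0,   1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, M2, dist)
assert ok, "PnP solve failed"
R, _ = cv2.Rodrigues(rvec)     # rotation from the lidar frame to the camera frame
M1 = np.hstack([R, tvec])      # 3x4 rotation-and-translation matrix M1
print("extrinsic M1:\n", M1)
```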
- step 102, that is, "registering the point cloud sequence and the front-view image sequence between front-view images and point cloud data", will be described in detail below in conjunction with an embodiment.
- FIG. 2 is a flowchart of a preferred registration process provided by an embodiment of the present disclosure. As shown in FIG. 2, the process may include the following steps:
- in step 201, the adjacent images in the front-view image sequence are registered to obtain sets of corresponding pixels in the adjacent images.
- suppose that registering two adjacent images $I_i$ and $I_{i+1}$ yields $K$ corresponding pixels: the $K$ pixels in $I_i$ and the $K$ pixels in $I_{i+1}$ correspond one to one, and can be expressed as the sets $\{p_1^i, p_2^i, \ldots, p_K^i\}$ and $\{p_1^{i+1}, p_2^{i+1}, \ldots, p_K^{i+1}\}$, where pixel $p_1^i$ of image $I_i$ corresponds to pixel $p_1^{i+1}$ of image $I_{i+1}$, $p_2^i$ corresponds to $p_2^{i+1}$, and so on.
- the feature-based method mainly includes: determining a feature for each pixel in the two frames of images, where the feature can be, for example, SIFT (scale-invariant feature transform); and then performing feature matching based on similarity to obtain the corresponding pixels. For example, two pixels whose feature similarity exceeds a preset similarity threshold are matched successfully.
- the deep learning method mainly includes: using a convolutional neural network, such as VGG (Visual Geometry Group network), to generate a feature-vector representation for each pixel, and then performing matching based on the feature-vector representations of the pixels in the two frames of images to obtain the corresponding pixels. For example, two pixels whose feature-vector similarity exceeds a preset similarity threshold are matched successfully.
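- a minimal sketch of the feature-based variant with OpenCV SIFT follows; the file names are placeholders, and Lowe's ratio test stands in for the "similarity exceeds a preset threshold" criterion described above:

```python
# Hedged sketch: SIFT matching between two adjacent front-view frames.
import cv2

img1 = cv2.imread("frame_i.png", cv2.IMREAD_GRAYSCALE)         # image I_i
img2 = cv2.imread("frame_i_plus_1.png", cv2.IMREAD_GRAYSCALE)  # image I_{i+1}

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep matches whose best candidate is clearly better than the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The two sets of corresponding pixels of the adjacent images.
pixels_i = [kp1[m.queryIdx].pt for m in good]
pixels_i_plus_1 = [kp2[m.trainIdx].pt for m in good]
```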
- distortion correction is performed on the point cloud data according to the amount of motion of the lidar device during the one revolution in which the point cloud data are collected.
- this step is preferably performed, as it helps to improve the accuracy of the point cloud data in the subsequent registration process.
- the image acquisition device uses a global shutter, so a frame of image can be considered to be captured at a single instant.
- the lidar data, by contrast, are not obtained instantaneously: a frame is usually collected while the transmitter and receiver rotate through a full circle, i.e., 360 degrees. Assuming one revolution takes 100 ms, the first and last points of a frame of point cloud data formed within one acquisition period are 100 ms apart, and the lidar device collects while in motion; the point cloud data are therefore distorted and cannot truly reflect the real environment at a single moment. To better register the image data with the point cloud data, distortion correction is performed on the point cloud data in this step.
- since the lidar computes each laser point's coordinates in the lidar's own coordinate system at the moment the laser beam is received, the reference coordinate system of each column of laser points differs while the lidar is in motion. Yet the points belong to the same frame of point cloud, so they need to be unified into the same coordinate system during distortion correction.
- the idea of distortion correction is to calculate the motion of the lidar during the acquisition process and then compensate for this motion in each frame of point cloud, including compensation of both rotation and translation.
- first, the first laser point in a frame of point cloud is determined; for each subsequent laser point, the rotation angle and translation relative to the first laser point are determined, and a compensation transformation of rotation followed by translation is applied to obtain the corrected coordinate information of that laser point.
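- one common way to implement this compensation is sketched below, under the assumption (not stated in the patent) that each point carries a timestamp relative to the first point and that the sweep motion is approximated by a constant yaw rate and velocity known from odometry:

```python
# Hedged sketch of lidar motion compensation ("deskewing") for one sweep.
import numpy as np

def deskew(points, timestamps, yaw_rate, velocity):
    """Compensate a sweep into the frame of its first laser point.

    points     : (N, 3) xyz, each in the sensor frame at its own capture time
    timestamps : (N,) seconds elapsed since the first point of the sweep
    yaw_rate   : sensor yaw rate during the sweep (rad/s), assumed constant
    velocity   : (3,) sensor velocity (m/s), assumed constant
    """
    corrected = np.empty_like(points)
    for i, (p, dt) in enumerate(zip(points, timestamps)):
        ang = yaw_rate * dt                  # rotation accumulated since t0
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        # rotation first, then translation, matching the order described above
        corrected[i] = R @ p + velocity * dt
    return corrected
```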
- a set of corresponding point clouds in adjacent images may also be determined.
- specifically, the projection matrix from the point cloud to the image can first be obtained from the intrinsic parameter matrix of the image acquisition device, the rotation matrix from the image acquisition device coordinate system to the image plane, and the rotation-and-translation matrix from the lidar coordinate system to the image acquisition device coordinate system; the point cloud data are then projected onto the image using this projection matrix.
- in this way, the set of corresponding point clouds in adjacent images can be determined. Assume two adjacent frames of images $I_i$ and $I_{i+1}$: after the above projection, the set of $K_1$ points projected into image $I_i$ and the set of $K_2$ points projected into image $I_{i+1}$ are obtained, and the intersection of these two sets is the set of corresponding point clouds in images $I_i$ and $I_{i+1}$. The use of this set will be covered in subsequent examples.
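- a sketch of this projection is shown below, reusing the calibration outputs $M_1$ (lidar-to-camera rotation-and-translation) and $M_2$ (camera intrinsics); the helper name and arguments are illustrative assumptions:

```python
# Hedged sketch: project lidar points into a front-view image.
import numpy as np

def project_to_image(points_lidar, M1, M2, width, height):
    """Return pixel coordinates and a mask of points that land in the image."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (M1 @ pts_h.T).T              # lidar frame -> camera frame (M1 is 3x4)
    in_front = cam[:, 2] > 0.0          # discard points behind the camera
    uvw = (M2 @ cam.T).T                # camera frame -> homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3]       # perspective division
    inside = (in_front
              & (uv[:, 0] >= 0) & (uv[:, 0] < width)
              & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv, inside
```

- running this for two adjacent frames and intersecting the surviving points yields the set of corresponding point clouds described above.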
- a reference point cloud in the point cloud sequence is determined.
- the first frame in the point cloud sequence can be used as the reference point cloud.
- the point cloud of the first frame may not be the most accurate in the point cloud sequence. Therefore, a preferred way of determining a reference point cloud is provided in the present disclosure.
- specifically, the first frame in the point cloud sequence can be used as the reference to register the other point cloud data frame by frame; then the frame of point cloud in the point cloud sequence that has the highest proportion of registration points with the preceding and following frames of point cloud data is taken as the reference point cloud.
- in step 301, a transformation matrix between two frames of point clouds is learned from the point cloud serving as the reference and its not-yet-registered adjacent point cloud.
- the point cloud of the first frame can theoretically be obtained after rotating and translating the point cloud of the second frame.
- a method such as ICP (Iterative Closest Point, Iterative Closest Point) can be used to learn the transformation matrix.
- the rotation matrix is expressed as R
- the translation matrix is expressed as t
- the loss function can be: the average or weighted average of the distances between each transformed point, obtained by transforming each point of the reference point cloud according to the transformation matrix, and the point in the adjacent point cloud nearest to that transformed point.
- for example, the following loss function can be used:

  $$E(R,t)=\frac{1}{n}\sum_{i=1}^{n}\left\|(R\,x_i+t)-y_i\right\|^2$$

  where $E(R,t)$ represents the loss; $x_i$ is a point in the reference point cloud, such as the first-frame point cloud; $R\,x_i+t$ represents the transformation of $x_i$ according to the transformation matrix; $y_i$ is the point in the adjacent point cloud nearest to the transformed point; and $n$ is the number of points that can be matched.
- the following weighted loss function can also be used:

  $$E(R,t)=\frac{1}{n}\sum_{i=1}^{n} w_i\left\|(R\,x_i+t)-y_i\right\|^2$$

  where the weighting coefficient $w_i$ is added; its value can be determined according to whether the point in the reference point cloud belongs to the set of corresponding point clouds, for example by assigning a larger weight to points that belong to the set, in the spirit of DGR (deep global registration).
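- for concreteness, one didactic iteration of weighted point-to-point ICP is sketched below: nearest-neighbour association followed by the closed-form weighted SVD (Kabsch) solution for R and t. It minimizes the weighted loss above, but it is not claimed to be the patent's exact procedure:

```python
# Hedged sketch: a single weighted point-to-point ICP update.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, weights):
    """One update. src: (N,3) reference cloud; dst: (M,3) adjacent cloud;
    weights: (N,) per-point weights w_i."""
    nn = cKDTree(dst).query(src)[1]        # nearest neighbour in dst per src point
    y = dst[nn]
    w = weights / weights.sum()
    mu_x = (w[:, None] * src).sum(axis=0)  # weighted centroids
    mu_y = (w[:, None] * y).sum(axis=0)
    # weighted cross-covariance, then SVD for the optimal rotation
    H = (w[:, None] * (src - mu_x)).T @ (y - mu_y)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_y - R @ mu_x
    return R, t
```

- in practice the update is iterated, re-associating nearest neighbours each round, until the loss stops decreasing.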
- in step 302, the transformation matrix is used to transform the reference point cloud, obtaining the registered adjacent point cloud.
- the registered points in the point cloud of the second frame are obtained.
- in step 303, it is judged whether there are adjacent point clouds in the point cloud sequence that have not yet been registered; if yes, step 304 is performed; otherwise, the current registration process ends.
- in step 304, the adjacent point cloud is used as the new reference, and the process returns to step 301.
- the registration point ratio $A_j$ of the point cloud $P_j$ of the $j$-th frame can be determined by the following formula:

  $$A_j=\frac{\left|match(P_{j-1},P_j)\right|+\left|match(P_j,P_{j+1})\right|}{\left|P_{j-1}\right|+\left|P_{j+1}\right|}$$

  where $match()$ indicates the points that can be registered between the two frames of point clouds, which can be reflected as the intersection between the points obtained by transforming one frame of point cloud according to the transformation matrix and the points of the other frame of point cloud; and $\left|\cdot\right|$ indicates the number of points in a set, for example $\left|P_j\right|$ is the number of points in $P_j$.
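- a small sketch of selecting the reference frame by this ratio follows; the `match` function is assumed to be supplied by the frame-by-frame registration described above:

```python
# Hedged sketch: choose the frame with the highest registration-point ratio.
def pick_reference(clouds, match):
    """clouds: list of point clouds; match(a, b): assumed helper returning
    the registrable points between clouds a and b."""
    best_j, best_ratio = None, -1.0
    for j in range(1, len(clouds) - 1):
        prev_m = len(match(clouds[j - 1], clouds[j]))
        next_m = len(match(clouds[j], clouds[j + 1]))
        ratio = (prev_m + next_m) / (len(clouds[j - 1]) + len(clouds[j + 1]))
        if ratio > best_ratio:
            best_j, best_ratio = j, ratio
    return best_j
```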
- after the reference point cloud is determined, the method shown in Figure 3 is used to register the other point cloud data frame by frame based on the reference point cloud. If the reference point cloud is the first frame, the point clouds of subsequent frames are registered sequentially. If the reference point cloud is not the first frame, each frame of point cloud is registered forward and backward from the reference point cloud. Finally, the registered point cloud sequence is obtained.
- next, the registered point cloud data are projected onto the sets obtained in step 201, and the coordinate information of each pixel in the sets is obtained.
- specifically, this may include projecting the coordinates of the point cloud data onto the sets to obtain the coordinate information of the point clouds corresponding to the pixels in the front-view images;
- then, according to the rotation-and-translation matrix from the lidar coordinate system to the image acquisition device coordinate system, the coordinate information of the point clouds is converted into the coordinate information of the pixels.
- the above-mentioned sets are actually the corresponding pixels obtained after registering adjacent images.
- in this way, the coordinate information of the point clouds corresponding to the pixels of these sets in the front-view images can be obtained.
- step 103, that is, "converting the front-view image sequence into a top view according to the registration result and determining the coordinate information of each pixel in the top view", will be described in detail below in conjunction with an embodiment.
- each frame of the front-view image in the front-view sequence can be converted into a top view based on inverse perspective transformation first; then, the coordinate information of each pixel in the top view is determined by matching on the top view according to the coordinate information of the pixels in the front-view image.
- the inverse perspective transformation model can be expressed in terms of the following quantities: $R_x$, the horizontal resolution of the image acquisition device; $R_y$, its vertical resolution; $h$, the height of the image acquisition device above the ground; and $\cot()$, the cotangent function. A classical formulation consistent with these symbols is sketched below.
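- for orientation only: the well-known Bertozzi-Broggi inverse-perspective-mapping model matches the symbols above ($h$, $\cot$, $R_x$, $R_y$). In the formulation below, $\hat\theta$ is the camera pitch, $\hat\gamma$ its yaw, $2\alpha$ its angular aperture, and $(l, d)$ its ground-plane offset; all of these are assumed symbols that do not appear in this document, so the formulation should be read as a stand-in rather than the patent's own model:

```latex
% Assumed classical IPM model mapping image pixel (u, v) to ground point (x, y).
x(u,v) = h \cot\!\Big(\hat{\theta} - \alpha + u\,\tfrac{2\alpha}{R_x - 1}\Big)
           \cos\!\Big(\hat{\gamma} - \alpha + v\,\tfrac{2\alpha}{R_y - 1}\Big) + l
\\[4pt]
y(u,v) = h \cot\!\Big(\hat{\theta} - \alpha + u\,\tfrac{2\alpha}{R_x - 1}\Big)
           \sin\!\Big(\hat{\gamma} - \alpha + v\,\tfrac{2\alpha}{R_y - 1}\Big) + d
```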
- the front view image such as that shown in FIG. 4a can be transformed into a top view image as shown in FIG. 4b.
- each frame of front-view image in the front-view sequence can be converted to obtain a top view; if there are N frames of front-view images, N top views are obtained. These top views actually overlap one another; in particular, two adjacent top views overlap over most of their area. Since the coordinate information of the pixels in the top views is obtained in the above process, the top views can be stitched one by one based on the position information of their pixels, and the complete top view is finally obtained.
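- as an illustration of the conversion step, the sketch below warps a front view to a top view with a planar homography (a practical stand-in for the full inverse perspective model) so that the result can be stitched by pixel coordinates; the four ground correspondences are invented placeholders:

```python
# Hedged sketch: front view -> top view via a ground-plane homography.
import numpy as np
import cv2

front = cv2.imread("front_view.png")  # placeholder file name

# Four pixels of a ground rectangle in the front view and their target
# positions in the top view (assumed; in practice derived from calibration).
src = np.float32([[420, 500], [860, 500], [1180, 710], [100, 710]])
dst = np.float32([[300,  40], [500,  40], [500, 440], [300, 440]])

H = cv2.getPerspectiveTransform(src, dst)
top = cv2.warpPerspective(front, H, (800, 480))  # (width, height) of the top view
cv2.imwrite("top_view.png", top)
```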
- step 104, that is, "identifying map elements on the top view to obtain high-precision map data", will be described in detail below in conjunction with an embodiment.
- specifically, road information identification can be performed on the top view obtained in step 103; then the recognized road information can be superimposed on the top view to obtain high-precision map data.
- the road information may include lane lines, lane line types (such as white solid lines, single yellow solid lines, double yellow solid lines, yellow dashed-solid lines, diversion lines, and yellow no-stopping lines), colors, lane guide arrow information, lane types (such as main lanes, bus lanes, and tidal lanes), and the like.
- a semantic segmentation model based on a deep neural network, such as DeepLabV3, can be used to segment the road information. It is also possible to use deep-neural-network-based image recognition technology, such as Faster R-CNN (regions with CNN features), to identify the above road information.
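- for orientation, the sketch below runs an off-the-shelf DeepLabV3 model from torchvision on a top view; a production pipeline would instead use a model trained on lane-marking and road-element classes rather than these generic pretrained weights:

```python
# Hedged sketch: per-pixel segmentation of a top view with DeepLabV3.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("top_view.png").convert("RGB")   # placeholder file name
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]
labels = out.argmax(dim=1)[0]   # (H, W) tensor of per-pixel class ids
```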
- the recognition based on the top view above is mainly the recognition of ground elements, that is, the recognition of road information.
- for other map elements, such as traffic signs and buildings, recognition may likewise be performed; this part may adopt methods in the prior art, which are not limited in the present disclosure.
- the operator can directly compare the top view data with the superimposed road information, correct the problematic data, and produce the final high-precision map data.
- FIG. 5 is a structural diagram of a high-precision map production device provided by an embodiment of the present disclosure.
- the device 500 may include: an acquisition unit 510 , a registration unit 520 , a conversion unit 530 and an identification unit 540 .
- the main functions of each component unit are as follows:
- the acquiring unit 510 is configured to acquire point cloud data and front-view image data respectively collected by the collection device at each position point, to obtain a point cloud sequence and a front-view image sequence.
- the above-mentioned collection device at least includes an image collection device for collecting front-view images and a laser radar device for collecting point cloud data.
- clock synchronization and/or joint calibration can be performed on the acquisition device in advance.
- the registration unit 520 is configured to perform registration between the front view image and the point cloud data of the point cloud sequence and the front view image sequence.
- the conversion unit 530 is configured to convert the front view image sequence into a top view according to the registration result and determine the coordinate information of each pixel in the top view.
- the identification unit 540 is configured to identify map elements in the top view to obtain high-precision map data.
- the registration unit 520 may include a first registration subunit 521 and a projection subunit 522, and may further include a correction subunit 523, a reference subunit 524, a second registration subunit 525 and a third registration subunit 526.
- the first registration subunit 521 is configured to register adjacent images in the front-view image sequence to obtain a set of corresponding pixels in the adjacent images.
- the projection subunit 522 is used to project the point cloud data to the set to obtain the coordinate information of each pixel in the set.
- the correction subunit 523 is configured to perform distortion correction on the point cloud data according to the amount of motion of the lidar device during the one revolution in which the point cloud data are collected, and to provide the corrected data to the projection subunit 522.
- the reference subunit 524 is configured to determine a reference point cloud in the point cloud sequence.
- the second registration subunit 525 is configured to register other point cloud data frame by frame based on the reference point cloud, and provide the registered point cloud data to the projection subunit 522 .
- in this case, the correction subunit 523 first corrects the distortion of the point cloud data; the reference subunit 524 then determines the reference point cloud, and the second registration subunit 525 performs the registration.
- the reference subunit 524 may use the point cloud of the first frame in the point cloud sequence as the reference point cloud. However, as a preferred embodiment, the reference subunit 524 is specifically configured to provide the first frame in the point cloud sequence to the second registration subunit 525 as the reference for frame-by-frame registration of the other point cloud data, and to obtain the registration result from the second registration subunit 525; the frame of point cloud in the point cloud sequence that has the highest proportion of registration points with the preceding and following frames of point cloud data is then used as the reference point cloud.
- the second registration subunit 525 is specifically configured to: learn a transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud; transform the reference point cloud using the transformation matrix to obtain the registered adjacent point cloud; and take the adjacent point cloud as the new reference and repeat the learning operation, until all point cloud data in the point cloud sequence are registered.
- when the second registration subunit 525 learns the transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud, it is specifically configured to:
- use the iterative closest point (ICP) algorithm to learn the transformation matrix between the two frames of point clouds from the reference point cloud and the adjacent point cloud;
- where the loss function of the ICP algorithm is: the average or weighted average of the distances between each transformed point, obtained by transforming each point of the reference point cloud according to the transformation matrix, and the point nearest to that transformed point in the adjacent point cloud.
- the third registration subunit 526 is configured to determine a set of corresponding point clouds in adjacent images.
- when determining the weighted average, the weight used for each distance is determined according to whether the point in the reference point cloud belongs to the set formed by the corresponding point clouds.
- the projection subunit 522 is specifically configured to project the coordinates of the point cloud data onto the sets to obtain the coordinate information of the point clouds corresponding to the pixels in the front-view images, and to convert that coordinate information into the coordinate information of the pixels according to the rotation-and-translation matrix from the lidar coordinate system to the image acquisition device coordinate system.
- the conversion unit 530 is specifically configured to convert each frame of the front view image in the front view sequence into a top view based on inverse perspective transformation; perform matching on the top view according to the coordinate information of the pixels in the front view image, and determine the coordinate information of each pixel in the top view.
- the identification unit 540 is specifically used for identifying road information on the top view; superimposing the recognized road information on the top view to obtain high-precision map data.
- the road information may include lane lines, lane line types (such as white solid lines, single yellow solid lines, double yellow solid lines, yellow dashed-solid lines, diversion lines, and yellow no-stopping lines), colors, lane guide arrow information, lane types (such as main lanes, bus lanes, and tidal lanes), and the like.
- a semantic segmentation model based on a deep neural network, such as DeepLabV3, can be used to segment the road information. It is also possible to use deep-neural-network-based image recognition technology, such as Faster R-CNN (regions with CNN features), to identify the above road information.
- each embodiment in this specification is described in a progressive manner; the same or similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others.
- the description of the apparatus embodiment is relatively brief; for relevant parts, refer to the corresponding description of the method embodiment.
- the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
- FIG. 6 is a block diagram of an electronic device for implementing the high-precision map production method according to an embodiment of the present disclosure.
- the electronic device is intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
- electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices.
- the components shown herein, their connections and relationships, and their functions, are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
- the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data necessary for the operation of the device 600.
- the computing unit 601, ROM 602, and RAM 603 are connected to each other through a bus 604.
- An input/output (I/O) interface 605 is also connected to the bus 604 .
- multiple components of the device 600 are connected to the I/O interface 605, including: an input unit 606, such as a keyboard or a mouse; an output unit 607, such as various types of displays and speakers; a storage unit 608, such as a magnetic disk or an optical disk; and a communication unit 609, such as a network card, a modem, or a wireless communication transceiver.
- the communication unit 609 allows the device 600 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
- the computing unit 601 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, and the like.
- the computing unit 601 executes the various methods and processes described above, for example, the high-precision map production method.
- in some embodiments, the high-precision map production method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608.
- part or all of the computer program may be loaded and/or installed on the device 600 via the ROM 602 and/or the communication unit 609.
- when the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the high-precision map production method described above can be performed.
- alternatively, in other embodiments, the computing unit 601 may be configured in any other appropriate way (for example, by means of firmware) to execute the high-precision map production method.
- various implementations of the systems and techniques described herein can be realized in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
- the programmable processor can be a special-purpose or general-purpose programmable processor, which can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
- program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, so that when executed by the processor or controller, the program code causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
- the program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
- more specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- to provide interaction with a user, the systems and techniques described herein can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which the user can provide input to the computer.
- other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, voice input, or tactile input).
- the systems and techniques described herein can be implemented in a computing system that includes back-end components (e.g., as a data server), or a computing system that includes middleware components (e.g., an application server), or a computing system that includes front-end components (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
- the components of the system can be interconnected by any form or medium of digital data communication, eg, a communication network. Examples of communication networks include: Local Area Network (LAN), Wide Area Network (WAN) and the Internet.
- a computer system may include clients and servers.
- Clients and servers are generally remote from each other and typically interact through a communication network.
- the relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
- the server can be a cloud server, also known as a cloud computing server or cloud host, which is a host product in the cloud computing service system that solves the defects of difficult management and weak business scalability present in traditional physical host and VPS (virtual private server) services.
- the server can also be a server of a distributed system, or a server combined with a blockchain.
- it should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above.
- the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved; no limitation is imposed herein.
Abstract
Description
Claims (25)
- A method for producing a high-precision map, comprising: acquiring point cloud data and front-view image data respectively collected by a collection device at each position point, to obtain a point cloud sequence and a front-view image sequence; performing registration between the front-view images and the point cloud data of the point cloud sequence and the front-view image sequence; converting the front-view image sequence into a top view according to the registration result, and determining coordinate information of each pixel in the top view; and identifying map elements on the top view to obtain high-precision map data.
- The method according to claim 1, wherein performing registration between the front-view images and the point cloud data of the point cloud sequence and the front-view image sequence comprises: registering adjacent images in the front-view image sequence to obtain sets of corresponding pixels in the adjacent images; and projecting the point cloud data onto the sets to obtain coordinate information of each pixel in the sets.
- The method according to claim 2, wherein, before projecting the point cloud data onto the sets, the method further comprises: performing distortion correction on the point cloud data according to the amount of motion of the lidar device collecting the point cloud data during one revolution.
- The method according to claim 2, wherein, before projecting the point cloud data onto the sets, the method further comprises: determining a reference point cloud in the point cloud sequence; and registering the other point cloud data frame by frame with the reference point cloud as the reference.
- The method according to claim 4, wherein determining the reference point cloud in the point cloud sequence comprises: taking the first frame in the point cloud sequence as the reference and registering the other point cloud data frame by frame; and taking, as the reference point cloud, the frame of point cloud in the point cloud sequence that has the highest proportion of registration points with the preceding and following frames of point cloud data.
- The method according to claim 4 or 5, wherein registering the other point cloud data frame by frame comprises: learning a transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud; transforming the point cloud serving as the reference using the transformation matrix to obtain the registered adjacent point cloud; and taking the adjacent point cloud as the new reference and returning to the step of learning a transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud, until the registration of all point cloud data in the point cloud sequence is completed.
- The method according to claim 6, wherein learning the transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud comprises: learning the transformation matrix between the two frames of point clouds from the point cloud serving as the reference and the adjacent point cloud using an iterative closest point (ICP) algorithm; wherein the loss function of the ICP algorithm is: the average or weighted average of the distances between each transformed point, obtained by transforming each point in the point cloud serving as the reference according to the transformation matrix, and the point nearest to that transformed point in the adjacent point cloud.
- The method according to claim 7, wherein, before determining the reference point cloud in the point cloud sequence, the method further comprises: determining sets of corresponding point clouds in adjacent images; and, when determining the weighted average, the weight used for each distance is determined according to whether the point in the point cloud serving as the reference belongs to the set of corresponding point clouds.
- The method according to claim 2, wherein projecting the point cloud data onto the sets to obtain the coordinate information of each pixel in the sets comprises: projecting the coordinates of the point cloud data onto the sets to obtain coordinate information of the point clouds corresponding to the pixels in the front-view images; and converting the coordinate information of the point clouds corresponding to the pixels in the front-view images into coordinate information of the pixels according to the rotation-and-translation matrix from the lidar coordinate system to the image acquisition device coordinate system.
- The method according to claim 1, wherein converting the front-view image sequence into a top view according to the registration result and determining the coordinate information of each pixel in the top view comprises: converting each frame of front-view image in the front-view sequence into a respective top view based on inverse perspective transformation; matching on the corresponding top views according to the coordinate information of the pixels in the front-view images, to determine the coordinate information of the pixels in the top views; and stitching the top views according to the coordinate information of the pixels in the top views to obtain a final top view.
- The method according to claim 1, wherein identifying map elements on the top view to obtain high-precision map data comprises: identifying road information on the top view; and superimposing the identified road information onto the top view for display, to obtain the high-precision map data.
- An apparatus for producing a high-precision map, comprising: an acquiring unit configured to acquire point cloud data and front-view image data respectively collected by a collection device at each position point, to obtain a point cloud sequence and a front-view image sequence; a registration unit configured to perform registration between the front-view images and the point cloud data of the point cloud sequence and the front-view image sequence; a conversion unit configured to convert the front-view image sequence into a top view according to the registration result and determine coordinate information of each pixel in the top view; and an identification unit configured to identify map elements on the top view to obtain high-precision map data.
- The apparatus according to claim 12, wherein the registration unit comprises: a first registration subunit configured to register adjacent images in the front-view image sequence to obtain sets of corresponding pixels in the adjacent images; and a projection subunit configured to project the point cloud data onto the sets to obtain coordinate information of each pixel in the sets.
- The apparatus according to claim 13, wherein the registration unit further comprises: a correction subunit configured to perform distortion correction on the point cloud data according to the amount of motion of the lidar device collecting the point cloud data during one revolution, and provide the corrected data to the projection subunit.
- The apparatus according to claim 13, wherein the registration unit further comprises: a reference subunit configured to determine a reference point cloud in the point cloud sequence; and a second registration subunit configured to register the other point cloud data frame by frame with the reference point cloud as the reference, and provide the registered point cloud data to the projection subunit.
- The apparatus according to claim 15, wherein the reference subunit is specifically configured to provide the first frame in the point cloud sequence to the second registration subunit as the reference for frame-by-frame registration of the other point cloud data, and obtain the registration result from the second registration subunit; and to take, as the reference point cloud, the frame of point cloud in the point cloud sequence that has the highest proportion of registration points with the preceding and following frames of point cloud data.
- The apparatus according to claim 15 or 16, wherein the second registration subunit is specifically configured to: learn a transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud; transform the point cloud serving as the reference using the transformation matrix to obtain the registered adjacent point cloud; and take the adjacent point cloud as the new reference and return to the operation of learning a transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud, until the registration of all point cloud data in the point cloud sequence is completed.
- The apparatus according to claim 17, wherein, when learning the transformation matrix between two frames of point clouds from the point cloud serving as the reference and its not-yet-registered adjacent point cloud, the second registration subunit is specifically configured to: learn the transformation matrix between the two frames of point clouds from the point cloud serving as the reference and the adjacent point cloud using an iterative closest point (ICP) algorithm; wherein the loss function of the ICP algorithm is: the average or weighted average of the distances between each transformed point, obtained by transforming each point in the point cloud serving as the reference according to the transformation matrix, and the point nearest to that transformed point in the adjacent point cloud.
- The apparatus according to claim 18, wherein the registration unit further comprises: a third registration subunit configured to determine sets of corresponding point clouds in adjacent images; and, when the second registration subunit determines the weighted average, the weight used for each distance is determined according to whether the point in the point cloud serving as the reference belongs to the set of corresponding point clouds.
- The apparatus according to claim 13, wherein the projection subunit is specifically configured to project the coordinates of the point cloud data onto the sets to obtain coordinate information of the point clouds corresponding to the pixels in the front-view images, and to convert the coordinate information of the point clouds corresponding to the pixels in the front-view images into coordinate information of the pixels according to the rotation-and-translation matrix from the lidar coordinate system to the image acquisition device coordinate system.
- The apparatus according to claim 12, wherein the conversion unit is specifically configured to convert each frame of front-view image in the front-view sequence into a respective top view based on inverse perspective transformation; match on the corresponding top views according to the coordinate information of the pixels in the front-view images, to determine the coordinate information of the pixels in the top views; and stitch the top views according to the coordinate information of the pixels in the top views to obtain a final top view.
- The apparatus according to claim 12, wherein the identification unit is specifically configured to identify road information on the top view, and superimpose the identified road information onto the top view for display, to obtain the high-precision map data.
- An electronic device, comprising: at least one processor; and a memory communicatively connected with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method according to any one of claims 1-11.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the method according to any one of claims 1-11.
- A computer program product, comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21916644.4A EP4174786A4 (en) | 2021-06-08 | 2021-11-17 | METHOD AND APPARATUS FOR GENERATING HIGH-PRECISION MAP, DEVICE AND COMPUTER STORAGE MEDIUM |
KR1020227023591A KR20220166779A (ko) | 2021-06-08 | 2021-11-17 | 고정밀 지도의 생산 방법, 장치, 설비 및 컴퓨터 저장 매체 |
JP2022541610A JP7440005B2 (ja) | 2021-06-08 | 2021-11-17 | 高精細地図の作成方法、装置、デバイス及びコンピュータプログラム |
US17/758,692 US20240185379A1 (en) | 2021-06-08 | 2021-11-17 | Method for generating high definition map, device and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110637791.9 | 2021-06-08 | ||
CN202110637791.9A CN113409459B (zh) | 2021-06-08 | 2021-06-08 | 高精地图的生产方法、装置、设备和计算机存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022257358A1 true WO2022257358A1 (zh) | 2022-12-15 |
Family
ID=77676974
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/131180 WO2022257358A1 (zh) | 2021-06-08 | 2021-11-17 | 高精地图的生产方法、装置、设备和计算机存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240185379A1 (zh) |
EP (1) | EP4174786A4 (zh) |
JP (1) | JP7440005B2 (zh) |
KR (1) | KR20220166779A (zh) |
CN (1) | CN113409459B (zh) |
WO (1) | WO2022257358A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965756A (zh) * | 2023-03-13 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | 地图构建方法、设备、驾驶设备和介质 |
CN116467323A (zh) * | 2023-04-11 | 2023-07-21 | 北京中科东信科技有限公司 | 一种基于路侧设施的高精地图的更新方法及系统 |
CN116863432A (zh) * | 2023-09-04 | 2023-10-10 | 之江实验室 | 基于深度学习的弱监督激光可行驶区域预测方法和系统 |
CN117934573A (zh) * | 2024-03-25 | 2024-04-26 | 北京华航唯实机器人科技股份有限公司 | 点云数据的配准方法、装置及电子设备 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111311743B (zh) * | 2020-03-27 | 2023-04-07 | 北京百度网讯科技有限公司 | 三维重建精度测试方法、测试装置和电子设备 |
CN113409459B (zh) * | 2021-06-08 | 2022-06-24 | 北京百度网讯科技有限公司 | 高精地图的生产方法、装置、设备和计算机存储介质 |
CN114419165B (zh) * | 2022-01-17 | 2024-01-12 | 北京百度网讯科技有限公司 | 相机外参校正方法、装置、电子设备和存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678689A (zh) * | 2015-12-31 | 2016-06-15 | 百度在线网络技术(北京)有限公司 | 高精地图数据配准关系确定方法及装置 |
CN110160502A (zh) * | 2018-10-12 | 2019-08-23 | 腾讯科技(深圳)有限公司 | 地图要素提取方法、装置及服务器 |
CN112105890A (zh) * | 2019-01-30 | 2020-12-18 | 百度时代网络技术(北京)有限公司 | 用于自动驾驶车辆的基于rgb点云的地图生成系统 |
CN112434706A (zh) * | 2020-11-13 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种基于图像点云融合的高精度交通要素目标提取方法 |
CN112434119A (zh) * | 2020-11-13 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种基于异构数据融合的高精度地图生产装置 |
US20210148722A1 (en) * | 2019-11-20 | 2021-05-20 | Thinkware Corporation | Method, apparatus, computer program, and computer-readable recording medium for producing high-definition map |
CN113409459A (zh) * | 2021-06-08 | 2021-09-17 | 北京百度网讯科技有限公司 | 高精地图的生产方法、装置、设备和计算机存储介质 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976429B (zh) * | 2010-10-27 | 2012-11-14 | 南京大学 | 基于游弋图像的水面鸟瞰图成像方法 |
CN106910217A (zh) * | 2017-03-17 | 2017-06-30 | 驭势科技(北京)有限公司 | 视觉地图建立方法、计算装置、计算机存储介质和智能车辆 |
US10452927B2 (en) * | 2017-08-09 | 2019-10-22 | Ydrive, Inc. | Object localization within a semantic domain |
CN108230247B (zh) * | 2017-12-29 | 2019-03-15 | 达闼科技(北京)有限公司 | 基于云端的三维地图的生成方法、装置、设备及计算机可读的存储介质 |
CN109059942B (zh) * | 2018-08-22 | 2021-12-14 | 中国矿业大学 | 一种井下高精度导航地图构建系统及构建方法 |
CN108801171B (zh) * | 2018-08-23 | 2020-03-31 | 南京航空航天大学 | 一种隧道断面形变分析方法及装置 |
CN109543520B (zh) * | 2018-10-17 | 2021-05-28 | 天津大学 | 一种面向语义分割结果的车道线参数化方法 |
CN111160360B (zh) * | 2018-11-07 | 2023-08-01 | 北京四维图新科技股份有限公司 | 图像识别方法、装置及系统 |
CN110568451B (zh) * | 2019-08-02 | 2021-06-18 | 北京三快在线科技有限公司 | 一种高精度地图中道路交通标线的生成方法和装置 |
CN111311709B (zh) * | 2020-02-05 | 2023-06-20 | 北京三快在线科技有限公司 | 一种生成高精地图的方法及装置 |
CN111508021B (zh) * | 2020-03-24 | 2023-08-18 | 广州视源电子科技股份有限公司 | 一种位姿确定方法、装置、存储介质及电子设备 |
CN111652179B (zh) * | 2020-06-15 | 2024-01-09 | 东风汽车股份有限公司 | 基于点线特征融合激光的语义高精地图构建与定位方法 |
CN111784836B (zh) * | 2020-06-28 | 2024-06-04 | 北京百度网讯科技有限公司 | 高精地图生成方法、装置、设备及可读存储介质 |
- 2021
- 2021-06-08 CN CN202110637791.9A patent/CN113409459B/zh active Active
- 2021-11-17 EP EP21916644.4A patent/EP4174786A4/en active Pending
- 2021-11-17 WO PCT/CN2021/131180 patent/WO2022257358A1/zh active Application Filing
- 2021-11-17 JP JP2022541610A patent/JP7440005B2/ja active Active
- 2021-11-17 US US17/758,692 patent/US20240185379A1/en active Pending
- 2021-11-17 KR KR1020227023591A patent/KR20220166779A/ko unknown
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105678689A (zh) * | 2015-12-31 | 2016-06-15 | 百度在线网络技术(北京)有限公司 | 高精地图数据配准关系确定方法及装置 |
CN110160502A (zh) * | 2018-10-12 | 2019-08-23 | 腾讯科技(深圳)有限公司 | 地图要素提取方法、装置及服务器 |
CN112105890A (zh) * | 2019-01-30 | 2020-12-18 | 百度时代网络技术(北京)有限公司 | 用于自动驾驶车辆的基于rgb点云的地图生成系统 |
US20210148722A1 (en) * | 2019-11-20 | 2021-05-20 | Thinkware Corporation | Method, apparatus, computer program, and computer-readable recording medium for producing high-definition map |
CN112434706A (zh) * | 2020-11-13 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种基于图像点云融合的高精度交通要素目标提取方法 |
CN112434119A (zh) * | 2020-11-13 | 2021-03-02 | 武汉中海庭数据技术有限公司 | 一种基于异构数据融合的高精度地图生产装置 |
CN113409459A (zh) * | 2021-06-08 | 2021-09-17 | 北京百度网讯科技有限公司 | 高精地图的生产方法、装置、设备和计算机存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP4174786A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115965756A (zh) * | 2023-03-13 | 2023-04-14 | 安徽蔚来智驾科技有限公司 | 地图构建方法、设备、驾驶设备和介质 |
CN115965756B (zh) * | 2023-03-13 | 2023-06-06 | 安徽蔚来智驾科技有限公司 | 地图构建方法、设备、驾驶设备和介质 |
CN116467323A (zh) * | 2023-04-11 | 2023-07-21 | 北京中科东信科技有限公司 | 一种基于路侧设施的高精地图的更新方法及系统 |
CN116467323B (zh) * | 2023-04-11 | 2023-12-19 | 北京中科东信科技有限公司 | 一种基于路侧设施的高精地图的更新方法及系统 |
CN116863432A (zh) * | 2023-09-04 | 2023-10-10 | 之江实验室 | 基于深度学习的弱监督激光可行驶区域预测方法和系统 |
CN116863432B (zh) * | 2023-09-04 | 2023-12-22 | 之江实验室 | 基于深度学习的弱监督激光可行驶区域预测方法和系统 |
CN117934573A (zh) * | 2024-03-25 | 2024-04-26 | 北京华航唯实机器人科技股份有限公司 | 点云数据的配准方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
KR20220166779A (ko) | 2022-12-19 |
JP2023533625A (ja) | 2023-08-04 |
EP4174786A1 (en) | 2023-05-03 |
CN113409459B (zh) | 2022-06-24 |
US20240185379A1 (en) | 2024-06-06 |
JP7440005B2 (ja) | 2024-02-28 |
CN113409459A (zh) | 2021-09-17 |
EP4174786A4 (en) | 2024-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022257358A1 (zh) | 高精地图的生产方法、装置、设备和计算机存储介质 | |
CN112894832B (zh) | 三维建模方法、装置、电子设备和存储介质 | |
WO2021196294A1 (zh) | 一种跨视频人员定位追踪方法、系统及设备 | |
WO2020073936A1 (zh) | 地图要素提取方法、装置及服务器 | |
CN111174799B (zh) | 地图构建方法及装置、计算机可读介质、终端设备 | |
US20210366155A1 (en) | Method and Apparatus for Detecting Obstacle | |
EP4116462A2 (en) | Method and apparatus of processing image, electronic device, storage medium and program product | |
CN110675450B (zh) | 基于slam技术的正射影像实时生成方法及系统 | |
WO2021218123A1 (zh) | 用于检测车辆位姿的方法及装置 | |
WO2022262160A1 (zh) | 传感器标定方法及装置、电子设备和存储介质 | |
CN104361628A (zh) | 一种基于航空倾斜摄影测量的三维实景建模系统 | |
US20210248390A1 (en) | Road marking recognition method, map generation method, and related products | |
CN103426165A (zh) | 一种地面激光点云与无人机影像重建点云的精配准方法 | |
CN112799096B (zh) | 基于低成本车载二维激光雷达的地图构建方法 | |
CN108629829A (zh) | 一种球幕相机与深度相机结合的三维建模方法和系统 | |
WO2023065657A1 (zh) | 地图构建方法、装置、设备、存储介质及程序 | |
CN112947526A (zh) | 一种无人机自主降落方法和系统 | |
CN116188893A (zh) | 基于bev的图像检测模型训练及目标检测方法和装置 | |
Gao et al. | Multi-source data-based 3D digital preservation of largescale ancient chinese architecture: A case report | |
WO2022246812A1 (zh) | 定位方法、装置、电子设备及存储介质 | |
TW202132804A (zh) | 地圖建構系統以及地圖建構方法 | |
US9240055B1 (en) | Symmetry-based interpolation in images | |
CN113129422A (zh) | 一种三维模型构建方法、装置、存储介质和计算机设备 | |
CN107784666B (zh) | 基于立体影像的地形地物三维变化检测和更新方法 | |
CN115937449A (zh) | 高精地图生成方法、装置、电子设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2022541610 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 17758692 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21916644 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021916644 Country of ref document: EP Effective date: 20230127 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |