WO2023123837A1 - Method and apparatus for map generation, electronic device, and storage medium - Google Patents

Method and apparatus for map generation, electronic device, and storage medium

Info

Publication number
WO2023123837A1
WO2023123837A1 · PCT/CN2022/094862 · CN2022094862W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
trajectory data
base map
bird
vehicle
Prior art date
Application number
PCT/CN2022/094862
Other languages
English (en)
Chinese (zh)
Inventor
夏志勋
冯洁
王梓里
Original Assignee
广州小鹏自动驾驶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州小鹏自动驾驶科技有限公司 filed Critical 广州小鹏自动驾驶科技有限公司
Publication of WO2023123837A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • The present application relates to the field of map technology, and in particular to a map generation method and apparatus, an electronic device, and a storage medium.
  • The embodiments of the present application propose a map generation method and apparatus, an electronic device, and a storage medium, which can quickly generate maps for areas with weak GNSS or GPS signals and facilitate parking for users.
  • According to one aspect, a method for generating a map includes: acquiring generated first point cloud trajectory data, where the first point cloud trajectory data is the point cloud trajectory data corresponding to a second type of vehicle; acquiring determined second point cloud trajectory data, where the second point cloud trajectory data is the point cloud trajectory data corresponding to a first type of vehicle; generating a point cloud base map according to the first point cloud trajectory data; and, using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
  • In some embodiments, the first point cloud trajectory data is generated as follows: a plurality of trajectory data are acquired, each including vehicle pose information and the bird's-eye view stitched image associated with that pose information; road element recognition is performed on each bird's-eye view stitched image to determine the road elements in each; and, according to the road elements in each bird's-eye view stitched image and the associated vehicle pose information, the first point cloud trajectory data corresponding to each trajectory data is generated. Generating the point cloud base map according to the first point cloud trajectory data then includes splicing the first point cloud trajectory data corresponding to the plurality of trajectory data to obtain the point cloud base map.
  • According to another aspect, a map generation apparatus includes: a data acquisition module, configured to acquire the generated first point cloud trajectory data, where the first point cloud trajectory data is the point cloud trajectory data corresponding to the second type of vehicle, and to acquire the determined second point cloud trajectory data, where the second point cloud trajectory data is the point cloud trajectory data corresponding to the first type of vehicle;
  • a splicing module, configured to generate a point cloud base map according to the first point cloud trajectory data;
  • a fusion module, configured to take the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • In some embodiments, the data acquisition module includes an acquisition module, an identification module, and a generation module. The acquisition module is configured to acquire a plurality of trajectory data, each including vehicle pose information and the bird's-eye view stitched image associated with that pose information; the identification module is configured to perform road element recognition on each bird's-eye view stitched image and determine the road elements in each; the generation module is configured to generate, according to the road elements in each bird's-eye view stitched image and the associated vehicle pose information, the first point cloud trajectory data corresponding to each trajectory data; and the splicing module splices the first point cloud trajectory data corresponding to the multiple trajectory data to obtain the point cloud base map.
  • According to another aspect, an electronic device includes a processor and a memory storing computer-readable instructions that, when executed by the processor, implement the map generation method described above.
  • According to another aspect, a computer-readable storage medium stores computer-readable instructions that, when executed by a processor, implement the map generation method described above.
  • In the solution of the present application, the first point cloud trajectory data is constructed from the vehicle's bird's-eye view stitched images and the vehicle's pose information, the point cloud base map is generated according to the first point cloud trajectory data, and, with the point cloud base map as the alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate a map.
  • The solution of the present application can be applied to generate maps for areas with weak GNSS or GPS signals, so as to facilitate parking for users. For example, a map of an indoor parking lot can be generated, solving the problem in the related art that users lose their way because no map of the indoor parking lot exists.
  • Since the point cloud base map is constructed from the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map, the efficiency of map generation can be improved while the accuracy and precision of the map are ensured.
  • Since the point cloud base map serving as the alignment medium is generated first and basically reflects the overall geographic environment area, the alignment success rate of the second point cloud trajectory data with the point cloud base map is ensured and the probability of misalignment is reduced.
  • Fig. 1 is a schematic diagram showing an application scenario of the solution of the present application according to an embodiment of the present application.
  • Fig. 2 is a flowchart of a method for generating a map according to an embodiment of the present application.
  • FIG. 3 shows a schematic diagram of splicing images along a bird's-eye view to obtain a bird's-eye view stitched image.
  • FIG. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching, along the bird's-eye view, the images collected along a vehicle trajectory.
  • Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to the bird's-eye view stitched image according to a specific embodiment.
  • Fig. 6 is a schematic diagram showing the projection of the point cloud model of each road element in the point cloud trajectory data on the vertical projection plane according to an embodiment of the present application.
  • Fig. 7 is a flow chart of generating first point cloud trajectory data according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of splicing two first point cloud trajectory data according to an embodiment.
  • Fig. 10 is a flowchart showing steps before step 250 according to an embodiment of the present application.
  • Fig. 11 is a flow chart of updating a point cloud base map according to candidate point cloud trajectory data according to an embodiment of the present application.
  • Fig. 12 is a flowchart of a method for generating a map according to a specific embodiment of the present application.
  • Fig. 13 is a block diagram of an apparatus for generating a map according to an embodiment of the present application.
  • Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • For example, first information may also be called second information, and similarly, second information may also be called first information.
  • a feature defined as “first” and “second” may explicitly or implicitly include one or more of these features.
  • “plurality” means two or more, unless otherwise specifically defined.
  • Fig. 1 is a schematic diagram showing an application scenario of the solution according to an embodiment of the present application.
  • the application scenario includes a vehicle 110 , a server 120 and a terminal 130 .
  • the vehicle 110 and the server 120 may establish a communication connection through a wired or wireless network
  • the terminal 130 and the server 120 may establish a communication connection through a wired or wireless network.
  • the vehicle 110 can report its own track data to the server 120, so that the server 120 can generate a map according to the method of the embodiment of the present application based on the track data of the vehicle.
  • the server 120 may be an independent physical server or a cloud server, which is not specifically limited here.
  • After the server 120 generates the point cloud base map and/or map, it can also send them to the terminal 130, where the user can review, edit, and modify the point cloud base map and/or map; the server 120 then receives the reviewed and edited point cloud base map and/or map back from the terminal 130.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, etc., which are not specifically limited here.
  • the server 120 may also send the generated map to each vehicle 110 for display on the vehicle-mounted display device of the vehicle 110 , or send it to the terminal 130 where the user is located.
  • An embodiment of the present application provides a method for generating a map, which is described below.
  • A vehicle equipped with neither a lidar nor an intelligent perception module may be referred to as a vehicle of the second type.
  • the first point cloud trajectory data may refer to point cloud trajectory data corresponding to the second type of vehicle.
  • The first point cloud trajectory data may be a point cloud trajectory constructed from the bird's-eye view stitched images of the second type of vehicle and the associated vehicle pose information.
  • For the second type of vehicle, a bird's-eye view stitched image can be obtained from the images collected by its image acquisition devices, and the pose information of the vehicle can be obtained from the data collected by a GNSS module (or GPS module), an IMU, and a wheel speedometer.
  • a vehicle equipped with a laser radar and/or an intelligent perception module may be referred to as a vehicle of the first type.
  • the second point cloud trajectory data may refer to point cloud trajectory data corresponding to the first type of vehicle.
  • The second point cloud trajectory data can be constructed by combining the information collected by the lidar and/or the environment perception and positioning results of the intelligent perception module with the visual information collected by the image acquisition devices and the data from the IMU, wheel speedometer, and GNSS (or GPS) module.
  • In the solution of the present application, the first point cloud trajectory data is constructed first, the point cloud base map is generated according to the first point cloud trajectory data, and, using the point cloud base map as an alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate a map, so that maps can be generated quickly for areas with weak GNSS or GPS signals.
  • Since the point cloud base map is constructed from the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map, the efficiency of map generation can be improved while the accuracy and precision of the map are ensured.
  • Fig. 2 is a flowchart of a method for generating a map according to an embodiment of the present application.
  • the method can be executed by a computer device with processing capabilities, such as a server, a cloud server, etc., and is not specifically limited here.
  • the method at least includes steps 210 to 250, which are described in detail as follows:
  • Step 210: acquiring a plurality of trajectory data, the trajectory data including vehicle pose information and a bird's-eye view stitched image associated with the vehicle pose information.
  • the trajectory data is collected by the vehicle during driving, wherein multiple trajectory data can come from one vehicle or multiple vehicles.
  • the plurality of track data may be collected during multiple driving of the vehicle.
  • The pose information of the vehicle can be determined from the information collected by the GNSS (Global Navigation Satellite System) module or GPS module, the IMU (Inertial Measurement Unit), and the wheel speedometer in the vehicle.
  • the IMU is a module composed of various sensors such as a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer.
  • the wheel speedometer is used to detect the distance that the wheel moves within a certain period of time, so as to calculate the change of the relative pose (position and heading) of the vehicle.
  • the pose information of the vehicle may indicate the position information of the vehicle and the attitude information of the vehicle, and the attitude of the vehicle may include a pitch angle, a yaw angle, and a roll angle of the vehicle.
  • the location information of the vehicle can be determined by the GNSS module according to the collected GNSS signals, or by the GPS module according to the collected GPS signals.
  • In areas where the GNSS or GPS signal falls below a set threshold, the vehicle can use the position information of the last location point with adequate signal, together with the information collected by the wheel speedometer and the IMU module, to perform dead reckoning and obtain the position information of each location point along the drive.
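  • As a rough illustration of the dead reckoning described above, the following Python sketch propagates a 2D pose from wheel-speedometer distances and IMU yaw rates (the function and variable names are illustrative, not taken from the patent):

```python
import math

def dead_reckon(pose, wheel_distance, yaw_rate, dt):
    """Propagate a 2D vehicle pose (x, y, heading) one step forward using
    the distance reported by the wheel speedometer and the IMU yaw rate."""
    x, y, heading = pose
    heading += yaw_rate * dt                  # integrate the IMU yaw rate
    x += wheel_distance * math.cos(heading)   # advance along the new heading
    y += wheel_distance * math.sin(heading)
    return (x, y, heading)

# Starting from the last well-localized point, integrate odometry samples
# collected while the GNSS/GPS signal is below the threshold.
pose = (10.0, 5.0, 0.0)  # x [m], y [m], heading [rad] from the last good fix
samples = [(0.45, 0.02, 0.1), (0.44, 0.05, 0.1)]  # (distance [m], yaw rate [rad/s], dt [s])
for distance, yaw_rate, dt in samples:
    pose = dead_reckon(pose, distance, yaw_rate, dt)
```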
  • The bird's-eye view stitched image is obtained by stitching, along the bird's-eye view, images collected by the vehicle from at least two viewing angles.
  • Multiple image capture devices can be installed in the vehicle to capture images of the surrounding environment of the vehicle from multiple perspectives during driving.
  • the image collected can be the image of the environment directly in front of the vehicle, the image of the left side, the image of the right side, the image of the left rear, the image of the right rear, etc.
  • three image acquisition devices are provided in the vehicle, which respectively acquire images from three perspectives of the front, left side, and right side of the vehicle.
  • images under other viewing angles may also be collected.
  • The vehicle can splice the images collected at various location points under multiple viewing angles along the bird's eye view (Bird's Eye View, BEV) to obtain a bird's-eye view stitched image.
  • the vehicle can also upload the collected images under multiple viewing angles to the server, and the server can stitch them together along the bird's-eye view to obtain the stitched image from the bird's-eye view.
  • FIG. 3 shows a schematic diagram of splicing images along a bird's-eye view to obtain a bird's-eye view stitched image.
  • Figures 3A-3C are captured by different cameras on the vehicle at the same location.
  • the image in FIG. 3A is the collected image of the environment directly in front of the vehicle
  • FIG. 3B is the collected image of the environment in front of the left side of the vehicle
  • FIG. 3C is the collected image of the environment in front of the vehicle right.
  • By splicing FIG. 3A to FIG. 3C along the bird's-eye view, the bird's-eye view stitched image shown in FIG. 3D can be obtained.
  • the image in Fig. 3B belongs to side forward looking perception.
  • Since the vehicle collects images at multiple locations in real time, the multiple images it collects can be continuously spliced along the bird's-eye view to obtain a continuous bird's-eye view stitched image reflecting the environment around the vehicle's driving track.
  • the stitched bird's-eye view image in the trajectory data is obtained by continuously stitching the stitched bird's-eye view images of multiple locations along the driving track.
  • the image obtained by splicing the images collected at each location point under multiple viewing angles along the bird's-eye view is called the bird's-eye view stitching sub-image.
  • FIG. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching images collected in a vehicle trajectory along a bird's-eye view.
  • In the stitching process, orientation optimization can be performed to reduce black edges (indicated by marker 401 in the figure).
  • the black border is caused by the incomplete joint of two adjacent bird's-eye view stitching sub-images.
  • free stitching is performed on the bird's-eye view stitching sub-images corresponding to multiple position points on the straight line trajectory to reduce turning distortion (indicated by mark 402 in the figure).
  • Step 220: performing road element recognition on each bird's-eye view stitched image, and determining the road elements in each bird's-eye view stitched image.
  • Road elements may include lane lines (such as solid lane lines and dashed lane lines), road arrows, stop lines, speed bumps, parking space borderlines in the parking lot, parking space entrance lines, etc., which are not specifically limited here.
  • Identifying road elements refers to determining the pixel area where the road elements are located in the stitched image from the bird's-eye view. It can be understood that, on the one hand, the recognition result obtained from road element recognition indicates which road elements are specifically included in the bird's-eye view stitched image, and on the other hand, indicates the position of each road element in the bird's-eye view stitched image.
  • a neural network model can be used to identify road elements on stitched images from a bird's-eye view.
  • For example, Mask R-CNN (Mask Region-based Convolutional Neural Network), PANet (Path Aggregation Network), or FCIS (Fully Convolutional Instance-aware Semantic Segmentation) can be used to segment each road element in the bird's-eye view stitched image, so as to determine the position of each road element in the bird's-eye view stitched image.
  • In some embodiments, step 220 may include: inputting each bird's-eye view stitched image into a road element recognition model; and performing road element recognition by the road element recognition model, which outputs the road element information corresponding to each bird's-eye view stitched image, the road element information indicating the road elements in the corresponding stitched image.
  • the road element recognition model may be constructed by one or more neural networks among convolutional neural network, fully connected neural network, feedforward neural network, long short-term memory network, and recurrent neural network.
  • the road element recognition model may be Mask R-CNN, PANet, FCIS, etc. as listed above.
  • the road element recognition model can be trained with training data before road element recognition.
  • the training data includes multiple sample bird's-eye view stitched images and annotation information of the sample bird's-eye view stitched images.
  • the annotation information is used to indicate the road elements in the stitched image from the bird's-eye view of the corresponding sample.
  • the bird's-eye view stitched image used for training the road element recognition model is referred to as a sample bird's-eye view stitched image.
  • During training, the sample bird's-eye view stitched image is input into the road element recognition model, which performs road element recognition on it and outputs predicted road element information indicating the road elements in the sample bird's-eye view stitched image.
  • The predicted road element information not only indicates the position information of each identified road element in the sample bird's-eye view stitched image, but also the semantics of the road element (that is, what kind of road element it is, such as a lane line, speed bump, or stop line).
  • Based on the predicted road element information and the annotation information, the loss value of the loss function is calculated, and the parameters of the road element recognition model are adjusted through backpropagation according to the loss value. It can be understood that the annotation information of the sample bird's-eye view stitched image also indicates the position information of each road element in the sample image.
  • the loss function may be set according to actual needs, for example, the loss function may be a cross-entropy loss function, a logarithmic loss function, etc., which are not specifically limited here.
  • After the training of the road element recognition model is completed, the model can be applied online to accurately identify road elements.
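  • As a minimal sketch of such a road element recognition model, the snippet below uses torchvision's off-the-shelf Mask R-CNN as a stand-in; in the patent's setting the model would be trained on sample bird's-eye view stitched images annotated with road-element classes, and the class count and confidence threshold here are assumptions:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Stand-in for the road element recognition model: torchvision's Mask R-CNN.
# A production model would be trained on labeled sample bird's-eye view
# stitched images (lane line, stop line, parking space borderline, ...).
model = maskrcnn_resnet50_fpn(num_classes=7)  # 6 road-element classes + background
model.eval()

bev_image = torch.rand(3, 512, 512)  # a bird's-eye view stitched image, CHW in [0, 1]
with torch.no_grad():
    prediction = model([bev_image])[0]

# Each detection carries semantics (label), confidence (score), and position
# (instance mask) within the stitched image, i.e. the road element information.
for label, score, mask in zip(prediction["labels"], prediction["scores"],
                              prediction["masks"]):
    if score > 0.5:  # assumed confidence threshold
        print(f"road element class {int(label)}, mask shape {tuple(mask.shape)}")
```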
  • Step 230: generating, according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with each bird's-eye view stitched image, the first point cloud trajectory data corresponding to each trajectory data.
  • the point cloud trajectory data used to construct the point cloud base map and constructed based on the stitched images from the bird's-eye view and the associated pose information is referred to as the first point cloud trajectory data.
  • the first point cloud trajectory data includes position information of each position point in the trajectory path and a point cloud model of road elements on the trajectory path. It can be understood that the relative positional relationship between the point cloud models of different road elements in the first point cloud trajectory is basically the same as the relative positional relationship presented by the bird's-eye view stitching image.
  • The point cloud model of a road element is a collection of massive points expressing the spatial distribution and target surface characteristics of the road element in the same spatial reference system; after the spatial coordinates of each sampling point of the road element are obtained, all the sampling points are arranged to obtain the point cloud model of the road element.
  • In some embodiments, step 230 may include: performing, according to the vehicle pose information associated with each bird's-eye view stitched image, three-dimensional reconstruction on each road element in each bird's-eye view stitched image, to obtain the first point cloud trajectory data corresponding to each stitched image.
  • Specifically, the 3D point cloud model of each road element can be obtained by 3D reconstruction of the road elements in the bird's-eye view stitched image. On this basis, by combining the pose information associated with the stitched image with the positions of the road elements within the image, the position of each road element in geographic space can be determined; the 3D point cloud models of the road elements are then arranged according to these geographic positions, yielding the first point cloud trajectory data corresponding to the bird's-eye view stitched image.
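  • One simple geometric way to place recognized road elements in geographic space is sketched below: it maps BEV pixels to world coordinates using the associated vehicle pose. This is an illustrative alternative to the learned 3D reconstruction described next; the parameter names and the image-axis convention are our assumptions:

```python
import numpy as np

def bev_pixels_to_world(pixel_uv, vehicle_pose, meters_per_pixel, bev_center):
    """Map road-element pixels of a bird's-eye view stitched image to
    world-frame ground coordinates using the associated vehicle pose.
    Axis conventions and names are simplified for illustration."""
    x, y, heading = vehicle_pose
    # Pixel offsets from the BEV image center (taken as the vehicle position),
    # converted to meters in the vehicle frame.
    offsets = (np.asarray(pixel_uv, dtype=float) - bev_center) * meters_per_pixel
    # Rotate into the world frame and translate by the vehicle position.
    c, s = np.cos(heading), np.sin(heading)
    rotation = np.array([[c, -s], [s, c]])
    return offsets @ rotation.T + np.array([x, y])

# Two pixels of a recognized lane line become two world-frame ground points.
lane_pixels = [(260, 240), (262, 300)]
points = bev_pixels_to_world(lane_pixels,
                             vehicle_pose=(100.0, 50.0, np.pi / 4),
                             meters_per_pixel=0.05,
                             bev_center=np.array([256.0, 256.0]))
```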
  • deep learning may be used to perform three-dimensional reconstruction on the road elements in the bird's-eye view stitched image.
  • The neural network model used to generate the 3D point cloud model can be trained in advance (for the sake of distinction, it is called the 3D reconstruction model), and each road element in the bird's-eye view stitched image is then reconstructed in 3D by the 3D reconstruction model.
  • the three-dimensional reconstruction model may be a model constructed by a convolutional neural network, a fully connected neural network, or the like.
  • The three-dimensional reconstruction model may be an Im2Avatar model, an adversarial network, a generative network, etc., which are not specifically limited here.
  • Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to the bird's-eye view stitched image according to an embodiment.
  • Although the edges of each road element appear to be lines, they are actually point sequences; because the points are relatively dense, they visually read as lines.
  • Different road elements can be represented by point clouds of different colors; for example, lane lines are represented by blue point clouds, parking space lines by green point clouds, and road arrows by red point clouds, etc.
  • In addition, the height difference can be sensed through the pitch angle of the vehicle, from which the vehicle's height in the vertical direction can be determined. For example, when the vehicle is on the first underground floor or the second underground floor of an underground parking lot, its height in the vertical direction differs. Further, when the vehicle is driving on a ramp, the slope of the ramp can be calculated from the pitch angle of the vehicle.
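  • The height and slope relations mentioned above reduce to simple trigonometry; a sketch with illustrative values:

```python
import math

def height_change(distance_m, pitch_rad):
    """Vertical height gained while driving distance_m along a ramp
    whose inclination equals the vehicle pitch angle."""
    return distance_m * math.sin(pitch_rad)

def ramp_grade_percent(pitch_rad):
    """Slope of the ramp expressed as a percentage grade."""
    return math.tan(pitch_rad) * 100.0

pitch = math.radians(5.0)          # vehicle pitch on the ramp
dz = height_change(40.0, pitch)    # ~3.49 m over 40 m, roughly one parking level
grade = ramp_grade_percent(pitch)  # ~8.7 % grade
```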
  • Fig. 6 is a schematic diagram showing the projection of the point cloud model of each road element in the point cloud trajectory data on the vertical projection plane according to an embodiment of the present application.
  • In FIG. 6, it can be clearly seen that there is a height difference in the vertical direction between the first plane 610, the second plane 620, and the third plane 630. Therefore, the first plane 610, the second plane 620, and the third plane 630 correspond to different floors, and the white shaded parts in FIG. 6 represent road elements in the corresponding floors.
  • the first oblique line 621 , the second oblique line 622 and the third oblique line 623 between the first plane 610 and the second plane 620 may indicate that the first floor 610 and the second floor 620 are connected at different positions.
  • the fourth oblique line 631 between the second plane 620 and the third plane 630 represents the inclined road connecting the second plane 620 and the third plane 630 .
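  • A simple heuristic for separating floors from such a vertical projection is sketched below: sort the point heights and cut wherever consecutive values jump by a large gap. This is our illustration, not the patent's procedure, and the gap threshold is an assumption:

```python
import numpy as np

def split_floors(point_heights, gap_m=1.5):
    """Group point cloud heights into floors: sort the heights and cut
    wherever consecutive values jump by more than gap_m."""
    z = np.sort(np.asarray(point_heights, dtype=float))
    cuts = np.where(np.diff(z) > gap_m)[0] + 1
    return np.split(z, cuts)

# Points from basement level -1 (around -3 m) and level -2 (around -6 m).
heights = [-3.0, -2.9, -3.1, -6.1, -5.9, -6.0]
floors = split_floors(heights)  # two groups -> two floors
```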
  • Fig. 7 is a flow chart of generating the first point cloud trajectory data according to an embodiment of the present application. As shown in FIG. 7, the flow includes:
  • Step 710 stitching images along the bird's-eye view.
  • The cameras used for collecting images on the vehicle may be surround-view cameras, which collect images of the surrounding environment during driving based on fish-eye imaging.
  • images from multiple perspectives can be collected.
  • The distorted region is treated as the ROI (Region of Interest), and distortion correction is performed on it to prevent the distortion introduced during stitching from making the subsequently generated map inaccurate.
  • Stitching also involves inverse perspective transformation of the images collected under multiple viewing angles, so as to project them into the bird's-eye view before splicing them into the bird's-eye view stitched image.
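  • A minimal sketch of such an inverse perspective transformation using OpenCV's homography utilities; the pixel/ground correspondences below are placeholders that would come from the camera's intrinsic and extrinsic calibration in practice:

```python
import cv2
import numpy as np

# Inverse perspective mapping: warp a forward-looking camera image onto the
# ground plane to obtain one bird's-eye view tile.
src = np.float32([[420, 560], [860, 560], [1180, 720], [100, 720]])  # image pixels
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])           # BEV pixels

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame
H = cv2.getPerspectiveTransform(src, dst)         # 3x3 homography
bev_tile = cv2.warpPerspective(frame, H, (400, 600))

# Tiles from the front, left, and right cameras, each warped with its own
# homography, are then composited into one bird's-eye view stitched image.
```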
  • Step 720 identifying road elements.
  • Each road element in the bird's-eye view stitched image is determined through step 720 .
  • Step 730 three-dimensional reconstruction.
  • Through step 730, the point cloud model of each road element in the bird's-eye view stitched image is obtained, and the point cloud models are then combined to obtain the first point cloud trajectory data.
  • Step 240: splicing the first point cloud trajectory data corresponding to the multiple trajectory data to obtain a point cloud base map.
  • Different first point cloud trajectory data cover different environmental regions; therefore, multiple first point cloud trajectory data can be spliced to obtain a point cloud base map reflecting the basic global region.
  • In some embodiments, step 240 may include: determining the first target road elements representing the same geographic location in any two of the multiple first point cloud trajectories; and splicing the multiple first point cloud trajectory data based on the first target road elements to obtain the point cloud base map.
  • the first target road element refers to road elements representing the same geographic location in any two of the plurality of first point cloud trajectory data.
  • Different driving trajectories may have overlapping trajectories, so there may be road elements representing the same geographic location (ie, the first target road element) in the first point cloud trajectory data constructed based on different trajectory data.
  • The first point cloud trajectory data contains not only the point cloud model of each road element but also the element semantics (which kind of road element it is) and the position information of each road element. Therefore, based on the element semantics and position information of each road element and its relative positional relationships with nearby road elements, the road elements in any two first point cloud trajectory data can be compared, and the first target road elements representing the same geographic location determined.
  • During splicing, the first target road elements in different first point cloud trajectory data can be made to coincide by moving the first point cloud trajectory data; after the movement, the position of the coinciding first target road element is the splicing seam of the different first point cloud trajectory data.
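  • The movement that brings first target road elements into coincidence can be computed, for example, as a least-squares rigid transform over the matched element positions; the Kabsch-style sketch below uses our own naming, and the patent does not prescribe this particular method:

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rotation and translation moving src points onto dst
    points (Kabsch). src/dst are matched positions of first target road
    elements found in two first point cloud trajectories."""
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Matched element centers (same geographic locations) in trajectories A and B.
a = [[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]]
b = [[2.0, 1.0], [12.0, 1.0], [12.0, 6.0]]   # B is A shifted by (2, 1)
R, t = rigid_align_2d(a, b)
aligned = np.asarray(a) @ R.T + t            # A moved into B's frame for splicing
```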
  • Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
  • Step 250 using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
  • The point cloud base map can reflect the map skeleton of the whole geographic environment area (especially indoor areas with weak GNSS and GPS signals), and this way of constructing the point cloud base map is relatively fast.
  • Although the point cloud base map is constructed through the first point cloud trajectory data, which is generated from the bird's-eye view stitched images and the associated vehicle pose information, in practice the viewing angles that the vehicle's image acquisition devices can perceive are limited, so point cloud trajectory data generated only from the bird's-eye view stitched images and the corresponding pose information may not fully reflect all road elements in the geographic environment. Therefore, in the embodiments of the present application, the second point cloud trajectory data is further obtained, and the map is generated by fusing this second point cloud trajectory data, which can express more road elements, with the point cloud base map.
  • Compared with the first point cloud trajectory data, the second point cloud trajectory data may include point cloud models of more road elements, such as road signs, pillars, ultrasonic obstacles, walls, gates, railings, and zebra crossings.
  • In some embodiments, step 250 may include: using the point cloud base map as the alignment medium, aligning the second point cloud trajectory data with the point cloud base map, and determining the road elements newly added in the second point cloud trajectory data compared with the point cloud base map; and adding the point cloud models of the newly added road elements to the point cloud base map to obtain the map.
  • A newly added road element refers to a road element that exists in the second point cloud trajectory data but does not exist in the point cloud base map.
  • Since the point cloud base map basically reflects the global skeleton of the geographic environment area, the probability that road elements representing the same geographic location exist in both the second point cloud trajectory data and the point cloud base map is relatively high; based on these shared road elements, the second point cloud trajectory data can be semantically aligned with the point cloud base map, thereby locating the second point cloud trajectory data on the point cloud base map.
  • After alignment, the second point cloud trajectory data is compared with the point cloud base map to determine the newly added road elements; based on the position information of each newly added road element in the second point cloud trajectory data, its position in the point cloud base map is determined, and the point cloud model of the newly added road element is added to the point cloud base map at that position, yielding the map.
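  • A toy sketch of this fusion step: after alignment, an element of the second point cloud trajectory data counts as newly added if no base-map element of the same semantic class lies nearby. The nearest-neighbor criterion and distance threshold are our assumptions, not the patent's:

```python
import numpy as np

def find_new_elements(trajectory_elems, basemap_elems, dist_thresh=2.0):
    """After alignment, an element of the second point cloud trajectory data
    is newly added if no base-map element of the same semantic class lies
    within dist_thresh meters."""
    new = []
    for semantics, position in trajectory_elems:
        candidates = [p for s, p in basemap_elems if s == semantics]
        if not candidates or min(np.linalg.norm(np.subtract(position, p))
                                 for p in candidates) > dist_thresh:
            new.append((semantics, position))
    return new

basemap = [("lane_line", (0.0, 0.0)), ("stop_line", (5.0, 2.0))]
trajectory = [("lane_line", (0.2, 0.1)), ("pillar", (8.0, 3.0))]
added = find_new_elements(trajectory, basemap)  # [("pillar", (8.0, 3.0))]
basemap.extend(added)  # the fused map now also contains the pillar
```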
  • In the solution of the present application, the first point cloud trajectory data is constructed according to the vehicle's bird's-eye view stitched images and the vehicle's pose information, the point cloud base map is generated according to the first point cloud trajectory data, and, with the point cloud base map as the alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate a map.
  • Since the point cloud base map serving as the alignment medium is generated first and basically reflects the overall geographic environment area, the alignment success rate of the second point cloud trajectory data with the point cloud base map is ensured and the probability of misalignment is reduced.
  • In practice, some vehicles are equipped with a lidar and/or an intelligent perception module (also called a perception chip; the intelligent perception module can identify things in the environment around the vehicle, such as road elements, in real time from the acquired images and the information collected by other sensors on the vehicle, e.g. the wheel speedometer and the IMU), while other vehicles are equipped with neither. In the embodiments of the present application, vehicles equipped with a lidar and/or an intelligent perception module are referred to as vehicles of the first type; vehicles equipped with neither a lidar nor an intelligent perception module are referred to as vehicles of the second type.
  • Lidar can detect the size and location of objects in the environment around the vehicle. For a vehicle equipped with a lidar, the images collected by the vehicle, the signals detected by the lidar, and the data from the wheel speedometer, IMU, and GNSS module (or GPS module) can all be combined, so the point cloud trajectory data corresponding to the vehicle's driving trajectory has more reference information; moreover, the detection accuracy and sensing range of lidar are greater. Comparatively speaking, therefore, the precision and accuracy of point cloud trajectory data from vehicles equipped with lidar are higher.
  • The intelligent perception module can combine, in real time, the information collected by the sensors in the vehicle (such as the image acquisition devices, the wheel speedometer, the IMU, and the GNSS module or GPS module) to perceive and understand the scene around the vehicle, e.g., semantic classification of obstacle types, road signs and markings, and detection of pedestrians, vehicles, and traffic signals, and then performs positioning based on these perception results, helping the vehicle understand its position relative to its environment more accurately.
  • Although the precision and accuracy of point cloud trajectory data from the first type of vehicle are higher than those from the second type, the number of users of the first type of vehicle in the market is much lower than that of the second type. If the map were constructed only from point cloud trajectory data from the first type of vehicle, the map construction period would be long; in this case, the method of the present application can be used to construct the map.
  • In this case, the first point cloud trajectory data may be a point cloud trajectory constructed from the bird's-eye view stitched images of the second type of vehicle and the associated vehicle pose information.
  • For the second type of vehicle, the bird's-eye view stitched image can be obtained from the images collected by the image acquisition devices corresponding to multiple viewing angles, and the pose information of the vehicle is collected by the GNSS module (or GPS module), the IMU, and the wheel speedometer.
  • the second point cloud trajectory data may refer to point cloud trajectory data corresponding to the first type of vehicle.
  • The second point cloud trajectory data can be constructed by combining the information collected by the lidar and/or the environment perception and positioning results of the intelligent perception module with the visual information collected by the image acquisition devices and the data from the IMU, wheel speedometer, and GNSS (or GPS) module. It can be understood that the second point cloud trajectory data also indicates the driving trajectory of the vehicle, as well as each road element in the driving environment and its position information.
  • Since the point cloud base map is constructed from the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map, the efficiency of map generation can be improved while the accuracy and precision of the map are ensured.
  • A scheme that directly splices different point cloud trajectory data to generate a map easily fails when two point cloud trajectory data have no, or few, trajectory intersection points: the splicing fails, or the semantic alignment fails. With the solution of the embodiments of the present application, since a point cloud base map reflecting the basic global situation of the environment area is pre-built, this problem is effectively avoided, ensuring that the second point cloud trajectory data can be fused with the point cloud base map.
  • the point cloud base map is constructed based on the first point cloud track data of the second type of vehicle, and the point cloud base map is optimized and updated by using the second point cloud track data of the first type of vehicle.
  • If only the second point cloud trajectory data from the first type of vehicle were used, the period of map generation would be long. Since the first point cloud trajectory data from the more numerous second type of vehicle is more plentiful and covers a larger portion of the geographic environment area, the first point cloud trajectory data is used first to construct the point cloud base map, and the point cloud base map is then optimized and updated with the second point cloud trajectory data from the first type of vehicle to obtain the map; in this way a compromise is struck between shortening the map generation cycle and improving map accuracy.
  • the trajectory data corresponding to vehicles with different hardware configurations can be used to generate maps, making the data sources more comprehensive.
  • the solutions of the embodiments of the present application can be applied to constructing maps of areas with weak GNSS signals (or GPS signals), such as maps of indoor parking lots.
  • Fig. 9 is a schematic diagram of splicing two first point cloud trajectory data according to an embodiment. As shown in FIG. 9, after the first point cloud trajectory data I and the first point cloud trajectory data II are spliced, there is a disconnected area 910; by conventional judgment, the disconnected area 910 may not match the actual geographic environment area.
  • Therefore, in some embodiments, the method may also include: sending the point cloud base map to the client, so that the user can splice and edit the point cloud base map on the client, and then receiving the spliced and edited point cloud base map from the client. In this way, the parts of the point cloud base map containing disconnected areas can be edited manually, improving the point cloud base map.
  • the method may further include:
  • Step 1010 acquire a candidate point cloud trajectory data from the candidate point cloud trajectory data set.
  • the candidate point cloud trajectory data in the candidate point cloud trajectory data set may be point cloud trajectory data from the first type of vehicle. In some other embodiments, the candidate point cloud trajectory data set may also include point cloud trajectory data from the first type of vehicle and the second type of vehicle.
  • The candidate point cloud trajectory data may be acquired from the candidate point cloud trajectory data set randomly or in a set order.
  • In some embodiments, step 1010 may include: acquiring a candidate point cloud trajectory data from the candidate point cloud trajectory data set according to the priority corresponding to each candidate point cloud trajectory data in the set, in order of priority from high to low.
  • the priority corresponding to each candidate point cloud trajectory data can be set according to the vehicle information of the vehicle from which the candidate point cloud trajectory data originates, wherein the hardware modules installed in the vehicle can be determined according to the vehicle information.
  • For example, the priority corresponding to candidate point cloud trajectory data originating from a vehicle equipped with both a lidar and an intelligent perception module is the first priority; that originating from a vehicle equipped with either a lidar or an intelligent perception module is the second priority; and that originating from a vehicle with neither a lidar nor an intelligent perception module is the third priority, where the first priority is higher than the second priority, and the second priority is higher than the third priority.
  • Step 1020 determine the coverage of the candidate point cloud trajectory data relative to the point cloud base map.
  • In implementation, the length of the portion of the driving track indicated by the candidate point cloud trajectory data that lies within the point cloud base map can be determined; this target track length is then divided by the total length of the driving track indicated by the candidate point cloud trajectory data, and the resulting ratio is used as the coverage of the candidate point cloud trajectory data relative to the point cloud base map.
  • Step 1030 if the coverage is greater than the set threshold, the candidate point cloud trajectory data is used as the second point cloud trajectory data.
  • the candidate point cloud trajectory data can be used as the second point cloud trajectory data, so as to align and fuse the candidate point cloud trajectory data with the point cloud base map.
  • the method may further include: if the coverage is not greater than the set threshold, splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map.
  • In that case, the candidate point cloud trajectory data is used as first point cloud trajectory data for updating the point cloud base map.
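  • The data screening logic of steps 1020 to 1030 and the fallback above can be sketched as follows, approximating track length with uniformly sampled track points; the threshold, helper names, and toy extent are illustrative:

```python
def coverage(track_points, in_basemap):
    """Coverage of a candidate trajectory relative to the point cloud base
    map: the fraction of its driving track lying inside the base map,
    approximated with uniformly sampled track points."""
    covered = sum(1 for p in track_points if in_basemap(p))
    return covered / len(track_points)

def screen_candidate(track_points, in_basemap, threshold=0.8):
    """Route a candidate: high coverage -> use it as second point cloud
    trajectory data (align and fuse); low coverage -> splice it into the
    base map to update it."""
    if coverage(track_points, in_basemap) > threshold:
        return "align_and_fuse"
    return "splice_into_basemap"

inside = lambda p: 0.0 <= p[0] <= 100.0 and 0.0 <= p[1] <= 100.0  # toy extent
track = [(float(x), 50.0) for x in range(0, 120, 10)]
action = screen_candidate(track, inside)  # ~92 % covered -> "align_and_fuse"
```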
  • In some embodiments, the step of splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map further includes:
  • Step 1110 determine the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data.
  • the target vehicle type refers to the vehicle type to which the source vehicle of the candidate point cloud trajectory data belongs.
  • the vehicles may be classified according to the types of hardware configured on the vehicles.
  • The vehicle types defined based on the hardware on the vehicle include the first type and the second type, where a vehicle of the first type is equipped with a lidar and/or an intelligent perception module, and a vehicle of the second type is equipped with neither a lidar nor an intelligent perception module.
  • Step 1120 based on the correspondence between vehicle types and weights, determine the target weight corresponding to the target vehicle type.
  • the target weight refers to the weight corresponding to the target vehicle type. Wherein, the corresponding relationship between vehicle types and weights can be set according to actual needs.
  • Step 1130: if the target weight is greater than the weight threshold, moving the second target road elements in the point cloud base map so that they coincide with the second target road elements in the candidate point cloud trajectory data; the second target road elements are the road elements representing the same geographic location in the point cloud base map and the candidate point cloud trajectory data.
  • Step 1140: if the target weight is not greater than the weight threshold, moving the second target road elements in the candidate point cloud trajectory data so that the second target road elements in the point cloud base map coincide with those in the candidate point cloud trajectory data.
  • Step 1150 combining the moved point cloud base map and candidate point cloud trajectory data as an updated point cloud base map.
  • In general, higher weights may be assigned to vehicles whose point cloud trajectory data is more accurate, and lower weights to vehicles whose point cloud trajectory data is less accurate; thus, vehicles of the first type are assigned higher weights. During splicing, this ensures that the higher-precision point cloud trajectory data moves less and the lower-precision data moves more, avoiding the loss of accuracy and precision that would result from moving the originally higher-precision point cloud trajectory data.
  • In the above process, the target weight is determined from the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data, and the object to be moved during splicing is then chosen according to the target weight; this avoids moving the originally higher-precision point cloud trajectory data during splicing and thereby ensures the positional accuracy of each road element in the point cloud base map.
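  • A minimal sketch of the weight-based decision in steps 1130 and 1140; the weight values and threshold are illustrative, not from the patent:

```python
WEIGHTS = {"first_type": 0.9, "second_type": 0.4}  # illustrative values
WEIGHT_THRESHOLD = 0.5

def choose_moved_side(target_vehicle_type):
    """If the candidate data comes from a high-weight (higher-precision)
    vehicle, move the base map onto it during splicing; otherwise move the
    candidate trajectory data onto the base map."""
    if WEIGHTS[target_vehicle_type] > WEIGHT_THRESHOLD:
        return "move_basemap"
    return "move_candidate"

assert choose_moved_side("first_type") == "move_basemap"
assert choose_moved_side("second_type") == "move_candidate"
```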
  • Fig. 12 is a flowchart of a method for generating a map according to a specific embodiment of the present application.
  • The steps filled in gray in FIG. 12 may be performed by equipment or with manual participation.
  • As shown in FIG. 12, the flow includes: uploading the trajectory data to the server, where the server identifies road elements based on the trajectory data and generates the corresponding point cloud trajectory data; data screening is then performed, for which reference can be made to the flow shown in FIG. 10, i.e., determining the coverage of the point cloud trajectory data relative to the point cloud base map. If the coverage is not greater than the set threshold, indicating that the coverage contribution of the point cloud trajectory data is high, the point cloud trajectory data is used for splicing to generate the point cloud base map.
  • the point cloud base map can be sent to the client, and the user can splice and edit the point cloud base map.
  • If instead the coverage is greater than the set threshold, the point cloud trajectory data can be used for alignment and fusion with the point cloud base map to generate a map.
  • Layers in the map include positioning layers and logical layers. Further, after the map is generated, it can be sent to the client, so that the user can edit the positioning layer and/or the logical layer based on the map displayed on the client. After the map is edited, it can be further inspected by technicians to obtain a high-precision and high-accuracy map.
  • An embodiment of the present application provides a map generation device, including: a data acquisition module, a splicing module, and a fusion module.
  • The data acquisition module is used to acquire the generated first point cloud trajectory data, where the first point cloud trajectory data corresponds to the point cloud trajectory data of the second type of vehicle, and to acquire the determined second point cloud trajectory data, where the second point cloud trajectory data corresponds to the point cloud trajectory data of the first type of vehicle.
  • the splicing module is used to generate a point cloud base map according to the first point cloud track data
  • The fusion module is used to take the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • the data acquisition module may include an acquisition module, an identification module and a generation module.
  • the device for generating the map will be described in detail below in conjunction with the accompanying drawings.
  • Fig. 13 is a block diagram of a device for generating a map according to an embodiment of the present application.
  • As shown in FIG. 13, the device for generating a map includes: an acquisition module 1310, for acquiring multiple trajectory data, each including vehicle pose information and the bird's-eye view stitched image associated with that pose information; an identification module 1320, for performing road element recognition on each bird's-eye view stitched image and determining the road elements in each; a generation module 1330, for generating, from the road elements in each bird's-eye view stitched image and the associated vehicle pose information, the first point cloud trajectory data corresponding to each trajectory data; a stitching module 1340, for splicing the first point cloud trajectory data corresponding to the multiple trajectory data to obtain a point cloud base map; and a fusion module 1350, for using the point cloud base map as an alignment medium to align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • In some embodiments, the recognition module 1320 includes an input unit, configured to input each bird's-eye view stitched image into the road element recognition model; the road element recognition model performs road element recognition and outputs the road element information corresponding to each image, where the road element information is used to indicate the road elements in the corresponding bird's-eye view stitched image.
  • the generation module 1330 is further configured to: according to the vehicle pose information associated with each bird's-eye view stitching image, perform three-dimensional reconstruction on each road element in each bird's-eye view stitching image to obtain each bird's-eye view stitching image The first point cloud trajectory data corresponding to the image.
  • In some embodiments, the splicing module 1340 includes a first target road element determining unit, configured to determine the first target road elements representing the same geographic location in any two of the multiple first point cloud trajectories; based on the first target road elements, the multiple first point cloud trajectory data are spliced to obtain the point cloud base map.
  • In some embodiments, the map generation device further includes: a sending module, configured to send the point cloud base map to the client, so that the user can splice and edit the point cloud base map on the client.
  • The map generation device further includes: a candidate point cloud trajectory data acquisition module, used to obtain a piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set; a coverage determination module, used to determine the coverage of the candidate point cloud trajectory data relative to the point cloud base map; and a second point cloud trajectory data determination module, used to take the candidate point cloud trajectory data as the second point cloud trajectory data if the coverage is greater than a set threshold.
  • The map generation device further includes: an update module, used to splice the candidate point cloud trajectory data with the point cloud base map if the coverage is not greater than the set threshold, so as to update the point cloud base map. A sketch of this coverage routing follows.
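A rough sketch of this coverage test, under the assumption that coverage means the fraction of candidate points that already have a nearby base map point (both the radius and the threshold values are invented for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def coverage(candidate, base_map, radius=0.5):
    # Fraction of candidate trajectory points lying within `radius` metres
    # of some base map point; 0.5 m is an illustrative value.
    dist, _ = cKDTree(base_map).query(candidate)
    return float(np.mean(dist <= radius))

def route_candidate(candidate, base_map, threshold=0.8):
    # High coverage: the candidate mostly overlaps the base map, so use it
    # as second point cloud trajectory data; otherwise splice it in to
    # update (extend) the base map.
    if coverage(candidate, base_map) > threshold:
        return "use_as_second_trajectory"
    return "splice_to_update_base_map"

base = np.random.rand(1000, 3)
print(route_candidate(base[:100] + 0.01, base))  # use_as_second_trajectory
```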
  • The update module includes: a target vehicle type determination unit, configured to determine the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data; a target weight determination unit, configured to determine the target weight corresponding to the target vehicle type based on the correspondence between vehicle types and weights; a first moving unit, configured to move the second target road element in the point cloud base map if the target weight is greater than a weight threshold, so that the second target road element in the point cloud base map overlaps with the second target road element in the candidate point cloud trajectory data, where a second target road element refers to a road element that represents the same geographic location in both the point cloud base map and the candidate point cloud trajectory data; and a second moving unit, configured to move the second target road element in the candidate point cloud trajectory data if the target weight is not greater than the weight threshold, so that it overlaps with the second target road element in the point cloud base map. A sketch of this weighting logic follows.
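The weight logic might be realised along these lines; the vehicle-type weights and the threshold are invented placeholders, and "moving" an element is reduced to a centroid shift:

```python
import numpy as np

VEHICLE_TYPE_WEIGHTS = {"first_type": 0.9, "second_type": 0.3}  # illustrative
WEIGHT_THRESHOLD = 0.5                                          # illustrative

def merge_second_target_element(base_pts, cand_pts, vehicle_type):
    # base_pts / cand_pts: the same geographic road element as it appears
    # in the point cloud base map and in the candidate trajectory data.
    weight = VEHICLE_TYPE_WEIGHTS.get(vehicle_type, 0.0)
    if weight > WEIGHT_THRESHOLD:
        # Trust the candidate: move the base map element onto it.
        base_pts = base_pts + (cand_pts.mean(0) - base_pts.mean(0))
    else:
        # Trust the base map: move the candidate element onto it.
        cand_pts = cand_pts + (base_pts.mean(0) - cand_pts.mean(0))
    return base_pts, cand_pts

b = np.random.rand(10, 3)
c = b + np.array([0.2, 0.0, 0.0])
b2, c2 = merge_second_target_element(b, c, "second_type")
print(np.allclose(b2.mean(0), c2.mean(0)))  # True
```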
  • The candidate point cloud trajectory data acquisition module is further configured to obtain a piece of candidate point cloud trajectory data from the candidate point cloud trajectory data set according to the priority corresponding to each piece of candidate point cloud trajectory data in the set, in order of priority from high to low, as in the sketch below.
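A trivial sketch of this priority-ordered selection, assuming only that the candidate set is a list of (priority, data) pairs:

```python
def candidates_by_priority(candidate_set):
    # Yield candidate point cloud trajectory data in order of priority,
    # from high to low.
    for _, data in sorted(candidate_set, key=lambda c: c[0], reverse=True):
        yield data

queue = [(1, "traj_low"), (9, "traj_high"), (5, "traj_mid")]
print(list(candidates_by_priority(queue)))  # ['traj_high', 'traj_mid', 'traj_low']
```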
  • The fusion module includes: a newly added road element determination unit, configured to take the point cloud base map as an alignment medium, align the second point cloud trajectory data with the point cloud base map, and determine the road elements that are newly added in the second point cloud trajectory data compared with the point cloud base map; and an adding unit, configured to add the point cloud models of the newly added road elements to the point cloud base map to obtain the map. A sketch of this fusion step follows.
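Assuming alignment has already been performed (for example, as in the splicing sketch above), the newly added road elements can be taken to be the aligned points with no close neighbour in the base map; the 1 m radius is purely illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_new_elements(aligned_second, base_map, radius=1.0):
    # Points of the aligned second trajectory with no base map neighbour
    # within `radius` metres are treated as newly added road elements, and
    # their point cloud is appended to the base map to obtain the map.
    dist, _ = cKDTree(base_map).query(aligned_second)
    return np.vstack([base_map, aligned_second[dist > radius]])

base = np.random.rand(500, 3)
second = np.vstack([base[:50], base[:20] + 10.0])  # 20 genuinely new points
print(fuse_new_elements(second, base).shape)  # (520, 3)
```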
  • Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • The electronic device may be a physical server, a cloud server, or the like, which is not specifically limited here.
  • The electronic device in this application may include: a processor 1410 and a memory 1420, where the memory 1420 stores computer-readable instructions which, when executed by the processor 1410, implement the method in any of the above method embodiments.
  • Processor 1410 may include one or more processing cores.
  • The processor 1410 connects various parts of the entire electronic device through various interfaces and lines, and performs the functions of the electronic device and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1420 and by calling the data stored in the memory 1420.
  • The processor 1410 may be implemented in hardware in the form of at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • The processor 1410 may integrate one of, or a combination of, a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the above-mentioned modem may also not be integrated into the processor 1410 and may instead be implemented by a communication chip alone.
  • The memory 1420 may include random access memory (RAM), and may also include read-only memory (ROM).
  • The memory 1420 may be used to store instructions, programs, code, code sets, or instruction sets.
  • The memory 1420 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function or an alarm function), and instructions for implementing the various method embodiments described above.
  • The data storage area may also store data created by the electronic device during use (such as response commands and acquired process states) and the like.
  • The present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the method in any one of the foregoing method embodiments is implemented.
  • The computer-readable storage medium may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, a hard disk, or ROM.
  • The computer-readable storage medium includes a non-transitory computer-readable storage medium.
  • The computer-readable storage medium has storage space for computer-readable instructions for performing any of the method steps of the methods described above. These computer-readable instructions can be read from, or written into, one or more computer program products, and may, for example, be compressed in a suitable form.
  • A computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method in any of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Instructional Devices (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Map generation method and apparatus, electronic device, and storage medium. The method comprises: obtaining generated first point cloud trajectory data, the first point cloud trajectory data corresponding to a second type of vehicle; obtaining determined second point cloud trajectory data, the second point cloud trajectory data corresponding to a first type of vehicle; generating a point cloud base map according to the first point cloud trajectory data; and using the point cloud base map as an alignment medium to align and fuse the second point cloud trajectory data with the point cloud base map, thereby obtaining a map. A map can thus be generated quickly for an area with a weak navigation signal.
PCT/CN2022/094862 2021-12-30 2022-05-25 Map generation method and apparatus, electronic device and storage medium WO2023123837A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111646647.8 2021-12-30
CN202111646647.8A CN114494618B (zh) 2021-12-30 2021-12-30 Map generation method and apparatus, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2023123837A1 true WO2023123837A1 (fr) 2023-07-06

Family

ID=81507703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094862 WO2023123837A1 (fr) 2021-12-30 2022-05-25 Map generation method and apparatus, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN114494618B (fr)
WO (1) WO2023123837A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494618B (zh) * 2021-12-30 2023-05-16 广州小鹏自动驾驶科技有限公司 Map generation method and apparatus, electronic device and storage medium
CN115112114B (zh) * 2022-06-15 2024-05-03 苏州轻棹科技有限公司 Processing method and apparatus for correcting the heading angles of vehicles around an ego vehicle
CN116385529B (zh) * 2023-04-14 2023-12-26 小米汽车科技有限公司 Method and apparatus for determining speed bump positions, storage medium, and vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232583A1 (en) * 2017-02-16 2018-08-16 Honda Motor Co., Ltd. Systems for generating parking maps and methods thereof
CN111442776A (zh) * 2019-01-17 2020-07-24 通用汽车环球科技运作有限责任公司 Method and device for sequential ground scene image projection synthesis and complex scene reconstruction
WO2020248614A1 (fr) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, driving control method and apparatus, electronic device, and system
US20210063200A1 (en) * 2019-08-31 2021-03-04 Nvidia Corporation Map creation and localization for autonomous driving applications
WO2021089839A1 (fr) * 2019-11-08 2021-05-14 Outsight Combined radar and lidar mapping system
CN113554698A (zh) * 2020-04-23 2021-10-26 杭州海康威视数字技术股份有限公司 Vehicle pose information generation method and apparatus, electronic device, and storage medium
CN113688935A (zh) * 2021-09-03 2021-11-23 阿波罗智能技术(北京)有限公司 High-precision map detection method, apparatus, device, and storage medium
CN113706702A (zh) * 2021-08-11 2021-11-26 重庆九洲星熠导航设备有限公司 Three-dimensional map construction system and method for mining areas
CN114494618A (zh) * 2021-12-30 2022-05-13 广州小鹏自动驾驶科技有限公司 Map generation method and apparatus, electronic device and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570446B (zh) * 2015-10-12 2019-02-01 腾讯科技(深圳)有限公司 Lane line extraction method and apparatus
CN108959321B (zh) * 2017-05-25 2022-06-24 纵目科技(上海)股份有限公司 Parking lot map construction method, system, mobile terminal, and storage medium
CN110851545B (zh) * 2018-07-27 2023-11-14 比亚迪股份有限公司 Map drawing method, apparatus, and device
CN111380543B (zh) * 2018-12-29 2023-05-05 沈阳美行科技股份有限公司 Map data generation method and apparatus
CN109740604B (zh) * 2019-04-01 2019-07-05 深兰人工智能芯片研究院(江苏)有限公司 Driving area detection method and device
CN112655226B (zh) * 2020-04-09 2022-08-26 华为技术有限公司 Vehicle perception method, apparatus, and system
CN111402588B (zh) * 2020-04-10 2022-02-18 河北德冠隆电子科技有限公司 System and method for rapidly generating a high-precision map of an abnormal road based on spatio-temporal trajectory reconstruction
CN111784835B (zh) * 2020-06-28 2024-04-12 北京百度网讯科技有限公司 Mapping method, apparatus, electronic device, and readable storage medium
CN112710318B (zh) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device, and storage medium
CN113537046A (zh) * 2021-07-14 2021-10-22 安徽酷哇机器人有限公司 Map lane line annotation method and system based on detected-vehicle trajectory big data
CN113609148A (zh) * 2021-08-17 2021-11-05 广州小鹏自动驾驶科技有限公司 Map updating method and apparatus
CN113724390A (zh) * 2021-09-08 2021-11-30 广州小鹏自动驾驶科技有限公司 Ramp generation method and apparatus


Also Published As

Publication number Publication date
CN114494618B (zh) 2023-05-16
CN114494618A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
US11482008B2 (en) Directing board repositioning during sensor calibration for autonomous vehicles
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
US10962366B2 (en) Visual odometry and pairwise alignment for high definition map creation
US20240124017A1 (en) Determination of lane connectivity at traffic intersections for high definition maps
US10670416B2 (en) Traffic sign feature creation for high definition maps used for navigating autonomous vehicles
WO2023123837A1 (fr) Map generation method and apparatus, electronic device and storage medium
US11094112B2 (en) Intelligent capturing of a dynamic physical environment
US11493635B2 (en) Ground intensity LIDAR localizer
US20200393265A1 (en) Lane line determination for high definition maps
US11367208B2 (en) Image-based keypoint generation
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
CN110796714B (zh) Map construction method, apparatus, terminal, and computer-readable storage medium
JP2021089724A (ja) 3D automatic labeling with structural and physical constraints
US20210001891A1 (en) Training data generation for dynamic objects using high definition map data
WO2020043081A1 (fr) Technique de positionnement
Li et al. Robust localization for intelligent vehicles based on pole-like features using the point cloud
WO2020199057A1 (fr) Système, procédé et dispositif de simulation de pilotage automatique, et support de stockage
KR102543871B1 (ko) Method and system for complementing road information change areas
CN114969221A Map updating method and related device
CN116978010A Image annotation method and apparatus, storage medium, and electronic device
CN116917936A Method and apparatus for extrinsic calibration of a binocular camera
Lee et al. Semi-automatic framework for traffic landmark annotation
Luttrell IV Data Collection and Machine Learning Methods for Automated Pedestrian Facility Detection and Mensuration
Dumančić et al. Steering Angle Prediction Algorithm Performance Comparison in Different Simulators for Autonomous Driving
CN117893634A Simultaneous localization and mapping method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913130

Country of ref document: EP

Kind code of ref document: A1