WO2023123837A1 - Map generation method and apparatus, electronic device, and storage medium - Google Patents

Map generation method and apparatus, electronic device, and storage medium Download PDF

Info

Publication number
WO2023123837A1
WO2023123837A1 (PCT/CN2022/094862, CN2022094862W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
trajectory data
base map
bird
vehicle
Prior art date
Application number
PCT/CN2022/094862
Other languages
French (fr)
Chinese (zh)
Inventor
夏志勋
冯洁
王梓里
Original Assignee
广州小鹏自动驾驶科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州小鹏自动驾驶科技有限公司
Publication of WO2023123837A1 publication Critical patent/WO2023123837A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • the present application relates to the field of map technology, and in particular to a map generation method, device, electronic equipment, and storage medium.
  • the embodiments of the present application propose a map generation method, device, electronic device, and storage medium, which can quickly generate maps for areas with weak satellite signals and make parking easier for users.
  • a method for generating a map is provided, including: acquiring the generated first point cloud trajectory data, where the first point cloud trajectory data is point cloud trajectory data corresponding to a second type of vehicle; acquiring the determined second point cloud trajectory data, where the second point cloud trajectory data is point cloud trajectory data corresponding to a first type of vehicle; generating a point cloud base map according to the first point cloud trajectory data; and, using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
  • in some embodiments, the first point cloud trajectory data is generated as follows: a plurality of trajectory data are acquired, each trajectory data including vehicle pose information and the bird's-eye view stitched image associated with that pose information; road element recognition is performed on each bird's-eye view stitched image to determine the road elements in it; and, according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with it, the first point cloud trajectory data corresponding to each trajectory data is generated.
  • generating the point cloud base map according to the first point cloud trajectory data then includes: splicing the first point cloud trajectory data corresponding to the plurality of trajectory data to obtain the point cloud base map.
  • a map generation device is provided, including: a data acquisition module, configured to acquire the generated first point cloud trajectory data, where the first point cloud trajectory data is point cloud trajectory data corresponding to the second type of vehicle, and to acquire the determined second point cloud trajectory data, where the second point cloud trajectory data is point cloud trajectory data corresponding to the first type of vehicle;
  • a splicing module, configured to generate a point cloud base map according to the first point cloud trajectory data;
  • a fusion module, configured to use the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • in some embodiments, the data acquisition module includes an acquisition module, an identification module, and a generation module. The acquisition module is configured to acquire a plurality of trajectory data, each trajectory data including vehicle pose information and the bird's-eye view stitched image associated with that pose information; the identification module is configured to perform road element recognition on each bird's-eye view stitched image and determine the road elements in it; the generation module is configured to generate, according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with it, the first point cloud trajectory data corresponding to each trajectory data; and the splicing module splices the first point cloud trajectory data corresponding to the multiple trajectory data to obtain the point cloud base map.
  • an electronic device is provided, including a processor and a memory on which computer-readable instructions are stored; when executed by the processor, the computer-readable instructions implement the above map generation method.
  • a computer-readable storage medium is provided, on which computer-readable instructions are stored; when executed by a processor, the computer-readable instructions implement the above map generation method.
  • in the solution of the present application, the first point cloud trajectory data is constructed from the vehicle's bird's-eye view stitched images and the vehicle's pose information, the point cloud base map is generated from the first point cloud trajectory data, and then, with the point cloud base map as the alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate a map.
  • the solution of the present application can therefore be applied to generating maps for areas with weak GNSS or GPS signals, making parking easier for users. For example, a map of an indoor parking lot can be generated, solving the problem in the related art that users lose their way because no map of the indoor parking lot exists.
  • since the point cloud base map is constructed from the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map, map generation efficiency is improved while map accuracy and precision are ensured.
  • in addition, since the point cloud base map used as the alignment medium is generated first and basically reflects the overall layout of the geographical environment area, the alignment success rate between the second point cloud trajectory data and the point cloud base map is ensured and the probability of misalignment is reduced.
  • Fig. 1 is a schematic diagram showing an application scenario of the solution of the present application according to an embodiment of the present application.
  • Fig. 2 is a flowchart of a method for generating a map according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of splicing images along a bird's-eye view to obtain a bird's-eye view stitched image.
  • Fig. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching, along the bird's-eye view, the images collected along a vehicle trajectory.
  • Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to the bird's-eye view stitched image according to a specific embodiment.
  • Fig. 6 is a schematic diagram showing the projection of the point cloud model of each road element in the point cloud trajectory data on the vertical projection plane according to an embodiment of the present application.
  • Fig. 7 is a flow chart of generating first point cloud trajectory data according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
  • Fig. 9 is a schematic diagram of splicing two first point cloud trajectory data according to an embodiment.
  • Fig. 10 is a flowchart showing steps before step 250 according to an embodiment of the present application.
  • Fig. 11 is a flow chart of updating a point cloud base map according to candidate point cloud trajectory data according to an embodiment of the present application.
  • Fig. 12 is a flowchart of a method for generating a map according to a specific embodiment of the present application.
  • Fig. 13 is a block diagram of an apparatus for generating a map according to an embodiment of the present application.
  • Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • first information may also be called second information, and similarly, second information may also be called first information.
  • a feature defined as “first” and “second” may explicitly or implicitly include one or more of these features.
  • “plurality” means two or more, unless otherwise specifically defined.
  • Fig. 1 is a schematic diagram showing an application scenario of the solution according to an embodiment of the present application.
  • the application scenario includes a vehicle 110 , a server 120 and a terminal 130 .
  • the vehicle 110 and the server 120 may establish a communication connection through a wired or wireless network
  • the terminal 130 and the server 120 may establish a communication connection through a wired or wireless network.
  • the vehicle 110 can report its own track data to the server 120, so that the server 120 can generate a map according to the method of the embodiment of the present application based on the track data of the vehicle.
  • the server 120 may be an independent physical server or a cloud server, which is not specifically limited here.
  • after the server 120 generates the point cloud base map and/or the map, it can also send them to the terminal 130, where the user can review, edit, and modify the point cloud base map and/or the map; the server 120 then receives the reviewed and edited point cloud base map and/or map from the terminal 130.
  • the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, etc., which are not specifically limited here.
  • the server 120 may also send the generated map to each vehicle 110 for display on the vehicle-mounted display device of the vehicle 110 , or send it to the terminal 130 where the user is located.
  • An embodiment of the present application provides a method for generating a map, including:
  • a vehicle without a lidar and an intelligent perception module may be referred to as a vehicle of the second type.
  • the first point cloud trajectory data may refer to point cloud trajectory data corresponding to the second type of vehicle.
  • the first point cloud trajectory data may be a point cloud trajectory constructed from the bird's-eye view stitched images of the second type of vehicle and the associated vehicle pose information.
  • a bird's-eye view stitched image can be obtained based on the images collected by the vehicle's image acquisition devices under multiple viewing angles, and the pose information of the vehicle can be obtained from the data collected by a GNSS module (or GPS module), an IMU, and a wheel speedometer.
  • a vehicle equipped with a laser radar and/or an intelligent perception module may be referred to as a vehicle of the first type.
  • the second point cloud trajectory data may refer to point cloud trajectory data corresponding to the first type of vehicle.
  • the second point cloud trajectory data can be constructed by combining the information collected by the lidar and/or the environment perception and positioning results of the intelligent perception module with the visual information collected by the image acquisition devices and the information from the IMU, wheel speedometer, and GNSS (or GPS) module.
  • the first point cloud trajectory data is constructed first, and the point cloud base map is generated according to the first point cloud trajectory data, and the point cloud base map is used as an alignment medium, and the second point cloud trajectory data and the point cloud The base map is aligned and fused to generate a map, so that maps can be quickly generated for areas with weak GNSS signals or weak GPS signals.
  • the point cloud base map is constructed by using the first point cloud track data with lower precision, and then the second point cloud track data with higher precision and more comprehensive perceived road elements is fused with the point cloud base map, Generating a map can improve the efficiency of map generation and ensure the accuracy and precision of the map.
  • Fig. 2 is a flowchart of a method for generating a map according to an embodiment of the present application.
  • the method can be executed by a computer device with processing capabilities, such as a server, a cloud server, etc., and is not specifically limited here.
  • the method at least includes steps 210 to 250, which are described in detail as follows:
  • Step 210 acquiring a plurality of trajectory data, the trajectory data including vehicle pose information and a bird's-eye view stitching image associated with the vehicle pose information.
  • the trajectory data is collected by the vehicle during driving, wherein multiple trajectory data can come from one vehicle or multiple vehicles.
  • the plurality of track data may be collected during multiple driving of the vehicle.
  • the pose information of the vehicle can be determined according to the information collected by the GNSS (Global Navigation Satellite System) module, the GPS module, the IMU (Inertial Measurement Unit) module, and the wheel speedometer in the vehicle.
  • the IMU is a module composed of various sensors such as a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer.
  • the wheel speedometer is used to detect the distance that the wheel moves within a certain period of time, so as to calculate the change of the relative pose (position and heading) of the vehicle.
  • the pose information of the vehicle may indicate the position information of the vehicle and the attitude information of the vehicle, and the attitude of the vehicle may include a pitch angle, a yaw angle, and a roll angle of the vehicle.
  • the location information of the vehicle can be determined by the GNSS module according to the collected GNSS signals, or by the GPS module according to the collected GPS signals.
  • at location points where the GNSS or GPS signal is lower than a set threshold, the vehicle can perform dead reckoning from the last reliably positioned point, using the position information of that point together with the information collected by the wheel speedometer and the IMU module, to obtain the position information of each position point during driving, as illustrated by the sketch below.
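  • The dead reckoning described above can be summarized with a minimal sketch (not part of the original disclosure; the function and variable names, and the use of a single yaw rate per interval, are illustrative assumptions):

```python
import math

def dead_reckon(x, y, heading, wheel_distance, yaw_rate, dt):
    """Propagate a 2D vehicle pose from wheel-odometry distance and IMU yaw rate.

    x, y           -- previous position (metres)
    heading        -- previous heading (radians)
    wheel_distance -- distance travelled since the last update (metres), from the wheel speedometer
    yaw_rate       -- yaw rate over the interval (rad/s), from the IMU
    dt             -- interval length (seconds)
    """
    # Integrate the travelled distance along the average heading of the interval.
    mid_heading = heading + 0.5 * yaw_rate * dt
    new_x = x + wheel_distance * math.cos(mid_heading)
    new_y = y + wheel_distance * math.sin(mid_heading)
    new_heading = heading + yaw_rate * dt
    return new_x, new_y, new_heading

# Starting from the last point with a reliable satellite fix, the pose is
# propagated step by step while the signal stays below the threshold.
pose = (0.0, 0.0, 0.0)
for step in [(1.2, 0.05, 0.1), (1.1, 0.04, 0.1), (1.3, 0.00, 0.1)]:
    pose = dead_reckon(*pose, *step)
print(pose)
```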
  • the bird's-eye view stitching image is obtained by stitching the images collected by the vehicle in at least two viewing angles along the bird's-eye view.
  • Multiple image capture devices can be installed in the vehicle to capture images of the surrounding environment of the vehicle from multiple perspectives during driving.
  • the image collected can be the image of the environment directly in front of the vehicle, the image of the left side, the image of the right side, the image of the left rear, the image of the right rear, etc.
  • three image acquisition devices are provided in the vehicle, which respectively acquire images from three perspectives of the front, left side, and right side of the vehicle.
  • images under other viewing angles may also be collected.
  • the vehicle can splice the images collected at each location point under multiple viewing angles along a bird's-eye view (Bird's Eye View, BEV) to obtain a bird's-eye view stitched image.
  • the vehicle can also upload the collected images under multiple viewing angles to the server, and the server can stitch them together along the bird's-eye view to obtain the stitched image from the bird's-eye view.
  • FIG. 3 shows a schematic diagram of splicing images along a bird's-eye view to obtain a bird's-eye view stitching.
  • Figures 3A-3C are captured by different cameras on the vehicle at the same location.
  • the image in FIG. 3A is the collected image of the environment directly in front of the vehicle
  • FIG. 3B is the collected image of the environment in front of the left side of the vehicle
  • FIG. 3C is the collected image of the environment to the front right of the vehicle.
  • splicing Fig. 3A-Fig. 3C along the bird's-eye view angle the bird's-eye view stitching image shown in Fig. 3D can be obtained.
  • the image in Fig. 3B belongs to side forward looking perception.
  • the vehicle can collect images at multiple locations in real time. Therefore, multiple images collected by the vehicle can be continuously spliced along the bird's-eye view angle to obtain continuous bird's-eye view stitching images reflecting the surrounding environment of the vehicle's driving track.
  • the stitched bird's-eye view image in the trajectory data is obtained by continuously stitching the stitched bird's-eye view images of multiple locations along the driving track.
  • the image obtained by splicing the images collected at each location point under multiple viewing angles along the bird's-eye view is called the bird's-eye view stitching sub-image.
  • FIG. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching images collected in a vehicle trajectory along a bird's-eye view.
  • orientation optimization can be performed to reduce black edges (in the figure indicated by marker 401).
  • the black border is caused by the incomplete joint of two adjacent bird's-eye view stitching sub-images.
  • free stitching is performed on the bird's-eye view stitching sub-images corresponding to multiple position points on the straight line trajectory to reduce turning distortion (indicated by mark 402 in the figure).
  • step 220 road element identification is performed on each stitched image from a bird's-eye view, and road elements in each stitched image from a bird's-eye view are determined.
  • Road elements may include lane lines (such as solid lane lines and dashed lane lines), road arrows, stop lines, speed bumps, parking space borderlines in the parking lot, parking space entrance lines, etc., which are not specifically limited here.
  • Identifying road elements refers to determining the pixel area where the road elements are located in the stitched image from the bird's-eye view. It can be understood that, on the one hand, the recognition result obtained from road element recognition indicates which road elements are specifically included in the bird's-eye view stitched image, and on the other hand, indicates the position of each road element in the bird's-eye view stitched image.
  • a neural network model can be used to identify road elements on stitched images from a bird's-eye view.
  • for example, Mask R-CNN (Mask Region-based Convolutional Neural Network), PANet (Path Aggregation Network), or FCIS (Fully Convolutional Instance-aware Semantic Segmentation) may be used to segment each road element in the bird's-eye view stitched image, so as to determine the position of each road element in the bird's-eye view stitched image.
  • in some embodiments, step 220 may include: inputting each bird's-eye view stitched image into a road element recognition model; performing road element recognition by the road element recognition model; and outputting the road element information corresponding to each bird's-eye view stitched image, where the road element information indicates the road elements in the corresponding stitched image.
  • the road element recognition model may be constructed by one or more neural networks among convolutional neural network, fully connected neural network, feedforward neural network, long short-term memory network, and recurrent neural network.
  • the road element recognition model may be Mask R-CNN, PANet, FCIS, etc. as listed above.
  • the road element recognition model can be trained with training data before road element recognition.
  • the training data includes multiple sample bird's-eye view stitched images and annotation information of the sample bird's-eye view stitched images.
  • the annotation information is used to indicate the road elements in the stitched image from the bird's-eye view of the corresponding sample.
  • the bird's-eye view stitched image used for training the road element recognition model is referred to as a sample bird's-eye view stitched image.
  • the stitched image of the bird's-eye view of the sample is input into the road element recognition model, and the road element recognition model recognizes the road element on the stitched image of the bird's-eye view of the sample, and outputs the predicted road element information, which is used to indicate the sample Road elements in a bird's eye view stitched image.
  • the predicted road element information not only indicates the position information of the identified road element in the sample bird's-eye view stitched image, but also indicates the semantics of the road element (that is, indicates what kind of road element, such as lane line , speed bumps or stop lines, etc.).
  • based on the predicted road element information and the annotation information, the loss value of the loss function is calculated, and the parameters of the road element recognition model are adjusted through back-propagation according to the loss value. It can be understood that the annotation information of the sample bird's-eye view stitched image also indicates the position information of each road element in the sample bird's-eye view stitched image.
  • the loss function may be set according to actual needs, for example, the loss function may be a cross-entropy loss function, a logarithmic loss function, etc., which are not specifically limited here.
  • after the training of the road element recognition model is completed, the model can be applied online to accurately identify road elements, as sketched below.
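  • A minimal inference sketch is given below for illustration only; the exported model file, its input convention, and the class list are assumptions, not part of the original disclosure:

```python
import torch

# Hypothetical trained road-element segmentation model (e.g. a Mask R-CNN-style
# network exported with torch.jit); the path and class list are assumptions.
ROAD_ELEMENT_CLASSES = ["lane_line", "road_arrow", "stop_line",
                        "speed_bump", "parking_space_line"]

model = torch.jit.load("road_element_recognition.pt").eval()

def recognize_road_elements(bev_image_tensor):
    """Run the recognition model on one bird's-eye-view stitched image.

    bev_image_tensor -- float tensor of shape (3, H, W), values in [0, 1]
    Returns a per-pixel class-index map of shape (H, W).
    """
    with torch.no_grad():
        logits = model(bev_image_tensor.unsqueeze(0))  # assumed output: (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0)
```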
  • Step 230 according to the road elements in each bird's-eye view stitching image and the vehicle pose information associated with each bird's-eye view stitching image, generate first point cloud trajectory data corresponding to each trajectory data.
  • the point cloud trajectory data used to construct the point cloud base map and constructed based on the stitched images from the bird's-eye view and the associated pose information is referred to as the first point cloud trajectory data.
  • the first point cloud trajectory data includes position information of each position point in the trajectory path and a point cloud model of road elements on the trajectory path. It can be understood that the relative positional relationship between the point cloud models of different road elements in the first point cloud trajectory is basically the same as the relative positional relationship presented by the bird's-eye view stitching image.
  • the point cloud model of a road element is a collection of massive points expressing the spatial distribution and target surface characteristics of the road element in the same spatial reference system; after the spatial coordinates of each sampling point of the road element are obtained, all the sampling points of the road element are arranged to obtain the point cloud model of the road element.
  • in some embodiments, step 230 may include: performing, according to the vehicle pose information associated with each bird's-eye view stitched image, three-dimensional reconstruction on each road element in each bird's-eye view stitched image, to obtain the first point cloud trajectory data corresponding to each stitched image.
  • specifically, the 3D point cloud model of each road element can be obtained by 3D reconstruction of the road elements in the bird's-eye view stitched image. On this basis, by combining the pose information associated with the stitched image with the positions of the road elements within the stitched image, the position of each road element in geographic space can be determined; the 3D point cloud models of the road elements are then arranged according to these geographic positions, yielding the first point cloud trajectory data corresponding to the bird's-eye view stitched image (see the sketch below).
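  • The placement of road-element points into geographic space can be illustrated by the following sketch (an assumption-laden simplification: a planar SE(2) pose, a known BEV ground resolution, and BEV image axes already aligned with the vehicle frame):

```python
import numpy as np

def bev_pixels_to_world(pixel_uv, vehicle_pose, metres_per_pixel, bev_centre):
    """Place road-element pixels from a BEV stitched image into world coordinates.

    pixel_uv         -- (N, 2) array of pixel coordinates of one road element
    vehicle_pose     -- (x, y, yaw) of the vehicle associated with the image
    metres_per_pixel -- BEV ground resolution (assumed known from calibration)
    bev_centre       -- pixel coordinates of the vehicle origin in the BEV image
    Returns an (N, 3) point cloud (z = 0 on the local ground plane).
    """
    x, y, yaw = vehicle_pose
    # Pixel offsets relative to the vehicle, converted to metres.
    local = (np.asarray(pixel_uv, dtype=float) - np.asarray(bev_centre)) * metres_per_pixel
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    world_xy = local @ rot.T + np.array([x, y])
    return np.hstack([world_xy, np.zeros((len(world_xy), 1))])
```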
  • deep learning may be used to perform three-dimensional reconstruction on the road elements in the bird's-eye view stitched image.
  • specifically, a neural network model used to generate the 3D point cloud model (referred to here as the 3D reconstruction model for the sake of distinction) can be trained, and each road element in the bird's-eye view stitched image is then reconstructed in 3D through the 3D reconstruction model.
  • the three-dimensional reconstruction model may be a model constructed by a convolutional neural network, a fully connected neural network, or the like.
  • the three-dimensional reconstruction model may be an Im2Avatar model, a generative adversarial network, etc., which are not specifically limited here.
  • Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to the bird's-eye view stitched image according to an embodiment.
  • the edges of each road element appear to be lines, they are actually point sequences. Since the points are relatively dense, the visual effect feels like lines.
  • different road elements can be represented by point clouds of different colours, for example, lane lines by blue point clouds, parking space lines by green point clouds, and arrows by red point clouds, and so on.
  • in addition, the height difference can be sensed through the pitch angle of the vehicle, so the height of the vehicle in the vertical direction can be determined; for example, the vehicle has different heights in the vertical direction on the first underground floor and the second underground floor of an underground parking lot. Further, when the vehicle is driving on a ramp, the grade of the ramp can be calculated from the vehicle's pitch angle, as in the sketch below.
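  • A short, hedged sketch of this calculation (the function name and the use of odometry distance along the ramp are assumptions for illustration):

```python
import math

def height_and_grade(pitch_rad, distance_along_slope):
    """Estimate vertical height change and road grade from vehicle pitch.

    pitch_rad            -- vehicle pitch angle (radians), positive nose-up
    distance_along_slope -- odometry distance travelled on the ramp (metres)
    """
    height_change = distance_along_slope * math.sin(pitch_rad)
    grade_percent = math.tan(pitch_rad) * 100.0  # slope expressed as a percentage
    return height_change, grade_percent

# e.g. a 100 m ramp driven at a -5 degree pitch descends roughly 8.7 m.
print(height_and_grade(math.radians(-5.0), 100.0))
```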
  • Fig. 6 is a schematic diagram showing the projection of the point cloud model of each road element in the point cloud trajectory data on the vertical projection plane according to an embodiment of the present application.
  • FIG. 6 it can be clearly seen that there is a height difference between the first plane 610 and the second plane 620 and the third plane 630 in the vertical direction. Therefore, the first plane 610 , the second plane 620 , and the third plane 630 correspond to different floors, and the white shaded parts in FIG. 6 represent road elements in the corresponding floors.
  • the first oblique line 621 , the second oblique line 622 and the third oblique line 623 between the first plane 610 and the second plane 620 may indicate that the first floor 610 and the second floor 620 are connected at different positions.
  • the fourth oblique line 631 between the second plane 620 and the third plane 630 represents the inclined road connecting the second plane 620 and the third plane 630 .
  • Fig. 7 is a flow chart of generating first point cloud trajectory data according to an embodiment of the present application. As shown in Fig. 7, the process includes:
  • Step 710 stitching images along the bird's-eye view.
  • the cameras used for collecting images on the vehicle may be surround-view cameras, which collect images of the environment around the vehicle during driving based on fisheye imaging.
  • images from multiple perspectives can be collected.
  • the distorted region is taken as the ROI (Region of Interest), and distortion correction is also performed, to avoid distortion introduced in the imaging and splicing process from making the subsequently generated map inaccurate.
  • this step also involves inverse perspective transformation of the images collected under multiple viewing angles, so as to project them into the bird's-eye view and then splice them into the bird's-eye view stitched image; a hedged sketch of such a projection is given below.
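  • For illustration only (not the original implementation): one common way to realize the fisheye correction and inverse perspective mapping is with OpenCV, where the calibration matrices and the four ground-plane correspondence points are assumptions that would come from camera calibration:

```python
import cv2
import numpy as np

def camera_to_bev(image, K, D, src_pts, dst_pts, bev_size):
    """Project one fisheye camera image into a bird's-eye-view patch.

    K, D     -- fisheye intrinsic matrix and distortion coefficients
    src_pts  -- four ground-plane points in the undistorted image (pixels)
    dst_pts  -- the same four points in BEV coordinates (pixels)
    bev_size -- (width, height) of the output BEV patch
    """
    # 1. Remove fisheye distortion so that ground lines stay straight.
    undistorted = cv2.fisheye.undistortImage(image, K, D, Knew=K)
    # 2. Inverse perspective mapping: homography from image plane to ground plane.
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    return cv2.warpPerspective(undistorted, H, bev_size)

def stitch_bev(patches):
    """Naively combine per-camera BEV patches by taking the per-pixel maximum."""
    return np.maximum.reduce(patches)
```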
  • Step 720 identifying road elements.
  • Each road element in the bird's-eye view stitched image is determined through step 720 .
  • Step 730 three-dimensional reconstruction.
  • the point cloud model of each road element in the bird's-eye view stitching image is obtained, and then the point cloud models are combined to obtain the first point cloud trajectory data.
  • step 240 splicing the first point cloud trajectory data corresponding to multiple trajectory data to obtain a point cloud base map.
  • different first point cloud trajectory data cover different environmental regions; therefore, multiple first point cloud trajectory data can be spliced to obtain a point cloud base map reflecting the basic global region.
  • in some embodiments, step 240 may include: determining first target road elements representing the same geographic location in any two of the multiple first point cloud trajectory data; and splicing the multiple first point cloud trajectory data based on the first target road elements to obtain the point cloud base map.
  • the first target road element refers to road elements representing the same geographic location in any two of the plurality of first point cloud trajectory data.
  • Different driving trajectories may have overlapping trajectories, so there may be road elements representing the same geographic location (ie, the first target road element) in the first point cloud trajectory data constructed based on different trajectory data.
  • the first point cloud trajectory data not only contains the point cloud model of each road element but also the element semantics of each road element (the element semantics indicate which kind of road element it is) and its position information. Therefore, based on the element semantics and position information of each road element in the point cloud trajectory data, together with its relative positional relationship to the other road elements near it, the road elements in any two first point cloud trajectory data can be compared, and the first target road elements representing the same geographic location can thus be determined.
  • the first target road elements in different first point cloud trajectory data can be made to coincide by moving the first point cloud trajectory data; it can be understood that, after the movement, the position of the coinciding first target road elements is the splicing seam of the different first point cloud trajectory data. A sketch of such an alignment is given below.
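  • As an illustrative sketch only (the use of matched element centroids and a least-squares rigid transform is an assumption, not necessarily the method of the disclosure):

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst points.

    src, dst -- (N, 2) arrays of matched first-target-road-element centroids
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def splice(trajectory_points, R, t):
    """Move one first point cloud trajectory onto the base map frame."""
    return trajectory_points @ R.T + t
```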
  • Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
  • Step 250 using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
  • the point cloud base map can reflect the overall map skeleton of the geographical environment area (especially an indoor area with weak GNSS and GPS signals), and this way of constructing the point cloud base map is relatively fast.
  • although the point cloud base map is constructed from the first point cloud trajectory data, which is generated from the bird's-eye view stitched images and the associated vehicle pose information, in practice the viewing angles that the on-board image acquisition devices can perceive are limited, and point cloud trajectory data generated only from bird's-eye view stitched images and the corresponding pose information may not fully reflect all road elements in the geographical environment. Therefore, in the embodiments of the present application, second point cloud trajectory data is further obtained, and the map is generated by fusing the second point cloud trajectory data, which can express more road elements, with the point cloud base map.
  • the second point cloud trajectory data may include point cloud models of more road elements, for example, road signs, pillars, ultrasonic obstacles, walls, gates, railings, zebra crossings, and so on.
  • in some embodiments, step 250 may include: using the point cloud base map as an alignment medium, aligning the second point cloud trajectory data with the point cloud base map, and determining the road elements in the second point cloud trajectory data that are newly added relative to the point cloud base map; the point cloud models of the newly added road elements are then added to the point cloud base map to obtain the map.
  • the newly added road elements refer to road elements that exist in the second point cloud trajectory data but do not exist in the point cloud base map.
  • since the point cloud base map basically reflects the global skeleton of the geographical environment area, there is a relatively high probability that road elements representing the same geographic location exist in both the second point cloud trajectory data and the point cloud base map; based on these road elements, the second point cloud trajectory data can be semantically aligned with the point cloud base map, thereby positioning the second point cloud trajectory data on the point cloud base map.
  • on this basis, the second point cloud trajectory data is compared with the point cloud base map to determine the road elements newly added relative to the point cloud base map; the position of each newly added road element in the point cloud base map is determined from its position information in the second point cloud trajectory data, and, according to the determined position information, the point cloud model of the newly added road element is added to the point cloud base map, so that the map is obtained (a hedged sketch follows).
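  • The following is only an illustrative sketch of such a fusion step; the per-class nearest-neighbour test and the distance threshold are assumptions introduced for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_new_elements(base_map, aligned_elements, distance_threshold=0.5):
    """Add road elements from aligned second point cloud data that the base map lacks.

    base_map         -- dict: semantic label -> (N, 3) point array
    aligned_elements -- list of (label, (M, 3) point array) already aligned to the base map
    """
    fused = {label: pts.copy() for label, pts in base_map.items()}
    for label, pts in aligned_elements:
        existing = fused.get(label)
        if existing is not None and len(existing) > 0:
            # Median nearest-neighbour distance decides whether the element is new.
            dists, _ = cKDTree(existing).query(pts)
            if np.median(dists) < distance_threshold:
                continue  # already represented in the base map
        fused[label] = pts if existing is None else np.vstack([existing, pts])
    return fused
```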
  • in summary, the first point cloud trajectory data is constructed from the vehicle's bird's-eye view stitched images and the vehicle's pose information, the point cloud base map is generated from the first point cloud trajectory data, and then, with the point cloud base map as the alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate the map.
  • since the point cloud base map used as the alignment medium is generated first and basically reflects the overall layout of the geographical environment area, the alignment success rate between the second point cloud trajectory data and the point cloud base map is ensured and the probability of misalignment is reduced.
  • in practice, some vehicles are equipped with a lidar and/or an intelligent perception module (also called a perception chip; the intelligent perception module can identify things in the environment around the vehicle, such as road elements, in real time from the acquired images and from the information collected by other on-board sensors, such as the wheel speedometer and the IMU), while other vehicles are equipped with neither a lidar nor an intelligent perception module.
  • for ease of description, vehicles equipped with a lidar and/or an intelligent perception module are referred to as vehicles of the first type, and vehicles equipped with neither a lidar nor an intelligent perception module are referred to as vehicles of the second type.
  • lidar can detect the size and location of objects in the environment around the vehicle. It can be understood that, for a vehicle equipped with a lidar, the images collected by the vehicle, the signals detected by the lidar, and the data from the wheel speedometer, the IMU, and the GNSS module (or GPS module) can be combined, so the point cloud trajectory data corresponding to the vehicle's driving trajectory has more reference information; in addition, the detection accuracy of lidar is higher and its sensing range is wider. Comparatively speaking, therefore, the precision and accuracy of point cloud trajectory data from vehicles equipped with lidar are higher.
  • the intelligent perception module can combine, in real time, the information collected by the sensors in the vehicle (such as the image acquisition devices, the wheel speedometer, the IMU, and the GNSS module (or GPS module)) to perceive and understand the scene around the vehicle in real time, for example, semantic classification of obstacle types, road signs and markings, and detection of pedestrians, vehicles, and traffic signals; positioning is then performed based on the perception and understanding results, helping the vehicle understand its position relative to its environment more accurately.
  • although the precision and accuracy of point cloud trajectory data from the first type of vehicle are higher than those of point cloud trajectory data from the second type of vehicle, the number of users of the first type of vehicle on the market is much lower than that of the second type of vehicle; if the map were constructed only from point cloud trajectory data from the first type of vehicle, the map construction period would be long. In this case, the method of the present application can therefore be used to construct the map.
  • as mentioned above, the first point cloud trajectory data may be a point cloud trajectory constructed from the bird's-eye view stitched images of the second type of vehicle and the associated vehicle pose information.
  • the bird's-eye view stitched image can be obtained based on the images collected by the image acquisition devices corresponding to multiple viewing angles in the second type of vehicle, and the GNSS module (or GPS module), the IMU, and the wheel speedometer are used to collect the pose information of the vehicle.
  • the second point cloud trajectory data may refer to point cloud trajectory data corresponding to the first type of vehicle.
  • the second point cloud trajectory data can be constructed by combining the information collected by the lidar and/or the environment perception and positioning results of the intelligent perception module with the visual information collected by the image acquisition devices and the information from the IMU, wheel speedometer, GNSS (or GPS) module, and so on. It can be understood that the second point cloud trajectory data also indicates the driving trajectory of the vehicle, as well as each road element in the driving environment and its position information.
  • in this way, the point cloud base map is constructed using the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map, which improves map generation efficiency and ensures map accuracy and precision.
  • a scheme that directly splices different point cloud trajectory data to generate a map is prone to splicing failure or semantic alignment failure when two point cloud trajectory data have no trajectory intersection points, or only a few. If the solution of the embodiments of the present application is adopted, since a point cloud base map reflecting the basic global situation of the environment area is pre-built, this problem can be effectively avoided, ensuring that the second point cloud trajectory data can be fused with the point cloud base map.
  • the point cloud base map is constructed based on the first point cloud track data of the second type of vehicle, and the point cloud base map is optimized and updated by using the second point cloud track data of the first type of vehicle.
  • if the map were generated only from the point cloud trajectory data of the first type of vehicle, the map generation period would be long. Since the number of vehicles of the second type is higher, the corresponding first point cloud trajectory data is more plentiful and covers a larger part of the geographical environment area; therefore, the first point cloud trajectory data is used first to construct the point cloud base map, and the point cloud base map is then optimized and updated with the second point cloud trajectory data corresponding to vehicles of the first type to obtain the map, so that a compromise is made between shortening the map generation cycle and improving the accuracy of the map.
  • the trajectory data corresponding to vehicles with different hardware configurations can be used to generate maps, making the data sources more comprehensive.
  • the solutions of the embodiments of the present application can be applied to constructing maps of areas with weak GNSS signals (or GPS signals), such as maps of indoor parking lots.
  • Fig. 9 is a schematic diagram of splicing two first point cloud trajectory data according to an embodiment. As shown in Fig. 9, after the first point cloud trajectory data I and the first point cloud trajectory data II are spliced, there is a disconnected area 910; by conventional judgment, the disconnected area 910 may not match the actual geographical environment area.
  • in view of this, in some embodiments the method may also include: sending the point cloud base map to the client, so that the user can splice and edit the point cloud base map on the client, and then receiving the spliced and edited point cloud base map from the client. In this way, the part of the point cloud base map containing the disconnected area can be edited manually, thereby improving the point cloud base map.
  • the method may further include:
  • Step 1010 acquire a candidate point cloud trajectory data from the candidate point cloud trajectory data set.
  • the candidate point cloud trajectory data in the candidate point cloud trajectory data set may be point cloud trajectory data from the first type of vehicle. In some other embodiments, the candidate point cloud trajectory data set may also include point cloud trajectory data from the first type of vehicle and the second type of vehicle.
  • the candidate point cloud trajectory data may be acquired from the candidate point cloud trajectory data randomly or in a set order.
  • in some embodiments, step 1010 may include: acquiring one candidate point cloud trajectory data from the candidate point cloud trajectory data set in order of priority from high to low, according to the priority corresponding to each candidate point cloud trajectory data in the set.
  • the priority corresponding to each candidate point cloud trajectory data can be set according to the vehicle information of the vehicle from which the candidate point cloud trajectory data originates, wherein the hardware modules installed in the vehicle can be determined according to the vehicle information.
  • for example, candidate point cloud trajectory data originating from a vehicle equipped with both a laser radar and an intelligent perception module has the first priority; candidate point cloud trajectory data originating from a vehicle equipped with a laser radar or an intelligent perception module has the second priority; and candidate point cloud trajectory data originating from a vehicle with neither a lidar nor an intelligent perception module has the third priority, where the first priority is higher than the second priority and the second priority is higher than the third priority. A sketch of this ordering follows.
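  • Purely as an illustration of the ordering above (the dictionary keys and function names are assumptions):

```python
def candidate_priority(vehicle_info):
    """Lower value = higher priority, following the ordering described above."""
    if vehicle_info.get("lidar") and vehicle_info.get("perception_module"):
        return 1   # first priority: lidar AND intelligent perception module
    if vehicle_info.get("lidar") or vehicle_info.get("perception_module"):
        return 2   # second priority: lidar OR intelligent perception module
    return 3       # third priority: neither

def next_candidate(candidate_set):
    """Pop the highest-priority candidate point cloud trajectory data."""
    candidate_set.sort(key=lambda c: candidate_priority(c["vehicle_info"]))
    return candidate_set.pop(0)
```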
  • Step 1020 determine the coverage of the candidate point cloud trajectory data relative to the point cloud base map.
  • specifically, the length of the target part of the driving trajectory indicated by the candidate point cloud trajectory data that lies within the point cloud base map can be determined; this length is then divided by the total length of the driving trajectory indicated by the candidate point cloud trajectory data, and the resulting ratio is used as the coverage of the candidate point cloud trajectory data relative to the point cloud base map (see the sketch below).
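  • A hedged sketch of this ratio; deciding whether a track point lies within the base map via a nearest-neighbour radius is an assumption made for the example:

```python
import numpy as np
from scipy.spatial import cKDTree

def coverage_ratio(candidate_track, base_map_points, radius=2.0):
    """Fraction of a candidate driving track that lies inside the point cloud base map.

    candidate_track -- (N, 2) ordered track positions of the candidate data
    base_map_points -- (M, 2) positions of road-element points in the base map
    radius          -- distance (metres) within which a track point counts as covered
    """
    segment_lengths = np.linalg.norm(np.diff(candidate_track, axis=0), axis=1)
    dists, _ = cKDTree(base_map_points).query(candidate_track[:-1])
    covered = segment_lengths[dists < radius].sum()
    return covered / segment_lengths.sum()

# If coverage_ratio(...) is greater than the set threshold, the candidate is used as
# second point cloud trajectory data for fusion; otherwise it is spliced in to
# update (extend) the point cloud base map.
```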
  • Step 1030 if the coverage is greater than the set threshold, the candidate point cloud trajectory data is used as the second point cloud trajectory data.
  • the candidate point cloud trajectory data can be used as the second point cloud trajectory data, so as to align and fuse the candidate point cloud trajectory data with the point cloud base map.
  • the method may further include: if the coverage is not greater than the set threshold, splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map.
  • that is, in this case the candidate point cloud trajectory data is used as first point cloud trajectory data, and it is used to update the point cloud base map.
  • in some embodiments, the step of splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map further includes:
  • Step 1110 determine the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data.
  • the target vehicle type refers to the vehicle type to which the source vehicle of the candidate point cloud trajectory data belongs.
  • the vehicles may be classified according to the types of hardware configured on the vehicles.
  • the vehicle type set based on the hardware on the vehicle includes the first type and the second type, wherein, the vehicle belonging to the first type is equipped with a laser radar and/or an intelligent perception module; The second type of vehicle is not equipped with lidar and intelligent perception modules.
  • Step 1120 based on the correspondence between vehicle types and weights, determine the target weight corresponding to the target vehicle type.
  • the target weight refers to the weight corresponding to the target vehicle type. Wherein, the corresponding relationship between vehicle types and weights can be set according to actual needs.
  • Step 1130, if the target weight is greater than the weight threshold, move the second target road elements in the point cloud base map so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data; here, a second target road element refers to a road element representing the same geographic location in both the point cloud base map and the candidate point cloud trajectory data.
  • Step 1140, if the target weight is not greater than the weight threshold, move the second target road elements in the candidate point cloud trajectory data so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data.
  • Step 1150, combine the point cloud base map and the candidate point cloud trajectory data after the movement to obtain the updated point cloud base map.
  • in this way, higher weights can be assigned to vehicles whose point cloud trajectory data is more accurate, and lower weights to vehicles whose point cloud trajectory data is less accurate; for example, vehicles belonging to the first type are assigned a higher weight. During splicing, this ensures that the more precise point cloud trajectory data is moved less and the less precise point cloud trajectory data is moved more, avoiding a reduction in the accuracy and precision of the originally more precise point cloud trajectory data caused by moving it during splicing.
  • that is, the target weight is determined according to the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data, and the object to be moved during splicing is then determined according to the target weight, so that moving the originally more precise point cloud trajectory data is avoided and the positional accuracy of each road element in the point cloud base map is ensured, as in the sketch below.
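  • The choice of which side moves can be illustrated as follows (a sketch under assumptions: 2D points, matched second target road elements given as index pairs, and reuse of the rigid_transform_2d() helper from the splicing sketch above):

```python
import numpy as np

def splice_with_weights(base_map_pts, candidate_pts, matches, target_weight,
                        weight_threshold=0.5):
    """Decide which side moves when splicing candidate data into the base map.

    base_map_pts, candidate_pts -- (N, 2) / (M, 2) point arrays
    matches       -- list of (base_index, candidate_index) pairs of second target road elements
    target_weight -- weight looked up from the candidate vehicle's type
    Reuses rigid_transform_2d() from the splicing sketch above.
    """
    base_xy = base_map_pts[[i for i, _ in matches]]
    cand_xy = candidate_pts[[j for _, j in matches]]

    if target_weight > weight_threshold:
        # Higher-weight (more accurate) candidate data stays fixed; move the base map.
        R, t = rigid_transform_2d(base_xy, cand_xy)   # base map -> candidate frame
        moved_base = base_map_pts @ R.T + t
        return np.vstack([moved_base, candidate_pts])
    # Lower-weight candidate data moves onto the base map.
    R, t = rigid_transform_2d(cand_xy, base_xy)       # candidate -> base map frame
    moved_candidate = candidate_pts @ R.T + t
    return np.vstack([base_map_pts, moved_candidate])
```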
  • Fig. 12 is a flowchart of a method for generating a map according to a specific embodiment of the present application.
  • the steps filled in gray in FIG. 12 may be steps participated by equipment or manually.
  • as shown in Fig. 12, the process includes: the trajectory data is uploaded to the server, and the server identifies road elements based on the trajectory data and generates the corresponding point cloud trajectory data; data screening is then performed, for which the process shown in Fig. 10 can be referred to, that is, the coverage of the point cloud trajectory data relative to the point cloud base map is determined. If the coverage is not greater than the set threshold, it indicates that the coverage contribution of the point cloud trajectory data is high, and the point cloud trajectory data is used for splicing to generate the point cloud base map.
  • the point cloud base map can be sent to the client, and the user can splice and edit the point cloud base map.
  • the point cloud trajectory data can be used for alignment and fusion with the point cloud base map to generate a map.
  • the layers in the map include a positioning layer and a logical layer. Further, after the map is generated, it can be sent to the client, so that the user can edit the positioning layer and/or the logical layer based on the map displayed on the client. After the map is edited, it can be further inspected by technicians to obtain a high-precision and high-accuracy map.
  • An embodiment of the present application provides a map generation device, including: a data acquisition module, a splicing module, and a fusion module.
  • the data acquisition module is used to acquire the generated first point cloud trajectory data, where the first point cloud trajectory data is point cloud trajectory data corresponding to the second type of vehicle, and to acquire the determined second point cloud trajectory data, where the second point cloud trajectory data is point cloud trajectory data corresponding to the first type of vehicle; the splicing module is used to generate a point cloud base map according to the first point cloud trajectory data; and the fusion module is used to use the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • the data acquisition module may include an acquisition module, an identification module and a generation module.
  • the device for generating the map will be described in detail below in conjunction with the accompanying drawings.
  • Fig. 13 is a block diagram of a device for generating a map according to an embodiment of the present application.
  • as shown in Fig. 13, the device for generating a map includes: an acquisition module 1310, used to acquire multiple trajectory data, each including vehicle pose information and the bird's-eye view stitched image associated with the vehicle pose information; an identification module 1320, used to perform road element recognition on each bird's-eye view stitched image and determine the road elements in each stitched image; a generation module 1330, used to generate, according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with each stitched image, the first point cloud trajectory data corresponding to each trajectory data; a splicing module 1340, used to splice the first point cloud trajectory data corresponding to the multiple trajectory data to obtain a point cloud base map; and a fusion module 1350, used to use the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  • in some embodiments, the recognition module 1320 includes an input unit configured to input each bird's-eye view stitched image into the road element recognition model, which performs road element recognition and outputs the road element information corresponding to each stitched image, where the road element information is used to indicate the road elements in the corresponding bird's-eye view stitched image.
  • the generation module 1330 is further configured to: according to the vehicle pose information associated with each bird's-eye view stitching image, perform three-dimensional reconstruction on each road element in each bird's-eye view stitching image to obtain each bird's-eye view stitching image The first point cloud trajectory data corresponding to the image.
  • in some embodiments, the splicing module 1340 includes a first target road element determining unit, configured to determine first target road elements representing the same geographic location in any two of the multiple first point cloud trajectory data; based on the first target road elements, the multiple first point cloud trajectory data are spliced to obtain the point cloud base map.
  • the map generation device further includes: a sending module, configured to send the point cloud base map to the client, so that the user can stitch and edit the point cloud base map on the client.
  • the map generation device also includes: a candidate point cloud trajectory data acquisition module, configured to acquire one piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set; a coverage determination module, configured to determine the coverage of the candidate point cloud trajectory data relative to the point cloud base map; and a second point cloud trajectory data determination module, configured to use the candidate point cloud trajectory data as the second point cloud trajectory data if the coverage is greater than a set threshold.
  • the map generation device also includes: an update module, configured to splice the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map if the coverage is not greater than the set threshold.
  • the update module includes: a target vehicle type determination unit, configured to determine the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data; a target weight determination unit, configured to determine the target weight corresponding to the target vehicle type based on the correspondence between vehicle types and weights; a first moving unit, configured to move the second target road element in the point cloud base map, if the target weight is greater than a weight threshold, so that it overlaps with the second target road element in the candidate point cloud trajectory data, where a second target road element refers to a road element representing the same geographic location in the point cloud base map and the candidate point cloud trajectory data; and a second moving unit, configured to move the second target road element in the candidate point cloud trajectory data, if the target weight is not greater than the weight threshold, so that it overlaps with the second target road element in the point cloud base map.
  • the candidate point cloud trajectory data acquisition module is further configured to: acquire one piece of candidate point cloud trajectory data from the candidate point cloud trajectory data set in descending order of the priority corresponding to each piece of candidate point cloud trajectory data in the set.
  • the fusion module includes: a new road element determination unit, configured to use the point cloud base map as an alignment medium, align the second point cloud trajectory data with the point cloud base map, and determine the road elements that the second point cloud trajectory data adds relative to the point cloud base map; and an adding unit, configured to add the point cloud models of the newly added road elements to the point cloud base map to obtain a map.
  • Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application.
  • the electronic device may be a physical server, a cloud server, etc., which is not specifically limited here.
  • the electronic device in this application may include: a processor 1410 and a memory 1420, where computer-readable instructions are stored on the memory 1420, and when the computer-readable instructions are executed by the processor 1410, the method in any one of the above method embodiments is implemented.
  • Processor 1410 may include one or more processing cores.
  • the processor 1410 connects various parts of the entire electronic device through various interfaces and lines, and performs the operations of the electronic device by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1420 and by calling the data stored in the memory 1420.
  • the processor 1410 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
  • the processor 1410 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like.
  • the CPU mainly handles the operating system, user interface and application programs, etc.
  • the GPU is used to render and draw the displayed content
  • the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 1410 and may instead be implemented by a separate communication chip.
  • the memory 1420 may include a random access memory (RAM), and may also include a read-only memory (ROM).
  • the memory 1420 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • the memory 1420 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function or an alarm function), instructions for implementing the various method embodiments described above, and the like.
  • the data storage area may also store data created by the electronic device during use (such as disguised response instructions and acquired process states) and the like.
  • the present application also provides a computer-readable storage medium, on which computer-readable instructions are stored, and when the computer-readable instructions are executed by a processor, the method in any one of the foregoing method embodiments is implemented.
  • the computer readable storage medium may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium has storage space for computer-readable instructions for performing any of the method steps in the methods described above. These computer-readable instructions can be read from or written into one or more computer program products, and may, for example, be compressed in a suitable form.
  • a computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method in any of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Instructional Devices (AREA)
  • Navigation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A map generation method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining generated first point cloud trajectory data, wherein the first point cloud trajectory data corresponds to point cloud trajectory data of a second type of vehicle; obtaining determined second point cloud trajectory data, wherein the second point cloud trajectory data corresponds to point cloud trajectory data of a first type of vehicle; generating a point cloud base map according to the first point cloud trajectory data; and using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map. A map can be quickly generated for a region having a weak navigation signal.

Description

Map generation method and apparatus, electronic device, and storage medium
This application claims priority to the Chinese patent application No. 202111646647.8, entitled "Map generation method, device, electronic equipment and storage medium" and filed with the State Intellectual Property Office on December 30, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of map technology, and in particular to a map generation method and apparatus, an electronic device, and a storage medium.
Background
In the related art, there is generally no map for indoor areas where GPS (Global Positioning System) or GNSS (Global Navigation Satellite System) signals are weak, such as indoor parking lots. Therefore, when a user's vehicle enters an indoor parking lot and the user is not familiar with its internal environment, the user is likely to get lost because no map of the parking lot is available, and may spend a lot of time looking for a parking space.
Summary
To solve or partially solve the problems in the related art, embodiments of the present application provide a map generation method and apparatus, an electronic device, and a storage medium, which can quickly generate a map for an area with a weak signal and facilitate parking for users.
According to an aspect of the embodiments of the present application, a map generation method is provided, including: acquiring generated first point cloud trajectory data, where the first point cloud trajectory data corresponds to point cloud trajectory data of a second type of vehicle; acquiring determined second point cloud trajectory data, where the second point cloud trajectory data corresponds to point cloud trajectory data of a first type of vehicle; generating a point cloud base map according to the first point cloud trajectory data; and using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
In an embodiment, the first point cloud trajectory data is generated as follows: acquiring multiple pieces of trajectory data, where each piece of trajectory data includes vehicle pose information and a bird's-eye view stitched image associated with the vehicle pose information; performing road element recognition on each bird's-eye view stitched image to determine the road elements in each bird's-eye view stitched image; and generating the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with each bird's-eye view stitched image. Generating the point cloud base map according to the first point cloud trajectory data includes: splicing the first point cloud trajectory data corresponding to the multiple pieces of trajectory data to obtain the point cloud base map.
According to an aspect of the embodiments of the present application, a map generation apparatus is provided, including: a data acquisition module, configured to acquire generated first point cloud trajectory data, where the first point cloud trajectory data corresponds to point cloud trajectory data of a second type of vehicle, and to acquire determined second point cloud trajectory data, where the second point cloud trajectory data corresponds to point cloud trajectory data of a first type of vehicle; a splicing module, configured to generate a point cloud base map according to the first point cloud trajectory data; and a fusion module, configured to use the point cloud base map as an alignment medium and align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
In an embodiment, the data acquisition module includes an acquisition module, an identification module, and a generation module. The acquisition module is configured to acquire multiple pieces of trajectory data, where each piece of trajectory data includes vehicle pose information and a bird's-eye view stitched image associated with the vehicle pose information; the identification module is configured to perform road element recognition on each bird's-eye view stitched image and determine the road elements in each bird's-eye view stitched image; the generation module is configured to generate the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with each bird's-eye view stitched image; and the splicing module splices the first point cloud trajectory data corresponding to the multiple pieces of trajectory data to obtain the point cloud base map.
According to an aspect of the embodiments of the present application, an electronic device is provided, including: a processor; and a memory storing computer-readable instructions which, when executed by the processor, implement the map generation method described above.
According to an aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the map generation method described above is implemented.
In the solution of the present application, the first point cloud trajectory data is first constructed from the bird's-eye view stitched images of the vehicle and the vehicle pose information, a point cloud base map is generated according to the first point cloud trajectory data, and the point cloud base map is then used as an alignment medium with which the second point cloud trajectory data is aligned and fused to generate a map. The solution can be applied to generate maps for areas with weak GNSS or GPS signals, facilitating parking for users; for example, a map of an indoor parking lot can be generated, which solves the problem in the related art that users lose their way because no map of the indoor parking lot exists. By constructing the point cloud base map from the lower-precision first point cloud trajectory data and then fusing it with the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, the embodiments of the present application improve map generation efficiency while guaranteeing the accuracy and precision of the map.
Moreover, in the solution of the embodiments of the present application, since the point cloud base map serving as the alignment medium is generated first and basically reflects the global situation of the geographic area, the success rate of aligning the second point cloud trajectory data with the point cloud base map is guaranteed and the probability of alignment failure is reduced.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the present application.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following more detailed description of exemplary embodiments of the present application with reference to the accompanying drawings, in which the same reference numerals generally represent the same components.
Fig. 1 is a schematic diagram of an application scenario of the solution of the present application according to an embodiment of the present application.
Fig. 2 is a flowchart of a map generation method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of stitching images along a bird's-eye view to obtain a bird's-eye view stitched image.
Fig. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching, along a bird's-eye view, the images collected along a vehicle's driving trajectory.
Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to a bird's-eye view stitched image according to a specific embodiment.
Fig. 6 is a schematic diagram of the projection, on a vertical projection plane, of the point cloud models of road elements in point cloud trajectory data according to an embodiment of the present application.
Fig. 7 is a flowchart of generating first point cloud trajectory data according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
Fig. 9 is a schematic diagram of splicing two pieces of first point cloud trajectory data according to an embodiment.
Fig. 10 is a flowchart of the steps preceding step 250 according to an embodiment of the present application.
Fig. 11 is a flowchart of updating a point cloud base map according to candidate point cloud trajectory data according to an embodiment of the present application.
Fig. 12 is a flowchart of a map generation method according to a specific embodiment of the present application.
Fig. 13 is a block diagram of a map generation apparatus according to an embodiment of the present application.
Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. Although embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that this application will be thorough and complete and will fully convey the scope of the application to those skilled in the art.
The terminology used in this application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the appended claims, the singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. Although the terms "first", "second", "third", and so on may be used in this application to describe various information, such information should not be limited to these terms; the terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present application, first information may also be called second information and, similarly, second information may also be called first information. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "plurality" means two or more, unless otherwise specifically defined.
The technical solution of the present application is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario of this solution according to an embodiment of the present application. As shown in Fig. 1, the application scenario includes a vehicle 110, a server 120, and a terminal 130. The vehicle 110 and the server 120 may establish a communication connection through a wired or wireless network, and the terminal 130 and the server 120 establish a communication connection through a wired or wireless network.
Based on the communication connection between the vehicle 110 and the server 120, the vehicle 110 can report its own trajectory data to the server 120, so that the server 120 can generate a map according to the method of the embodiments of the present application based on the trajectory data of the vehicle. The server 120 may be an independent physical server or a cloud server, which is not specifically limited here.
After the server 120 generates the point cloud base map and/or the map, it may also send them to the terminal 130, where the user reviews, edits, and modifies the point cloud base map and/or the map; the server 120 then receives the reviewed and edited point cloud base map and/or map from the terminal 130. The terminal may be a smartphone, a tablet computer, a notebook computer, a desktop computer, or the like, which is not specifically limited here.
In some embodiments, the server 120 may also send the generated map to each vehicle 110 for display on the vehicle's in-vehicle display device, or send it to the terminal 130 where the user is located.
An embodiment of the present application provides a map generation method, including:
1) Acquiring generated first point cloud trajectory data, where the first point cloud trajectory data corresponds to point cloud trajectory data of a second type of vehicle.
In the embodiments of the present application, a vehicle not equipped with a lidar or an intelligent perception module may be referred to as a vehicle of the second type. The first point cloud trajectory data refers to point cloud trajectory data corresponding to the second type of vehicle; for example, it may be a point cloud trajectory constructed from the bird's-eye view stitched images of a second-type vehicle and the associated vehicle pose information. In the embodiments of the present application, the bird's-eye view stitched image may be obtained from the images collected by image acquisition devices corresponding to multiple viewing angles on the second type of vehicle, and the vehicle pose information may be collected by a GNSS module (or GPS module), an IMU, and a wheel speedometer.
2) Acquiring determined second point cloud trajectory data, where the second point cloud trajectory data corresponds to point cloud trajectory data of a first type of vehicle.
In the embodiments of the present application, a vehicle equipped with a lidar and/or an intelligent perception module may be referred to as a vehicle of the first type. The second point cloud trajectory data refers to point cloud trajectory data corresponding to the first type of vehicle; for example, it may be constructed by combining the information collected by the lidar and/or the environment perception and positioning results of the intelligent perception module, the visual information collected by the image acquisition devices, and the information from the IMU, the wheel speedometer, and the GNSS (or GPS) module.
3) Generating a point cloud base map according to the first point cloud trajectory data.
4) Using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
It can be seen that the embodiments of the present application first construct the first point cloud trajectory data, generate the point cloud base map from it, and then use the point cloud base map as an alignment medium to align and fuse the second point cloud trajectory data with the point cloud base map and generate a map, so that maps can be generated quickly for areas with weak GNSS or GPS signals. By constructing the point cloud base map from the lower-precision first point cloud trajectory data and then fusing it with the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, map generation efficiency is improved while the accuracy and precision of the map are guaranteed.
Fig. 2 is a flowchart of a map generation method according to an embodiment of the present application. The method may be executed by a computer device with processing capability, such as a server or a cloud server, which is not specifically limited here.
Referring to Fig. 2, the method includes at least steps 210 to 250, described in detail as follows:
Step 210: acquiring multiple pieces of trajectory data, where each piece of trajectory data includes vehicle pose information and a bird's-eye view stitched image associated with the vehicle pose information.
The trajectory data is collected while the vehicle is driving; the multiple pieces of trajectory data may come from one vehicle or from multiple vehicles, and may be collected over multiple drives.
The vehicle pose information may be determined from information collected by a GNSS (Global Navigation Satellite System) module, a GPS module, an IMU (Inertial Measurement Unit), and a wheel speedometer in the vehicle. The IMU is a module composed of sensors such as a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. The wheel speedometer detects the distance the wheels move within a certain period of time, from which the change of the relative pose (position and heading) of the vehicle is calculated.
The vehicle pose information may indicate the position and the attitude of the vehicle, where the attitude may include the pitch angle, yaw angle, and roll angle of the vehicle.
The position of the vehicle may be determined by the GNSS module from the collected GNSS signals, or by the GPS module from the collected GPS signals. When the vehicle enters a place where the GNSS or GPS signal is weak (for example, an indoor area such as an underground parking lot), the vehicle can perform dead reckoning based on the position of the point where the GNSS or GPS signal fell below the set threshold and the information collected by the wheel speedometer and the IMU module, so as to obtain the position of each point along the driving route.
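As a rough illustration of the dead-reckoning idea described above, the following Python sketch propagates a planar pose from wheel-speed and yaw-rate samples, starting from the last reliable GNSS/GPS fix. The planar model, the fixed sampling interval, and all variable names are assumptions of this sketch, not the specific estimator of the disclosure.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float    # metres east of the last good fix
    y: float    # metres north of the last good fix
    yaw: float  # heading in radians

def dead_reckon(start: Pose2D, wheel_speeds, yaw_rates, dt: float):
    """Propagate the pose from the last good GNSS/GPS fix using wheel
    odometry (speed) and IMU yaw rate, with a simple planar model."""
    poses = [start]
    pose = start
    for v, w in zip(wheel_speeds, yaw_rates):
        yaw = pose.yaw + w * dt
        x = pose.x + v * dt * math.cos(yaw)
        y = pose.y + v * dt * math.sin(yaw)
        pose = Pose2D(x, y, yaw)
        poses.append(pose)
    return poses

# Example: driving into the garage at 2 m/s while gently turning.
track = dead_reckon(Pose2D(0.0, 0.0, 0.0), [2.0, 2.0, 2.0], [0.0, 0.1, 0.1], dt=0.1)
```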
The bird's-eye view stitched image is obtained by stitching, along a bird's-eye view, the images collected by the vehicle from at least two viewing angles. Multiple image acquisition devices may be installed in the vehicle to collect images of the surrounding environment from multiple viewing angles while driving; the collected images may include images of the environment directly in front of the vehicle, of the left side, of the right side, of the left rear, of the right rear, and so on. In one embodiment, the vehicle is provided with three image acquisition devices that collect images from the front, left side, and right side of the vehicle, respectively; in other embodiments, images from other viewing angles may also be collected.
In some embodiments, the vehicle may stitch the images collected at each position from multiple viewing angles along the bird's-eye view (BEV) to obtain the bird's-eye view stitched image. In other embodiments, the vehicle may upload the collected multi-view images to the server, which stitches them along the bird's-eye view to obtain the bird's-eye view stitched image.
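One common way to realize such a projection into the bird's-eye view is an inverse perspective mapping (homography) per camera followed by compositing the warped views into one top-down canvas. The sketch below uses OpenCV; the ground-plane homographies and the simple overwrite blending are illustrative assumptions coming from the camera calibration, not something specified by the text above.

```python
import cv2
import numpy as np

def stitch_bev(images, homographies, bev_size=(800, 800)):
    """Warp each camera image onto the ground plane with its calibrated
    homography and composite the results into one bird's-eye view image."""
    canvas = np.zeros((bev_size[1], bev_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, bev_size)
        mask = warped.sum(axis=2) > 0   # pixels covered by this camera
        canvas[mask] = warped[mask]     # simple overwrite blending
    return canvas

# front, left, right: BGR frames from the surround-view cameras
# H_front, H_left, H_right: 3x3 ground-plane homographies from calibration
# bev = stitch_bev([front, left, right], [H_front, H_left, H_right])
```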
Fig. 3 shows a schematic diagram of stitching images along a bird's-eye view to obtain a bird's-eye view stitched image. Figs. 3A-3C are collected by different cameras on the vehicle at the same position: Fig. 3A is an image of the environment directly in front of the vehicle, Fig. 3B is an image of the environment to the left front of the vehicle, and Fig. 3C is an image of the environment to the right front of the vehicle. Stitching Figs. 3A-3C along the bird's-eye view yields the bird's-eye view stitched image shown in Fig. 3D. The image in Fig. 3B belongs to side-forward-view perception.
While the vehicle is driving, it can collect images at multiple positions in real time. Therefore, the multiple images collected by the vehicle can be continuously stitched along the bird's-eye view to obtain a continuous bird's-eye view stitched image that reflects the environment around the vehicle's driving trajectory.
In the embodiments of the present application, the bird's-eye view stitched image in the trajectory data is obtained by continuously stitching, along the driving trajectory, the bird's-eye view stitched images of multiple positions. For ease of distinction, the image obtained by stitching the multi-view images collected at a single position along the bird's-eye view is called a bird's-eye view stitched sub-image.
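Conceptually, the continuous stitched image can be thought of as pasting each per-position bird's-eye view stitched sub-image onto a global canvas at the location and heading given by the associated vehicle pose. The sketch below illustrates that idea with OpenCV's affine warp; the metres-per-pixel scale, the canvas-centre origin, and the axis conventions are assumptions of this illustration.

```python
import cv2
import numpy as np

def paste_bev_patch(canvas, patch, x, y, yaw, m_per_px=0.05):
    """Place one BEV sub-image onto the global canvas at vehicle pose (x, y, yaw).
    The canvas origin is assumed at its centre; +x points right, +y points up."""
    h, w = patch.shape[:2]
    ch, cw = canvas.shape[:2]
    # Target canvas position of the patch centre for this pose.
    cx = cw / 2 + x / m_per_px
    cy = ch / 2 - y / m_per_px
    # Rotate the patch about its centre, then translate it to (cx, cy).
    M = cv2.getRotationMatrix2D((w / 2, h / 2), np.degrees(yaw), 1.0)
    M[0, 2] += cx - w / 2
    M[1, 2] += cy - h / 2
    warped = cv2.warpAffine(patch, M, (cw, ch))
    mask = warped.sum(axis=2) > 0
    canvas[mask] = warped[mask]
    return canvas
```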
Fig. 4 is a schematic diagram of a bird's-eye view stitched image obtained by continuously stitching, along the bird's-eye view, the images collected along the vehicle's driving trajectory. During stitching, for example when stitching the bird's-eye view stitched sub-images corresponding to different positions, and in particular those corresponding to corners, orientation optimization can be performed to reduce black edges (indicated by mark 401 in the figure), which are caused by two adjacent sub-images not fitting together completely at the seam. For the sub-images corresponding to multiple positions on a straight section of the trajectory, free stitching is performed to reduce turning distortion (indicated by mark 402 in the figure).
Step 220: performing road element recognition on each bird's-eye view stitched image, and determining the road elements in each bird's-eye view stitched image.
Road elements may include lane lines (for example, solid and dashed lane lines), road surface arrows, stop lines, speed bumps, parking space boundary lines in a parking lot, parking space entrance lines, and so on, which are not specifically limited here. Road element recognition means determining the pixel region in which a road element is located in the bird's-eye view stitched image. It can be understood that the recognition result indicates, on the one hand, which road elements the bird's-eye view stitched image contains and, on the other hand, the position of each road element in the image.
For example, road element recognition on the bird's-eye view stitched image may be performed by a neural network model. In some embodiments, a Mask R-CNN, PANet (Path Aggregation Network), or FCIS (Fully Convolutional Instance-aware Semantic Segmentation) network may be used to segment the road elements in the bird's-eye view stitched image and thereby determine their positions in the image.
In some embodiments, step 220 may include: inputting each bird's-eye view stitched image into a road element recognition model; and performing road element recognition by the road element recognition model, which outputs the road element information corresponding to each bird's-eye view stitched image, where the road element information indicates the road elements in the corresponding image. In some embodiments, the road element recognition model may be built from one or more of a convolutional neural network, a fully connected neural network, a feedforward neural network, a long short-term memory network, a recurrent neural network, and the like; in other embodiments, it may be one of the networks listed above, such as Mask R-CNN, PANet, or FCIS.
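As an illustration of the input/output contract described here, the sketch below runs a segmentation-style road element recognition model on a batch of bird's-eye view stitched images and turns the per-pixel class scores into a label map. The network itself is left as an abstract torch.nn.Module, and the label set is an illustrative assumption; nothing here fixes the architecture to Mask R-CNN, PANet, FCIS, or any other specific network.

```python
import torch

# Illustrative label set; the disclosure does not prescribe these classes.
CLASS_NAMES = ["background", "lane_line", "parking_slot_line", "arrow",
               "stop_line", "speed_bump"]

@torch.no_grad()
def recognize_road_elements(model: torch.nn.Module, bev_batch: torch.Tensor):
    """bev_batch: float tensor of shape (N, 3, H, W), values in [0, 1].
    Returns an integer label map of shape (N, H, W) giving, for every pixel,
    which road element (if any) it belongs to."""
    model.eval()
    logits = model(bev_batch)        # assumed output shape (N, num_classes, H, W)
    labels = logits.argmax(dim=1)    # per-pixel road element class
    return labels
```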
To ensure the accuracy of road element recognition, the road element recognition model may be trained with training data before it is used. The training data includes multiple sample bird's-eye view stitched images and their annotation information, which indicates the road elements in the corresponding sample image. In the embodiments of the present application, the bird's-eye view stitched images used to train the model are called sample bird's-eye view stitched images. During training, a sample bird's-eye view stitched image is input into the road element recognition model, which performs road element recognition on it and outputs predicted road element information indicating the road elements in the sample image. It can be understood that the predicted road element information indicates not only the position of each recognized road element in the sample image but also its semantics (that is, which kind of road element it is, for example a lane line, a speed bump, or a stop line). Then, based on the annotation information of the sample image and the predicted road element information, the loss value of a loss function is calculated, and the parameters of the road element recognition model are adjusted through back-propagation according to the loss value. The annotation information of the sample image likewise indicates the position of each road element in the sample image.
The loss function may be set according to actual needs; for example, it may be a cross-entropy loss function or a logarithmic loss function, which is not specifically limited here.
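A minimal training step for such a model, assuming per-pixel integer annotations for the sample bird's-eye view stitched images and the cross-entropy loss mentioned above, could look as follows. The shapes and the optimizer are assumptions of this sketch, not specified by the text.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, sample_bev, label_map):
    """sample_bev: (N, 3, H, W) float tensor; label_map: (N, H, W) long tensor
    whose entries are road element class indices (the annotation information)."""
    model.train()
    optimizer.zero_grad()
    logits = model(sample_bev)                 # (N, num_classes, H, W)
    loss = F.cross_entropy(logits, label_map)  # per-pixel cross-entropy loss
    loss.backward()                            # back-propagate the loss value ...
    optimizer.step()                           # ... and adjust the model parameters
    return loss.item()
```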
After the training of the road element recognition model is completed, the model can be applied online to accurately recognize road elements.
Step 230: generating the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each bird's-eye view stitched image and the vehicle pose information associated with each bird's-eye view stitched image.
In this application, for ease of distinction, the point cloud trajectory data that is used to construct the point cloud base map and that is built from the bird's-eye view stitched images and the associated pose information is called the first point cloud trajectory data. The first point cloud trajectory data includes the position of each point along the trajectory and the point cloud models of the road elements on the trajectory. It can be understood that the relative positional relationship between the point cloud models of different road elements in the first point cloud trajectory is basically the same as that presented in the bird's-eye view stitched image.
The point cloud model of a road element is a massive set of points expressing, in the same spatial reference system, the spatial distribution and target surface characteristics of the road element. After the spatial coordinates of each sampling point of the road element are obtained, all sampling points are arranged according to their coordinates to obtain the point cloud model of the road element.
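One simple way to represent the first point cloud trajectory data described here is a container holding, per road element, its semantic label and its sampled 3D points, together with the trajectory positions. The field names below are illustrative assumptions rather than structures taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class RoadElementCloud:
    semantic: str        # e.g. "lane_line", "parking_slot_line", "arrow"
    points: np.ndarray   # (M, 3) sampled points of this element in the map frame

@dataclass
class PointCloudTrajectory:
    positions: np.ndarray                      # (K, 3) vehicle positions along the trajectory
    elements: List[RoadElementCloud] = field(default_factory=list)
```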
In some embodiments, step 230 may include: performing three-dimensional reconstruction on each road element in each bird's-eye view stitched image according to the vehicle pose information associated with that image, to obtain the first point cloud trajectory data corresponding to each bird's-eye view stitched image.
Three-dimensional reconstruction of a road element in the bird's-eye view stitched image yields a three-dimensional point cloud model of that road element. On this basis, combining the pose information associated with the stitched image with the obtained position of the road element within the image, the position of the road element in geographic space can be determined; the three-dimensional point cloud models of the road elements in the stitched image are then arranged according to these positions, yielding the first point cloud trajectory data corresponding to the bird's-eye view stitched image.
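The placement step described above, going from a road element's pixel location in the bird's-eye view stitched image to its position in geographic space using the associated vehicle pose, can be sketched as a planar rigid transform plus a metres-per-pixel scale. The scale, the image-centre convention, and the axis mapping below are assumptions of this illustration.

```python
import numpy as np

def bev_pixels_to_world(pixels, vehicle_xy, vehicle_yaw,
                        m_per_px=0.05, bev_center=(400.0, 400.0)):
    """pixels: (M, 2) array of (u, v) pixel coordinates of a road element in a
    BEV image whose centre is assumed to coincide with the vehicle position.
    Returns (M, 2) world coordinates in the map frame."""
    uv = np.asarray(pixels, dtype=float)
    # Assumed convention: image "up" is the vehicle's forward axis,
    # image "right" is the vehicle's right side.
    local_x = (bev_center[1] - uv[:, 1]) * m_per_px   # forward, metres
    local_y = (bev_center[0] - uv[:, 0]) * m_per_px   # left, metres
    local = np.stack([local_x, local_y], axis=1)
    c, s = np.cos(vehicle_yaw), np.sin(vehicle_yaw)
    R = np.array([[c, -s], [s, c]])
    return local @ R.T + np.asarray(vehicle_xy, dtype=float)
```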
In some embodiments, deep learning may be used to perform the three-dimensional reconstruction of the road elements in the bird's-eye view stitched image. For example, a neural network model for generating three-dimensional point cloud models (called a three-dimensional reconstruction model here for ease of distinction) may be trained and then used to reconstruct each road element in the stitched image.
In some embodiments, the three-dimensional reconstruction model may be built from a convolutional neural network, a fully connected neural network, or the like; in some embodiments, it may be an Im2Avatar model, an adversarial network, a generative network, and so on, which is not specifically limited here.
Fig. 5 is a schematic diagram of the first point cloud trajectory data corresponding to a bird's-eye view stitched image according to an embodiment. In Fig. 5, although the edges of the road elements look like lines, they are actually point sequences; because the points are dense, they visually appear as lines. In some embodiments, different road elements may be represented by different point clouds in the first point cloud trajectory data, for example blue point clouds for lane lines, green point clouds for parking space lines, and red point clouds for arrows.
When the vehicle drives across planes with a large height difference, the height of the vehicle in the vertical direction also needs to be tracked. For example, the height difference can be perceived through the pitch angle of the vehicle, from which the vehicle's height in the vertical direction is determined; the heights of the first and second underground floors of an underground parking lot, for instance, are different. Further, when the vehicle drives on a ramp, the slope of the ramp can also be calculated from the vehicle's pitch angle.
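The height bookkeeping described here can be illustrated with a one-line integration: if the vehicle drives a distance d while pitched by angle θ, its vertical displacement is roughly d·sin(θ), and the slope of a ramp follows from tan(θ). A minimal sketch, assuming pitch in radians and travelled distance taken from the wheel speedometer:

```python
import math

def vertical_displacement(distance_travelled: float, pitch: float) -> float:
    """Approximate change in height (metres) while driving at a constant
    pitch angle (radians); positive pitch means nose-up."""
    return distance_travelled * math.sin(pitch)

def ramp_slope_percent(pitch: float) -> float:
    """Slope of the ramp as a percentage (rise over run)."""
    return math.tan(pitch) * 100.0

# e.g. 30 m on a ramp pitched 5 degrees downwards: about -2.6 m, roughly one parking level
dz = vertical_displacement(30.0, math.radians(-5.0))
```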
Fig. 6 is a schematic diagram of the projection, on a vertical projection plane, of the point cloud models of the road elements in the point cloud trajectory data according to an embodiment of the present application. As shown in Fig. 6, the height differences between the first plane 610, the second plane 620, and the third plane 630 in the vertical direction are clearly visible, so the three planes correspond to different floors, and the white shaded parts in Fig. 6 represent the road elements on the corresponding floors. In Fig. 6, the first oblique line 621, the second oblique line 622, and the third oblique line 623 between the first plane 610 and the second plane 620 represent the ramps connecting the first floor 610 and the second floor 620 at different positions; similarly, the fourth oblique line 631 between the second plane 620 and the third plane 630 represents the ramp connecting them.
Fig. 7 is a flowchart of generating the first point cloud trajectory data according to an embodiment of the present application. As shown in Fig. 7, it includes:
Step 710: stitching images along the bird's-eye view. The cameras used on the vehicle to collect images may be surround-view cameras, which collect images of the surrounding environment during driving based on fisheye imaging; by arranging several surround-view cameras, images from multiple viewing angles can be collected. In some embodiments, distortion may occur when stitching images from different viewing angles; in that case, the distorted region is treated as a region of interest (ROI) and distortion correction is performed, so that the distortion introduced by stitching does not make the subsequently generated map inaccurate. Further, the stitching process also involves applying an inverse perspective transformation to the images collected from multiple viewing angles, so that they can be projected into the bird's-eye view and stitched into the bird's-eye view stitched image.
Step 720: road element recognition.
Step 720 determines the road elements in the bird's-eye view stitched image.
Step 730: three-dimensional reconstruction.
Through three-dimensional reconstruction, the point cloud model of each road element in the bird's-eye view stitched image is obtained, and the point cloud models are then combined to obtain the first point cloud trajectory data.
For the specific implementation of steps 710-730, refer to the description above, which is not repeated here.
Continuing with Fig. 2, in step 240, the first point cloud trajectory data corresponding to the multiple pieces of trajectory data are spliced to obtain the point cloud base map.
Different first point cloud trajectory data cover different areas of the environment; therefore, splicing multiple pieces of first point cloud trajectory data yields a point cloud base map that reflects the basic global situation of the area.
For example, step 240 may include: determining, in any two of the multiple first point cloud trajectories, first target road elements that represent the same geographic location; and splicing the multiple pieces of first point cloud trajectory data based on the first target road elements to obtain the point cloud base map.
A first target road element is a road element that represents the same geographic location in any two of the multiple pieces of first point cloud trajectory data.
Different driving trajectories may partly overlap, so the first point cloud trajectory data built from different trajectory data may contain road elements representing the same geographic location (that is, first target road elements).
Since the first point cloud trajectory data not only contains the point cloud model of each road element but also associates each road element with its element semantics (that is, which kind of road element it is) and its position, the road elements in any two pieces of first point cloud trajectory data can be compared based on their element semantics, their positions, and their relative positional relationships with nearby road elements, so as to determine the first target road elements representing the same geographic location.
On this basis, the first point cloud trajectory data can be moved so that the first target road elements in different pieces of first point cloud trajectory data coincide. It can be understood that, after the movement, the location of the coinciding first target road elements is the splicing joint of the different pieces of first point cloud trajectory data.
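The "move so that the first target road elements coincide" step could be realized, for example, by estimating a rigid 2D transform between the matched element point sets with a Kabsch/Umeyama-style least-squares fit and applying it to one of the trajectories. The sketch below is such an illustration, not the specific alignment algorithm of the disclosure.

```python
import numpy as np

def fit_rigid_2d(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t such that R @ p + t maps
    src onto dst. src, dst: (N, 2) matched points of the first target road elements."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def splice(trajectory_points: np.ndarray, src, dst) -> np.ndarray:
    """Move one first point cloud trajectory so its target elements (src)
    coincide with the matching elements (dst) of the other trajectory."""
    R, t = fit_rigid_2d(np.asarray(src, float), np.asarray(dst, float))
    return trajectory_points @ R.T + t
```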
通过将多个第一点云轨迹数据进行拼接,使得拼接后的多个第一点云轨迹数据中的道路元素可以反映更多地理环境区域中的道路元素,进而得到反映地理环境区域全局情况的点云底图。图8是根据本申请一实施例示出的点云底图的示意图。By splicing multiple first point cloud trajectory data, the road elements in the multiple first point cloud trajectory data after splicing can reflect road elements in more geographical environment areas, and then obtain the road elements that reflect the overall situation of the geographical environment area. Point cloud basemap. Fig. 8 is a schematic diagram of a point cloud base map according to an embodiment of the present application.
Step 250: using the point cloud base map as the alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain the map.
In the present application, the point cloud base map can reflect the whole-site map skeleton of the geographic environment area (in particular an indoor area where the GNSS signal or GPS signal is weak), and this way of constructing the point cloud base map is relatively fast. However, the point cloud base map is constructed from the first point cloud trajectory data, which is generated from the bird's-eye-view stitched images and the associated vehicle pose information, and in practice the viewing angle that the image acquisition devices on the vehicle can perceive is limited, so point cloud trajectory data generated only from the bird's-eye-view stitched images and the corresponding pose information may not reflect all the road elements in the geographic environment. Therefore, in the embodiments of the present application, second point cloud trajectory data capable of expressing more road elements is further acquired and fused with the point cloud base map to generate the map.
Compared with the point cloud base map, the second point cloud trajectory data may include point cloud models of more road elements. Through fusion, the point cloud models of road elements that exist in the second point cloud trajectory data but not in the point cloud base map (for example, road signs, pillars, ultrasonic obstacles, walls, gates, railings, zebra crossings, and the like) can be added to the point cloud base map, so that the point cloud base map is fused and updated to obtain the map.
In some embodiments, step 250 may include: using the point cloud base map as the alignment medium, aligning the second point cloud trajectory data with the point cloud base map, and determining the road elements newly added in the second point cloud trajectory data compared with the point cloud base map; and adding the point cloud models of the newly added road elements to the point cloud base map to obtain the map.
Here, a road element newly added in the second point cloud trajectory data compared with the point cloud base map refers to a road element that exists in the second point cloud trajectory data but does not exist in the point cloud base map.
As described above, since the point cloud base map can basically reflect the global skeleton of the geographic environment area, there is a high probability that the second point cloud trajectory data and the point cloud base map contain road elements representing the same geographic location. Based on these road elements representing the same geographic location, the second point cloud trajectory data can be semantically aligned with the point cloud base map, so that the second point cloud trajectory data is located on the point cloud base map. After the alignment, the second point cloud trajectory data is compared with the point cloud base map to determine the road elements newly added relative to the point cloud base map; the position of each newly added road element in the point cloud base map is determined from its position information in the second point cloud trajectory data; and, according to the determined position information, the point cloud model of the newly added road element is added to the point cloud base map, whereby the map is obtained.
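Again for illustration only, the align-and-fuse operation of step 250 may be sketched as follows, reusing the hypothetical RoadElement and find_target_elements helpers introduced above; a real implementation would estimate a full pose during semantic alignment rather than a pure translation.

```python
# Sketch of step 250: align the second point cloud trajectory data with the
# point cloud base map, then add only the road elements the base map lacks.
# Reuses the hypothetical RoadElement / find_target_elements helpers above.
import numpy as np

def align_and_fuse(base_map, second_data, max_dist=3.0):
    # 1. Semantic alignment: elements representing the same geographic location
    #    anchor the second point cloud trajectory data on the base map.
    pairs = find_target_elements(base_map, second_data, max_dist)
    if not pairs:
        raise ValueError("alignment failed: no shared road elements")
    offset = np.mean([eb.position - es.position for eb, es in pairs], axis=0)

    # 2. Newly added road elements: present in second_data but without a
    #    same-semantics neighbour in the base map after alignment.
    fused = list(base_map)
    for elem in second_data:
        aligned_pos = elem.position + offset
        exists = any(e.semantics == elem.semantics and
                     np.linalg.norm(e.position - aligned_pos) <= max_dist
                     for e in base_map)
        if not exists:
            fused.append(RoadElement(elem.semantics, aligned_pos,
                                     elem.points + np.append(offset, 0.0)))
    return fused  # the map: base map plus the newly added road elements
```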
To sum up, in the solution of the embodiments of the present application, the first point cloud trajectory data is first constructed from the bird's-eye-view stitched images of the vehicle and the vehicle pose information, the point cloud base map is generated from the first point cloud trajectory data, and then, with the point cloud base map as the alignment medium, the second point cloud trajectory data is aligned and fused with the point cloud base map to generate the map. The solution of the embodiments of the present application can be applied to quickly generating maps for areas where the GNSS signal or the GPS signal is weak.
Moreover, in the solution of the embodiments of the present application, since the point cloud base map serving as the alignment medium is generated first and basically reflects the overall situation of the geographic environment area, the success rate of aligning the second point cloud trajectory data with the point cloud base map can be guaranteed, and the probability of alignment failure is reduced.
In some embodiments, the hardware configurations of different vehicles differ. For example, some vehicles are equipped with a lidar and/or an intelligent perception module (also called a perception chip; the intelligent perception module can, in real time, identify things in the environment around the vehicle, such as road elements, based on the collected images and the information collected by the other sensors on the vehicle, for example the wheel speedometer and the IMU), while other vehicles are equipped with neither a lidar nor an intelligent perception module. It can be understood that, since a bird's-eye-view stitched image is obtained through a series of inverse perspective transformations and stitching operations, some information of the original images may be lost during the inverse perspective transformation and stitching. Therefore, compared with identifying road elements based on bird's-eye-view stitched images, identifying road elements based on the images collected by the vehicle in real time combined with the information from the other on-board sensors is more accurate. Consequently, point cloud trajectory data originating from vehicles with different hardware configurations differs in precision and accuracy.
In the embodiments of the present application, for ease of distinction, a vehicle equipped with a lidar and/or an intelligent perception module is referred to as a vehicle of the first type, and a vehicle equipped with neither a lidar nor an intelligent perception module is referred to as a vehicle of the second type.
A lidar can detect the size and position of objects in the environment around the vehicle. It can be understood that, for a vehicle equipped with a lidar, the images collected by the vehicle, the signals detected by the lidar, and the data from the wheel speedometer, the IMU and the GNSS module (or GPS module) can be combined to construct the point cloud trajectory data corresponding to the driving trajectory of the vehicle, so that more reference information is available; moreover, the detection precision of the lidar is higher and its sensing range wider. Comparatively speaking, therefore, point cloud trajectory data originating from a vehicle equipped with a lidar has higher precision and accuracy.
The intelligent perception module (perception chip) can, in real time, perform scene perception and understanding of the environment by combining the information collected by the sensors provided on the vehicle (for example, the image acquisition devices, the wheel speedometer, the IMU and the GNSS module (or GPS module)), for example semantic classification of data such as obstacle types, road signs and markings, detection of pedestrians and vehicles, and traffic signals, and then perform positioning based on the perception and understanding results, thereby helping the vehicle understand its position relative to its environment more accurately.
Generally speaking, the precision and accuracy of point cloud trajectory data originating from vehicles of the first type are higher than those of point cloud trajectory data originating from vehicles of the second type, but on the market the number of users owning vehicles of the first type is far lower than the number of users owning vehicles of the second type. Therefore, if the map were constructed only from point cloud trajectory data originating from vehicles of the first type, the map construction period would be long. In this case, the method of the present application can be used to construct the map.
For example, in this application scenario, the first point cloud trajectory data may be a point cloud trajectory constructed from the bird's-eye-view stitched images of a vehicle of the second type and the associated vehicle pose information. In this case, the bird's-eye-view stitched images can be obtained based on the images collected by the image acquisition devices corresponding to multiple viewing angles on the vehicle of the second type, and the vehicle pose information can be collected by the GNSS module (or GPS module), the IMU and the wheel speedometer.
The second point cloud trajectory data may refer to the point cloud trajectory data corresponding to vehicles of the first type. In some embodiments, the second point cloud trajectory data may be constructed by combining multiple kinds of information, such as the information collected by the lidar and/or the environment perception results and positioning results of the intelligent perception module, the visual information collected by the image acquisition devices, and the data from the IMU, the wheel speedometer and the GNSS (or GPS) module. It can be understood that the second point cloud trajectory data also correspondingly indicates the driving trajectory of the vehicle as well as the road elements in the driving environment and their position information.
In this case, the point cloud base map is constructed from the lower-precision first point cloud trajectory data, and the higher-precision second point cloud trajectory data, which perceives road elements more comprehensively, is then fused with the point cloud base map to generate the map; this improves the efficiency of map generation while guaranteeing the accuracy and precision of the map.
Moreover, in a solution that directly splices different pieces of point cloud trajectory data to generate a map without a point cloud base map, the splicing of the point cloud trajectory data, in other words the semantic alignment, tends to fail when there is no trajectory intersection point, or only few trajectory intersection points, between two pieces of point cloud trajectory data. With the solution of the embodiments of the present application, since a point cloud base map that can reflect the basic global situation of the environment area is constructed in advance, this problem can be effectively solved, and it is ensured that the second point cloud trajectory data can be fused with the point cloud base map.
The embodiments of the present application construct the point cloud base map based on the first point cloud trajectory data of vehicles of the second type, and use the second point cloud trajectory data of vehicles of the first type to optimize and update the point cloud base map. As described above, in the related art, vehicles belonging to the second type are far more numerous, and if the map were generated directly using only the point cloud trajectory data of vehicles of the first type, the map generation period would be long. With the method of the embodiments of the present application, since the first point cloud trajectory data corresponding to the more numerous vehicles of the second type has a larger data volume and covers a larger part of the geographic environment area, the first point cloud trajectory data is first used to construct the point cloud base map, thereby ensuring the coverage of the point cloud base map; on this basis, the point cloud base map is optimized and updated with the second point cloud trajectory data corresponding to vehicles of the first type to obtain the map, so that a compromise is reached between shortening the map generation period and improving the accuracy of the map. Moreover, the trajectory data of vehicles with different hardware configurations can all be used for map generation, making the data sources more comprehensive.
In addition, when a new semantic element needs to be added to the map, it only needs to be aligned with the point cloud base map and can then be quickly fused in; there is no need to re-splice a new version of the data to regenerate the map.
The solutions of the embodiments of the present application can be applied to constructing maps of areas where the GNSS signal (or GPS signal) is weak, for example maps of indoor parking lots.
In practice, if the trajectories corresponding to two pieces of first point cloud trajectory data have no intersecting trajectory point, the two pieces of first point cloud trajectory data may not be spliceable, or some areas of the point cloud base map may remain discontinuous after splicing, for example a road may be broken. Fig. 9 is a schematic diagram of splicing two pieces of first point cloud trajectory data according to an embodiment. As shown in Fig. 9, after first point cloud trajectory data I and first point cloud trajectory data II are spliced, there is a disconnected area 910, and, by ordinary judgment, this disconnected area 910 may not correspond to the actual geographic environment area.
Therefore, in this case, a human operator can take part in the construction of the point cloud base map. In this case, after step 240, the method may further include: sending the point cloud base map to a client, so that a user splices and edits the point cloud base map on the client, and then receiving the point cloud base map spliced and edited by the client. In this way, the parts of the point cloud base map containing disconnected areas are manually edited, thereby improving the point cloud base map.
In some embodiments, as shown in Fig. 10, before step 250 the method may further include:
Step 1010: acquiring a piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set.
In some embodiments, the candidate point cloud trajectory data in the candidate point cloud trajectory data set may be point cloud trajectory data originating from vehicles of the first type. In other embodiments, the candidate point cloud trajectory data set may also include point cloud trajectory data from both vehicles of the first type and vehicles of the second type.
In some embodiments, the candidate point cloud trajectory data may be acquired from the candidate point cloud trajectory data set randomly or in a set order.
In other embodiments, step 1010 may include: according to the priority corresponding to each piece of candidate point cloud trajectory data in the candidate point cloud trajectory data set, acquiring a piece of candidate point cloud trajectory data from the candidate point cloud trajectory data set in descending order of priority.
For example, the priority corresponding to each piece of candidate point cloud trajectory data may be set according to the vehicle information of the vehicle from which the candidate point cloud trajectory data originates, where the hardware modules provided on the vehicle can be determined from the vehicle information.
In some embodiments, it may be set that candidate point cloud trajectory data originating from a vehicle equipped with both a lidar and an intelligent perception module has the first priority, candidate point cloud trajectory data originating from a vehicle equipped with either a lidar or an intelligent perception module has the second priority, and candidate point cloud trajectory data originating from a vehicle equipped with neither a lidar nor an intelligent perception module has the third priority, where the first priority is higher than the second priority, and the second priority is higher than the third priority.
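For illustration, this priority rule can be expressed as a small sorting key; the field names of the vehicle-information record (has_lidar, has_perception_module) are assumptions introduced here rather than terms of the described embodiments.

```python
# Sketch of the priority rule: lidar + perception module -> first priority,
# either one -> second priority, neither -> third priority (1 sorts first).
def priority(vehicle_info):
    has_lidar = vehicle_info.get("has_lidar", False)
    has_perception = vehicle_info.get("has_perception_module", False)
    if has_lidar and has_perception:
        return 1
    if has_lidar or has_perception:
        return 2
    return 3

def order_candidates(candidate_set):
    """Return candidate trajectory data sorted from highest to lowest priority."""
    return sorted(candidate_set, key=lambda c: priority(c["vehicle_info"]))
```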
Step 1020: determining the coverage of the candidate point cloud trajectory data relative to the point cloud base map.
For example, according to the position information of the road elements indicated by the candidate point cloud trajectory data, the length of the target partial trajectory, that is, the part of the driving trajectory indicated by the candidate point cloud trajectory data that lies within the point cloud base map, may be determined; the length of the target partial trajectory is then divided by the total length of the driving trajectory indicated by the candidate point cloud trajectory data, and the resulting ratio is taken as the coverage of the candidate point cloud trajectory data relative to the point cloud base map.
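The coverage of step 1020 is a length ratio and may be sketched as follows; the waypoint representation of the driving trajectory and the containment test exposed by the base map are assumptions made purely for illustration.

```python
# Sketch of step 1020: coverage = (trajectory length lying inside the base map)
#                                 / (total trajectory length).
import numpy as np

def coverage(waypoints, inside_base_map):
    """waypoints: (N, 2) array of trajectory points; inside_base_map: point -> bool."""
    total = 0.0
    covered = 0.0
    for p, q in zip(waypoints[:-1], waypoints[1:]):
        seg = float(np.linalg.norm(q - p))
        total += seg
        # Count a segment as covered when both endpoints fall inside the base map.
        if inside_base_map(p) and inside_base_map(q):
            covered += seg
    return covered / total if total > 0 else 0.0
```

In use, a candidate whose coverage exceeds the set threshold would be taken as second point cloud trajectory data, while a candidate whose coverage does not exceed the threshold would instead be spliced into the base map, as described below.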
Step 1030: if the coverage is greater than a set threshold, taking the candidate point cloud trajectory data as the second point cloud trajectory data.
When the coverage is greater than the set threshold, it indicates that the coverage of the candidate point cloud trajectory data relative to the point cloud base map is high, and thus that the candidate point cloud trajectory data contributes little to enlarging the area covered by the point cloud base map. In this case, the candidate point cloud trajectory data can therefore be taken as the second point cloud trajectory data, so that it is aligned and fused with the point cloud base map.
In some embodiments, after step 1020, the method may further include: if the coverage is not greater than the set threshold, splicing the candidate point cloud trajectory data with the point cloud base map so as to update the point cloud base map.
If the coverage is not greater than the set threshold, it indicates that the candidate point cloud trajectory data contributes significantly to enlarging the area covered by the point cloud base map. In this case, the candidate point cloud trajectory data is therefore taken as first point cloud trajectory data and used to update the point cloud base map.
In some embodiments, as shown in Fig. 11, the step of splicing the candidate point cloud trajectory data with the point cloud base map so as to update the point cloud base map further includes:
Step 1110: determining the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data.
The target vehicle type refers to the vehicle type to which the vehicle from which the candidate point cloud trajectory data originates belongs.
In this embodiment, vehicles may be classified by type according to the hardware configured on the vehicle. In a specific embodiment, the vehicle types set on the basis of the on-board hardware include the first type and the second type, where a vehicle belonging to the first type is equipped with a lidar and/or an intelligent perception module, and a vehicle belonging to the second type is equipped with neither a lidar nor an intelligent perception module.
Step 1120: determining the target weight corresponding to the target vehicle type based on the correspondence between vehicle types and weights.
The target weight refers to the weight corresponding to the target vehicle type. The correspondence between vehicle types and weights can be set according to actual needs.
Step 1130: if the target weight is greater than a weight threshold, moving the second target road elements in the point cloud base map so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data, where a second target road element refers to a road element that represents the same geographic location in both the point cloud base map and the candidate point cloud trajectory data.
Step 1140: if the target weight is not greater than the weight threshold, moving the second target road elements in the candidate point cloud trajectory data so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data.
Step 1150: combining the moved point cloud base map and the candidate point cloud trajectory data as the updated point cloud base map.
In some embodiments, a higher weight may be assigned to vehicles whose corresponding point cloud trajectory data has higher precision, and a lower weight to vehicles whose corresponding point cloud trajectory data has lower precision. For example, vehicles belonging to the first type are assigned a higher weight than vehicles belonging to the second type. In this way, during splicing, the higher-precision point cloud trajectory data is guaranteed to move less, whereas the lower-precision point cloud trajectory data moves more, which prevents the accuracy and precision of originally higher-precision point cloud trajectory data from being degraded by moving it during splicing.
In this embodiment, the target weight is determined according to the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data, and the object to be moved during splicing is then determined according to the target weight, so that originally higher-precision point cloud trajectory data is not moved during splicing, thereby guaranteeing the position accuracy of each road element in the point cloud base map.
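Steps 1110 to 1150 reduce to deciding which side of the splice is allowed to move. A hedged sketch is given below under the assumptions already introduced for the earlier sketches; the weight table and the weight threshold are illustrative values only.

```python
# Sketch of steps 1110-1150: look up the weight of the candidate's vehicle type
# and move either the base map or the candidate data so that the second target
# road elements coincide. Weights and threshold are example values.
import numpy as np

TYPE_WEIGHTS = {"first_type": 1.0, "second_type": 0.2}
WEIGHT_THRESHOLD = 0.5

def splice_with_weights(base_map, candidate, vehicle_type):
    target_weight = TYPE_WEIGHTS[vehicle_type]
    pairs = find_target_elements(base_map, candidate)   # second target road elements
    if not pairs:
        return base_map                                  # nothing to anchor the splice on
    if target_weight > WEIGHT_THRESHOLD:
        # The candidate data is the more precise side: move the base map onto it.
        offset = np.mean([ec.position - eb.position for eb, ec in pairs], axis=0)
        base_map = [RoadElement(e.semantics, e.position + offset,
                                e.points + np.append(offset, 0.0)) for e in base_map]
        moved_candidate = list(candidate)
    else:
        # The base map stays fixed: move the candidate data instead.
        offset = np.mean([eb.position - ec.position for eb, ec in pairs], axis=0)
        moved_candidate = [RoadElement(e.semantics, e.position + offset,
                                       e.points + np.append(offset, 0.0))
                           for e in candidate]
    return base_map + moved_candidate   # updated point cloud base map (step 1150)
```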
Fig. 12 is a flowchart of a map generation method according to a specific embodiment of the present application. The steps filled in gray in Fig. 12 may be steps performed by a device or steps involving human participation. As shown in Fig. 12, the method includes: uploading the trajectory data to a server, which identifies road elements based on the trajectory data and generates the corresponding point cloud trajectory data; then performing data screening, where the data screening process may be the process shown in Fig. 10, that is, determining the coverage of the point cloud trajectory data relative to the point cloud base map. If the coverage is not greater than the set threshold, the coverage contribution of the point cloud trajectory data is high, and the point cloud trajectory data is used for splicing to generate the point cloud base map; during the splicing and generation of the point cloud base map, the point cloud base map may be sent to the client so that the user splices and edits it.
Conversely, if the coverage is greater than the set threshold, the coverage contribution of the point cloud trajectory data is low, and the point cloud trajectory data can be used for alignment and fusion with the point cloud base map to generate the map.
The layers of the map include a positioning layer and a logical layer. Further, after the map is generated, it can be sent to the client, so that the user edits the positioning layer and/or the logical layer based on the map displayed on the client. After the map is edited, it can be further quality-checked by technicians, so that a map with high precision and high accuracy is obtained.
The following describes apparatus embodiments of the present application, which can be used to perform the methods in the above embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, reference is made to the above method embodiments of the present application.
An embodiment of the present application provides a map generation apparatus, including a data acquisition module, a splicing module and a fusion module. The data acquisition module is configured to acquire the generated first point cloud trajectory data, where the first point cloud trajectory data corresponds to the point cloud trajectory data of vehicles of the second type, and to acquire the determined second point cloud trajectory data, where the second point cloud trajectory data corresponds to the point cloud trajectory data of vehicles of the first type. The splicing module is configured to generate a point cloud base map according to the first point cloud trajectory data. The fusion module is configured to use the point cloud base map as the alignment medium and to align and fuse the second point cloud trajectory data with the point cloud base map to obtain the map.
In one implementation, the data acquisition module may include an acquisition module, an identification module and a generation module. The map generation apparatus is described in detail below with reference to the accompanying drawings.
Fig. 13 is a block diagram of a map generation apparatus according to an embodiment of the present application. As shown in Fig. 13, the map generation apparatus includes: an acquisition module 1310, configured to acquire a plurality of pieces of trajectory data, the trajectory data including vehicle pose information and bird's-eye-view stitched images associated with the vehicle pose information; an identification module 1320, configured to perform road element identification on each bird's-eye-view stitched image and determine the road elements in each bird's-eye-view stitched image; a generation module 1330, configured to generate the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each bird's-eye-view stitched image and the vehicle pose information associated with each bird's-eye-view stitched image; a splicing module 1340, configured to splice the first point cloud trajectory data corresponding to the plurality of pieces of trajectory data to obtain the point cloud base map; and a fusion module 1350, configured to use the point cloud base map as the alignment medium and to align and fuse the second point cloud trajectory data with the point cloud base map to obtain the map.
In some embodiments, the identification module 1320 includes: an input unit, configured to input each bird's-eye-view stitched image into a road element identification model; and an output unit, configured to perform road element identification by means of the road element identification model and output the road element information corresponding to each bird's-eye-view stitched image, the road element information being used to indicate the road elements in the corresponding bird's-eye-view stitched image.
In some embodiments, the generation module 1330 is further configured to: according to the vehicle pose information associated with each bird's-eye-view stitched image, three-dimensionally reconstruct each road element in each bird's-eye-view stitched image to obtain the first point cloud trajectory data corresponding to each bird's-eye-view stitched image.
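By way of illustration, the three-dimensional reconstruction performed by the generation module 1330 can be pictured as projecting the road-element pixels of a bird's-eye-view stitched image into the world frame using the associated vehicle pose. The SE(2) pose format, the metres-per-pixel scale and the z = 0 ground-plane assumption in the sketch below are introduced here for illustration and are not specified by the described embodiments.

```python
# Sketch: map road-element pixels of a bird's-eye-view stitched image to world
# coordinates using the associated vehicle pose (illustrative assumptions only).
import numpy as np

def reconstruct_element(pixel_coords, pose_xy_yaw, metres_per_pixel=0.05):
    """pixel_coords: (N, 2) pixel offsets of one road element relative to the
    vehicle in the bird's-eye view; pose_xy_yaw: (x, y, yaw) of the vehicle."""
    x, y, yaw = pose_xy_yaw
    # Pixel offsets -> metric offsets in the vehicle frame.
    local = np.asarray(pixel_coords, dtype=float) * metres_per_pixel
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    world_xy = local @ rot.T + np.array([x, y])
    # Road markings are assumed to lie on the ground plane (z = 0).
    return np.hstack([world_xy, np.zeros((world_xy.shape[0], 1))])
```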
In some embodiments, the splicing module 1340 includes: a first target road element determination unit, configured to determine the first target road elements representing the same geographic location in any two of the plurality of pieces of first point cloud trajectory data; and a splicing unit, configured to splice the plurality of pieces of first point cloud trajectory data based on the first target road elements to obtain the point cloud base map.
In some embodiments, the map generation apparatus further includes: a sending module, configured to send the point cloud base map to the client, so that the user splices and edits the point cloud base map on the client.
In some embodiments, the map generation apparatus further includes: a candidate point cloud trajectory data acquisition module, configured to acquire a piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set; a coverage determination module, configured to determine the coverage of the candidate point cloud trajectory data relative to the point cloud base map; and a second point cloud trajectory data determination module, configured to take the candidate point cloud trajectory data as the second point cloud trajectory data if the coverage is greater than the set threshold.
In some embodiments, the map generation apparatus further includes an update module, configured to splice the candidate point cloud trajectory data with the point cloud base map so as to update the point cloud base map if the coverage is not greater than the set threshold. In some embodiments, the update module includes: a target vehicle type determination unit, configured to determine the target vehicle type of the vehicle corresponding to the candidate point cloud trajectory data; a target weight determination unit, configured to determine the target weight corresponding to the target vehicle type based on the correspondence between vehicle types and weights; a first moving unit, configured to move the second target road elements in the point cloud base map if the target weight is greater than the weight threshold, so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data, where a second target road element refers to a road element representing the same geographic location in both the point cloud base map and the candidate point cloud trajectory data; a second moving unit, configured to move the second target road elements in the candidate point cloud trajectory data if the target weight is not greater than the weight threshold, so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data; and a combining unit, configured to combine the moved point cloud base map and the candidate point cloud trajectory data as the updated point cloud base map.
In some embodiments, the candidate point cloud trajectory data acquisition module is further configured to: according to the priority corresponding to each piece of candidate point cloud trajectory data in the candidate point cloud trajectory data set, acquire a piece of candidate point cloud trajectory data from the candidate point cloud trajectory data set in descending order of priority.
In some embodiments, the fusion module includes: a newly added road element determination unit, configured to use the point cloud base map as the alignment medium, align the second point cloud trajectory data with the point cloud base map, and determine the road elements newly added in the second point cloud trajectory data compared with the point cloud base map; and an adding unit, configured to add the point cloud models of the newly added road elements to the point cloud base map to obtain the map.
Fig. 14 is a structural block diagram of an electronic device according to an embodiment of the present application. The electronic device may be a physical server, a cloud server, or the like, which is not specifically limited here. As shown in Fig. 14, the electronic device in the present application may include a processor 1410 and a memory 1420, the memory 1420 storing computer-readable instructions which, when executed by the processor 1410, implement the method in any of the above method embodiments.
The processor 1410 may include one or more processing cores. The processor 1410 connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1420 and invoking the data stored in the memory 1420. Optionally, the processor 1410 may be implemented in at least one of the hardware forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1410 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 1410 and may instead be implemented by a separate communication chip.
The memory 1420 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1420 may be used to store instructions, programs, code, code sets or instruction sets. The memory 1420 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function or an alarm function), instructions for implementing the foregoing method embodiments, and the like; the data storage area may store data created by the electronic device during use (such as disguised response commands and acquired process states) and the like.
The present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the method in any of the above method embodiments is implemented. The computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium includes a non-transitory computer-readable storage medium. The computer-readable storage medium has storage space for computer-readable instructions that perform any of the method steps of the above methods. These computer-readable instructions can be read from, or written into, one or more computer program products. The computer-readable instructions may, for example, be compressed in a suitable form.
According to an aspect of the embodiments of the present application, a computer program product or computer program is provided, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method in any of the above embodiments.
It should be noted that, although several modules or units of the device for performing actions are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in a single module or unit; conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here can be implemented by software, or by software combined with the necessary hardware. Accordingly, the technical solutions according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive or a portable hard disk) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to execute the method according to the embodiments of the present application. Other embodiments of the present application will readily occur to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed herein. It should be understood that the present application is not limited to the precise constructions described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (15)

  1. A map generation method, comprising:
    acquiring generated first point cloud trajectory data, wherein the first point cloud trajectory data corresponds to point cloud trajectory data of a vehicle of a second type;
    acquiring determined second point cloud trajectory data, wherein the second point cloud trajectory data corresponds to point cloud trajectory data of a vehicle of a first type;
    generating a point cloud base map according to the first point cloud trajectory data; and
    using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map.
  2. The method according to claim 1, wherein
    the first point cloud trajectory data is generated in the following manner:
    acquiring a plurality of pieces of trajectory data, the trajectory data comprising pose information of a vehicle and a bird's-eye-view stitched image associated with the vehicle pose information, the vehicle being a vehicle of the second type;
    performing road element identification on each of the bird's-eye-view stitched images, and determining road elements in each of the bird's-eye-view stitched images; and
    generating the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each of the bird's-eye-view stitched images and the vehicle pose information associated with each of the bird's-eye-view stitched images;
    and the generating a point cloud base map according to the first point cloud trajectory data comprises:
    splicing the first point cloud trajectory data corresponding to the plurality of pieces of trajectory data to obtain the point cloud base map.
  3. The method according to claim 2, wherein the performing road element identification on each of the bird's-eye-view stitched images and determining road elements in each of the bird's-eye-view stitched images comprises:
    inputting each of the bird's-eye-view stitched images into a road element identification model; and
    performing road element identification by the road element identification model, and outputting road element information corresponding to each of the bird's-eye-view stitched images, the road element information being used to indicate the road elements in the corresponding bird's-eye-view stitched image.
  4. The method according to claim 2, wherein the generating the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each of the bird's-eye-view stitched images and the vehicle pose information associated with each of the bird's-eye-view stitched images comprises:
    three-dimensionally reconstructing each road element in each of the bird's-eye-view stitched images according to the vehicle pose information associated with each of the bird's-eye-view stitched images, to obtain the first point cloud trajectory data corresponding to each of the bird's-eye-view stitched images.
  5. The method according to claim 2, wherein the splicing the first point cloud trajectory data corresponding to the plurality of pieces of trajectory data to obtain the point cloud base map comprises:
    determining first target road elements representing the same geographic location in any two of the plurality of pieces of first point cloud trajectory data; and
    splicing the plurality of pieces of first point cloud trajectory data based on the first target road elements to obtain the point cloud base map.
  6. The method according to claim 2, wherein after the splicing the first point cloud trajectory data corresponding to the plurality of pieces of trajectory data to obtain the point cloud base map, the method further comprises:
    sending the point cloud base map to a client, and receiving, from the client, the point cloud base map after splicing and editing of the point cloud base map.
  7. The method according to claim 1, wherein the second point cloud trajectory data is determined in the following manner:
    acquiring a piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set;
    determining a coverage of the candidate point cloud trajectory data relative to the point cloud base map; and
    if the coverage is greater than a set threshold, taking the candidate point cloud trajectory data as the second point cloud trajectory data.
  8. The method according to claim 7, wherein after the determining a coverage of the candidate point cloud trajectory data relative to the point cloud base map, the method further comprises:
    if the coverage is not greater than the set threshold, splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map.
  9. The method according to claim 8, wherein the splicing the candidate point cloud trajectory data with the point cloud base map to update the point cloud base map comprises:
    determining a target vehicle type of a vehicle corresponding to the candidate point cloud trajectory data;
    determining a target weight corresponding to the target vehicle type based on a correspondence between vehicle types and weights;
    if the target weight is greater than a weight threshold, moving second target road elements in the point cloud base map so that the second target road elements in the point cloud base map coincide with second target road elements in the candidate point cloud trajectory data, wherein a second target road element refers to a road element representing the same geographic location in the point cloud base map and the candidate point cloud trajectory data; or,
    if the target weight is not greater than the weight threshold, moving the second target road elements in the candidate point cloud trajectory data so that the second target road elements in the point cloud base map coincide with the second target road elements in the candidate point cloud trajectory data; and
    combining the moved point cloud base map and the candidate point cloud trajectory data as the updated point cloud base map.
  10. The method according to claim 7, wherein the acquiring a piece of candidate point cloud trajectory data from a candidate point cloud trajectory data set comprises:
    according to the priority corresponding to each piece of candidate point cloud trajectory data in the candidate point cloud trajectory data set, acquiring a piece of candidate point cloud trajectory data from the candidate point cloud trajectory data set in descending order of priority.
  11. The method according to claim 1, wherein the using the point cloud base map as an alignment medium, aligning and fusing the second point cloud trajectory data with the point cloud base map to obtain a map comprises:
    using the point cloud base map as the alignment medium, aligning the second point cloud trajectory data with the point cloud base map, and determining road elements newly added in the second point cloud trajectory data compared with the point cloud base map; and
    adding point cloud models of the newly added road elements to the point cloud base map to obtain the map.
  12. A map generation apparatus, comprising:
    a data acquisition module, configured to acquire generated first point cloud trajectory data, wherein the first point cloud trajectory data corresponds to point cloud trajectory data of a vehicle of a second type, and to acquire determined second point cloud trajectory data, wherein the second point cloud trajectory data corresponds to point cloud trajectory data of a vehicle of a first type;
    a splicing module, configured to generate a point cloud base map according to the first point cloud trajectory data; and
    a fusion module, configured to use the point cloud base map as an alignment medium and to align and fuse the second point cloud trajectory data with the point cloud base map to obtain a map.
  13. The apparatus according to claim 12, wherein:
    the data acquisition module comprises an acquisition module, an identification module and a generation module;
    the acquisition module is configured to acquire a plurality of pieces of trajectory data, the trajectory data comprising pose information of a vehicle and a bird's-eye-view stitched image associated with the vehicle pose information;
    the identification module is configured to perform road element identification on each of the bird's-eye-view stitched images and determine road elements in each of the bird's-eye-view stitched images;
    the generation module is configured to generate the first point cloud trajectory data corresponding to each piece of trajectory data according to the road elements in each of the bird's-eye-view stitched images and the vehicle pose information associated with each of the bird's-eye-view stitched images; and
    the splicing module splices the first point cloud trajectory data corresponding to the plurality of pieces of trajectory data to obtain the point cloud base map.
  14. An electronic device, comprising:
    a processor; and
    a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the method according to any one of claims 1-11.
  15. A computer-readable storage medium, on which computer-readable instructions are stored, wherein when the computer-readable instructions are executed by a processor, the method according to any one of claims 1-11 is implemented.
PCT/CN2022/094862 2021-12-30 2022-05-25 Map generation method and apparatus, electronic device, and storage medium WO2023123837A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111646647.8A CN114494618B (en) 2021-12-30 2021-12-30 Map generation method and device, electronic equipment and storage medium
CN202111646647.8 2021-12-30

Publications (1)

Publication Number Publication Date
WO2023123837A1 true WO2023123837A1 (en) 2023-07-06

Family ID: 81507703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094862 WO2023123837A1 (en) 2021-12-30 2022-05-25 Map generation method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114494618B (en)
WO (1) WO2023123837A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494618B (en) * 2021-12-30 2023-05-16 广州小鹏自动驾驶科技有限公司 Map generation method and device, electronic equipment and storage medium
CN115112114B (en) * 2022-06-15 2024-05-03 苏州轻棹科技有限公司 Processing method and device for correcting orientation angle of vehicle around vehicle
CN116051675A (en) * 2022-12-30 2023-05-02 广州小鹏自动驾驶科技有限公司 Parking lot map generation method, device, equipment and storage medium
WO2024174160A1 (en) * 2023-02-23 2024-08-29 Qualcomm Technologies, Inc. Point cloud alignment and combination for vehicle applications
WO2024174150A1 (en) * 2023-02-23 2024-08-29 Qualcomm Technologies, Inc. Point cloud alignment and combination for vehicle applications
CN116385529B (en) * 2023-04-14 2023-12-26 小米汽车科技有限公司 Method and device for determining position of deceleration strip, storage medium and vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232583A1 (en) * 2017-02-16 2018-08-16 Honda Motor Co., Ltd. Systems for generating parking maps and methods thereof
CN111442776A (en) * 2019-01-17 2020-07-24 通用汽车环球科技运作有限责任公司 Method and equipment for sequential ground scene image projection synthesis and complex scene reconstruction
WO2020248614A1 (en) * 2019-06-10 2020-12-17 商汤集团有限公司 Map generation method, drive control method and apparatus, electronic equipment and system
US20210063200A1 (en) * 2019-08-31 2021-03-04 Nvidia Corporation Map creation and localization for autonomous driving applications
WO2021089839A1 (en) * 2019-11-08 2021-05-14 Outsight Radar and lidar combined mapping system
CN113554698A (en) * 2020-04-23 2021-10-26 杭州海康威视数字技术股份有限公司 Vehicle pose information generation method and device, electronic equipment and storage medium
CN113688935A (en) * 2021-09-03 2021-11-23 阿波罗智能技术(北京)有限公司 High-precision map detection method, device, equipment and storage medium
CN113706702A (en) * 2021-08-11 2021-11-26 重庆九洲星熠导航设备有限公司 Mining area three-dimensional map construction system and method
CN114494618A (en) * 2021-12-30 2022-05-13 广州小鹏自动驾驶科技有限公司 Map generation method and device, electronic equipment and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570446B (en) * 2015-10-12 2019-02-01 腾讯科技(深圳)有限公司 The method and apparatus of lane line drawing
CN108959321B (en) * 2017-05-25 2022-06-24 纵目科技(上海)股份有限公司 Parking lot map construction method, system, mobile terminal and storage medium
CN110851545B (en) * 2018-07-27 2023-11-14 比亚迪股份有限公司 Map drawing method, device and equipment
CN111380543B (en) * 2018-12-29 2023-05-05 沈阳美行科技股份有限公司 Map data generation method and device
CN109740604B (en) * 2019-04-01 2019-07-05 深兰人工智能芯片研究院(江苏)有限公司 A kind of method and apparatus of running region detection
CN112655226B (en) * 2020-04-09 2022-08-26 华为技术有限公司 Vehicle sensing method, device and system
CN111402588B (en) * 2020-04-10 2022-02-18 河北德冠隆电子科技有限公司 High-precision map rapid generation system and method for reconstructing abnormal roads based on space-time trajectory
CN111784835B (en) * 2020-06-28 2024-04-12 北京百度网讯科技有限公司 Drawing method, drawing device, electronic equipment and readable storage medium
CN112710318B (en) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device, and storage medium
CN113537046A (en) * 2021-07-14 2021-10-22 安徽酷哇机器人有限公司 Map lane marking method and system based on vehicle track big data detection
CN113609148A (en) * 2021-08-17 2021-11-05 广州小鹏自动驾驶科技有限公司 Map updating method and device
CN113724390A (en) * 2021-09-08 2021-11-30 广州小鹏自动驾驶科技有限公司 Ramp generation method and device

Also Published As

Publication number Publication date
CN114494618A (en) 2022-05-13
CN114494618B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
WO2023123837A1 (en) Map generation method and apparatus, electronic device, and storage medium
US11482008B2 (en) Directing board repositioning during sensor calibration for autonomous vehicles
US11632536B2 (en) Method and apparatus for generating three-dimensional (3D) road model
US10962366B2 (en) Visual odometry and pairwise alignment for high definition map creation
US20240124017A1 (en) Determination of lane connectivity at traffic intersections for high definition maps
US20200393265A1 (en) Lane line determination for high definition maps
US11367208B2 (en) Image-based keypoint generation
US11493635B2 (en) Ground intensity LIDAR localizer
US11094112B2 (en) Intelligent capturing of a dynamic physical environment
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
CN110796714B (en) Map construction method, device, terminal and computer readable storage medium
CN111542860A (en) Sign and lane creation for high definition maps for autonomous vehicles
JP2021089724A (en) 3d auto-labeling with structural and physical constraints
US20210001891A1 (en) Training data generation for dynamic objects using high definition map data
KR102543871B1 (en) Method and system for updating road information changes in map data
CN111754388B (en) Picture construction method and vehicle-mounted terminal
Li et al. Robust localization for intelligent vehicles based on pole-like features using the point cloud
CN114969221A (en) Method for updating map and related equipment
CN116978010A (en) Image labeling method and device, storage medium and electronic equipment
Tian et al. Vision-based mapping of lane semantics and topology for intelligent vehicles
Luo et al. Indoor mapping using low-cost MLS point clouds and architectural skeleton constraints
CN117132980A (en) Labeling model training method, road labeling method, readable medium and electronic device
CN115937436A (en) Road scene three-dimensional model reconstruction method and device and driver assistance system
CN116917936A (en) External parameter calibration method and device for binocular camera
Lee et al. Semi-automatic framework for traffic landmark annotation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22913130

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE