CN113674287A - High-precision map drawing method, device, equipment and storage medium - Google Patents

High-precision map drawing method, device, equipment and storage medium Download PDF

Info

Publication number
CN113674287A
CN113674287A
Authority
CN
China
Prior art keywords
marked
neural network
point cloud
network model
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111032417.2A
Other languages
Chinese (zh)
Inventor
何雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Intelligent Technology Beijing Co Ltd
Original Assignee
Apollo Intelligent Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Intelligent Technology Beijing Co Ltd filed Critical Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202111032417.2A priority Critical patent/CN113674287A/en
Publication of CN113674287A publication Critical patent/CN113674287A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The disclosure provides a high-precision map drawing method, device, equipment and storage medium, relating to the technical fields of computer vision, automatic driving, intelligent transportation and the like. The specific implementation scheme is as follows: performing semantic segmentation processing on acquired image data to obtain image data identifying an object to be labeled; fusing acquired point cloud data with the image data identifying the object to be labeled to obtain semantic map data; and obtaining, by using a pre-trained neural network model and according to the semantic map and the point cloud data, a map in which the position, the color and the boundary of the object to be labeled are drawn. By fusing features of the image data and the point cloud data, the map can be drawn and labeled automatically: not only the geometric position of the object to be labeled is generated, but also its color and boundary. Mapping and labeling are thus automated while labeling precision is maintained.

Description

High-precision map drawing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to the fields of computer vision, automatic driving, intelligent transportation and the like, and more particularly to a method, an apparatus, a device, and a storage medium for drawing a high-precision map.
Background
A high-precision map, also known as a high-definition (HD) map, is used for autonomous driving vehicles. A high-precision map contains accurate vehicle position information and rich road element data; it helps a vehicle anticipate complex road-surface information such as gradient, curvature and heading, and thus better avoid potential risks. When objects such as traffic signs are labeled in a high-precision map, a model is usually trained by deep learning, point cloud data are recognized with the trained model, and labeling is performed according to the recognition result. This approach has a drawback: the laser point cloud reflection value can be unclear because of the material and wear of objects such as traffic signs. In addition, the laser point cloud reflection value cannot represent color, so the color attribute cannot be assigned automatically.
Disclosure of Invention
The disclosure provides a high-precision map drawing method, device, equipment and storage medium.
According to an aspect of the present disclosure, there is provided a high-precision map drawing method, which may include:
performing semantic segmentation processing on the acquired image data to obtain image data identifying an object to be labeled;
fusing the acquired point cloud data with the image data identifying the object to be labeled to obtain semantic map data;
and obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a pre-trained neural network model according to the semantic map and the point cloud data.
According to another aspect of the present disclosure, there is provided a high-precision map drawing apparatus, which may include:
the semantic segmentation module is used for performing semantic segmentation processing on the acquired image data to obtain image data for identifying an object to be labeled;
the semantic map generation module is used for fusing the acquired point cloud data with the image data identifying the object to be marked to obtain a semantic map;
and the map drawing module is used for obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a pre-trained neural network model according to the semantic map and the point cloud data.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method in any of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method in any of the embodiments of the present disclosure.
According to the technology disclosed by the invention, the automatic drawing and automatic labeling of the map can be carried out by fusing the characteristics of the image data and the point cloud data. The geometric position of the object to be marked can be generated, and the color and the boundary of the object to be marked can also be generated. Automation of mapping and labeling is realized, and the labeling precision can be maintained.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a first flowchart of a high-precision map drawing method according to the present disclosure;
FIG. 2 is a flowchart of mapping using a neural network model according to the present disclosure;
FIG. 3 is a flowchart of pre-processing point cloud data according to the present disclosure;
FIG. 4 is a flowchart of the fusion processing of point cloud data with image data identifying an object to be annotated according to the present disclosure;
FIG. 5 is a second flowchart of a high-precision map drawing method according to the present disclosure;
FIG. 6 is a schematic diagram of a high-precision map drawing apparatus according to the present disclosure;
FIG. 7 is a block diagram of an electronic device for implementing a high-precision map drawing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As shown in fig. 1, the present disclosure relates to a high-precision map drawing method, which may include the steps of:
S101: performing semantic segmentation processing on the acquired image data to obtain image data identifying an object to be labeled;
S102: fusing the acquired point cloud data with the image data identifying the object to be labeled to obtain semantic map data;
S103: obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a pre-trained neural network model according to the semantic map and the point cloud data.
The execution subject of the above scheme of the present disclosure may be a server of a map application program, a vehicle with an automatic driving function, a cloud communicating with such a vehicle, or the like.
The acquired image data can be image data acquired by image acquisition equipment such as a vehicle-mounted camera in real time; alternatively, the acquired image data may be image data acquired by different vehicles or different workers, and the like.
The semantic segmentation processing on the acquired image data may include segmenting different objects in the image data from the perspective of image pixels, and labeling each segmented object.
Taking road image data as an example, performing semantic segmentation processing on the acquired image data makes it possible to label objects such as vehicles, pedestrians and traffic markers in the road image data. The traffic markers may include traffic sign lines, signal lamps, or traffic sign boards. Further, the traffic sign lines may include solid lines, dashed lines, turning arrows, and the like. Traffic sign boards may include road signs, prohibition signs, warning signs, and the like.
By performing semantic segmentation processing on the acquired image data, the image data of the object to be labeled can be identified.
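By way of illustration only, the semantic segmentation step may be sketched with an off-the-shelf segmentation network; the particular model (DeepLabV3), the preprocessing constants and the file path below are assumptions introduced for the sketch, not requirements of this disclosure.

```python
# Minimal sketch: per-pixel semantic segmentation of one acquired road image,
# producing a class mask that identifies objects to be labeled.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("road_frame.jpg").convert("RGB")   # hypothetical image path
batch = preprocess(image).unsqueeze(0)                # [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]                      # [1, C, H, W] class scores
mask = logits.argmax(dim=1).squeeze(0)                # [H, W] class id per pixel
```

The resulting mask is the "image data identifying the object to be labeled" referred to above.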
The point cloud data can be acquired by a three-dimensional laser sensor. The point cloud data and the aforementioned image data may include single frame data or (continuous) multiple frame data.
And fusing the point cloud data and the image data for identifying the object to be marked to obtain the semantic map. The fusion processing may include complementing the 3D coordinates of the object to be annotated in the point cloud data with the 2D coordinates of the object to be annotated in the image data identifying the object to be annotated to generate a semantic map; alternatively, the 2D coordinates of the object to be labeled in the image data in which the object to be labeled is identified may be projected into the point cloud data to generate semantic map data.
And inputting the semantic map data and the point cloud data into a pre-trained neural network model to obtain a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked.
The pre-trained neural network model can utilize semantic map data samples and point cloud data samples as input training data of the model; and taking the true value of the position of the object to be marked, the true value of the color of the object to be marked and the true value of the boundary of the object to be marked as output training data of the model so as to train the neural network model.
The training process may include: and inputting the semantic map data sample and the point cloud data sample into the neural network model to be trained to obtain a map comprising a predicted value of the position of the object to be marked, a predicted value of the color of the object to be marked and a predicted value of the boundary of the object to be marked. And adjusting parameters in the neural network model by utilizing errors existing between the predicted value and a true value of the position of the corresponding object to be marked, a true value of the color of the object to be marked and a true value of the boundary of the object to be marked. Wherein, the above error can be embodied by a loss function, and the function of the loss function can be understood as: when a predicted value obtained by forward propagation of the neural network model to be trained is close to the true value, the loss function takes a smaller value; conversely, the value of the loss function increases. The loss function is a function having parameters in the neural network model as arguments.
And adjusting all parameters in the neural network model to be trained by utilizing the errors. The error is propagated backwards in each layer of the neural network model to be trained, and the parameters of each layer of the neural network model to be trained are adjusted according to the error until the output of the neural network model to be trained converges or a desired effect is achieved.
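The training step described above can be sketched roughly as follows; the model interface (a network returning position, color and boundary predictions), the individual loss terms and their equal weighting are illustrative assumptions rather than the specific training scheme of the disclosure.

```python
# Hedged sketch of one training step: semantic-map and point-cloud samples go
# in, predictions are compared with the three ground truths, and the error is
# back-propagated through every layer of the network.
import torch
import torch.nn as nn

def training_step(model, optimizer, semantic_map, point_cloud,
                  pos_gt, color_gt, boundary_gt):
    optimizer.zero_grad()
    pos_pred, color_pred, boundary_pred = model(semantic_map, point_cloud)
    # Illustrative loss choices: regression for position, classification for
    # color, dense binary segmentation for the boundary.
    loss = (nn.functional.smooth_l1_loss(pos_pred, pos_gt)
            + nn.functional.cross_entropy(color_pred, color_gt)
            + nn.functional.binary_cross_entropy_with_logits(boundary_pred, boundary_gt))
    loss.backward()    # error propagates back through each layer
    optimizer.step()   # parameters adjusted according to the error
    return loss.item()
```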
According to the scheme, the automatic drawing and automatic labeling of the map can be carried out by fusing the characteristics of the image data and the point cloud data. The geometric position of the object to be marked can be generated, and the color and the boundary of the object to be marked can also be generated. Automation of mapping and labeling is realized, and the labeling precision can be maintained.
As shown in fig. 2, in an embodiment, step S103 may specifically include the following steps:
S201: determining the geometric characteristics of the object to be marked and the color characteristics of the object to be marked contained in the semantic map by utilizing the first sub-neural network model;
S202: preprocessing the point cloud data to obtain a preprocessing result;
S203: determining the geographic position characteristics of the object to be marked contained in the preprocessing result by utilizing the second sub-neural network model;
S204: obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing the third sub-neural network model according to the geometric characteristics of the object to be marked, the color characteristics of the object to be marked and the geographic position characteristics of the object to be marked.
The neural network model may be composed of a plurality of sub-network models, each of which performs a corresponding function. For example, in the current embodiment, the neural network model may include a first sub-neural network model, a second sub-neural network model, and a third sub-neural network model.
The first sub-neural network model is configured to: and receiving semantic map data, and determining the geometric features and the color attribute features of the object to be labeled contained in the semantic map. Taking the object to be marked as a traffic sign line as an example, the geometric features of the solid line can be rectangles, and the geometric features of the pedestrian crossing can comprise rectangles or diamonds and the like. The color attribute features may include white, yellow, and the like. For example, a solid yellow line may be used to distinguish lanes in different directions and may be used to represent a curb. The white dotted line may be used for planning a lane, etc.
The processing of the point cloud data may include point cloud denoising, point cloud simplification, point cloud splicing, and the like.
The point cloud denoising may utilize a filtering method, such as bilateral filtering, Gaussian filtering, or pass-through filtering. Processing the point cloud data with a filtering method can smooth out irregular point density and remove outliers caused by problems such as occlusion.
The point cloud simplification can be realized by eliminating the remaining discrete points after clustering by using a clustering algorithm.
The point cloud splicing is suitable for the condition that multi-frame point cloud data exist, and the multi-frame point cloud data can be spliced to obtain a point cloud splicing base map.
The processing of the point cloud data may include the processes of point cloud denoising, point cloud simplification and point cloud splicing, or one or two of them may be selected, which is not described herein again. The preprocessed point cloud data can be used as a preprocessing result.
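As a non-limiting illustration, the denoising and simplification operations mentioned above can be sketched with numpy/scipy as follows; the neighbour count, outlier threshold and voxel size are assumed values, not parameters prescribed by the disclosure.

```python
# Sketch of two preprocessing steps for one point-cloud frame:
#  - statistical outlier removal (denoising)
#  - voxel-grid downsampling (simplification)
import numpy as np
from scipy.spatial import cKDTree

def denoise(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k neighbours is an outlier."""
    tree = cKDTree(points[:, :3])
    dists, _ = tree.query(points[:, :3], k=k + 1)   # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def voxel_downsample(points, voxel=0.1):
    """Keep one representative point per occupied voxel cell."""
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```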
The second sub-neural network model is configured to: and receiving the preprocessing result, and determining the geographic position characteristics of the object to be marked contained in the preprocessing result. The geo-location features may include the location of the object to be annotated in the point cloud data or may include the location of the object to be annotated in the world coordinate system. The geographic position feature may be a geographic position feature of the geometric shape (contour) of the object to be labeled, such as the aforementioned (dotted line, solid line) line contour, the contour of the turning arrow, and the like.
The third sub-neural network model is configured to: and receiving the contents output by the first sub-neural network model and the second sub-neural network model, and fusing the contents output by the two sub-network models. The principle of the fusion process may be to complement or calibrate the contents output by the two sub-neural network models.
For example, the geographic position of each object to be labeled can be determined by taking the geographic position feature of the object to be labeled output by the second sub-neural network model as a main feature. And then, the content output by the first sub-neural network model is used as compensation information to make up the situation that the object to be marked is unclear and the like due to light reflection or loss possibly occurring in the point cloud data. Specifically, the relative position relationship between the point cloud data and the image data identifying the object to be marked can be determined by the (internal and external) parameters of the image acquisition device and the (internal and external) parameters of the point cloud data acquisition device. And then the projection of the pixel points is carried out by utilizing the relative position relation, thereby realizing complementation and mutual calibration. Alternatively, the spatial coordinates of the object to be labeled may also be determined according to (internal and external) parameters of the image acquisition device and the position of the object to be labeled in the image data. And performing complementation and mutual calibration by using the space coordinates and the 3D pixel points in the point cloud data.
By the scheme, the semantic map and the point cloud data can be processed by utilizing a plurality of sets of sub neural network models so as to correspondingly determine different characteristics. And finally, fusing the determined different characteristics, so that not only can the geometric position of the object to be marked be generated, but also the color and the boundary of the object to be marked can be generated. Automation of mapping and labeling is realized, and the labeling precision can be maintained.
As shown in fig. 3, in an embodiment, when multiple frames of point cloud data exist, step S202 may specifically include the following steps:
S301: splicing the multi-frame point cloud data to obtain a splicing processing result;
S302: generating a top view base map of the object to be marked by utilizing the splicing processing result;
S303: taking the top view base map as the preprocessing result.
The multi-frame point cloud data is spliced to obtain point cloud data of a large range or a large area, such as point cloud data of one street or a plurality of streets. The process of splicing the multi-frame point cloud data may include coordinate identification of objects in each frame of point cloud data, so that matching, overlapping and the like of objects with coordinate differences within an allowable range in each frame of point cloud data are performed, and finally, the multi-frame point cloud data are spliced to obtain a splicing processing result.
The splicing processing result is then converted into a plane to generate a top view base map of the object to be marked. This top view base map can be used as the preprocessing result of the multi-frame point cloud data.
Through the above process, the point cloud data are converted into the top-down viewing angle in which users consume maps, providing data support for subsequent map labeling.
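A sketch of how multi-frame splicing and the top view base map could be realised is given below; the pose matrices, grid resolution and extent are assumed inputs, and the occupancy-style rasterisation is only one possible way of producing such a base map.

```python
# Sketch: transform every frame into a common world frame (splicing), then
# rasterise the merged cloud onto the ground plane as a top-view base map.
import numpy as np

def stitch(frames, poses):
    """frames: list of (N_i, 3) arrays; poses: list of 4x4 frame-to-world transforms."""
    merged = []
    for pts, T in zip(frames, poses):
        homog = np.c_[pts, np.ones(len(pts))]          # (N, 4) homogeneous points
        merged.append((homog @ T.T)[:, :3])            # into the world frame
    return np.vstack(merged)

def top_view_base_map(points, resolution=0.1, extent=100.0):
    """Project the stitched points onto the ground plane as an occupancy grid."""
    size = int(2 * extent / resolution)
    grid = np.zeros((size, size), dtype=np.float32)
    ix = ((points[:, 0] + extent) / resolution).astype(int)
    iy = ((points[:, 1] + extent) / resolution).astype(int)
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[ok], ix[ok]] = 1.0
    return grid
```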
In one embodiment, the first sub-neural network model is a deep neural network model and the second sub-neural network model is a shallow neural network model.
The deep neural network model may be a neural network model in which the number of hidden layers is more than the corresponding threshold. In contrast, the shallow neural network model may be a neural network model in which the number of hidden layers is less than the corresponding threshold.
The method comprises the steps of determining the geographic position characteristics of an object to be marked in point cloud data by utilizing a shallow neural network model, and determining the geometric characteristics of the object to be marked and the color characteristics of the object to be marked contained in a semantic map by utilizing a deep neural network model.
The above-mentioned purpose of adopting deep neural network model and shallow neural network model lies in: on one hand, the acquisition result of the point cloud data is relatively accurate and the contained content is relatively single. For point cloud data with relatively single content, a shallow neural network model with a small number of hidden layers can be adopted for feature extraction, and the contour feature of an object to be recognized and the geographic position feature of the object (contour) to be recognized can be determined. Namely, the accuracy of feature determination can be satisfied by using the shallow neural network model, and the calculation amount of feature extraction can be reduced. On the other hand, the content contained in the semantic map is relatively rich. Because the number of neurons of the deep neural network model is relatively rich, the geometric characteristics and the color characteristics of the object to be recognized can be determined in rich contents by utilizing multiple neurons of the deep neural network model.
The recognition result of the semantic map comprises the geometric features of the object to be labeled, and the recognition result of the point cloud data comprises the geographic position features of the object to be labeled, namely the geographic position features of the outline of the object to be labeled. Therefore, when registration is carried out, the point cloud data and the image data can be accurately registered on the basis of these features. The problem of image registration misalignment is alleviated, and the registration accuracy is improved.
In one embodiment, the first sub-neural network model, the second sub-neural network model, and the third sub-neural network model constitute an end-to-end neural network model.
Compared with a plurality of distributed neural network models, the end-to-end neural network model can be integrally trained, so that parameters in the first sub-neural network model, the second sub-neural network model and the third sub-neural network model can be adjusted in a linkage manner, and the effect of improving the final labeling accuracy is achieved.
As shown in fig. 4, in an embodiment, step S102 may specifically include the following steps:
S401: determining the relative position relationship between the point cloud data and the image data identifying the object to be marked;
S402: according to the relative position relationship, constructing a projection relationship between 3D pixel points in the point cloud data and 2D pixel points in the image data identifying the object to be marked;
S403: according to the projection relationship, fusing the point cloud data and the image data identifying the object to be marked to obtain a semantic map.
The relative position relation between the point cloud data and the image data for identifying the object to be marked can be calculated through the hardware pose relation, the internal parameters, the external parameters and other parameters of the image acquisition equipment and the point cloud data acquisition equipment.
According to the relative position relationship, the projection position of the 2D pixel point of any object to be marked in the image data of the object to be marked, which is converted into the 3D pixel point in the point cloud data, can be calculated and identified, so that the projection relationship between the 3D pixel point in the point cloud data and the 2D pixel point in the image data can be determined. The 2D pixel point may be a center point, an angular point, or a contour point of the object to be labeled. For example, the spatial coordinates may be determined using the positions of the 2D pixel points in the image data and parameters of the image capture device, such as internal and external parameters. By using the relative position relationship and the spatial coordinates, the projection relationship between the 3D pixel points in the point cloud data and the 2D pixel points in the image data can be determined.
Based on the projection relation, the object to be labeled in the image data can be projected into the point cloud data to realize fusion processing, and a semantic map is obtained.
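For illustration, the projection relationship can be sketched as follows, assuming a pinhole intrinsic matrix K and a lidar-to-camera extrinsic matrix T_cam_lidar obtained from calibration; these names and the numpy implementation are assumptions made for the sketch.

```python
# Sketch: project lidar points into the image, then attach each projected
# point's semantic class (from the segmentation mask) to build semantic map data.
import numpy as np

def project_points(points_lidar, K, T_cam_lidar):
    """points_lidar: (N, 3); K: (3, 3) intrinsics; T_cam_lidar: (4, 4) extrinsics."""
    homog = np.c_[points_lidar, np.ones(len(points_lidar))]
    cam = (homog @ T_cam_lidar.T)[:, :3]               # lidar -> camera frame
    in_front = cam[:, 2] > 0                           # keep points in front of the camera
    uvw = cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                      # pixel coordinates
    return uv, in_front

def fuse_semantics(points_lidar, seg_mask, K, T_cam_lidar):
    """Attach the per-pixel class id to every lidar point that lands inside the image."""
    uv, in_front = project_points(points_lidar, K, T_cam_lidar)
    h, w = seg_mask.shape
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points_lidar), -1, dtype=np.int64)   # -1 = unlabeled
    idx = np.flatnonzero(in_front)[ok]
    labels[idx] = seg_mask[v[ok], u[ok]]
    return labels
```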
Through the process, the point cloud data and the image data of the object to be marked are identified to be fused so as to accurately determine the semantic map.
In one embodiment, the object to be marked comprises a traffic sign line.
As shown in fig. 5, in one embodiment, the present disclosure relates to a high-precision map drawing method, which may include the steps of:
S501: acquiring image data by using an image camera;
S502: determining a lane line in the image data by utilizing semantic segmentation to obtain image data with the determined lane line;
S503: acquiring laser point cloud data;
S504: fusing the laser point cloud data and the image data with the determined lane line to obtain semantic map data;
S505: under the condition that multi-frame laser point cloud data exist, performing point cloud splicing processing on the multi-frame laser point cloud data to obtain point cloud splicing base map data;
S506: inputting the semantic map data and the point cloud splicing base map data into a pre-trained deep convolutional neural network model to obtain a map marking the geometric position, the color and the boundary of the lane line.
The deep convolutional neural network model comprises a deep network, a shallow network and a decoding network. The deep network is used for determining lane line geometric features and lane line color features in the semantic map data. The shallow network is used for determining the geometric position characteristics of the lane lines in the point cloud splicing base map data. The decoding network is used for fusing the lane line geometric characteristics, the lane line color characteristics and the lane line geometric position characteristics to finally obtain a map marked with the lane line geometric position, the lane line color and the lane line boundary.
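For illustration only, the deep network / shallow network / decoding network structure described above might be sketched as follows; the layer counts, channel widths, input channel assumptions and output heads are illustrative choices, not the specific architecture of the disclosure.

```python
# Hedged sketch of the three-branch model: a deeper CNN encodes the semantic
# map raster, a shallower CNN encodes the point-cloud top-view base map, and a
# decoding head fuses both to emit lane-line geometry, colour and boundary maps.
# Both inputs are assumed to share the same spatial size.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class HDMapNet(nn.Module):
    def __init__(self, n_colors=4):
        super().__init__()
        # deep branch: semantic map raster (4 input channels assumed)
        self.deep = nn.Sequential(*[conv_block(ci, co) for ci, co in
                                    [(4, 32), (32, 64), (64, 64), (64, 64), (64, 64)]])
        # shallow branch: point-cloud splicing base map (1 channel assumed)
        self.shallow = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # decoding head: fuse both feature maps into the labeled outputs
        self.decoder = nn.Sequential(conv_block(128, 64),
                                     nn.Conv2d(64, 2 + n_colors + 1, 1))
        self.n_colors = n_colors

    def forward(self, semantic_map, pc_top_view):
        feat = torch.cat([self.deep(semantic_map), self.shallow(pc_top_view)], dim=1)
        out = self.decoder(feat)
        position = out[:, :2]                    # per-pixel geometric position offsets
        color = out[:, 2:2 + self.n_colors]      # colour-class logits
        boundary = out[:, 2 + self.n_colors:]    # boundary logits
        return position, color, boundary
```

The deep branch stacks more convolutional blocks because the semantic map is content-rich, while the shallow branch keeps the point-cloud feature extraction light, in line with the deep/shallow split described above.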
As shown in fig. 6, the present disclosure provides a high-precision map drawing apparatus, which may include:
the semantic segmentation module 601 is configured to perform semantic segmentation processing on the acquired image data to obtain image data identifying an object to be labeled;
a semantic map generation module 602, configured to perform fusion processing on the acquired point cloud data and the image data identifying the object to be labeled, so as to obtain a semantic map;
and the map drawing module 603 is configured to obtain, by using a pre-trained neural network model, a map for drawing the position of the object to be marked, the color of the object to be marked, and the boundary of the object to be marked according to the semantic map and the point cloud data.
In one embodiment, the mapping module 603 may specifically include:
the first drawing submodule is used for determining the geometric characteristics of the object to be marked and the color characteristics of the object to be marked contained in the semantic map by utilizing the first sub-neural network model;
the preprocessing submodule is used for preprocessing the point cloud data to obtain a preprocessing result;
the second drawing submodule is used for determining the geographical position characteristics of the object to be marked contained in the preprocessing result by utilizing the second sub-neural network model;
and the third drawing submodule is used for obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing the third sub-neural network model according to the geometric characteristics of the object to be marked, the color characteristics of the object to be marked and the geographic position characteristics of the object to be marked.
In an embodiment, in the presence of multiple frames of point cloud data, the preprocessing sub-module may specifically include:
the splicing unit is used for splicing the multi-frame point cloud data to obtain a splicing processing result;
the top view base map generating unit is used for generating a top view base map of the object to be marked by utilizing the splicing processing result;
and the top view base map is taken as the preprocessing result.
In one embodiment, the first sub-neural network model is a deep neural network model and the second sub-neural network model is a shallow neural network model.
In one embodiment, the first sub-neural network model, the second sub-neural network model, and the third sub-neural network model constitute an end-to-end neural network model.
In one embodiment, the semantic map generation module 602 may specifically include:
the relative position relation determining submodule is used for determining the relative position relation between the point cloud data and the image data for identifying the object to be marked;
the projection relation determining submodule is used for constructing a projection relation between a 3D pixel point in the point cloud data and a 2D pixel point in the image data of the object to be marked according to the relative position relation;
and the semantic map generation execution submodule is used for fusing the point cloud data and the image data for identifying the object to be marked according to the projection relation to obtain the semantic map.
In one embodiment, the object to be marked comprises a traffic sign line.
In the technical scheme of the disclosure, the acquisition, storage, application and the like of the personal information of the related user all accord with the regulations of related laws and regulations, and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 710, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 720 or a computer program loaded from a storage unit 780 into a Random Access Memory (RAM) 730. In the RAM 730, various programs and data required for the operation of the device 700 can also be stored. The computing unit 710, the ROM 720 and the RAM 730 are connected to each other by a bus 740. An input/output (I/O) interface 750 is also connected to bus 740.
Various components in device 700 are connected to I/O interface 750, including: an input unit 760 such as a keyboard, a mouse, and the like; an output unit 770 such as various types of displays, speakers, and the like; a storage unit 780 such as a magnetic disk, an optical disk, or the like; and a communication unit 790 such as a network card, modem, wireless communication transceiver, etc. The communication unit 790 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 710 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 710 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 710 performs the respective methods and processes described above, such as the high-precision map drawing method. For example, in some embodiments, the high-precision map drawing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 780. In some embodiments, some or all of the computer program may be loaded onto and/or installed onto device 700 via ROM 720 and/or communication unit 790. When the computer program is loaded into the RAM 730 and executed by the computing unit 710, one or more steps of the high-precision map drawing method described above may be performed. Alternatively, in other embodiments, the computing unit 710 may be configured to perform the high-precision map drawing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A high-precision map drawing method comprises the following steps:
performing semantic segmentation processing on the acquired image data to obtain image data identifying an object to be labeled;
fusing the acquired point cloud data and the image data of the identified object to be marked to obtain semantic map data;
and obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a pre-trained neural network model according to the semantic map and the point cloud data.
2. The method of claim 1, wherein obtaining a map for drawing the position of the object to be marked, the color of the object to be marked, and the boundary of the object to be marked by using a pre-trained neural network model according to the semantic map and the point cloud data comprises:
determining the geometric characteristics of the object to be marked and the color characteristics of the object to be marked contained in the semantic map by utilizing a first sub-neural network model;
preprocessing the point cloud data to obtain a preprocessing result;
determining the geographic position characteristics of the object to be marked contained in the preprocessing result by utilizing a second sub-neural network model;
and obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a third sub-neural network model according to the geometric characteristics of the object to be marked, the color characteristics of the object to be marked and the geographic position characteristics of the object to be marked.
3. The method of claim 2, wherein the preprocessing the point cloud data to obtain a preprocessing result in the presence of multiple frames of point cloud data comprises:
splicing the multi-frame point cloud data to obtain a splicing processing result;
generating a top view base map of the object to be marked by using the splicing processing result;
and taking the top view base map as the preprocessing result.
4. The method of claim 2, wherein the first sub-neural network model is a deep neural network model and the second sub-neural network model is a shallow neural network model.
5. The method of any one of claims 2 to 4, wherein the first sub-neural network model, the second sub-neural network model and the third sub-neural network model constitute an end-to-end neural network model.
6. The method of claim 1, wherein the fusing the point cloud data with the image data identifying the object to be labeled to obtain a semantic map comprises:
determining the relative position relationship between the point cloud data and the image data of the identified object to be marked;
according to the relative position relationship, constructing a projection relationship between a 3D pixel point in the point cloud data and a 2D pixel point in the image data of the object to be marked;
and according to the projection relation, fusing the point cloud data and the image data of the identified object to be marked to obtain a semantic map.
7. The method according to any one of claims 1 to 4 or 6, wherein the object to be marked comprises a traffic sign line.
8. A high-precision map drawing apparatus comprising:
the semantic segmentation module is used for performing semantic segmentation processing on the acquired image data to obtain image data for identifying an object to be labeled;
the semantic map generation module is used for fusing the acquired point cloud data with the image data of the identified object to be marked to obtain a semantic map;
and the map drawing module is used for obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked by utilizing a pre-trained neural network model according to the semantic map and the point cloud data.
9. The apparatus of claim 8, wherein the mapping module comprises:
the first drawing submodule is used for determining the geometric characteristics of the object to be marked and the color characteristics of the object to be marked contained in the semantic map by utilizing a first sub-neural network model;
the preprocessing submodule is used for preprocessing the point cloud data to obtain a preprocessing result;
the second drawing submodule is used for determining the geographic position characteristics of the object to be marked contained in the preprocessing result by utilizing a second sub-neural network model;
and the third drawing submodule is used for obtaining a map for drawing the position of the object to be marked, the color of the object to be marked and the boundary of the object to be marked according to the geometric characteristics of the object to be marked, the color characteristics of the object to be marked and the geographic position characteristics of the object to be marked by utilizing a third sub-neural network model.
10. The apparatus of claim 9, wherein the pre-processing sub-module, in the presence of multiple frames of point cloud data, comprises:
the splicing unit is used for splicing the multi-frame point cloud data to obtain a splicing processing result;
the top view base map generating unit is used for generating a top view base map of the object to be marked by utilizing the splicing processing result;
and taking the top view base map as the preprocessing result.
11. The apparatus of claim 9, wherein the first sub-neural network model is a deep neural network model and the second sub-neural network model is a shallow neural network model.
12. The apparatus of any one of claims 9 to 11, wherein the first, second, and third sub-neural network models constitute an end-to-end neural network model.
13. The apparatus of claim 8, wherein the semantic map generation module comprises:
a relative position relation determining submodule, configured to determine a relative position relation between the point cloud data and the image data of the identified object to be marked;
the projection relation determining submodule is used for constructing a projection relation between a 3D pixel point in the point cloud data and a 2D pixel point in the image data of the object to be marked;
and the semantic map generation execution submodule is used for fusing the point cloud data and the image data of the identified object to be marked according to the projection relation to obtain a semantic map.
14. The apparatus of any one of claims 8 to 11 or 13, wherein the object to be marked comprises a traffic sign line.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1 to 7.
17. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 7.
CN202111032417.2A 2021-09-03 2021-09-03 High-precision map drawing method, device, equipment and storage medium Pending CN113674287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111032417.2A CN113674287A (en) 2021-09-03 2021-09-03 High-precision map drawing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111032417.2A CN113674287A (en) 2021-09-03 2021-09-03 High-precision map drawing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113674287A true CN113674287A (en) 2021-11-19

Family

ID=78548223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111032417.2A Pending CN113674287A (en) 2021-09-03 2021-09-03 High-precision map drawing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113674287A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359463A (en) * 2022-03-20 2022-04-15 宁波博登智能科技有限公司 Point cloud marking system and method for ground identification
CN114581621A (en) * 2022-03-07 2022-06-03 北京百度网讯科技有限公司 Map data processing method, map data processing device, electronic equipment and medium
CN114627239A (en) * 2022-03-04 2022-06-14 北京百度网讯科技有限公司 Bounding box generation method, device, equipment and storage medium
CN115131761A (en) * 2022-08-31 2022-09-30 北京百度网讯科技有限公司 Road boundary identification method, drawing method and device and high-precision map
CN116265862A (en) * 2021-12-16 2023-06-20 动态Ad有限责任公司 Vehicle, system and method for a vehicle, and storage medium
CN116563812A (en) * 2023-07-07 2023-08-08 小米汽车科技有限公司 Target detection method, target detection device, storage medium and vehicle
CN114627239B (en) * 2022-03-04 2024-04-30 北京百度网讯科技有限公司 Bounding box generation method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110160502A (en) * 2018-10-12 2019-08-23 腾讯科技(深圳)有限公司 Map elements extracting method, device and server
US20190271548A1 (en) * 2018-03-02 2019-09-05 DeepMap Inc. Reverse rendering of an image based on high definition map data
US20190287254A1 (en) * 2018-03-16 2019-09-19 Honda Motor Co., Ltd. Lidar noise removal using image pixel clusterings
CN111291676A (en) * 2020-02-05 2020-06-16 清华大学 Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111784837A (en) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 High-precision map generation method and device
CN111968229A (en) * 2020-06-28 2020-11-20 北京百度网讯科技有限公司 High-precision map making method and device
CN111968129A (en) * 2020-07-15 2020-11-20 上海交通大学 Instant positioning and map construction system and method with semantic perception
CN112740268A (en) * 2020-11-23 2021-04-30 华为技术有限公司 Target detection method and device
CN113240009A (en) * 2021-05-14 2021-08-10 广州极飞科技股份有限公司 Point cloud data labeling method and device, storage medium and electronic equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190271548A1 (en) * 2018-03-02 2019-09-05 DeepMap Inc. Reverse rendering of an image based on high definition map data
CN112204343A (en) * 2018-03-02 2021-01-08 迪普迈普有限公司 Visualization of high definition map data
US20190287254A1 (en) * 2018-03-16 2019-09-19 Honda Motor Co., Ltd. Lidar noise removal using image pixel clusterings
CN110160502A (en) * 2018-10-12 2019-08-23 腾讯科技(深圳)有限公司 Map elements extracting method, device and server
CN111291676A (en) * 2020-02-05 2020-06-16 清华大学 Lane line detection method and device based on laser radar point cloud and camera image fusion and chip
CN111784837A (en) * 2020-06-28 2020-10-16 北京百度网讯科技有限公司 High-precision map generation method and device
CN111968229A (en) * 2020-06-28 2020-11-20 北京百度网讯科技有限公司 High-precision map making method and device
CN111968129A (en) * 2020-07-15 2020-11-20 上海交通大学 Instant positioning and map construction system and method with semantic perception
CN112740268A (en) * 2020-11-23 2021-04-30 华为技术有限公司 Target detection method and device
CN113240009A (en) * 2021-05-14 2021-08-10 广州极飞科技股份有限公司 Point cloud data labeling method and device, storage medium and electronic equipment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116265862A (en) * 2021-12-16 2023-06-20 动态Ad有限责任公司 Vehicle, system and method for a vehicle, and storage medium
CN114627239A (en) * 2022-03-04 2022-06-14 北京百度网讯科技有限公司 Bounding box generation method, device, equipment and storage medium
CN114627239B (en) * 2022-03-04 2024-04-30 北京百度网讯科技有限公司 Bounding box generation method, device, equipment and storage medium
CN114581621A (en) * 2022-03-07 2022-06-03 北京百度网讯科技有限公司 Map data processing method, map data processing device, electronic equipment and medium
CN114359463A (en) * 2022-03-20 2022-04-15 宁波博登智能科技有限公司 Point cloud marking system and method for ground identification
CN115131761A (en) * 2022-08-31 2022-09-30 北京百度网讯科技有限公司 Road boundary identification method, drawing method and device and high-precision map
CN116563812A (en) * 2023-07-07 2023-08-08 小米汽车科技有限公司 Target detection method, target detection device, storage medium and vehicle
CN116563812B (en) * 2023-07-07 2023-11-14 小米汽车科技有限公司 Target detection method, target detection device, storage medium and vehicle

Similar Documents

Publication Publication Date Title
CN113674287A (en) High-precision map drawing method, device, equipment and storage medium
CN112258519B (en) Automatic extraction method and device for way-giving line of road in high-precision map making
CN113378760A (en) Training target detection model and method and device for detecting target
CN113688935A (en) High-precision map detection method, device, equipment and storage medium
CN113378693B (en) Method and device for generating target detection system and detecting target
CN115880536B (en) Data processing method, training method, target object detection method and device
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
CN113859264A (en) Vehicle control method, device, electronic device and storage medium
CN114283343B (en) Map updating method, training method and device based on remote sensing satellite image
CN113724388B (en) High-precision map generation method, device, equipment and storage medium
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN113421217A (en) Method and device for detecting travelable area
CN115761698A (en) Target detection method, device, equipment and storage medium
CN114140813A (en) High-precision map marking method, device, equipment and storage medium
CN113706705B (en) Image processing method, device, equipment and storage medium for high-precision map
CN113742440B (en) Road image data processing method and device, electronic equipment and cloud computing platform
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN113276888B (en) Riding method, device, equipment and storage medium based on automatic driving
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN114840626A (en) High-precision map data processing method, driving navigation method and related device
CN114998863A (en) Target road identification method, target road identification device, electronic equipment and storage medium
CN115147809A (en) Obstacle detection method, device, equipment and storage medium
CN114581869A (en) Method and device for determining position of target object, electronic equipment and storage medium
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113554882A (en) Method, apparatus, device and storage medium for outputting information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination