CN117132727B - Map data acquisition method, computer readable storage medium and electronic device - Google Patents

Map data acquisition method, computer readable storage medium and electronic device

Info

Publication number
CN117132727B
CN117132727B (application CN202311369249.5A)
Authority
CN
China
Prior art keywords
color information
map data
semantic
sampling point
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311369249.5A
Other languages
Chinese (zh)
Other versions
CN117132727A (en)
Inventor
胡泽宇
谢晨
杨海波
李龙辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guanglun Intelligent Beijing Technology Co ltd
Original Assignee
Guanglun Intelligent Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guanglun Intelligent Beijing Technology Co ltd filed Critical Guanglun Intelligent Beijing Technology Co ltd
Priority claimed from CN202311369249.5A
Publication of CN117132727A
Application granted
Publication of CN117132727B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0499Feedforward networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention relates to the technical field of image data processing, and in particular to a map data acquisition method, a computer-readable storage medium, and an electronic device. It aims to solve the problem that existing three-dimensional reconstruction methods cannot effectively and accurately acquire map data such as ground height. To this end, the map data acquisition method of the present invention simplifies the input of a multi-layer perceptron: the two-dimensional coordinates of a sampling point are taken as input, and height, or height and semantic information, is configured as the output. The multi-layer perceptron can then accurately and effectively obtain the map data of the sampling points in a target ground scene image, solving the problem that existing three-dimensional reconstruction methods cannot effectively and accurately obtain such map data.

Description

Map data acquisition method, computer readable storage medium and electronic device
Technical Field
The invention relates to the technical field of image data processing, and in particular provides a map data acquisition method, a computer-readable storage medium, and an electronic device.
Background
In the related art, three-dimensional reconstruction is generally performed based on the MVS (Multi-View Stereo) method or on unmanned aerial vehicle oblique photography. The MVS method reconstructs well in scenes rich in feature points, but for scenes such as road scenes, where feature points are few and ground features are not obvious, the computation easily fails; moreover, MVS mainly focuses on the geometric shape of the reconstructed scene. Three-dimensional reconstruction based on unmanned aerial vehicle oblique photography is limited by the viewing angle of the unmanned aerial vehicle: it cannot accurately generate the height of the ground, cannot handle dynamic vehicles, and easily produces holes or floating artifacts in the reconstructed scene. In other words, neither method can effectively and accurately obtain map data such as ground height.
Disclosure of Invention
The invention aims to solve the above technical problem, namely the problem that existing three-dimensional reconstruction methods cannot effectively and accurately obtain map data such as ground height.
In a first aspect, the present invention provides a map data acquisition method, comprising:
acquiring two-dimensional coordinates of a sampling point in a target ground scene image, wherein the two-dimensional coordinates are coordinates of the sampling point in a two-dimensional coordinate system constructed in parallel to the ground plane direction;
inputting the two-dimensional coordinates of the sampling points into a trained multi-layer perceptron;
and obtaining at least map data corresponding to the sampling points, wherein the map data comprises altitude or altitude and semantic information.
In some embodiments, the trained multi-layer perceptron is obtained by:
acquiring two-dimensional coordinates of any sampling point in a ground scene training image sample;
inputting the two-dimensional coordinates of the arbitrary sampling points into an initial multi-layer perceptron;
obtaining color information and map data corresponding to the arbitrary sampling points;
and performing loss calculation at least based on the color information and the real color information of the arbitrary sampling points, and performing iterative training on the initial multi-layer perceptron based on a loss calculation result to obtain the trained multi-layer perceptron.
In some embodiments, when the map data includes altitude and semantic information, the performing the loss calculation based on at least the color information and the true color information of the arbitrary sampling point includes:
and performing color loss calculation based on the color information and the real color information of the arbitrary sampling points, performing semantic loss calculation based on the semantic information and the real semantic information of the arbitrary sampling points, and taking the results of the color loss calculation and the semantic loss calculation as final loss calculation results.
In some embodiments, the calculating the color loss based on the color information of the arbitrary sampling point and the true color information includes:
calculating the color loss by calculating the mean square error of the color information of the arbitrary sampling point and the real color information; and/or the number of the groups of groups,
the semantic loss calculation based on the semantic information and the real semantic information of the arbitrary sampling points comprises the following steps:
and calculating the semantic loss by calculating the mean square error of the semantic information of the arbitrary sampling point and the real semantic information.
In some embodiments, before the semantic loss calculation based on the semantic information of the arbitrary sampling point and the real semantic information, the method further includes:
carrying out semantic segmentation on the ground scene training image sample to obtain a semantic segmentation image;
and obtaining real semantic information corresponding to the arbitrary sampling point based on the semantic segmentation image.
In some embodiments, before the performing the loss calculation based at least on the color information of the arbitrary sampling point and the true color information, the method further includes:
acquiring a camera sampling pose corresponding to the ground scene training image sample;
obtaining a new image of the ground scene training image sample based on the color information and map data of the arbitrary sampling points;
rendering the new image based on the camera sampling pose;
the calculating the loss based on the color information and the true color information of the arbitrary sampling point at least comprises the following steps:
and performing loss calculation at least based on the color information and the true color information of any sampling point in the new rendered image.
In some embodiments, the acquiring the two-dimensional coordinates of the sampling point in the target ground scene image includes:
grid division is carried out on the plane where the two-dimensional coordinate system is located;
taking any vertex of the grid obtained after division as a sampling point and acquiring the two-dimensional coordinates of the sampling point.
In a second aspect, the present invention provides a ground scene three-dimensional reconstruction method, comprising:
acquiring map data of a target ground scene by adopting the map data acquisition method;
and carrying out three-dimensional reconstruction of the target ground scene based on the map data.
In a third aspect, the present invention provides a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the map data acquisition method of any one of the above or the above-described ground scene three-dimensional reconstruction method.
In a fourth aspect, the present invention provides an electronic device comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores a computer program that when executed by the at least one processor implements the map data acquisition method of any one of the above or the above-described ground scene three-dimensional reconstruction method.
Under the condition of adopting the technical scheme, the method and the device can acquire the two-dimensional coordinates of the sampling point in the target ground scene image, wherein the two-dimensional coordinates are the coordinates of the sampling point in a two-dimensional coordinate system constructed in parallel to the ground plane direction; inputting the two-dimensional coordinates of the sampling points into a trained multi-layer perceptron; at least map data corresponding to the sampling points is obtained, the map data including altitude or altitude and semantic information. The method can accurately and effectively acquire the map data of the sampling points in the target ground scene image by using the multi-layer perceptron by simplifying the input of the multi-layer perceptron, namely taking the two-dimensional coordinates of the sampling points as the input and configuring the height or the height and semantic information as the output of the multi-layer perceptron, thereby solving the problem that the existing three-dimensional reconstruction method cannot effectively and accurately acquire the map data.
Drawings
Preferred embodiments of the present invention are described below with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a map data obtaining method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a training method of a multi-layer perceptron provided by an embodiment of the invention;
FIG. 3 is a schematic flow chart of a real semantic information acquisition method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a training method of a multi-layer perceptron according to another embodiment of the present invention;
fig. 5 is a schematic flow chart of a three-dimensional reconstruction method of a ground scene provided by an embodiment of the invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a map data obtaining method according to an embodiment of the present invention, which may include:
step S11: acquiring two-dimensional coordinates of a sampling point in a target ground scene image, wherein the two-dimensional coordinates are coordinates of the sampling point in a two-dimensional coordinate system constructed in parallel to the ground plane direction;
step S12: inputting the two-dimensional coordinates of the sampling points into a trained multi-layer perceptron;
step S13: at least map data corresponding to the sampling points is obtained, the map data including altitude or altitude and semantic information.
In some embodiments, a two-dimensional coordinate system parallel to the ground plane direction may be pre-constructed for the target ground scene image, and in particular, may be a two-dimensional rectangular coordinate system.
In some embodiments, step S11 may be specifically:
grid division is carried out on a plane where the two-dimensional coordinate system is located;
taking any vertex of the grid obtained after division as a sampling point and acquiring the two-dimensional coordinates of the sampling point.
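The grid-sampling step above can be sketched as follows. The extent and cell size here are illustrative assumptions; the patent does not specify a particular grid resolution:

```python
def grid_sample_points(x_min, x_max, y_min, y_max, cell_size):
    """Divide the plane of the 2-D coordinate system into a regular grid and
    return the (x, y) coordinates of every grid vertex as sampling points."""
    points = []
    nx = int((x_max - x_min) / cell_size)
    ny = int((y_max - y_min) / cell_size)
    for i in range(nx + 1):
        for j in range(ny + 1):
            points.append((x_min + i * cell_size, y_min + j * cell_size))
    return points

# Example: a 10 m x 10 m patch of ground sampled every 1 m -> 11 x 11 vertices.
pts = grid_sample_points(0.0, 10.0, 0.0, 10.0, 1.0)
```

Every vertex of the grid then serves as a sampling point whose two-dimensional coordinates are fed to the multi-layer perceptron.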
The neural radiance field (NeRF) is a deep learning model for implicit three-dimensional spatial modeling; the underlying model is a fully-connected neural network (also called a multi-layer perceptron). A neural radiance field can be used to fit the mapping relationship between sampling points and their corresponding attributes; by way of example, the attributes may include color. Typically, when applying a neural radiance field, a set of continuously captured images and camera poses is given, the three-dimensional coordinates of sampling points along camera rays and the corresponding viewing directions are taken as input, and the density (the degree to which light is absorbed or scattered when passing through a specified point) and color of the sampling points are output. In the present invention, the input of the multi-layer perceptron, i.e., the neural radiance field, is simplified: the two-dimensional coordinates of the sampling points in the target ground scene image are taken as input, the output of the multi-layer perceptron is configured accordingly, and the trained multi-layer perceptron is used to obtain the mapping relationship between the sampling points in the target ground scene image and the map data. At least the height, or the height and semantic information, corresponding to the sampling points can thus be obtained, facilitating the subsequent three-dimensional reconstruction of the ground scene.
In some preferred embodiments, the output layer of the multi-layer perceptron may be configured to output color, height and semantic information. The multi-layer perceptron may include an input layer, hidden layers and an output layer, wherein the input layer receives the two-dimensional coordinates of the sampling points, 4 hidden layers may be provided, each hidden layer may be followed by an activation layer, and the output layer may be provided with 3 output nodes for outputting color, height and semantic information, respectively. It should be noted that, provided the map data of the sampling points can still be acquired, the architecture of the multi-layer perceptron may be set in other manners.
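As a rough numpy sketch of the layout just described — not the patent's actual implementation — the following assumes a hidden width of 64, ReLU activations, and three semantic classes, none of which are specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 64       # assumed hidden-layer width (not specified in the patent)
NUM_CLASSES = 3   # assumed semantic classes, e.g. road / road edge / drivable area

def _layer(n_in, n_out):
    """One fully-connected layer: weight matrix and bias vector."""
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Input layer takes the 2-D coordinates; four hidden layers, each followed by
# an activation (ReLU here); three output heads for color, height, semantics.
hidden_layers = [_layer(2, HIDDEN)] + [_layer(HIDDEN, HIDDEN) for _ in range(3)]
color_head = _layer(HIDDEN, 3)
height_head = _layer(HIDDEN, 1)
semantic_head = _layer(HIDDEN, NUM_CLASSES)

def forward(xy):
    """Map the 2-D coordinates of one sampling point to (color, height, semantics)."""
    h = np.asarray(xy, dtype=float)
    for w, b in hidden_layers:
        h = np.maximum(h @ w + b, 0.0)        # ReLU after every hidden layer
    color = h @ color_head[0] + color_head[1]
    height = h @ height_head[0] + height_head[1]
    semantics = h @ semantic_head[0] + semantic_head[1]
    return color, height, semantics

color, height, semantics = forward((1.5, -2.0))
```

The weights here are random placeholders; in the method they would be learned by the iterative training described below.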
In some embodiments, step S13 may be specifically to obtain the height and color information corresponding to the sampling point. In some preferred embodiments, step S13 may specifically be to obtain the height, semantic information and color information corresponding to the sampling point, so as to obtain multiple types of attribute information of the sampling point, so as to facilitate the three-dimensional reconstruction of the subsequent ground scene.
In some embodiments, referring to fig. 2, fig. 2 is a schematic flow chart of a multi-layer perceptron training method according to an embodiment of the present invention, which may include:
step S21: acquiring two-dimensional coordinates of any sampling point in a ground scene training image sample;
step S22: inputting the two-dimensional coordinates of any sampling point into an initial multi-layer perceptron;
step S23: obtaining color information and map data corresponding to any sampling point;
step S24: and performing loss calculation at least based on the color information and the real color information of any sampling point, and performing iterative training on the initial multi-layer perceptron based on a loss calculation result to obtain a trained multi-layer perceptron.
In some embodiments, the two-dimensional coordinate system may be constructed for the ground scene training image sample by the same method as for the target ground scene image. Step S21 may be implemented in the same manner as step S11, with the target ground scene image of step S11 replaced by the ground scene training image sample; reference is made to the description above.
In some embodiments, step S22 may be specifically inputting the two-dimensional coordinates of any one sampling point in the ground scene training image sample into the initial multi-layer perceptron, and step S23 may be specifically obtaining color information and map data corresponding to the sampling point, where the map data may include altitude or altitude and semantic information.
In some embodiments, step S24 may specifically be performing loss calculation based on color information and true color information of any sampling point, and performing iterative training on the initial multi-layer perceptron based on the loss calculation result, so as to obtain a trained multi-layer perceptron.
In some embodiments, a ground scene training image sample may be obtained in advance, where the ground scene training image sample may be labeled with the true color information of each pixel.
In some embodiments, performing the color loss calculation based on the color information and the true color information for any sampling point may include: and calculating the color loss by calculating the mean square error of the color information of any sampling point and the real color information. In other embodiments, other methods in the art may also be used for color loss calculation.
In other embodiments, when the map data includes altitude and semantic information, step S24 may be specifically:
and performing color loss calculation based on the color information and the real color information of any sampling point, performing semantic loss calculation based on the semantic information and the real semantic information of any sampling point, and taking the results of the color loss calculation and the semantic loss calculation as final loss calculation results. The loss calculation is carried out based on the color information and the semantic information, so that the training effectiveness is improved, and the accuracy of the trained multi-layer perceptron is improved.
In some embodiments, a ground scene training image sample may be obtained in advance, where the ground scene training image sample may be labeled with real color information and/or real semantic information for each pixel.
In other embodiments, referring to fig. 3, before the semantic loss calculation is performed based on the semantic information and the real semantic information of any sampling point, the real semantic information may be obtained by:
step S31: carrying out semantic segmentation on the ground scene training image sample to obtain a semantic segmentation image;
step S32: and obtaining real semantic information corresponding to any sampling point based on the semantic segmentation image.
The semantic segmentation model in the art may be used to perform semantic segmentation on the ground scene training image sample, and as an example, the real semantic information may include at least one of a road, a road edge, and a travelable region.
In some embodiments, performing the semantic loss calculation based on the semantic information and the true semantic information of the arbitrary sampling point may include performing the semantic loss calculation by calculating a mean square error of the semantic information and the true semantic information of the arbitrary sampling point. In other embodiments, other methods in the art may also be employed for semantic loss computation.
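The mean-square-error losses described above can be sketched in a few lines. The equal weighting of the color and semantic terms is an assumption; the patent only states that the two results together form the final loss:

```python
def mse(pred, target):
    """Mean squared error between two equal-length vectors."""
    assert len(pred) == len(target)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def total_loss(pred_color, true_color, pred_semantic, true_semantic):
    # Color loss + semantic loss taken together as the final loss result,
    # as described above (unit weights are an illustrative assumption).
    return mse(pred_color, true_color) + mse(pred_semantic, true_semantic)
```

During iterative training, this scalar would be minimized to update the perceptron's weights.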
In other embodiments, in order to improve image quality and obtain a multi-layer perceptron with higher accuracy, a new image of the ground scene training image sample, carrying the obtained color information and map data, may be rendered before the loss calculation based on at least the color information and the real color information of any sampling point. This training process is described in the following embodiments.
Referring to fig. 4, fig. 4 is a schematic flow chart of a multi-layer perceptron training method according to another embodiment of the present invention, which may include:
step S41: acquiring two-dimensional coordinates of any sampling point in a ground scene training image sample;
step S42: inputting the two-dimensional coordinates of any sampling point into an initial multi-layer perceptron;
step S43: obtaining color information and map data corresponding to any sampling point;
step S44: acquiring a camera sampling pose corresponding to a ground scene training image sample;
step S45: obtaining a new image of the ground scene training image sample based on the color information and map data of any sampling point;
step S46: rendering a new image based on the camera sampling pose;
step S47: and performing loss calculation at least based on the color information and the real color information of any sampling point in the rendered new image, and performing iterative training on the initial multi-layer perceptron based on a loss calculation result to obtain the trained multi-layer perceptron.
Steps S41 to S43 may be implemented in the same manner as steps S21 to S23; for brevity, reference is made to the above description. In addition, it should be noted that step S44 is shown in this embodiment as being performed after step S43 by way of example only; the order of execution of step S44 is not particularly limited, and step S44 may, for example, be performed before step S41 or in synchronization with any one of steps S41 to S43.
In some embodiments, step S44 may be specifically to acquire a camera sampling pose under a world coordinate system through a camera tracking algorithm or using a sensor; in other embodiments, the camera sampling pose may also be obtained directly.
In some embodiments, step S45 may specifically assign the obtained color information and map data of the sampling points to the ground scene training image sample, so as to obtain a new image of the ground scene training image sample. In some embodiments, a plurality of sampling points may be provided.
In some embodiments, step S46 may specifically include: projecting the new image of the ground scene training image sample through the camera sampling pose to obtain the two-dimensional image coordinates of each sampling point in the camera coordinate system; performing color interpolation for each pixel in the new image based on the color information of each sampling point; and labeling each pixel in the new image based on the semantic information of each sampling point, so as to obtain the rendered new image.
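The projection part of this rendering step can be sketched as follows, assuming a standard pinhole camera model with intrinsics K and a world-to-camera pose (R, t); the patent does not specify the camera model, so these are illustrative assumptions. Each sampling point is lifted to 3-D using its predicted height and projected to two-dimensional image coordinates:

```python
import numpy as np

def project(points_world, R, t, K):
    """Project N x 3 world points to pixel coordinates with a pinhole camera.

    R, t: camera sampling pose (world-to-camera rotation and translation).
    K:    3 x 3 intrinsic matrix.
    """
    cam = points_world @ R.T + t      # world frame -> camera frame
    img = cam @ K.T                   # apply intrinsics
    return img[:, :2] / img[:, 2:3]   # perspective divide -> (u, v)

# A sampling point at ground coordinates (0, 0) with predicted height 0,
# viewed by a hypothetical camera 10 m away along its optical axis.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
uv = project(np.array([[0.0, 0.0, 0.0]]), R, t, K)
```

A point on the optical axis lands at the principal point (320, 240) here, which is a quick sanity check for the pose convention.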
In some embodiments, the loss calculation in step S47 may be performed in the same manner as step S24 based on the new image after rendering, and for brevity, reference is made to the description above.
The map data obtaining method provided by the embodiment of the invention is that two-dimensional coordinates of the sampling points in the target ground scene image are obtained, wherein the two-dimensional coordinates are coordinates of the sampling points in a two-dimensional coordinate system constructed in parallel to the ground plane direction; inputting the two-dimensional coordinates of the sampling points into a trained multi-layer perceptron; at least map data corresponding to the sampling points is obtained, the map data including altitude or altitude and semantic information. The method can accurately and effectively acquire the map data of the sampling points in the target ground scene image by using the multi-layer perceptron by simplifying the input of the multi-layer perceptron, namely taking the two-dimensional coordinates of the sampling points as the input and configuring the height or the height and semantic information as the output of the multi-layer perceptron, thereby solving the problem that the existing three-dimensional reconstruction method cannot effectively and accurately acquire the map data.
Referring to fig. 5, another aspect of the present invention further provides a three-dimensional reconstruction method of a ground scene, which may include:
step S51: the map data of the target ground scene is acquired by adopting the map data acquisition method according to any embodiment;
step S52: and carrying out three-dimensional reconstruction of the target ground scene based on the map data.
In some embodiments, the map data may include height, or height and semantic information. The three-dimensional reconstruction of the target ground scene may be performed based on the obtained map data using a three-dimensional reconstruction model in the field; the reconstructed target ground scene can represent the geometry of each target object in the scene and may include height and semantic information, providing richer three-dimensional data.
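One simple way such a reconstruction could use the acquired map data — purely illustrative, since the patent defers to "a three-dimensional reconstruction model in the field" — is to lift the grid of sampled heights into a triangle mesh:

```python
def heightfield_to_mesh(heights, cell_size):
    """Turn a grid of per-vertex heights into vertices and triangle faces.

    heights: 2-D list, heights[i][j] is the height of grid vertex (i, j).
    cell_size: spacing between neighbouring grid vertices.
    """
    rows, cols = len(heights), len(heights[0])
    vertices = [(i * cell_size, j * cell_size, heights[i][j])
                for i in range(rows) for j in range(cols)]
    faces = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j          # indices of the four cell corners
            b = a + 1
            c = a + cols
            d = c + 1
            faces.append((a, b, c))   # split each grid cell into two triangles
            faces.append((b, d, c))
    return vertices, faces

verts, faces = heightfield_to_mesh([[0.0, 0.1], [0.2, 0.3]], 1.0)
```

Semantic information could then be carried along as a per-vertex label on the same grid.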
It will be appreciated by those skilled in the art that the present application may implement all or part of the processes in the methods of the above embodiments by means of a computer program instructing relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, electrical carrier signals, telecommunications signals, software distribution media, and the like.
In another aspect of the present application, there is further provided a computer readable storage medium, in which a computer program is stored, where the computer program, when executed by a processor, implements the map data acquisition method or the ground scene three-dimensional reconstruction method according to any one of the above embodiments. The computer readable storage medium may be a storage device including various electronic devices, and optionally, in embodiments of the present application, the computer readable storage medium is a non-transitory computer readable storage medium.
Another aspect of the present application also provides an electronic device that may include at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores a computer program, and the computer program when executed by at least one processor implements the map data acquisition method or the ground scene three-dimensional reconstruction method according to any one of the above embodiments.
Referring to fig. 6, a structure in which the memory 61 and the processor 62 are connected by a bus is exemplarily shown in fig. 6, and the memory 61 and the processor 62 are each provided with only one.
In other embodiments, the electronic device may include multiple memories 61 and multiple processors 62. While the program for performing the map data acquisition method or the ground scene three-dimensional reconstruction method of any of the above embodiments may be divided into a plurality of sub-programs, each of which may be loaded and executed by the processor 62 to perform the different steps of the map data acquisition method or the ground scene three-dimensional reconstruction method of the above method embodiments, respectively. Specifically, each of the sub-programs may be stored in a different memory 61, respectively, and each of the processors 62 may be configured to execute the programs in one or more memories 61 to collectively implement the map data acquisition method or the ground scene three-dimensional reconstruction method of the above-described method embodiment.
It should be noted that any ground scene information involved in the embodiments of the present application is obtained in strict accordance with the requirements of laws and regulations, following the principles of lawfulness, legitimacy and necessity, based on reasonable purposes of the service scenes, and with authorization.
The applicant attaches great importance to the security of ground scene information and has adopted reasonable and feasible security measures that meet industry standards to protect such information from unauthorized access, disclosure, use, modification, damage, or loss.
Thus far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily appreciate that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications or substitutions of the related technical features may be made without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of protection of the present invention.

Claims (10)

1. A map data acquisition method, characterized by comprising:
acquiring two-dimensional coordinates of a sampling point in a target ground scene image, wherein the two-dimensional coordinates are coordinates of the sampling point in a two-dimensional coordinate system constructed in parallel to the ground plane direction;
inputting the two-dimensional coordinates of the sampling points into a trained multi-layer perceptron, wherein an output layer of the multi-layer perceptron is provided with an output node for outputting height, an output node for outputting semantic information and an output node for outputting color information;
and obtaining color information and map data corresponding to the sampling points, wherein the map data comprises height and semantic information.
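The network structure recited in claim 1 can be illustrated by a minimal sketch (all layer sizes, weight initializations, and names below are hypothetical, not taken from the patent): a multi-layer perceptron that maps a 2D ground-plane coordinate to a height value, semantic logits, and RGB color through three separate output heads.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class MapMLP:
    """Minimal MLP: 2D ground-plane coordinate -> (height, semantics, color)."""
    def __init__(self, hidden=64, num_classes=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (2, hidden))
        self.b1 = np.zeros(hidden)
        # Three output heads, as recited in claim 1.
        self.w_h = rng.normal(0, 0.1, (hidden, 1))            # height
        self.w_s = rng.normal(0, 0.1, (hidden, num_classes))  # semantic logits
        self.w_c = rng.normal(0, 0.1, (hidden, 3))            # RGB color

    def forward(self, xy):
        h = relu(xy @ self.w1 + self.b1)
        height = h @ self.w_h
        semantics = h @ self.w_s
        color = 1.0 / (1.0 + np.exp(-(h @ self.w_c)))  # squash to [0, 1]
        return height, semantics, color

mlp = MapMLP()
xy = np.array([[0.25, -0.75]])           # one sampling point (claim 1 input)
height, semantics, color = mlp.forward(xy)
print(height.shape, semantics.shape, color.shape)  # (1, 1) (1, 4) (1, 3)
```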
2. The method of claim 1, wherein the trained multi-layer perceptron is obtained by:
acquiring two-dimensional coordinates of any sampling point in a ground scene training image sample;
inputting the two-dimensional coordinates of the arbitrary sampling points into an initial multi-layer perceptron;
obtaining color information and map data corresponding to the arbitrary sampling points;
and performing loss calculation at least based on the color information and the real color information of the arbitrary sampling points, and performing iterative training on the initial multi-layer perceptron based on a loss calculation result to obtain the trained multi-layer perceptron.
3. The method of claim 2, wherein, when the map data includes height and semantic information, the performing a loss calculation based at least on the color information and the true color information of the arbitrary sampling point comprises:
and performing color loss calculation based on the color information and the real color information of the arbitrary sampling points, performing semantic loss calculation based on the semantic information and the real semantic information of the arbitrary sampling points, and taking the results of the color loss calculation and the semantic loss calculation as final loss calculation results.
4. A method according to claim 3, wherein said calculating the color loss based on the color information of the arbitrary sampling point and the true color information comprises:
calculating the color loss by calculating the mean square error between the color information of the arbitrary sampling point and the real color information; and/or,
the semantic loss calculation based on the semantic information and the real semantic information of the arbitrary sampling points comprises the following steps:
and calculating the semantic loss by calculating the mean square error of the semantic information of the arbitrary sampling point and the real semantic information.
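The losses recited in claims 3 and 4 can be sketched as follows (the variable names and values are made up for illustration): the color loss and the semantic loss are each a mean square error, and their sum serves as the final loss calculation result.

```python
import numpy as np

def mse(pred, target):
    """Mean square error between a prediction and its ground truth."""
    return float(np.mean((pred - target) ** 2))

pred_color = np.array([0.2, 0.5, 0.9])   # color output of the MLP
true_color = np.array([0.2, 0.4, 1.0])   # real color at the sampling point
pred_sem   = np.array([0.1, 0.8, 0.1])   # predicted class scores
true_sem   = np.array([0.0, 1.0, 0.0])   # one-hot real semantics

color_loss = mse(pred_color, true_color)
semantic_loss = mse(pred_sem, true_sem)
total_loss = color_loss + semantic_loss  # final loss result per claim 3
```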
5. A method according to claim 3, wherein prior to the semantic loss calculation based on the semantic information of the arbitrary sample points and the true semantic information, the method further comprises:
carrying out semantic segmentation on the ground scene training image sample to obtain a semantic segmentation image;
and obtaining real semantic information corresponding to the arbitrary sampling point based on the semantic segmentation image.
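The lookup of claim 5 can be sketched with a toy example (the label map, class names, and coordinates are invented for illustration): after semantic segmentation produces a per-pixel class-id image, the real semantic label of a sampling point is simply read at its pixel position.

```python
import numpy as np

# 4x4 segmentation image of class ids
# (0 = road, 1 = sidewalk, 2 = vegetation -- hypothetical classes)
seg_image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 1, 1],
    [2, 2, 2, 2],
])

def real_semantic(seg, px, py):
    """Return the ground-truth class id at pixel (px, py); row = py, col = px."""
    return int(seg[py, px])

print(real_semantic(seg_image, 3, 0))  # 1 (sidewalk in this toy map)
```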
6. The method according to any one of claims 2 to 5, wherein before the loss calculation based at least on the color information and the true color information of the arbitrary sampling point, the method further comprises:
acquiring a camera sampling pose corresponding to the ground scene training image sample;
obtaining a new image of the ground scene training image sample based on the color information and map data of the arbitrary sampling points;
rendering the new image based on the camera sampling pose;
the calculating the loss based on the color information and the true color information of the arbitrary sampling point at least comprises the following steps:
and performing loss calculation at least based on the color information and the true color information of any sampling point in the new rendered image.
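The render-then-compare step of claim 6 can be sketched with a standard pinhole camera model (the intrinsics, pose, and point values below are all made-up): 3D points recovered from the predicted heights are projected into the image plane under the sampled camera pose, after which the projected colors can be compared against the training image.

```python
import numpy as np

K = np.array([[100.0,   0.0, 64.0],   # hypothetical camera intrinsics
              [  0.0, 100.0, 64.0],
              [  0.0,   0.0,  1.0]])
R = np.eye(3)                          # camera sampling pose: rotation
t = np.array([0.0, 0.0, 5.0])          # and translation (camera 5 m away)

def project(points_3d):
    """Project Nx3 world points into Nx2 pixel coordinates."""
    cam = points_3d @ R.T + t          # world -> camera frame
    uvw = cam @ K.T                    # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# A sampling point (x, y) with its predicted height forms a 3D point.
pts = np.array([[0.5, -0.5, 0.1]])
print(project(pts))
```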
7. The method of claim 1, wherein the acquiring the two-dimensional coordinates of the sampling point in the target ground scene image comprises:
grid division is carried out on the plane where the two-dimensional coordinate system is located;
taking any vertex of the grid obtained after division as a sampling point and acquiring the two-dimensional coordinates of the sampling point.
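The grid-based sampling of claim 7 can be sketched as follows (the extent and step size are arbitrary): the plane of the two-dimensional coordinate system is divided into a regular grid, and every grid vertex is taken as a sampling point.

```python
import numpy as np

def grid_sampling_points(x_min, x_max, y_min, y_max, step):
    """Return the 2D coordinates of all vertices of a regular grid as (N, 2)."""
    xs = np.arange(x_min, x_max + step, step)
    ys = np.arange(y_min, y_max + step, step)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

pts = grid_sampling_points(0.0, 1.0, 0.0, 1.0, 0.5)
print(pts.shape)  # (9, 2): a 3x3 grid of vertices
```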
8. A ground scene three-dimensional reconstruction method, comprising:
acquiring color information and map data of a target ground scene using the map data acquisition method according to any one of claims 1 to 7;
and carrying out three-dimensional reconstruction of the target ground scene based on the color information and the map data.
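The reconstruction step of claim 8 can be sketched by lifting every sampling point to 3D with its predicted height and attaching its color, yielding a colored point set (all arrays below are toy data; meshing the points into surfaces is omitted):

```python
import numpy as np

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # sampling points
heights = np.array([0.1, 0.2, 0.15])                 # map data: height
colors = np.array([[0.9, 0.1, 0.1],                  # per-point RGB color
                   [0.1, 0.9, 0.1],
                   [0.1, 0.1, 0.9]])

# Lift each 2D sampling point to a colored 3D vertex (x, y, height).
vertices = np.column_stack([xy, heights])
colored_cloud = np.hstack([vertices, colors])        # (N, 6): xyz + rgb
print(colored_cloud.shape)  # (3, 6)
```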
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the map data acquisition method according to any one of claims 1 to 7 or the ground scene three-dimensional reconstruction method according to claim 8.
10. An electronic device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory has stored therein a computer program which, when executed by the at least one processor, implements the map data acquisition method of any one of claims 1 to 7 or the ground scene three-dimensional reconstruction method of claim 8.
CN202311369249.5A 2023-10-23 2023-10-23 Map data acquisition method, computer readable storage medium and electronic device Active CN117132727B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311369249.5A CN117132727B (en) 2023-10-23 2023-10-23 Map data acquisition method, computer readable storage medium and electronic device


Publications (2)

Publication Number Publication Date
CN117132727A CN117132727A (en) 2023-11-28
CN117132727B true CN117132727B (en) 2024-02-06

Family

ID=88863028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311369249.5A Active CN117132727B (en) 2023-10-23 2023-10-23 Map data acquisition method, computer readable storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN117132727B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340939A (en) * 2020-02-21 2020-06-26 广东工业大学 Indoor three-dimensional semantic map construction method
CN111462324A (en) * 2020-05-18 2020-07-28 南京大学 Online spatiotemporal semantic fusion method and system
CN115937442A (en) * 2022-11-17 2023-04-07 安徽蔚来智驾科技有限公司 Road surface reconstruction method based on implicit neural expression, vehicle and storage medium
CN116295457A (en) * 2022-12-21 2023-06-23 辉羲智能科技(上海)有限公司 Vehicle vision positioning method and system based on two-dimensional semantic map
CN116797742A (en) * 2023-07-26 2023-09-22 重庆大学 Three-dimensional reconstruction method and system for indoor scene

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264572B (en) * 2019-06-21 2021-07-30 哈尔滨工业大学 Terrain modeling method and system integrating geometric characteristics and mechanical characteristics
US11798225B2 (en) * 2021-08-11 2023-10-24 Here Global B.V. 3D building generation using topology



Similar Documents

Publication Publication Date Title
CN108230235B (en) Disparity map generation system, method and storage medium
CN111275633A (en) Point cloud denoising method, system and device based on image segmentation and storage medium
CN111598993A (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
KR102215101B1 (en) Method and Apparatus for Generating Point Cloud Using Feature of Object Acquired from Image
CN114758337B (en) Semantic instance reconstruction method, device, equipment and medium
CN110930453B (en) Target object positioning method, target object positioning device and readable storage medium
CN110428490A (en) The method and apparatus for constructing model
KR102223484B1 (en) System and method for 3D model generation of cut slopes without vegetation
CN112132770A (en) Image restoration method and device, computer readable medium and electronic equipment
CN108734773A (en) A kind of three-dimensional rebuilding method and system for mixing picture
CN106023147A (en) GPU-based linear array remote sensing image DSM rapid extraction method
CN113421217A (en) Method and device for detecting travelable area
CN117036571B (en) Image data generation, visual algorithm model training and evaluation method and device
CN117132727B (en) Map data acquisition method, computer readable storage medium and electronic device
CN110738677A (en) Full-definition imaging method and device for camera and electronic equipment
CN116962612A (en) Video processing method, device, equipment and storage medium applied to simulation system
CN116092035A (en) Lane line detection method, lane line detection device, computer equipment and storage medium
Dam et al. Terrain generation based on real world locations for military training and simulation
Zhu et al. Toward the ghosting phenomenon in a stereo-based map with a collaborative RGB-D repair
KR102648938B1 (en) Method and apparatus for 3D image reconstruction based on few-shot neural radiance fields using geometric consistency
CN117911603B (en) Partition NeRF three-dimensional reconstruction method, system and storage medium suitable for large-scale scene
CN117132744B (en) Virtual scene construction method, device, medium and electronic equipment
CN111010558B (en) Stumpage depth map generation method based on short video image
Chen et al. Fully in tensor computation manner: one‐shot dense 3D structured light and beyond
Paudel et al. Optimal transformation estimation with semantic cues

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant