CN115937461A - Multi-source fusion model construction and texture generation method, device, medium and equipment - Google Patents

Info

Publication number
CN115937461A
CN115937461A (application number CN202211436344.8A; granted publication CN115937461B)
Authority
CN
China
Prior art keywords
model
building
point cloud
partition
texture
Prior art date
Legal status
Granted
Application number
CN202211436344.8A
Other languages
Chinese (zh)
Other versions
CN115937461B (en)
Inventor
刘俊伟
程文胜
刘路
Current Assignee
Terry Digital Technology Beijing Co ltd
Original Assignee
Terry Digital Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Terry Digital Technology Beijing Co ltd
Priority to CN202211436344.8A
Publication of CN115937461A
Application granted
Publication of CN115937461B
Legal status: Active

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Generation (AREA)

Abstract

The invention provides a method, a device, a medium, and equipment for multi-source fusion model construction and texture generation. The method segments building instances from a laser point cloud and fits them into a building outer surface model; at the same time, the acquisition area is divided into grids according to the multi-view images and a scene representation is learned for each grid; texture images are then rendered from the scene representations for the corresponding local partition models. The method fully automates model fitting, generates texture pictures automatically, and yields smoother generated textures.

Description

Multi-source fusion model construction and texture generation method, device, medium and equipment
Technical Field
The invention relates to the technical field of three-dimensional modeling, and in particular to a method, device, medium, and equipment for multi-source fusion model construction and texture generation.
Background
Three-dimensional modeling uses 3D production software to construct a model carrying three-dimensional data in a virtual three-dimensional space. Models range from simple geometric shapes to complex character models, and applications range from static single-product display to dynamic, complex scenes: film animation, game design, industrial design, architectural design, interior design, product design, landscape design, and so on all require three-dimensional modeling.
The traditional modeling scheme first collects data and generates a TDOM (True Digital Orthophoto Map) through aerial triangulation, then manually draws a per-building DLG (Digital Line Graphic) and generates a building surface model by combining it with the point cloud model. Texture pictures are produced by manually selecting photographs and mapping them with related software. However, drawing a per-building DLG requires exporting various intermediate data formats to assist the work, so the manual workload is large and the efficiency is very low; manually selecting texture pictures and performing the mapping is time-consuming and labor-intensive, and because of photographing angles it is sometimes very difficult to find a suitable picture.
Disclosure of Invention
In view of the above, the present invention proposes a multi-source fusion model construction and texture generation method, apparatus, medium, and device that overcome or at least partially solve the above problems.
The invention provides a multi-source fusion model construction and texture generation method, which comprises the following steps:
collecting laser point cloud data and multi-view image data of a target acquisition area;
segmenting the laser point cloud data with a point cloud instance segmentation neural network to generate building point cloud instances, and performing surface fitting on the building point cloud instances to generate a building outer surface model;
dividing the target acquisition area into a plurality of partition grids, and learning a scene representation corresponding to each partition grid;
cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model;
and storing the texture images, updating the texture coordinates into the model attributes of the corresponding local partition models, and merging the plurality of local partition models to obtain the final textured building model.
Optionally, segmenting the laser point cloud data with the point cloud instance segmentation neural network to generate building point cloud instances includes:
generating a corresponding colored dense point cloud from the laser point cloud data, and recording the position, intensity and color information of each point in the point cloud;
downsampling the colored dense point cloud into a sparse point cloud;
and performing recognition and inference on the sparse point cloud with a PointNet++ instance segmentation network model to segment building point cloud instances, obtaining a building point cloud instance segmentation result comprising a plurality of building point cloud instances.
Optionally, performing surface fitting on the building point cloud instances to generate the building outer surface model includes:
extracting each independent building point cloud instance, and separating the point set of each surface of the building with a density-based clustering algorithm that handles noise;
and performing constrained surface fitting on the point sets of the surfaces of the building to generate a vector building outer surface model.
Optionally, dividing the target acquisition area into a plurality of partition grids and learning the scene representation corresponding to each partition grid includes:
uniformly dividing the target acquisition area into grids of fixed size according to the image resolution and the bounding-box (bbox) geographic coordinate range of the actual sampling area, and merging grids to obtain a plurality of partition grids;
obtaining a partition image set corresponding to each partition grid from the multi-view image data;
and learning the scene representation corresponding to each grid from the partition images.
Optionally, learning the scene representation corresponding to each grid from the partition images includes:
following a Block-NeRF neural network algorithm, generating a data set from the partition images corresponding to a single partition grid, taking camera position and angle information as network input and images as output labels, training the Block-NeRF neural network, and taking the trained neural network model as the scene representation of that grid;
each partition scene image set thus trains one neural network model that represents the color distribution of the whole scene.
Optionally, cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model, includes:
cutting the local partition model out of the building outer surface model according to the coordinates of each partition grid to obtain partition model instances;
and, according to the positions and normal directions of the constituent vector surfaces of a partition model instance, computing the imaging pixel value of each pixel from the scene representation neural network model at a given constraint distance, rendering an orthoimage of the same size as the vector surface, storing the orthoimage as a texture map image, and recording the membership between the image and the constituent surface of the building instance.
Optionally, merging the plurality of local partition models to obtain the final textured building model includes:
obtaining the plurality of local partition models corresponding to the partition grids together with the corresponding texture map images, and merging the local partition models to obtain the final textured building model.
The invention also provides a multi-source fusion model construction and texture generation apparatus, comprising one or more processors and a non-transitory computer-readable storage medium storing program instructions which, when executed by the one or more processors, cause the one or more processors to implement the multi-source fusion model construction and texture generation method according to any of the above.
The invention also provides a computer-readable storage medium storing program code for executing any of the above multi-source fusion model construction and texture generation methods.
The invention also provides a computing device comprising a processor and a memory: the memory is used to store program code and transmit it to the processor; the processor is used to execute any of the above multi-source fusion model construction and texture generation methods according to instructions in the program code.
The invention thus provides a method, device, medium, and equipment for multi-source fusion model construction and texture generation that segment building instances from the point cloud to obtain a building outer surface model, divide the area into grids according to the multi-view images and learn the corresponding scene representations, and thereby fully automate model fitting, generate texture pictures automatically, and produce smoother generated textures.
The above description is only an overview of the technical solutions of the present invention. In order that the technical means of the invention may be understood more clearly and implemented in accordance with this description, and that the above and other objects, features, and advantages of the invention may become more apparent to those skilled in the art, specific embodiments of the invention are set forth below in conjunction with the accompanying drawings.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a multi-source fusion model construction and texture generation method according to an embodiment of the invention;
fig. 2 is a flow chart of a multi-source fusion model construction and texture generation method according to another embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a schematic flow diagram of a multi-source fusion model construction and texture generation method according to an embodiment of the present invention. As shown in Fig. 1, the method may include at least the following steps S1 to S5.
S1, collecting laser point cloud data and multi-view image data of a target acquisition area. In the embodiment of the invention, the laser point cloud data and the multi-view image data may be collected with a lidar and a multi-view camera respectively, and the target acquisition area may be any specified geographic area.
S2, segmenting the laser point cloud data with a point cloud instance segmentation neural network to generate building point cloud instances, and performing surface fitting on the building point cloud instances to generate a building outer surface model. The point cloud instance segmentation neural network may be a neural network created and trained in advance, which obtains the corresponding building point cloud instances from the input point cloud data.
S3, dividing the target acquisition area into a plurality of partition grids, and learning the scene representation corresponding to each partition grid. The size of the partition grids may be set according to different requirements, which is not limited in this embodiment.
S4, cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model;
S5, storing the texture images, updating the texture coordinates into the model attributes of the corresponding local partition models, and merging the plurality of local partition models to obtain the final textured building model.
The method of this embodiment segments the building instances from the point cloud to obtain a building outer surface model; at the same time, the area is divided into grids according to the multi-view images and the respective scene representations are learned; cutting local partition models out of the building outer surface model according to grid positions fully automates model fitting. In addition, texture images are rendered from the scene representations according to the partition models and are generated automatically, after which they are stored and the texture coordinates are updated into the model attributes. The multi-source fusion model construction and texture generation method of this embodiment is further described below with reference to Fig. 2.
S1, collecting laser point cloud data and multi-view image data of the target acquisition area.
During data acquisition, this embodiment may mount a multi-view camera on an unmanned aerial vehicle or an unmanned ground vehicle, plan the trajectory with parameters such as 80% forward (along-track) overlap and 50% side overlap for the nadir images, acquire images at 0.05 m resolution as the multi-view image data, and record camera positioning parameters at the same time. A lidar may likewise be carried by the unmanned aerial vehicle or unmanned vehicle to collect the laser point cloud data over the preset area.
S2, segmenting the laser point cloud data with the point cloud instance segmentation neural network to generate building point cloud instances, and performing surface fitting on the building point cloud instances to generate a building outer surface model.
S2-1, generating a corresponding colored dense point cloud from the laser point cloud data, and recording the position, intensity, and color information of each point. When generating the colored point cloud, the laser point cloud data collected by the lidar can be imported into point cloud computing software to generate the colored dense point cloud; each point records [x, y, z, i, r, g, b] (position, intensity, color) and the result is stored as a .las file.
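As a concrete illustration of this storage step, the following minimal Python sketch writes such a colored point cloud with the laspy library (an assumption; the patent only says "point cloud computing software"), given the per-point arrays as NumPy data:

    import numpy as np
    import laspy

    def save_colored_point_cloud(xyz, intensity, rgb, path="dense_cloud.las"):
        """Store [x, y, z, i, r, g, b] per point in a .las file."""
        header = laspy.LasHeader(point_format=3, version="1.2")  # format 3 carries RGB
        header.offsets = xyz.min(axis=0)
        header.scales = np.array([0.01, 0.01, 0.01])  # centimeter precision
        las = laspy.LasData(header)
        las.x, las.y, las.z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        las.intensity = intensity.astype(np.uint16)
        # LAS stores color as 16-bit channels, so 8-bit RGB is scaled by 257.
        las.red, las.green, las.blue = (rgb * 257).astype(np.uint16).T
        las.write(path)

    # Example with synthetic points:
    save_colored_point_cloud(np.random.rand(1000, 3) * 100.0,
                             np.ones(1000),
                             np.random.randint(0, 256, (1000, 3)))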
S2-2, downsampling the colored dense point cloud into a sparse point cloud. The dense point cloud can be preprocessed by roughly separating the point clouds of different planes with a density-based clustering algorithm that handles noise, then sampling each category at an equal ratio, thereby thinning the dense point cloud into a sparse point cloud.
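As an illustration, the sketch below assumes the "noise density clustering algorithm" is a DBSCAN-style density clustering (the patent names no specific algorithm) and uses scikit-learn; the clustering parameters and sampling ratio are placeholders:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def sparsify(points, eps=0.5, min_samples=10, ratio=0.1, seed=0):
        """Roughly separate planes by density clustering, then sample every
        cluster (including the noise label -1) at the same ratio."""
        rng = np.random.default_rng(seed)
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :3])
        kept = []
        for lbl in np.unique(labels):
            idx = np.flatnonzero(labels == lbl)
            n_keep = max(1, int(len(idx) * ratio))
            kept.append(rng.choice(idx, size=n_keep, replace=False))
        return points[np.concatenate(kept)]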
S2-3, performing recognition and inference on the sparse point cloud with a PointNet++ instance segmentation network model to segment building point cloud instances and obtain a building point cloud instance segmentation result comprising a plurality of building point cloud instances.
In this embodiment, the point cloud may be partitioned on the [x, y] plane according to a coordinate partition grid, with each point cloud block stored as an independent file and the number of points interpolated or decimated to 20480; the intensity i of interpolated points is set to 0, while that of the other, normal points is 1.
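A minimal sketch of this fixed-size resampling follows; the midpoint interpolation used to reach the target count is an assumption, since the patent does not specify how points are interpolated:

    import numpy as np

    def resample_block(xyz, target=20480, seed=0):
        """Decimate or interpolate a block to exactly `target` points; the
        returned intensity is 1 for original points and 0 for interpolated ones."""
        rng = np.random.default_rng(seed)
        n = len(xyz)
        if n >= target:  # decimate with a random subset
            idx = rng.choice(n, size=target, replace=False)
            return xyz[idx], np.ones(target)
        extra = target - n  # interpolate midpoints of random point pairs
        a, b = rng.integers(0, n, extra), rng.integers(0, n, extra)
        new_pts = (xyz[a] + xyz[b]) / 2.0
        return np.vstack([xyz, new_pts]), np.concatenate([np.ones(n), np.zeros(extra)])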
Part of the point cloud is labeled as a data set: each point is tagged with the number of the building it belongs to (the number is assigned randomly, but consistently within one building), or -1 if it belongs to no building. A PointNet++-based instance segmentation network is then trained on this data, and the trained neural network model serves as the instance segmentation network model.
The sparse point cloud extracted in step S2-2 can be fed into the instance segmentation network model, which recognizes and infers on the unlabeled point cloud to obtain a building point cloud instance segmentation result comprising a plurality of building point cloud instances. During recognition, to ensure that buildings cut apart at block borders are still recovered completely, a nine-square-grid (3×3) overlapping sampling scheme is used: the 3×3 neighborhood of grid blocks around each block is merged in turn into a point cloud block covering a 1500 m × 1500 m area, the instance segmentation network model is loaded to perform instance segmentation on it, non-maximum suppression is applied to the [x, y]-plane projection areas of the results, repeatedly recognized buildings are merged, broken partial recognitions are removed, and the instance segmentation result of the building point cloud is generated.
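The deduplication step can be sketched as non-maximum suppression over the instances' [x, y]-plane projection rectangles; the confidence scores (e.g. instance point counts) and the IoU threshold are assumptions, since the patent only states that repeatedly recognized buildings are merged:

    import numpy as np

    def iou_xy(b1, b2):
        """IoU of two axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
        ix = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
        iy = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
        inter = ix * iy
        a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
        a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
        return inter / (a1 + a2 - inter + 1e-9)

    def nms_instances(boxes, scores, thresh=0.5):
        """Keep the highest-scoring instance among mutually overlapping ones."""
        order = np.argsort(scores)[::-1]
        keep = []
        for i in order:
            if all(iou_xy(boxes[i], boxes[j]) < thresh for j in keep):
                keep.append(i)
        return keep  # indices of retained building instances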
Further, performing surface fitting on the building point cloud instances in step S2 to generate the building outer surface model may include: extracting each independent building point cloud instance and separating the point set of each surface of the building with the density-based clustering algorithm; then performing constrained surface fitting on the point sets of the surfaces to generate a vector building outer surface model.
Point cloud instance segmentation classifies each building's point cloud independently, and the points of a single building are extracted according to the category attribute from the instance segmentation. When fitting the surfaces of a building point cloud instance, the independent building point cloud is first extracted from the instance segmentation result. Given the characteristics of artificial buildings, which generally consist of four or more side facades plus one or more roof surfaces, the point set of each surface is separated with the density-based clustering algorithm. According to the constrained number of surfaces, for example 4 facades and 1 (flat) roof, each point set is fitted by least squares to construct a building model vector surface. If the angle between a surface normal and the horizontal plane is within 10 degrees, the minimum bounding box of the surface's projection onto the horizontal plane is taken as a box frame, the shortest edge of that box is taken as the horizontal projection line of a calibrated vector surface, and a calibrated vector surface perpendicular to the horizontal plane is drawn with the height of the original surface. The vertically calibrated surfaces are then extended pairwise by one third of their length along the horizontal direction, and their intersection lines are taken as edges to rebuild the vector surfaces; surfaces whose normals deviate by more than 10 degrees are extended by one third of their length along all four edges, and their intersection lines with the other surfaces are taken as edges to rebuild them, forming the final building surface model.
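The per-surface fitting and the 10-degree facade test can be sketched as follows; the SVD-based least-squares plane fit is a standard choice consistent with the least-squares fitting named above:

    import numpy as np

    def fit_plane(points):
        """Least-squares plane through a point set: returns (centroid, unit normal)."""
        centroid = points.mean(axis=0)
        _, _, vt = np.linalg.svd(points - centroid)
        return centroid, vt[-1]  # singular vector of the smallest singular value

    def needs_vertical_calibration(normal, tol_deg=10.0):
        """True when the surface normal lies within tol_deg of the horizontal
        plane, i.e. the surface is a near-vertical facade that should be
        re-drawn perpendicular to the ground as described above."""
        angle = np.degrees(np.arcsin(abs(normal[2]) / np.linalg.norm(normal)))
        return angle <= tol_deg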
S3, dividing the target acquisition area into a plurality of partition grids, and learning the scene representation corresponding to each partition grid. In some embodiments, this step may further comprise:
and S3-1, uniformly dividing the target acquisition area into grids with fixed sizes according to the image resolution and the bbox geographic coordinate range of the actual sampling area, and merging the grids to obtain a plurality of partitioned grids. When the coordinate grid is partitioned, the grid can be uniformly partitioned into grids according to the image resolution and the bbox geographic coordinate range of the actual sampling area and according to 500mx500m, and after the grid which does not intersect with the sampling area is deleted, grid row and column numbers and corresponding vertex geographic coordinates are recorded.
S3-2, obtaining the partition image set corresponding to each partition grid from the multi-view image data.
Using the nine-square-grid overlapping sampling scheme, the 3×3 grid blocks around each block are merged in turn according to the position parameters recorded with the images and the partition grid coordinates; the original images are screened by the merged block, stored separately as the partition scene image set, and the coordinate information of the grid and the merged block is recorded at the same time.
S3-3, learning the scene representation corresponding to each grid from the partition images.
Following the Block-NeRF neural network algorithm, a data set is generated from the partition images corresponding to a single partition grid; the camera position and angle information is taken as network input and the images as output labels, the Block-NeRF network is trained, and the trained neural network model is the scene representation of that grid. Each partition scene image set thus trains one neural network model that can represent the color distribution of the whole scene.
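Block-NeRF itself is considerably more elaborate (appearance embeddings, exposure conditioning, per-block visibility networks), so the PyTorch sketch below only illustrates the core mapping such a scene representation learns, from an encoded 3D position and view direction to volume density and color; the layer sizes and frequency count are assumptions:

    import torch
    import torch.nn as nn

    def positional_encoding(x, n_freqs=6):
        """Lift each coordinate to [x, sin(2^k x), cos(2^k x)] features."""
        feats = [x]
        for k in range(n_freqs):
            feats += [torch.sin((2.0 ** k) * x), torch.cos((2.0 ** k) * x)]
        return torch.cat(feats, dim=-1)

    class SceneRepresentation(nn.Module):
        def __init__(self, n_freqs=6, hidden=128):
            super().__init__()
            enc_dim = 3 * (1 + 2 * n_freqs)
            self.n_freqs = n_freqs
            self.trunk = nn.Sequential(
                nn.Linear(enc_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU())
            self.sigma_head = nn.Linear(hidden, 1)   # volume density
            self.rgb_head = nn.Sequential(            # view-dependent color
                nn.Linear(hidden + enc_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 3), nn.Sigmoid())

        def forward(self, pos, view_dir):
            h = self.trunk(positional_encoding(pos, self.n_freqs))
            sigma = torch.relu(self.sigma_head(h))
            rgb = self.rgb_head(
                torch.cat([h, positional_encoding(view_dir, self.n_freqs)], -1))
            return sigma, rgb

Training would minimize the photometric error between pixels volume-rendered from this model along camera rays and the captured partition images.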
S4, cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model. This step may specifically include:
and S4-1, intercepting the local partition model corresponding to the building outer surface model according to the coordinates of each partition grid to obtain a partition model example. Wherein, when the local partition mode is determined, the smallest projection bounding box of the building model on the horizontal plane can be used as the position marking frame of the model, the grid marking frame is generated according to the partition grid coordinates, if the position marking frame of the building model is intersected with the grid marking frame, the building model belongs to the local partition model of the grid,
when extracting the partition model example, combining the 3x3 grid blocks around each grid block in sequence, screening the building surface model according to the coordinates of the combined grid blocks, selecting the partition model example only when the [ x, y ] plane projection bbox of the building surface model is completely in the combined grid blocks, and otherwise, abandoning to ensure that the complete model and texture are obtained.
S4-2, according to the positions and normal directions of the constituent vector surfaces of a partition model instance, computing the imaging pixel value of each pixel from the scene representation neural network model at a given constraint distance, rendering an orthoimage of the same size as the vector surface, storing it as a texture map image, and recording the membership between the image and the constituent surface of the building instance. For example, a vector surface of a partition model instance is selected and translated, along its normal direction, to a position 5 meters away from the model instance to generate a rendering surface. The rendering surface is divided into pixel cells at 0.05 m resolution, the center coordinate of each pixel cell is computed, the center coordinate and the pixel's normal direction are input into the scene representation neural network model, the RGB value of the pixel cell is generated by volume density rendering, and the RGB values of all pixel cells are encoded into a three-dimensional matrix sequence, i.e. the orthoimage of the vector surface. The orthoimage is stored as a texture image named by building instance number, vector surface number, and scene representation network number, which completes one rendering pass for the vector surface. Rendering the constituent surfaces of each building instance one by one in this way yields the texture maps of the building models.
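Under the parameters quoted above (5 m offset, 0.05 m pixels), the rendering of one vector surface can be sketched as below; `scene_model` is assumed to be the trained per-grid network from step S3-3, and a single query per pixel along the reversed normal stands in for full volume-density ray rendering:

    import torch

    def render_face_texture(scene_model, origin, u_axis, v_axis, normal,
                            width_m, height_m, offset=5.0, gsd=0.05):
        """origin: one corner of the vector surface; u_axis/v_axis: unit vectors
        spanning the surface; normal: unit outward normal. Returns (H, W, 3)."""
        W, H = int(width_m / gsd), int(height_m / gsd)
        us = (torch.arange(W) + 0.5) * gsd
        vs = (torch.arange(H) + 0.5) * gsd
        vv, uu = torch.meshgrid(vs, us, indexing="ij")
        centers = (origin + offset * normal
                   + uu[..., None] * u_axis + vv[..., None] * v_axis)
        dirs = (-normal).expand(H, W, 3)  # look back at the surface
        with torch.no_grad():
            _, rgb = scene_model(centers.reshape(-1, 3), dirs.reshape(-1, 3))
        return rgb.reshape(H, W, 3)  # stored as the face's texture map image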
S5, storing the texture images, updating the texture coordinates into the model attributes of the corresponding local partition models, and merging the plurality of local partition models to obtain the final textured building model. If a vector surface of a building instance has been rendered into several texture images by the scene representation networks of several local partitions, the texture image from the scene representation network whose grid center is closest, on the [x, y] plane, to the projection of that surface is selected as the surface's texture image. The texture images of all constituent surfaces of the building instance are determined in turn; the building instance is then UV-unwrapped, the texture maps are composited according to the vector surface numbers and the UV-unwrap positions of the constituent surfaces' texture images to generate texture UV coordinates, and the texture UV coordinates and the name of the map file are written into the building instance's model point sequence, completing the texture mapping of the instance. After each model instance is texture-mapped in turn, the model instances of the plurality of local partition models are merged by number into the final textured building model.
When updating the texture model, because of the nine-square-grid overlap sampling, the same building instance can appear in different blocks and can therefore be rendered with maps from different scene representation neural network models. The method uses the distance from the [x, y]-plane projection of each constituent surface of the building instance to the nine-grid centers as the screening criterion and selects, as the texture map, the rendered image from the scene representation neural network model of the closest center block. The texture map coordinates are then converted into texture UV coordinates, and the map storage path and coordinate values are recorded in the surface model data according to the map rendering rule.
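This screening rule reduces to a nearest-center selection, sketched below; the candidate list pairs each rendering network's nine-grid center with the texture it produced for the same vector surface:

    import numpy as np

    def select_texture(face_xy_centroid, candidates):
        """candidates: list of (grid_center_xy, texture_image) pairs rendered by
        different scene representation networks for one vector surface."""
        dists = [np.linalg.norm(np.asarray(face_xy_centroid) - np.asarray(c))
                 for c, _ in candidates]
        return candidates[int(np.argmin(dists))][1]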
From the above, a plurality of partition networks and the corresponding local partition models can be obtained, so the local partition models corresponding to each partition grid and their texture map images are collected, and the final textured building model is obtained after merging the local partition models. That is, after each building instance is mapped, the partition models are merged together and exported as a CityGML model file containing both models and textures.
The method of this embodiment uses deep learning to automatically extract the building model structure from multi-source data and generate texture pictures. A lidar collects the point cloud and a multi-view camera collects the images as source data, with camera parameters recorded during acquisition. On one hand, the lidar data are processed into a colored point cloud model, the buildings in the point cloud are segmented individually by deep-learning point cloud instance segmentation to extract each independent building's points, and a post-processing step fits these points into a building outer surface model structure. On the other hand, after the multi-view images are cut into blocks, each local block learns a Block-NeRF scene model structure, so that the color of any location can be rendered and determined with a density function. Finally, the building surface model is deconstructed into independent hyperplanes; each hyperplane is rendered from the scene model structure at a given distance, with the hyperplane normal as the ray direction, to generate texture colors, and the texture of the building surface model is produced after texture merging and texture coordinate conversion. In this embodiment, point cloud instance segmentation directly separates the point cloud of each single building, which is then fitted, so the building surface model is generated automatically.
The embodiment of the invention also provides a multi-source fusion model construction and texture generation apparatus, comprising one or more processors and a non-transitory computer-readable storage medium storing program instructions which, when executed by the one or more processors, cause the one or more processors to implement the multi-source fusion model construction and texture generation method according to the above embodiment.
The embodiment of the invention also provides a computer-readable storage medium storing program code for executing the above multi-source fusion model construction and texture generation method.
The embodiment of the invention further provides a computing device comprising a processor and a memory: the memory is used to store program code and transmit it to the processor; the processor is used to execute the above multi-source fusion model construction and texture generation method according to instructions in the program code.
It is clear to those skilled in the art that the specific working processes of the above-described systems, devices, modules and units may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, further description is omitted here.
In addition, the functional units in the embodiments of the present invention may be physically independent of each other, two or more functional units may be integrated together, or all the functional units may be integrated in one processing unit. The integrated functional unit may be implemented in the form of hardware, or may also be implemented in the form of software or firmware.
Those of ordinary skill in the art will understand that, if the integrated functional units are implemented in software and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied as a software product stored in a storage medium, which includes instructions causing a computing device (e.g., a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Alternatively, all or part of the steps of the foregoing method embodiments may be implemented by hardware (such as a computing device, e.g., a personal computer, a server, or a network device) under the control of program instructions, which may be stored in a computer-readable storage medium; when the program instructions are executed by a processor of the computing device, the computing device executes all or part of the steps of the method according to the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present invention; such modifications or substitutions do not depart from the scope of the present invention.

Claims (10)

1. A multi-source fusion model construction and texture generation method is characterized by comprising the following steps:
collecting laser point cloud data and multi-view image data of a target acquisition area;
segmenting the laser point cloud data with a point cloud instance segmentation neural network to generate building point cloud instances, and performing surface fitting on the building point cloud instances to generate a building outer surface model;
dividing the target acquisition area into a plurality of partition grids, and learning a scene representation corresponding to each partition grid;
cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model;
and storing the texture images, updating the texture coordinates into the model attributes of the corresponding local partition models, and merging the plurality of local partition models to obtain the final textured building model.
2. The method of claim 1, wherein segmenting the laser point cloud data with the point cloud instance segmentation neural network to generate building point cloud instances comprises:
generating a corresponding colored dense point cloud from the laser point cloud data, and recording the position, intensity and color information of each point in the point cloud;
downsampling the colored dense point cloud into a sparse point cloud;
and performing recognition and inference on the sparse point cloud with a PointNet++ instance segmentation network model to segment building point cloud instances and obtain a building point cloud instance segmentation result comprising a plurality of building point cloud instances.
3. The method of claim 2, wherein performing surface fitting on the building point cloud instances to generate the building outer surface model comprises:
extracting each independent building point cloud instance, and separating the point set of each surface of the building with a density-based clustering algorithm that handles noise;
and performing constrained surface fitting on the point sets of the surfaces of the building to generate a vector building outer surface model.
4. The method of claim 2, wherein dividing the target acquisition area into a plurality of partition grids and learning the scene representation corresponding to each partition grid comprises:
uniformly dividing the target acquisition area into grids of fixed size according to the image resolution and the bbox geographic coordinate range of the actual sampling area, and merging grids to obtain a plurality of partition grids;
obtaining a partition image set corresponding to each partition grid from the multi-view image data;
and learning the scene representation corresponding to each grid from the partition images.
5. The method of claim 4, wherein learning the scene representation corresponding to each grid from the partition images comprises:
following a Block-NeRF neural network algorithm, generating a data set from the partition images corresponding to a single partition grid, taking camera position and angle information as network input and images as output labels, training the Block-NeRF neural network, and taking the trained neural network model as the scene representation of that grid;
and training one neural network model per partition scene image set to represent the color distribution of the whole scene.
6. The method of claim 4, wherein cutting the corresponding local partition model out of the building outer surface model according to the position of each partition grid, and rendering texture images from the corresponding scene representation according to the position and normal direction of each surface of the local partition model, comprises:
cutting the local partition model out of the building outer surface model according to the coordinates of each partition grid to obtain partition model instances;
and, according to the positions and normal directions of the constituent vector surfaces of a partition model instance, computing the imaging pixel value of each pixel from the scene representation neural network model at a given constraint distance, rendering an orthoimage of the same size as the vector surface, storing the orthoimage as a texture map image, and recording the membership between the image and the constituent surface of the building instance.
7. The method of claim 1, wherein merging the plurality of local partition models to obtain the final textured building model comprises:
obtaining the plurality of local partition models corresponding to the partition grids together with the corresponding texture map images, and merging the local partition models to obtain the final textured building model.
8. A multi-source fusion model construction and texture generation apparatus, comprising one or more processors and a non-transitory computer-readable storage medium storing program instructions which, when executed by the one or more processors, cause the one or more processors to implement the multi-source fusion model construction and texture generation method according to any one of claims 1-7.
9. A computer-readable storage medium storing program code for performing the multi-source fusion model construction and texture generation method according to any one of claims 1-7.
10. A computing device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is used for executing the multi-source fusion model construction and texture generation method of any one of claims 1-7 according to instructions in the program code.
CN202211436344.8A 2022-11-16 2022-11-16 Multi-source fusion model construction and texture generation method, device, medium and equipment Active CN115937461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211436344.8A CN115937461B (en) 2022-11-16 2022-11-16 Multi-source fusion model construction and texture generation method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN115937461A 2023-04-07
CN115937461B 2023-09-05

Family

ID=86554778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211436344.8A Active CN115937461B (en) 2022-11-16 2022-11-16 Multi-source fusion model construction and texture generation method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN115937461B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009743A (en) * 2019-02-22 2019-07-12 南京航空航天大学 A kind of grid surface method for reconstructing of scene understanding
CN112465976A (en) * 2020-12-14 2021-03-09 广州港数据科技有限公司 Storage yard three-dimensional map establishing method, inventory management method, equipment and medium
CN113066112A (en) * 2021-03-25 2021-07-02 泰瑞数创科技(北京)有限公司 Indoor and outdoor fusion method and device based on three-dimensional model data
CN113128405A (en) * 2021-04-20 2021-07-16 北京航空航天大学 Plant identification and model construction method combining semantic segmentation and point cloud processing
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
US11127223B1 (en) * 2020-10-16 2021-09-21 Splunkinc. Mesh updates via mesh splitting
WO2021232463A1 (en) * 2020-05-19 2021-11-25 北京数字绿土科技有限公司 Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium
CN114417489A (en) * 2022-03-30 2022-04-29 宝略科技(浙江)有限公司 Building base contour refinement extraction method based on real-scene three-dimensional model
CN114742968A (en) * 2022-06-13 2022-07-12 西南石油大学 Elevation map generation method based on building elevation point cloud

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Yu Zhang: "Multi-phenotypic parameters extraction and biomass estimation for lettuce based on point clouds", Measurement, pages 1-12 *
张亚: "Application of 3D laser scanning technology in building reconstruction", Henan Science and Technology, no. 05, pages 31-32 *
梁楚萍; 印杰; 伍静; 汪俊; 魏明强; 郭延文: "A survey of clustering analysis techniques in 3D mesh segmentation", Journal of Computer-Aided Design & Computer Graphics, no. 04, pages 171-183 *
蔺小虎; 姚顽强; 马润霞; 马飞; 张昆巍: "3D reconstruction of the Big Wild Goose Pagoda based on massive point cloud data", Sciences of Conservation and Archaeology, no. 03, pages 69-74 *
闫利; 陈长海; 费亮; 张奕戈: "Automatic generation of digital surface models from dense point clouds", Remote Sensing Information, no. 05, pages 5-11 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036570A (en) * 2023-05-06 2023-11-10 沛岱(宁波)汽车技术有限公司 Automatic generation method and system for 3D point cloud model mapping
CN117036570B (en) * 2023-05-06 2024-04-09 沛岱(宁波)汽车技术有限公司 Automatic generation method and system for 3D point cloud model mapping
CN116597063A (en) * 2023-07-19 2023-08-15 腾讯科技(深圳)有限公司 Picture rendering method, device, equipment and medium
CN116597063B (en) * 2023-07-19 2023-12-05 腾讯科技(深圳)有限公司 Picture rendering method, device, equipment and medium

Also Published As

Publication number Publication date
CN115937461B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN111461245B (en) Wheeled robot semantic mapping method and system fusing point cloud and image
CN108648269B (en) Method and system for singulating three-dimensional building models
CN104134234B (en) A kind of full automatic three-dimensional scene construction method based on single image
CN109883401B (en) Method and system for measuring visual field of city mountain watching
CN115937461B (en) Multi-source fusion model construction and texture generation method, device, medium and equipment
CN112633657B (en) Construction quality management method, device, equipment and storage medium
CN111784840B (en) LOD (line-of-sight) level three-dimensional data singulation method and system based on vector data automatic segmentation
CN112991534B (en) Indoor semantic map construction method and system based on multi-granularity object model
CN112307553A (en) Method for extracting and simplifying three-dimensional road model
CN115082254A (en) Lean control digital twin system of transformer substation
CN111273877B (en) Linkage display platform and linkage method for live-action three-dimensional data and two-dimensional grid picture
CN113724279A (en) System, method, equipment and storage medium for automatically dividing traffic cells into road networks
Zhang et al. A geometry and texture coupled flexible generalization of urban building models
CN112509110A (en) Automatic image data set acquisition and labeling framework for land confrontation intelligent agent
CN113838199B (en) Three-dimensional terrain generation method
CN112002019B (en) Method for simulating character shadow based on MR mixed reality
CN115033972A (en) Method and system for unitizing building main body structures in batches and readable storage medium
CN113838188A (en) Tree modeling method based on single image, tree modeling device and equipment
Habib et al. Integration of lidar and airborne imagery for realistic visualization of 3d urban environments
CN110599587A (en) 3D scene reconstruction technology based on single image
Pantazis et al. Are the morphing techniques useful for cartographic generalization?
US20230107740A1 (en) Methods and systems for automated three-dimensional object detection and extraction
Wang et al. 3D Reconstruction and Rendering Models in Urban Architectural Design Using Kalman Filter Correction Algorithm
Zhang Automatic extraction and quantitative analysis of building facade information at large scale using street-level images and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant