CN117830540A - Three-dimensional model construction method, device, equipment and storage medium - Google Patents
- Publication number: CN117830540A
- Application number: CN202311543810.7A
- Authority
- CN
- China
- Prior art keywords
- feature point
- target
- grid
- dimensional model
- plane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
Abstract
The application discloses a three-dimensional model construction method, apparatus, device, and storage medium, relating to the technical field of model construction. The method comprises the following steps: performing feature extraction on geospatial data of a target object to obtain at least one target feature point of the target object; performing multi-view image feature point matching on the at least one target feature point according to the geospatial data to obtain at least one grid plane; generating at least one image control distribution network from the grid plane based on multi-view image dense matching; and constructing a three-dimensional model of the target object according to the image control distribution network. By combining multi-view image feature point matching with multi-view image dense matching, feature points are extracted from the geospatial data and matched to generate the three-dimensional model, which improves both the construction accuracy and the construction efficiency of the three-dimensional model.
Description
Technical Field
The embodiments of the present application relate to the technical field of building models, in particular to the technical field of model construction, and specifically to a three-dimensional model construction method, apparatus, device, and storage medium.
Background
A building information model is built on the information data of a construction project: a digital three-dimensional model is established and then used for the design, construction, and operational management of the project. It is characterized by, among other properties, visualization, coordination, optimization, simulation, and comprehensive information.
Existing three-dimensional models are generally constructed by measuring the three-dimensional coordinates of the object with coordinate-measuring equipment. When a large-area model must be constructed, field measurement is required, and this is usually done by unmanned aerial vehicle (UAV) aerial photography. However, when a model is constructed from UAV aerial data, any feature-point error leads to a large error in the model.
Disclosure of Invention
The application provides a three-dimensional model construction method, a device, equipment and a storage medium, so as to improve the construction precision and construction efficiency of a three-dimensional model.
According to an aspect of the present application, there is provided a three-dimensional model construction method, including:
extracting features of geospatial data of a target object to obtain at least one target feature point of the target object;
performing multi-view image feature point matching on the at least one target feature point according to the geospatial data to obtain at least one grid plane;
generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode;
and constructing a three-dimensional model of the target object according to the image control distribution network.
According to another aspect of the present application, there is provided a three-dimensional model building apparatus including:
the feature point extraction module is used for performing feature extraction on the geospatial data of the target object to obtain at least one target feature point of the target object;
the feature point matching module is used for performing multi-view image feature point matching on the at least one target feature point according to the geospatial data to obtain at least one grid plane;
the distribution network generation module is used for generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode;
and the model construction module is used for constructing a three-dimensional model of the target object according to the image control distribution network.
According to another aspect of the present application, there is provided an electronic device including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the three-dimensional model building methods provided by the embodiments of the present application.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the three-dimensional model building methods provided by the embodiments of the present application.
According to the method, at least one target feature point of the target object is obtained by feature extraction from the geospatial data of the target object; multi-view image feature point matching is performed on the at least one target feature point according to the geospatial data to obtain at least one grid plane; at least one image control distribution network is generated from the grid plane based on multi-view image dense matching; and a three-dimensional model of the target object is constructed according to the image control distribution network. By combining multi-view image feature point matching with multi-view image dense matching, feature points are extracted from the geospatial data and matched to generate the three-dimensional model, which improves both the construction accuracy and the construction efficiency of the three-dimensional model.
Drawings
FIG. 1 is a flow chart of a three-dimensional model building method according to an embodiment of the present application;
FIG. 2 is a flow chart of a three-dimensional model building method according to a second embodiment of the present application;
fig. 3 is a schematic structural diagram of a three-dimensional model building apparatus according to a third embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device implementing the three-dimensional model building method according to the embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, in the technical solution of the present application, the collection, storage, use, processing, transmission, provision, and disclosure of the geospatial data and the target feature points comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
Example 1
Fig. 1 is a flowchart of a three-dimensional model building method according to an embodiment of the present application, where the embodiment is applicable to a case of performing model building using unmanned aerial vehicle aerial data, and may be implemented by a three-dimensional model building apparatus, which may be implemented in the form of hardware and/or software, and the three-dimensional model building apparatus may be configured in a computer device, for example, a server. As shown in fig. 1, the method includes:
s110, extracting features of the geospatial data of the target object to obtain at least one target feature point of the target object.
The target object is the object for which a three-dimensional model is to be constructed, and may include at least one of natural scenery, animals, plants, urban buildings, and the like. The geospatial data are data obtained through various measurement, remote sensing, and geographic information system technologies and used to establish the three-dimensional model; they may include at least one of the following: aerial image data captured at low altitude from multiple angles by a high-resolution aerial survey camera carried on an unmanned aerial vehicle, ground information pictures provided by local ground stations, geographic position information obtained through a global positioning system, and image control points measured with ground surveying instruments. The target feature points are salient and distinctive points in the geospatial data, and may include at least one of corner points, edge points, and the like.
Optionally, the geospatial data required for constructing a three-dimensional model of the target object are collected; feature extraction is then performed on the geospatial data based on a multi-view image feature point extraction algorithm to obtain at least one target feature point of the target object.
The multi-view image feature point extraction algorithm extracts salient, stable, and distinctive feature points from images captured from multiple viewing angles or by multiple cameras, and may include at least one of the scale-invariant feature transform (SIFT) method, the speeded-up robust features (SURF) method, and the like.
It should be noted that the target feature points may be obtained by convolving the image with a difference-of-Gaussians scale space; this is not specifically limited in the embodiments of the present application.
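As an illustrative sketch of the difference-of-Gaussians response just mentioned (not part of the patent text; the function name and sigma values are assumptions), candidate feature points appear as extrema of this response:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma=1.0, k=1.6):
    """Difference-of-Gaussians: subtract two Gaussian-blurred copies of
    the image. Extrema of this response (across positions and scales)
    are candidate feature points."""
    img = image.astype(float)
    g1 = gaussian_filter(img, sigma)       # fine scale
    g2 = gaussian_filter(img, k * sigma)   # coarse scale
    return g1 - g2
```

For example, for an image containing a single bright point, the response peaks at that point; in a full pipeline this would be computed at several `sigma` values to form the scale space.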
Optionally, determining at least one pixel point according to the geospatial data, and monitoring a gray value of the pixel point; and determining the target characteristic point according to the gray value.
Wherein, the pixel point refers to the smallest unit constituting the digital image in the geospatial data, and is used for representing a specific position in the digital image. The gray value refers to the brightness or gray level of each pixel in the digital image.
Specifically, at least one pixel is determined according to the geospatial data; based on a multi-view image feature point extraction algorithm and the Harris corner detection algorithm, the gray values of the pixels are monitored, and the target feature points are selected according to the gray values.
The Harris algorithm is a mathematical method for detecting corner points; it identifies target feature points in an image by computing the gray-level changes of local regions.
Optionally, determining the target feature point according to the gray value may comprise: for each pixel, determining the variation amplitude of its gray value in the two-dimensional directions; and, if the gray value variation amplitude is greater than a gray value change threshold, determining the pixel to be a target feature point.
Wherein, the two-dimensional direction refers to a direction in a two-dimensional space, and is used for describing the orientation of a certain point or a certain vector on a plane. The gray value variation range refers to the gray value variation degree in a pixel point neighborhood, and is used for judging a target feature point in an image. The gray value change threshold is set manually according to actual conditions or empirical values.
Specifically, for each pixel, the variation amplitude of its gray value in the two-dimensional directions and the corresponding covariance matrix are determined based on the Harris algorithm; the eigenvalues of the covariance matrix are computed; and if the eigenvalues are greater than the gray value change threshold, the pixel is determined to be a target feature point.
The covariance matrix is used for describing the distribution condition of the gray value variation amplitude in the image. The eigenvalues of the covariance matrix are used to describe the magnitude of the local gray value variation in the neighborhood of the pixel point.
It can be understood that the target characteristic points are determined by judging the gray value variation amplitude, so that the data volume is reduced, the calculation complexity is reduced, and the characteristic point extraction efficiency is improved.
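The gray-value-variation test described above is the classic Harris corner response. A minimal sketch follows (names and the constant k = 0.04 are conventional choices, not taken from the patent); the response is large only where the gray value changes sharply in both two-dimensional directions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner response: det(M) - k * trace(M)^2, where M is the
    locally smoothed covariance matrix of the image gradients. Large
    values mark pixels whose gray value varies strongly in both
    directions, i.e. candidate feature points."""
    img = image.astype(float)
    iy, ix = np.gradient(img)               # gray value change per direction
    sxx = gaussian_filter(ix * ix, sigma)   # covariance matrix elements,
    syy = gaussian_filter(iy * iy, sigma)   # averaged over a neighborhood
    sxy = gaussian_filter(ix * iy, sigma)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace ** 2

def corner_points(image, threshold):
    """Keep pixels whose response exceeds the change threshold."""
    r = harris_response(image)
    return np.argwhere(r > threshold)
```

On a synthetic image containing a single corner, the maximum response lands at that corner; in practice the threshold is set relative to the response maximum or from empirical values, as the patent notes.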
S120, performing multi-view image feature point matching on at least one target feature point according to the geospatial data to obtain at least one grid plane.
The multi-view image feature point matching means that matching and alignment between images are achieved by searching feature points corresponding to each other in a plurality of view angles or a plurality of images. Grid plane refers to a planar coordinate system used to represent the earth's surface coordinate location in a mapping and geographic information system.
Optionally, multi-view image feature point matching is performed on the at least one target feature point according to the geospatial data to obtain at least one grid cell, and the geographic coordinates of the grid cell are determined; at least one grid plane is then generated from the grid cells and their geographic coordinates.
The grid unit refers to a minimum unit used for dividing the map surface in a map drawing and geographic information system, and can be a square, rectangular or other shaped area, and is particularly used for dividing the map surface into a plurality of small blocks, and data storage, analysis and representation can be performed in each small block. The geographical coordinates of the grid cells refer to coordinates corresponding to the map projection system currently being employed.
It should be noted that, the map projection system currently adopted is set manually according to actual conditions or experience values.
Optionally, generating a virtual space plane according to the geospatial data; performing multi-view image feature point matching according to the virtual space plane and the target feature points, and determining a target grid unit; at least one mesh plane is generated from the target mesh unit.
The virtual space plane is a virtual plane used for representing the matching result of the image feature points. The target mesh unit refers to a mesh unit that matches the target feature point.
Specifically, according to the geospatial data, determining the coverage range of the image participating in the multi-view image feature point matching in the target object and the elevation range of the earth surface in the coverage range; establishing a virtual space plane in a three-dimensional space where a target object is located according to the coverage area; according to the elevation range, adjusting the virtual space plane; performing multi-view image feature point matching according to the adjusted virtual plane and the target feature point, and determining a target grid unit; at least one mesh plane is generated from the target mesh unit.
The coverage range refers to the coverage range of the image in the geographic space data in the three-dimensional space where the target object is located. Elevation range refers to the range of vertical elevation change of the earth's surface or terrain in a geospatial space.
Optionally, performing multi-view image feature point matching according to the virtual space plane and the target feature point, and determining the target grid unit may be determining an initial grid unit according to the virtual space plane; back projecting the target characteristic points onto a virtual space plane to obtain the number of the characteristic points in the initial grid unit; and determining the target grid unit according to the number of the characteristic points.
Wherein the initial grid cells refer to all grid cells of the virtual space plane. The number of feature points refers to the number of feature points within a grid cell.
Specifically, according to the elevation range, the virtual space plane is adjusted, and at least one initial grid unit on the adjusted virtual space plane is determined; back projecting the target characteristic points onto a virtual space plane to obtain the number of the characteristic points in the initial grid unit; and determining each initial grid unit as a target grid unit if the number of the characteristic points is greater than or equal to the threshold value of the number of the characteristic points.
It can be understood that by establishing the virtual space plane and performing multi-view image feature point matching according to the virtual space plane and the target feature points, the matching of the target feature points can be more accurate, and the construction precision of the three-dimensional model can be improved.
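The back-projection counting step above can be sketched as follows; the function name, the square-cell assumption, and the parameter layout are illustrative only and not specified by the patent:

```python
import numpy as np

def select_target_cells(points_xy, origin, cell_size, shape, min_count):
    """Back-project feature points onto the virtual space plane and keep
    the initial grid cells whose feature-point count reaches the
    feature-point-number threshold.

    points_xy : (N, 2) planar coordinates of back-projected feature points
    origin    : (x0, y0) lower-left corner of the virtual plane
    cell_size : side length of one (square) grid cell -- an assumption
    shape     : (rows, cols) of the initial grid
    min_count : feature-point count threshold
    """
    counts = np.zeros(shape, dtype=int)
    for x, y in points_xy:
        col = int((x - origin[0]) // cell_size)
        row = int((y - origin[1]) // cell_size)
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            counts[row, col] += 1            # count per initial grid cell
    target_cells = np.argwhere(counts >= min_count)
    return counts, target_cells
```

Cells below the threshold are discarded, so the resulting grid plane is built only from cells that are well supported by matched feature points.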
S130, generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode.
The multi-view image dense matching mode is a method of stereo measurement and three-dimensional reconstruction that uses image data from multiple viewing angles. The image control distribution network is a distribution network of image control points, that is, of ground points that can be accurately located in different images. It is a spatial relationship network formed by the image control points across images; this network makes it possible to determine the geometric relationships between different images and thus to compute image orientation and three-dimensional information.
S140, constructing a three-dimensional model of the target object according to the image control distribution network.
Where a three-dimensional model refers to a process of modeling an object or scene in three-dimensional space, it is typically presented in the form of computer graphics.
Optionally, a three-dimensional TIN (Triangulated Irregular Network) mesh is generated according to the image control distribution network; a white-body three-dimensional model is constructed from the three-dimensional TIN mesh; and the white-body three-dimensional model is adjusted to obtain the three-dimensional model of the target object.
The three-dimensional TIN mesh is an irregular triangular mesh, used for example in geological modeling, that describes the geometric shape and attribute distribution of the modeled medium. The white-body three-dimensional model is a bare, untextured geometric model of the kind commonly used as an intermediate stage in modeling.
It should be noted that, the adjustment to the three-dimensional model of the white body may be texture mapping to the three-dimensional model of the white body, which is not limited in particular in the embodiment of the present application.
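A TIN of the kind described above can be obtained by triangulating the control points. The sketch below uses a 2D Delaunay triangulation over the planimetric coordinates, which is one common way to build a TIN; this choice, and the use of `scipy`, are assumptions, since the patent does not specify the triangulation algorithm:

```python
import numpy as np
from scipy.spatial import Delaunay

def build_tin(control_points):
    """Triangulate (x, y, z) control points in the horizontal plane.
    Each simplex is one triangle of the irregular mesh; the z values
    give the elevation at each vertex."""
    pts = np.asarray(control_points, dtype=float)
    tri = Delaunay(pts[:, :2])   # Delaunay on planimetric (x, y) only
    return pts, tri.simplices    # vertices and triangle index triples
```

The resulting vertex/triangle arrays are exactly what a white-body mesh consists of before texture mapping is applied.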
Optionally, after the three-dimensional model of the target object is constructed, the three-dimensional model is model-modified and model-unitized.
Model modification means further adjusting and improving the established model so that it better matches the actual situation or specific research needs; it may include at least one of operations such as stepping, bridging, hole filling, and texture repair. Model unitization means dividing the modeled structure into a series of discrete units, typically cubes or other geometric shapes, used to represent its properties and structure.
Specifically, after a three-dimensional model of a target object is constructed, carrying out model modification on the three-dimensional model; performing model unitization on the modified three-dimensional model; determining missing information according to the geospatial data and the unitized three-dimensional model; and adjusting the unitized three-dimensional model according to the missing information.
Wherein the missing information refers to the detail information of the three-dimensional model which is missing in the geospatial data.
According to the method, at least one target characteristic point of the target object is obtained through characteristic extraction of the geospatial data of the target object; according to the geospatial data, performing multi-view image feature point matching on at least one target feature point to obtain at least one grid plane; generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode; and constructing a three-dimensional model of the target object according to the image control distribution network. According to the technical scheme, the feature point extraction and the matching are carried out on the geospatial data by combining the multi-view image feature point matching and the multi-view image dense matching mode, so that the three-dimensional model is generated, and the construction precision and the construction efficiency of the three-dimensional model are improved.
Example two
Fig. 2 is a flowchart of a three-dimensional model construction method according to a second embodiment of the present application. On the basis of the technical solutions of the foregoing embodiments, the step "generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode" is refined into: determining, according to the grid plane, the plumb line trajectory corresponding to the target feature point; and performing multi-view image dense matching on the plumb line trajectory according to the grid plane to generate at least one image control distribution network. For parts not described in detail in this embodiment, reference may be made to the related descriptions of the other embodiments. As shown in Fig. 2, the method includes:
s210, extracting features of the geospatial data of the target object to obtain at least one target feature point of the target object.
S220, performing multi-view image feature point matching on at least one target feature point according to the geospatial data to obtain at least one grid plane.
S230, determining plumb line tracks corresponding to the target feature points according to the grid plane.
The plumb line track refers to a projection track of a vertical line corresponding to a certain point on the earth surface on a map projection or measurement plane.
Specifically, according to the grid plane, the target characteristic points are projected; the projected trajectory of the target feature point is determined as a plumb line trajectory.
S240, performing multi-view image dense matching on plumb line tracks according to the grid plane to generate at least one image control distribution network.
Optionally, multi-view image dense matching is performed on the plumb line trajectory using the grid plane as base data, so as to constrain the plumb line trajectory; at least one image control distribution network is then generated from the grid plane and the constrained plumb line trajectory.
Optionally, based on a three-dimensional scanning technique, constraining the plumb line track according to the grid plane; and generating at least one image control distribution network according to the grid plane and the constrained plumb line track.
The three-dimensional scanning technique here refers to methods of obtaining the three-dimensional coordinate information of ground points along the vertical direction, and may include at least one of the VLL (Vertical Line Locus) method, stereo photogrammetric measurement, and the like.
Specifically, based on a three-dimensional scanning technology, using a grid plane as basic data, and adding constraint conditions to plumb line tracks; and generating at least one image control distribution network according to the grid plane and the constraint condition.
The constraint condition may include at least one of a constraint condition of feature point matching, a constraint condition of plumb line trajectory method, and a constraint condition of texture mapping.
It can be understood that, when the VLL method is combined with the grid plane obtained by feature point matching, constraints introduced through the plumb line trajectory method optimize and correct the ground model. This helps the ground to be simulated more accurately; in particular, where the terrain undulates strongly and changes in complex ways, the plumb line trajectory method effectively improves the ground modeling result.
Alternatively, the constraint condition of feature point matching can be realized by the following formula:
H(x, y) = argmin_{H′} ∑ |I(x, y) − I′(x′, y′)|²;
where H(x, y) is the elevation of the target object; argmin_{H′} denotes the value of H′ that minimizes the expression; H′ is the candidate elevation of the grid cell of the target object in which the feature point (x, y) lies; x and y are the lateral and longitudinal coordinates of the target feature point; x′ and y′ are the lateral and longitudinal coordinates of a candidate pixel point near the target feature point; I(x, y) is the pixel value of the target feature point; and I′(x′, y′) is the pixel value of the candidate pixel point.
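The feature-point-matching constraint above can be sketched as a search over candidate elevations H′, each mapped to the pixel it reprojects to, keeping the one whose pixel value best matches the target feature point. This is only an illustrative sketch: the `candidates` mapping (trial elevation → reprojected pixel) is assumed to be provided by an external reprojection step, which the patent does not specify.

```python
import numpy as np

def match_elevation(image, x, y, candidates):
    """Pick the candidate elevation H' whose reprojected pixel best
    matches the target feature point, per
    H(x, y) = argmin_{H'} sum |I(x, y) - I'(x', y')|^2.

    `candidates` maps a trial elevation H' to the pixel (x', y') it
    reprojects to; the reprojection itself is assumed given.
    """
    target = float(image[y, x])                       # I(x, y)
    best_h, best_cost = None, float("inf")
    for h, (xp, yp) in candidates.items():
        cost = (target - float(image[yp, xp])) ** 2   # |I - I'|^2
        if cost < best_cost:
            best_h, best_cost = h, cost
    return best_h
```

A single-term sum is shown for brevity; in practice the cost would be accumulated over a small window around the feature point.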
Alternatively, the constraints of the plumb line trajectory method can be achieved by the following formula:
H(x, y) = H_grid(x_grid, y_grid);
where H(x, y) is the elevation of the target object; H_grid(x_grid, y_grid) is the elevation of the ground element corresponding to the grid; and x_grid and y_grid are the lateral and longitudinal coordinates of the ground element.
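The plumb-line constraint simply ties the elevation along the plumb line at (x, y) to the elevation of the grid cell beneath it. A minimal sketch, assuming an axis-aligned grid indexed by flooring the planar coordinates (an assumption; the patent does not specify the cell-lookup rule):

```python
import math

def plumb_line_elevation(x, y, grid, cell_size, origin=(0.0, 0.0)):
    """Constrain the elevation along the plumb line at (x, y) to the
    elevation of the grid cell below it: H(x, y) = H_grid(x_grid, y_grid).

    `grid` is a 2-D list of per-cell elevations; the cell indices are
    found by flooring the planar coordinates.
    """
    col = math.floor((x - origin[0]) / cell_size)   # x_grid index
    row = math.floor((y - origin[1]) / cell_size)   # y_grid index
    return grid[row][col]
```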
Alternatively, the constraint of texture mapping can be implemented by the following formula:
T_model(u, v) = T_texture(u′, v′);
where T_model(u, v) are the texture coordinates on the three-dimensional model; T_texture(u′, v′) are the pixel coordinates on the texture map; u and v are the horizontal and vertical coordinates of the texture on the three-dimensional model; and u′ and v′ are the horizontal and vertical coordinates of the pixel on the texture map.
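The texture-mapping constraint assigns each model texture coordinate the value of a pixel in the texture image. A small sketch, assuming normalized (u, v) in [0, 1] and nearest-neighbour pixel lookup (both assumptions; the patent does not fix the sampling rule):

```python
def sample_texture(texture, u, v):
    """Apply T_model(u, v) = T_texture(u', v'): fetch the texture-map
    pixel nearest to normalized model texture coordinates (u, v).

    `texture` is a 2-D list of RGB tuples (rows of pixels).
    """
    h = len(texture)
    w = len(texture[0])
    up = min(int(round(u * (w - 1))), w - 1)   # u': pixel column
    vp = min(int(round(v * (h - 1))), h - 1)   # v': pixel row
    return texture[vp][up]
```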
Optionally, before performing multi-view image dense matching on the plumb line tracks, the grid plane is divided according to the elevation range of the target object to obtain at least one regular grid, and multi-view image dense matching is then performed on the plumb line tracks according to the regular grid.
Where a regular grid refers to a mesh structure consisting of horizontally and vertically staggered lines for representing and measuring locations and features in a geographic space.
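Dividing by the elevation range can be sketched as partitioning [H_min, H_max] into regular layers, with dense matching then run per layer. The uniform step size is an assumption for illustration; the patent does not specify how the range is subdivided.

```python
def divide_elevation_range(h_min, h_max, step):
    """Divide the target object's elevation range into regular layers,
    returning (low, high) bounds per layer; the last layer is clipped
    to h_max when the range is not an exact multiple of `step`.
    """
    layers = []
    low = h_min
    while low < h_max:
        high = min(low + step, h_max)
        layers.append((low, high))
        low = high
    return layers
```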
S250, constructing a three-dimensional model of the target object according to the image control distribution network.
According to the method, at least one target characteristic point of the target object is obtained through characteristic extraction of the geospatial data of the target object; according to the geospatial data, performing multi-view image feature point matching on at least one target feature point to obtain at least one grid plane; determining plumb line tracks corresponding to the target feature points according to the grid plane; performing multi-view image dense matching on plumb line tracks according to grid planes to generate at least one image control distribution network; and constructing a three-dimensional model of the target object according to the image control distribution network. According to the technical scheme, the multi-view image dense matching is carried out on the plumb line track, so that the construction precision and the construction efficiency of the three-dimensional model are improved.
Example III
Fig. 3 is a schematic structural diagram of a three-dimensional model building apparatus according to a third embodiment of the present application, which may be applicable to a case of performing model building using unmanned aerial vehicle aerial data, and the three-dimensional model building apparatus may be implemented in the form of hardware and/or software, and the three-dimensional model building apparatus may be configured in a computer device, for example, a server. As shown in fig. 3, the apparatus includes:
the feature point extraction module 310 is configured to perform feature extraction on geospatial data of a target object to obtain at least one target feature point of the target object;
the feature point matching module 320 is configured to perform multi-view feature point matching on at least one target feature point according to the geospatial data, so as to obtain at least one grid plane;
the distribution network generating module 330 is configured to generate at least one image control distribution network according to the grid plane based on the multi-view image dense matching manner;
the model building module 340 is used to build a three-dimensional model of the target object based on the image-controlled distribution network.
According to the method, at least one target characteristic point of the target object is obtained through characteristic extraction of the geospatial data of the target object; according to the geospatial data, performing multi-view image feature point matching on at least one target feature point to obtain at least one grid plane; generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode; and constructing a three-dimensional model of the target object according to the image control distribution network. According to the technical scheme, the feature point extraction and the matching are carried out on the geospatial data by combining the multi-view image feature point matching and the multi-view image dense matching mode, so that the three-dimensional model is generated, and the construction precision and the construction efficiency of the three-dimensional model are improved.
Optionally, the distribution network generating module 330 includes:
the track determining unit is used for determining plumb line tracks corresponding to the target characteristic points according to the grid plane;
and the distribution network generating unit is used for carrying out multi-view image dense matching on plumb line tracks according to the grid plane to generate at least one image control distribution network.
Optionally, the distribution network generating unit is specifically configured to:
based on a three-dimensional scanning technology, constraining plumb line tracks according to a grid plane;
and generating at least one image control distribution network according to the grid plane and the constrained plumb line track.
Optionally, the feature point extraction module 310 includes:
the pixel point determining unit is used for determining at least one pixel point according to the geospatial data and monitoring the gray value of the pixel point;
and the characteristic point determining unit is used for determining the target characteristic point according to the gray value.
Optionally, the feature point determining unit is specifically configured to:
for each pixel point, determining the gray value variation amplitude of the gray value corresponding to the pixel point in the two-dimensional direction;
and if the gray value change amplitude is larger than the gray value change threshold value, determining the pixel point as a target characteristic point.
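The feature-point rule above (flag a pixel when its gray-value variation in the two-dimensional directions exceeds a threshold) can be sketched with finite differences standing in for the patent's unspecified gradient operator — an assumption made purely for illustration:

```python
import numpy as np

def detect_feature_points(gray, threshold):
    """Mark a pixel as a target feature point when the variation of its
    gray value in the two-dimensional (x and y) directions exceeds a
    threshold.
    """
    gy, gx = np.gradient(gray.astype(float))     # per-direction change
    magnitude = np.hypot(gx, gy)                 # 2-D variation amplitude
    ys, xs = np.nonzero(magnitude > threshold)
    return list(zip(xs.tolist(), ys.tolist()))   # (x, y) feature points
```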
Optionally, the feature point matching module 320 includes:
the plane generating unit is used for generating a virtual space plane according to the geographic space data;
the grid determining unit is used for performing multi-view image characteristic point matching according to the virtual space plane and the target characteristic point to determine a target grid unit;
and the grid plane generating unit is used for generating at least one grid plane according to the target grid unit.
Optionally, the grid determining unit is specifically configured to:
determining an initial grid unit according to the virtual space plane;
back projecting the target characteristic points onto a virtual space plane to obtain the number of the characteristic points in the initial grid unit;
and determining the target grid unit according to the number of the characteristic points.
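The back-projection step above can be sketched as dropping each feature point onto the virtual space plane, counting points per initial grid cell, and keeping the cells whose count reaches a minimum as target grid cells. The orthographic drop of (x, y, z) to (x, y) and the `min_count` selection rule are simplifying assumptions; the patent leaves both unspecified.

```python
from collections import Counter

def select_target_cells(points, cell_size, min_count):
    """Back-project feature points onto the virtual space plane, count
    how many fall in each initial grid cell, and keep cells whose count
    reaches `min_count` as target grid cells.
    """
    counts = Counter(
        (int(x // cell_size), int(y // cell_size))   # cell index
        for x, y, _z in points
    )
    return [cell for cell, n in counts.items() if n >= min_count]
```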
The three-dimensional model construction device provided by the embodiment of the application can execute the three-dimensional model construction method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of executing the three-dimensional model construction methods.
Example IV
Fig. 4 is a schematic structural diagram of an electronic device 410 implementing the three-dimensional model building method according to the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 4, the electronic device 410 includes at least one processor 411, and a memory communicatively connected to the at least one processor 411, such as a read-only memory (ROM) 412 and a random access memory (RAM) 413. The memory stores computer programs executable by the at least one processor; the processor 411 may perform various suitable actions and processes according to the computer programs stored in the ROM 412 or loaded from the storage unit 418 into the RAM 413. The RAM 413 may also store various programs and data required for the operation of the electronic device 410. The processor 411, the ROM 412, and the RAM 413 are connected to each other through a bus 414. An input/output (I/O) interface 415 is also connected to the bus 414.
Various components in the electronic device 410 are connected to the I/O interface 415, including: an input unit 416 such as a keyboard, a mouse, etc.; an output unit 417 such as various types of displays, speakers, and the like; a storage unit 418, such as a magnetic disk, optical disk, or the like; and a communication unit 419 such as a network card, modem, wireless communication transceiver, etc. The communication unit 419 allows the electronic device 410 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 411 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 411 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 411 performs the various methods and processes described above, such as a three-dimensional model building method.
In some embodiments, the three-dimensional model building method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 418. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 410 via the ROM 412 and/or the communication unit 419. When the computer program is loaded into the RAM 413 and executed by the processor 411, one or more steps of the three-dimensional model building method described above may be performed. Alternatively, in other embodiments, the processor 411 may be configured to perform the three-dimensional model building method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out the methods of the present application may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this application, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (10)
1. A three-dimensional model construction method, comprising:
extracting features of geospatial data of a target object to obtain at least one target feature point of the target object;
performing multi-view image feature point matching on the at least one target feature point according to the geospatial data to obtain at least one grid plane;
generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode;
and constructing a three-dimensional model of the target object according to the image control distribution network.
2. The method of claim 1, wherein the generating at least one image-controlled distribution network based on the grid plane based on the multi-view image dense matching method comprises:
determining plumb line tracks corresponding to the target feature points according to the grid plane;
and performing multi-view image dense matching on the plumb line track according to the grid plane to generate the at least one image control distribution network.
3. The method of claim 2, wherein said generating said at least one image-controlled distribution network by multi-view image dense matching of said plumb line trajectories according to said grid plane comprises:
based on a three-dimensional scanning technology, constraining the plumb line track according to the grid plane;
and generating the at least one image control distribution network according to the grid plane and the constrained plumb line track.
4. The method according to claim 1, wherein the feature extraction of the geospatial data of the target object to obtain at least one target feature point of the target object includes:
determining at least one pixel point according to the geospatial data, and monitoring the gray value of the pixel point;
and determining the target characteristic point according to the gray value.
5. The method of claim 4, wherein said determining said target feature point from said gray value comprises:
for each pixel point, determining the gray value variation amplitude of the gray value corresponding to the pixel point in the two-dimensional direction;
and if the gray value change amplitude is larger than the gray value change threshold value, determining the pixel point as a target characteristic point.
6. The method of claim 1, wherein performing multi-view feature point matching on the at least one target feature point according to the geospatial data to obtain at least one mesh plane comprises:
generating a virtual space plane according to the geographic space data;
performing multi-view image feature point matching according to the virtual space plane and the target feature point, and determining a target grid unit;
and generating at least one grid plane according to the target grid unit.
7. The method of claim 6, wherein the determining a target grid cell from the multi-view feature point matching of the virtual spatial plane and the target feature point comprises:
determining an initial grid unit according to the virtual space plane;
back-projecting the target feature points onto the virtual space plane to obtain the number of feature points in the initial grid unit;
and determining the target grid unit according to the number of the characteristic points.
8. A three-dimensional model construction apparatus, comprising:
the feature point extraction module is used for extracting features of the geographic space data of the target object to obtain at least one target feature point of the target object;
the feature point matching module is used for performing multi-view image feature point matching on the at least one target feature point according to the geospatial data to obtain at least one grid plane;
the distribution network generation module is used for generating at least one image control distribution network according to the grid plane based on a multi-view image dense matching mode;
and the model construction module is used for constructing a three-dimensional model of the target object according to the image control distribution network.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the three-dimensional model building method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the three-dimensional model construction method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311543810.7A CN117830540A (en) | 2023-11-17 | 2023-11-17 | Three-dimensional model construction method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311543810.7A CN117830540A (en) | 2023-11-17 | 2023-11-17 | Three-dimensional model construction method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117830540A true CN117830540A (en) | 2024-04-05 |
Family
ID=90503333
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311543810.7A Pending CN117830540A (en) | 2023-11-17 | 2023-11-17 | Three-dimensional model construction method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117830540A (en) |
- 2023-11-17: CN202311543810.7A patent application filed; publication CN117830540A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11209837B2 (en) | Method and device for generating a model of a to-be reconstructed area and an unmanned aerial vehicle flight trajectory | |
CN110866531A (en) | Building feature extraction method and system based on three-dimensional modeling and storage medium | |
CN109269472B (en) | Method and device for extracting characteristic line of oblique photogrammetry building and storage medium | |
US10235800B2 (en) | Smoothing 3D models of objects to mitigate artifacts | |
CN112489099B (en) | Point cloud registration method and device, storage medium and electronic equipment | |
CN109186551A (en) | Oblique photograph measures building feature point extracting method, device and storage medium | |
CN115330940B (en) | Three-dimensional reconstruction method, device, equipment and medium | |
CN116051777B (en) | Super high-rise building extraction method, apparatus and readable storage medium | |
CN117132649A (en) | Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion | |
CN116844124A (en) | Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium | |
CN114595238A (en) | Vector-based map processing method and device | |
CN117132737B (en) | Three-dimensional building model construction method, system and equipment | |
CN112634366B (en) | Method for generating position information, related device and computer program product | |
CN114299242A (en) | Method, device and equipment for processing images in high-precision map and storage medium | |
CN117367404A (en) | Visual positioning mapping method and system based on SLAM (sequential localization and mapping) in dynamic scene | |
CN115578432B (en) | Image processing method, device, electronic equipment and storage medium | |
CN115375740A (en) | Pose determination method, three-dimensional model generation method, device, equipment and medium | |
CN117830540A (en) | Three-dimensional model construction method, device, equipment and storage medium | |
Li et al. | Low-cost 3D building modeling via image processing | |
CN114019532A (en) | Project progress checking method and device | |
CN112767477A (en) | Positioning method, positioning device, storage medium and electronic equipment | |
Dong et al. | Quality inspection and analysis of three-dimensional geographic information model based on oblique photogrammetry | |
Guo et al. | Research on 3D geometric modeling of urban buildings based on airborne lidar point cloud and image | |
CN115439331B (en) | Corner correction method and generation method and device of three-dimensional model in meta universe | |
Zeng et al. | An improved extraction method of individual building wall points from mobile mapping system data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||