CN113421332A - Three-dimensional reconstruction method and device, electronic equipment and storage medium
- Publication number
- CN113421332A (application number CN202110739443.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- shot
- initial
- dimensional model
- area
- Prior art date
- Legal status
- Granted
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/13—Edge detection
- G06T7/187—Segmentation involving region growing; region merging; connected component labelling
- G06T7/337—Image registration using feature-based methods involving reference images or patches
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20221—Image fusion; Image merging
Abstract
The application provides a three-dimensional reconstruction method and apparatus, an electronic device, and a storage medium, relating to the field of computer vision. The method comprises: determining, in an image set, at least one frame of image to be reconstructed that did not participate in reconstructing an initial three-dimensional model, wherein the image set comprises the at least one frame of image to be reconstructed and at least one frame of initial image that participated in reconstructing the initial three-dimensional model; determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed; acquiring initial overlapped images shot from each area to be shot; and performing three-dimensional image reconstruction according to the initial overlapped images shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model. By applying the embodiments of the application, the accuracy of the newly acquired image data for the missing areas can be improved.
Description
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a three-dimensional reconstruction method and apparatus, an electronic device, and a storage medium.
Background
Three-dimensional reconstruction uses a computer to automatically compute and construct a three-dimensional model from image data of a real scene acquired by an image acquisition device. Three-dimensional reconstruction technology is widely applied in fields such as smart cities, geological control, and city management.
However, after three-dimensional reconstruction is performed from the acquired image data, missing regions may remain, and the accuracy of the completed three-dimensional model depends directly on the accuracy of the image data for those missing regions.
Therefore, how to acquire high-accuracy image data for the missing regions is a technical problem that urgently needs to be solved.
Disclosure of Invention
In view of the above drawbacks of the related art, an object of the present application is to provide a three-dimensional reconstruction method and apparatus, an electronic device, and a storage medium capable of acquiring high-accuracy image data for missing regions.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method, where the method includes:
determining, in an image set, at least one frame of image to be reconstructed that does not participate in reconstructing an initial three-dimensional model, wherein the image set comprises: the at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model;
determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed;
acquiring initial overlapped images shot from each area to be shot;
and performing image three-dimensional reconstruction according to the initial overlapped image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
Optionally, the determining at least one region to be shot according to the position information of the at least one frame of image to be reconstructed includes:
obtaining at least one shooting point set according to the position information of the at least one frame of image to be reconstructed, wherein each shooting point set comprises a plurality of shooting points, and each shooting point has corresponding position information;
and determining at least one area to be shot according to the position information of each shooting point in each shooting point set.
Optionally, the obtaining at least one shooting point set according to the position information of the at least one frame of image to be reconstructed includes:
clustering the position information of each image to be reconstructed to obtain the at least one shooting point set.
optionally, the determining at least one region to be photographed according to the position information of each shooting point in each shooting point set includes:
carrying out boundary detection on each shooting point set according to the position information of each shooting point in each shooting point set;
and obtaining the at least one area to be shot according to the result of the boundary detection.
Optionally, the obtaining the at least one region to be photographed according to a result of the boundary detection includes:
obtaining at least one reference area according to the result of the boundary detection;
and performing expansion processing on each reference area to obtain the at least one area to be shot.
Optionally, the performing image three-dimensional reconstruction according to the initial overlapping image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model includes:
respectively performing three-dimensional reconstruction on the initial overlapped images shot in each area to be shot to obtain at least one intermediate three-dimensional model;
and performing fusion processing on the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapped image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
Optionally, the respectively performing three-dimensional reconstruction on the initial overlapped images shot in each area to be shot to obtain at least one intermediate three-dimensional model includes:
performing feature matching between the initial overlapped image shot in a first area to be shot and the at least one frame of initial image, and determining at least three frames of matched images among the initial images, wherein the first area to be shot is any one of the areas to be shot;
adding the at least three frames of matched images into the initial overlapped image shot in the first area to be shot to obtain a target overlapped image corresponding to the first area to be shot;
and performing three-dimensional reconstruction on the basis of the target overlapped image corresponding to the first area to be shot to obtain an intermediate three-dimensional model corresponding to the first area to be shot.
Optionally, the three-dimensional reconstruction based on the target overlapping image corresponding to the first area to be photographed to obtain an intermediate three-dimensional model corresponding to the first area to be photographed includes:
optionally, performing three-dimensional reconstruction based on each target overlapping image to obtain a pre-transformation three-dimensional model corresponding to the first region to be shot;
and transforming the three-dimensional model before transformation according to the position information corresponding to the matched image in each target overlapped image and the position information corresponding to the matched image in each initial image to obtain an intermediate three-dimensional model corresponding to the first region to be shot.
Optionally, the fusing the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapping image and the at least one frame of initial image captured in each to-be-captured region to obtain a target three-dimensional model, including:
deleting the point cloud data corresponding to each matched image from the intermediate three-dimensional model corresponding to the first area to be shot to obtain a deleted intermediate three-dimensional model corresponding to the first area to be shot;
and fusing the deleted intermediate three-dimensional model corresponding to each first area to be shot with the initial three-dimensional model respectively to obtain a target three-dimensional model.
Optionally, the performing image three-dimensional reconstruction according to the initial overlapping image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model includes:
performing feature matching on the initial overlapped images shot in each area to be shot and the initial images;
and based on the feature matching result, performing three-dimensional image reconstruction by adopting a structure-from-motion algorithm to obtain a target three-dimensional model.
Optionally, the performing image three-dimensional reconstruction according to the initial overlapping image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model includes:
and registering the initial overlapped image shot in each area to be shot and the at least one frame of image to be reconstructed into the initial three-dimensional model to obtain a target three-dimensional model.
In a second aspect, an embodiment of the present application further provides a three-dimensional reconstruction apparatus, where the apparatus includes:
a first determining module, configured to determine at least one to-be-reconstructed image that does not participate in reconstructing the initial three-dimensional model in an image set, where the image set includes: the at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model;
the second determining module is used for determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed;
the acquisition module is used for acquiring initial overlapped images shot from each area to be shot;
and the reconstruction module is used for performing image three-dimensional reconstruction according to the initial overlapped images shot in the areas to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
Optionally, the second determining module is specifically configured to obtain at least one shooting point set according to the position information of the at least one frame of image to be reconstructed, where each shooting point set includes multiple shooting points, and each shooting point has corresponding position information; and determining at least one area to be shot according to the position information of each shooting point in each shooting point set.
Optionally, the second determining module is further specifically configured to perform clustering processing on the position information of each image to be reconstructed, so as to obtain the at least one shooting point set.
Optionally, the second determining module is further specifically configured to perform boundary detection on each shooting point set according to the position information of each shooting point in each shooting point set; and obtaining the at least one area to be shot according to the result of the boundary detection.
Optionally, the second determining module is further specifically configured to obtain at least one reference area according to the result of the boundary detection; and perform expansion processing on each reference area to obtain the at least one area to be shot.
Optionally, the reconstruction module is specifically configured to perform three-dimensional reconstruction on the initial overlapped images shot in each to-be-shot area, so as to obtain at least one intermediate three-dimensional model; and performing fusion processing on the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapped image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
Optionally, the reconstruction module is further specifically configured to perform feature matching on the initial overlapping image captured in the first region to be captured and the at least one frame of initial image, and determine at least three frames of matched images in the initial image, where the first region to be captured is any one of the regions to be captured; adding the at least three matched images into the initial overlapped image shot by the first area to be shot to obtain a target overlapped image corresponding to the first area to be shot; and performing three-dimensional reconstruction on the basis of the target overlapped image corresponding to the first area to be shot to obtain an intermediate three-dimensional model corresponding to the first area to be shot.
Optionally, the reconstruction module is further specifically configured to perform three-dimensional reconstruction based on each target overlapping image to obtain a three-dimensional model before transformation corresponding to the first region to be photographed; and transforming the three-dimensional model before transformation according to the position information corresponding to the matched image in each target overlapped image and the position information corresponding to the matched image in each initial image to obtain an intermediate three-dimensional model corresponding to the first region to be shot.
Optionally, the reconstruction module is further specifically configured to delete the point cloud data corresponding to each of the matched images from the intermediate three-dimensional model corresponding to the first area to be photographed, so as to obtain a deleted intermediate three-dimensional model corresponding to the first area to be photographed; and fusing the deleted intermediate three-dimensional model corresponding to each first area to be shot with the initial three-dimensional model respectively to obtain a target three-dimensional model.
Optionally, the reconstruction module is further configured to perform feature matching on the initial overlapped images shot in each area to be shot and the initial images; and based on the feature matching result, perform three-dimensional image reconstruction by adopting a structure-from-motion algorithm to obtain a target three-dimensional model.
Optionally, the reconstruction module is further configured to register the initial overlapped image and the at least one frame of image to be reconstructed, which are shot in each area to be shot, in the initial three-dimensional model, so as to obtain a target three-dimensional model.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the electronic device runs, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of the three-dimensional reconstruction method according to the first aspect.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the three-dimensional reconstruction method of the first aspect.
The beneficial effects of the present application are as follows:
The embodiments of the application provide a three-dimensional reconstruction method and apparatus, an electronic device, and a storage medium. The method comprises: determining, in an image set, at least one frame of image to be reconstructed that does not participate in reconstructing an initial three-dimensional model, wherein the image set comprises the at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model; determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed; acquiring initial overlapped images shot from each area to be shot; and performing three-dimensional image reconstruction according to the initial overlapped images shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model. With the three-dimensional reconstruction method provided by the embodiments of the application, the processor can determine at least one area to be shot according to the position information of each image to be reconstructed that did not participate in reconstructing the initial three-dimensional model, and can re-collect images of the missing regions of the initial three-dimensional model according to the position information of each area to be shot, that is, collect the initial overlapped images corresponding to each area to be shot. The accuracy of the re-acquired image data of the missing regions (the initial overlapped images) is thereby improved, and on this basis, the accuracy of the target three-dimensional model obtained from the initial overlapped images and the initial images is improved as well.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a scene schematic diagram of a three-dimensional reconstruction system according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a three-dimensional reconstruction method according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating another three-dimensional reconstruction method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of another three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of another three-dimensional reconstruction method provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of another three-dimensional reconstruction method according to an embodiment of the present disclosure;
fig. 7 is a flowchart illustrating another three-dimensional reconstruction method according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a further three-dimensional reconstruction method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present application.
Thus, the following detailed description of the embodiments of the present application, as presented in the accompanying drawings, is not intended to limit the scope of the application as claimed, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Before explaining the embodiments of the present application, an application scenario is first described. The application scenario is one in which a target object is three-dimensionally reconstructed; the target object may be a person, an object, a scene, and the like, which is not limited in the present application. Fig. 1 is a scene schematic diagram of a three-dimensional reconstruction system according to an embodiment of the present application. As shown in fig. 1, the three-dimensional reconstruction system includes an image acquisition device 101 and an image processing device 102, where the image acquisition device 101 is communicatively connected to the image processing device 102. The image acquisition device 101 may be disposed on the body of a carrier such as an unmanned aerial vehicle or an unmanned vehicle, and may take the form of a device with an image capture function, such as a camera or a video camera.
Here, the image acquisition device 101 disposed on the body of an unmanned aerial vehicle is taken as an example. The unmanned aerial vehicle may further include sensors such as an IMU (Inertial Measurement Unit), a GPS (Global Positioning System) receiver, and a barometer, as well as a controller. The controller receives the information acquired by the sensors and can send shooting instructions to the image acquisition device 101, which acquires images based on those instructions. The image processing device 102 may be integrated with the controller, or may be a device separate from the controller, which is not limited in the present application.
Based on preset initial flight parameters (such as flight speed and altitude) and initial image acquisition parameters (such as photographing resolution and overlap degree), the unmanned aerial vehicle can shoot a target object from multiple angles through the image acquisition device 101, acquire a plurality of images of the target object, and send the acquired images to the image processing device 102, which three-dimensionally models the target object based on the image data to obtain a three-dimensional model. However, owing to external factors (such as illumination and viewpoint differences), the reconstructed three-dimensional model may contain missing regions. Image data corresponding to the missing regions can then be re-acquired by the method provided herein according to the feature information of the missing regions, finally yielding a complete three-dimensional model of the target object.
The three-dimensional modeling method mentioned in the present application is exemplified below with reference to the drawings. Fig. 2 is a schematic flow chart of a three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 2, the method may include:
s201, determining at least one frame of image to be reconstructed in the image set, wherein the frame of image to be reconstructed does not participate in reconstructing the initial three-dimensional model.
It should be noted that an image to be reconstructed that does not participate in reconstructing the initial three-dimensional model may be understood as an image that never participated in the reconstruction task, or as an image that participated in the reconstruction task but failed to be reconstructed.
The image set includes: at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model. For example, the images in the image set may be acquired for three-dimensional reconstruction of a target area: the image acquisition device 101 in the three-dimensional reconstruction system of fig. 1 may collect images of the target area and form them into an image set, where each image in the set further corresponds to position information.
Each image in the image set may be sent to the aforementioned image processing device; note that the processor mentioned in this embodiment is that image processing device. The processor may preprocess each image, for example by binarization and smoothing filtering, which improves image quality and readability and facilitates the subsequent feature extraction and matching operations. The processor may then extract and match features across the preprocessed images to obtain the matching relationships among them, and perform sparse reconstruction from the matched feature points to obtain a three-dimensional model of the target area, as sketched below.
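For illustration, a minimal preprocessing sketch in Python with OpenCV is given below; the Gaussian kernel size and the use of Otsu thresholding are assumptions for the sketch, since the description does not fix particular operators.

```python
# Hedged sketch of the preprocessing step (smoothing filtering + binarization).
# Kernel size and Otsu thresholding are illustrative choices, not values
# specified by this application.
import cv2

def preprocess(path):
    """Smooth and binarize one image before feature extraction and matching."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)  # smoothing filter
    _, binary = cv2.threshold(
        img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    return binary
```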
When a missing region exists on that three-dimensional model, the model is referred to as the initial three-dimensional model of the target area. From the external parameters (such as shooting parameters) of each image used in the initial three-dimensional model, the images that participated in its reconstruction can be identified; these are called initial images. From the initial images and the image set, the images that did not participate in reconstructing the initial three-dimensional model can then be determined; these are called images to be reconstructed, and the images that have not undergone sparse reconstruction form the set of images to be reconstructed.
S202, determining at least one area to be shot according to the position information of at least one frame of image to be reconstructed.
Each image in the image set corresponds to position information recorded at acquisition time, so the position information of each image to be reconstructed can be determined. The position coordinates of the images to be reconstructed yield a plurality of position points, from which the areas to be shot are derived (see the clustering and boundary-detection steps described below). The unmanned aerial vehicle can then execute a shooting task according to the coordinate information of each area to be shot and re-acquire the images corresponding to each such area.
And S203, acquiring initial overlapped images shot from the areas to be shot.
In a practical embodiment, the controller on the drone may control the flight device and the image acquisition device to shoot images of each area to be shot based on the preset flight parameters (such as flight speed and altitude) and the preset photographing parameters. The images shot in this pass are referred to as initial overlapped images; the initial overlapped images of each area to be shot form an initial overlapped image set, and each area to be shot is stored in association with its set. For example, the processor may begin the reconstruction operation as soon as the initial overlapped image set of any one area to be shot has been acquired. Specifically, the flight altitude in the reset flight parameters may be higher than that in the flight parameters used for the aforementioned image set (i.e. the historical flight parameters), and the course overlap degree and side overlap degree in the reset photographing parameters may be higher than those used for the aforementioned image set; alternatively, the flight altitude may remain the same while the other parameters (e.g. the photographing parameters) are better than those used for the aforementioned image set.
And S204, performing image three-dimensional reconstruction according to the initial overlapped images shot in the areas to be shot and at least one frame of initial image to obtain a target three-dimensional model.
In a practical embodiment, the initial overlapped images may be taken from the initial overlapped image set of each area to be shot, and three-dimensional image reconstruction may be performed on these initial overlapped images together with the initial images that participated in reconstructing the initial three-dimensional model. Specifically, each initial overlapped image and each initial image may be preprocessed (binarization, smoothing, and so on); features are then extracted from and matched across the processed images to obtain the matching relationships among them. Point cloud data can be constructed from the matching relationships, and the target three-dimensional model of the target area, i.e. the complete three-dimensional model, is reconstructed based on the point cloud data and a reconstruction algorithm.
In another practical embodiment, based on the initial three-dimensional model, the initial overlapped images are fused into the reconstructed initial three-dimensional model by registration, using a method that solves for camera motion from 3D-2D point pairs; this yields the target three-dimensional model of the target area. Solving for motion from 3D-2D point pairs is a conventional technique and is not detailed further in this application.
In summary, with the three-dimensional reconstruction method provided by this embodiment, the processor can determine at least one area to be shot according to the position information of each image to be reconstructed that did not participate in reconstructing the initial three-dimensional model, and can re-acquire images of the missing regions of the initial three-dimensional model according to the position information of each area to be shot, specifically the initial overlapped images corresponding to each area to be shot. The accuracy of the re-acquired image data of the missing regions (the initial overlapped images) is thereby improved, and on this basis, so is the accuracy of the target three-dimensional model obtained from the initial overlapped images and the initial images.
Fig. 3 is a flowchart illustrating another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 3, optionally, the determining at least one region to be shot according to the position information of the at least one frame of image to be reconstructed may include:
s301, obtaining at least one shooting point set according to the position information of at least one frame of image to be reconstructed.
Each shooting point set includes a plurality of shooting points, and each shooting point has corresponding position information. When the image acquisition device collects images of the target area, each image can be matched with its corresponding position coordinate, so the position coordinate of each image to be reconstructed can be obtained from this matching relationship. Each position coordinate is equivalent to one shooting point, giving a plurality of shooting points, and the plurality of shooting points can be divided into a plurality of shooting point sets.
S302, determining at least one area to be shot according to the position information of each shooting point in each shooting point set.
Each shooting point set can be numbered, such as shooting point set A, shooting point set B, and so on. One shooting point set (e.g. shooting point set A) is taken as an example here; the others are handled similarly. Shooting point set A includes a plurality of shooting points, each with a corresponding position coordinate, and the area to be shot corresponding to shooting point set A can be determined using a boundary detection method.
Optionally, the obtaining at least one shooting point set according to the position information of the at least one frame of image to be reconstructed includes: and clustering the position information of each image to be reconstructed to obtain at least one shooting point set.
The clustering process may use the k-means clustering algorithm. The position information of each image to be reconstructed is equivalent to a shooting point, and running k-means clustering over the shooting points yields a plurality of shooting point sets, as sketched below. Specifically, K shooting points can be selected from the plurality of shooting points as cluster centers; the distance from each shooting point to each cluster center is then calculated, and each shooting point is assigned to its nearest cluster center until every point has been assigned, finally producing the plurality of shooting point sets. It should be noted that the number of shooting point sets is determined by the K value chosen for the k-means algorithm, which is not limited in the present application.
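A minimal sketch of the clustering step follows, assuming each image to be reconstructed carries a 2D shooting coordinate (for example, projected GPS coordinates); the use of scikit-learn's KMeans and the choice of K are assumptions for illustration.

```python
# Hedged sketch: group shooting points into K shooting point sets with k-means.
# Using scikit-learn is an implementation assumption, not mandated here.
import numpy as np
from sklearn.cluster import KMeans

def group_shooting_points(positions, k):
    """positions: (N, 2) array of shooting-point coordinates -> K point sets."""
    positions = np.asarray(positions, dtype=float)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(positions)
    return [positions[labels == i] for i in range(k)]
```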
Fig. 4 is a schematic flowchart of another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 4, optionally, the determining at least one region to be photographed according to the position information of each shooting point in each shooting point set includes:
s401, boundary detection is performed on each image capture point set based on the position information of each image capture point in each image capture point set.
S402, obtaining at least one area to be shot according to the result of the boundary detection.
In one embodiment, the boundary detection may be convex hull detection, though other boundary detection methods may also be used; this is not limited in the present application. Taking convex hull detection as an example: the convex hull can be pictured as a rubber band stretched tight around all the points, i.e. a convex polygon formed by connecting the outermost points that encloses every point in the shooting point set. Applying convex hull detection to each shooting point set, the outermost shooting points are first identified and connected; the enclosed region can either be used directly as the area to be shot, or be post-processed (for example by expansion) to obtain the area to be shot.
Optionally, the obtaining at least one area to be shot according to the result of the boundary detection includes: obtaining at least one reference area according to the result of the boundary detection; and performing expansion processing on each reference area to obtain the at least one area to be shot.
Connecting the outermost shooting points of each shooting point set yields the reference area for that set. Each reference area can then be expanded outward by a preset distance, and the expanded reference area serves as the area to be shot; a sketch follows. In this way the initial overlapped images shot from each area to be shot overlap better with the initial images that participated in reconstructing the initial three-dimensional model, which improves the accuracy of the three-dimensional modeling.
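The boundary detection and expansion might look like the following sketch, assuming planar shooting coordinates; SciPy's ConvexHull stands in for the unspecified boundary detector, and pushing hull vertices away from their centroid by a fixed margin is one simple realization of the expansion.

```python
# Hedged sketch: convex-hull boundary detection plus outward expansion.
# The centroid-based expansion is an illustrative approximation of the
# "expand by a preset distance" post-processing.
import numpy as np
from scipy.spatial import ConvexHull

def area_to_shoot(points, margin):
    """points: (N, 2) shooting points -> expanded boundary polygon vertices."""
    points = np.asarray(points, dtype=float)
    hull = ConvexHull(points)            # indices of the outermost points
    verts = points[hull.vertices]        # reference-area boundary
    center = verts.mean(axis=0)
    dirs = verts - center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return verts + margin * dirs         # expanded area to be shot
```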
Fig. 5 is a schematic flowchart of another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 5, optionally, the performing image three-dimensional reconstruction according to the initial overlapping image and at least one frame of initial image shot in each area to be shot to obtain a target three-dimensional model includes:
s501, respectively performing three-dimensional reconstruction on the initial overlapped images shot in each area to be shot to obtain at least one intermediate three-dimensional model.
The initial overlapped images of each area to be shot may be grouped into initial overlapped image sets and numbered, such as initial overlapped image set 1 and initial overlapped image set 2. Each area to be shot can then be three-dimensionally reconstructed from the initial overlapped images in its set. Taking initial overlapped image set 1, corresponding to area to be shot 1, as an example (the other sets are handled similarly): each initial overlapped image in set 1 is preprocessed (binarization, smoothing filtering, and so on); features are then extracted from and matched across the preprocessed images to obtain the matching relationships among them; and from these matching relationships and a reconstruction algorithm, a three-dimensional model of area to be shot 1 is reconstructed, which serves as an intermediate three-dimensional model.
S502, according to the initial overlapped images shot in the areas to be shot and the at least one frame of initial image, at least one intermediate three-dimensional model and the initial three-dimensional model are subjected to fusion processing to obtain a target three-dimensional model.
Taking one area to be shot as an example: according to the matching relationships between the initial overlapped images of that area and the initial images, the feature tracking (track) information between the corresponding intermediate three-dimensional model and the initial three-dimensional model is recomputed; the two models are fused according to this feature tracking information, followed by re-triangulation of the point cloud (sketched below) and BA (Bundle Adjustment) optimization, which yields a model merging the intermediate three-dimensional model and the initial three-dimensional model. The other intermediate three-dimensional models are fused with the initial three-dimensional model in the same way, finally producing the target three-dimensional model.
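The re-triangulation step could be realized per matched image pair as in the sketch below, assuming the 3x4 projection matrices of the two views are known after fusion; OpenCV's triangulatePoints is one possible solver.

```python
# Hedged sketch of re-triangulating fused feature tracks from two views.
# P1, P2 are assumed known (3, 4) projection matrices after model fusion.
import cv2
import numpy as np

def retriangulate(P1, P2, pts1, pts2):
    """pts1, pts2: (2, N) matched pixel coordinates -> (N, 3) points."""
    X = cv2.triangulatePoints(P1, P2, pts1.astype(float), pts2.astype(float))
    return (X[:3] / X[3]).T  # de-homogenize the (4, N) result
```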
Fig. 6 is a schematic flowchart of another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 6, optionally, the three-dimensionally reconstructing the initial overlapped images taken in the regions to be taken to obtain at least one intermediate three-dimensional model includes:
s601, performing feature matching on the initial overlapped image shot in the first area to be shot and at least one frame of initial image, and determining at least three frames of matched images in the initial image.
The first area to be shot is any one of the areas to be shot. After the initial overlapped images of one area to be shot have been acquired, the processor can process them directly; that is, as soon as the image acquisition device finishes shooting a given area to be shot, the processor can begin processing its images while the device shoots the next area. This improves the efficiency of the three-dimensional reconstruction and gives good timeliness.
Specifically, the processor may first preprocess (e.g. binarize and smooth) the initial overlapped images of the first area to be shot captured by the image acquisition device, together with the initial images, and then perform feature matching between the preprocessed initial overlapped images and initial images. The feature information of each image may include point features, line features, and plane features, and the extracted feature information is used to determine the correspondences between images, i.e. to perform feature matching. In one embodiment, the Harris operator may be used to extract features from each image, or a SIFT (Scale-Invariant Feature Transform) algorithm may be used to extract and match features (see the sketch below); the specific feature matching algorithm is not limited in the present application.
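As one concrete option, SIFT matching with OpenCV could look like the following sketch; the brute-force matcher and the 0.75 ratio-test threshold are common defaults assumed for illustration, not values fixed by this application.

```python
# Hedged sketch of SIFT feature extraction and matching between two frames.
import cv2

def match_features(img_a, img_b):
    """Return ratio-test-filtered SIFT matches between two grayscale images."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    knn = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    return [m for m, n in knn if m.distance < 0.75 * n.distance]
```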
From the feature matching result, the initial images in the initial image set that have a feature matching relationship with the initial overlapped images of the first area to be shot can be identified. Generally, at least three frames of initial images with a feature matching relationship need to be determined, and each such initial image is taken as a matched image; that is, at least three matched images must exist in the initial image set. It should be noted that the number of matched images is not limited in the present application.
S602, adding at least three frames of matched images into the initial overlapped image shot by the first area to be shot to obtain a target overlapped image corresponding to the first area to be shot.
That is, the initial image set contains at least three frames of initial images serving as matched images. Each initial image serving as a matched image may be added to the initial overlapped image set of the first area to be shot to obtain a target overlapped image set; the images in this set, comprising both the initial overlapped images and the added initial images, are collectively referred to as target overlapped images.
For example, consider a single frame of matched image. Suppose initial overlapped image k of the first area to be shot matches initial overlapped image i; initial image i matches initial image j; and initial overlapped image i and initial image i were shot of the same spatial region. Then initial image j is equivalent to a matched image and may be added to the initial overlapped image set shot for the first area to be shot. The updated set is called the target overlapped image set, and every image in it, whether an original initial overlapped image or an initial image added as a matched image, is a target overlapped image.
And S603, performing three-dimensional reconstruction based on the target overlapped image corresponding to the first area to be shot to obtain a middle three-dimensional model corresponding to the first area to be shot.
The processor can preprocess each target overlapped image of the first area to be shot, then perform feature extraction and matching on the preprocessed target overlapped images to obtain the matching relationships among them, construct point cloud data from those matching relationships, and finally reconstruct a three-dimensional model of the first area to be shot based on the point cloud data and a reconstruction algorithm such as SfM (Structure from Motion); this model can serve as the intermediate three-dimensional model.
Fig. 7 is a flowchart illustrating another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 7, optionally, the three-dimensional reconstruction based on the target overlapping image corresponding to the first area to be photographed to obtain the intermediate three-dimensional model corresponding to the first area to be photographed includes:
and S701, performing three-dimensional reconstruction based on the target overlapped images to obtain a pre-transformation three-dimensional model corresponding to the first area to be shot.
And S702, transforming the pre-transformation three-dimensional model according to the position information corresponding to the matched images in each target overlapped image and the position information corresponding to the matched images in each initial image, to obtain an intermediate three-dimensional model corresponding to the first area to be shot.
The specific process of performing three-dimensional reconstruction from the target overlapped images to obtain the pre-transformation three-dimensional model is as described above and is not repeated here. After the pre-transformation three-dimensional model is obtained, the processor can compute a similarity transformation parameter from the position coordinates of the matched images in the target overlapped image set of the first area to be shot and the position coordinates of the corresponding matched images among the initial images, and transform the pre-transformation three-dimensional model into the intermediate three-dimensional model based on that similarity transformation parameter.
Continuing with the example above for one frame of matched image: initial overlapped image k matches initial overlapped image i, initial image i matches initial image j, and initial overlapped image i and initial image i were shot of the same spatial region, so initial image j is equivalent to a matched image, and initial overlapped image i and initial image i can be regarded as a matched image pair.
With at least three frames of matched images, three matched image pairs can ultimately be obtained in this manner, for example initial overlapped image j paired with initial image j and initial overlapped image k paired with initial image k. From the relationship between the position coordinates of the initial overlapped images (i, j, k) and the position coordinates of the initial images (i, j, k) in the matched pairs, the similarity transformation parameter can be computed, and the pre-transformation three-dimensional model is transformed into the intermediate three-dimensional model based on it.
At least three such pairs of target overlapped image and initial image are required for the similarity transformation to be solved accurately; a sketch of the estimation follows.
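The similarity transformation parameter (scale, rotation, translation) can be estimated in closed form from the paired positions; the description does not name a solver, so the Umeyama-style estimate below is an assumption.

```python
# Hedged sketch: closed-form similarity transform from >= 3 paired positions,
# so that dst ~= s * R @ src + t (Umeyama-style estimate; the solver choice
# is an assumption, not specified by this application).
import numpy as np

def similarity_transform(src, dst):
    """src, dst: (N, 3) paired position coordinates, N >= 3."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # fix reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```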
Fig. 8 is a flowchart illustrating another three-dimensional reconstruction method according to an embodiment of the present application. As shown in fig. 8, optionally, the fusing the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapping image and the at least one frame of initial image captured in each to-be-captured region to obtain the target three-dimensional model includes:
s801, point cloud data corresponding to the matched images are deleted from the middle three-dimensional model corresponding to the first area to be shot, and the deleted middle three-dimensional model corresponding to the first area to be shot is obtained.
After the intermediate three-dimensional model of the first area to be shot is obtained, the point cloud data corresponding to each matched image can be located and deleted from that intermediate three-dimensional model.
Continuing the example above, the point cloud data corresponding to initial overlapped image i and to initial image j, which were added to the target overlapped image set, can be deleted from the intermediate three-dimensional model, being retained only for the initial image set; a sketch follows.
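Assuming each 3D point records the index of the image that contributed it (a bookkeeping layout chosen here purely for illustration), deleting the matched images' point cloud data reduces to a mask:

```python
# Hedged sketch: drop the point cloud data contributed by matched images.
# The per-point source-image index is an assumed bookkeeping layout.
import numpy as np

def delete_matched_points(points, source_ids, matched_ids):
    """points: (N, 3); source_ids: (N,) image id per point -> filtered cloud."""
    keep = ~np.isin(source_ids, list(matched_ids))
    return points[keep]
```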
And S802, fusing the deleted intermediate three-dimensional model corresponding to each first area to be shot with the initial three-dimensional model respectively to obtain a target three-dimensional model.
Since the first area to be shot is any one of the areas to be shot, the deleted intermediate three-dimensional model of each area to be shot can be obtained in the manner described above; the explanation is not repeated here.
The deleted intermediate three-dimensional model of each area to be shot can then be fused with the initial three-dimensional model: the feature tracking (track) information between each deleted intermediate three-dimensional model and the initial three-dimensional model is recomputed, each intermediate three-dimensional model is fused with the initial three-dimensional model according to its feature tracking information, and re-triangulation of the point cloud and BA (Bundle Adjustment) optimization are performed to obtain the target three-dimensional model.
Optionally, the performing of image three-dimensional reconstruction according to the initial overlapped image and the at least one frame of initial image shot in each area to be shot, to obtain the target three-dimensional model, includes: carrying out feature matching on the initial overlapped images and the initial images shot in each area to be shot; and, based on the feature matching result, performing image three-dimensional reconstruction by adopting a structure-from-motion algorithm to obtain the target three-dimensional model.
For example, feature extraction and matching may be performed on each image (such as the initial overlapped images and the initial images) using the SIFT algorithm to obtain a feature matching result. Based on the feature matching result corresponding to each area to be shot, an incremental Structure from Motion (SfM) algorithm can be adopted for image three-dimensional reconstruction to obtain the target three-dimensional model. Of course, the image three-dimensional reconstruction may also be performed using a SLAM (Simultaneous Localization and Mapping) algorithm, which is not limited in the present application.
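A minimal sketch of the SIFT extraction-and-matching step with OpenCV is shown below (the image file names are hypothetical); the resulting correspondences would then feed an incremental SfM pipeline.

```python
import cv2

# Hypothetical file names for one overlapped image and one initial image.
img1 = cv2.imread('overlap_001.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('initial_014.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f'{len(good)} putative matches between the two images')
```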
Optionally, the performing of image three-dimensional reconstruction according to the initial overlapped image and the at least one frame of initial image shot in each area to be shot, to obtain the target three-dimensional model, includes: registering the initial overlapped image and the at least one frame of image to be reconstructed, which are shot in each area to be shot, into the initial three-dimensional model to obtain the target three-dimensional model.
Here, the target three-dimensional model can be obtained by a method for solving camera motion from 3D-2D point correspondences, such as the PnP (Perspective-n-Point) algorithm. The PnP algorithm estimates the camera pose given n 3D points in the world coordinate system and their corresponding 2D coordinates in the normalized camera coordinate system. Specifically, the initial overlapped images corresponding to each area to be shot are first associated with the initial three-dimensional model, the position coordinates of the initial overlapped images are then normalized, and the target three-dimensional model is obtained from the normalized initial overlapped images and the initial three-dimensional model.
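A minimal sketch of registering one new image against the existing model with PnP is given below, using OpenCV's RANSAC-based solver; the 3D points, pixel coordinates, and intrinsics are hypothetical stand-ins for real 2D-3D correspondences. Once a pose is recovered, the image's observations can be triangulated against the initial model to extend it.

```python
import numpy as np
import cv2

# Hypothetical 3D points of the initial model observed in the new image,
# and their detected 2D pixel locations (values are illustrative only).
object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [1, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [410, 238], [322, 150],
                         [409, 151], [300, 260], [395, 255]], dtype=np.float64)
# Assumed calibrated pinhole intrinsics.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation of the estimated camera pose
    print('camera centre:', (-R.T @ tvec).ravel())
```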
Fig. 9 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an embodiment of the present application. As shown in fig. 9, the apparatus includes:
a first determining module 901, configured to determine at least one frame of image to be reconstructed in the image set that does not participate in reconstructing the initial three-dimensional model;
a second determining module 902, configured to determine at least one region to be photographed according to position information of at least one frame of image to be reconstructed;
an obtaining module 903, configured to obtain initial overlapped images shot from each area to be shot;
and a reconstruction module 904, configured to perform image three-dimensional reconstruction according to the initial overlapped image and the at least one frame of initial image shot in each area to be shot, so as to obtain a target three-dimensional model.
Optionally, the second determining module 902 is specifically configured to obtain at least one shooting point set according to the position information of the at least one frame of image to be reconstructed; and determining at least one area to be shot according to the position information of each shooting point in each shooting point set.
Optionally, the second determining module 902 is further specifically configured to perform clustering processing on the position information of each image to be reconstructed to obtain at least one shooting point set.
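For instance, the clustering here could be k-means over the images' shot positions. A minimal sketch with scikit-learn follows (the positions and cluster count are hypothetical); each resulting cluster corresponds to one shooting point set.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical planar (x, y) positions of the images to be reconstructed.
positions = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                      [5.0, 5.1], [5.2, 4.9], [4.9, 5.2]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(positions)
shooting_point_sets = [positions[labels == k] for k in range(2)]
print([len(s) for s in shooting_point_sets])  # [3, 3]
```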
Optionally, the second determining module 902 is further specifically configured to perform boundary detection on each shooting point set according to the position information of each shooting point in each shooting point set; and obtaining the at least one area to be shot according to the result of the boundary detection.
Optionally, the second determining module 902 is further specifically configured to obtain at least one reference area according to the result of the boundary detection; and perform expansion processing on each reference area to obtain the at least one area to be shot.
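One plausible realization of this "boundary detection plus expansion" is to take the convex hull of a shooting point set and push its vertices outward by a margin; this hull-and-dilate choice is an assumption for illustration, since the patent does not fix the algorithm.

```python
import numpy as np
from scipy.spatial import ConvexHull

def expanded_region(points, margin=1.0):
    """Boundary-detect a shooting point set via its convex hull, then push the
    hull vertices outward from the centroid by `margin` (the expansion step)."""
    pts = np.asarray(points, dtype=float)
    hull = pts[ConvexHull(pts).vertices]          # boundary detection
    centroid = hull.mean(axis=0)
    direction = hull - centroid
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    return hull + margin * direction              # expansion processing

# Hypothetical shooting points; the interior point (1, 1) is ignored by the hull.
region = expanded_region([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1]], margin=0.5)
print(region)
```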
Optionally, the reconstructing module 904 is specifically configured to perform three-dimensional reconstruction on the initial overlapped images shot in each to-be-shot area, respectively, to obtain at least one intermediate three-dimensional model; and carrying out fusion processing on at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapped image and at least one frame of initial image shot in each area to be shot to obtain a target three-dimensional model.
Optionally, the reconstruction module 904 is further specifically configured to perform feature matching on the initial overlapped image shot in the first area to be shot and the at least one frame of initial image, and determine at least three frames of matched images in the initial images; add the at least three matched images into the initial overlapped image shot in the first area to be shot to obtain a target overlapped image corresponding to the first area to be shot; and perform three-dimensional reconstruction on the basis of the target overlapped image corresponding to the first area to be shot to obtain an intermediate three-dimensional model corresponding to the first area to be shot.
Optionally, the reconstruction module 904 is further specifically configured to perform three-dimensional reconstruction based on the target overlapped images to obtain a pre-transformation three-dimensional model corresponding to the first region to be photographed; and transforming the three-dimensional model before transformation according to the position information corresponding to the matched image in each target overlapped image and the position information corresponding to the matched image in each initial image to obtain an intermediate three-dimensional model corresponding to the first region to be shot.
Optionally, the reconstruction module 904 is further specifically configured to delete the point cloud data corresponding to each matched image from the intermediate three-dimensional model corresponding to the first area to be shot, so as to obtain a deleted intermediate three-dimensional model corresponding to the first area to be shot; and respectively fuse the deleted intermediate three-dimensional model corresponding to each first area to be shot with the initial three-dimensional model to obtain the target three-dimensional model.
Optionally, the reconstruction module 904 is further configured to perform feature matching on the initial overlapped images and the initial images shot in each area to be shot; and, based on the feature matching result, perform image three-dimensional reconstruction by adopting a structure-from-motion algorithm to obtain the target three-dimensional model.
Optionally, the reconstructing module 904 is further configured to register the initial overlapped image and at least one frame of image to be reconstructed, which are shot in each area to be shot, in the initial three-dimensional model, so as to obtain the target three-dimensional model.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more microprocessors, or one or more Field Programmable Gate Arrays (FPGAs). As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As yet another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 10, the electronic device may include: a processor 1001, a storage medium 1002 and a bus 1003, wherein the storage medium 1002 stores machine-readable instructions executable by the processor 1001, when the electronic device is operated, the processor 1001 and the storage medium 1002 communicate with each other through the bus 1003, and the processor 1001 executes the machine-readable instructions to execute the steps of the above method embodiment. The specific implementation and technical effects are similar, and are not described herein again.
Optionally, the present application further provides a storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program performs the steps of the above method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. Alternatively, the indirect coupling or communication connection of devices or units may be electrical, mechanical or other.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. The above description is only a preferred embodiment of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (14)
1. A method of three-dimensional reconstruction, the method comprising:
determining at least one frame of image to be reconstructed in an image set which does not participate in reconstructing the initial three-dimensional model, wherein the image set comprises: the at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model;
determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed;
acquiring initial overlapped images shot from each area to be shot;
and performing image three-dimensional reconstruction according to the initial overlapped image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
2. The method according to claim 1, wherein the determining at least one region to be captured according to the position information of the at least one frame of image to be reconstructed comprises:
obtaining at least one shooting point set according to the position information of the at least one frame of image to be reconstructed, wherein each shooting point set comprises a plurality of shooting points, and each shooting point has corresponding position information;
and determining at least one area to be shot according to the position information of each shooting point in each shooting point set.
3. The method according to claim 2, wherein obtaining at least one shot point set according to the position information of the at least one frame of image to be reconstructed comprises:
and clustering the position information of each image to be reconstructed to obtain the at least one shooting point set.
4. The method according to claim 2, wherein the determining at least one region to be shot according to the position information of each shooting point in each shooting point set comprises:
carrying out boundary detection on each shooting point set according to the position information of each shooting point in each shooting point set;
and obtaining the at least one area to be shot according to the result of the boundary detection.
5. The method according to claim 4, wherein the obtaining the at least one region to be photographed according to the result of the boundary detection comprises:
obtaining at least one reference area according to the result of the edge detection;
and performing expansion processing on each reference area to obtain the at least one area to be shot.
6. The method according to any one of claims 1 to 5, wherein the performing image three-dimensional reconstruction according to the initial overlapping image shot in each of the regions to be shot and the at least one frame of initial image to obtain a target three-dimensional model comprises:
respectively carrying out three-dimensional reconstruction on the initial overlapped images shot in the areas to be shot to obtain at least one middle three-dimensional model;
and performing fusion processing on the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapped image shot in each area to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
7. The method according to claim 6, wherein the three-dimensional reconstruction of the initial overlapping images taken in each of the regions to be taken to obtain at least one intermediate three-dimensional model comprises:
performing feature matching on an initial overlapping image shot by a first region to be shot and the at least one frame of initial image, and determining at least three frames of matched images in the initial image, wherein the first region to be shot is any one region to be shot in each region to be shot;
adding the at least three matched images into the initial overlapped image shot by the first area to be shot to obtain a target overlapped image corresponding to the first area to be shot;
and performing three-dimensional reconstruction on the basis of the target overlapped image corresponding to the first area to be shot to obtain an intermediate three-dimensional model corresponding to the first area to be shot.
8. The method according to claim 7, wherein the three-dimensional reconstruction based on the target overlapping image corresponding to the first area to be photographed to obtain an intermediate three-dimensional model corresponding to the first area to be photographed includes:
performing three-dimensional reconstruction based on the target overlapped images to obtain a three-dimensional model before transformation corresponding to the first area to be shot;
and transforming the three-dimensional model before transformation according to the position information corresponding to the matched image in each target overlapped image and the position information corresponding to the matched image in each initial image to obtain an intermediate three-dimensional model corresponding to the first region to be shot.
9. The method according to claim 8, wherein the fusing the at least one intermediate three-dimensional model and the initial three-dimensional model according to the initial overlapping image and the at least one frame of initial image captured in each of the regions to be captured to obtain a target three-dimensional model comprises:
deleting the point cloud data corresponding to each matched image from the middle three-dimensional model corresponding to the first area to be shot to obtain a deleted middle three-dimensional model corresponding to the first area to be shot;
and fusing the deleted intermediate three-dimensional model corresponding to each first area to be shot with the initial three-dimensional model respectively to obtain a target three-dimensional model.
10. The method according to any one of claims 1 to 5, wherein the performing image three-dimensional reconstruction according to the initial overlapping image shot in each of the regions to be shot and the at least one frame of initial image to obtain a target three-dimensional model comprises:
performing feature matching on the initial overlapped images and the initial images shot in the areas to be shot;
and based on the characteristic matching result, performing three-dimensional reconstruction on the image by adopting a motion recovery structure algorithm to obtain a target three-dimensional model.
11. The method according to any one of claims 1 to 5, wherein the performing image three-dimensional reconstruction according to the initial overlapping image shot in each of the regions to be shot and the at least one frame of initial image to obtain a target three-dimensional model comprises:
and registering the initial overlapped image shot in each area to be shot and the at least one frame of image to be reconstructed into the initial three-dimensional model to obtain a target three-dimensional model.
12. A three-dimensional reconstruction apparatus, characterized in that the apparatus comprises:
a first determining module, configured to determine at least one to-be-reconstructed image that does not participate in reconstructing the initial three-dimensional model in an image set, where the image set includes: the at least one frame of image to be reconstructed and at least one frame of initial image participating in reconstructing the initial three-dimensional model;
the second determining module is used for determining at least one area to be shot according to the position information of the at least one frame of image to be reconstructed;
the acquisition module is used for acquiring initial overlapped images shot from each area to be shot;
and the reconstruction module is used for performing image three-dimensional reconstruction according to the initial overlapped images shot in the areas to be shot and the at least one frame of initial image to obtain a target three-dimensional model.
13. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating via the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the three-dimensional reconstruction method according to any one of claims 1-11.
14. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, performs the steps of the three-dimensional reconstruction method according to one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110739443.2A CN113421332B (en) | 2021-06-30 | 2021-06-30 | Three-dimensional reconstruction method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113421332A true CN113421332A (en) | 2021-09-21 |
CN113421332B CN113421332B (en) | 2024-10-15 |
Family
ID=77717486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110739443.2A Active CN113421332B (en) | 2021-06-30 | 2021-06-30 | Three-dimensional reconstruction method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113421332B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117557636A (en) * | 2023-11-01 | 2024-02-13 | 广西壮族自治区自然资源遥感院 | Incremental SfM system with self-adaptive sensing of matching relation |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295141A (en) * | 2016-08-01 | 2017-01-04 | Multi-UAV path determination method and device for three-dimensional model reconstruction
CN106446211A (en) * | 2016-09-30 | 2017-02-22 | 中国人民大学 | Method for recommending photographing locations in specific area |
CN110189399A (en) * | 2019-04-26 | 2019-08-30 | Method and system for indoor three-dimensional layout reconstruction
CN110505463A (en) * | 2019-08-23 | 2019-11-26 | Real-time automatic 3D modeling method based on photographing
CN111862305A (en) * | 2020-06-30 | 2020-10-30 | 北京百度网讯科技有限公司 | Method, apparatus, and computer storage medium for processing image |
CN111899331A (en) * | 2020-07-31 | 2020-11-06 | 杭州今奥信息科技股份有限公司 | Three-dimensional reconstruction quality control method based on unmanned aerial vehicle aerial photography |
CN112634370A (en) * | 2020-12-31 | 2021-04-09 | 广州极飞科技有限公司 | Unmanned aerial vehicle dotting method, device, equipment and storage medium |
CN113001985A (en) * | 2021-02-19 | 2021-06-22 | 3D model constructed based on oblique photography, and device, electronic equipment and storage medium
Also Published As
Publication number | Publication date |
---|---|
CN113421332B (en) | 2024-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135455B (en) | Image matching method, device and computer readable storage medium | |
Zhou et al. | Seamless fusion of LiDAR and aerial imagery for building extraction | |
CN110176032B (en) | Three-dimensional reconstruction method and device | |
JP6322126B2 (en) | CHANGE DETECTION DEVICE, CHANGE DETECTION METHOD, AND CHANGE DETECTION PROGRAM | |
KR102200299B1 (en) | A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof | |
US9959625B2 (en) | Method for fast camera pose refinement for wide area motion imagery | |
CN103426165A (en) | Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds | |
KR20220064524A (en) | Method and system for visual localization | |
CN111143489B (en) | Image-based positioning method and device, computer equipment and readable storage medium | |
AliAkbarpour et al. | Parallax-tolerant aerial image georegistration and efficient camera pose refinement—without piecewise homographies | |
CN108876828A (en) | Unmanned aerial vehicle image batch-processing three-dimensional reconstruction method | |
CN115797256B (en) | Method and device for processing tunnel rock mass structural plane information based on unmanned aerial vehicle | |
CN115371673A (en) | Binocular camera target positioning method based on Bundle Adjustment in unknown environment | |
Zingoni et al. | Real-time 3D reconstruction from images taken from an UAV | |
CN115053260A (en) | Data set generation method, neural network generation method and scene model construction method | |
Karantzalos et al. | Model-based building detection from low-cost optical sensors onboard unmanned aerial vehicles | |
CN113421332B (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium | |
Zhang et al. | Integrating smartphone images and airborne lidar data for complete urban building modelling | |
Price et al. | Augmenting crowd-sourced 3d reconstructions using semantic detections | |
KR20220050386A (en) | Method of generating map and visual localization system using the map | |
Vasile et al. | Efficient city-sized 3D reconstruction from ultra-high resolution aerial and ground video imagery | |
CN113610952A (en) | Three-dimensional scene reconstruction method and device, electronic equipment and storage medium | |
Khosravani et al. | Coregistration of kinect point clouds based on image and object space observations | |
CN112070175B (en) | Visual odometer method, visual odometer device, electronic equipment and storage medium | |
CN116468878B (en) | AR equipment positioning method based on positioning map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||