Detailed Description
Hereinafter, exemplary embodiments of the present application will be described with reference to the accompanying drawings. Well-known functions and constructions are not described in detail, since doing so would obscure the application with unnecessary detail. The terms used below are defined in consideration of their functions in the present application and may vary according to the intentions or practices of users and operators; therefore, the terms should be interpreted based on the disclosure of the specification as a whole.
Please refer to fig. 1, which is a flowchart illustrating an embodiment of a method for constructing a building model according to the present application. It should be noted that, provided the results are substantially the same, the method of the present application is not limited to the flow sequence shown in fig. 1, and the other flowcharts described below are likewise not limited to the sequences shown in their figures. As shown in fig. 1, the method includes steps S10 to S50, in which:
S10: identifying a target building in pictures taken from multiple angles.
Step S10 identifies, in a plurality of acquired and/or received pictures, the building to be modeled, referred to below as the target building; the plurality of pictures are taken from different angles of the target building. It can be understood that the pictures taken from multiple angles may be captured in real time by a shooting device, transmitted over a network, or stored locally. Each picture contains the building to be modeled (the building of interest), and it should be noted that the pictures are taken from different angles of the target building.
In one embodiment, the pictures to be identified are retrieved from local storage and are taken from at least three directions of the target building (any three of the front, back, left, right, and top of the building).
In another embodiment, the pictures to be identified are taken from more than five directions of the target building. Specifically, the shooting directions include the five directions of the front, back, left, right, and top of the building, as well as directions deviating by a certain angle from the front, back, left, right, or top, such as a direction along a body diagonal of the building. Since a building can be shot from many directions, the shooting angle is not limited in any way.
Further, in one embodiment, the pixels associated with the target building in each identified picture are labeled, so that the pictures can subsequently be recalled or further processed based on the pixels corresponding to the labeled target building.
Further, in an embodiment, after step S10, the method further includes replacing the pixels outside the target building in each picture with pure white.
In one embodiment, the currently recognized picture contains other buildings, background, and the like in addition to the target building. After step S10, all pixels of each picture outside the target building are replaced with pure white. Replacing these pixels with pure white reduces the amount of computation and recognition, simplifies the calculation of the reconstructed model, improves reconstruction accuracy, and further reduces the size of the reconstructed model file and the computer memory it occupies.
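A minimal sketch of this white-fill step is given below, assuming the labeling of step S10 has already produced a binary mask whose True entries mark the pixels of the target building; the file names and the `mask` array are illustrative and not part of the application:

```python
import numpy as np
from PIL import Image

def whiten_background(picture_path: str, mask: np.ndarray) -> Image.Image:
    """Replace every pixel outside the target building with pure white.

    mask: boolean array of shape (H, W), True where the pixel belongs
    to the labeled target building.
    """
    img = np.asarray(Image.open(picture_path).convert("RGB")).copy()
    img[~mask] = (255, 255, 255)  # pure white outside the building
    return Image.fromarray(img)

# Example (hypothetical file names):
# whiten_background("view_front.jpg", building_mask).save("view_front_clean.jpg")
```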
S20: calculating the pose and the point cloud of the target building in each picture.
Further, after the target building in each picture is identified in step S10, the pose and the point cloud of the target building in each picture are calculated. The pose of the target building refers to its position and orientation in a coordinate system, and can also be simply understood as the shooting direction of the current picture. The point cloud of the target building can be understood as a set of data points representing the building as observed from the various viewpoints.
Specifically, step S20 further includes calculating the pose and the point cloud of the target building using a structure-from-motion algorithm. Because different pictures are taken from different angles, the pose of the target building differs from picture to picture. The structure-from-motion algorithm automatically recovers the camera motion and the scene structure from two or more views; the technique is self-calibrating and automatically performs camera tracking and motion matching. In particular, it matches feature points across two or more adjacent pictures to recover the complete scene structure.
In an embodiment, based on the target building identified in the multi-angle pictures in step S10, the feature points of pictures at adjacent poses are matched by the structure-from-motion algorithm, and the same target building photographed from multiple angles is recovered, thereby obtaining the point cloud corresponding to the target building.
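A minimal two-view sketch of the feature matching and pose/point recovery that structure-from-motion performs between adjacent pictures is shown below. The camera intrinsic matrix `K` and the image files are assumptions for illustration, and a full structure-from-motion pipeline would chain this step over all adjacent views and refine the result with bundle adjustment:

```python
import cv2
import numpy as np

def two_view_structure(img1_path, img2_path, K):
    """Match features between two adjacent views, recover the relative pose,
    and triangulate a sparse point cloud (illustrative two-view SfM step)."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep the corresponding image points
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative pose (rotation R, translation t) from the essential matrix
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Triangulate the matched points into a sparse 3-D point cloud
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T   # pose and N x 3 points
```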
S30: performing combination processing according to the poses and point clouds of the target building corresponding to the plurality of pictures to obtain a preliminary three-dimensional model.
Combination processing is performed according to the poses and point clouds of the target building in the multi-angle pictures calculated in step S20, so as to obtain a preliminary three-dimensional model.
Further, step S30 includes: calculating with a multi-view stereo algorithm and performing combination processing, according to the poses and point clouds of the target building corresponding to the plurality of pictures, to obtain the preliminary three-dimensional model.
That is, based on the poses and point clouds of the target building in the plurality of pictures calculated in step S20, the point cloud is further computed with the multi-view stereo algorithm and combined to obtain the preliminary three-dimensional model.
As follows from the characteristics of the structure-from-motion algorithm, the point cloud of the target building it produces is a sparse point cloud; similarly, by the characteristics of the multi-view stereo algorithm, the point cloud it produces is a dense point cloud. It can be understood that when the multi-view stereo algorithm starts from the point cloud computed by the structure-from-motion algorithm and performs further computation using the calculated pose relationships, the resulting preliminary three-dimensional model is a dense point cloud.
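As an illustration of how the calculated poses support densification, the sketch below estimates a dense depth map from a single rectified stereo pair with semi-global matching and reprojects it into 3-D points. It is only a stand-in for a full multi-view stereo algorithm; the rectified images and the reprojection matrix `Q` are assumed inputs:

```python
import cv2
import numpy as np

def dense_points_from_pair(rect_left, rect_right, Q):
    """Dense depth for one rectified pair (a proxy for one MVS depth map).

    rect_left, rect_right: rectified grayscale images of the same size.
    Q: 4x4 disparity-to-depth reprojection matrix from stereo rectification.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # must be divisible by 16
                                    blockSize=5)
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)         # H x W x 3 dense points
    return points[disparity > 0]                          # keep valid pixels only
```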
S40: performing two-dimensional segmentation on the preliminary three-dimensional model to obtain a preset number of two-dimensional slices.
Further, the preliminary three-dimensional model is segmented two-dimensionally along a preset direction to obtain the preset number of two-dimensional slices. The segmentation direction may be horizontal, the model may also be segmented along the vertical direction, or along other directions obtained through specific calculation, so as to obtain the preset number of two-dimensional slices.
Optionally, in the step of performing two-dimensional segmentation on the preliminary three-dimensional model, the target number of two-dimensional slices is set according to the required segmentation precision. In an embodiment, when very precise and fine segmentation is required, a larger number of cuts is set to obtain more two-dimensional slices. In another embodiment, when only ordinary segmentation of the preliminary three-dimensional model is required, a smaller number of cuts is set to obtain fewer two-dimensional slices.
Further, in an embodiment, step S40 includes: sequentially performing two-dimensional segmentation on the preliminary three-dimensional model along a preset direction to obtain the preset number of two-dimensional slices, and numbering the two-dimensional slices in segmentation order as the model is cut. During cutting, the resulting two-dimensional slices are numbered sequentially in the order of cutting. For example, if the model is sliced from top to bottom along the horizontal direction, the resulting two-dimensional slices are labeled in sequence in the slicing order.
Specifically, the numbering may be "1, 2, 3, ...". It is understood that in other embodiments the numbering may be set according to the user's needs, for example as a combination of letters and numbers.
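A minimal sketch of horizontal slicing with sequential numbering is given below, assuming the preliminary three-dimensional model is available as an N x 3 point array and that a fixed slice count has been chosen according to the required precision; both assumptions are illustrative:

```python
import numpy as np

def slice_model(points: np.ndarray, num_slices: int):
    """Cut the dense point cloud into horizontal slabs from top to bottom
    and number each resulting two-dimensional slice in cutting order."""
    z = points[:, 2]
    boundaries = np.linspace(z.max(), z.min(), num_slices + 1)  # top to bottom
    slices = {}
    for number, (top, bottom) in enumerate(zip(boundaries[:-1], boundaries[1:]), start=1):
        in_slab = (z <= top) & (z >= bottom)
        slices[number] = points[in_slab][:, :2]   # keep x, y: a 2-D slice
    return slices

# slices = slice_model(dense_points, num_slices=50)  # more slices = finer segmentation
```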
In another embodiment, referring to fig. 2, step S40 includes steps S41 to S43, in which:
S41: detecting straight lines and planes in the preliminary three-dimensional model.
In step S41, the preliminary three-dimensional model is further analyzed based on the calculated point cloud, and straight lines and planes in the point cloud are extracted. The main purpose of extracting straight lines and planes is to remove points introduced by noise or other factors, so that the pose estimation and the point cloud obtained during reconstruction are more accurate. At the same time, without reducing model quality, this effectively reduces the complexity of the target building's point cloud and of the mesh structure of the target three-dimensional model obtained by subsequent conversion.
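One possible way to detect dominant planes in the point cloud is RANSAC plane segmentation, sketched below with the Open3D library; the distance threshold, iteration count, and stopping condition are illustrative choices rather than values specified by the application:

```python
import numpy as np
import open3d as o3d

def extract_planes(points: np.ndarray, max_planes: int = 10):
    """Iteratively segment dominant planes from the point cloud with RANSAC."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)

    planes = []
    for _ in range(max_planes):
        if len(pcd.points) < 100:        # stop when too few points remain
            break
        model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                           ransac_n=3,
                                           num_iterations=1000)
        planes.append(model)             # plane as (a, b, c, d): ax + by + cz + d = 0
        pcd = pcd.select_by_index(inliers, invert=True)  # remove the inlier points
    return planes
```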
S42: fitting a plane in a first direction and/or a plane in a second direction according to the straight lines and/or planes.
In step S42, a plane in the first direction and/or a plane in the second direction is fitted according to the straight lines and/or planes obtained in step S41. Fitting a straight line means computing, from a preset number of points, the line that minimizes the sum of squared distances to those points; similarly, fitting a plane means computing the plane that minimizes the sum of squared distances to a preset number of straight lines. The distances here are perpendicular distances.
Further, a preset number of straight lines are first fitted from points in the point cloud, and the fitted straight lines are then used to fit a plane in the first direction or a plane in the second direction. The plane in the first direction is a horizontal plane and the plane in the second direction is a vertical plane; which one is fitted depends on what is required.
In an embodiment, the plane in the first direction to be fitted is a horizontal plane: a straight line parallel to the horizontal plane is fitted first, and the horizontal plane is then fitted from the fitted straight line. Similarly, in other embodiments, when a plane in some direction needs to be fitted, a straight line parallel to the desired plane is fitted first, and the desired plane is then fitted from it.
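A minimal least-squares sketch of the fitting criterion described above is shown below: the line direction and the plane normal are taken from the singular value decomposition of the centered points, which minimizes the sum of squared perpendicular distances. The application fits the plane from fitted straight lines; the sketch fits directly from point samples (for example, points taken along those lines), which applies the same criterion and is an illustrative simplification:

```python
import numpy as np

def fit_line(points: np.ndarray):
    """Fit the 3-D line minimizing the sum of squared perpendicular distances."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]            # a point on the line and its direction

def fit_plane(points: np.ndarray):
    """Fit the plane minimizing the sum of squared perpendicular distances."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                   # direction of least variance = plane normal
    d = -normal @ centroid
    return np.append(normal, d)       # plane as (a, b, c, d)
```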
S43: performing two-dimensional segmentation on the preliminary three-dimensional model according to the plane in the first direction and/or the plane in the second direction to obtain a preset number of two-dimensional slices.
In step S43, the preliminary three-dimensional model is segmented two-dimensionally according to the plane in the first direction and/or the plane in the second direction fitted in step S42, so as to obtain the preset number of two-dimensional slices.
In one embodiment, when the fitting in step S42 yields a plane in the first direction, i.e. a horizontal plane, step S43 cuts the preliminary three-dimensional model along the obtained horizontal plane to obtain the preset number of two-dimensional slices.
In another embodiment, when the fitting in step S42 yields a plane in the second direction, i.e. a vertical plane, step S43 cuts the preliminary three-dimensional model along the obtained vertical plane to obtain the preset number of two-dimensional slices. It can be understood that, when the preliminary three-dimensional model is sliced along vertical planes, the subsequent alignment of the resulting two-dimensional slices is performed along the original slicing direction.
After the preliminary three-dimensional model is segmented, each two-dimensional slice obtained by segmentation is normalized to obtain a calibrated two-dimensional slice. A calibrated two-dimensional slice is typically a two-dimensional picture of one or more polygons. The normalization further processes the two-dimensional slices obtained in the preceding steps: outlier or stray points are removed, and gaps caused by noise and other factors are filled in, so that a more accurate polygon (the calibrated two-dimensional slice) is obtained. Specifically, please refer to fig. 3a and fig. 3b, which illustrate the effect of the normalization. Fig. 3a shows a two-dimensional slice before normalization; as can be seen, the slice has defects caused by noise and other variable factors, and its edges are uneven. After normalization, as shown in fig. 3b, a two-dimensional slice (polygon) with straighter edges and more accurate included angles is obtained, and the missing parts have been filled in. In summary, the normalization models the two-dimensional slice as a polygon by means of the extracted straight-line segments. This process involves removing stray points while fitting the straight lines, filling in missing segments of the polygon, and adjusting the included angles to be more accurate.
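One possible realization of this normalization is sketched below: the slice points are rasterized, small gaps are closed morphologically, and the outer contour is simplified into a polygon. The rasterization resolution, the morphological kernel, and the approximation tolerance are illustrative assumptions, not parameters specified by the application:

```python
import cv2
import numpy as np

def normalize_slice(slice_xy: np.ndarray, resolution: float = 0.05):
    """Normalize one 2-D slice of points into a clean polygon.

    slice_xy: M x 2 array of slice points (metres).
    Returns the polygon vertices in the same coordinate frame.
    """
    # Rasterize the slice points into a binary image (row = y, col = x)
    origin = slice_xy.min(axis=0)
    px = ((slice_xy - origin) / resolution).astype(int)
    img = np.zeros((px[:, 1].max() + 3, px[:, 0].max() + 3), dtype=np.uint8)
    img[px[:, 1], px[:, 0]] = 255

    # Close small gaps caused by noise or missing points
    kernel = np.ones((5, 5), np.uint8)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

    # Keep the largest outer contour and simplify it into a polygon
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    polygon = cv2.approxPolyDP(contour, 3.0, True)   # tolerance in pixels

    return polygon.reshape(-1, 2) * resolution + origin
```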
S50: aligning the two-dimensional slices in a preset manner to obtain a target three-dimensional model.
Further, in step S50, the two-dimensional slices are aligned in a growing alignment manner to obtain a target three-dimensional model.
In an embodiment, in step S50, the two-dimensional slices obtained after segmentation are directly aligned in a growing alignment manner to obtain the target three-dimensional model.
In another embodiment, step S50 specifically aligns the normalized, calibrated two-dimensional slices in a growing alignment manner to obtain the target three-dimensional model. A calibrated two-dimensional slice is typically a two-dimensional picture of one or more polygons. Further, in step S50, the two-dimensional slices are aligned in the preset manner, and an averaging operation is applied to them during alignment to obtain the target three-dimensional model.
In one embodiment, after the calibrated two-dimensional slices are aligned in the growing manner, some small errors may still remain. At this point the errors are averaged: the erroneous positions or values are averaged so that the result approaches the accurate value as closely as possible.
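A minimal sketch of one possible growing alignment is given below, assuming each calibrated slice is an M x 2 polygon numbered in cutting order, that alignment means stacking the slices back along the cutting direction, and that small residual offsets between neighbouring slices are averaged away; the centroid-based correction is an illustrative choice, not the application's prescribed operation:

```python
import numpy as np

def grow_align(slices: dict, slice_height: float):
    """Stack numbered 2-D slices back into a 3-D model, growing from slice 1
    and averaging small positional errors between adjacent slices."""
    aligned = []
    reference = None
    for number in sorted(slices):
        poly = slices[number].astype(float)
        if reference is not None:
            # Average the residual offset between adjacent slice centroids
            offset = reference - poly.mean(axis=0)
            poly = poly + offset / 2.0          # split the error between neighbours
        reference = poly.mean(axis=0)
        z = -number * slice_height              # slices were cut top to bottom
        aligned.append(np.column_stack([poly, np.full(len(poly), z)]))
    return np.vstack(aligned)                   # N x 3 target model points
```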
Optionally, in an embodiment, after the above steps are performed, the method further includes converting the target three-dimensional model into a mesh structure. It can be understood that a target three-dimensional model converted into a mesh structure occupies less computer memory and is easier to transmit and analyze. In this method, the target building is identified in a plurality of pictures taken from multiple angles; the pose and the point cloud of the target building in each picture are calculated; a preset combination processing is performed on the calculated poses and point clouds to obtain a preliminary three-dimensional model; the preliminary three-dimensional model is segmented two-dimensionally to obtain a preset number of two-dimensional slices; and the two-dimensional slices are aligned in a preset manner to obtain the target three-dimensional model. Through this process, a more accurate model is finally reconstructed.
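One possible way to perform the mesh conversion mentioned above is Poisson surface reconstruction, sketched below with Open3D; the normal-estimation radius and reconstruction depth are illustrative parameters, not values specified by the application:

```python
import numpy as np
import open3d as o3d

def to_mesh(model_points: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Convert the target 3-D model's points into a triangle-mesh structure."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(model_points)
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh

# mesh = to_mesh(target_model_points)
# o3d.io.write_triangle_mesh("model.ply", mesh)   # smaller, easy to transmit and analyze
```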
Referring to fig. 4, which is a schematic structural diagram of an embodiment of an apparatus 10 for reconstructing a model according to the present application, the apparatus 10 includes a processor 12 and a memory 14 electrically connected to each other; the processor 12 is coupled to the memory 14, and in operation the processor 12 executes instructions to implement the building model constructing method described above and stores the processing results produced by the executed instructions in the memory 14.
In an embodiment, the model reconstruction apparatus 10 may be, but is not limited to, a mobile phone, a notebook computer, a tablet computer with communication and networking functions, or another device with the model reconstruction function.
Referring to fig. 5, which is a schematic structural diagram of an embodiment of an apparatus 20 with a storage function according to the present application, the storage apparatus 20 stores program data, and the program data, when executed, implements the building model constructing method described above. Specifically, the apparatus 20 with a storage function may be one of the memory of a terminal device, a personal computer, a server, a network device, or a USB flash drive. The above description presents only embodiments of the present application and is not intended to limit its scope; all equivalent structural or process modifications made on the basis of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the present application.