CN118052935A - Old district deepening design system based on digitalization - Google Patents

Old district deepening design system based on digitalization

Info

Publication number
CN118052935A
CN118052935A (Application CN202410179526.4A)
Authority
CN
China
Prior art keywords
building
height
shooting device
aerial vehicle
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410179526.4A
Other languages
Chinese (zh)
Inventor
张伟
张楠
刘光鹏
黄伟明
施国良
陈蕾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Prefabricated Architectural Design Institute Co ltd
Guangzhou Municipal Construction Group Co ltd
Guangzhou Municipal Group Co ltd
Gz Municipal Group Design Institute Co ltd
Original Assignee
Guangdong Prefabricated Architectural Design Institute Co ltd
Guangzhou Municipal Construction Group Co ltd
Guangzhou Municipal Group Co ltd
Gz Municipal Group Design Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Prefabricated Architectural Design Institute Co ltd, Guangzhou Municipal Construction Group Co ltd, Guangzhou Municipal Group Co ltd, Gz Municipal Group Design Institute Co ltd filed Critical Guangdong Prefabricated Architectural Design Institute Co ltd
Priority to CN202410179526.4A priority Critical patent/CN118052935A/en
Publication of CN118052935A publication Critical patent/CN118052935A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/10 - Geometric CAD
    • G06F30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Computational Mathematics (AREA)
  • Civil Engineering (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Structural Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of three-dimensional modeling and discloses a digitalization-based old district deepening design system comprising an unmanned aerial vehicle, a height calculation module and an unmanned aerial vehicle control module. The unmanned aerial vehicle carries a first shooting device and a second shooting device located in the same horizontal plane; the first shooting device acquires color images of a building, and the second shooting device acquires modeling data of the building. The height calculation module calculates the next scanning height from the color images obtained at the previous scanning height. The unmanned aerial vehicle control module controls the unmanned aerial vehicle to fly around the building at the next scanning height, so that the first and second shooting devices acquire the color images and modeling data of the building at that height. The invention shortens the total flight path as much as possible while ensuring that the modeling data obtained at two adjacent scanning heights can be fused accurately.

Description

Old district deepening design system based on digitalization
Technical Field
The invention relates to the field of three-dimensional modeling, and in particular to a digitalization-based old district deepening design system.
Background
When old districts are retrofitted, the appearance of their buildings usually needs to be redesigned. Because an old district typically has only two-dimensional plan drawings and no three-dimensional models of its buildings, the design work is difficult: the buildings usually have to be re-modeled to obtain corresponding three-dimensional models, and the redesign is then carried out on the basis of those models.
In the prior art, three-dimensional modeling is usually performed with a three-dimensional scanning device. However, the effective working distance of such a device is usually small, and a complete three-dimensional model of a building cannot be obtained in a single scan. When a building in an old district needs to be modeled, an unmanned aerial vehicle therefore has to carry the scanning device around the building at different heights, and the modeling data (such as point cloud data) obtained from scanning the separate areas are fused to obtain a complete three-dimensional model of the building.
The existing scanning process is typically as follows: a flight route is planned first, and a drone operator then flies the unmanned aerial vehicle along that route. When the flight route is planned, the spacing between two adjacent scanning heights is generally fixed and smaller than the maximum vertical scanning length of the scanning device, so that the modeling data obtained by scanning the building at two adjacent heights have a certain overlapping portion; during the subsequent fusion, the modeling data of adjacent scanning heights are accurately spliced and fused on the basis of this overlapping portion.
However, this scanning scheme does not take into account how complex the details of the building's outer facade are. If the overlapping area is too large, the total flight path becomes too long and the scanning efficiency suffers; if the overlapping area is too small, the more intricate areas of the facade cannot be fused accurately, the resulting three-dimensional model is not accurate enough, and the subsequent design work is affected.
Disclosure of Invention
The invention aims to disclose a digitalization-based old district deepening design system that solves the problem of how to reasonably determine the difference between two adjacent scanning heights when a building in an old district is three-dimensionally modeled with an unmanned aerial vehicle, so that the total flight path is shortened as much as possible while the modeling data obtained at the two adjacent scanning heights remain accurate enough for fusion.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the invention provides a digitalization-based old district deepening design system, which comprises an unmanned aerial vehicle, a height calculation module and an unmanned aerial vehicle control module;
the unmanned aerial vehicle comprises a first shooting device and a second shooting device which are positioned on the same horizontal plane; the first shooting device is used for acquiring a color image of a building; the second shooting device is used for acquiring modeling data of the building;
The height calculating module is used for calculating the next scanning height according to the color image obtained from the previous scanning height;
The unmanned aerial vehicle control module is used for controlling the unmanned aerial vehicle to fly around the building at the next scanning height, so that the first shooting device and the second shooting device can respectively acquire the color image and the modeling data of the building at the next scanning height.
Optionally, the unmanned aerial vehicle control module is further used for controlling the unmanned aerial vehicle to return to and land at the takeoff point when the next scanning height exceeds the height of the building.
Optionally, the system further comprises a data fusion module;
The data fusion module is used for fusing modeling data obtained by different scanning heights to obtain complete modeling data.
Optionally, the system further comprises a preprocessing module;
The preprocessing module is used for preprocessing the complete modeling data to obtain preprocessed modeling data.
Optionally, the system further comprises a modeling module;
The modeling module is used for modeling the preprocessed modeling data to obtain a three-dimensional model of the building.
Optionally, the system further comprises a design module;
the design module is used for carrying out design on the basis of the three-dimensional model of the building to obtain the design scheme of the building.
Optionally, the vertical field angle of the first shooting device is 1.5 times the vertical field angle of the second shooting device.
Optionally, the horizontal field angle of the first shooting device is the same as the horizontal field angle of the second shooting device.
Optionally, controlling the unmanned aerial vehicle to fly around the building at the next scanning altitude includes:
During the flight, the first shooting device keeps the same horizontal distance from the building, the same height and the same flight speed, and the unmanned aerial vehicle hovers after each flight segment of a fixed duration, so that the first shooting device acquires a color image of the building and the second shooting device acquires modeling data of the building while hovering.
Optionally, at the same scan height, there is a portion of overlap between the modeling data obtained from two adjacent hovers.
The beneficial effects are that:
Compared with the prior art, the invention calculates the next scanning height from the color images obtained at the previous scanning height, so that the difference between two adjacent scanning heights adapts to the complexity of the details of the building's outer facade. When the facade details are simple, the difference between two adjacent scanning heights is made as large as possible, which shortens the total flight path of the unmanned aerial vehicle and improves the scanning efficiency; when the facade details are complex, the difference is reduced so that a larger overlapping area exists between the modeling data obtained at adjacent heights, which makes it easier to obtain a sufficiently accurate result during the subsequent data fusion.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description only illustrate some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of the digitalization-based old district deepening design system of the present invention.
Fig. 2 is another schematic diagram of the digitalization-based old district deepening design system of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort fall within the scope of protection of the present invention.
As shown in fig. 1, an embodiment of the invention provides a digitalization-based old district deepening design system comprising an unmanned aerial vehicle, a height calculation module and an unmanned aerial vehicle control module;
the unmanned aerial vehicle comprises a first shooting device and a second shooting device which are positioned on the same horizontal plane; the first shooting device is used for acquiring a color image of a building; the second shooting device is used for acquiring modeling data of the building;
The height calculating module is used for calculating the next scanning height according to the color image obtained from the previous scanning height;
The unmanned aerial vehicle control module is used for controlling the unmanned aerial vehicle to fly around the building at the next scanning height, so that the first shooting device and the second shooting device can respectively acquire the color image and the modeling data of the building at the next scanning height.
Compared with the prior art, the invention calculates the next scanning height from the color images obtained at the previous scanning height, so that the difference between two adjacent scanning heights adapts to the complexity of the details of the building's outer facade. When the facade details are simple, the difference between two adjacent scanning heights is made as large as possible, which shortens the total flight path of the unmanned aerial vehicle and improves the scanning efficiency; when the facade details are complex, the difference is reduced so that a larger overlapping area exists between the modeling data obtained at adjacent heights, which makes it easier to obtain a sufficiently accurate result during the subsequent data fusion.
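Purely as an illustrative sketch of the module structure described above (all class and method names here are hypothetical and are not taken from the patent), the system could be organized in software roughly as follows:

```python
class UAV:
    """Carries a first and a second shooting device mounted in the same horizontal plane."""

    def capture_color_image(self):
        """First shooting device: acquire a color image of the building."""
        raise NotImplementedError

    def capture_modeling_data(self):
        """Second shooting device (e.g. a depth camera): acquire modeling data (point cloud)."""
        raise NotImplementedError


class HeightCalculationModule:
    """Calculates the next scanning height from the color images of the previous height."""

    def next_height(self, previous_height, color_images):
        raise NotImplementedError  # see the adaptive height-increment sketch further below


class UAVControlModule:
    """Controls the unmanned aerial vehicle around the building."""

    def orbit(self, height):
        """Fly one circle at `height`, hovering at fixed intervals so both shooting
        devices can capture data; returns (color_images, point_clouds)."""
        raise NotImplementedError

    def return_to_takeoff_point(self):
        raise NotImplementedError
```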
Optionally, the shooting angles of the first shooting device and the second shooting device are the same, and the bottom of the first shooting device and the bottom of the second shooting device are in the same horizontal plane. The first photographing device and the second photographing device both acquire data along the horizontal direction.
Optionally, the second photographing device is a depth camera.
Optionally, the unmanned aerial vehicle control module is further used for controlling the unmanned aerial vehicle to return to and land at the takeoff point when the next scanning height exceeds the height of the building.
Specifically, if the next scanning height is 50 meters while the building is 45 meters high, the unmanned aerial vehicle is controlled to return to the takeoff point: shooting at 50 meters would not let the second shooting device capture any new facade features, because those features were already captured at the previous scanning height.
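Building on the hypothetical classes sketched above, the overall scanning loop implied by this behaviour could look as follows; only the termination test is taken directly from the description (the loop stops and the drone returns to the takeoff point once the next computed height exceeds the building height, as in the 50 m / 45 m example), while the function and parameter names are assumptions.

```python
def scan_building(building_height_m: float,
                  start_height_m: float,
                  controller: "UAVControlModule",
                  height_module: "HeightCalculationModule"):
    """Scan the building at successive, adaptively spaced heights (sketch only)."""
    scans = []
    height = start_height_m
    while height <= building_height_m:
        # One orbit at the current height: hover, capture color images + point clouds
        color_images, point_clouds = controller.orbit(height)
        scans.append((height, color_images, point_clouds))
        # Next height from the color images just captured (adaptive increment)
        height = height_module.next_height(height, color_images)
    # Next height exceeds the building height, e.g. 50 m for a 45 m building
    controller.return_to_takeoff_point()
    return scans
```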
Optionally, the modeling data is point cloud data. By acquiring the point cloud data of the building, the three-dimensional reconstruction of the building can be realized.
Optionally, as shown in fig. 2, the system further comprises a data fusion module;
The data fusion module is used for fusing modeling data obtained by different scanning heights to obtain complete modeling data.
Specifically, when the modeling data are point cloud data, the modeling data obtained at two adjacent scanning heights are spliced together to obtain the complete modeling data.
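The patent does not name a specific splicing algorithm; as one common way to register and merge the point clouds of two adjacent scanning heights using their overlapping portion, the sketch below uses ICP from the Open3D library. The file names, the 0.2 m correspondence distance and the voxel size are placeholder assumptions.

```python
import open3d as o3d

# Point clouds captured at two adjacent scanning heights (placeholder file names)
lower = o3d.io.read_point_cloud("scan_height_z.pcd")
upper = o3d.io.read_point_cloud("scan_height_z_plus_1.pcd")

# Refine the alignment on the overlapping portion with point-to-point ICP
# (a rough initial alignment, e.g. from the UAV's known heights, is assumed)
result = o3d.pipelines.registration.registration_icp(upper, lower, 0.2)

# Apply the estimated transform and merge both clouds into the complete modeling data
upper.transform(result.transformation)
merged = (lower + upper).voxel_down_sample(voxel_size=0.02)
o3d.io.write_point_cloud("merged.pcd", merged)
```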
Optionally, the system further comprises a preprocessing module;
The preprocessing module is used for preprocessing the complete modeling data to obtain preprocessed modeling data.
Specifically, the process of preprocessing the modeling data includes removing noise from the modeling data.
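As a minimal example of the noise-removal step, assuming Open3D again, statistical outlier removal can be applied to the fused cloud; the neighbour count and standard-deviation ratio below are illustrative values, not parameters from the patent.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged.pcd")  # complete modeling data from the fusion step

# Discard points whose mean distance to their 20 nearest neighbours deviates
# from the average by more than 2 standard deviations (sparse noise)
clean, _kept = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("merged_denoised.pcd", clean)
```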
Optionally, the system further comprises a modeling module;
The modeling module is used for modeling the preprocessed modeling data to obtain a three-dimensional model of the building.
Specifically, the process of modeling the modeling data includes:
The preprocessed modeling data are modeled using algorithms such as surface reconstruction, model fitting or voxel-based methods to obtain a three-dimensional model of the building.
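As one concrete instance of the surface-reconstruction option, the sketch below runs Open3D's Poisson surface reconstruction on the preprocessed point cloud; the normal-estimation radius and the octree depth are assumed values.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_denoised.pcd")

# Poisson reconstruction needs oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("building_model.ply", mesh)
```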
Optionally, the system further comprises a design module;
the design module is used for carrying out design on the basis of the three-dimensional model of the building to obtain the design scheme of the building.
Specifically, during design, three-dimensional models of designed building parts, such as windows, balconies and wall surfaces, can be added directly to the three-dimensional model of the building, thereby realizing the redesign of the building.
Optionally, the vertical field angle of the first shooting device is 1.5 times the vertical field angle of the second shooting device.
Specifically, the angle of view is also called a field of view in optical engineering, and the size of the angle of view determines the field of view of the optical instrument. The vertical field angle refers to the magnitude of the field angle of the photographing apparatus in the vertical direction.
By setting the vertical field angle of the first shooting device to 1.5 times that of the second shooting device, the first shooting device can, at the previous scanning height, already photograph the area of the facade that the second shooting device will scan at the next scanning height. The next scanning height can therefore be calculated from the complexity of the details in the overlapping part between the facade areas photographed by the second shooting device at the two adjacent scanning heights, instead of keeping a fixed difference between adjacent scanning heights, which would either lower the scanning efficiency or affect the final modeling precision.
When the vertical field angle is 1.5 times as large, the actual vertical extent covered by the color image from the first shooting device is twice the vertical extent covered by the point cloud data from the second shooting device, so the next scanning height can be obtained by analyzing only the upper quarter of the color image.
Optionally, calculating the next scan height from the color image obtained from the previous scan height includes:
The next scan height is calculated using the following formula:
scahei_{z+1} = scahei_z + adphei_z
where scahei_{z+1} and scahei_z denote the (z+1)-th and z-th scanning heights, respectively, and adphei_z denotes the height increase calculated from the color images captured at the z-th scanning height;
The calculation of adphei_z comprises:
In the first step, a height increase is calculated separately for each color image captured at the z-th scanning height: adphei(calpho_z) denotes the height increase calculated from the color image calpho_z captured at the z-th scanning height; graydif_z denotes the complexity factor in terms of gray; calpix_z denotes the set of pixel points in the image calpho_z; grayvalue_i denotes the gray value of pixel point i in calpix_z; numcalpix_z denotes the total number of pixel points in calpix_z; grayvalue_{z,max} denotes the maximum gray value among all pixel points contained in calpix_z; epxnum_z denotes the total number of pixel points in calpho_z that meet the identification requirement; b_1 and b_2 denote the set first weight and second weight, respectively; basehei denotes the maximum value of the height increase; calpho_z denotes the image composed of the pixel points of the color image captured at the z-th scanning height whose row numbers lie in the interval [⌊3·nrow/4⌋, nrow]; nrow denotes the total number of rows in the color image, counted from bottom to top so that row nrow is the topmost row; and ⌊·⌋ denotes rounding down.
In the second step, the average of all the height increases calculated in the first step is taken as adphei_z.
Specifically, the next scanning height is obtained by adding, to the previous scanning height, the height increase calculated from the color images obtained at that previous height. The increase of the scanning height is therefore closely tied to the complexity of the building's outer facade, so the difference between adjacent scanning heights adapts from scan to scan; this keeps the fused modeling data accurate while shortening the total flight distance of the unmanned aerial vehicle as much as possible and improving the scanning efficiency.
The height increase is computed jointly from the gray-scale complexity of the color image and from the complexity of the contours (that is, the total number of pixel points meeting the identification requirement). When the gray-scale complexity and the contour complexity are both small, the outer facade is simple and a larger height increase is used, which reduces the number of circles the unmanned aerial vehicle has to fly and improves the scanning efficiency; conversely, a smaller height increase is used, which guarantees the modeling precision of the scanned modeling data.
Optionally, the first weight and the second weight are 0.4 and 0.6, respectively.
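The exact expression for adphei(calpho_z) appears in the patent only as an embedded formula image, so the combination used below is an assumption: the gray-scale complexity factor and the fraction of contour pixels are blended with the b1 = 0.4 / b2 = 0.6 weights stated above and then scale basehei down, so that a more complex facade yields a smaller height increase. The crop to the upper quarter of the image and the averaging over all images of one scanning height follow the text; `identification_mask` is the function defined in the identification-coefficient sketch further below, and the definition of graydif used here is likewise assumed.

```python
import numpy as np


def height_increase(color_image_gray: np.ndarray, basehei: float,
                    b1: float = 0.4, b2: float = 0.6) -> float:
    """Sketch of adphei(calpho_z) under an assumed combination formula:
    basehei * (1 - (b1 * graydif + b2 * contour_ratio))."""
    nrow = color_image_gray.shape[0]
    # calpho_z: upper quarter of the color image (NumPy stores rows top to
    # bottom, so the upper quarter is the first nrow // 4 rows)
    calpho = color_image_gray[: max(1, nrow // 4), :].astype(np.float64)

    # graydif_z, the gray-scale complexity factor (assumed definition):
    # mean absolute deviation of gray values, normalised by the maximum gray value
    graydif = np.mean(np.abs(calpho - calpho.mean())) / max(calpho.max(), 1.0)

    # Fraction of pixel points meeting the identification requirement (contours)
    contour_ratio = identification_mask(calpho).mean()

    complexity = float(np.clip(b1 * graydif + b2 * contour_ratio, 0.0, 1.0))
    return basehei * (1.0 - complexity)


def adphei(color_images_gray, basehei: float) -> float:
    """Second step: average the per-image height increases of one scanning height."""
    return float(np.mean([height_increase(img, basehei) for img in color_images_gray]))
```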
Optionally, the maximum value of the height increase basehei is calculated from bsD and θ, where bsD denotes the safe shooting distance of the unmanned aerial vehicle and θ denotes the vertical field angle of the second shooting device.
Specifically, the unmanned aerial vehicle's safe shooting distance is 6 meters.
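The formula for basehei is likewise shown only as an image; the sketch below uses the standard relationship between vertical field angle and vertical coverage at the safe shooting distance, which fits the two quantities named above but is an assumption rather than the patent's exact expression.

```python
import math


def basehei(safe_distance_m: float, vertical_fov_deg: float) -> float:
    """Assumed form: vertical extent covered by the second shooting device
    at the safe shooting distance, i.e. 2 * bsD * tan(theta / 2)."""
    return 2.0 * safe_distance_m * math.tan(math.radians(vertical_fov_deg) / 2.0)


# With the 6 m safe shooting distance stated above and an assumed 60 degree
# vertical field angle, the maximum height increase is about 6.93 m.
print(round(basehei(6.0, 60.0), 2))
```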
Optionally, the process of judging whether the pixel point meets the identification requirement includes:
For a pixel point i in calpix z, calculating an identification coefficient of the pixel point i;
if the identification coefficient is larger than the set identification coefficient threshold value, the pixel point i meets the identification requirement.
Specifically, whether a pixel point meets the identification requirement is judged from its identification coefficient: the larger the identification coefficient, the more likely the pixel point meets the requirement, so pixel points that represent contours are identified. The more such pixel points are identified, the more complex the building's outer facade is at the next scanning height.
Optionally, the calculating process of the identification coefficient of the pixel point i includes:
calculating a first identification value:
idecoef_{i,1} = grayvalue(x-1,y-1) + 2·grayvalue(x,y-1) + grayvalue(x+1,y-1) - grayvalue(x-1,y+1) - 2·grayvalue(x,y+1) - grayvalue(x+1,y+1)
where idecoef_{i,1} denotes the first identification value of pixel point i, (x, y) are the coordinates of pixel point i, and grayvalue(x-1,y-1), grayvalue(x,y-1), grayvalue(x+1,y-1), grayvalue(x-1,y+1), grayvalue(x,y+1) and grayvalue(x+1,y+1) denote the gray values of the pixel points at coordinates (x-1,y-1), (x,y-1), (x+1,y-1), (x-1,y+1), (x,y+1) and (x+1,y+1), respectively;
Calculating a second identification value:
idecoef_{i,2} = grayvalue(x+1,y+1) + 2·grayvalue(x+1,y) + grayvalue(x+1,y-1) - grayvalue(x-1,y+1) - 2·grayvalue(x-1,y) - grayvalue(x-1,y-1)
where idecoef_{i,2} denotes the second identification value of pixel point i, and grayvalue(x+1,y) and grayvalue(x-1,y) denote the gray values of the pixel points at coordinates (x+1,y) and (x-1,y), respectively;
Calculating the identification coefficient of pixel point i from the first identification value and the second identification value, where idecoef_i denotes the identification coefficient of pixel point i.
Specifically, the identification coefficient is calculated from the pixel points in the 8-neighborhood of pixel point i. The first and second identification values respectively indicate how likely the pixel point belongs to an image contour in the vertical and horizontal directions, and the larger the identification coefficient, the more likely pixel point i is a contour pixel.
Optionally, the set recognition coefficient threshold is 0.9.
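The two identification values computed below follow exactly the formulas given above (evaluated for all interior pixels at once); the way they are combined into the identification coefficient is shown in the patent only as an embedded image, so the normalised gradient magnitude used here, compared against the 0.9 threshold, is an assumption.

```python
import numpy as np


def identification_mask(gray: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Mark the pixel points that meet the identification requirement (sketch)."""
    g = gray.astype(np.float64)
    if g.shape[0] < 3 or g.shape[1] < 3:
        return np.zeros(g.shape, dtype=bool)

    # 8-neighbourhood views for all interior pixels; the first array axis is y
    ul, u, ur = g[:-2, :-2], g[:-2, 1:-1], g[:-2, 2:]   # (x-1,y-1), (x,y-1), (x+1,y-1)
    l, r = g[1:-1, :-2], g[1:-1, 2:]                    # (x-1,y),   (x+1,y)
    dl, d, dr = g[2:, :-2], g[2:, 1:-1], g[2:, 2:]      # (x-1,y+1), (x,y+1), (x+1,y+1)

    # First and second identification values, as in the formulas above
    idecoef1 = (ul + 2.0 * u + ur) - (dl + 2.0 * d + dr)
    idecoef2 = (ur + 2.0 * r + dr) - (ul + 2.0 * l + dl)

    # Assumed combination: gradient magnitude normalised to [0, 1]
    magnitude = np.hypot(idecoef1, idecoef2)
    idecoef = magnitude / max(float(magnitude.max()), 1e-9)

    mask = np.zeros(g.shape, dtype=bool)
    mask[1:-1, 1:-1] = idecoef > threshold   # border pixels are left unmarked
    return mask
```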
Optionally, the process of obtaining the gray value of the pixel point in calpho z includes:
carrying out graying treatment on calpho z to obtain a gray image G;
For the pixel point with coordinates (x, y) in calpho z, the gray value is the pixel value of the pixel point with coordinates (x, y) in the gray image G.
Optionally, performing graying treatment on calpho z to obtain a gray-scale image G, including:
Carrying out graying treatment on calpho z to obtain a treated image;
and performing competitive filtering processing on the processed image to obtain a gray level image G.
In particular, a conventional filtering scheme usually relies on a single filtering algorithm. Such a scheme does not take the type of noise into account, and a single filtering algorithm can only handle one type of noisy pixel effectively. If the results of two different types of filtering algorithms are simply weighted and fused, many details in the image are lost.
Optionally, performing competitive filtering processing on the processed image to obtain a gray image G, including:
For a pixel point p in the processed image,
Filtering p by using a linear filtering algorithm and a nonlinear filtering algorithm respectively to obtain a linear filtering gray value and a nonlinear filtering gray value;
respectively calculating a linear filtering effective coefficient and a nonlinear filtering effective coefficient of the pixel point p based on the linear filtering gray value and the nonlinear filtering gray value;
if the linear filtering effective coefficient is larger than the nonlinear filtering effective coefficient, the linear filtering gray value is used as the gray value of pixel point p in the gray image G; otherwise, the nonlinear filtering gray value is used as the gray value of pixel point p in the gray image G.
Specifically, by applying competitive filtering and comparing the linear filtering effective coefficient with the nonlinear filtering effective coefficient, the invention keeps, for each pixel, the filtering result with the better effect, thereby retaining more details while filtering.
Optionally, the calculating process of the linear filtering effective coefficient and the nonlinear filtering effective coefficient includes:
The linear filtering effective coefficient is calculated from the following quantities: liefltcoef_p denotes the linear filtering effective coefficient of pixel point p; liegray_p denotes the linear filtering gray value of pixel point p; grayvalue_j denotes the gray value of pixel point j; neip denotes the set of pixel points in a 5×5 square area centered on p; num_gray denotes the total number of pixel points in neip whose gray value equals gray; and β_1 and β_2 denote the pixel-number weight and the pixel gray-distribution weight, respectively;
The nonlinear filtering effective coefficient is calculated analogously: nliefltcoef_p denotes the nonlinear filtering effective coefficient of pixel point p, nliegray_p denotes the nonlinear filtering gray value of pixel point p, and dli(·) denotes the result of normalizing the variable in brackets.
Specifically, when calculating the filtering effective coefficient, the invention considers not only the difference between the filtering result and the original image but also the distribution of the gray values of the pixel points in the square area after filtering: the larger the difference from the original image and the more complex the gray-value distribution in the square area, the better the filtering effect and the more details are retained.
Optionally, the pixel point number weight and the pixel point gray distribution weight are 0.5 and 0.5, respectively.
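The expressions for the two effective coefficients are embedded as images in the patent, so the scoring below is an assumption that only mirrors the described ingredients: a linear (mean) and a nonlinear (median) filter are computed, each pixel gets an effective coefficient built from the difference to the original image and from the gray-value spread in a 5×5 area after filtering (local standard deviation is used here as a proxy for the gray-value distribution), the two ingredients are weighted 0.5 / 0.5 as stated above, and the filter with the larger coefficient wins per pixel.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter


def _normalise(a: np.ndarray) -> np.ndarray:
    """Scale an array to [0, 1] (assumed normalisation)."""
    span = float(a.max() - a.min())
    return (a - a.min()) / span if span > 0 else np.zeros_like(a)


def competitive_filter(gray: np.ndarray, w_diff: float = 0.5, w_spread: float = 0.5) -> np.ndarray:
    """Per-pixel competition between a linear and a nonlinear filter (sketch)."""
    g = gray.astype(np.float64)
    lin = uniform_filter(g, size=3)      # linear filtering result (mean filter)
    nlin = median_filter(g, size=3)      # nonlinear filtering result (median filter)

    def effective_coefficient(filtered: np.ndarray) -> np.ndarray:
        diff = _normalise(np.abs(filtered - g))                       # change w.r.t. original
        local_var = uniform_filter(filtered ** 2, size=5) - uniform_filter(filtered, size=5) ** 2
        spread = _normalise(np.sqrt(np.clip(local_var, 0.0, None)))   # 5x5 gray-value spread
        return w_diff * diff + w_spread * spread

    lie = effective_coefficient(lin)     # linear filtering effective coefficient
    nlie = effective_coefficient(nlin)   # nonlinear filtering effective coefficient

    # Keep, for each pixel, the result whose effective coefficient is larger
    return np.where(lie > nlie, lin, nlin)
```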
Optionally, the horizontal field angle of the first shooting device is the same as the horizontal field angle of the second shooting device.
Optionally, controlling the unmanned aerial vehicle to fly around the building at the next scanning altitude includes:
During the flight, the first shooting device keeps the same horizontal distance from the building, the same height and the same flight speed, and the unmanned aerial vehicle hovers after each flight segment of a fixed duration, so that the first shooting device acquires a color image of the building and the second shooting device acquires modeling data of the building while hovering.
Specifically, the horizontal distance is a safe shooting distance of the unmanned aerial vehicle.
Optionally, at the same scan height, there is a portion of overlap between the modeling data obtained from two adjacent hovers.
Specifically, there is also an overlapping portion in the horizontal direction, so that subsequent data fusion is realized.
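The patent only requires a horizontal overlap between the data of two adjacent hovers; as a small worked example of how such an overlap could be guaranteed, the sketch below derives the hover spacing from the horizontal field angle and the safe shooting distance. The 70 degree field angle and the 30 % overlap ratio are assumptions, not values from the patent.

```python
import math


def hover_spacing(safe_distance_m: float, horizontal_fov_deg: float,
                  overlap_ratio: float = 0.3) -> float:
    """Horizontal distance the UAV may cover between two hovers while the
    second shooting device still overlaps by `overlap_ratio` (assumed geometry)."""
    coverage = 2.0 * safe_distance_m * math.tan(math.radians(horizontal_fov_deg) / 2.0)
    return coverage * (1.0 - overlap_ratio)


# e.g. the 6 m safe shooting distance with an assumed 70 degree horizontal
# field angle and 30 % overlap: roughly 5.88 m between hover points
print(round(hover_spacing(6.0, 70.0), 2))
```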
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An old district deepening design system based on digitalization, characterized by comprising an unmanned aerial vehicle, a height calculation module and an unmanned aerial vehicle control module;
the unmanned aerial vehicle comprises a first shooting device and a second shooting device which are positioned on the same horizontal plane; the first shooting device is used for acquiring a color image of a building; the second shooting device is used for acquiring modeling data of the building;
The height calculating module is used for calculating the next scanning height according to the color image obtained from the previous scanning height;
The unmanned aerial vehicle control module is used for controlling the unmanned aerial vehicle to fly around the building at the next scanning height, so that the first shooting device and the second shooting device can respectively acquire the color image and the modeling data of the building at the next scanning height.
2. The old district deepening design system based on digitalization according to claim 1, wherein the unmanned aerial vehicle control module is further used for controlling the unmanned aerial vehicle to return to and land at the takeoff point when the next scanning height exceeds the height of the building.
3. The old district deepening design system based on digitalization according to claim 1, further comprising a data fusion module;
The data fusion module is used for fusing modeling data obtained by different scanning heights to obtain complete modeling data.
4. The old district deepening design system based on digitalization according to claim 3, further comprising a preprocessing module;
The preprocessing module is used for preprocessing the complete modeling data to obtain preprocessed modeling data.
5. The old district deepening design system based on digitalization according to claim 4, further comprising a modeling module;
The modeling module is used for modeling the preprocessed modeling data to obtain a three-dimensional model of the building.
6. The old district deepening design system based on digitalization according to claim 5, further comprising a design module;
the design module is used for carrying out design on the basis of the three-dimensional model of the building to obtain the design scheme of the building.
7. The old district deepening design system based on digitalization according to claim 1, wherein the vertical field angle of the first shooting device is 1.5 times the vertical field angle of the second shooting device.
8. The old district deepening design system based on digitalization according to claim 1, wherein the horizontal field angle of the first shooting device is the same as the horizontal field angle of the second shooting device.
9. The old district deepening design system based on digitalization according to claim 1, wherein controlling the unmanned aerial vehicle to fly around the building at the next scanning height includes:
During the flight, the first shooting device keeps the same horizontal distance from the building, the same height and the same flight speed, and the unmanned aerial vehicle hovers after each flight segment of a fixed duration, so that the first shooting device acquires a color image of the building and the second shooting device acquires modeling data of the building while hovering.
10. The old district deepening design system based on digitalization according to claim 9, wherein, at the same scanning height, there is a partial overlap between the modeling data obtained from two adjacent hovers.
CN202410179526.4A 2024-02-18 2024-02-18 Old district deepening design system based on digitalization Pending CN118052935A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410179526.4A CN118052935A (en) 2024-02-18 2024-02-18 Old district deepening design system based on digitalization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410179526.4A CN118052935A (en) 2024-02-18 2024-02-18 Old district deepening design system based on digitalization

Publications (1)

Publication Number Publication Date
CN118052935A true CN118052935A (en) 2024-05-17

Family

ID=91044263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410179526.4A Pending CN118052935A (en) 2024-02-18 2024-02-18 Old district deepening design system based on digitalization

Country Status (1)

Country Link
CN (1) CN118052935A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685894A (en) * 2018-12-03 2019-04-26 东莞理工学院 A kind of bifurcation full-view modeling system
CN109945867A (en) * 2019-03-04 2019-06-28 中国科学院深圳先进技术研究院 Paths planning method, device and the computer equipment of unmanned plane
US20210158009A1 (en) * 2019-11-21 2021-05-27 Beihang University UAV Real-Time Path Planning Method for Urban Scene Reconstruction
CN114863052A (en) * 2022-06-09 2022-08-05 常熟古建园林股份有限公司 Aviation oblique photography and three-dimensional laser scanning ancient building simulation restoration method
CN116071496A (en) * 2023-01-06 2023-05-05 南方电网数字电网研究院有限公司 Data processing system based on three-dimensional modeling technology
CN116628800A (en) * 2023-05-09 2023-08-22 海南华筑国际工程设计咨询管理有限公司 Building design system based on BIM
CN117079254A (en) * 2023-08-23 2023-11-17 太原福莱瑞达物流设备科技有限公司 Scanning method, scanning system, computer device and storage medium for vehicle cargo compartment
CN117541729A (en) * 2023-12-11 2024-02-09 承德应用技术职业学院 Building model generation system for design
CN117474919A (en) * 2023-12-27 2024-01-30 常州微亿智造科技有限公司 Industrial quality inspection method and system based on reconstructed workpiece three-dimensional model

Similar Documents

Publication Publication Date Title
CN112489212B (en) Intelligent building three-dimensional mapping method based on multi-source remote sensing data
CN109410256B (en) Automatic high-precision point cloud and image registration method based on mutual information
CN103426200B (en) Tree three-dimensional reconstruction method based on unmanned aerial vehicle aerial photo sequence image
CN109615653A (en) Percolating water area detecting and recognition methods based on deep learning and visual field projection model
CN112270251A (en) Self-adaptive multi-sensor data fusion method and system based on mutual information
CN112308928B (en) Camera without calibration device and laser radar automatic calibration method
CN110070571B (en) Phyllostachys pubescens morphological parameter detection method based on depth camera
CN113298035A (en) Unmanned aerial vehicle electric power tower detection and autonomous cruise method based on image recognition
CN110969654A (en) Corn high-throughput phenotype measurement method and device based on harvester and harvester
CN112947526B (en) Unmanned aerial vehicle autonomous landing method and system
AU2020103470A4 (en) Shadow Detection for High-resolution Orthorectificed Imagery through Multi-level Integral Relaxation Matching Driven by Artificial Shadows
CN115240087A (en) Tree barrier positioning analysis method and system based on binocular stereo vision and laser point cloud
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117809297B (en) Three-dimensional reconstruction-based intelligent identification method for dangerous source of power transmission line
CN116171962A (en) Efficient targeted spray regulation and control method and system for plant protection unmanned aerial vehicle
CN114972646A (en) Method and system for extracting and modifying independent ground objects of live-action three-dimensional model
CN114066981A (en) Unmanned aerial vehicle ground target positioning method
CN114037895A (en) Unmanned aerial vehicle pole tower inspection image identification method
CN118115564A (en) Fruit tree canopy structure information measurement method and device
CN118052935A (en) Old district deepening design system based on digitalization
CN115239893B (en) Image reconstruction method for detecting defects of solar panel of warehouse ceiling
CN115712940A (en) BIM modeling-based visual power grid fault identification and positioning method and device
CN111143913A (en) Three-dimensional modeling method and system for transformer substation building
CN114972358B (en) Artificial intelligence-based urban surveying and mapping laser point cloud offset detection method
CN104616262A (en) H alpha cloud removing method based on quite sun chromospheres background processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination