CN114659499B - Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology - Google Patents

Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology

Info

Publication number
CN114659499B
Authority
CN
China
Prior art keywords
shooting
aerial vehicle
unmanned aerial
photographing
flight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210417331.XA
Other languages
Chinese (zh)
Other versions
CN114659499A (en)
Inventor
罗顺
王利
刘宁
韩建
李睿之
王新仁
周鹏耀
方超
冯莉莎
郑国平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Shangyou Technology Co ltd
Original Assignee
Chongqing Shangyou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Shangyou Technology Co ltd filed Critical Chongqing Shangyou Technology Co ltd
Priority to CN202210417331.XA priority Critical patent/CN114659499B/en
Publication of CN114659499A publication Critical patent/CN114659499A/en
Application granted granted Critical
Publication of CN114659499B publication Critical patent/CN114659499B/en


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a smart city 3D map model photography establishing method based on unmanned aerial vehicle technology. Multiple photographing modes are set for the aerial photography of buildings in an area to be photographed by an unmanned aerial vehicle, and an accurate and reliable photographing position parameter analysis mode is provided for each photographing mode based on the overall dimensions of the building to be photographed. The building live-action pictures obtained under the multiple photographing modes and their corresponding photographing position parameters therefore reflect multiple viewing angles and present a full view of the building, realizing comprehensive, high-quality imaging of the building live-action. This effectively overcomes the building live-action aerial photography defects caused in the prior art by an overly single photographing mode and an overly random selection of photographing positions; it improves the quality of the building live-action pictures on the one hand and the visual effect of building live-action imaging on the other, thereby well meeting the 3D map generation requirements of urban buildings.

Description

Smart city 3D map model photography establishing method based on unmanned aerial vehicle technology
Technical Field
The invention belongs to the technical field of urban map building, and particularly relates to a smart city 3D map model photography establishing method based on unmanned aerial vehicle technology.
Background
City planning is a series of deployments made by a city government based on comprehensive basic data such as the natural characteristics, current development situation, resource structure, regional position and historical conditions of an urban area. In recent years, with the rapid development of China's economy and the continuous acceleration of urbanization, people have increasingly realized the importance of city planning management. The first step of city planning is to acquire a live-action map of the area to be planned, which is specifically divided into a road live-action map, a vegetation live-action map and a building live-action map.
As is known, a live-action map is acquired by first obtaining live-action pictures through photography and then generating the live-action map from those pictures. Among the live-action map categories, however, the live-action pictures of buildings cannot be photographed manually because buildings are huge in appearance; they can only be photographed aerially by an unmanned aerial vehicle. The aerial photography quality of the unmanned aerial vehicle therefore directly affects the generation effect of the building live-action map, and in this context it has become the focus of attention for 3D map generation of urban buildings.
However, when an unmanned aerial vehicle in the prior art takes aerial photographs of buildings in an area to be photographed, on the one hand the photographing mode it adopts is single, so the building live-action pictures can only reflect a single viewing angle; the building live-action imaging effect is one-sided, which in turn reduces the visual effect of the live-action pictures to a certain extent. On the other hand, the selection of the photographing position for the building to be photographed lacks an accurate and reliable basis, so the photographing position is selected too randomly and imaging defects often occur in the building live-action pictures, for example an uncoordinated imaging proportion or unphotographed partial areas. This affects the quality of the building live-action pictures and makes it difficult to meet the 3D map generation requirements of urban buildings.
Disclosure of Invention
In view of the above problems, the technical task of the invention is to provide a smart city 3D map model photography establishing method based on unmanned aerial vehicle technology, which can effectively make up for the defects of the prior art in the aerial photography of buildings in an area to be photographed by an unmanned aerial vehicle.
The purpose of the invention can be realized by the following technical scheme:
A smart city 3D map model photography establishing method based on unmanned aerial vehicle technology comprises the following steps:
Step 1: recording an urban area to be subjected to 3D map photography as the area to be photographed, and arranging an atmospheric environment acquisition terminal, an ultrasonic sensor, a three-dimensional scanner, an aerial camera and a GPS (global positioning system) locator on a designated unmanned aerial vehicle;
Step 2: acquiring the outline of the area to be photographed, planning the flight route of the designated unmanned aerial vehicle in the area to be photographed according to the outline, and transmitting the flight route to the designated unmanned aerial vehicle;
Step 3: the designated unmanned aerial vehicle executes flight operation according to the transmitted flight route, and during flight acquires in real time, through the atmospheric environment acquisition terminal, the atmospheric environment parameters at its current aerial position, so as to judge whether the flight height of the designated unmanned aerial vehicle at the current aerial position needs to be regulated; when the judgment result is that regulation is needed, the flight height at the current aerial position is regulated;
Step 4: the designated unmanned aerial vehicle senses the front area in real time through the ultrasonic sensor during flight and identifies whether a building exists; if a building exists, the currently identified building is marked as a target building;
Step 5: positioning the geographical position of the target building through the GPS locator, determining the number of photographing orientations required for photographing the panorama of the target building, and numbering the photographing orientations;
Step 6: setting multiple photographing modes, performing photographing position parameter analysis of each photographing mode for each photographing orientation corresponding to the target building, and transmitting the resulting parameters to the designated unmanned aerial vehicle, which executes the photographing operation to obtain photographed pictures of each photographing orientation of the target building in each photographing mode;
Step 7: synthesizing the photographed pictures of all photographing orientations of the target building in all photographing modes to form a panoramic image of the target building, and acquiring the overall dimension data of the target building from the panoramic image;
Step 8: generating a live-action 3D map of the target building based on the overall dimensions, the panoramic image and the geographical position of the target building;
Step 9: performing live-action 3D map generation on each building in the area to be photographed according to the method of steps 4 to 8, and combining the live-action 3D maps corresponding to all buildings to form the building live-action 3D map of the area to be photographed.
In a further technical scheme, the planning process for the flight route of the designated unmanned aerial vehicle in the area to be photographed in step 2 includes:
Step 2-1: acquiring the boundary contour corresponding to the area to be photographed, and counting the area of the region based on the boundary contour;
Step 2-2: equally dividing the area of the region according to the set number of sub-areas to obtain the divided sub-areas;
Step 2-3: acquiring the boundary contour of each sub-area and taking it as the flight path of that sub-area;
Step 2-4: numbering the divided sub-areas according to a predefined sequence, and taking the numbering sequence of the sub-areas as their flight sequence.
In a further technical scheme, the atmospheric environment parameters comprise wind speed and haze concentration.
In a further technical scheme, the specific method for judging whether the flight height of the designated unmanned aerial vehicle at its current aerial position needs to be regulated is as follows:
First step: extracting the wind speed from the atmospheric environment parameters, matching it against the wind speed ranges corresponding to the various wind power levels in a flight database, screening out the wind power level corresponding to the wind speed, and recording it as the adaptive wind power level;
Second step: comparing the adaptive wind power level at the current aerial position with the suitable dead weights at which an unmanned aerial vehicle can fly stably under the various wind power levels stored in the flight database, so as to obtain the suitable dead weight for stable flight under the adaptive wind power level;
Third step: acquiring the dead weight of the designated unmanned aerial vehicle based on its model, comparing it with the suitable dead weight for stable flight under the adaptive wind power level, and calculating the dead weight matching coefficient of the designated unmanned aerial vehicle at the current aerial position;
Fourth step: extracting the haze concentration from the atmospheric environment parameters, matching it against the vertical visibilities corresponding to the various haze concentrations in the flight database, screening out the vertical visibility corresponding to the haze concentration, comparing it with the flight height of the designated unmanned aerial vehicle at the current aerial position, and calculating the visibility matching coefficient of the designated unmanned aerial vehicle at the current aerial position;
Fifth step: comparing the dead weight matching coefficient and the visibility matching coefficient at the current aerial position with the corresponding set values respectively; if both are greater than or equal to the corresponding set values, judging that the flight height at the current aerial position does not need to be regulated; otherwise, judging that it needs to be regulated.
In a further technical scheme, the specific process for regulating the flight height of the designated unmanned aerial vehicle at its current aerial position is as follows:
(1) controlling the designated unmanned aerial vehicle to descend vertically from the flight height corresponding to the current aerial position by the set single descent height;
(2) acquiring, through the atmospheric environment acquisition terminal, the atmospheric environment parameters at the position reached after the descent, obtaining the dead weight matching coefficient and the visibility matching coefficient at that position according to the first to fourth steps, and comparing them with the corresponding set values; if both are greater than or equal to the corresponding set values, judging that the flight height at the current position needs no further regulation; otherwise continuing to descend according to step (1), until it is judged that the flight height after a descent no longer needs regulation, at which point the regulation stops.
In a further technical scheme, the number of photographing orientations required for photographing the panorama of the target building in step 5 is determined by performing three-dimensional scanning on the target building with the three-dimensional scanner and counting the number of observation surfaces of the target building; the number of observation surfaces is the number of photographing orientations required for photographing the panorama of the target building.
In a further technical scheme, the multiple photographing modes are specifically a flat shooting mode and an overhead shooting mode.
In a further technical scheme, the photographing position parameters include photographing height, photographing distance and photographing angle.
In a further technical scheme, the analysis method for the photographing position parameters of each photographing mode for each photographing orientation corresponding to the target building in step 6 is as follows:
Step 6-1: determining the observation surface corresponding to each photographing orientation;
Step 6-2: acquiring the overall dimensions of the observation surface corresponding to each photographing orientation based on the three-dimensional scanning result of the target building;
Step 6-3: performing photographing position parameter analysis of the flat shooting mode for each photographing orientation corresponding to the target building, the analysis method being as follows:
Step 6-3-1: drawing the outline of the observation surface corresponding to each photographing orientation based on its overall dimensions in the target building;
Step 6-3-2: acquiring the central point of the observation surface corresponding to each photographing orientation from its outline, acquiring the height of the horizontal plane of the central point above the ground, and taking this height as the photographing height (ground clearance) of the observation surface corresponding to each photographing orientation in the flat shooting mode;
Step 6-3-3: acquiring the area of the observation surface corresponding to each photographing orientation based on its overall dimensions, analyzing this area against the set imaging proportion of the flat shooting mode to obtain the image area of the observation surface in the imaged picture, and recording it as the imaging area of the observation surface corresponding to each photographing orientation in the flat shooting mode;
Step 6-3-4: counting, from the imaging area of the observation surface corresponding to each photographing orientation in the flat shooting mode and the photographing distance of the aerial camera model from the photographed object per unit imaging area, the photographing distance from the observation surface under that imaging area, and taking it as the photographing distance of the observation surface corresponding to each photographing orientation in the flat shooting mode;
Step 6-3-5: setting the photographing angle of the observation surface corresponding to each photographing orientation in the flat shooting mode to zero degrees;
Step 6-4: performing photographing position parameter analysis of the overhead shooting mode for each photographing orientation corresponding to the target building, the analysis method being as follows:
Step 6-4-1: extracting the height of the observation surface corresponding to each photographing orientation from its overall dimensions;
Step 6-4-2: acquiring the photographing height of the observation surface corresponding to each photographing orientation in the overhead shooting mode from the set safe distance between the aerial camera and the photographed object in the overhead shooting mode and the height of the observation surface;
Step 6-4-3: acquiring the area of the observation surface corresponding to each photographing orientation based on its overall dimensions, analyzing this area against the set imaging proportion of the overhead shooting mode to obtain the image area of the observation surface in the imaged picture, and recording it as the imaging area of the observation surface corresponding to each photographing orientation in the overhead shooting mode;
Step 6-4-4: counting the photographing distance from the observation surface under that imaging area and taking it as the photographing distance of the observation surface corresponding to each photographing orientation in the overhead shooting mode;
Step 6-4-5: importing the photographing height and photographing distance of the observation surface corresponding to each photographing orientation in the overhead shooting mode into the photographing angle calculation formula to obtain the photographing angle of the observation surface corresponding to each photographing orientation in the overhead shooting mode.
In a further technical scheme, the photographing angle calculation formula is
θ = arctan(h / d)
where θ is the photographing angle, h the photographing height and d the photographing distance of the observation surface in the overhead shooting mode.
By combining the above technical schemes, the invention has the following advantages and positive effects:
(1) By setting multiple photographing modes in the process of aerial photography of buildings in the area to be photographed by the unmanned aerial vehicle, and by providing an accurate and reliable photographing position parameter analysis mode for each photographing mode based on the overall dimensions of the building to be photographed, the invention ensures that the building live-action pictures obtained under the multiple photographing modes and their corresponding photographing position parameters reflect multiple viewing angles and present a full view, realizing comprehensive, high-quality imaging of the building live-action. This effectively overcomes the building live-action aerial photography defects caused in the prior art by an overly single photographing mode and an overly random selection of photographing positions; it improves the quality of the building live-action pictures on the one hand and the visual effect of building live-action imaging on the other, thereby well meeting the 3D map generation requirements of urban buildings.
(2) During flight in which the unmanned aerial vehicle is not executing photographing operations, the invention acquires in real time the atmospheric environment parameters at its current aerial position, judges from these parameters whether the flight height at the current aerial position needs to be regulated, and regulates the flight height when the judgment result is that regulation is needed. This realizes flexible and intelligent regulation of the flight height of the unmanned aerial vehicle; the regulation result ensures the flight safety of the unmanned aerial vehicle in real time, provides a safety guarantee for the aerial photography of buildings, and avoids the situation in which subsequent aerial photography cannot be carried out because of flight safety problems during flight.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
FIG. 1 is a flow chart of the steps of a method of the present invention;
FIG. 2 is a schematic view of the photographing triangle in the overhead shooting mode according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, the invention provides a smart city 3D map model photography establishing method based on an unmanned aerial vehicle technology, comprising the following steps:
step 1: the method comprises the following steps of marking an urban area to be subjected to 3D map photographing as an area to be photographed, and arranging an atmospheric environment acquisition terminal, an ultrasonic sensor, a three-dimensional scanner, an aerial photographing camera and a GPS (global positioning system) positioner on a designated unmanned aerial vehicle, wherein the atmospheric environment acquisition terminal is specifically composed of a wind speed detector and a haze detector and is used for acquiring atmospheric environment parameters, the ultrasonic sensor is used for sensing buildings in the area to be photographed, the three-dimensional scanner is used for three-dimensional scanning of the buildings in the area to be photographed, the aerial photographing camera is used for aerial photographing of the buildings in the area to be photographed, and the GPS positioner is used for positioning the geographical position of the buildings in the area to be photographed;
step 2: the method comprises the following steps of obtaining the outline of a region to be photographed, planning the flight route of a designated unmanned aerial vehicle in the region to be photographed according to the outline, and transmitting the flight route to the designated unmanned aerial vehicle, wherein the flight route planning process comprises the following steps:
step 2-1, acquiring a boundary contour corresponding to the designated area, and counting the area of the designated area based on the boundary contour;
step 2-2: according to the set subarea dividing quantity, equally dividing the area of the designated area to obtain each divided subarea;
step 2-3: acquiring the boundary contour of each subregion, and taking the boundary contour as a flight path of each subregion;
step 2-4: numbering the divided sub-regions according to a predefined sequence, and taking the numbering sequence of the sub-regions as the flight sequence of the sub-regions;
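As an illustration of steps 2-1 to 2-4, the following Python sketch divides a region outline into a set number of sub-areas and derives each sub-area's flight path and flight sequence. It is a minimal sketch rather than the patented implementation: the equal-area division into vertical strips, the shapely dependency and the function name plan_flight_routes are assumptions introduced for the example.

```python
# Sketch of steps 2-1 to 2-4: divide the area to be photographed into
# a set number of sub-areas and derive per-sub-area flight paths.
from shapely.geometry import Polygon, box

def plan_flight_routes(outline_coords, num_subareas):
    region = Polygon(outline_coords)          # step 2-1: boundary contour
    total_area = region.area                  # step 2-1: area of the region
    minx, miny, maxx, maxy = region.bounds
    target = total_area / num_subareas        # step 2-2: equal-area division
    step = (maxx - minx) / 1000.0

    routes, left = [], minx
    for i in range(num_subareas):
        right = maxx if i == num_subareas - 1 else left
        # widen the strip until it holds 1/num_subareas of the total area
        while right < maxx and region.intersection(
                box(left, miny, right, maxy)).area < target:
            right += step
        sub = region.intersection(box(left, miny, right, maxy))
        # step 2-3: the sub-area boundary contour serves as its flight path
        path = list(sub.exterior.coords) if sub.geom_type == "Polygon" else []
        # step 2-4: number the sub-areas; they are flown in numbered order
        routes.append({"number": i + 1, "path": path})
        left = right
    return routes
```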
and step 3: the designated unmanned aerial vehicle carries out flight operation according to the flight route that conveys to gather the atmospheric environment parameter of designated unmanned aerial vehicle at the present aerial position of locating in real time through atmospheric environment collection terminal at the flight in-process, wherein atmospheric environment parameter includes wind speed and haze concentration, judges from this whether the flight height of designated unmanned aerial vehicle at the present aerial flight position of locating needs the regulation and control, its concrete judgement method is as follows:
First step: extracting the wind speed from the atmospheric environment parameters, matching it against the wind speed ranges corresponding to the various wind power levels in a flight database, screening out the wind power level corresponding to the wind speed, and recording it as the adaptive wind power level;
Second step: comparing the adaptive wind power level at the current aerial position with the suitable dead weights at which an unmanned aerial vehicle can fly stably under the various wind power levels stored in the flight database, so as to obtain the suitable dead weight for stable flight under the adaptive wind power level;
the third step: the model based on the designated unmanned aerial vehicle obtains the dead weight of the designated unmanned aerial vehicle, compares the dead weight with the proper dead weight of the unmanned aerial vehicle capable of flying stably under the adaptive wind power level, calculates the dead weight matching coefficient of the designated unmanned aerial vehicle at the current position in the air, and has the calculation formula of
Figure BDA0003606545510000111
Eta is the self-weight matching coefficient of the designated unmanned aerial vehicle at the current position in the air, G is the self-weight of the designated unmanned aerial vehicle, and G is 0 Expressed as adapting to windThe unmanned aerial vehicle can fly stably under the proper dead weight;
It should be noted that, in the dead weight matching coefficient calculation formula, the smaller the difference between the dead weight of the designated unmanned aerial vehicle and the suitable dead weight for stable flight under the adaptive wind power level, the larger the dead weight matching coefficient;
the fourth step: extracting haze concentration from atmospheric environment parameters, comparing the haze concentration with vertical visibility corresponding to various haze concentrations in a flight database, screening out the vertical visibility corresponding to the haze concentration, comparing the vertical visibility with the flight height of the designated unmanned aerial vehicle at the current aerial position, calculating the visibility matching coefficient of the designated unmanned aerial vehicle at the current aerial position, wherein the calculation formula is
Figure BDA0003606545510000112
Sigma is a visibility matching coefficient of the designated unmanned aerial vehicle at the current air position, H is the flying height of the designated unmanned aerial vehicle at the current air position, and H is 0 The vertical visibility corresponding to the haze concentration is expressed;
It should be noted that, in the visibility matching coefficient calculation formula, the closer the flight height of the designated unmanned aerial vehicle at the current aerial position is to the vertical visibility corresponding to the haze concentration, the larger the visibility matching coefficient;
the fifth step: comparing the self-weight matching coefficient and the visibility matching coefficient of the designated unmanned aerial vehicle at the current air position with corresponding set values respectively, if the self-weight matching coefficient and the visibility matching coefficient are both greater than or equal to the corresponding set values, judging that the flight height of the designated unmanned aerial vehicle at the current air flight position does not need to be regulated, otherwise, judging that the flight height of the designated unmanned aerial vehicle at the current air flight position needs to be regulated;
When the judgment result is that regulation is needed, the flight height of the designated unmanned aerial vehicle at the current aerial position is regulated; the specific regulation process, sketched in code below, is as follows:
(1) controlling the designated unmanned aerial vehicle to descend vertically from the flight height corresponding to the current aerial position by the set single descent height;
(2) acquiring, through the atmospheric environment acquisition terminal, the atmospheric environment parameters at the position reached after the descent, obtaining the dead weight matching coefficient and the visibility matching coefficient at that position according to the first to fourth steps above, and comparing them with the corresponding set values; if both are greater than or equal to the corresponding set values, judging that the flight height at the current position needs no further regulation; otherwise continuing to descend according to step (1), until it is judged that the flight height after a descent no longer needs regulation, at which point the regulation stops;
In the embodiment of the invention, during flight in which the unmanned aerial vehicle is not executing photographing operations, the atmospheric environment parameters at its current aerial position are acquired in real time, whether the flight height at the current aerial position needs to be regulated is judged from these parameters, and the flight height is regulated when the judgment result is that regulation is needed. This realizes flexible and intelligent regulation of the flight height of the unmanned aerial vehicle; the regulation result ensures flight safety in real time, provides a safety guarantee for the aerial photography of buildings, and avoids the situation in which subsequent aerial photography cannot be carried out because of flight safety problems during flight;
and 4, step 4: the designated unmanned aerial vehicle senses a front area in real time through an ultrasonic sensor in the flight process, identifies whether a building exists, and marks the currently identified building as a target building if the building exists;
In a specific embodiment, the reason the ultrasonic sensor is used to identify buildings in the front area is that buildings are generally tall. The ultrasonic sensor of the unmanned aerial vehicle continuously emits ultrasonic signals toward the front area during flight; a signal that strikes a building is reflected back immediately, and when the ultrasonic sensor receives the reflected wave this indicates that a building exists in the front area;
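As a rough illustration of this echo-based detection (a sketch only; the speed of sound and the maximum detection range are illustrative values, not taken from the patent):

```python
# Sketch: infer a building ahead from the ultrasonic echo round-trip time.
SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def building_ahead(echo_delay_s, max_range_m=150.0):
    """echo_delay_s is the pulse round-trip time, or None if no echo returned.

    Returns the estimated distance to the obstacle ahead, or None when no
    building is detected within max_range_m.
    """
    if echo_delay_s is None:
        return None
    distance = SPEED_OF_SOUND * echo_delay_s / 2.0  # out-and-back path
    return distance if distance <= max_range_m else None
```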
and 5: positioning the geographical position of a target building through a GPS positioning instrument, determining the number of photographing directions required for photographing the panorama of the target building, and numbering the photographing directions;
The number of photographing orientations required for photographing the panorama of the target building is determined by performing three-dimensional scanning on the target building with the three-dimensional scanner and counting the number of observation surfaces of the target building; the number of observation surfaces is the number of photographing orientations required for photographing the panorama of the target building;
Preferably, the purpose of analyzing the observation surfaces corresponding to the target building is to ensure that every observation surface of the target building is photographed, so that the imaging integrity of the target building is not affected by a missed observation surface;
step 6: setting various shooting modes, wherein the various shooting modes are specifically a flat shooting mode and a top shooting mode, so as to analyze the shooting position parameters of the various shooting modes for each shooting direction corresponding to a target building, wherein the mentioned shooting position parameters comprise shooting height, shooting distance and shooting angle, and are transmitted to a designated unmanned aerial vehicle, and shooting operation is executed by the designated unmanned aerial vehicle, so that shooting pictures of each shooting direction corresponding to the target building in the various shooting modes are obtained;
The reason the upward shooting mode is not considered when setting the photographing modes is that in the upward shooting mode the photographing height of the aerial camera is lower than the height of the photographed object, so the flight height of the unmanned aerial vehicle would be at a lower level; this increases the probability of touching an obstacle, affecting both the flight safety of the unmanned aerial vehicle and the safety of pedestrians on the ground;
The analysis method for the photographing position parameters of each photographing mode for each photographing orientation corresponding to the target building is as follows:
step 6-1: determining an observation surface corresponding to each shooting direction;
step 6-2: acquiring the appearance size of an observation surface corresponding to each photographing direction based on a three-dimensional scanning result corresponding to the target building;
step 6-3: the method for analyzing the shooting position parameters of the horizontal shooting mode is to analyze each shooting azimuth corresponding to the target building, and comprises the following steps:
step 6-3-1: outlook of observation surface corresponding to each shooting direction is sketched based on the outline size of observation surface corresponding to each shooting direction in the target building;
step 6-3-2: acquiring a central point of an observation surface corresponding to each photographic azimuth according to the outline of the observation surface corresponding to each photographic azimuth in the target building, acquiring the height of the horizontal plane where the central point is located from the ground, and taking the height as the photographic ground clearance of the observation surface corresponding to each photographic azimuth in the plain photography mode;
step 6-3-3: acquiring the area of the observation surface corresponding to each photographic orientation based on the overall dimension of the observation surface corresponding to each photographic orientation in the target building, analyzing the area and the imaging proportion in the set plain photography mode to obtain the image area of the observation surface corresponding to each photographic orientation in an imaging picture, and recording the image area as the imaging area of the observation surface corresponding to each photographic orientation in the plain photography mode;
illustratively, the imaging ratio in the panning mode is set to 1:k, and the area of the observation plane corresponding to each photographing orientation is denoted as s, in which case the imaging area of the observation plane corresponding to each photographing orientation in the panning mode can be expressed as
Figure BDA0003606545510000141
Step 6-3-4: counting the shooting distance of each shooting orientation from the observation surface under the imaging area according to the imaging area of the observation surface corresponding to each shooting orientation under the flat shooting mode and the shooting distance of the model of the aerial camera from the object under the unit imaging area, wherein the specific statistical mode is that the imaging area of the observation surface corresponding to each shooting orientation under the flat shooting mode is multiplied by the shooting distance of the model of the aerial camera from the object under the unit imaging area to obtain the shooting distance of each shooting orientation from the observation surface under the imaging area, and the shooting distance is used as the shooting distance of the observation surface corresponding to each shooting orientation under the flat shooting mode;
step 6-3-5: the flat shooting mode is characterized in that the camera and the shot main body are positioned on the same horizontal line, and the shooting angle of the observation surface corresponding to each shooting direction in the flat shooting mode is adjusted to be zero degree;
and 6-4: the method for analyzing the shooting position parameters of the overhead shooting mode is to carry out the analysis of each shooting azimuth corresponding to the target building, and comprises the following steps:
step 6-4-1: extracting the height of the observation surface corresponding to each photographic direction from the overall dimension of the observation surface corresponding to each photographic direction;
step 6-4-2: acquiring the photographing height of the observation surface corresponding to each photographing direction in the down-shooting mode according to the set safe distance between the aerial camera and the object to be photographed in the down-shooting mode and the height of the observation surface corresponding to each photographing direction, wherein specifically, the photographing height of the observation surface corresponding to each photographing direction in the down-shooting mode can be represented as the height of the observation surface plus the safe distance between the aerial camera and the object to be photographed;
step 6-4-3: acquiring the area of the observation surface corresponding to each photographing direction based on the overall dimension of the observation surface corresponding to each photographing direction in the target building, analyzing the area and the imaging proportion in the set top-down mode to obtain the image area of the observation surface corresponding to each photographing direction in an imaging picture, and recording the image area as the imaging area of the observation surface corresponding to each photographing direction in the top-down mode;
step 6-4-4: counting the shooting distance from each shooting position to the observation plane under the imaging area by using the imaging area of the observation plane corresponding to each shooting position under the overhead shooting mode and the shooting distance from the model of the aerial camera to the shot object under the shooting height corresponding to the single imaging area, and taking the shooting distance as the shooting distance of the observation plane corresponding to each shooting position under the overhead shooting mode;
step 6-4-5: referring to fig. 2, a photographing triangle of the observation plane corresponding to each photographing azimuth in the target building is constructed based on the photographing height and photographing distance of the observation plane corresponding to each photographing azimuth in the top view mode, thereby guiding the photographing height and photographing distance of the observation plane corresponding to each photographing azimuth in the target building in the top view mode according to the right angle attribute of the photographing triangleObtaining the shooting angle of the observation plane corresponding to each shooting direction in the top shooting mode in a shooting angle calculation formula
Figure BDA0003606545510000161
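Analogously, the overhead shooting analysis of steps 6-4-1 to 6-4-5 can be sketched as follows; the arctan form of the photographing angle is the reconstruction from the right-angle property of the photographing triangle used above, and the remaining modelling assumptions match the flat shooting sketch.

```python
# Sketch of steps 6-4-1 to 6-4-5: overhead shooting position parameters
# for one observation surface of the target building.
import math

def overhead_shot_parameters(surface_width, surface_height, safe_distance,
                             imaging_ratio_k, dist_per_unit_imaging_area):
    """Return (photographing height, photographing distance, angle in degrees)."""
    # 6-4-1/6-4-2: hover above the surface top by the set safe distance
    shoot_height = surface_height + safe_distance

    # 6-4-3: imaging area from the surface area and the 1:k imaging proportion
    imaging_area = (surface_width * surface_height) / imaging_ratio_k

    # 6-4-4: distance = imaging area x camera distance per unit imaging area
    shoot_distance = imaging_area * dist_per_unit_imaging_area

    # 6-4-5: photographing angle from the right-angled photographing triangle
    angle = math.degrees(math.atan2(shoot_height, shoot_distance))
    return shoot_height, shoot_distance, angle
```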
And 7: synthesizing the camera pictures of the target building corresponding to all the shooting directions in all the shooting modes to form a panoramic image of the target building, and acquiring the appearance size data of the target building according to the panoramic image;
and 8: generating a live-action 3D map of the target building based on the appearance size, the panoramic image and the geographic position of the target building;
and step 9: and 4-8, performing live-action 3D map generation on all buildings in the designated area according to the method in the step 4-8, and further combining the live-action 3D maps corresponding to all the buildings to form a building live-action 3D map corresponding to the area to be photographed.
According to the embodiment of the invention, various photographing modes are set in the process of aerial photographing of the building in the region to be photographed by the unmanned aerial vehicle, and an accurate and reliable photographing position parameter analysis mode is provided for the various photographing modes based on the overall dimensions of the building to be photographed, so that the building scene picture to be photographed obtained under the various photographing modes and the corresponding photographing position parameters can reflect multiple visual angles and can present a full view, the comprehensive high-quality imaging of the building scene is realized, the building scene aerial photographing defect caused by the fact that the photographing mode is too single and the photographing position is too randomly selected in the prior art is effectively overcome, the quality of the building scene picture is improved on one hand, the visual effect of the building scene imaging is improved on the other hand, and the generation requirement of the 3D map of the urban building is greatly met.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (9)

1. A smart city 3D map model photography establishing method based on unmanned aerial vehicle technology is characterized by comprising the following steps:
step 1: recording an urban area to be subjected to 3D map photography as an area to be photographed, and arranging an atmospheric environment acquisition terminal, an ultrasonic sensor, a three-dimensional scanner, an aerial photography camera and a GPS locator on a designated unmanned aerial vehicle;
step 2: acquiring the outline of the area to be photographed, planning the flight route of the designated unmanned aerial vehicle in the area to be photographed according to the outline, and transmitting the flight route to the designated unmanned aerial vehicle;
Step 3: the designated unmanned aerial vehicle executes flight operation according to the transmitted flight route, and during flight acquires in real time, through the atmospheric environment acquisition terminal, the atmospheric environment parameters at its current aerial position, so as to judge whether the flight height of the designated unmanned aerial vehicle at the current aerial position needs to be regulated; when the judgment result is that regulation is needed, the flight height at the current aerial position is regulated;
Step 4: the designated unmanned aerial vehicle senses the front area in real time through the ultrasonic sensor during flight and identifies whether a building exists; if a building exists, the currently identified building is marked as a target building;
Step 5: positioning the geographical position of the target building through the GPS locator, determining the number of photographing orientations required for photographing the panorama of the target building, and numbering the photographing orientations;
Step 6: setting multiple photographing modes, performing photographing position parameter analysis of each photographing mode for each photographing orientation corresponding to the target building, and transmitting the resulting parameters to the designated unmanned aerial vehicle, which executes the photographing operation to obtain photographed pictures of each photographing orientation of the target building in each photographing mode;
Step 7: synthesizing the photographed pictures of all photographing orientations of the target building in all photographing modes to form a panoramic image of the target building, and acquiring the overall dimension data of the target building from the panoramic image;
Step 8: generating a live-action 3D map of the target building based on the overall dimensions, the panoramic image and the geographical position of the target building;
Step 9: performing live-action 3D map generation on each building in the area to be photographed according to the method of steps 4 to 8, and combining the live-action 3D maps corresponding to all buildings to form the building live-action 3D map of the area to be photographed;
The planning process for the flight route of the designated unmanned aerial vehicle in the area to be photographed in step 2 includes:
Step 2-1: acquiring the boundary contour corresponding to the area to be photographed, and counting the area of the region based on the boundary contour;
Step 2-2: equally dividing the area of the region according to the set number of sub-areas to obtain the divided sub-areas;
Step 2-3: acquiring the boundary contour of each sub-area and taking it as the flight path of that sub-area;
Step 2-4: numbering the divided sub-areas according to a predefined sequence, and taking the numbering sequence of the sub-areas as their flight sequence.
2. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: the atmospheric environment parameters comprise wind speed and haze concentration.
3. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: the specific method for judging whether the flight height of the designated unmanned aerial vehicle at its current aerial position needs to be regulated is as follows:
First step: extracting the wind speed from the atmospheric environment parameters, matching it against the wind speed ranges corresponding to the various wind power levels in a flight database, screening out the wind power level corresponding to the wind speed, and recording it as the adaptive wind power level;
Second step: comparing the adaptive wind power level at the current aerial position with the suitable dead weights at which an unmanned aerial vehicle can fly stably under the various wind power levels stored in the flight database, so as to obtain the suitable dead weight for stable flight under the adaptive wind power level;
Third step: acquiring the dead weight of the designated unmanned aerial vehicle based on its model, comparing it with the suitable dead weight for stable flight under the adaptive wind power level, and calculating the dead weight matching coefficient of the designated unmanned aerial vehicle at the current aerial position;
Fourth step: extracting the haze concentration from the atmospheric environment parameters, matching it against the vertical visibilities corresponding to the various haze concentrations in the flight database, screening out the vertical visibility corresponding to the haze concentration, comparing it with the flight height of the designated unmanned aerial vehicle at the current aerial position, and calculating the visibility matching coefficient of the designated unmanned aerial vehicle at the current aerial position;
Fifth step: comparing the dead weight matching coefficient and the visibility matching coefficient at the current aerial position with the corresponding set values respectively; if both are greater than or equal to the corresponding set values, judging that the flight height at the current aerial position does not need to be regulated; otherwise, judging that it needs to be regulated.
4. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: the specific process for regulating the flight height of the designated unmanned aerial vehicle at its current aerial position is as follows:
(1) controlling the designated unmanned aerial vehicle to descend vertically from the flight height corresponding to the current aerial position by the set single descent height;
(2) acquiring, through the atmospheric environment acquisition terminal, the atmospheric environment parameters at the position reached after the descent, obtaining the dead weight matching coefficient and the visibility matching coefficient at that position according to the first to fourth steps, and comparing them with the corresponding set values; if both are greater than or equal to the corresponding set values, judging that the flight height at the current position needs no further regulation; otherwise continuing to descend according to step (1), until it is judged that the flight height after a descent no longer needs regulation, at which point the regulation stops.
5. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: in step 5, the number of photographing orientations required for photographing the panorama of the target building is determined by performing three-dimensional scanning on the target building with the three-dimensional scanner and counting the number of observation surfaces of the target building, the number of observation surfaces being the number of photographing orientations required for photographing the panorama of the target building.
6. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: the multiple photographing modes are specifically a flat shooting mode and an overhead shooting mode.
7. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: the photographing position parameters include a photographing height, a photographing distance, and a photographing angle.
8. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 1, wherein: in step 6, the photographing position parameters of each photographing mode are analyzed, for each photographing orientation corresponding to the target building, as follows (a code sketch of both modes is given after step 6-4-5):
step 6-1: determine the observation surface corresponding to each photographing orientation;
step 6-2: acquire the overall dimensions of the observation surface corresponding to each photographing orientation based on the three-dimensional scanning result of the target building;
step 6-3: analyze the photographing position parameters of the flat-shooting mode for each photographing orientation corresponding to the target building, the analysis comprising the following steps:
step 6-3-1: draw the outline of the observation surface corresponding to each photographing orientation based on the overall dimensions of that observation surface in the target building;
step 6-3-2: obtain the center point of the observation surface corresponding to each photographing orientation from its outline, obtain the height above the ground of the horizontal plane in which the center point lies, and take this height as the photographing height of the observation surface corresponding to each photographing orientation in the flat-shooting mode;
step 6-3-3: obtain the area of the observation surface corresponding to each photographing orientation based on its overall dimensions, combine this area with the imaging proportion set for the flat-shooting mode to obtain the image area that the observation surface occupies in the imaging picture, and record it as the imaging area of the observation surface corresponding to each photographing orientation in the flat-shooting mode;
step 6-3-4: from the imaging area of the observation surface corresponding to each photographing orientation in the flat-shooting mode and the shooting distance of the aerial camera model from the photographed object per unit imaging area, compute the shooting distance of each photographing orientation from its observation surface at that imaging area, and take it as the photographing distance of the observation surface corresponding to each photographing orientation in the flat-shooting mode;
step 6-3-5: set the photographing angle of the observation surface corresponding to each photographing orientation in the flat-shooting mode to zero degrees;
step 6-4: analyze the photographing position parameters of the top-shooting mode for each photographing orientation corresponding to the target building, the analysis comprising the following steps:
step 6-4-1: extract the height of the observation surface corresponding to each photographing orientation from its overall dimensions;
step 6-4-2: obtain the photographing height of the observation surface corresponding to each photographing orientation in the top-shooting mode from the set safe distance between the aerial camera and the photographed object in the top-shooting mode together with the height of that observation surface;
step 6-4-3: obtain the area of the observation surface corresponding to each photographing orientation based on its overall dimensions, combine this area with the imaging proportion set for the top-shooting mode to obtain the image area that the observation surface occupies in the imaging picture, and record it as the imaging area of the observation surface corresponding to each photographing orientation in the top-shooting mode;
step 6-4-4: compute the shooting distance of each photographing orientation from its observation surface at that imaging area, and take it as the photographing distance of the observation surface corresponding to each photographing orientation in the top-shooting mode;
step 6-4-5: import the photographing height and the photographing distance of the observation surface corresponding to each photographing orientation in the target building in the top-shooting mode into the photographing angle calculation formula to obtain the photographing angle of the observation surface corresponding to each photographing orientation in the top-shooting mode.
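A compact Python sketch of both parameter analyses. The imaging proportion, the camera model's shooting distance per unit imaging area, and the safe distance are placeholder values (the claim sets them but gives no numbers); reading step 6-4-2 as "camera above the surface top by the safe distance" and the arctangent form of the angle are also assumptions, the latter discussed under claim 9 below.

```python
import math
from dataclasses import dataclass

@dataclass
class ShotParams:
    height_m: float     # photographing height above the ground
    distance_m: float   # photographing distance from the observation surface
    angle_deg: float    # photographing angle (downward tilt)

IMAGING_PROPORTION = 0.8      # assumed imaging-proportion setting
DIST_PER_UNIT_AREA_M = 0.05   # assumed camera-model distance per m^2 of imaging area
SAFE_DISTANCE_M = 10.0        # assumed safe distance for the top-shooting mode

def imaging_distance(face_area_m2: float) -> float:
    """Steps 6-3-3/6-3-4 and 6-4-3/6-4-4: imaging area from the set imaging
    proportion, then shooting distance from the camera model's distance per
    unit imaging area."""
    imaging_area = face_area_m2 * IMAGING_PROPORTION
    return imaging_area * DIST_PER_UNIT_AREA_M

def flat_shot(face_w_m: float, face_h_m: float, center_height_m: float) -> ShotParams:
    """Flat-shooting mode (steps 6-3-1 .. 6-3-5): camera level with the
    observation surface's center point; photographing angle fixed at zero."""
    return ShotParams(center_height_m, imaging_distance(face_w_m * face_h_m), 0.0)

def top_shot(face_w_m: float, face_h_m: float) -> ShotParams:
    """Top-shooting mode (steps 6-4-1 .. 6-4-5). Placing the camera above the
    surface top by the safe distance, and the arctangent angle formula, are
    assumed readings (the formula is only an image in the source)."""
    height = face_h_m + SAFE_DISTANCE_M
    distance = imaging_distance(face_w_m * face_h_m)
    angle = math.degrees(math.atan2(height, distance))
    return ShotParams(height, distance, angle)

# Example: a 20 m x 30 m facade whose center point is 15 m above the ground.
print(flat_shot(20.0, 30.0, 15.0))  # ShotParams(height_m=15.0, distance_m=24.0, angle_deg=0.0)
print(top_shot(20.0, 30.0))         # camera 40 m up, 24 m out, tilted ~59 degrees down
```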
9. The smart city 3D map model photography establishing method based on unmanned aerial vehicle technology as claimed in claim 8, wherein: the photographing angle is calculated by the formula given in the original document only as an image (FDA0004057106420000071).
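Because the formula survives only as an image, its exact form cannot be restored from the text. Under the same assumed geometry as the top-shooting sketch above, with photographing height H, photographing distance D, and the photographing angle taken as the camera's downward tilt from the horizontal, one plausible form is:

```latex
\theta = \arctan\!\left(\frac{H}{D}\right)
```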
CN202210417331.XA 2022-04-20 2022-04-20 Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology Active CN114659499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210417331.XA CN114659499B (en) 2022-04-20 2022-04-20 Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210417331.XA CN114659499B (en) 2022-04-20 2022-04-20 Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology

Publications (2)

Publication Number Publication Date
CN114659499A (en) 2022-06-24
CN114659499B (en) 2023-04-07

Family

ID=82037649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210417331.XA Active CN114659499B (en) 2022-04-20 2022-04-20 Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology

Country Status (1)

Country Link
CN (1) CN114659499B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115326023B (en) * 2022-10-14 2023-01-13 长春市应天网络有限公司 Land measurement preprocessing method based on unmanned aerial vehicle image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6962775B2 (en) * 2017-10-24 2021-11-05 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co., Ltd Information processing equipment, aerial photography route generation method, program, and recording medium
CN113936108A (en) * 2021-09-23 2022-01-14 广东工贸职业技术学院 Unmanned aerial vehicle shooting and reconstruction method and device for building facade fine modeling

Also Published As

Publication number Publication date
CN114659499A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN109765930B (en) Unmanned aerial vehicle vision navigation
US11070725B2 (en) Image processing method, and unmanned aerial vehicle and system
CN106767706B (en) A kind of unmanned plane reconnoitres the Aerial Images acquisition method and system of the scene of a traffic accident
CN113038016B (en) Unmanned aerial vehicle image acquisition method and unmanned aerial vehicle
WO2022078240A1 (en) Camera precise positioning method applied to electronic map, and processing terminal
CN106878687A (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN115439424A (en) Intelligent detection method for aerial video image of unmanned aerial vehicle
CN111540048A (en) Refined real scene three-dimensional modeling method based on air-ground fusion
CN114373138A (en) Full-automatic unmanned aerial vehicle inspection method and system for high-speed railway
CN109520500A (en) One kind is based on the matched accurate positioning of terminal shooting image and streetscape library acquisition method
CN206611521U (en) A kind of vehicle environment identifying system and omni-directional visual module based on multisensor
CN112326686A (en) Unmanned aerial vehicle intelligent cruise pavement disease detection method, unmanned aerial vehicle and detection system
CN112113542A (en) Method for checking and accepting land special data for aerial photography construction of unmanned aerial vehicle
CN115170753B (en) Three-dimensional modeling processing method based on unmanned aerial vehicle oblique photography
CN108646727A (en) A kind of vision cradle and its localization method and recharging method
CN114020002A (en) Method, device and equipment for inspecting fan blade by unmanned aerial vehicle, unmanned aerial vehicle and medium
CN113298035A (en) Unmanned aerial vehicle electric power tower detection and autonomous cruise method based on image recognition
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN110009675A (en) Generate method, apparatus, medium and the equipment of disparity map
CN114659499B (en) Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology
CN114035606A (en) Pole tower inspection system, pole tower inspection method, control device and storage medium
CN116030194A (en) Air-ground collaborative live-action three-dimensional modeling optimization method based on target detection avoidance
CN117456092A (en) Three-dimensional live-action modeling system and method based on unmanned aerial vehicle aerial survey
CN109702747A (en) A kind of robot dog system and its implementation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 401120, 27th floor, Building 1, No. 3 Yangliu Road, Dazhulin Street, Liangjiang New District, Chongqing (cluster registration)

Patentee after: Chongqing Shangyou Technology Co.,Ltd.

Address before: 401120 No. 62, Xingguang Avenue, North New Area, Liangjiang New Area, Yubei District, Chongqing (No. 1, floor 7, zone B, Neptune science and technology building)

Patentee before: Chongqing Shangyou Technology Co.,Ltd.