CN116433853B - Aerial survey waypoint generation method and device based on a real-scene model - Google Patents

Aerial survey waypoint generation method and device based on a real-scene model

Publication number: CN116433853B
Authority: CN (China)
Prior art keywords: angle, visual angle, sampling point, navigation, point
Legal status: Active
Application number: CN202310707820.3A
Other languages: Chinese (zh)
Other versions: CN116433853A
Inventors: 黄惠, 陈鑫, 付鸫
Current assignee: Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen; Shenzhen University
Original assignee: Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen; Shenzhen University
Application filed by Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Shenzhen and Shenzhen University
Priority: CN202310707820.3A
Publication of application: CN116433853A
Grant publication: CN116433853B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/70 - Determining position or orientation of objects or cameras

Abstract

The invention discloses an aerial survey waypoint generation method and device based on a real-scene model. A real-scene model is constructed for the region to be reconstructed, sampling points are generated on the model, a plurality of view angles are generated for each sampling point, view angles that do not meet preset conditions are adjusted, and only a view angle that meets the preset conditions becomes an aerial survey waypoint. Invalid waypoints are thus avoided and the accuracy of each waypoint is improved; because each sampling point can provide several waypoints, there are more degrees of freedom when planning the aerial survey trajectory of the unmanned aerial vehicle so as to satisfy its performance limits, ensuring that the unmanned aerial vehicle can capture a high-quality reconstruction image of each sampling point and improving the quality of three-dimensional reconstruction.

Description

Aerial survey waypoint generation method and device based on a real-scene model
Technical Field
The invention relates to the technical field of aerial surveying and mapping, and in particular to an aerial survey waypoint generation method and device based on a real-scene model.
Background
High-quality three-dimensional reconstruction depends on the quality of the input images. To obtain high-quality reconstruction images that cover the scene, the aerial survey trajectory of the unmanned aerial vehicle must be planned accurately. The aerial survey trajectory is formed by connecting a number of waypoints, so the accuracy of the waypoints determines the quality of the reconstruction images.
At present, waypoints are generated by simply mapping sampling points to waypoints according to the flight height and shooting view angle of the unmanned aerial vehicle, without considering the safety of the vehicle or the visibility of the sampling points; the generated waypoints are therefore inaccurate, and the quality of the three-dimensional reconstruction is low.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The main purpose of the invention is to provide an aerial survey waypoint generation method and device based on a real-scene model, an intelligent terminal and a storage medium, which can solve the problems of inaccurate waypoints and low three-dimensional reconstruction quality in the prior art.
To achieve the above object, a first aspect of the present invention provides an aerial survey waypoint generation method based on a real-scene model, the method comprising:
constructing a real-scene model based on a region to be reconstructed, and generating a plurality of sampling points on the real-scene model;
acquiring the azimuth of each sampling point on the real-scene model, and generating a plurality of view angles for each sampling point based on the azimuth;
if a view angle meets a preset condition, setting the view angle as a waypoint and adding it to a waypoint set;
for each view angle that does not meet the preset condition, adjusting the view angle with a view-angle adjustment strategy to obtain an adjusted view angle, and, if the adjusted view angle meets the preset condition, setting it as a waypoint and adding it to the waypoint set;
and outputting the waypoint set.
Optionally, acquiring the azimuth of each sampling point on the real-scene model and generating a plurality of view angles for each sampling point based on the azimuth includes:
acquiring the viewing distance of the unmanned aerial vehicle's camera;
setting the point located at the viewing distance from the sampling point along its normal direction as the first view angle, whose direction is opposite to the normal direction of the sampling point;
if the azimuth is the top of the real-scene model, adjusting the height of the first view angle or the angle between the direction of the first view angle and the normal direction of the sampling point to obtain a plurality of second view angles;
and if the azimuth is a side face of the real-scene model, adjusting the angle between the direction of the first view angle and the normal direction of the sampling point to obtain a plurality of third view angles.
Optionally, several view-angle adjustment strategies are provided; adjusting the view angle with a view-angle adjustment strategy to obtain an adjusted view angle, and, if the adjusted view angle meets the preset condition, setting it as a waypoint and adding it to the waypoint set, includes:
setting the first view-angle adjustment strategy as the current view-angle adjustment strategy;
adjusting the view angle according to the current view-angle adjustment strategy to obtain an adjusted view angle, and, if the adjusted view angle meets the preset condition, setting it as a waypoint and adding it to the waypoint set;
otherwise, setting the next view-angle adjustment strategy as the current view-angle adjustment strategy and returning to the adjusting step, until all view-angle adjustment strategies have been executed.
Optionally, the view-angle adjustment strategy adjusts one or more of: the height of the view angle, the distance between the view angle and the sampling point, and the angle between the direction of the view angle and the normal direction of the sampling point.
Optionally, adjusting the view angle with a view-angle adjustment strategy to obtain an adjusted view angle includes:
determining a sphere center according to the longitude and latitude of the sampling point and the height of the view angle;
constructing a sphere with that center and with the viewing distance of the unmanned aerial vehicle's camera as the radius;
obtaining a region of interest on the sphere, centered on the view angle;
and selecting a plurality of points in the region of interest to obtain the adjusted view angles.
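The sphere-based adjustment described above can be sketched as follows, assuming the sampling point's longitude and latitude have already been projected to planar X-Y coordinates; the ROI half-angle, candidate count, and all names are illustrative, not from the patent:

```python
import math
import random

def ring_adjust_candidates(sample_xy, view_height, view_pos, viewing_distance,
                           roi_half_angle_deg=20.0, count=8, seed=0):
    """Sketch of the sphere-based adjustment strategy: build a sphere
    centred at the sampling point's ground coordinates lifted to the
    view's height, with the viewing distance as radius, then sample
    candidate adjusted views inside a region of interest around the
    current view's direction on that sphere."""
    cx, cy = sample_xy
    center = (cx, cy, view_height)
    # Unit direction from the sphere centre towards the current view.
    d = [view_pos[i] - center[i] for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d)) or 1.0
    d = [c / norm for c in d]
    rng = random.Random(seed)
    half = math.radians(roi_half_angle_deg)
    candidates = []
    for _ in range(count):
        # Perturb the direction within the ROI cone, then renormalise
        # and project back onto the sphere of radius viewing_distance.
        p = [c + rng.uniform(-math.sin(half), math.sin(half)) for c in d]
        n = math.sqrt(sum(c * c for c in p))
        candidates.append(tuple(center[i] + viewing_distance * p[i] / n
                                for i in range(3)))
    return candidates
```

Every candidate lies on the sphere, so its distance to the sphere center stays equal to the viewing distance while its position and viewing direction vary.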
A second aspect of the present invention provides an aerial survey waypoint generation device based on a real-scene model, the device comprising:
a sampling point module, configured to construct a real-scene model based on the region to be reconstructed and generate a plurality of sampling points on the real-scene model;
a view angle module, configured to acquire the azimuth of each sampling point on the real-scene model and generate a plurality of view angles for each sampling point based on the azimuth;
a waypoint module, configured to set a view angle as a waypoint and add it to a waypoint set if the view angle meets a preset condition; and, for each view angle that does not meet the preset condition, to adjust the view angle with a view-angle adjustment strategy to obtain an adjusted view angle and, if the adjusted view angle meets the preset condition, to set it as a waypoint and add it to the waypoint set;
and an output module, configured to output the waypoint set.
Optionally, the view angle module includes a viewing distance unit, a first view angle unit, a second view angle unit and a third view angle unit. The viewing distance unit is configured to acquire the viewing distance of the unmanned aerial vehicle's camera; the first view angle unit is configured to set the point at the viewing distance from the sampling point along its normal direction as the first view angle, whose direction is opposite to the normal direction; the second view angle unit is configured to, if the azimuth is the top of the real-scene model, adjust the height of the first view angle or the angle between its direction and the normal direction of the sampling point to obtain a plurality of second view angles; and the third view angle unit is configured to, if the azimuth is a side face of the real-scene model, adjust the angle between the direction of the first view angle and the normal direction of the sampling point to obtain a plurality of third view angles.
Optionally, the waypoint module includes a view-angle adjustment unit configured to determine a sphere center according to the longitude and latitude of the sampling point and the height of the view angle; construct a sphere with that center and with the viewing distance of the unmanned aerial vehicle's camera as the radius; obtain a region of interest on the sphere, centered on the view angle; and select a plurality of points in the region of interest to obtain the adjusted view angles.
A third aspect of the present invention provides an intelligent terminal, which includes a memory, a processor, and an aerial survey waypoint generation program based on a real-scene model that is stored in the memory and can run on the processor; when executed by the processor, the program implements the steps of any of the above aerial survey waypoint generation methods based on a real-scene model.
A fourth aspect of the present invention provides a computer-readable storage medium on which an aerial survey waypoint generation program based on a real-scene model is stored; when executed by a processor, the program implements the steps of any of the above aerial survey waypoint generation methods based on a real-scene model.
As described above, according to the invention, a real-scene model is constructed for the region to be reconstructed, sampling points are generated on the model, a plurality of view angles are generated for each sampling point, view angles that do not meet the preset conditions are adjusted, and a view angle becomes a waypoint only if it meets the preset conditions. Invalid waypoints are thus avoided and the accuracy of each waypoint is improved; because each sampling point can provide several waypoints, there are more degrees of freedom when planning the aerial survey trajectory of the unmanned aerial vehicle so as to satisfy its performance limits, ensuring that the vehicle can capture a high-quality reconstruction image of each sampling point and improving the quality of three-dimensional reconstruction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an aerial survey waypoint generation method based on a real-scene model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the specific flow of step S200 in the embodiment of FIG. 1;
FIG. 3 is a flow chart of adjusting a view angle and generating waypoints using several view-angle adjustment strategies according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of adjusting the view angle using a ring adjustment strategy according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an aerial survey waypoint generation device based on a real-scene model according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of the internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted in context as "when", "upon", "in response to a determination" or "in response to detecting". Similarly, the phrase "if [a described condition or event] is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
The following description of the embodiments of the present invention will be made more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown, it being evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
In the scenario of planning the aerial survey trajectory of an unmanned aerial vehicle, currently generated waypoints do not take into account the safety of the vehicle or the visibility of the sampling points, for example: the flight-height limit of the unmanned aerial vehicle, whether it will collide, and whether a sampling point is occluded when photographed from the waypoint. The generated waypoints are therefore inaccurate, and the quality of the three-dimensional reconstruction is low.
The invention provides an aerial survey waypoint generation method based on a real-scene model, which generates a plurality of view angles for each sampling point, adjusts view angles that do not meet the preset conditions, and uses a view angle as a waypoint only when it meets the preset conditions. Invalid waypoints are thus avoided and the accuracy of each waypoint is improved; because each sampling point can provide several waypoints, there are more degrees of freedom when planning the aerial survey trajectory of the unmanned aerial vehicle so as to satisfy its performance limits, ensuring that the vehicle can capture high-quality reconstruction images of each sampling point and improving the quality of three-dimensional reconstruction.
Exemplary method
As shown in FIG. 1, an embodiment of the invention provides an aerial survey waypoint generation method based on a real-scene model, which is deployed on an electronic terminal such as a computer, mobile terminal or server and is used for three-dimensional reconstruction of large-scale outdoor scenes. Specifically, the method comprises the following steps:
step S100: and constructing a real scene model based on the region to be reconstructed, and generating a plurality of sampling points on the real scene model.
Specifically, the area to be reconstructed refers to an area to be reconstructed in an outdoor scene, the real scene model is a three-dimensional model constructed according to the outdoor scene, and is a miniature model of the outdoor scene, and buildings, trees, mountain bodies and the like in the outdoor scene are all represented in the real scene model.
A real-scene model can be generated from a satellite map. The satellite map reflects a real image of the region to be reconstructed and includes a top view of it; the height of each building is obtained from its shadow length according to the correspondence between shadow length and building height. In three-dimensional space, the ground of the region to be reconstructed is taken as the X-Y plane, and each building contour is extruded along the Z axis to its corresponding height to obtain a building model; models of natural features such as trees can be obtained in the same way, completing the construction of the real-scene model.
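The extrusion step can be sketched as follows; a minimal illustration assuming a building footprint given as a polygon on the X-Y ground plane and a height recovered from the shadow analysis (function and parameter names are illustrative, not from the patent):

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D building footprint (a list of (x, y) vertices on the
    X-Y ground plane) along the Z axis into a simple prism mesh.

    Returns (vertices, faces); faces are triples of vertex indices for
    the side walls (roof/floor triangulation omitted for brevity).
    """
    n = len(footprint)
    # Bottom ring at z = 0, top ring at z = height.
    vertices = [(x, y, 0.0) for x, y in footprint] + \
               [(x, y, height) for x, y in footprint]
    faces = []
    # Side walls: two triangles per footprint edge.
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    return vertices, faces
```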
After the real-scene model is generated, a plurality of sampling points can be generated on it with a Poisson sampling algorithm. The sampling points lie on the surface of the real-scene model, and each sampling point can be represented by its normal vector: the vector at the sampling point's position that is perpendicular to the surface on which the point lies and extends outward from the real-scene model. The starting point of the normal vector is the position of the sampling point; its direction is perpendicular to that surface and points away from the model, and is referred to simply as the normal of the sampling point.
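The Poisson sampling step can be approximated with greedy dart throwing over pre-generated surface candidates; a sketch under the assumption that candidate surface points (each of which would carry the normal of its source face) are already available:

```python
import random

def poisson_sample(candidates, radius):
    """Greedy dart-throwing approximation of Poisson-disk sampling: keep
    a candidate surface point only if it is at least `radius` away from
    every point kept so far. A sketch of the patent's 'Poisson sampling'
    step, not a particular library's algorithm; shuffles the input list
    in place to randomise the selection order."""
    random.shuffle(candidates)
    kept = []
    r2 = radius * radius
    for p in candidates:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= r2 for q in kept):
            kept.append(p)
    return kept
```

The result is a set of surface points whose pairwise distances all respect the sampling radius, giving roughly even coverage of the model surface.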
Step S200: acquiring the azimuth of each sampling point on the real-scene model, and generating a plurality of view angles for each sampling point based on the azimuth.
Specifically, the azimuth of a sampling point on the real-scene model is the position of the surface on which it lies; the top and the sides are the common cases. The azimuth can be judged simply from the normal vector of the sampling point: when the angle between the normal vector and the vertical direction is smaller than a designated angle, the sampling point is on the top; otherwise it is on a side. Whether the surface on which a sampling point lies is the top or a side can also be obtained from the data of the real-scene model.
A view angle corresponds to a sampling point, and each view angle can be represented by a view vector. The starting point of the view vector is the position of the view angle; the vector points from that position to the sampling point, and its direction is also called the direction of the view angle. A view angle is generated as follows: the viewing distance of the unmanned aerial vehicle is determined from the parameters of its camera, and the point at the viewing distance from the sampling point along the normal vector of the sampling point is the position of the view angle. This view angle is also called the initial view angle, and the direction of its view vector is opposite to the normal vector of the sampling point.
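A minimal sketch of azimuth classification and initial-view generation from a sampling point's unit normal; the 30-degree threshold and all names are illustrative, since the patent leaves the "designated angle" open:

```python
import math

def classify_azimuth(normal, top_angle_deg=30.0):
    """Classify a sampling point as 'top' or 'side' from its unit
    normal: if the normal is within `top_angle_deg` of vertical, the
    surface faces upward (a roof); otherwise it is a side face."""
    nx, ny, nz = normal
    angle_from_vertical = math.degrees(math.acos(max(-1.0, min(1.0, nz))))
    return "top" if angle_from_vertical < top_angle_deg else "side"

def initial_view(point, normal, viewing_distance):
    """Place the initial (first) view on the outward normal of the
    sampling point at the camera's viewing distance; the view direction
    is the reverse of the normal, i.e. looking back at the point."""
    position = tuple(p + viewing_distance * n for p, n in zip(point, normal))
    direction = tuple(-n for n in normal)
    return position, direction
```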
Typically one sampling point corresponds to one view angle. However, the limits and safety of the unmanned aerial vehicle are not considered when a view angle is generated, for example: the vehicle must not collide with the real-scene model, and its climbing, steering and endurance capabilities are bounded. After the view angles are used as waypoints to generate the flight trajectory, a view angle may turn out to exceed the performance of the vehicle or risk a collision and must be removed from the trajectory, so that no reconstruction image of that sampling point's area can be obtained. Reconstruction-quality factors of a view angle are not considered either, for example: at some view angle the sampling point is occluded and the vehicle cannot photograph its area. Either case can result in poor three-dimensional reconstruction quality.
Therefore, after the initial view angle is generated, the invention also applies certain transformations to it to generate several more view angles, so that each sampling point corresponds to a plurality of view angles. Because the unmanned aerial vehicle is safer when photographing sampling points on the top of the real-scene model than when photographing points on its sides, fewer view angles are generated for top sampling points than for side sampling points. The transformation method is not limited, provided the transformed view angles in principle allow the vehicle to photograph the sampling point's area.
In this embodiment, 4 view angles are generated for a sampling point on the top of the real-scene model, and 5 view angles for a sampling point on a side.
As shown in FIG. 2, the specific steps of generating a plurality of view angles for a sampling point in this embodiment include:
step S210: acquiring the viewing distance of an unmanned aerial vehicle shooting device;
specifically, the viewing distance is the optimal distance taken along the viewing direction of the unmanned aerial vehicle. The shooting area of the unmanned aerial vehicle shooting device is a field of view. The line of sight, field of view may be obtained from hardware parameters of the drone camera.
Step S220: setting the point at the viewing distance from the sampling point as the first view angle, with its direction opposite to the normal direction of the sampling point.
Specifically, the direction of the first view angle's view vector is opposite to the normal direction of the sampling point; its starting point lies on the extension of the sampling point's normal, at the viewing distance from the sampling point.
Step S230: if the azimuth is the top of the real-scene model, adjusting the height of the first view angle or the angle between its direction and the normal direction of the sampling point to obtain a plurality of second view angles.
Specifically, if the sampling point is on the top of the real-scene model, 3 second view angles are generated in addition: one at the same height as the sampling point, based on the opposite of the normal direction; one obtained by deflecting the first view angle 20 degrees to the left; and one obtained by deflecting it 20 degrees to the right.
Step S240: if the azimuth is a side face of the real-scene model, adjusting the angle between the direction of the first view angle and the normal direction of the sampling point to obtain a plurality of third view angles.
Specifically, if the sampling point is on a side of the real-scene model, 4 third view angles are generated in addition, obtained by deflecting the first view angle 30 degrees to the left, 30 degrees to the right, 30 degrees upward, and 30 degrees downward, respectively.
The first view angle, together with all of the second or third view angles, forms the set of view angles corresponding to the sampling point.
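The deflections in steps S230-S240 can be sketched by rotating the first view's offset vector (sampling point to view position) about an axis through the sampling point, using Rodrigues' rotation; the choice of axis for each deflection direction is an assumption for illustration:

```python
import math

def rotate_about_axis(v, axis, angle_deg):
    """Rodrigues' rotation of vector v about a unit axis:
    v*cos(t) + (axis x v)*sin(t) + axis*(axis . v)*(1 - cos(t))."""
    t = math.radians(angle_deg)
    ax, ay, az = axis
    cross = (ay * v[2] - az * v[1], az * v[0] - ax * v[2], ax * v[1] - ay * v[0])
    dot = ax * v[0] + ay * v[1] + az * v[2]
    return tuple(v[i] * math.cos(t) + cross[i] * math.sin(t)
                 + axis[i] * dot * (1 - math.cos(t)) for i in range(3))

def deflected_views(point, normal, viewing_distance, angles_deg, axis):
    """Generate extra views by swinging the first view's offset vector
    around `axis` through the sampling point; each deflected view keeps
    the viewing distance and still looks at the sampling point."""
    views = []
    for a in angles_deg:
        offset = rotate_about_axis(
            tuple(viewing_distance * n for n in normal), axis, a)
        position = tuple(p + o for p, o in zip(point, offset))
        # Rotation preserves length, so offset still has the viewing
        # distance as its norm; negating and normalising gives the
        # unit direction back towards the sampling point.
        direction = tuple(-o / viewing_distance for o in offset)
        views.append((position, direction))
    return views
```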
By transforming the first view angle into a plurality of second or third view angles, each sampling point can correspond to a plurality of view angles. When the view angles are used as waypoints to generate the aerial survey trajectory, if some waypoint fails the performance or safety limits of the unmanned aerial vehicle, another waypoint of the same sampling point can be selected in its place, so that every sampling point keeps a corresponding waypoint and the quality of the three-dimensional reconstruction is guaranteed.
Step S300: if a view angle meets the preset condition, setting the view angle as a waypoint and adding it to the waypoint set.
Specifically, the preset condition is what a view angle must satisfy to serve as a waypoint; it is determined by the concrete use scenario and generally requires that: the unmanned aerial vehicle is safe when shooting at the waypoint and will not collide with buildings, trees, mountains and so on in the outdoor scene; and when the vehicle photographs the sampling point from the position and in the direction of the view angle, the sampling point is not occluded, for example by trees or nearby buildings. A view angle that meets the preset condition is set as a waypoint and stored in the waypoint set, so the generated waypoints have high accuracy and no invalid waypoints exist: when the unmanned aerial vehicle shoots at a waypoint, an image of the area where the sampling point lies is guaranteed. In one example, the real-scene model is dilated in space by a preset expansion radius to obtain a no-fly zone comprising the region of the model and the expansion region; whether the position of a view angle falls inside the no-fly zone, and hence whether the vehicle would collide, can be judged from the coordinates of the view angle. Occlusion detection is performed on a view angle to check whether its sampling point can be observed from it.
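A sketch of this preset-condition check, with axis-aligned bounding boxes standing in for the real-scene model (an assumption for illustration; a real implementation would test against the mesh itself):

```python
def satisfies_preset(view_pos, sample_point, model_boxes, inflate):
    """Two-part preset-condition test for one view angle:
    (1) safety -- the view position must lie outside every model box
        inflated by the expansion radius (the no-fly zone);
    (2) visibility -- the segment from the view to its sampling point,
        shortened slightly so it stops just before the surface, must
        not pass through any box (the point is not occluded)."""

    def in_box(p, lo, hi, pad):
        return all(lo[i] - pad <= p[i] <= hi[i] + pad for i in range(3))

    def segment_hits_box(a, b, lo, hi):
        # Slab test: clip the parametric segment a + t*(b-a), t in [0, 1],
        # against each axis interval; an empty interval means no hit.
        t0, t1 = 0.0, 1.0
        for i in range(3):
            d = b[i] - a[i]
            if abs(d) < 1e-12:
                if a[i] < lo[i] or a[i] > hi[i]:
                    return False
            else:
                ta, tb = (lo[i] - a[i]) / d, (hi[i] - a[i]) / d
                if ta > tb:
                    ta, tb = tb, ta
                t0, t1 = max(t0, ta), min(t1, tb)
                if t0 > t1:
                    return False
        return True

    # (1) no-fly zone: the view may not sit inside any inflated box.
    if any(in_box(view_pos, lo, hi, inflate) for lo, hi in model_boxes):
        return False
    # (2) occlusion: stop 0.1% short of the sampling point so the
    # surface the point lies on does not count as its own occluder.
    b = tuple(view_pos[i] + 0.999 * (sample_point[i] - view_pos[i])
              for i in range(3))
    return not any(segment_hits_box(view_pos, b, lo, hi)
                   for lo, hi in model_boxes)
```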
Step S400: for each view angle which does not meet the preset condition, adopting a view angle adjustment strategy to adjust the view angle to obtain an adjusted view angle, and if the adjusted view angle meets the preset condition, setting the adjusted view angle as a waypoint and adding the same to a waypoint set;
specifically, in this embodiment, 4 viewing angles are generated for the sampling points on the top of the real scene model, and 5 viewing angles are generated for the sampling points on the side of the real scene model, so that ideally, 4 waypoints and 5 waypoints can be generated respectively. However, in actual situations, whether the viewing angle satisfies the preset condition is not considered when the viewing angle is generated for each sampling point, and therefore, there is a case that the viewing angle cannot satisfy the preset condition. Aiming at each view angle which does not meet the preset conditions, the invention adopts a view angle adjustment strategy to adjust the view angles, so that the adjusted view angles can meet the preset conditions and are set as waypoints to be stored in a waypoint set. If the adjusted view angles still cannot meet the preset conditions, the view angles are discarded and cannot be added to the navigation point set, so that the safety of the unmanned aerial vehicle during shooting is ensured. The viewing angle adjusting strategy mainly adjusts one or more of the height of the position of the viewing angle, the included angle between the direction of the viewing angle and the normal vector of the sampling point and the distance between the viewing angle and the sampling point, and the adjusting method is not limited.
Because the positions and surrounding environments of the sampling points corresponding to the viewing angles differ, in order to make the adjusted viewing angle satisfy the preset condition as far as possible, in one embodiment a plurality of viewing angle adjustment strategies are provided, and for each viewing angle that does not satisfy the preset condition, the strategies are executed in sequence until the adjusted viewing angle satisfies the preset condition or all strategies have been executed.
As shown in fig. 3, the specific steps of adjusting the viewing angle and generating the waypoint using several viewing angle adjustment strategies include:
step A410: setting the first view angle adjustment strategy as a current view angle adjustment strategy;
step a420: adjusting the visual angle according to the current visual angle adjustment strategy to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
step a430: otherwise, the next view angle adjustment strategy is set as the current view angle adjustment strategy, and the step A420 is returned until all view angle adjustment strategies are executed.
Specifically, the viewing angle adjustment strategies are executed one by one. If, after executing a certain strategy, the resulting adjusted viewing angle satisfies the preset condition, the adjusted viewing angle is set as a waypoint and added to the waypoint set, and the subsequent strategies are no longer executed; otherwise all strategies are executed. If no viewing angle satisfying the preset condition is obtained even after all strategies have been executed, this viewing angle cannot let the unmanned aerial vehicle safely shoot the area where the sampling point is located, so it is abandoned to ensure the safety of the unmanned aerial vehicle during shooting.
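Steps A410 to A430 can be sketched as the following loop. The representation of strategies as callables returning candidate viewing angles is an assumption for illustration, not the patent's implementation:

```python
def adjust_with_strategies(view, strategies, meets_condition):
    """Sketch of steps A410-A430: try each adjustment strategy in order.

    `strategies` is an ordered list of callables mapping a viewing angle
    to a list of candidate adjusted viewing angles; `meets_condition` is
    the preset-condition test. Returns the first valid adjusted viewing
    angle (to become a waypoint), or None if every strategy is exhausted
    and the viewing angle must be discarded.
    """
    for strategy in strategies:          # A410 / A430: advance strategy
        for candidate in strategy(view):  # A420: adjust the viewing angle
            if meets_condition(candidate):
                return candidate          # becomes a waypoint
    return None                           # no safe waypoint: discard
```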
In this embodiment, the viewing angle is adjusted by a plurality of viewing angle adjustment strategies, which ensures the accuracy of each viewing angle as far as possible and lets each sampling point have a plurality of corresponding viewing angles.
Step S500: and outputting the navigation point set.
Specifically, the waypoint set consisting of all waypoints is output, after which a navigation survey trajectory of the unmanned aerial vehicle can be generated according to a flight path cost function and a traveling salesman problem algorithm.
From the above, a plurality of viewing angles are generated according to the azimuth of each sampling point on the live-action model, and viewing angles that do not satisfy the preset condition are adjusted with the viewing angle adjustment strategies, so that each sampling point can have a plurality of corresponding waypoints. This facilitates planning the navigation survey trajectory of the unmanned aerial vehicle and improves the image quality of the sampling point areas and the quality of the three-dimensional reconstruction.
In one embodiment, in order that the adjusted viewing angle satisfies the sight distance requirement of the unmanned aerial vehicle shooting device as far as possible, an annular adjustment strategy is adopted to adjust the viewing angle; as shown in fig. 4, the specific steps include:
step B410: determining a sphere center according to the longitude and latitude of the sampling point and the height of the viewing angle;
step B420: constructing a sphere with the sphere center as the center and the sight distance of the unmanned aerial vehicle shooting device as the radius;
step B430: in the sphere, obtaining a region of interest centered on the viewing angle;
step B440: and selecting a plurality of points in the region of interest to obtain the adjusted viewing angles.
Specifically, a point in three-dimensional space, i.e. the sphere center, is determined from the longitude and latitude of the sampling point and the height of the viewing angle. Taking this point as the center, a sphere is constructed with the sight distance of the unmanned aerial vehicle shooting device as the radius; then a region of interest centered on the viewing angle is obtained on the sphere. In one example, a horizontal plane is constructed through the viewing angle and intersected with the surface of the sphere to form a horizontal ring, which is the region of interest; in another example, a vertical plane is constructed through the viewing angle and intersected with the surface of the sphere to form a vertical ring, which is the region of interest. Points are then selected in the region of interest as the adjusted viewing angles.
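The horizontal-ring variant of steps B410 to B440 can be sketched as follows, in a local Cartesian frame. Since the sphere center takes the height of the current viewing angle, the horizontal plane through the viewing angle passes through the center and the ring is a great circle, so every candidate keeps the sight distance to the center. The coordinate convention and the sample count `n` are illustrative assumptions:

```python
import math

def annular_candidates(sample_xy, view_pos, sight_dist, n=8):
    """Sketch of the annular adjustment strategy (horizontal ring).

    `sample_xy` is the sampling point's horizontal position, `view_pos`
    the current viewing-angle position (x, y, z), and `sight_dist` the
    sight distance of the shooting device (the sphere radius). Returns
    `n` candidate adjusted viewing-angle positions sampled evenly on the
    ring; all names and defaults are assumptions for illustration.
    """
    sx, sy = sample_xy
    _, _, vz = view_pos
    center = (sx, sy, vz)                       # step B410: sphere center
    # Steps B420-B440: the horizontal plane at height vz cuts the sphere
    # in a great circle of radius sight_dist; sample it evenly.
    return [(center[0] + sight_dist * math.cos(2 * math.pi * k / n),
             center[1] + sight_dist * math.sin(2 * math.pi * k / n),
             vz) for k in range(n)]
```

Each candidate would then be screened with the preset-condition test before being accepted as a waypoint.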
Optionally, after the sphere is constructed, points can also be randomly selected on the surface of the sphere or inside the sphere as the adjusted viewing angles.
Selecting a plurality of points in the region of interest as adjusted viewing angles can further increase the number of viewing angles corresponding to the sampling point.
In one embodiment, a zoom-in strategy is adopted to adjust the viewing angle. The specific method is: adding a plurality of points at equal intervals along the viewing direction to obtain the adjusted viewing angles.
In one embodiment, a zoom-out strategy is adopted to adjust the viewing angle. The specific method is: adding a plurality of points at equal intervals in the direction opposite to the viewing direction to obtain the adjusted viewing angles.
In one embodiment, a height adjustment strategy is adopted to adjust the viewing angle. The specific method is: adding a plurality of points at equal intervals in the direction of increasing viewing angle height to obtain the adjusted viewing angles.
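The zoom-in, zoom-out, and height adjustment strategies all add equally spaced candidate positions along some direction, so they can share one sketch. The step size and candidate count are illustrative assumptions:

```python
def linear_candidates(view_pos, direction, step=2.0, count=3):
    """Shared sketch of the zoom-in, zoom-out, and height adjustment
    strategies: add `count` candidate positions at equal intervals of
    `step` along a unit `direction`. For zoom-in the direction is the
    viewing direction; for zoom-out its opposite; for height adjustment
    the upward vertical. `step` and `count` are illustrative defaults.
    """
    return [tuple(v + (k + 1) * step * d
                  for v, d in zip(view_pos, direction))
            for k in range(count)]

# Example uses (view_dir assumed to be a unit viewing direction):
# zoom_in  = linear_candidates(view_pos, view_dir)
# zoom_out = linear_candidates(view_pos, tuple(-d for d in view_dir))
# raised   = linear_candidates(view_pos, (0.0, 0.0, 1.0))
```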
It should be noted that the above zoom-in strategy, zoom-out strategy, annular adjustment strategy, and height adjustment strategy may each be used alone, or several of them may be combined through steps A410 to A430.
In summary, a live-action model is first constructed for the outdoor scene and sampling points are reasonably set on it; it is then judged whether each sampling point lies on the top or the side of the live-action model, and viewing angles are generated accordingly; for viewing angles that do not satisfy the preset condition, the viewing angle is adjusted according to the viewing angle adjustment strategies, and viewing angles satisfying the preset condition are set as the waypoints corresponding to each sampling point. This ensures the safety of the unmanned aerial vehicle during shooting and improves the image quality and the quality of the three-dimensional reconstruction.
Exemplary apparatus
As shown in fig. 5, corresponding to the above live-action model-based navigation survey waypoint generation method, an embodiment of the present invention further provides a live-action model-based navigation survey waypoint generation device, where the device includes:
The sampling point module 600 is used for constructing a real scene model based on the region to be reconstructed, and generating a plurality of sampling points on the real scene model;
the view angle module 610 is configured to obtain the azimuth of each sampling point on the live-action model, and generate a plurality of viewing angles corresponding to each sampling point based on the azimuth;
the waypoint module 620 is configured to set the viewing angle as a waypoint and add the viewing angle to a waypoint set if the viewing angle meets a preset condition; for each visual angle which does not meet the preset condition, adopting a visual angle adjustment strategy to adjust the visual angle to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
and the output module 630 is configured to output the waypoint set.
Optionally, the view angle module includes a view distance unit, a first view angle unit, a second view angle unit and a third view angle unit, where the view distance unit is used to obtain a view distance of the unmanned aerial vehicle shooting device; the first view angle unit is used for setting a point which is at the sight distance from the sampling point as a first view angle in the opposite direction of the normal direction of the sampling point; the second view angle unit is used for adjusting the height of the first view angle or the angle between the direction of the first view angle and the normal direction of the sampling point if the azimuth is the top of the live-action model, so as to obtain a plurality of second view angles; and the third view angle unit is used for adjusting the angle between the direction of the first view angle and the normal direction of the sampling point if the azimuth is the side surface of the real scene model, so as to obtain a plurality of third view angles.
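The first-viewing-angle construction performed by the first view angle unit can be sketched as follows; it assumes a local Cartesian frame and a unit normal vector, neither of which is fixed by the patent:

```python
def first_view(sample_pt, normal, sight_dist):
    """Sketch of the first-viewing-angle construction: the camera position
    lies at the sight distance from the sampling point along its (unit)
    normal, and the viewing direction is the opposite of the normal, so
    the camera looks straight back at the sampling point. Names and the
    coordinate convention are illustrative assumptions.
    """
    pos = tuple(s + sight_dist * n for s, n in zip(sample_pt, normal))
    direction = tuple(-n for n in normal)
    return pos, direction
```

The second and third viewing angles would then be obtained by perturbing the height of this position, or the angle between its direction and the sampling point's normal, depending on whether the sampling point lies on the top or the side of the model.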
Optionally, the waypoint module includes a viewing angle adjustment unit, where the viewing angle adjustment unit is configured to determine a sphere center according to the longitude and latitude of the sampling point and the height of the viewing angle; construct a sphere with the sphere center as the center and the sight distance of the unmanned aerial vehicle shooting device as the radius; obtain, in the sphere, a region of interest centered on the viewing angle; and select a plurality of points in the region of interest to obtain the adjusted viewing angles.
Specifically, in this embodiment, the specific functions of each module of the live-action model-based navigation survey waypoint generation device may refer to the corresponding descriptions in the live-action model-based navigation survey waypoint generation method, and are not described here again.
Based on the above embodiments, the present invention also provides an intelligent terminal, a functional block diagram of which may be shown in fig. 6. The intelligent terminal comprises a processor, a memory, a network interface and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a live-action model-based waypoint generation program. The internal memory provides an environment for the operation of the operating system and the live-action model-based waypoint generation program in the nonvolatile storage medium. The network interface of the intelligent terminal is used for communicating with external terminals through a network connection. The live-action model-based waypoint generation program, when executed by the processor, implements the steps of any one of the live-action model-based navigation survey waypoint generation methods described above. The display screen of the intelligent terminal may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 6 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the smart terminal to which the present inventive arrangements are applied, and that a particular smart terminal may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, there is provided an intelligent terminal including a memory, a processor, and a live-action model-based waypoint generation program stored on the memory and executable on the processor, the live-action model-based waypoint generation program executing by the processor with instructions for:
constructing a live-action model based on a region to be reconstructed, and generating a plurality of sampling points on the live-action model;
acquiring the azimuth of each sampling point on the live-action model, and respectively generating a plurality of view angles corresponding to each sampling point based on the azimuth;
if the visual angle meets the preset condition, setting the visual angle as a waypoint and adding the visual angle to a waypoint set;
for each visual angle which does not meet the preset condition, adopting a visual angle adjustment strategy to adjust the visual angle to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
And outputting the navigation point set.
Optionally, the obtaining the azimuth of each sampling point on the live-action model, and generating, based on the azimuth, a plurality of view angles corresponding to each sampling point respectively includes:
acquiring the viewing distance of an unmanned aerial vehicle shooting device;
setting a point which is at the sight distance from the sampling point as a first visual angle in the opposite direction of the normal direction of the sampling point;
if the azimuth is the top of the live-action model, adjusting the height of the first visual angle or the angle between the direction of the first visual angle and the normal direction of the sampling point to obtain a plurality of second visual angles;
and if the azimuth is the side face of the real scene model, adjusting the angle between the direction of the first visual angle and the normal direction of the sampling point to obtain a plurality of third visual angles.
Optionally, a plurality of viewing angle adjustment strategies are provided, and the adjusting the viewing angle by a viewing angle adjustment strategy to obtain an adjusted viewing angle, and if the adjusted viewing angle satisfies the preset condition, setting the adjusted viewing angle as a waypoint and adding it to the waypoint set, includes:
setting the first view angle adjustment strategy as a current view angle adjustment strategy;
adjusting the visual angle according to the current visual angle adjustment strategy to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
Otherwise, setting the next view angle adjustment strategy as the current view angle adjustment strategy, and returning to adjust the view angle according to the current view angle adjustment strategy until all view angle adjustment strategies are executed.
Optionally, the viewing angle adjustment strategy is used for adjusting one or more of a height of the viewing angle, a distance between the viewing angle and the sampling point, and an included angle between a direction of the viewing angle and a normal direction of the sampling point.
Optionally, the adjusting the viewing angle by using a viewing angle adjustment policy to obtain an adjusted viewing angle includes:
determining a sphere center according to the longitude and latitude of the sampling point and the height of the visual angle;
constructing a sphere with the sphere center as the center and the sight distance of the unmanned aerial vehicle shooting device as the radius;
in the sphere, taking the visual angle as a center to obtain a region of interest;
and selecting a plurality of points in the region of interest to obtain the adjusted viewing angles.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with a real model-based navigation point generating program, and when the real model-based navigation point generating program is executed by a processor, the steps of any one of the real model-based navigation point generating methods provided by the embodiment of the invention are realized.
It should be understood that the sequence number of each step in the above embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not be construed as limiting the implementation process of the embodiment of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units described above is merely a logical function division, and may be implemented in other manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of each method embodiment may be implemented. The computer program comprises computer program code, and the computer program code can be in a source code form, an object code form, an executable file or some intermediate form and the like. The computer readable medium may include: any entity or device capable of carrying the computer program code described above, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. The content of the computer readable storage medium can be appropriately increased or decreased according to the requirements of the legislation and the patent practice in the jurisdiction.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that; the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions are not intended to depart from the spirit and scope of the various embodiments of the invention, which are also within the spirit and scope of the invention.

Claims (8)

1. A live-action model-based navigation survey waypoint generation method, characterized by comprising the following steps:
constructing a live-action model based on a region to be reconstructed, and generating a plurality of sampling points on the live-action model;
acquiring the azimuth of each sampling point on the live-action model, and respectively generating a plurality of view angles corresponding to each sampling point based on the azimuth;
if the visual angle meets the preset condition, setting the visual angle as a waypoint and adding the visual angle to a waypoint set;
for each visual angle which does not meet the preset condition, adopting a visual angle adjustment strategy to adjust the visual angle to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
Outputting the navigation point set;
the obtaining the azimuth of each sampling point on the live-action model, and generating a plurality of view angles corresponding to each sampling point based on the azimuth respectively comprises the following steps:
acquiring the viewing distance of an unmanned aerial vehicle shooting device;
setting a point which is at the sight distance from the sampling point as a first visual angle in the opposite direction of the normal direction of the sampling point;
if the azimuth is the top of the live-action model, adjusting the height of the first visual angle or the angle between the direction of the first visual angle and the normal direction of the sampling point to obtain a plurality of second visual angles;
and if the azimuth is the side face of the real scene model, adjusting the angle between the direction of the first visual angle and the normal direction of the sampling point to obtain a plurality of third visual angles.
2. The live-action model-based navigation survey waypoint generation method according to claim 1, wherein a plurality of viewing angle adjustment strategies are provided, and the adjusting the viewing angle by a viewing angle adjustment strategy to obtain an adjusted viewing angle, and if the adjusted viewing angle satisfies a preset condition, setting the adjusted viewing angle as a waypoint and adding it to the waypoint set, comprises:
setting the first view angle adjustment strategy as a current view angle adjustment strategy;
Adjusting the visual angle according to the current visual angle adjustment strategy to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
otherwise, setting the next view angle adjustment strategy as the current view angle adjustment strategy, and returning to adjust the view angle according to the current view angle adjustment strategy until all view angle adjustment strategies are executed.
3. The method of claim 2, wherein the viewing angle adjustment strategy is used to adjust one or more of a height of the viewing angle, a distance between the viewing angle and the sampling point, and an angle between a direction of the viewing angle and a normal to the sampling point.
4. The live-action model-based navigation survey waypoint generation method according to claim 1, wherein the adjusting the viewing angle by a viewing angle adjustment strategy to obtain an adjusted viewing angle comprises:
determining a sphere center according to the longitude and latitude of the sampling point and the height of the visual angle;
constructing a sphere with the sphere center as the center and the sight distance of the unmanned aerial vehicle shooting device as the radius;
in the sphere, taking the visual angle as a center to obtain a region of interest;
and selecting a plurality of points in the region of interest to obtain the adjusted viewing angles.
5. A live-action model-based navigation survey waypoint generation device, characterized in that the device comprises:
the sampling point module is used for constructing a real scene model based on the region to be reconstructed, and generating a plurality of sampling points on the real scene model;
the view angle module is used for acquiring the azimuth of each sampling point on the live-action model and respectively generating a plurality of view angles corresponding to each sampling point based on the azimuth;
the navigation point module is used for setting the visual angle as a navigation point and adding the visual angle to a navigation point set if the visual angle meets a preset condition; for each visual angle which does not meet the preset condition, adopting a visual angle adjustment strategy to adjust the visual angle to obtain an adjusted visual angle, and if the adjusted visual angle meets the preset condition, setting the adjusted visual angle as a navigation point and adding the navigation point into a navigation point set;
the output module is used for outputting the navigation point set;
the visual angle module comprises a visual distance unit, a first visual angle unit, a second visual angle unit and a third visual angle unit, wherein the visual distance unit is used for acquiring the visual distance of the unmanned aerial vehicle shooting device; the first view angle unit is used for setting a point which is at the sight distance from the sampling point as a first view angle in the opposite direction of the normal direction of the sampling point; the second view angle unit is used for adjusting the height of the first view angle or the angle between the direction of the first view angle and the normal direction of the sampling point if the azimuth is the top of the live-action model, so as to obtain a plurality of second view angles; and the third view angle unit is used for adjusting the angle between the direction of the first view angle and the normal direction of the sampling point if the azimuth is the side surface of the real scene model, so as to obtain a plurality of third view angles.
6. The live-action model-based navigation survey waypoint generation device of claim 5, wherein the waypoint module comprises a viewing angle adjustment unit, the viewing angle adjustment unit being configured to determine a sphere center according to the longitude and latitude of the sampling point and the height of the viewing angle; construct a sphere with the sphere center as the center and the sight distance of the unmanned aerial vehicle shooting device as the radius; obtain, in the sphere, a region of interest centered on the viewing angle; and select a plurality of points in the region of interest to obtain the adjusted viewing angles.
7. An intelligent terminal, characterized by comprising a memory, a processor, and a live-action model-based waypoint generation program stored in the memory and executable on the processor, wherein the live-action model-based waypoint generation program, when executed by the processor, implements the steps of the live-action model-based navigation survey waypoint generation method according to any one of claims 1-4.
8. A computer-readable storage medium, characterized in that a live-action model-based waypoint generation program is stored thereon, and the program, when executed by a processor, implements the steps of the live-action model-based navigation survey waypoint generation method according to any one of claims 1-4.
CN202310707820.3A 2023-06-15 2023-06-15 Navigation survey navigation point generation method and device based on live-action model Active CN116433853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310707820.3A CN116433853B (en) 2023-06-15 2023-06-15 Navigation survey navigation point generation method and device based on live-action model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310707820.3A CN116433853B (en) 2023-06-15 2023-06-15 Navigation survey navigation point generation method and device based on live-action model

Publications (2)

Publication Number Publication Date
CN116433853A CN116433853A (en) 2023-07-14
CN116433853B true CN116433853B (en) 2023-11-17

Family

ID=87092977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310707820.3A Active CN116433853B (en) 2023-06-15 2023-06-15 Navigation survey navigation point generation method and device based on live-action model

Country Status (1)

Country Link
CN (1) CN116433853B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110599583A (en) * 2019-07-26 2019-12-20 深圳眸瞳科技有限公司 Unmanned aerial vehicle flight trajectory generation method and device, computer equipment and storage medium
CN111226185A (en) * 2019-04-22 2020-06-02 深圳市大疆创新科技有限公司 Flight route generation method, control device and unmanned aerial vehicle system
CN113570666A (en) * 2021-09-26 2021-10-29 天津云圣智能科技有限责任公司 Task allocation method, device, server and computer readable storage medium
CN113939786A (en) * 2020-09-22 2022-01-14 深圳市大疆创新科技有限公司 Flight route generation method and device, unmanned aerial vehicle system and storage medium
CN113945217A (en) * 2021-12-15 2022-01-18 天津云圣智能科技有限责任公司 Air route planning method, device, server and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884256B (en) * 2021-04-28 2021-07-27 深圳大学 Path planning method and device, computer equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111226185A (en) * 2019-04-22 2020-06-02 深圳市大疆创新科技有限公司 Flight route generation method, control device and unmanned aerial vehicle system
CN110599583A (en) * 2019-07-26 2019-12-20 深圳眸瞳科技有限公司 Unmanned aerial vehicle flight trajectory generation method and device, computer equipment and storage medium
CN113939786A (en) * 2020-09-22 2022-01-14 深圳市大疆创新科技有限公司 Flight route generation method and device, unmanned aerial vehicle system and storage medium
WO2022061491A1 (en) * 2020-09-22 2022-03-31 深圳市大疆创新科技有限公司 Flight route generation method and apparatus, and unmanned aerial vehicle system, and storage medium
CN113570666A (en) * 2021-09-26 2021-10-29 天津云圣智能科技有限责任公司 Task allocation method, device, server and computer readable storage medium
CN113945217A (en) * 2021-12-15 2022-01-18 天津云圣智能科技有限责任公司 Air route planning method, device, server and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel approach of efficient 3D reconstruction for real scene using unmanned aerial vehicle oblique photogrammetry with five cameras; Boxiong Yang; Computers and Electrical Engineering; pp. 1-15 *
Live-action model-based UAV hold capacity measurement method; Hu Minjie et al.; Ship Design Communication (No. 2); pp. 97-101 *

Also Published As

Publication number Publication date
CN116433853A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
US20220051574A1 (en) Flight route generation method, control device, and unmanned aerial vehicle system
CA2569209C (en) Image-augmented inertial navigation system (iains) and method
CN108474657B (en) Environment information acquisition method, ground station and aircraft
US20210327287A1 (en) Uav path planning method and device guided by the safety situation, uav and storage medium
CN110244765B (en) Aircraft route track generation method and device, unmanned aerial vehicle and storage medium
US20200218289A1 (en) Information processing apparatus, aerial photography path generation method, program and recording medium
CN110706447B (en) Disaster position determination method, disaster position determination device, storage medium, and electronic device
CN110969663A (en) Static calibration method for external parameters of camera
JPWO2018193574A1 (en) Flight path generation method, information processing apparatus, flight path generation system, program, and recording medium
CN108496138A (en) A kind of tracking and device
US10565863B1 (en) Method and device for providing advanced pedestrian assistance system to protect pedestrian preoccupied with smartphone
US20220074743A1 (en) Aerial survey method, aircraft, and storage medium
CN113875222B (en) Shooting control method and device, unmanned aerial vehicle and computer readable storage medium
CN113805607A (en) Unmanned aerial vehicle shooting method and device, unmanned aerial vehicle and storage medium
CN113741495B (en) Unmanned aerial vehicle attitude adjustment method and device, computer equipment and storage medium
CN114820769A (en) Vehicle positioning method and device, computer equipment, storage medium and vehicle
CN113112553B (en) Parameter calibration method and device for binocular camera, electronic equipment and storage medium
US20210156710A1 (en) Map processing method, device, and computer-readable storage medium
CN114240769A (en) Image processing method and device
CN110800023A (en) Image processing method and equipment, camera device and unmanned aerial vehicle
CN116433853B (en) Navigation survey navigation point generation method and device based on live-action model
CN115460539B (en) Method, equipment, medium and program product for acquiring electronic fence
JP2019091961A (en) Camera control unit
CN113129422A (en) Three-dimensional model construction method and device, storage medium and computer equipment
US20210011490A1 (en) Flight control method, device, and machine-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant