CN112634370A - Unmanned aerial vehicle dotting method, device, equipment and storage medium


Info

Publication number: CN112634370A
Application number: CN202011645448.0A
Authority: CN (China)
Prior art keywords: image, frame, target, unmanned aerial vehicle
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 邓杭, 池鹏可
Current assignee: Guangzhou Xaircraft Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Guangzhou Xaircraft Technology Co Ltd
Application filed by: Guangzhou Xaircraft Technology Co Ltd
Priority to: CN202011645448.0A
Publication of: CN112634370A

Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (all classifications below fall under this hierarchy)
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (under G06T 7/00 Image analysis; G06T 7/70 Determining position or orientation of objects or cameras)
    • G06T 15/04: Texture mapping (under G06T 15/00 3D [Three Dimensional] image rendering)
    • G06T 17/05: Geographic models (under G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects)
    • G06T 17/205: Re-meshing (under G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation)
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction (under G06T 5/00 Image enhancement or restoration)
    • G06T 2207/10028: Range image; depth image; 3D point clouds (under G06T 2207/10 Image acquisition modality)
    • G06T 2207/20221: Image fusion; image merging (under G06T 2207/20 Special algorithmic details; G06T 2207/20212 Image combination)
    • G06T 2207/30204: Marker (under G06T 2207/30 Subject of image; context of image processing)
    • G06T 2207/30244: Camera pose (under G06T 2207/30 Subject of image; context of image processing)

Abstract

The application provides an unmanned aerial vehicle dotting method, device, equipment and storage medium, relating to the technical field of unmanned aerial vehicle measurement. The method includes: acquiring multiple frames of images collected by an unmanned aerial vehicle in a target area, where the target area is the area in which a target point is located, the multiple frames of images have overlapping regions, at least one of the frames contains the target point, and each frame is tagged with geographic coordinate information obtained by the positioning system of the unmanned aerial vehicle; determining the pose information and initial three-dimensional point cloud corresponding to each frame of image from the multiple frames of images; and reconstructing the image according to the per-frame pose information and initial three-dimensional point cloud and the per-frame geographic coordinate information, to obtain the reconstructed ortho-image and the geographic coordinates, in the world coordinate system, of the target point in the reconstructed ortho-image. The method improves the speed of image reconstruction to a certain extent, thereby effectively improving the efficiency of target point dotting.

Description

Unmanned aerial vehicle dotting method, device, equipment and storage medium
Technical Field
The application relates to the technical field of unmanned aerial vehicle measurement, in particular to an unmanned aerial vehicle dotting method, device, equipment and storage medium.
Background
Dotting a target point means obtaining the geographic coordinate data of the target point in the real world through a device equipped with a Global Positioning System (GPS), usually by measuring the longitude, latitude, and altitude of the target point. Dotting makes it possible to acquire the geographic coordinates of the target point accurately and thus guide ground operations.
In the prior art, dotting a ground target point requires collecting GPS coordinates at ground control points, photographing the target point with a camera-carrying aircraft, matching the ground control points to the captured pictures, and completing a three-dimensional scene reconstruction, so that the target point can be dotted in the reconstructed three-dimensional scene.
However, the ground control points must be collected manually, and the complexity of measurement scenes makes their collection difficult, so the dotting efficiency of target points is low.
Disclosure of Invention
In view of the defects in the prior art, an object of the application is to provide an unmanned aerial vehicle dotting method, device, equipment and storage medium, so as to solve the problem of low target point dotting efficiency in the prior art.
In order to achieve the above purpose, the technical solutions adopted in the embodiments of the present application are as follows:
in a first aspect, an embodiment of the present application provides an unmanned aerial vehicle dotting method, where the method includes:
acquiring multi-frame images acquired by an unmanned aerial vehicle in a target area, wherein the target area is an area where a target point is located, the multi-frame images have an overlapping area, at least one frame of image in the multi-frame images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle;
determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image;
and reconstructing images according to the pose information corresponding to each frame of image, the initial three-dimensional point cloud and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and geographic coordinates of the target point in the reconstructed ortho-image under a world coordinate system.
Optionally, before the obtaining the multiple frames of images acquired by the drone in the target area, the method further includes:
determining a target shooting position of the unmanned aerial vehicle, wherein a preset position relation exists between the target shooting position and the target point;
and determining the target area according to the target shooting position and a preset offset distance.
Optionally, the determining the target shooting position of the drone includes:
determining the position information of the target point in the image based on the image information acquired by the unmanned aerial vehicle;
and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point in the image.
Optionally, the determining the target shooting position of the drone includes:
responding to a target point confirmation operation input by a user on a general map, and acquiring the position information of the target point;
and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point.
Optionally, before performing image reconstruction according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and a geographic coordinate of the target point in the reconstructed ortho-image under a world coordinate system, the method further includes:
calculating a similarity transformation matrix according to the pose information corresponding to each frame of image and the geographic coordinate information;
and transforming the pose information corresponding to each frame of image and the initial three-dimensional point cloud to a world coordinate system according to the similarity transformation matrix to obtain the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation.
Optionally, reconstructing an image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and a geographic coordinate of the target point in the reconstructed ortho-image under a world coordinate system, where the reconstructing includes:
and fusing the geographic coordinate information of each frame of image to reconstruct the image according to the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation to obtain the target three-dimensional point cloud.
Optionally, the reconstructing an image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and a geographic coordinate of the target point in the reconstructed ortho-image in a world coordinate system includes:
performing triangular mesh reconstruction according to the target three-dimensional point cloud to obtain an initial triangular mesh;
performing texture mapping on the initial triangular mesh according to the texture information of each frame of image to obtain a target triangular mesh;
and carrying out forward projection calculation on the target triangular mesh to obtain the reconstructed orthoimage.
Optionally, the determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multiple frames of images includes:
and respectively obtaining pose information and an initial three-dimensional point cloud corresponding to each frame of image by taking the multiple frames of images as input to a structure-from-motion (SfM) algorithm.
Optionally, the method further comprises:
displaying the reconstructed ortho-image;
and responding to a target point determination operation input by the user through the reconstructed ortho image, and displaying the geographic coordinates of the target point in the world coordinate system.
Optionally, the method further comprises:
and marking the target point on the reconstructed ortho image in response to a target point selection operation input by a user through the reconstructed ortho image.
In a second aspect, an embodiment of the present application further provides an unmanned aerial vehicle dotting device, the device includes: the device comprises an acquisition module, a determination module and a reconstruction module;
the acquisition module is used for acquiring multi-frame images acquired by the unmanned aerial vehicle in a target area, the target area is an area where a target point is located, the multi-frame images have an overlapping area, at least one frame of image in the multi-frame images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle;
the determining module is used for determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image;
the reconstruction module is used for reconstructing images according to the pose information corresponding to each frame of image, the initial three-dimensional point cloud and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and a geographic coordinate of the target point in the reconstructed ortho-image under a world coordinate system.
Optionally, the determining module is further configured to determine a target shooting position of the unmanned aerial vehicle, where a preset position relationship is formed between the target shooting position and the target point; and determining the target area according to the target shooting position and a preset offset distance.
Optionally, the determining module is specifically configured to determine, based on image information acquired by the unmanned aerial vehicle, position information of the target point in an image; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point in the image.
Optionally, the determining module is specifically configured to obtain location information of the target point in response to a target point confirmation operation input by a user on a general map; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point.
Optionally, the apparatus further comprises: a calculation module and a transformation module;
the calculation module is used for calculating a similarity transformation matrix according to the pose information and the geographic coordinate information corresponding to each frame of image;
and the transformation module is used for transforming the pose information corresponding to each frame of image and the initial three-dimensional point cloud to a world coordinate system according to the similarity transformation matrix to obtain the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation.
Optionally, the reconstruction module is specifically configured to perform image reconstruction by fusing geographic coordinate information of each frame of image according to the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation, so as to obtain a target three-dimensional point cloud.
Optionally, the reconstruction module is specifically configured to perform triangular mesh reconstruction according to the target three-dimensional point cloud to obtain an initial triangular mesh; performing texture mapping on the initial triangular mesh according to the texture information of each frame of image to obtain a target triangular mesh; and carrying out forward projection calculation on the target triangular mesh to obtain the reconstructed orthoimage.
Optionally, the determining module is specifically configured to obtain the pose information and initial three-dimensional point cloud corresponding to each frame of image by taking the multiple frames of images as input to a structure-from-motion (SfM) algorithm.
Optionally, the apparatus further comprises: a display module;
the display module is used for displaying the reconstructed orthoimage; and responding to a target point determination operation input by the user through the reconstructed ortho image, and displaying the geographic coordinates of the target point in the world coordinate system.
Optionally, the apparatus further comprises: a labeling module;
and the marking module is used for responding to a target point selection operation input by a user through the reconstructed ortho image and marking the target point on the reconstructed ortho image.
In a third aspect, an embodiment of the present application provides an electronic device, including: the electronic device comprises a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to execute the steps of the unmanned aerial vehicle dotting method provided in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is executed by a processor to perform the steps of the unmanned aerial vehicle dotting method according to the first aspect.
The beneficial effects of this application are as follows:
the application provides an unmanned aerial vehicle dotting method, an unmanned aerial vehicle dotting device, unmanned aerial vehicle dotting equipment and a storage medium, wherein the method comprises the following steps: acquiring multi-frame images acquired by an unmanned aerial vehicle in a target area, wherein the target area is an area where a target point is located, the multi-frame images have an overlapping area, at least one frame of image in the multi-frame images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle; determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image; and reconstructing the image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain the reconstructed ortho-image and the geographic coordinates of the target point in the reconstructed ortho-image under the world coordinate system. In the scheme, based on a multi-frame image with geographic coordinate information acquired by a positioning system of the unmanned aerial vehicle, the position and pose information and the initial three-dimensional point cloud corresponding to each frame of image can be determined, and further image reconstruction can be performed according to the position and pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image, so that the geographic coordinates of the reconstructed orthoimage and the target point in the image in the world coordinate system are obtained. Due to the high-precision positioning system of the unmanned aerial vehicle, the accuracy of the obtained geographic coordinate information of each frame of image is high, so that the rapid and accurate three-dimensional image reconstruction can be directly carried out based on the geographic coordinate information of each frame of image, the pose information and the three-dimensional point cloud corresponding to each frame of image obtained according to each frame of image, the geographic coordinate of the ground control point does not need to be manually collected, the image reconstruction is carried out through the geographic coordinate of the ground control point and the multi-frame image, the image reconstruction speed is improved to a certain extent, and the target point dotting efficiency is effectively improved.
Secondly, by responding to the target point determination operation input by the user on the reconstructed ortho-image, the geographic coordinates of the target point under the world coordinate system can be displayed to the user, improving the user's dotting experience.
In addition, by responding to the target point selection operation input by the user on the reconstructed image, the target point can be marked on the reconstructed ortho-image to assist operation planning.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of an architecture of an unmanned aerial vehicle dotting system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an unmanned aerial vehicle dotting method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart of another unmanned aerial vehicle dotting method according to the embodiment of the present application;
fig. 4 is a schematic flowchart of another unmanned aerial vehicle dotting method provided in the embodiment of the present application;
fig. 5 is a schematic flow chart of another unmanned aerial vehicle dotting method provided in the embodiment of the present application;
fig. 6 is a schematic diagram of an unmanned aerial vehicle dotting device provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings in the embodiments. It should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application; additionally, the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flowcharts may be performed out of order, and steps with no logical dependency on one another may be performed in reverse order or simultaneously. Moreover, one skilled in the art, under the guidance of this application, may add one or more other operations to the flowcharts, or remove one or more operations from them.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
First, some technologies related to the present application are briefly described.
Generally, the geographic coordinates of a target point in a scene can be acquired by direct measurement or indirect measurement. Method one: a surveyor carries equipment fitted with a GPS positioning system to the target point and measures it there. Method two: in the field of aerial photogrammetry, a worker collects GPS coordinates at ground control points, photographs the target object with a camera-carrying aircraft, matches the ground control points to the captured pictures (point pricking), completes the subsequent three-dimensional reconstruction, and measures the target point in the reconstructed three-dimensional scene to obtain its geographic coordinate values.
However, both methods above have certain drawbacks:
For the first method: because surveyors must measure the GPS coordinates of target points on site, and measurement scenes are complex, acquiring the GPS coordinates manually is difficult and costly.
For the second method: the ground control points must still be measured manually, which is likewise difficult and costly. In addition, the point-pricking step adds a manual stage to the workflow.
Both of the above methods therefore make the dotting of target points inefficient.
Based on the defects in the prior art, the core idea of the unmanned aerial vehicle dotting method provided by this application is as follows: an unmanned aerial vehicle equipped with a high-precision positioning system captures multiple frames of images of the target area where a target point is located; the pose information and three-dimensional point cloud corresponding to each frame are determined from the acquired frames and the geographic coordinate information tagged on them; and three-dimensional reconstruction is then performed from the per-frame pose information, point cloud, and geographic coordinate information, yielding a reconstructed ortho-image and the geographic coordinates, in the world coordinate system, of the target point in that ortho-image. Because the unmanned aerial vehicle used in the method carries a high-precision positioning system, the geographic coordinate information of each captured frame is highly accurate, so fast and accurate three-dimensional image reconstruction can be performed from that information together with the per-frame pose information and three-dimensional point cloud. No measurement at ground control points is needed, nor any three-dimensional reconstruction that combines measured ground control points with the images shot by the unmanned aerial vehicle: three-dimensional reconstruction and target point dotting can be completed with only a small number of images shot by the unmanned aerial vehicle, which effectively improves the dotting efficiency of the target point.
The specific implementation steps of the unmanned aerial vehicle dotting method provided by the application, and the corresponding beneficial effects, are described below through several specific embodiments.
Fig. 1 is an architecture schematic diagram of an unmanned aerial vehicle dotting system provided in an embodiment of the present application; the unmanned aerial vehicle dotting method provided by this application can be applied to this system. As shown in fig. 1, the unmanned aerial vehicle dotting system may include an unmanned aerial vehicle and a terminal device, the unmanned aerial vehicle being connected to the terminal device to exchange information with it. The unmanned aerial vehicle may include an airborne camera, a positioning system, and an airborne embedded processor; it can collect multiple frames of images in the target area through the mounted airborne camera and obtain the geographic coordinate information of each collected frame through the positioning system. The terminal device can be a computer, a mobile phone, a cloud terminal, or the like.
In some cases, the data processing can be performed on each frame of acquired image through the airborne embedded processor, and the pose information and the three-dimensional point cloud corresponding to each frame of image are acquired, so that the image reconstruction is performed on the real scene where the target point is located, and the reconstructed orthoimage and the geographic coordinate of the target point in the world coordinate system are acquired. Meanwhile, the onboard embedded processor can send the obtained reconstructed ortho-image to the terminal equipment, so that the terminal equipment can display the reconstructed ortho-image, and a user can click a target point on the displayed reconstructed ortho-image.
In other cases, the onboard embedded processor may send each frame of the acquired image to the terminal device, so that the terminal device obtains the reconstructed ortho-image and the geographic coordinates of the target point in the world coordinate system by using the similar calculation method. And simultaneously displaying the reconstructed ortho-image, so that a user can click a target point on the displayed reconstructed ortho-image.
Fig. 2 is a schematic flow chart of an unmanned aerial vehicle dotting method provided in the embodiment of the present application. The execution subject of the method can be the unmanned aerial vehicle itself, or a terminal device or server independent of the unmanned aerial vehicle, for example a computer, a mobile phone, or a cloud terminal. If the execution subject is the unmanned aerial vehicle, the unmanned aerial vehicle can obtain the reconstructed ortho-image and the geographic coordinates of the target point in it under the world coordinate system by the following method, and can send this information to a terminal device or the like for subsequent user interaction. If the execution subject is a terminal device or server, it can receive the multiple frames of images from the unmanned aerial vehicle and obtain the reconstructed ortho-image and the geographic coordinates of the target point in it under the world coordinate system by the following method, for subsequent user interaction.
As shown in fig. 2, the method of the present application may comprise:
s101, obtaining a plurality of frames of images collected by the unmanned aerial vehicle in a target area, wherein the target area is an area where a target point is located, the plurality of frames of images have an overlapping area, at least one frame of image in the plurality of frames of images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle.
Optionally, the unmanned aerial vehicle may shoot multi-frame images at different angles in the target area, that is, shoot in a preset range of an area where the target point is located, and obtain an image of a scene where the target point is located.
Optionally, the acquired multi-frame images are provided with overlapping regions, so that a three-dimensional scene reconstruction can be performed from the multi-frame images.
The multi-frame images may all include a target point, and the target point may be understood as a point that needs to be dotted in the real scene. In addition, at least one frame of image in the multi-frame images can also contain the target point so as to reduce the difficulty of image acquisition.
The target point may be a real object in the scene, for example a tree or an electric tower. Dotting the target point can assist later operation planning, including unmanned aerial vehicle route planning, obstacle avoidance, terrain-following flight, and the like. For example, by dotting a tree, the unmanned aerial vehicle can be directed to fly above the tree according to the acquired geographic coordinates to spray pesticide, or the unmanned aerial vehicle can be directed to avoid an electric tower according to the tower's acquired geographic coordinates.
In this embodiment, the multiple frames of images collected by the unmanned aerial vehicle are each tagged with the geographic coordinate information (GPS coordinates) of the image, acquired through the positioning system carried by the unmanned aerial vehicle. Optionally, the unmanned aerial vehicle is equipped with a high-precision positioning system, which may be, but is not limited to: GPS (Global Positioning System), RTK (Real-Time Kinematic), or PPK (Post-Processed Kinematic).
Optionally, with the high-precision positioning system carried by the unmanned aerial vehicle, the geographic coordinate information tagged on the collected frames is highly accurate, so three-dimensional reconstruction can be performed directly from the per-frame geographic coordinate information, without collecting the geographic coordinate information of ground control points; this improves the efficiency of three-dimensional image reconstruction and hence the dotting efficiency of the target point.
S102, determining pose information and initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image.
Optionally, the pose information and initial three-dimensional point cloud corresponding to each frame of image can be recovered from the frames through a preset calculation method. The pose information corresponding to a frame describes the pose of the camera when the unmanned aerial vehicle captured that frame, and can include a camera position center vector, a rotation parameter vector (containing three rotation angles), and the like. A three-dimensional point cloud refers to a collection of three-dimensional points representing measurement points.
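For illustration only, the per-frame pose record described above could be represented as follows (the class name and field layout are assumptions made for this sketch, not part of the application):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FramePose:
    """Pose of the onboard camera at the moment a frame was captured."""
    C: np.ndarray  # camera position center vector, shape (3,)
    R: np.ndarray  # rotation parameter vector (three rotation angles), shape (3,)

# A three-dimensional point cloud is then simply a set of 3D measurement
# points, e.g. an (M, 3) array of coordinates:
point_cloud = np.zeros((0, 3))
```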
S103, image reconstruction is carried out according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image, and the geographic coordinates of the target point in the reconstructed ortho-image and the reconstructed ortho-image under the world coordinate system are obtained.
Optionally, the three-dimensional point cloud recovered from the images alone has low accuracy, and its coordinates are not in the world coordinate system. The obtained initial three-dimensional point cloud can therefore be further optimized using the per-frame pose information and the per-frame geographic coordinate information, and image reconstruction can be performed on the optimized point cloud to obtain the reconstructed ortho-image. At the same time, the geographic coordinates, in the world coordinate system, of the target point in the reconstructed ortho-image are obtained.
Optionally, the geographic coordinates of the target point under the world coordinate system are obtained, that is, the target point is dotted.
To sum up, the unmanned aerial vehicle dotting method provided by this embodiment includes: acquiring multiple frames of images collected by an unmanned aerial vehicle in a target area, where the target area is the area in which a target point is located, the multiple frames of images have overlapping regions, at least one of the frames contains the target point, and each frame is tagged with geographic coordinate information obtained by the positioning system of the unmanned aerial vehicle; determining the pose information and initial three-dimensional point cloud corresponding to each frame of image from the multiple frames of images; and reconstructing the image according to the per-frame pose information and initial three-dimensional point cloud and the per-frame geographic coordinate information, to obtain the reconstructed ortho-image and the geographic coordinates, in the world coordinate system, of the target point in the reconstructed ortho-image. In this scheme, based on the multiple frames of images carrying geographic coordinate information acquired by the positioning system of the unmanned aerial vehicle, the pose information and initial three-dimensional point cloud corresponding to each frame can be determined, and image reconstruction can then be performed from the per-frame pose information and initial three-dimensional point cloud together with the per-frame geographic coordinate information, yielding the reconstructed ortho-image and the geographic coordinates of the target point in that image under the world coordinate system. Because the unmanned aerial vehicle carries a high-precision positioning system, the geographic coordinate information obtained for each frame is highly accurate, so fast and accurate three-dimensional image reconstruction can be performed directly from that information together with the per-frame pose information and three-dimensional point cloud recovered from the images; there is no need to manually collect the geographic coordinates of ground control points and perform image reconstruction from those control points plus the multiple frames of images. This improves the speed of image reconstruction to a certain extent and thereby effectively improves the efficiency of target point dotting.
Optionally, in step S101, before obtaining the multiple frames of images acquired by the unmanned aerial vehicle in the target area, the method of the present application may further include: determining a target shooting position of the unmanned aerial vehicle, wherein a preset position relation exists between the target shooting position and a target point; and determining a target area according to the target shooting position and a preset offset distance.
Optionally, the preset position relationship between the target shooting position and the target point indicates that the target shooting position is located above the target point, which ensures that the target point appears in the multiple frames of images shot by the unmanned aerial vehicle.
Take the determined target shooting position of the unmanned aerial vehicle to be (x, y), i.e. the longitude and latitude coordinates of the target shooting position. From this position, offsets by a preset distance can be applied in each direction to determine the target area.
Assuming the preset offsets are dx and dy, the area enclosed by the offset positions (x+dx, y), (x-dx, y), (x, y+dy), and (x, y-dy) can serve as the target area, and the unmanned aerial vehicle can be controlled to shoot within it with the lens pointing vertically downward to collect the multiple frames of images. Here dx and dy are preset longitude and latitude offsets, which can be adapted according to the distance between the unmanned aerial vehicle and the target point.
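A minimal sketch of the offset rule just described (the function name and tuple layout are illustrative assumptions):

```python
def target_area_corners(x, y, dx, dy):
    """Offset the target shooting position (x, y) by the preset
    longitude/latitude offsets dx, dy; the four offset positions
    enclose the target area in which the drone collects images."""
    return [(x + dx, y), (x - dx, y), (x, y + dy), (x, y - dy)]

# e.g. a box of roughly 0.0002 degrees around the shooting position
corners = target_area_corners(113.25, 23.14, 0.0002, 0.0002)
```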
Optionally, in the above step, determining the target shooting position of the drone may include: determining position information of a target point in an image based on image information acquired by the unmanned aerial vehicle; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point in the image.
In one achievable mode, the unmanned aerial vehicle can be flown above the target point by manual remote control; using the image transmission of the camera carried by the unmanned aerial vehicle, the position of the target point in the collected image is determined, and when the unmanned aerial vehicle and the target point are in the preset position relationship, the current position of the unmanned aerial vehicle is taken as the target shooting position. The preset position relationship here means that the unmanned aerial vehicle is located above the target point.
Optionally, in the above step, determining a target shooting position of the drone may further include: responding to a target point confirmation operation input by a user on the general map, and acquiring position information of the target point; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point.
In another implementation manner, the position information of the target point may be acquired in response to a confirmation operation input by the user to the target point on the general map, where the confirmation operation input to the target point may be a click operation, including a single click operation, a double click operation, a long press operation, or a circle selection operation.
The unmanned aerial vehicle can read the position information of the target point and automatically fly to its vicinity, into the preset position relationship with the target point, so that the current position of the unmanned aerial vehicle can be used as its target shooting position.
Of course, the above embodiments only exemplify two possible implementations; in practice there may be others, as long as at least one frame of the multiple frames of images shot by the unmanned aerial vehicle in the target area (determined from the target shooting position and the preset offset distance) contains the target point.
Fig. 3 is a schematic flow chart of another unmanned aerial vehicle dotting method according to the embodiment of the present application; optionally, as shown in fig. 3, before performing image reconstruction according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image in step S103 to obtain the reconstructed ortho-image and the geographic coordinates of the target point in the reconstructed ortho-image in the world coordinate system, the method of the present application may further include:
s201, calculating a similarity transformation matrix according to the pose information and the geographic coordinate information corresponding to each frame of image.
Optionally, because the per-frame pose information and initial three-dimensional point cloud recovered from the images carry no geographic coordinates in the world coordinate system, they can be transformed into the world coordinate system by aligning them with the GPS coordinate system.
Optionally, a similarity transformation matrix of the image pose and the image geographic coordinate information may be calculated according to the pose information and the geographic coordinate information corresponding to each frame of image.
S202, according to the similarity transformation matrix, the pose information corresponding to each frame of image and the initial three-dimensional point cloud are transformed to a world coordinate system, and the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation are obtained.
Optionally, based on the similarity transformation matrix obtained by this calculation, the pose information corresponding to each frame of image and the initial three-dimensional point cloud can be transformed into the world coordinate system, so that the transformed per-frame pose information has geographic coordinates in the world coordinate system and each point in the transformed three-dimensional point cloud likewise has geographic coordinates in the world coordinate system.
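The application does not spell out how the similarity transformation matrix is computed. One common choice, shown here only as a sketch under that assumption, is the closed-form Umeyama alignment between the camera centers recovered by SfM and the GPS centers of the frames:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Find scale s, rotation R, translation t with dst ~ s * R @ src + t.
    src: (N, 3) camera centers in the SfM frame; dst: (N, 3) GPS centers."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)  # cross-covariance of the two point sets
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt                    # proper rotation (det = +1)
    s = np.trace(np.diag(S) @ D) * len(src) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Applying the transform: C_world = s * R @ C_sfm + t for each camera center,
# and points_world = s * points_sfm @ R.T + t for the initial point cloud.
```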
Optionally, in step S103, reconstructing the image according to the per-frame pose information and initial three-dimensional point cloud and the per-frame geographic coordinate information, to obtain the reconstructed ortho-image and the geographic coordinates of the target point in it under the world coordinate system, may include: performing image reconstruction by fusing the geographic coordinate information of each frame according to the transformed per-frame pose information and the transformed three-dimensional point cloud, to obtain the target three-dimensional point cloud.
In some embodiments, the geographical coordinate information of each frame of image may be fused based on the pose information corresponding to each frame of image after transformation and the transformed three-dimensional point cloud, and the obtained initial three-dimensional point cloud may be optimized to obtain an accurate three-dimensional point cloud, that is, the target three-dimensional point cloud.
Optionally, the optimization process may be implemented using a bundle adjustment (BA) algorithm. Bundle adjustment jointly optimizes the 3D structure and the view parameters (camera position, orientation, intrinsic calibration, and radial distortion) to obtain the best reconstruction under the assumption that the extracted image features contain some noise.
Optionally, optimization can be achieved by combining the reprojection error and the GPS error by using the above algorithm, and an optimization error equation can be as follows:
$$E = E_{\mathrm{reproj}} + \lambda_{\mathrm{GPS}}\, E_{\mathrm{GPS}}$$

$$E_{\mathrm{reproj}} = \sum_{i=1}^{N} \sum_{q_{ij} \in Q_i} \left\| \Pi(e_i, p_j, I) - q_{ij} \right\|^2$$

$$E_{\mathrm{GPS}} = \sum_{i=1}^{N} \left\| C_i - T_i \right\|^2$$

where the transformed pose information of the multiple frames is denoted ε = {e_i(R_i, C_i) | i = 1, 2, ..., N}; the pose e_i of the i-th frame includes the camera position center vector C_i at capture time and the rotation parameter vector R_i containing 3 rotation angles; the M three-dimensional points are P = {p_j | j = 1, 2, ..., M}; and Q = {q_ij ∈ Q_i | p_j ∝ Q_i} is the set of all observations across the frames, where q_ij is the pixel coordinate at which the j-th three-dimensional point is observed in the i-th frame, Q_i is the observation set of the i-th frame, and the symbol ∝ denotes that a three-dimensional point is observed in that photograph. I is the camera intrinsics, including focal length, camera center, and distortion parameters; Π(e_i, p_j, I) computes the reprojection of the j-th three-dimensional point into the i-th frame, using a pinhole camera model. The geographic coordinate center of each frame is recorded as T = {T_i | i = 1, ..., N}, and λ_GPS is the penalty coefficient of the GPS constraint.
Optionally, substituting the transformed per-frame pose information, the transformed three-dimensional point cloud, and the per-frame geographic coordinate information into the above formula and optimizing it with BA (bundle adjustment, minimizing the reprojection error) yields the target pose information for each frame of image and the target three-dimensional point cloud.
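As a sketch of how the combined error above could be evaluated (the callable `project` stands in for the pinhole reprojection Π(e_i, p_j, I), and all names here are assumptions rather than the application's own implementation):

```python
import numpy as np

def ba_residuals(poses, points, observations, T, project, lam_gps):
    """poses: list of (R_i, C_i) per frame; points: (M, 3) array;
    observations: iterable of (i, j, q_ij), with q_ij the observed pixel
    of point j in frame i; T: (N, 3) per-frame GPS centers."""
    res = []
    for i, j, q_ij in observations:
        res.append(project(poses[i], points[j]) - q_ij)  # reprojection term
    for i, (R_i, C_i) in enumerate(poses):
        res.append(np.sqrt(lam_gps) * (C_i - T[i]))      # GPS constraint term
    return np.concatenate([np.ravel(r) for r in res])

# Minimizing the sum of squares of this residual vector, e.g. with
# scipy.optimize.least_squares, performs the BA optimization described above.
```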
Fig. 4 is a schematic flowchart of another unmanned aerial vehicle dotting method provided in the embodiment of the present application; optionally, as shown in fig. 4, in the step S103, reconstructing an image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image, to obtain the reconstructed ortho-image and the geographic coordinates of the target point in the reconstructed ortho-image in the world coordinate system, where the reconstructing may include:
s301, triangular mesh reconstruction is carried out according to the target three-dimensional point cloud to obtain an initial triangular mesh.
Optionally, Delaunay triangular mesh reconstruction may be performed on the target three-dimensional point cloud to obtain a 2.5D triangular mesh, i.e. the initial triangular mesh described above. Here 2.5D, commonly called pseudo-3D, is a graphics technique combining 3D and 2D. Interpolating over the initial triangular mesh yields a DSM (Digital Surface Model). This embodiment only applies the Delaunay mesh reconstruction algorithm; the algorithm itself can be understood with reference to the prior art and is not described here.
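A minimal sketch of the 2.5D meshing and DSM interpolation using SciPy (the point-cloud file name and the sample coordinate are placeholders):

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

pts = np.load("target_point_cloud.npy")     # (N, 3) optimized target point cloud
tri = Delaunay(pts[:, :2])                  # 2.5D: triangulate in the XY plane only
dsm = LinearNDInterpolator(tri, pts[:, 2])  # interpolate heights over the mesh
height = dsm(500.0, 300.0)                  # DSM height at an example (x, y)
```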
S302, performing texture mapping on the initial triangular mesh according to the texture information of each frame of image to obtain a target triangular mesh.
The initial triangular mesh obtained above contains no texture; to reconstruct an image of the real scene, texture mapping must be performed on the initial triangular mesh to obtain the target triangular mesh.
Optionally, texture mapping may be performed on the initial triangular mesh according to texture information of each frame of image acquired by the unmanned aerial vehicle, so as to obtain a target triangular mesh.
And S303, carrying out forward projection calculation on the target triangular mesh to obtain a reconstructed orthoimage.
Forward projection calculation can be performed on the obtained target triangular mesh to obtain the reconstructed ortho-image. Here, forward projection means mapping a spatial point X to an image point on the image plane through the camera P. Through the forward projection calculation, the reconstructed ortho-image is obtained from the target triangular mesh; it is a two-dimensional image, but each pixel in it carries geographic coordinates in the world coordinate system. Therefore, the geographic coordinates, in the world coordinate system, of the target point in the reconstructed image are obtained together with the reconstructed ortho-image.
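In simplified form, the forward projection to the ortho-image can be sketched as a vertical splat of the textured mesh onto a regular ground grid; a real implementation rasterizes whole triangles, and the per-vertex color used here is an assumption made to keep the sketch short:

```python
import numpy as np

def orthophoto(vertices, colors, gsd, origin):
    """vertices: (N, 3) world coordinates of the textured mesh; colors:
    (N, 3) uint8 texture colors; gsd: ground sample distance (metres per
    pixel); origin: (x_min, y_min) of the output grid."""
    cols = ((vertices[:, 0] - origin[0]) / gsd).astype(int)
    rows = ((vertices[:, 1] - origin[1]) / gsd).astype(int)
    img = np.zeros((rows.max() + 1, cols.max() + 1, 3), np.uint8)
    z_buf = np.full(img.shape[:2], -np.inf)
    for r, c, z, col in zip(rows, cols, vertices[:, 2], colors):
        if z > z_buf[r, c]:          # keep only the highest (visible) surface
            z_buf[r, c] = z
            img[r, c] = col
    return img  # pixel (r, c) maps back to geo coords origin + (c*gsd, r*gsd)
```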
Optionally, in step S102 above, determining the pose information and initial three-dimensional point cloud corresponding to each frame of image from the multiple frames of images may include: taking the multiple frames of images as input to a structure-from-motion algorithm and respectively obtaining the pose information and initial three-dimensional point cloud corresponding to each frame of image.
In this embodiment, a structure-from-motion (SfM) algorithm may be used to recover the pose information and initial three-dimensional point cloud corresponding to each frame from the images.
Optionally, the pose information and initial three-dimensional point cloud corresponding to each frame of image may be recovered from the information in the overlapping portions between the frames and the geographic coordinate information of each frame. The specific calculation process of the algorithm can be understood with reference to existing methods and is not repeated here.
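The application treats SfM as prior art; purely as an illustration, the two-view core of such a pipeline could look like the OpenCV sketch below (the intrinsics and file names are placeholder assumptions, and a full pipeline would chain this incrementally over all overlapping frames):

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)  # two overlapping frames
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
f, cx, cy = 3000.0, img1.shape[1] / 2, img1.shape[0] / 2  # placeholder intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])

sift = cv2.SIFT_create()
kp1, d1 = sift.detectAndCompute(img1, None)
kp2, d2 = sift.detectAndCompute(img2, None)
good = [m for m, n in cv2.BFMatcher().knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]                # Lowe ratio test
p1 = np.float32([kp1[m.queryIdx].pt for m in good])
p2 = np.float32([kp2[m.trainIdx].pt for m in good])

E, _ = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, p1, p2, K)                # relative camera pose
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)             # homogeneous 3D points
cloud = (X[:3] / X[3]).T                                  # initial 3D point cloud
```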
Fig. 5 is a schematic flow chart of another unmanned aerial vehicle dotting method provided in the embodiment of the present application; optionally, the method of the present application may further include:
s401, displaying the reconstructed orthoimage.
Optionally, when the image reconstruction process above runs on the onboard embedded processor of the unmanned aerial vehicle, after the reconstructed ortho-image is obtained through the series of data calculations above, the processor of the unmanned aerial vehicle may send it to a terminal device independent of the unmanned aerial vehicle for display. When the image reconstruction process runs on the terminal device, the unmanned aerial vehicle only uploads the collected multiple frames of images to the terminal device, the whole data calculation is performed on the terminal device, and when the reconstructed ortho-image is obtained it is displayed on the terminal device's display.
S402, responding to target point determination operation input by a user through the reconstructed ortho image, and displaying the geographic coordinates of the target point in the world coordinate system.
Optionally, with the reconstructed ortho-image shown on the display of the terminal device, the user can select the target point on it, and the geographic coordinates of the selected target point in the world coordinate system are displayed to the user in real time on the display interface.
In an implementation manner, the user may determine the selection of the target point by clicking the target point in the reconstructed ortho image, and after determining the selection of the target point, the geographic coordinates of the target point in the world coordinate system may be displayed in response to the clicking operation. The click operation may be a single click operation, a double click operation, a long press operation, or the like, and is not particularly limited.
Through this interaction between the user and the terminal device, the user can dot the target point autonomously, which improves the user's operating experience. Of course, even when the user does not dot a target point through the reconstructed ortho-image displayed by the terminal device, as stated in step S103 the geographic coordinates of the target point in the world coordinate system are already acquired while the ortho-image is reconstructed, so target point dotting is already achieved; the difference is that in that case the dotting result is not displayed to the user.
Optionally, the method of the present application may further include: and marking the target point on the reconstructed ortho image in response to the target point selection operation input by the user through the reconstructed ortho image.
In some embodiments, the user may also mark the target point on the displayed reconstructed ortho-image by framing it, to help avoid the target point in subsequent operation planning. For example, if the target point is a house, marking it ensures that the house is avoided when pesticide is later sprayed on the trees in the area where the target point is located.
Optionally, the target point selection operation input by the user may be an operation different from the above-described target point determination operation, such as drawing a circle around the target point or a two-finger tap on the target point, so as to avoid confusing the target point determination operation with the target point selection operation.
Optionally, the method of the present application may further include: and dotting and splicing the target points to form a boundary.
The image reconstruction steps may be executed multiple times to obtain multiple reconstructed ortho-images, where the multi-frame images acquired in each execution are all collected in the target area of one target point. The target point in each reconstructed ortho-image is then dotted, and the resulting dotting points are spliced to form a boundary.
For example: boundary coordinates of a field are acquired. The four corners of the land can be respectively used as target points, corresponding multi-frame images can be respectively obtained for each target point, the reconstructed image corresponding to each target point is obtained by adopting the method, dotting of each target point is completed, and dotting of the four target points is spliced to form a terrain boundary.
To sum up, the unmanned aerial vehicle dotting method provided by the embodiments of the present application includes: acquiring multi-frame images collected by an unmanned aerial vehicle in a target area, where the target area is the area where a target point is located, the multi-frame images have overlapping areas, at least one frame of the multi-frame images contains the target point, and each frame of image is marked with geographic coordinate information obtained by positioning based on the positioning system of the unmanned aerial vehicle; determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame images; and performing image reconstruction according to the pose information and initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image, to obtain a reconstructed ortho-image and the geographic coordinates, in the world coordinate system, of the target point in the reconstructed ortho-image. In this scheme, based on multi-frame images carrying geographic coordinate information acquired by the positioning system of the unmanned aerial vehicle, the pose information and initial three-dimensional point cloud corresponding to each frame of image can be determined, and image reconstruction can then be performed from these together with the geographic coordinate information of each frame, yielding the reconstructed ortho-image and the geographic coordinates of the target point in the world coordinate system. Because the positioning system of the unmanned aerial vehicle is highly precise, the geographic coordinate information obtained for each frame of image is highly accurate, so fast and accurate three-dimensional image reconstruction can be performed directly from the geographic coordinate information of each frame together with the pose information and three-dimensional point cloud derived from the images, without manually collecting geographic coordinates of ground control points and reconstructing through those coordinates and the multi-frame images. This improves the image reconstruction speed to a certain extent and effectively improves target point dotting efficiency.
Secondly, by responding to the target point determination operation input by the user on the reconstructed ortho-image, the geographic coordinates of the target point in the world coordinate system can be displayed to the user, improving the user's dotting experience.
In addition, by responding to the target point selection operation input by the user on the reconstructed ortho-image, the target point can be marked on the reconstructed ortho-image to assist operation planning.
The following describes a device, equipment, storage medium and the like for executing the unmanned aerial vehicle dotting method provided by the present application; for their specific implementation processes and technical effects, refer to the description above, which is not repeated below.
Fig. 6 is a schematic diagram of an unmanned aerial vehicle dotting device provided in an embodiment of the present application, where the functions implemented by the device correspond to the steps executed by the foregoing method. The device may be understood as the onboard embedded processor of the unmanned aerial vehicle, or as a terminal device independent of the unmanned aerial vehicle.
Optionally, as shown in fig. 6, the unmanned aerial vehicle dotting device may include: an acquisition module 510, a determining module 520, and a reconstructing module 530;
the acquisition module 510 is configured to acquire multi-frame images collected by the unmanned aerial vehicle in a target area, where the target area is the area where a target point is located, the multi-frame images have overlapping areas, at least one frame of the multi-frame images contains the target point, and each frame of image is marked with its geographic coordinate information, obtained by positioning based on the positioning system of the unmanned aerial vehicle;
a determining module 520, configured to determine, according to multiple frames of images, pose information and an initial three-dimensional point cloud corresponding to each frame of image;
the reconstructing module 530 is configured to perform image reconstruction according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image, to obtain a reconstructed ortho-image and a geographic coordinate of a target point in the reconstructed ortho-image in the world coordinate system.
Optionally, the determining module 520 is further configured to determine a target shooting position of the unmanned aerial vehicle, where the target shooting position and the target point have a preset position relationship; and determining a target area according to the target shooting position and a preset offset distance.
Optionally, the determining module 520 is specifically configured to determine, based on image information acquired by the unmanned aerial vehicle, position information of the target point in the image; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point in the image.
Optionally, the determining module 520 is specifically configured to obtain location information of the target point in response to a target point confirmation operation input by the user on the general map; and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point.
Optionally, the apparatus further comprises: a calculation module and a transformation module;
the calculation module is used for calculating a similarity transformation matrix according to the pose information and the geographic coordinate information corresponding to each frame of image;
and the transformation module is used for transforming the pose information corresponding to each frame of image and the initial three-dimensional point cloud to a world coordinate system according to the similarity transformation matrix to obtain the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation.
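One common way to compute such a similarity transformation is the closed-form Umeyama alignment between the SFM camera centers and the corresponding GPS positions; the sketch below assumes this algorithm, which the application itself does not prescribe.

import numpy as np

def umeyama(src, dst):
    # Return scale s, rotation R, translation t with dst[i] ~ s * R @ src[i] + t
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # guard against a reflection solution
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t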
Optionally, the reconstructing module 530 is specifically configured to perform image reconstruction by fusing the geographic coordinate information of each frame of image according to the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation, so as to obtain a target three-dimensional point cloud.
Optionally, the reconstructing module 530 is specifically configured to perform triangular mesh reconstruction according to the target three-dimensional point cloud to obtain an initial triangular mesh; performing texture mapping on the initial triangular mesh according to the texture information of each frame of image to obtain a target triangular mesh; and carrying out forward projection calculation on the target triangular mesh to obtain a reconstructed ortho-image.
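As a rough illustration of this mesh-and-project step, the following 2.5D sketch triangulates a point cloud in the ground plane and samples the resulting surface on a regular grid as a stand-in for the forward (orthographic) projection; the library choice and the synthetic data are assumptions, and texture mapping is only indicated by a comment.

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

cloud = np.random.rand(500, 3) * [100.0, 100.0, 5.0]  # synthetic x, y, z points
tri = Delaunay(cloud[:, :2])                          # initial triangular mesh
surface = LinearNDInterpolator(tri, cloud[:, 2])      # piecewise-linear surface
xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
dsm = surface(xs, ys)  # per-pixel elevation; per-pixel texture lookup would follow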
Optionally, the determining module 520 is specifically configured to obtain the pose information and initial three-dimensional point cloud corresponding to each frame of image by using a structure-from-motion (SFM) algorithm with the multi-frame images as input.
Optionally, the apparatus further comprises: a display module;
the display module is used for displaying the reconstructed orthoimage; and responding to the target point determination operation input by the user through the reconstructed ortho image, and displaying the geographic coordinates of the target point in the world coordinate system.
Optionally, the apparatus further comprises: a labeling module;
and the marking module is used for responding to target point selection operation input by a user through the reconstructed ortho image and marking the target point on the reconstructed ortho image.
The above-mentioned apparatus is used for executing the method provided by the foregoing embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
These modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
The modules may be connected or communicate with each other via wired or wireless connections. Wired connections may include a metal cable, an optical cable, a hybrid cable, or any combination thereof. Wireless connections may include connections over a LAN, WAN, Bluetooth, ZigBee, NFC, or the like, or any combination thereof. Two or more modules may be combined into a single module, and any one module may be divided into two or more units. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system and apparatus described above may refer to the corresponding processes in the method embodiments, and are not detailed here.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device may be a computing device with a data processing function.
The apparatus may include: a processor 801 and a memory 802.
The memory 802 is used for storing programs, and the processor 801 calls the programs stored in the memory 802 to execute the above-mentioned method embodiments. The specific implementation and technical effects are similar, and are not described herein again.
The memory 802 stores therein program code that, when executed by the processor 801, causes the processor 801 to perform the various steps of the unmanned aerial vehicle dotting method according to various exemplary embodiments of the present application described in the "exemplary methods" section above in this specification.
The processor 801 may be a general-purpose processor, such as a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be executed directly by a hardware processor, or by a combination of hardware and software modules in a processor.
The memory 802, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory may include at least one type of storage medium, for example a flash memory, a hard disk, a multimedia card, a card-type memory, a random access memory (RAM), a static random access memory (SRAM), a programmable read-only memory (PROM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a magnetic memory, a magnetic disk, an optical disk, and so on. The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 802 in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
Optionally, the present application also provides a program product, such as a computer readable storage medium, comprising a program which, when being executed by a processor, is adapted to carry out the above-mentioned method embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to perform some of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (12)

1. An unmanned aerial vehicle dotting method, characterized in that the method comprises:
acquiring multi-frame images acquired by an unmanned aerial vehicle in a target area, wherein the target area is an area where a target point is located, the multi-frame images have an overlapping area, at least one frame of image in the multi-frame images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle;
determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image;
and reconstructing images according to the pose information corresponding to each frame of image, the initial three-dimensional point cloud and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and geographic coordinates of the target point in the reconstructed ortho-image under a world coordinate system.
2. The method of claim 1, wherein before the acquiring of the multi-frame images collected by the unmanned aerial vehicle in the target area, the method further comprises:
determining a target shooting position of the unmanned aerial vehicle, wherein a preset position relation exists between the target shooting position and the target point;
and determining the target area according to the target shooting position and a preset offset distance.
3. The method of claim 2, wherein the determining of the target shooting position of the unmanned aerial vehicle comprises:
determining position information of the target point in the image based on image information acquired by the unmanned aerial vehicle;
and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point in the image.
4. The method of claim 2, wherein the determining of the target shooting position of the unmanned aerial vehicle comprises:
responding to a target point confirmation operation input by a user on a general map, and acquiring the position information of the target point;
and determining the target shooting position of the unmanned aerial vehicle according to the position information of the target point.
5. The method according to claim 1, wherein before reconstructing the image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain the reconstructed ortho-image and the geographic coordinate of the target point in the reconstructed ortho-image under the world coordinate system, the method further comprises:
calculating a similarity transformation matrix according to the pose information corresponding to each frame of image and the geographic coordinate information;
and transforming the pose information corresponding to each frame of image and the initial three-dimensional point cloud to a world coordinate system according to the similarity transformation matrix to obtain the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation.
6. The method according to claim 5, wherein reconstructing an image according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and geographic coordinates of the target point in the reconstructed ortho-image under a world coordinate system comprises:
and fusing the geographic coordinate information of each frame of image to reconstruct the image according to the pose information corresponding to each frame of image after transformation and the three-dimensional point cloud after transformation to obtain the target three-dimensional point cloud.
7. The method according to claim 6, wherein the image reconstruction is performed according to the pose information and the initial three-dimensional point cloud corresponding to each frame of image and the geographic coordinate information of each frame of image to obtain the reconstructed ortho-image and the geographic coordinates of the target point in the reconstructed ortho-image under the world coordinate system, and the method comprises:
performing triangular mesh reconstruction according to the target three-dimensional point cloud to obtain an initial triangular mesh;
performing texture mapping on the initial triangular mesh according to the texture information of each frame of image to obtain a target triangular mesh;
and carrying out forward projection calculation on the target triangular mesh to obtain the reconstructed orthoimage.
8. The method of any of claims 1-7, further comprising:
displaying the reconstructed ortho-image;
and responding to a target point determination operation input by a user through the reconstructed ortho image, and displaying the geographic coordinates of the target point in the world coordinate system.
9. The method of any of claims 1-7, further comprising:
and marking the target point on the reconstructed ortho image in response to a target point selection operation input by a user through the reconstructed ortho image.
10. An unmanned aerial vehicle device of dotting, its characterized in that, the device includes: the device comprises an acquisition module, a determination module and a reconstruction module;
the acquisition module is used for acquiring multi-frame images acquired by the unmanned aerial vehicle in a target area, the target area is an area where a target point is located, the multi-frame images have an overlapping area, at least one frame of image in the multi-frame images contains the target point, geographic coordinate information of the image is marked in each frame of image, and the geographic coordinate information is obtained by positioning based on a positioning system of the unmanned aerial vehicle;
the determining module is used for determining pose information and an initial three-dimensional point cloud corresponding to each frame of image according to the multi-frame image;
the reconstruction module is used for reconstructing images according to the pose information corresponding to each frame of image, the initial three-dimensional point cloud and the geographic coordinate information of each frame of image to obtain a reconstructed ortho-image and a geographic coordinate of the target point in the reconstructed ortho-image under a world coordinate system.
11. An electronic device, comprising: a processor, a storage medium and a bus, wherein the storage medium stores program instructions executable by the processor, when the electronic device runs, the processor and the storage medium communicate with each other through the bus, and the processor executes the program instructions to execute the steps of the unmanned aerial vehicle dotting method according to any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of a drone dotting method according to any one of claims 1 to 9.
CN202011645448.0A 2020-12-31 2020-12-31 Unmanned aerial vehicle dotting method, device, equipment and storage medium Pending CN112634370A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011645448.0A CN112634370A (en) 2020-12-31 2020-12-31 Unmanned aerial vehicle dotting method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112634370A true CN112634370A (en) 2021-04-09

Family

ID=75291286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011645448.0A Pending CN112634370A (en) 2020-12-31 2020-12-31 Unmanned aerial vehicle dotting method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112634370A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020113423A1 (en) * 2018-12-04 2020-06-11 深圳市大疆创新科技有限公司 Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN109961497A (en) * 2019-03-22 2019-07-02 刘文龙 Real-time three-dimensional method for reconstructing based on unmanned plane image
CN109945853A (en) * 2019-03-26 2019-06-28 西安因诺航空科技有限公司 A kind of geographical coordinate positioning system and method based on 3D point cloud Aerial Images
CN110675450A (en) * 2019-09-06 2020-01-10 武汉九州位讯科技有限公司 Method and system for generating orthoimage in real time based on SLAM technology

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284211A (en) * 2021-06-09 2021-08-20 杭州今奥信息科技股份有限公司 Method and system for generating orthoimage
CN113421332A (en) * 2021-06-30 2021-09-21 广州极飞科技股份有限公司 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN113344002A (en) * 2021-07-29 2021-09-03 北京图知天下科技有限责任公司 Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium
CN113570649A (en) * 2021-09-26 2021-10-29 南方电网数字电网研究院有限公司 Gravity direction determination method and device based on three-dimensional model and computer equipment
CN113570649B (en) * 2021-09-26 2022-03-08 南方电网数字电网研究院有限公司 Gravity direction determination method and device based on three-dimensional model and computer equipment
CN113988701A (en) * 2021-11-15 2022-01-28 广州极飞科技股份有限公司 Terrain analysis method and device and electronic equipment
CN114202981A (en) * 2021-12-10 2022-03-18 新疆工程学院 Simulation platform for photogrammetry experiment
CN114202981B (en) * 2021-12-10 2023-06-16 新疆工程学院 Simulation platform for photogrammetry experiments
CN115205472A (en) * 2022-09-16 2022-10-18 成都国星宇航科技股份有限公司 Grouping method, device and equipment for live-action reconstruction pictures and storage medium
CN115760964A (en) * 2022-11-10 2023-03-07 亮风台(上海)信息科技有限公司 Method and equipment for acquiring screen position information of target object
CN115760964B (en) * 2022-11-10 2024-03-15 亮风台(上海)信息科技有限公司 Method and equipment for acquiring screen position information of target object
CN116153472A (en) * 2023-02-24 2023-05-23 萱闱(北京)生物科技有限公司 Image multidimensional visualization method, device, medium and computing equipment
CN116153472B (en) * 2023-02-24 2023-10-24 萱闱(北京)生物科技有限公司 Image multidimensional visualization method, device, medium and computing equipment
CN116843824A (en) * 2023-03-17 2023-10-03 瞰景科技发展(上海)有限公司 Real-time reconstruction method, device and system for three-dimensional model

Similar Documents

Publication Publication Date Title
CN112634370A (en) Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN112470092B (en) Surveying and mapping system, surveying and mapping method, device, equipment and medium
CN108810473B (en) Method and system for realizing GPS mapping camera picture coordinate on mobile platform
KR102200299B1 (en) A system implementing management solution of road facility based on 3D-VR multi-sensor system and a method thereof
CN111415409B (en) Modeling method, system, equipment and storage medium based on oblique photography
US9460554B2 (en) Aerial video annotation
CN111693025B (en) Remote sensing image data generation method, system and equipment
JP6138326B1 (en) MOBILE BODY, MOBILE BODY CONTROL METHOD, PROGRAM FOR CONTROLLING MOBILE BODY, CONTROL SYSTEM, AND INFORMATION PROCESSING DEVICE
CN104118561B (en) Method for monitoring large endangered wild animals based on unmanned aerial vehicle technology
JP6854195B2 (en) Image processing device, image processing method and program for image processing
CN115641401A (en) Construction method and related device of three-dimensional live-action model
WO2022077296A1 (en) Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium
CN112469967B (en) Mapping system, mapping method, mapping device, mapping apparatus, and recording medium
CN114565863B (en) Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image
CN112862966B (en) Method, device, equipment and storage medium for constructing surface three-dimensional model
JP2017201261A (en) Shape information generating system
CN115439531A (en) Method and equipment for acquiring target space position information of target object
CN114299236A (en) Oblique photogrammetry space-ground fusion live-action modeling method, device, product and medium
CN109712249B (en) Geographic element augmented reality method and device
CN111527375B (en) Planning method and device for surveying and mapping sampling point, control terminal and storage medium
CN113034347B (en) Oblique photography image processing method, device, processing equipment and storage medium
CN112041892A (en) Panoramic image-based ortho image generation method
IL267309B2 (en) Terrestrial observation device having location determination functionality
CN114092646A (en) Model generation method and device, computer equipment and storage medium
CN110800023A (en) Image processing method and equipment, camera device and unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 510000 Block C, 115 Gaopu Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant after: Guangzhou Jifei Technology Co.,Ltd.

Address before: 510000 Block C, 115 Gaopu Road, Tianhe District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou Xaircraft Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210409