CN114004977A - Aerial photography data target positioning method and system based on deep learning - Google Patents

Aerial photography data target positioning method and system based on deep learning

Info

Publication number
CN114004977A
CN114004977A
Authority
CN
China
Prior art keywords
target
image
aerial photography
data
deep learning
Prior art date
Legal status
Pending
Application number
CN202111244030.3A
Other languages
Chinese (zh)
Inventor
张周贤
秦方亮
钱晓琼
王丽
李顺
张志翱
Current Assignee
Chengdu Aircraft Industrial Group Co Ltd
Original Assignee
Chengdu Aircraft Industrial Group Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Aircraft Industrial Group Co Ltd
Priority to CN202111244030.3A
Publication of CN114004977A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image processing and target detection for aviation systems, and in particular to a target positioning method and system based on deep learning. The method comprises the following steps: S1, acquiring aerial photography data and preprocessing it; S2, inputting the aerial photography data into a pre-trained neural network, which outputs the target type and the target position; S3, acquiring the airframe orientation information, the image shooting information, and the target position, and calculating the target positioning information from them. The method has good robustness, achieves high detection accuracy and speed for tiny, weak, small, and inconspicuous targets in the high-altitude aerial photography environment, and, by accounting for the influence of different UAV attitudes on aerial positioning, effectively improves the accuracy of target positioning.

Description

Aerial photography data target positioning method and system based on deep learning
Technical Field
The invention relates to the field of image processing and target detection for aviation systems, and in particular to a target positioning method and system based on deep learning.
Background
With the development of UAV image reconnaissance technology, a large amount of information such as ground optical images and photoelectric video shot at high altitude can be obtained using reconnaissance equipment, and ground targets can be accurately identified and positioned with an accurate aerial target detection algorithm.
In a high-altitude detection environment, under the influence of illumination angle and intensity, shooting hardware, cloud and fog occlusion, airborne platform flight speed, and the like, ships, vehicles, and other targets of interest often appear tiny, inconspicuous, and easily confused in the captured images, which makes conventional image preprocessing methods such as denoising, segmentation, and feature extraction unsuitable. Existing target detection and positioning methods fall short in automatic detection rate and accuracy, and suffer from weak robustness, high algorithmic time complexity, and low target identification and positioning efficiency.
Disclosure of Invention
The invention aims to address the low automatic detection rate and low accuracy of conventional target detection and positioning methods in the prior art by providing an aerial photography data target positioning method and system based on deep learning.
A first aspect of the invention provides an aerial photography data target positioning method based on deep learning, which comprises the following steps:
S1, acquiring aerial photography data and preprocessing it;
S2, inputting the aerial photography data into a pre-trained neural network, which outputs the target type and the target position;
S3, acquiring the airframe orientation information, the image shooting information, and the target position; and calculating the target positioning information from the airframe orientation information, the image shooting information, and the target position.
The airframe orientation information characterizes the attitude and position of the airframe when the UAV captures the current aerial photography data, and can be obtained directly from the airframe's inertial navigation system; the image shooting information characterizes the attitude and position of the shooting equipment when the current aerial photography data is captured, and can be obtained directly from the shooting equipment.
Further, the airframe orientation information includes: the included angle a between the airframe axis and the center line; the included angle b between the center line and the vertical direction; the UAV flight height H; and the included angle ψ between the airframe axis and due north.
The image shooting information includes: the offset distance L_θ of the image center point along the aircraft nose direction, and the offset distance L_φ of the image center point along the aircraft wing direction.
When the target is located in quadrant I or IV of the picture, the target positioning information is calculated by:

[equation image BDA0003320185140000021]
[equation image BDA0003320185140000022]

and when the target is located in quadrant II or III of the picture, the target positioning information is calculated by:

[equation image BDA0003320185140000023]
[equation image BDA0003320185140000024]

with the auxiliary relation

[equation image BDA0003320185140000025]

where e, the horizontal included angle between the target center and the shooting center, is given by

[equation image BDA0003320185140000026]

X_ox is the longitude coordinate of the original image center point and X_oy is its latitude coordinate; X'_ox and X'_oy are the longitude and latitude coordinates of the image center point under the influence of the aircraft attitude; X''_ox and X''_oy are the target longitude and latitude coordinates. The operator "−" means: after the distances along longitude and latitude are calculated, the longitude/latitude difference corresponding to those distances is derived from the relation between longitude/latitude and ground distance.
Further, the preprocessing comprises: segmenting the original image into a plurality of sub-images using a large-scale image segmentation method.
Further, the neural network comprises: a CSP feature extraction module, a spatial pyramid pooling module, a feature pyramid fusion module, and a detection output module.
Further, the training of the neural network comprises the following steps:
S51, acquiring aerial photography data;
S52, marking each target in the aerial photography data with a rectangular frame, and recording in a document the target type and the relative coordinates of the target's upper-left and lower-right corners, to obtain training samples;
S53, inputting the training samples into the neural network and training it; the training is complete when the neural network's loss function converges.
Further, when the number of training samples is insufficient, it is increased by a data enhancement and expansion method, which includes: a random embedding algorithm, a random scale scaling algorithm, or a mosaic training algorithm.
A second aspect of the invention provides a deep-learning-based aerial photography data target positioning system, comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above deep-learning-based aerial photography data target positioning method.
A third aspect of the invention provides a readable storage medium storing a computer program which, when executed by a processor, implements the deep-learning-based aerial photography data target positioning method described above.
A fourth aspect of the invention provides an unmanned aerial vehicle comprising the above deep-learning-based aerial photography data target positioning system.
In summary, owing to the adopted technical scheme, the invention has the following beneficial effects:
1. The method provided by the invention has good robustness, and achieves high detection accuracy and high detection speed for tiny, weak, small, and inconspicuous targets in the high-altitude aerial photography environment;
2. The method accounts for the influence of different UAV attitudes on aerial positioning, so the target positioning precision is high;
3. The method can generate neural network training samples through data expansion and enhancement, which in practice effectively mitigates the scarcity of aerial photography material and of training samples;
4. For large-scale UAV aerial images, the method divides the image into a plurality of sub-pictures, further improving target identification efficiency.
Drawings
FIG. 1 is a flowchart illustrating an overall method for locating an object in aerial data based on deep learning according to an exemplary embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network provided by an exemplary embodiment of the present invention;
FIG. 3a is a schematic view of the vertical projection of the UAV during aerial photography;
FIG. 3b is a schematic view of the UAV from one angle during aerial photography;
FIG. 3c is a schematic view of the UAV from another angle during aerial photography;
FIG. 4 is a schematic illustration of the aircraft orientation in an exemplary embodiment of the invention;
FIG. 5 is a data graph obtained using the proposed method in an exemplary embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
As shown in FIG. 1, an aerial photography data target positioning method based on deep learning includes the following steps:
S1, acquiring aerial photography data and preprocessing it;
S2, inputting the aerial photography data into a pre-trained neural network, which outputs the target type and the target position;
S3, acquiring the airframe orientation information, the image shooting information, and the target position; and calculating the target positioning information from the airframe orientation information, the image shooting information, and the target position.
Further, the aerial data can be picture data and/or video data.
The airframe orientation information characterizes the attitude and position of the airframe when the UAV captures the current aerial photography data, and can be obtained directly from the airframe's inertial navigation system; the image shooting information characterizes the attitude and position of the shooting equipment when the current aerial photography data is captured, and can be obtained directly from the shooting equipment; since both acquisitions are prior art, they are not detailed herein.
In practice, because a UAV usually cruises at a relatively high altitude, the aerial data it captures is usually large in scale. The preprocessing of the invention therefore adopts a large-scale image segmentation method with a block detection and merging strategy: an N×N image window (e.g., 800 × 800 pixels) is constructed, the original image captured by the UAV is segmented into a plurality of sub-images for model training and target detection, and in the detection stage the detection results of the sub-images are mapped back to the original image, enabling rapid detection and identification of large-scale images. During segmentation, to prevent a target from being cut apart and degrading subsequent identification accuracy, this embodiment segments the large-scale image with an overlap area of 20%, as illustrated in the sketch below.
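A minimal Python sketch of this overlapping-window segmentation and of mapping detections back to the original image follows; the function names and border handling are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def split_into_tiles(image, window=800, overlap=0.2):
    """Yield (x0, y0, tile) windows covering the whole image with the stated overlap."""
    stride = int(window * (1 - overlap))   # 800 px window, 20% overlap -> 640 px stride
    h, w = image.shape[:2]
    ys = list(range(0, max(h - window, 0) + 1, stride))
    xs = list(range(0, max(w - window, 0) + 1, stride))
    if ys[-1] + window < h:                # make sure the bottom border is covered
        ys.append(h - window)
    if xs[-1] + window < w:                # ... and the right border
        xs.append(w - window)
    for y0 in ys:
        for x0 in xs:
            yield x0, y0, image[y0:y0 + window, x0:x0 + window]

def map_box_back(box, x0, y0):
    """Map a detection box (x1, y1, x2, y2) from tile coordinates to the original image."""
    x1, y1, x2, y2 = box
    return x1 + x0, y1 + y0, x2 + x0, y2 + y0

# usage: tile a synthetic 1500 x 2000 image
tiles = list(split_into_tiles(np.zeros((1500, 2000, 3), dtype=np.uint8)))
```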
Specifically, as shown in FIG. 2, the neural network used in this embodiment is a YOLOv4 network with a CSPDarknet53 backbone, which includes: a CSP feature extraction module, a spatial pyramid pooling module, a feature pyramid fusion module, and a detection output module. In use, the network parameters can be adjusted by modifying the network's cfg file, tuning the number of training iterations, the learning rate, and the batch parameters.
First, the image is fed into the neural network model and the CSP module extracts its initial high-dimensional features. Second, the initial high-dimensional features are input into the spatial pyramid pooling module to generate pooled high-dimensional features, improving the network's ability to extract spatial features from the image. Then, a feature pyramid fusion module built on a path aggregation network fuses the pooled high-dimensional features with the initial high-dimensional features, unifying feature dimensions by upsampling to generate fusion features. Finally, the fusion features at different scales are input into the detection output module to generate the target's positioning and classification information.
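As a hedged illustration of the spatial pyramid pooling step, the PyTorch sketch below mirrors the standard YOLOv4 SPP block; the 5/9/13 kernel sizes are the usual YOLOv4 choice and are an assumption here, since the patent does not list them:

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """YOLOv4-style spatial pyramid pooling: concatenate the input with
    stride-1 max-pooled variants of itself (spatial size unchanged)."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes
        )

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

feats = torch.randn(1, 512, 20, 20)   # initial high-dimensional features from the CSP backbone
pooled = SPP()(feats)                 # pooled high-dimensional features: (1, 2048, 20, 20)
```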
As shown in FIGS. 3a, 3b, and 3c, in actual flight both the UAV's flight attitude and its flight height affect the aerial data it captures, and hence the positioning of the target; the target position therefore needs to be corrected using the airframe orientation information and the image shooting information to obtain the true target positioning information (longitude and latitude).
The airframe orientation information includes: the included angle a between the airframe axis and the center line; the included angle b between the center line and the vertical direction; the UAV flight height H; and the included angle ψ between the airframe axis and due north.
The image shooting information includes: the offset distance L_θ of the image center point along the aircraft nose direction, and the offset distance L_φ of the image center point along the aircraft wing direction.
In FIG. 3a, the black arrow is the axial direction of the airframe. When the UAV performs image scanning, the image of the scanned plane is perpendicular to the airframe axis, so the image obtained by the actual scanning plane is the dotted line perpendicular to the airframe axis in the figure (because the aircraft has pitch and roll angles, the two are not perpendicular in the vertical projection diagram).
Referring to FIGS. 3b and 3c, when the target is located in quadrant I or IV of the picture, the target positioning information is calculated by:

[equation image BDA0003320185140000071]
[equation image BDA0003320185140000072]

and when the target is located in quadrant II or III of the picture, the target positioning information is calculated by:

[equation image BDA0003320185140000073]
[equation image BDA0003320185140000074]

with the auxiliary relation

[equation image BDA0003320185140000075]

where e, the horizontal included angle between the target center and the shooting center, is given by

[equation image BDA0003320185140000076]

X_ox is the longitude coordinate of the original image center point and X_oy is its latitude coordinate; X'_ox and X'_oy are the longitude and latitude coordinates of the image center point under the influence of the aircraft attitude; X''_ox and X''_oy are the target longitude and latitude coordinates. The operator "−" means: after the distances along longitude and latitude are calculated, the longitude/latitude difference corresponding to those distances is derived from the relation between longitude/latitude and ground distance.
In practical use, a coordinate system can be established with the picture's shooting center point as the origin, the geographical due-east direction in the picture as the positive x-axis, and the geographical due-north direction as the positive y-axis, with quadrants I, II, III, and IV numbered counterclockwise. The due-east and due-north directions in the picture can be determined from the UAV's inertial navigation system at the time of shooting; being prior art, this is not detailed herein.
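A minimal helper for this quadrant rule might look as follows; the tie-breaking for points lying exactly on an axis is an assumption, since the patent does not specify it:

```python
def quadrant(x_east, y_north):
    """Quadrant of a target in the picture frame described above: origin at the
    shooting center, +x due east, +y due north, quadrants I-IV counterclockwise."""
    if x_east >= 0 and y_north >= 0:
        return 1
    if x_east < 0 and y_north >= 0:
        return 2
    if x_east < 0 and y_north < 0:
        return 3
    return 4
```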
Further, the training of the neural network comprises the following steps:
S51, acquiring aerial photography data;
S52, marking each target in the aerial photography data with a rectangular frame, and recording in a document the target type and the relative coordinates, within the picture, of the target's upper-left and lower-right corners, to obtain training samples;
S53, inputting the training samples into the neural network and training it; the training is complete when the neural network's loss function converges.
Specifically, in this embodiment, labelImg software can be used to annotate the rectangular box and target type on the aerial photography data and to extract the relative coordinates P(x1, y1, x2, y2) of the target's upper-left and lower-right corners.
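Since labelImg writes Pascal VOC XML with absolute pixel coordinates, one way to turn that output into the relative-coordinate tuple P(x1, y1, x2, y2) is sketched below; the XML layout is standard labelImg output, but the parser itself is an illustration rather than the patent's tooling:

```python
import xml.etree.ElementTree as ET

def voc_to_relative(xml_path):
    """Parse one labelImg XML file into [(target_type, (x1, y1, x2, y2)), ...]
    with corner coordinates relative to the picture size."""
    root = ET.parse(xml_path).getroot()
    w = float(root.findtext("size/width"))
    h = float(root.findtext("size/height"))
    samples = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        samples.append((
            obj.findtext("name"),
            (float(box.findtext("xmin")) / w, float(box.findtext("ymin")) / h,
             float(box.findtext("xmax")) / w, float(box.findtext("ymax")) / h),
        ))
    return samples
```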
In actual use, when the number of training samples is insufficient, it can be increased by a data enhancement and expansion method, which includes: a random embedding algorithm, a random scale scaling algorithm, or a mosaic training algorithm. The aerial data used for training the neural network can include ground or sea-surface images, video footage, and the like, of different target objects, covering different geographic environments, climates, and time periods.
The random embedding algorithm randomly embeds generated target data into an original background image to construct a new training data set; random scale scaling and scale clustering improve the network's scale adaptability and can improve the model's training efficiency and generalization performance.
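Of the three expansion methods, mosaic training is the most involved; a minimal sketch under common YOLOv4-style assumptions (random split point, simple resizing of four source images, box remapping omitted) is:

```python
import random
import numpy as np
import cv2

def mosaic(images, out_size=800):
    """Stitch four HxWx3 uint8 images into one out_size x out_size composite."""
    cx = random.randint(out_size // 4, 3 * out_size // 4)  # random split point
    cy = random.randint(out_size // 4, 3 * out_size // 4)
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    cells = [(0, 0, cx, cy), (cx, 0, out_size - cx, cy),
             (0, cy, cx, out_size - cy), (cx, cy, out_size - cx, out_size - cy)]
    for img, (x0, y0, w, h) in zip(images, cells):
        canvas[y0:y0 + h, x0:x0 + w] = cv2.resize(img, (w, h))  # dsize is (width, height)
    return canvas
```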
A second aspect of this embodiment provides a deep-learning-based aerial photography data target positioning system, comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above deep-learning-based aerial photography data target positioning method.
A third aspect of this embodiment provides a readable storage medium storing a computer program which, when executed by a processor, implements the deep-learning-based aerial photography data target positioning method described above.
A fourth aspect of this embodiment provides an unmanned aerial vehicle comprising the above deep-learning-based aerial photography data target positioning system.
Specifically, targets detectable using the present invention include, but are not limited to: stadiums, parks, ships, factories, stations, farms, toll stations, airports, landmark buildings, vehicles, and other targets in the civil field, as well as tanks, armored vehicles, ships, military buildings, and other targets in the military field. The specific detection targets can be adapted to the actual use requirements and scenario by adjusting the training samples and the corresponding neural network structure parameters. For example, when used in forest fire-fighting operations, the training samples and the network structure can be adjusted accordingly, so that forest fire points are detected to guide the fire-fighting operation.
Example 2
This embodiment provides an aerial photography positioning system based on deep learning, which may include: an original-data management module, for collecting and managing the aerial photography data (such as image and video material) used for model training, providing storage, query, access, and extraction functions, and supporting file formats such as .jpg, .png, .tif, .avi, .mp4, .tfw, and .xml; a data enhancement and expansion module, for enhancing and expanding the aerial image and video data; a data processing and feature extraction module, for data analysis, preprocessing, and assisting network training in feature extraction; a feature data management module, for storing and managing the feature model data produced by model training; a model training module, for building the deep-learning network layers and optimizing training; and a target identification and detection module, for completing target identification, positioning, and visualization using the feature data and the deep convolutional neural network. A skeleton of this breakdown is sketched below.
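A hedged Python skeleton of the module breakdown above follows; all class and method names are illustrative assumptions, not the patent's implementation:

```python
class RawDataManager:
    """Stores, queries, and extracts the aerial material used for training."""
    SUPPORTED = {".jpg", ".png", ".tif", ".avi", ".mp4", ".tfw", ".xml"}
    def store(self, path): ...
    def query(self, keyword): ...

class DataAugmenter:
    """Random embedding / scale scaling / mosaic expansion of the data set."""
    def expand(self, samples): ...

class Preprocessor:
    """Data analysis, preprocessing (e.g., tiling), and feature-extraction support."""
    def preprocess(self, image): ...

class FeatureStore:
    """Stores and manages the feature model data produced by training."""
    def save(self, weights): ...

class ModelTrainer:
    """Builds the deep-learning network layers and optimizes training."""
    def train(self, samples): ...

class Detector:
    """Target identification, positioning, and visualization with the trained network."""
    def detect(self, image): ...
```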
Example 3
To test the actual effect of the invention, a test was performed with Google Maps simulation software. The pose of the UAV is shown in FIG. 4, and its parameter information in FIG. 5. The UAV latitude is set to 30.63080924 and its longitude to 104.08204021; the heading is 10 degrees from due north, the pitch angle is 5 degrees from vertical, the roll angle is 0 degrees, and the cruising height is 3000 meters. A university track-and-field stadium is selected as the detection target, and the aerial image, airframe orientation information, and image shooting information are exported with the relevant tools. The exported aerial image is 1280 × 720 pixels, the target center pixel is (649, 463), and the target is 44 × 71 pixels in width and height. Computed with the method provided by the invention, the longitude and latitude of the image center point are: latitude 30.63013788, longitude 104.08251741. By the quadrant rule the target falls in quadrant IV, and the computed target coordinates are: latitude 30.63013761, longitude 104.08251731. According to actual measurement on Google Maps, the target's true coordinates are: latitude 30.63063889, longitude 104.08231111. The corresponding real-world error is about 59 m.
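The reported error of about 59 m can be reproduced from the two coordinate pairs above with a small-offset approximation; this is an assumption, since the patent does not state how the error figure was computed:

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude (assumption)

def ground_error_m(lat1, lon1, lat2, lon2):
    """Ground distance between two nearby latitude/longitude points, in meters."""
    d_north = (lat2 - lat1) * M_PER_DEG_LAT
    d_east = (lon2 - lon1) * M_PER_DEG_LAT * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(d_north, d_east)

print(ground_error_m(30.63013761, 104.08251731, 30.63063889, 104.08231111))
# -> approximately 59 m, consistent with the reported error
```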
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. An aerial photography data target positioning method based on deep learning, characterized by comprising the following steps:
S1, acquiring aerial photography data and preprocessing it;
S2, inputting the aerial photography data into a pre-trained neural network, which outputs the target type and the target position;
S3, acquiring the airframe orientation information and the image shooting information; and calculating the target positioning information from the airframe orientation information, the image shooting information, and the target position;
the body orientation information includes: an included angle a between the axial direction of the machine body and a central line, an included angle b between the central line and the vertical direction, the flying height H of the unmanned aerial vehicle and an included angle psi between the axial direction of the machine body and the north direction;
the image capturing information includes: the image center point is offset by a distance L along the aircraft nose directionθThe central point of the image is offset by a distance L along the direction of the wing of the airplaneφ
The neural network is CSPdarknet53A network of YOLOv4 structures.
2. The deep learning based aerial photography data target positioning method according to claim 1,
when the target is located in quadrant I or IV of the picture, the target positioning information is obtained by the following formulas:

[equation image FDA0003320185130000011]
[equation image FDA0003320185130000012]

and when the target is located in quadrant II or III of the picture, the target positioning information is obtained by the following formulas:

[equation image FDA0003320185130000013]
[equation image FDA0003320185130000014]

with the auxiliary relation

[equation image FDA0003320185130000021]

wherein e is the horizontal included angle between the target center and the shooting center; X_ox is the longitude coordinate of the original image center point and X_oy is its latitude coordinate; X'_ox and X'_oy are the longitude and latitude coordinates of the image center point under the influence of the aircraft attitude; X''_ox and X''_oy are the target longitude and latitude coordinates; and the operator "−" means: after the distances along longitude and latitude are calculated, the longitude/latitude difference corresponding to those distances is derived from the relation between longitude/latitude and ground distance.
3. The deep-learning-based aerial photography data target positioning method according to claim 2, wherein the preprocessing comprises: segmenting the original image in the aerial photography data into a plurality of sub-images using a large-scale image segmentation method.
4. The deep-learning-based aerial photography data target positioning method according to claim 2, wherein the neural network comprises: a CSP feature extraction module, a spatial pyramid pooling module, a feature pyramid fusion module, and a detection output module;
the CSP feature extraction module is used for extracting the initial high-dimensional features of the image and inputting them into the spatial pyramid pooling module; the spatial pyramid pooling module is used for generating pooled high-dimensional features and improving the network's ability to extract spatial features from the image; the feature pyramid fusion module is built on a path aggregation network and is used for fusing the pooled high-dimensional features with the initial high-dimensional features, unifying feature dimensions by upsampling, generating fusion features at different scales, and inputting them into the detection output module; the detection output module is used for generating the target's positioning and classification information from the fusion features at different scales.
5. The deep-learning-based aerial photography data target positioning method according to any one of claims 1 to 4, wherein the training of the neural network comprises the following steps:
S51, acquiring aerial photography data;
S52, marking each target in the aerial photography data with a rectangular frame, and recording in a document the target type and the relative coordinates of the target's upper-left and lower-right corners, to obtain training samples;
S53, inputting the training samples into the neural network and training it; the training is complete when the neural network's loss function converges.
6. The deep-learning-based aerial photography data target positioning method according to claim 5, wherein, when the number of training samples is insufficient, it is increased by a data enhancement and expansion method, which includes: a random embedding algorithm, a random scale scaling algorithm, or a mosaic training algorithm.
7. A deep-learning-based aerial photography data target positioning system, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of claims 1-6.
8. A readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the deep-learning-based aerial photography data target positioning method of any one of claims 1-6.
9. An unmanned aerial vehicle comprising the deep-learning-based aerial photography data target positioning system of claim 7.
CN202111244030.3A 2021-10-25 2021-10-25 Aerial photography data target positioning method and system based on deep learning Pending CN114004977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111244030.3A CN114004977A (en) 2021-10-25 2021-10-25 Aerial photography data target positioning method and system based on deep learning


Publications (1)

Publication Number Publication Date
CN114004977A true CN114004977A (en) 2022-02-01

Family

ID=79923985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111244030.3A Pending CN114004977A (en) 2021-10-25 2021-10-25 Aerial photography data target positioning method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN114004977A (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376022A (en) * 2022-06-30 2022-11-22 广东工业大学 Application of small target detection algorithm based on neural network in unmanned aerial vehicle aerial photography
CN115376022B (en) * 2022-06-30 2024-04-05 广东工业大学 Application of small target detection algorithm in unmanned aerial vehicle aerial photography based on neural network
CN115346109A (en) * 2022-08-02 2022-11-15 北京新岳纵横科技有限公司 IOU (input/output Unit) strategy based enhanced sample generation method
CN116152635A (en) * 2023-01-30 2023-05-23 中国人民解放军96901部队 Unmanned aerial vehicle combined aerial photographing information sharing method based on blockchain
CN116188470A (en) * 2023-04-28 2023-05-30 成都航空职业技术学院 Unmanned aerial vehicle aerial photographing identification-based fault positioning method and system
CN116188470B (en) * 2023-04-28 2023-07-04 成都航空职业技术学院 Unmanned aerial vehicle aerial photographing identification-based fault positioning method and system
CN116543619A (en) * 2023-07-04 2023-08-04 中国科学院长春光学精密机械与物理研究所 Unmanned aerial vehicle photoelectric pod simulation training system
CN116543619B (en) * 2023-07-04 2023-08-29 中国科学院长春光学精密机械与物理研究所 Unmanned aerial vehicle photoelectric pod simulation training system

Similar Documents

Publication Publication Date Title
CN112767391B (en) Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN115439424B (en) Intelligent detection method for aerial video images of unmanned aerial vehicle
US20160283774A1 (en) Cloud feature detection
US8503730B2 (en) System and method of extracting plane features
CN113850126A (en) Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN110825101A (en) Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network
US20200357141A1 (en) Systems and methods for calibrating an optical system of a movable object
CN112184812B (en) Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system
CN110930508A (en) Two-dimensional photoelectric video and three-dimensional scene fusion method
CN109900274B (en) Image matching method and system
CN111598952A (en) Multi-scale cooperative target design and online detection and identification method and system
EP4068210A1 (en) System and method for automated estimation of 3d orientation of a physical asset
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN112700498A (en) Wind driven generator blade tip positioning method and system based on deep learning
CN117523461B (en) Moving target tracking and positioning method based on airborne monocular camera
Chen et al. Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles
Kaufmann et al. Shadow-based matching for precise and robust absolute self-localization during lunar landings
Marelli et al. ENRICH: Multi-purposE dataset for beNchmaRking In Computer vision and pHotogrammetry
Kikuya et al. Attitude determination algorithm using Earth sensor images and image recognition
WO2021141666A2 (en) Unmanned vehicle navigation, and associated methods, systems, and computer-readable medium
CN109764864B (en) Color identification-based indoor unmanned aerial vehicle pose acquisition method and system
CN116957360A (en) Space observation and reconstruction method and system based on unmanned aerial vehicle
CN113436276B (en) Visual relative positioning-based multi-unmanned aerial vehicle formation method
CN115144879A (en) Multi-machine multi-target dynamic positioning system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination