CN118097339A - Deep learning sample enhancement method and device based on low-altitude photogrammetry - Google Patents
- Publication number
- CN118097339A CN118097339A CN202410502536.7A CN202410502536A CN118097339A CN 118097339 A CN118097339 A CN 118097339A CN 202410502536 A CN202410502536 A CN 202410502536A CN 118097339 A CN118097339 A CN 118097339A
- Authority
- CN
- China
- Prior art keywords
- image
- sample
- low
- altitude
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
Abstract
The application relates to the field of remote sensing image target recognition, and in particular to a deep learning sample enhancement method and device based on low-altitude photogrammetry. Through unmanned aerial vehicle (UAV) flight and aerial triangulation, a DEM, a DOM, and accurate POS data for the original images are produced; rectangular samples of the target objects are then drawn manually on the DOM; next, with the aid of the POS data, the image-point coordinates of each rectangular sample's corner points on the corresponding images are calculated from the photogrammetric collinearity equations; finally, the rectangular sample is reconstructed on each original image via its minimum circumscribed rectangle. With this method, a single round of manual drawing can be expanded into several times or even tens of times as many enhancement samples, improving the generalization capability and practical effect of deep learning models. The method requires only simple parameter settings, is stable and reliable, and greatly improves production efficiency while keeping sample identification accurate.
Description
Technical Field
The application relates to the field of remote sensing image target recognition, in particular to a deep learning sample enhancement method and device based on low-altitude photogrammetry.
Background
Deep learning learns the intrinsic regularities and hierarchical representations of sample data; its goal is to give machines human-like analytical and learning ability so that they can solve many complex pattern recognition problems. In recent years, deep learning has advanced rapidly and is widely used in computer vision, speech recognition, natural language processing, and other fields. Studies have shown that the performance of a deep learning model depends to a large extent on the quantity and quality of its sample data. Sample data enhancement generates new training samples by transforming, expanding, and recombining the original training data, effectively alleviating problems such as insufficient data and uneven sample distribution, and improving model robustness and generalization.
In deep-learning-based remote sensing applications, common image data enhancement techniques include mirror flipping, random cropping, rotation, scaling, translation, brightness adjustment, and noise addition; these alter the appearance, viewing angle, illumination, and other factors of an image, increasing the diversity of the training samples. However, existing remote sensing data enhancement techniques use only the final orthophoto product; the original images from which the orthophoto was produced go unused, which wastes resources. In fact, in low-altitude photogrammetric remote sensing, a target object visible on the orthophoto also appears in several or even tens of original images (the number depends on the overlap of the aerial images). If the target object in those original images can be extracted through a mathematical correspondence and used to expand the deep learning sample data, the number of samples can be multiplied severalfold or more, greatly improving training. A deep learning sample enhancement method that exploits the original images associated with low-altitude photogrammetry is therefore needed.
Disclosure of Invention
The application mainly aims to provide a deep learning sample enhancement method based on the original images associated with low-altitude photogrammetry. It uses the POS data generated by aerial triangulation of the original images as auxiliary information and automatically constructs enhancement samples on the original images by means of the photogrammetric collinearity equations and the minimum circumscribed rectangle.
To achieve the above purpose, the invention adopts the following technical solution:
According to a first aspect, the invention provides a deep learning sample enhancement method based on low-altitude photogrammetry, comprising:
acquiring low-altitude aerial images, wherein an unmanned aerial vehicle, flying at the designed altitude, captures low-altitude aerial images carrying initial exterior orientation elements;
importing the low-altitude aerial images into a first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital orthophoto into a second application program, interpreting the target objects, manually drawing rectangular samples for a number of target objects, and acquiring the corner-point coordinates of the rectangular samples;
calculating the image-point coordinates corresponding to the corner-point coordinates using the photogrammetric collinearity equations, and judging whether the image-point coordinates fall within the specified range: if so, retaining the image as a sample enhancement image of the rectangular sample; otherwise, discarding it;
and constructing a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, rotating it geometrically to obtain the minimum circumscribed rectangle, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample, i.e. the rectangular sample reconstructed on the low-altitude aerial image.
Further, when manually drawing the rectangular samples of a number of target objects, to cope with the image-point displacement caused by a target object's height relative to the ground, each rectangular sample is expanded with a margin beyond the extent of the target object; the expansion distance is H/2, where H is the average relative height of the target object.
Further, during the geometric rotation used to find the minimum circumscribed rectangle, the rotation angle range is limited by the fact that one side of the minimum circumscribed rectangle is collinear with a side of the original polygon; only directions matching the angles of the polygon's sides need to be tested as rotation candidates.
Further, the acquiring of low-altitude aerial images, in which the unmanned aerial vehicle flying at the designed altitude captures low-altitude aerial images carrying initial exterior orientation elements, further comprises:
determining the aerial photography range: delimiting an area rich in deep learning target objects as the aerial photography range using the Ovital (Aowei) map;
performing UAV aerial photography: downloading digital elevation model data, setting the flight altitude according to the spatial scale of the target objects, laying out flight strips that satisfy the forward and side overlap requirements, and capturing low-altitude aerial images carrying initial exterior orientation elements while flying at the designed altitude.
Further, the importing of the low-altitude aerial images into the first application program, the aerial triangulation based on ground-surveyed image control points, and the construction and export of a digital elevation model and a digital orthophoto further comprise:
importing the low-altitude aerial images carrying initial exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-surveyed image control points;
exporting, after aerial triangulation, the exterior orientation elements of each image in the low-altitude aerial image set, the exterior orientation elements of image i being (Xs_i, Ys_i, Zs_i, φ_i, ω_i, κ_i), where (Xs_i, Ys_i, Zs_i) are the position coordinates and (φ_i, ω_i, κ_i) are the attitude angles;
constructing and exporting the digital elevation model and the digital orthophoto.
Further, the importing of the digital orthophoto into the second application program, the interpretation of target objects, the manual drawing of rectangular samples for a number of target objects, and the acquisition of the corner-point coordinates of the rectangular samples further comprise:
drawing the samples: loading the exported digital orthophoto into ArcMap software, identifying target objects by visual interpretation, and manually drawing rectangular samples for a number of target objects;
acquiring the corner-point coordinates of the rectangular samples: each rectangular sample has 4 corner points; the ground coordinates of a corner point P are (X_P, Y_P, Z_P), where X_P and Y_P can be read directly from, or converted within, the software, and Z_P is the elevation value of the digital elevation model at those coordinates.
Further, the calculating of the image-point coordinates corresponding to the corner-point coordinates using the photogrammetric collinearity equations, and the judging of whether the image-point coordinates fall within the specified range (retaining the image as a sample enhancement image of the rectangular sample if so, discarding it otherwise), further comprises:
determining the calculation range: selecting the four flight strips nearest to corner point P of the rectangular sample, and traversing all images of those four strips;
calculating, from the exterior orientation elements of each image in the four strips and the ground coordinates (X_P, Y_P, Z_P) of corner point P, the image-point coordinates (x, y) corresponding to corner point P using the photogrammetric collinearity equations;
selecting the sample enhancement images of the rectangular sample: judging whether the image-point coordinates corresponding to each corner point P fall within the specified range;
if the image-point coordinates of all 4 corner points of the rectangular sample fall within the extent of image i, retaining image i as a sample enhancement image of the rectangular sample; otherwise discarding it.
Further, the constructing of a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, the geometric rotation to obtain the minimum circumscribed rectangle, and the rotation of the minimum circumscribed rectangle to obtain the original-image enhancement sample reconstructed on the low-altitude aerial image further comprises:
acquiring the image-point coordinates of the rectangular sample, obtaining the 4 corner coordinates of the simple circumscribed rectangle in the image-space coordinates of the original image, and calculating the area of the simple circumscribed rectangle;
rotating the 4 corner points counterclockwise by a preset angle about the center of the simple circumscribed rectangle;
solving the simple circumscribed rectangle of the 4 rotated corner points, and recording its area, vertex coordinates, and rotation angle;
from all the simple circumscribed rectangles obtained over the successive rotations, taking the one with the smallest area together with its vertex coordinates and rotation angle;
rotating this smallest simple circumscribed rectangle back by the same angle; the resulting minimum circumscribed rectangle is the original-image enhancement sample reconstructed from the rectangular sample on image i.
According to a second aspect, the invention provides a deep learning sample enhancement device based on low-altitude photogrammetry, comprising:
one or more processors;
and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the deep learning sample enhancement method based on low-altitude photogrammetry described above.
Drawings
FIG. 1 is a workflow diagram of a deep learning sample enhancement method based on low-altitude photogrammetry in accordance with an embodiment of the present application;
FIG. 2 is a schematic diagram of the effect of the deep learning sample enhancement method based on low-altitude photogrammetry according to the embodiment of the present application;
fig. 3 is a block diagram of a deep learning sample enhancement device based on low-altitude photogrammetry according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," and the like in this disclosure are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature qualified by "first," "second," or "third" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two (for example, two or three) unless specifically defined otherwise. All directional indications (such as up, down, left, right, front, back) in the embodiments merely explain the relative positional relationships and movements of components in a particular orientation (as shown in the drawings); if that orientation changes, the directional indications change accordingly. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed, but may include other steps or elements not listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In a first embodiment, the invention provides a deep learning sample enhancement method based on low-altitude photogrammetry which, referring to fig. 1, comprises:
acquiring low-altitude aerial images, wherein an unmanned aerial vehicle, flying at the designed altitude, captures low-altitude aerial images carrying initial exterior orientation elements;
importing the low-altitude aerial images into a first application program, performing aerial triangulation based on ground-surveyed image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital orthophoto into a second application program, interpreting the target objects, manually drawing rectangular samples for a number of target objects, and acquiring the corner-point coordinates of the rectangular samples;
calculating the image-point coordinates corresponding to the corner-point coordinates using the photogrammetric collinearity equations, and judging whether the image-point coordinates fall within the specified range: if so, retaining the image as a sample enhancement image of the rectangular sample; otherwise, discarding it;
and constructing a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, rotating it geometrically to obtain the minimum circumscribed rectangle, and rotating the minimum circumscribed rectangle back to obtain the original-image enhancement sample, i.e. the rectangular sample reconstructed on the low-altitude aerial image.
Further, when manually drawing the rectangular samples of a number of target objects, to cope with the image-point displacement caused by a target object's height relative to the ground, each rectangular sample is expanded with a margin beyond the extent of the target object; the expansion distance is H/2, where H is the average relative height of the target object.
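The margin rule above can be sketched as a small helper. This is a hypothetical illustration (the function name, the axis-aligned assumption, and the corner ordering are not from the patent): it grows a ground-plan rectangle by H/2 on every side.

```python
# Hypothetical sketch: expand an axis-aligned ground rectangle sample by a
# margin of H/2, where H is the average relative height of the target object.
def expand_rectangle(corners, avg_height):
    """corners: four (X, Y) ground points of an axis-aligned rectangle."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    m = avg_height / 2.0  # margin H/2 to absorb relief-induced image-point offset
    xmin, xmax = min(xs) - m, max(xs) + m
    ymin, ymax = min(ys) - m, max(ys) + m
    # Return corners counterclockwise from the lower-left.
    return [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax)]
```

For a 10 m by 4 m rectangle and an average object height of 6 m, each side moves outward by 3 m.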
Further, during the geometric rotation used to find the minimum circumscribed rectangle, the rotation angle range is limited by the fact that one side of the minimum circumscribed rectangle is collinear with a side of the original polygon; only directions matching the angles of the polygon's sides need to be tested as rotation candidates.
Further, the acquiring of low-altitude aerial images, in which the unmanned aerial vehicle flying at the designed altitude captures low-altitude aerial images carrying initial exterior orientation elements, further comprises:
determining the aerial photography range: delimiting an area rich in deep learning target objects as the aerial photography range using the Ovital (Aowei) map;
performing UAV aerial photography: downloading digital elevation model data, setting the flight altitude according to the spatial scale of the target objects, laying out flight strips that satisfy the forward and side overlap requirements, and capturing low-altitude aerial images carrying initial exterior orientation elements while flying at the designed altitude.
In this embodiment, the forward (heading) overlap is set to no less than 70% and the side overlap to no less than 30%.
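These overlap settings determine roughly how many original images cover any one ground point, which is what multiplies the sample count. The estimate below is a back-of-envelope assumption of mine, not a formula from the patent: along track, consecutive exposures advance by (1 − forward overlap) of one footprint, so about 1/(1 − forward overlap) images see a point, and likewise across track for the strips.

```python
# Rough estimate (an assumption, not from the patent) of how many original
# images cover a single ground point, given forward and side overlap ratios.
def images_covering_point(forward_overlap, side_overlap):
    along = 1.0 / (1.0 - forward_overlap)   # images per point along one strip
    across = 1.0 / (1.0 - side_overlap)     # strips whose footprints cover it
    return along * across

n = images_covering_point(0.70, 0.30)  # the minimum overlaps of this embodiment
```

At the 70%/30% minimums this gives roughly 5 images per point; at 80%/60% it is already above 12, consistent with the "several or even tens of images" claimed in the background section.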
Further, the importing of the low-altitude aerial images into the first application program, the aerial triangulation based on ground-surveyed image control points, and the construction and export of a Digital Elevation Model (DEM) and a Digital Orthophoto Map (DOM) further comprise:
importing the low-altitude aerial images carrying initial exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-surveyed image control points;
exporting, after aerial triangulation, the exterior orientation elements of each image in the low-altitude aerial image set, the exterior orientation elements of image i being (Xs_i, Ys_i, Zs_i, φ_i, ω_i, κ_i), where (Xs_i, Ys_i, Zs_i) are the position coordinates and (φ_i, ω_i, κ_i) are the attitude angles;
constructing and exporting the digital elevation model and the digital orthophoto.
Further, the importing of the digital orthophoto into the second application program, the interpretation of target objects, the manual drawing of rectangular samples for a number of target objects, and the acquisition of the corner-point coordinates of the rectangular samples further comprise:
drawing the samples: loading the exported digital orthophoto into ArcMap software, identifying target objects by visual interpretation, and manually drawing rectangular samples for a number of target objects;
acquiring the corner-point coordinates of the rectangular samples: each rectangular sample has 4 corner points; the ground coordinates of a corner point P are (X_P, Y_P, Z_P), where X_P and Y_P can be read directly from, or converted within, the software, and Z_P is the elevation value of the digital elevation model at those coordinates.
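Reading Z_P at an arbitrary planimetric position (X_P, Y_P) requires sampling the DEM between grid posts. The sketch below assumes a simple representation (a row-major list of lists with a known origin and cell size — an assumption of mine, not the patent's data format) and uses bilinear interpolation, a common choice for regular DEM grids:

```python
# Sketch under assumed DEM layout: bilinear sampling of a regular grid to get
# the elevation Z_P at a corner's planimetric coordinates (X_P, Y_P).
def dem_elevation(dem, x0, y0, cell, X, Y):
    """dem[row][col] holds elevations; (x0, y0) is the grid origin and `cell`
    the cell size, in the same ground units as (X, Y)."""
    gx, gy = (X - x0) / cell, (Y - y0) / cell
    i, j = int(gx), int(gy)          # lower-left grid post of the cell
    fx, fy = gx - i, gy - j          # fractional position inside the cell
    z00, z10 = dem[j][i], dem[j][i + 1]
    z01, z11 = dem[j + 1][i], dem[j + 1][i + 1]
    return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy)
            + z01 * (1 - fx) * fy + z11 * fx * fy)
```

In practice a raster library would handle the georeferencing; the point here is only that Z_P comes from the DEM, not the DOM.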
Further, the calculating of the image-point coordinates corresponding to the corner-point coordinates using the photogrammetric collinearity equations, and the judging of whether the image-point coordinates fall within the specified range (retaining the image as a sample enhancement image of the rectangular sample if so, discarding it otherwise), further comprises:
determining the calculation range: selecting the four flight strips nearest to corner point P of the rectangular sample, and traversing all images of those four strips;
calculating, from the exterior orientation elements of each image in the four strips and the ground coordinates (X_P, Y_P, Z_P) of corner point P, the image-point coordinates (x, y) corresponding to corner point P using the photogrammetric collinearity equations;
selecting the sample enhancement images of the rectangular sample: judging whether the image-point coordinates corresponding to each corner point P fall within the specified range;
if the image-point coordinates of all 4 corner points of the rectangular sample fall within the extent of image i, retaining image i as a sample enhancement image of the rectangular sample; otherwise discarding it.
In this embodiment, the image-point coordinates are computed with the photogrammetric collinearity equations:
x = -f · [a1(X_P - Xs) + b1(Y_P - Ys) + c1(Z_P - Zs)] / [a3(X_P - Xs) + b3(Y_P - Ys) + c3(Z_P - Zs)];
y = -f · [a2(X_P - Xs) + b2(Y_P - Ys) + c2(Z_P - Zs)] / [a3(X_P - Xs) + b3(Y_P - Ys) + c3(Z_P - Zs)];
where a1, a2, a3, b1, b2, b3, c1, c2, c3 are the parameters of the rotation matrix between the image-space auxiliary coordinate system and the ground photogrammetric coordinate system, and f is the camera focal length. The rotation matrix parameters relate to the attitude angles (φ, ω, κ) of the image as:
a1 = cosφ·cosκ - sinφ·sinω·sinκ;
a2 = -cosφ·sinκ - sinφ·sinω·cosκ;
a3 = -sinφ·cosω;
b1 = cosω·sinκ;
b2 = cosω·cosκ;
b3 = -sinω;
c1 = sinφ·cosκ + cosφ·sinω·sinκ;
c2 = -sinφ·sinκ + cosφ·sinω·cosκ;
c3 = cosφ·cosω.
Selecting the sample enhancement images of the rectangular sample: to judge whether an image point lies within the specified range, let the image width and height be w and h (in pixels) and the pixel size be μ; then, with the principal point at the image center, the image-point coordinates (x, y) should satisfy:
-w·μ/2 ≤ x ≤ w·μ/2 and -h·μ/2 ≤ y ≤ h·μ/2.
If the image-point coordinates of all 4 corner points of the rectangular sample fall within the extent of image i, image i is retained as a sample enhancement image of the rectangular sample; otherwise it is discarded.
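The projection and in-image test can be sketched directly from these formulas. The code below is an illustration, not the patent's implementation: the φ-ω-κ rotation matrix follows the standard photogrammetric convention reconstructed above, and sign conventions should be verified against the actual aerial-triangulation export before use.

```python
import math

# Sketch of the collinearity projection and the in-image test described above.
def rotation_matrix(phi, omega, kappa):
    """Standard phi-omega-kappa rotation matrix parameters a1..c3."""
    a1 = math.cos(phi) * math.cos(kappa) - math.sin(phi) * math.sin(omega) * math.sin(kappa)
    a2 = -math.cos(phi) * math.sin(kappa) - math.sin(phi) * math.sin(omega) * math.cos(kappa)
    a3 = -math.sin(phi) * math.cos(omega)
    b1 = math.cos(omega) * math.sin(kappa)
    b2 = math.cos(omega) * math.cos(kappa)
    b3 = -math.sin(omega)
    c1 = math.sin(phi) * math.cos(kappa) + math.cos(phi) * math.sin(omega) * math.sin(kappa)
    c2 = -math.sin(phi) * math.sin(kappa) + math.cos(phi) * math.sin(omega) * math.cos(kappa)
    c3 = math.cos(phi) * math.cos(omega)
    return (a1, a2, a3), (b1, b2, b3), (c1, c2, c3)

def project(eo, f, ground):
    """Collinearity equations: ground point (X, Y, Z) -> image point (x, y)."""
    Xs, Ys, Zs, phi, omega, kappa = eo
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = rotation_matrix(phi, omega, kappa)
    dX, dY, dZ = ground[0] - Xs, ground[1] - Ys, ground[2] - Zs
    den = a3 * dX + b3 * dY + c3 * dZ
    x = -f * (a1 * dX + b1 * dY + c1 * dZ) / den
    y = -f * (a2 * dX + b2 * dY + c2 * dZ) / den
    return x, y

def inside(x, y, w, h, pixel_size):
    """Range test with the principal point at the image centre."""
    return abs(x) <= w * pixel_size / 2 and abs(y) <= h * pixel_size / 2
```

A quick sanity check: for a perfectly nadir image (all attitude angles zero) at 1000 m above a ground point offset (10, 20) m, a 50 mm lens projects the point to (0.5, 1.0) mm on the image plane.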
Further, the constructing of a simple circumscribed rectangle from the image-point coordinates of the rectangular sample, the geometric rotation to obtain the minimum circumscribed rectangle, and the rotation of the minimum circumscribed rectangle to obtain the original-image enhancement sample reconstructed on the low-altitude aerial image further comprises:
acquiring the image-point coordinates of the rectangular sample, obtaining the 4 corner coordinates of the simple circumscribed rectangle in the image-space coordinates of the original image, and calculating the area of the simple circumscribed rectangle;
rotating the 4 corner points counterclockwise by a preset angle about the center of the simple circumscribed rectangle;
solving the simple circumscribed rectangle of the 4 rotated corner points, and recording its area, vertex coordinates, and rotation angle;
from all the simple circumscribed rectangles obtained over the successive rotations, taking the one with the smallest area together with its vertex coordinates and rotation angle;
rotating this smallest simple circumscribed rectangle back by the same angle; the resulting minimum circumscribed rectangle is the original-image enhancement sample reconstructed from the rectangular sample on image i.
In this embodiment, the initial simple circumscribed rectangle of the 4 corner points is obtained first. Let the image-point coordinates of the 4 corner points be (x1, y1), (x2, y2), (x3, y3), (x4, y4). In the image-space coordinates of the original image, the 4 corners of the initial simple (axis-aligned) circumscribed rectangle are (xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax), where:
xmin = min(x1, x2, x3, x4);
xmax = max(x1, x2, x3, x4);
ymin = min(y1, y2, y3, y4);
ymax = max(y1, y2, y3, y4).
The area S of the initial simple circumscribed rectangle is:
S = (xmax - xmin) · (ymax - ymin).
Geometric rotation: taking the center (x0, y0) of the 4 corner points as the rotation center, the 4 corner points are rotated counterclockwise by a given angle. The mathematical basis for rotating a point about a fixed point in the plane is: a point (x, y) rotated counterclockwise by an angle θ about a point (x0, y0) maps to the point (x', y') given by:
x' = (x - x0)·cosθ - (y - y0)·sinθ + x0;
y' = (x - x0)·sinθ + (y - y0)·cosθ + y0.
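The rotation search described above can be sketched as a brute-force sweep: rotate the 4 image points through a range of angles, keep the axis-aligned bounding box with the smallest area, then rotate that box back by the same angle. The step size and the 0°–90° sweep are illustrative choices (the patent speaks only of a "preset angle"); the side-collinearity shortcut mentioned earlier would restrict the candidate angles further.

```python
import math

# Brute-force minimum circumscribed rectangle via rotation, per the steps above.
def rotate(pt, center, theta):
    """Rotate pt counterclockwise by theta (radians) about center."""
    x0, y0 = center
    dx, dy = pt[0] - x0, pt[1] - y0
    return (dx * math.cos(theta) - dy * math.sin(theta) + x0,
            dx * math.sin(theta) + dy * math.cos(theta) + y0)

def min_area_rect(points, step_deg=1.0):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    center = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)
    best = None
    deg = 0.0
    while deg < 90.0:  # an axis-aligned box repeats every 90 degrees
        t = math.radians(deg)
        rot = [rotate(p, center, t) for p in points]
        rx, ry = [p[0] for p in rot], [p[1] for p in rot]
        area = (max(rx) - min(rx)) * (max(ry) - min(ry))
        if best is None or area < best[0]:
            box = [(min(rx), min(ry)), (max(rx), min(ry)),
                   (max(rx), max(ry)), (min(rx), max(ry))]
            best = (area, box, t)
        deg += step_deg
    area, box, t = best
    # Rotate the smallest box back by the same angle (clockwise, i.e. -t).
    return [rotate(p, center, -t) for p in box], area
```

For a square tilted 45° in the image, the axis-aligned box has twice the square's area, and the sweep recovers the tight box at the 45° step.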
Fig. 2 shows the target object sample enhancement effect achieved by the invention.
In a second embodiment, the invention provides a deep learning sample enhancement device based on low-altitude photogrammetry which, referring to fig. 3, comprises:
one or more processors;
and a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the deep learning sample enhancement method based on low-altitude photogrammetry described above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of units is merely a logical functional division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces, devices or units, and may be in electrical, mechanical or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units. The foregoing describes only embodiments of the present application, and the patent scope of the application is not limited thereto; any equivalent structural or process changes made using the description and drawings of the application, or any direct or indirect application in other related technical fields, likewise fall within the patent protection scope of the application.
The embodiments of the application have been described in detail above, but they are merely examples, and the application is not limited to the above-described embodiments. It will be apparent to those skilled in the art that any equivalent modifications or substitutions to this application are within the scope of the application, and therefore, all equivalent changes and modifications, improvements, etc. that do not depart from the spirit and scope of the principles of the application are intended to be covered by this application.
Claims (9)
1. A deep learning sample enhancement method based on low-altitude photogrammetry, comprising:
acquiring a low-altitude aerial image, wherein an unmanned aerial vehicle captures the low-altitude aerial image carrying original exterior orientation elements by flying at a varying height;
importing the low-altitude aerial image into a first application program, performing aerial triangulation based on ground-measured image control points, and constructing and exporting a digital elevation model and a digital orthophoto;
importing the digital elevation model into a second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring corner point coordinates of the rectangular samples;
calculating image point coordinates corresponding to the corner point coordinates using the photogrammetric collinearity equations, judging whether the image point coordinates are within a specified range, and if so, retaining the corresponding image as a sample enhancement image of the rectangular sample, otherwise discarding it;
and constructing a simple circumscribed rectangle from the image point coordinates of the rectangular sample, geometrically rotating it to obtain a minimum circumscribed rectangle, and rotating the minimum circumscribed rectangle back to obtain an original image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image.
2. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein, when manually drawing the rectangular samples of the plurality of target objects, in order to cope with image point displacement caused by the relative height between a target object and the ground, each rectangular sample is expanded with a margin beyond the extent of the target object; the expansion distance is taken as H/2, where H is the average relative height of the target object.
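The H/2 margin expansion in claim 2 can be sketched as follows (the function name and sample values are illustrative assumptions, not from the patent):

```python
def expand_sample_rect(x_min, y_min, x_max, y_max, h_avg):
    """Expand a sample rectangle outward by H/2 on every side, where
    h_avg is the average relative height H of the target object, to
    absorb the image point displacement caused by relief."""
    m = h_avg / 2.0
    return x_min - m, y_min - m, x_max + m, y_max + m

# A target class with average relative height 10 expands each side by 5:
print(expand_sample_rect(100.0, 200.0, 110.0, 212.0, 10.0))
```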
3. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein, when geometrically rotating to obtain the minimum circumscribed rectangle and rotating it back, based on the property that a minimum bounding rectangle of a polygon has a side collinear with one side of the polygon, the range of rotation angles is limited, and the directions matching the angles of the polygon's sides are tested as the basis of rotation.
4. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the acquiring a low-altitude aerial image, the unmanned aerial vehicle capturing the low-altitude aerial image carrying original exterior orientation elements by varying the flying height, further comprises:
determining an aerial photographing range: delineating a region rich in deep learning target objects as the aerial photographing range by using the Aowei map;
performing unmanned aerial vehicle aerial photography: downloading digital elevation model data, setting the flying height according to the spatial scale of the target objects, laying out flight strips so as to satisfy the required forward and side overlap, and capturing the low-altitude aerial image carrying original exterior orientation elements in a variable-height flight mode.
5. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the importing the low-altitude aerial image into a first application program, performing aerial triangulation based on ground-measured image control points, and constructing and exporting a digital elevation model and a digital orthophoto, further comprises:
importing the low-altitude aerial image carrying the original exterior orientation elements into ContextCapture or Pix4D, and performing aerial triangulation based on ground-measured image control points;
exporting the exterior orientation elements corresponding to each image in the low-altitude aerial images after aerial triangulation, and setting the exterior orientation elements of image i as (XSi, YSi, ZSi, φi, ωi, κi), wherein (XSi, YSi, ZSi) are the position coordinates and (φi, ωi, κi) are the rotation angles;
constructing and exporting a digital elevation model and a digital orthophoto.
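The exported rotation angles are typically assembled into a 3x3 rotation matrix before use in the collinearity equations. A hedged sketch under the common phi-omega-kappa convention (angle conventions differ between photogrammetry packages, so this is one illustrative choice, not necessarily the patent's):

```python
import math

def rotation_matrix(phi, omega, kappa):
    """Build R = R_phi * R_omega * R_kappa from the three rotation
    angles of the exterior orientation elements (radians)."""
    cp, sp = math.cos(phi), math.sin(phi)
    co, so = math.cos(omega), math.sin(omega)
    ck, sk = math.cos(kappa), math.sin(kappa)
    r_phi = [[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]]
    r_omega = [[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]]
    r_kappa = [[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(r_phi, r_omega), r_kappa)

# Zero angles give the identity matrix; any angles give an orthonormal R.
R = rotation_matrix(0.0, 0.0, 0.0)
```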
6. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the importing the digital elevation model into a second application program, interpreting target objects, manually drawing rectangular samples of a plurality of target objects, and acquiring corner point coordinates of the rectangular samples, further comprises:
drawing samples: loading the exported digital elevation model into ArcMap software, identifying target objects by a visual interpretation method, and manually drawing rectangular samples of a plurality of target objects;
acquiring corner point coordinates of the rectangular samples, wherein each rectangular sample comprises 4 corner points; the ground coordinates corresponding to a corner point P are set as (XP, YP, ZP), wherein XP and YP can be directly read or converted by the software, and ZP is the elevation value of the digital elevation model at the corresponding coordinates.
7. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 5, wherein the calculating image point coordinates corresponding to the corner point coordinates using the photogrammetric collinearity equations, judging whether the image point coordinates are within a specified range, and if so retaining the corresponding image as a sample enhancement image of the rectangular sample, otherwise discarding it, further comprises:
determining a calculation range: selecting the four flight strips nearest to the corner point P of the rectangular sample, and traversing all images of the four flight strips;
according to the exterior orientation elements of all images of the four flight strips and the ground coordinates (XP, YP, ZP) of the corner point P, calculating the image point coordinates (x, y) corresponding to the corner point P of the rectangular sample using the photogrammetric collinearity equations;
selecting sample enhancement images of the rectangular sample: judging whether the image point coordinates corresponding to each corner point P are within the specified range;
and if the image point coordinates of all 4 corner points of the rectangular sample fall within the extent of an image i, retaining the image i as a sample enhancement image of the rectangular sample; otherwise, discarding it.
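The projection and range test in the steps above can be sketched with the standard collinearity equations x = -f·u/w, y = -f·v/w, where (u, v, w)^T = R^T (P - S). The function names, frame half-sizes, and the matrix convention are illustrative assumptions, not the patent's exact formulation:

```python
def collinearity_project(P, S, R, f):
    """Map a ground point P = (X, Y, Z) to image-plane coordinates
    (x, y) via the collinearity equations, given the camera position
    S = (Xs, Ys, Zs), a 3x3 rotation matrix R (list of rows) built
    from the rotation angles, and the focal length f."""
    d = [P[i] - S[i] for i in range(3)]
    # (u, v, w)^T = R^T (P - S)
    u, v, w = (sum(R[i][k] * d[i] for i in range(3)) for k in range(3))
    return -f * u / w, -f * v / w

def within_frame(x, y, half_w, half_h):
    """Range test: keep image i only if the projected point lies
    inside the image frame."""
    return abs(x) <= half_w and abs(y) <= half_h

# Nadir view: camera 100 m above the origin, identity rotation, f = 50 mm.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
x, y = collinearity_project((10.0, 20.0, 0.0), (0.0, 0.0, 100.0), identity, 0.05)
# x = 0.005, y = 0.01
```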
8. The method for enhancing a deep learning sample based on low-altitude photogrammetry according to claim 1, wherein the constructing a simple circumscribed rectangle from the image point coordinates of the rectangular sample, geometrically rotating it to obtain a minimum circumscribed rectangle, and rotating the minimum circumscribed rectangle back to obtain an original image enhancement sample reconstructed from the rectangular sample on the low-altitude aerial image, further comprises:
acquiring the image point coordinates of the rectangular sample, obtaining the coordinates of the 4 corner points of a simple circumscribed rectangle in the original image space coordinate system of the image, and calculating the area of the simple circumscribed rectangle;
taking the center of the 4 corner points of the simple circumscribed rectangle as the rotation center, rotating counterclockwise by a preset angle;
solving a second simple circumscribed rectangle of the 4 corner points after rotation by the preset angle, and recording the area, vertex coordinates and rotation angle of the second simple circumscribed rectangle;
obtaining all the simple circumscribed rectangles produced over multiple rotations, selecting the one with the smallest area, and obtaining its vertex coordinates and rotation angle;
and reversely rotating the simple circumscribed rectangle with the smallest area by the same angle, the resulting minimum circumscribed rectangle being the original image enhancement sample reconstructed from the rectangular sample on the image i.
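The rotation sweep in the steps above can be sketched as follows. The step size, function names, and the 90-degree sweep bound are illustrative assumptions (an axis-aligned rectangle's orientation repeats every 90 degrees, so sweeping further is redundant):

```python
import math

def rotate(pt, center, theta):
    """Rotate pt counterclockwise by theta radians about center."""
    x0, y0 = center
    x, y = pt
    return ((x - x0) * math.cos(theta) - (y - y0) * math.sin(theta) + x0,
            (x - x0) * math.sin(theta) + (y - y0) * math.cos(theta) + y0)

def min_bounding_rect(points, step_deg=1.0):
    """Approximate minimum-area bounding rectangle by rotation sweep:
    rotate the corner points in small increments, record each simple
    (axis-aligned) circumscribed rectangle, keep the smallest, then
    reversely rotate its corners by the same angle."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    best = None
    for k in range(int(90.0 / step_deg) + 1):
        theta = math.radians(k * step_deg)
        rot = [rotate(p, (cx, cy), theta) for p in points]
        xs = [p[0] for p in rot]
        ys = [p[1] for p in rot]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if best is None or area < best[0]:
            corners = [(min(xs), min(ys)), (max(xs), min(ys)),
                       (max(xs), max(ys)), (min(xs), max(ys))]
            best = (area, corners, theta)
    area, corners, theta = best
    # Rotate the winning rectangle back to the original orientation.
    return [rotate(c, (cx, cy), -theta) for c in corners], area

# A square of side sqrt(2) tilted 45 degrees: the sweep recovers area 2,
# while the axis-aligned rectangle would have area 4.
corners, area = min_bounding_rect([(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (1.0, -1.0)])
```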
9. A deep learning sample enhancement device based on low-altitude photogrammetry, comprising:
one or more processors;
A memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement a low-altitude photogrammetry-based deep learning sample enhancement method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410502536.7A CN118097339B (en) | 2024-04-25 | 2024-04-25 | Deep learning sample enhancement method and device based on low-altitude photogrammetry |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118097339A true CN118097339A (en) | 2024-05-28 |
CN118097339B CN118097339B (en) | 2024-07-02 |
Family
ID=91155076
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410502536.7A Active CN118097339B (en) | 2024-04-25 | 2024-04-25 | Deep learning sample enhancement method and device based on low-altitude photogrammetry |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106949880A (en) * | 2017-03-10 | 2017-07-14 | 中国电建集团昆明勘测设计研究院有限公司 | Method for processing overhigh local overlapping degree of unmanned aerial vehicle images in measurement area with large elevation fluctuation |
CN110940318A (en) * | 2019-10-22 | 2020-03-31 | 上海航遥信息技术有限公司 | Aerial remote sensing real-time imaging method, electronic equipment and storage medium |
CN115223090A (en) * | 2022-06-22 | 2022-10-21 | 张亚峰 | Airport clearance barrier period monitoring method based on multi-source remote sensing image |
CN115294293A (en) * | 2022-10-08 | 2022-11-04 | 速度时空信息科技股份有限公司 | Method for automatically compiling high-precision map road reference lines based on low-altitude aerial photography results |
CN117292337A (en) * | 2023-11-24 | 2023-12-26 | 中国科学院空天信息创新研究院 | Remote sensing image target detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |