CN114565863A - Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image


Info

Publication number
CN114565863A
CN114565863A (application number CN202210151486.3A)
Authority
CN
China
Prior art keywords
image, initial, unmanned aerial vehicle, ortho
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210151486.3A
Other languages
Chinese (zh)
Other versions
CN114565863B (en)
Inventor
刘洋
何华贵
王会
杨卫军
郭亮
Current Assignee
Guangzhou Urban Planning Survey and Design Institute
Original Assignee
Guangzhou Urban Planning Survey and Design Institute
Priority date
Filing date
Publication date
Application filed by Guangzhou Urban Planning Survey and Design Institute
Priority to CN202210151486.3A
Publication of CN114565863A
Application granted
Publication of CN114565863B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention discloses a real-time generation method, a device, a medium and equipment of an orthophoto image of an unmanned aerial vehicle image, wherein the method comprises the following steps: acquiring an image acquired by an unmanned aerial vehicle in real time; carrying out feature point matching on a plurality of first feature points of the current frame image and a plurality of second feature points of the previous frame image to obtain a plurality of mutually matched third feature points; establishing epipolar geometry according to the third feature points, and calculating initial attitude information of each frame of image in a visual coordinate system; performing loop detection on each frame image according to the initial attitude information to obtain each key frame image; calculating the final attitude information of each key frame image under a visual coordinate system according to the initial attitude information of each key frame image to generate each initial orthoimage; and splicing and fusing each initial ortho-image in real time according to the final attitude information so as to generate the ortho-image of the unmanned aerial vehicle image in real time. The embodiment of the invention can estimate the unmanned aerial vehicle camera attitude in real time, thereby carrying out real-time image splicing and reducing errors.

Description

Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device, a medium and equipment for generating an orthoimage of an unmanned aerial vehicle image in real time.
Background
In recent years, owing to characteristics such as real-time operation and low cost, unmanned aerial vehicles have been widely applied in many fields, including urban patrol, disaster prevention and mitigation, and change detection. In unmanned aerial vehicle operations, the key problem is how to reconstruct the scene from the pictures taken by the unmanned aerial vehicle and recover the state of the work site. The SfM (Structure from Motion) algorithm is most commonly used for this purpose, but it suffers from large errors.
Disclosure of Invention
The invention provides a method, a device, a medium and equipment for generating an orthoimage of an unmanned aerial vehicle image in real time, which aim to solve the problem of large errors in the prior art.
In order to achieve the above object, an embodiment of the present invention provides a method for generating an orthoimage of an unmanned aerial vehicle image in real time, including:
acquiring an image acquired by an unmanned aerial vehicle in real time;
extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of mutually matched third feature points;
establishing epipolar geometry according to the third feature points, and calculating initial posture information of each frame of image in a visual coordinate system based on an SLAM algorithm;
performing loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame of image;
calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage;
and splicing and fusing each initial orthoimage in real time according to the final posture information of each key frame image so as to generate the orthoimage of the unmanned aerial vehicle image in real time.
Further, the method comprises the following steps of extracting the feature points of any frame of image:
taking any pixel point of any frame of image as a central pixel point, and acquiring each local pixel point within a preset range;
acquiring each global pixel point of the frame image, respectively calculating each global pixel difference between each global pixel point and the central pixel point, and counting the absolute value of the global pixel difference exceeding a preset pixel difference threshold value to obtain a global counting result;
when the global statistical result is not less than a preset global threshold, respectively calculating each local pixel difference between each local pixel point and the central pixel point, and performing statistics on the absolute value of the local pixel difference exceeding the preset pixel difference threshold to obtain a local statistical result;
and when the local statistical result is not less than a preset local threshold, obtaining the characteristic point of the frame image.
Further, the calculating, according to the initial pose information of each key frame image, the final pose information of each key frame image in the visual coordinate system by using a least square method to generate each orthoimage includes:
when the initial attitude information of at least three frames of key frame images is acquired, calculating the rotation relation between a visual coordinate system and a GNSS coordinate system by adopting a least square method;
and when the rotation relation meets a preset rotation condition, obtaining the final posture information of each key frame image in a visual coordinate system to generate each initial orthoimage.
Further, the real-time splicing and fusing of each ortho-image according to the final posture information of each key frame image to generate the ortho-image of the unmanned aerial vehicle image in real time includes:
acquiring a current frame initial ortho-image according to the final attitude information of each key frame image;
when the initial ortho-image of the current frame and the initial ortho-image of the previous frame have an overlapping area, calculating the overlapping area;
calculating a variance of the overlapping region when the overlapping region is observed more than once;
and when the variance is smaller than a preset variance threshold value, updating the current frame initial orthoimage to a global map so as to generate an orthoimage of the unmanned aerial vehicle image in real time.
Further, the calculating the overlap region includes:
calculating the overlap region according to:
x = (n * x_global + x_new) / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, and n is the number of observations of the initial orthoimage.
Further, the calculating the variance of the overlapping area includes:
calculating the variance of the overlapping region according to:
s = [n * (s_global + (x_global - x)^2) + (x_new - x)^2] / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, n is the number of observations of the initial orthoimage, s_global is the global existing variance, and x is the fused observation value calculated above.
In order to achieve the above object, an embodiment of the present invention provides an apparatus for generating an orthoimage of an unmanned aerial vehicle image in real time, including:
the real-time image acquisition module is used for acquiring images acquired by the unmanned aerial vehicle in real time;
the feature point matching module is used for extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the first feature points and the second feature points to obtain a plurality of third feature points which are matched with each other;
the initial attitude information acquisition module is used for establishing epipolar geometry according to the third feature points and calculating initial attitude information of each frame of image in a visual coordinate system based on an SLAM algorithm;
the key frame image acquisition module is used for carrying out loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame image;
the initial orthoimage generation module is used for calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage;
and the ortho-image real-time generation module is used for splicing and fusing each initial ortho-image in real time according to the final attitude information of each key frame image so as to generate the ortho-image of the unmanned aerial vehicle image in real time.
To achieve the above object, an embodiment of the present invention provides a computer-readable storage medium including a stored computer program; when the computer program runs, the device where the computer readable storage medium is located is controlled to execute the method for generating the orthoimage of the unmanned aerial vehicle image in real time.
In order to achieve the above object, an embodiment of the present invention provides an orthoscopic image real-time generating device for an unmanned aerial vehicle image, including a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor executes the computer program to implement the orthoscopic image real-time generating method for an unmanned aerial vehicle image as described above.
Compared with the prior art, the embodiment of the invention provides a method, a device, a medium and equipment for generating an orthoimage of an unmanned aerial vehicle image in real time, wherein the image acquired by the unmanned aerial vehicle in real time is acquired; extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of mutually matched third feature points; establishing epipolar geometry according to the third feature points, and calculating initial posture information of each frame of image in a visual coordinate system based on an SLAM algorithm; performing loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame of image; calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage; and splicing and fusing each initial orthoimage in real time according to the final posture information of each key frame image so as to generate the orthoimage of the unmanned aerial vehicle image in real time. Therefore, the method and the device can estimate the camera attitude of the unmanned aerial vehicle in real time according to the real-time characteristic of the image, so that real-time image splicing is performed, the real-time performance and the precision can be ensured, the method and the device are suitable for real-time image splicing of the unmanned aerial vehicle, errors are reduced, and the calculated amount is small.
Drawings
Fig. 1 is a flowchart of a method for generating an orthoimage of an unmanned aerial vehicle image in real time according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating the scale mismatch between the visual coordinate system and the GNSS coordinate system according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating the scale matching between the visual coordinate system and the GNSS coordinate system according to an embodiment of the present invention;
fig. 4 is a block diagram of a structure of an apparatus for generating an orthoimage of an unmanned aerial vehicle image in real time according to an embodiment of the present invention;
fig. 5 is a block diagram of an orthoscopic image real-time generating device for an unmanned aerial vehicle image according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, it is a flowchart of a method for generating an orthoimage of an unmanned aerial vehicle image in real time according to an embodiment of the present invention, including:
s1, acquiring images acquired by the unmanned aerial vehicle in real time;
s2, extracting a plurality of first feature points of the current frame image and a plurality of second feature points of the previous frame image, and performing feature point matching on the first feature points and the second feature points to obtain a plurality of third feature points which are matched with each other;
s3, establishing epipolar geometry according to the third feature points, and calculating initial posture information of each frame of image in a visual coordinate system based on an SLAM algorithm;
s4, performing loop detection on each frame of image according to the initial posture information of each frame of image to obtain each key frame of image;
s5, calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image to generate each initial orthoimage;
and S6, splicing and fusing each initial orthoimage in real time according to the final posture information of each key frame image to generate an orthoimage of the unmanned aerial vehicle image in real time.
For example, in step S2, when feature point matching is performed on two frames of images, since the overlap ratio of the unmanned aerial vehicle operation is known, the expected position of each feature point in the current frame can be predicted from its position in the previous frame; a search window of size 20 is then established around the predicted position, and matching candidates are searched only within that window, which improves the feature point matching efficiency. Specifically, a window of a preset size is established according to the overlap ratio of the unmanned aerial vehicle operation, and feature point matching between the current frame image and the previous frame image is performed within this window.
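As an illustration (not part of the patent text), the windowed matching strategy above can be sketched as follows; the array layout, the function name `match_in_window`, and the nearest-descriptor selection rule are assumptions for the sketch:

```python
import numpy as np

def match_in_window(prev_pts, prev_desc, cur_pts, cur_desc, offset, window=20):
    """Match features by searching only inside a window around the position
    predicted from the known flight overlap (offset is the predicted shift)."""
    matches = []
    for i, (p, d) in enumerate(zip(prev_pts, prev_desc)):
        predicted = p + offset  # estimated position of the point in the current frame
        # candidates inside the window (Chebyshev distance <= window / 2)
        cand = np.where(np.all(np.abs(cur_pts - predicted) <= window / 2, axis=1))[0]
        if cand.size == 0:
            continue
        # keep the candidate with the smallest descriptor distance
        dists = np.linalg.norm(cur_desc[cand] - d, axis=1)
        j = cand[np.argmin(dists)]
        matches.append((i, int(j)))
    return matches
```

Restricting the search to the predicted window avoids an all-pairs descriptor comparison between frames, which is where the stated efficiency gain comes from.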
As an improvement of the above scheme, the feature points of any frame image are extracted by the following steps:
taking any pixel point of any frame of image as a central pixel point, and acquiring each local pixel point within a preset range;
acquiring each global pixel point of the frame image, respectively calculating each global pixel difference between each global pixel point and the central pixel point, and counting the absolute value of the global pixel difference exceeding a preset pixel difference threshold value to obtain a global counting result;
when the global statistical result is not less than a preset global threshold, respectively calculating each local pixel difference between each local pixel point and the central pixel point, and performing statistics on the absolute value of the local pixel difference exceeding the preset pixel difference threshold to obtain a local statistical result;
and when the local statistical result is not less than a preset local threshold, obtaining the characteristic point of the frame image.
Specifically, the preset global threshold is r2/3 and the preset local threshold is r2/4, where r2 is the total number of local pixel points;
Exemplarily, on any frame of image, a preset circular range centered on the central pixel point c with radius r is taken, and the local pixel points p_1, p_2, ..., p_{r2} within this circular range are obtained;
Each global pixel difference between each global pixel point and the central pixel point is calculated, and the global pixel differences whose absolute values exceed the preset pixel difference threshold T are counted; if no fewer than r2/3 global pixel differences exceed T in absolute value, the local pixel differences between each local pixel point p_1, p_2, ..., p_{r2} and the central pixel point are then calculated; otherwise, the central pixel point c is judged not to be a feature point;
The local pixel differences whose absolute values exceed the preset pixel difference threshold T are then counted; if no fewer than r2/4 local pixel differences exceed T in absolute value, the central pixel point c is taken as a feature point of the frame image; otherwise, the central pixel point c is judged not to be a feature point.
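The two-stage test above can be sketched as follows (a minimal illustration; the circular-window construction, the default values of r and T, and the function name are assumptions, and image-border handling is omitted):

```python
import numpy as np

def is_feature_point(img, cy, cx, r=3, T=20):
    """Two-stage feature test: a coarse global count over the whole frame,
    then a local count over pixels within radius r of the candidate centre.
    r2 is the number of local pixels, matching the thresholds r2/3 and r2/4."""
    c = int(img[cy, cx])
    # local pixels: all pixels inside the circle of radius r, centre excluded
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    mask = (ys ** 2 + xs ** 2 <= r ** 2) & ~((ys == 0) & (xs == 0))
    local = img[cy + ys[mask], cx + xs[mask]].astype(int)
    r2 = local.size
    # global stage: count pixels of the whole frame differing from c by more than T
    global_count = np.count_nonzero(np.abs(img.astype(int) - c) > T)
    if global_count < r2 / 3:
        return False
    # local stage: count local pixels differing from c by more than T
    local_count = np.count_nonzero(np.abs(local - c) > T)
    return local_count >= r2 / 4
```

The cheap global count rejects most candidates before the per-pixel local comparison runs, which is the point of the two-stage design.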
Illustratively, in step S3, an epipolar geometry is established according to the plurality of third feature points, and the exterior orientation parameters of each frame of image in the visual coordinate system are calculated based on a SLAM (Simultaneous Localization and Mapping) algorithm, thereby obtaining the initial attitude information of each frame of image;
Illustratively, in step S4, loop detection is performed on each frame image by using a loop detection algorithm to obtain each key frame image. It is to be understood that loop detection refers to recognizing a scene that has been visited before. Because of model uncertainty and sensor noise, the accumulated uncertainty of the estimated poses grows over time. By introducing a loop detection algorithm, historically visited scenes can be identified to add constraints between poses, which effectively reduces the uncertainty and improves the accuracy of real-time image stitching for the unmanned aerial vehicle.
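The patent does not specify a particular loop detector; one common realization compares a bag-of-words descriptor histogram of the current frame against those of stored keyframes. The sketch below assumes that representation, and the function name and similarity threshold are illustrative:

```python
import numpy as np

def detect_loop(current_hist, keyframe_hists, threshold=0.85):
    """Return (index, similarity) of the best-matching stored keyframe if its
    cosine similarity exceeds the threshold, else (-1, best similarity)."""
    best, best_idx = 0.0, -1
    cur = current_hist / (np.linalg.norm(current_hist) + 1e-12)
    for i, h in enumerate(keyframe_hists):
        sim = float(cur @ (h / (np.linalg.norm(h) + 1e-12)))
        if sim > best:
            best, best_idx = sim, i
    return (best_idx, best) if best >= threshold else (-1, best)
```

A detected loop adds a pose constraint between the current frame and the matched keyframe, which is what bounds the accumulated drift described above.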
As an improvement of the above solution, the calculating, according to the initial pose information of each key frame image, the final pose information of each key frame image in the visual coordinate system by using a least square method to generate each orthoimage includes:
when the initial attitude information of at least three frames of key frame images is acquired, calculating the rotation relation between a visual coordinate system and a GNSS coordinate system by adopting a least square method;
and when the rotation relation meets a preset rotation condition, obtaining the final posture information of each key frame image in a visual coordinate system to generate each initial orthoimage.
It can be understood that once the attitude information of each frame of image in the visual coordinate system is obtained, the spatial position of the current frame image in the visual coordinate system can be calculated continuously starting from the first frame. However, the initial attitude information obtained through the above steps carries a normalized scale rather than the actual scale; if this scale information keeps being used in subsequent calculations, scale drift occurs and the subsequent calculation error grows;
in the flight process of the unmanned aerial vehicle, the unmanned aerial vehicle needs to plan a route according to a Global Navigation Satellite System (GNSS), such as a GPS and a beidou, so that each image shot by the unmanned aerial vehicle has GNSS coordinate System information at an image exposure time, as shown in fig. 2, if a GNSS track and a calculated visual track are superposed, the GNSS track and the calculated visual track are found to be no longer at a uniform height, which results in losing the actual significance of an orthographic image;
therefore, in the embodiment of the present invention, the rotation relationship between the visual coordinate system and the GNSS coordinate system is calculated by using least squares, so as to obtain the true scale information in the visual coordinate system.
As shown in fig. 3, for each key frame image, after the above steps, a visual coordinate A and a GNSS coordinate B are obtained. Since the unmanned aerial vehicle only undergoes translation and rotation during flight, the transformation between the two coordinate systems can be assumed to be rigid, i.e., there exist a rotation matrix R and a translation vector t such that B = RA + t. After the initial attitude information of at least 3 key frame images is obtained, R and t are solved based on the least squares principle so that ||B - (RA + t)|| is minimized, and the true scale in the visual coordinate system, that is, the final attitude information, is obtained. In the embodiment of the present invention, the least squares method is used to align the SLAM solution with the camera's true coordinates in real time to recover the camera's true scale, and the scale value is propagated to subsequent frames, thereby reducing the influence of scale drift in monocular SLAM.
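One standard way to minimize ||B - (RA + t)|| over rigid transformations is the SVD-based (Kabsch) solver sketched below; the function name and array layout are assumptions, and the patent text itself only requires that R and t be solved by least squares:

```python
import numpy as np

def align_visual_to_gnss(A, B):
    """Least-squares rigid alignment: find R, t minimising ||B - (R A + t)||
    for corresponding keyframe coordinates. A: Nx3 visual coordinates,
    B: Nx3 GNSS coordinates, N >= 3 non-collinear points."""
    a_mean, b_mean = A.mean(axis=0), B.mean(axis=0)
    Ac, Bc = A - a_mean, B - b_mean        # centre both point sets
    U, _, Vt = np.linalg.svd(Ac.T @ Bc)    # cross-covariance decomposition
    # correct a possible reflection so that R is a proper rotation (det = +1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = b_mean - R @ a_mean
    return R, t
```

This matches the text's requirement of at least three keyframes: with fewer than three non-collinear correspondences the rigid transform is not uniquely determined.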
As an improvement of the above scheme, the real-time stitching and fusing of each ortho-image according to the final pose information of each key frame image to generate an ortho-image of an unmanned aerial vehicle image in real time includes:
acquiring a current frame initial ortho-image according to the final attitude information of each key frame image;
when the initial ortho-image of the current frame and the initial ortho-image of the previous frame have an overlapping area, calculating the overlapping area;
calculating a variance of the overlapping region when the overlapping region is observed more than once;
and when the variance is smaller than a preset variance threshold value, updating the current frame initial orthoimage to a global map so as to generate an orthoimage of the unmanned aerial vehicle image in real time.
As an improvement of the above, the calculating the overlapping area includes:
calculating the overlap region according to:
x = (n * x_global + x_new) / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, and n is the number of observations of the initial orthoimage.
As an improvement of the above solution, the calculating the variance of the overlapping area includes:
calculating the variance of the overlapping area according to:
s = [n * (s_global + (x_global - x)^2) + (x_new - x)^2] / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, n is the number of observations of the initial orthoimage, s_global is the global existing variance, and x is the fused observation value calculated above.
It can be understood that after the above steps are completed for the first key frame image, a first initial orthoimage is obtained; the second frame, the third frame, and so on are processed through the same steps to obtain a second initial orthoimage, a third initial orthoimage, and so on, each with real geographic information. Because the photos captured by the unmanned aerial vehicle have a certain degree of overlap, subsequent incoming images necessarily contain overlap regions (in both the lateral and course directions). If the overlap regions are simply superposed, or the average value of each grid cell is taken, the hues of the overlap regions become inconsistent and stitching seams appear. The traditional remedy is to manually balance light and color in professional software such as ENVI and Photoshop, which cannot produce a true real-time stitched unmanned aerial vehicle image;
in the embodiment of the present invention, the overlap region is first calculated, and since the overlap region may occur more than once, the number of observations is considered as the main weight:
calculating the overlap region according to:
x = (n * x_global + x_new) / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, and n is the number of observations of the initial orthoimage.
When an overlapping region is observed more than once, its variance is calculated according to the following equation:
s = [n * (s_global + (x_global - x)^2) + (x_new - x)^2] / (n + 1)
In the formula, x_global is the global existing observation, x_new is the observed value of the initial orthoimage, n is the number of observations of the initial orthoimage, s_global is the global existing variance, and x is the fused observation value calculated above.
When the variance is smaller than a certain threshold, the multiple observations are considered to have stabilized, and the grid value obtained by the above calculation is taken as the overlap region value and updated into the global map; it can be understood that the grid value refers to each pixel value of the initial orthoimage. If the variance is greater than the threshold, the observations are considered inconsistent and the result is stored temporarily; once a new picture covers the overlap region, the variance is recalculated and the initial orthoimage with the lower variance is updated into the global map and stored. If a later picture again covers the overlap region, the result is updated again according to the same method.
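The observation-count-weighted fusion of a single overlap grid cell can be sketched as follows. Note the formula images in the published text are unrendered, so the running mean and the matching running (population) variance below are reconstructions consistent with the listed variables x_global, x_new, n, and s_global, not verbatim from the patent:

```python
def fuse_overlap(x_global, n, x_new, s_global):
    """Fuse one new observation x_new into a grid cell already observed
    n times with mean x_global and variance s_global."""
    x = (n * x_global + x_new) / (n + 1)              # weighted running mean
    s = (n * (s_global + (x_global - x) ** 2)
         + (x_new - x) ** 2) / (n + 1)                # running population variance
    return x, s
```

Feeding observations in one at a time reproduces the batch mean and population variance of all observations, so the variance can be checked against the stability threshold after every new image.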
The embodiment of the invention provides a real-time ortho image generation method for unmanned aerial vehicle images, which comprises the steps of acquiring images acquired by an unmanned aerial vehicle in real time; extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of mutually matched third feature points; establishing epipolar geometry according to the third feature points, and calculating initial posture information of each frame of image in a visual coordinate system based on an SLAM algorithm; performing loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame of image; calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage; and splicing and fusing each initial orthoimage in real time according to the final posture information of each key frame image so as to generate the orthoimage of the unmanned aerial vehicle image in real time. Therefore, the method and the device can estimate the camera attitude of the unmanned aerial vehicle in real time according to the real-time characteristic of the image, so that real-time image splicing is performed, the real-time performance and the precision can be ensured, the method and the device are suitable for real-time image splicing of the unmanned aerial vehicle, errors are reduced, and the calculated amount is small.
Referring to fig. 4, which is a block diagram of a device for generating an orthoimage of an unmanned aerial vehicle image in real time according to an embodiment of the present invention, the device 10 for generating an orthoimage of an unmanned aerial vehicle image in real time includes:
the real-time image acquisition module 11 is used for acquiring images acquired by the unmanned aerial vehicle in real time;
the feature point matching module 12 is configured to extract a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and perform feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of third feature points which are matched with each other;
the initial attitude information acquisition module 13 is configured to establish epipolar geometry according to the third feature points, and calculate initial attitude information of each frame of image in the visual coordinate system based on an SLAM algorithm;
the key frame image acquisition module 14 is configured to perform loop detection on each frame of image according to the initial posture information of each frame of image, so as to obtain each key frame image;
an initial ortho-image generating module 15, configured to calculate, according to the initial pose information of each key frame image, final pose information of each key frame image in the visual coordinate system by using a least square method, so as to generate each initial ortho-image;
and the ortho-image real-time generation module 16 is configured to perform real-time stitching and fusion on each initial ortho-image according to the final posture information of each key frame image, so as to generate an ortho-image of the unmanned aerial vehicle image in real time.
Preferably, the feature points of any frame image are extracted by the following steps:
taking any pixel point of the frame image as a central pixel point, and acquiring each local pixel point within a preset range around the central pixel point;
acquiring each global pixel point of the frame image, respectively calculating the global pixel difference between each global pixel point and the central pixel point, and counting the number of global pixel differences whose absolute values exceed a preset pixel difference threshold to obtain a global statistical result;
when the global statistical result is not less than a preset global threshold, respectively calculating the local pixel difference between each local pixel point and the central pixel point, and counting the number of local pixel differences whose absolute values exceed the preset pixel difference threshold to obtain a local statistical result;
and when the local statistical result is not less than a preset local threshold, determining the central pixel point as a feature point of the frame image.
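The two-stage global/local thresholding test above resembles intensity-difference corner detectors such as FAST. A minimal sketch in Python, assuming grayscale intensities; the function name, the neighbourhood radius, and all default threshold values are illustrative assumptions, not values fixed by this disclosure:

```python
def is_feature_point(img, r, c, radius=1, diff_thresh=20,
                     global_thresh=50, local_thresh=5):
    """Two-stage feature test: a pixel is a feature point when enough
    pixels over the whole image (global test) and enough pixels in its
    neighbourhood (local test) differ from it by more than diff_thresh.
    All thresholds here are illustrative, not taken from the patent."""
    center = img[r][c]
    h, w = len(img), len(img[0])

    # Global test: count pixels whose absolute difference from the
    # central pixel exceeds the preset pixel-difference threshold.
    global_count = sum(
        1
        for y in range(h)
        for x in range(w)
        if abs(img[y][x] - center) > diff_thresh
    )
    if global_count < global_thresh:
        return False

    # Local test: repeat the count inside the preset neighbourhood only.
    local_count = sum(
        1
        for y in range(max(0, r - radius), min(h, r + radius + 1))
        for x in range(max(0, c - radius), min(w, c + radius + 1))
        if (y, x) != (r, c) and abs(img[y][x] - center) > diff_thresh
    )
    return local_count >= local_thresh
```

Running the global test first lets most flat-region pixels be rejected before the per-neighbourhood work, which matches the ordering of the steps above.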
Preferably, the calculating, according to the initial attitude information of each key frame image, the final attitude information of each key frame image in the visual coordinate system by a least square method to generate each initial ortho-image comprises:
when the initial attitude information of at least three key frame images has been acquired, calculating the rotation relationship between the visual coordinate system and a GNSS coordinate system by a least square method;
and when the rotation relationship meets a preset rotation condition, obtaining the final attitude information of each key frame image in the visual coordinate system to generate each initial ortho-image.
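The disclosure does not spell out the least-squares formulation used to align the visual and GNSS coordinate systems. As a hedged illustration, the sketch below solves the analogous 2D problem in closed form: given at least three matched keyframe positions expressed in both frames, it finds the rotation angle minimising the squared alignment error. All names are assumptions for illustration:

```python
import math

def fit_rotation_2d(visual_pts, gnss_pts):
    """Least-squares rotation angle aligning visual-frame points to
    GNSS-frame points (2D illustration; the patent's exact 3D
    formulation is not given, so this is an assumed analogue).

    Minimises sum ||R(theta) v_i - g_i||^2 over theta; for centred
    point sets the closed form is theta = atan2(sum cross, sum dot).
    """
    def centre(pts):
        # Remove the centroid so only the rotation remains.
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        return [(p[0] - mx, p[1] - my) for p in pts]

    v, g = centre(visual_pts), centre(gnss_pts)
    dot = sum(vx * gx + vy * gy for (vx, vy), (gx, gy) in zip(v, g))
    cross = sum(vx * gy - vy * gx for (vx, vy), (gx, gy) in zip(v, g))
    return math.atan2(cross, dot)
```

This is why at least three keyframes are required: with fewer matched positions the centred point sets degenerate and the rotation is not well constrained.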
Preferably, the stitching and fusing each initial ortho-image in real time according to the final attitude information of each key frame image to generate the ortho-image of the unmanned aerial vehicle image in real time comprises:
acquiring a current frame initial ortho-image according to the final attitude information of each key frame image;
when the current frame initial ortho-image and the previous frame initial ortho-image have an overlapping region, calculating the overlapping region;
when the overlapping region has been observed more than once, calculating the variance of the overlapping region;
and when the variance is smaller than a preset variance threshold, updating the current frame initial ortho-image into the global map, so as to generate the ortho-image of the unmanned aerial vehicle image in real time.
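A minimal sketch of this fusion gate, assuming a per-cell global map keyed by ground-grid coordinates and an incremental mean/variance update. The data layout, names, and the default variance threshold are illustrative assumptions:

```python
def fuse_frame(global_map, key, new_value, var_thresh=25.0):
    """Overlap-fusion gate: a repeatedly observed cell is only written
    back to the global map while its running variance stays below the
    preset threshold.  All names and the default threshold are
    illustrative assumptions, not values from the patent."""
    cell = global_map.get(key)
    if cell is None:
        # First observation: no overlap exists yet, store directly.
        global_map[key] = {"value": new_value, "n": 1, "var": 0.0}
        return True

    n = cell["n"]
    # Incremental (population) mean and variance over n+1 observations.
    fused = (n * cell["value"] + new_value) / (n + 1)
    var = (n * (cell["var"] + (cell["value"] - fused) ** 2)
           + (new_value - fused) ** 2) / (n + 1)
    if var >= var_thresh:
        return False  # unstable overlap: keep the existing map content
    global_map[key] = {"value": fused, "n": n + 1, "var": var}
    return True
```

The variance gate gives the stitching step a cheap outlier rejection: a wildly inconsistent re-observation (for example from a bad pose estimate) inflates the variance and is simply not merged.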
Preferably, the calculating the overlapping region comprises:
calculating the observation value of the overlapping region according to:
x'_global = (n * x_global + x_new) / (n + 1)
in the formula, x_global is the global existing observation value, x_new is the observed value of the initial ortho-image, n is the number of observations of the initial ortho-image, and x'_global is the updated observation value of the overlapping region.
Preferably, the calculating the variance of the overlapping region comprises:
calculating the variance of the overlapping region according to:
s'_global = (n * (s_global + (x_global - x'_global)^2) + (x_new - x'_global)^2) / (n + 1)
in the formula, x_global is the global existing observation value, x_new is the observed value of the initial ortho-image, n is the number of observations of the initial ortho-image, s_global is the global existing variance, x'_global is the updated observation value of the overlapping region, and s'_global is the updated variance of the overlapping region.
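The update rules for the overlapping region amount to folding one new observation into a running (population) mean and variance; the exact formulas in the original figures could not be extracted, so this incremental form is an assumption reconstructed from the symbol definitions. It can be checked numerically against a batch computation:

```python
def incremental_update(x_global, s_global, n, x_new):
    """One incremental fusion step: fold a new observation x_new into
    the existing mean x_global and population variance s_global that
    summarise n previous observations of the overlap region.
    Reconstructed form -- the original equation images are assumptions."""
    mean = (n * x_global + x_new) / (n + 1)
    var = (n * (s_global + (x_global - mean) ** 2)
           + (x_new - mean) ** 2) / (n + 1)
    return mean, var
```

Because each step only needs the previous mean, variance, and count, the global map never has to store the raw per-frame observations, which keeps the memory cost of real-time stitching constant per cell.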
It should be noted that, for the working process of each module in the device 10 for generating an ortho-image of an unmanned aerial vehicle image in real time according to the embodiment of the present invention, reference may be made to the working process of the method for generating an ortho-image of an unmanned aerial vehicle image in real time according to the above embodiment, which is not repeated herein.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program; when the computer program runs, the device on which the computer readable storage medium is located is controlled to execute the method for generating the orthoimage of the unmanned aerial vehicle image in real time according to the embodiment.
Referring to fig. 5, which is a structural block diagram of a device 20 for generating an ortho-image of an unmanned aerial vehicle image in real time according to an embodiment of the present invention, the device 20 comprises: a processor 21, a memory 22, and a computer program stored in the memory 22 and executable on the processor 21. The processor 21, when executing the computer program, implements the steps of the above embodiment of the method for generating an ortho-image of an unmanned aerial vehicle image in real time. Alternatively, the processor 21, when executing the computer program, implements the functions of the modules/units in the above device embodiment.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the orthophoto real-time generation device 20 of the unmanned aerial vehicle image.
The device 20 for generating an ortho-image of an unmanned aerial vehicle image in real time may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The device 20 may include, but is not limited to, a processor 21 and a memory 22. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the device 20 and does not constitute a limitation on it; the device 20 may include more or fewer components than those shown, or a combination of certain components, or different components, and may, for example, further include input and output devices, network access devices, buses, and the like.
The Processor 21 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or any conventional processor. The processor 21 is the control center of the device 20 for generating an ortho-image of an unmanned aerial vehicle image in real time, and connects the various parts of the whole device 20 by using various interfaces and lines.
The memory 22 may be used to store the computer programs and/or modules, and the processor 21 implements the various functions of the device 20 for generating an ortho-image of an unmanned aerial vehicle image in real time by running or executing the computer programs and/or modules stored in the memory 22 and calling the data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, image data, etc.). In addition, the memory 22 may include a high-speed random access memory, and may further include a non-volatile memory such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
The modules/units integrated with the device 20 for generating an orthoimage of an unmanned aerial vehicle image in real time may be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by the processor 21 to implement the steps of the above embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (9)

1. An ortho image real-time generation method of unmanned aerial vehicle images is characterized by comprising the following steps:
acquiring an image acquired by an unmanned aerial vehicle in real time;
extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the plurality of first feature points and the plurality of second feature points to obtain a plurality of mutually matched third feature points;
establishing epipolar geometry according to the third feature points, and calculating initial attitude information of each frame image in a visual coordinate system based on a SLAM algorithm;
performing loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame of image;
calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage;
and stitching and fusing each initial ortho-image in real time according to the final attitude information of each key frame image, so as to generate the ortho-image of the unmanned aerial vehicle image in real time.
2. The method for real-time generation of orthoimages of unmanned aerial vehicle images according to claim 1, wherein the feature points of any frame of image are extracted by the following steps:
taking any pixel point of the frame image as a central pixel point, and acquiring each local pixel point within a preset range around the central pixel point;
acquiring each global pixel point of the frame image, respectively calculating the global pixel difference between each global pixel point and the central pixel point, and counting the number of global pixel differences whose absolute values exceed a preset pixel difference threshold to obtain a global statistical result;
when the global statistical result is not less than a preset global threshold, respectively calculating the local pixel difference between each local pixel point and the central pixel point, and counting the number of local pixel differences whose absolute values exceed the preset pixel difference threshold to obtain a local statistical result;
and when the local statistical result is not less than a preset local threshold, determining the central pixel point as a feature point of the frame image.
3. The method for real-time generation of ortho images of unmanned aerial vehicle images as claimed in claim 1, wherein the calculating the final attitude information of each key frame image in the visual coordinate system by a least square method according to the initial attitude information of each key frame image to generate each initial ortho image comprises:
when the initial attitude information of at least three key frame images has been acquired, calculating the rotation relationship between the visual coordinate system and a GNSS coordinate system by a least square method;
and when the rotation relationship meets a preset rotation condition, obtaining the final attitude information of each key frame image in the visual coordinate system to generate each initial ortho image.
4. The method for real-time generation of ortho images of unmanned aerial vehicle images according to claim 1, wherein the stitching and fusing each initial ortho image in real time according to the final attitude information of each key frame image to generate the ortho image of the unmanned aerial vehicle image in real time comprises:
acquiring a current frame initial ortho-image according to the final attitude information of each key frame image;
when the initial ortho-image of the current frame and the initial ortho-image of the previous frame have an overlapping area, calculating the overlapping area;
calculating a variance of the overlapping region when the overlapping region is observed more than once;
and when the variance is smaller than a preset variance threshold value, updating the current frame initial orthoimage to a global map so as to generate an orthoimage of the unmanned aerial vehicle image in real time.
5. The method of claim 4, wherein the calculating the overlapping region comprises:
calculating the observation value of the overlapping region according to:
x'_global = (n * x_global + x_new) / (n + 1)
in the formula, x_global is the global existing observation value, x_new is the observed value of the initial ortho image, n is the number of observations of the initial ortho image, and x'_global is the updated observation value of the overlapping region.
6. The method of claim 4, wherein the calculating the variance of the overlapping region comprises:
calculating the variance of the overlapping region according to:
s'_global = (n * (s_global + (x_global - x'_global)^2) + (x_new - x'_global)^2) / (n + 1)
in the formula, x_global is the global existing observation value, x_new is the observed value of the initial ortho image, n is the number of observations of the initial ortho image, s_global is the global existing variance, x'_global is the updated observation value of the overlapping region, and s'_global is the updated variance of the overlapping region.
7. A device for generating an ortho image of an unmanned aerial vehicle image in real time, characterized by comprising:
the real-time image acquisition module is used for acquiring images acquired by the unmanned aerial vehicle in real time;
the feature point matching module is used for extracting a plurality of first feature points of a current frame image and a plurality of second feature points of a previous frame image, and performing feature point matching on the first feature points and the second feature points to obtain a plurality of third feature points which are matched with each other;
the initial attitude information acquisition module is used for establishing epipolar geometry according to the third feature points and calculating initial attitude information of each frame of image in a visual coordinate system based on an SLAM algorithm;
the key frame image acquisition module is used for carrying out loop detection on each frame of image according to the initial attitude information of each frame of image to obtain each key frame image;
the initial orthoimage generation module is used for calculating the final attitude information of each key frame image under a visual coordinate system by adopting a least square method according to the initial attitude information of each key frame image so as to generate each initial orthoimage;
and the ortho-image real-time generation module is used for stitching and fusing each initial ortho-image in real time according to the final attitude information of each key frame image, so as to generate the ortho-image of the unmanned aerial vehicle image in real time.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program; wherein the computer program when running controls the device on which the computer readable storage medium is located to execute the method for real-time generation of orthophoto images of unmanned aerial vehicle images according to any one of claims 1-6.
9. An ortho image real-time generation device of a drone image, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor executing a method of ortho image real-time generation of a drone image according to any one of claims 1 to 6 when the computer program is executed.
CN202210151486.3A 2022-02-18 2022-02-18 Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image Active CN114565863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210151486.3A CN114565863B (en) 2022-02-18 2022-02-18 Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image


Publications (2)

Publication Number Publication Date
CN114565863A true CN114565863A (en) 2022-05-31
CN114565863B CN114565863B (en) 2023-03-24

Family

ID=81713664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210151486.3A Active CN114565863B (en) 2022-02-18 2022-02-18 Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image

Country Status (1)

Country Link
CN (1) CN114565863B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782847A (en) * 2022-06-20 2022-07-22 南京航天宏图信息技术有限公司 Mine productivity monitoring method and device based on unmanned aerial vehicle
CN115439672A (en) * 2022-11-04 2022-12-06 浙江大华技术股份有限公司 Image matching method, illicit detection method, terminal device, and storage medium
CN115829833A (en) * 2022-08-02 2023-03-21 爱芯元智半导体(上海)有限公司 Image generation method and mobile device
CN117372273A (en) * 2023-10-26 2024-01-09 航天科工(北京)空间信息应用股份有限公司 Method, device, equipment and storage medium for generating orthographic image of unmanned aerial vehicle image
CN117372273B (en) * 2023-10-26 2024-04-19 航天科工(北京)空间信息应用股份有限公司 Method, device, equipment and storage medium for generating orthographic image of unmanned aerial vehicle image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030218674A1 (en) * 2002-05-24 2003-11-27 Sarnoff Corporation Method and apparatus for video georegistration
CN107292297A (en) * 2017-08-09 2017-10-24 电子科技大学 A kind of video car flow quantity measuring method tracked based on deep learning and Duplication
CN110675450A (en) * 2019-09-06 2020-01-10 武汉九州位讯科技有限公司 Method and system for generating orthoimage in real time based on SLAM technology
CN111951201A (en) * 2019-05-16 2020-11-17 杭州海康机器人技术有限公司 Unmanned aerial vehicle aerial image splicing method and device and storage medium
WO2021062459A1 (en) * 2019-10-04 2021-04-08 Single Agriculture Pty Ltd Weed mapping


Also Published As

Publication number Publication date
CN114565863B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN114565863B (en) Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image
US9270891B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US11113882B2 (en) Generating immersive trip photograph visualizations
CN111829532B (en) Aircraft repositioning system and method
CN112634370A (en) Unmanned aerial vehicle dotting method, device, equipment and storage medium
CN113989450B (en) Image processing method, device, electronic equipment and medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
US9460554B2 (en) Aerial video annotation
CN115329111B (en) Image feature library construction method and system based on point cloud and image matching
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN105444773A (en) Navigation method and system based on real scene recognition and augmented reality
CN113326769B (en) High-precision map generation method, device, equipment and storage medium
CN111652915A (en) Remote sensing image overlapping area calculation method and device and electronic equipment
CN109034214B (en) Method and apparatus for generating a mark
CN116858215B (en) AR navigation map generation method and device
CN113012084A (en) Unmanned aerial vehicle image real-time splicing method and device and terminal equipment
CN116630598A (en) Visual positioning method and device under large scene, electronic equipment and storage medium
CN113129422A (en) Three-dimensional model construction method and device, storage medium and computer equipment
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium
CN114111817B (en) Vehicle positioning method and system based on SLAM map and high-precision map matching
CN110827340A (en) Map updating method, device and storage medium
CN113763468A (en) Positioning method, device, system and storage medium
CN111784622B (en) Image splicing method based on monocular inclination of unmanned aerial vehicle and related device
WO2024077935A1 (en) Visual-slam-based vehicle positioning method and apparatus
CN117201708B (en) Unmanned aerial vehicle video stitching method, device, equipment and medium with position information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant