CN113905190A - Panorama real-time splicing method for unmanned aerial vehicle video


Info

Publication number
CN113905190A
Authority
CN
China
Prior art keywords
image
registered
aerial vehicle
unmanned aerial
canvas
Prior art date
Legal status
Granted
Application number
CN202111163456.6A
Other languages
Chinese (zh)
Other versions
CN113905190B (en)
Inventor
熊恒斌
耿虎军
高峰
闫玉巧
胡炎
仇梓峰
杨福琛
张泽勇
李方用
Current Assignee
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN202111163456.6A
Publication of CN113905190A
Application granted
Publication of CN113905190B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a real-time panorama splicing method for unmanned aerial vehicle video, relating to the field of image processing. First, data are received and preprocessed, and outliers in the longitude/latitude/altitude data are rejected. Second, feature points are extracted, matched and refined, and the image to be registered is rigidly transformed onto the panoramic canvas. Hovering detection is then performed, key frames are selected preferentially, and intermediate frames with poor matching precision are removed. Next, an automatic border-expansion mechanism for canvas overruns is added, avoiding the excessive resource usage caused by repeatedly resizing and rewriting the canvas. Finally, closed-loop detection is performed and the offset is corrected. The invention can handle the splicing of long strip image sequences, remains effective for closed-loop paths and hover-turn splicing routes, and offers high splicing real-time performance.

Description

Panorama real-time splicing method for unmanned aerial vehicle video
Technical Field
The invention relates to the field of image processing, and in particular to a real-time panorama splicing method for unmanned aerial vehicle video, which can be used for image splicing of unmanned aerial vehicle video under demanding real-time requirements.
Background Art
Traditional remote sensing imaging systems carry out aerial photography with sensors mounted on satellite and aircraft platforms, and thanks to their flight altitude, shooting cost and imaging range they are widely applied in fields such as land resources, agriculture and forestry, environment, urban planning and water conservancy. For battlefield investigation, military reconnaissance and monitoring of sudden disaster areas, small unmanned-aerial-vehicle remote sensing platforms are increasingly favored for their low cost, flexibility, high resolution and strong timeliness: they can acquire high-resolution image data of a target area in real time while keeping costs low. However, limited by flight altitude and instantaneous field of view, a single unmanned aerial vehicle image covers only a small area, which is not enough to directly meet practical requirements. A real-time panoramic GIS map splicing technology for unmanned aerial vehicle video can project the sequence images to generate a panorama, thereby meeting the requirements of practical applications.
Image splicing for unmanned aerial vehicle video mainly faces the following problems:
(1) Error accumulation is the most widespread and most influential problem in image splicing. During unmanned aerial vehicle imaging, the actual resolution differs across regions of the same image, and likewise the resolution of a fixed ground scene differs slightly between adjacent video frames: the closer a scene is to the camera, the higher its image resolution; the farther away, the lower. In the splicing process this inevitably leads to inconsistent scaling coefficients between adjacent frames, and as the number of spliced frames grows these coefficients keep compounding, causing error accumulation; the typical symptom is that subsequently added single frames are over-spliced or under-spliced.
(2) Because the sequence images carry a large data volume, the panorama grows as splicing proceeds and occupies more and more resources; splicing is fast at the beginning but becomes slower and slower as the frame count increases, so efficiency is low.
(3) Existing image splicing methods do not provide mechanisms or algorithms for handling unmanned aerial vehicle hovering. When changing direction in flight, the unmanned aerial vehicle first hovers and then rapidly rotates its payload to change the heading angle. This aggravates non-rigid inter-frame deformation such as distortion and introduces deviations in the estimated rotation and translation; splicing multiple frames at the hover point superimposes these deviations, increasing splicing error and seriously degrading splicing quality. Meanwhile, the overlap between the frames at the hover start and end points drops sharply, reducing the number of feature point matching pairs.
(4) When handling looped paths, existing image splicing methods generally adjust image pose and coordinate information with a bundle adjustment algorithm to minimize splicing error.
Disclosure of Invention
The invention aims to provide a real-time panorama splicing method for unmanned aerial vehicle video that avoids the problems in the background art. The invention can handle the splicing of long strip image sequences, remains effective for closed-loop paths and hover-turn splicing routes, and offers high splicing real-time performance.
The technical solution adopted by the invention is as follows:
A real-time panorama splicing method for unmanned aerial vehicle video comprises the following steps:
(1) receiving and preprocessing data, and rejecting outliers in the longitude/latitude/altitude data;
(2) extracting feature points from the reference image and the image to be registered, matching and refining the feature points, and rigidly transforming the image to be registered onto a panoramic canvas;
(3) hovering detection: selecting images to be registered at a hover point at a smaller frame interval than at non-hover points, and selecting the locally optimal key frame to participate in image splicing;
(4) according to the splicing trend of the images to be registered, automatically expanding the rows and columns of the canvas by a dynamic step in the direction of that trend whenever the canvas would be exceeded;
(5) performing closed-loop detection on the spliced image and correcting the offset.
Further, the specific manner of step (1) is as follows:
(101) receiving the unmanned aerial vehicle video frame images, longitude/latitude/altitude data, camera focal length and telemetry data in real time, where the telemetry data comprise the heading angle, pitch angle and roll angle;
(102) rejecting outliers in the longitude/latitude/altitude data using invalid-value filtering and the Savitzky-Golay filtering algorithm.
Further, the specific manner of step (2) is as follows:
(201) performing the rigid transformation of the image to be registered on the device side through a GPU kernel function, allocating one processing thread per pixel when the kernel executes, and accelerating the extraction of feature points and the computation of feature description vectors for the reference image I_1 and the image to be registered I_2;
(202) refining the feature point matching pairs using a graph-cut-optimized RANSAC algorithm;
(203) using the refined feature point matching pairs, solving the rotation and translation from I_2 to I_1, and computing the projection coordinate range of I_2 on the panoramic canvas S by rigid transformation;
(204) rotating and translating the image to be registered I_2 onto the panoramic canvas S, and letting I_2 replace I_1 as the reference image for the next round of splicing;
(205) taking the next key frame image as the new image to be registered I_2 and repeating step (2).
Further, the specific manner of step (3) is as follows:
(301) converting the WGS84 longitude/latitude (in degrees) acquired in real time in step (1) into UTM coordinates (in meters), and, combined with the unmanned aerial vehicle video frame rate, judging the vehicle to be hovering when it moves less than 1 meter within the surrounding 1 second;
(302) taking the moment the unmanned aerial vehicle enters the hover point as the starting point and the moment it leaves the hover point as the end point; selecting the image at the starting point as the reference image and images at a smaller frame interval as images to be registered; calculating the number of matched feature point pairs and the distance residual between each image to be registered and the reference image as the basis for key frame selection, and adding the results to a container until the end point is reached;
(303) setting the thresholds for the number of feature point pairs and for the distance residual to 100 and 2 respectively, selecting the images in the container whose number of feature point pairs exceeds the threshold and whose distance residual is below the threshold as candidate images, and removing intermediate frames with poor matching precision;
(304) selecting the image with the minimum distance residual among all candidate images as the optimal key frame at the hover point to participate in the splicing process.
Further, the specific manner of step (5) is as follows:
(501) setting a distance threshold T of 20 meters;
(502) comparing the unmanned aerial vehicle position P_n acquired in real time in step (1) with the historical positions P_m recorded during the earlier splicing process; when the distance between P_n and P_m is less than T, judging that I_n and I_m form a closed loop, where n, m and j are serial numbers of the acquired data and ⌊·⌋ denotes rounding down;
(503) if a closed loop is detected, using I_m in place of I_{n+1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_m, and adding the global rotation and translation of I_m on the canvas; the result is the true value V_true. In addition, using I_{n-1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_{n-1}, and adding the global rotation and translation of I_{n-1} on the canvas; the result is the value to be calibrated V_uncalibrated;
(504) taking V_true - V_uncalibrated as the offset to be corrected;
(505) setting the offset correction step S_bias = (V_true - V_uncalibrated) / (n - m);
(506) the corrected global rotation and translation of each key frame image are:
V_i = V_true - (i - m) * S_bias, for i = m+1, m+2, ..., n;
(507) rigidly transforming each key frame image onto the panoramic canvas according to its corrected global rotation and translation.
The invention has the following beneficial effects:
1. The invention can handle the splicing of long strip image sequences.
2. The invention remains effective for closed-loop paths and hover-turn splicing routes.
3. The invention offers high splicing real-time performance and good application prospects.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention.
FIG. 2 shows the splicing result with closed-loop detection, before adjustment.
FIG. 3 shows the splicing result with closed-loop detection, after adjustment.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
A real-time panorama splicing method for unmanned aerial vehicle video comprises the following steps:
(1) Receiving and preprocessing data, and rejecting outliers in the longitude/latitude/altitude data.
(2) Extracting, matching and refining feature points, and rigidly transforming the image to be registered onto the panoramic canvas S.
(3) Hovering detection, optimal key frame selection, and removal of intermediate frames with poor matching precision:
The images to be registered I_2 at a hover point are selected at a smaller frame interval than at non-hover points, and the locally optimal key frame is selected to participate in image splicing. The aim is to screen key frames at a denser rate while splicing images at a larger frame interval: this prevents the deviations caused by non-rigid inter-frame deformation (due to the rapid rotation of the payload at the hover point) from being superimposed onto the same area and degrading splicing quality, and at the same time avoids splicing interruption caused by the sharp drop in overlap between the frames at the hover start and end points due to hover turning.
(4) An automatic border-expansion mechanism for canvas overruns is added, avoiding the excessive resource usage caused by repeatedly resizing and rewriting the canvas:
First, a small canvas is initialized; then, following the splicing trend of each image to be registered I_2, the rows and columns of the canvas are automatically expanded by a dynamic step in the direction of that trend whenever the canvas would be exceeded. The expansion step is calculated by multiplying the current number of canvas rows by a fixed proportionality coefficient.
The border is expanded, and the canvas size updated, only when the canvas is actually exceeded; the current canvas is then copied onto the updated canvas once. This effectively avoids the problem of traditional methods, in which the canvas size is repeatedly updated and the canvas rewritten during splicing, occupying excessive resources, and solves the problem of splicing becoming slower and slower as the canvas keeps growing.
(5) Closed-loop detection and offset correction.
Step (1) specifically comprises the following steps:
(101) receiving the unmanned aerial vehicle's video frame images, longitude/latitude/altitude, camera focal length and telemetry data (heading angle, pitch angle and roll angle) in real time;
(102) rejecting outliers in the longitude/latitude/altitude data using invalid-value filtering and the Savitzky-Golay filtering algorithm, avoiding the influence of factors such as electromagnetic interference.
Step (2) specifically comprises the following steps:
(201) writing a GPU kernel function to perform the rigid transformation of the image to be registered I_2 on the device side; to obtain higher instruction-stream efficiency, the algorithm allocates one processing thread per pixel when the kernel executes, accelerating the extraction of feature points and the computation of feature description vectors for the reference image I_1 and the image to be registered I_2;
(202) refining the feature point matching pairs using the graph-cut-optimized RANSAC algorithm (GC-RANSAC);
(203) using the refined feature point pairs, solving the rotation and translation from I_2 to I_1, and computing the projection coordinate range of I_2 on the panoramic canvas S by rigid transformation;
(204) rotating and translating the image to be registered I_2 onto the panoramic canvas S, and letting I_2 replace I_1 as the reference image for the next round of splicing; its feature points and feature description vectors are reused, avoiding repeated feature extraction and computation and improving efficiency;
(205) taking the next key frame image as the new image to be registered I_2 and repeating step (2).
Step (3) specifically comprises the following steps:
(301) converting the WGS84 longitude/latitude (in degrees) acquired in real time in step (1) into UTM coordinates (in meters), and, combined with the unmanned aerial vehicle video frame rate, judging the vehicle to be hovering when it moves less than 1 meter within the surrounding 1 second;
(302) taking the moment the unmanned aerial vehicle enters the hover point as the starting point and the moment it leaves the hover point as the end point; selecting the image at the starting point as the reference image and images at a certain frame interval as images to be registered; calculating the number of matched feature point pairs and the distance residual between each image to be registered and the reference image as the basis for key frame selection, and adding the results to a container until the end point is reached;
(303) setting the thresholds for the number of feature point pairs and for the distance residual to 100 and 2 respectively, selecting the images in the container whose number of feature point pairs exceeds the threshold and whose distance residual is below the threshold as candidate images, and removing intermediate frames with poor matching precision;
(304) selecting the image with the minimum distance residual among all candidate images as the optimal key frame at the hover point to participate in the splicing process.
Step (5) specifically comprises the following steps:
(501) setting a distance threshold T, with a suggested value of 20 meters;
(502) comparing the unmanned aerial vehicle position P_n acquired in real time in step (1) with the historical positions P_m recorded during the earlier splicing process; when the distance between P_n and P_m is less than T, judging that I_n and I_m form a closed loop, where n, m and j are serial numbers of the acquired data and ⌊·⌋ denotes rounding down;
(503) if a closed loop is detected, using I_m in place of I_{n+1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_m, and adding the global rotation and translation of I_m on the canvas; the result is the true value V_true. In addition, using I_{n-1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_{n-1}, and adding the global rotation and translation of I_{n-1} on the canvas; the result is the value to be calibrated V_uncalibrated;
(504) V_true - V_uncalibrated is the offset to be corrected;
(505) the offset correction step is S_bias = (V_true - V_uncalibrated) / (n - m);
(506) the corrected global rotation and translation of each key frame image are:
V_i = V_true - (i - m) * S_bias, for i = m+1, m+2, ..., n;
(507) rigidly transforming each key frame image onto the panoramic canvas according to its corrected global rotation and translation.
The following is a more specific example:
Referring to FIGS. 1 to 3, a real-time panorama splicing method for unmanned aerial vehicle video comprises the following steps:
(1) Data receiving and preprocessing, rejecting outliers in the longitude/latitude/altitude data: the unmanned aerial vehicle's video frame images, longitude/latitude/altitude, camera focal length and telemetry data (heading angle, pitch angle and roll angle) are received in real time, and outliers in the longitude/latitude/altitude data are rejected using invalid-value filtering and the Savitzky-Golay filtering algorithm, avoiding the influence of factors such as electromagnetic interference.
Longitude and latitude invalid-value filtering:
-180 ≤ lon ≤ 180; -90 ≤ lat ≤ 90
Longitude and latitude values outside these ranges are treated as invalid and filtered out.
The longitude and latitude data are each filtered with the Savitzky-Golay algorithm, and data values deviating by more than the 20% upper/lower threshold band of the fitted curve are rejected.
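As an illustration of this preprocessing step, the following is a minimal Python sketch, assuming SciPy's savgol_filter; the window length, polynomial order and the normalisation of the 20% band are not given in the patent and are assumed here.

import numpy as np
from scipy.signal import savgol_filter

def clean_latlon(lon, lat, window=21, polyorder=3, tol=0.2):
    """Invalid-value filtering plus Savitzky-Golay outlier rejection.

    window, polyorder and the normalisation of the 20% band are
    assumed values; the patent states only the band itself.
    """
    lon, lat = np.asarray(lon, float), np.asarray(lat, float)

    # Invalid-value filter: -180 <= lon <= 180, -90 <= lat <= 90.
    valid = (np.abs(lon) <= 180) & (np.abs(lat) <= 90)
    lon, lat = lon[valid], lat[valid]

    # Fit each channel with Savitzky-Golay and reject samples outside a
    # +/-20% band, read here as 20% of the fit's peak-to-peak range.
    keep = np.ones(lon.size, dtype=bool)
    for x in (lon, lat):
        fit = savgol_filter(x, window_length=window, polyorder=polyorder)
        band = tol * max(float(fit.max() - fit.min()), 1e-9)
        keep &= np.abs(x - fit) <= band
    return lon[keep], lat[keep]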
(2) Feature point extraction, matching and refinement, and rigid transformation of the image to be registered onto the panoramic canvas S:
The GPU accelerates the extraction of feature points and the computation of feature description vectors for the reference image I_1 and the image to be registered I_2.
The feature point matching pairs are refined using the graph-cut-optimized RANSAC algorithm (GC-RANSAC).
Using the refined feature point pairs, the rotation and translation from I_2 to I_1 are solved, and the projection coordinate range of I_2 on the panoramic canvas S is computed by rigid transformation.
The image to be registered I_2 is rotated and translated onto the panoramic canvas S, and I_2 replaces I_1 as the reference image for the next round of splicing; its feature points and feature description vectors are reused, avoiding repeated feature extraction and computation and improving efficiency.
The next key frame image is taken as the new image to be registered I_2, and step (2) is repeated.
The rigid transformation of I_2 is implemented on the device side with a GPU kernel function; to obtain higher instruction-stream efficiency, the algorithm allocates one processing thread per pixel when the kernel executes.
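A hedged sketch of this registration step in Python with OpenCV: ORB features and OpenCV's built-in RANSAC stand in for the patent's GPU-accelerated extraction and graph-cut-optimised GC-RANSAC, and cv2.estimateAffinePartial2D (rotation, translation and a uniform scale) approximates the rigid transform.

import cv2
import numpy as np

def register_pair(ref_gray, mov_gray):
    """Estimate the motion taking the image to be registered (I_2)
    onto the reference image (I_1). Sketch only: plain RANSAC replaces
    GC-RANSAC, and the estimated model also admits a uniform scale."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)

    # Cross-checked Hamming matching, then geometric refinement.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches])
    dst = np.float32([k1[m.trainIdx].pt for m in matches])

    M, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return M, inliers  # 2x3 matrix: rotation (+scale) and translation

The returned 2x3 matrix can then be composed with the previous global transform to obtain the projection range of I_2 on the canvas S, after which I_2 (with its cached keypoints and descriptors) becomes the reference image for the next round.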
(3) Hovering detection, optimal key frame selection, and removal of intermediate frames with poor matching precision:
The images to be registered I_2 at a hover point are selected at a smaller frame interval than at non-hover points, and the locally optimal key frame is selected to participate in image splicing. The aim is to screen key frames at a denser rate while splicing images at a larger frame interval: this prevents the deviations caused by non-rigid inter-frame deformation (due to the rapid rotation of the payload at the hover point) from being superimposed onto the same area and degrading splicing quality, and at the same time avoids splicing interruption caused by the sharp drop in overlap between the frames at the hover start and end points due to hover turning.
(301) The WGS84 longitude/latitude (in degrees) acquired in real time in step (1) are converted into UTM coordinates (in meters); combined with the video frame rate, the unmanned aerial vehicle is judged to be hovering when it moves less than 1 meter within the surrounding 1 second.
(302) The moment the unmanned aerial vehicle enters the hover point is taken as the starting point and the moment it leaves as the end point; the image at the starting point is selected as the reference image and images at a certain frame interval as images to be registered; the number of matched feature point pairs and the distance residual between each image to be registered and the reference image are calculated as the basis for key frame selection and added to a container until the end point is reached.
(303) The thresholds for the number of feature point pairs and for the distance residual are set to 100 and 2 respectively; images in the container whose number of feature point pairs exceeds the threshold and whose distance residual is below the threshold are selected as candidate images, removing intermediate frames with poor matching precision.
(304) The image with the minimum distance residual among all candidate images is selected as the optimal key frame at the hover point to participate in the splicing process.
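The hover test of step (301) can be sketched as follows, assuming pyproj for the WGS84-to-UTM conversion; the UTM zone (EPSG code here) and the reading of the 1-metre test as a 1-second window centred on the frame are assumptions.

from pyproj import Transformer

def is_hovering(track, fps, frame_idx, utm_epsg=32650):
    """Hover test from step (301): hovering when the UAV moves less
    than 1 metre over the surrounding 1 second of track.

    Sketch; `track` is a list of (lon, lat) per frame, and the UTM
    zone (EPSG:32650 here) depends on where the UAV actually flies.
    """
    to_utm = Transformer.from_crs("EPSG:4326", f"EPSG:{utm_epsg}",
                                  always_xy=True)
    half = int(round(fps / 2))  # +/- 0.5 s around the current frame
    i0 = max(0, frame_idx - half)
    i1 = min(len(track) - 1, frame_idx + half)
    x0, y0 = to_utm.transform(*track[i0])
    x1, y1 = to_utm.transform(*track[i1])
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 < 1.0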
(4) An automatic border-expansion mechanism for canvas overruns is added, avoiding the excessive resource usage caused by repeatedly resizing and rewriting the canvas:
First, a small canvas is initialized; then, following the splicing trend of each image to be registered I_2, the rows and columns of the canvas are automatically expanded by a dynamic step in the direction of that trend whenever the canvas would be exceeded. The expansion step is calculated by multiplying the current number of canvas rows by a fixed proportionality coefficient.
The border is expanded, and the canvas size updated, only when the canvas is actually exceeded; the current canvas is then copied onto the updated canvas once. This effectively avoids the problem of traditional methods, in which the canvas size is repeatedly updated and the canvas rewritten during splicing, occupying excessive resources, and solves the problem of splicing becoming slower and slower as the canvas keeps growing.
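A minimal sketch of this overrun-triggered expansion, assuming NumPy and a grow coefficient of 0.5; the patent does not disclose the proportionality coefficient, and the per-side padding and offset bookkeeping are one plausible implementation.

import numpy as np

def ensure_on_canvas(canvas, corners, grow=0.5):
    """Grow the panorama canvas only when the projected corners of the
    incoming frame fall outside it, padding the crossed side by a step
    proportional to the current canvas size.

    Returns the new canvas and the (dx, dy) shift to apply to all
    existing canvas coordinates. `grow` is an assumed coefficient.
    """
    h, w = canvas.shape[:2]
    xs, ys = corners[:, 0], corners[:, 1]
    pad_left   = int(np.ceil(grow * w)) if xs.min() < 0 else 0
    pad_right  = int(np.ceil(grow * w)) if xs.max() >= w else 0
    pad_top    = int(np.ceil(grow * h)) if ys.min() < 0 else 0
    pad_bottom = int(np.ceil(grow * h)) if ys.max() >= h else 0
    if not (pad_left or pad_right or pad_top or pad_bottom):
        return canvas, (0, 0)  # in bounds: no reallocation at all

    new = np.zeros((h + pad_top + pad_bottom, w + pad_left + pad_right,
                    *canvas.shape[2:]), dtype=canvas.dtype)
    new[pad_top:pad_top + h, pad_left:pad_left + w] = canvas  # single copy
    return new, (pad_left, pad_top)

Existing canvas coordinates shift by (pad_left, pad_top), and the copy of the old canvas into the new one happens only on an overrun, matching the "copy once" behaviour described above.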
(5) Closed-loop detection and offset correction:
A distance threshold T is set, with a suggested value of 20 meters.
The unmanned aerial vehicle position P_n acquired in real time in step (1) is compared with the historical positions P_m recorded during the earlier splicing process; when the distance between P_n and P_m is less than T, I_n and I_m are judged to form a closed loop.
If a closed loop is detected, I_m is used in place of I_{n+1} as the reference image and I_n as the image to be registered; step (2) is repeated to calculate the rotation and translation from I_n to I_m, and the global rotation and translation of I_m on the canvas are added; the result is the true value V_true. In addition, with I_{n-1} as the reference image and I_n as the image to be registered, step (2) is repeated to calculate the rotation and translation from I_n to I_{n-1}, and the global rotation and translation of I_{n-1} on the canvas are added; the result is the value to be calibrated V_uncalibrated.
V_true - V_uncalibrated is the offset to be corrected.
The offset correction step is S_bias = (V_true - V_uncalibrated) / (n - m).
The corrected global rotation and translation of each key frame image are:
V_i = V_true - (i - m) * S_bias, for i = m+1, m+2, ..., n.
Each key frame image is then rigidly transformed onto the panoramic canvas according to its corrected global rotation and translation.
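The offset distribution of steps (504)-(506) reduces to a few lines; the sketch below treats each global pose V_i as a plain parameter vector (e.g. rotation angle, tx, ty), which is an assumption about the representation rather than something the patent specifies.

import numpy as np

def corrected_poses(V_true, V_uncalibrated, m, n):
    """Implements steps (504)-(506): split the closed-loop residual
    into n - m equal steps and assign V_i = V_true - (i - m) * S_bias
    to each key frame i in (m, n]."""
    V_true = np.asarray(V_true, dtype=float)
    V_unc = np.asarray(V_uncalibrated, dtype=float)
    S_bias = (V_true - V_unc) / (n - m)  # per-frame correction step
    return {i: V_true - (i - m) * S_bias for i in range(m + 1, n + 1)}

Each returned pose then drives the rigid re-projection of its key frame onto the canvas, as in step (507).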
It should be noted that the above embodiments are merely illustrative of the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute similar alternatives, without departing from the spirit of the present invention or exceeding the scope of the appended claims.

Claims (5)

1. A real-time panorama splicing method for unmanned aerial vehicle video, characterized by comprising the following steps:
(1) receiving and preprocessing data, and rejecting outliers in the longitude/latitude/altitude data;
(2) extracting feature points from the reference image and the image to be registered, matching and refining the feature points, and rigidly transforming the image to be registered onto a panoramic canvas;
(3) hovering detection: selecting images to be registered at a hover point at a smaller frame interval than at non-hover points, and selecting the locally optimal key frame to participate in image splicing;
(4) according to the splicing trend of the images to be registered, automatically expanding the rows and columns of the canvas by a dynamic step in the direction of that trend whenever the canvas would be exceeded;
(5) performing closed-loop detection on the spliced image and correcting the offset.
2. The real-time panorama splicing method for unmanned aerial vehicle video according to claim 1, characterized in that step (1) is specifically performed as follows:
(101) receiving the unmanned aerial vehicle video frame images, longitude/latitude/altitude data, camera focal length and telemetry data in real time, where the telemetry data comprise the heading angle, pitch angle and roll angle;
(102) rejecting outliers in the longitude/latitude/altitude data using invalid-value filtering and the Savitzky-Golay filtering algorithm.
3. The real-time panorama splicing method for unmanned aerial vehicle video according to claim 2, characterized in that step (2) is specifically performed as follows:
(201) performing the rigid transformation of the image to be registered on the device side through a GPU kernel function, allocating one processing thread per pixel when the kernel executes, and accelerating the extraction of feature points and the computation of feature description vectors for the reference image I_1 and the image to be registered I_2;
(202) refining the feature point matching pairs using a graph-cut-optimized RANSAC algorithm;
(203) using the refined feature point matching pairs, solving the rotation and translation from I_2 to I_1, and computing the projection coordinate range of I_2 on the panoramic canvas S by rigid transformation;
(204) rotating and translating the image to be registered I_2 onto the panoramic canvas S, and letting I_2 replace I_1 as the reference image for the next round of splicing;
(205) taking the next key frame image as the new image to be registered I_2 and repeating step (2).
4. The real-time panorama splicing method for unmanned aerial vehicle video according to claim 3, characterized in that step (3) is specifically performed as follows:
(301) converting the WGS84 longitude/latitude (in degrees) acquired in real time in step (1) into UTM coordinates (in meters), and, combined with the unmanned aerial vehicle video frame rate, judging the vehicle to be hovering when it moves less than 1 meter within the surrounding 1 second;
(302) taking the moment the unmanned aerial vehicle enters the hover point as the starting point and the moment it leaves the hover point as the end point; selecting the image at the starting point as the reference image and images at a smaller frame interval as images to be registered; calculating the number of matched feature point pairs and the distance residual between each image to be registered and the reference image as the basis for key frame selection, and adding the results to a container until the end point is reached;
(303) setting the thresholds for the number of feature point pairs and for the distance residual to 100 and 2 respectively, selecting the images in the container whose number of feature point pairs exceeds the threshold and whose distance residual is below the threshold as candidate images, and removing intermediate frames with poor matching precision;
(304) selecting the image with the minimum distance residual among all candidate images as the optimal key frame at the hover point to participate in the splicing process.
5. The real-time panorama splicing method for unmanned aerial vehicle video according to claim 4, characterized in that step (5) is specifically performed as follows:
(501) setting a distance threshold T of 20 meters;
(502) comparing the unmanned aerial vehicle position P_n acquired in real time in step (1) with the historical positions P_m recorded during the earlier splicing process; when the distance between P_n and P_m is less than T, judging that I_n and I_m form a closed loop, where n, m and j are serial numbers of the acquired data and ⌊·⌋ denotes rounding down;
(503) if a closed loop is detected, using I_m in place of I_{n+1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_m, and adding the global rotation and translation of I_m on the canvas; the result is the true value V_true; in addition, using I_{n-1} as the reference image and I_n as the image to be registered, repeating step (2) to calculate the rotation and translation from I_n to I_{n-1}, and adding the global rotation and translation of I_{n-1} on the canvas; the result is the value to be calibrated V_uncalibrated;
(504) taking V_true - V_uncalibrated as the offset to be corrected;
(505) setting the offset correction step S_bias = (V_true - V_uncalibrated) / (n - m);
(506) the corrected global rotation and translation of each key frame image are:
V_i = V_true - (i - m) * S_bias, for i = m+1, m+2, ..., n;
(507) rigidly transforming each key frame image onto the panoramic canvas according to its corrected global rotation and translation.
CN202111163456.6A 2021-09-30 2021-09-30 Panorama real-time splicing method for unmanned aerial vehicle video Active CN113905190B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111163456.6A CN113905190B (en) 2021-09-30 2021-09-30 Panorama real-time splicing method for unmanned aerial vehicle video


Publications (2)

Publication Number Publication Date
CN113905190A 2022-01-07
CN113905190B 2023-03-10

Family

ID=79190012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111163456.6A Active CN113905190B (en) 2021-09-30 2021-09-30 Panorama real-time splicing method for unmanned aerial vehicle video

Country Status (1)

Country Link
CN (1) CN113905190B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN102982515A (en) * 2012-10-23 2013-03-20 中国电子科技集团公司第二十七研究所 Method of unmanned plane image real-time splicing
US20200195847A1 (en) * 2017-08-31 2020-06-18 SZ DJI Technology Co., Ltd. Image processing method, and unmanned aerial vehicle and system
CN111105351A (en) * 2019-12-13 2020-05-05 华中科技大学鄂州工业技术研究院 Video sequence image splicing method and device
CN112288634A (en) * 2020-10-29 2021-01-29 江苏理工学院 Splicing method and device for aerial images of multiple unmanned aerial vehicles

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578441A (en) * 2022-08-30 2023-01-06 感知信息科技(浙江)有限责任公司 Vehicle side image splicing and vehicle size measuring method based on deep learning
CN115578441B (en) * 2022-08-30 2023-07-28 感知信息科技(浙江)有限责任公司 Vehicle side image stitching and vehicle size measuring method based on deep learning

Also Published As

Publication number Publication date
CN113905190B (en) 2023-03-10

Similar Documents

Publication Publication Date Title
US11554717B2 (en) Vehicular vision system that dynamically calibrates a vehicular camera
US11897606B2 (en) System and methods for improved aerial mapping with aerial vehicles
CN106127697B (en) EO-1 hyperion geometric correction method is imaged in unmanned aerial vehicle onboard
US9179064B1 (en) Diagonal collection of oblique imagery
CN110675450B (en) Method and system for generating orthoimage in real time based on SLAM technology
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
US20220264014A1 (en) Controlling a line of sight angle of an imaging platform
CN110727009B (en) High-precision visual map construction and positioning method based on vehicle-mounted all-around image
CN103822615A (en) Unmanned aerial vehicle ground target real-time positioning method with automatic extraction and gathering of multiple control points
CN111899164B (en) Image splicing method for multi-focal-segment scene
CN107192376A (en) Unmanned plane multiple image target positioning correction method based on interframe continuity
CN110223233B (en) Unmanned aerial vehicle aerial photography image building method based on image splicing
CN113905190B (en) Panorama real-time splicing method for unmanned aerial vehicle video
CN105606123A (en) Method for automatic correction of digital ground elevation model for low-altitude aerial photogrammetry
CN112950719A (en) Passive target rapid positioning method based on unmanned aerial vehicle active photoelectric platform
CN108109118B (en) Aerial image geometric correction method without control points
CN114396944A (en) Autonomous positioning error correction method based on digital twinning
CN114545963A (en) Method and system for optimizing multi-unmanned aerial vehicle panoramic monitoring video and electronic equipment
JP2010226652A (en) Image processing apparatus, image processing method, and computer program
CN116007631A (en) Unmanned aerial vehicle autonomous line navigation method based on computer vision
CN113093783B (en) Shooting control method and device of unmanned aerial vehicle
CN113706389B (en) Image splicing method based on POS correction
US20240013485A1 (en) System and methods for improved aerial mapping with aerial vehicles
CN114858186B (en) On-satellite geometric calibration method for linear array camera under fixed star observation mode
CN115908136A (en) Real-time incremental splicing method for aerial images of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant