CN107808362A - Image stitching method combining UAV POS information with image SURF features - Google Patents

Image stitching method combining UAV POS information with image SURF features

Info

Publication number
CN107808362A
CN107808362A
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711132452.5A
Other languages
Chinese (zh)
Inventor
赵德群
王亚洲
孙光民
邓钱华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201711132452.5A priority Critical patent/CN107808362A/en
Publication of CN107808362A publication Critical patent/CN107808362A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/14Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image stitching method that combines UAV POS information with image SURF features, relating to digital image processing, GIS, and surveying. The images are first geometrically corrected, and the geographic coordinates of the four image corners are computed. Taking the first image's geographic coordinates as the reference, SURF features extracted from the overlapping region of adjacent images yield the positional relationship of matched target points, which is used to correct the geographic coordinates of each subsequent image in turn. Finally, an adaptive fade-in/fade-out blending algorithm produces a panoramic image with good visual quality, completing the stitch. By combining a feature extraction algorithm with the images' geographic coordinates, the method improves substantially on traditional feature-based stitching in both efficiency and visual quality, and the stitched image carries geographic information, giving it practical value.

Description

An image stitching method combining UAV POS information with image SURF features
Technical field
The present invention relates to a stitching method that combines UAV POS (Position and Orientation System) information with image SURF features, belonging to the related fields of digital image processing, GIS, and surveying.
Background technology
UAVs (Unmanned Aerial Vehicles) are simple to operate, quick to respond, flexible, and cheap to fly, and are widely used in disaster relief, military reconnaissance, marine detection, environmental protection, and land dynamic monitoring. However, the viewing angle of a UAV image sequence is limited, constrained by the flight altitude and the camera parameters. To grasp and analyse the photographed region as a whole and obtain more information about the target area, fast image stitching is urgently needed.
At present there are two main approaches to stitching UAV aerial images: one based on image features, the other based on UAV POS information.
Feature-based image stitching consists of two main steps, image registration and image fusion, of which registration is the core. Among feature-detection registration algorithms, Chris Harris proposed the Harris corner detector in 1988. The SIFT (Scale-Invariant Feature Transform) matching algorithm is a classic: it finds extreme points in scale space and extracts position, scale, and rotation invariants, yielding stable features, but it extracts many feature points, is computationally expensive, and is slow. Bay proposed the SURF (Speeded-Up Robust Features) algorithm in 2006 as an improvement on SIFT: it uses the Hessian matrix to determine candidate points and then applies non-maximum suppression, reducing computational complexity. In general the standard SURF operator is several times faster than SIFT and more robust across images. However, errors in feature-based stitching accumulate easily, and in regions where features are hard to extract, such as sea areas, traditional feature-based stitching does not apply.
POS-based stitching mainly performs registration using the geographic coordinate relationships of the aerial images. The UAV's heading speed, cruising altitude, and the aircraft's latitude and longitude at the current exposure point can all be obtained from the POS system carried on board. This information records the aircraft's attitude at each exposure time during flight, but because the attitude is not fixed during aerial photography, two groups of POS data from adjacent exposure points are needed for aerial triangulation to compute the latitude and longitude of the image pixels, and the error differs with the model used.
This method combines a feature extraction algorithm with the images' geographic coordinates; compared with traditional feature-based stitching it improves substantially in efficiency and visual quality, and the stitched image carries geographic information, giving it practical value.
The content of the invention
To address the shortcomings of the two methods above, improve stitching precision, and increase adaptability to various aerial images, a method combining UAV POS information with traditional feature-based stitching is proposed. The images are first geometrically corrected, and the geographic coordinates of the four image corners are computed. Taking the first image's geographic coordinates as the reference, SURF features extracted from the overlapping region of adjacent images yield the positional relationship of matched target points, which is used to correct the geographic coordinates of each subsequent image in turn. Finally, an adaptive fade-in/fade-out blending algorithm produces a panoramic image with good visual quality, completing the stitch.
The technical solution adopted by the present invention is an image stitching method combining UAV POS information with image SURF features, comprising the following steps:
S1: Aerial image preprocessing:
While the UAV performs an aerial photography task, the aircraft's attitude, altitude, and speed and the Earth's rotation cause the image to be squeezed, distorted, stretched, and shifted relative to the true ground target position. The aerial images are therefore geometrically corrected to obtain remote sensing images referred to the same projection plane.
S2: Aerial image overlap computation:
The overlapping region of adjacent images is computed from the POS information: the exposure times of the sequence images and the actual flight distance are obtained from the POS data, and the speed is decomposed along the positive axes according to the aircraft's current heading. Computing the overlap of adjacent images reduces the computational load of stitching; the smaller the search area for feature points, the smaller the probability of mismatches, improving detection efficiency.
S3: Computing the images' geographic coordinates:
After geometric correction, the latitude and longitude of the image centre are computed from the aircraft's current attitude angles, and the ground resolution is computed from the camera's interior and exterior orientation elements. From these, the latitude and longitude of the four image corners are computed and transformed into a space rectangular coordinate system, so the images can be projected in that system and their relative positions obtained.
S4: Extracting the images' SURF features:
SURF feature extraction comprises building the Hessian matrix, generating all interest points, constructing the scale space, locating the feature points, assigning each feature point's principal orientation, generating the feature descriptors, and matching the feature points; these steps yield the SURF matching point pairs of adjacent images.
S5: Correcting the geographic coordinates with the extracted SURF point pairs:
Since the geographic coordinates of every image are known, the coordinates obtained for a pair of corresponding points should describe the same point; if the computed values differ, the subsequent images' geographic coordinates are corrected in turn, taking the first image as the reference, registering the positions and improving on the accuracy of placement by geographic coordinates alone.
S6: Image fusion strategy:
After coordinate correction there is still a visible seam between images, so a strategy is needed to handle the large colour transition across the stitching gap. An adaptive fade-in/fade-out weighted fusion smooths the transition across the gap so that the images join naturally with good visual quality.
Compared with the prior art, the present invention has the following advantage: the method combines an image feature extraction algorithm with the images' geographic coordinates, improving substantially on traditional feature-based stitching in efficiency and visual quality, and the stitched image carries geographic information, giving it practical value.
Brief description of the drawings
Fig. 1 is the flow chart of the designed method;
Fig. 2 shows the format of the POS information carried by the UAV;
Fig. 3 shows the adjacent-image overlap computation, where Fig. 3.1 is the velocity diagram and Fig. 3.2 the overlap computation diagram;
Fig. 4 shows the SURF feature point extraction for adjacent images, where Fig. 4.1 shows the detected feature points, Fig. 4.2 the feature point orientation computation, and Fig. 4.3 the detected matching point pairs of the adjacent images;
Fig. 5 shows the correction of geographic coordinates using SURF feature points, where Fig. 5.1 shows two aerial images placed by their raw geographic coordinates, Fig. 5.2 the placement after correction by the present method, and Fig. 5.3 the placement of multiple corrected images;
Fig. 6 compares the result before and after fusion.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
The overall flow, shown in Fig. 1, is divided into six main links: aerial image preprocessing, aerial image overlap computation, image geographic coordinate computation, image SURF feature extraction, geographic coordinate correction using adjacent-image SURF matching point pairs, and image fusion. Correcting the image geographic coordinates with adjacent-image SURF matching point pairs is the innovation of this method. Following the execution order of the algorithm, each of the above steps is described in detail below:
S1 Aerial image preprocessing:
During geometric correction the following coordinate systems are established in turn: the Earth coordinate system, the geographic coordinate system, the body coordinate system, the photoelectric platform coordinate system, and the camera coordinate system. The conventional correction proceeds as follows. Let the transformation matrix from the photoelectric platform coordinate system to the digital camera coordinate system be R1, from the body coordinate system to the photoelectric platform coordinate system R2, from the geographic coordinate system to the body coordinate system R3, and from the Earth coordinate system to the space rectangular coordinate system R4. Then:

$$\begin{cases} R1 = R(\beta_P)\,R(\alpha_P) \\ R2 = R(\chi)\,R(\omega) \\ R3 = R_x(\beta)\,R_z(\theta)\,R_y(90-\lambda)\,R_x(90) \\ R4 = R_x^T(-(90-B))\,R_z^T(90+L) \end{cases} \qquad (1)$$

where R_x(λ), R_y(λ), R_z(λ) are rotation matrices about the x, y, and z axes, α_P is the pixel scan angle, β_P the pixel deflection angle, χ the azimuth, ω the platform roll angle, β the aircraft roll angle, θ the heading angle, B the longitude, L the latitude, and H the height. The transformation from the camera coordinate system (Xc, Yc, Zc) to the Earth coordinate system (Xe, Ye, Ze) can therefore be expressed as:

$$\begin{bmatrix} X_e \\ Y_e \\ Z_e \end{bmatrix} = R_1^T R_2^T R_3^T R_4^T \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \qquad (2)$$
By formula (2), the conventional geometric correction is computed and resampled pixel by pixel. Since this work stitches a single flight strip, and the image resolution can be considered consistent when the platform roll and pitch angles are small (near-vertical shooting), the geometric correction only corrects the velocity vector angle in the horizontal direction and the platform azimuth, to reduce computation and improve speed. The correction model is:

$$\begin{bmatrix} X_e \\ Y_e \\ Z_e \end{bmatrix} = R^T(\chi+\theta) \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \qquad (3)$$

where χ is the azimuth, θ the heading angle, and R^T(χ + θ) the transformation matrix. After this model transformation, the grey value of each pixel in the new coordinate system is computed by bilinear interpolation resampling, generating the new image matrix and completing the geometric correction.
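As a sketch of the simplified correction in formula (3): when only the horizontal angle is corrected, the transform reduces to a plane rotation by (χ + θ). The function below is illustrative only; it assumes angles in degrees, and the name is not from the patent.

```python
import numpy as np

def rotate_ground_coords(xc, yc, chi_deg, theta_deg):
    """Apply R^T(chi + theta) to camera-plane coordinates (xc, yc),
    the simplified horizontal correction of formula (3)."""
    a = np.deg2rad(chi_deg + theta_deg)
    # transpose of the standard 2D rotation matrix
    xe = np.cos(a) * xc + np.sin(a) * yc
    ye = -np.sin(a) * xc + np.cos(a) * yc
    return xe, ye
```

In a full implementation this rotation would be applied to every pixel coordinate, followed by the bilinear resampling the text describes.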
S2: Aerial image overlap computation
Computing the overlapping region of adjacent images from the POS information reduces the computational load of stitching; the smaller the search area for feature points, the smaller the probability of mismatches, improving detection efficiency. The overlap of adjacent images is computed as follows:
1) Suppose the UAV's course deviates from due north by an angle θ and its speed is V. The velocity is decomposed into components along due north and due east, denoted V1 and V2, as shown in Fig. 3.1.
2) Let the images of two adjacent exposure points, shot in succession, be numbered Pic1 and Pic2, with interval time t. The centre latitudes and longitudes of the two pictures are obtained from the POS track file and denoted LatA, LonA, LatB, LonB respectively. The central angle C between the two centre points then gives the actual ground distance L between them (formula (4)), with R = 6371.004 km and pi taken as 3.1415926.
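The patent's formula (4) for the centre-to-centre distance is not reproduced in this text, so the sketch below uses the standard spherical law of cosines with the same R = 6371.004 km; the function name is mine, and the exact variant the patent uses may differ.

```python
import math

R_EARTH_KM = 6371.004  # mean Earth radius used in the text

def ground_distance_km(lat_a, lon_a, lat_b, lon_b):
    """Spherical law of cosines: cos(C) from the two centre points,
    then arc length L = R * C."""
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_b - lon_a)
    c = math.sin(la) * math.sin(lb) + math.cos(la) * math.cos(lb) * math.cos(dlon)
    c = max(-1.0, min(1.0, c))  # clamp rounding error before acos
    return R_EARTH_KM * math.acos(c)
```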
3) Taking Pic1 as the reference, the overlap between Pic2 and Pic1 is the rectangular region S shown in Fig. 3.2. Expanding the overlap to a regular region and projecting its vertices onto the x and y directions gives:

S = (W - P1)(H - P2)    (5)

where W and H are the width and height of the rectangular region, and P1 and P2 are the lengths marked in the figure.
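A minimal sketch of formula (5), assuming the relative displacement of Pic2 with respect to Pic1 has already been converted to pixel offsets P1 and P2; the ROI convention (Pic2 shifted right and down) is an assumption for illustration.

```python
def overlap_region(w, h, p1, p2):
    """Overlap of two equally sized frames, Pic2 shifted by (p1, p2)
    pixels relative to Pic1: S = (W - P1)(H - P2), formula (5)."""
    ow, oh = max(0, w - p1), max(0, h - p2)
    # overlap ROI inside Pic1 as (x, y, width, height)
    roi = (p1, p2, ow, oh)
    return ow * oh, roi
```

Restricting SURF detection to this ROI is what shrinks the search area and the mismatch probability mentioned above.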
Computing the overlap reduces the computational load of stitching; the smaller the search area for feature points, the smaller the probability of mismatches, improving detection efficiency. The overlap computation is illustrated in Fig. 3.2.
S3: Computing the images' geographic coordinates:
For the geometrically corrected aerial images, every pixel coordinate has a corresponding geographic coordinate. The POS information carried during flight (shown in Fig. 2) records the aircraft's latitude and longitude at the current exposure image, together with its height, roll angle, pitch angle, and heading angle, all of which can be used to compute geographic coordinates. The computation of an image's geographic coordinates is divided into the following steps:
1) Compute the ground resolution:

GSD = H * P / f    (6)

where GSD is the ground resolution (m), f the lens focal length (mm), P the pixel size of the image sensor (mm), and H the UAV's flying height (m).
2) Compute the true ground distance of the image diagonal:

L = GSD * sqrt(w^2 + h^2)    (7)

where w and h are the image width and height (pixels) and L is the actual ground length of the image diagonal.
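The two relations above can be sketched as follows; since the patent's formula images are not reproduced here, the standard photogrammetric relation GSD = H * P / f matching the variables listed is assumed.

```python
import math

def ground_sample_distance_m(flight_height_m, pixel_size_mm, focal_mm):
    """GSD = H * P / f: metres of ground covered by one pixel."""
    return flight_height_m * pixel_size_mm / focal_mm

def diagonal_ground_distance_m(gsd_m, width_px, height_px):
    """True ground length of the image diagonal, formula (7)."""
    return gsd_m * math.hypot(width_px, height_px)
```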
3) Compute the geographic coordinates of the four image corners. From the centre latitude and longitude and the distance and bearing of each corner relative to the centre, the four corner coordinates are found on the circle centred at the image centre with radius L/2 (formula (8)), where θi ∈ (0, 2pi), Lona and Lata are the longitude and latitude of the image centre, the equatorial radius Ri is taken as 6378137 m, the polar radius Rj as 6356725 m, and pi as 3.1415925.
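Formula (8) is not reproduced in this text; the sketch below places one corner at distance L/2 and bearing θi from the centre using a small-offset spherical model with a single mean radius, which is simpler than the patent's use of the equatorial and polar radii Ri and Rj, so treat it as an approximation.

```python
import math

def corner_latlon(lat_c, lon_c, radius_m, bearing_rad, earth_r_m=6371004.0):
    """One corner at ground distance `radius_m` (half the diagonal, L/2)
    from the centre along `bearing_rad`, small-offset spherical model."""
    dlat = (radius_m * math.cos(bearing_rad)) / earth_r_m
    dlon = (radius_m * math.sin(bearing_rad)) / (
        earth_r_m * math.cos(math.radians(lat_c)))
    return lat_c + math.degrees(dlat), lon_c + math.degrees(dlon)
```

Calling this with the four bearings θi gives the four corner coordinates.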
4) Transform the geographic coordinates into the space rectangular coordinate system (formula (9)), where N is the radius of curvature and Lon, Lat, H are the longitude, latitude, and height of any point on the image. This converts the image into space coordinates; at this point the complete image coordinates are obtained.
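The geographic-to-rectangular conversion of formula (9) is the standard geodetic-to-ECEF transform; the sketch below assumes WGS-84 constants, consistent with the Ri = 6378137 m above.

```python
import math

WGS84_A = 6378137.0          # equatorial radius (m)
WGS84_E2 = 6.69437999014e-3  # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """Geographic (Lon, Lat, H) to space rectangular (X, Y, Z);
    N is the prime-vertical radius of curvature mentioned in the text."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h_m) * math.sin(lat)
    return x, y, z
```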
S4: SURF feature extraction:
SURF detects feature points with an approximate Hessian matrix and performs convolution using an integral image, greatly reducing computation and thus improving extraction speed. The SURF descriptor involves two main parts, detecting the feature points and computing the features, implemented in the following steps:
1) Build the Hessian matrix and construct the scale space.
For a point X(x, y) on the image, the matrix M at scale σ is defined as:

$$M(X,\sigma) = \begin{bmatrix} L_{xx}(X,\sigma) & L_{xy}(X,\sigma) \\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma) \end{bmatrix}$$

where L_xx is the convolution of the Gaussian second-order derivative with the image at X, L_xy and the other entries are defined analogously, and σ is the spatial scale. When the discriminant of the Hessian matrix attains a local maximum, the position of a keypoint is located.
2) Detect the feature points
Each pixel processed by the Hessian matrix in the scale space obtained above is compared with the 26 points in its neighbourhood in the image plane and the adjacent scales, giving a preliminary localisation of keypoints; weak and wrongly located keypoints are then filtered out, leaving the final stable feature points. The detection process is shown in Fig. 4.1.
3) Determine the feature point's principal orientation
Taking the feature point as the centre and 6σ as the radius, the Haar wavelet responses in the x and y directions are computed, and the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are summed, with the Haar wavelet side length set to 4s, giving each sector its value. The 60-degree sector is then rotated at fixed intervals, and the direction of the sector with the maximum value is locked in as the feature point's principal orientation. The process is illustrated in Fig. 4.2.
4) Compute the feature descriptor
A square frame is taken around the feature point and divided into 16 subregions; for each subregion the horizontal and vertical Haar wavelet responses of 25 pixels are accumulated, where horizontal and vertical are relative to the principal orientation. Each feature point is therefore a 16 * 4 = 64-dimensional vector, which greatly speeds up matching. The extracted adjacent-image SURF matching point pairs are illustrated in Fig. 4.3.
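The speed advantage described above rests on the integral image: once it is built, any box-filter sum costs four look-ups regardless of box size, which is how SURF approximates the Gaussian second derivatives cheaply. A self-contained numpy sketch of that trick (not the full SURF pipeline):

```python
import numpy as np

def integral_image(img):
    """S[i, j] = sum of img[:i, :j], padded with a zero row and column."""
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    s[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return s

def box_sum(s, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in four look-ups, independent of box size."""
    return s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]
```

In practice one would use a library implementation (e.g. OpenCV's contrib SURF module, where available) rather than reimplementing the detector.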
S5: Correcting the geographic coordinates using adjacent-image matching point pairs
Because the POS precision is low and the geometric correction carries some error, the computed coordinate mapping has a certain error. Feature matching is therefore used to correct the images' geographic coordinates, as follows:
Suppose the geographic coordinate of image 1 is P1(x1, y1) and that of image 2 is P2(x2, y2). After feature extraction and pairing, the pixel positions of the same point in the two images are obtained, giving the latitude and longitude of the same target point in both images, (Lon1, Lat1) and (Lon2, Lat2). Taking the first image's geographic coordinates as the reference, the offset of the target point in the second image relative to the first is found (formula (10)), and that offset is then used to correct image 2's geographic coordinate P2.
The geographic coordinates are then projected into the space rectangular coordinate system, completing accurate image registration and yielding the coordinate-corrected images. Figs. 5.1 and 5.2 compare the result of placement directly by geographic coordinates with the placement after correction; Fig. 5.3 shows the placement of multiple images by their corrected geographic coordinates.
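The correction of formula (10) can be sketched as: the coordinate discrepancy of the matched point, measured with image 1 as the reference, is applied as an offset to image 2's geographic coordinate. The names and the (lon, lat) tuple layout below are my assumptions.

```python
def correct_geocoord(p2, match_in_1, match_in_2):
    """Shift image 2's reference coordinate P2 so the matched target
    point gets the same (lon, lat) it has in image 1 (the reference)."""
    dlon = match_in_1[0] - match_in_2[0]
    dlat = match_in_1[1] - match_in_2[1]
    return (p2[0] + dlon, p2[1] + dlat)
```

With several matched pairs, a robust average of the per-pair offsets would be the natural extension.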
S6 Image fusion strategy
After coordinate correction there is still a visible seam between images, so a strategy is needed to handle the large colour transition across the stitching gap and make the stitched image smooth and natural. The fade-in/fade-out adaptive weighted fusion proceeds as follows:
Suppose I1 and I2 are images 1 and 2 before fusion and I is image 3 after fusion. The fusion is completed by formula (11), in which W is the total width of the region where the two pictures overlap and w is the lateral distance between the left edge of the overlapping region and the current pixel. The comparison before and after fusion is shown in Fig. 6.
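A numpy sketch of the fade-in/fade-out weight across the overlap region, assuming I1 and I2 are the already-aligned overlap strips of equal size; the linear ramp w/W is the standard form of this blend and is assumed to match formula (11).

```python
import numpy as np

def blend_overlap(i1, i2):
    """Fade-in/fade-out blend across an overlap of width W: the weight
    w/W for i2 grows with the lateral distance w from the overlap's
    left edge, while (W - w)/W for i1 shrinks correspondingly."""
    big_w = i1.shape[1]
    w = np.arange(big_w, dtype=np.float64)
    alpha = w / max(big_w - 1, 1)          # 0 at left edge, 1 at right
    return (1.0 - alpha)[None, :] * i1 + alpha[None, :] * i2
```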
In summary, addressing the characteristics of UAV imagery, the present invention proposes a stitching method based on UAV POS information: the geographic coordinates of the four image corners are first computed from the POS information, the SURF features of the image overlap are then extracted, and the positional relationship of the same feature points in adjacent images is used to correct the geographic coordinates, completing registration; finally an adaptive fade-in/fade-out blending algorithm smooths the transition between images, producing a panoramic image with good visual quality. The stitched image carries geographic coordinates, giving it practical value.

Claims (2)

  1. An image stitching method combining UAV POS information with image SURF features, characterised in that the method comprises the following steps:
    S1: Aerial image preprocessing:
    While the UAV performs an aerial photography task, the aircraft's attitude, altitude, and speed and the Earth's rotation cause the image to be squeezed, distorted, stretched, and shifted relative to the true ground target position; the aerial images are therefore geometrically corrected to obtain remote sensing images referred to the same projection plane;
    S2: Aerial image overlap computation:
    The overlapping region of adjacent images is computed from the POS information: the exposure times of the sequence images and the actual flight distance are obtained from the POS data, and the speed is decomposed along the positive axes according to the aircraft's current heading; computing the overlap of adjacent images reduces the computational load of stitching, and the smaller the search area for feature points, the smaller the probability of mismatches, improving detection efficiency;
    S3: Computing the images' geographic coordinates:
    After geometric correction, the latitude and longitude of the image centre are computed from the aircraft's current attitude angles, and the ground resolution is computed from the camera's interior and exterior orientation elements; from these the latitude and longitude of the four image corners are computed and transformed into a space rectangular coordinate system, so the images can be projected in that system and their relative positions obtained;
    S4: Extracting the images' SURF features:
    SURF feature extraction comprises building the Hessian matrix, generating all interest points, constructing the scale space, locating the feature points, assigning the principal orientations, generating the feature descriptors, and matching the feature points; these steps yield the SURF matching point pairs of adjacent images;
    S5: Correcting the geographic coordinates with the extracted SURF point pairs:
    Since the geographic coordinates of every image are known, the coordinates obtained for a pair of corresponding points should describe the same point; if the computed values differ, the subsequent images' geographic coordinates are corrected in turn, taking the first image as the reference, registering the positions and improving on the accuracy of placement by geographic coordinates alone;
    S6: Image fusion strategy:
    For the coordinate-corrected images there is still a visible seam between images; a strategy is used to handle the colour transition across the stitching gap, and a fade-in/fade-out adaptive weighted fusion smooths the transition across the gap between images.
  2. The image stitching method combining UAV POS information with image SURF features according to claim 1, characterised in that it is divided into six main links: aerial image preprocessing, aerial image overlap computation, image geographic coordinate computation, image SURF feature extraction, geographic coordinate correction using adjacent-image SURF matching point pairs, and image fusion; correcting the image geographic coordinates with adjacent-image SURF matching point pairs is the innovation of this method;
    S1 Aerial image preprocessing:
    During geometric correction the following coordinate systems are established in turn: the Earth coordinate system, the geographic coordinate system, the body coordinate system, the photoelectric platform coordinate system, and the camera coordinate system; the conventional correction proceeds as follows: let the transformation matrix from the photoelectric platform coordinate system to the digital camera coordinate system be R1, from the body coordinate system to the photoelectric platform coordinate system R2, from the geographic coordinate system to the body coordinate system R3, and from the Earth coordinate system to the space rectangular coordinate system R4; then:
    $$\begin{cases} R1 = R(\beta_P)\,R(\alpha_P) \\ R2 = R(\chi)\,R(\omega) \\ R3 = R_x(\beta)\,R_z(\theta)\,R_y(90-\lambda)\,R_x(90) \\ R4 = R_x^T(-(90-B))\,R_z^T(90+L) \end{cases} \qquad (1)$$
    where R_x(λ), R_y(λ), R_z(λ) are the rotation matrices about the x-, y-, and z-axes respectively; α_P is the pixel scan angle, β_P the pixel deflection angle, χ the azimuth angle, ω the platform roll angle, β the aircraft roll angle, θ the heading angle, B the latitude, L the longitude, and H the altitude. The transformation from the camera coordinate system (Xc, Yc, Zc) to the geodetic coordinate system (Xe, Ye, Ze) is therefore expressed by the following relation:
    $$\begin{bmatrix} X_e \\ Y_e \\ Z_e \end{bmatrix} = R_1^{T}\,R_2^{T}\,R_3^{T}\,R_4^{T} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{2}$$
    By formula (2), conventional geometric correction is computed and resampled pixel by pixel. This method targets single-strip stitching and assumes consistent image resolution; since the platform roll angle and pitch angle are small when the UAV shoots vertically downward, the geometric correction corrects only the heading angle in the horizontal direction and the platform azimuth. The correction model is as follows:
    $$\begin{bmatrix} X_e \\ Y_e \\ Z_e \end{bmatrix} = R^{T}(\chi+\theta) \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} \tag{3}$$
    where χ is the azimuth angle, θ the heading angle, and R^T(χ + θ) the transformation matrix. After the model transformation is complete, the gray value of each pixel under the new coordinate system is computed by bilinear interpolation resampling, generating the new image matrix and completing the geometric correction;
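As an illustrative sketch (not the patent's own code), the simplified model of formula (3) reduces to a single rotation about the vertical axis by the combined angle χ + θ, here assumed to be given in degrees:

```python
import numpy as np

def rotation_z(angle_deg):
    """Rotation matrix about the z-axis (angle in degrees)."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def simplified_correction(points_c, chi_deg, theta_deg):
    """Apply the simplified model of formula (3): Xe = R^T(chi + theta) Xc.

    points_c: (N, 3) array of camera-frame coordinates.
    """
    R = rotation_z(chi_deg + theta_deg)
    # row-vector form: p @ R equals (R.T @ p) for each row p
    return points_c @ R
```

The bilinear resampling step that follows the model conversion is independent of this rotation and is omitted here.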
    S2: Calculation of the aerial-image overlap region
    1) Assume the UAV's course deviates from due north by an angle θ and its speed is V; resolve the velocity into components along due north and due east, denoted V1 and V2;
    2) The images at two adjacent exposure points, numbered Pic1 and Pic2 respectively, are captured consecutively with interval time t. The center-point longitude and latitude of the two pictures are obtained from the POS track file and denoted LatA, LonA and LatB, LonB respectively; then:
    $$\begin{cases} C = \sin(LatA)\,\sin(LatB)\,\cos(LonA - LonB) + \cos(LatA)\,\cos(LatB) \\ L = R \cdot \arccos(C) \cdot \pi/180 \end{cases} \tag{4}$$
    where R = 6371.004 km, pi is taken as 3.1415926, C is the angle in longitude and latitude between the center points of the two pictures, and L is the computed actual distance between the two points;
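Formula (4) can be transcribed directly as follows (the function name and degree-valued inputs are assumptions; note that the standard spherical law of cosines would instead attach cos(LonA − LonB) to the cosine product, so the grouping below follows the text as printed):

```python
import math

EARTH_RADIUS_KM = 6371.004  # mean Earth radius used in the text

def center_distance_km(lat_a, lon_a, lat_b, lon_b):
    """Distance between two image center points per formula (4).

    Inputs are in degrees. The grouping of terms is transcribed as
    printed in the text.
    """
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_a - lon_b)
    c = math.sin(la) * math.sin(lb) * math.cos(dlon) + math.cos(la) * math.cos(lb)
    c = max(-1.0, min(1.0, c))  # guard against rounding outside [-1, 1]
    return EARTH_RADIUS_KM * math.acos(c)
```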
    3) Taking Pic1 as the reference, the overlap region between Pic2 and Pic1 is expressed as a rectangular area S. The overlap region is expanded into a regular region, and with the overlap-region vertices projected onto the x and y directions it is expressed as:
    S = (W - P1)(H - P2)    (5)
    where W and H are the width and height of the rectangular area, and P1 and P2 are the lengths marked in the figure;
    Calculating the overlap region reduces the computation of stitching; the smaller the search area for feature points, the smaller the probability of possible mismatches, which improves detection efficiency;
    S3: Calculation of the geographic coordinates of the image: For geometrically corrected aerial images, every pixel coordinate has a corresponding geographic coordinate. The POS information carried during UAV flight records the longitude and latitude of the aircraft at the current exposure image together with the aircraft's altitude, roll angle, pitch angle, and heading angle, which are used to compute the geographic coordinates. The computation is divided into the following steps:
    1) Calculate the ground resolution; the calculation formula is as follows:
    $$GSD = \frac{H \cdot P}{f} \tag{6}$$
    where GSD is the ground resolution in m; f is the lens focal length in mm; P is the image-sensor pixel size in mm; and H is the corresponding UAV flying height in m;
    2) Calculate the true ground distance of the image diagonal:
    $$L = \sqrt{w^2 + h^2} \cdot GSD \tag{7}$$
    where w and h are the image width and height in pixels, and L is the actual distance along the image diagonal;
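Formulas (6) and (7) can be sketched as follows (function names are illustrative):

```python
import math

def ground_resolution_m(flight_height_m, pixel_size_mm, focal_length_mm):
    """Formula (6): GSD = H * P / f, with H in m and P, f in mm -> GSD in m."""
    return flight_height_m * pixel_size_mm / focal_length_mm

def diagonal_ground_distance_m(width_px, height_px, gsd_m):
    """Formula (7): true ground length of the image diagonal."""
    return math.hypot(width_px, height_px) * gsd_m
```

For example, a 35 mm lens with 0.0052 mm pixels at 500 m altitude gives a GSD of roughly 0.074 m.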
    3) Calculate the geographic coordinates of the four image corner points. From the longitude and latitude of the image center and the distance and direction angle of another point relative to the center, the four corner geographic coordinates are obtained on a circle centered at the image midpoint with radius L/2. The specific formulas are:
    $$Lon_i = \left(\frac{L\sin(\theta_i)}{2\left(R_j + (R_c - R_j)\,(90 - Lat_a)/90\right)\cos(Lat_a)} + Lon_a\right)\cdot\frac{180}{\pi} \tag{8}$$
    $$Lat_i = \left(\frac{L\cos(\theta_i)}{2\left(R_j + (R_c - R_j)\,(90 - Lat_a)/90\right)} + Lat_a\right)\cdot\frac{180}{\pi} \tag{9}$$
    where θ_i ∈ (0, 2π); Lon_a, Lat_a are the longitude and latitude of the image center; R_c is the equatorial radius, taken as 6378137 m; R_j is the polar radius, taken as 6356725 m; and pi is taken as 3.1415926;
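A direct transcription of formulas (8) and (9) might look as follows (the function name is illustrative; the center coordinates are taken in degrees here, whereas the printed formulas keep them in radians and convert at the end, which is equivalent):

```python
import math

R_EQUATOR_M = 6378137.0   # equatorial radius (Rc in the text)
R_POLAR_M = 6356725.0     # polar radius (Rj in the text)

def corner_geo_coordinate(lon_a_deg, lat_a_deg, diag_m, theta_rad):
    """Corner longitude/latitude per formulas (8)-(9), from the image
    center position, diagonal length L (m), and direction angle theta.
    """
    lat_a = math.radians(lat_a_deg)
    # Earth radius interpolated between polar and equatorial with latitude,
    # as in the denominator of formulas (8)-(9)
    ec = R_POLAR_M + (R_EQUATOR_M - R_POLAR_M) * (90.0 - lat_a_deg) / 90.0
    ed = ec * math.cos(lat_a)
    lon_i = lon_a_deg + math.degrees(diag_m * math.sin(theta_rad) / (2.0 * ed))
    lat_i = lat_a_deg + math.degrees(diag_m * math.cos(theta_rad) / (2.0 * ec))
    return lon_i, lat_i
```

Evaluating it at the four diagonal direction angles θ_i yields the four corner coordinates.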
    4) The geographic coordinates are transformed into the space rectangular coordinate system; the conversion formula between the coordinate systems is as follows:
    $$\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} = \begin{bmatrix} (N+H)\cos(Lon)\cos(Lat) \\ (N+H)\cos(Lon)\sin(Lat) \\ \left[N(1-e^2)+H\right]\sin(Lon) \end{bmatrix} \tag{10}$$
    where N is the radius of curvature and Lon, Lat, H are the longitude, latitude, and height of any point on the image. Transforming the image into the space coordinate system yields the complete image coordinates;
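Formula (10) corresponds to the standard geodetic-to-space-rectangular (ECEF) conversion; a sketch of that conversion in its conventional form is below (in the conventional statement, latitude carries the sine/cosine factors as shown; the WGS-84 constants are an assumption, since the text does not name an ellipsoid):

```python
import math

WGS84_A = 6378137.0          # semi-major axis (m) - assumed ellipsoid
WGS84_E2 = 6.69437999014e-3  # first eccentricity squared

def geodetic_to_space(lon_deg, lat_deg, h_m):
    """Standard geodetic -> space rectangular (ECEF) conversion."""
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    sin_lat = math.sin(lat)
    # prime-vertical radius of curvature N
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * sin_lat * sin_lat)
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h_m) * sin_lat
    return x, y, z
```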
    S4: SURF feature extraction of the image:
    SURF detects feature points with an approximate Hessian matrix and performs the convolution with an integral image, reducing computation and thereby increasing feature-extraction speed. The SURF descriptor consists of two parts: detecting feature points and computing the feature descriptor. The specific implementation is divided into the following steps:
    1) Build the Hessian matrix and construct the scale space;
    Assume a point X(x, y) on the image; the matrix M at scale σ is defined as:
    $$M(x,\sigma) = \begin{bmatrix} L_{xx}(x,\sigma) & L_{xy}(x,\sigma) \\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma) \end{bmatrix} \tag{11}$$
    where Lxx is the result of convolving the second-order Gaussian derivative with the image at X, Lxy and Lyy are defined analogously, and σ is the spatial scale. When the discriminant of the Hessian matrix attains a local maximum, the position of a keypoint is considered located;
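A plain-numpy stand-in for the determinant-of-Hessian response of formula (11) (SURF proper uses integral-image box filters; Gaussian smoothing with discrete second differences is substituted here for brevity, and the 0.9 weight on the mixed term follows the usual SURF balancing factor):

```python
import numpy as np

def hessian_response(img, sigma=1.2):
    """Approximate determinant of the Hessian of formula (11)."""
    # separable Gaussian smoothing
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    smooth = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    smooth = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, smooth)
    # second-order partial derivatives by finite differences
    dy, dx = np.gradient(smooth)
    dyy, _ = np.gradient(dy)
    dxy, dxx = np.gradient(dx)
    # det(M) with the mixed term weighted ~0.9, as in SURF's box-filter balance
    return dxx * dyy - (0.9 * dxy) ** 2
```

A blob-like structure produces a strong positive response at its center, which is what the keypoint localization below thresholds on.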
    2) Detect feature points
    In the obtained scale space, each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in the two-dimensional image space and the scale space to locate keypoints preliminarily; keypoints with weak responses and mislocated keypoints are then filtered out, leaving the final stable feature points;
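The 26-neighbor comparison in step 2 can be sketched as a brute-force local-maximum search over a stack of determinant-of-Hessian responses (function name and array layout are assumptions):

```python
import numpy as np

def local_maxima_26(doh_stack, threshold):
    """Keypoint candidates: points whose response exceeds `threshold`
    and all 26 neighbours in the (scale, y, x) stack.

    doh_stack: (n_scales, H, W) array of determinant-of-Hessian responses.
    """
    s, h, w = doh_stack.shape
    keypoints = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                v = doh_stack[k, i, j]
                if v <= threshold:
                    continue  # weak response: filtered out
                cube = doh_stack[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                # strict maximum of the 3x3x3 neighbourhood (26 neighbours)
                if v >= cube.max() and (cube == v).sum() == 1:
                    keypoints.append((k, i, j))
    return keypoints
```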
    3) Determine the principal direction of the feature point
    With the feature point as the center and 6σ as the radius, the Haar wavelet responses in the X and Y directions are computed, and the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are accumulated, with the Haar wavelet side length set to 4s, so that each sector obtains its own value. The 60-degree sector is then rotated at fixed intervals, and the direction of the sector with the maximum value is finally locked in as the principal direction of the feature point;
    4) Compute the feature descriptor
    A square frame is taken around the feature point and divided into 16 sub-regions; for each sub-region, the horizontal and vertical Haar wavelet responses of 25 pixels are accumulated, where horizontal and vertical are taken relative to the principal direction, so each feature point is described by a 16 * 4 = 64-dimensional vector;
    S5: Correcting geographic coordinates using matched feature points of adjacent images
    Because POS accuracy is low and the geometric correction carries a certain error, the computed coordinate mapping relationship has some error; the geographic coordinates of the images are therefore corrected using the feature-matching algorithm. The detailed process is as follows:
    Assume the geographic coordinate of image 1 is P1(x1, y1) and that of image 2 is P2(x2, y2). Image features are extracted, and after matching, the pixel coordinate positions of the same point in the two images are obtained, yielding the longitude and latitude coordinates (Lon1, Lat1) and (Lon2, Lat2) of the same target point in the two images. Finally, taking the geographic coordinates of the first image as the reference, the offset between the target point in the second image and the first image is obtained by the formula:
    $$\begin{cases} x = Lon2 - Lon1 \\ y = Lat2 - Lat1 \end{cases} \tag{12}$$
    The obtained offset is then used to correct the geographic coordinate P2 of image 2:
    $$\begin{cases} x2 = x2 + x \\ y2 = y2 + y \end{cases} \tag{13}$$
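Formulas (12) and (13) amount to a one-point offset correction, sketched below as printed (formula (13) adds the offset to P2; the function name is illustrative):

```python
def correct_geo_coordinate(p1, p2, match1, match2):
    """Shift image 2's geographic coordinate by the offset between one
    matched point's coordinates in the two images (image 1 is the
    reference). All arguments are (lon, lat) pairs.
    """
    dx = match2[0] - match1[0]   # formula (12): x = Lon2 - Lon1
    dy = match2[1] - match1[1]   # formula (12): y = Lat2 - Lat1
    # formula (13), as printed: the offset is added to P2
    return (p2[0] + dx, p2[1] + dy)
```

In practice one would average the offsets of many matched pairs (after outlier rejection) rather than rely on a single match.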
    The geographic coordinates are then projected into the space rectangular coordinate system, completing accurate image registration and yielding images with corrected coordinates;
    S6: Image fusion strategy
    For the images with corrected coordinates, a certain seam remains between adjacent images, so a strategy is needed to resolve the large color-transition differences across the stitching seam and make the stitched image smoother and more natural. The gradual-in gradual-out adaptive weight fusion process is as follows:
    Assume I1, I2, and I are respectively image 1 and image 2 before fusion and the fused image; image fusion is then completed by the gradual-in gradual-out weighting formula,
    where W is the total width of the overlap between the two pictures, and w is the horizontal distance between the left edge of the overlap region and the current pixel.
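The gradual-in gradual-out weighting described above can be sketched as follows (a single-channel, column-wise version; names are illustrative):

```python
import numpy as np

def fade_blend(i1, i2, overlap_left, overlap_width):
    """Gradual-in gradual-out fusion: image 1's weight falls linearly
    from 1 to 0 as the horizontal distance w from the overlap's left
    edge grows to the overlap width W, while image 2's weight rises
    symmetrically. i1 and i2 are aligned arrays of shape (H, total_width).
    """
    cols = np.arange(i1.shape[1], dtype=float)
    w = cols - overlap_left                               # distance to overlap's left edge
    alpha = np.clip(1.0 - w / overlap_width, 0.0, 1.0)    # weight of image 1
    return alpha[None, :] * i1 + (1.0 - alpha[None, :]) * i2
```

Outside the overlap the clip keeps each side's weight at 1 or 0, so only the overlap band is actually blended.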
CN201711132452.5A 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features Pending CN107808362A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711132452.5A CN107808362A (en) 2017-11-15 2017-11-15 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features

Publications (1)

Publication Number Publication Date
CN107808362A true CN107808362A (en) 2018-03-16

Family

ID=61580456

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965855A (en) * 2018-07-12 2018-12-07 深圳超多维科技有限公司 A kind of stereoprojection method, apparatus, equipment and storage medium
CN109238240A (en) * 2018-10-22 2019-01-18 武汉大势智慧科技有限公司 A kind of unmanned plane oblique photograph method that taking landform into account and its camera chain
CN109325913A (en) * 2018-09-05 2019-02-12 北京悦图遥感科技发展有限公司 Unmanned plane image split-joint method and device
CN109461121A (en) * 2018-11-06 2019-03-12 中国林业科学研究院资源信息研究所 A kind of image co-registration joining method based on parallel algorithms
CN109472752A (en) * 2018-10-30 2019-03-15 北京工业大学 More exposure emerging systems based on Aerial Images
CN109544455A (en) * 2018-11-22 2019-03-29 重庆市勘测院 A kind of overlength high-definition live-action long paper seamless integration method
CN109712071A (en) * 2018-12-14 2019-05-03 电子科技大学 Unmanned plane image mosaic and localization method based on track constraint
CN109782786A (en) * 2019-02-12 2019-05-21 上海戴世智能科技有限公司 A kind of localization method and unmanned plane based on image procossing
CN109858527A (en) * 2019-01-09 2019-06-07 北京全路通信信号研究设计院集团有限公司 A kind of image interfusion method
CN110033411A (en) * 2019-04-12 2019-07-19 哈尔滨工业大学 The efficient joining method of highway construction scene panoramic picture based on unmanned plane
CN110084743A (en) * 2019-01-25 2019-08-02 电子科技大学 Image mosaic and localization method based on more air strips starting track constraint
CN110097498A (en) * 2019-01-25 2019-08-06 电子科技大学 More air strips image mosaics and localization method based on unmanned aerial vehicle flight path constraint
CN110111250A (en) * 2019-04-11 2019-08-09 中国地质大学(武汉) A kind of automatic panorama unmanned plane image split-joint method and device of robust
CN110223233A (en) * 2019-06-11 2019-09-10 西北工业大学 A kind of unmanned plane based on image mosaic builds drawing method
CN110473236A (en) * 2019-06-25 2019-11-19 上海圭目机器人有限公司 A kind of measurement method of the offset position of road face image detection camera
CN110490830A (en) * 2019-08-22 2019-11-22 中国农业科学院农业信息研究所 A kind of agricultural remote sensing method for correcting image and system
CN110596740A (en) * 2019-09-29 2019-12-20 中国矿业大学(北京) Rapid positioning method suitable for geological exploration
CN110738599A (en) * 2019-10-14 2020-01-31 北京百度网讯科技有限公司 Image splicing method and device, electronic equipment and storage medium
CN110910432A (en) * 2019-12-09 2020-03-24 珠海大横琴科技发展有限公司 Remote sensing image matching method and device, electronic equipment and readable storage medium
CN110992261A (en) * 2019-11-15 2020-04-10 国网福建省电力有限公司漳州供电公司 Method for quickly splicing images of unmanned aerial vehicle of power transmission line
CN111383205A (en) * 2020-03-11 2020-07-07 西安应用光学研究所 Image fusion positioning method based on feature points and three-dimensional model
CN111401385A (en) * 2020-03-19 2020-07-10 成都理工大学 Similarity calculation method for image local topological structure feature descriptors
CN111507901A (en) * 2020-04-15 2020-08-07 中国电子科技集团公司第五十四研究所 Aerial image splicing and positioning method based on aerial belt GPS and scale invariant constraint
CN111510684A (en) * 2020-04-24 2020-08-07 安徽比特文化传媒有限公司 VR auxiliary aerial photography method
CN111583110A (en) * 2020-04-24 2020-08-25 华南理工大学 Splicing method of aerial images
CN111583312A (en) * 2019-12-26 2020-08-25 珠海大横琴科技发展有限公司 Method and device for accurately matching remote sensing images, electronic equipment and storage medium
CN111612828A (en) * 2019-12-27 2020-09-01 珠海大横琴科技发展有限公司 Remote sensing image correction matching method and device, electronic equipment and storage medium
CN111640142A (en) * 2019-12-25 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image multi-feature matching method and device and electronic equipment
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111652915A (en) * 2019-12-09 2020-09-11 珠海大横琴科技发展有限公司 Remote sensing image overlapping area calculation method and device and electronic equipment
CN111667405A (en) * 2019-03-06 2020-09-15 西安邮电大学 Image splicing method and device
CN111681190A (en) * 2020-06-18 2020-09-18 深圳天海宸光科技有限公司 High-precision coordinate mapping method for panoramic video
CN112163995A (en) * 2020-09-07 2021-01-01 中山大学 Splicing generation method and device for oversized aerial photographing strip images
CN112184703A (en) * 2020-10-27 2021-01-05 广东技术师范大学 Corn ear period unmanned aerial vehicle image alignment method and system based on space-time backtracking
CN112215304A (en) * 2020-11-05 2021-01-12 珠海大横琴科技发展有限公司 Gray level image matching method and device for geographic image splicing
CN112288634A (en) * 2020-10-29 2021-01-29 江苏理工学院 Splicing method and device for aerial images of multiple unmanned aerial vehicles
CN112414375A (en) * 2020-10-08 2021-02-26 武汉大学 Unmanned aerial vehicle image posture recovery method for flood disaster emergency quick jigsaw making
CN112799430A (en) * 2021-01-13 2021-05-14 东南大学 Programmable unmanned aerial vehicle-based road surface image intelligent acquisition method
CN112837378A (en) * 2021-02-03 2021-05-25 江南大学 Aerial camera attitude external dynamic calibration and mapping method based on multi-unmanned aerial vehicle formation
CN113012047A (en) * 2021-03-26 2021-06-22 广州市赋安电子科技有限公司 Dynamic camera coordinate mapping establishing method and device and readable storage medium
WO2021120389A1 (en) * 2019-12-19 2021-06-24 广州启量信息科技有限公司 Coordinate transformation method and apparatus for aerial panoramic roaming data
CN113099266A (en) * 2021-04-02 2021-07-09 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113191946A (en) * 2021-03-02 2021-07-30 中国人民解放军空军航空大学 Aviation three-step area array image splicing method
CN113570720A (en) * 2021-08-04 2021-10-29 西安万飞控制科技有限公司 Gis technology-based real-time display method and system for unmanned aerial vehicle video petroleum pipeline
CN113706723A (en) * 2021-08-23 2021-11-26 维沃移动通信有限公司 Image processing method and device
CN113706389A (en) * 2021-09-30 2021-11-26 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
CN114202583A (en) * 2021-12-10 2022-03-18 中国科学院空间应用工程与技术中心 Visual positioning method and system for unmanned aerial vehicle
CN114897966A (en) * 2022-04-13 2022-08-12 深圳市路远智能装备有限公司 Visual identification method for large element
CN114999335A (en) * 2022-06-10 2022-09-02 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wide band and one-dimensional envelope peak
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115861050A (en) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 Method, apparatus, device and storage medium for generating panoramic image
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117291980A (en) * 2023-10-09 2023-12-26 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104732482A (en) * 2015-03-30 2015-06-24 中国人民解放军63655部队 Multi-resolution image stitching method based on control points
CN105956058A (en) * 2016-04-27 2016-09-21 东南大学 Method for quickly discovering changed land by adopting unmanned aerial vehicle remote sensing images

Non-Patent Citations (1)

Title
WANG Yazhou et al.: "基于无人机POS信息的拼接方法" [Stitching method based on UAV POS information], 《地理国情监测云平台》 [Geographic National Conditions Monitoring Cloud Platform] *

CN113099266A (en) * 2021-04-02 2021-07-09 云从科技集团股份有限公司 Video fusion method, system, medium and device based on unmanned aerial vehicle POS data
CN113570720B (en) * 2021-08-04 2024-02-27 西安万飞控制科技有限公司 Unmanned plane video oil pipeline real-time display method and system based on gis technology
CN113570720A (en) * 2021-08-04 2021-10-29 西安万飞控制科技有限公司 Gis technology-based real-time display method and system for unmanned aerial vehicle video petroleum pipeline
CN113706723A (en) * 2021-08-23 2021-11-26 维沃移动通信有限公司 Image processing method and device
CN113706389B (en) * 2021-09-30 2023-03-28 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
CN113706389A (en) * 2021-09-30 2021-11-26 中国电子科技集团公司第五十四研究所 Image splicing method based on POS correction
CN114202583A (en) * 2021-12-10 2022-03-18 中国科学院空间应用工程与技术中心 Visual positioning method and system for unmanned aerial vehicle
CN114897966B (en) * 2022-04-13 2024-04-09 深圳市路远智能装备有限公司 Visual identification method for large element
CN114897966A (en) * 2022-04-13 2022-08-12 深圳市路远智能装备有限公司 Visual identification method for large element
CN114999335B (en) * 2022-06-10 2023-08-15 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wideband and one-dimensional envelope peak value
CN114999335A (en) * 2022-06-10 2022-09-02 长春希达电子技术有限公司 LED spliced screen seam repairing method based on ultra-wide band and one-dimensional envelope peak
CN115861050A (en) * 2022-08-29 2023-03-28 如你所视(北京)科技有限公司 Method, apparatus, device and storage medium for generating panoramic image
CN115393196B (en) * 2022-10-25 2023-03-24 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117291980A (en) * 2023-10-09 2023-12-26 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN117291980B (en) * 2023-10-09 2024-03-15 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning

Similar Documents

Publication Publication Date Title
CN107808362A (en) A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN110966991B (en) Single unmanned aerial vehicle image positioning method without control point
CN104408689B (en) Streetscape dough sheet optimization method based on full-view image
US8315477B2 (en) Method and apparatus of taking aerial surveys
CN104835115A (en) Imaging method for aerial camera, and system thereof
CN108765298A (en) Unmanned plane image split-joint method based on three-dimensional reconstruction and system
CN106023086A (en) Aerial photography image and geographical data splicing method based on ORB feature matching
WO2020062434A1 (en) Static calibration method for external parameters of camera
CN113222820B (en) Pose information-assisted aerial remote sensing image stitching method
CN106373088A (en) Quick mosaic method for aviation images with high tilt rate and low overlapping rate
CN108917753A (en) Method is determined based on the position of aircraft of structure from motion
CN105550994A (en) Satellite image based unmanned aerial vehicle image rapid and approximate splicing method
CN109325913A (en) Unmanned plane image split-joint method and device
Liu et al. A new approach to fast mosaic UAV images
Moussa et al. A fast approach for stitching of aerial images
Sai et al. Geometric accuracy assessments of orthophoto production from uav aerial images
JP2023530449A (en) Systems and methods for air and ground alignment
CN112750075A (en) Low-altitude remote sensing image splicing method and device
Zhou et al. Automatic orthorectification and mosaicking of oblique images from a zoom lens aerial camera
KR102389762B1 (en) Space Formation and Recognition System through Digital Twin-linked Augmented Reality Camera and the method thereof
CN117036666B (en) Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
Lee et al. Georegistration of airborne hyperspectral image data
Božić-Štulić et al. Complete model for automatic object detection and localisation on aerial images using convolutional neural networks
Wang et al. Automated mosaicking of UAV images based on SFM method
CN116124094A (en) Multi-target co-location method based on unmanned aerial vehicle reconnaissance image and combined navigation information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180316)