CN102968780B - Remote sensing image stitching method based on human visual characteristics - Google Patents

Remote sensing image stitching method based on human visual characteristics

Info

Publication number
CN102968780B
CN102968780B CN201210510695.9A
Authority
CN
China
Prior art keywords
image
matrix
coordinate
spliced
reference picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210510695.9A
Other languages
Chinese (zh)
Other versions
CN102968780A (en)
Inventor
陈锦伟
冯华君
徐之海
李奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201210510695.9A
Publication of CN102968780A
Application granted
Publication of CN102968780B

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image stitching method based on human visual characteristics, comprising the following steps: 1) extract feature points from the reference image and the image to be stitched, and establish matched feature point pairs; 2) reject wrongly matched pairs and obtain the initial transformation matrix H between the reference image and the image to be stitched; 3) divide the reference image into a region of interest and a normal region, and obtain a weight factor α from the ratio of the average gradient values of the two regions; 4) set an increment matrix P as the optimization increment of the initial transformation matrix H, iteratively solve for P using the coordinates of the matched feature point pairs from step 2) and the weight factor α from step 3), compute the corresponding transformation matrix H from the P obtained in each iteration, and end the iteration upon convergence; 5) apply the final transformation matrix H to projectively transform the image to be stitched, then stitch and blend it with the reference image to complete the stitching.

Description

Remote sensing image stitching method based on human visual characteristics
Technical field
The present invention relates to the field of computer image processing, and in particular to a remote sensing image stitching method based on human visual characteristics.
Background technology
Remote sensing images are images acquired from high-altitude platforms such as satellites, aircraft, or balloons. They play an important role in fields such as disaster warning, resource exploration, military reconnaissance, and ground mapping, and are an important channel through which humans track changes in terrestrial climate, resources, and other information.
When acquiring remote sensing images, one always wants high-resolution images with a sufficiently wide field of view, so that useful information can be extracted quickly. Under existing technical conditions, however, field of view and resolution are two requirements that are difficult to reconcile.
Existing solutions are, on the one hand, to keep increasing sensor resolution and, on the other, to image a large region repeatedly to obtain several high-resolution overlapping images, which subsequent image processing then merges into a single high-resolution image of wide coverage.
Remote sensing image stitching can be divided into two parts: image registration and image fusion.
Image registration is the core step of image stitching; it is the means of obtaining the transformation relation between two overlapping images.
To obtain a more accurate relation between the images, the registration is usually optimized, and there are many optimization methods.
However, previous optimization algorithms treat the image uniformly, without distinguishing the importance of different image content. As a result, they may reach high stitching precision in regions that are useless for the actual application while producing very poor results in important content, which hurts subsequent applications.
Summary of the invention
The object of the present invention is to provide an optimization method that accounts for the differing importance of scenery in remote sensing images and adapts to different application environments, so that the optimized result matches the visual judgment characteristics of the human eye.
To achieve this, the invention provides a remote sensing image stitching method based on human visual characteristics, comprising the following steps:
1) extract feature points from the reference image and the image to be stitched, establish matched feature point pairs, and obtain the initial matching relation between the feature points of the two images;
2) reject wrong matches to obtain the correctly matched feature point pairs, and obtain the initial transformation matrix H between the reference image and the image to be stitched;
3) divide the reference image into a region of interest and a normal region, compute the average gradient values S_interest(G) and S_normal(G) of the two regions, and obtain the weight factor α from their ratio;
4) set an increment matrix P as the optimization increment of the initial transformation matrix H, iteratively solve for P using the coordinates of the matched feature point pairs from step 2) and the weight factor α from step 3), compute the corresponding transformation matrix H from the P obtained in each iteration, and end the iteration upon convergence;
5) apply the final transformation matrix H to projectively transform the image to be stitched, then stitch and blend it with the reference image to complete the stitching.
The feature points of the reference image and the image to be stitched are extracted with the SIFT algorithm, and the initial matching relation between feature points is obtained by the Euclidean distance criterion; the RANSAC method then rejects wrong matches, leaving the correctly matched feature point pairs. A transformation matrix obtained from correct matches makes the two stitched images overlap well, whereas wrong matches produce stitching errors.
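The mismatch-rejection step can be sketched in NumPy. SIFT extraction and Euclidean-distance matching are omitted (in practice a library such as OpenCV provides them); the sketch assumes the matched point pairs are already given as arrays and shows only RANSAC rejection plus a direct-linear-transform estimate of the initial matrix H. Function names, the iteration count, and the 2-pixel threshold are illustrative choices, not from the patent.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src by the DLT method.
    src, dst: (N, 2) arrays of matched point coordinates, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so H[2, 2] == 1, as in the patent

def ransac_homography(src, dst, n_iter=500, thresh=2.0, seed=0):
    """Reject wrong matches: keep the H with the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        ones = np.ones((len(src), 1))
        proj = (H @ np.hstack([src, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate the initial transformation matrix H from all inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

In practice the point coordinates would also be normalised (Hartley conditioning) before the DLT to improve numerical stability; this is left out for brevity.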
Let the homogeneous coordinates of a reference-image feature point in step 2) be X_1 = (x_1, y_1, 1) and those of a feature point of the image to be stitched be X_2 = (x_2, y_2, 1), with the initial transformation matrix H:
H = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix}
where h_1 ~ h_8 are the elements of the transformation matrix H and characterize the projective transformation relation between the reference image and the image to be stitched; the coordinate transformation between the two images corresponds to:
\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
The gradient value G(x, y) of a pixel in the image is given by:
G(x, y) = \sqrt{I_x^2(x, y) + I_y^2(x, y)}
I_x = \frac{\partial I(x, y)}{\partial x}, \quad I_y = \frac{\partial I(x, y)}{\partial y}
where (x, y) is the coordinate of the pixel in the image, I_x and I_y are the image gradients in the x and y directions, and I(x, y) is the gray value of the image.
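The gradient value above can be sketched with finite differences in NumPy. Using np.gradient (central differences in the interior, one-sided at the borders) is an implementation detail, not fixed by the patent; a Sobel operator would serve equally well:

```python
import numpy as np

def gradient_magnitude(I):
    """G(x, y) = sqrt(Ix^2 + Iy^2), with the partial derivatives approximated
    by finite differences; np.gradient returns (d/drow, d/dcol)."""
    Iy, Ix = np.gradient(I.astype(float))
    return np.sqrt(Ix ** 2 + Iy ** 2)
```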
The weight factor α is expressed as:
\beta = \frac{S_{interest}(G)}{S_{normal}(G)}, \qquad \alpha = \frac{1}{\beta + 1}
where S_{interest}(G) is the mean gradient value over all pixels of the region of interest and S_{normal}(G) is the mean gradient value over all pixels of the normal region.
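The weight factor can then be computed directly from a gradient image. The Boolean mask marking the region of interest is an illustrative interface; the patent only fixes the two region means:

```python
import numpy as np

def weight_factor(G, interest_mask):
    """alpha = 1 / (beta + 1) with beta = S_interest(G) / S_normal(G).

    G: gradient-value image; interest_mask: Boolean array, True where the
    pixel belongs to the region of interest."""
    beta = G[interest_mask].mean() / G[~interest_mask].mean()
    return 1.0 / (beta + 1.0)
```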
The increment matrix P is expressed as:
P = \begin{pmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & 1 \end{pmatrix}
where the elements p_1 ~ p_8 of the increment matrix P correspond to small increments of the respective elements of the transformation matrix H.
An intermediate coordinate variable X'_2 = (x'_2, y'_2, 1) is introduced to simplify the mathematical expressions:
\begin{pmatrix} x'_2 \\ y'_2 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
The increment matrix P characterizes the small increment applied to the projective transformation matrix H in each iteration; its role is to optimize H step by step toward the solution:
(I + P) H \rightarrow H
where I is the identity matrix. Using the intermediate coordinate variable X'_2 = (x'_2, y'_2, 1) and the relation
(I + P) H X_2 = (I + P) X'_2 = X_3 = (x_3, y_3, 1)
the computation is simplified; X_3 = (x_3, y_3, 1) represents the coordinates of a feature point of the image to be stitched after projective transformation.
Introducing the weight factor α, the image is divided into two parts:
E(p) = \sum_{n \in S_1} \alpha \, \lVert X_1(n) - X_3(n) \rVert + \sum_{m \in S_2} (1 - \alpha) \, \lVert X_1(m) - X_3(m) \rVert
where X_1(n) and X_3(n) are respectively the coordinates of the n-th reference-image feature point and of the n-th feature point of the image to be stitched after projective transformation, X_1(m) and X_3(m) are the corresponding coordinates for the m-th feature points, α is the weight factor, and S_1 and S_2 denote the region of interest and the normal region respectively.
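Evaluating the weighted objective E(p) for a given transformation matrix H can be sketched as follows. The Boolean `interest` array marking which matched pairs fall in S_1 is an illustrative interface:

```python
import numpy as np

def weighted_cost(H, X1, X2, interest, alpha):
    """Evaluate E(p) for a given H: each matched pair contributes its
    Euclidean reprojection distance, weighted alpha in the region of
    interest S1 and (1 - alpha) in the normal region S2.

    X1, X2: (N, 2) matched reference / to-be-stitched points."""
    ones = np.ones((len(X2), 1))
    X3 = (H @ np.hstack([X2, ones]).T).T
    X3 = X3[:, :2] / X3[:, 2:3]                 # projected points
    d = np.linalg.norm(X1 - X3, axis=1)
    w = np.where(interest, alpha, 1.0 - alpha)
    return float((w * d).sum())
```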
The increment matrix P obtained in each iteration yields the transformation matrix H of the next iteration according to (I + P) H → H; the iteration ends upon convergence or when the iteration limit is exceeded, giving the final transformation matrix H.
Substituting the final transformation matrix H into the coordinate transformation relation between the reference image and the image to be stitched:
\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
the image to be stitched is projectively transformed and the stitched image is obtained.
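Applying the final H to the image to be stitched amounts to an inverse-mapping projective warp. The sketch below uses nearest-neighbour sampling and leaves out output-canvas sizing and blending, so it is a minimal stand-in for the full stitching-and-fusion step, not the patent's complete procedure:

```python
import numpy as np

def warp_to_reference(img2, H, out_shape):
    """Warp the image to be stitched into the reference frame under x1 ~ H x2.

    Inverse mapping: every pixel of the output (reference) grid is mapped
    back through H^-1 and sampled from img2; pixels that fall outside img2
    stay zero."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    ref = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # x1, homogeneous
    src = np.linalg.inv(H) @ ref                              # x2 = H^-1 x1
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < img2.shape[1]) & (sy >= 0) & (sy < img2.shape[0])
    out = np.zeros((h, w), dtype=img2.dtype)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img2[sy[ok], sx[ok]]
    return out
```

In a real stitcher the canvas is sized from the warped corners of both images, bilinear interpolation replaces rounding, and the overlap region is blended, which corresponds to the fusion part of step 5).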
The present invention has the following advantages:
In the optimization stage of remote sensing image registration, the present invention introduces the idea of weighted optimization. Different weights are assigned according to the importance of the remote sensing image content, so that the optimized registration result better matches human visual judgment. In important regions the registration error is kept as small as possible and the registration accuracy reaches a higher level, so the final stitched image can serve a wider range of practical needs.
Accompanying drawing explanation
Fig. 1 is the flowchart of the remote sensing image stitching method based on human visual characteristics of the present invention;
Fig. 2a, Fig. 2b, Fig. 2c and Fig. 2d show the process of establishing the feature point matching relation between images;
Fig. 2a is the reference image;
Fig. 2b is the image to be stitched;
Fig. 2c shows the initial matching relation between the feature points of the two images;
Fig. 2d shows the correct matches obtained after applying the RANSAC method;
Fig. 3a is the stitching result after ordinary Levenberg-Marquardt optimization;
Fig. 3b and Fig. 3c are the corresponding marked regions in Fig. 3a;
Fig. 4a is the stitching result after optimization by the stitching method of the present invention;
Fig. 4b and Fig. 4c are the corresponding marked regions in Fig. 4a;
Fig. 5a is the simulated reference image;
Fig. 5b is the simulated image to be stitched;
Fig. 6a is the stitching result after ordinary LM optimization;
Fig. 6b, Fig. 6c and Fig. 6d are the corresponding marked regions in Fig. 6a;
Fig. 7a is the stitching result after optimization by the stitching method of the present invention;
Fig. 7b, Fig. 7c and Fig. 7d are the corresponding marked regions in Fig. 7a;
Fig. 8a is the reference remote sensing image;
Fig. 8b is the remote sensing image to be stitched;
Fig. 8c is the stitching result after optimization by the stitching method of the present invention.
Embodiment
As shown in Fig. 1, the present invention is a remote sensing image stitching method based on human visual characteristics; specific embodiments are described through the following two concrete examples.
Example one: the overall stitching process is illustrated with Fig. 2a, Fig. 2b, Fig. 2c and Fig. 2d.
(1) Fig. 2a and Fig. 2b are the reference image and the image to be stitched respectively. SIFT is used to extract the feature points of both images, and Fig. 2c shows the resulting initial matching relation between the feature points of the two images.
(2) The transformation between the two images is characterized by a projective transformation model, i.e. a transformation matrix containing 8 parameter variables. RANSAC is then used to reject wrong matches, obtaining the correctly matched feature point pairs and, at the same time, the initial transformation matrix H. Let the homogeneous coordinates of a reference-image feature point be X_1 = (x_1, y_1, 1) and those of a feature point of the image to be stitched be X_2 = (x_2, y_2, 1); H is the 3x3 projective transformation matrix:
H = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix}
where h_1 ~ h_8 are the elements of the projective transformation matrix H and characterize the projective transformation relation between the reference image and the image to be stitched; the coordinate transformation corresponds to:
\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
(3) A weighted optimization algorithm that considers the importance of the content is adopted. The importance of remote sensing image content can be quantified in two ways: subjectively and objectively. In the subjective method, the user manually assigns weights to the various contents of the image; for example, airport runways and military targets receive higher weights, and weight values lie in the interval [0, 1]. For an automated optimization algorithm, the gradient information of the image is used to characterize the importance of the information in a region. The gradient of an image pixel is determined by:
G(x, y) = \sqrt{I_x^2(x, y) + I_y^2(x, y)}
I_x = \frac{\partial I(x, y)}{\partial x}, \quad I_y = \frac{\partial I(x, y)}{\partial y}
where (x, y) is the coordinate of the pixel in the image, I_x and I_y are the image gradients in the x and y directions, and I(x, y) is the gray value of the image.
The weight factor α is expressed as:
\beta = \frac{S_{interest}(G)}{S_{normal}(G)}, \qquad \alpha = \frac{1}{\beta + 1}
where S_{interest}(G) is the mean gradient value over all pixels of the region of interest and S_{normal}(G) is the mean gradient value over all pixels of the normal region.
(4) The increment matrix P is set as the optimization increment of the initial transformation matrix H, and P is solved iteratively using the weight factor α and the coordinates of the matched feature point pairs. The increment matrix P obtained in each iteration and the current transformation matrix H together form the transformation matrix H of the next iteration, until the iteration ends upon convergence.
The increment matrix P is expressed as:
P = \begin{pmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & 1 \end{pmatrix}
An intermediate coordinate variable X'_2 = (x'_2, y'_2, 1) is introduced to simplify the mathematical expressions:
\begin{pmatrix} x'_2 \\ y'_2 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
The increment matrix P characterizes the small increment applied to the projective transformation matrix H in each iteration; its role is to optimize H step by step toward the solution:
(I + P) H \rightarrow H
where I is the identity matrix; using the intermediate coordinate variable X'_2, the relation (I + P) H X_2 = (I + P) X'_2 = X_3 = (x_3, y_3, 1) simplifies the computation.
To obtain the increment matrix P, the following objective is minimized:
E(p) = \sum_{n} \lVert X_1(n) - X_3(n) \rVert
where X_1(n) and X_3(n) are respectively the coordinates of the n-th reference-image feature point and of the n-th feature point of the image to be stitched after projective transformation.
Introducing the weight factor α, the image is divided into two parts:
E(p) = \sum_{n \in S_1} \alpha \, \lVert X_1(n) - X_3(n) \rVert + \sum_{m \in S_2} (1 - \alpha) \, \lVert X_1(m) - X_3(m) \rVert
where X_1(n) and X_3(n) are respectively the coordinates of the n-th reference-image feature point and of the n-th feature point of the image to be stitched after projective transformation, X_1(m) and X_3(m) are the corresponding coordinates for the m-th feature points, α is the weight factor, and S_1 and S_2 denote the region of interest and the normal region respectively.
The increment matrix P is solved with the Levenberg-Marquardt algorithm. The P obtained in each iteration yields the transformation matrix H of the next iteration according to (I + P) H → H; the iteration ends upon convergence or when the iteration limit is exceeded. The convergence standard is ε less than 1 pixel:
\varepsilon = \frac{1}{N} \sum_{n \in S_1} \sqrt{(x_3^n - x_1^n)^2 + (y_3^n - y_1^n)^2}
where (x_1^n, y_1^n) are the coordinates of the reference-image feature points in the region of interest, (x_3^n, y_3^n) are the coordinates of the feature points of the image to be stitched after transformation, and N is the number of feature points in the region of interest. Preferably, the maximum number of iterations is set to 100; in some cases the two acquired images are complicated enough that the average error in the region of interest cannot be reduced below one pixel, and the iteration count alone then serves as the termination condition.
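The iterative solution of the increment matrix P can be sketched as a damped Gauss-Newton loop in the spirit of Levenberg-Marquardt, with a numeric Jacobian. Holding the bottom-right entry of P at zero and renormalising H after each update are implementation choices not fixed by the text; the damping constant and iteration cap are likewise illustrative:

```python
import numpy as np

def project(H, X):
    """Apply homography H to (N, 2) points X, returning (N, 2) coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    Y = (H @ Xh.T).T
    return Y[:, :2] / Y[:, 2:3]

def refine_homography(H0, X2, X1, weights, max_iter=100, damping=1e-3):
    """Iteratively solve for the increment matrix P and update H <- (I + P) H.

    X2: feature points of the image to be stitched, X1: matched reference
    points, weights: per-pair weight (alpha or 1 - alpha). Stops when the
    mean reprojection error drops below 1 pixel or max_iter is reached."""
    H = H0.astype(float).copy()
    w = np.sqrt(np.asarray(weights, dtype=float))

    def residuals(H):
        return (w[:, None] * (project(H, X2) - X1)).ravel()

    for _ in range(max_iter):
        err = np.linalg.norm(project(H, X2) - X1, axis=1).mean()
        if err < 1.0:               # the epsilon < 1 pixel convergence standard
            break
        r = residuals(H)
        J = np.zeros((len(r), 8))
        eps = 1e-6
        for k in range(8):          # numeric Jacobian w.r.t. p1..p8 at P = 0
            P = np.zeros(9)
            P[k] = eps
            Hk = (np.eye(3) + P.reshape(3, 3)) @ H
            J[:, k] = (residuals(Hk) - r) / eps
        A = J.T @ J + damping * np.eye(8)
        p = np.linalg.solve(A, -J.T @ r)
        P = np.zeros(9)
        P[:8] = p                   # bottom-right entry of P held at 0
        H = (np.eye(3) + P.reshape(3, 3)) @ H
        H = H / H[2, 2]             # keep the H[2, 2] = 1 normalisation
    return H
```

A production Levenberg-Marquardt implementation would additionally adapt the damping term between iterations depending on whether the cost decreased.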
Using the increment matrix P and the transformation matrix H, the final transformation matrix H is obtained through the iterative update (I + P) H → H; then, according to the projective transformation relation between the reference image and the image to be stitched:
\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
the image to be stitched is projectively transformed, giving the final optimized stitched image.
Comparing the stitching result of the optimization method of the present invention with the result after ordinary LM optimization: Fig. 3a is the stitching result after ordinary Levenberg-Marquardt optimization, and Fig. 4a is the stitching result of the optimization method of the present invention. Fig. 3b and Fig. 3c are enlargements of the corresponding regions in Fig. 3a, where Fig. 3b is taken as the normal region and Fig. 3c as the region of interest; the optimization result in Fig. 3b is comparatively good, while that in Fig. 3c is comparatively poor. In Fig. 4b and Fig. 4c, by contrast, the region of interest (Fig. 4c) is optimized well, while the normal region (Fig. 4b) is optimized relatively less well.
Example two: the human-vision characteristics of the method of the present invention are illustrated by comparing the optimized stitching results of simulated images and remote sensing images.
(1) Fig. 5a and Fig. 5b are the input reference image and image to be stitched.
(2) Fig. 6a is the optimized stitching result without weighting. Fig. 6b and Fig. 6c are taken as the defined regions of interest; their stitched results show large errors. Fig. 6d is taken as the defined normal region.
(3) Fig. 7a is the stitching result with the optimization of the present invention. Compared with Fig. 6b and Fig. 6c, the optimization error in Fig. 7b and Fig. 7c is very small, essentially eliminating the visible difference, while Fig. 7d shows essentially no large visual difference relative to Fig. 6d.
(4) Fig. 8a and Fig. 8b are the reference remote sensing image and the remote sensing image to be stitched, and Fig. 8c is the stitching result of the present invention, in which the feature-point optimization error in the region of interest reaches sub-pixel level.

Claims (8)

1. A remote sensing image stitching method based on human visual characteristics, characterized in that it comprises the following steps:
1) extracting feature points from the reference image and the image to be stitched, establishing matched feature point pairs, and obtaining the initial matching relation between the feature points of the two images;
2) rejecting wrong matches to obtain correctly matched feature point pairs, and obtaining the initial transformation matrix H between the reference image and the image to be stitched;
3) dividing the reference image into a region of interest and a normal region, computing the average gradient values S_interest(G) and S_normal(G) of the two regions, and obtaining the weight factor α from their ratio;
the weight factor α being expressed as:
\alpha = \frac{1}{\beta + 1}, \qquad \beta = \frac{S_{interest}(G)}{S_{normal}(G)}
4) setting an increment matrix P as the optimization increment of the initial transformation matrix H, iteratively solving for P using the coordinates of the matched feature point pairs from step 2) and the weight factor α from step 3), computing the corresponding transformation matrix H from the P obtained in each iteration, and ending the iteration upon convergence;
the convergence standard being ε less than 1 pixel:
\varepsilon = \frac{1}{N} \sum_{n \in S_1} \sqrt{(x_3^n - x_1^n)^2 + (y_3^n - y_1^n)^2}
where (x_1^n, y_1^n) are the coordinates of the reference-image feature points in the region of interest, (x_3^n, y_3^n) are the coordinates of the feature points of the image to be stitched after transformation, and N is the number of feature points in the region of interest;
introducing the formula:
E(p) = \sum_{n \in S_1} \alpha \, \lVert X_1(n) - X_3(n) \rVert + \sum_{m \in S_2} (1 - \alpha) \, \lVert X_1(m) - X_3(m) \rVert
where X_1(n) and X_3(n) are respectively the coordinates of the n-th reference-image feature point and of the n-th feature point of the image to be stitched after projective transformation, X_1(m) and X_3(m) are the corresponding coordinates for the m-th feature points, α is the weight factor, and S_1, S_2 denote the region of interest and the normal region respectively; and solving for the increment matrix P iteratively with the Levenberg-Marquardt algorithm;
5) applying the final transformation matrix H to projectively transform the image to be stitched, then stitching and blending it with the reference image to complete the stitching.
2. The remote sensing image stitching method based on human visual characteristics as claimed in claim 1, characterized in that, letting the homogeneous coordinates of a reference-image feature point in step 2) be X_1 = (x_1, y_1, 1) and those of a feature point of the image to be stitched be X_2 = (x_2, y_2, 1), the initial transformation matrix H is:
H = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix}
where h_1 ~ h_8 are the elements of the transformation matrix H and characterize the projective transformation relation between the reference image and the image to be stitched.
3. The remote sensing image stitching method based on human visual characteristics as claimed in claim 2, characterized in that the gradient value G(x, y) of a pixel in the image is given by:
G(x, y) = \sqrt{I_x^2(x, y) + I_y^2(x, y)}
I_x = \frac{\partial I(x, y)}{\partial x}, \quad I_y = \frac{\partial I(x, y)}{\partial y}
where (x, y) is the coordinate of the pixel in the image, I_x and I_y are the image gradients in the x and y directions, and I(x, y) is the gray value of the image.
4. The remote sensing image stitching method based on human visual characteristics as claimed in claim 3, characterized in that the increment matrix P is expressed as:
P = \begin{pmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & 1 \end{pmatrix}
where the elements p_1 ~ p_8 of the increment matrix P correspond to small increments of the respective elements of the transformation matrix H.
5. The remote sensing image stitching method based on human visual characteristics as claimed in claim 4, characterized in that the relation between the increment matrix P and the transformation matrix H is:
(I + P) H \rightarrow H
where I is the identity matrix.
6. The remote sensing image stitching method based on human visual characteristics as claimed in claim 5, characterized in that the coordinates of a feature point of the image to be stitched after projective transformation are:
X_3 = (x_3, y_3, 1) = (I + P) H X_2 = (I + P) X'_2
where I is the identity matrix and X'_2 = (x'_2, y'_2, 1) is an intermediate coordinate variable.
7. The remote sensing image stitching method based on human visual characteristics as claimed in claim 6, characterized in that the increment matrix P obtained in each iteration yields the transformation matrix H of the next iteration according to (I + P) H → H, and the iteration is repeated to obtain the final transformation matrix H.
8. The remote sensing image stitching method based on human visual characteristics as claimed in claim 7, characterized in that the coordinate transformation relation between the reference image and the image to be stitched is:
\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} \sim \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}
Substituting the final transformation matrix H into the above formula, the image to be stitched is projectively transformed and the stitched image is obtained.
CN201210510695.9A 2012-09-11 2012-12-03 Remote sensing image stitching method based on human visual characteristics Expired - Fee Related CN102968780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210510695.9A CN102968780B (en) 2012-09-11 2012-12-03 Remote sensing image stitching method based on human visual characteristics

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN 201210333179 CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic
CN201210333179.3 2012-09-11
CN201210510695.9A CN102968780B (en) 2012-09-11 2012-12-03 Remote sensing image stitching method based on human visual characteristics

Publications (2)

Publication Number Publication Date
CN102968780A CN102968780A (en) 2013-03-13
CN102968780B true CN102968780B (en) 2015-11-25

Family

ID=47446154

Family Applications (2)

Application Number Title Priority Date Filing Date
CN 201210333179 Pending CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic
CN201210510695.9A Expired - Fee Related CN102968780B (en) 2012-09-11 2012-12-03 Remote sensing image stitching method based on human visual characteristics

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN 201210333179 Pending CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic

Country Status (1)

Country Link
CN (2) CN102867298A (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514580B (en) * 2013-09-26 2016-06-08 香港应用科技研究院有限公司 For obtaining the method and system of the super-resolution image that visual experience optimizes
US9734599B2 (en) * 2014-10-08 2017-08-15 Microsoft Technology Licensing, Llc Cross-level image blending
CN104599258B (en) * 2014-12-23 2017-09-08 大连理工大学 A kind of image split-joint method based on anisotropic character descriptor
CN105279735B (en) * 2015-11-20 2018-08-21 沈阳东软医疗系统有限公司 A kind of fusion method of image mosaic, device and equipment
CN105931185A (en) * 2016-04-20 2016-09-07 中国矿业大学 Automatic splicing method of multiple view angle image
CN105915804A (en) * 2016-06-16 2016-08-31 恒业智能信息技术(深圳)有限公司 Video stitching method and system
CN107067368B (en) * 2017-01-20 2019-11-26 武汉大学 Streetscape image splicing method and system based on deformation of image
CN107833207B (en) * 2017-10-25 2020-04-03 北京大学 Method for detecting error matching between images based on augmented homogeneous coordinate matrix
CN109995993A (en) * 2018-01-02 2019-07-09 广州亿航智能技术有限公司 Aircraft and its filming control method, device and terminal system
CN109829853B (en) * 2019-01-18 2022-12-23 电子科技大学 Unmanned aerial vehicle aerial image splicing method
CN110363179B (en) * 2019-07-23 2022-03-25 联想(北京)有限公司 Map acquisition method, map acquisition device, electronic equipment and storage medium
CN112070775B (en) * 2020-09-29 2021-11-09 成都星时代宇航科技有限公司 Remote sensing image optimization processing method and device, electronic equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
TW200937946A (en) * 2008-02-18 2009-09-01 Univ Nat Taiwan Full-frame video stabilization with a polyline-fitted camcorder path


Non-Patent Citations (2)

Title
Constructing Image Panoramas using Dual-Homography Warping; Junhong Gao, Seon Joo Kim, Michael S. Brown; Proceedings of CVPR; 25 Jun 2011; section 3.1 *
Creating Full View Panoramic Image Mosaics and Environment Maps; Richard Szeliski, Heung-Yeung Shum; Computer Graphics (SIGGRAPH '97); Aug 1997; section 3 *

Also Published As

Publication number Publication date
CN102968780A (en) 2013-03-13
CN102867298A (en) 2013-01-09

Similar Documents

Publication Publication Date Title
CN102968780B (en) Remote sensing image stitching method based on human visual characteristics
Liu et al. Infrared and visible image fusion method based on saliency detection in sparse domain
Li et al. Image registration and fusion of visible and infrared integrated camera for medium-altitude unmanned aerial vehicle remote sensing
CN103914678B (en) Abandoned land remote sensing recognition method based on texture and vegetation indexes
Dufour et al. Shape, displacement and mechanical properties from isogeometric multiview stereocorrelation
CN102622759B (en) A kind of combination gray scale and the medical image registration method of geological information
CN101814192A (en) Method for rebuilding real 3D face
CN103177458B (en) A kind of visible remote sensing image region of interest area detecting method based on frequency-domain analysis
CN105975912B (en) Hyperspectral image nonlinear solution mixing method neural network based
CN103575395B (en) A kind of outfield absolute radiation calibration method and system
CN103914847A (en) SAR image registration method based on phase congruency and SIFT
Daffara et al. A cost-effective system for aerial 3D thermography of buildings
US8855439B2 (en) Method for determining a localization error in a georeferenced image and related device
CN104008543A (en) Image fusion quality evaluation method
Saveliev et al. An approach to the automatic construction of a road accident scheme using UAV and deep learning methods
Wang et al. 3D-CALI: Automatic calibration for camera and LiDAR using 3D checkerboard
Liu et al. Fusion of binocular vision, 2D lidar and IMU for outdoor localization and indoor planar mapping
CN102034236A (en) Multi-camera layered calibration method based on one-dimensional object
CN111413296A (en) Aerosol optical thickness remote sensing inversion method considering surface non-Lambert characteristics
Li et al. Estimation of the image interpretability of ZY-3 sensor corrected panchromatic nadir data
Rahmaniar et al. Distance Measurement of Unmanned Aerial Vehicles Using Vision-Based Systems in Unknown Environments
Feng et al. MID: A novel mountainous remote sensing imagery registration dataset assessed by a coarse-to-fine unsupervised cascading network
CN105389819A (en) Robust semi-calibrating down-looking image epipolar rectification method and system
CN102789641A (en) Method for fusing high-spectrum image and infrared image based on graph Laplacian
CN105139370A (en) Double-wave-band camera real time image fusion method based on visible light and near infrared

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151125

Termination date: 20181203
