CN108537833A - A kind of quick joining method of infrared image - Google Patents
- Publication number: CN108537833A (application CN201810350881.8A)
- Authority: CN (China)
- Prior art keywords: image, spliced, adjacent, images, match
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06T2200/32—Indexing scheme for image data processing or generation involving image mosaicing
- G06T2207/20056—Discrete and fast Fourier transform [DFT, FFT]
- G06T2207/20221—Image fusion; image merging
Abstract
The present invention relates to a fast stitching method for infrared images, belonging to the field of infrared image stitching. The method comprises the following steps: read in the library of images to be stitched; parse each image's position in the pattern matrix from the photographing information, and stitch the images in that order; during image registration, treat adjacent images as related by a weak perspective, so that only the relative translation between them need be considered, and apply a multi-stage matching strategy: coarse matching with a Fourier registration method, then fine matching with a template matching method, and, if both matching methods fail, feature-point matching refined with the RANSAC algorithm; finally, fuse adjacent images with a gradual fade-in/fade-out using a Laplacian pyramid blending algorithm. The method of the present invention can stitch large batches of infrared images, is fast, robust, and accurate, and is easy to popularize and apply.
Description
Technical field
The invention belongs to the field of infrared image stitching, and in particular relates to a fast stitching method for infrared images, more particularly to a method for stitching infrared images captured by an infrared camera.
Background technology
With the development of infrared thermal imaging, infrared image processing has become an important technical means and an indispensable component of infrared technology. In particular, infrared image stitching, as an important means of expanding the field of view, has become a research hotspot in the field of image processing.
Compared with visible-light cameras, infrared cameras, owing to their special imaging mechanism, generally produce images with low resolution, narrow dynamic range, and heavy noise. These problems make the image-registration step of stitching more difficult. Therefore, for a specific application scenario, besides designing the stitching algorithm around the images themselves, other information obtained while the infrared camera is shooting should be used to assist the stitching. In addition, some application scenarios (such as aerial photography) generate huge numbers of images: a sweep-type survey of a region can yield hundreds or thousands of pictures. This requires the stitching algorithm to be fast and highly robust, so that the whole stitching task does not fail because one or a few images fail to register.
A common image-stitching method involves the following steps. First, feature-point detectors such as Harris or SIFT detect feature points in the images to be stitched; the feature points of the two images are then matched, and the RANSAC algorithm purifies the matched point pairs and computes the transformation matrix between the two images. After one image is remapped with the transformation matrix, an image-fusion method merges the images to be stitched. To prevent a gray-level step when two images with a large gray-level difference are fused, the gray levels of the images are usually adjusted before fusion. Moreover, when many images are stitched, registration errors gradually accumulate with the number of stitches, so a bundle adjustment algorithm must refine the transformation matrices computed by registration to eliminate the error; bundle adjustment iterates ceaselessly over every transformation matrix and is very time-consuming. This stitching procedure has many steps and complex algorithms, making the stitching task slow. How to overcome these deficiencies of the prior art is therefore an urgent problem in the field of infrared image stitching.
Invention content
It is an object of the present invention to remedy the deficiencies of the prior art by providing a fast stitching method for infrared images. The method is directed at infrared images, performs stitching using information generated during shooting, and is fast, accurate, and robust.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A fast stitching method for aerial infrared images comprises the following steps:
Step 1, reading: read in the library of images to be stitched and the photographing information generated during shooting.
Step 2, making the pattern matrix: parse the position of each image to be stitched from the photographing information of step 1, then make a pattern matrix according to the position of each image among all the images to be stitched.
Step 3, coarse matching of adjacent images: treat two adjacent images in the pattern matrix of step 2 as related by a weak perspective, and compute their relative translation with an image-registration method based on the Fourier transform.
Step 4, fine matching of adjacent images: from the relative translation obtained in step 3, delimit the overlapping region of the two adjacent images; enlarge the region by 20% on each side; then, from left to right, randomly select three image blocks in the enlarged region, denoted the left, middle, and right template image blocks.
Perform template matching on the two adjacent images using the maximum-similarity criterion to obtain the best match point of each template image block. Denote the best match points of the three template image blocks (x1, y1), (x2, y2), (x3, y3), and compute the absolute differences of their abscissas and ordinates: W1 = |x1 - x2|, W2 = |x1 - x3|, W3 = |x2 - x3|, H1 = |y1 - y2|, H2 = |y1 - y3|, H3 = |y2 - y3|. Then screen these three match points: if W1, W2, and W3 are all no more than 5% of the image length, and the corresponding H1, H2, and H3 are all no more than 5% of the image height, the three points agree, matching succeeds, and their average is taken as the fine matching result; if two of W1, W2, W3 are no more than 5% of the image length and the corresponding H values are no more than 5% of the image height, two pairs of points agree, matching succeeds, and the average of either agreeing pair is taken as the result; if one of W1, W2, W3 is no more than 5% of the image length and the corresponding H is no more than 5% of the image height, one pair of points agrees, matching succeeds, and the average of that pair is taken as the result; otherwise the three best match points differ greatly and matching fails. On success go to step 5; on failure go to step 6.
Step 5, judging the matching result: if the fine matching result of step 4 differs from the coarse matching result of step 3 by less than a threshold, accept the matching result and go to step 7; otherwise reject it and go to step 6.
Step 6, feature-point matching of the two adjacent images: perform Harris feature-point detection on the two images to be matched, match the resulting feature descriptors by Euclidean distance, reject mismatched points with the RANSAC algorithm, and compute the relative translation between the two registered images.
Step 7, fusing adjacent images: fuse the adjacent images with a Laplacian pyramid blending algorithm.
Step 8, repeat steps 3 to 7 until the whole image library is stitched.
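The three-point screening rule of step 4 can be sketched in Python as follows; the function name, the dict-free interface, and the tolerance parameter are illustrative, not part of the patent:

```python
import itertools

def screen_matches(points, img_len, img_h, tol=0.05):
    """Screen three best-match points as in step 4: two points agree when
    both their horizontal and vertical coordinate differences are within
    tol of the image length and height.  Return the average of the
    agreeing points, or None when matching fails."""
    agreeing_pairs = [
        (a, b) for a, b in itertools.combinations(range(3), 2)
        if abs(points[a][0] - points[b][0]) <= tol * img_len
        and abs(points[a][1] - points[b][1]) <= tol * img_h
    ]
    if len(agreeing_pairs) == 3:     # all three points agree: average all
        idx = {0, 1, 2}
    elif agreeing_pairs:             # one or two agreeing pairs: average one pair
        idx = set(agreeing_pairs[0])
    else:                            # no agreement: matching fails
        return None
    xs = [points[i][0] for i in idx]
    ys = [points[i][1] for i in idx]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

A failed screening (None) corresponds to falling through to the feature-point matching of step 6.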
Further preferably, in step 1, the photographing information generated during shooting includes the aircraft's flight speed, flight altitude, course angle, pitch angle, the longitude and latitude at the moment of shooting, and the shooting time.
Further preferably, stitching proceeds in order from top to bottom and from left to right.
Further preferably, the specific method of computing the relative translation of two adjacent images with the Fourier registration method is: compute the cross-power spectrum of the two images; the phase difference of the cross-power spectrum is equivalent to the phase difference between the images; apply the inverse Fourier transform to this phase difference to obtain a two-dimensional impulse function; and locate the impulse peak, whose position is the relative translation between the two registered images.
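A minimal NumPy sketch of this Fourier (phase-correlation) registration, assuming for simplicity a cyclic shift between the two images; the function name and the peak-wrapping convention are illustrative:

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Recover the translation of img2 relative to img1: the cross-power
    spectrum keeps only the phase difference between the images, and its
    inverse FFT is a 2-D impulse whose peak position is the shift."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12   # normalize: keep phase only
    impulse = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(impulse), impulse.shape)
    # Wrap peak coordinates to signed shifts
    shape = np.array(img1.shape)
    shift = np.array(peak)
    shift[shift > shape // 2] -= shape[shift > shape // 2]
    return tuple(int(s) for s in shift)
```

For real aerial frames, which overlap only partially, windowing and sub-pixel peak interpolation would normally be added on top of this sketch.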
Further preferably, in step 4, the maximum-similarity criterion uses the correlation function R(i, j) as the similarity measure, computed as

R(i, j) = [Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j) T(x, y)] / sqrt([Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j)^2] [Σ_{x=1..X} Σ_{y=1..Y} T(x, y)^2])

where s(x, y) is the enlarged overlapping region, T(x, y) is the template image block, X and Y are the length and width of the template image block, and i and j are the displacements of the template image block along the x and y axes;
the (i, j) at which R(i, j) is maximal is the best match point.
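An exhaustive-search sketch of this template matching in plain NumPy (a production version would use an FFT-based scheme or a library routine such as OpenCV's matchTemplate; the function name is illustrative):

```python
import numpy as np

def best_match_point(search, template):
    """Slide the template over the search region and return the (i, j)
    maximizing the normalized correlation R(i, j) described in step 4."""
    X, Y = template.shape
    H, W = search.shape
    t_norm = np.sqrt(np.sum(template ** 2))
    best_r, best_pos = -np.inf, (0, 0)
    for i in range(H - X + 1):
        for j in range(W - Y + 1):
            patch = search[i:i + X, j:j + Y]
            denom = np.sqrt(np.sum(patch ** 2)) * t_norm
            if denom == 0:           # skip empty patches
                continue
            r = np.sum(patch * template) / denom
            if r > best_r:
                best_r, best_pos = r, (i, j)
    return best_pos
```

The double loop is O(H·W·X·Y); the patent's choice of small templates (1/6 of the overlap in each dimension) keeps this cost manageable.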
Further preferably, in step 4, the length of each template image block is 1/6 of the length of the enlarged overlapping region, and its width is 1/6 of the width of the enlarged overlapping region.
Further preferably, in step 5, the threshold is 5%-10%. For example, with a threshold of 5%, the fine matching result of step 4 is accepted when it differs from the coarse matching result of step 3 by less than 5% of the image length horizontally and less than 5% of the image height vertically.
Compared with the prior art, the present invention has the following advantages:
(1) the invention makes full use of the photographing information to parse the stitching order and stitching positions, stitches in that order, and treats the projection relationship between images as a weak perspective, which speeds up stitching and reduces its complexity;
(2) it uses a multi-stage matching strategy: coarse matching first, with the fast Fourier registration method computing the overlap; then fine registration by template matching; and, if the similarity criterion fails, the more accurate feature-point matching method. This improves the accuracy of stitching while preserving its speed and robustness;
(3) the Laplacian pyramid fusion method eliminates the influence of pixel misalignment.
Description of the drawings
Fig. 1 is a flow diagram of the fast stitching method for aerial infrared images of the present invention;
Fig. 2 is a schematic diagram of the aerial shooting mode;
Fig. 3 is a schematic diagram of the aerial pattern matrix;
Fig. 4 is a schematic diagram of how templates are chosen for the template matching method;
Fig. 5 is a flow chart of screening the best match points;
Fig. 6 is the stitching result of the embodiment.
Specific implementation mode
The present invention is described in further detail below with reference to embodiments.
Those skilled in the art will understand that the following examples are merely illustrative and should not be regarded as limiting the scope of the invention. Where no specific technique or condition is specified in an example, it is carried out according to techniques or conditions described in the literature of the art or according to the product specification. Materials and equipment whose manufacturer is not specified are conventional products available commercially.
Embodiment 1
A fast stitching method for aerial infrared images, as shown in Figure 1, comprises the following steps:
Step 1, reading: read in the library of images to be stitched and the photographing information generated during shooting. The photographing information includes the aircraft's flight speed, flight altitude, course angle, pitch angle, the longitude and latitude at the moment of shooting, and the shooting time.
Step 2, making the pattern matrix: parse the position (which row, which column) of each image to be stitched from the photographing information of step 1, then make the pattern matrix of all the images to be stitched. Stitching proceeds in order from top to bottom and from left to right.
Step 3, coarse matching of adjacent images: treat two adjacent images in the pattern matrix of step 2 as related by a weak perspective, i.e. assume by default that only a translation exists between them, and compute their relative translation with an image-registration method based on the Fourier transform.
Step 4, fine matching of adjacent images: from the relative translation obtained in step 3, delimit the overlapping region of the two adjacent images; enlarge the region by 20% on each side; then, from left to right, randomly select three image blocks in the enlarged region, denoted the left, middle, and right template image blocks. The length of each template image block is 1/6 of the length of the enlarged overlapping region, and its width is 1/6 of the width of the enlarged overlapping region.
Perform template matching on the two adjacent images using the maximum-similarity criterion to obtain the best match point of each template image block. Denote the best match points of the three template image blocks (x1, y1), (x2, y2), (x3, y3), and compute the absolute differences of their abscissas and ordinates: W1 = |x1 - x2|, W2 = |x1 - x3|, W3 = |x2 - x3|, H1 = |y1 - y2|, H2 = |y1 - y3|, H3 = |y2 - y3|. Then screen these three match points: if W1, W2, and W3 are all no more than 5% of the image length, and the corresponding H1, H2, and H3 are all no more than 5% of the image height, the three points agree, matching succeeds, and their average is taken as the fine matching result; if two of W1, W2, W3 are no more than 5% of the image length and the corresponding H values are no more than 5% of the image height, two pairs of points agree, matching succeeds, and the average of either agreeing pair is taken as the result; if one of W1, W2, W3 is no more than 5% of the image length and the corresponding H is no more than 5% of the image height, one pair of points agrees, matching succeeds, and the average of that pair is taken as the result; otherwise the three best match points differ greatly and matching fails. On success go to step 5; on failure go to step 6.
Step 5, judging the matching result: if the fine matching result of step 4 differs from the coarse matching result of step 3 by less than a threshold, accept the matching result and go to step 7; otherwise reject it and go to step 6.
Step 6, feature-point matching of the two adjacent images: perform Harris feature-point detection on the two images to be matched, match the resulting feature descriptors by Euclidean distance, reject mismatched points with the RANSAC algorithm, and compute the relative translation between the two registered images.
Step 7, fusing adjacent images: fuse the adjacent images with a Laplacian pyramid blending algorithm. Because adjacent images are treated as related by a weak perspective, registration has some error, so the registered images are generally misaligned by a few pixels; the Laplacian pyramid blending effectively eliminates the influence of this misalignment.
Step 8, repeat steps 3 to 7 until the whole image library is stitched.
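In step 6, Harris detection and descriptor matching would normally come from a library such as OpenCV. The RANSAC rejection of mismatched points, for the pure-translation model this method assumes, can be sketched as follows; all names and parameters here are illustrative:

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iter=200, inlier_tol=2.0, seed=0):
    """Estimate the relative translation from putative point matches while
    rejecting mismatches: each iteration hypothesizes a translation from a
    single correspondence and counts how many matches it explains."""
    rng = np.random.RandomState(seed)
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iter):
        k = rng.randint(len(pts1))
        t = pts2[k] - pts1[k]                      # one-point hypothesis
        err = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = err < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the translation on the inliers of the best hypothesis
    t = (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
    return t, best_inliers
```

A single correspondence suffices per hypothesis because a translation has only two degrees of freedom, which is what makes this fallback cheap compared with full-homography RANSAC.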
The specific method of computing the relative translation of two adjacent images with the Fourier registration method is: compute the cross-power spectrum of the two images; the phase difference of the cross-power spectrum is equivalent to the phase difference between the images; apply the inverse Fourier transform to this phase difference to obtain a two-dimensional impulse function; and locate the impulse peak, whose position is the relative translation between the two registered images.
In step 4, the maximum-similarity criterion uses the correlation function R(i, j) as the similarity measure, computed as

R(i, j) = [Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j) T(x, y)] / sqrt([Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j)^2] [Σ_{x=1..X} Σ_{y=1..Y} T(x, y)^2])

where s(x, y) is the enlarged overlapping region, T(x, y) is the template image block, X and Y are the length and width of the template image block, and i and j are the displacements of the template image block along the x and y axes;
the (i, j) at which R(i, j) is maximal is the best match point.
In step 5, the threshold value is 5%-10%.
Application example
The present invention is described in further detail below with reference to an aerial-image embodiment and the accompanying drawings, but the implementation of the present invention is not limited to aerial images.
As shown in Figure 1, a fast stitching method for aerial infrared images comprises the following steps:
Step 1, reading: read in the library of images to be stitched and the photographing information generated during shooting.
The images in the library are obtained as shown in Figure 2: the aircraft flies forward (in Fig. 2 the heading is perpendicular to the paper), while the infrared camera sweeps the ground below through a tilt angle θ in the plane perpendicular to the heading (the plane of the paper). The sweep produces the images of one longitudinal column, and the number of images per column is fixed at 9; the flight of the aircraft produces the lateral columns, whose number extends without limit. The aerial information carried in each image header, such as the longitude and latitude at the moment of shooting and the shooting time, is read. The specific aerial information includes the aircraft's flight speed, flight altitude, course angle, pitch angle, current longitude and latitude, and current shooting time.
Step 2, making the pattern matrix: parse the position of each image to be stitched from the photographing information of step 1, then make the pattern matrix of all the images to be stitched; the resulting pattern matrix is shown in Figure 3. Stitching proceeds in order from top to bottom and from left to right.
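A toy sketch of how such a pattern matrix could be assembled. Since the patent does not fix a metadata format, this assumes only that frames carry a timestamp and that each sweep yields a fixed number of rows; every name here is hypothetical:

```python
def build_pattern_matrix(shots, rows_per_sweep=9):
    """Order shots by time and place them in a grid: each sweep fills one
    column of rows_per_sweep images top to bottom, and successive sweeps
    fill successive columns.  shots: list of dicts with a 'time' key.
    Returns grid[row][col] holding shot indices."""
    order = sorted(range(len(shots)), key=lambda k: shots[k]["time"])
    n_cols = len(shots) // rows_per_sweep
    grid = [[None] * n_cols for _ in range(rows_per_sweep)]
    for pos, idx in enumerate(order):
        grid[pos % rows_per_sweep][pos // rows_per_sweep] = idx
    return grid
```

A real implementation would also consult the longitude, latitude, and course angle to decide row and column order rather than relying on timestamps alone.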
Step 3, coarse matching of adjacent images: treat two adjacent images in the pattern matrix of step 2 as related by a weak perspective, and compute their relative translation with an image-registration method based on the Fourier transform, specifically as follows.
Sequentially adjacent images are registered preliminarily. Although a projection relation theoretically exists between two adjacent images, because the images are taken from afar (at a shooting distance of tens of kilometers), two adjacent aerial images can be treated as related by a weak perspective, with only a translation between them by default. The relative displacement of two adjacent images is therefore found with the Fourier registration method. The specific implementation is: let f1(x, y) and f2(x, y) be adjacent images whose Fourier transforms are F1(u, v) and F2(u, v), and compute the cross-power spectrum of the two images according to

[F1(u, v) F2*(u, v)] / |F1(u, v) F2*(u, v)|

where F1*(u, v) and F2*(u, v) are the complex conjugates of F1(u, v) and F2(u, v). The phase difference of the cross-power spectrum is equivalent to the phase difference between the images; applying the inverse Fourier transform to it yields a two-dimensional impulse function, and the position of the impulse peak is the relative translation x0 and y0 between the two registered images.
Step 4, fine matching of adjacent images: from the relative translation obtained in step 3, delimit the overlapping region of the two adjacent images; enlarge the region by 20% on each side; then, from left to right, randomly select three image blocks in the enlarged region, denoted the left, middle, and right template image blocks. The length of each template image block is 1/6 of the length of the enlarged overlapping region, and its width is 1/6 of the width of the enlarged overlapping region. As shown in Figure 4, the left, middle, and right template image blocks T1, T2, T3, each of size X × Y, are selected in image 1.
Perform template matching on the two adjacent images using the maximum-similarity criterion to obtain the best match point of each template image block. The maximum-similarity criterion uses the correlation function R(i, j) as the similarity measure, computed as

R(i, j) = [Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j) T(x, y)] / sqrt([Σ_{x=1..X} Σ_{y=1..Y} s(x + i, y + j)^2] [Σ_{x=1..X} Σ_{y=1..Y} T(x, y)^2])

where s(x, y) is the overlapping region after image 2 is enlarged, T(x, y) is the template image block, X and Y are the length and width of the template image block, and i and j are the displacements of the template image block along the x and y axes; the (i, j) at which R(i, j) is maximal is the best match point.
Denote the best match points of the three template image blocks (x1, y1), (x2, y2), (x3, y3), and compute the absolute differences of their abscissas and ordinates: W1 = |x1 - x2|, W2 = |x1 - x3|, W3 = |x2 - x3|, H1 = |y1 - y2|, H2 = |y1 - y3|, H3 = |y2 - y3|. Then screen these three match points; the screening process is shown in Figure 5. If W1, W2, and W3 are all no more than 5% of the image length, and the corresponding H1, H2, and H3 are all no more than 5% of the image height, the three points agree, matching succeeds, and their average is taken as the fine matching result; if two of W1, W2, W3 are no more than 5% of the image length and the corresponding H values are no more than 5% of the image height, two pairs of points agree, matching succeeds, and the average of either agreeing pair is taken as the result; if one of W1, W2, W3 is no more than 5% of the image length and the corresponding H is no more than 5% of the image height, one pair of points agrees, matching succeeds, and the average of that pair is taken as the result; otherwise the three best match points differ greatly and matching fails. On success go to step 5; on failure go to step 6.
Step 5, judging the matching result: if the fine matching result of step 4 differs from the coarse matching result of step 3 by less than 5% of the image length horizontally and less than 5% of the image height vertically, accept the matching result and go to step 7; otherwise reject it and go to step 6.
Step 6, feature-point matching of the two adjacent images: perform Harris feature-point detection on the two images to be matched, match the resulting feature descriptors by Euclidean distance, reject mismatched points with the RANSAC algorithm, and compute the relative translation between the two registered images.
Step 7, fusing adjacent images: fuse the adjacent images with a Laplacian pyramid blending algorithm, with the fusion weight in the fused region decreasing from the two sides toward the center. Because adjacent images are treated as related by a weak perspective and registration has some error, the registered images are generally misaligned by a few pixels; fusing adjacent images with a Laplacian pyramid blending algorithm effectively eliminates the influence of this pixel misalignment.
Step 8, repeat steps 3 to 7 until the whole image library is stitched.
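A compact NumPy sketch of Laplacian-pyramid blending, using a box filter for down/upsampling instead of the usual Gaussian kernel and assuming power-of-two image sizes; a production version would use library pyramid routines:

```python
import numpy as np

def _down(img):
    """Halve each dimension by 2x2 box averaging."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2]
                   + img[::2, 1::2] + img[1::2, 1::2])

def _up(img, shape):
    """Double each dimension by pixel repetition, trimmed to shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_blend(a, b, mask, levels=4):
    """Blend images a and b with per-pixel weight mask (1 means take a),
    combining the weights per pyramid level so that seams are smoothed."""
    ga, gb, gm = [a.astype(float)], [b.astype(float)], [mask.astype(float)]
    for _ in range(levels - 1):
        ga.append(_down(ga[-1])); gb.append(_down(gb[-1])); gm.append(_down(gm[-1]))
    # Laplacian bands: each level minus the upsampled coarser level
    la = [ga[i] - _up(ga[i + 1], ga[i].shape) for i in range(levels - 1)] + [ga[-1]]
    lb = [gb[i] - _up(gb[i + 1], gb[i].shape) for i in range(levels - 1)] + [gb[-1]]
    blended = [gm[i] * la[i] + (1 - gm[i]) * lb[i] for i in range(levels)]
    # Collapse the blended pyramid from coarse to fine
    out = blended[-1]
    for i in range(levels - 2, -1, -1):
        out = _up(out, blended[i].shape) + blended[i]
    return out
```

Blending each frequency band separately is what lets the method hide a few pixels of residual misalignment that a direct weighted average would show as ghosting.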
Following the above implementation, the result of stitching an image library containing 72 images (9 rows by 8 columns) is shown in Figure 6.
The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should appreciate that the present invention is not limited to the above embodiments; the embodiments and the description merely illustrate the principle of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and such changes and improvements all fall within the claimed scope of the invention, which is defined by the appended claims and their equivalents.
Claims (7)
- 1. A rapid stitching method for aerial infrared images, characterized by comprising the following steps:
Step 1, reading: read in the image library to be stitched and the photographing information generated during shooting;
Step 2, building the image matrix: parse the position of each image to be stitched in the library according to the photographing information of step 1, then build an image matrix from the position of each image among all images to be stitched;
Step 3, coarse matching of adjacent images: regard two adjacent images in the image matrix of step 2 as being in a weak perspective relation, and compute their relative translation with an image registration method based on the Fourier transform;
Step 4, fine matching of adjacent images: delimit the overlap region of the two adjacent images from the relative translation obtained in step 3, expand this region by 20% on both sides, then randomly select three image blocks from left to right in the expanded region, denoted the left, middle and right template image blocks;
perform template matching between the two adjacent images under the maximum-similarity criterion to obtain the best match point of each template image block; denote the best match points of the three template blocks (x1, y1), (x2, y2) and (x3, y3), and compute the absolute differences of their abscissas and ordinates: W1 = |x1 - x2|, W2 = |x1 - x3|, W3 = |x2 - x3|, H1 = |y1 - y2|, H2 = |y1 - y3|, H3 = |y2 - y3|;
then screen the three match points: if W1, W2 and W3 are each at most 5% of the width of the image to be stitched and the corresponding H1, H2 and H3 are each at most 5% of its height, the three points are in close agreement, matching succeeds, and their average is taken as the fine matching result; if exactly two of W1, W2, W3 are at most 5% of the image width and their corresponding H values at most 5% of the image height, two pairs of points agree, matching succeeds, and the average of either pair is taken as the result; if exactly one W is at most 5% of the image width and its corresponding H at most 5% of the image height, one pair of points agrees, matching succeeds, and the average of that pair is taken as the result; in all other cases the three best match points differ too much and matching fails; on success proceed to step 5, on failure proceed to step 6;
Step 5, judging the matching result: if the fine matching result of step 4 differs from the coarse matching result of step 3 by less than a threshold, accept the matching result and proceed to step 7; otherwise reject it and proceed to step 6;
Step 6, feature-point matching of adjacent images: apply Harris corner detection to the two adjacent images to be matched, match the resulting feature-point descriptors by Euclidean distance, reject false matches with the RANSAC algorithm, and compute the relative translation between the two registered images;
Step 7, fusing adjacent images: fuse the adjacent images with the Laplacian pyramid blending algorithm;
Step 8, repeat steps 3 to 7 until the whole image library is stitched.
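The three-point screening rule of step 4 reduces to checking, for each pair of best-match points, whether their coordinate differences stay within the 5% tolerance. A minimal Python sketch of that rule (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def screen_match_points(points, img_w, img_h, tol=0.05):
    """Screen the three template best-match points (claim 1, step 4).

    points: list of three (x, y) best-match coordinates.
    A pair agrees when its x-difference is within tol*img_w AND its
    y-difference is within tol*img_h. If all three points agree, the
    fine-match result is their mean; if only some pairs agree, the
    mean of one agreeing pair; if none agree, matching fails (None).
    """
    p1, p2, p3 = points
    pairs = [(p1, p2), (p1, p3), (p2, p3)]
    agreeing = [(a, b) for a, b in pairs
                if abs(a[0] - b[0]) <= tol * img_w
                and abs(a[1] - b[1]) <= tol * img_h]
    if len(agreeing) == 3:              # all three points agree
        return tuple(np.mean(points, axis=0))
    if agreeing:                        # one or two pairs agree
        a, b = agreeing[0]
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return None                         # match failed -> fall back to step 6
```

On failure (`None`), the method falls back to the Harris/RANSAC feature matching of step 6.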
- 2. The infrared image rapid stitching method according to claim 1, characterized in that, in step 1, the photographing information generated during shooting includes the aircraft flight speed, flight altitude, course angle, pitch angle, longitude and latitude at the time of shooting, and shooting time.
- 3. The infrared image rapid stitching method according to claim 1, characterized in that, when stitching, the images are stitched successively in top-to-bottom, left-to-right order.
- 4. The infrared image rapid stitching method according to claim 1, characterized in that the specific method of computing the relative translation of two adjacent images with the Fourier registration method is: compute the cross-power spectrum of the two images, whose phase difference is equivalent to the phase difference between the images; apply the inverse Fourier transform to this phase difference to obtain a two-dimensional impulse function; the position of the impulse peak is the relative translation between the two registered images.
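The phase-correlation procedure of claim 4 can be sketched with NumPy's FFT routines. The spectrum normalization and the wrap-around handling for negative shifts are conventional details not spelled out in the claim; function names are illustrative:

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the shift (dx, dy) such that img_b is img_a translated
    by that amount (claim 4's Fourier registration).

    Normalizing the cross-power spectrum keeps only the phase
    difference between the images; its inverse FFT is a 2-D impulse
    whose peak position is the relative translation.
    """
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    impulse = np.fft.ifft2(cross)
    dy, dx = np.unravel_index(np.argmax(np.abs(impulse)), impulse.shape)
    # shifts beyond half the image size wrap around to negative values
    h, w = img_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```

Because this runs in two FFTs plus one inverse FFT, it is much cheaper than dense feature matching, which is the point of using it as the coarse-matching step.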
- 5. The infrared image rapid stitching method according to claim 1, characterized in that, in step 4, the maximum-similarity criterion uses the correlation function R(i, j) as the similarity measure, calculated by the following formula, wherein s(x, y) is the expanded overlap region, T(x, y) is the template image block, X is the length of the template image block, Y is its width, i is the displacement of the template image block along the x-axis, and j is its displacement along the y-axis; the (i, j) at which R(i, j) is maximal is the best match point.
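The formula for R(i, j) appears only as an image in the original publication and is not reproduced in this text. As an assumption, the sketch below uses normalized cross-correlation, a standard similarity measure consistent with the variables the claim describes (names are illustrative):

```python
import numpy as np

def template_match(s, T):
    """Slide template block T over search region s and return the
    displacement maximizing a correlation score R (claim 5).

    Assumption: R is taken as normalized cross-correlation, since the
    claim's formula image is not reproduced in the text. Here i indexes
    rows (y-displacement) and j columns (x-displacement).
    """
    Y, X = T.shape                      # template height, width
    H, W = s.shape
    Tn = T - T.mean()                   # zero-mean template
    best, best_ij = -np.inf, None
    for i in range(H - Y + 1):
        for j in range(W - X + 1):
            win = s[i:i + Y, j:j + X]
            wn = win - win.mean()
            denom = np.sqrt((wn ** 2).sum() * (Tn ** 2).sum())
            if denom == 0:
                continue                # flat window: undefined correlation
            R = (wn * Tn).sum() / denom
            if R > best:
                best, best_ij = R, (i, j)
    return best_ij
```

A production implementation would compute this in the frequency domain or with an optimized library routine rather than the explicit double loop shown here.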
- 6. The infrared image rapid stitching method according to claim 1, characterized in that, in step 4, the length of each template image block is 1/6 of the length of the expanded overlap region, and its width is 1/6 of the width of the expanded overlap region.
- 7. The infrared image rapid stitching method according to claim 1, characterized in that, in step 5, the threshold is 5%-10%.
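Step 7 of claim 1 fuses the registered images with Laplacian pyramid blending. A pure-NumPy sketch of that idea, using simple box down-sampling and nearest-neighbour up-sampling as stand-ins for the usual Gaussian pyramid filters (all names are illustrative; image sides must be divisible by 2**levels):

```python
import numpy as np

def _down(img):
    """2x downsample by 2x2 averaging (stand-in for a Gaussian pyrDown)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _up(img):
    """2x nearest-neighbour upsample (stand-in for pyrUp)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_blend(a, b, mask, levels=3):
    """Blend two registered images with a Laplacian pyramid (claim 1, step 7).

    mask is 1.0 where image a should dominate and 0.0 where b should;
    blending each Laplacian level against the corresponding level of a
    mask pyramid yields a seamless transition across the overlap.
    """
    ga, gb, gm = [a], [b], [mask]
    for _ in range(levels):             # Gaussian pyramids of a, b, mask
        ga.append(_down(ga[-1]))
        gb.append(_down(gb[-1]))
        gm.append(_down(gm[-1]))
    # Laplacian levels: difference between successive Gaussian levels
    la = [ga[i] - _up(ga[i + 1]) for i in range(levels)] + [ga[levels]]
    lb = [gb[i] - _up(gb[i + 1]) for i in range(levels)] + [gb[levels]]
    # Blend level by level, then collapse the pyramid coarse-to-fine
    blended = [m * x + (1 - m) * y for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for lvl in blended[-2::-1]:
        out = _up(out) + lvl
    return out
```

The box/nearest-neighbour pair makes the pyramid exactly invertible, so blending with an all-ones mask reproduces the first image; a real implementation would use Gaussian filtering for smoother level transitions.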
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810350881.8A CN108537833B (en) | 2018-04-18 | 2018-04-18 | Infrared image rapid splicing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810350881.8A CN108537833B (en) | 2018-04-18 | 2018-04-18 | Infrared image rapid splicing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108537833A true CN108537833A (en) | 2018-09-14 |
CN108537833B CN108537833B (en) | 2022-06-21 |
Family
ID=63477627
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810350881.8A Active CN108537833B (en) | 2018-04-18 | 2018-04-18 | Infrared image rapid splicing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108537833B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103714548A (en) * | 2013-12-27 | 2014-04-09 | 西安电子科技大学 | Infrared image and visible image registration method based on visual attention |
CN104881841A (en) * | 2015-05-20 | 2015-09-02 | 南方电网科学研究院有限责任公司 | Aerial high voltage electric tower image splicing method based on edge characteristics and point characteristics |
US20170132764A1 (en) * | 2015-11-06 | 2017-05-11 | Industrial Technology Research Institute | Image inpainting system and method for using the same |
CN106981077A (en) * | 2017-03-24 | 2017-07-25 | 中国人民解放军国防科学技术大学 | Infrared image and visible light image registration method based on DCE and LSS |
CN107563438A (en) * | 2017-08-31 | 2018-01-09 | 西南交通大学 | The multi-modal Remote Sensing Images Matching Method and system of a kind of fast robust |
CN107784632A (en) * | 2016-08-26 | 2018-03-09 | 南京理工大学 | A kind of infrared panorama map generalization method based on infra-red thermal imaging system |
CN107833179A (en) * | 2017-09-05 | 2018-03-23 | 云南电网有限责任公司昆明供电局 | The quick joining method and system of a kind of infrared image |
- 2018-04-18: Application CN201810350881.8A filed (CN); granted as patent CN108537833B; status: Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109285118B (en) * | 2018-09-26 | 2023-03-07 | 电子科技大学 | Infrared thermal image splicing method with additional accessory layer |
CN109285118A (en) * | 2018-09-26 | 2019-01-29 | 电子科技大学 | A kind of thermal-induced imagery joining method adding attachment layer |
CN109712070A (en) * | 2018-12-04 | 2019-05-03 | 天津津航技术物理研究所 | A kind of infrared panoramic image split-joint method based on graph cut |
CN110264569A (en) * | 2019-05-30 | 2019-09-20 | 南京航空航天大学 | A kind of infrared image three-dimensional reconstruction algorithm |
CN110689511A (en) * | 2019-09-19 | 2020-01-14 | 华东师范大学 | Optimization method for large-visual-field image splicing path |
CN110689511B (en) * | 2019-09-19 | 2023-02-03 | 华东师范大学 | Optimization method for large-visual-field image splicing path |
CN113129213A (en) * | 2020-01-14 | 2021-07-16 | 中国计量大学 | Automatic splicing and fusing method for digital holographic subaperture phase diagram |
CN111260561A (en) * | 2020-02-18 | 2020-06-09 | 中国科学院光电技术研究所 | Rapid multi-graph splicing method for mask defect detection |
CN113916149A (en) * | 2021-09-10 | 2022-01-11 | 北京工业大学 | Cylindrical straight gear appearance vision measuring device based on multi-eye CCD industrial camera and image processing method |
CN115272077A (en) * | 2022-07-29 | 2022-11-01 | 西安羚控电子科技有限公司 | Image splicing method and system based on vision field fusion |
CN116930262A (en) * | 2023-09-15 | 2023-10-24 | 佰聆数据股份有限公司 | Defect detection method and system for realizing power equipment based on multi-terminal interaction |
CN116930262B (en) * | 2023-09-15 | 2023-11-28 | 佰聆数据股份有限公司 | Defect detection method and system for realizing power equipment based on multi-terminal interaction |
CN117323002A (en) * | 2023-11-30 | 2024-01-02 | 北京万特福医疗器械有限公司 | Neural endoscopic surgery visualization system based on mixed reality technology |
Also Published As
Publication number | Publication date |
---|---|
CN108537833B (en) | 2022-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108537833A (en) | A kind of quick joining method of infrared image | |
CN111311689B (en) | Method and system for calibrating relative external parameters of laser radar and camera | |
CN105550995B (en) | tunnel image splicing method and system | |
US8275195B2 (en) | Method for determining scattered disparity fields in stereo vision | |
WO2019100933A1 (en) | Method, device and system for three-dimensional measurement | |
CN101394573B (en) | Panoramagram generation method and system based on characteristic matching | |
US6353678B1 (en) | Method and apparatus for detecting independent motion in three-dimensional scenes | |
US5832110A (en) | Image registration using projection histogram matching | |
US20100103175A1 (en) | Method for generating a high-resolution virtual-focal-plane image | |
CN111462503B (en) | Vehicle speed measuring method and device and computer readable storage medium | |
CN107533753A (en) | Image processing apparatus | |
CN107833179A (en) | The quick joining method and system of a kind of infrared image | |
EP1384046A2 (en) | Self-calibrating, digital, large format camera with single or multiple detector arrays and single or multiple optical systems | |
CN107154014A (en) | A kind of real-time color and depth Panorama Mosaic method | |
JPH1183530A (en) | Optical flow detector for image and self-position recognizing system for mobile body | |
JPH09161068A (en) | Picture photographing method and picture edition device using the method | |
CN104392416A (en) | Video stitching method for sports scene | |
CN106952262A (en) | A kind of deck of boat analysis of Machining method based on stereoscopic vision | |
CN108596837A (en) | Image split-joint method, device, equipment and computer media | |
CN110120013A (en) | A kind of cloud method and device | |
CN106780309A (en) | A kind of diameter radar image joining method | |
JP6285686B2 (en) | Parallax image generation device | |
CN113012238B (en) | Method for quick calibration and data fusion of multi-depth camera | |
CN110044266A (en) | Digital Photogrammetric System based on speckle projection | |
WO2020158726A1 (en) | Image processing device, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||