CN114881900A - Pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion - Google Patents

Pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion

Info

Publication number
CN114881900A
Authority
CN
China
Prior art keywords
image
gradual
feature
pantograph
matching
Prior art date
Legal status
Pending
Application number
CN202210396731.7A
Other languages
Chinese (zh)
Inventor
Zhang Haoze
Du Sen
Xing Zongyi
Lei Wei
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202210396731.7A priority Critical patent/CN114881900A/en
Publication of CN114881900A publication Critical patent/CN114881900A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion, which comprises the following steps: acquiring a left half image and a right half image of a pantograph; performing image preprocessing on the left half image and the right half image; extracting feature points of the two images with the SIFT algorithm and completing coarse matching of the feature points with a K-D tree; screening the feature point pairs with the RANSAC algorithm, calculating the optimal transformation matrix, and eliminating mismatched feature point pairs to obtain effective matching feature point pairs; and directly splicing the registered left and right half images, finally fusing the stitched area with the gradual-in and gradual-out method to obtain a full-pantograph image. The method achieves an excellent splicing effect with good robustness and yields high-quality spliced pantograph images.

Description

Pantograph image stitching method based on feature matching and gradual-in and gradual-out fusion
Technical Field
The invention relates to the technical field of pantograph image detection, in particular to a pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion.
Background
With China's rapid socioeconomic development and continuing urbanization, urban metro systems have grown quickly, and pantograph state detection based on machine vision technology is developing vigorously. The accuracy of vision-based pantograph detection rests on obtaining high-quality, real-time full-pantograph images.
At present, research on pantograph image stitching methods is scarce and relatively immature. Guangzhou Yunda Intelligent Science and Technology Co., Ltd. proposed a method that obtains a pantograph image by applying a pre-calibrated perspective transformation to the left and right half images and then splicing them by translation; however, the full-pantograph image obtained in this way shows obvious stitching gaps, low image quality and poor stitching stability.
Disclosure of Invention
The invention aims to provide a pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion that offers an excellent fusion effect, good splicing stability and high-quality spliced images.
The technical solution for realizing the purpose of the invention is as follows: a pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion comprises the following steps:
step 1, acquiring a left half image and a right half image of a pantograph;
step 2, image preprocessing is carried out on the left half image and the right half image;
step 3, extracting feature points of the left half image and the right half image with the SIFT algorithm, and completing coarse matching of the feature points with a K-D tree;
step 4, screening the feature point pairs with the RANSAC algorithm, calculating the optimal transformation matrix, and eliminating mismatched feature point pairs to obtain effective matching feature point pairs;
step 5, directly splicing the registered left and right half images, and finally fusing the spliced area with the gradual-in and gradual-out method to obtain a full-pantograph image.
Compared with the prior art, the invention has the following remarkable advantages: (1) the SIFT and RANSAC algorithms register the image feature points accurately and with high robustness; (2) the gradual-in and gradual-out fusion algorithm gives spliced images of high fusion quality with no obvious splicing seam, and works stably.
Drawings
Fig. 1 is the flowchart of the pantograph image stitching method based on feature matching and gradual-in and gradual-out fusion.
Fig. 2 shows the left and right half pantograph images preprocessed by wavelet transform.
Fig. 3 shows the K-D tree matching result after SIFT feature point detection.
Fig. 4 shows the feature point matches remaining after RANSAC screening.
Fig. 5 shows the full-pantograph image obtained by gradual-in and gradual-out fusion.
Detailed Description
The invention discloses a pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion, which comprises the following steps:
step 1, acquiring a left half image and a right half image of a pantograph;
step 2, image preprocessing is carried out on the left half image and the right half image;
step 3, extracting feature points of the left half image and the right half image with the SIFT algorithm, and completing coarse matching of the feature points with a K-D tree;
step 4, screening the feature point pairs with the RANSAC algorithm, calculating the optimal transformation matrix, and eliminating mismatched feature point pairs to obtain effective matching feature point pairs;
step 5, directly splicing the registered left and right half images, and finally fusing the spliced area with the gradual-in and gradual-out method to obtain a full-pantograph image.
As a specific embodiment, step 2 performs image preprocessing on the left half image and the right half image, specifically by wavelet transform, as follows:
a first-level wavelet transform with the Haar wavelet function decomposes each image into sub-image components LL, HL, LH and HH, representing its approximation, horizontal, vertical and diagonal characteristics; the sub-image components LL, HL and LH are then superposed to obtain the image used for the subsequent feature extraction.
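For illustration, the following is a minimal Python sketch of this preprocessing step using PyWavelets and OpenCV; it is not part of the claimed method, and the rescaling back to 8 bits as well as the file names are assumptions, since the patent only specifies the Haar decomposition and the LL + HL + LH superposition.

```python
import cv2
import numpy as np
import pywt

def wavelet_preprocess(gray: np.ndarray) -> np.ndarray:
    # First-level 2-D Haar decomposition. PyWavelets returns the
    # approximation band and the (horizontal, vertical, diagonal) detail
    # bands, corresponding to the LL, HL, LH and HH components above.
    LL, (HL, LH, HH) = pywt.dwt2(gray.astype(np.float32), "haar")
    fused = LL + HL + LH  # superpose LL, HL, LH; discard the diagonal HH
    # Rescale to 8-bit for the SIFT stage (an assumption of this sketch).
    return cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

left = wavelet_preprocess(cv2.imread("left_half.png", cv2.IMREAD_GRAYSCALE))
right = wavelet_preprocess(cv2.imread("right_half.png", cv2.IMREAD_GRAYSCALE))
```

Note that a first-level decomposition halves the image resolution; the superposed image is what the following SIFT stage consumes.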
As a specific embodiment, step 3 extracts feature points of the left half image and the right half image with the SIFT algorithm and completes coarse matching of the feature points with a K-D tree, comprising the following steps:
step 31, constructing a Gaussian pyramid, where the number of layers n is determined by the original image size and the size of the top-layer image:

$n = \log_2\{\min(M, N)\} - t, \quad t \in [0, \log_2\{\min(M, N)\}]$

where M and N are the height and width of the original image, and t is the base-2 logarithm of the smaller dimension of the top-layer image;
step 32, subtracting adjacent levels within each octave of the Gaussian pyramid to obtain difference-of-Gaussian images, and performing extremum detection with the difference-of-Gaussian operator (illustrated in the first sketch after step 36):

$D(x, y, \sigma) = [G(x, y, k\sigma) - G(x, y, \sigma)] * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$

where D(x, y, σ) is the difference-of-Gaussian scale space, G(x, y, kσ) and G(x, y, σ) are Gaussian functions, L(x, y, kσ) and L(x, y, σ) are scale-space images of the input image I(x, y), x and y are pixel coordinates, k is the scale factor between adjacent scales, and σ is the scale parameter;
step 33, for each candidate feature point found in the difference-of-Gaussian space, computing its Hessian matrix and keeping it as a feature point only if its eigenvalues pass the edge-response test (a numeric form of this test is sketched after step 36):

$\dfrac{\operatorname{Tr}(H)^2}{\operatorname{Det}(H)} < \dfrac{(r+1)^2}{r}$

where

$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}$

D_xx is the second derivative of the candidate feature point in the horizontal direction; D_xy is the mixed derivative, taken first in the horizontal and then in the vertical direction; D_yy is the second derivative in the vertical direction; Tr(H) = D_xx + D_yy = α + β and Det(H) = D_xx D_yy - (D_xy)^2 = αβ, where α is the larger and β the smaller eigenvalue and r = α/β their ratio;
step 34, calculating the gradient magnitude m(x, y) and orientation θ(x, y) of each feature point:

$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$

$\theta(x, y) = \tan^{-1}\dfrac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$

where L(x, y) is the scale-space value at the feature point; the principal orientation of the feature point is determined from a histogram of the pixel gradients and orientations in its neighborhood;
step 35, rotating the coordinate axes to the principal orientation of the feature point, selecting a 16 × 16 pixel neighborhood centered on the feature point, and dividing this area into 4 × 4 = 16 sub-regions of 4 × 4 pixels each; computing an 8-bin orientation histogram in every sub-region gives each keypoint a 4 × 4 × 8 = 128-dimensional feature vector, which is finally normalized to complete feature point detection;
step 36, matching the feature points obtained in step 35 by Euclidean distance with the K-D tree algorithm to obtain coarse matching feature point pairs; code sketches of steps 32, 33 and 36 follow below.
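First, a small NumPy/OpenCV illustration of the difference-of-Gaussian operator of step 32; σ = 1.6 and k = √2 are conventional SIFT values assumed here, not values taken from the patent.

```python
import cv2
import numpy as np

img = cv2.imread("left_half.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
sigma, k = 1.6, np.sqrt(2)                         # assumed SIFT defaults
L_low = cv2.GaussianBlur(img, (0, 0), sigma)       # L(x, y, sigma)
L_high = cv2.GaussianBlur(img, (0, 0), k * sigma)  # L(x, y, k*sigma)
dog = L_high - L_low                               # D(x, y, sigma)
```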
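Next, a numeric form of the step-33 edge-response test; the threshold r = 10 is the value commonly used with SIFT and is an assumption, since the patent text does not state it.

```python
def passes_edge_test(dxx: float, dyy: float, dxy: float, r: float = 10.0) -> bool:
    tr = dxx + dyy              # Tr(H) = alpha + beta
    det = dxx * dyy - dxy ** 2  # Det(H) = alpha * beta
    if det <= 0:                # eigenvalues of opposite sign: reject
        return False
    return tr ** 2 / det < (r + 1) ** 2 / r

print(passes_edge_test(10.0, 8.0, 2.0))  # True: corner-like, well conditioned
print(passes_edge_test(50.0, 1.0, 0.0))  # False: edge-like response, rejected
```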
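Finally, a minimal OpenCV sketch of steps 31 to 36 as a whole: OpenCV's SIFT implementation builds the Gaussian pyramid, detects the difference-of-Gaussian extrema and produces the 128-dimensional descriptors internally, and a FLANN K-D tree matches them by Euclidean distance. Lowe's 0.75 ratio test is an assumed acceptance rule; the patent does not state how coarse matches are accepted. The variables left and right are the preprocessed images from the earlier sketch.

```python
import cv2

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)   # keypoints + 128-D descriptors
kp2, des2 = sift.detectAndCompute(right, None)

# algorithm=1 selects FLANN's K-D tree index for float descriptors.
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = flann.knnMatch(des1, des2, k=2)
coarse = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(coarse)} coarse matching feature point pairs")
```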
As a specific embodiment, step 4 screens the feature point pairs with the RANSAC algorithm, calculates the optimal transformation matrix, and eliminates mismatched feature point pairs to obtain effective matching feature point pairs, specifically comprising the following steps:
step 41, defining the homography transformation matrix A as

$A = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}$

so that

$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

where x and y are the coordinates of a point in the left half image, and x' and y' the coordinates of the corresponding point in the right half image;
selecting 4 feature point pairs at random each time, computing a homography matrix A from them, and keeping as the final result the matrix consistent with the most feature point pairs, where a pair is counted as consistent when its reprojection distance satisfies

$\sqrt{\left(x' - \dfrac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1}\right)^2 + \left(y' - \dfrac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1}\right)^2} < T$

where T is a constant threshold;
step 42, eliminating the mismatched feature point pairs according to the obtained homography matrix A to obtain the effective matching feature point pairs.
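A sketch of steps 41 and 42 using OpenCV's RANSAC homography estimator, continuing from the matching sketch above (kp1, kp2 and coarse); the 5.0 px reprojection threshold stands in for the patent's unspecified constant T.

```python
import cv2
import numpy as np

src = np.float32([kp1[m.queryIdx].pt for m in coarse]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in coarse]).reshape(-1, 1, 2)
A, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
valid = [m for m, keep in zip(coarse, mask.ravel()) if keep]  # effective pairs
print(f"{len(valid)} effective matching feature point pairs")
```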
As a specific embodiment, step 5 directly stitches the registered left and right half images and finally fuses the stitched region with the gradual-in and gradual-out method to obtain a full-pantograph image, specifically comprising the following steps:
step 51, performing cylindrical projection on the left half image and the right half image (a code sketch of this projection follows after step 53):

$a' = f \arctan\left(\dfrac{a - \text{width}/2}{f}\right) + f \arctan\left(\dfrac{\text{width}}{2f}\right)$

$b' = \dfrac{f\,(b - \text{height}/2)}{\sqrt{(a - \text{width}/2)^2 + f^2}} + \dfrac{\text{height}}{2}$

where a' and b' are the image coordinates after cylindrical projection, a and b the original image coordinates, width and height the image width and height, and f the camera focal length;
step 52, finding the overlapping part of the left half image and the right half image according to the homography matrix A, and translating the right half image into position for stitching;
step 53, fusing the stitched region with the gradual-in and gradual-out method, in which linearly varying weights are assigned to the pixels of the two images in the overlap region (see the blending sketch below):

$I(x, y) = \begin{cases} I_1(x, y), & (x, y) \in I_1 \text{ only} \\ k_1 I_1(x, y) + k_2 I_2(x, y), & (x, y) \in I_1 \cap I_2 \\ I_2(x, y), & (x, y) \in I_2 \text{ only} \end{cases}$

where I(x, y) is the fused pixel, I_1(x, y) a pixel of the image to be stitched, I_2(x, y) a pixel of the reference image, and k_1 and k_2 are weights satisfying k_1 + k_2 = 1 and 0 < k_1, k_2 < 1, with k_1 decreasing linearly from 1 to 0 across the overlap;
Finally, image fusion is completed and the full-pantograph image is obtained.
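A NumPy sketch of the step-51 cylindrical projection under the formulas reconstructed above; the focal length f is an assumed input, and the forward mapping is inverted so that cv2.remap can sample a source pixel for every output pixel.

```python
import cv2
import numpy as np

def cylindrical_project(img: np.ndarray, f: float) -> np.ndarray:
    h, w = img.shape[:2]
    bp, ap = np.indices((h, w), dtype=np.float32)  # output coords (b', a')
    # Invert a' = f*arctan((a - w/2)/f) + f*arctan(w/(2f)) for the source a.
    theta = (ap - f * np.arctan(w / (2 * f))) / f
    a = f * np.tan(theta) + w / 2
    # Invert b' = f*(b - h/2)/sqrt((a - w/2)^2 + f^2) + h/2 for the source b.
    b = (bp - h / 2) * np.sqrt((a - w / 2) ** 2 + f ** 2) / f + h / 2
    return cv2.remap(img, a, b, cv2.INTER_LINEAR)
```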
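And a sketch of the step-53 gradual-in and gradual-out blend; the two inputs are assumed to be already registered onto a common canvas of equal size, and the overlap column range [xl, xr) is assumed to come from the homography translation of step 52.

```python
import numpy as np

def fade_blend(img1: np.ndarray, img2: np.ndarray, xl: int, xr: int) -> np.ndarray:
    out = img1.astype(np.float32).copy()   # left-only region keeps I1
    out[:, xr:] = img2[:, xr:]             # right-only region takes I2
    # k1 falls linearly from 1 to 0 across the overlap; k2 = 1 - k1.
    k1 = (xr - np.arange(xl, xr, dtype=np.float32)) / float(xr - xl)
    k1 = k1[None, :, None] if img1.ndim == 3 else k1[None, :]
    out[:, xl:xr] = k1 * img1[:, xl:xr] + (1.0 - k1) * img2[:, xl:xr]
    return out.astype(np.uint8)
```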
The invention is described in further detail below with reference to the figures and specific embodiments.
Examples
With reference to fig. 1, the pantograph image stitching method based on feature matching and gradual-in and gradual-out fusion of the present invention includes the following steps:
s1: acquiring a left half image and a right half image of a pantograph;
s2: image preprocessing is carried out on the left half image and the right half image, and the method specifically comprises the following steps:
the images are preprocessed with a wavelet transform: a first-level decomposition with the Haar wavelet function yields sub-image components LL, HL, LH and HH, representing the approximation, horizontal, vertical and diagonal characteristics of the image; the components LL, HL and LH are superposed to obtain the image used for the subsequent feature extraction, as shown in fig. 2.
S3: calling an SIFT algorithm to extract feature points of the left half image and the right half image, and finishing rough matching of the feature points by using a K-D tree, wherein the method specifically comprises the following steps:
s31, constructing a Gaussian pyramid, wherein the number n of layers of the pyramid is determined according to the original size of the image and the size of the image at the top of the tower, and the calculation formula is as follows:
$n = \log_2\{\min(M, N)\} - t, \quad t \in [0, \log_2\{\min(M, N)\}]$

where M and N are the height and width of the original image, and t is the base-2 logarithm of the smaller dimension of the top-layer image;
s32, subtracting adjacent upper and lower layers of images in each group by using a Gaussian pyramid to obtain a Gaussian difference image, and performing extreme value detection by using a Gaussian difference operator, wherein the Gaussian difference operator comprises the following steps:
$D(x, y, \sigma) = [G(x, y, k\sigma) - G(x, y, \sigma)] * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$

where D(x, y, σ) is the difference-of-Gaussian scale space, G(x, y, kσ) and G(x, y, σ) are Gaussian functions, L(x, y, kσ) and L(x, y, σ) are scale-space images of the input image I(x, y), x and y are pixel coordinates, k is the scale factor between adjacent scales, and σ is the scale parameter;
s33, for candidate feature points obtained through the Gaussian difference space, carrying out feature value calculation through a Hessian matrix, and detecting whether the calculated feature values meet the following formula, if the following formula is met, the candidate feature points are reserved as the feature points:
$\dfrac{\operatorname{Tr}(H)^2}{\operatorname{Det}(H)} < \dfrac{(r+1)^2}{r}$

where

$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}$

D_xx is the second derivative of the candidate feature point in the horizontal direction; D_xy is the mixed derivative, taken first in the horizontal and then in the vertical direction; D_yy is the second derivative in the vertical direction; Tr(H) = D_xx + D_yy = α + β and Det(H) = D_xx D_yy - (D_xy)^2 = αβ, where α is the larger and β the smaller eigenvalue and r = α/β their ratio;
s34, calculating the module value m (x, y) and the direction theta (x, y) of the gradient of the characteristic point, wherein the formula is as follows:
$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$

$\theta(x, y) = \tan^{-1}\dfrac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$

where L(x, y) is the scale-space value at the feature point; the principal orientation of the feature point is determined from a histogram of the pixel gradients and orientations in its neighborhood;
and S35, rotating the coordinate axis to the direction of the characteristic point, taking the characteristic point as the center, selecting a 16 × 16 pixel neighborhood, and dividing the region into 8 4 × 4 sub-regions. Each key point forms a feature vector with dimensions of 4 multiplied by 8 which is 128, the feature vector is realized by calculating the direction histogram of 8 directions in each sub-area, and finally, feature vector normalization is carried out, and the detection of the feature point is completed.
And S36, matching the feature points obtained in S35 by Euclidean distance with the K-D tree algorithm to obtain coarse matching feature point pairs, as shown in fig. 3.
S4: calling RANSAC algorithm to screen the feature point pairs, calculating an optimal transformation matrix, and eliminating mismatching feature point pairs to obtain effective matching feature point pairs, wherein the method specifically comprises the following steps:
s41, defining a homography transformation matrix A as follows:
$A = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}$

so that

$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$
where x and y are the coordinates of a point in the left half image, and x' and y' the coordinates of the corresponding point in the right half image.
Selecting 4 pairs from all the characteristic point pairs each time, calculating a homography matrix A, and then selecting the matrix which meets the most characteristic point pairs as a final result, wherein the distance calculating method comprises the following steps:
Figure BDA0003599302790000073
wherein T is a constant threshold.
And S42, removing the mismatched feature point pairs according to the obtained homography matrix A to obtain the effective matching feature point pairs, as shown in fig. 4.
S5: directly splice the registered left and right half pantograph images, then fuse the seam region with the gradual-in and gradual-out method to obtain a smooth full-pantograph image; the concrete steps include:
s51, performing cylindrical projection on the left half image and the right half image, wherein the projection formula is as follows:
$a' = f \arctan\left(\dfrac{a - \text{width}/2}{f}\right) + f \arctan\left(\dfrac{\text{width}}{2f}\right)$

$b' = \dfrac{f\,(b - \text{height}/2)}{\sqrt{(a - \text{width}/2)^2 + f^2}} + \dfrac{\text{height}}{2}$

where a' and b' are the image coordinates after cylindrical projection, a and b the original image coordinates, width and height the image width and height, and f the camera focal length;
s52, finding the overlapping part of the left half image and the right half image according to the homography matrix A, and carrying out splicing displacement on the right half image;
s53, fusing the stitching region by using a gradual-in gradual-out method, wherein weights are linearly distributed to pixels of the two images in the overlapping region, and the formula is as follows:
$I(x, y) = \begin{cases} I_1(x, y), & (x, y) \in I_1 \text{ only} \\ k_1 I_1(x, y) + k_2 I_2(x, y), & (x, y) \in I_1 \cap I_2 \\ I_2(x, y), & (x, y) \in I_2 \text{ only} \end{cases}$

where I(x, y) is the fused pixel, I_1(x, y) a pixel of the image to be stitched, I_2(x, y) a pixel of the reference image, and k_1 and k_2 are weights satisfying k_1 + k_2 = 1 and 0 < k_1, k_2 < 1, with k_1 decreasing linearly from 1 to 0 across the overlap.
Finally, image fusion is completed and the full-pantograph image is obtained, as shown in fig. 5. As fig. 5 shows, the SIFT and RANSAC algorithms register the image feature points accurately and with high robustness, and the gradual-in and gradual-out fusion algorithm yields a spliced image of high fusion quality, with no obvious splicing seam and stable operation.

Claims (5)

1. A pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion is characterized by comprising the following steps:
step 1, acquiring a left half image and a right half image of a pantograph;
step 2, image preprocessing is carried out on the left half image and the right half image;
step 3, extracting feature points of the left half image and the right half image with the SIFT algorithm, and completing coarse matching of the feature points with a K-D tree;
step 4, screening the feature point pairs with the RANSAC algorithm, calculating the optimal transformation matrix, and eliminating mismatched feature point pairs to obtain effective matching feature point pairs;
step 5, directly splicing the registered left and right half images, and finally fusing the spliced area with the gradual-in and gradual-out method to obtain a full-pantograph image.
2. The pantograph image stitching method based on feature matching and gradual-in and gradual-out fusion according to claim 1, wherein the image preprocessing of the left half image and the right half image in step 2 is performed by wavelet transform, comprising the steps of:
decomposing each image, by a first-level wavelet transform with the Haar wavelet function, into sub-image components LL, HL, LH and HH representing its approximation, horizontal, vertical and diagonal characteristics; and superposing the sub-image components LL, HL and LH to obtain the image used for the subsequent feature extraction.
3. The pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion according to claim 1, wherein step 3 extracts feature points of the left half image and the right half image with the SIFT algorithm and completes coarse matching of the feature points with a K-D tree, specifically comprising the following steps:
step 31, constructing a Gaussian pyramid, where the number of layers n is determined by the original image size and the size of the top-layer image:

$n = \log_2\{\min(M, N)\} - t, \quad t \in [0, \log_2\{\min(M, N)\}]$

where M and N are the height and width of the original image, and t is the base-2 logarithm of the smaller dimension of the top-layer image;
step 32, subtracting adjacent levels within each octave of the Gaussian pyramid to obtain difference-of-Gaussian images, and performing extremum detection with the difference-of-Gaussian operator:

$D(x, y, \sigma) = [G(x, y, k\sigma) - G(x, y, \sigma)] * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma)$

where D(x, y, σ) is the difference-of-Gaussian scale space, G(x, y, kσ) and G(x, y, σ) are Gaussian functions, L(x, y, kσ) and L(x, y, σ) are scale-space images of the input image I(x, y), x and y are pixel coordinates, k is the scale factor between adjacent scales, and σ is the scale parameter;
step 33, for each candidate feature point found in the difference-of-Gaussian space, computing its Hessian matrix and keeping it as a feature point only if its eigenvalues pass the edge-response test:

$\dfrac{\operatorname{Tr}(H)^2}{\operatorname{Det}(H)} < \dfrac{(r+1)^2}{r}$

where

$H = \begin{bmatrix} D_{xx} & D_{xy} \\ D_{xy} & D_{yy} \end{bmatrix}$

D_xx is the second derivative of the candidate feature point in the horizontal direction; D_xy is the mixed derivative, taken first in the horizontal and then in the vertical direction; D_yy is the second derivative in the vertical direction; Tr(H) = D_xx + D_yy = α + β and Det(H) = D_xx D_yy - (D_xy)^2 = αβ, where α is the larger and β the smaller eigenvalue and r = α/β their ratio;
step 34, calculating the gradient magnitude m(x, y) and orientation θ(x, y) of each feature point:

$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}$

$\theta(x, y) = \tan^{-1}\dfrac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$

where L(x, y) is the scale-space value at the feature point; the principal orientation of the feature point is determined from a histogram of the pixel gradients and orientations in its neighborhood;
step 35, rotating the coordinate axes to the principal orientation of the feature point, selecting a 16 × 16 pixel neighborhood centered on the feature point, and dividing this area into 4 × 4 = 16 sub-regions of 4 × 4 pixels each; computing an 8-bin orientation histogram in every sub-region gives each keypoint a 4 × 4 × 8 = 128-dimensional feature vector, which is finally normalized to complete feature point detection;
step 36, matching the feature points obtained in step 35 by Euclidean distance with the K-D tree algorithm to obtain coarse matching feature point pairs.
4. The pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion according to claim 1, wherein step 4 screens the feature point pairs with the RANSAC algorithm, calculates the optimal transformation matrix, and eliminates mismatched feature point pairs to obtain effective matching feature point pairs, comprising the following steps:
step 41, defining the homography transformation matrix A as

$A = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{bmatrix}$

so that

$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim A \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$

where x and y are the coordinates of a point in the left half image, and x' and y' the coordinates of the corresponding point in the right half image;
selecting 4 feature point pairs at random each time, computing a homography matrix A from them, and keeping as the final result the matrix consistent with the most feature point pairs, where a pair is counted as consistent when its reprojection distance satisfies

$\sqrt{\left(x' - \dfrac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + 1}\right)^2 + \left(y' - \dfrac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + 1}\right)^2} < T$

where T is a constant threshold;
step 42, eliminating the mismatched feature point pairs according to the obtained homography matrix A to obtain the effective matching feature point pairs.
5. The pantograph image stitching method based on feature matching and gradual-in and gradual-out fusion according to claim 1, wherein step 5 directly stitches the registered left half image and right half image and finally fuses the stitching region with the gradual-in and gradual-out method to obtain the full-pantograph image, comprising the following steps:
step 51, performing cylindrical projection on the left half image and the right half image:

$a' = f \arctan\left(\dfrac{a - \text{width}/2}{f}\right) + f \arctan\left(\dfrac{\text{width}}{2f}\right)$

$b' = \dfrac{f\,(b - \text{height}/2)}{\sqrt{(a - \text{width}/2)^2 + f^2}} + \dfrac{\text{height}}{2}$

where a' and b' are the image coordinates after cylindrical projection, a and b the original image coordinates, width and height the image width and height, and f the camera focal length;
step 52, finding the overlapping part of the left half image and the right half image according to the homography matrix A, and translating the right half image into position for stitching;
step 53, fusing the stitched region with the gradual-in and gradual-out method, in which linearly varying weights are assigned to the pixels of the two images in the overlap region:

$I(x, y) = \begin{cases} I_1(x, y), & (x, y) \in I_1 \text{ only} \\ k_1 I_1(x, y) + k_2 I_2(x, y), & (x, y) \in I_1 \cap I_2 \\ I_2(x, y), & (x, y) \in I_2 \text{ only} \end{cases}$

where I(x, y) is the fused pixel, I_1(x, y) a pixel of the image to be stitched, I_2(x, y) a pixel of the reference image, and k_1 and k_2 are weights satisfying k_1 + k_2 = 1 and 0 < k_1, k_2 < 1, with k_1 decreasing linearly from 1 to 0 across the overlap;
and finally, completing image fusion to obtain the full-pantograph image.
CN202210396731.7A 2022-04-15 2022-04-15 Pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion Pending CN114881900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210396731.7A CN114881900A (en) 2022-04-15 2022-04-15 Pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion


Publications (1)

Publication Number Publication Date
CN114881900A true CN114881900A (en) 2022-08-09

Family

ID=82668898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210396731.7A Pending CN114881900A (en) 2022-04-15 2022-04-15 Pantograph image splicing method based on feature matching and gradual-in and gradual-out fusion

Country Status (1)

Country Link
CN (1) CN114881900A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116147525A (en) * 2023-04-17 2023-05-23 南京理工大学 Pantograph contour detection method and system based on improved ICP algorithm
CN116147525B (en) * 2023-04-17 2023-07-04 南京理工大学 Pantograph contour detection method and system based on improved ICP algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination