CN112862692A - Image splicing method applied to underground coal mine roadway - Google Patents

Image splicing method applied to underground coal mine roadway

Info

Publication number
CN112862692A
CN112862692A (application number CN202110338716.2A)
Authority
CN
China
Prior art keywords
image
matching
registered
line segment
directed line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110338716.2A
Other languages
Chinese (zh)
Inventor
程健 (Cheng Jian)
闫鹏鹏 (Yan Pengpeng)
王凯 (Wang Kai)
王瑞彬 (Wang Ruibin)
许鹏远 (Xu Pengyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Coal Science Research Institute
China Coal Research Institute CCRI
Original Assignee
Coal Science Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Coal Science Research Institute filed Critical Coal Science Research Institute
Priority to CN202110338716.2A priority Critical patent/CN112862692A/en
Publication of CN112862692A publication Critical patent/CN112862692A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30132 Masonry; Concrete

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image splicing method applied to underground coal mine roadways. First, a SIFT (Scale-Invariant Feature Transform) algorithm performs feature extraction and matching on coal mine roadway images to obtain coarse matching point pairs. A directed line segment model is then constructed from the coarse matching point pairs of adjacent images, and the direction and length attributes of the segments are used for a first round of mismatch rejection. Next, a feature-point directed line segment model and its direction labels are established within each image; the corresponding directed segments of adjacent images are matched by direction, and a probability statistical model performs a second round of mismatch rejection to obtain the final fine matching point pairs. Finally, the images are aligned and spliced with the AANAP (Adaptive As-Natural-As-Possible) algorithm and fused with a weighted average method to complete the splicing. Compared with other algorithms, the mismatch rejection algorithm has higher accuracy and better real-time performance, the final panoramic mosaic is of higher quality, and the method is better suited to the complex scenes of underground coal mine roadways.

Description

Image splicing method applied to underground coal mine roadway
Technical Field
The invention relates to an image splicing method applied to underground coal mine roadways, and belongs to the field of computer vision.
Background
In recent years, with the rapid, integrated development of big data, cloud computing and artificial intelligence, and breakthrough innovations in related software and hardware, intelligent coal mining has become a major strategic direction for the coal industry. The environment of underground coal mine roadways is harsh, and most information acquisition still depends on manual labour; video and images, as important carriers of environmental information, are gradually being applied to the daily production of coal mines. Splicing images of an underground roadway yields a panoramic image of the roadway and, from it, the spatial environment information of the roadway, which has guiding significance for spatial reconstruction of the roadway, personnel prevention and control, hazard identification and early warning, and coal mine excavation.
The core of an image splicing method is image registration, and the quality of the registration result depends on the accuracy of the matched image feature points. Since the illumination of underground coal mine roadways is unbalanced, existing feature extraction algorithms readily produce a certain number of mismatched points, so a good mismatch rejection method is of great significance to the registration result. Much work has been done on image splicing and mismatch elimination for underground coal mine roadways.
One study (master's thesis on video stitching for coal mine working faces, Xi'an University of Science and Technology, 2020) optimized the SIFT feature-extraction process with Moravec corner detection and PCA principal component analysis, screened matching feature points with an approximate-nearest-neighbour matching algorithm and an improved RANSAC algorithm to obtain the image transformation model, and realized stitching with a fade-in/fade-out fusion method. A study of image splicing in coal mines (Coal Mine Electromechanical, 2011) extracted image feature points with a CSS corner detection algorithm, matched them by normalized cross-correlation (NCC), and deleted mismatched pairs with RANSAC to finish the splicing. A large-parallax image splicing algorithm for underground roadways (Industrial and Mining Automation, 2020) performed feature point detection and matching with SIFT, grouped the feature matches by the multiple planes present in the images and solved the corresponding homography matrices, and finally aligned the images along a seam line to synthesize the spliced image. An underground video stitching algorithm based on improved speeded-up robust features (Industrial and Mining Automation, 2018) extracted video image feature points, dynamically tracked their number, selected the homography matrix according to that number, and fused each frame with a fade-in/fade-out weighted average to complete video stitching. A Retinex-enhanced underground image splicing method (Journal of Liaoning Technical University (Natural Science Edition), 2015) enhanced the images with a Retinex algorithm improved by local bilateral filtering, extracted features with SURF, refined the feature matches with RANSAC to obtain the transformation matrix, and completed the splicing with linear-gradient fusion. An image-enhancement-based downhole image mosaic (Computer Engineering and Applications, 2016) likewise enhanced the image with a local bilateral filtering algorithm, extracted features with SURF, refined the matches by nearest/second-nearest distance comparison to obtain the transformation matrix, and fused with a linear gradient. An applied study of image splicing technology in digital mines (master's thesis, 2014) first extracted feature points with SURF, removed points of low similarity using cosine similarity and accuracy, performed feature matching by Euclidean distance, and finally processed the image with bilinear interpolation and weighted averaging to obtain the final result. A study of feature-point-based coal mine image splicing (Coal Mine Machinery, 2012) extracted feature points with an adaptive-threshold CSS corner detection algorithm improved by curvature scale space, matched them with the NCC similarity measure, deleted mismatched pairs with RANSAC, and realized the splicing with a fade-in/fade-out blend. A SIFT study for the underground environment (Industrial and Mining Automation, 2011) constructed and matched SIFT feature vectors on images pre-processed with Curvelet denoising, and removed mismatched points with RANSAC to obtain the image transformation model and complete the splicing.
Disclosure of Invention
Aiming at the defects of the above techniques, the invention provides an image splicing method applied to underground coal mine roadways, addressing the complex environment and unbalanced illumination of such roadways, the large number of mismatched feature points, and the projection distortion that easily arises during image splicing.
In order to solve the technical problems, the invention provides an image splicing method applied to an underground coal mine roadway, which comprises the following specific steps:
step 1) using the SIFT (Scale-Invariant Feature Transform) algorithm to perform feature extraction and matching between a reference image and an image to be registered, wherein the two images are adjacent and share a partially overlapping area, so as to obtain a set of coarse matching point pairs between the reference image and the image to be registered;
step 2) constructing a directed line segment model by using the characteristic points corresponding to the coarse matching point pairs between the reference image and the image to be registered, and constructing a slope threshold interval and a length constraint model through the directed line segment model;
step 3) synthesizing a slope threshold interval and a length constraint model, and simultaneously removing mismatching point pairs in rough matching point pairs to obtain a group of new matching point pairs;
step 4) establishing a characteristic point directed line segment model and a direction label thereof in the reference image and the image to be registered in the step 1) based on the new matching point pairs;
step 5) carrying out direction matching on the reference image and the corresponding directed line segments in the image to be registered, and carrying out secondary elimination on the mismatching point pairs through a probability statistical model to obtain final fine matching point pairs;
and step 6) aligning the reference image and the image to be registered with the AANAP (Adaptive As-Natural-As-Possible) algorithm, and fusing the images with a weighted average method to complete the splicing of the reference image and the image to be registered.
The concrete content of the step 1) comprises the following steps:
step 11) performing SIFT feature extraction separately on a reference image I1 and an image to be registered I2 of the coal mine roadway, which have the same size and an overlapping area, to obtain the descriptors ds1 of the reference-image feature points, the descriptors ds2 of the image to be registered, the pixel-coordinate positions kp1 in the reference image, and the pixel-coordinate positions kp2 in the image to be registered;
step 12) calculating the Euclidean distance between the descriptor of the ith point in ds1 of the reference image and the descriptor of each point in ds2 of the image to be registered; let MIN_ij be the minimum such distance, attained at the jth point of ds2. If MIN_ij multiplied by 1.5 is still less than all the other Euclidean distances, the ith point of ds1 and the jth point of ds2 form a pair of matching points; otherwise they do not. Finally, n coarse matching point pairs are obtained from the positions kp1 in the reference image and the corresponding values in kp2 of the image to be registered, with coordinate sets

D1 = {(x1_i, y1_i)}, where i ∈ (1, n)   (1)
D2 = {(x2_i, y2_i)}, where i ∈ (1, n)   (2)
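The matching rule of step 12) can be sketched in plain Python (the function and variable names here are illustrative, not taken from the patent): each descriptor in ds1 is compared against every descriptor in ds2, and a pair is accepted only when 1.5 times the minimum Euclidean distance is still below every other distance to ds2.

```python
import math

def euclid(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def coarse_match(ds1, ds2, ratio=1.5):
    """For descriptor i of ds1, find its nearest neighbour j in ds2;
    accept (i, j) only if 1.5 * MIN_ij is still smaller than the
    distance to every other descriptor in ds2 (step 12)."""
    pairs = []
    for i, d1 in enumerate(ds1):
        dists = [euclid(d1, d2) for d2 in ds2]
        j = min(range(len(dists)), key=dists.__getitem__)
        others = [d for idx, d in enumerate(dists) if idx != j]
        if not others or dists[j] * ratio < min(others):
            pairs.append((i, j))
    return pairs

# toy 2-D "descriptors": each point of ds1 has one clear nearest neighbour
ds1 = [(0.0, 0.0), (5.0, 5.0)]
ds2 = [(10.0, 10.0), (0.1, 0.0), (5.0, 5.1)]
print(coarse_match(ds1, ds2))
```

In effect this is the usual nearest/second-nearest distance test written the way the patent states it: the minimum distance, inflated by the factor 1.5, must still beat every competitor.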
The specific steps of constructing the directed line segment model of the coarse matching points between adjacent images in step 2) are as follows:
step 21) placing the reference image I1 and the image to be registered I2 side by side, left and right, in the same window, and establishing a directed line segment model for all n coarse matching point pairs in the window:

XianD(i) = <(x1_i, y1_i), (x2_i + Iw, y2_i)>, where i ∈ (1, n)   (3)

In the formula: Iw is the width of the image; the reference image I1 and the image to be registered I2 have the same size, with height Ih (corresponding to the y coordinate) and width Iw (corresponding to the x coordinate).
The step of constructing a slope threshold interval model through the directed line segment model in the step 2) is as follows:
step 22) calculating the slope of each directed line segment model using the following formula:

slp_i = (y2_i - y1_i) / (x2_i + Iw - x1_i), where i ∈ (1, n)   (4)
Step 23) accurately finding a slope threshold interval, and knowing from image matching experience that the slope of the image matching directed line segment is approximately in the interval [ -1,1], so that the slope interval is divided into 22 intervals from negative infinity to positive infinity, wherein the intervals are [ - ∞ -1.0], [ -1.0, -0.9], [ -0.9, -0.8] … … [0.8, 0.9], [0.9, 1.0], [1.0, + ∞ ];
step 24) calculating the slopes of all the directed line segments, dividing each directed line segment into 22 intervals in the step 23) according to the size of the directed line segment, counting the number of the divided directed line segments in each interval, obtaining the interval [ slp1, slp2] with the largest number of directed line segments, and expanding the slope threshold interval outwards by 0.1 on the left and right in consideration of possible slight scale change and rotation transformation of the image to be registered relative to the reference image, namely obtaining the slope threshold interval [ slp1-0.1, slp2+0.1 ].
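Steps 22) to 24) amount to a histogram vote over segment slopes. A minimal sketch in Python (names illustrative; the patent does not prescribe an implementation):

```python
from bisect import bisect_right

def slope_threshold_interval(pairs, Iw):
    """Sketch of steps 22)-24): compute the slope of every inter-image
    directed segment (the second image is shifted right by the width Iw),
    vote the slopes into the 22 bins of step 23), take the densest bin
    [slp1, slp2] and widen it by 0.1 on each side."""
    edges = [i / 10 for i in range(-10, 11)]   # -1.0, -0.9, ..., 1.0
    counts = [0] * 22                          # bin 0: below -1.0; bin 21: 1.0 and above
    for (x1, y1), (x2, y2) in pairs:
        slp = (y2 - y1) / (x2 + Iw - x1)
        counts[bisect_right(edges, slp)] += 1
    b = max(range(22), key=counts.__getitem__)
    slp1, slp2 = edges[b - 1], edges[b]        # assumes the densest bin is interior
    return slp1 - 0.1, slp2 + 0.1

pairs = [((0, 0), (0, 1)), ((0, 10), (1, 12)), ((0, 20), (2, 20)), ((0, 0), (90, 95))]
print(slope_threshold_interval(pairs, Iw=100))
```

Here three consistent matches fall into the bin [0.0, 0.1] while the fourth (slope 0.5) is an outlier, so the widened threshold interval comes out as roughly [-0.1, 0.2].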
The specific steps of constructing the length constraint model through the directed line segment model in the step 2) are as follows:
step 25) obtaining the length of the jth directed line segment model between the images using the following formula:

d_j = sqrt((x2_j + Iw - x1_j)^2 + (y2_j - y1_j)^2), where j ∈ (1, p)   (5)
Step 26) establishing a length constraint model by using a length calculation formula of the line segment model:
Figure BDA0002998608050000051
wherein j ∈ (1, p) (6)
In the formula:
Figure BDA0002998608050000052
is the mean value of the squared Euclidean distances of all matching points, TdFor the experimentally obtained length-constrained control value, TdDepending on the size of the image involved in the image stitching, T is given when the image pixels are in the millionsdT is 4 if the image pixel value is less than million pixelsd=1.5。
The specific steps of the step 3) are as follows:
step 31) using the obtained slope threshold interval, eliminating from the coarse matching point pairs between the reference image and the image to be registered the matching points that do not satisfy the constraint, obtaining the index set of the p matching points, out of the n, whose slopes fall within the threshold interval:

P = { i | slp1 - 0.1 ≤ slp_i ≤ slp2 + 0.1 }, where i ∈ (1, n)   (7)

step 32) applying the length constraint of step 26) to the obtained p matching points, obtaining the index set of the k correct matching points, out of the p, that satisfy the length constraint:

K = { j | d_j satisfies the length constraint (6) }, where j ∈ (1, p)   (8)
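The two filters of step 3) can be sketched together in Python. This is a sketch under stated assumptions: the names are illustrative, and the length constraint is implemented here as one plausible reading of formula (6), namely that a segment's squared length may deviate from the mean squared length by at most Td times that mean.

```python
def reject_mismatches(pairs, Iw, slope_iv, Td):
    """First-stage rejection (step 3): keep a pair if the slope of its
    inter-image directed segment lies in the widened threshold interval
    (formula (7)); among those, keep it only if its squared length stays
    within Td times the mean squared length (assumed form of (6)/(8))."""
    lo, hi = slope_iv
    P = [i for i, ((x1, y1), (x2, y2)) in enumerate(pairs)
         if lo <= (y2 - y1) / (x2 + Iw - x1) <= hi]
    sq = {j: (pairs[j][1][0] + Iw - pairs[j][0][0]) ** 2
            + (pairs[j][1][1] - pairs[j][0][1]) ** 2
          for j in P}
    mean = sum(sq.values()) / len(sq)
    K = [j for j in P if abs(sq[j] - mean) <= Td * mean]
    return P, K

# three consistent pairs, one slope outlier (index 3), one length outlier (index 4)
pairs = [((0, 0), (0, 0)), ((0, 10), (0, 10)), ((0, 20), (0, 20)),
         ((0, 40), (90, 135)), ((0, 30), (300, 31))]
print(reject_mismatches(pairs, Iw=100, slope_iv=(-0.1, 0.2), Td=1.5))
```

On this toy input the slope filter drops pair 3 and the length filter then drops pair 4, leaving the three consistent matches.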
The specific steps of the step 4) are as follows:
step 41) after the new matching point pairs are obtained, establishing directed line segments within the reference image and within the image to be registered; the directed line segment model of the feature points inside each image, based on the new matching points, is:

XianD_inner1(i, j) = <(x1_i, y1_i), (x1_j, y1_j)>, XianD_inner2(i, j) = <(x2_i, y2_i), (x2_j, y2_j)>   (9)

In the formula: i ∈ (1, k-1), j ∈ (i+1, k), and (XianD_inner1(i, j), XianD_inner2(i, j)) are the segments joining two pairs of matching points.
Step 42) the directed line segment models in the reference image and in the image to be registered each have k(k - 1)/2 elements, and the elements of the two models correspond one-to-one by index value:

{<XianD_inner1(i, j), XianD_inner2(i, j)>}, where i ∈ (1, k-1), j ∈ (i+1, k)   (10)
Step 43) establishing a direction coordinate system with each feature point as the origin to determine the direction label of each directed line segment: according to the quadrant of the direction coordinate system in which the end points of XianD_inner1(i, j) and XianD_inner2(i, j) fall, each directed line segment is given a label flag:

flag(i, j) ∈ {1, 2, 3, 4}, equal to the quadrant in which the segment's end point lies relative to its start point   (11)
step 44) taking the label flag as the matching condition of the directed line segment, and then changing the directed line segment model into:
Figure BDA0002998608050000064
wherein
Figure BDA0002998608050000065
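Steps 41) to 44) can be sketched as follows in Python (names are illustrative, and the handling of points that fall exactly on a quadrant boundary is an assumption, since the patent does not specify it):

```python
def quadrant_flag(start, end):
    """Direction label of a directed line segment: the quadrant, in a
    coordinate system centred on the start point, in which the end point
    lies (step 43). Boundary handling here is an assumption."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if dx > 0 and dy >= 0:
        return 1
    if dx <= 0 and dy > 0:
        return 2
    if dx < 0 and dy <= 0:
        return 3
    return 4

def segment_flags(points):
    """Direction labels for every within-image directed segment (i, j)
    with i < j, i.e. k(k-1)/2 segments for k points (steps 41-44)."""
    k = len(points)
    return {(i, j): quadrant_flag(points[i], points[j])
            for i in range(k - 1) for j in range(i + 1, k)}

print(segment_flags([(0, 0), (1, 1), (0, 2)]))
```

The matching condition of step 51) is then simply that the flag of segment (i, j) in the reference image equals the flag of segment (i, j) in the image to be registered.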
The specific steps of the step 5) are as follows:
step 51) if flag1(i, j) = flag2(i, j), then XianD_inner1(i, j) and XianD_inner2(i, j) are a pair of correctly matched line segments, and the end points of the two corresponding segments, (x1_i, y1_i) with (x2_i, y2_i) and (x1_j, y1_j) with (x2_j, y2_j), are all correct matching points; otherwise they are all treated as mismatching points;
step 52) judging whether each pair of matching points is a mismatching point by counting how many times it is classified as one, for which a count matrix TongJ ∈ R^(2×k) is established:

TongJ(1, i) = i, TongJ(2, i) = 0 initially, where i ∈ (1, k)   (13)

Step 53) the number of times each pair of matching points is judged a mismatching point is accumulated in the count matrix TongJ, giving the probability statistical model of each pair:

P(i) = TongJ(2, i) / (k - 1), where i ∈ (1, k)   (14)
Step 54) setting a threshold Tp for the probability statistical model; if the probability of a certain pair of matching points is greater than the threshold, the pair is a mismatching point, and the index values of the l correct matching points are obtained:

L = { i | P(i) ≤ Tp }, where i ∈ (1, k)   (15)
Step 55) obtaining the l index values of the correct matching points in the original coordinate sets by mapping L back through K (formula (8)) and P (formula (7)):

IndexFinal = P(K(L))   (16)

Combining formulas (1), (2) and (16), the coordinate values of the correct matching points after the mismatching points have been removed are:

D'1 = { (x1_i, y1_i) | i ∈ IndexFinal }, D'2 = { (x2_i, y2_i) | i ∈ IndexFinal }   (17)
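The second-stage rejection of steps 51) to 54) can be sketched in Python. This is a sketch under stated assumptions: when the direction labels of segment (i, j) disagree between the two images, both endpoints i and j collect a mismatch vote, and the probability of point i is taken here as its vote count divided by the k - 1 segments it participates in (the exact normalisation of formula (14) is an assumption).

```python
def second_stage_rejection(flags1, flags2, k, Tp=0.5):
    """Steps 51)-54): every within-image segment (i, j) compares its
    direction labels in the two images; on disagreement, both endpoints
    collect a mismatch vote.  Points whose mismatch probability
    votes / (k - 1) exceeds the threshold Tp are rejected."""
    votes = [0] * k
    for (i, j), f1 in flags1.items():
        if f1 != flags2[(i, j)]:
            votes[i] += 1
            votes[j] += 1
    return [i for i in range(k) if votes[i] / (k - 1) <= Tp]

# 4 points; every segment touching point 3 has a different label in image 2
flags1 = {(i, j): 1 for i in range(3) for j in range(i + 1, 4)}
flags2 = dict(flags1)
for pair in [(0, 3), (1, 3), (2, 3)]:
    flags2[pair] = 2
print(second_stage_rejection(flags1, flags2, k=4))
```

Point 3 collects 3 mismatch votes out of 3 segments (probability 1.0) and is rejected, while points 0 to 2 each collect one vote (probability 1/3) and survive.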
the concrete steps of the step 6) are as follows:
step 61) for the obtained correct matching point D'1And D'2Using AANAP algorithmTo align the reference image and the image to be registered, i.e. image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, fusing the images with the weighted average method, whose calculation formula is:

I(x, y) = I1(x, y),                      (x, y) in I1 only
I(x, y) = w1·I1(x, y) + w2·I2(x, y),     (x, y) ∈ I1 ∩ I2
I(x, y) = I2(x, y),                      (x, y) in I2 only   (18)

In the formula: I1 and I2 are the input images with an adjacent overlapping region, I(x, y) is the final output fused image, I1 ∩ I2 denotes the overlap region of I1 and I2, and w1 and w2 are weights with w1 + w2 = 1 and w1, w2 ∈ (0, 1). The weights are determined as follows:

w1 = (xr - x) / (xr - xl), w2 = (x - xl) / (xr - xl)   (19)

In the formula: xl and xr are the left and right boundaries of the overlap region, and x is the column index of the point in the image.
And 63) finishing image splicing to obtain a panoramic spliced image.
Beneficial effects:
The method has a stronger ability to eliminate mismatches, higher real-time performance, and better accuracy; it realizes panoramic splicing of underground coal mine roadway images and, compared with traditional mismatch-elimination methods, requires fewer feature points while giving a better splicing result.
The mismatch-elimination algorithm based on the feature-point directed line segment model can effectively remove mismatches and yield a more accurate image transformation model, with faster running time, higher matching accuracy, and real-time capability. The AANAP algorithm then aligns and splices the images, so that even with few feature points, provided their accuracy is high, both the splicing result and its naturalness of appearance are better.
Drawings
FIG. 1 is a schematic diagram of the directed line segments between matching points of adjacent images according to the present invention,
FIG. 2 is a schematic diagram of the feature-point directed line segments within an image according to the present invention,
FIG. 3 is a schematic diagram of the direction coordinate system of the feature points in an image according to the present invention,
FIG. 4 is a flow chart of the image splicing method applied to an underground coal mine roadway.
Detailed Description
The present invention is further described with reference to the accompanying drawings, and the following examples are only for clearly illustrating the technical solutions of the present invention, and should not be taken as limiting the scope of the present invention.
Example 1: image splicing method applied to underground coal mine roadway
The invention discloses an image splicing method applied to underground coal mine roadways. First, the SIFT algorithm performs feature extraction and matching on coal mine roadway images to obtain coarse matching point pairs. A directed line segment model is then constructed from the coarse matching point pairs of adjacent images, and the direction and length attributes of the segments are used for a first round of mismatch rejection. Next, a feature-point directed line segment model and its direction labels are established within each image; the corresponding directed segments of adjacent images are matched by direction, and a probability statistical model performs a second round of mismatch rejection to obtain the final fine matching point pairs. Finally, the images are aligned and spliced with the AANAP algorithm and fused with a weighted average method to complete the splicing. Compared with other algorithms, the mismatch rejection algorithm has higher accuracy and better real-time performance, the final panoramic mosaic is of higher quality, and the method is better suited to underground coal mine roadway images.
As shown in fig. 4, the image stitching method applied to the coal mine underground roadway is adopted in the invention as follows:
step 1) using the SIFT (Scale-Invariant Feature Transform) algorithm to perform feature extraction and matching between a reference image and an image to be registered, wherein the two images are adjacent and share a partially overlapping area, so as to obtain a set of coarse matching point pairs between the reference image and the image to be registered;
the specific process is as follows:
step 11) performing SIFT feature extraction separately on a reference image I1 and an image to be registered I2 of the coal mine roadway, which have the same size and an overlapping area, to obtain the descriptors ds1 of the reference-image feature points, the descriptors ds2 of the image to be registered, the pixel-coordinate positions kp1 in the reference image, and the pixel-coordinate positions kp2 in the image to be registered;
step 12) calculating the Euclidean distance between the descriptor of the ith point in ds1 of the reference image and the descriptor of each point in ds2 of the image to be registered; let MIN_ij be the minimum such distance, attained at the jth point of ds2. If MIN_ij multiplied by 1.5 is still less than all the other Euclidean distances, the ith point of ds1 and the jth point of ds2 form a pair of matching points; otherwise they do not. Finally, n coarse matching point pairs are obtained from the positions kp1 in the reference image and the corresponding values in kp2 of the image to be registered, with coordinate sets

D1 = {(x1_i, y1_i)}, where i ∈ (1, n)   (1)
D2 = {(x2_i, y2_i)}, where i ∈ (1, n)   (2)
As shown in fig. 1, step 2) constructs a directed line segment model using the feature points corresponding to the coarse matching point pairs between the reference image and the image to be registered, where points A to F in the reference image correspond one-to-one to points a to f in the image to be registered, and a slope threshold interval and a length constraint model are constructed from the directed line segment model;
the specific process is as follows:
step 21) placing the reference image I1 and the image to be registered I2 left and right in the same window, in which the n coarse matching point pairs obtained are paired as shown in fig. 1, where <Aa>, <Bb>, <Cc>, <Mm>, <Ee>, <Ff> are examples of directed line segments formed by matching point pairs between the images; a directed line segment model is established for all coarse matching point pairs:

XianD(i) = <(x1_i, y1_i), (x2_i + Iw, y2_i)>, where i ∈ (1, n)   (3)

In the formula: Iw is the width of the image; the reference image I1 and the image to be registered I2 have the same size, with height Ih (corresponding to the y coordinate) and width Iw (corresponding to the x coordinate).
Step 22) calculating the slope of the directed line segment model using the following formula:
Figure BDA0002998608050000096
wherein i is an element (1, n) (4)
Step 23) accurately finding a slope threshold interval, and knowing from image matching experience that the slope of the image matching directed line segment is approximately in the interval [ -1,1], so that the slope interval is divided into 22 intervals from negative infinity to positive infinity, wherein the intervals are [ - ∞ -1.0], [ -1.0, -0.9], [ -0.9, -0.8] … … [0.8, 0.9], [0.9, 1.0], [1.0, + ∞ ];
step 24) calculating the slopes of all the directed line segments, assigning each to one of the 22 intervals of step 23) according to its value, and counting the number of directed line segments in each interval to obtain the interval [slp1, slp2] containing the most; considering possible slight scale change and rotation of the image to be registered relative to the reference image, the interval is widened outwards by 0.1 on each side, giving the slope threshold interval [slp1 - 0.1, slp2 + 0.1].
Step 25) obtaining the length of the jth directed line segment model between the images by using the following formula:
Figure BDA0002998608050000101
wherein j ∈ (1, p) (5)
Step 26) establishing a length constraint model from the segment-length formula, which keeps the jth pair only if its squared length stays close to the mean:

|d_j^2 - d2mean| ≤ Td · d2mean, where j ∈ (1, p)   (6)

In the formula: d2mean is the mean of the squared Euclidean distances of all matching points, and Td is the experimentally obtained length-constraint control value. Td depends on the image size involved in the splicing; experiments show that Td = 4 is preferred when the images are of megapixel scale, and Td = 1.5 when the image is smaller than a megapixel.
Step 3) combine the slope threshold interval and the length constraint model to remove the mismatching point pairs from the rough matching point pairs, obtaining a new set of matching point pairs;
as shown in fig. 3, the specific process is as follows:
Step 31) use the obtained slope threshold interval to eliminate, from the rough matching point pairs between the reference image and the image to be registered, the matching points that violate the constraint, obtaining the index set of the p matching points (out of the n) that satisfy the slope threshold interval
Figure BDA0002998608050000104
Figure BDA0002998608050000105
where i ∈ (1, n) (7)
Step 32) apply the length constraint of step 26) to the p matching points obtained above, yielding the index set of the k correct matching points (out of the p) that satisfy the length constraint
Figure BDA0002998608050000106
Figure BDA0002998608050000107
Wherein j ∈ (1, p) (8)
Fig. 2 is a schematic diagram of the directed line segments of feature points in an image, and Fig. 3 is a schematic diagram of the direction coordinate system of a feature point. Step 4) establishes, based on the new matching point pairs, the directed line segment model and direction labels of the feature points in the reference image and the image to be registered of step 1);
the specific process is as follows:
Step 41) after obtaining the new matching point pairs, establish directed line segments in the reference image and the image to be registered. As shown in Fig. 2, (B, B), (C, C), (H, H), (Q, Q), (V, V) are matching points obtained through the slope threshold interval and the length constraint, and BC, BH, BQ, BV are examples of directed line segments in the reference image and in the image to be registered; the segments correspond to each other through their matching points, i.e. (BC, BC), (BH, BH), (BQ, BQ) and (BV, BV) are four matched line segments formed by the five pairs of matching points. A directed line segment model of the feature points in the image is then constructed from the new matching points as follows:
Figure BDA0002998608050000111
in the formula: i ∈ (1, k-1), j ∈ (i+1, k), and (XianD_inner1(i, j), XianD_inner2(i, j)) are two pairs of matching points.
Step 42) the directed line segment models of the reference image and of the image to be registered each contain
Figure BDA0002998608050000112
elements; the elements of the two models correspond one to one by index value, so they can be expressed as:
Figure BDA0002998608050000113
wherein
Figure BDA0002998608050000114
Step 43) establish a direction coordinate system centred at the feature point (as shown in Fig. 3), where <P, P1>, <P, P2>, <P, P3>, <P, P4> each denote a direction and correspond one to one to the four quadrants of the coordinate system; this fixes the direction label of a directed line segment, which is determined by finding
Figure BDA0002998608050000115
And
Figure BDA0002998608050000116
lie in which quadrant of the direction coordinate system; the directed line segment is then given the label flag:
Figure BDA0002998608050000117
Step 44) take the label flag as the matching condition of the directed line segments; the directed line segment model then becomes:
Figure BDA0002998608050000118
wherein
Figure BDA0002998608050000119
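Steps 41) through 44) can be illustrated as follows. The quadrant numbering is a hypothetical convention standing in for the <P, P1> … <P, P4> directions of Fig. 3; the idea, labelling every directed segment by the quadrant of its direction vector and requiring matched segments to share the label, is as described above.

```python
def quadrant_flag(p_from, p_to):
    # Quadrant (1..4) of the vector p_from -> p_to in a coordinate
    # system centred on the feature point. The numbering here is an
    # illustrative assumption, not the patent's exact convention.
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    if dx >= 0 and dy >= 0:
        return 1
    if dx < 0 and dy >= 0:
        return 2
    if dx < 0 and dy < 0:
        return 3
    return 4

def segment_flags(points):
    # Direction labels for all k(k-1)/2 directed segments among k points.
    k = len(points)
    return {(i, j): quadrant_flag(points[i], points[j])
            for i in range(k - 1) for j in range(i + 1, k)}
```

Corresponding segments in the two images are then compared flag against flag in step 51).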
Step 5) perform direction matching on the corresponding directed line segments of the reference image and the image to be registered, and perform a second elimination of mismatching point pairs through a probability statistical model to obtain the final fine matching point pairs;
the specific process is as follows:
Step 51) if
Figure BDA0002998608050000121
then
Figure BDA0002998608050000122
And
Figure BDA0002998608050000123
are a pair of correctly matched line segments, i.e.
Figure BDA0002998608050000124
And
Figure BDA0002998608050000125
are a pair of correctly matched line segments, and the endpoints of the two corresponding line segments
Figure BDA0002998608050000126
And
Figure BDA0002998608050000127
are all correct matching points; otherwise they are all mismatching points;
Step 52) judge whether each pair of matching points is a mismatch by counting the number of times it is classified as a mismatching point, establishing a count matrix TongJ ∈ R^(2×k):
Figure BDA0002998608050000128
Wherein i ∈ (1, k) (13)
Step 53) place in the count matrix TongJ the number of times each pair of matching points is judged a mismatching point, thereby obtaining the probability statistical model for each pair of matching points:
Figure BDA0002998608050000129
wherein i ∈ (1, k) (14)
Step 54) set a threshold Tp for the probability statistical model; if the probability of a pair of matching points exceeds the threshold, the pair is declared a mismatch, and one obtains the
Figure BDA00029986080500001210
correct matching point index values:
Figure BDA00029986080500001211
wherein i ∈ (1, k) (15)
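Steps 51) through 54) reduce to vote counting per matching point. In the sketch below, the normalisation by k-1 (each point lies on k-1 directed segments) and the threshold value t_p are illustrative assumptions, as equations (13) to (15) are reproduced only as images.

```python
import numpy as np

def secondary_elimination(flags1, flags2, k, t_p=0.5):
    # flags1/flags2: direction labels {(i, j): flag} for all i < j pairs
    # of the k matching points, in the reference image and the image to
    # be registered. A pair of corresponding segments whose labels
    # disagree marks both of its endpoints as suspected mismatches.
    votes = np.zeros(k)
    for (i, j), f1 in flags1.items():
        if f1 != flags2[(i, j)]:
            votes[i] += 1
            votes[j] += 1
    prob = votes / (k - 1)  # each point lies on k-1 directed segments
    return np.flatnonzero(prob <= t_p)  # indices of retained matches
```

Points whose mismatch frequency stays at or below the threshold survive as the final fine matches.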
Step 55) according to formula (8) and formula (15), obtain the l index values of the correct matching points in the original coordinate set
Figure BDA00029986080500001212
Figure BDA00029986080500001213
Combining formulas (1), (2) and (16), the coordinate values of the correct matching points after the mismatching points have been removed are obtained as:
Figure BDA00029986080500001214
Step 6) align the reference image and the image to be registered using the AANAP (Adaptive As-Natural-As-Possible) algorithm, fuse the images with a weighted average method, and complete the stitching of the reference image and the image to be registered;
the specific process is as follows:
Step 61) for the obtained correct matching points D1′ and D2′, use the AANAP algorithm to align the reference image and the image to be registered, i.e. perform image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, fusing the images by using a weighted average method, wherein the calculation formula of the weighted average method is as follows:
Figure BDA0002998608050000131
in the formula: i is1And I2For input images with adjacent overlapping regions, I (x, y) is the final output fused image, I1∩I2Is represented by1And I2Overlap region, w1And w2Is a weighted value, and w1+w2=1,w1,w1E (0, 1). The weights are determined as follows:
Figure BDA0002998608050000132
in the formula: xl and xr are the left and right boundaries of the overlap region, and x is the column index of the point in the image.
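The weighting above is ordinary linear (feathering) blending across the overlap. A grayscale sketch, assuming the overlap spans columns [xl, xr) of two already-registered images:

```python
import numpy as np

def blend_overlap(img1, img2, xl, xr):
    # img1/img2: same-size 2-D (grayscale) arrays already registered into
    # a common frame; the overlap spans columns [xl, xr). Left of the
    # overlap only img1 contributes, right of it only img2.
    out = img1.astype(np.float64).copy()
    cols = np.arange(xl, xr)
    w1 = (xr - cols) / float(xr - xl)  # 1 at the left edge, falls to 0 at the right
    out[:, xl:xr] = w1 * img1[:, xl:xr] + (1.0 - w1) * img2[:, xl:xr]
    out[:, xr:] = img2[:, xr:]
    return out
```

For colour images the same weights would be applied per channel.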
And 63) finishing image splicing to obtain a panoramic spliced image.

Claims (9)

1. An image splicing method applied to a coal mine underground roadway is characterized by comprising the following steps:
step 1) using the SIFT (Scale-Invariant Feature Transform) algorithm, performing feature extraction and matching between a reference image and an image to be registered that are adjacent and share a partially overlapping region, so as to obtain a set of rough matching point pairs between the reference image and the image to be registered;
step 2) constructing a directed line segment model by using the characteristic points corresponding to the coarse matching point pairs between the reference image and the image to be registered, and constructing a slope threshold interval and a length constraint model through the directed line segment model;
step 3) combining the slope threshold interval and the length constraint model, removing the mismatching point pairs from the rough matching point pairs to obtain a new set of matching point pairs;
step 4) establishing a characteristic point directed line segment model and a direction label thereof in the reference image and the image to be registered in the step 1) based on the new matching point pairs;
step 5) performing direction matching on the corresponding directed line segments of the reference image and the image to be registered, and performing a second elimination of mismatching point pairs through a probability statistical model to obtain the final fine matching point pairs;
and step 6) aligning the reference image and the image to be registered using the AANAP (Adaptive As-Natural-As-Possible) algorithm, and fusing the images with a weighted average method to complete the stitching of the reference image and the image to be registered.
2. The image splicing method applied to the coal mine underground roadway, according to claim 1, is characterized in that: the concrete content of the step 1) comprises the following steps:
step 11) performing SIFT feature extraction separately on a reference image I1 and an image I2 to be registered, two same-size coal mine roadway images with an overlapping region, obtaining the descriptors ds1 of the reference image feature points, the descriptors ds2 of the feature points of the image to be registered, the pixel coordinates kp1 in the reference image and the pixel coordinates kp2 in the image to be registered;
step 12) computing the Euclidean distance between the descriptor of the i-th point in ds1 of the reference image and the descriptor of every point in ds2 of the image to be registered, obtaining the minimum Euclidean distance MINij between the i-th point of ds1 and the j-th point of ds2; if MINij multiplied by 1.5 is still less than all the other Euclidean distances, the i-th point of ds1 and the j-th point of ds2 are a pair of matching points, otherwise they are not. Finally, using the pixel coordinates kp1 in the reference image and the corresponding values in kp2 of the image to be registered, n rough matching point pairs are obtained, whose coordinate values are
Figure FDA0002998608040000021
And
Figure FDA0002998608040000022
Figure FDA0002998608040000023
Figure FDA0002998608040000024
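The distance test of step 12) is, in effect, an inverse Lowe-style ratio test: the nearest neighbour is accepted only if 1.5 times its distance still undercuts every other distance. A sketch over raw descriptor arrays (the function name is illustrative; in practice ds1 and ds2 would come from a SIFT implementation such as OpenCV's):

```python
import numpy as np

def rough_match(ds1, ds2):
    # ds1/ds2: (n1, d) and (n2, d) descriptor arrays of the reference
    # image and the image to be registered. Returns index pairs (i, j)
    # passing the "1.5 * min < all other distances" test of step 12).
    matches = []
    for i, d in enumerate(ds1):
        dists = np.linalg.norm(ds2 - d, axis=1)
        j = int(np.argmin(dists))
        others = np.delete(dists, j)
        if others.size == 0 or dists[j] * 1.5 < others.min():
            matches.append((i, j))
    return matches
```

The matched indices are then mapped through kp1 and kp2 to obtain the n rough matching coordinate pairs.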
3. the image splicing method applied to the coal mine underground roadway, according to claim 2, is characterized in that: the specific steps of constructing the model of the directional line segments of the rough matching points between the adjacent images in the step 2) are as follows:
step 21) placing the reference image I1 and the image I2 to be registered into the same window in left-right order, and for the n rough matching point pairs, establishing a directed line segment model for all rough matching point pairs in the window:
Figure FDA0002998608040000025
in the formula: i iswIs the width of the picture, reference picture I1And image I to be registered2Are the same in size and are all I in heighthCorresponding to the y value of the coordinate, the width is IwCorresponding to the x value of the coordinate.
4. The image splicing method applied to the coal mine underground roadway according to claim 3, characterized in that: the steps of constructing a slope threshold interval model through the directed line segment model in step 2) are as follows:
step 22) calculating the slope of the directed line segment model using the following formula:
Figure FDA0002998608040000026
step 23) determining the slope threshold interval: image-matching experience shows that the slopes of correctly matched directed line segments lie approximately in [-1, 1], so the slope axis is divided into 22 intervals from negative infinity to positive infinity: [-∞, -1.0], [-1.0, -0.9], [-0.9, -0.8], …, [0.8, 0.9], [0.9, 1.0], [1.0, +∞];
step 24) calculating the slopes of all directed line segments, assigning each segment to one of the 22 intervals of step 23) according to its slope, counting the number of segments in each interval, and taking the interval [slp1, slp2] with the largest count; considering that the image to be registered may undergo slight scale change and rotation relative to the reference image, the interval is expanded outward by 0.1 on each side, giving the slope threshold interval [slp1-0.1, slp2+0.1].
5. The image splicing method applied to the coal mine underground roadway according to claim 4, characterized in that: the specific steps of constructing the length constraint model through the directed line segment model in step 2) are as follows:
step 25) obtaining the length of the jth directed line segment model between the images by using the following formula:
Figure FDA0002998608040000031
step 26) establishing a length constraint model by using a length calculation formula of the line segment model:
Figure FDA0002998608040000032
in the formula:
Figure FDA0002998608040000033
is the mean of the squared Euclidean distances of all matching points, and Td is an experimentally determined length-constraint control value that depends on the size of the images being stitched: Td = 4 when the images are on the order of a million pixels or more, and Td = 1.5 when below a million pixels.
6. The image splicing method applied to the coal mine underground roadway according to claim 1, characterized in that: the specific steps of step 3) are as follows:
step 31) using the obtained slope threshold interval, eliminating from the rough matching point pairs between the reference image and the image to be registered the matching points that violate the constraint, obtaining the index set of the p matching points (out of the n) that satisfy the slope threshold interval
Figure FDA0002998608040000034
Figure FDA0002998608040000035
Step 32) applying the length constraint of step 26) to the p matching points obtained, yielding the index set of the k correct matching points (out of the p) that satisfy the length constraint
Figure FDA0002998608040000036
Figure FDA0002998608040000037
7. The image splicing method applied to the coal mine underground roadway according to claim 2, characterized in that: the specific steps of step 4) are as follows:
step 41) after obtaining the new matching point pair, establishing a directed line segment in the reference image and the image to be registered, and constructing a directed line segment model of the feature point in the image based on the new matching point as follows:
Figure FDA0002998608040000038
in the formula: i ∈ (1, k-1), j ∈ (i+1, k), and (XianD_inner1(i, j), XianD_inner2(i, j)) are two pairs of matching points.
Step 42) the directed line segment models of the reference image and of the image to be registered each contain
Figure FDA0002998608040000041
elements; the elements of the two models correspond one to one by index value and are expressed as:
Figure FDA0002998608040000042
step 43) establishing a direction coordinate system centred at the feature point to determine the direction labels of the directed line segments; a label is determined by finding
Figure FDA0002998608040000043
And
Figure FDA0002998608040000044
lie in which quadrant of the direction coordinate system; the directed line segment is then given the label flag:
Figure FDA0002998608040000045
step 44) taking the label flag as the matching condition of the directed line segments; the directed line segment model then becomes:
Figure FDA0002998608040000046
8. The image splicing method applied to the coal mine underground roadway according to claim 2, characterized in that: the specific steps of step 5) are as follows:
step 51) if
Figure FDA0002998608040000047
then
Figure FDA0002998608040000048
And
Figure FDA0002998608040000049
are a pair of correctly matched line segments, i.e.
Figure FDA00029986080400000410
And
Figure FDA00029986080400000411
are a pair of correctly matched line segments, and the endpoints of the two corresponding line segments
Figure FDA00029986080400000412
And
Figure FDA00029986080400000413
are all correct matching points; otherwise they are all mismatching points;
step 52) judging whether each pair of matching points is a mismatch by counting the number of times it is classified as a mismatching point, establishing a count matrix TongJ ∈ R^(2×k):
Figure FDA00029986080400000414
Step 53) placing in the count matrix TongJ the number of times each pair of matching points is judged a mismatching point, thereby obtaining the probability statistical model for each pair of matching points:
Figure FDA00029986080400000415
step 54) setting a threshold Tp for the probability statistical model; if the probability of a pair of matching points exceeds the threshold, the pair is declared a mismatch, and one obtains the
Figure FDA0002998608040000051
correct matching point index values:
Figure FDA0002998608040000052
step 55) according to formula (8) and formula (15), obtaining the l index values of the correct matching points in the original coordinate set
Figure FDA0002998608040000053
Figure FDA0002998608040000054
Combining formulas (1), (2) and (16), the coordinate values of the correct matching points after the mismatching points have been removed are obtained as:
Figure FDA0002998608040000055
9. The image splicing method applied to the coal mine underground roadway according to claim 8, characterized in that: the specific steps of step 6) are as follows:
step 61) for the obtained correct matching points D1′ and D2′, using the AANAP algorithm to align the reference image and the image to be registered, i.e. image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, fusing the images by using a weighted average method, wherein the calculation formula of the weighted average method is as follows:
Figure FDA0002998608040000056
in the formula: i is1And I2For input images with adjacent overlapping regions, I (x, y) is the final output fused image, I1∩I2Is represented by1And I2Overlap region, w1And w2Is a weighted value, and w1+w2=1,w1,w1E (0, 1). The weights are determined as follows:
Figure FDA0002998608040000057
in the formula: xl and xr are the left and right boundaries of the overlap region, and x is the column index of the point in the image.
And 63) finishing image splicing to obtain a panoramic spliced image.
CN202110338716.2A 2021-03-30 2021-03-30 Image splicing method applied to underground coal mine roadway Pending CN112862692A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110338716.2A CN112862692A (en) 2021-03-30 2021-03-30 Image splicing method applied to underground coal mine roadway

Publications (1)

Publication Number Publication Date
CN112862692A true CN112862692A (en) 2021-05-28

Family

ID=75993203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110338716.2A Pending CN112862692A (en) 2021-03-30 2021-03-30 Image splicing method applied to underground coal mine roadway

Country Status (1)

Country Link
CN (1) CN112862692A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886569A (en) * 2014-04-03 2014-06-25 北京航空航天大学 Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images
CN110310331A (en) * 2019-06-18 2019-10-08 哈尔滨工程大学 A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHENG JIAN et al.: "Image stitching method for complex coal mine roadway scenes based on mismatch elimination with directed line segments", Coal Science and Technology, vol. 50, no. 9, 30 September 2021 (2021-09-30) *
DONG QIANG et al.: "Image stitching algorithm based on improved BRISK", Journal of Electronics & Information Technology, vol. 39, no. 2, 31 December 2017 (2017-12-31) *
YAN PENGPENG: "Research on image stitching methods for complex coal mine roadway scenes", CNKI Electronic Journals of Master's Theses, no. 3, 15 March 2022 (2022-03-15) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114708206A (en) * 2022-03-24 2022-07-05 成都飞机工业(集团)有限责任公司 Method, device, equipment and medium for identifying placing position of autoclave molding tool
CN115797381A (en) * 2022-10-20 2023-03-14 河南理工大学 Heterogeneous remote sensing image registration method based on geographic blocking and hierarchical feature matching
CN115797381B (en) * 2022-10-20 2024-04-12 河南理工大学 Heterogeneous remote sensing image registration method based on geographic segmentation and hierarchical feature matching
CN116128734A (en) * 2023-04-17 2023-05-16 湖南大学 Image stitching method, device, equipment and medium based on deep learning

Similar Documents

Publication Publication Date Title
Melekhov et al. Dgc-net: Dense geometric correspondence network
CN112862692A (en) Image splicing method applied to underground coal mine roadway
CN108256394B (en) Target tracking method based on contour gradient
CN111652892A (en) Remote sensing image building vector extraction and optimization method based on deep learning
CN103473785B (en) A kind of fast multi-target dividing method based on three-valued image clustering
CN104077760A (en) Rapid splicing system for aerial photogrammetry and implementing method thereof
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN104156965A (en) Automatic fast mine monitoring image stitching method
CN106709870B (en) Close-range image straight-line segment matching method
CN103353941B (en) Natural marker registration method based on viewpoint classification
CN110111375A (en) A kind of Image Matching elimination of rough difference method and device under Delaunay triangulation network constraint
CN109325487B (en) Full-category license plate recognition method based on target detection
CN114612450B (en) Image detection segmentation method and system based on data augmentation machine vision and electronic equipment
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN105374010A (en) A panoramic image generation method
CN117765363A (en) Image anomaly detection method and system based on lightweight memory bank
CN114332814A (en) Parking frame identification method and device, electronic equipment and storage medium
CN106651756B (en) Image registration method based on SIFT and verification mechanism
CN111160262A (en) Portrait segmentation method fusing human body key point detection
CN115330655A (en) Image fusion method and system based on self-attention mechanism
CN111931689B (en) Method for extracting video satellite data identification features on line
Vidal et al. Automatic video to point cloud registration in a structure-from-motion framework
CN113674340A (en) Binocular vision navigation method and device based on landmark points
Wang et al. Deep homography estimation based on attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination