CN112862692A - Image splicing method applied to underground coal mine roadway - Google Patents
- Publication number
- CN112862692A (application CN202110338716.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- registered
- line segment
- directed line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
Abstract
The invention discloses an image splicing method applied to underground coal mine roadways. Firstly, the SIFT (Scale-Invariant Feature Transform) algorithm is used to perform feature extraction and matching on coal mine roadway images to obtain coarse matching point pairs; then a directed line segment model of the coarse matching point pairs of adjacent images is constructed, and a first elimination of mismatching point pairs is performed using the direction and length attributes of the line segments; next, a feature-point directed line segment model and its direction labels are established inside each image, the corresponding directed line segments of adjacent images are matched by direction, and a second elimination of mismatching point pairs is performed through a probability statistical model to obtain the final fine matching point pairs; finally, the images are aligned and spliced using the AANAP (Adaptive As-Natural-As-Possible) algorithm and fused with a weighted average method to complete the image splicing. Compared with other algorithms, the mismatching point elimination algorithm is high in accuracy and good in real-time performance, the quality of the final panoramic mosaic image is better, and the method is better suited to the complex scenes of underground coal mine roadways.
Description
Technical Field
The invention relates to an image splicing method applied to underground coal mine roadways, which is particularly suitable for underground use, and belongs to the field of computer vision.
Background
In recent years, with the rapid integrated development of big data, cloud computing and artificial intelligence and breakthrough innovations in related software and hardware, coal mine intelligence has become a major strategic direction for the development of the coal industry. The environment of underground coal mine roadways is harsh, and most information acquisition there still depends on manpower; videos and images, as important carriers of environmental information, are gradually being applied to the daily production of coal mines. Image splicing of underground coal mine roadways can produce a panoramic image of the roadway and thereby yield spatial environment information of the roadway, which has guiding significance for spatial reconstruction of coal mine roadways, personnel prevention and control, danger identification and early warning, and coal mine excavation.
The core and key of an image splicing method is image registration, and the quality of the image registration result depends on the accuracy of the image feature matching points. At the same time, the illumination in underground coal mine roadways is unbalanced, so existing feature extraction algorithms easily produce a certain number of mismatching points; a good mismatch elimination method is therefore of great significance to the image registration result. Many researchers have done substantial work on image splicing and mismatch elimination methods for underground coal mine roadways.
The literature (pottery wisdom. Coal mine working face video stitching algorithm research [D]. Xi'an University of Science and Technology, 2020.) adopts Moravec corner detection and PCA principal component analysis to optimize the SIFT feature extraction process, screens matching feature points through an approximate nearest neighbor matching algorithm and an improved RANSAC algorithm to obtain an image transformation model, and realizes image splicing through a gradual-in gradual-out fusion method. The literature (Cheng, Chua Xiao Wan, Zhang national Yong. Research on image splicing algorithms in coal mines [J]. Coal Mine Electromechanical, 2011.) extracts image feature points with a CSS corner detection algorithm, extracts feature point matches by normalized cross-correlation (NCC), and deletes mismatching point pairs with RANSAC to complete image splicing. The literature (spinning, Liu Jing, Brilliant, et al. A large-parallax image splicing algorithm for underground roadways [J]. Industry and Mine Automation, 2020.) performs feature point detection and matching with the SIFT algorithm, groups the feature matching points based on the multiple planes in the images and solves the corresponding homography matrices, and finally aligns the images with a seam line to synthesize the spliced image. The literature (guan Zeng, Zhao Guang Yuan. An underground video stitching algorithm based on improved speeded-up robust features [J]. Industry and Mine Automation, 2018.) extracts video image feature points, dynamically tracks the number of feature points, selects the homography matrix according to that number, and fuses each frame with a gradual-in gradual-out weighted average method to complete video stitching. The literature (Wanyan, bear flying snow. A Retinex-enhanced underground image splicing method [J]. Journal of Liaoning Technical University (Natural Science), 2015.) enhances the images with a Retinex algorithm improved by local bilateral filtering, extracts features with the SURF algorithm, performs precise matching of feature points with the RANSAC algorithm to obtain the transformation matrix, and finally completes splicing with linear-gradient image fusion. The literature (Wangyan, Zhao Jingyu, Zhao Yan Guang. Underground image mosaic based on image enhancement [J]. Computer Engineering and Applications, 2016.) likewise enhances the image with a local bilateral filtering algorithm, extracts features with the SURF algorithm, performs precise matching of feature points by nearest-distance comparison to obtain the transformation matrix, and completes the mosaic with linear-gradient image fusion. The literature (Zhayanguan. Application research of image splicing technology in digital mines [D]. 2014.) first extracts feature points with the SURF algorithm, then removes feature points of low similarity using cosine similarity and accuracy, performs feature matching by Euclidean distance, and finally processes the image with bilinear interpolation and weighted averaging to obtain the final result. The literature (Zhang Baolong. Research on feature-point-based coal mine image splicing technology [J]. Coal Mine Machinery, 2012.) extracts feature points with a curvature-scale-space improved adaptive-threshold CSS corner detection algorithm, extracts feature point matches with the NCC similarity measure, deletes mismatching point pairs with RANSAC, and realizes splicing with gradual-in gradual-out image fusion. The literature (Liangyu, Lidan, Niuyxi, Zhanguayong. Research on the SIFT algorithm for underground environments [J]. Industry and Mine Automation, 2011.) constructs and matches SIFT feature vectors on images pre-processed by Curvelet denoising, and removes mismatching points with RANSAC to obtain the image transformation model and complete image splicing.
Disclosure of Invention
Aiming at the above shortcomings of the prior art, an image splicing method applied to underground coal mine roadways is provided, which solves the problems that the environment of underground coal mine roadways is complex, the illumination is unbalanced, feature point mismatches are numerous, and projection distortion easily occurs during image splicing.
In order to solve the technical problems, the invention provides an image splicing method applied to an underground coal mine roadway, which comprises the following specific steps:
step 1) using the SIFT (Scale-Invariant Feature Transform) algorithm to perform feature extraction and matching between a reference image and an image to be registered, wherein the reference image and the image to be registered are adjacent and have a partially overlapping area, so as to obtain a set of coarse matching point pairs between the reference image and the image to be registered;
step 2) constructing a directed line segment model by using the characteristic points corresponding to the coarse matching point pairs between the reference image and the image to be registered, and constructing a slope threshold interval and a length constraint model through the directed line segment model;
step 3) synthesizing a slope threshold interval and a length constraint model, and simultaneously removing mismatching point pairs in rough matching point pairs to obtain a group of new matching point pairs;
step 4) establishing a characteristic point directed line segment model and a direction label thereof in the reference image and the image to be registered in the step 1) based on the new matching point pairs;
step 5) carrying out direction matching on the reference image and the corresponding directed line segments in the image to be registered, and carrying out secondary elimination on the mismatching point pairs through a probability statistical model to obtain final fine matching point pairs;
and step 6) aligning the reference image and the image to be registered by using the AANAP (Adaptive As-Natural-As-Possible) algorithm, and fusing the images with a weighted average method to complete the splicing of the reference image and the image to be registered.
The concrete content of the step 1) comprises the following steps:
step 11) SIFT feature extraction is performed on two coal mine roadway images of the same size that share an overlapping area, the reference image I1 and the image to be registered I2, yielding the descriptors ds1 of the reference image feature points, the descriptors ds2 of the feature points of the image to be registered, the pixel coordinate positions kp1 in the reference image and the pixel coordinate positions kp2 in the image to be registered;
step 12) the Euclidean distance between the descriptor of the i-th point in ds1 of the reference image and the descriptor of each point in ds2 of the image to be registered is calculated, and the minimum such distance MINij, between the i-th point of ds1 and the j-th point of ds2, is obtained; if MINij multiplied by 1.5 is still smaller than all the other Euclidean distances, the i-th point in ds1 and the j-th point in ds2 are a pair of matching points, otherwise they are not; finally, n coarse matching point pairs are obtained from the corresponding values in kp1 and kp2, the coordinate sets of the n coarse matches being
D1 = {(x1i, y1i) | i = 1, 2, …, n}   (1)
D2 = {(x2i, y2i) | i = 1, 2, …, n}   (2)
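The coarse matching of step 12) can be sketched in Python (an illustrative sketch with hypothetical names, using toy 2-D descriptors instead of 128-D SIFT descriptors):

```python
import math

def euclidean(d1, d2):
    # Euclidean distance between two descriptor vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def coarse_match(ds1, ds2, ratio=1.5):
    """For each descriptor in ds1, find its nearest neighbour in ds2 and
    accept the pair only if 1.5x that smallest distance is still below
    every other distance, as described in step 12)."""
    pairs = []
    for i, d1 in enumerate(ds1):
        dists = [euclidean(d1, d2) for d2 in ds2]
        j = min(range(len(dists)), key=dists.__getitem__)
        others = [d for t, d in enumerate(dists) if t != j]
        if not others or dists[j] * ratio < min(others):
            pairs.append((i, j))
    return pairs
```

The index pairs returned would then be looked up in kp1 and kp2 to obtain the coordinate sets of the coarse matches.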
The specific steps of constructing the model of the directional line segments of the rough matching points between the adjacent images in the step 2) are as follows:
step 21) the reference image I1 and the image to be registered I2 are placed left and right in the same window, and for the n coarse matching point pairs a directed line segment model is established for all coarse matching point pairs in the window: the j-th directed line segment runs from (x1j, y1j) in the reference image to (x2j + Iw, y2j) in the shifted image to be registered, j = 1, 2, …, n.
In the formula, Iw is the width of the image; the reference image I1 and the image to be registered I2 are of the same size, with height Ih (corresponding to the y value of the coordinates) and width Iw (corresponding to the x value of the coordinates).
The step of constructing a slope threshold interval model through the directed line segment model in the step 2) is as follows:
step 22) the slope of each directed line segment in the model is calculated using the following formula:
kj = (y2j − y1j) / (x2j + Iw − x1j), j = 1, 2, …, n
Step 23) the slope threshold interval is located precisely: image matching experience shows that the slopes of matching directed line segments lie roughly in the interval [−1, 1], so the slope axis is divided from negative infinity to positive infinity into 22 intervals: (−∞, −1.0], [−1.0, −0.9], [−0.9, −0.8], …, [0.8, 0.9], [0.9, 1.0], [1.0, +∞);
step 24) the slopes of all the directed line segments are calculated, each directed line segment is assigned to one of the 22 intervals of step 23) according to its slope, the number of directed line segments falling in each interval is counted, and the interval [slp1, slp2] containing the largest number of directed line segments is obtained; considering possible slight scale change and rotation of the image to be registered relative to the reference image, the interval is expanded outwards by 0.1 on each side, giving the slope threshold interval [slp1 − 0.1, slp2 + 0.1].
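Steps 22)–24) can be sketched as follows; the half-open bin convention and the helper names are assumptions of this sketch:

```python
import math

def slope(p1, p2, Iw):
    # p1 lies in the reference image, p2 in the image to be registered;
    # the registered image is shifted right by the image width Iw
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 + Iw - x1)

def slope_threshold_interval(pairs, Iw):
    """Bin the segment slopes into 22 intervals -- (-inf, -1.0], twenty
    0.1-wide bins over [-1.0, 1.0], and [1.0, +inf) -- then widen the
    densest bin by 0.1 on each side."""
    edges = [-math.inf] + [round(-1.0 + 0.1 * i, 1) for i in range(21)] + [math.inf]
    counts = [0] * (len(edges) - 1)
    for p1, p2 in pairs:
        k = slope(p1, p2, Iw)
        for b in range(len(counts)):
            if edges[b] <= k < edges[b + 1]:
                counts[b] += 1
                break
    b = counts.index(max(counts))
    lo = edges[b] - 0.1 if math.isfinite(edges[b]) else edges[b]
    hi = edges[b + 1] + 0.1 if math.isfinite(edges[b + 1]) else edges[b + 1]
    return lo, hi
```

Coarse matches whose segment slope falls outside the returned interval would be rejected in step 31).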
The specific steps of constructing the length constraint model through the directed line segment model in the step 2) are as follows:
step 25) the length of the j-th directed line segment model between the images is obtained using the following formula:
dj = sqrt((x2j + Iw − x1j)² + (y2j − y1j)²)
Step 26) a length constraint model is established using the length calculation formula of the line segment model.
In the formula, d̄² is the mean value of the squared Euclidean distances of all matching points, and Td is an experimentally obtained length-constraint control value; Td depends on the size of the images involved in the splicing: when the image pixels number in the millions, Td = 4; when the image pixel count is below a million, Td = 1.5.
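The length constraint inequality itself is not reproduced above; a minimal sketch, assuming the constraint keeps a segment whose squared length deviates from the mean squared length by at most Td times that mean (the function names and the exact inequality are this sketch's assumptions):

```python
import math

def seg_len(p1, p2, Iw):
    # length of the directed segment between the two images placed
    # side by side (the registered image is shifted right by the width Iw)
    (x1, y1), (x2, y2) = p1, p2
    return math.hypot(x2 + Iw - x1, y2 - y1)

def length_filter(pairs, Iw, Td):
    """Keep the indices of the pairs whose squared segment length stays
    within Td times the mean squared length of all segments."""
    d2 = [seg_len(p1, p2, Iw) ** 2 for p1, p2 in pairs]
    mean_d2 = sum(d2) / len(d2)
    return [i for i, v in enumerate(d2) if abs(v - mean_d2) <= Td * mean_d2]
```

With Td chosen per image size as described above, segments far longer or shorter than the typical matching segment are rejected.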
The specific steps of the step 3) are as follows:
step 31) matching points that do not satisfy the constraint are eliminated from the coarse matching point pairs between the reference image and the image to be registered using the obtained slope threshold interval, yielding, out of the n matching point pairs, the index set of the p matching point pairs that satisfy the slope threshold interval;
Step 32) the length constraint of step 26) is applied to the p matching points obtained, yielding the index set of the k correct matching points among the p that satisfy the length constraint.
The specific steps of the step 4) are as follows:
step 41) after the new matching point pairs are obtained, directed line segments are established within the reference image and within the image to be registered, and a directed line segment model of the feature points inside each image is constructed on the new matching points;
in the formula: i ∈ (1, k−1), j ∈ (i+1, k), and (XianD_in1(i, j), XianD_in2(i, j)) is the pair of corresponding directed line segments determined by two pairs of matching points.
Step 42) the directed line segment models in the reference image and in the image to be registered each contain k(k−1)/2 directed line segments, and the elements of the two models correspond one to one through their index values; indexed in this way, the reference-image directed line segment model and the to-be-registered-image directed line segment model can be expressed element by element.
Step 43) a direction coordinate system centred on the feature point is established to determine the direction labels of the directed line segments: by determining in which quadrant of the direction coordinate system XianD_in1(i, j) and XianD_in2(i, j) fall, each directed line segment is given a label flag;
step 44) the label flag is taken as the matching condition of the directed line segments, and the directed line segment model becomes the labelled model:
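Steps 43)–44) can be illustrated with a small helper; the quadrant numbering and the handling of axis-aligned segments are assumptions of this sketch, not taken from the patent:

```python
def direction_label(p_from, p_to):
    """Quadrant label (1-4) of the directed segment <p_from, p_to> in a
    direction coordinate system centred on p_from.  How ties on the
    axes are broken is an assumption of this sketch."""
    dx, dy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    if dx > 0 and dy >= 0:
        return 1
    if dx <= 0 and dy > 0:
        return 2
    if dx < 0 and dy <= 0:
        return 3
    return 4

def segments_match(a1, a2, b1, b2):
    # step 44): two corresponding segments match only if their labels agree
    return direction_label(a1, a2) == direction_label(b1, b2)
```

Corresponding segments of the two images whose quadrant labels disagree are the candidates counted as mismatches in step 5).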
The specific steps of the step 5) are as follows:
step 51) if flag1(i, j) = flag2(i, j), then XianD_in1(i, j) and XianD_in2(i, j) are a pair of correctly matched line segments, and the endpoints of the two corresponding line segments are all correct matching points; otherwise they are all judged to be mismatching points;
step 52) whether each pair of matching points is a mismatching pair is judged by counting the number of times it is classified as mismatching, for which a count statistics matrix TongJ ∈ R^(2×k) is established:
Step 53) the number of times that each pair of matching points is discriminated as a mis-matching point is placed in the quantity statistical matrix TongJ, thereby obtaining a probability statistical model corresponding to each pair of matching points:
Step 54) a threshold Tp is set for the probability statistical model; if the probability of a certain pair of matching points is greater than the threshold, the pair is declared a mismatching pair, after which the index values of the l correct matching points can be obtained:
Step 55) the l index values of the correct matching points in the original coordinate sets are obtained according to formula (8) and formula (15);
combining formulas (1), (2) and (16), the coordinate values of the correct matching points, after the mismatching points have been removed, can be obtained as follows:
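The two-stage judgement of steps 51)–54) can be sketched as follows, assuming the statistic counted for each matching point pair is the fraction of its incident line segments whose direction labels disagree between the two images (a simplification of the 2×k matrix TongJ; all names are hypothetical):

```python
def mismatch_probability(labels1, labels2):
    """labels1[i][j] / labels2[i][j]: direction label of the directed
    segment between matched points i and j in each image (diagonal unused).
    Returns, per point, the fraction of its segments whose labels disagree."""
    k = len(labels1)
    prob = []
    for i in range(k):
        bad = sum(1 for j in range(k) if j != i and labels1[i][j] != labels2[i][j])
        prob.append(bad / (k - 1))
    return prob

def keep_indices(prob, Tp):
    # step 54): reject a pair whose mismatch probability exceeds the threshold Tp
    return [i for i, p in enumerate(prob) if p <= Tp]
```

A point that disagrees with most of its incident segments accumulates a high probability and is discarded as a mismatch.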
the concrete steps of the step 6) are as follows:
step 61) the obtained correct matching points D'1 and D'2 are used with the AANAP algorithm to align the reference image and the image to be registered, i.e. image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, the images are fused with a weighted average method, calculated as follows: for a point (x, y) lying only in I1, I(x, y) = I1(x, y); for a point in the overlap region, I(x, y) = w1·I1(x, y) + w2·I2(x, y); for a point lying only in I2, I(x, y) = I2(x, y).
In the formula: I1 and I2 are the input images with an adjacent overlapping region, I(x, y) is the final fused output image, I1 ∩ I2 denotes the overlap region of I1 and I2, and w1 and w2 are weights with w1 + w2 = 1 and w1, w2 ∈ (0, 1). The weights are determined as follows:
w1 = (xr − x) / (xr − xl), w2 = (x − xl) / (xr − xl)
In the formula: xl and xr are the left and right boundaries of the overlap region, and x corresponds to the column in which the point lies in the image.
And 63) finishing image splicing to obtain a panoramic spliced image.
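The gradual-in gradual-out weighted average of step 62) can be sketched for a single scanline of the overlap region (a sketch with hypothetical names, assuming linear weights over the overlap columns):

```python
def fuse_row(row1, row2, xl, xr):
    """Blend one scanline of the overlap region [xl, xr]: w1 falls
    linearly from 1 at the left boundary to 0 at the right, w2 = 1 - w1,
    so w1 + w2 = 1 everywhere."""
    out = []
    for x in range(xl, xr + 1):
        w1 = (xr - x) / (xr - xl)
        w2 = 1.0 - w1
        out.append(w1 * row1[x - xl] + w2 * row2[x - xl])
    return out
```

Applying this to every row of the overlap yields the smooth transition between the two registered images.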
Beneficial effects:
The method has a stronger ability to eliminate mismatches, higher real-time performance and better accuracy; it realizes panoramic stitching of underground coal mine roadway images and, compared with traditional mismatch elimination methods, requires fewer feature points and achieves a better splicing effect.
The mismatching elimination algorithm based on the characteristic point directed line segment model can effectively eliminate the mismatching, obtain a more accurate image transformation model, and has the advantages of faster running time, higher matching accuracy and real-time performance; and then, the AANAP algorithm is used for aligning and splicing the images, so that the image splicing effect and the appearance naturalness are better under the conditions of few characteristic points and high accuracy rate.
Drawings
FIG. 1 is a schematic diagram of the directed line segments formed by matching points between images according to the present invention,
FIG. 2 is a schematic diagram of the feature-point directed line segments within an image according to the present invention,
FIG. 3 is a schematic diagram of the direction coordinate system of the feature points in the image according to the present invention,
FIG. 4 is a flow chart of an image stitching method applied to an underground coal mine roadway.
Detailed Description
The present invention is further described with reference to the accompanying drawings, and the following examples are only for clearly illustrating the technical solutions of the present invention, and should not be taken as limiting the scope of the present invention.
Example 1: image splicing method applied to underground coal mine roadway
The invention discloses an image splicing method applied to a coal mine underground tunnel, which comprises the steps of firstly, using an SIFT algorithm to carry out feature extraction and matching on a coal mine tunnel image to obtain a rough matching point pair; then constructing a directional line segment model of the coarse matching point pairs of adjacent images, and removing the mismatching point pairs once by utilizing the direction and length attributes of the line segments; then establishing a characteristic point directed line segment model and a direction label thereof in each image, performing direction matching on the corresponding directed line segments of adjacent images, and performing secondary elimination on mismatching point pairs through a probability statistical model to obtain final fine matching point pairs; and finally, aligning and splicing the images by using an AANAP algorithm, and fusing the images by using a weighted average method to finish image splicing. Compared with other algorithms, the method has the advantages that: the mismatching point rejecting algorithm is high in accuracy, good in real-time performance, and better in quality of the final panoramic mosaic image, and is more suitable for underground coal mine roadway images.
As shown in fig. 4, the image stitching method applied to the coal mine underground roadway is adopted in the invention as follows:
step 1) using an SIFT (Scale Invariant Feature transform) algorithm to perform Feature extraction and matching between a reference image and an image to be registered, wherein the reference image and the image to be registered are adjacent and exist in a partial overlapping area, so as to obtain a rough matching point pair set between the reference image and the image to be registered;
the specific process is as follows:
step 11) SIFT feature extraction is performed on two coal mine roadway images of the same size that share an overlapping area, the reference image I1 and the image to be registered I2, yielding the descriptors ds1 of the reference image feature points, the descriptors ds2 of the feature points of the image to be registered, the pixel coordinate positions kp1 in the reference image and the pixel coordinate positions kp2 in the image to be registered;
step 12) the Euclidean distance between the descriptor of the i-th point in ds1 of the reference image and the descriptor of each point in ds2 of the image to be registered is calculated, and the minimum such distance MINij, between the i-th point of ds1 and the j-th point of ds2, is obtained; if MINij multiplied by 1.5 is still smaller than all the other Euclidean distances, the i-th point in ds1 and the j-th point in ds2 are a pair of matching points, otherwise they are not; finally, n coarse matching point pairs are obtained from the corresponding values in kp1 and kp2, the coordinate sets of the n coarse matches being D1 = {(x1i, y1i)} and D2 = {(x2i, y2i)}, i = 1, 2, …, n.
As shown in fig. 1, step 2) constructs a directed line segment model using the feature points corresponding to the coarse matching point pairs between the reference image and the image to be registered, wherein A–F in the reference image and a–f in the image to be registered correspond one to one, and a slope threshold interval and a length constraint model are constructed through the directed line segment model;
the specific process is as follows:
step 21) the reference image I1 and the image to be registered I2 are placed left and right in the same window, in which the n coarse matching point pairs obtained are paired, as shown in fig. 1, where <Aa>, <Bb>, <Cc>, <Mm>, <Ee>, <Ff> are examples of directed line segments formed by matching point pairs between the images; a directed line segment model is established for all coarse matching point pairs:
In the formula: Iw is the width of the image; the reference image I1 and the image to be registered I2 are of the same size, with height Ih (corresponding to the y value of the coordinates) and width Iw (corresponding to the x value of the coordinates).
Step 22) calculating the slope of the directed line segment model using the following formula:
Step 23) the slope threshold interval is located precisely: image matching experience shows that the slopes of matching directed line segments lie roughly in the interval [−1, 1], so the slope axis is divided from negative infinity to positive infinity into 22 intervals: (−∞, −1.0], [−1.0, −0.9], [−0.9, −0.8], …, [0.8, 0.9], [0.9, 1.0], [1.0, +∞);
step 24) the slopes of all the directed line segments are calculated, each directed line segment is assigned to one of the 22 intervals of step 23) according to its slope, the number of directed line segments falling in each interval is counted, and the interval [slp1, slp2] containing the largest number of directed line segments is obtained; considering possible slight scale change and rotation of the image to be registered relative to the reference image, the interval is expanded outwards by 0.1 on each side, giving the slope threshold interval [slp1 − 0.1, slp2 + 0.1].
Step 25) obtaining the length of the jth directed line segment model between the images by using the following formula:
Step 26) establishing a length constraint model by using a length calculation formula of the line segment model:
In the formula, d̄² is the mean value of the squared Euclidean distances of all matching points, and Td is an experimentally obtained length-constraint control value; Td depends on the size of the images involved in the splicing: experiments show that when the image pixels number in the millions the preferred value is Td = 4, and when the image pixel count is below a million the preferred value is Td = 1.5.
Step 3) synthesizing a slope threshold interval and a length constraint model, and simultaneously removing mismatching point pairs in rough matching point pairs to obtain a group of new matching point pairs;
as shown in fig. 3, the specific process is as follows:
Step 31) use the obtained slope threshold interval to eliminate, from the coarse matching point pairs between the reference image and the image to be registered, the matching points that do not satisfy the constraint, obtaining the set of index values of the p matching points (out of the n matching points) that satisfy the slope threshold interval
Step 32) apply the length constraint of step 26) to the p matching points so obtained, yielding the set of index values of the k correct matching points (among the p) that satisfy the length constraint
Fig. 2 is a schematic diagram of the directed line segments of feature points in an image, and Fig. 3 is a schematic diagram of the direction coordinate system of a feature point in an image. Step 4) establishes, based on the new matching point pairs, the directed line segment model and the direction labels of the feature points in the reference image and the image to be registered of step 1);
the specific process is as follows:
Step 41) after the new matching point pairs are obtained, directed line segments are established in the reference image and the image to be registered. As shown in Fig. 2, (B, B), (C, C), (H, H), (Q, Q) and (V, V) are the matching points obtained through the slope threshold interval and the length constraint, and <BC>, <BH>, <BQ>, <BV> are example directed line segments in the reference image and the image to be registered, corresponding to one another through the matching points; that is, <BC, BC>, <BH, BH>, <BQ, BQ> and <BV, BV> are four pairs of matching line segments formed by the five pairs of matching points. A directed line segment model of the feature points in each image is then constructed on the basis of the new matching points as follows:
In the formula: i ∈ (1, k-1), j ∈ (i+1, k), and (XianD1(i, j), XianD2(i, j)) — the directed line segment models of the reference image and of the image to be registered, respectively — are each formed from two pairs of matching points.
Step 42) the directed line segment models of the reference image and of the image to be registered each contain k(k-1)/2 elements, and the elements of the two models correspond one-to-one according to their index values; they can therefore be expressed as:
Step 43) establish a direction coordinate system with the feature point as the origin (as shown in Fig. 3), in which <P, P1>, <P, P2>, <P, P3>, <P, P4> each represent a direction and correspond one-to-one to the four quadrants of the coordinate system, so that the direction label of a directed line segment can be determined: by finding which quadrant of the direction coordinate system the corresponding directed line segments in the two images lie in, each directed line segment is labelled flag:
step 44) taking the label flag as the matching condition of the directed line segment, and then changing the directed line segment model into:
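The quadrant labelling of steps 43)–44) can be sketched as follows. The numeric flag values 1–4 and the boundary convention for dx = 0 or dy = 0 are our own choices; the patent only specifies one label per quadrant of the direction coordinate system.

```python
def direction_flag(start, end):
    """Quadrant label of a directed segment in a coordinate system whose
    origin is the segment's start point: 1..4 for the four quadrants.
    Boundary handling (dx == 0 or dy == 0) is an assumed convention."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    if dx >= 0 and dy >= 0:
        return 1   # first quadrant
    if dx < 0 and dy >= 0:
        return 2   # second quadrant
    if dx < 0 and dy < 0:
        return 3   # third quadrant
    return 4       # fourth quadrant
```

Two corresponding directed segments are then direction-matched when their flags are equal.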
Step 5) carrying out direction matching on the reference image and the corresponding directed line segments in the image to be registered, and carrying out secondary elimination on the mismatching point pairs through a probability statistical model to obtain final fine matching point pairs;
the specific process is as follows:
Step 51) if the direction labels (flags) of a pair of corresponding directed line segments are equal, the two segments form a pair of correctly matched line segments, and the endpoints of the two corresponding line segments are all correct matching points; otherwise they are mismatching points;
Step 52) judge whether each pair of matching points is a mismatching point by counting the number of times it is classified as a mismatching point, establishing a quantity statistics matrix TongJ ∈ R^(2×k):
Step 53) the number of times that each pair of matching points is discriminated as a mis-matching point is placed in the quantity statistical matrix TongJ, thereby obtaining a probability statistical model corresponding to each pair of matching points:
Step 54) set a threshold Tp for the probability statistical model; if the probability of a certain pair of matching points is greater than the threshold, that pair is declared a mismatching point, and the index values of the correct matching points can then be obtained:
Step 55) obtain, according to formula (8) and formula (15), the index values of the correct matching points in the original coordinate set
Combining formula (1), formula (2) and formula (16), the coordinate values of the correct matching points, after the mismatching points have been removed, can be obtained as follows:
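Putting steps 51)–55) together, a hedged sketch of the secondary elimination: every pair of matching points defines one directed segment per image; a flag disagreement casts a vote against both endpoint pairs, and points whose vote fraction exceeds a threshold t_p are dropped. The default t_p = 0.5 and the exact voting bookkeeping are our assumptions (the patent stores the counts in the matrix TongJ, whose formula is not reproduced here).

```python
import numpy as np

def eliminate_by_direction(pts1, pts2, t_p=0.5):
    """For every pair (i, j) of matching points, build the directed
    segment <i, j> in each image and compare quadrant flags; a
    disagreement votes against both i and j. Points whose vote
    fraction exceeds t_p are discarded."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    k = len(pts1)
    votes = np.zeros(k)
    trials = np.zeros(k)

    def flag(p, q):  # quadrant of q in a frame centred on p (assumed convention)
        dx, dy = q[0] - p[0], q[1] - p[1]
        if dx >= 0 and dy >= 0: return 1
        if dx < 0 and dy >= 0: return 2
        if dx < 0 and dy < 0: return 3
        return 4

    for i in range(k - 1):
        for j in range(i + 1, k):
            trials[i] += 1
            trials[j] += 1
            if flag(pts1[i], pts1[j]) != flag(pts2[i], pts2[j]):
                votes[i] += 1
                votes[j] += 1
    prob = votes / np.maximum(trials, 1)     # mismatch probability per point pair
    return np.flatnonzero(prob <= t_p)       # index values of the retained points
```

For a pure translation between the two images every flag agrees, so only a corrupted match accumulates votes and is eliminated.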
Step 6) align the reference image and the image to be registered by using the AANAP (Adaptive As-Natural-As-Possible) algorithm, fuse the images with the weighted average method, and complete the splicing of the reference image and the image to be registered;
the specific process is as follows:
Step 61) for the obtained correct matching points D1′ and D2′, the AANAP algorithm is used to align the reference image and the image to be registered, i.e. image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, fusing the images by using a weighted average method, wherein the calculation formula of the weighted average method is as follows:
in the formula: i is1And I2For input images with adjacent overlapping regions, I (x, y) is the final output fused image, I1∩I2Is represented by1And I2Overlap region, w1And w2Is a weighted value, and w1+w2=1,w1,w1E (0, 1). The weights are determined as follows:
In the formula: xl and xr are the left and right boundaries of the overlapping region, and x is the column in which the point lies in the image.
And 63) finishing image splicing to obtain a panoramic spliced image.
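The weighted-average fusion of step 62) can be sketched per column of the overlap region: w1 falls linearly from 1 at xl to 0 at xr and w2 = 1 − w1, as in the formulas above. The sketch assumes both images are already warped onto a common canvas and, for brevity, are single-channel.

```python
import numpy as np

def weighted_average_blend(img1, img2, xl, xr):
    """Weighted-average fusion over the overlap columns [xl, xr):
    the output fades linearly from img1 at the left boundary of the
    overlap to img2 at the right boundary."""
    out = img1.astype(float).copy()
    right = img2.astype(float)
    for x in range(xl, xr):
        w1 = (xr - x) / (xr - xl)        # weight of the reference image at column x
        out[:, x] = w1 * out[:, x] + (1.0 - w1) * right[:, x]
    out[:, xr:] = right[:, xr:]          # right of the overlap: image 2 only
    return out
```

Columns left of xl keep the reference image unchanged, so seams fade smoothly instead of switching abruptly.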
Claims (9)
1. An image splicing method applied to a coal mine underground roadway is characterized by comprising the following steps:
step 1) using the SIFT (Scale-Invariant Feature Transform) algorithm to perform feature extraction and matching between a reference image and an image to be registered, wherein the reference image and the image to be registered are adjacent and share a partially overlapping area, so as to obtain a rough matching point pair set between the reference image and the image to be registered;
step 2) constructing a directed line segment model by using the characteristic points corresponding to the coarse matching point pairs between the reference image and the image to be registered, and constructing a slope threshold interval and a length constraint model through the directed line segment model;
step 3) synthesizing a slope threshold interval and a length constraint model, and simultaneously removing mismatching point pairs in rough matching point pairs to obtain a group of new matching point pairs;
step 4) establishing a characteristic point directed line segment model and a direction label thereof in the reference image and the image to be registered in the step 1) based on the new matching point pairs;
step 5) carrying out direction matching on the reference image and the corresponding directed line segments in the image to be registered, and carrying out secondary elimination on the mismatching point pairs through a probability statistical model to obtain final fine matching point pairs;
and step 6) aligning the reference image and the image to be registered by using an AANAP (Adaptive As-Natural-As-Possible) algorithm, and fusing the images by using a weighted average method to complete the splicing of the reference image and the image to be registered.
2. The image splicing method applied to the coal mine underground roadway, according to claim 1, is characterized in that: the concrete content of the step 1) comprises the following steps:
step 11) performing SIFT feature extraction respectively on two coal mine roadway images of the same size with an overlapping area, the reference image I1 and the image to be registered I2, to obtain the descriptors ds1 of the reference image feature points, the descriptors ds2 of the image to be registered, the pixel coordinate positions kp1 in the reference image and the pixel coordinate positions kp2 in the image to be registered;
step 12) calculating the Euclidean distance between the descriptor of the ith point in the reference-image descriptors ds1 and the descriptor of each point in the descriptors ds2 of the image to be registered, and obtaining the minimum Euclidean distance MINij, attained between the ith point of ds1 and the jth point of ds2; if MINij multiplied by 1.5 is still smaller than all the other Euclidean distances, the ith point of ds1 and the jth point of ds2 are a pair of matching points, otherwise they are not; finally, n rough matching point pairs are obtained from the pixel coordinate positions kp1 in the reference image and the corresponding values in the pixel coordinate positions kp2 of the image to be registered, giving the coordinate values of the n rough matches
3. the image splicing method applied to the coal mine underground roadway, according to claim 2, is characterized in that: the specific steps of constructing the model of the directional line segments of the rough matching points between the adjacent images in the step 2) are as follows:
step 21) placing the reference image I1 and the image to be registered I2 side by side, left to right, in the same window, and establishing a directed line segment model for all n rough matching point pairs in the window:
in the formula: i iswIs the width of the picture, reference picture I1And image I to be registered2Are the same in size and are all I in heighthCorresponding to the y value of the coordinate, the width is IwCorresponding to the x value of the coordinate.
4. The image splicing method applied to the coal mine underground roadway, according to claim 3, is characterized in that: the steps of constructing a slope threshold interval model through the directed line segment model in the step 2) are as follows:
step 22) calculating the slope of the directed line segment model using the following formula:
step 23) accurately determine the slope threshold interval; image-matching experience shows that the slopes of matching directed line segments lie approximately in the interval [-1, 1], so the slope range from negative infinity to positive infinity is divided into 22 intervals: [-∞, -1.0], [-1.0, -0.9], [-0.9, -0.8], …, [0.8, 0.9], [0.9, 1.0], [1.0, +∞];
step 24) calculate the slopes of all directed line segments, assign each directed line segment to one of the 22 intervals of step 23) according to its slope, and count the number of directed line segments falling in each interval to obtain the interval [slp1, slp2] containing the most directed line segments; considering that the image to be registered may undergo slight scale change and rotation relative to the reference image, the interval is expanded outward by 0.1 on each side, giving the slope threshold interval [slp1-0.1, slp2+0.1].
5. The image splicing method applied to the coal mine underground roadway, according to claim 4, is characterized in that: the specific steps of constructing the length constraint model through the directed line segment model in the step 2) are as follows:
step 25) obtaining the length of the jth directed line segment model between the images by using the following formula:
step 26) establishing a length constraint model by using a length calculation formula of the line segment model:
In the formula: the first quantity is the mean of the squared Euclidean distances of all matching points, and Td is the experimentally obtained length-constraint control value; Td depends on the size of the images involved in the stitching, with Td = 4 when the images contain millions of pixels and Td = 1.5 when the images contain fewer than a million pixels.
6. The image splicing method applied to the coal mine underground roadway, according to claim 1, is characterized in that: the specific steps of the step 3) are as follows:
step 31) use the obtained slope threshold interval to eliminate, from the rough matching point pairs between the reference image and the image to be registered, the matching points that do not satisfy the constraint, obtaining the set of index values of the p matching points (out of the n matching points) that satisfy the slope threshold interval
Step 32) apply the length constraint of step 26) to the p matching points so obtained, yielding the set of index values of the k correct matching points (among the p) that satisfy the length constraint
7. The image splicing method applied to the coal mine underground roadway, according to claim 2, is characterized in that: the specific steps of the step 4) are as follows:
step 41) after obtaining the new matching point pair, establishing a directed line segment in the reference image and the image to be registered, and constructing a directed line segment model of the feature point in the image based on the new matching point as follows:
In the formula: i ∈ (1, k-1), j ∈ (i+1, k), and (XianD1(i, j), XianD2(i, j)) — the directed line segment models of the reference image and of the image to be registered, respectively — are each formed from two pairs of matching points.
Step 42) the directed line segment models of the reference image and of the image to be registered each contain k(k-1)/2 elements, and the elements of the two models correspond one-to-one according to their index values; they are expressed as:
step 43) establish a direction coordinate system with the feature point as the origin to determine the direction labels of the directed line segments; the label flag of each directed line segment is determined by finding which quadrant of the direction coordinate system the corresponding directed line segments in the two images lie in:
step 44) taking the label flag as the matching condition of the directed line segment, and then changing the directed line segment model into:
8. The image splicing method applied to the coal mine underground roadway, according to claim 2, is characterized in that: the specific steps of the step 5) are as follows:
step 51) if the direction labels (flags) of a pair of corresponding directed line segments are equal, the two segments form a pair of correctly matched line segments, and the endpoints of the two corresponding line segments are all correct matching points; otherwise they are mismatching points;
step 52) judge whether each pair of matching points is a mismatching point by counting the number of times it is classified as a mismatching point, establishing a quantity statistics matrix TongJ ∈ R^(2×k):
Step 53) the number of times that each pair of matching points is discriminated as a mis-matching point is placed in the quantity statistical matrix TongJ, thereby obtaining a probability statistical model corresponding to each pair of matching points:
step 54) set a threshold Tp for the probability statistical model; if the probability of a certain pair of matching points is greater than the threshold, that pair is declared a mismatching point, and the index values of the correct matching points can then be obtained:
step 55) obtain, according to formula (8) and formula (15), the index values of the correct matching points in the original coordinate set
Combining formula (1), formula (2) and formula (16), the coordinate values of the correct matching points, after the mismatching points have been removed, can be obtained as follows:
9. The image splicing method applied to the coal mine underground roadway, according to claim 8, is characterized in that: the specific steps of the step 6) are as follows:
step 61) for the obtained correct matching points D1′ and D2′, the AANAP algorithm is used to align the reference image and the image to be registered, i.e., image registration.
Step 62) after the image registration of the reference image and the image to be registered is completed, fusing the images by using a weighted average method, wherein the calculation formula of the weighted average method is as follows:
in the formula: i is1And I2For input images with adjacent overlapping regions, I (x, y) is the final output fused image, I1∩I2Is represented by1And I2Overlap region, w1And w2Is a weighted value, and w1+w2=1,w1,w1E (0, 1). The weights are determined as follows:
In the formula: xl and xr are the left and right boundaries of the overlapping region, and x is the column in which the point lies in the image.
And 63) finishing image splicing to obtain a panoramic spliced image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110338716.2A CN112862692A (en) | 2021-03-30 | 2021-03-30 | Image splicing method applied to underground coal mine roadway |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112862692A true CN112862692A (en) | 2021-05-28 |
Family
ID=75993203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110338716.2A Pending CN112862692A (en) | 2021-03-30 | 2021-03-30 | Image splicing method applied to underground coal mine roadway |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862692A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114708206A (en) * | 2022-03-24 | 2022-07-05 | 成都飞机工业(集团)有限责任公司 | Method, device, equipment and medium for identifying placing position of autoclave molding tool |
CN115797381A (en) * | 2022-10-20 | 2023-03-14 | 河南理工大学 | Heterogeneous remote sensing image registration method based on geographic blocking and hierarchical feature matching |
CN116128734A (en) * | 2023-04-17 | 2023-05-16 | 湖南大学 | Image stitching method, device, equipment and medium based on deep learning |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886569A (en) * | 2014-04-03 | 2014-06-25 | 北京航空航天大学 | Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images |
CN110310331A (en) * | 2019-06-18 | 2019-10-08 | 哈尔滨工程大学 | A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature |
Non-Patent Citations (3)
Title |
---|
Cheng Jian (程健) et al., "Image stitching method for complex coal mine roadway scenes based on directed line segment mismatch elimination", Coal Science and Technology (《煤炭科学技术》), vol. 50, no. 9, 30 September 2021 (2021-09-30) *
Dong Qiang (董强) et al., "Image stitching algorithm based on improved BRISK", Journal of Electronics & Information Technology (《电子与信息学报》), vol. 39, no. 2, 31 December 2017 (2017-12-31) *
Yan Pengpeng (闫鹏鹏), "Research on image stitching methods for complex coal mine roadway scenes", CNKI Masters' Electronic Journals (《中国知网硕士电子期刊》), no. 3, 15 March 2022 (2022-03-15) *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Melekhov et al. | Dgc-net: Dense geometric correspondence network | |
CN112862692A (en) | Image splicing method applied to underground coal mine roadway | |
CN108256394B (en) | Target tracking method based on contour gradient | |
CN111652892A (en) | Remote sensing image building vector extraction and optimization method based on deep learning | |
CN103473785B (en) | A kind of fast multi-target dividing method based on three-valued image clustering | |
CN104077760A (en) | Rapid splicing system for aerial photogrammetry and implementing method thereof | |
CN103593832A (en) | Method for image mosaic based on feature detection operator of second order difference of Gaussian | |
CN109376641B (en) | Moving vehicle detection method based on unmanned aerial vehicle aerial video | |
CN104156965A (en) | Automatic fast mine monitoring image stitching method | |
CN106709870B (en) | Close-range image straight-line segment matching method | |
CN103353941B (en) | Natural marker registration method based on viewpoint classification | |
CN110111375A (en) | A kind of Image Matching elimination of rough difference method and device under Delaunay triangulation network constraint | |
CN109325487B (en) | Full-category license plate recognition method based on target detection | |
CN114612450B (en) | Image detection segmentation method and system based on data augmentation machine vision and electronic equipment | |
CN110246165B (en) | Method and system for improving registration speed of visible light image and SAR image | |
CN105374010A (en) | A panoramic image generation method | |
CN117765363A (en) | Image anomaly detection method and system based on lightweight memory bank | |
CN114332814A (en) | Parking frame identification method and device, electronic equipment and storage medium | |
CN106651756B (en) | Image registration method based on SIFT and verification mechanism | |
CN111160262A (en) | Portrait segmentation method fusing human body key point detection | |
CN115330655A (en) | Image fusion method and system based on self-attention mechanism | |
CN111931689B (en) | Method for extracting video satellite data identification features on line | |
Vidal et al. | Automatic video to point cloud registration in a structure-from-motion framework | |
CN113674340A (en) | Binocular vision navigation method and device based on landmark points | |
Wang et al. | Deep homography estimation based on attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||