CN113112403B - Infrared image splicing method, system, medium and electronic equipment - Google Patents

Infrared image splicing method, system, medium and electronic equipment

Info

Publication number
CN113112403B
Authority
CN
China
Prior art keywords
image
infrared
feature
images
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110351133.3A
Other languages
Chinese (zh)
Other versions
CN113112403A (en)
Inventor
李岩
宋士瞻
刘玉娇
康文文
李国亮
王坤
代二刚
李森
刘振虎
韩锋
杨凤文
燕重阳
张健
杨天驰
谢军
庞春江
谢庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
North China Electric Power University
Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
North China Electric Power University
Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, North China Electric Power University, Zaozhuang Power Supply Co of State Grid Shandong Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202110351133.3A priority Critical patent/CN113112403B/en
Publication of CN113112403A publication Critical patent/CN113112403A/en
Application granted granted Critical
Publication of CN113112403B publication Critical patent/CN113112403B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2113Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an infrared image splicing method, system, medium and electronic device. The method acquires infrared images to be spliced; preprocesses the acquired infrared images; extracts feature points of the electrical equipment area in the preprocessed infrared images; performs feature point description on the infrared images after feature point extraction; performs feature matching on the infrared images to be spliced by adopting a nearest neighbor approximation search algorithm, based on the infrared images after feature point description; performs similarity calculation on the infrared images to be spliced by adopting an adaptive similarity calculation method based on the feature-matched infrared images, and determines the splicing order of the infrared images; and, based on the splicing order of the infrared images, performs image fusion by adopting a weighted image fusion algorithm based on a distance coefficient. The method and the device can synthesize a plurality of narrow-field, high-spatial-resolution infrared images into a wide-field, high-spatial-resolution panoramic image, improving image quality as well as the efficiency and accuracy of image splicing.

Description

Infrared image splicing method, system, medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an infrared image stitching method, system, medium, and electronic device.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
With the continuous and rapid development of modern power enterprises towards large units, large capacity and high voltage, operating conditions have become more severe. To ensure the safe and economic operation of power generation, transmission and transformation systems, the power industry at home and abroad places ever higher requirements on the fault detection of electrical equipment. Infrared diagnosis is an advanced detection means, and infrared detection of thermal faults in operating electrical equipment is a new technology that has been widely popularized and applied in recent years by units of the power system. Because infrared detection is on-line monitoring of the running state of the equipment, it does not affect normal operation. Compared with traditional preventive tests, it has the advantages of requiring no equipment outage, no contact and no disassembly, working at long distance, enabling rapid large-area scanning and imaging, and being safe, reliable, accurate and efficient. It can find thermal defects of electrical equipment, makes up for the shortcomings of conventional electrical test methods to a certain extent, and is one of the most effective means of realizing live detection and, further, condition-based maintenance.
In recent years, the demand for wide-field, high-spatial-resolution infrared images in power system fault diagnosis has kept increasing. Although thermal imagers are developing towards higher spatial resolution, imagers capable of outputting high-spatial-resolution images suffer from high process difficulty and high production cost due to the limitations of detector processes and optical system design, and they can hardly meet the requirements of current engineering applications. Meanwhile, for an imaging system, the relationship between focal length and field angle is approximately inversely proportional under the premise of sharp imaging. In the optical design of an infrared imaging system, with clear imaging as the premise, focal lengths correspond one-to-one with lens-to-screen distances, and together they determine the size of the field angle: the smaller the focal length, the larger the field angle, and vice versa. The spatial resolution of a narrow-field image output by a thermal imager is higher than that of a wide-field image, and it is more suitable for target detection, identification and subsequent analysis and processing. Therefore, in some specific applications, a plurality of narrow-field, high-spatial-resolution infrared images need to be combined into a wide-field, high-spatial-resolution panoramic image by image splicing technology. However, existing image splicing methods have low splicing efficiency and accuracy, and when multiple images are spliced, their ordering is easily confused, so that sorting followed by ordered splicing cannot be achieved efficiently and accurately.
Disclosure of Invention
In order to overcome the defects of the prior art, the disclosure provides an infrared image splicing method, system, medium and electronic equipment, in which a plurality of narrow-field, high-spatial-resolution infrared images are combined into a wide-field, high-spatial-resolution panoramic image, thereby improving image quality and the efficiency and accuracy of image splicing.
In order to achieve the purpose, the following technical scheme is adopted in the disclosure:
the first aspect of the disclosure provides an infrared image stitching method.
An infrared image stitching method comprises the following processes:
acquiring an infrared image to be spliced;
preprocessing the acquired infrared image;
extracting characteristic points of the electrical equipment area in the preprocessed infrared image;
carrying out feature point description on the infrared image after feature point extraction;
based on the infrared image after the feature point description, performing feature matching on the infrared image to be spliced by adopting a nearest neighbor approximation search algorithm;
based on the infrared images after the characteristic matching, performing similarity calculation on the infrared images to be spliced by adopting a self-adaptive similarity calculation method, and determining the splicing sequence of the infrared images;
and based on the splicing sequence of the infrared images, carrying out image fusion by adopting a weighted image fusion algorithm based on the distance coefficient.
Further, the infrared image is preprocessed, which comprises: and carrying out image smoothing and histogram equalization processing on the infrared image.
Further, extracting feature points of the electrical equipment region in the preprocessed infrared image by using a FAST algorithm, and performing feature description on the infrared image after feature extraction by using BRIEF (Binary Robust Independent Elementary Features), which includes the following steps:
carrying out coarse extraction on the preprocessed infrared image to obtain a plurality of characteristic points;
inputting a plurality of pixels on the circumference of the feature point into a decision tree by adopting a decision tree algorithm, and screening out the optimal FAST (Features from Accelerated Segment Test) feature points;
calculating the response size of each feature point, reserving the feature points with response values larger than a preset value, and deleting the rest feature points;
randomly selecting N pairs of pixel points in the neighborhood of one feature point;
and comparing the gray values of each pair of pixel points: if the gray value of the first pixel point is greater than that of the second pixel point, a 1 is generated in the binary string, otherwise a 0; finally, each feature point generates a binary string of length N.
Further, generating a feature descriptor by using ResNet based on the feature points and the feature description comprises the following processes:
according to the existing feature point set, cutting out feature sub-blocks with preset pixel sizes by taking each electrical feature point as a center, and constructing an electrical feature sub-block set of the image;
carrying out random scale change, rotary scaling and brightness fine adjustment on the image of each electrical characteristic sub-block to obtain an amplified image data set;
and calculating the electrical feature extraction rate based on the trained residual error neural network.
Furthermore, the dimension reduction processing is carried out on the feature descriptors, and the process comprises the following steps:
and performing feature fusion on the BRIEF feature descriptor and the feature descriptor generated by the ResNet network by using the trained FAE network to obtain the final feature descriptor after dimension reduction.
Further, a nearest neighbor approximation search algorithm is adopted to perform feature matching on the infrared images to be spliced, one of the images is subjected to image coordinate transformation, and weighted image fusion algorithm based on distance coefficients is adopted to perform image fusion on the images subjected to coordinate transformation, and the method comprises the following processes:
constructing a mapping transformation matrix, carrying out mapping transformation on the other image by taking one image as a reference, and obtaining a mapping transformation matrix according to the obtained matching characteristic point pairs;
calculating the pixel value of the image to be transformed and the mapping transformation matrix to obtain an image after coordinate transformation;
and for the image after coordinate transformation, multiplying the pixel value of each image by a distance coefficient in the overlapping area of the two images, and then overlapping, wherein the distance coefficient is obtained according to the distance between the pixel point of the overlapping area and the seam.
Further, the method for calculating the similarity of the infrared images to be spliced by adopting a self-adaptive similarity calculation method comprises the following steps:
selecting an image from the images to be spliced;
calculating the similarity of the current image and all other infrared images, sequencing the similarity, selecting the image with the highest similarity, and splicing the current image and the image with the highest similarity;
and taking the image with the highest similarity as a reference, calculating the similarity between the image with the highest similarity and other images, and selecting the image with the highest similarity again for splicing until all the images are spliced.
A second aspect of the present disclosure provides an infrared image stitching system.
An infrared image stitching system, comprising:
an image acquisition module configured to: acquiring an infrared image to be spliced;
a pre-processing module configured to: preprocessing the acquired infrared image;
a feature point extraction module configured to: extracting characteristic points of the electrical equipment area in the preprocessed infrared image;
a feature point description module configured to: carrying out feature point description on the infrared image after feature point extraction;
a feature matching module configured to: based on the infrared image after the feature point description, performing feature matching on the infrared image to be spliced by adopting a nearest neighbor approximation search algorithm;
a stitching order determination module configured to: based on the infrared images after the characteristic matching, performing similarity calculation on the infrared images to be spliced by adopting a self-adaptive similarity calculation method, and determining the splicing sequence of the infrared images;
an image fusion module configured to: and based on the splicing sequence of the infrared images, carrying out image fusion by adopting a weighted image fusion algorithm based on the distance coefficient.
A third aspect of the present disclosure provides a computer-readable storage medium, on which a program is stored, which when executed by a processor, implements the steps in the infrared image stitching method according to the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides an electronic device, including a memory, a processor, and a program stored in the memory and executable on the processor, where the processor executes the program to implement the steps in the infrared image stitching method according to the first aspect of the present disclosure.
Compared with the prior art, the beneficial effect of this disclosure is:
1. The method, the system, the medium or the electronic equipment can synthesize a plurality of narrow-field, high-spatial-resolution infrared images into a wide-field, high-spatial-resolution panoramic image, thereby improving image quality and the efficiency and accuracy of image splicing.
2. The method, the system, the medium or the electronic equipment disclosed by the disclosure are based on a FAST algorithm and a BRIEF feature description algorithm, and the residual error neural network ResNet and the depth fusion encoder FAE are fused into the feature extraction and description algorithm, so that the defect that only the image shallow feature can be extracted in the existing feature extraction algorithm is overcome.
3. The method, the system, the medium or the electronic equipment disclosed by the disclosure utilize ResNet to carry out deep excavation on the electrical characteristics of the infrared image, so that the accuracy of characteristic extraction is improved.
4. According to the method, the system, the medium or the electronic equipment, the depth fusion encoder FAE is used for performing dimension reduction processing on the obtained electrical feature descriptor, redundant information and noise information are removed, the integral data volume of image features is reduced, and the calculation complexity is reduced.
5. The method, the system, the medium or the electronic equipment disclosed by the disclosure can better realize seamless splicing aiming at the infrared image by adopting a weighted image fusion algorithm based on the distance coefficient.
6. According to the method, the system, the medium or the electronic equipment, based on the infrared images after feature matching, the similarity calculation is carried out on the infrared images to be spliced by adopting a self-adaptive similarity calculation method, the splicing sequence of the infrared images is determined, and splicing is carried out according to the preset splicing sequence, so that the splicing efficiency and accuracy are greatly improved.
Advantages of additional aspects of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
Fig. 1 is a flowchart of an infrared image stitching method provided in embodiment 1 of the present disclosure.
FIG. 2 is a schematic diagram of the coarse extraction of feature points provided in example 1 of the present disclosure.
Fig. 3 is a schematic structural diagram of the neural network ResNet18 provided in embodiment 1 of the present disclosure after a decoding network is added to an output layer.
Fig. 4 is a schematic diagram of a feature descriptor after dimension reduction provided in embodiment 1 of the present disclosure.
Fig. 5 is a schematic diagram of distance coefficients provided in embodiment 1 of the present disclosure.
Fig. 6 is a schematic diagram of a preprocessed infrared image according to embodiment 1 of the disclosure;
fig. 7 is a diagram of a feature extraction result provided in embodiment 1 of the present disclosure.
Fig. 8 is a graph of the feature point matching result of the infrared images to be stitched provided in embodiment 1 of the present disclosure.
Fig. 9 is a graph of infrared image stitching results provided in embodiment 1 of the present disclosure.
Fig. 10 is a schematic structural diagram of an infrared image rapid stitching system provided in embodiment 1 of the present disclosure.
Detailed Description
The present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
Example 1:
the embodiment 1 of the present disclosure provides an infrared image stitching method, as shown in fig. 1, including the following steps:
step 101: and acquiring the infrared images to be spliced.
Step 102: and preprocessing the infrared images to be spliced.
Step 103: and extracting the characteristic points of the electrical equipment area in the preprocessed infrared image by using a FAST algorithm.
Step 104: and describing the characteristic points of the infrared image after the characteristic points are extracted by adopting an FAE-BRIEF algorithm.
Step 105: and performing feature matching on the infrared image to be spliced by adopting a rapid nearest neighbor approximation search algorithm based on the infrared image after feature point description.
Step 106: and performing similarity calculation on the infrared images to be spliced by adopting a self-adaptive similarity calculation method based on the infrared images after the characteristic matching, and determining the splicing sequence of the infrared images.
Step 107: and performing image fusion by adopting a weighted image fusion algorithm based on a distance coefficient based on the splicing sequence.
Specifically, the method comprises the following steps:
1. infrared image preprocessing
The infrared image preprocessing comprises image smoothing processing and histogram equalization, and is specifically introduced as follows:
1. image smoothing
The infrared image is formed by thermal radiation: it is imaged from the infrared radiation difference between the object and the detecting instrument. Since different objects, or different parts of the same object, usually have different thermal radiation characteristics, the temperature difference and emissivity of the target object can essentially be reflected by the infrared image. Owing to random influences of the external environment and imperfections of the infrared acquisition equipment, various kinds of noise easily appear in the infrared image, blurring the target edge information and making the contrast between target and background indistinct.
Firstly, the image is smoothed; in this embodiment the median filtering method is selected. The images to be stitched are read in spatial order in Python. For an image, a window of fixed size is given, and each pixel value in the image is replaced by the median of all pixel values in the window. The median is obtained by sorting all gray values in the window by size: when the total number of pixels in the window is odd, the middle value is the median; when the total number is even, the median is the average of the two middle values.
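As an illustration of this step, a minimal Python sketch using OpenCV is given below; the file name and the 5×5 window size are illustrative choices rather than values fixed by this embodiment.

```python
import cv2

# Read one infrared frame to be spliced (file name is illustrative).
img = cv2.imread("ir_frame_01.png", cv2.IMREAD_GRAYSCALE)

# Median filtering: each output pixel is the median of the gray values
# inside a fixed-size window (here 5x5) centred on that pixel.
smoothed = cv2.medianBlur(img, 5)
```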
2. Histogram equalization
Due to the thermal balance of external objects, the transmission distance of the scene, atmospheric attenuation and other factors, the infrared image of electrical equipment has strong spatial correlation, and extremely low contrast and blurring easily occur, so histogram equalization needs to be performed to make the image clearer.
The smoothed image is read, the gray values of all pixel points are extracted and normalized, and the normalized pixel values are arranged back at their original positions to obtain the histogram-equalized image.
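Continuing from the smoothed image above, a corresponding sketch of this step, assuming OpenCV's global histogram equalization is an acceptable stand-in for the normalization procedure described here:

```python
import cv2

# 'smoothed' is the median-filtered gray-scale image from the previous sketch.
equalized = cv2.equalizeHist(smoothed)   # spread the gray levels to raise contrast
cv2.imwrite("ir_frame_01_preprocessed.png", equalized)
```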
2. Infrared image electrical characteristic point extraction and description
After the infrared images are preprocessed, the present embodiment performs feature point extraction and description on the electrical equipment area of each infrared image. In order to overcome the defect that only the shallow features of an image can be extracted in the existing feature extraction algorithm, the residual neural network ResNet and the depth fusion encoder FAE are fused into the feature extraction and description algorithm based on the FAST algorithm and the BRIEF feature description algorithm. The specific steps are described as follows:
1. FAST feature extraction and BRIEF feature description
In this embodiment, first, electrical feature extraction and feature description are performed on an infrared image by using a conventional algorithm.
The method comprises the following steps: the method comprises the following steps:
As shown in fig. 2, a point P is selected from the electrical equipment area in the infrared image; to determine whether P is a feature point, a circle with a radius of 3 pixels is drawn with P as the center. If the gray values of n consecutive pixel points on the circumference are all larger than, or all smaller than, the gray value of point P, then P is considered a feature point; typically n is set to 12. To accelerate feature point extraction and quickly eliminate non-feature points, the gray values at positions 1, 9, 5 and 13 are detected first: if P is a feature point, then 3 or more of the pixel values at these four positions are all larger than, or all smaller than, the gray value of P; if not, the point is directly discarded.
Step two: and (4) screening feature points. And (3) for a large number of feature points selected in the step one, inputting 16 pixels on the circumference of the feature points into a decision tree by adopting a decision tree algorithm, and screening out the optimal FAST feature points.
Step three: the non-maximum suppresses the extraction of locally dense feature points. The response size is calculated for each feature point. The calculation method is the sum of the absolute values of the deviations of the feature point P and its surrounding 16 feature points. And in the comparison of adjacent characteristic points, keeping the characteristic point with a larger response value, and deleting the rest characteristic points.
Step four: and selecting pixel points. BRIEF feature description will first arbitrarily choose n pairs of pixel points p in the neighborhood of a feature point i 、q i (i =1,2, \ 8230;, n), generally, the value of n is 128, 256 or 512, and 128 is selected in this embodiment.
Step five: and generating the feature description. After selecting n pairs of pixel points, BRIEF compares the gray value of each pair of points. If I (p) i )>I(q i ) A 1 in the binary string is generated, otherwise it is 0. Finally, a feature point will generate a binary string of length n.
2. Feature descriptor generation using ResNet
On the basis of obtaining the feature points and the feature descriptors of the infrared image by using a traditional algorithm, the present embodiment uses ResNet to perform deep mining on the electrical features of the infrared image. In consideration of the limited amount of the existing infrared image data, the embodiment also applies the infrared image data enhancement and the transfer learning to the feature extraction. The specific steps are described below.
The method comprises the following steps: and (4) extracting the infrared image electrical characteristic sub-blocks. For an infrared image, feature sub-blocks with the size of 64 x 64 pixels are cut out by taking each electrical feature point in an existing feature point set as the center, and an electrical feature sub-block set of the image is constructed.
Step two: the image data is enhanced. And (3) performing image data enhancement on each 64 × 64 electrical characteristic sub-block, namely performing random scale change, rotation scaling and brightness fine adjustment on the selected images to obtain an amplified image data set.
Step three: and constructing a residual error neural network. The existing ResNet18 is first trimmed. The final output layer of the original ResNet18 network is used for outputting the classification result of the image, and the number of output elements of the network is changed to 128 in the embodiment. The trimmed ResNet18 network judges each electrical feature sub-block, if the feature sub-block does not belong to the electrical region, the feature sub-block is discarded, if the feature sub-block belongs to the electrical region, the feature sub-block is reserved, a 128-dimensional feature descriptor is generated, and the generated feature descriptor has scale invariance and rotation invariance due to the fact that the input feature sub-block set is subjected to data enhancement.
Considering that training of the network is performed next, the present embodiment adds a decoding network after the output layer of the ResNet 18. The decoding network is composed of a plurality of fully connected layers and an activation function. The role of the decoding network is to reconstruct the input feature sub-block images. The final network structure is shown in fig. 3.
Assuming that the input data set is n 64 × 64 feature sub-block images, n 128-dimensional feature descriptors are generated at the output layer after passing through the network of ResNet 18. In order to ensure the generation quality of the feature descriptors in the network, the network is trained by using the FLIR infrared data set.
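A minimal PyTorch sketch of the trimmed ResNet18 with the added decoding network is given below, assuming torchvision is available; the decoder widths and the use of a 3-channel input (gray sub-blocks replicated across channels) are assumptions rather than values fixed by this embodiment.

```python
import torch
import torch.nn as nn
from torchvision import models

class ElectricalFeatureNet(nn.Module):
    """ResNet18 trimmed to a 128-d descriptor plus a decoding branch that
    reconstructs the 64x64 input sub-block for training (sketch)."""

    def __init__(self, descriptor_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained=False on older torchvision
        backbone.fc = nn.Linear(backbone.fc.in_features, descriptor_dim)
        self.encoder = backbone
        # Decoding network: fully connected layers and activation functions
        # mapping the descriptor back to a flattened 64x64 image.
        self.decoder = nn.Sequential(
            nn.Linear(descriptor_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, 3, 64, 64) feature sub-blocks
        descriptor = self.encoder(x)       # (N, 128) descriptor per sub-block
        recon = self.decoder(descriptor)   # (N, 4096) flattened reconstruction
        return descriptor, recon.view(-1, 1, 64, 64)
```

During training, a reconstruction loss between the decoder output and the input sub-blocks would play the role of the objective described above.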
Step four: and calculating the electrical feature extraction rate. Compared with the traditional feature extraction algorithm, the feature extraction algorithm of the embodiment mainly extracts the features of the electrical image region of the image, so the embodiment provides a calculation method of the electrical feature extraction rate, and the calculation formula is the ratio of the number of the extracted electrical feature points to the total number of all the feature points in the whole image.
3. Feature descriptor fusion for FAE networks
In this embodiment, a 256-dimensional feature descriptor is obtained by using a conventional feature extraction method and transfer learning. In order to remove redundant information and noise information, reduce the data size of the whole image feature, and reduce the computational complexity, the embodiment performs dimension reduction processing on the obtained electrical feature descriptor by using a depth fusion encoder FAE.
The method comprises the following steps: and constructing the FAE network. The deep fusion device has the main functions of deep expression of dimensionality reduction and feature mining, the final data size and the dimensionality reduction effect are comprehensively considered, a 5-layer FAE network is constructed in the embodiment, the dimensionality of the feature descriptor is reduced to 128 dimensions, and the structure of the network is shown in FIG. 4.
The network mainly uses fully connected layers to fuse and compress the input features; after two feature fusion steps, the features are compressed to 128 dimensions. Meanwhile, to ensure the quality of the feature fusion, the network raises the dimension of the fused features again, computes the root mean square error between the dimension-raised feature descriptors and the input feature descriptors, and takes this error as the objective function for network training.
Step two: and (5) feature fusion. In this embodiment, a FLIR infrared data set is used to train a network, and the trained network is used to perform feature fusion on the feature descriptors generated by the BRIEF feature descriptor and the ResNet feature descriptor, so as to finally obtain a 128-dimensional feature descriptor.
3. Panoramic infrared image stitching
After the electrical feature extraction and feature description of each infrared image are completed, feature point matching and image coordinate transformation must still be completed before the final image splicing can be carried out. The specific steps are described below.
1. Feature point matching
For two images with adjacent spatial positions but different shooting angles, feature points of the two images need to be matched so as to complete the coordinate conversion of the images for one image. Common matching algorithms are brute force matching algorithms and fast nearest neighbor search algorithms. In this embodiment, a fast nearest neighbor approximation search algorithm is used to perform feature point matching.
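Continuing from the fused 128-dimensional descriptors described above (stored as float32 arrays), a sketch of the fast nearest neighbor search with a ratio test; the KD-tree parameters and the 0.7 ratio threshold are common defaults, not values fixed by this embodiment.

```python
import numpy as np
import cv2

# desc_a, desc_b: (N, 128) fused descriptors of two adjacent images, as float32.
FLANN_INDEX_KDTREE = 1
flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                              dict(checks=50))
knn = flann.knnMatch(desc_a.astype(np.float32),
                     desc_b.astype(np.float32), k=2)

# Keep a match only when it is clearly better than the second-best candidate.
good = [pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
```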
2. Image coordinate transformation
Because the two images have the problem of inconsistent shooting angles, the two images cannot be directly overlapped and spliced, and one image needs to be subjected to image coordinate transformation.
The method comprises the following steps: and constructing a mapping transformation matrix. Two images a and B adjacent in spatial position but having different imaging angles are mapped on image B with image a as a reference. And obtaining a mapping change matrix H according to the obtained matching characteristic point pairs.
Step two: and (5) converting image coordinates. And (5) operating the pixel value of the image B and the mapping transformation matrix H to obtain a transformed image B'.
3. Image fusion
The image coordinate transformation has unified the coordinates of the two images into one coordinate system, so the two images can in principle be directly spliced into a panoramic image. Ideally, the gray values on both sides of the image seam are consistent. In practice, because the internal camera parameters differ between exposures and because the shooting environment introduces transformation and matching errors, direct splicing leaves obvious seams and ghosting or distortion at the junction.
The embodiment adopts a weighted image fusion algorithm based on the distance coefficient, and the algorithm can well realize seamless splicing for the infrared image.
In the overlapping region of the two images, this embodiment multiplies the pixel value of each image by a distance coefficient k_i before superposition. The distance coefficient is illustrated in fig. 5:
In the figure, f(x, y) is a pixel point of the overlapping region, and a and b are the distances from f(x, y) to seam 1 and seam 2, respectively, which define the distance coefficients k_1 and k_2:
k_1 = b / (a + b)    (1)
k_2 = a / (a + b)    (2)
It can be seen that k_1 + k_2 = 1. It can also be seen from the figure that the distance coefficient is applicable to the fusion of horizontal and vertical images as well as to the fusion of rotated images.
The calculation formula of the gray value of the pixel point in the overlapping area is as follows:
f(x, y) = k_1 × f_A(x, y) + k_2 × f_B(x, y)    (3)
where (x, y) ∈ (f_A ∩ f_B), and f_A(x, y) and f_B(x, y) are the gray values of the pixel points of the reference image and the image to be spliced, respectively.
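A NumPy sketch of formula (3) for a horizontal overlap is given below; the assignment of k_1 to the seam-1 side (so the reference image dominates near its own seam) is an assumption consistent with k_1 + k_2 = 1, and the two images are assumed to be already warped onto a common canvas.

```python
import numpy as np

def blend_overlap(f_a, f_b, x_left, x_right):
    """Distance-coefficient weighted fusion of a horizontal overlap (sketch).

    f_a, f_b : aligned gray images on the same canvas (reference / warped)
    x_left   : column of seam 1 (start of the overlap)
    x_right  : column of seam 2 (end of the overlap)
    """
    out = f_a.astype(np.float32).copy()
    out[:, x_right:] = f_b[:, x_right:]              # region covered only by B

    cols = np.arange(x_left, x_right, dtype=np.float32)
    a = cols - x_left                                # distance to seam 1
    b = x_right - cols                               # distance to seam 2
    k1, k2 = b / (a + b), a / (a + b)                # k1 + k2 = 1
    out[:, x_left:x_right] = (k1 * f_a[:, x_left:x_right]
                              + k2 * f_b[:, x_left:x_right])
    return np.clip(out, 0, 255).astype(np.uint8)
```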
4. Multi-frame infrared image fusion
In consideration of the situation that a plurality of unordered infrared images need to be spliced in an actual application scene, the embodiment provides a sorting algorithm for splicing the plurality of unordered infrared images according to the most possible sequence.
The method comprises the following steps: adaptive image similarity algorithm
Considering two infrared images that may have an overlapping region, feature points of the two images are first extracted by the FAST algorithm; the feature points of the overlapping regions of the two images are denoted X_i (i = 1, 2, 3, …, n) and Y_j (j = 1, 2, 3, …, m), respectively. The feature points are then described using the FAE-BRIEF algorithm, and the fast nearest neighbor approximation search algorithm is used to match the feature points of the two images, giving matched feature point pairs denoted P_q (q = 1, 2, 3, …, k). This embodiment proposes a formula for evaluating the similarity S of the two images:
[Formula (4) defines the similarity S of the two images in terms of the number k of matched feature point pairs and the numbers n and m of overlapping-region feature points.]
step two: determining a stitching order
Suppose there are L infrared images to be spliced. First, an image A is selected from the images to be spliced; taking image A as the reference, the similarity between A and all the other infrared images is calculated and sorted, the image B with the highest similarity is selected, and the two images are spliced. Then, taking image B as the reference, the similarity between B and the remaining images (excluding A and B) is calculated, the image C with the highest similarity is selected for splicing, and so on, until all images are spliced.
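The ordering procedure can be sketched as a greedy loop; here 'similarity' stands for any implementation of formula (4) (for example, one built on the matcher sketched earlier), and starting from the first image is an arbitrary choice.

```python
def stitching_order(images, similarity):
    """Greedy ordering sketch: start from images[0] and repeatedly append the
    not-yet-stitched image most similar to the one just added."""
    order = [0]
    remaining = set(range(1, len(images)))
    while remaining:
        ref = order[-1]
        nxt = max(remaining, key=lambda j: similarity(images[ref], images[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```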
Specifically, in this embodiment, an infrared image stitching experiment is performed by taking 9 transformer bushing infrared images as an example.
(1) Infrared image preprocessing
The image smoothing and histogram equalization operations are performed on the infrared image, and the result is shown in fig. 6, where part (a) in fig. 6 is the original image, part (b) is the image after image smoothing, and part (c) is the image after histogram equalization.
The noise in the original image can be eliminated well by the image smoothing, and the contrast of the image can be improved by the histogram equalization, so that the main body part is more prominent.
(2) Infrared image electrical characteristic point extraction and description
And extracting electrical characteristic points of the preprocessed infrared image by using a FAST algorithm, firstly selecting an electrical equipment area in the infrared image, and describing the characteristic points by using an improved BRIEF characteristic descriptor. The feature extraction result is shown in fig. 7, and the feature description result of part of the feature points is shown in table 1.
Table 1: partial infrared image feature point coordinate and BRIEF feature descriptor
According to the improved BRIEF algorithm, 64 × 64 feature pixel blocks near the electrical feature points are used as ResNet input, the feature points of the non-electrical region are discarded, the feature points of the electrical region are reserved, and a 128-dimensional feature descriptor is generated, wherein the results of part of the feature descriptor are shown in Table 2. The final calculation result shows that the electrical feature extraction rate reaches 85.6%, and is greatly improved compared with 68.7% of the electrical feature extraction rate of the traditional feature extraction algorithm.
Table 2: and the coordinates of the characteristic points of the partial infrared image and the characteristic descriptors generated by RESNET.
Finally, the obtained BRIEF feature descriptor and the feature descriptor generated by ResNet are fused using the FAE network, finally generating a 128-dimensional feature descriptor. The partial fused feature descriptor results are shown in table 3.
Table 3: partial infrared image feature point coordinate and fusion feature descriptor
(3) Infrared image feature point matching and determination of the splicing order.
After the feature description of the infrared images is completed, the feature points of the two images to be spliced are matched. The results are shown in fig. 8.
Since there is a matching relationship between a plurality of infrared images (9 in this embodiment), before completing the panorama stitching, the similarity between each of the images needs to be determined, and according to the similarity calculation formula, the result of the similarity between the infrared images can be obtained as shown in table 4.
Table 4: similarity between infrared images
According to the similarity between the infrared images, the stitching order is finally determined as 1 → 5 → 8 → 6 → 4 → 7 → 3 → 2 → 9, and after the stitching order is determined, all the infrared images are sequentially stitched.
(4) Infrared image fusion
And finally finishing disordered infrared image splicing according to the splicing sequence determined in the previous step, wherein the result is shown in fig. 9.
Example 2:
an embodiment 2 of the present disclosure provides an infrared image stitching system, as shown in fig. 10, including:
and the image acquisition module 201 is used for acquiring the infrared images to be spliced.
And the preprocessing module 202 is configured to preprocess the infrared images to be spliced.
And the characteristic point extraction module 203 is configured to extract characteristic points of the electrical equipment region in the preprocessed infrared image by using a FAST algorithm.
And the feature point description module 204 is configured to perform feature point description on the infrared image after feature point extraction by using a FAE-BRIEF algorithm.
And the feature matching module 205 is configured to perform feature matching on the infrared image to be spliced by using a fast nearest neighbor search algorithm based on the infrared image after feature point description.
And the splicing sequence determining module 206 is configured to perform similarity calculation on the infrared images to be spliced by using a self-adaptive similarity calculation method based on the infrared images after feature matching, and determine a splicing sequence of the infrared images.
And the image fusion module 207 is used for carrying out image fusion by adopting a weighted image fusion algorithm based on the distance coefficient based on the splicing sequence.
Example 3:
the embodiment 3 of the present disclosure provides a computer-readable storage medium, on which a program is stored, which when executed by a processor, implements the steps in the infrared image stitching method according to the embodiment 1 of the present disclosure.
Example 4:
the embodiment 4 of the present disclosure provides an electronic device, which includes a memory, a processor, and a program stored in the memory and executable on the processor, and when the processor executes the program, the steps in the infrared image stitching method according to the embodiment 1 of the present disclosure are implemented.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (6)

1. An infrared image splicing method is characterized by comprising the following steps: the method comprises the following steps:
acquiring an infrared image to be spliced, and preprocessing the acquired infrared image;
extracting feature points of an electrical equipment area in the preprocessed infrared image by adopting a FAST algorithm, and performing feature description on the infrared image after feature extraction by adopting BRIEF, wherein the method comprises the following steps: carrying out rough extraction on the preprocessed infrared image to obtain a plurality of characteristic points, wherein the rough extraction specifically comprises the following steps: selecting a point P from an electrical equipment area in an infrared image, judging whether the point P is a characteristic point, drawing a circle with the radius of 3 pixels by taking the P as the center of the circle, considering the P as the characteristic point if the gray value of continuous n pixel points on the circle is larger or smaller than the gray value of the P point, quickly eliminating the non-characteristic point in order to accelerate the extraction of the characteristic point, firstly detecting the gray values of 1, 9, 5 and 13 positions, if the P is the characteristic point, then 3 or more than 3 pixel values on the four positions are larger or smaller than the gray value of the P point, and if the P is not the characteristic point, directly eliminating the point; inputting a plurality of pixels on the circumference of the feature point into a decision tree by adopting a decision tree algorithm, and screening out the optimal FAST feature point; calculating the response size of each feature point, reserving the feature points with response values larger than a preset value, and deleting the rest feature points, wherein the specific calculation mode is as follows: the sum of absolute values of deviations of the feature point P and a plurality of feature points around the feature point P; randomly selecting N pairs of pixel points in the neighborhood of one feature point, comparing the gray value of each pair of pixel points, if the gray value of a first pixel point is larger than that of a second pixel point, generating 1 in a binary string, and if not, 0, finally generating a binary string with the length of N by one feature point;
generating feature descriptors by using ResNet based on the feature points and the feature descriptions, comprising the following processes: according to the existing feature point set, cutting out feature sub-blocks with preset pixel sizes by taking each electrical feature point as a center, and constructing an electrical feature sub-block set of the image; carrying out random scale change, rotary scaling and brightness fine adjustment on the image of each electrical characteristic sub-block to obtain an amplified image data set; fine adjustment is carried out on the existing ResNet18, and each electrical characteristic sub-block is judged by the fine-adjusted ResNet18 network; a decoding network is added behind an output layer of the ResNet18, and the decoding network consists of a plurality of full connection layers and an activation function; calculating an electrical feature extraction rate based on the trained residual error neural network;
based on the infrared image after the feature point description, performing feature matching on the infrared image to be spliced by adopting a nearest neighbor approximation search algorithm;
based on the infrared images after the characteristic matching, performing similarity calculation on the infrared images to be spliced by adopting a self-adaptive similarity calculation method, and determining the splicing sequence of the infrared images;
based on the splicing sequence of the infrared images, adopting a weighted image fusion algorithm based on a distance coefficient to perform image fusion;
the method comprises the following steps of performing feature matching on infrared images to be spliced by adopting a nearest neighbor approximation search algorithm, performing image coordinate transformation on one of the images, and performing image fusion on the image subjected to the coordinate transformation by adopting a weighted image fusion algorithm based on a distance coefficient, wherein the method comprises the following processes:
constructing a mapping transformation matrix, carrying out mapping transformation on the other image by taking one image as a reference, and obtaining a mapping transformation matrix according to the obtained matching characteristic point pairs; calculating the pixel value of the image to be transformed and the mapping transformation matrix to obtain an image after coordinate transformation; for the image with the coordinate transformation completed, in the overlapping area of the two images, multiplying the pixel value of each image by a distance coefficient for superposition, wherein the distance coefficient is obtained according to the distance between the pixel point of the overlapping area and the seam;
the method comprises the following steps of performing similarity calculation on infrared images to be spliced by adopting a self-adaptive similarity calculation method:
selecting an image from the images to be spliced, calculating the similarity of the current image and all other infrared images, sequencing the similarity, selecting the image with the highest similarity, and splicing the current image and the image with the highest similarity; and taking the image with the highest similarity as a reference, calculating the similarity between the image with the highest similarity and other images, and selecting the image with the highest similarity again for splicing until all the images are spliced.
2. The infrared image stitching method according to claim 1, characterized in that:
preprocessing the infrared image, comprising: and carrying out image smoothing and histogram equalization processing on the infrared image.
3. The infrared image stitching method according to claim 1, characterized in that:
and performing dimension reduction processing on the feature descriptor, wherein the dimension reduction processing comprises the following processes:
and performing feature fusion on the BRIEF feature descriptor and the feature descriptor generated by the ResNet network by using the trained FAE network to obtain the final feature descriptor after dimension reduction.
4. An infrared image stitching system for implementing the infrared image stitching method according to any one of claims 1 to 3, characterized in that: the method comprises the following steps:
an image acquisition module configured to: acquiring an infrared image to be spliced;
a pre-processing module configured to: preprocessing the acquired infrared image;
a feature point extraction module configured to: extracting characteristic points of the electrical equipment area in the preprocessed infrared image;
a feature point description module configured to: carrying out feature point description on the infrared image after feature point extraction;
a feature matching module configured to: based on the infrared image after the feature point description, performing feature matching on the infrared image to be spliced by adopting a nearest neighbor approximation search algorithm;
a stitching order determination module configured to: based on the infrared images after the characteristic matching, performing similarity calculation on the infrared images to be spliced by adopting a self-adaptive similarity calculation method, and determining the splicing sequence of the infrared images;
an image fusion module configured to: based on the splicing sequence of the infrared images, adopting a weighted image fusion algorithm based on a distance coefficient to perform image fusion;
the method comprises the following steps of performing feature matching on infrared images to be spliced by adopting a nearest neighbor approximation search algorithm, performing image coordinate transformation on one of the images, and performing image fusion on the image subjected to the coordinate transformation by adopting a weighted image fusion algorithm based on a distance coefficient, wherein the weighted image fusion algorithm comprises the following processes:
constructing a mapping transformation matrix: taking one image as the reference, obtaining the mapping transformation matrix from the obtained matched feature point pairs, and applying the mapping transformation to the other image;
computing the coordinate-transformed image from the pixel values of the image to be transformed and the mapping transformation matrix;
for the coordinate-transformed image, superimposing the two images in their overlapping area by multiplying the pixel value of each image by a distance coefficient, wherein the distance coefficient is obtained from the distance between a pixel point in the overlapping area and the seam;
and wherein performing similarity calculation on the infrared images to be stitched by the adaptive similarity calculation method comprises the following steps:
selecting an image from the images to be stitched;
calculating the similarity between the current image and every other infrared image, sorting the similarities, selecting the image with the highest similarity, and stitching the current image with that image;
taking the image with the highest similarity as the new reference, calculating its similarity with the remaining images, and again selecting the most similar image for stitching, until all the images have been stitched.
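A short Python sketch of the greedy ordering implied by the adaptive similarity step is given below; the `similarity` function (for example, the fraction of matched feature points between two images) is a caller-supplied placeholder, not something defined by the patent.

```python
def stitching_order(images, similarity):
    """Greedy ordering: start from one image, then repeatedly pick the most
    similar remaining image as the next one to stitch."""
    order = [0]                          # start from an arbitrary first image
    remaining = set(range(1, len(images)))
    current = 0
    while remaining:
        # Rank the remaining images by similarity to the current reference
        best = max(remaining, key=lambda j: similarity(images[current], images[j]))
        order.append(best)
        remaining.remove(best)
        current = best                   # newly stitched image becomes the reference
    return order
```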
5. A computer-readable storage medium on which a program is stored, which, when executed by a processor, carries out the steps of the infrared image stitching method according to any one of claims 1 to 3.
6. An electronic device comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the steps of the infrared image stitching method according to any one of claims 1 to 3 when executing the program.
CN202110351133.3A 2021-03-31 2021-03-31 Infrared image splicing method, system, medium and electronic equipment Active CN113112403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110351133.3A CN113112403B (en) 2021-03-31 2021-03-31 Infrared image splicing method, system, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110351133.3A CN113112403B (en) 2021-03-31 2021-03-31 Infrared image splicing method, system, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113112403A CN113112403A (en) 2021-07-13
CN113112403B true CN113112403B (en) 2023-03-24

Family

ID=76713207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110351133.3A Active CN113112403B (en) 2021-03-31 2021-03-31 Infrared image splicing method, system, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113112403B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115919B (en) * 2022-06-24 2023-05-05 国网智能电网研究院有限公司 Power grid equipment thermal defect identification method and device
CN117011147B (en) * 2023-10-07 2024-01-12 之江实验室 Infrared remote sensing image feature detection and splicing method and device
CN117745537B (en) * 2024-02-21 2024-05-17 微牌科技(浙江)有限公司 Tunnel equipment temperature detection method, device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390275B (en) * 2013-07-19 2016-03-30 香港应用科技研究院有限公司 The method of dynamical image joining
CN107784632A (en) * 2016-08-26 2018-03-09 南京理工大学 A kind of infrared panorama map generalization method based on infra-red thermal imaging system
CN110046599A (en) * 2019-04-23 2019-07-23 东北大学 Intelligent control method based on depth integration neural network pedestrian weight identification technology
CN112365404B (en) * 2020-11-23 2023-03-17 成都唐源电气股份有限公司 Contact net panoramic image splicing method, system and equipment based on multiple cameras

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416732A (en) * 2018-02-02 2018-08-17 重庆邮电大学 A kind of Panorama Mosaic method based on image registration and multi-resolution Fusion

Also Published As

Publication number Publication date
CN113112403A (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN113112403B (en) Infrared image splicing method, system, medium and electronic equipment
CN109684924B (en) Face living body detection method and device
CN107253485B Foreign matter intrusion detection method and foreign matter intrusion detection device
CN105957015B 360-degree panoramic stitching method and system for threaded barrel inner wall images
CN109410207B Transmission line detection method for unmanned aerial vehicle line inspection images based on NCC (normalized cross-correlation) features
CN105205781B Aerial image stitching method for power transmission lines
CN111445389A (en) Wide-view-angle rapid splicing method for high-resolution images
Yang et al. Progressively complementary network for fisheye image rectification using appearance flow
Hong et al. Unsupervised homography estimation with coplanarity-aware gan
CN111667470B (en) Industrial pipeline flaw detection inner wall detection method based on digital image
CN113628261B (en) Infrared and visible light image registration method in electric power inspection scene
CN109509163B (en) FGF-based multi-focus image fusion method and system
CN113744153B (en) Double-branch image restoration forgery detection method, system, equipment and storage medium
CN111738211B (en) PTZ camera moving object detection and recognition method based on dynamic background compensation and deep learning
CN115830004A (en) Surface defect detection method, device, computer equipment and storage medium
Zhao et al. Learning perspective undistortion of portraits
CN114972312A (en) Improved insulator defect detection method based on YOLOv4-Tiny
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
Yu et al. Detecting line segments in motion-blurred images with events
Zhu et al. Progressive feedback-enhanced transformer for image forgery localization
Yuan et al. Structure flow-guided network for real depth super-resolution
Song et al. Unsupervised Deep Asymmetric Stereo Matching with Spatially-Adaptive Self-Similarity
CN111696090A (en) Method for evaluating quality of face image in unconstrained environment
CN117036235A (en) Relay protection cabinet terminal wire arrangement sequence detection method
CN116363468A (en) Multi-mode saliency target detection method based on feature correction and fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant