CN111861866A - Panoramic reconstruction method for substation equipment inspection image - Google Patents
- Publication number
- CN111861866A (application CN202010624342.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- matching
- points
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/14
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/757—Matching configurations of points or features
- G06T2207/20221—Image fusion; Image merging
Abstract
The invention discloses a panoramic reconstruction method for substation equipment inspection images. First, feature extraction is performed on the acquired substation equipment inspection images with an improved SURF feature detection algorithm; the extracted feature points are then matched. Next, the images are spatially transformed: the RANSAC algorithm substitutes the feature-matching information of the corresponding images into a parametric model of the feature space to solve the model, and the resulting transformation model maps each image into the spatial coordinates of the image to be registered. Finally, image fusion is carried out to obtain a high-fidelity panoramic image. The invention achieves panoramic reconstruction of substation equipment inspection images and overcomes the influence of distance, rotation, viewing angle, occlusion, lighting and other factors during image capture. Because the method uses feature-based image registration, it is highly robust and still produces good matching and reconstruction results for substation equipment with complex structures and rich appearance features.
Description
Technical Field
The invention relates to the field of substation equipment inspection, and in particular to a panoramic reconstruction method for substation equipment inspection images.
Background
To support daily management of the main power equipment of transformer substations and ensure safe, reliable operation of the power system, each substation collects inspection images of its equipment every day through manual inspection, online monitoring devices and inspection robots. However, because the terrain around a substation is complex and its equipment is varied and heavily occluded, the inspection process is affected by distance, rotation, viewing angle, occlusion, lighting and other factors, so a large number of images cannot fully present panoramic information about the equipment. It is therefore necessary to fuse the images with an image panorama reconstruction technique. The reconstruction method extracts image features with an improved SURF (speeded-up robust features) detection algorithm suited to the structure and appearance of substation equipment, and combines several matching schemes to complete efficient matching between inspection images, finally achieving a good fusion result.
Disclosure of Invention
The invention aims to provide a panoramic reconstruction method for substation equipment inspection images, addressing the problem that the large number of inspection images obtained by a substation's various inspection means cannot fully present panoramic information about the equipment. Exploiting the characteristics of in-substation equipment, whose appearance is complex but whose edges and corners are clear and regular, the method extracts image features with an improved SURF (speeded-up robust features) feature detection algorithm and combines several matching schemes to complete efficient matching between inspection images, finally achieving a good image fusion result.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
First, feature extraction is performed on the acquired substation equipment inspection images with an improved SURF feature detection algorithm, and the extracted feature points are then matched. Feature matching comprises a coarse-matching step, which searches for matches by the Euclidean distance between feature points and performs a first screening with the ratio of nearest-neighbor to second-nearest-neighbor distances, and a fine-matching step, which deletes wrongly matched feature pairs from the coarse matches by checking the slopes of matching line segments, cross-matching filtering, and evaluating the consistency of small data sets with the RANSAC (random sample consensus) algorithm. Next, the RANSAC algorithm substitutes the feature-matching information of the corresponding images into a parametric model of the feature space and solves it, yielding a fixed spatial transformation model that maps each image into the spatial coordinates of the image to be registered. Finally, the overlapping image regions are processed with a weighted-sum algorithm to perform image fusion.
The panoramic reconstruction method for the substation equipment inspection image provided by the invention comprises the following specific steps:
(1) feature extraction
Through the feature extraction and description scheme of the improved SURF feature detection algorithm, combined with fast projection detection of power-equipment feature keypoints, the SURF operator describes the region around each keypoint with a feature vector, completing the extraction and description of power-equipment feature points more efficiently.
① Constructing the scale space
The scale space of the improved SURF algorithm consists of O octaves (groups) of S layers each. The image size is the same in every octave; instead, the template size of the box filters grows from octave to octave, and within an octave the successive layers likewise use box filters of gradually increasing size, so the scale-space factor of the filters increases step by step.
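The filter-size schedule above can be sketched as follows. This is a minimal illustration using the standard SURF convention (first-octave sizes 9, 15, 21, 27, with each later octave doubling the step and starting at the second size of the previous octave); the patent does not give its exact O and S values, so the defaults here are assumptions.

```python
def surf_filter_sizes(octaves=3, layers=4):
    """Box-filter template sizes per octave in the standard SURF
    scale-space layout. The image size stays constant; only the
    filter template grows, both within and across octaves."""
    sizes = []
    for o in range(octaves):
        step = 6 * (2 ** o)                      # size increment within octave o
        base = 9 if o == 0 else sizes[o - 1][1]  # octaves overlap: start at the
        sizes.append([base + i * step            # 2nd size of the previous octave
                      for i in range(layers)])
    return sizes

def filter_scale(size):
    """Approximate Gaussian scale of a box filter of the given size:
    the 9x9 template corresponds to sigma = 1.2."""
    return 1.2 * size / 9.0
```

For example, the first octave uses templates of 9, 15, 21 and 27 pixels, and a 27-pixel template corresponds to a scale of about 3.6.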
② Detecting feature points
The images over all scale spaces are searched; potential interest points across scales are identified, and candidate points are determined by the Hessian matrix. The purpose of constructing the Hessian matrix is to find stable edge points (points of abrupt change) in the image; it plays a role similar to Laplacian edge detection and prepares for feature extraction. Considering the diversity of target features and the sharpness of the edge features of power equipment, the selection efficiency of the Hessian matrix for feature points is optimized, the weight given to edge features is reduced, and fast non-maximum suppression is applied to the selected feature points, which greatly improves the efficiency of the whole detection process.
For an image I(x, y), the Hessian matrix at scale σ is:

H(x, y, σ) = | Lxx(x, y, σ)  Lxy(x, y, σ) |
             | Lxy(x, y, σ)  Lyy(x, y, σ) |    (1)

where H is the Hessian matrix, I(x, y) is the pixel amplitude of the image at location (x, y), (x, y) are the position coordinates of the pixel in the image, and Lxx, Lxy, Lyy are the second-order Gaussian derivatives of I at scale σ.
The discriminant (determinant) of the Hessian matrix, with the box-filter approximations Dxx, Dxy and Dyy of the second derivatives, is:

det(H) ≈ Dxx·Dyy − (0.9·Dxy)²    (2)

Fig. 2 shows the 9 × 9 Gaussian filter template used to obtain the second derivatives Lyy and Lxy of the image, which are approximated with box filters whose weights are 0 in the gray regions, −2 in the black regions, and 1 in the white regions.
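The box-filter responses in (2) are computed in constant time from an integral image. The sketch below shows that mechanism and the determinant approximation; it is a minimal illustration of the standard SURF computation, not the patent's exact implementation, and the function names are my own.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row = 0.0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0.0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum over the rectangle [y0..y1] x [x0..x1] in O(1)."""
    s = ii[y1][x1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s

def det_hessian_approx(dxx, dyy, dxy):
    """SURF approximation det(H) = Dxx*Dyy - (0.9*Dxy)^2, where each
    D term is a weighted combination of box_sum responses."""
    return dxx * dyy - (0.9 * dxy) ** 2
```

The 0.9 weight compensates for the difference between the box filters and true Gaussian derivatives.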
③ Determining the direction of the feature points
The improved SURF algorithm compares each pixel processed by the Hessian matrix (i.e., the value of the Hessian discriminant at each pixel) with all of its neighbors in the image domain (the same-size image) and the scale domain (the adjacent scale layers); when it is greater than (or less than) all neighbors, the point is an extreme point. The middle detection point in fig. 2 must be compared with the 8 pixels in its own 3 × 3 image neighborhood and with the 18 pixels in the 3 × 3 neighborhoods of the layers directly above and below, for a total of 26 pixels.
After the feature points are preliminarily located, keypoints with weak energy and keypoints with wrong localization are filtered out, screening out the final stable feature points.
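The 26-neighbor comparison described above can be sketched directly. This is an illustrative check over a stack of equal-size response maps (one per scale layer); boundary handling and sub-pixel refinement are omitted.

```python
def is_scale_space_extremum(layers, s, y, x):
    """Check whether the Hessian response at layer s, position (y, x)
    is a local extremum over its 26 neighbours: 8 in its own layer
    plus 9 in each adjacent layer. `layers` is a list of 2-D response
    maps of equal size, ordered by scale."""
    v = layers[s][y][x]
    neigh = []
    for ds in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if ds == 0 and dy == 0 and dx == 0:
                    continue   # skip the candidate point itself
                neigh.append(layers[s + ds][y + dy][x + dx])
    # strict maximum or strict minimum over all 26 neighbours
    return all(v > n for n in neigh) or all(v < n for n in neigh)
```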
④ Constructing feature point descriptors
As shown in fig. 3, the improved SURF algorithm takes a block of 4 × 4 rectangular subregions around the feature point, oriented along the feature point's main direction, and each subregion accumulates the Haar wavelet responses of 25 sample points in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction. The Haar wavelet feature consists of 4 values: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute values of the horizontal responses, and the sum of the absolute values of the vertical responses.
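The descriptor assembly above yields 4 × 4 × 4 = 64 values. The sketch below assumes the Haar responses have already been computed on a 20 × 20 sample grid rotated into the keypoint's main orientation (the standard SURF layout); it is an illustration, not the patent's exact code.

```python
def surf_descriptor(dx, dy):
    """Assemble the 64-dimensional SURF descriptor from per-sample Haar
    wavelet responses dx, dy on a 20x20 grid around the keypoint. The
    grid is split into 4x4 subregions of 5x5 samples; each subregion
    contributes (sum dx, sum dy, sum |dx|, sum |dy|)."""
    desc = []
    for by in range(4):
        for bx in range(4):
            sdx = sdy = sadx = sady = 0.0
            for y in range(by * 5, by * 5 + 5):
                for x in range(bx * 5, bx * 5 + 5):
                    sdx += dx[y][x]
                    sdy += dy[y][x]
                    sadx += abs(dx[y][x])
                    sady += abs(dy[y][x])
            desc.extend([sdx, sdy, sadx, sady])
    return desc   # 4 * 4 * 4 = 64 values
```

The absolute-value sums let the descriptor distinguish, e.g., a uniform gradient from an alternating pattern whose plain sums cancel.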
(2) Feature matching
The feature matching comprises coarse matching of feature points and fine matching of feature points.
The improved SURF algorithm likewise determines the degree of matching by computing the Euclidean distance between two feature points; the shorter the Euclidean distance, the better the match. A check on the trace of the Hessian matrix is added: if the traces of the two feature points have the same sign, the two features have contrast changes in the same direction; if the signs differ, the contrast-change directions of the two feature points are opposite, and the pair is excluded outright even if the Euclidean distance is 0.
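Coarse matching with the trace-sign prefilter and the nearest/second-nearest distance-ratio screening can be sketched as below. The 0.8 ratio threshold is illustrative (the patent does not state a value), and the data layout is my own.

```python
import math

def coarse_match(feats_a, feats_b, ratio=0.8):
    """Coarse matching sketch. Each feature is (descriptor, trace_sign).
    Candidates whose Hessian-trace signs differ are excluded outright;
    the rest are ranked by Euclidean distance, and a match is kept only
    if the nearest/second-nearest distance ratio is below `ratio`."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    matches = []
    for ia, (da, sa) in enumerate(feats_a):
        cands = sorted((dist(da, db), ib)
                       for ib, (db, sb) in enumerate(feats_b)
                       if sa == sb)          # trace-sign prefilter
        if len(cands) >= 2 and cands[0][0] < ratio * cands[1][0]:
            matches.append((ia, cands[0][1]))
        elif len(cands) == 1:
            matches.append((ia, cands[0][1]))
    return matches
```

Note that a candidate at distance 0 but with the opposite trace sign is never considered, exactly as the text requires.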
Fine matching of the feature points uses random sample consensus, i.e., the RANSAC algorithm, to remove erroneous points. Treating the image as rigid, a homography transformation can be found between the feature points, and the RANSAC algorithm is used to find the best homography matrix.
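The RANSAC principle (hypothesize a model from a minimal sample, count inliers, keep the best hypothesis) can be sketched as follows. The patent fits a homography; to keep the sketch short, a plain 2-D translation model is used here instead, so this illustrates only the consensus mechanism, not the full homography estimation.

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0, seed=0):
    """RANSAC sketch on matched point pairs ((xa, ya), (xb, yb)):
    sample a minimal set (one pair), hypothesize a translation,
    count inliers within `tol`, and keep the best model."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (xa, ya), (xb, yb) = rng.choice(pairs)
        tx, ty = xb - xa, yb - ya          # hypothesized translation
        inliers = [p for p in pairs
                   if abs(p[0][0] + tx - p[1][0]) <= tol
                   and abs(p[0][1] + ty - p[1][1]) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (tx, ty), inliers
    return best_model, best_inliers
```

Wrong matches end up outside the consensus set and are discarded, which is exactly how the fine-matching step deletes erroneous feature pairs.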
(3) Image fusion
The RANSAC algorithm substitutes the parametric model of the feature space into the feature-matching information of the corresponding images and solves it, yielding a fixed spatial transformation model; the image is then transformed into the spatial coordinates of the image to be registered, and the overlapping regions are processed with a weighted-sum algorithm to perform image fusion.
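The weighted-sum fusion of the overlap can be sketched in one dimension: the weight of the first image falls linearly across the overlap while the weight of the second rises, which removes the visible seam. This is a minimal illustration of the blending step, not the patent's full fusion pipeline.

```python
def blend_overlap(row_a, row_b):
    """Weighted-sum fusion of one scanline of the overlap region:
    image A's weight falls linearly from 1 to 0 across the overlap
    while image B's weight rises from 0 to 1."""
    n = len(row_a)
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 0.5   # weight of image B
        out.append((1 - w) * row_a[i] + w * row_b[i])
    return out
```

Applying the same ramp to every scanline of the overlap gives a smooth transition between the two registered images.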
The invention also provides a substation equipment inspection image panoramic reconstruction system, which comprises:
an extraction device, which performs feature extraction on the collected substation equipment inspection images;
a feature matching device, which performs feature matching on the feature points extracted by the extraction device;
a conversion device, which substitutes the parametric model of the feature space into the feature-matching information of the images from the feature matching device and solves the model, yielding a fixed spatial transformation model through which the images are transformed into the spatial coordinates of the image to be registered; and
a fusion device, which processes the overlapping image regions to perform image fusion.
The extraction device performs feature extraction on the collected substation equipment inspection images with an improved SURF feature detection algorithm. The feature matching of the feature matching device comprises a coarse-matching step, which searches for matches by the Euclidean distance between feature points and performs a first screening with the ratio of nearest-neighbor to second-nearest-neighbor distances, and a fine-matching step, which deletes wrongly matched feature pairs from the coarse matches by checking the slopes of matching line segments, cross-matching filtering, and evaluating the consistency of small data sets with the random sample consensus (RANSAC) algorithm. The conversion device uses the RANSAC algorithm to substitute the parametric model of the feature space into the feature-matching information of the corresponding images and solve the model, yielding a fixed spatial transformation model through which the images are transformed into the spatial coordinates of the image to be registered. The fusion device performs image fusion by processing the overlapping image regions with a weighted-sum algorithm.
The feature extraction device: through the feature extraction and description scheme of the improved SURF feature detection algorithm, combined with fast projection detection, it detects feature keypoints of the power equipment, and the SURF operator describes the region around each keypoint with a feature vector;
the feature matching device: the characteristic matching comprises coarse matching of characteristic points and fine matching of the characteristic points;
the rough matching of the feature points adopts an improved SURF algorithm, the matching degree is determined by calculating the Euclidean distance between two feature points, the shorter the Euclidean distance is, the better the matching degree of the two feature points is represented, the judgment of a black plug matrix trace is added, and if the signs of the matrix traces of the two feature points are the same, the two features have contrast change in the same direction; if different, directly eliminating;
the precise matching of the characteristic points adopts a method of removing the error points by using an RANSAC algorithm, an image is defined as rigid, homography transformation is found among the characteristic points, and the RANSAC algorithm is used for finding the optimal homography matrix;
the image fusion device: and substituting the parameter model of the characteristic space into the characteristic matching information of the corresponding image by using a RANSAC algorithm to solve the model to obtain a fixed image space conversion model, so that the image is converted to the space coordinate of the image to be registered through the conversion model, and then processing the image overlapping part by using a weighted sum algorithm to perform image fusion.
The feature extraction device further includes:
constructing a scale space device: the scale space of the improved SURF algorithm is composed of O groups of S layers, the sizes of images among different groups are consistent, the sizes of templates of box filters used among different groups are gradually increased, the same group of images on different layers use filters with the same size, and the scale space factor of the filters is gradually increased;
the characteristic point detection device comprises: searching images on all scale spaces, identifying potential pair scales and determining candidate points through a black plug matrix, and preparing for feature extraction;
the device for determining the direction of the characteristic points comprises: the improved SURF algorithm compares each pixel point processed by the black matrix with all adjacent points of an image domain and a scale domain of the pixel point, and when the pixel point is larger than or smaller than all the adjacent points, the pixel point is an extreme point;
after the characteristic points are preliminarily positioned, filtering key points with weak energy and key points with wrong positioning to screen out final stable characteristic points;
the feature point descriptor constructing device comprises: the improved SURF algorithm extracts 4 x 4 rectangular area blocks around the feature point, obtains the main direction of the rectangular area along the feature point, and counts the Hear wavelet characteristics of 25 pixel points in the horizontal direction and the vertical direction in each sub-area, wherein the horizontal direction and the vertical direction are relative to the main direction, and the Hear wavelet characteristics are 4 directions of the sum of the horizontal direction values, the sum of the vertical direction values, the sum of the absolute values of the horizontal direction values and the absolute sum of the vertical direction values.
In the feature point detection device,
for an image I(x, y), the Hessian matrix is:

H(x, y, σ) = | Lxx(x, y, σ)  Lxy(x, y, σ) |
             | Lxy(x, y, σ)  Lyy(x, y, σ) |    (1)

where in formula (1) H is the Hessian matrix, I(x, y) is the pixel amplitude of the image at location (x, y), and (x, y) are the position coordinates of the pixel in the image;
the discriminant of the Hessian matrix, with the box-filter approximations Dxx, Dxy and Dyy of the second derivatives, is:

det(H) ≈ Dxx·Dyy − (0.9·Dxy)²    (2)
the invention solves the problem that a large number of images cannot completely present the panoramic information of the equipment due to the influence of factors such as distance, rotation, angle, shielding, light and the like on the routing inspection process by fusing the images shot at different angle positions.
Drawings
FIG. 1 is a flow chart of the power transformation equipment image stitching implemented by the present invention;
FIG. 2 is a flow chart of the present invention's use of the Hessian matrix to select feature points;
FIG. 3 is a flow chart of the SURF algorithm constructing feature point descriptors in the present invention;
FIG. 4 is a schematic diagram of coarse matching of feature points according to the present invention;
FIG. 5 is a schematic diagram of fine matching of feature points according to the present invention;
fig. 6 is a result diagram of the power transformation equipment image stitching implemented by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples.
The implementation of the invention is shown in fig. 1: SURF features are first extracted from the images to construct feature point descriptors; the feature point set is then optimized through feature point matching; the images are next transformed into the spatial coordinates of the image to be registered through a geometric transformation model; and finally image fusion is carried out to obtain the panoramic image. SURF feature points are extracted from the image sequence: potential multi-scale points in all scale-space images are identified through the Hessian matrix, and candidate points are determined. A 9 × 9 Gaussian filter template is used to obtain the corresponding values of the second derivatives Lyy and Lxy of the image, as shown in fig. 2, so that fast non-maximum suppression can be applied to the feature points. The pixels processed by the Hessian matrix are compared over the image domain and the scale domain to give the feature points sufficient direction information. The feature points are all salient boundary points of the image together with their gradient information, so the method has good multi-scale behavior and rotation invariance and adapts well to changes of scale and angle of the target object in the image. After the stable feature points are located and screened, a grid of 4 × 4 rectangular subregions around each feature point is extracted, and horizontal and vertical Haar wavelet responses are accumulated along the feature point's main direction within those subregions, as shown in fig. 3, from which the feature point descriptor is established.
After the feature points of the images are obtained, feature points belonging to the same target need to be matched. Feature matching requires two operations, coarse matching and fine matching. During coarse matching, a match search is carried out according to the Euclidean distances between feature points, and preliminarily matched feature points are screened with the ratio of nearest-neighbor to second-nearest-neighbor distances; a coarsely matched image pair is shown in fig. 4. The result of coarse matching is not very accurate, and many misjudged matching points remain in fig. 4. Therefore, a parametric model is established in the feature space through matched-line-segment slope checking, cross-matching filtering, or a RANSAC-style algorithm, and wrongly matched feature pairs from the coarse matching are deleted by evaluating the consistency of small data sets, giving the final accurate feature matching result. The finely matched image pair is shown in fig. 5; the matching precision is clearly improved compared with fig. 4.
After the feature point matching result is obtained, the transformation relationship between the images corresponding to the features must be computed. The RANSAC algorithm substitutes the feature-matching information of the corresponding image pair into the parametric model of the feature space and solves it, giving an accurate solution for the model parameters and determining the fixed spatial transformation model. The images are then transformed by this model into the spatial coordinates of the registered image, after which the images can be fused.
The image fusion operation must take account of factors such as differences in illumination and contrast between images, and the fused panoramic image must avoid visible image boundaries, especially given that equipment in a substation has a complex and sharply defined appearance. Preprocessing methods such as image whitening ensure that the image and the registered image are relatively similar in brightness and contrast; the overlapping regions are then processed with a weighted-sum algorithm, the fused part is stitched to the remaining parts of the images, and a high-fidelity panoramic image is finally obtained (shown in fig. 6).
Claims (10)
1. A panoramic reconstruction method for substation equipment inspection images, characterized in that: the reconstruction method first performs feature extraction on the collected substation equipment inspection images; next performs feature matching on the extracted features; then substitutes a parametric model of the feature space into the feature-matching information of the corresponding images and solves the model, yielding a fixed spatial transformation model through which the images are transformed into the spatial coordinates of the image to be registered; and finally processes the overlapping image regions to perform image fusion.
2. The substation equipment inspection image panoramic reconstruction method according to claim 1, characterized in that: the feature extraction on the collected substation equipment inspection images uses an improved SURF feature detection algorithm; the feature matching comprises a coarse-matching step, which searches for matches by the Euclidean distance between feature points and performs a first screening with the ratio of nearest-neighbor to second-nearest-neighbor distances, and a fine-matching step, which deletes wrongly matched feature pairs from the coarse matches by checking the slopes of matching line segments, cross-matching filtering, and evaluating the consistency of small data sets with the random sample consensus (RANSAC) algorithm; the RANSAC algorithm substitutes the parametric model of the feature space into the feature-matching information of the corresponding images and solves the model, yielding a fixed spatial transformation model through which the images are transformed into the spatial coordinates of the image to be registered; and the image fusion processes the overlapping image regions with a weighted-sum algorithm.
3. The substation equipment inspection image panoramic reconstruction method according to claim 1, characterized in that:
(1) the method for extracting the characteristics of the collected routing inspection image of the power transformation equipment comprises the following steps:
detecting characteristic key points of the power equipment by combining with fast projection through an improved SURF characteristic detection algorithm characteristic extraction and description mode, wherein an SURF operator describes the condition of a region around the key points through a characteristic vector;
(2) the method for performing feature matching on the extracted feature points comprises the following steps: the feature matching comprises coarse matching of feature points and fine matching of the feature points;
the rough matching of the feature points is to determine the matching degree by calculating the Euclidean distance between two feature points by adopting an improved SURF algorithm, the shorter the Euclidean distance is, the better the matching degree of the two feature points is represented, the judgment of a black plug matrix trace is added, and if the signs of the matrix traces of the two feature points are the same, the two features have contrast ratio changes in the same direction; if different, directly eliminating;
the precise matching of the feature points is a method for removing the error points by using an RANSAC algorithm, an image is defined as rigid, homography transformation is found among the feature points, and the RANSAC algorithm is used for finding an optimal homography matrix;
(3) The image fusion method comprises the following steps:
the RANSAC algorithm substitutes the parametric model of the feature space into the feature-matching information of the corresponding images and solves the model, yielding a fixed spatial transformation model through which the image is transformed into the spatial coordinates of the image to be registered, and the overlapping image regions are then processed with a weighted-sum algorithm to perform image fusion.
4. The substation equipment inspection image panoramic reconstruction method according to claim 3, characterized in that: the method for feature extraction further comprises:
① Constructing the scale space
The scale space of the improved SURF algorithm consists of O octaves of S layers each; the image size is the same in every octave, the box-filter template size grows from octave to octave, successive layers within an octave use box filters of gradually increasing size, and the scale-space factor of the filters increases step by step;
② Detecting feature points
searching the images over all scale spaces, identifying potential interest points across scales, determining candidate points through the Hessian matrix, and preparing for feature extraction;
determining the direction of the characteristic points
the improved SURF algorithm compares each pixel processed by the Hessian matrix with all neighboring points in its image domain and scale domain, and when the pixel is greater than or less than all neighbors it is an extreme point;
after the feature points are preliminarily located, keypoints with weak energy and keypoints with wrong localization are filtered out to screen the final stable feature points;
④ Constructing feature point descriptors
The improved SURF algorithm extracts 4 x 4 rectangular area blocks around the feature point, obtains the main direction of the rectangular area along the feature point, and counts the Hear wavelet characteristics of 25 pixel points in the horizontal direction and the vertical direction in each sub-area, wherein the horizontal direction and the vertical direction are relative to the main direction, and the Hear wavelet characteristics are 4 directions of the sum of the horizontal direction values, the sum of the vertical direction values, the sum of the absolute values of the horizontal direction values and the absolute sum of the vertical direction values.
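The descriptor construction above can be sketched as follows. This is a simplified, non-authoritative sketch: it assumes the Haar responses dx, dy have already been sampled on a 20 × 20 grid rotated to the main direction (4 × 4 sub-regions of 5 × 5 = 25 samples each), and it adds a final length normalisation, a common SURF convention not stated in the claim.

```python
import numpy as np

def subregion_features(dx, dy):
    # The 4 Haar-wavelet sums of one sub-region:
    # (sum dx, sum dy, sum |dx|, sum |dy|).
    return [dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()]

def surf_descriptor(dx_grid, dy_grid):
    # 4x4 sub-regions x 4 sums = 64-dimensional descriptor.
    vec = []
    for i in range(4):
        for j in range(4):
            block = np.s_[5 * i:5 * i + 5, 5 * j:5 * j + 5]
            vec += subregion_features(dx_grid[block], dy_grid[block])
    v = np.asarray(vec, float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v   # unit length for contrast invariance
```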
5. The substation equipment inspection image panoramic reconstruction method according to claim 4, characterized in that: in the step of detecting the feature points,
for an image I(x, y), the Hessian matrix is as follows:

    H = [ ∂²I/∂x²    ∂²I/∂x∂y ]
        [ ∂²I/∂x∂y   ∂²I/∂y²  ]        (1)

where H in equation (1) is the Hessian matrix, I(x, y) is the pixel amplitude of the image at the (x, y) location, and (x, y) are the position coordinates of a pixel in the image;
the discriminant of the Hessian matrix is as follows:

    det(H) = (∂²I/∂x²)(∂²I/∂y²) − (∂²I/∂x∂y)²        (2)
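A numerical sketch of the Hessian matrix and its discriminant, using plain central finite differences in place of SURF's box-filter / integral-image approximation (an assumption made here purely for illustration):

```python
import numpy as np

def hessian_determinant(img):
    # det(H) = Ixx * Iyy - Ixy**2 at every pixel, with the second
    # derivatives estimated by central finite differences.
    img = np.asarray(img, float)
    Iy, Ix = np.gradient(img)       # gradients along axis 0 (y), axis 1 (x)
    Ixy, Ixx = np.gradient(Ix)      # d(Ix)/dy and d(Ix)/dx
    Iyy, _ = np.gradient(Iy)        # d(Iy)/dy
    return Ixx * Iyy - Ixy ** 2
```

On the quadratic test image I = x² + y², where Ixx = Iyy = 2 and Ixy = 0, the interior of the discriminant map is exactly 4, matching equation (2); in SURF, pixels whose discriminant exceeds a threshold become candidate points.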
6. A substation equipment inspection image panoramic reconstruction system, characterized in that: the system comprises
an extraction device: performing feature extraction on the collected substation equipment inspection images;
a feature matching device: performing feature matching on the feature points extracted by the extraction device;
a conversion device: fitting the parametric model of the feature space to the feature-matching information of the images from the feature matching device to solve the model and obtain a fixed image-space conversion model, so that an image is converted to the spatial coordinates of the image to be registered through the conversion model;
a fusion device: processing the overlapping region of the images to perform image fusion.
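The conversion device's coordinate mapping can be sketched as a homogeneous multiply followed by a perspective divide, assuming (as elsewhere in the claims) that the fixed conversion model is a 3 × 3 homography; the function name is illustrative.

```python
import numpy as np

def apply_conversion_model(H, pts):
    # Map pixel coordinates (x, y) into the registered image's frame:
    # lift to homogeneous coordinates, multiply by the 3x3 model H,
    # then divide by the third (perspective) component.
    pts = np.asarray(pts, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = hom @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

A pure translation model shifts every point, while a model with a non-zero perspective row rescales points depending on their position, which is what lets a homography register views taken from different angles.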
7. The substation equipment inspection image panoramic reconstruction system of claim 6, wherein: the extraction device performs feature extraction on the collected substation equipment inspection images using an improved SURF feature detection algorithm; the feature matching of the feature matching device comprises a coarse matching method, which searches for feature matches according to the Euclidean distance between feature points and performs preliminary screening using the ratio of the nearest-neighbour distance to the second-nearest-neighbour distance, and a fine matching method, which deletes wrongly matched feature pairs from the coarse matching through a check on the slope of the matching line segments, cross-matching filtering, and a random sample consensus (RANSAC) algorithm that evaluates the rationality of a small data set; the conversion device uses the RANSAC algorithm to fit the parametric model of the feature space to the feature-matching information of the corresponding images to solve the model and obtain a fixed image-space conversion model, so that an image is converted to the spatial coordinates of the image to be registered through the conversion model; and the fusion device performs image fusion by processing the overlapping region of the images with a weighted-sum algorithm.
8. The substation equipment inspection image panoramic reconstruction system of claim 6, wherein:
the feature extraction device: detecting feature key points of the power equipment through the feature extraction and description scheme of the improved SURF feature detection algorithm combined with fast projection, wherein the SURF operator describes the region around a key point through a feature vector;
the feature matching device: the feature matching comprises coarse matching of the feature points and fine matching of the feature points;
the coarse matching of the feature points adopts the improved SURF algorithm: the matching degree is determined by calculating the Euclidean distance between two feature points, where a shorter Euclidean distance indicates a better match; a check on the trace of the Hessian matrix is added: if the signs of the matrix traces of the two feature points are the same, the two features have contrast changes in the same direction; if they differ, the pair is eliminated directly;
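The coarse-matching rule above (Euclidean distance, a nearest/second-nearest ratio test, and the Hessian trace-sign gate) admits a compact sketch. The 0.7 ratio is an assumed typical value, not taken from the patent, and the function signature is illustrative.

```python
import numpy as np

def coarse_match(desc_a, desc_b, trace_sign_a, trace_sign_b, ratio=0.7):
    # For each descriptor in desc_a: discard candidates whose Hessian
    # trace sign differs (opposite contrast direction), then accept the
    # Euclidean nearest neighbour only if it beats the second nearest
    # by the given ratio.
    matches = []
    for i, d in enumerate(desc_a):
        ok = trace_sign_b == trace_sign_a[i]      # trace-sign gate
        if ok.sum() < 2:
            continue
        dists = np.linalg.norm(desc_b - d, axis=1)
        dists[~ok] = np.inf                        # eliminated directly
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:          # NN / 2nd-NN test
            matches.append((i, int(j1)))
    return matches
```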
the fine matching of the feature points adopts the RANSAC algorithm to remove erroneous points: the image is assumed to be rigid, a homography transformation is sought between the feature points, and the RANSAC algorithm is used to find the optimal homography matrix;
the image fusion device: using the RANSAC algorithm, the parametric model of the feature space is fitted to the feature-matching information of the corresponding images to solve the model and obtain a fixed image-space conversion model, so that an image is converted to the spatial coordinates of the image to be registered through the conversion model; the overlapping region of the images is then processed with a weighted-sum algorithm to perform image fusion.
9. The substation equipment inspection image panoramic reconstruction system of claim 8, wherein: the feature extraction device further includes:
a scale space construction device: the scale space of the improved SURF algorithm consists of O octaves of S layers each; the image sizes are consistent across octaves, while the box-filter template sizes used increase gradually from octave to octave; within the same octave, images of the same size are used on different layers, and the scale-space factor of the filters increases gradually from layer to layer;
a feature point detection device: searching the images over all scale spaces, identifying potential interest points that are invariant to scale, and determining candidate points through the Hessian matrix, in preparation for feature extraction;
a device for determining the direction of the feature points: the improved SURF algorithm compares each pixel point processed with the Hessian matrix against all of its neighbours in the image domain and the scale domain; when the pixel point is larger or smaller than all of its neighbours, it is an extreme point;
after the feature points are preliminarily located, key points with weak energy and wrongly located key points are filtered out to screen the final stable feature points;
a feature point descriptor construction device: the improved SURF algorithm extracts a 4 × 4 grid of rectangular sub-region blocks around the feature point, with the grid oriented along the main direction of the feature point, and counts the Haar wavelet features of 25 sample points in each sub-region in the horizontal and vertical directions, where horizontal and vertical are relative to the main direction; the Haar wavelet features comprise 4 values: the sum of the horizontal-direction values, the sum of the vertical-direction values, the sum of the absolute values of the horizontal-direction values, and the sum of the absolute values of the vertical-direction values.
10. The substation equipment inspection image panoramic reconstruction system of claim 9, wherein: in the feature point detection device,
for an image I(x, y), the Hessian matrix is as follows:

    H = [ ∂²I/∂x²    ∂²I/∂x∂y ]
        [ ∂²I/∂x∂y   ∂²I/∂y²  ]        (1)

where H in equation (1) is the Hessian matrix, I(x, y) is the pixel amplitude of the image at the (x, y) position, and (x, y) are the position coordinates of a pixel in the image;
the discriminant of the Hessian matrix is as follows:

    det(H) = (∂²I/∂x²)(∂²I/∂y²) − (∂²I/∂x∂y)²        (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010624342.6A CN111861866A (en) | 2020-06-30 | 2020-06-30 | Panoramic reconstruction method for substation equipment inspection image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111861866A true CN111861866A (en) | 2020-10-30 |
Family
ID=72988918
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010624342.6A Pending CN111861866A (en) | 2020-06-30 | 2020-06-30 | Panoramic reconstruction method for substation equipment inspection image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111861866A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106960442A (en) * | 2017-03-01 | 2017-07-18 | 东华大学 | Based on the infrared night robot vision wide view-field three-D construction method of monocular |
CN108961162A (en) * | 2018-03-12 | 2018-12-07 | 北京林业大学 | A kind of unmanned plane forest zone Aerial Images joining method and system |
CN110310310A (en) * | 2019-03-27 | 2019-10-08 | 南京航空航天大学 | A kind of improved method for aviation image registration |
CN111080529A (en) * | 2019-12-23 | 2020-04-28 | 大连理工大学 | Unmanned aerial vehicle aerial image splicing method for enhancing robustness |
Non-Patent Citations (1)
Title |
---|
ARTHUR-JI: "SURF algorithm feature point detection and matching" ("Surf算法特征点检测与匹配"), 《HTTPS://BLOG.CSDN.NET/ARTHUR_HOLMES/ARTICLE/DETAILS/100675690》 *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112612002A (en) * | 2020-12-01 | 2021-04-06 | 北京天地玛珂电液控制系统有限公司 | Digital construction system and method for scene space of full working face under coal mine |
CN113343916A (en) * | 2021-06-30 | 2021-09-03 | 上海申瑞继保电气有限公司 | Method for extracting device features in transformer substation device image |
CN113343916B (en) * | 2021-06-30 | 2024-02-09 | 上海申瑞继保电气有限公司 | Method for extracting equipment characteristics in substation equipment image |
CN113780224A (en) * | 2021-09-17 | 2021-12-10 | 广东电网有限责任公司 | Transformer substation unmanned inspection method and system |
CN113780224B (en) * | 2021-09-17 | 2024-04-05 | 广东电网有限责任公司 | Unmanned inspection method and system for transformer substation |
CN115861927A (en) * | 2022-12-01 | 2023-03-28 | 中国南方电网有限责任公司超高压输电公司大理局 | Image identification method and device for power equipment inspection image and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104376548B (en) | A kind of quick joining method of image based on modified SURF algorithm | |
CN109615611B (en) | Inspection image-based insulator self-explosion defect detection method | |
CN111861866A (en) | Panoramic reconstruction method for substation equipment inspection image | |
CN102132323B (en) | System and method for automatic image straightening | |
CN104751142B (en) | A kind of natural scene Method for text detection based on stroke feature | |
CN106548169B (en) | Fuzzy literal Enhancement Method and device based on deep neural network | |
CN109409355B (en) | Novel transformer nameplate identification method and device | |
CN109858527B (en) | Image fusion method | |
CN109961399B (en) | Optimal suture line searching method based on image distance transformation | |
CN110619623B (en) | Automatic identification method for heating of joint of power transformation equipment | |
CN107945221A (en) | A kind of three-dimensional scenic feature representation based on RGB D images and high-precision matching process | |
CN110414571A (en) | A kind of website based on Fusion Features reports an error screenshot classification method | |
CN108460833A (en) | A kind of information platform building traditional architecture digital protection and reparation based on BIM | |
CN106022337B (en) | A kind of planar target detection method based on continuous boundary feature | |
CN112215925A (en) | Self-adaptive follow-up tracking multi-camera video splicing method for coal mining machine | |
CN109711420B (en) | Multi-affine target detection and identification method based on human visual attention mechanism | |
CN109389165A (en) | Oil level gauge for transformer recognition methods based on crusing robot | |
CN106934395B (en) | Rigid body target tracking method adopting combination of SURF (speeded Up robust features) and color features | |
CN113673515A (en) | Computer vision target detection algorithm | |
CN116977316A (en) | Full-field detection and quantitative evaluation method for damage defects of complex-shape component | |
CN104036494A (en) | Fast matching computation method used for fruit picture | |
Hao et al. | Active cues collection and integration for building extraction with high-resolution color remote sensing imagery | |
CN109544608B (en) | Unmanned aerial vehicle image acquisition characteristic registration method | |
CN107330436B (en) | Scale criterion-based panoramic image SIFT optimization method | |
CN116385477A (en) | Tower image registration method based on image segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20201030 |