CN114926391A - Perspective transformation method based on improved RANSAC algorithm - Google Patents

Perspective transformation method based on improved RANSAC algorithm

Info

Publication number
CN114926391A
CN114926391A
Authority
CN
China
Prior art keywords
image
pairs
transformation
matching points
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210352267.1A
Other languages
Chinese (zh)
Inventor
舒征宇
姚景岩
汪俊
许欣慧
高健
翟二杰
黄志鹏
李镇翰
张洋
李浩
马聚超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202210352267.1A
Publication of CN114926391A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06T 3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The perspective transformation method based on the improved RANSAC algorithm uses the SIFT algorithm to obtain the coordinates of the feature matching points of the reference image and the auxiliary image, so that a transformation matrix can be obtained from only 4 randomly selected pairs of feature matching points, and the transformation matrix is used to apply a perspective transformation to the auxiliary image. Because only 4 pairs of matching points are needed when calculating the transformation matrix, an improved RANSAC algorithm is proposed. The method better assists the intelligent inspection robot in identifying the operating state of the protection platen (pressing plate) in the image and improves the robot's anti-interference capability.

Description

Perspective transformation method based on improved RANSAC algorithm
Technical Field
The invention relates to the technical field of intelligent power grid inspection, in particular to a perspective transformation method based on an improved RANSAC algorithm.
Background
With the continuous development of communication and artificial intelligence technology, the construction of the smart grid has accelerated, and intelligent inspection has gradually become an important auxiliary operation and maintenance means in today's unattended substations. The relay protection pressing plate (platen) is an important protective device of the secondary equipment in the power system; because the platens are numerous and the inspection workload is large and tedious, false detections and missed detections occur from time to time under traditional manual inspection, so inspecting them with intelligent robots has become a research focus. At present, the application of image processing technology in power systems is mainly focused on primary equipment, yet in actual production a switch in the operating state of secondary equipment is not easy to discover and requires attention and real-time monitoring.
Observation of the collected platen images shows that, because most protection cabinets use glass doors, local highlight interference appears in the platen images under uneven illumination. In severe cases the highlight interference causes serious loss of texture information that cannot be recognized, leading to false detection and missed detection of the platen state.
There are three main causes of reflection on the glass cabinet doors in a substation:
(1) natural light shining on the glass cabinet door through a window;
(2) indoor lighting reflected by the glass cabinet door;
(3) the camera flash reflected by the cabinet door when the light is dim.
Highlight area detection based on two-dimensional OTSU algorithm:
Among the various algorithms for image thresholding, the maximum between-class variance method (OTSU) is widely used because it is simple to compute and performs stably. The traditional OTSU algorithm uses a bimodal histogram of the average gray values of the image pixels to divide the image into a foreground region and a background region, taking as threshold the value at which the variance between the two regions is largest. Since pixels within the same region of an image have strong consistency and correlation in both position and gray level, the traditional OTSU algorithm considers only the gray-level information provided by the histogram and ignores the spatial position information of the image.
Disclosure of Invention
The invention provides a method for identifying the operating state of a protection platen based on image fusion, intended for intelligent inspection of the secondary relay protection platens of a substation. The method detects the highlight area with a threshold segmentation method and restores the image on that basis, effectively removing the light-and-shadow interference present in the relay protection platen image, thereby better assisting the intelligent inspection robot in identifying the platen operating state in the image and improving its anti-interference capability.
The technical solution adopted by the invention is as follows:
The method for identifying the operating state of the protection platen based on image fusion comprises the following steps:
Step 1, detection of the highlight area of the protection platen image:
Step 1.1: the intelligent inspection robot patrols the relay protection platens and photographs an image of the protection platen;
Step 1.2: because the whole platen area must be captured during shooting, the photographed platen images are input into a computer image processing system and screened, and any image that does not capture the whole platen area is deleted so that subsequent detection results are not affected. The screened platen images are then compressed so that the computer processes them faster.
The computer image processing system is implemented on the Microsoft Visual Studio development platform with OpenCV and Python.
Step 1.3: gray the platen image to enlarge the difference between the highlight area and the background area;
Step 1.4: find the optimal threshold with the two-dimensional OTSU algorithm and segment the gray platen image according to it, thereby quickly detecting the highlight area in the image.
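As a rough illustration of steps 1.2-1.3, the following Python/OpenCV sketch (consistent with the processing system described above) compresses and grays a captured platen image; the file name and the 800-pixel target width are illustrative placeholders, not values fixed by the invention:

```python
# Hedged sketch of steps 1.2-1.3; "platen.jpg" and the target width are
# placeholders, not part of the invention.
import cv2

image = cv2.imread("platen.jpg")
assert image is not None, "screening: drop images that miss the whole platen area"

# Compress the screened image so subsequent processing is faster.
scale = 800 / image.shape[1]
small = cv2.resize(image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

# Gray the image to enlarge the difference between highlight and background.
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
```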
Step 2, removing the highlight area based on image fusion:
Step 2.1, feature point detection:
Detect feature points of the reference image and the auxiliary image with the SIFT algorithm to generate feature description vectors, and match the features of the reference image and the auxiliary image with the nearest neighbor method;
Step 2.2: perspective transformation based on the improved RANSAC algorithm:
Obtain a perspective transformation matrix from the obtained feature matching points, and adjust the view angle and size of the auxiliary map with the perspective transformation so that the auxiliary map is consistent with the reference map.
Step 2.3, image restoration:
According to the highlight area position detected in the reference image, fill and repair the texture from the corresponding area position of the perspective-transformed auxiliary image, thereby removing the highlight from the reference image.
Step 3, identification of the protection platen state:
Step 3.1: extracting connected regions:
Perform color region screening, binarization, and morphological processing on the highlight-removed reference image, and extract connected regions in 8-connected mode.
Step 3.2: screening the effective platen areas:
Analyze the area, size, and shape according to morphological features, and accurately extract the effective platen areas from the connected regions.
Step 3.3: identifying the platen on-off state:
Identify the screened effective platen areas; after recognizing the on-off state of each effective platen, sort the effective platens from left to right and top to bottom by barycentric coordinates, finally obtaining a state sequence containing only 0s and 1s.
Step 2.2 comprises the following steps:
Step 1: initially match the effective feature points extracted by the SIFT algorithm with the nearest neighbor method, using an initially selected Euclidean distance threshold of 0.6. Equally divide the image into 4 regions and judge whether each of the currently divided 4 regions contains more than 4 pairs of feature matching points; if so, proceed to the next step, otherwise add 0.1 to the Euclidean distance threshold and re-match.
Step 2: from each of the 4 regions, screen the 4 pairs of matching points with the smallest Euclidean distance, 16 pairs in total; combine the 16 pairs of matching points into groups of 4, sort the groups 1, 2, …, N by the combined Euclidean distance of their 4 pairs from small to large, and select the first 50 groups.
Step 3: first take the 4 pairs of matching points of group 1 in sequence-number order and calculate a transformation matrix H; check all matching point pairs in the image with the matrix H and judge whether the interior points (inliers) exceed 50% of the total number of matching point pairs. If so, the currently calculated matrix H is the optimal transformation matrix; otherwise take the next group of 4 pairs of matching points in sequence-number order and recalculate the transformation matrix H.
The method for identifying the operating state of the protection platen based on image fusion has the following technical effects:
1) The method can be widely applied to intelligent inspection of substation relay protection platens.
2) Its main function is to better assist the intelligent inspection robot in checking the platen on-off state, improving the recognition accuracy of the relay protection platen's on-off state, reducing the labor intensity of inspection personnel, reducing misoperation in power grid operation, avoiding economic loss, and ensuring the safe and stable operation of the power grid.
3) The computer image processing technology of the invention grays the acquired reference image and rapidly detects the highlight interference in the image with an improved two-dimensional OTSU threshold segmentation method, reducing the influence of noise and making highlight area detection in the reference image more accurate.
4) The method detects the feature points of the reference image and the auxiliary image with the SIFT algorithm and matches them with the nearest neighbor method; an improved RANSAC algorithm is introduced to remove mismatched points and solve for the optimal perspective transformation matrix, and the perspective-transformed auxiliary image is used to repair the highlight area in the main image. On the basis of image restoration, the platen operating state is judged from the inclination angle detected at the platen edge. This better assists the intelligent inspection robot in identifying the platen operating state in the image and improves its anti-interference capability.
Drawings
Fig. 1 is a two-dimensional histogram.
Fig. 2 is a perspective transformation process diagram.
Fig. 3 is a flow chart of highlight area detection in the protection platen image.
Fig. 4 is a flow chart of highlight removal based on image fusion.
Fig. 5 is a diagram of the perspective transformation steps based on the improved RANSAC algorithm.
Fig. 6 is a flow chart of protection platen state identification.
Fig. 7 is a diagram of the overall structure of the method of the invention.
Detailed Description
The method for identifying the operating state of the protection platen based on image fusion comprises the following steps:
First, highlight area detection based on the optimized OTSU algorithm:
1. Feature analysis of the platen image:
The invention detects the highlight area with a threshold segmentation method and performs image restoration on that basis, laying the foundation for identifying the platen operating state.
2. Highlight area detection based on the two-dimensional OTSU algorithm:
To better separate the foreground from the background and improve the anti-noise capability of the algorithm, the invention raises the traditional one-dimensional OTSU algorithm to two dimensions. The specific steps are as follows:
Step 1: let there be an image I whose gray values have L levels; the neighborhood average gray values of image I then also have L levels.
Step 2: let f(x, y) be the gray value of pixel (x, y) and g(x, y) the average gray value in the K × K neighborhood centered on pixel (x, y). Setting f(x, y) = i and g(x, y) = j forms the doublet (i, j).
Step 3: let f_ij be the number of occurrences of the doublet (i, j); the corresponding probability density is P_ij = f_ij / N, with i, j = 1, 2, …, L, where N is the total number of image pixels.
Step 4: arbitrarily choose a threshold vector (s, t) dividing the two-dimensional histogram of the image into 4 regions, where B and C represent the foreground and background of the image and A and D represent noise points, as shown in fig. 1.
Step 5: let the probabilities of the background and foreground be ω₁ and ω₂ respectively, with corresponding mean vectors μ₁ and μ₂, and let μ be the mean vector of the whole image. The formulas are as follows:

$$\omega_{1}=\sum_{i=1}^{s}\sum_{j=1}^{t}P_{ij} \quad (1)$$

where ω₁ is the probability of occurrence of the background and P_ij is the probability density of occurrence of the doublet (i, j).

$$\omega_{2}=\sum_{i=s+1}^{L}\sum_{j=t+1}^{L}P_{ij} \quad (2)$$

where ω₂ is the probability of occurrence of the foreground.

$$\mu_{1}=(\mu_{1i},\mu_{1j})^{T}=\left(\sum_{i=1}^{s}\sum_{j=1}^{t}\frac{i\,P_{ij}}{\omega_{1}},\ \sum_{i=1}^{s}\sum_{j=1}^{t}\frac{j\,P_{ij}}{\omega_{1}}\right)^{T} \quad (3)$$

where μ₁ is the mean vector corresponding to the background.

$$\mu_{2}=(\mu_{2i},\mu_{2j})^{T}=\left(\sum_{i=s+1}^{L}\sum_{j=t+1}^{L}\frac{i\,P_{ij}}{\omega_{2}},\ \sum_{i=s+1}^{L}\sum_{j=t+1}^{L}\frac{j\,P_{ij}}{\omega_{2}}\right)^{T} \quad (4)$$

where μ₂ is the mean vector corresponding to the foreground.

$$\mu=(\mu_{i},\mu_{j})^{T}=\left(\sum_{i=1}^{L}\sum_{j=1}^{L}i\,P_{ij},\ \sum_{i=1}^{L}\sum_{j=1}^{L}j\,P_{ij}\right)^{T} \quad (5)$$

where μ is the mean vector corresponding to the whole image.
Step 6: use the discrete measure matrix S_(s,t) to determine the measure tr(S_(s,t)) of the image. The formulas are as follows:
$$S_{(s,t)}=\omega_{1}(\mu_{1}-\mu)(\mu_{1}-\mu)^{T}+\omega_{2}(\mu_{2}-\mu)(\mu_{2}-\mu)^{T} \quad (6)$$

where S_(s,t) is the discrete measure matrix of the image.

$$tr(S_{(s,t)})=\omega_{1}\left[(\mu_{1i}-\mu_{i})^{2}+(\mu_{1j}-\mu_{j})^{2}\right]+\omega_{2}\left[(\mu_{2i}-\mu_{i})^{2}+(\mu_{2j}-\mu_{j})^{2}\right] \quad (7)$$

where tr(S_(s,t)) is the discrete measure of the image.
Step 7: the larger the discrete measure, the larger the between-class variance; the maximum discrete measure corresponds to the optimal threshold (s*, t*):

$$(s^{*},t^{*})=\arg\max\{tr(S_{(s,t)})\} \quad (8)$$

where (s*, t*) is the optimal threshold of the image.
After the optimal threshold is obtained through the above steps, it is used to binarize the gray image with its 0-255 brightness levels, separating the foreground region from the background region; the foreground region at this point is the highlight area.
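The threshold search of steps 1-7 can be sketched in Python with NumPy/OpenCV as below, assuming L = 256 gray levels and a 3 × 3 neighborhood (K = 3); like the derivation above, it neglects the noise regions A and D, and it uses cumulative sums over the two-dimensional histogram so evaluating every (s, t) stays fast:

```python
# A sketch of the two-dimensional OTSU search; eq. numbers refer to the text.
import cv2
import numpy as np

def otsu_2d(gray, k=3):
    L = 256
    mean = cv2.blur(gray, (k, k))                     # g(x, y): K x K neighborhood mean
    hist, _, _ = np.histogram2d(gray.ravel(), mean.ravel(),
                                bins=L, range=[[0, L], [0, L]])
    P = hist / gray.size                              # P_ij = f_ij / N

    i = np.arange(L).reshape(-1, 1)
    j = np.arange(L).reshape(1, -1)
    w1 = P.cumsum(0).cumsum(1)                        # omega_1(s, t), eq. (1)
    mi = (i * P).cumsum(0).cumsum(1)                  # running sum of i * P_ij
    mj = (j * P).cumsum(0).cumsum(1)                  # running sum of j * P_ij

    mu_i, mu_j = mi[-1, -1], mj[-1, -1]               # whole-image mean mu, eq. (5)
    w1 = np.clip(w1, 1e-12, 1 - 1e-12)
    # With A and D neglected, tr(S) of eq. (7) reduces to
    # |mu * w1 - (mi, mj)|^2 / (w1 * (1 - w1)).
    tr = ((mu_i * w1 - mi) ** 2 + (mu_j * w1 - mj) ** 2) / (w1 * (1 - w1))
    s, t = np.unravel_index(np.argmax(tr), tr.shape)  # optimal (s*, t*), eq. (8)
    return s, t

gray = cv2.imread("platen.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
s, t = otsu_2d(gray)
highlight = (gray > s).astype(np.uint8) * 255          # foreground = highlight area
```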
Secondly, removing highlight areas based on image fusion:
image Fusion (Image Fusion) is to synthesize two or more images collected about the same target into a new Image, so as to effectively improve the utilization rate of Image information, and enable the Image obtained after Fusion to describe the target more comprehensively and clearly.
The method aims at the problems that after a feature description vector is generated through an SIFT algorithm, when feature matching is carried out by using a nearest neighbor method, the method excessively depends on a preset threshold value and has error matching point pairs, introduces an RANSAC algorithm, further eliminates the error matching point pairs, and completes multi-view image feature matching and solves an optimal perspective transformation matrix. Meanwhile, in order to avoid the defect of random selection of the traditional RANSAC algorithm and reduce unnecessary iteration times and time, the RANSAC algorithm is improved, so that the effects of image perspective transformation and highlight removal are better.
1. Feature point detection:
The most classical feature point descriptor at present is the SIFT (Scale-Invariant Feature Transform) algorithm, which is widely used for feature point detection and feature description vector generation because it is invariant to rotation, scale scaling, and illumination changes. The specific steps are as follows:
Step 1: repeatedly down-sample the input image I(x, y) to obtain a series of images of different sizes, sorted from large to small and stacked from bottom to top to form a pyramid model. Then convolve each layer of images with a two-dimensional Gaussian function G(x, y, σ) whose scale varies continuously, obtaining the scale space L(x, y, σ) of the image:
L(x,y,σ)=G(x,y,σ)*I(x,y) (9)
$$G(x,y,\sigma)=\frac{1}{2\pi\sigma^{2}}e^{-(x^{2}+y^{2})/(2\sigma^{2})} \quad (10)$$

where * denotes the convolution operation and σ is the scale.
Step 2: combine the several images of each layer of the scale space into a group, and subtract adjacent layers of images within the same group to obtain difference-of-Gaussian images. Each pixel of every difference-of-Gaussian layer except the top and bottom of a group is compared with its 8 neighbors in the same layer and the 9 × 2 = 18 pixels of the adjacent layers above and below (26 neighbors in total); when the pixel's value is the maximum or minimum among them, it is an extreme point. The difference-of-Gaussian function is:
D(x,y,σ)=L(x,y,kσ)-L(x,y,σ) (11)
where k is a fixed coefficient.
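A compact sketch of steps 1-2 for one pyramid octave follows, assuming OpenCV; the base scale σ = 1.6 and three intervals per octave are customary SIFT defaults rather than values fixed by this description:

```python
# Build one octave of the scale space and its difference-of-Gaussian images.
import cv2
import numpy as np

def dog_octave(image, sigma=1.6, intervals=3):
    k = 2 ** (1.0 / intervals)                    # the fixed coefficient k in (11)
    layers = [cv2.GaussianBlur(image, (0, 0), sigma * k ** n)
              for n in range(intervals + 3)]      # L(x, y, sigma) per layer, eq. (9)
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), eq. (11)
    return [cv2.subtract(layers[n + 1], layers[n]) for n in range(len(layers) - 1)]

gray = cv2.imread("platen.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
dogs = dog_octave(gray)
# An interior pixel of a middle DoG layer is an extreme point when it is the
# maximum or minimum of its 26 neighbours (8 in its layer, 9 x 2 adjacent).
```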
Step 3: because a detected extreme point is an extremum of the discrete space rather than a true feature point of the continuous space, the coordinates of the extreme point are recalculated by curve-fitting the difference-of-Gaussian function in scale space, i.e., by expanding the difference-of-Gaussian function with a Taylor formula:

$$D(X)=D+\frac{\partial D^{T}}{\partial X}X+\frac{1}{2}X^{T}\frac{\partial^{2}D}{\partial X^{2}}X \quad (12)$$

where D(X) is the difference-of-Gaussian function and X = (x, y, σ)^T.
Taking the derivative and setting it equal to zero gives the offset of the extreme point:

$$\hat{X}=-\left(\frac{\partial^{2}D}{\partial X^{2}}\right)^{-1}\frac{\partial D}{\partial X} \quad (13)$$

The corresponding value of the extreme point equation is:

$$D(\hat{X})=D+\frac{1}{2}\frac{\partial D^{T}}{\partial X}\hat{X} \quad (14)$$

where D(X̂) is the value of the extreme point equation at the offset X̂.
Using the original extreme point coordinates plus the offset as the new coordinates of the extreme point, the pixel value at the new coordinates is compared with the preset contrast threshold, and low-contrast extreme points are eliminated; the remaining extreme points are the feature points.
Step 4: to make the descriptor rotation-invariant, a direction needs to be assigned to each feature point. The gradient magnitudes and directions of the pixels in the feature point's neighborhood are obtained with the gradient method; the gradient magnitude m(x, y) and direction θ(x, y) are expressed as:

$$m(x,y)=\sqrt{(L(x+1,y)-L(x-1,y))^{2}+(L(x,y+1)-L(x,y-1))^{2}} \quad (15)$$

$$\theta(x,y)=\tan^{-1}\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)} \quad (16)$$

where the scale used for L is the scale of each feature point.
Then the gradient directions are accumulated in a histogram, and the direction with the highest amplitude in the histogram is taken as the main direction of the feature point. To enhance the robustness of matching, any direction whose amplitude exceeds 80% of the main direction's amplitude is retained as an auxiliary direction of the feature point.
Step 5: take a 16 × 16 pixel window centered on the feature point and divide it into 4 × 4 sub-regions; on each sub-region, use a gradient orientation histogram to accumulate the gradient amplitudes in 8 directions, so that each sub-region is represented by an 8-dimensional feature description vector and each feature point is finally described by a 4 × 4 × 8 = 128-dimensional feature vector. To make the matching result illumination-invariant, the 128-dimensional feature vector is normalized and a threshold is selected to limit the gradient amplitude, which effectively reduces the influence of uneven illumination on the matching result.
After the SIFT feature vectors of the reference map and the auxiliary map are generated, they are matched with the nearest neighbor method: a ratio threshold is set, and if the nearest Euclidean distance between two feature point description vectors divided by the next-nearest Euclidean distance is smaller than the ratio threshold, the two feature points are considered correctly matched.
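The detection and matching just described can be sketched as follows, assuming an OpenCV build that provides cv2.SIFT_create (4.4+ or opencv-contrib); the file names are placeholders, and 0.6 matches the initial ratio threshold used by the improved RANSAC below:

```python
# SIFT feature extraction plus nearest-neighbour ratio matching.
import cv2

ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)   # reference map
aux = cv2.imread("auxiliary.jpg", cv2.IMREAD_GRAYSCALE)   # auxiliary map

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_aux, des_aux = sift.detectAndCompute(aux, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des_ref, des_aux, k=2)             # two nearest neighbours

ratio = 0.6   # nearest / next-nearest Euclidean distance threshold
good = [m for m, n in knn if m.distance / n.distance < ratio]
```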
2. Perspective transformation based on the improved RANSAC algorithm:
Because the view angle and size of the auxiliary image must be consistent with those of the reference image when the highlight-removed images are fused, the auxiliary map is adjusted with a perspective transformation. The perspective transformation formula is as follows:

$$S\begin{bmatrix}x'\\y'\\1\end{bmatrix}=H\begin{bmatrix}u\\v\\1\end{bmatrix} \quad (17)$$

where (x', y') are the coordinate values of a feature matching point of the reference map, (u, v) are the coordinate values of the corresponding feature matching point of the auxiliary map, S is a transformation coefficient between the images, and H is a 3 × 3 transformation matrix, that is:

$$H=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix} \quad (18)$$

where \(T_{1}=\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\) can rotate, scale, and distort the image, \(T_{2}=[a_{13}\ a_{23}]^{T}\) can translate the image, and \(T_{3}=[a_{31}\ a_{32}]\) can generate the perspective transformation of the image, as shown in fig. 2.
Because the coordinates of the feature matching points of the reference image and the auxiliary image have already been obtained with the SIFT algorithm, the transformation matrix H can be obtained from 4 randomly selected pairs of feature matching points, and the perspective transformation is applied to the auxiliary image with this matrix. The transformed auxiliary image pixel coordinates (x, y) are given by:

$$x=\frac{a_{11}u+a_{12}v+a_{13}}{a_{31}u+a_{32}v+a_{33}},\qquad y=\frac{a_{21}u+a_{22}v+a_{23}}{a_{31}u+a_{32}v+a_{33}} \quad (19)$$

where (x, y) are the transformed auxiliary image pixel coordinates.
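As a sketch of equations (17)-(19), OpenCV can determine H from 4 matching pairs and warp the auxiliary map; the coordinates below are illustrative placeholders:

```python
# Four matching pairs determine the 3 x 3 matrix H of eq. (18).
import cv2
import numpy as np

# (u, v) in the auxiliary map and the matching (x', y') in the reference map.
src = np.float32([[10, 10], [200, 15], [205, 180], [12, 175]])
dst = np.float32([[5, 8], [198, 10], [200, 182], [8, 178]])

H = cv2.getPerspectiveTransform(src, dst)

aux = cv2.imread("auxiliary.jpg")                # placeholder file name
warped = cv2.warpPerspective(aux, H, (aux.shape[1], aux.shape[0]))

# Eq. (19) applied to a single auxiliary-map point (u, v):
xy = cv2.perspectiveTransform(np.float32([[[10, 10]]]), H)
```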
Because matching two feature points with the nearest neighbor method depends excessively on the currently preset threshold, the appropriate size of the threshold cannot be judged accurately. When the set threshold is larger, more wrong matching point pairs appear; when the set threshold is smaller, although the number of mismatched pairs decreases, the number of matching pairs also drops significantly, seriously affecting the optimal selection of the transformation matrix H. A RANSAC algorithm therefore needs to be introduced to further remove the mismatched point pairs and obtain the optimal transformation matrix.
The idea of the RANSAC algorithm is to fit an estimated model from a randomly selected subset of the data and test the remaining data against it: data consistent with the estimated model are classified as interior points (inliers). If enough points are classified as hypothetical interior points, the estimated model is considered reasonable; the model is then re-estimated from all hypothetical interior points and evaluated by the error rate of the interior points under the model. This process is repeated a fixed number of times, and each resulting model is either discarded because it has too few interior points or adopted because it is better than the existing model.
Because only 4 pairs of matching points are needed when calculating the transformation matrix H, the invention proposes an improved RANSAC algorithm for this purpose, with the following steps:
Step 1: first, initially match the effective feature points extracted by the SIFT algorithm with the nearest neighbor method, using an initially selected Euclidean distance threshold of 0.6.
Step 2: equally divide the image into 4 regions and judge whether each of the currently divided 4 regions contains more than 4 pairs of feature matching points; if so, proceed to the next step, otherwise add 0.1 to the Euclidean distance threshold and return to the previous step to re-match.
Step 3: from each of the 4 regions, select the 4 pairs of matching points with the smallest Euclidean distance, 16 pairs in total.
Step 4: combine the 16 pairs of matching points into groups of 4, sort the groups 1, 2, …, N by the combined Euclidean distance of their 4 pairs from small to large, and select the first 50 groups.
Step 5: first take the 4 pairs of matching points of group 1 in sequence-number order and calculate a transformation matrix H.
Step 6: check all matching point pairs in the image with the matrix H; when the interior points exceed 50% of the total number of matching point pairs, the currently calculated matrix H is the optimal transformation matrix; otherwise return to the previous step and take the next group of 4 pairs of matching points in sequence-number order to calculate the transformation matrix H.
The auxiliary map is adjusted with the optimal transformation matrix, and the corresponding area of the auxiliary map is used to cover the highlight area of the reference map, yielding the highlight-removed reference map.
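A sketch of steps 1-6 and the final covering operation is given below, reusing kp_ref, kp_aux, knn, ref, aux and highlight from the sketches above (so the images are assumed to share one size). Reading "combine the 16 pairs into groups of 4" as all 4-of-16 combinations, the helper name improved_ransac, and the 3-pixel interior-point tolerance are assumptions of this sketch, not details fixed by the description:

```python
# Hedged sketch of the improved RANSAC selection strategy.
from itertools import combinations
import cv2
import numpy as np

def improved_ransac(kp_ref, kp_aux, knn, width, height, thresh=0.6, tol=3.0):
    while True:
        good = [m for m, n in knn if m.distance / n.distance < thresh]
        # Step 2: equally divide the image into 4 regions and bin the matches.
        regions = [[] for _ in range(4)]
        for m in good:
            x, y = kp_ref[m.queryIdx].pt
            regions[int(x >= width / 2) + 2 * int(y >= height / 2)].append(m)
        if all(len(r) > 4 for r in regions):
            break
        thresh += 0.1                              # relax the threshold, re-match

    # Steps 3-4: 4 smallest-distance matches per region (16 pairs), grouped 4
    # at a time, groups sorted by total distance, first 50 groups kept.
    picked = sum((sorted(r, key=lambda m: m.distance)[:4] for r in regions), [])
    groups = sorted(combinations(picked, 4),
                    key=lambda g: sum(m.distance for m in g))[:50]

    all_src = np.float32([[kp_aux[m.trainIdx].pt] for m in good])
    all_dst = np.float32([[kp_ref[m.queryIdx].pt] for m in good])

    # Steps 5-6: try each group until interior points exceed 50% of all pairs.
    for g in groups:
        src = np.float32([kp_aux[m.trainIdx].pt for m in g])
        dst = np.float32([kp_ref[m.queryIdx].pt for m in g])
        H = cv2.getPerspectiveTransform(src, dst)
        proj = cv2.perspectiveTransform(all_src, H)
        inliers = np.linalg.norm(proj - all_dst, axis=2).ravel() < tol
        if inliers.mean() > 0.5:
            return H                               # current H is optimal
    return None

H = improved_ransac(kp_ref, kp_aux, knn, ref.shape[1], ref.shape[0])
warped = cv2.warpPerspective(aux, H, (ref.shape[1], ref.shape[0]))
ref[highlight > 0] = warped[highlight > 0]         # cover the highlight area
```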
Thirdly, identifying the on-off state of the pressing plate:
To accurately identify the platen state in the highlight-removed reference image, the invention adopts a relay protection platen state identification method based on image processing and morphological feature analysis. First, to improve the accuracy of overall feature extraction from the platen image, color region screening, binarization, and morphological processing are performed on the highlight-removed reference image, and connected regions are extracted in 8-connected mode. Then the area, size, and shape are analyzed according to morphological features, and the effective platen areas are accurately extracted from all regions. Finally, the state of each effective area is recognized from the direction angle of the platen in its on or off state, and the effective platens are sorted by barycentric coordinates to obtain the full sequence of effective platen states.
1. Connected-region extraction from the platen image:
To better reflect the overall and local feature information of the image and accurately extract the connected regions of the platen image, the following steps are adopted:
Step 1: the highlight-removed reference image is a color image in which the effective platens are red and yellow overall, the standby platens are camel-red, and the background area is white; the red and yellow areas can therefore be screened out by setting suitable RGB thresholds. Considering that other elements, marks, and similar interference factors possibly present in the image could cause screening errors, a large number of experiments show that for red and yellow pixels the difference between the maximum and minimum of the R, G, B three-channel values is no less than 40. The pixels of the image meeting this condition are therefore retained, and the remaining areas are set to equal R, G, B values, i.e., turned black.
Step 2: to improve the operation speed, grayscale the reference image after red-and-yellow screening, obtain a binarization threshold with the OTSU algorithm, and binarize the gray image with that threshold.
Step 3: binarization produces some uneven edges, and holes may exist where the platens connect, which seriously affects subsequent feature extraction. The binary image is therefore morphologically processed: the holes are filled with dilation and erosion operations, the connected regions are extracted in 8-connected mode (a pixel is connected to a neighboring pixel above, below, left, right, or at any of its four corners), and the N extracted connected regions are numbered 1, 2, …, N.
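A sketch of steps 1-3 follows, assuming OpenCV; the 40-level channel-difference rule and 8-connectivity come from the description, while the file name and the 5 × 5 structuring element are illustrative placeholders:

```python
# Colour screening, OTSU binarization, morphology and 8-connected labelling.
import cv2
import numpy as np

img = cv2.imread("deglared_reference.jpg")        # highlight-removed reference

# Step 1: keep pixels whose max-min spread over R, G, B is at least 40
# (the red / yellow platen colours); turn the remaining pixels black.
spread = img.max(axis=2).astype(np.int16) - img.min(axis=2)
screened = img.copy()
screened[spread < 40] = 0

# Step 2: grayscale, then binarize with an OTSU threshold.
gray = cv2.cvtColor(screened, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Step 3: closing (dilation then erosion) fills holes at the platen joints.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Number the N connected regions 1, 2, ..., N with 8-connectivity.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed, connectivity=8)
```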
2. Screening the effective platen areas:
To accurately screen out the effective platen areas, morphological feature analysis is performed on three aspects: area, size, and shape. The specific steps are as follows:
Step 1: area analysis. Invalid standby platens, marks, and the like may remain in the image after connected-region extraction, and the connected regions where they lie are observed to be small in area; an area threshold V_area-thre is therefore set to remove these interference areas, with the formula:

$$V_{area\text{-}thre}=0.3\times\frac{1}{5}\sum_{i=1}^{5}V_{area}(i) \quad (20)$$

where the threshold V_area-thre is obtained by multiplying the average pixel area of the five largest regions in the binary image by 0.3, and V_area(i) is the i-th region area sorted from large to small.
According to formula (20), a connected region whose pixel area is larger than the threshold V_area-thre is judged a candidate effective platen area; otherwise it is an interference area.
Step 2: size analysis. The effective platens in the image after connected-region extraction have a certain size, so the boundary lengths of an effective platen region bear a certain ratio to the pixel sizes of the image in the X and Y directions. The image pixel sizes P_X and P_Y in the X and Y directions are therefore used to set the pixel size thresholds X_width-thre and Y_width-thre as fixed fractions of P_X and P_Y respectively (formula (21)).
According to formula (21), a connected region whose boundary lengths in the X and Y directions are both larger than the corresponding thresholds is judged a candidate effective platen area; otherwise it is an interference area.
Step 3: shape analysis. The effective platens extracted have a certain shape, so an effective platen region has a certain equivalent aspect ratio in the image. To eliminate other interference information of similar shape in the image, the equivalent aspect ratio threshold S_ratio-thre is set to 2 < S_ratio-thre < 5; if the equivalent aspect ratio of a connected region lies within this threshold, the region is judged a candidate effective platen area, otherwise an interference area.
Step 4: search all connected regions in the image, repeating steps 1-3 until the N-th region has been searched; the candidate effective platen areas that pass all the tests are judged the final effective platen areas.
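The three screening rules can be sketched as below, reusing the stats array from the extraction sketch; since the proportionality constants of formula (21) appear only in the original drawing, the 0.05 fractions here are placeholders:

```python
# Area, size and shape screening of the connected regions.
import cv2
import numpy as np

def screen_platens(stats, img_w, img_h, ratio_lo=2.0, ratio_hi=5.0):
    areas = stats[1:, cv2.CC_STAT_AREA]             # label 0 is the background
    # Formula (20): 0.3 times the mean area of the five largest regions.
    v_thre = 0.3 * np.sort(areas)[::-1][:5].mean()
    x_thre, y_thre = 0.05 * img_w, 0.05 * img_h     # placeholder fractions

    valid = []
    for idx in range(1, stats.shape[0]):
        w = stats[idx, cv2.CC_STAT_WIDTH]
        h = stats[idx, cv2.CC_STAT_HEIGHT]
        area = stats[idx, cv2.CC_STAT_AREA]
        ratio = max(w, h) / max(min(w, h), 1)       # equivalent aspect ratio
        if (area > v_thre and w > x_thre and h > y_thre
                and ratio_lo < ratio < ratio_hi):   # 2 < S_ratio-thre < 5
            valid.append(idx)                       # candidate effective platen
    return valid

valid = screen_platens(stats, labels.shape[1], labels.shape[0])
```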
3. Identifying the platen on-off state:
To accurately identify the thrown-in and withdrawn states of the platens in the highlight-removed protection platen image after effective-area screening, the invention uses the direction angle of the platen in its two states: the direction angle of a thrown-in platen is ±90° and that of a withdrawn platen is ±45°, with a margin of ±10°. The criterion formula is:

$$\mathrm{state}=\begin{cases}1, & 80^{\circ}\le|\theta|\le 100^{\circ}\\0, & 35^{\circ}\le|\theta|\le 55^{\circ}\end{cases} \quad (22)$$

where the thrown-in state is marked 1 and the withdrawn state is marked 0. After the on-off state of the effective platens is recognized, the effective platens are sorted from left to right and top to bottom by barycentric coordinates, finally yielding a state sequence containing only 0s and 1s.
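Criterion (22) and the barycentric ordering can be sketched as below, reusing labels, valid and centroids from the sketches above; deriving the direction angle from central image moments and the 20-pixel row tolerance are choices of this sketch rather than steps fixed by the description:

```python
# Classify each effective platen by direction angle, then order by barycentre.
import cv2
import numpy as np

def platen_states(labels, valid, centroids, row_tol=20):
    results = []
    for idx in valid:
        mask = (labels == idx).astype(np.uint8)
        m = cv2.moments(mask, binaryImage=True)
        # Principal-axis orientation in degrees, in the range (-90, 90].
        theta = 0.5 * np.degrees(np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"]))
        if 80 <= abs(theta) <= 100:
            state = 1                     # thrown in: about +/-90 degrees
        elif 35 <= abs(theta) <= 55:
            state = 0                     # withdrawn: about +/-45 degrees
        else:
            continue                      # outside both +/-10 degree margins
        cx, cy = centroids[idx]
        results.append((cy, cx, state))
    # Sort top-to-bottom into coarse rows, then left-to-right within each row.
    results.sort(key=lambda r: (round(r[0] / row_tol), r[1]))
    return [s for _, _, s in results]     # the 0/1 state sequence

sequence = platen_states(labels, valid, centroids)
```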
In conclusion, the invention takes the protection platen images photographed by the intelligent inspection robot as its basis, detects the highlight area in the image with a threshold segmentation method, restores the image with an image fusion method on that basis to eliminate the interference of the light sources present in the substation with platen state identification, and finally judges the platen operating state from the inclination angle detected at the platen edge after image restoration. This better assists the inspection robot in checking the platen on-off state, improves the recognition accuracy of the relay protection platen's on-off state, reduces the labor intensity of inspection personnel, reduces misoperation in power grid operation, avoids economic loss, and ensures the safe and stable operation of the power grid.

Claims (2)

1. A perspective transformation method based on an improved RANSAC algorithm is characterized in that:
the perspective transformation formula is as follows:
$$S\begin{bmatrix}x'\\y'\\1\end{bmatrix}=H\begin{bmatrix}u\\v\\1\end{bmatrix}$$

wherein (x', y') are the coordinate values of a feature matching point of the reference map, (u, v) are the coordinate values of the corresponding feature matching point of the auxiliary map, S is a transformation coefficient between the images, and H is a 3 × 3 transformation matrix, that is:

$$H=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{bmatrix}$$

wherein \(T_{1}=\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\) can rotate, scale, and distort the image, \(T_{2}=[a_{13}\ a_{23}]^{T}\) can translate the image, and \(T_{3}=[a_{31}\ a_{32}]\) can generate a perspective transformation of the image.
Because the coordinates of the feature matching points of the reference image and the auxiliary image have already been obtained with the SIFT algorithm, the transformation matrix H can be obtained from 4 randomly selected pairs of feature matching points, and the perspective transformation is applied to the auxiliary image with this matrix; the transformed auxiliary image pixel coordinates (x, y) are given by:

$$x=\frac{a_{11}u+a_{12}v+a_{13}}{a_{31}u+a_{32}v+a_{33}},\qquad y=\frac{a_{21}u+a_{22}v+a_{23}}{a_{31}u+a_{32}v+a_{33}}$$
2. The perspective transformation method based on the improved RANSAC algorithm according to claim 1, characterized in that:
only 4 pairs of matching points are needed to calculate the transformation matrix H, for which an improved RANSAC algorithm is provided, comprising the following steps:
Step 1: first, initially match the effective feature points extracted by the SIFT algorithm with the nearest neighbor method, using an initially selected Euclidean distance threshold of 0.6;
Step 2: equally divide the image into 4 regions and judge whether each of the currently divided 4 regions contains more than 4 pairs of feature matching points; if so, proceed to the next step, otherwise add 0.1 to the Euclidean distance threshold and return to the previous step to re-match;
Step 3: from each of the 4 regions, screen the 4 pairs of matching points with the smallest Euclidean distance, 16 pairs in total;
Step 4: combine the 16 pairs of matching points into groups of 4, sort the groups 1, 2, …, N by the combined Euclidean distance of their 4 pairs from small to large, and select the first 50 groups;
Step 5: take the 4 pairs of matching points of group 1 in sequence-number order and calculate a transformation matrix H;
Step 6: check all matching point pairs in the image with the matrix H; when the interior points exceed 50% of the total number of matching point pairs, the currently calculated matrix H is the optimal transformation matrix; otherwise return to the previous step and take the next group of 4 pairs of matching points in sequence-number order to calculate the transformation matrix H.
CN202210352267.1A 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm Pending CN114926391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210352267.1A CN114926391A (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210352267.1A CN114926391A (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm
CN202010631727.5A CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based method for identifying running state of protection pressing plate

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010631727.5A Division CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based method for identifying running state of protection pressing plate

Publications (1)

Publication Number Publication Date
CN114926391A true CN114926391A (en) 2022-08-19

Family

ID=73227207

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202210352274.1A Pending CN114926392A (en) 2020-07-03 2020-07-03 Highlight region removing method based on image fusion
CN202210352267.1A Pending CN114926391A (en) 2020-07-03 2020-07-03 Perspective transformation method based on improved RANSAC algorithm
CN202010631727.5A Active CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based method for identifying running state of protection pressing plate

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210352274.1A Pending CN114926392A (en) 2020-07-03 2020-07-03 Highlight region removing method based on image fusion

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010631727.5A Active CN111915544B (en) 2020-07-03 2020-07-03 Image fusion-based method for identifying running state of protection pressing plate

Country Status (1)

Country Link
CN (3) CN114926392A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508940B (en) * 2020-12-22 2022-06-03 三峡大学 Method for identifying switching state of functional protection pressing plate of transformer substation
CN113096120A (en) * 2021-04-30 2021-07-09 随锐科技集团股份有限公司 Method and system for identifying on-off state of protection pressing plate
CN113361548B (en) * 2021-07-05 2023-11-14 北京理工导航控制科技股份有限公司 Local feature description and matching method for highlight image
CN114919792B (en) * 2022-06-01 2023-09-12 中迪机器人(盐城)有限公司 System and method for detecting abnormality of film sticking of steel belt

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289676B (en) * 2011-07-30 2013-02-20 山东鲁能智能技术有限公司 Method for identifying mode of switch of substation based on infrared detection
JP6056319B2 (en) * 2012-09-21 2017-01-11 富士通株式会社 Image processing apparatus, image processing method, and image processing program
CN107424181A (en) * 2017-04-12 2017-12-01 湖南源信光电科技股份有限公司 A kind of improved image mosaic key frame rapid extracting method
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm

Also Published As

Publication number Publication date
CN111915544A (en) 2020-11-10
CN111915544B (en) 2022-05-03
CN114926392A (en) 2022-08-19

Similar Documents

Publication Publication Date Title
CN111915544B (en) Image fusion-based method for identifying running state of protection pressing plate
CN111369516B (en) Transformer bushing heating defect detection method based on infrared image recognition
CN111080691A (en) Infrared hot spot detection method and device for photovoltaic module
CN110751619A (en) Insulator defect detection method
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN112417931B (en) Method for detecting and classifying water surface objects based on visual saliency
CN111915509B (en) Protection pressing plate state identification method based on shadow removal optimization of image processing
CN110135248A (en) A kind of natural scene Method for text detection based on deep learning
CN113888462A (en) Crack identification method, system, readable medium and storage medium
CN112801949A (en) Method and device for determining discharge area in ultraviolet imaging detection technology
Pawade et al. Comparative study of different paper currency and coin currency recognition method
CN114429649B (en) Target image identification method and device
Sharma et al. Concrete crack detection using the integration of convolutional neural network and support vector machine
CN107944453A (en) Based on Hu not bushing detection methods of bending moment and support vector machines
CN114898116A (en) Garage management method and system based on embedded platform and storage medium
CN110321890A (en) A kind of digital instrument recognition methods of electric inspection process robot
Liu et al. Container-code recognition system based on computer vision and deep neural networks
Zhang et al. Research on multiple features extraction technology of insulator images
CN114913370A (en) State automatic detection method and device based on deep learning and morphology fusion
CN112330643B (en) Secondary equipment state identification method based on sparse representation image restoration
CN113506290A (en) Method and device for detecting defects of line insulator
Shang et al. Automatic Drainage Pipeline Defect Detection Method Using Handcrafted and Network Features
Wang et al. Thermal Defect Detection and Location for Power Equipment based on Improved VGG16
CN116051539A (en) Diagnosis method for heating fault of power transformation equipment
Zhou et al. Gun model recognition using geometric features of contour image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination