CN112884795A - Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion - Google Patents
- Publication number: CN112884795A
- Application number: CN201911210391.9A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/194—Image analysis; segmentation or edge detection involving foreground-background segmentation
- G06F18/23—Pattern recognition; analysing; clustering techniques
- G06Q50/06—ICT specially adapted for sector-specific business processes; energy or water supply
- G06T5/30—Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
- G06V10/758—Image or video recognition or understanding using pattern recognition or machine learning; matching involving statistics of pixels or of feature values, e.g. histogram matching
Abstract
The invention discloses a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion. The information processing center proceeds as follows. First, the image is divided into different color intervals, similar pixels are clustered with the mean shift algorithm, and a color saliency value is calculated. The image is then converted to a gray space, similar pixels are again clustered with the mean shift algorithm, and a gradient saliency value is calculated. Next, texture contrast differences between pixel blocks are computed to obtain a texture saliency value. The multi-feature saliency values of each pixel block are weighted and fused according to a center distance method to obtain a foreground segmentation result map. Finally, the foreground segmentation result is subtracted from the original image to obtain the image background, completing the segmentation of foreground and background. The method segments the foreground and background of power transmission line inspection images and helps maintenance personnel quickly identify the various power fittings.
Description
Technical Field
The invention relates to a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion, and belongs to the field of computer vision and power transmission line inspection.
Background Art
With the rapid growth of the national economy, the national power system has developed quickly, and power transmission systems now span the whole country. However, lightning strikes, icing, external damage and similar influences on the various fittings of the power system can paralyze an entire transmission system, disrupting people's daily lives and causing huge economic losses to society. It is therefore important to discover faults in the power system in a timely manner.
The human visual attention mechanism refers to the way a person facing a scene automatically selects regions of interest while selectively ignoring the rest; these regions of interest are called salient regions. Using the saliency features of an image, a target region can be located quickly.
In recent years, unmanned aerial vehicle (UAV) technology has entered a stage of rapid development and is widely applied in many fields. In power systems in particular, UAVs not only replace manual inspection but also reduce the risks of the inspection process and improve its efficiency. By combining aerial inspection images with saliency features and applying digital image processing methods, the aerial images can be further processed to segment the foreground and background of power transmission line inspection images, analyze the state of the power fittings, and safeguard the normal operation of the entire transmission system.
Disclosure of Invention
The technical problem solved by the invention is as follows: a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion is provided, which achieves fast and accurate segmentation of the foreground and background of an inspection image through weighted fusion of the image's multi-feature saliency values. The method is efficient and achieves good segmentation on inspection images with different backgrounds.
A power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion comprises the following steps:
(1) First, different color intervals are divided according to the color features of the image; pixels with similar color histograms are then clustered with the mean shift clustering algorithm, and the color saliency value of each pixel block is calculated.
(2) The image is converted to a gray space and its gradient features are calculated; pixels with similar gradient histograms are clustered with the mean shift clustering algorithm, and the gradient saliency value of each pixel block is calculated.
(3) According to the texture features of the image, the contrast difference between each pixel block and every other pixel block is calculated; the differences are weighted and fused according to the distances between the blocks, and the texture saliency value of each pixel block is calculated.
(4) The spatial weight term of each pixel block is calculated, and the block's color, gradient and texture saliency values are weighted and fused in turn according to the center distance method, realizing the multi-feature-saliency-fused foreground segmentation of the inspection image.
(5) Dilation and erosion are applied to the segmented image to keep the most complete foreground edge; the foreground is subtracted from the original image to obtain the background image, completing the foreground and background segmentation of the power transmission line inspection image.
Compared with the prior art, the invention has the beneficial effects that:
(1) The method combines multiple saliency features of the image, avoiding detection errors caused by any single feature. It obtains a deep saliency segmentation of the image, reduces detection errors caused by poor illumination or unfavorable shooting angles, improves the foreground/background segmentation accuracy of inspection images, and has good stability.
(2) The method is practical and efficient, segments foreground and background well on inspection images with different backgrounds, has high application value, and can greatly improve the inspection efficiency of workers.
Drawings
FIG. 1 is a system diagram of a multi-feature significance fused power transmission line inspection foreground and background segmentation method;
FIG. 2 is a flow chart of a multi-feature significance fused power transmission line inspection foreground and background segmentation method;
FIG. 3 is a schematic diagram of a process of clustering similar color histograms by a mean shift clustering algorithm;
FIG. 4 is a schematic diagram of the spatial position of a pixel block relative to the image center point.
Detailed Description
In order to describe the invention more specifically, the following detailed description of the invention is provided with reference to the accompanying drawings and the detailed description.
As shown in fig. 1, the invention provides a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion. The system comprises a UAV image acquisition module, an onboard image transmission module and an information processing center module. The UAV, carrying a visible light camera, shoots images in real time, and the collected image information is transmitted through the onboard image transmission module to the information processing center module for analysis and processing.
As shown in fig. 2, 3 and 4, the invention provides a power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion, which comprises the following steps:
1) First, according to the color features of the image, the collected inspection image is converted into the HSV color space and the hue component H is divided into 14 different color intervals. All pixels i (i = 1, 2, 3, …) are traversed one by one; the hue component value of each pixel i is calculated and assigned to the corresponding color interval. Pixels i with similar color histograms are grouped into clusters by the mean shift clustering algorithm, giving wc groups, and the color saliency value of each pixel i is calculated. The image is then subdivided into n 2 × 2 pixel blocks θj (j = 1, 2, 3, …, n), and the color saliency value of each pixel block θi is calculated.
2) According to the gradient features of the image, the inspection image is converted to a gray-scale map by color space normalization. Each pixel i (i = 1, 2, 3, …) is traversed in turn and its gradient is calculated to obtain a gradient histogram. Pixels with similar gradient histograms are grouped by the mean shift clustering algorithm, giving wg groups, and the gradient saliency value of each pixel i is calculated. Finally, the saliency values of the pixel points i(m,n), i(m,n+1), i(m+1,n) and i(m+1,n+1) are fused to calculate the gradient saliency value of each pixel block θi.
3) According to the texture features of the image, the gray-level co-occurrence matrix of each pixel block θi is extracted; the contrast differences between θi and the remaining n−1 pixel blocks are calculated, the n−1 difference values are weighted and fused according to the center distance method, and the texture saliency value of each pixel block is obtained.
4) The distance between each pixel block θi and the image center point is calculated to obtain the spatial weight term of θi; the color saliency value salc(θi), gradient saliency value salg(θi) and texture saliency value salt(θi) of the block are weighted and fused, realizing the multi-feature-saliency-fused foreground segmentation of the inspection image.
5) Dilation and erosion are applied to the segmented image to keep the most complete foreground edge; the foreground is subtracted from the original image to obtain the background image, completing the foreground and background segmentation of the power transmission line inspection image.
In the step (1), the method for calculating the color saliency value of the pixel block is as follows:
1) First, the collected image is converted into the HSV (hue, saturation, value) color space and the hue component H is divided into 14 different color intervals, each representing a color grade k (k = 1, 2, …, 14). The image is traversed pixel by pixel starting from pixel i; the H component value of each pixel i is calculated and assigned to the corresponding color grade k, giving the color histogram fcol(i) of each pixel i.
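The hue quantization described above can be sketched as follows; a uniform split of the hue circle into 14 equal intervals is an assumption, since the patent does not give the actual interval boundaries:

```python
def hue_to_interval(h, n_intervals=14):
    """Map a hue value h in [0, 360) to a color grade k in 1..n_intervals.

    Assumption: the hue circle is split into n_intervals equal intervals;
    the patent does not specify the boundaries.
    """
    if not 0 <= h < 360:
        raise ValueError("hue must lie in [0, 360)")
    return int(h * n_intervals // 360) + 1

def color_histogram(hues, n_intervals=14):
    """Count how many pixels fall into each color grade."""
    hist = [0] * n_intervals
    for h in hues:
        hist[hue_to_interval(h, n_intervals) - 1] += 1
    return hist
```

For example, a hue of 0° falls in grade 1 and a hue of 359° in grade 14.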
2) Using the mean shift clustering algorithm, a circular sliding window of radius r with a randomly selected center point O is slid over the data, and the window containing the most pixel points is retained. Pixels are then clustered according to the sliding window each pixel point i lies in, so that pixels with similar color histograms fcol(i) are grouped into one cluster. In total wc clusters are obtained, represented by a group of vectors; each cluster contains several pixels i (i = 1, 2, 3, …). The color saliency of pixel i is then calculated as follows:
where salc(i) denotes the color saliency value of pixel point i; the number of pixels in a cluster serves as its weight; each cluster contributes its average color histogram value and its variance; and λc1, λc2 are weights set to 0.001 to eliminate the average error.
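The mean shift grouping used above can be illustrated in one dimension, treating each pixel's histogram feature as a single scalar; the actual method clusters whole histograms, and the flat window of radius r is a simplifying assumption:

```python
def _neighbours(points, m, r):
    """All sample points within radius r of position m."""
    return [p for p in points if abs(p - m) <= r]

def mean_shift_1d(points, r=1.0, iters=50):
    """Cluster scalar features by mean shift: each point's mode is
    repeatedly shifted to the mean of its neighbours within radius r;
    points whose modes coincide end up in the same cluster."""
    modes = list(points)
    for _ in range(iters):
        modes = [sum(nb) / len(nb)
                 for nb in (_neighbours(points, m, r) for m in modes)]
    labels, centers = [], []
    for m in modes:
        for ci, c in enumerate(centers):
            if abs(c - m) <= r / 2:   # merge modes that have converged together
                labels.append(ci)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return labels, centers
```

Two well-separated groups of feature values converge to two modes, so no cluster count has to be fixed in advance, which matches the wc/wg groups arising automatically in the patent's description.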
3) After the color saliency value salc(i) of each pixel point i has been obtained, the image is divided into n 2 × 2 pixel blocks θj (j = 1, 2, 3, …, n). Each pixel block consists of the pixel points i(m,n), i(m,n+1), i(m+1,n) and i(m+1,n+1), where i(m,n) denotes the pixel point in row m and column n, i(m,n+1) the pixel in row m, column n+1, i(m+1,n) the pixel in row m+1, column n, and i(m+1,n+1) the pixel in row m+1, column n+1. Finally the color saliency value of each pixel block is calculated:
salc(θi) = salc(i(m,n)) + salc(i(m,n+1)) + salc(i(m+1,n)) + salc(i(m+1,n+1))    (2)
where salc(θi) denotes the color saliency value of the ith pixel block θi, and salc(i(m,n)), salc(i(m,n+1)), salc(i(m+1,n)), salc(i(m+1,n+1)) are the color saliency values of the pixels in row m column n, row m column n+1, row m+1 column n, and row m+1 column n+1 respectively.
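Equation (2) above, applied over all non-overlapping 2 × 2 blocks of a per-pixel saliency map, can be sketched as:

```python
def block_saliency(sal, m, n):
    """Saliency of the 2x2 pixel block whose top-left pixel is (m, n),
    per equation (2): the sum of the four per-pixel saliency values."""
    return sal[m][n] + sal[m][n + 1] + sal[m + 1][n] + sal[m + 1][n + 1]

def blockwise_saliency(sal):
    """Partition a per-pixel saliency map (even dimensions assumed)
    into non-overlapping 2x2 blocks and return the per-block values."""
    rows, cols = len(sal), len(sal[0])
    return [[block_saliency(sal, m, n)
             for n in range(0, cols, 2)]
            for m in range(0, rows, 2)]
```

The same summation pattern is reused for the gradient saliency in equation (8).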
In the step (2), the process of calculating the gradient saliency value of the pixel block is as follows:
1) First, color space normalization is applied to the inspection image to obtain a gray-scale map, and gamma correction by taking square roots is applied to images with uneven illumination. After normalization, the horizontal gradient value Ix and vertical gradient value Iy of each pixel i are calculated, and from Ix and Iy the gradient magnitude A(x, y) and gradient direction θ(x, y) of each pixel i are obtained. Then, according to the gradient direction of each pixel point and the values of its 4 neighboring pixel points, different weights are assigned by the distance between the pixel and the target point, linear interpolation is performed, and the gradient magnitudes are accumulated into a gradient histogram fgrad(i), giving the final gradient histogram. The gradient magnitude and gradient direction of each pixel i are calculated as follows:
Ix=G(x+1,y)-G(x-1,y) (3)
Iy=G(x,y+1)-G(x,y-1) (4)
A(x, y) = √(Ix² + Iy²)
θ(x, y) = arctan(Iy / Ix)
where A(x, y) is the gradient magnitude of pixel i, G(x, y) denotes the gray value of the pixel at spatial location (x, y), Ix is the gradient value in the horizontal direction, Iy is the gradient value in the vertical direction, and θ(x, y) ∈ [0, 360°) denotes the gradient direction.
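The central-difference gradient of equations (3) and (4), together with the standard magnitude and direction formulas, can be sketched as follows (the row-major indexing G[y][x] is an assumption of this sketch):

```python
import math

def gradient(G, x, y):
    """Central-difference gradient of a gray image G (rows of gray
    values, accessed as G[y][x]) at an interior pixel, per equations
    (3) and (4), with magnitude and direction in [0, 360) degrees."""
    Ix = G[y][x + 1] - G[y][x - 1]   # horizontal gradient, eq. (3)
    Iy = G[y + 1][x] - G[y - 1][x]   # vertical gradient, eq. (4)
    A = math.hypot(Ix, Iy)           # gradient magnitude A(x, y)
    theta = math.degrees(math.atan2(Iy, Ix)) % 360.0  # direction
    return A, theta
```

A purely vertical intensity step yields a 90° direction, a purely horizontal one yields 0°.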
2) Using the mean shift clustering algorithm, a circular sliding window of radius r with a randomly selected center point O is slid over the data, and the window containing the most pixel points is retained. Pixels are then clustered according to the sliding window each pixel point i lies in, so that pixels with similar gradient histograms fgrad(i) are grouped into one cluster. In total wg clusters are obtained, represented by a group of vectors; each cluster contains several pixels i (i = 1, 2, 3, …). The gradient saliency value of pixel i is then calculated as follows:
where salg(i) denotes the gradient saliency value of pixel point i; βj, the number of pixels in a cluster, serves as its weight; each cluster contributes its average gradient histogram value and its variance; and λg1, λg2 are weights set to 0.001 to eliminate the average error.
3) After the gradient saliency value salg(i) of each pixel point i has been obtained, the saliency values of the pixel points i(m,n), i(m,n+1), i(m+1,n) and i(m+1,n+1) are fused to calculate the gradient saliency value of each pixel block:
salg(θi) = salg(i(m,n)) + salg(i(m,n+1)) + salg(i(m+1,n)) + salg(i(m+1,n+1))    (8)
where salg(θi) denotes the gradient saliency value of the ith pixel block θi, and salg(i(m,n)), salg(i(m,n+1)), salg(i(m+1,n)), salg(i(m+1,n+1)) are the gradient saliency values of the pixels in row m column n, row m column n+1, row m+1 column n, and row m+1 column n+1 respectively.
In the step (3), the process of calculating the texture saliency value of the pixel block is as follows:
1) A gray matrix is obtained from the gray-scale map and its gray values are quantized into 8 levels. For each pixel block θj, the value at position (i, j) of the gray-level co-occurrence matrix equals the probability that the gray-level pair (i, j) appears in the gray matrix, yielding the 8 × 8 co-occurrence matrix GLCM.
2) From the 8 × 8 gray-level co-occurrence matrix GLCM of the pixel block, the contrast difference(θj) of pixel block θj is calculated as follows:
where difference(θj) is the contrast value of pixel block θj, P(i, j) is the value of the gray co-occurrence matrix at (i, j), indicating the number of pixel pairs whose gray combination is (i, j), and the gray matrix is an 8 × 8 matrix.
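The co-occurrence matrix and its contrast can be sketched as follows; the horizontal (1, 0) pixel-pair offset is an assumption (the patent does not fix it), and the standard GLCM contrast Σ (i − j)² P(i, j) is used:

```python
def glcm(levels_img, n_levels=8):
    """Gray-level co-occurrence matrix of an image already quantized to
    n_levels gray levels, counting horizontally adjacent pixel pairs
    (assumed (1, 0) offset), normalized to joint probabilities P(i, j)."""
    counts = [[0] * n_levels for _ in range(n_levels)]
    total = 0
    for row in levels_img:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(P):
    """Standard GLCM contrast: sum over (i, j) of (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * P[i][j]
               for i in range(len(P)) for j in range(len(P)))
```

Large off-diagonal mass in P (pairs with very different gray levels) drives the contrast up, which is what makes it usable as a texture cue.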
3) The weighted sum of each pixel block's contrast value with the contrast values of the other pixel blocks is taken as the block's texture saliency value. The texture saliency value salt(θi) of each pixel block is calculated as follows:
where salt(θi) is the texture saliency value of pixel block θi; difference(θj) is the contrast value of pixel block θj; di,j is a weight whose value decreases as the distance between the two pixel blocks grows; dist(i, j) denotes the distance between the two pixel blocks θi and θj; θix and θiy denote the abscissa and ordinate of the center of pixel block θi; and θjx and θjy denote the abscissa and ordinate of the center of pixel block θj.
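The distance-weighted fusion of the other blocks' contrast values can be sketched as follows; the concrete weight form d(i, j) = 1/(1 + dist(i, j)) is an assumption, as the text states only that the weight shrinks as the distance between the blocks grows:

```python
import math

def texture_saliency(contrasts, centers):
    """Texture saliency of each block as the distance-weighted sum of
    the other blocks' contrast values. Assumed weight:
    d(i, j) = 1 / (1 + dist(i, j)), a decreasing function of distance."""
    n = len(contrasts)
    sal = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if j == i:
                continue
            dist = math.hypot(centers[i][0] - centers[j][0],
                              centers[i][1] - centers[j][1])
            total += contrasts[j] / (1.0 + dist)
        sal.append(total)
    return sal
```

Blocks surrounded by high-contrast neighbors at short range receive the largest texture saliency.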
In the step (4), the step of realizing the foreground segmentation of the power transmission line inspection through weighting and fusing the multi-feature significance value is as follows:
1) A pixel block θj at the edge of the image has weak saliency features and little influence on the global saliency, whereas a pixel block θj at the center of the target has strong saliency features and a large influence on the global saliency. The distance between each pixel block θj and the image center is therefore calculated, and comprehensive spatial weight terms weightc(θi), weightg(θi) and weightt(θi) based on the color, gradient and texture features are defined; they are calculated as follows:
where weightc(θi), weightg(θi) and weightt(θi) denote weights inversely proportional to the distance of the pixel block from the image center; θix and θiy denote the abscissa and ordinate of the center of pixel block θi; Ox and Oy denote the abscissa and ordinate of the image center point; and the image size is p × q.
2) The method for calculating the multi-feature saliency value after image fusion comprises the following steps:
S(θi)=salc(θi)·weightc(θi)+salg(θi)·weightg(θi)+salt(θi)·weightt(θi) (14)
where S(θi) is the multi-feature saliency value of pixel block θi; salc(θi), salg(θi) and salt(θi) denote the color, gradient and texture saliency values of pixel block θi respectively; and weightc(θi), weightg(θi) and weightt(θi) denote the weights of pixel block θi based on the color, gradient and texture features respectively.
The multi-feature saliency value of the image is calculated by the above process and displayed according to its value, realizing the foreground segmentation of the power transmission line inspection image.
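Equation (14) can be sketched as follows; the concrete weight form 1/(1 + distance to the image center) and the use of a single shared spatial weight for all three features are assumptions, since the patent states only that the weights are inversely proportional to that distance:

```python
import math

def center_weight(cx, cy, p, q):
    """Spatial weight of a block centred at (cx, cy) in a p x q image.
    Assumed form: 1 / (1 + distance to the image centre (Ox, Oy))."""
    ox, oy = p / 2.0, q / 2.0
    return 1.0 / (1.0 + math.hypot(cx - ox, cy - oy))

def fused_saliency(sal_c, sal_g, sal_t, cx, cy, p, q):
    """Multi-feature saliency of a block per equation (14):
    S = sal_c * w_c + sal_g * w_g + sal_t * w_t,
    with one shared center-distance weight assumed for all three terms."""
    w = center_weight(cx, cy, p, q)
    return sal_c * w + sal_g * w + sal_t * w
```

A block exactly at the image center keeps the full sum of its three saliency values, while blocks near the border are attenuated.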
In the step (5), the segmented inspection image is morphologically processed, and the steps of segmenting the foreground and the background of the inspection image of the power transmission line are as follows:
(1) Erosion or dilation is selected according to the segmentation result to eliminate obvious noise interference in the image, smooth the segmentation result, and keep the most complete edge of the foreground image, making the subsequent foreground and background segmentation more accurate.
(2) The foreground is subtracted from the original image to obtain the background image of the inspection image, realizing the foreground and background segmentation of the power transmission line inspection image.
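The morphological clean-up and foreground subtraction of step (5) can be sketched on a binary foreground mask as follows (a 3 × 3 structuring element is an assumption of this sketch):

```python
def erode(mask):
    """Binary erosion with a 3x3 structuring element: an interior pixel
    survives only if its whole 3x3 neighbourhood is foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element: a pixel becomes
    foreground if any neighbour in its 3x3 neighbourhood is foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def background(image, fg_mask):
    """Subtract the foreground from the original image: pixels inside
    the foreground mask are zeroed, leaving only the background."""
    return [[0 if fg_mask[y][x] else image[y][x]
             for x in range(len(image[0]))] for y in range(len(image))]
```

Erosion followed by dilation (an opening) removes isolated noise pixels while approximately preserving the foreground edge.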
Claims (7)
1. A power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion, characterized in that the system comprises: an unmanned aerial vehicle image acquisition module, an onboard image transmission module and an information processing center module; the unmanned aerial vehicle, carrying a visible light camera, shoots images in real time, and the collected image information is transmitted through the onboard image transmission module to the information processing center module for analysis and processing.
2. The power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion adopted by the information processing center module of claim 1, characterized by comprising the following steps:
(1) firstly, dividing different color intervals according to the color features of the image, then clustering pixels with similar color histograms by using the mean shift clustering algorithm, and then calculating the color saliency value of each pixel block;
(2) converting the image into a gray space, calculating gradient features, clustering pixels with similar gradient histograms by using the mean shift clustering algorithm, and calculating the gradient saliency value of each pixel block;
(3) calculating the contrast difference value between each pixel block and every other pixel block according to the texture features of the image, weighting and fusing the contrast difference values according to the distances between the pixel blocks, and calculating the texture saliency value of each pixel block;
(4) calculating the spatial weight term of each pixel block, and weighting and fusing the color, gradient and texture saliency values of the pixel block in turn according to the center distance method, realizing the multi-feature-saliency-fused foreground segmentation of the power transmission line inspection image;
(5) applying dilation and erosion to the segmented image, keeping the most complete foreground edge, and subtracting the foreground from the original image to obtain the background image, realizing the foreground and background segmentation of the power transmission line inspection image.
3. The power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion according to claim 2, characterized in that in step (1) the image color saliency value salc(θj) is calculated by a weighted fusion algorithm as follows:
(1) first, the collected image is converted into the HSV (hue, saturation, value) color space and the hue component H is divided into 14 different color intervals, each representing a color grade k (k = 1, 2, …, 14); the image is traversed pixel by pixel starting from pixel i, the H component value of each pixel i is calculated and assigned to the corresponding color grade k, giving the color histogram fcol(i) of each pixel i;
(2) using the mean shift clustering algorithm, a circular sliding window of radius r with a randomly selected center point O is slid over the data and the window containing the most pixel points is retained; pixels are then clustered according to the sliding window each pixel point i lies in, so that pixels with similar color histograms fcol(i) are grouped into one cluster; in total wc clusters are obtained, represented by a group of vectors, each cluster containing several pixels i (i = 1, 2, 3, …); the color saliency of pixel i is calculated as follows:
where salc(i) denotes the color saliency value of pixel point i; the number of pixels in a cluster serves as its weight; each cluster contributes its average color histogram value and its variance; and λc1, λc2 are weights set to 0.001 to eliminate the average error;
(3) after the color saliency value salc(i) of each pixel point i has been obtained, the image is divided into n 2 × 2 pixel blocks θj (j = 1, 2, 3, …, n), each pixel block consisting of the pixel points i(m,n), i(m,n+1), i(m+1,n) and i(m+1,n+1), where i(m,n) denotes the pixel point in row m and column n, i(m,n+1) the pixel in row m, column n+1, i(m+1,n) the pixel in row m+1, column n, and i(m+1,n+1) the pixel in row m+1, column n+1; finally the color saliency value of each pixel block is calculated:
salc(θi) = salc(i(m,n)) + salc(i(m,n+1)) + salc(i(m+1,n)) + salc(i(m+1,n+1))
where salc(θi) denotes the color saliency value of the ith pixel block θi, and salc(i(m,n)), salc(i(m,n+1)), salc(i(m+1,n)), salc(i(m+1,n+1)) are the color saliency values of the pixels in row m column n, row m column n+1, row m+1 column n, and row m+1 column n+1 respectively.
4. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that: in the step (2), the gradient significance value sal is calculatedg(i) The steps are as follows:
(1) First, color space normalization is applied to the inspection image to obtain a gray-scale image, and gamma correction by taking the square root is applied to images with uneven illumination. After normalization, the horizontal gradient value I_x and the vertical gradient value I_y of each pixel i are computed, and from I_x and I_y the gradient magnitude A(x, y) and gradient direction θ(x, y) of each pixel i are obtained. Then, according to the gradient direction of each pixel, the values of its 4 neighbouring pixels are combined: different weights are assigned according to the distance between each pixel and the target point, linear interpolation is performed, and the gradient magnitudes are accumulated into the gradient histogram f_grad(i) to obtain the final gradient histogram. The gradient magnitude and gradient direction of each pixel i are calculated as follows:
I_x = G(x+1, y) − G(x−1, y)

I_y = G(x, y+1) − G(x, y−1)

A(x, y) = sqrt(I_x^2 + I_y^2)

θ(x, y) = arctan(I_y / I_x)
where A(x, y) is the gradient magnitude of pixel i, G(x, y) denotes the gray value of the pixel at spatial location (x, y), I_x is the gradient value in the horizontal direction, I_y is the gradient value in the vertical direction, and θ(x, y) ∈ [0, 360°) denotes the gradient direction;
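The central-difference gradient of the equations above can be sketched directly; the helper name `gradient_at` and the `G[x][y]` indexing convention are illustrative assumptions:

```python
import math

def gradient_at(G, x, y):
    """Return (A, theta_degrees) for an interior pixel (x, y) of the gray
    image G: central differences give Ix and Iy, then magnitude
    A = sqrt(Ix^2 + Iy^2) and direction theta folded into [0, 360)."""
    Ix = G[x + 1][y] - G[x - 1][y]
    Iy = G[x][y + 1] - G[x][y - 1]
    A = math.hypot(Ix, Iy)
    theta = math.degrees(math.atan2(Iy, Ix)) % 360.0
    return A, theta

G = [[0, 0, 0],
     [0, 5, 0],
     [10, 10, 10]]
A, theta = gradient_at(G, 1, 1)   # Ix = 10 - 0 = 10, Iy = 0 - 0 = 0
```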
(2) Using a mean-shift clustering algorithm, set a circular sliding window of radius r with a randomly selected centre point O, slide the window, and retain the window containing the most pixel points; then cluster the pixels according to the sliding window in which each pixel i lies, so that pixels with similar gradient histograms f_grad(i) fall into the same cluster. The pixels are divided into w_g clusters, represented by a set of cluster vectors; each cluster contains several pixels i (i = 1, 2, 3, ...). The gradient saliency value of pixel i is computed as follows:
wherein sal_g(i) denotes the gradient saliency value of pixel i; β_j, the number of pixels in cluster j, serves as that cluster's weight; the average gradient histogram value and the variance of the pixel's own cluster and of each other cluster enter the contrast term; λ_g1 and λ_g2 are weights set to 0.001 to eliminate averaging error;
(3) After the gradient saliency value sal_g(i) of every pixel i is obtained, the saliency values of the pixels i_{m,n}, i_{m,n+1}, i_{m+1,n} and i_{m+1,n+1} of each 2×2 block are fused, and the gradient saliency value of each pixel block is calculated:
sal_g(θ_j) = sal_g(i_{m,n}) + sal_g(i_{m,n+1}) + sal_g(i_{m+1,n}) + sal_g(i_{m+1,n+1})

wherein sal_g(θ_j) denotes the gradient saliency value of the j-th pixel block θ_j, sal_g(i_{m,n}) is the gradient saliency value of the pixel in row m, column n, sal_g(i_{m,n+1}) that of the pixel in row m, column n+1, sal_g(i_{m+1,n}) that of the pixel in row m+1, column n, and sal_g(i_{m+1,n+1}) that of the pixel in row m+1, column n+1.
5. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that in step (3) the texture saliency value sal_t(θ_i) is calculated as follows:
(1) A gray matrix is obtained from the gray-scale image and the gray values in it are quantized into 8 levels. For each pixel block θ_j, the value at position (i, j) of its gray-level co-occurrence matrix equals the probability that the gray-level pair (i, j) occurs simultaneously in the gray matrix, yielding an 8 × 8 co-occurrence matrix GLCM;
(2) Then, based on the gray-level co-occurrence matrix GLCM (8 × 8) of each pixel block, the contrast difference(θ_j) of the pixel block θ_j is calculated as:

difference(θ_j) = Σ_i Σ_j (i − j)^2 · P(i, j)

wherein difference(θ_j) is the contrast value of pixel block θ_j, and P(i, j) is the value at position (i, j) of the gray-level co-occurrence matrix, representing the number of pixel pairs with the gray-level combination (i, j); the gray-level matrix is an 8 × 8 matrix;
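The GLCM contrast of step (2) can be sketched in a few lines. This assumes horizontally adjacent pixel pairs as the co-occurrence offset (one common choice; the claim does not fix the offset) and normalises the matrix to probabilities P(i, j) before summing (i − j)^2 · P(i, j):

```python
def glcm_contrast(block, levels=8, max_gray=255):
    """Quantise a gray block to `levels` levels, build the co-occurrence
    matrix of horizontal neighbour pairs, and return the contrast value."""
    q = [[min(v * levels // (max_gray + 1), levels - 1) for v in row]
         for row in block]
    glcm = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in q:
        for a, b in zip(row, row[1:]):     # horizontal neighbour pairs
            glcm[a][b] += 1
            pairs += 1
    contrast = 0.0
    for i in range(levels):
        for j in range(levels):
            p = glcm[i][j] / pairs         # P(i, j) as a probability
            contrast += (i - j) ** 2 * p
    return contrast

flat = glcm_contrast([[10, 10], [10, 10]])   # uniform block: no transitions
edgy = glcm_contrast([[0, 255], [0, 255]])   # strong level-0 to level-7 jumps
```

A uniform block scores zero while a block of extreme transitions scores (0 − 7)^2 = 49, matching the intuition that contrast measures local gray-level variation.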
(3) The weighted sum of each pixel block's contrast value with the contrast values of all other pixel blocks is taken as that block's saliency value; the texture saliency value sal_t(θ_i) of each pixel block is calculated as follows:
wherein sal_t(θ_i) is the texture saliency value of pixel block θ_i, difference(θ_j) is the contrast value of pixel block θ_j, d_{i,j} is a weight that becomes smaller as the distance between the two blocks grows, and dist(i, j) = sqrt((θ_ix − θ_jx)^2 + (θ_iy − θ_jy)^2) is the distance between the two pixel blocks θ_i and θ_j, where θ_ix and θ_iy are the abscissa and ordinate of the centre of θ_i, and θ_jx and θ_jy the abscissa and ordinate of the centre of θ_j.
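The exact weighted-sum formula is not reproduced in the text above, so the following is a hedged sketch under one common reading: each block's texture saliency is the sum, over all other blocks, of the absolute contrast difference weighted by d_ij = 1/(1 + dist(i, j)), which shrinks with distance as the text requires. The data layout (`contrasts` maps each block to its contrast value and centre coordinates) is an illustrative assumption:

```python
import math

def texture_saliency(contrasts):
    """contrasts: list of (contrast_value, (cx, cy)) per block.
    Returns one saliency value per block."""
    sal = []
    for i, (ci, (xi, yi)) in enumerate(contrasts):
        s = 0.0
        for j, (cj, (xj, yj)) in enumerate(contrasts):
            if i == j:
                continue
            dist = math.hypot(xi - xj, yi - yj)   # Euclidean centre distance
            s += abs(ci - cj) / (1.0 + dist)      # inverse-distance weight
        sal.append(s)
    return sal

# a lone high-contrast block among flat neighbours scores highest
sal = texture_saliency([(9.0, (0, 0)), (1.0, (2, 0)), (1.0, (4, 0))])
```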
6. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that in step (4) foreground segmentation of the inspection image is achieved by weighted fusion of the multi-feature saliency values as follows:
(1) A pixel block θ_j at the edge of the image has weak saliency features and little influence on the global saliency, while a pixel block θ_j at the centre of the target has strong saliency features and a large influence on the global saliency. The distance between each pixel block θ_j and the image centre is therefore computed, and comprehensive spatial weight terms weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i), based on the color, gradient and texture features respectively, are defined as follows:
wherein weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i) denote weights inversely proportional to the distance of the pixel block from the image centre; θ_ix and θ_iy are the abscissa and ordinate of the centre of pixel block θ_i, O_x and O_y are the abscissa and ordinate of the image centre point, and the image size is p × q;
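The weight formula itself is not reproduced above, so this is an assumed sketch that merely satisfies the stated property (largest at the image centre, smallest at the edges): weight = 1 − dist(block, centre) / max_dist for a p × q image with centre O = (p/2, q/2); the linear falloff is an illustrative choice.

```python
import math

def center_weight(bx, by, p, q):
    """Assumed centre-bias weight in [0, 1] for a block centred at (bx, by)
    in a p x q image: 1 at the centre, 0 at the farthest corner."""
    ox, oy = p / 2.0, q / 2.0
    max_dist = math.hypot(ox, oy)   # distance from centre to a corner
    return 1.0 - math.hypot(bx - ox, by - oy) / max_dist

w_center = center_weight(50, 50, 100, 100)   # block at the exact centre
w_corner = center_weight(0, 0, 100, 100)     # block at a corner
```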
(2) The fused multi-feature saliency value is calculated as follows:
S(θ_i) = sal_c(θ_i) · weight_c(θ_i) + sal_g(θ_i) · weight_g(θ_i) + sal_t(θ_i) · weight_t(θ_i)
wherein S(θ_i) is the multi-feature saliency value of pixel block θ_i; sal_c(θ_i), sal_g(θ_i) and sal_t(θ_i) denote the color, gradient and texture saliency values of θ_i respectively; and weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i) denote the corresponding color-, gradient- and texture-based weights;
The multi-feature saliency value of the whole image is calculated by the above procedure and displayed according to these values, achieving foreground segmentation of the power transmission line inspection image.
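The fusion equation above is a direct weighted sum and can be transcribed as:

```python
def fused_saliency(sal_c, sal_g, sal_t, w_c, w_g, w_t):
    """S(theta) = sal_c*w_c + sal_g*w_g + sal_t*w_t, per pixel block."""
    return sal_c * w_c + sal_g * w_g + sal_t * w_t

# example block values; the numbers are illustrative only
S = fused_saliency(2.0, 3.0, 4.0, 0.5, 0.25, 0.25)
```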
7. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that in step (5) morphological dilation and erosion are applied to the segmented image, the foreground edge is preserved as completely as possible, and the foreground is subtracted from the original image to obtain the background image, thereby segmenting the foreground and background of the power transmission line inspection image:
(1) Erosion or dilation is selected according to the segmentation result to eliminate obvious noise interference in the image and smooth the segmentation result, while preserving the foreground edge as completely as possible, so that the subsequent foreground and background segmentation is more accurate;
(2) The foreground is subtracted from the original image to obtain the background image of the inspection image, completing the foreground and background segmentation of the power transmission line inspection image.
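The post-processing of claim 7 can be sketched on a binary foreground mask: erosion removes isolated noise pixels, dilation restores the foreground edge, and the background keeps only the pixels outside the mask. The 3×3 square structuring element and border handling (border treated as background) are illustrative choices:

```python
def erode(mask):
    """3x3 erosion; borders are left as background."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation with bounds checking at the borders."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(0 <= y + dy < h and 0 <= x + dx < w
                                and mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def background(image, fg_mask):
    """Subtract the foreground: keep pixels where the mask is 0."""
    return [[0 if m else v for v, m in zip(irow, mrow)]
            for irow, mrow in zip(image, fg_mask)]

noisy = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 1]]   # lone corner pixel is noise
opened = dilate(erode(noisy))  # erosion-then-dilation removes the noise pixel
```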
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911210391.9A CN112884795A (en) | 2019-11-29 | 2019-11-29 | Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112884795A true CN112884795A (en) | 2021-06-01 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113450372A (en) * | 2021-08-27 | 2021-09-28 | 海门裕隆光电科技有限公司 | Power transmission line image intelligent enhancement method and system based on artificial intelligence |
CN116563279A (en) * | 2023-07-07 | 2023-08-08 | 山东德源电力科技股份有限公司 | Measuring switch detection method based on computer vision |
CN117350926A (en) * | 2023-12-04 | 2024-01-05 | 北京航空航天大学合肥创新研究院 | Multi-mode data enhancement method based on target weight |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | Application publication date: 20210601
| WD01 | Invention patent application deemed withdrawn after publication |