CN112884795A - Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion - Google Patents


Info

Publication number
CN112884795A
CN112884795A (application CN201911210391.9A)
Authority
CN
China
Prior art keywords
value
image
pixel
sal
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911210391.9A
Other languages
Chinese (zh)
Inventor
王胜
匡小兵
成云朋
陈文�
李庆武
张杉
常心悦
周亚琴
马云鹏
刘艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Hohai University HHU
Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU, Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd filed Critical Hohai University HHU
Priority to CN201911210391.9A priority Critical patent/CN112884795A/en
Publication of CN112884795A publication Critical patent/CN112884795A/en
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion. The processing steps of the information processing center are as follows: first, the image is divided into different color intervals, similar pixels are clustered with a mean-shift algorithm, and a color saliency value is calculated; the image is converted into gray space, similar pixels are clustered with the mean-shift algorithm, and a gradient saliency value is calculated; texture contrast differences between pixel blocks are then calculated to obtain texture saliency values; the multi-feature saliency values of each pixel block are weighted and fused according to the center-distance method to obtain a foreground segmentation result map; finally, the foreground segmentation result is subtracted from the original image to obtain the image background, realizing the segmentation of foreground and background. With this method, foreground and background can be segmented in power transmission line inspection, helping maintenance personnel quickly identify the various power fittings.

Description

Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
Technical Field
The invention relates to a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion, and belongs to the field of computer vision and power transmission line inspection.
Background Art
With the rapid development of the national economy, the national power system has grown quickly, and power transmission systems are distributed all over the country. However, the various fittings of the power system, affected by lightning strikes, ice coating, external damage and the like, can paralyze the entire transmission system, disrupting people's normal life and causing huge economic losses to society. It is therefore important to discover faults in the power system in a timely manner.
As is well known, the human visual attention mechanism means that, when facing a scene, humans automatically select regions of interest and selectively ignore regions of no interest; these regions of interest are called salient regions. Using the saliency features of an image, the target area can be located quickly.
In recent years, unmanned aerial vehicle technology has entered a stage of rapid development and is widely applied in many fields, particularly in power systems, where it not only replaces manual inspection but also reduces the risks of the inspection process and improves inspection efficiency. By combining aerial inspection images with saliency features and applying digital image processing methods, the aerial images can be further processed to realize foreground and background segmentation for power transmission line inspection, analyze the state of the power fittings, and guarantee the normal operation of the whole transmission system.
Disclosure of Invention
The technical problem solved by the invention is as follows: a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion is provided, which realizes fast and accurate segmentation of the foreground and background of an inspection image through weighted fusion of its multi-feature saliency values. The method is efficient and achieves a good segmentation effect on inspection images with different backgrounds.
A power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion comprises the following steps:
(1) first dividing different color intervals according to the color features of the image, then clustering pixels with similar color histograms using a mean-shift clustering algorithm, and calculating the color saliency value of each pixel block;
(2) converting the image into gray space, calculating gradient features, clustering pixels with similar gradient histograms using the mean-shift clustering algorithm, and calculating the gradient saliency value of each pixel block;
(3) calculating the contrast difference between each pixel block and the other pixel blocks according to the texture features of the image, fusing the contrast differences with weights determined by the distances between the pixel blocks, and calculating the texture saliency value of each pixel block;
(4) calculating the spatial weight term of each pixel block, and fusing the color, gradient and texture saliency values of the pixel block with weights given by the center-distance method, realizing multi-feature saliency fused foreground segmentation for power transmission line inspection;
(5) performing dilation and erosion on the segmented image, retaining the most complete foreground edge, and subtracting the foreground from the original image to obtain the background image, realizing foreground and background segmentation of the power transmission line inspection image.
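The five steps above can be sketched as a pipeline. The following is a minimal illustration, not the patent's actual implementation: the helper names are hypothetical, and the color/gradient saliency stubs are deliberately simplified placeholders (global-mean contrast and a horizontal gradient) standing in for the detailed procedures given later in the description.

```python
# Hypothetical skeleton of the five-step segmentation pipeline.
# Images are plain lists of rows of gray values; all helper names are
# illustrative placeholders, not the patent's real procedures.

def color_saliency(img):
    # step (1) placeholder: per-pixel contrast against the global mean
    mean = sum(sum(row) for row in img) / (len(img) * len(img[0]))
    return [[abs(p - mean) for p in row] for row in img]

def gradient_saliency(img):
    # step (2) placeholder: horizontal gradient magnitude
    return [[abs(row[min(x + 1, len(row) - 1)] - row[max(x - 1, 0)])
             for x in range(len(row))] for row in img]

def fuse(maps, weights):
    # step (4): weighted fusion of per-pixel saliency maps
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * m[y][x] for wt, m in zip(weights, maps))
             for x in range(w)] for y in range(h)]

def segment(img, thresh):
    # steps (4)-(5): threshold the fused saliency into a foreground mask,
    # then obtain the background by removing foreground pixels
    s = fuse([color_saliency(img), gradient_saliency(img)], [0.5, 0.5])
    fg = [[1 if v > thresh else 0 for v in row] for row in s]
    bg = [[p * (1 - f) for p, f in zip(prow, frow)]
          for prow, frow in zip(img, fg)]
    return fg, bg

img = [[10, 10, 200, 10],
       [10, 10, 200, 10],
       [10, 10, 200, 10]]
fg, bg = segment(img, 40)  # the bright column ends up in the foreground
```

The texture term and the morphological cleanup of step (5) are omitted here; they slot into `fuse` and after `fg` in the same way.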
Compared with the prior art, the invention has the beneficial effects that:
(1) The method combines multiple salient features of the image, avoiding the detection errors caused by any single feature. It obtains a deep saliency segmentation result, reduces detection errors caused by dim illumination or unclear shooting angles, improves the foreground-background segmentation accuracy of inspection images, and has good stability.
(2) The method is practical and efficient, achieves good foreground and background segmentation on inspection images with different backgrounds, has high application value, and can greatly improve workers' inspection efficiency.
Drawings
FIG. 1 is a system diagram of a multi-feature significance fused power transmission line inspection foreground and background segmentation method;
FIG. 2 is a flow chart of a multi-feature significance fused power transmission line inspection foreground and background segmentation method;
FIG. 3 is a schematic diagram of a process of clustering similar color histograms by a mean shift clustering algorithm;
fig. 4 is a schematic spatial diagram of a position of a pixel block from a center point.
Detailed Description
In order to describe the invention more specifically, the following detailed description of the invention is provided with reference to the accompanying drawings and the detailed description.
As shown in fig. 1, the invention provides a power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion. The system comprises: an unmanned aerial vehicle image acquisition module, an onboard image transmission module, and an information processing center module. The unmanned aerial vehicle carrying a visible-light camera captures images in real time, and the collected image information is transmitted through the onboard image transmission module to the information processing center module for analysis and processing.
As shown in fig. 2, 3 and 4, the invention provides a power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion, which comprises the following steps:
1) First, according to the color features of the image, the acquired inspection image is converted into the HSV color space and the hue component H is divided into 14 different color intervals. All pixels i (i = 1, 2, 3, ...) are traversed one by one; the hue component value of each pixel i is calculated and assigned to the corresponding color interval. Pixels i with similar color histograms are grouped into clusters with a mean-shift clustering algorithm, giving w_c clusters, and the color saliency value of each pixel i is calculated. The image is then subdivided into n 2×2 pixel blocks θ_j (j = 1, 2, 3, ..., n), and the color saliency value of each pixel block θ_j is calculated.
2) According to the gradient features of the image, the inspection image is color-space-normalized and converted into a gray-scale image. Each pixel i (i = 1, 2, 3, ...) is traversed in turn and its gradient is computed to obtain a gradient histogram. Pixels with similar gradient histograms are grouped with the mean-shift clustering algorithm, giving w_g clusters, and the gradient saliency value of each pixel i is calculated. Finally, the saliency values of the pixels i_(m,n), i_(m,n+1), i_(m+1,n), i_(m+1,n+1) are fused to calculate the gradient saliency value of each pixel block θ_j.
3) According to the texture features of the image, the contrast of each pixel block θ_j is extracted with a gray-level co-occurrence matrix; the contrast differences between θ_j and the remaining n−1 pixel blocks are calculated and fused with weights determined by the distances between the pixel blocks, giving the texture saliency value of each pixel block.
4) The distance between each pixel block θ_i and the image center point is calculated to obtain the spatial weight term of θ_i, and the color saliency value sal_c(θ_i), gradient saliency value sal_g(θ_i) and texture saliency value sal_t(θ_i) of the pixel block are weighted and fused, realizing multi-feature saliency fused foreground segmentation of the inspection image.
5) Dilation and erosion are performed on the segmented image, the most complete foreground edge is retained, and the foreground is subtracted from the original image to obtain the background image, realizing foreground and background segmentation of the power transmission line inspection image.
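The hue quantization of step 1) can be illustrated with a short sketch. Assumptions: the 14 color intervals are taken as equal-width partitions of the hue circle (the patent only states that 14 intervals are used, not their boundaries), and the standard-library `colorsys` conversion stands in for whatever HSV conversion the implementation uses.

```python
# Sketch of step 1): map each RGB pixel's hue component H into one of
# 14 color intervals and build a per-region color histogram.
# Equal-width intervals are an assumption, not stated in the patent.
import colorsys

def hue_interval(r, g, b, n_intervals=14):
    """Return the color-interval index k in 1..n_intervals for an RGB pixel."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)  # h in [0, 1)
    return min(int(h * n_intervals), n_intervals - 1) + 1

def color_histogram(pixels, n_intervals=14):
    """Histogram of color-interval counts over an iterable of RGB pixels."""
    hist = [0] * n_intervals
    for r, g, b in pixels:
        hist[hue_interval(r, g, b, n_intervals) - 1] += 1
    return hist
```

For example, pure red (hue 0°) falls in interval 1 and pure green (hue 120°) in interval 5; the resulting histograms are what the mean-shift step clusters.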
In the step (1), the method for calculating the color saliency value of the pixel block is as follows:
1) First, the acquired image is converted into the HSV (hue, saturation, value) color space, and the hue component H is divided into 14 different color intervals, each interval representing a color level k (k = 1, 2, ..., 14). The image is traversed pixel by pixel with i as the current pixel; the H component value of each pixel i is calculated and assigned to the corresponding color level k, yielding the color histogram f_col(i) of each pixel i.
2) Using the mean-shift clustering algorithm, a circular sliding window of radius r is set, centered at a randomly selected point O. The window is slid, the window containing the most pixels is retained, and each pixel i is then clustered according to the sliding window it falls in, so that pixels with similar color histograms f_col(i) are grouped into the same cluster. In total w_c clusters are obtained, represented by the set of vectors {w_c^1, w_c^2, ..., w_c^(w_c)}; each cluster w_c^j contains several pixels i (i = 1, 2, 3, ...). The color saliency of pixel i is calculated as:
sal_c(i) = Σ_(j=1)^(w_c) β_j · (f̄_col(w_c^i) − f̄_col(w_c^j))² / (σ²(w_c^i) + λ_c1 + σ²(w_c^j) + λ_c2)   (1)
where sal_c(i) denotes the color saliency value of pixel i; β_j is the number of pixels of cluster w_c^j, used as a weight; f̄_col(w_c^i) is the average color histogram value of the cluster w_c^i containing pixel i, and f̄_col(w_c^j) is the average color histogram value of cluster w_c^j; λ_c1 and λ_c2 are weights set to 0.001 to eliminate the averaging error; σ²(w_c^i) is the variance of w_c^i, and σ²(w_c^j) is the variance of w_c^j.
3) After the color saliency value sal_c(i) of every pixel i has been obtained, the image is divided into n 2×2 pixel blocks θ_j (j = 1, 2, 3, ..., n), each pixel block represented as θ_j = {i_(m,n), i_(m,n+1), i_(m+1,n), i_(m+1,n+1)}, where i_(m,n) denotes the pixel in row m, column n; i_(m,n+1) the pixel in row m, column n+1; i_(m+1,n) the pixel in row m+1, column n; and i_(m+1,n+1) the pixel in row m+1, column n+1. Finally the color saliency value of each pixel block is calculated:
sal_c(θ_i) = sal_c(i_(m,n)) + sal_c(i_(m,n+1)) + sal_c(i_(m+1,n)) + sal_c(i_(m+1,n+1))   (2)
where sal_c(θ_i) denotes the color saliency value of the i-th pixel block θ_i, and sal_c(i_(m,n)), sal_c(i_(m,n+1)), sal_c(i_(m+1,n)), sal_c(i_(m+1,n+1)) are the color saliency values of the four member pixels.
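Equation (2) is a simple sum over the four member pixels of each 2×2 block. A minimal sketch, assuming a precomputed per-pixel saliency map stored as a list of rows:

```python
# Sketch of equation (2): the color saliency of a 2x2 pixel block is the
# sum of the color saliency values of its four member pixels.

def block_color_saliency(sal_c, m, n):
    """sal_c(theta) for the 2x2 block whose top-left pixel is (m, n)."""
    return (sal_c[m][n] + sal_c[m][n + 1]
            + sal_c[m + 1][n] + sal_c[m + 1][n + 1])

def tile_saliency(sal_c):
    """Apply equation (2) over all non-overlapping 2x2 blocks of the map."""
    rows, cols = len(sal_c), len(sal_c[0])
    return [[block_color_saliency(sal_c, m, n)
             for n in range(0, cols - 1, 2)]
            for m in range(0, rows - 1, 2)]

sal_c = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
blocks = tile_saliency(sal_c)  # [[1+2+5+6, 3+4+7+8]] == [[14, 22]]
```

The same aggregation serves for the gradient blocks of equation (8).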
In the step (2), the process of calculating the gradient saliency value of the pixel block is as follows:
1) First, color-space normalization is performed on the inspection image to obtain a gray-scale image, and a square-root gamma correction is applied to images with uneven illumination. After the normalization, the horizontal gradient value I_x and vertical gradient value I_y of each pixel i are calculated, and from I_x and I_y the gradient magnitude A(x, y) and gradient direction θ(x, y) of each pixel i are obtained. Then, according to the gradient direction of each pixel, the values of the 4 neighboring pixels are combined, different weights are assigned according to the distance between each pixel and the target point, linear interpolation is performed, and the gradient magnitudes are accumulated into the gradient histogram f_grad(i) to obtain the final gradient histogram. The gradient magnitude and direction of each pixel i are calculated as:
I_x = G(x+1, y) − G(x−1, y)   (3)
I_y = G(x, y+1) − G(x, y−1)   (4)
A(x, y) = sqrt(I_x² + I_y²)   (5)
θ(x, y) = arctan(I_y / I_x)   (6)
where A(x, y) is the gradient magnitude of pixel i, G(x, y) is the gray value of the pixel at spatial location (x, y), I_x is the gradient value in the horizontal direction, I_y is the gradient value in the vertical direction, and θ(x, y) ∈ [0°, 360°) is the gradient direction.
2) Using the mean-shift clustering algorithm, a circular sliding window of radius r is set, centered at a randomly selected point O. The window is slid, the window containing the most pixels is retained, and each pixel i is then clustered according to the sliding window it falls in, so that pixels with similar gradient histograms f_grad(i) are grouped into the same cluster. In total w_g clusters are obtained, represented by the set of vectors {w_g^1, w_g^2, ..., w_g^(w_g)}; each cluster w_g^j contains several pixels i (i = 1, 2, 3, ...). The gradient saliency of pixel i is calculated as:
sal_g(i) = Σ_(j=1)^(w_g) β_j · (f̄_grad(w_g^i) − f̄_grad(w_g^j))² / (σ²(w_g^i) + λ_g1 + σ²(w_g^j) + λ_g2)   (7)
where sal_g(i) denotes the gradient saliency value of pixel i; β_j is the number of pixels of cluster w_g^j, used as a weight; f̄_grad(w_g^i) is the average gradient histogram value of the cluster w_g^i containing pixel i, and f̄_grad(w_g^j) is the average gradient histogram value of cluster w_g^j; λ_g1 and λ_g2 are weights set to 0.001 to eliminate the averaging error; σ²(w_g^i) is the variance of w_g^i, and σ²(w_g^j) is the variance of w_g^j.
3) After the gradient saliency value sal_g(i) of every pixel i has been obtained, the saliency values of the member pixels i_(m,n), i_(m,n+1), i_(m+1,n), i_(m+1,n+1) are fused to calculate the gradient saliency value of each pixel block:
sal_g(θ_i) = sal_g(i_(m,n)) + sal_g(i_(m,n+1)) + sal_g(i_(m+1,n)) + sal_g(i_(m+1,n+1))   (8)
where sal_g(θ_i) denotes the gradient saliency value of the i-th pixel block θ_i, and sal_g(i_(m,n)), sal_g(i_(m,n+1)), sal_g(i_(m+1,n)), sal_g(i_(m+1,n+1)) are the gradient saliency values of the four member pixels.
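Equations (3)-(6) can be sketched directly. Two assumptions: `atan2` plus a modulo is used to place the direction in [0°, 360°), and the histogram uses a fixed bin count with nearest-bin voting (the patent specifies linear interpolation between neighboring bins and does not fix the bin count).

```python
# Sketch of equations (3)-(6): central-difference gradients, magnitude and
# direction in [0, 360), accumulated into a gradient-direction histogram.
# Bin count and nearest-bin voting are assumptions.
import math

def gradient(gray, x, y):
    """Return (A, theta): gradient magnitude and direction at pixel (x, y)."""
    ix = gray[y][x + 1] - gray[y][x - 1]               # eq. (3)
    iy = gray[y + 1][x] - gray[y - 1][x]               # eq. (4)
    a = math.sqrt(ix * ix + iy * iy)                   # eq. (5)
    theta = math.degrees(math.atan2(iy, ix)) % 360.0   # eq. (6), in [0, 360)
    return a, theta

def gradient_histogram(gray, bins=9):
    """Accumulate gradient magnitudes into direction bins (interior pixels)."""
    hist = [0.0] * bins
    for y in range(1, len(gray) - 1):
        for x in range(1, len(gray[0]) - 1):
            a, theta = gradient(gray, x, y)
            hist[int(theta / 360.0 * bins) % bins] += a
    return hist

gray = [[0, 0, 0],
        [0, 5, 10],
        [0, 10, 20]]
a, theta = gradient(gray, 1, 1)  # ix = 10, iy = 10 -> theta = 45 degrees
```

The resulting histograms f_grad(i) are what the mean-shift clustering of step 2) groups.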
In the step (3), the process of calculating the texture saliency value of the pixel block is as follows:
1) The gray-level matrix is obtained from the gray-scale image, and the gray values in it are quantized into 8 levels. For each pixel block θ_j, the entry at (i, j) of its gray-level co-occurrence matrix equals the probability that the gray-level pair (i, j) appears in the block, yielding an 8×8 co-occurrence matrix GLCM.
2) From the 8×8 gray-level co-occurrence matrix GLCM of each pixel block θ_j, the contrast difference(θ_j) of the block is calculated as:
difference(θ_j) = Σ_i Σ_j (i − j)² · P(i, j)   (9)
where difference(θ_j) is the contrast value of pixel block θ_j, and P(i, j) is the entry at (i, j) of the 8×8 gray-level co-occurrence matrix, indicating the number of pixel pairs whose gray-level combination is (i, j).
3) The weighted sum of the contrast value of each pixel block with the contrast values of the remaining pixel blocks is taken as the texture saliency value of that block:
sal_t(θ_i) = Σ_(j=1, j≠i)^n d_(i,j) · difference(θ_j)   (10)
d_(i,j) = 1 / dist(i, j)   (11)
dist(i, j) = sqrt((θ_ix − θ_jx)² + (θ_iy − θ_jy)²)   (12)
where sal_t(θ_i) is the texture saliency value of each pixel block; difference(θ_j) is the contrast value of pixel block θ_j; d_(i,j) is a weight that becomes smaller as the distance between the two pixel blocks grows; dist(i, j) denotes the distance between the two pixel blocks θ_i and θ_j; θ_ix and θ_iy are the abscissa and ordinate of the center of pixel block θ_i, and θ_jx and θ_jy are the abscissa and ordinate of the center of pixel block θ_j.
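The texture step can be sketched as follows. The inverse-distance weight d_ij = 1/dist is one reading of "the larger the distance, the smaller the weight"; the exact form of d_ij is an assumption, and the 2-level GLCM in the example is a toy stand-in for the 8×8 matrix.

```python
# Sketch of equations (9)-(12): GLCM contrast per block, then a texture
# saliency summing the other blocks' contrasts weighted inversely by the
# distance between block centers. The 1/dist weight is an assumption.
import math

def glcm_contrast(glcm):
    """difference(theta_j) = sum_{i,j} (i-j)^2 * P(i,j)  -- eq. (9)."""
    n = len(glcm)
    return sum((i - j) ** 2 * glcm[i][j] for i in range(n) for j in range(n))

def texture_saliency(contrasts, centers):
    """sal_t(theta_i) for every block, eqs. (10)-(12)."""
    out = []
    for i, (cx, cy) in enumerate(centers):
        s = 0.0
        for j, (dx, dy) in enumerate(centers):
            if i == j:
                continue
            dist = math.hypot(cx - dx, cy - dy)   # eq. (12)
            s += contrasts[j] / dist              # eqs. (10)-(11)
        out.append(s)
    return out

# A 2-level GLCM where the two off-diagonal pairs occur with probability 0.5:
glcm = [[0.0, 0.5],
        [0.5, 0.0]]
c = glcm_contrast(glcm)  # (0-1)^2*0.5 + (1-0)^2*0.5 = 1.0
```

With normalized probabilities in the GLCM, a high contrast marks a textured block, and distant textured blocks contribute little to each other's saliency.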
In the step (4), foreground segmentation for power transmission line inspection is realized by weighted fusion of the multi-feature saliency values as follows:
1) A pixel block θ_j at the image edge has weak saliency and little influence on the global saliency, while a pixel block θ_j at the center of the target has strong saliency and a large influence on the global saliency. The distance of each pixel block θ_j from the image center is therefore calculated, and a common spatial weight term weight_(c/g/t)(θ_i) is defined for the color, gradient and texture features. The weights weight_c(θ_i), weight_g(θ_i), weight_t(θ_i) are calculated as:
weight_(c/g/t)(θ_i) = 1 − sqrt((θ_ix − O_x)² + (θ_iy − O_y)²) / sqrt((p/2)² + (q/2)²)   (13)
where weight_c(θ_i), weight_g(θ_i), weight_t(θ_i) denote weights inversely related to the distance of the pixel block from the image center; θ_ix and θ_iy are the abscissa and ordinate of the center of pixel block θ_i; O_x and O_y are the abscissa and ordinate of the image center point; and the image size is p × q.
2) The fused multi-feature saliency value of the image is calculated as:
S(θ_i) = sal_c(θ_i) · weight_c(θ_i) + sal_g(θ_i) · weight_g(θ_i) + sal_t(θ_i) · weight_t(θ_i)   (14)
where S(θ_i) is the multi-feature saliency value of pixel block θ_i; sal_c(θ_i), sal_g(θ_i), sal_t(θ_i) denote the color, gradient and texture saliency values of pixel block θ_i; and weight_c(θ_i), weight_g(θ_i), weight_t(θ_i) denote the corresponding color, gradient and texture feature weights of θ_i.
The multi-feature saliency value of the image is calculated as above and rendered according to its value, realizing the foreground segmentation of the power transmission line inspection image.
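The center-distance weighting and fusion can be sketched briefly. The normalization of the weight (1 minus the distance over the half-diagonal, so that the center gets 1 and the corners get 0) is an assumption consistent with "inversely proportional to the distance from the center"; the patent does not state the exact form.

```python
# Sketch of equations (13)-(14): a spatial weight decreasing with the
# block's distance from the image center, and the weighted fusion of the
# three per-block saliency values. The weight normalization is assumed.
import math

def center_weight(bx, by, p, q):
    """Weight inversely related to distance from the image center (eq. 13)."""
    ox, oy = p / 2.0, q / 2.0
    half_diag = math.hypot(ox, oy)
    return 1.0 - math.hypot(bx - ox, by - oy) / half_diag

def fused_saliency(sal_c, sal_g, sal_t, w_c, w_g, w_t):
    """S(theta_i) = sal_c*w_c + sal_g*w_g + sal_t*w_t  -- eq. (14)."""
    return sal_c * w_c + sal_g * w_g + sal_t * w_t

w_center = center_weight(50, 50, 100, 100)  # block at the center -> 1.0
w_corner = center_weight(0, 0, 100, 100)    # block at a corner  -> 0.0
```

Thresholding the resulting S(θ_i) map then yields the foreground mask that step (5) cleans up morphologically.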
In the step (5), the segmented inspection image is morphologically processed, and the foreground and background of the power transmission line inspection image are separated as follows:
(1) Erosion or dilation is selected according to the segmentation result of the image to eliminate obvious noise interference, the segmentation result is smoothed, and the most complete foreground edge is retained, so that the subsequent foreground-background separation is more accurate.
(2) The foreground is subtracted from the original image to obtain the background of the inspection image, realizing foreground and background segmentation of the power transmission line inspection image.
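Step (5) can be sketched with a pure-Python closing (one dilation followed by one erosion) on the binary foreground mask; a real implementation would use a morphology library, and the 3×3 structuring element and the choice of a closing (rather than an opening) are assumptions.

```python
# Sketch of step (5): dilate then erode the foreground mask with a 3x3
# neighborhood, then take the background as the pixels outside the mask.

def _morph(mask, op):
    # apply max (dilation) or min (erosion) over each 3x3 neighborhood
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [mask[yy][xx]
                     for yy in range(max(0, y - 1), min(h, y + 2))
                     for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = op(neigh)
    return out

def close_mask(mask):
    return _morph(_morph(mask, max), min)   # dilate, then erode

def background(img, mask):
    """Subtract the foreground: keep original pixels where the mask is 0."""
    return [[p if m == 0 else 0 for p, m in zip(prow, mrow)]
            for prow, mrow in zip(img, mask)]

mask = [[0] * 7 for _ in range(7)]
for y, x in [(2, 2), (2, 3), (2, 4), (3, 2), (3, 4), (4, 2), (4, 3), (4, 4)]:
    mask[y][x] = 1   # 3x3 foreground with a one-pixel hole at (3, 3)
closed = close_mask(mask)  # the hole is filled, the outline is preserved
```

The closing removes small holes and speckle noise while keeping the foreground edge, matching the "eliminate obvious noise, keep the most complete edge" goal of step (5).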

Claims (7)

1. A power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion, characterized in that the system used comprises: an unmanned aerial vehicle image acquisition module, an onboard image transmission module and an information processing center module; the unmanned aerial vehicle carrying a visible-light camera captures images in real time, and the collected image information is transmitted through the onboard image transmission module to the information processing center module for analysis and processing.
2. The power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion adopted by the information processing center module of claim 1, characterized by comprising the following steps:
(1) first dividing different color intervals according to the color features of the image, then clustering pixels with similar color histograms using a mean-shift clustering algorithm, and calculating the color saliency value of each pixel block;
(2) converting the image into gray space, calculating gradient features, clustering pixels with similar gradient histograms using the mean-shift clustering algorithm, and calculating the gradient saliency value of each pixel block;
(3) calculating the contrast difference between each pixel block and the other pixel blocks according to the texture features of the image, fusing the contrast differences with weights determined by the distances between the pixel blocks, and calculating the texture saliency value of each pixel block;
(4) calculating the spatial weight term of each pixel block, and fusing the color, gradient and texture saliency values of the pixel block with weights given by the center-distance method, realizing multi-feature saliency fused foreground segmentation for power transmission line inspection;
(5) performing dilation and erosion on the segmented image, retaining the most complete foreground edge, and subtracting the foreground from the original image to obtain the background image, realizing foreground and background segmentation of the power transmission line inspection image.
3. The power transmission line inspection foreground and background segmentation method based on multi-feature saliency fusion as claimed in claim 2, characterized in that in the step (1) the color saliency value sal_c(θ_j) of the image is calculated by a weighted fusion algorithm as follows:
(1) first converting the acquired image into the HSV (hue, saturation, value) color space and dividing the hue component H into 14 different color intervals, each interval representing a color level k (k = 1, 2, ..., 14); traversing the image pixel by pixel with i as the current pixel, calculating the H component value of each pixel i and assigning it to the corresponding color level k, yielding the color histogram f_col(i) of each pixel i;
(2) using the mean-shift clustering algorithm, setting a circular sliding window of radius r centered at a randomly selected point O; sliding the window, retaining the window containing the most pixels, and then clustering each pixel i according to the sliding window it falls in, so that pixels with similar color histograms f_col(i) are grouped into the same cluster; in total w_c clusters are obtained, represented by the set of vectors {w_c^1, w_c^2, ..., w_c^(w_c)}, each cluster w_c^j containing several pixels i (i = 1, 2, 3, ...); the color saliency of pixel i is calculated as:
sal_c(i) = Σ_(j=1)^(w_c) β_j · (f̄_col(w_c^i) − f̄_col(w_c^j))² / (σ²(w_c^i) + λ_c1 + σ²(w_c^j) + λ_c2)
where sal_c(i) denotes the color saliency value of pixel i; β_j is the number of pixels of cluster w_c^j, used as a weight; f̄_col(w_c^i) is the average color histogram value of the cluster w_c^i containing pixel i, and f̄_col(w_c^j) is the average color histogram value of cluster w_c^j; λ_c1 and λ_c2 are weights set to 0.001 to eliminate the averaging error; σ²(w_c^i) is the variance of w_c^i, and σ²(w_c^j) is the variance of w_c^j;
(3) After the colour saliency value sal_c(i) of each pixel point i is obtained, the image is divided into n 2 × 2 pixel blocks θ_j (j = 1, 2, 3, ..., n), each pixel block represented as θ_j = {i_m,n, i_m,n+1, i_m+1,n, i_m+1,n+1}, where i_m,n denotes the pixel point of row m, column n; i_m,n+1 the pixel point of row m, column n+1; i_m+1,n the pixel point of row m+1, column n; and i_m+1,n+1 the pixel point of row m+1, column n+1. Finally the colour saliency value of each pixel block is calculated:

sal_c(θ_i) = sal_c(i_m,n) + sal_c(i_m,n+1) + sal_c(i_m+1,n) + sal_c(i_m+1,n+1)

where sal_c(θ_i) denotes the colour saliency value of the i-th pixel block θ_i, and sal_c(i_m,n), sal_c(i_m,n+1), sal_c(i_m+1,n) and sal_c(i_m+1,n+1) are the colour saliency values of the pixels of row m, column n; row m, column n+1; row m+1, column n; and row m+1, column n+1 respectively.
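The cluster-contrast colour saliency and the 2×2 block accumulation above can be sketched as follows. This is a non-authoritative sketch: the exact contrast formula appears only as an image in the published claim, so the contrast term in `color_saliency` (pixel-count-weighted mean differences normalised by cluster variances plus λ = 0.001) is an assumption reconstructed from the surrounding symbol definitions, and the mean-shift clustering itself is assumed to have already produced the `labels` array.

```python
import numpy as np

def color_saliency(hist_vals, labels, lam=0.001):
    """Cluster-contrast saliency sketch (assumed formula).

    hist_vals: per-pixel colour histogram value f_col(i), shape (N,)
    labels:    cluster label of each pixel from mean shift, shape (N,)
    """
    ids = np.unique(labels)
    alpha = np.array([(labels == k).sum() for k in ids], float)  # pixel-count weights
    mu = np.array([hist_vals[labels == k].mean() for k in ids])  # average histogram value
    var = np.array([hist_vals[labels == k].var() for k in ids])  # cluster variance
    sal_cluster = np.zeros(len(ids))
    for a in range(len(ids)):
        # contrast of cluster a against every cluster, weighted by pixel count,
        # with lam terms guarding against near-zero variances
        sal_cluster[a] = sum(
            alpha[b] * abs(mu[a] - mu[b]) / (var[a] + var[b] + 2 * lam)
            for b in range(len(ids)))
    lut = {k: sal_cluster[a] for a, k in enumerate(ids)}
    return np.array([lut[k] for k in labels])

def block_saliency(sal_map):
    """Sum a per-pixel saliency map over non-overlapping 2x2 blocks."""
    h, w = sal_map.shape
    return sal_map[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```

`block_saliency` implements the claim's block formula directly: each 2×2 block's value is the sum of its four pixel saliencies.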
4. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that: in the step (2), the gradient saliency value sal_g(i) is calculated as follows:

(1) First, colour space normalization is performed on the inspection image to obtain a grey-scale image, and gamma correction by taking the square root is applied to images with uneven illumination. After colour space normalization of the inspection image, the gradient value I_x in the horizontal direction and the gradient value I_y in the vertical direction of each pixel i are first calculated, and from I_x and I_y the gradient magnitude A(x, y) and gradient direction θ(x, y) of each pixel i are obtained. Then, according to the gradient direction of each pixel point, combined with the values of its 4 neighbouring pixel points, different weights are assigned according to the distance between the pixel point and the target point, linear interpolation is performed, and the gradient magnitudes are accumulated into the gradient histogram f_grad(i) to obtain the final gradient histogram. The gradient magnitude and gradient direction of each pixel i are calculated as:

I_x = G(x+1, y) − G(x−1, y)
I_y = G(x, y+1) − G(x, y−1)
A(x, y) = sqrt(I_x^2 + I_y^2)
θ(x, y) = arctan(I_y / I_x)

where A(x, y) is the gradient magnitude of pixel i, G(x, y) is the grey value of the pixel at spatial location (x, y), I_x is the gradient value in the horizontal direction, I_y is the gradient value in the vertical direction, and θ(x, y) ∈ [0, 360°) represents the gradient direction;
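The central-difference gradients, magnitude and direction above can be sketched as follows. Border handling by edge padding is an assumption the claim does not specify, and `arctan2` is used instead of plain `arctan` so that the direction covers the full [0, 360°) range stated in the claim.

```python
import numpy as np

def gradients(G):
    """Central-difference gradients, magnitude A and direction theta in [0, 360).

    G: 2-D grey-scale image as floats; borders are handled by edge padding.
    """
    P = np.pad(G, 1, mode='edge')
    Ix = P[1:-1, 2:] - P[1:-1, :-2]    # G(x+1, y) - G(x-1, y)
    Iy = P[2:, 1:-1] - P[:-2, 1:-1]    # G(x, y+1) - G(x, y-1)
    A = np.hypot(Ix, Iy)               # sqrt(Ix^2 + Iy^2)
    theta = np.degrees(np.arctan2(Iy, Ix)) % 360.0
    return A, theta
```

On a horizontal intensity ramp the interior magnitude is the constant step of the central difference and the direction is 0°.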
(2) Using a mean shift clustering algorithm, set a circular sliding window of radius r with a randomly selected centre point O; slide the window and retain the window containing the most pixel points, then cluster each pixel point i according to the sliding window in which it lies, so that pixels with similar gradient histogram values f_grad(i) fall into the same cluster. The pixels are divided into w_g clusters in total, denoted by the vector set {C_1^g, C_2^g, ..., C_{w_g}^g}; each cluster C_j^g comprises a number of pixels i (i = 1, 2, 3, ...). The gradient saliency of a pixel i belonging to cluster C_k^g is calculated as:

sal_g(i) = Σ_{j=1}^{w_g} β_j · |μ_g(k) − μ_g(j)| / (σ_g^2(k) + σ_g^2(j) + λ_g1 + λ_g2)

where sal_g(i) denotes the gradient saliency value of pixel point i; β_j is the weight given by the number of pixels in cluster C_j^g; μ_g(k) and μ_g(j) are the average gradient histogram values of clusters C_k^g and C_j^g respectively; σ_g^2(k) and σ_g^2(j) are the variances of clusters C_k^g and C_j^g; and the weights λ_g1, λ_g2 are set to 0.001 to eliminate averaging error;
(3) After the gradient saliency value sal_g(i) of each pixel point i is obtained, the gradient saliency values of the pixel points i_m,n, i_m,n+1, i_m+1,n and i_m+1,n+1 are fused to calculate the gradient saliency value of each pixel block:

sal_g(θ_i) = sal_g(i_m,n) + sal_g(i_m,n+1) + sal_g(i_m+1,n) + sal_g(i_m+1,n+1)

where sal_g(θ_i) denotes the gradient saliency value of the i-th pixel block θ_i, and sal_g(i_m,n), sal_g(i_m,n+1), sal_g(i_m+1,n) and sal_g(i_m+1,n+1) are the gradient saliency values of the pixels of row m, column n; row m, column n+1; row m+1, column n; and row m+1, column n+1 respectively.
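The accumulation of gradient magnitudes into the histogram f_grad with linear interpolation, described in step (1) of this claim, might be sketched as below. This is a simplified variant that interpolates each magnitude between the two nearest orientation bins only (the claim also interpolates spatially over 4 neighbouring pixels); the bin count of 9 is an assumption the claim does not state.

```python
import numpy as np

def orientation_histogram(A, theta, nbins=9):
    """Accumulate gradient magnitudes into an orientation histogram,
    splitting each magnitude linearly between its two nearest bins.

    A:     gradient magnitudes, any shape
    theta: gradient directions in degrees, same shape, in [0, 360)
    """
    bin_w = 360.0 / nbins
    pos = theta.ravel() / bin_w                  # fractional bin position
    lo = np.floor(pos).astype(int) % nbins       # lower neighbouring bin
    hi = (lo + 1) % nbins                        # upper neighbouring bin (wraps)
    frac = pos - np.floor(pos)                   # distance-based split weight
    hist = np.zeros(nbins)
    np.add.at(hist, lo, A.ravel() * (1.0 - frac))
    np.add.at(hist, hi, A.ravel() * frac)
    return hist
```

Because each magnitude is split between two bins with weights summing to one, the histogram total always equals the total gradient magnitude.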
5. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that: in the step (3), the texture saliency value sal_t(θ_i) is calculated as follows:

(1) The grey-level matrix is obtained from the grey-scale image and its grey values are quantised into 8 levels. For each pixel block θ_j, the value at position (i, j) of the grey-level co-occurrence matrix equals the probability that the pixel pair with grey-level combination (i, j) appears in the grey-level matrix, giving an 8 × 8 co-occurrence matrix GLCM;

(2) The contrast difference(θ_j) of the pixel block θ_j is then calculated from its grey-level co-occurrence matrix GLCM (8 × 8):

difference(θ_j) = Σ_i Σ_j (i − j)^2 · P(i, j)

where difference(θ_j) is the contrast value of pixel block θ_j, and P(i, j) is the value at position (i, j) of the grey-level co-occurrence matrix, representing the number of pixel pairs with grey-level combination (i, j); the grey-level matrix is an 8 × 8 matrix;
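A minimal sketch of the 8-level GLCM contrast computation above. The 8-bit grey input range and the horizontal-neighbour pixel pairing are assumptions; the claim fixes neither the quantisation mapping nor the pair offset.

```python
import numpy as np

def glcm_contrast(block, levels=8, offset=(0, 1)):
    """GLCM contrast of a grey-level block quantised to `levels` levels.

    block:  2-D array of grey values in [0, 255]
    offset: (dy, dx) displacement defining pixel pairs (default: horizontal)
    """
    q = (block.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]    # first pixel of each pair
    b = q[dy:, dx:]                              # its displaced neighbour
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)   # count co-occurrences
    P = glcm / glcm.sum()                        # normalise to probabilities
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * P).sum())       # contrast = sum (i-j)^2 P(i,j)
```

A uniform block has zero contrast; alternating black/white columns give the maximum level difference (7) squared, i.e. 49.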
(3) The weighted sum of the contrast value of each pixel block and the contrast values of the other pixel blocks is taken as the saliency value of that pixel block. The texture saliency value sal_t(θ_i) of each pixel block is calculated as:

sal_t(θ_i) = Σ_{j=1}^{n} d_i,j · difference(θ_j)
d_i,j = 1 / (1 + dist(i, j))
dist(i, j) = sqrt((θ_ix − θ_jx)^2 + (θ_iy − θ_jy)^2)

where sal_t(θ_i) is the texture saliency value of each pixel block; difference(θ_j) is the contrast value of pixel block θ_j; d_i,j is a weight whose value becomes smaller as the distance between the two pixel blocks becomes larger; dist(i, j) denotes the distance between the two pixel blocks θ_i and θ_j; θ_ix denotes the abscissa of the centre of pixel block θ_i, θ_iy the ordinate of the centre of pixel block θ_i, θ_jx the abscissa of the centre of pixel block θ_j, and θ_jy the ordinate of the centre of pixel block θ_j.
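The distance-weighted fusion of block contrasts can be sketched as follows. The exact weight function was published only as an image, so the decreasing weight `1/(1 + dist)` used here is an assumption consistent with the stated property (larger distance, smaller weight).

```python
import numpy as np

def texture_saliency(contrast, centers):
    """Distance-weighted sum of block contrasts (assumed weight form).

    contrast: (n,) GLCM contrast value per pixel block
    centers:  (n, 2) centre coordinates of each block
    """
    # pairwise Euclidean distances dist(i, j) between block centres
    diff = centers[:, None, :] - centers[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    w = 1.0 / (1.0 + d)           # farther blocks contribute less
    return w @ contrast           # sal_t(i) = sum_j w[i, j] * contrast[j]
```

Each block's own contrast enters with weight 1 (distance zero), and every other block's contrast is attenuated by its distance.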
6. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that: in the step (4), the inspection image foreground segmentation is realized by weighted fusion of the multi-feature saliency values as follows:

(1) A pixel block θ_j at the edge of the image has weak saliency characteristics and little influence on the global saliency, while a pixel block θ_j at the centre of the target has strong saliency characteristics and a large influence on the global saliency. The distance between each pixel block θ_j and the image centre is therefore calculated, and a comprehensive spatial weight term weight_{c/g/t}(θ_i) based on the colour, gradient and texture features is defined. The weights weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i) are calculated as:

weight_{c/g/t}(θ_i) = 1 − sqrt((θ_ix − O_x)^2 + (θ_iy − O_y)^2) / sqrt((p/2)^2 + (q/2)^2)

where weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i) represent weights inversely proportional to the distance of the pixel block from the image centre; θ_ix denotes the abscissa of the centre of pixel block θ_i and θ_iy its ordinate; O_x denotes the abscissa of the image centre point and O_y its ordinate; and the size of the image is p × q;
(2) The fused multi-feature saliency value is calculated as:

S(θ_i) = sal_c(θ_i) · weight_c(θ_i) + sal_g(θ_i) · weight_g(θ_i) + sal_t(θ_i) · weight_t(θ_i)

where S(θ_i) is the multi-feature saliency value of pixel block θ_i; sal_c(θ_i), sal_g(θ_i) and sal_t(θ_i) respectively represent the colour, gradient and texture saliency values of pixel block θ_i; and weight_c(θ_i), weight_g(θ_i) and weight_t(θ_i) respectively represent the weights of pixel block θ_i based on the colour, gradient and texture features.

The multi-feature saliency value of the image is calculated according to the above process and displayed according to its value, thereby realizing the foreground segmentation of the power transmission line inspection image.
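The centre-bias weighting and the weighted fusion of the three saliency maps can be sketched as follows. The exact falloff of the spatial weight appears only as an image in the claim; the linear falloff normalised by the half-diagonal used in `spatial_weight` is an assumption consistent with "inversely proportional to the distance from the image centre".

```python
import numpy as np

def spatial_weight(centers, p, q):
    """Centre-bias weight per block: 1 at the image centre, 0 at the
    farthest corner (assumed linear falloff over the half-diagonal).

    centers: (n, 2) block-centre coordinates; image size is p x q.
    """
    O = np.array([p / 2.0, q / 2.0])                 # image centre point
    d = np.sqrt(((centers - O) ** 2).sum(axis=-1))   # distance to centre
    return 1.0 - d / np.sqrt((p / 2.0) ** 2 + (q / 2.0) ** 2)

def fuse(sal_c, sal_g, sal_t, w_c, w_g, w_t):
    """S = sal_c*w_c + sal_g*w_g + sal_t*w_t, per the fusion formula."""
    return sal_c * w_c + sal_g * w_g + sal_t * w_t
```

`fuse` works elementwise, so the same call fuses scalar values or whole per-block arrays.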
7. The power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion as claimed in claim 2, characterized in that: in the step (5), dilation and erosion are performed on the segmented image, the most complete edge of the foreground is retained, and the foreground is subtracted from the original image to obtain the background image, thereby segmenting the foreground and background of the power transmission line inspection image:

(1) Erosion or dilation is selected according to the segmentation result of the image to eliminate obvious noise interference, smooth the segmentation result and retain the most complete edge of the foreground image, so that the subsequent foreground and background segmentation is more accurate;

(2) The foreground is subtracted from the original image to obtain the background image of the inspection image, realizing the foreground and background segmentation of the power transmission line inspection image.
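The morphological cleanup and foreground subtraction of this claim might be sketched with plain NumPy as follows. The choice of an opening (erosion then dilation) and of a 3×3 structuring element is an assumption; the claim only says erosion or dilation is selected according to the segmentation result.

```python
import numpy as np

def _shift_stack(mask, pad_val):
    """Stack the 9 shifted copies of a boolean mask (3x3 neighbourhood)."""
    p = np.pad(mask, 1, constant_values=pad_val)
    h, w = mask.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def erode(mask):
    """3x3 binary erosion; out-of-bounds neighbours count as foreground."""
    return _shift_stack(mask, True).all(axis=0)

def dilate(mask):
    """3x3 binary dilation; out-of-bounds neighbours count as background."""
    return _shift_stack(mask, False).any(axis=0)

def split_foreground_background(image, mask):
    """Open the mask (erode then dilate) to remove speckle noise, then
    subtract the foreground from the original image to get the background."""
    clean = dilate(erode(mask))
    foreground = np.where(clean, image, 0)
    background = np.where(clean, 0, image)   # original image minus foreground
    return foreground, background
```

Opening removes isolated noise pixels while a solid foreground region keeps its overall shape, which matches the claim's goal of smoothing the segmentation while preserving the foreground edge.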
CN201911210391.9A 2019-11-29 2019-11-29 Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion Pending CN112884795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911210391.9A CN112884795A (en) 2019-11-29 2019-11-29 Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion


Publications (1)

Publication Number Publication Date
CN112884795A true CN112884795A (en) 2021-06-01

Family

ID=76039548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911210391.9A Pending CN112884795A (en) 2019-11-29 2019-11-29 Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion

Country Status (1)

Country Link
CN (1) CN112884795A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113450372A (en) * 2021-08-27 2021-09-28 海门裕隆光电科技有限公司 Power transmission line image intelligent enhancement method and system based on artificial intelligence
CN113450372B (en) * 2021-08-27 2021-11-16 海门裕隆光电科技有限公司 Power transmission line image intelligent enhancement method and system based on artificial intelligence
CN116563279A (en) * 2023-07-07 2023-08-08 山东德源电力科技股份有限公司 Measuring switch detection method based on computer vision
CN116563279B (en) * 2023-07-07 2023-09-19 山东德源电力科技股份有限公司 Measuring switch detection method based on computer vision
CN117350926A (en) * 2023-12-04 2024-01-05 北京航空航天大学合肥创新研究院 Multi-mode data enhancement method based on target weight
CN117350926B (en) * 2023-12-04 2024-02-13 北京航空航天大学合肥创新研究院 Multi-mode data enhancement method based on target weight


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210601