CN112241973B - Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment - Google Patents

Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment

Info

Publication number
CN112241973B
CN112241973B
Authority
CN
China
Prior art keywords
image
space
power transformation
intelligent assembly
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011150405.5A
Other languages
Chinese (zh)
Other versions
CN112241973A (en)
Inventor
江翼
刘正阳
程林
周盟
蔡玉汝
黄勤清
周文
杨旭
徐惠
刘梦娜
曾静岚
陈敏维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Wuhan NARI Ltd
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
NARI Group Corp
Original Assignee
State Grid Corp of China SGCC
Wuhan NARI Ltd
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
NARI Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Wuhan NARI Ltd, Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd, NARI Group Corp filed Critical State Grid Corp of China SGCC
Priority to CN202011150405.5A priority Critical patent/CN112241973B/en
Publication of CN112241973A publication Critical patent/CN112241973A/en
Application granted granted Critical
Publication of CN112241973B publication Critical patent/CN112241973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image analysis boundary tracking representation method and device for an intelligent assembly of power transformation equipment, wherein the method comprises the following steps: 1) performing spatial color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space; 2) performing multi-scale layered contrast processing on the image space of the image data using multiple scales, and performing step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window; 3) performing binary image conversion on the filtered image data and outputting, through a scale-refined boundary tracking algorithm, a space safety monitoring image or a fire contour image with a detection frame. The invention overcomes the shortcomings of edge detection algorithms in processing complex images with high boundary density and, by selecting an optimized scale-refined boundary tracking algorithm, improves the performance of boundary tracking in complex retrieval and boundary completeness.

Description

Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment
Technical Field
The invention relates to the technical field of intelligent detection and image processing for power transformation equipment, and in particular to an image analysis boundary tracking representation method and device for an intelligent assembly of power transformation equipment.
Background
The normal operation of the intelligent assembly of power transformation equipment is essential for guaranteeing the safe operation of a transformer substation. Because the assembly works for long periods in a high-voltage, complex and changeable environment, it is easily affected by factors such as heating, damage and lightning strikes on its electronic devices, which can cause major fire accidents and bring great losses to personnel safety and the national economy. Image detection is an important method for detecting fires in the intelligent assembly of power transformation equipment. When a fire occurs in the environment of the intelligent assembly, the assembly acquires a picture of the fire through a camera; a boundary tracking method based on image filtering is then used to remove the large number of non-fire redundant noise signals in the picture, and the characteristic boundary of the fire contour is extracted, so that fire detection is effectively realized. In practical engineering applications, when a fire occurs in the intelligent assembly of power transformation equipment, the fire characteristics can be extracted by this boundary tracking representation method based on image filtering and a safety alarm can be raised, so the method has very wide engineering application.
The main problems of image-filtering boundary tracking detection in practical applications are incomplete filtering and denoising, and unclear edge contour extraction, caused by detected objects crossing at different distances. When a fire occurs in the power transformation equipment, the captured picture contains many kinds of interference, such as multiple buildings, complex power transformation equipment and excessive spatial information, which increases the difficulty of removing interference and extracting the fire contour from the image; therefore an optimized image-filtering boundary tracking method is needed for fire detection.
Filtering the fire image is a basic problem in fire feature extraction, and the current traditional methods include mean filtering, bilateral filtering, Gaussian filtering and the like. However, these methods have many disadvantages. For example, Gaussian filtering can effectively remove noise close to a normal distribution from the image, but the image contains many interference signals as well as non-normally distributed information, so the interference in the image is not completely removed, which is not conducive to fire contour extraction.
After the image is filtered and denoised, an edge detection algorithm such as the Sobel operator, the Laplacian operator or the Canny operator is often used. However, these algorithms also have disadvantages; for example, the Sobel operator works well on images with gradually changing gray levels and considerable noise, but when an image edge is more than one pixel wide, it is not very accurate in locating the edge contour.
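For reference, the conventional pipeline discussed above (Gaussian smoothing followed by an edge operator) can be reproduced in a few lines. The sketch below uses OpenCV; the kernel size and Canny thresholds are illustrative values only, not parameters taken from this patent.

```python
import cv2

def conventional_edge_pipeline(path: str):
    """Baseline Gaussian-filter + Canny pipeline discussed in the background.

    The kernel size and hysteresis thresholds are illustrative values only;
    as noted above, this pipeline struggles with non-Gaussian interference
    and with edges wider than one pixel.
    """
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)   # removes only near-Gaussian noise
    edges = cv2.Canny(smoothed, 100, 200)          # gradient + hysteresis thresholding
    return edges
```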
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image analysis boundary tracking representation method and device for an intelligent assembly of power transformation equipment. The problems of multi-dimensional denoising and contour extraction in the fire image space are solved by using a boundary tracking algorithm based on image filtering. Processing the image with this algorithm can reveal the essential multi-dimensional spatial characteristics of the complex information in the image, facilitates the removal of spatial multi-interference signals, improves the extraction of fire contour features and the overall image processing performance, and realizes safety monitoring of the intelligent assembly of the power transformation equipment.
In order to achieve the above object, the present invention provides a method and a device for tracking and representing an image analysis boundary of an intelligent component of a power transformation device, wherein the method comprises the following steps:
1) Carrying out space color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space;
2) Carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and then carrying out step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image;
3) Performing binary image conversion on the filtered image data of the intelligent assembly of the power transformation equipment, and outputting a space safety monitoring image or a fire contour image with a detection frame through a scale-refined boundary tracking algorithm.
Preferably, the spatial color conversion method is as follows: the acquired image data of the intelligent assembly of the power transformation equipment is input into a polar coordinate system with depth in three spatial directions, in which the red, green and blue (R, G, B) colors are correspondingly mixed and crossed; when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation, so that the spatial color conversion expresses the original information of the monitored multi-dimensional spatial object; the gray value of each pixel point is then stored in one byte through gray-scale processing.
Preferably, the filtering model in step 2) is a trained spatial recursive filtering model, whose operational formula is given as an image (Figure BDA0002741004050000031). In the formula, g(x, y) and f(x, y) are respectively the processed image and the original image; h and H are respectively the low and high contrast values of the image-space recursion; and the symbol shown in Figure BDA0002741004050000032 is the orthonormal image basis used in the recursive processing.
Preferably, the scale-refined boundary tracking algorithm in step 3) comprises:
(1) Carrying out refined segmentation processing on the external contour and the internal edges of the image data;
(2) Setting the window image as a binary image, and setting in the binary image an outer frame with a width of p pixels and a value of q, where p and q are natural numbers;
(3) Marking a point with pixel value p in the window image as b0 and a point with value p-1 in the window image as c0; searching m points in sequence from c0 to b0, where m is a natural number; marking the first point with pixel value p as b1 and, in the same way, the point with value p-1 as c1; initializing b = b1 and c = c1;
(4) Detecting the m neighbourhoods clockwise from c around b, marked k1, …, km; the subsequently retrieved point with value p is marked kn, and, as in the previous step, b = kn and c = k(n-1);
(5) Looping the previous step until b stops at the position b0, and then retrieving the next boundary point b1.
The invention also provides a device for tracking and representing the image analysis boundary of the intelligent assembly of the power transformation equipment, which is characterized by comprising the following components:
the conversion module is used for carrying out space color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space;
the processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and then carrying out step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image;
and the output module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment, and outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
Further, the conversion module includes:
the input module is used for inputting the acquired image data of the intelligent assembly of the power transformation equipment into a polar coordinate system with depth in three spatial directions;
the polar coordinate system conversion module is used for carrying out spatial color conversion on the original information of the multi-dimensional spatial object in the image data in the polar coordinate system, in which the red, green and blue (R, G, B) colors are correspondingly mixed and crossed, and in which, when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation;
and the gray processing module is used for carrying out gray processing on the image data after the space color conversion to obtain an image space.
Still further, the processing module includes:
the multi-scale layering processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment to obtain window images with accurate high-low contrast ranges;
and the step-by-step recursive filtering model processing module is used for performing step-by-step recursive filtering model processing from the high-contrast image to the low-contrast image in each window image.
Still further, the output module includes:
the binary image conversion module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment;
and the boundary tracking operation module is used for outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
The invention further proposes a device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment.
The present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the above image analysis boundary tracking representation method for an intelligent component of a power transformation device.
The invention has the beneficial effects that:
1. According to the invention, when a fire occurs in the intelligent assembly of the power transformation equipment, the smoke and fire image is subjected to spatial layered filtering and the fire contour is then extracted with an optimized boundary tracking method, so that the fire detection performance is improved and safety monitoring of the intelligent assembly of the power transformation equipment is realized.
2. The spatial layered recursive variant filtering model provided by the invention can improve the filtering performance for spatially crossed information, and the optimized boundary tracking algorithm improves the fire contour feature detection performance.
3. In order to improve the fire contour feature monitoring performance and overcome the shortcomings of edge detection algorithms in processing complex images with high boundary density, the invention selects an optimized scale-refined boundary tracking algorithm, thereby improving the performance of boundary tracking in complex retrieval and boundary completeness.
4. The method solves the problems of multi-dimensional denoising and contour extraction in the fire image space by using a boundary tracking algorithm based on image filtering, effectively removes complex redundant interference signals in the fire image, and improves the detection of edge contour features. Processing the image with this algorithm can reveal the essential multi-dimensional spatial characteristics of the complex information in the image, facilitates the removal of spatial multi-interference signals, improves the extraction of fire contour features and the overall image processing performance, and realizes safety monitoring of the intelligent assembly of the power transformation equipment.
Drawings
FIG. 1 is an overall flow chart of the method of the present invention.
Fig. 2 is a flow chart of a spatial color conversion method.
Fig. 3 is a flow chart of a spatial hierarchical recursive variant filtering process.
Fig. 4 is a flowchart of a first embodiment of the present invention.
FIG. 5 is a schematic diagram illustrating an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and specific examples, but the embodiments of the present invention are not limited thereto.
As shown in fig. 1, the image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment provided by the present invention first performs spatial color conversion on the acquired image, then performs interference denoising through spatial hierarchical recursive variant filtering, and finally extracts the fire contour features through an optimized boundary tracking algorithm, thereby realizing safety monitoring of the power transformation equipment. It specifically includes the following steps:
1) Perform spatial color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space.
The flow of the spatial color conversion method is shown in fig. 2. The acquired image data of the intelligent assembly of the power transformation equipment is input into a polar coordinate system with depth in three spatial directions, in which the red, green and blue (R, G, B) colors are mixed and crossed; when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation, so that the spatial color conversion can express the original information of the monitored multi-dimensional spatial object. Then, through gray-scale processing, the gray value of each pixel point is stored in one byte, with a gray range of 0-255, which avoids image banding distortion while preventing the loss of important object information.
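As a minimal sketch of the gray-scale step just described: the polar-coordinate color model is not specified in enough detail to reproduce, so only the conversion to one-byte (0-255) gray values is shown, and the luminance weights used are a standard assumption rather than values from the patent.

```python
import numpy as np

def to_gray(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to a one-byte-per-pixel gray image
    in the range 0-255, as in step 1) above.

    The polar-coordinate color representation described in the patent is
    not reproduced here; standard luminance weights are assumed for the
    gray-scale step.
    """
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b        # assumed luminance weights
    return np.clip(gray, 0, 255).astype(np.uint8)   # one byte per pixel
```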
2) Perform multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and perform a qualitative quality analysis of the object edge component information in the windows; then perform step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image, so as to effectively filter out spatially crossed interference.
Images acquired by real-time monitoring of the intelligent assembly of the power transformation equipment suffer from multi-dimensional spatial interleaving and the complexity of indoor and outdoor obstacles, so removing the interference caused by spatial obstacles is a key step in extracting image contour features. Because the working environment of the intelligent assembly of the power transformation equipment is complex and contains many obstacles, the removal of obstacle interference is very important. The monitored environment also changes in complex and varied ways: for example, smoke characteristics are detected at the early stage of a fire and flame characteristics later, so the detected characteristics change; and objects in the monitoring picture cross from far to near with many obstacles between them. These adverse conditions all make filtering difficult.
Therefore, the flow of the training method for the image-space recursive variant filtering is shown in fig. 3: the image collected by the monitoring equipment and subjected to spatial color conversion is given multi-scale layered contrast processing using multiple scales, so that image windows with accurate high-low contrast ranges are obtained and a qualitative quality analysis of the object edge component information in the windows is realized; then, within each window, the filtering model processing is recursed step by step from high-contrast (H) to low-contrast (h) images, so that spatially crossed interference is effectively filtered out. The spatial recursive variant filter models are shown in Table 1.
Table 1 Filter model representation (given as an image, Figure BDA0002741004050000071)
In the formulas shown in Table 1, g(x, y) and f(x, y) are respectively the processed image and the original image. In the median filtering model, W is the two-dimensional template, (k, l) are pixel points in the template, (x, y) are points in the image, and med() is the median reduction function. In the mean filtering model, M is the total number of pixels. In the threshold filtering model, w_{i,j} is the image gradient value and λ is the optimal threshold. In the wavelet filtering model, A_{J-1} is the low-frequency sparse component of layer J-1, D_{n,J} is the high-frequency coefficient of layer J, and the wavelet basis is shown in Figure BDA0002741004050000072. In the spatial recursive filtering model, h and H are respectively the low and high contrast values of the image-space recursion, and the orthonormal image basis used in the recursive processing is shown in Figure BDA0002741004050000073.
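Because the spatial recursive filtering formula and the models of Table 1 are given only as images, the sketch below illustrates only the control flow of step 2): split the gray image into windows at several scales, rank the windows by contrast, and filter from high-contrast to low-contrast windows. A median filter stands in for the patent's recursive model, and the window sizes, function names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter  # stand-in for the patent's recursive model

def multiscale_window_filter(gray: np.ndarray, scales=(64, 32, 16)) -> np.ndarray:
    """Illustrative multi-scale, per-window filtering (step 2).

    The patent's spatial recursive filtering model is given only as an
    equation image, so a median filter is used here as a placeholder;
    within each scale, windows are processed from high contrast
    (largest max-min spread) to low contrast.
    """
    out = gray.astype(np.float32).copy()
    height, width = gray.shape
    for size in scales:                               # coarse to fine window scales
        windows = []
        for y in range(0, height, size):
            for x in range(0, width, size):
                patch = out[y:y + size, x:x + size]
                contrast = float(patch.max() - patch.min())
                windows.append((contrast, y, x, size))
        # process high-contrast windows before low-contrast ones
        for contrast, y, x, s in sorted(windows, reverse=True):
            out[y:y + s, x:x + s] = median_filter(out[y:y + s, x:x + s], size=3)
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a full implementation, the per-window median filter would be replaced by the trained spatial recursive model once its formula is available.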
3) Perform binary image conversion on the filtered image data of the intelligent assembly of the power transformation equipment, and output a space safety monitoring image or a fire contour image with a detection frame through a scale-refined boundary tracking algorithm. In order to improve the monitoring of fire contour features and overcome the shortcomings of edge detection algorithms in processing complex images with high boundary density, the scale-refined boundary tracking algorithm is optimized, thereby improving its performance in complex retrieval and boundary completeness. The process is as follows (a sketch of these steps is given after step (5)):
(1) Perform scale-refined segmentation processing on the external contour and the internal edges of the image data;
(2) Set the window image as a binary image, and set in the binary image an outer frame with a width of 1 pixel and a value of 0;
(3) Mark a point with pixel value 1 in the image as b0 and its adjacent point to the west with value 0 as c0; search 8 points in sequence from c0 to b0, mark the first point with pixel value 1 as b1 and, in the same way, the point with value 0 as c1; initialize b = b1 and c = c1;
(4) Detect the 8 neighbourhoods clockwise from c around b, labelled k1, …, k8; the subsequently retrieved point with value 1 is labelled kn, and, as in the previous step, b = kn and c = k(n-1);
(5) Loop the previous step until b stops at the position b0, and then retrieve the next boundary point b1.
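A minimal sketch of steps (2) to (5), assuming the classical Moore-neighbour boundary tracing that the description closely follows; the single-pass stop condition (returning to b0) mirrors step (5), and the function and variable names are illustrative assumptions rather than the patent's own.

```python
import numpy as np

# Clockwise 8-neighbour offsets (dy, dx), starting from the west neighbour.
NEIGHBOURS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
              (0, 1), (1, 1), (1, 0), (1, -1)]

def trace_boundary(binary: np.ndarray):
    """Trace one object boundary in a 0/1 image, following steps (2)-(5):
    pad with a zero frame, start at the first foreground pixel b0 with its
    west neighbour as c0, then search the 8-neighbourhood clockwise until
    the trace returns to b0."""
    img = np.pad(binary.astype(np.uint8), 1, constant_values=0)  # step (2): zero outer frame
    ys, xs = np.nonzero(img)
    if len(ys) == 0:
        return []                                  # no foreground pixels
    b0 = (int(ys[0]), int(xs[0]))                  # step (3): first foreground pixel
    c = (b0[0], b0[1] - 1)                         # its west neighbour (value 0)
    b, contour = b0, [b0]
    while True:
        # step (4): clockwise scan of the 8 neighbours of b, starting after c
        start = NEIGHBOURS.index((c[0] - b[0], c[1] - b[1]))
        for i in range(1, 9):
            dy, dx = NEIGHBOURS[(start + i) % 8]
            cand = (b[0] + dy, b[1] + dx)
            if img[cand]:
                # the previously examined (background) neighbour becomes the new c
                pdy, pdx = NEIGHBOURS[(start + i - 1) % 8]
                c, b = (b[0] + pdy, b[1] + pdx), cand
                break
        else:
            break                                  # isolated pixel: no neighbour found
        if b == b0:                                # step (5): stopped back at b0
            break
        contour.append(b)
    return [(y - 1, x - 1) for y, x in contour]    # undo the 1-pixel padding offset
```

Applied to the binary image produced in step 3), each call returns one contour as a list of (row, column) points; repeating the scan from the next unvisited foreground pixel retrieves the next boundary, as in step (5).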
The first embodiment is as follows: fire image detection method for power transformation equipment
The invention is an image processing method whose input is an image acquired by the monitoring of an intelligent assembly of power transformation equipment and whose output is the extracted fire contour features. The flow chart of the algorithm is shown in fig. 4. The difference from the prior art is that existing image filters such as median filtering, threshold filtering and wavelet filtering cannot effectively denoise the spatial interference information of complex images, whereas the spatial hierarchical recursive variant filtering model proposed here can improve the filtering performance for spatially crossed information; in addition, the boundary tracking algorithm is optimized, which improves the fire contour feature detection performance. The application effect is shown in fig. 5, which comprises four images: a fire image of the intelligent assembly of the power transformation equipment, a spatial color conversion model image, a spatial layered recursive variant filtering image, and a fire contour image extracted by the optimized boundary algorithm.
The invention further provides an image analysis boundary tracking and representing device for the intelligent assembly of the power transformation equipment, which comprises a conversion module, a processing module and an output module. Wherein,
the conversion module is used for carrying out space color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space;
the processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and then carrying out step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image;
and the output module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment, and outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
The conversion module includes:
the input module is used for inputting the acquired image data of the intelligent assembly of the power transformation equipment into a polar coordinate system with depth in three spatial directions;
the polar coordinate system conversion module is used for carrying out spatial color conversion on the original information of the multi-dimensional spatial object in the image data in the polar coordinate system, in which the red, green and blue (R, G, B) colors are correspondingly mixed and crossed, and in which, when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation;
and the gray processing module is used for carrying out gray processing on the image data after the space color conversion to obtain an image space.
The processing module comprises:
the multi-scale layering processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment to obtain window images with accurate high-low contrast ranges;
and the step-by-step recursive filtering model processing module is used for performing step-by-step recursive filtering model processing from the high-contrast image to the low-contrast image in each window image.
The output module includes:
the binary image conversion module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment;
and the boundary tracking operation module is used for outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
The invention further proposes a device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment.
The present invention further provides a computer-readable storage medium, which stores a computer program, where the computer program, when executed by a processor, implements the above-mentioned image analysis boundary tracking representation method for an intelligent component of a power transformation device.
Matters not described in detail in this specification are well within the skill of those in the art.
Finally, it should be noted that the above detailed description is intended only to describe the technical solution of this patent and not to limit it. Although the patent has been described in detail with reference to the preferred embodiment, those skilled in the art should understand that the technical solution of the patent may be modified or replaced with equivalents without departing from its spirit and scope, and such modifications shall be covered by the claims of this patent.

Claims (9)

1. An image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment, characterized in that the method comprises the following steps:
1) Carrying out space color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space;
2) Carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and then carrying out step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image; the filtering model is a trained spatial recursive filtering model, whose operational formula is given as an image (Figure FDA0003786880280000011), where g(x, y) and f(x, y) are respectively the processed image and the original image, h and H are respectively the low and high contrast values of the image-space recursion, and the symbol shown in Figure FDA0003786880280000012 is the orthonormal image basis used in the recursive processing;
3) Performing binary image conversion on the filtered image data of the intelligent assembly of the power transformation equipment, and outputting a space safety monitoring image or a fire contour image with a detection frame through a scale-refined boundary tracking algorithm.
2. The image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment according to claim 1, characterized in that: in the spatial color conversion method, the acquired image data of the intelligent assembly of the power transformation equipment is input into a polar coordinate system with depth in three spatial directions, in which the red, green and blue (R, G, B) colors are correspondingly mixed and crossed; when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation, so that the spatial color conversion expresses the original information of the monitored multi-dimensional spatial object; and then, through gray-scale processing, the gray value of each pixel point is stored in one byte.
3. The image analysis boundary tracking representation method for an intelligent assembly of power transformation equipment according to claim 1, characterized in that the scale-refined boundary tracking algorithm in step 3) comprises:
(1) carrying out scale-refined segmentation processing on the external contour and the internal edges of the image data;
(2) setting the window image as a binary image, and setting in the binary image an outer frame with a width of p pixels and a value of p-1, where p is a natural number;
(3) marking a point with pixel value p in the window image as b0 and a point with value p-1 in the window image as c0; searching m adjacent points in sequence from c0 to b0, where m is a natural number; marking the first point with pixel value p as b1 and, in the same way, the point with value p-1 as c1; initializing b = b1 and c = c1;
(4) detecting the m neighbourhoods clockwise from c around b, marked k1, …, km; the subsequently retrieved point with value p is marked kn, and, as in the previous step, b = kn and c = k(n-1);
(5) looping the previous step until b stops at the position b0, and then retrieving the next boundary point b1.
4. A boundary tracking representation device for intelligent assembly image analysis of power transformation equipment is characterized in that: the device comprises:
the conversion module is used for carrying out space color conversion on the acquired image data of the intelligent assembly of the power transformation equipment to obtain an image space;
the processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment using multiple scales to obtain window images with accurate high-low contrast ranges, and then carrying out step-by-step recursive filtering model processing from high-contrast to low-contrast images within each window image; the filtering model is a trained spatial recursive filtering model, whose operational formula is given as an image (Figure FDA0003786880280000021), where g(x, y) and f(x, y) are respectively the processed image and the original image, h and H are respectively the low and high contrast values of the image-space recursion, and the symbol shown in Figure FDA0003786880280000022 is the orthonormal image basis used in the recursive processing;
and the output module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment, and outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
5. The apparatus according to claim 4, wherein the apparatus comprises: the conversion module includes:
the input module is used for inputting the acquired image data of the intelligent assembly of the power transformation equipment to a polar coordinate system with three spatial depths;
the polar coordinate system conversion module is used for carrying out spatial color conversion on the original information of the multi-dimensional spatial object in the image data in the polar coordinate system, in which the red, green and blue (R, G, B) colors are correspondingly mixed and crossed, and in which, when one axis rotates along the circumferential direction, the represented color is converted correspondingly for every specified angle of rotation;
and the gray processing module is used for carrying out gray processing on the image data after the space color conversion to obtain an image space.
6. The apparatus according to claim 4, wherein the apparatus comprises: the processing module comprises:
the multi-scale layering processing module is used for carrying out multi-scale layered contrast processing on the image space of the image data of the intelligent assembly of the power transformation equipment to obtain window images with accurate high-low contrast ranges;
and the step-by-step recursive filtering model processing module is used for performing step-by-step recursive filtering model processing from the high-contrast image to the low-contrast image in each window image.
7. The apparatus according to claim 4, wherein the apparatus comprises: the output module includes:
the binary image conversion module is used for performing binary image conversion on the filtered intelligent assembly image data of the power transformation equipment;
and the boundary tracking operation module is used for outputting a space safety monitoring image or a fire image outline image with a detection frame through a boundary tracking algorithm with scale refinement.
8. An apparatus, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 3.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 3.
CN202011150405.5A 2020-10-23 2020-10-23 Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment Active CN112241973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011150405.5A CN112241973B (en) 2020-10-23 2020-10-23 Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011150405.5A CN112241973B (en) 2020-10-23 2020-10-23 Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment

Publications (2)

Publication Number Publication Date
CN112241973A CN112241973A (en) 2021-01-19
CN112241973B true CN112241973B (en) 2022-11-25

Family

ID=74169576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011150405.5A Active CN112241973B (en) 2020-10-23 2020-10-23 Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment

Country Status (1)

Country Link
CN (1) CN112241973B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103512569A (en) * 2013-09-29 2014-01-15 北京理工大学 Discrete wavelet multiscale analysis based random error compensation method for MEMS (Micro Electro Mechanical system) gyroscope
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network
CN111047570A (en) * 2019-12-10 2020-04-21 西安中科星图空间数据技术有限公司 Automatic cloud detection method based on texture analysis method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6847737B1 (en) * 1998-03-13 2005-01-25 University Of Houston System Methods for performing DAF data filtering and padding
EP2026278A1 (en) * 2007-08-06 2009-02-18 Agfa HealthCare NV Method of enhancing the contrast of an image.
CN103942758B (en) * 2014-04-04 2017-02-15 中国人民解放军国防科学技术大学 Dark channel prior image dehazing method based on multiscale fusion
CN104361571B (en) * 2014-11-21 2017-05-10 南京理工大学 Infrared and low-light image fusion method based on marginal information and support degree transformation
CN109191432B (en) * 2018-07-27 2021-11-30 西安电子科技大学 Remote sensing image cloud detection method based on domain transformation filtering multi-scale decomposition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103512569A (en) * 2013-09-29 2014-01-15 北京理工大学 Discrete wavelet multiscale analysis based random error compensation method for MEMS (Micro Electro Mechanical system) gyroscope
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network
CN111047570A (en) * 2019-12-10 2020-04-21 西安中科星图空间数据技术有限公司 Automatic cloud detection method based on texture analysis method

Also Published As

Publication number Publication date
CN112241973A (en) 2021-01-19

Similar Documents

Publication Publication Date Title
CN111260616A (en) Insulator crack detection method based on Canny operator two-dimensional threshold segmentation optimization
CN110097522B (en) Single outdoor image defogging method based on multi-scale convolution neural network
CN109448009A (en) Infrared Image Processing Method and device for transmission line faultlocating
CN110660065B (en) Infrared fault detection and identification algorithm
CN112308872B (en) Image edge detection method based on multi-scale Gabor first derivative
CN108830883B (en) Visual attention SAR image target detection method based on super-pixel structure
CN112861654A (en) Famous tea picking point position information acquisition method based on machine vision
CN111738338B (en) Defect detection method applied to motor coil based on cascaded expansion FCN network
CN112070717A (en) Power transmission line icing thickness detection method based on image processing
CN111861866A (en) Panoramic reconstruction method for substation equipment inspection image
CN110751667B (en) Method for detecting infrared dim and small targets under complex background based on human visual system
CN108154496B (en) Electric equipment appearance change identification method suitable for electric power robot
CN106203536B (en) Feature extraction and detection method for fabric defects
CN115909028A (en) High-voltage isolating switch state identification method based on gradient image fusion
CN116311201A (en) Substation equipment state identification method and system based on image identification technology
CN110047041A (en) A kind of empty-frequency-domain combined Traffic Surveillance Video rain removing method
CN112241973B (en) Image analysis boundary tracking representation method and device for intelligent assembly of power transformation equipment
CN115700737A (en) Oil spill detection method based on video monitoring
Li et al. A study of crack detection algorithm
Khan et al. Shadow removal from digital images using multi-channel binarization and shadow matting
Chaudhary et al. A comparative study of fruit defect segmentation techniques
CN111401275B (en) Information processing method and device for identifying grassland edge
CN111932470A (en) Image restoration method, device, equipment and medium based on visual selection fusion
CN111932469A (en) Significance weight quick exposure image fusion method, device, equipment and medium
CN118135141B (en) Pore three-dimensional reconstruction method and system based on rock image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant