CN112164086A - Refined image edge information determining method and system and electronic equipment - Google Patents


Info

Publication number
CN112164086A
CN112164086A (application number CN202011086872.6A)
Authority
CN
China
Prior art keywords
image
gradient
value
amplitude
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011086872.6A
Other languages
Chinese (zh)
Inventor
周庆
刘德凯
贺苏宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huayan Intelligent Technology Group Co ltd
Original Assignee
Huayan Intelligent Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huayan Intelligent Technology Group Co ltd filed Critical Huayan Intelligent Technology Group Co ltd
Priority to CN202011086872.6A priority Critical patent/CN112164086A/en
Publication of CN112164086A publication Critical patent/CN112164086A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Abstract

The invention provides a refined image edge information determining method, a refined image edge information determining system and electronic equipment, and relates to the field of digital image processing. The method comprises the steps of firstly converting a monitored power grid equipment image into a first gray image, then carrying out filtering operation on the first gray image to obtain a second gray image, carrying out value domain mapping conversion and amplitude normalization inversion processing on the second gray image to generate a third gray image, and after obtaining an amplitude gradient image and a direction gradient image corresponding to the third gray image, determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value to be used for extracting image edge pixels. And carrying out secondary calculation on the image edge pixels according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value to obtain the pixel values of the image edges. The method improves the information quantity and accuracy of the edge amplitude and direction information in the power grid equipment image, and improves the availability of the image edge information.

Description

Refined image edge information determining method and system and electronic equipment
Technical Field
The present invention relates to the field of digital image processing technologies, and in particular, to a method and a system for determining refined image edge information, and an electronic device.
Background
The working state of the power grid equipment directly influences whether the power transmission and transformation network can run safely, so that the real-time acquisition of the working state of the power grid equipment is important for the safety control of the power transmission and transformation line, and the current physical state monitoring process of the power grid equipment is mainly realized by means of images (or videos) acquired by a video recording device. Because the power transmission and transformation grid relates to numerous grid equipment and comprises a plurality of cables, and the brightness of the collected images is very easily influenced by changing weather, the difficulty of accurately grasping the running state of the grid equipment by a conventional detection algorithm is increased.
In the prior art, an image edge extraction algorithm is adopted for detecting the power grid equipment, but the existing edge extraction algorithm has the problems that the detected edge amplitude and direction information are not accurate enough, refined edge information is lacked and the like, and the monitoring effect of the power grid equipment is reduced.
Disclosure of Invention
In view of this, the present invention provides a method, a system, and an electronic device for determining refined image edge information, so as to improve accuracy of edge amplitude and direction information in an image of a power grid device, improve usability of the image edge information, and provide refined and rich device state change feature information for subsequent feature coding and state analysis.
In a first aspect, an embodiment of the present invention provides a method for determining refined image edge information, where the method is applied to a power grid device state monitoring image, and includes:
acquiring a digital color image of monitored power grid equipment, and converting the digital color image into a first gray image;
carrying out filtering operation on the first gray level image to obtain a second gray level image, and carrying out value domain mapping conversion and amplitude normalization and inversion processing on the second gray level image to generate a third gray level image;
acquiring an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein, the high threshold value and the low threshold value are used for extracting pixels of the image edge;
and carrying out secondary calculation on pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge.
In some embodiments, the above process of converting the digital color image into the first grayscale image is implemented by the following equation:
B(x,y) = 0.299·Ir(x,y) + 0.587·Ig(x,y) + 0.114·Ib(x,y),
wherein B(x,y) is the first grayscale image; Ir(x,y), Ig(x,y), and Ib(x,y) are the image components of the red, green, and blue channels, respectively, in the digital color image.
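The conversion above can be sketched in NumPy; the function name `to_grayscale` is illustrative, and the weights used are the standard luma coefficients:

```python
import numpy as np

def to_grayscale(img_rgb):
    """Weighted sum of the R, G, B channel intensities, yielding a
    grayscale image in the same 0-255 value range as the inputs."""
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Because the three weights sum to 1, a pure white pixel (255, 255, 255) maps to gray value 255, preserving the 0-255 range.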
In some embodiments, the above process of performing a filtering operation on the first grayscale image to obtain a second grayscale image includes:
initializing a 7 × 7 two-dimensional Gaussian filter matrix;
performing convolution filtering on the Gaussian filter and the first gray level image to obtain a second gray level image; the above process is realized by the following formula:
f(x,y)=h(x,y,1)*B(x,y),
wherein h(x,y,σ) is the two-dimensional Gaussian kernel
h(x,y,σ) = (1/(2πσ²))·exp(−(x² + y²)/(2σ²)),
sampled on the 7 × 7 grid with σ = 1 and normalized so that its coefficients sum to 1;
f (x, y) is a second gray scale image; b (x, y) is the first grayscale image.
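A minimal NumPy sketch of this smoothing step, assuming the standard two-dimensional Gaussian h(x,y,σ) = exp(−(x²+y²)/2σ²)/(2πσ²) sampled on a 7 × 7 grid and normalized to unit sum (the function names are illustrative; production code would typically use a library convolution):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.0):
    # Sample the 2-D Gaussian on a size x size grid centred at 0,
    # then normalise so the coefficients sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(image, size=7, sigma=1.0):
    # Plain same-size convolution with edge replication at the borders.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

Normalizing the kernel to unit sum keeps the overall brightness unchanged: filtering a constant image returns the same constant.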
In some embodiments, the above process of performing the value domain mapping conversion and the amplitude normalization inversion process on the second grayscale image to generate the third grayscale image includes:
linearly mapping the range of the pixel value of the second gray scale image from 0-255 to a numerical range of 0.00-1.00 to obtain a value domain mapping conversion map of the second gray scale image;
acquiring a histogram of the value domain mapping conversion map with 0.01 as the bin width, and determining the amplitude position holding the most pixels in the histogram; if the value at this position is greater than or equal to 0.50, the second grayscale image was captured at night; if the value at this position is less than 0.50, the second grayscale image was captured in the daytime;
carrying out amplitude normalization processing on the value domain mapping conversion map, and carrying out inversion processing on a second gray scale map shot in the daytime to generate a third gray scale image with a uniform background; the formula used in this step is as follows:
C(x,y) = f(x,y) / max f(x,y)        (night image),
C(x,y) = 1 − f(x,y) / max f(x,y)    (daytime image),
wherein C(x,y) is the third grayscale image and f(x,y) is the value domain mapping conversion map of the second grayscale image.
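This decision-and-inversion step might be sketched as follows; note that the day/night rule follows the text above literally (histogram peak below 0.50 ⇒ daytime ⇒ invert), and the function name is illustrative:

```python
import numpy as np

def value_map_and_invert(gray):
    # Linearly map pixel values from 0-255 into [0.00, 1.00].
    f = gray.astype(float) / 255.0
    # Histogram with 0.01-wide bins; the bin holding the most pixels
    # decides whether the image was captured by day or by night.
    hist, edges = np.histogram(f, bins=100, range=(0.0, 1.0))
    peak_amplitude = edges[np.argmax(hist)]
    # Amplitude normalisation: ratio against the maximum pixel value.
    f = f / f.max()
    # Per the rule above, a peak below 0.50 marks a daytime image,
    # which is inverted to give a uniform background.
    if peak_amplitude < 0.50:
        f = 1.0 - f
    return f
```

After this step both day and night images share a comparable background, so a single set of gradient thresholds can be applied downstream.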
In some embodiments, the acquiring of the amplitude gradient image and the direction gradient image corresponding to the third grayscale image includes:
calculating gradient values of 8 neighborhood pixels of each pixel of the third gray level image;
selecting the maximum gradient value in the gradient values of 8 adjacent pixels as the gradient value of the pixel, and taking the direction corresponding to the maximum gradient value as the gradient direction of the pixel;
determining an amplitude gradient image according to the maximum gradient value of each pixel of the third gray image; and determining a direction gradient image according to the direction of the maximum gradient value of each pixel corresponding to the third gray image.
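One possible NumPy reading of this 8-neighborhood rule, with the gradient toward each neighbour taken as the absolute intensity difference (the function name and the neighbour ordering are illustrative assumptions):

```python
import numpy as np

# Offsets of the 8 neighbours, indexed 0..7 (E, NE, N, NW, W, SW, S, SE).
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def gradient_images(img):
    """Amplitude image = maximum |difference| to the 8 neighbours;
    direction image = index of the neighbour giving that maximum.
    Border pixels are handled by edge replication."""
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    diffs = np.stack([np.abs(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - img)
                      for dy, dx in OFFSETS])
    amplitude = diffs.max(axis=0)
    direction = diffs.argmax(axis=0)
    return amplitude, direction
```

Both outputs are computed directly from the intensities of the original (third) grayscale image, matching the document's point that the gradient data is primary rather than derived information.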
In some embodiments, the determining the high threshold and the low threshold of the amplitude gradient image according to the preset threshold includes:
taking 0.01 as a step size unit, obtaining a histogram of the amplitude gradient image from 0.00 to 1.00 amplitude;
setting the low threshold to 0.07; then, starting from amplitude 1.00 and stepping down through the histogram in decrements of 0.01 while accumulating pixel counts, the first amplitude at which the accumulated count exceeds the weight coefficient 0.4 multiplied by the number of gradient pixels above the low threshold is taken as the high threshold.
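One reading of this rule, sketched in NumPy (the interpretation — accumulate histogram counts downward from 1.00 until they exceed 0.4 × the number of gradient pixels above the low threshold — is an editorial assumption, as is the function name):

```python
import numpy as np

def high_low_thresholds(amplitude, low=0.07, weight=0.4):
    # Histogram of the amplitude gradient image, 0.00-1.00 in 0.01 steps.
    hist, edges = np.histogram(amplitude, bins=100, range=(0.0, 1.0))
    # The weight coefficient times the count of gradient pixels above
    # the low threshold gives the count we must accumulate to.
    target = weight * np.count_nonzero(amplitude > low)
    accumulated = 0
    for i in range(99, -1, -1):          # walk down from 1.00 to 0.00
        accumulated += hist[i]
        if accumulated > target:
            return edges[i], low         # (high threshold, low threshold)
    return low, low                      # degenerate image: no high band
```

Choosing the high threshold from the histogram rather than as a fixed value lets it adapt to the contrast of each captured image.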
In some embodiments, the above process of performing secondary calculation on the image edge pixels according to the amplitude gradient image, the direction gradient image, the high threshold and the low threshold includes:
if the amplitude gradient value of the pixel to be detected in the amplitude gradient image is not larger than the low threshold value, the pixel to be detected is not the image edge, and the gradient value of the pixel to be detected is set to be 0;
if the amplitude gradient value of the pixel to be detected in the amplitude gradient image is not smaller than the high threshold value, the pixel to be detected is an image edge, and the amplitude gradient value of the pixel to be detected is the gradient value of the pixel to be detected;
if the amplitude gradient value of the pixel to be detected in the amplitude gradient image is larger than the low threshold and smaller than the high threshold, the gradient direction of the pixel to be detected is obtained from the direction gradient image, and the pixel is compared with the 2 neighborhood pixels lying along that same direction to determine its gradient value.
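The three rules above might be sketched per-pixel as follows; comparing a mid-band pixel with the two neighbours lying along (and opposite to) its gradient direction is one plausible reading of the patent text, and all names are illustrative:

```python
import numpy as np

# Neighbour offsets for the 8 gradient directions (E, NE, N, NW, W, SW, S, SE).
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def edge_pixel_value(amp, direction, y, x, low, high):
    """Secondary decision for one interior pixel of the amplitude
    gradient image `amp`, using the direction gradient image
    `direction` (values 0..7 indexing OFFSETS)."""
    g = amp[y, x]
    if g <= low:
        return 0.0                       # background: not an edge pixel
    if g >= high:
        return g                         # strong edge: keep its gradient value
    # Mid-band pixel: keep it only if it is not dominated by the two
    # neighbours along its own gradient direction.
    dy, dx = OFFSETS[direction[y, x]]
    n1, n2 = amp[y + dy, x + dx], amp[y - dy, x - dx]
    return g if g >= n1 and g >= n2 else 0.0
```

Unlike a hard double-threshold decision, the mid-band branch retains the pixel's actual gradient value when it survives the directional comparison, so weak but locally dominant edges are not discarded.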
In a second aspect, an embodiment of the present invention provides a refined image edge information determining system, which is applied to a power grid device state monitoring image, and the system includes:
the first image conversion module is used for acquiring a digital color image of the monitored power grid equipment and converting the digital color image into a first gray image;
the second image conversion module is used for carrying out filtering operation on the first gray level image to obtain a second gray level image, and carrying out value domain mapping conversion and amplitude normalization reversal processing on the second gray level image to generate a third gray level image;
the third image conversion module is used for acquiring an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein, the high threshold value and the low threshold value are used for extracting pixels of the image edge;
and the edge determining module is used for carrying out secondary calculation on the pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge.
In a third aspect, an embodiment of the present invention further provides an electronic device, including: a processor and a memory; the memory has stored thereon a computer program which, when executed by the processor, implements the steps of the refined image edge information determination method mentioned in any of the possible embodiments of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, where the computer program, when executed by a processor, implements the steps of the refined image edge information determining method mentioned in any possible implementation manner of the first aspect.
The embodiment of the invention has the following beneficial effects:
the invention provides a refined image edge information determining method, a refined image edge information determining system and electronic equipment, which are applied to a power grid equipment state monitoring image. Then, filtering the first gray level image to obtain a second gray level image, performing value domain mapping conversion and amplitude normalization inversion processing on the second gray level image to generate a third gray level image, obtaining an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein, the high threshold value and the low threshold value are used for extracting pixels of the image edge. And finally, carrying out secondary calculation on the pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge. The method has the unique characteristics that primary information carried by an original image is used as much as possible in the implementation process, secondary information processed on the basis of the primary information is not used, the error of edge detection is reduced, the accuracy of edge amplitude and direction information in the image of the power grid equipment is improved, the usability of the image edge information is improved, refined and rich equipment state change characteristic information is provided for subsequent characteristic coding and state analysis, and the monitoring quality of the power grid equipment is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention as set forth above.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a method for determining refined image edge information according to an embodiment of the present invention;
fig. 2 is a flowchart of a process of performing value-domain mapping conversion and amplitude normalization inversion processing on a second grayscale image to generate a third grayscale image in the refined image edge information determining method according to the embodiment of the present invention;
FIG. 3 is a comparison diagram of the values before and after the value domain mapping transformation and the amplitude normalization inversion processing according to the embodiment of the present invention;
fig. 4 is a flowchart of a process of obtaining an amplitude gradient image and a direction gradient image corresponding to a third gray scale image in the refined image edge information determining method according to the embodiment of the present invention;
fig. 5 is a schematic diagram of a position structure of a pixel to be calculated and surrounding pixels in the refined image edge information determining method according to the embodiment of the present invention;
fig. 6 is a flowchart illustrating a process of determining a high threshold and a low threshold of an amplitude gradient image according to a preset threshold in the refined image edge information determining method according to the embodiment of the present invention;
fig. 7 is a flowchart illustrating a process of performing secondary calculation on pixels of an image edge according to an amplitude gradient image, a direction gradient image, a high threshold and a low threshold in the refined image edge information determining method according to the embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a refined image edge information determining system according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Icon:
810-a first image conversion module; 820-a second image conversion module; 830-a third image conversion module; 840-an edge determination module; 101-a processor; 102-a memory; 103-a bus; 104-communication interface.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The power environment refers to the physical space areas specially used for electric energy transmission and conversion, such as high-voltage transmission line corridors, transformer substations and power distribution networks; a large number of transmission lines of different forms are erected in these areas, and various forms of power equipment and devices are installed in them. The appearance state changes of devices such as circuit breakers and disconnectors directly reflect the opening and closing states of the electrical apparatus, while the appearance color changes of PTs (Potential Transformers) and CTs (Current Transformers) and the presence of oil leakage stains reflect whether the devices are in a healthy or sub-healthy state. Therefore, power departments install monitoring equipment in important equipment areas to monitor the appearance state changes of power grid equipment around the clock, and upload the collected video (images) to a dispatching center and a power grid operation inspection management and control platform, where personnel of the relevant departments can retrieve, view, and use them as evidence at any time.
However, this conventional operation mode can no longer meet the requirements of smart grid development. On the one hand, the number and types of monitored power equipment are so large that the existing dispatching and operation-inspection personnel cannot keep up; on the other hand, manual detection and filtering of massive videos and images cannot be accurate and real-time, and equipment state judgments based entirely on experience lack a uniform measurement scale. With the continuous progress of artificial intelligence and machine vision technology, the real-time state monitoring of power grid equipment can instead be addressed by intelligent detection techniques, replacing manual work as far as possible and operating around the clock.
The state monitoring of the power grid equipment is different from a common application scene, and has three main characteristics:
1. the monitored area has many overhead cables, towers and electrical equipment with different sizes and thicknesses, and the overhead cables, the towers and the electrical equipment are mixed with the monitored equipment into a whole, so that the monitoring difficulty is increased.
2. The changes of various weather and sunshine change the definition and contrast of the video (image) at any moment, and cause the inconsistency of the background scene of the collected video (image).
3. At a specific scene position, a special camera or a card machine is installed for each monitored object, and the state information of the monitored equipment is acquired within 24 hours.
From characteristics 1 and 2, it can be known that adverse factors such as complex background and inconsistent light exist in the power grid equipment state monitoring process, and the monitoring effect of the power grid equipment is influenced.
There are many algorithms for image preprocessing and image edge detection, and the common algorithm in the prior art is Canny algorithm, which comprises 4 steps:
1. smoothing the original image by using a Gaussian filter to reduce the influence of noise on the detection performance;
2. calculating the gradient amplitude and the gradient direction of each pixel point in the image by using the finite difference of the first-order partial derivatives in the horizontal direction and the vertical direction;
3. performing gradient amplitude non-maximum suppression based on adjacent pixel direction information;
4. and detecting and connecting edge pixel points by using a double-threshold algorithm.
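Step 2 of this classical pipeline can be illustrated with plain finite differences (a simplified sketch; practical Canny implementations typically use smoothed derivative filters):

```python
import numpy as np

def canny_gradients(img):
    """Gradient magnitude and direction from finite differences of the
    first-order partial derivatives in the horizontal and vertical
    directions (the simplest discrete form of Canny's step 2)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal finite difference
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical finite difference
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)              # gradient direction in radians
    return magnitude, angle
```

Note that this yields only two directional components per pixel, which is one root of the accuracy limitations the following paragraphs discuss.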
The detection performance of the Canny algorithm meets the application requirements in many scenarios, but falls short of the high accuracy required for 24-hour, all-weather power grid equipment state monitoring under arbitrary meteorological conditions. Analysis of power grid equipment images in the power environment shows that the detected edge amplitude and direction information are not accurate enough and that refined edge information is lacking. These problems stem from two aspects of the algorithm. One is the definition of an edge: strictly defined, an edge is a part of the image where the brightness changes abruptly; more loosely defined, it is a part of a local area where the brightness changes significantly. Edge information thus has high and low levels, and if the algorithm extracts only the parts with larger jumps while discarding those with smaller jumps, the edge information is likely to be incomplete. The other aspect concerns the algorithm itself: the gradient value and gradient direction it obtains are not necessarily the maximum value at each pixel and its corresponding direction; non-maximum suppression may discard edge pixels with smaller gradient values but larger curvature changes; and hard-decision double-threshold edge detection can further lose weak edge information. All of this lost edge information degrades subsequent processing and reduces the accuracy of the final decision.
In summary, the existing image edge detection algorithm has the problems that the gradient amplitude and the direction information of the detected image edge are not accurate enough, refined edge information is lacked, and the like, so that the accuracy of monitoring the state of the power grid equipment is influenced, and potential safety hazards exist in the state monitoring of unattended power grid online equipment based on artificial intelligence and machine vision technology.
Based on this, the embodiment of the invention provides a refined image edge information determining method, a refined image edge information determining system and electronic equipment, so as to improve the accuracy of edge amplitude and direction information in an image of a power grid device, improve the usability of image edge information and provide refined and rich equipment state change characteristic information for subsequent characteristic coding and state analysis.
For the understanding of the present embodiment, a detailed description will be first given of a refined image edge information determining method disclosed in the present embodiment.
Referring to a flowchart of a refined image edge information determining method shown in fig. 1, the method is applied to a power grid equipment state monitoring image, and includes the following steps:
step S101, acquiring a digital color image of the monitored power grid equipment, and converting the digital color image into a first gray image.
The digital color image source can be a card machine or a video camera specially set up for shooting monitored electric equipment, and can also be a color image sample shot by a handheld camera. Generally, the resolution of the captured image is not lower than 1280 × 720.
The purpose of converting the digital color image into the first gray scale image is to simplify the data processing amount of the image edge information detection process, and the implementation process can obtain the gray scale image with the gray scale value ranging from 0 to 255 by performing weighted summation on R, G, B channel intensities of pixels of the color image.
And S102, performing filtering operation on the first gray level image to obtain a second gray level image, and performing value domain mapping conversion and amplitude normalization and inversion processing on the second gray level image to generate a third gray level image.
The purpose of obtaining the second gray level image by filtering the first gray level image is to reduce the influence of noise in the image on the extraction and analysis of the equipment state characteristics. The parameters selected during the filtering operation should maintain the integrity of the image edge information as much as possible, in addition to the noise elimination.
The purpose of performing value domain mapping conversion on the second grayscale image is to unify the value domain for subsequent data processing; it is also a data normalization operation. With values mapped into the 0-1 range at 0.01 precision, 0.00 represents the darkest value and 1.00 the whitest.
Before the amplitude normalization inversion processing, whether the image is inverted or not needs to be judged, and a specific implementation process can be executed by combining the histogram of the second gray level image.
The normalization processing takes the pixel with the maximum gray value in the grayscale image as a reference value and divides every other pixel value by it. After normalization, the white-sky (daytime) images are inverted according to the threshold decision, providing a uniform background condition for subsequent processing.
Step S103, obtaining an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein, the high threshold value and the low threshold value are used for extracting pixels of the image edge.
The amplitude gradient image and the direction gradient image are obtained by calculating gradient values and gradient directions over the 8-neighborhood pixels of each pixel in the third grayscale image. In the calculation process, the gradient value of each direction is characterized by the maximum gradient value among its 3 adjacent sub-directions; after the gradient values of all 8 directions are obtained, the maximum gradient value and its corresponding gradient direction are selected as the gradient data of the pixel, yielding the final amplitude gradient image and direction gradient image. Thus all gradient values and gradient directions come from the primary information carried by the original image, with no secondary processing performed on top of it, so the information of the original image is expressed more faithfully.
When setting the low threshold of the amplitude gradient image, it must be set in combination with the amplitude gradient image histogram; the low threshold bounds the gradient amplitude of the pixels in the image background area. When setting the high threshold, the low threshold and the histogram must be combined, following the principle of preferring false positives to misses, since a missed pixel means its information is absent from the subsequent processing, which would impair the final edge identification.
And step S104, carrying out secondary calculation on the pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge.
The amplitude gradient image contains the gradient values of the image pixels, and the direction gradient image contains their gradient directions. For a pixel whose amplitude gradient lies between the high and low thresholds, whether it is an edge pixel must be decided with the help of the direction gradient image: specifically, it is compared with the two neighborhood pixels lying along its own gradient direction, realizing the secondary calculation. Finally, the edge pixel information of the whole image is determined by applying this process to every pixel.
According to the method provided by the embodiment, the primary information carried by the original image is completely used in the image edge information determining process, and the secondary information processed on the basis of the primary information is not used, so that the error of edge detection can be obviously reduced, the accuracy of the edge amplitude and direction information in the power grid equipment image is improved, the usability of the image edge information is improved, the refined and rich equipment state change characteristic information is provided for the subsequent characteristic coding and state analysis, and the monitoring quality of the power grid equipment is improved.
In some embodiments, the above process of converting the digital color image into the first grayscale image is implemented by the following equation:
B(x,y)=0.299Ir(x,y)+0.578Ig(x,y)+0.114Ib(x,y),
wherein B(x, y) is the first grayscale image; Ir(x, y), Ig(x, y), Ib(x, y) are the image components of the red, green, and blue channels, respectively, in the digital color image.
In general, the coefficients of Ir(x, y), Ig(x, y), Ib(x, y) can be adjusted dynamically according to the scene, under the constraint that their sum should be 1. In the above formula the coefficients sum to 0.991 due to rounding; this effect is negligible.
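As a minimal sketch, the weighted-sum conversion above can be written as follows (NumPy; the H x W x 3 array layout and the helper name to_grayscale are illustrative assumptions):

```python
import numpy as np

def to_grayscale(rgb):
    """Weighted-sum grayscale conversion B(x,y) = 0.299*Ir + 0.578*Ig + 0.114*Ib,
    using the coefficients given in the text (their sum is 0.991 due to rounding)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.578 * g + 0.114 * b

# example: a 2x2 color image with float channel values in [0, 255]
img = np.array([[[255.0, 255.0, 255.0], [0.0, 0.0, 0.0]],
                [[255.0, 0.0, 0.0], [0.0, 255.0, 0.0]]])
gray = to_grayscale(img)
```

With dynamically adjusted coefficients, only the three constants would change; the structure of the computation stays the same.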
In some embodiments, the above process of performing a filtering operation on the first grayscale image to obtain a second grayscale image includes:
initializing a 7 x 7 two-dimensional Gaussian filter matrix;
performing convolution filtering on the Gaussian filter and the first gray level image to obtain a second gray level image; the above process is realized by the following formula:
f(x,y)=h(x,y,1)*B(x,y),
wherein * denotes two-dimensional convolution and the Gaussian kernel is

h(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)),
f (x, y) is a second gray scale image; b (x, y) is the first grayscale image.
Strictly speaking, the Gaussian kernel h(x, y, σ) also contains a parameter σ; in this embodiment σ = 1. The σ parameter is set so that the detected edge positions remain as accurate as possible while noise filtering is still taken into account: in theory, the larger the value of σ, the better the filtering effect, but the side effect is that edge positions become blurred. The 7 x 7 two-dimensional matrix is selected based on the characteristics of the power scenario.
In some embodiments, the above process of performing the value domain mapping conversion and the amplitude normalization inversion process on the second grayscale image to generate the third grayscale image, as shown in fig. 2, includes:
step S201, linearly mapping the range of the pixel values of the second gray scale image from 0-255 to a numerical range of 0.00-1.00 to obtain a value range mapping conversion map of the second gray scale image.
The linear mapping process is to map the integer interval of 0-255 to the decimal interval of 0.00-1.00; wherein 0.00 represents the darkest; 1.00 indicates the whitest.
Step S202, a histogram of the value domain mapping conversion map is obtained with 0.01 as the quantization unit, and the amplitude position with the most pixels in the histogram is determined; if the value at that position is greater than or equal to 0.50, the second gray scale map was shot at night; if it is less than 0.50, the second gray scale map was shot in the daytime.
Since the range of the value domain mapping conversion map is 0.00-1.00, quantizing with 0.01 as the unit yields a histogram with 100 levels in total. The level containing the most pixels is located among these 100 levels: if its value is greater than or equal to 0.50, the image was shot at night; if it is less than 0.50, it was shot in the daytime.
And step S203, carrying out amplitude normalization processing on the value domain mapping conversion map, and carrying out inversion processing on the second gray scale map shot in the daytime to generate a third gray scale image with a uniform background.
The formula used in this step is as follows:
C(x, y) = (f(x, y) − f_min)/(f_max − f_min), for images shot at night,
C(x, y) = 1 − (f(x, y) − f_min)/(f_max − f_min), for images shot in the daytime,
with f_min and f_max the minimum and maximum values of f(x, y);
wherein C (x, y) is a third grayscale image; f (x, y) is a value domain mapping transformation map of the second gray scale image.
To show the result of the value domain mapping conversion and the amplitude normalization and inversion processing, refer to the before/after comparison shown in fig. 3. It can be seen that after an image shot at night undergoes the processing, the line edges still keep relatively sharp imaging; a daytime image, having additionally been inverted, shows line edges that are sharper and clearer than before processing. Thus, whether by day or by night, the brightness of the electrical equipment and cables in the processed gray scale image is higher than the background, which provides a uniform processing scene for the subsequent edge determination.
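Steps S201-S203 can be sketched as follows (NumPy). Two assumptions are labeled explicitly: the helper name normalize_and_invert is illustrative, and the "amplitude normalization" is simplified to the identity after the 0-1 mapping (a min-max stretch is one possible reading of the text, omitted here):

```python
import numpy as np

def normalize_and_invert(gray_u8):
    """Map [0,255] -> [0.00,1.00], classify day vs. night from the peak of a
    100-bin histogram, then invert daytime images so that equipment is always
    brighter than the background."""
    f = gray_u8.astype(float) / 255.0                 # value-domain mapping (S201)
    hist, edges = np.histogram(f, bins=100, range=(0.0, 1.0))
    peak = edges[np.argmax(hist)]                     # amplitude with most pixels (S202)
    is_night = peak >= 0.50
    return (f if is_night else 1.0 - f), is_night     # inversion for daytime (S203)
```

The day/night rule follows the text literally: a histogram peak at or above 0.50 is treated as a night image and left as-is.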
In some embodiments, the process of acquiring the amplitude gradient image and the direction gradient image corresponding to the third grayscale image, as shown in fig. 4, includes:
in step S401, gradient values of pixels in the 8 neighborhoods of each pixel of the third grayscale image are calculated.
The schematic diagram of the position structures of the pixel to be calculated and the surrounding pixels in this step is shown in fig. 5, the right-side pixel of the pixel (x, y) to be measured is used as a starting point, the pixels in the direction of the surrounding 8 neighborhoods are marked according to the counterclockwise direction sequence, the marking results are 0 °, 45 °, 90 °, 135 °, 180 °, 225 °, 270 °, 315 °, and the corresponding pixel position coordinates are shown in fig. 5, which are not described again.
Calculating gradient values of pixels in the 8 adjacent directions of the pixel to be calculated; each directional gradient value is taken as the maximum absolute gray-level difference over the direction itself and its two adjacent sub-directions:

dx,y(0°) = max{ |C(x+1, y-1) - C(x, y)|, |C(x+1, y) - C(x, y)|, |C(x+1, y+1) - C(x, y)| };
dx,y(45°) = max{ |C(x+1, y) - C(x, y)|, |C(x+1, y+1) - C(x, y)|, |C(x, y+1) - C(x, y)| };
dx,y(90°) = max{ |C(x+1, y+1) - C(x, y)|, |C(x, y+1) - C(x, y)|, |C(x-1, y+1) - C(x, y)| };
dx,y(135°) = max{ |C(x, y+1) - C(x, y)|, |C(x-1, y+1) - C(x, y)|, |C(x-1, y) - C(x, y)| };
dx,y(180°) = max{ |C(x-1, y+1) - C(x, y)|, |C(x-1, y) - C(x, y)|, |C(x-1, y-1) - C(x, y)| };
dx,y(225°) = max{ |C(x-1, y) - C(x, y)|, |C(x-1, y-1) - C(x, y)|, |C(x, y-1) - C(x, y)| };
dx,y(270°) = max{ |C(x-1, y-1) - C(x, y)|, |C(x, y-1) - C(x, y)|, |C(x+1, y-1) - C(x, y)| };
dx,y(315°) = max{ |C(x, y-1) - C(x, y)|, |C(x+1, y-1) - C(x, y)|, |C(x+1, y) - C(x, y)| };

wherein dx,y(0°), dx,y(45°), dx,y(90°), dx,y(135°), dx,y(180°), dx,y(225°), dx,y(270°), dx,y(315°) are the gradient values of the pixel to be calculated in the directions 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, respectively.
Step S402, selecting the maximum gradient value in the gradient values of 8 adjacent pixels as the gradient value of the pixel, and using the direction corresponding to the maximum gradient value as the gradient direction of the pixel.
Corresponding to the pixel marking result in step S401, the maximum of dx,y(0°), dx,y(45°), dx,y(90°), dx,y(135°), dx,y(180°), dx,y(225°), dx,y(270°), dx,y(315°) is taken as the gradient value of the pixel to be calculated, and the direction corresponding to this maximum gradient value is taken as the gradient direction of the pixel.
Step S403, determining the amplitude gradient image according to the maximum gradient value of each pixel of the third gray image, and determining the direction gradient image according to the direction corresponding to the maximum gradient value of each pixel.
The above calculation differs from existing algorithms in two respects. First, the gradient value in each direction is characterized by the maximum gradient value over its 3 adjacent sub-directions, and the gradient value and gradient direction of the pixel are characterized by the maximum gradient value among the 8 neighborhood directions and its corresponding direction. Second, all gradient values and gradient directions come from the primary information carried by the original image, so the precision is higher.
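Steps S401-S403 can be sketched as follows (NumPy). The assumption, consistent with the direction decision used in the later hysteresis step, is that the 3 adjacent sub-directions of a direction θ are θ - 45°, θ and θ + 45°; the name gradient_images is illustrative. Arrays are indexed [row, column], i.e. C[y, x]:

```python
import numpy as np

# Offsets (dx, dy) of the 8 neighbors, labeled counterclockwise
# starting from the right-hand pixel of (x, y).
OFFSETS = {0: (1, 0), 45: (1, 1), 90: (0, 1), 135: (-1, 1),
           180: (-1, 0), 225: (-1, -1), 270: (0, -1), 315: (1, -1)}

def gradient_images(C):
    """Return (amplitude image D1, direction image D2): each interior pixel
    keeps the maximum of the 8 directional gradient values and its direction."""
    H, W = C.shape
    D1 = np.zeros((H, W))
    D2 = np.zeros((H, W), dtype=int)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            best_v, best_d = -1.0, 0
            for ang in OFFSETS:
                # maximum absolute difference over the 3 adjacent sub-directions
                v = max(abs(C[y + OFFSETS[(ang + s) % 360][1],
                             x + OFFSETS[(ang + s) % 360][0]] - C[y, x])
                        for s in (-45, 0, 45))
                if v > best_v:
                    best_v, best_d = v, ang
            D1[y, x], D2[y, x] = best_v, best_d
    return D1, D2
```

Every number entering D1 and D2 is a direct gray-level difference of the filtered image, i.e. primary information, with no derived quantities in between.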
Specifically, for comparison with the gradient value and gradient direction obtained by the existing Canny algorithm: the prior-art Canny algorithm first calculates the first partial derivatives of the gray image C(x, y) along the X axis and the Y axis,

Gx(x, y) = [C(x+1, y-1) + 2C(x+1, y) + C(x+1, y+1)] - [C(x-1, y-1) + 2C(x-1, y) + C(x-1, y+1)],
Gy(x, y) = [C(x-1, y+1) + 2C(x, y+1) + C(x+1, y+1)] - [C(x-1, y-1) + 2C(x, y-1) + C(x+1, y-1)],

gradient value:
G(x, y) = sqrt(Gx(x, y)² + Gy(x, y)²),

gradient direction:
θ(x, y) = arctan(Gy(x, y)/Gx(x, y)).
it can be seen that the Canny algorithm obtains gradient values in the X-axis and Y-axis directions by comprehensive calculation using 6 pixels in the same direction, and then performs square sum evolution on the gradient values, that is, the gradient values of the pixel point are derived from primary information of the image. The gradient directions θ (x, y) are different and are processed from the primary information. If the gradient values are not accurate, the gradient directions derived therefrom are also inaccurate, which is technically referred to as error propagation. Clearly, the Canny algorithm differs significantly from this embodiment in calculating the gradient values and gradient directions.
A randomly selected group of experimental data from 5 daytime image samples of the power scene illustrates that the gradient values and gradient directions generated by the Canny algorithm are not accurate enough; the directional gradients are calculated by the method described in this embodiment.
Table 1. Ratio of pixels whose maximum gradient value lies in a direction other than the main direction (%)
The data in Table 1 show that the maximum gradient value of more than half of the pixels does not lie in the main direction; that is, if an algorithm uses only the gradient value in the main direction, at least half of the values are not maxima, and the gradient directions calculated from them are correspondingly inaccurate. In addition, the Canny algorithm derives its gradient value from the square sum of the X-axis and Y-axis gradients, which averages out the difference in magnitude between the two axes, whereas the purpose of edge detection is to determine the maximum gradient value of each pixel and its corresponding direction. From this point of view, the gradient values and directions given in this embodiment are more accurate.
In some embodiments, the above process of determining the high threshold and the low threshold of the amplitude gradient image according to the preset threshold, as shown in fig. 6, includes:
step S601, taking 0.01 as a step size unit, obtaining a histogram of the amplitude gradient image from 0.00 to 1.00 amplitude.
This step can be directly obtained by the value domain mapping transformation map of the second gray scale image generated in the previous step.
Step S602, setting the low threshold to 0.07; then, starting from 1.00, traversing the histogram downward in steps of 0.01, and taking as the high threshold the amplitude at which the accumulated pixel count exceeds the weight coefficient 0.4 multiplied by the number of pixels whose gradient is greater than the low threshold.
The above step involves two basic decisions. One is the choice of the low threshold TL = 0.07: after the image amplitude normalization, the amplitude distribution fills the 0.00-1.00 value range, so TL = 0.07 characterizes, approximately as a lower limit, the gradient amplitude of pixels in the image background area. The other is how to choose the high threshold TH: the algorithm declares the top 40% of the pixels with gradient values greater than TL, counted from high to low, to be edge pixels. This likewise embodies the principle that pixels may be over-included but must not be omitted, because an omission means that the information of those pixels is missing in the subsequent processing, which may impair the recognition rate.
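The threshold selection of step S602 can be sketched as follows (NumPy; the exact handling of ties at bin borders and the helper name thresholds are assumptions):

```python
import numpy as np

def thresholds(D1, t_low=0.07, w=0.4):
    """Fix T_L, then walk the 100-bin amplitude histogram downward from 1.00
    in 0.01 steps until the accumulated pixel count reaches w times the number
    of pixels whose gradient exceeds T_L; that amplitude is taken as T_H."""
    hist, edges = np.histogram(D1, bins=100, range=(0.0, 1.0))
    target = w * np.count_nonzero(D1 > t_low)
    acc = 0
    for i in range(99, -1, -1):          # from the 0.99-1.00 bin downward
        acc += hist[i]
        if acc >= target:
            return t_low, edges[i]
    return t_low, t_low                  # degenerate fallback
```

On a synthetic amplitude image whose values spread evenly over the bins, the returned T_H is the amplitude below which exactly the top 40% of above-T_L pixels lie.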
To test the filtering effect of TL and TH, the same test images as in Table 1 are used.
Table 2. Ratio of pixels with gradient values below TL or above TH to the total number of pixels (%)
The numbers in Table 2 show that roughly two thirds of the pixels are background pixels with gradient values below TL; filtering them out greatly reduces the data processing load of the subsequent steps. The pixels with gradient values above TH generally amount to no more than 15%, and they are the important objects of the subsequent analysis.
In some embodiments, the above process of performing secondary calculation on pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold and the low threshold, as shown in fig. 7, includes:
step S701, if the amplitude gradient value of the pixel to be detected in the amplitude gradient image is not less than the low threshold, the pixel to be detected is not an image edge, and the gradient value of the pixel to be detected is set to 0.
If the gradient value D of a certain pixel point in the amplitude gradient image1(x, y) is less than or equal to TLIf the pixel is not an edge point, the gradient value E (x, y) is set to 0.
Step S702, if the amplitude gradient value of the pixel to be examined in the amplitude gradient image is not less than the high threshold, the pixel is an image edge, and its amplitude gradient value is taken as its gradient value.
That is, if the gradient value D1(x, y) of a pixel in the amplitude gradient image satisfies D1(x, y) ≥ TH, the pixel is an edge point, and its gradient value E(x, y) is set to D1(x, y).
Step S703, if the amplitude gradient value of the pixel to be examined lies between the low threshold and the high threshold, the gradient direction of the pixel is obtained from the direction gradient image and compared with those of the 2 neighboring pixels lying in the same direction field, so as to determine the gradient value of the pixel.
That is, if the gradient value D1(x, y) of a pixel in the amplitude gradient image is greater than TL and less than TH, the gradient value E(x, y) of the point is determined from its gradient direction D2(x, y) and the gradient directions of the 2 neighboring pixels in the same direction field. The specific implementation is as follows:
if the gradient direction D2(x, y) of the point is 0°, then when pixel D2(x, y+1) or D2(x, y-1) points in any of the 3 directions 45°, 0°, 315°, the gradient value E(x, y) of the point is D1(x, y); otherwise E(x, y) is 0;
if D2(x, y) is 45°, then when pixel D2(x-1, y+1) or D2(x+1, y-1) points in any of the 3 directions 0°, 45°, 90°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 90°, then when pixel D2(x-1, y) or D2(x+1, y) points in any of the 3 directions 45°, 90°, 135°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 135°, then when pixel D2(x-1, y-1) or D2(x+1, y+1) points in any of the 3 directions 90°, 135°, 180°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 180°, then when pixel D2(x, y-1) or D2(x, y+1) points in any of the 3 directions 135°, 180°, 225°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 225°, then when pixel D2(x+1, y-1) or D2(x-1, y+1) points in any of the 3 directions 180°, 225°, 270°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 270°, then when pixel D2(x-1, y) or D2(x+1, y) points in any of the 3 directions 225°, 270°, 315°, E(x, y) = D1(x, y); otherwise E(x, y) = 0;
if D2(x, y) is 315°, then when pixel D2(x-1, y-1) or D2(x+1, y+1) points in any of the 3 directions 270°, 315°, 0°, E(x, y) = D1(x, y); otherwise E(x, y) = 0.
For pixels whose amplitude gradient D1(x, y) lies between TL and TH, the direction gradient image D2(x, y) must be combined to select and determine edge pixels, see fig. 2. Without loss of generality, assume the amplitude gradient D1(x, y) of a pixel lies between TL and TH and its gradient direction D2(x, y) points to 0°. The algorithm then checks the gradient directions D2(x, y-1) and D2(x, y+1) of the two adjacent pixels; if either of them is 0°, 45° or 315°, the gradient value E(x, y) of the point is D1(x, y), otherwise E(x, y) is 0. Here 45° and 315° are the nearest neighbor directions of 0°. The other 7 directions determine gradient values by the same criterion. By traversing all pixels of the image, the final edge detection image E(x, y) determined on the basis of the gradient values is obtained.
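The three-branch decision of steps S701-S703 can be sketched as follows (NumPy). The function name trace_edges and the modulo-based comparison of directions are illustrative assumptions; the neighbor table follows the eight cases enumerated above:

```python
import numpy as np

def trace_edges(D1, D2, t_low, t_high):
    """Pixels at or below T_L are dropped, pixels at or above T_H are kept,
    and in-between pixels are kept only when one of the two neighbors along
    the edge direction has a gradient direction within +/-45 degrees of this
    pixel's direction (the multi-direction decision)."""
    # edge-continuation neighbors (dx, dy) for each gradient direction
    ALONG = {0: [(0, 1), (0, -1)],    45: [(-1, 1), (1, -1)],
             90: [(-1, 0), (1, 0)],  135: [(-1, -1), (1, 1)],
             180: [(0, -1), (0, 1)], 225: [(1, -1), (-1, 1)],
             270: [(-1, 0), (1, 0)], 315: [(-1, -1), (1, 1)]}
    H, W = D1.shape
    E = np.zeros_like(D1)
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            g, d = D1[y, x], D2[y, x]
            if g <= t_low:
                continue                      # background pixel (S701)
            if g >= t_high:
                E[y, x] = g                   # strong edge pixel (S702)
                continue
            for dx, dy in ALONG[int(d)]:      # weak pixel: check direction (S703)
                diff = (int(D2[y + dy, x + dx]) - int(d)) % 360
                if diff in (0, 45, 315):
                    E[y, x] = g
                    break
    return E
```

The diff-in-(0, 45, 315) test is a compact way of writing "the neighbor's direction equals this pixel's direction or one of its two nearest neighbor directions".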
Still taking the 5 test images of the foregoing embodiment as examples, and for better comparison, Table 3 gives two sets of data: same-direction decision and multi-direction decision. The former means that the gradient direction of one of the two adjacent pixels must coincide with the gradient direction of the present pixel; the latter means that the gradient direction of one of the two adjacent pixels must lie within ±45° of that of the present pixel.
Table 3. Ratio of edge pixels to the total number of pixels (%), based on the combined decision over TL, TH, D1(x, y) and D2(x, y)
The test data in Table 3 show that the number of edge pixels detected with the help of the direction information of neighboring pixels is greater than the number detected by the high threshold TH alone; seen from another angle, this means that many edge pixels have small gradient values, and an algorithm that decides on the single parameter of gradient value would lose many weak edge pixels. The data in Table 3 also show that, compared with same-direction decision, multi-direction decision finds edge pixels with changing directions better, so that more edge pixels are available to the subsequent processing. This helps improve the recognition rate, which is the fundamental concern of power equipment state monitoring and a core index for evaluating whether an image recognition algorithm has practical value.
According to the method provided by the embodiment, the primary information carried by the original image is used as much as possible in the image edge information determining process, and the secondary information processed on the basis of the primary information is not used, so that the error of edge detection is reduced; the method improves the accuracy of the edge amplitude and the direction information in the power grid equipment image, improves the availability of the image edge information, provides refined and rich equipment state change characteristic information for subsequent characteristic coding and state analysis, and increases the monitoring quality of the power grid equipment.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a refined image edge information determining system, which is applied to a power grid device state monitoring image, and a schematic structural diagram of the system is shown in fig. 8, where the system includes:
the first image conversion module 810 is configured to obtain a digital color image of the monitored power grid device, and convert the digital color image into a first grayscale image;
the second image conversion module 820 is configured to perform a filtering operation on the first grayscale image to obtain a second grayscale image, and perform value domain mapping conversion and amplitude normalization and inversion processing on the second grayscale image to generate a third grayscale image;
a third image conversion module 830, configured to obtain an amplitude gradient image and a direction gradient image corresponding to the third grayscale image, and determine a high threshold and a low threshold of the amplitude gradient image according to a preset threshold; wherein, the high threshold value and the low threshold value are used for extracting pixels of the image edge;
and the edge determining module 840 is configured to perform secondary calculation on pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold and the low threshold, and use the calculation result as a pixel value of the image edge.
The embodiment of the invention provides a refined image edge determining system, which has the same technical characteristics as the refined image edge determining method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved. For the sake of brevity, where not mentioned in the examples section, reference may be made to the corresponding matter in the preceding method examples.
The embodiment also provides an electronic device, a schematic structural diagram of which is shown in fig. 9, and the electronic device includes a processor 101 and a memory 102; the memory 102 is used for storing one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the refined image edge information determining method.
The electronic device shown in fig. 9 further includes a bus 103 and a communication interface 104, and the processor 101, the communication interface 104, and the memory 102 are connected through the bus 103.
The Memory 102 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Bus 103 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The communication interface 104 is configured to connect with at least one user terminal and other network units through a network interface, and to send the packaged IPv4 message to the user terminal through the network interface.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 101. The Processor 101 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 102, and the processor 101 reads the information in the memory 102 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method of the foregoing embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A refined image edge information determining method is applied to a power grid equipment state monitoring image, and is characterized by comprising the following steps:
acquiring a digital color image of monitored power grid equipment, and converting the digital color image into a first gray image;
carrying out filtering operation on the first gray level image to obtain a second gray level image, and carrying out value domain mapping conversion and amplitude normalization inversion processing on the second gray level image to generate a third gray level image;
acquiring an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein, the high threshold value and the low threshold value are used for extracting image edge pixels;
and carrying out secondary calculation on the pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge.
2. The method of claim 1, wherein the process of converting the digital color image into the first grayscale image is performed by the following equation:
B(x,y)=0.299Ir(x,y)+0.578Ig(x,y)+0.114Ib(x,y),
wherein B(x, y) is the first grayscale image; Ir(x, y), Ig(x, y), Ib(x, y) are the image components of the red, green, and blue channels, respectively, in the digital color image.
3. The method of claim 1, wherein the step of performing a filtering operation on the first grayscale image to obtain a second grayscale image comprises:
initializing a 7 x 7 two-dimensional Gaussian filter matrix;
performing convolution filtering on the Gaussian filter and the first gray level image to obtain a second gray level image; the above process is realized by the following formula:
f(x,y)=h(x,y,1)*B(x,y),
wherein * denotes two-dimensional convolution and the Gaussian kernel is

h(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)),
f (x, y) is the second gray scale image; b (x, y) is the first grayscale image.
4. The method according to claim 1, wherein the step of performing a value-domain mapping conversion and an amplitude normalization inversion process on the second gray scale image to generate a third gray scale image comprises:
linearly mapping the pixel value range of the second gray scale image from 0-255 to a numerical value interval of 0.00-1.00 to obtain a value domain mapping conversion map of the second gray scale image;
acquiring a histogram of the value domain mapping conversion map with 0.01 as the quantization unit, and determining the amplitude position with the most pixels in the histogram; if the value at that position is greater than or equal to 0.50, the second gray scale map was shot at night; if it is less than 0.50, the second gray scale map was shot in the daytime;
carrying out amplitude normalization processing on the value domain mapping conversion map, and carrying out inversion processing on the second gray scale map shot in the daytime to generate a third gray scale image with a uniform background; the formula used in this step is as follows:
C(x, y) = f(x, y) / max f(x, y) for an image captured at night, and C(x, y) = 1 − f(x, y) / max f(x, y) for an image captured in the daytime,
wherein C(x, y) is the third grayscale image and f(x, y) is the value-domain mapping conversion map of the second grayscale image.
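A hedged sketch of the claim-4 pipeline: the histogram-peak day/night rule follows the claim text, but the max-division reading of "amplitude normalization" and all function names are assumptions:

```python
import numpy as np

def value_map(gray255):
    """Linearly map pixel values from 0-255 to 0.00-1.00."""
    return gray255.astype(float) / 255.0

def is_night(f):
    """Per the claim: a histogram peak at or above 0.50 indicates a night shot."""
    hist, edges = np.histogram(f, bins=100, range=(0.0, 1.0))
    return edges[np.argmax(hist)] >= 0.50

def normalize_and_invert(f):
    c = f / f.max() if f.max() > 0 else f  # assumed amplitude normalization
    return c if is_night(f) else 1.0 - c   # invert daytime images only

f_night = value_map(np.full((4, 4), 200))  # bright peak -> night per the claim
f_day = value_map(np.full((4, 4), 50))     # dark peak -> daytime per the claim
```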
5. The method according to claim 1, wherein the process of obtaining the amplitude gradient image and the direction gradient image corresponding to the third gray image comprises:
calculating, for each pixel of the third grayscale image, the gradient value toward each of its 8 neighboring pixels;
selecting the maximum of the 8 neighborhood gradient values as the gradient value of the pixel, with the direction corresponding to that maximum taken as the gradient direction of the pixel;
determining the amplitude gradient image from the maximum gradient value of each pixel of the third grayscale image, and determining the direction gradient image from the direction of the maximum gradient value of each pixel.
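One reading of claim 5, with the gradient toward a neighbour taken as the absolute intensity difference (this interpretation, the offset ordering, and leaving border pixels at zero are assumptions made for brevity):

```python
import numpy as np

# The 8 neighbourhood offsets, indexed 0-7.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def gradients(img):
    """Return (magnitude image, direction image) over the 8-neighbourhood."""
    h, w = img.shape
    mag = np.zeros((h, w))
    direction = np.zeros((h, w), dtype=int)  # index into OFFSETS
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            diffs = [abs(img[i + di, j + dj] - img[i, j]) for di, dj in OFFSETS]
            direction[i, j] = int(np.argmax(diffs))  # direction of the maximum
            mag[i, j] = diffs[direction[i, j]]       # maximum gradient value
    return mag, direction

# A vertical step edge: pixels adjacent to the step get magnitude 1.
step = np.zeros((5, 5))
step[:, 3:] = 1.0
mag, direc = gradients(step)
```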
6. The method according to claim 1, wherein the process of determining the high threshold and the low threshold of the amplitude gradient image according to the preset threshold comprises:
acquiring a histogram of the amplitude gradient image over the amplitude range 0.00 to 1.00 with 0.01 as the step unit;
setting the low threshold to 0.07, and stepping down through the histogram from 1.00 in decrements of 0.01; the first amplitude at which the number of pixels with gradient values above it exceeds the weighting coefficient 0.4 multiplied by the number of pixels with gradient values above the low threshold is taken as the high threshold.
7. The method of claim 1, wherein the process of performing a secondary computation on the image edge pixels according to the amplitude gradient image, the direction gradient image, the high threshold and the low threshold comprises:
if the amplitude gradient value of the pixel under test in the amplitude gradient image is not larger than the low threshold, the pixel is not an image edge, and its gradient value is set to 0;
if the amplitude gradient value of the pixel under test is not smaller than the high threshold, the pixel is an image edge, and its amplitude gradient value is taken as its gradient value;
if the amplitude gradient value of the pixel under test is larger than the low threshold and smaller than the high threshold, its gradient direction is obtained from the direction gradient image, and the pixel is compared with the 2 neighboring pixels along that direction in its neighborhood to determine its gradient value.
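A hedged sketch of the claim-7 secondary calculation: suppress pixels at or below the low threshold, keep those at or above the high threshold, and resolve in-between pixels by comparing them with the two neighbours along the gradient direction (the keep-if-local-maximum rule is an assumed interpretation of the comparison):

```python
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def secondary(mag, direction, offsets, low, high):
    """Hysteresis-style secondary pass over interior pixels."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            g = mag[i, j]
            if g <= low:
                continue                       # not an edge: leave 0
            if g >= high:
                out[i, j] = g                  # strong edge: keep gradient value
                continue
            di, dj = offsets[direction[i, j]]
            ahead, behind = mag[i + di, j + dj], mag[i - di, j - dj]
            if g >= ahead and g >= behind:     # local maximum along direction
                out[i, j] = g
    return out

# A single strong pixel survives; everything else stays zero.
mag = np.zeros((3, 3))
mag[1, 1] = 0.5
direc = np.zeros((3, 3), dtype=int)
edges = secondary(mag, direc, OFFSETS, low=0.07, high=0.4)
```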
8. A refined image edge information determining system is applied to a power grid equipment state monitoring image, and is characterized by comprising the following components:
the first image conversion module is used for acquiring a digital color image of the monitored power grid equipment and converting the digital color image into a first gray image;
the second image conversion module is used for performing a filtering operation on the first grayscale image to obtain a second grayscale image, and performing value-domain mapping conversion, amplitude normalization, and inversion processing on the second grayscale image to generate a third grayscale image;
the third image conversion module is used for acquiring an amplitude gradient image and a direction gradient image corresponding to the third gray level image, and determining a high threshold value and a low threshold value of the amplitude gradient image according to a preset threshold value; wherein the high threshold and the low threshold are used for extracting pixels of the image edge;
and the edge determining module is used for carrying out secondary calculation on the pixels of the image edge according to the amplitude gradient image, the direction gradient image, the high threshold value and the low threshold value, and taking the calculation result as the pixel value of the image edge.
9. An electronic device, comprising: a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, implements the steps of the refined image edge information determination method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the refined image edge information determining method according to any one of claims 1 to 7.
CN202011086872.6A 2020-10-12 2020-10-12 Refined image edge information determining method and system and electronic equipment Pending CN112164086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011086872.6A CN112164086A (en) 2020-10-12 2020-10-12 Refined image edge information determining method and system and electronic equipment


Publications (1)

Publication Number Publication Date
CN112164086A true CN112164086A (en) 2021-01-01

Family

ID=73868194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011086872.6A Pending CN112164086A (en) 2020-10-12 2020-10-12 Refined image edge information determining method and system and electronic equipment

Country Status (1)

Country Link
CN (1) CN112164086A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298698A (en) * 2011-05-30 2011-12-28 河海大学 Remote sensing image airplane detection method based on fusion of angle points and edge information
US20120075440A1 (en) * 2010-09-28 2012-03-29 Qualcomm Incorporated Entropy based image separation
CN104463170A (en) * 2014-12-04 2015-03-25 江南大学 Unlicensed vehicle detecting method based on multiple detection under gate system
CN107301661A (en) * 2017-07-10 2017-10-27 中国科学院遥感与数字地球研究所 High-resolution remote sensing image method for registering based on edge point feature
CN107300968A (en) * 2016-04-15 2017-10-27 中兴通讯股份有限公司 A kind of face identification method and device, picture display process and device
US20180114089A1 (en) * 2016-10-24 2018-04-26 Fujitsu Ten Limited Attachable matter detection apparatus and attachable matter detection method
CN108022233A (en) * 2016-10-28 2018-05-11 沈阳高精数控智能技术股份有限公司 A kind of edge of work extracting method based on modified Canny operators
CN109360217A (en) * 2018-09-29 2019-02-19 国电南瑞科技股份有限公司 Power transmission and transforming equipment method for detecting image edge, apparatus and system


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113870293A (en) * 2021-09-27 2021-12-31 东莞拓斯达技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114693651A (en) * 2022-03-31 2022-07-01 南通三信塑胶装备科技股份有限公司 Rubber ring flow mark detection method and device based on image processing
CN115908431A (en) * 2023-03-09 2023-04-04 国网山东省电力公司东营供电公司 Cable positioning and accommodating method for power transmission and transformation project
CN116385446A (en) * 2023-06-06 2023-07-04 山东德圣源新材料有限公司 Crystal impurity detection method for boehmite production
CN116385446B (en) * 2023-06-06 2023-08-15 山东德圣源新材料有限公司 Crystal impurity detection method for boehmite production
CN117575974A (en) * 2024-01-15 2024-02-20 浙江芯劢微电子股份有限公司 Image quality enhancement method, system, electronic equipment and storage medium
CN117575974B (en) * 2024-01-15 2024-04-09 浙江芯劢微电子股份有限公司 Image quality enhancement method, system, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination