Disclosure of Invention
The technical problems to be solved by the invention are as follows: aiming at the problems in the prior art, a method, a system and a medium for converter station scanning detection based on image processing are provided. Real-time pictures of the field display screen of the converter station are processed through a series of image processing steps to obtain data detection results, which can finally be displayed on mobile phone and computer clients. The invention does not affect the work of the existing monitoring system of the converter station at all, realizes networked output of the monitoring data while keeping the internal and external networks of the existing monitoring system of the converter station isolated, prevents the existing monitoring system of the converter station from exposing security vulnerabilities due to network data output, and provides a novel idea for artificial intelligent monitoring of converter stations. Labor cost is reduced while monitoring accuracy is guaranteed, which facilitates the safe and stable operation of converter stations. The method is suitable for monitoring and alarming of each converter station, and can also be widely applied to many scenes of power systems other than converter stations.
In order to solve the technical problems, the invention adopts the technical scheme that:
a converter station scanning detection method based on image processing comprises the following implementation steps:
1) acquiring a monitoring image of a monitoring center display screen of a converter station;
2) carrying out image preprocessing, image correction, image denoising and edge information extraction on a monitored image, wherein the image preprocessing specifically refers to carrying out image graying processing;
3) performing character segmentation and character recognition on the monitored image to obtain a detection result.
Optionally, after the detection result is obtained in step 3), a step of performing early warning detection is further included, and the detailed steps include: for each kind of data in the detection result, finding the corresponding early warning threshold in a preset early warning threshold database, judging whether the data exceeds the corresponding early warning threshold, and if so, pushing an alarm message to a specified mobile terminal device or the monitoring center.
Optionally, the step 1) of obtaining the monitoring image of the monitoring center display screen of the converter station is specifically realized by a mobile trolley provided with a camera on a mechanical arm, and the mobile trolley moves along a track laid on the ground or a table near the monitoring center display screen to adjust the shooting angle of the monitoring center display screen so as to realize picture quality adjustment.
Optionally, the image correction in step 2) specifically refers to correction by using a reverse mapping method, where the correction by using the reverse mapping method refers to deriving coordinates of a corresponding original image through coordinates of a target image, and determining gray levels of non-integer coordinate points by using a linear interpolation method to implement non-linear correction on a distorted image; the gray scale of the non-integer coordinate point is shown as the following formula:
y = y0 + α(y1 − y0),  α = (x − x0) / (x1 − x0)
in the above formula, y is the gray level at the non-integer coordinate point, (x0, y0) and (x1, y1) are known coordinates, x is some value in the interval [x0, x1], and α is the interpolation coefficient.
Optionally, the image denoising in step 2) specifically includes denoising the image by using a mean filtering method, and a function expression of denoising the image by using the mean filtering method is shown as follows:
g(x, y) = (1/m) ∑(i, j)∈S f(i, j)
in the above formula, g(x, y) is the gray level of the processed image at the pixel point, S is the denoising template (filter window) centred on the current pixel, m is the total number of pixels in the template including the current pixel, and f(i, j) is the gray level of the original image at the pixels covered by the template.
Optionally, the extracting of the edge information in step 2) specifically refers to extracting the image edge information by using a sobel operator, where the sobel operator is shown as follows:
Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A
Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
G = √(Gx² + Gy²),  θ = arctan(Gy / Gx)
in the above formulas, A denotes the grayed source image, * denotes two-dimensional convolution, and each bracketed triple is one row of the 3 × 3 kernel; Gx represents the horizontal (lateral) edge detection image, Gy represents the vertical (longitudinal) edge detection image, G represents the gradient magnitude, and θ represents the gradient direction.
Optionally, the detailed steps of step 3) include:
3.1) performing a dilation operation on the monitored image by a mathematical morphology method, dividing the image into a plurality of connected regions, and marking them to complete label positioning;
3.2) carrying out character segmentation on the determined connected region, utilizing a vertical projection method to carry out segmentation, accumulating gray values of all lines of the monitoring image, adopting self-adaptive threshold segmentation to find out an optimal threshold segmentation point, converting the gray image into a binary image, and finally utilizing a horizontal vertical projection method to find out boundary points between characters so as to segment each data character;
3.3) identifying the characters of each kind of data by using a machine learning model to obtain the detection result of each kind of data.
Optionally, step 3.3) is preceded by a step of training a machine learning model, and the detailed steps include:
s1) obtaining a monitoring image sample of a monitoring center display screen of the converter station;
s2) performing sample expansion on the monitoring image sample, wherein the expansion comprises one or more of rotation, inclination, deformation, noise addition and width change;
s3) carrying out image preprocessing, image correction, image denoising and edge information extraction on the monitored image samples after sample expansion, wherein the image preprocessing specifically refers to carrying out image graying processing;
s4) performing a dilation operation on the monitored image samples by a mathematical morphology method, dividing the image into a plurality of connected regions, and marking them to complete label positioning;
s5) carrying out character segmentation on the determined connected region, carrying out segmentation by using a vertical projection method, accumulating gray values of all lines of the monitored image, finding out an optimal threshold segmentation point by adopting self-adaptive threshold segmentation, converting the gray image into a binary image, finally finding out boundary points between characters by using a horizontal vertical projection method so as to segment each data character, and setting a label for each segmented data character so as to establish a training data set;
s6) completing training of the machine learning model through the training data set.
In addition, the invention also provides a converter station scanning detection system based on image processing, which comprises a mobile trolley with a camera mounted on a mechanical arm. A control terminal is arranged in the mobile trolley and comprises a data acquisition module, a microprocessor, a communication module and a power module; the camera is connected with the microprocessor through the data acquisition module, the microprocessor is connected with the communication module, and the power module is respectively connected with the data acquisition module, the microprocessor, the communication module and the camera. The microprocessor is programmed or configured to execute the steps of the converter station scanning detection method based on image processing, or a storage medium of the microprocessor stores a computer program programmed or configured to execute the converter station scanning detection method based on image processing.
Furthermore, the present invention also provides a computer readable storage medium having stored thereon a computer program programmed or configured to execute the image processing based converter station scan detection method.
Compared with the prior art, the invention has the following advantages:
the invention obtains the data detection result by the real-time picture of the field display screen of the convertor station through a series of image processing processes, and the data detection result can be finally displayed on a mobile phone and a computer client, so that the work of the existing monitoring system of the convertor station can be completely not influenced, the networked output of the monitoring data can be realized under the condition of really isolating the internal network and the external network of the existing monitoring system of the convertor station, the safety leak of the existing monitoring system of the convertor station due to the network data output is prevented, a novel thought is provided for the artificial intelligent monitoring of the convertor station, the monitoring accuracy is ensured while the labor cost is reduced, the safe and stable operation of the convertor station is facilitated, and the system is not only suitable for monitoring and alarming of each convertor station, but also can be widely applied to a plurality of scenes of electric.
Detailed Description
As shown in fig. 1, the implementation steps of the image processing-based converter station scan detection method of the embodiment include:
1) acquiring a monitoring image of a monitoring center display screen of a converter station;
2) carrying out image preprocessing, image correction, image denoising and edge information extraction on the monitored image, wherein the image preprocessing specifically refers to carrying out image graying processing to reduce the data processing amount of the image;
3) performing character segmentation and character recognition on the monitored image to obtain a detection result.
In this embodiment, the step 1) of obtaining the monitoring image of the monitoring center display screen of the converter station is specifically realized by a mobile trolley provided with a camera on a mechanical arm, and the mobile trolley moves along a track laid on the ground or a table surface near the monitoring center display screen to adjust the shooting angle of the monitoring center display screen so as to realize picture quality adjustment. The movable trolley moves back and forth, and the scanning camera continuously shoots the field display screen.
As shown in fig. 2, fig. 3 and fig. 4, the device (intelligent monitoring robot) for acquiring the monitoring image of the monitoring center display screen of the converter station in step 1) of this embodiment includes a track 1 and a mobile trolley 2 that travels on the track 1. A control component is arranged inside the mobile trolley 2, an image acquisition device 3 for monitoring the operation data screen of the main control room of the transformer substation is arranged on the mobile trolley 2, the mobile trolley 2 is further connected with a button touch device 4 for touching the adjustment buttons of the operation data screen of the main control room of the transformer substation, and both the image acquisition device 3 and the button touch device 4 are connected with the control component. Because the mobile trolley 2 is provided with the image acquisition device 3 for monitoring the operation data screen of the transformer substation master control room, data acquisition can be realized by monitoring the operation data screen through the camera without interfering with the transformer substation master control system, thereby realizing internal and external network isolation. Because the device comprises the track 1 and the mobile trolley 2 traveling on the track 1, the position of the mobile trolley on the track can be adjusted as required, the interference of illumination on the operation data screen of the transformer substation master control room can be overcome, and the image acquisition quality of the operation data screen of the transformer substation master control room is ensured. The mobile trolley 2 is also connected with the button touch device 4 for touching the adjusting buttons of the operation data screen of the main control room of the transformer substation; on the one hand, the brightness and contrast of the operation data screen of the main control room of the transformer substation can be adjusted through the button touch device 4, so that the image acquisition quality of the operation data screen of the main control room of the transformer substation is ensured; on the other hand, the on-off state of the operation data screen of the substation master control room can be controlled as needed to realize energy conservation.
In this embodiment, the control assembly includes a power module, a controller and a data communication module that are connected in sequence, and the image acquisition device 3 and the button touching device 4 are connected with the controller.
As shown in fig. 2, the bottom of the mobile trolley 2 is provided with a walking wheel 21 and a walking motor for controlling the walking wheel 21 to move, the walking wheel 21 is arranged on the track 1, the walking wheel 21 is provided with a brake 22, the walking motor and the brake 22 are both connected with the output end of the control component, the walking and braking of the mobile trolley 2 on the track 1 can be conveniently controlled through the structure, and the mobile trolley 2 can be conveniently controlled to stop through the brake 22, so that the imaging quality is ensured; in this embodiment, the brake 22 is specifically an HDDDWX micro electromagnetic brake.
As shown in fig. 3, the image capturing device 3 includes a mechanical arm assembly 31 and a camera 32 installed at an end of the mechanical arm assembly 31, so that the camera 32 can be adjusted to a proper position, and the image capturing quality of the operation data screen of the substation main control room is ensured. In this embodiment, the mechanical arm assembly 31 comprises a plurality of sections of sequentially hinged mechanical arms, a rotation driving steering engine is arranged between the mechanical arm assembly 31 and the movable trolley 2, a rotation driving steering engine is arranged between adjacent mechanical arms, the rotation driving steering engine is connected with the output end of the control assembly, and the output end of the camera 32 is connected with the control assembly.
As shown in FIG. 3, the end of the robot arm assembly 31 is further provided with an infrared distance measuring sensor array 33, the infrared distance measuring sensor array 33 comprises a plurality of infrared distance measuring sensors, and output ends of the infrared distance measuring sensors are connected with the control assembly. The transformer substation master control room operation data screen can be conveniently scanned through the infrared distance measuring sensor array 33, so that the accurate position of the screen center of the transformer substation master control room operation data screen can be determined according to the blocking condition of the transformer substation master control room operation data screen on infrared rays.
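For illustration, the following is a minimal Python sketch of how the screen centre could be estimated from the readings of the infrared distance measuring sensor array 33 as the trolley sweeps along the track; the sensor geometry, units and the jump threshold are illustrative assumptions rather than details given in this embodiment.

import numpy as np

def locate_screen_center(distances, positions, jump_mm=200.0):
    """Estimate the screen centre position along the scan direction.

    distances: distance readings (mm) from the infrared sensors at each
               trolley position; the screen blocks the infrared beam, so
               readings over the screen are markedly shorter.
    positions: trolley positions (mm) at which the readings were taken.
    jump_mm:   assumed drop in distance indicating the beam hits the screen.
    """
    distances = np.asarray(distances, dtype=float)
    positions = np.asarray(positions, dtype=float)
    background = np.median(distances)              # open-space reading
    on_screen = distances < background - jump_mm   # beam blocked by the screen
    if not on_screen.any():
        return None                                # screen not found in this sweep
    idx = np.flatnonzero(on_screen)
    left, right = positions[idx[0]], positions[idx[-1]]
    return 0.5 * (left + right)                    # mid-point of the blocked span

# example sweep: the screen occupies roughly the middle of the scan
pos = np.arange(0, 1000, 10.0)
dist = np.where((pos > 300) & (pos < 700), 500.0, 1500.0)
print(locate_screen_center(dist, pos))             # ~500.0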
In order to increase the angle of the moving trolley 2 walking on the track 1 relative to the running data screen of the transformer substation main control room and reduce the stroke of the moving trolley 2, as shown in fig. 4, the track 1 is an arc-shaped track, and the running data screen of the transformer substation main control room is positioned in the direction of the circle center side of the arc-shaped track, so that the stroke of the moving trolley 2 can be reduced to the maximum extent, the distance change of the moving trolley 2 relative to the running data screen of the transformer substation main control room is ensured to be smaller, and the image acquisition quality can be adjusted quickly.
As shown in fig. 5, the button touch device 4 includes a base 42 having a clamping groove 41. A clamping bolt 43 is disposed on the side wall of the base 42 on at least one side of the clamping groove 41, a sliding groove 44 arranged along the length direction of the clamping groove 41 is disposed on one side of the clamping groove 41, a linear reciprocating motor 45 is disposed on one side of the sliding groove 44, a through hole is disposed in the middle of the sliding block 451 of the linear reciprocating motor 45, a telescopic motor 46 is mounted on the sliding block 451, the telescopic shaft of the telescopic motor 46 is inserted into the through hole and a pressing head for touching the adjusting buttons of the operation data screen of the main control room of the transformer substation is mounted on the end of the telescopic shaft, and the control ends of the linear reciprocating motor 45 and the telescopic motor 46 are connected with the control component. The operating principle of the button touch device 4 is as follows: the clamping groove 41 is sleeved on the bottom of the operation data screen of the transformer substation main control room at a proper height in advance, so that the sliding groove 44 and the bottom adjusting buttons of the operation data screen of the transformer substation main control room are at the same height, and then the clamping bolts 43 on both sides are adjusted (through the inner hexagons on the surfaces of the clamping bolts 43) to clamp the operation data screen of the transformer substation main control room, so as to complete the fixed installation on the operation data screen of the transformer substation main control room. Under the monitoring of the camera 32, the sliding block 451 can be slid above the designated adjusting button of the operation data screen of the transformer substation main control room by the linear reciprocating motor 45, and the telescopic motor 46 then works to extend the pressing head so as to touch that adjusting button. On the one hand, the brightness and contrast of the operation data screen of the transformer substation main control room can thus be adjusted, ensuring the image acquisition quality of the operation data screen of the transformer substation main control room; on the other hand, the on-off state of the operation data screen of the main control room of the transformer substation can be controlled as needed to achieve energy saving. The telescopic motor 46 can actuate a plurality of adjusting buttons, so the use is flexible and convenient.
Because the screen has more distortion and noise points during imaging, the image correction in step 2) in this embodiment specifically refers to correction by using a reverse mapping method, where the correction by using the reverse mapping method refers to deriving coordinates of a corresponding original image through coordinates of a target image, and determining gray levels of non-integer coordinate points by using a linear interpolation method to implement nonlinear correction on a distorted image; the gray scale of the non-integer coordinate point is shown as the following formula:
y = y0 + α(y1 − y0),  α = (x − x0) / (x1 − x0)
in the above formula, y is the gray level at the non-integer coordinate point, (x0, y0) and (x1, y1) are known coordinates, x is some value in the interval [x0, x1], and α is the interpolation coefficient.
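The following is a minimal Python sketch of the reverse mapping correction, assuming that some inverse distortion mapping (the placeholder inverse_map below) has already been obtained for the screen; it only illustrates how target coordinates are mapped back to the original image and how non-integer coordinates are resolved by linear interpolation.

import numpy as np

def bilinear_sample(img, x, y):
    """Gray level at a non-integer point (x, y) by linear interpolation."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    ax, ay = x - x0, y - y0                      # interpolation coefficients
    top = (1 - ax) * img[y0, x0] + ax * img[y0, x1]
    bot = (1 - ax) * img[y1, x0] + ax * img[y1, x1]
    return (1 - ay) * top + ay * bot

def correct_by_reverse_mapping(src, inverse_map, out_shape):
    """For every target pixel, look up the corresponding source coordinates
    through the inverse mapping, then interpolate its gray level."""
    out = np.zeros(out_shape, dtype=src.dtype)
    for v in range(out_shape[0]):
        for u in range(out_shape[1]):
            x, y = inverse_map(u, v)             # target -> original coordinates
            if 0 <= x < src.shape[1] - 1 and 0 <= y < src.shape[0] - 1:
                out[v, u] = bilinear_sample(src, x, y)
    return out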
In order to achieve the image smoothing effect, the image denoising in step 2) in this embodiment specifically means denoising an image by using a mean filtering method, and a function expression of denoising the image by using the mean filtering method is shown as follows:
g(x, y) = (1/m) ∑(i, j)∈S f(i, j)
in the above formula, g(x, y) is the gray level of the processed image at the pixel point, S is the denoising template (filter window) centred on the current pixel, m is the total number of pixels in the template including the current pixel, and f(i, j) is the gray level of the original image at the pixels covered by the template.
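A minimal sketch of the mean filtering step, assuming a square k × k template and replicated borders (choices not fixed by the embodiment):

import numpy as np

def mean_filter(img, ksize=3):
    """Mean filtering: g(x, y) = (1/m) * sum of f over the k x k template
    centred on (x, y); m = ksize * ksize pixels. Border pixels are handled
    by replicating the image edge."""
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(ksize):
        for dx in range(ksize):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (ksize * ksize)).astype(img.dtype)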
In this embodiment, the extracting of the edge information in step 2) specifically refers to extracting the image edge information by using a sobel operator, where the sobel operator is shown as follows:
Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A
Gy = [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]] * A
G = √(Gx² + Gy²),  θ = arctan(Gy / Gx)
in the above formulas, A denotes the grayed source image, * denotes two-dimensional convolution, and each bracketed triple is one row of the 3 × 3 kernel; Gx represents the horizontal (lateral) edge detection image, Gy represents the vertical (longitudinal) edge detection image, G represents the gradient magnitude, and θ represents the gradient direction.
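A minimal sketch of edge extraction with the Sobel operator, computing Gx, Gy, the gradient magnitude G and the gradient direction θ as above:

import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def apply_3x3(img, kernel):
    """Apply a 3x3 kernel over the valid region of a 2-D image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def sobel_edges(gray):
    gx = apply_3x3(gray.astype(float), SOBEL_X)   # gradient in the x direction
    gy = apply_3x3(gray.astype(float), SOBEL_Y)   # gradient in the y direction
    magnitude = np.hypot(gx, gy)                  # G = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)                # theta = arctan(Gy / Gx)
    return magnitude, direction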
In this embodiment, the graying processing of the image in step 2) selects a weighted average value method, and the expression is as follows:
Gray = WR · R + WG · G + WB · B
in the above formula, R, G, B represent the three basic colors red, green and blue respectively, and WR, WG, WB are the weights of R, G, B respectively.
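A minimal sketch of the weighted average graying; the numeric weights below are the commonly used luminance weights and are an assumption for illustration, since the embodiment does not list specific values of WR, WG and WB.

import numpy as np

# Weighted average graying: Gray = WR*R + WG*G + WB*B.
WEIGHTS = np.array([0.299, 0.587, 0.114])   # assumed WR, WG, WB

def to_gray(rgb):
    """rgb: H x W x 3 array with channels in R, G, B order."""
    return (rgb.astype(float) @ WEIGHTS).astype(np.uint8)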
In this embodiment, the detailed steps of step 3) include:
3.1) performing expansion operation on the monitored image by using a mathematical morphology method, dividing the image into a plurality of communicated areas, and marking to complete label positioning; in this embodiment, the specific expression of the expansion calculation is as follows:
A ⊕ B = { (x, y) | (B)(x, y) ∩ A ≠ ∅ }
in the above formula, A and B represent two different structures (A is the image to be dilated and B is the structuring element), (x, y) denotes an image pixel, and (B)(x, y) denotes the structuring element B translated so that its origin lies at the pixel (x, y). The equation shows that dilating A with structure B translates the origin of the structuring element B to the image pixel (x, y); if the intersection of B and A at the image pixel (x, y) is not empty (that is, at least one image value of A is 1 at a position where the corresponding element of B is 1), the pixel (x, y) of the output image is assigned 1, otherwise it is assigned 0. A code sketch of this dilation and labeling step is given after step 3.3) below.
3.2) carrying out character segmentation on the determined connected region, utilizing a vertical projection method to carry out segmentation, accumulating gray values of all lines of the monitored image, adopting self-adaptive threshold segmentation (OTSU, also called Otsu method) to find out an optimal threshold segmentation point, converting the gray image into a binary image, and finally utilizing a horizontal vertical projection method to find out boundary points between characters so as to segment each data character;
3.3) identifying the characters of each kind of data by using a machine learning model to obtain the detection result of each kind of data.
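The following is a minimal sketch of the dilation and connected-region marking of step 3.1), using scipy.ndimage for illustration; the size of the structuring element B is an assumption.

import numpy as np
from scipy import ndimage

def locate_labels(binary_img, struct_size=3):
    """Dilate the binary image so that nearby character strokes merge, then
    mark connected regions to complete label positioning.

    binary_img:  2-D array of 0/1 values (1 = foreground).
    struct_size: side length of the square structuring element B (assumed).
    """
    structure = np.ones((struct_size, struct_size), dtype=bool)   # structure B
    dilated = ndimage.binary_dilation(binary_img.astype(bool), structure=structure)
    labels, num_regions = ndimage.label(dilated)        # mark connected regions
    boxes = ndimage.find_objects(labels)                # bounding slices per region
    return labels, boxes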
In this embodiment, the detailed steps of segmentation by the vertical projection method in step 3.2) include (a code sketch follows this list):
3.2.1) obtaining a convertor station display screen image which is only provided with characters after positioning according to the finished label positioning, obtaining the total number of pixel values in the column direction through calculation, and performing vertical projection on the pixel values;
3.2.2) selecting a smaller pixel and a smaller threshold value, scanning the image to find the left end of the character, and then finding the right end of the character according to the height-width ratio of the display screen of the convertor station;
3.2.3) repeating the step 3.2.2) to sequentially cut out other characters in the display screen of the convertor station, and finishing the determination of the left and right boundaries of the characters and storing the characters in a set array;
3.2.4) horizontally projecting the character segmented in the step 3.2.3) to find the upper and lower boundaries of the character;
3.2.5) outputting standard character sub-images of uniform size (the previous steps produce an array of characters whose elements are not uniform in size, while different character recognition libraries require characters of a fixed size).
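The following Python sketch illustrates steps 3.2.1) to 3.2.5): Otsu thresholding to binarise the grayscale image, a vertical (column) projection to find the left and right character boundaries, and a horizontal projection to trim the top and bottom. The dark-text-on-bright-screen polarity and the small column threshold are assumptions.

import numpy as np

def otsu_threshold(gray):
    """Optimal threshold by the Otsu method (maximises between-class variance);
    gray is assumed to be an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def segment_characters(gray, min_col_pixels=2):
    """Binarise, then cut characters at columns whose foreground count falls
    below min_col_pixels (an assumed small threshold)."""
    binary = (gray < otsu_threshold(gray)).astype(np.uint8)   # dark text assumed
    col_proj = binary.sum(axis=0)                             # vertical projection
    in_char, start, chars = False, 0, []
    for x, count in enumerate(np.append(col_proj, 0)):        # sentinel closes last char
        if count >= min_col_pixels and not in_char:
            in_char, start = True, x                          # left boundary
        elif count < min_col_pixels and in_char:
            in_char = False
            piece = binary[:, start:x]                        # right boundary reached
            rows = np.flatnonzero(piece.sum(axis=1))          # horizontal projection
            if rows.size:
                chars.append(piece[rows[0]:rows[-1] + 1, :])  # trim top/bottom
    return chars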
In this embodiment, step 3.3) is preceded by a step of training the machine learning model, and the detailed steps include (see the sketch after this list):
s1) obtaining a monitoring image sample of a monitoring center display screen of the converter station;
s2) performing sample expansion on the monitoring image sample, wherein the expansion comprises one or more of rotation, inclination, deformation, noise addition and width change;
s3) carrying out image preprocessing, image correction, image denoising and edge information extraction on the monitored image samples after sample expansion, wherein the image preprocessing specifically refers to carrying out image graying processing;
s4) performing a dilation operation on the monitored image samples by a mathematical morphology method, dividing the image into a plurality of connected regions, and marking them to complete label positioning;
s5) carrying out character segmentation on the determined connected region, carrying out segmentation by using a vertical projection method, accumulating gray values of all lines of the monitored image, finding out an optimal threshold segmentation point by adopting self-adaptive threshold segmentation, converting the gray image into a binary image, finally finding out boundary points between characters by using a horizontal vertical projection method so as to segment each data character, and setting a label for each segmented data character so as to establish a training data set;
s6) completing training of the machine learning model through the training data set.
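The embodiment does not fix the type of machine learning model; the following sketch uses a support vector machine classifier (scikit-learn) purely to illustrate steps S5) and S6), with the segmented character images normalised to a fixed size and flattened into feature vectors.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def normalize_char(char_img, size=(16, 16)):
    """Scale a segmented character image to a fixed size by nearest-neighbour
    sampling and flatten it into a feature vector."""
    h, w = char_img.shape
    ys = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    xs = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    return char_img[np.ix_(ys, xs)].astype(float).ravel()

def train_char_model(char_images, labels):
    """char_images: list of binary character images cut out in step S5).
    labels:      the label assigned to each segmented data character."""
    X = np.array([normalize_char(c) for c in char_images])
    y = np.array(labels)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = SVC(kernel="rbf", gamma="scale")
    model.fit(X_train, y_train)                     # step S6): model training
    print("held-out accuracy:", model.score(X_test, y_test))
    return model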
In this embodiment, a step of performing early warning detection is further included after the detection result is obtained in step 3), and the detailed steps include: for each kind of data in the detection result, finding the corresponding early warning threshold in a preset early warning threshold database, judging whether the data exceeds the corresponding early warning threshold, and if so, pushing an alarm message to a specified mobile terminal device or the monitoring center.
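A minimal sketch of the early warning check; the threshold database is represented by a plain dictionary with assumed example entries, and push_alarm is a hypothetical stand-in for the message push to the mobile terminal device or monitoring center.

# Assumed example thresholds for illustration only; the real values come
# from the preset early warning threshold database.
WARNING_THRESHOLDS = {"dc_voltage_kV": 810.0, "valve_temp_C": 55.0}

def push_alarm(name, value, limit):
    # hypothetical hook standing in for the alarm push channel
    print(f"ALARM: {name} = {value} exceeds threshold {limit}")

def early_warning_check(detection_result):
    """detection_result: mapping from data name to the recognised numeric value."""
    for name, value in detection_result.items():
        limit = WARNING_THRESHOLDS.get(name)
        if limit is not None and value > limit:
            push_alarm(name, value, limit)

early_warning_check({"dc_voltage_kV": 815.2, "valve_temp_C": 48.0})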
In addition, this embodiment further provides a converter station scanning detection system based on image processing, including a mobile cart having a camera mounted on a mechanical arm, the mobile cart being provided with a control terminal, the control terminal including a data acquisition module, a microprocessor, a communication module and a power module, the camera being connected to the data acquisition module and the microprocessor, the microprocessor being connected to the communication module, the power module being connected to the data acquisition module, the microprocessor, the communication module and the camera, respectively, the microprocessor being programmed or configured to execute the steps of the converter station scanning detection method based on image processing, or a storage medium of the microprocessor being stored with a computer program programmed or configured to execute the converter station scanning detection method based on image processing.
In this embodiment, the camera includes an area scan camera, a line scan camera and matching light sources, and the selection of the area scan camera, the line scan camera and the light sources is determined according to actual conditions. The area scan camera of the system is a 3.2-megapixel (2048 × 1080) high-speed camera whose accuracy can reach 0.03 mm/pixel and whose frame rate can reach 70 fps or more, such as a DALSA G2-GM10-T1921; its light source is an annular shadowless light source, which has the advantages of a large irradiation area, small volume and good illumination uniformity. The line scan camera has a horizontal resolution of 2048, a vertical resolution of 2 and a line frequency of 100 kHz; its accuracy can reach 0.03 mm/pixel and its frame rate can reach 30 fps or more, such as a DALSA P4-CM-02K10D, and its light source is a high-brightness linear light source with high illumination intensity and good illumination uniformity.
Furthermore, the present embodiment also provides a computer readable storage medium having stored thereon a computer program programmed or configured to execute the aforementioned image processing-based converter station scan detection method.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.