CN114792405A - Overhead line foreign matter intrusion prevention monitoring method - Google Patents


Info

Publication number
CN114792405A
CN114792405A (application CN202210472326.9A)
Authority
CN
China
Prior art keywords
gaussian distribution
overhead line
video information
gaussian
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210472326.9A
Other languages
Chinese (zh)
Inventor
孟令雯
张锐锋
班国邦
黄亮程
蒋理
席光辉
余思伍
刘飞
郭思琪
王维耀
邱伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Power Grid Co Ltd filed Critical Guizhou Power Grid Co Ltd
Priority to CN202210472326.9A priority Critical patent/CN114792405A/en
Publication of CN114792405A publication Critical patent/CN114792405A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a monitoring method for preventing foreign-object intrusion on overhead lines, which comprises the following steps: collecting overhead line video information; preprocessing the video; performing video feature identification and feature extraction, together with defogging treatment; and locating the overhead line target at the position of the foreign-object intrusion. The invention can identify line video of the overhead line detection area shot in severe weather, apply defogging treatment to avoid blurred video, and separately matte out the intruding foreign object. This avoids the situation in which it is difficult to clearly identify whether a foreign object has intruded into the overhead line detection area and workers make misjudgments, and it helps potential safety hazards to be found in time.

Description

Overhead line foreign matter intrusion prevention monitoring method
Technical Field
The invention relates to the technical field of overhead line foreign matter detection, in particular to a foreign matter invasion prevention monitoring method for an overhead line.
Background
Because environmental conditions along overhead lines in China are complex, lines are highly susceptible to external damage, including icing, aging of the line, and natural fires in adjacent forest belts. Human factors also contribute, including vandalism, theft, and damage to lines caused by illegal construction with large machinery, all of which can cause power outages. Outages cause significant economic losses and adverse social effects, and how to avoid and reduce such losses while ensuring the safe and stable operation of a large power grid is an important problem facing China's grid. Where a river or similar feature passes beneath an outdoor overhead line, many anglers pay little attention to the line overhead; touching the line while casting a fishing rod can cause electric shock and lead to accidents.
At present, external-damage detection for overhead lines generally relies on installed cameras that shoot video, which staff then monitor in real time from a control room. When identifying the video information, rain, fog, and insufficient illumination often occur in outdoor environments, making the video blurry; it is difficult to clearly identify whether a foreign object has intruded into the overhead line picture, and birds or insects flying in can cause misjudgments. As a result, lines with damaged sheathing are difficult to identify and replace in time, leaving potential safety hazards and even causing power outages.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a monitoring method for preventing foreign-object intrusion on overhead lines, so as to solve the problems in the prior art that rain and fog often occur in outdoor environments, video information under adverse conditions such as insufficient illumination is blurry, it is difficult to clearly identify whether a foreign object has intruded into the overhead line picture, birds or insects flying in cause misjudgments, and lines with damaged sheathing are consequently difficult to identify and replace in time, leaving potential safety hazards and even causing power outages.
The technical scheme adopted by the invention is as follows: a monitoring method for preventing foreign-object intrusion on overhead lines, comprising the following steps:
step one, collecting video information of the overhead line;
step two, preprocessing the video information;
step three, performing feature identification and feature extraction on the preprocessed video, and performing defogging treatment;
step four, locating the target according to the features extracted and processed in step three.
Collecting the overhead line video information includes:
acquiring inspection-area video information, the inspection-area video information including the color distribution in the inspection-area video and the areas of the different color-block regions;
and acquiring real-time video information in the inspection area through a camera.
The real-time video information is acquired as follows: a camera is installed in an area that poses a threat to the overhead line and captures real-time video of the inspection area. Taking the overhead line as a center point, a first position point and a second position point are set, and the angle formed by connecting the first position point, the center point and the second position point in the same plane is smaller than a first angle threshold and larger than a second angle threshold.
The video feature identification and feature extraction method in the third step comprises the following steps:
receiving video information transmitted back by a camera;
removing the dynamic background present in the video information through a Gaussian mixture background model and suppressing the interference it causes, with the calculation expression:

P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})

wherein K represents the number of Gaussian distribution components, each Gaussian distribution represents one element, and the mixture Gaussian distribution applied to an image pixel is likewise obtained by linear addition of the K Gaussian distributions; \omega_{i,t} represents the weight of the ith Gaussian distribution component at time t, with \sum_{i=1}^{K} \omega_{i,t} = 1; X_t represents the pixel value of the image at position (x, y) at time t, expressed as a vector; \eta represents the Gaussian distribution probability density of the pixel point (x, y) at time t; \mu_{i,t} represents the mean vector of the ith Gaussian distribution component at time t; and \Sigma_{i,t} represents the covariance matrix of the ith Gaussian distribution.
The components are ordered by the ratio of weight to standard deviation:

r_i = \omega_{i,t} / \sigma_{i,t}

wherein \omega represents the weight and \sigma represents the standard deviation. Sorting this ratio from large to small places the distributions more likely to describe the background at the front and the less likely ones at the back; the first B Gaussian distribution components are selected as the distribution model of the background pixels, with the calculation expression:

B = \arg\min_{b} \left( \sum_{i=1}^{b} \omega_{i,t} > T \right)

wherein T is the proportion threshold of background weight.
initializing the first Gaussian distribution component corresponding to each pixel of the first frame of the video: its mean is set to the pixel value of the gray-scale image of the current frame and its weight to w = 1, the means and weights of the other Gaussian distribution components are set to 0, and each pixel value of each subsequently acquired frame is matched against the Gaussian distribution components already established at the corresponding position;
if the distance between the pixel value and the mean of the ith Gaussian distribution component in the Gaussian mixture model is less than 2.5 times its standard deviation, the pixel value is judged to match that component:

|X_t - \mu_{i,t-1}| < 2.5\,\sigma_{i,t-1}

and the parameters of the matched component are updated as:

\omega_{i,t} = (1 - \alpha)\,\omega_{i,t-1} + \alpha
\mu_{i,t} = (1 - \rho)\,\mu_{i,t-1} + \rho\,X_t
\sigma_{i,t}^{2} = (1 - \rho)\,\sigma_{i,t-1}^{2} + \rho\,(X_t - \mu_{i,t})^{T}(X_t - \mu_{i,t})

wherein \alpha represents the learning rate of the background Gaussian distribution component parameter estimation and \rho represents the update rate of the weight; the smaller \alpha is, the slower each parameter index is updated, and the larger \alpha is, the faster each parameter index is updated;
as the number of pixel points matching a new Gaussian distribution component increases, its weight increases; when the number of components reaches saturation, the component with the smallest ratio of weight to standard deviation is discarded;
if the pixel value of the image at the current moment matches none of the B Gaussian model components, the pixel is judged to represent a foreground point; foreground pixels are typically represented by the several Gaussian distributions with smaller weights.
When an edge appears in a foreground-image window centered on a central pixel, the difference between the pixel value at (x, y) and the mean of any Gaussian distribution component at the corresponding position is compared with the ratio of the weight to the standard deviation;
if the difference from the component mean is less than or equal to the ratio of the weight to the standard deviation, the pixel value is judged to match the component successfully;
the sum of the weights of the Gaussian distribution components is counted and all weights are normalized;
and the B Gaussian distribution components are taken as the foreground image of the next frame at pixel value (x, y);
if the difference from the component mean is greater than the ratio of the weight to the standard deviation, the match between the pixel value and the component is judged to fail;
the Gaussian distribution component is deleted and recalculated;
the sum of the weights of the Gaussian distribution components is counted and all weights are normalized;
and the B Gaussian distribution components are taken as the foreground image of the next frame at pixel value (x, y).
When the pixel value (x, y) of the foreground image changes continuously, the shape of the foreground image at that position changes, pixels in the collected video information change continuously, and this indicates foreign-object intrusion in the overhead line detection area.
The invention has the beneficial effects that: compared with the prior art, the invention can identify line video of the overhead line detection area shot in severe weather, apply defogging treatment, and separately matte out intruding foreign objects, avoiding blurred video. This prevents the situation in which it is difficult to clearly identify whether a foreign object has intruded into the detection area and workers make misjudgments, and it helps potential safety hazards to be found in time.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a monitoring method for preventing foreign object intrusion of an overhead line;
FIG. 2 is a two-dimensional Gaussian distribution model of an overhead line foreign object intrusion prevention monitoring method;
fig. 3 is an exemplary diagram of a method for monitoring an overhead line for preventing intrusion of foreign objects.
Detailed Description
The invention will be further described with reference to specific examples.
Example 1: as shown in fig. 1 to 3, a method for monitoring an overhead line for preventing intrusion of foreign objects includes the following steps:
s1: collecting overhead line video information;
Inspection-area video information is acquired, including the color distribution in the inspection-area video and the areas of the different color-block regions. The video information of the inspection area can be collected by camera; the video mainly divides into two color-block areas, one being the black overhead line and the other the part outside the overhead line, i.e. the background picture.
Real-time video information in the inspection area is obtained through a camera installed in an area that threatens the overhead line. Cameras can be installed where rivers and the like pass beneath outdoor overhead lines, mounted with their backs to the overhead line and facing the area that threatens it, which serves as the detection area.
S2: preprocessing a video;
Acquiring real-time video information of the inspection area through the camera includes: taking the overhead line as the center point, a first position point and a second position point are set, and the angle formed by connecting the first position point, center point and second position point in the same plane is smaller than a first angle threshold and larger than a second angle threshold. Two groups of cameras can shoot the two sides of the overhead line, each facing the area that threatens the line, which serves as the detection area. The cameras on the two sides of the line are located at the first and second position points, in a plane perpendicular to the overhead line. If the angle of the line connecting the first position point, the line's center point and the second position point in that plane is too small, a blind zone appears in the cameras' field of view, part of the overhead line surface is difficult to capture, the collected video information is incomplete, and the detection result is inaccurate.
The camera at the first position point on one side of the overhead line can only capture the surface on that side; the camera at the second position point on the other side captures the surface on the other side. The shape of the overhead line can be approximated as a cylinder, and damage spreads along its surface. The greater the deviation of the angle from 180 degrees, the larger the blind zone between the views of the cameras at the first and second position points. The first and second angle thresholds can therefore be determined from the line diameter of the overhead line: the larger the diameter, the larger the blind zone to be avoided, and the closer the angle connecting the first position point, the center point and the second position point needs to be to 180 degrees.
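The placement constraint above can be checked numerically. The sketch below is illustrative only: the concrete threshold values (175 and 120 degrees) are assumptions for demonstration, since the text leaves the first and second angle thresholds to be chosen from the line diameter.

```python
import math

def placement_angle(p1, center, p2):
    """Angle in degrees formed at the line's center point by the two camera positions."""
    v1 = (p1[0] - center[0], p1[1] - center[1])
    v2 = (p2[0] - center[0], p2[1] - center[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def placement_ok(p1, center, p2, first_threshold=175.0, second_threshold=120.0):
    """True when the first-point/center/second-point angle lies between the two thresholds."""
    return second_threshold < placement_angle(p1, center, p2) < first_threshold

# Cameras roughly opposite each other across the line, slightly offset from 180 degrees
ok = placement_ok((-10.0, 1.0), (0.0, 0.0), (10.0, 1.5))
```

An angle too close to 180 degrees would place the cameras exactly opposite each other, while too small an angle opens a blind zone on the line surface; the two thresholds bracket the usable range.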
S3: the video feature identification and feature extraction comprises the following steps:
receiving the video information transmitted back by the camera: a base station or dispatch room receives the video information relayed through a signal tower and analyzes it.
The method removes the dynamic background in the video information through a Gaussian mixture background model and suppresses the interference it causes, with the calculation expression:

P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})

wherein K represents the number of Gaussian distribution components, each Gaussian distribution represents one element, and the mixture Gaussian distribution applied to an image pixel is likewise obtained by linearly adding the K Gaussian distributions; \omega_{i,t} represents the weight of the ith Gaussian distribution component at time t, with \sum_{i=1}^{K} \omega_{i,t} = 1; X_t represents the pixel value of the image at position (x, y) at time t, expressed as a vector; \eta represents the Gaussian distribution probability density of the pixel point (x, y) at time t; \mu_{i,t} represents the mean vector of the ith Gaussian distribution component at time t; and \Sigma_{i,t} represents the covariance matrix of the ith Gaussian distribution.
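As a minimal numerical sketch of the mixture expression above, assuming gray-scale pixels so that each component reduces to a 1-D Gaussian with variance \sigma^2 instead of a covariance matrix:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """1-D Gaussian probability density eta(x; mu, sigma^2)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_density(x, weights, means, variances):
    """P(X_t) = sum_i w_i * eta(X_t; mu_i, sigma_i^2) for a single pixel."""
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    return float(np.sum(w * gaussian_pdf(x, mu, var)))

# K = 3 components for one gray-scale pixel; the weights sum to 1
p = mixture_density(120.0, [0.6, 0.3, 0.1], [118.0, 60.0, 200.0], [25.0, 100.0, 400.0])
```

Here the pixel value 120 is close to the dominant component (mean 118), so the mixture assigns it a relatively high density and the pixel would be treated as background.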
When the method is actually applied to an image, an individual Gaussian distribution is established according to the change rule of each pixel, and the Gaussian distribution is updated in real time along with the change of time, but because a dynamic background exists and the change of the background has a certain rule, a mixed Gaussian background model is adopted, so that the interference caused by the dynamic background can be well inhibited.
S4: the target positioning comprises the following steps:
The components are ordered by the ratio of weight to standard deviation:

r_i = \omega_{i,t} / \sigma_{i,t}

wherein \omega represents the weight and \sigma represents the standard deviation. Sorting this ratio from large to small places the distributions more likely to describe the background at the front and the less likely ones at the back; the first B Gaussian distribution components are selected as the distribution model of the background pixels, with the calculation expression:

B = \arg\min_{b} \left( \sum_{i=1}^{b} \omega_{i,t} > T \right)

wherein T is the proportion threshold of background weight.
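The ranking-and-selection step can be sketched as follows; the threshold value T = 0.7 is an illustrative assumption, not a value stated in the text:

```python
import numpy as np

def select_background_components(weights, stddevs, T=0.7):
    """Sort components by w/sigma (descending) and keep the first B whose
    cumulative weight first reaches the background threshold T."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(stddevs, dtype=float)
    order = np.argsort(-(w / s))              # largest ratio first
    cum = np.cumsum(w[order])
    B = int(np.searchsorted(cum, T) + 1)      # smallest b with cumulative weight >= T
    return order[:B]

# High-weight, low-variance components describe the (static) background
idx = select_background_components([0.5, 0.3, 0.15, 0.05], [4.0, 5.0, 20.0, 30.0])
```

The two stable, heavily weighted components are selected as background; the noisy low-weight components remain candidates for foreground.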
initializing the first Gaussian distribution component corresponding to each pixel of the first frame of the video: its mean is set to the pixel value of the gray-scale image of the current frame and its weight to w = 1, the means and weights of the other Gaussian distribution components are set to 0, and each acquired pixel value of the current frame is matched against the Gaussian distribution components established at the corresponding position;
if the distance between the pixel value and the mean of the ith Gaussian distribution component in the Gaussian mixture model is less than 2.5 times its standard deviation, the pixel value is judged to match that component:

|X_t - \mu_{i,t-1}| < 2.5\,\sigma_{i,t-1}

and the parameters of the matched component are updated as:

\omega_{i,t} = (1 - \alpha)\,\omega_{i,t-1} + \alpha
\mu_{i,t} = (1 - \rho)\,\mu_{i,t-1} + \rho\,X_t
\sigma_{i,t}^{2} = (1 - \rho)\,\sigma_{i,t-1}^{2} + \rho\,(X_t - \mu_{i,t})^{T}(X_t - \mu_{i,t})

wherein \alpha represents the learning rate of the background Gaussian distribution component parameter estimation and \rho represents the update rate of the weight; the smaller \alpha is, the slower each parameter index is updated, and the larger \alpha is, the faster each parameter index is updated;
as the number of pixel points matching a new Gaussian distribution component increases, its weight increases; when the number of components reaches saturation, the component with the smallest ratio of weight to standard deviation is discarded;
if the pixel value of the image at the current moment matches none of the B Gaussian model components, the pixel point is judged to represent a foreground point; foreground pixel points are typically represented by the several Gaussian distributions with smaller weights.
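A per-pixel sketch of the 2.5-sigma matching test and the parameter updates above (gray-scale case; the values of alpha and rho are illustrative assumptions):

```python
import numpy as np

def match_and_update(x, weights, means, stddevs, alpha=0.01, rho=0.05):
    """Match pixel value x against the K components (|x - mu| < 2.5 sigma),
    update the weights with learning rate alpha and the matched component's
    mean/variance with update rate rho; returns True when x is background."""
    matched = np.abs(x - means) < 2.5 * stddevs
    weights[:] = (1 - alpha) * weights + alpha * matched   # w = (1 - a) w + a M
    weights /= weights.sum()                               # keep sum(w) = 1
    if matched.any():
        i = int(np.argmax(matched))                        # first matching component
        means[i] = (1 - rho) * means[i] + rho * x
        var = (1 - rho) * stddevs[i] ** 2 + rho * (x - means[i]) ** 2
        stddevs[i] = np.sqrt(var)
        return True
    return False

weights = np.array([0.7, 0.3])
means = np.array([100.0, 200.0])
stddevs = np.array([10.0, 10.0])
is_bg = match_and_update(105.0, weights, means, stddevs)
```

The pixel value 105 falls within 2.5 standard deviations of the first component, so it is classified as background and that component's mean drifts toward 105.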
When an edge appears in a foreground-image window centered on a central pixel,
the difference between the pixel value at (x, y) and the mean of any Gaussian distribution component at the corresponding position is compared with the ratio of the weight to the standard deviation;
if the difference from the component mean is less than or equal to the ratio of the weight to the standard deviation, the pixel value is judged to match the component successfully;
the sum of the weights of the Gaussian distribution components is counted and all weights are normalized;
and the B Gaussian distribution components are taken as the foreground image of the next frame at pixel value (x, y);
if the difference from the component mean is greater than the ratio of the weight to the standard deviation, the match between the pixel value and the component is judged to fail;
the Gaussian distribution component is deleted and recalculated;
the sum of the weights of the Gaussian distribution components is counted and all weights are normalized;
and the B Gaussian distribution components are taken as the foreground image of the next frame at pixel value (x, y).
When the pixel value (x, y) of the foreground image changes continuously, the shape of the foreground image at that position changes, pixels in the collected video information change continuously, and this indicates foreign-object intrusion in the overhead line detection area.
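The intrusion cue described above, foreground pixels appearing where a foreign object enters the scene, can be illustrated with the degenerate K = 1 case (one Gaussian per pixel); a real deployment would keep the full mixture:

```python
import numpy as np

def foreground_mask(frames, current, min_std=2.0):
    """Degenerate K = 1 case of the mixture model: flag pixels of `current`
    farther than 2.5 sigma from the per-pixel background mean."""
    stack = np.stack(frames).astype(float)
    mu = stack.mean(axis=0)
    sigma = np.maximum(stack.std(axis=0), min_std)   # floor sigma to avoid zeros
    return np.abs(current.astype(float) - mu) > 2.5 * sigma

rng = np.random.default_rng(0)
background = [rng.normal(100.0, 2.0, (8, 8)) for _ in range(10)]
frame = rng.normal(100.0, 2.0, (8, 8))
frame[2:5, 2:5] = 200.0          # a bright "foreign object" enters the scene
mask = foreground_mask(background, frame)
```

The simulated object region stands out as foreground while the stationary background is suppressed, which is the behavior the method relies on to matte out intruding objects.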
The method can identify line video of the overhead line detection area shot in severe weather, apply defogging treatment, and separately matte out intruding foreign objects, avoiding blurry video that makes it difficult to clearly identify foreign-object intrusion in the detection area and causes staff misjudgments. It helps potential safety hazards to be found in time: for example, when an angler casts a rod and the fishing rod appears in the detection area, it is identified in time, and the dispatcher in the control room can sound a remote-controlled alarm or send nearby maintenance personnel to remove the hazard and avoid causing a power outage.
To verify the effect of the present invention, the following simulation was performed.
Because the outdoor monitoring environment is complex, if the collected image is already a clear, fog-free image, applying a defogging algorithm will not improve image quality and may even degrade it, negatively affecting subsequent intrusion detection. The method therefore first detects fog regions in the image, and performs defogging preprocessing only when fog or haze reaches a certain degree.
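The text does not specify how the fog region is detected; one common heuristic (an assumption here, not the stated method) is the dark-channel prior: fog-free regions of an outdoor image have near-zero local minima across color channels, while haze lifts them uniformly:

```python
import numpy as np

def dark_channel(image, patch=3):
    """Per-pixel minimum over color channels, followed by a local minimum filter."""
    mins = image.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def looks_foggy(image, threshold=0.6):
    """Heuristic fog score: mean dark channel of an RGB image scaled to [0, 1]."""
    return bool(dark_channel(image).mean() > threshold)

clear = np.zeros((8, 8, 3)); clear[..., 2] = 0.9   # saturated blue scene: dark channel ~ 0
foggy = np.full((8, 8, 3), 0.8)                    # washed-out gray scene: dark channel ~ 0.8
```

A gate like `looks_foggy` lets the pipeline skip defogging on clear frames, matching the rationale above that defogging a clear image can degrade it.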
The first Gaussian distribution component corresponding to each pixel of the first video frame is initialized: its mean is set to the gray-scale pixel value of the current frame and its weight to w = 1, the means and weights of the other Gaussian distribution components are set to 0, each acquired pixel value of the current frame is matched against the Gaussian distribution components established at the corresponding position, and the B Gaussian distribution components are taken as the foreground image of the next frame at pixel value (x, y).
A video of a residential area was shot as the experimental video; the background of the residential area can be regarded, by analogy, as the background of an overhead line video. Pedestrians in the area stand in for foreign objects intruding on an overhead line: pedestrians walk about and move relative to the camera, and likewise, when a camera shoots the detection-area video, foreign objects intruding on the overhead line move relative to the camera and appear as moving objects in the captured picture.
Referring to fig. 3, in a clear scene, pedestrians entering the scene in the left image are completely detected as moving targets; the right image is the processed image. Stationary targets, for example a car in the video, are essentially not detected, while a pedestrian, as a moving target, can be treated as foreground relative to the background and matted out separately. Foreign objects intruding into the overhead line detection area can likewise be matted out separately, which facilitates identification and judgment.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and therefore the scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A foreign object intrusion prevention monitoring method for an overhead line, characterized by comprising the following steps:
step one, collecting video information of the overhead line;
step two, preprocessing the video information;
step three, performing feature identification and feature extraction on the preprocessed video, and performing defogging treatment;
step four, locating the target according to the features extracted and processed in step three.
2. The overhead line foreign object intrusion prevention monitoring method according to claim 1, characterized in that collecting the overhead line video information comprises:
acquiring inspection-area video information, the inspection-area video information comprising the color distribution in the inspection-area video and the areas of the different color-block regions;
and acquiring real-time video information in the inspection area through a camera.
3. The overhead line foreign object intrusion prevention monitoring method according to claim 2, characterized in that the real-time video information is acquired as follows: a camera is installed in an area that threatens the overhead line to obtain real-time video information of the inspection area; taking the overhead line as a center point, a first position point and a second position point are set, and the angle formed by the line connecting the first position point, the center point and the second position point in the same plane is smaller than a first angle threshold and larger than a second angle threshold.
4. The overhead line foreign object intrusion prevention monitoring method according to claim 1, characterized in that: the video feature identification and feature extraction method comprises the following steps:
receiving video information returned by a camera;
and removing the dynamic background in the video information through a Gaussian mixture background model, suppressing the interference caused by the dynamic background.
5. The overhead line foreign object intrusion prevention monitoring method according to claim 4, characterized in that: the calculation expression of the Gaussian mixture background model is as follows:
P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})

wherein K represents the number of Gaussian distribution components, each Gaussian distribution represents one element, and the mixture Gaussian distribution applied to an image pixel is likewise obtained by linearly adding the K Gaussian distributions; \omega_{i,t} represents the weight of the ith Gaussian distribution component at time t, with \sum_{i=1}^{K} \omega_{i,t} = 1; X_t represents the pixel value of the image at position (x, y) at time t, expressed as a vector; \eta represents the Gaussian distribution probability density of the pixel point (x, y) at time t; \mu_{i,t} represents the mean vector of the ith Gaussian distribution component at time t; and \Sigma_{i,t} represents the covariance matrix of the ith Gaussian distribution.
6. The overhead line foreign object intrusion prevention monitoring method according to claim 5, characterized in that: the target positioning model is as follows:
r_i = \omega_{i,t} / \sigma_{i,t}

wherein \omega represents the weight and \sigma represents the standard deviation; sorting this ratio from large to small places the distributions more likely to describe the background at the front and the less likely ones at the back; the first B Gaussian distribution components are selected as the distribution model of the background pixels, with the calculation expression:

B = \arg\min_{b} \left( \sum_{i=1}^{b} \omega_{i,t} > T \right)

wherein T is the proportion threshold of background weight.
initializing the first Gaussian distribution component corresponding to each pixel of the first frame of the video: its mean is set to the pixel value of the gray-scale image of the current frame and its weight to w = 1, the means and weights of the other Gaussian distribution components are set to 0, and each acquired pixel value of the current frame is matched against the Gaussian distribution components established at the corresponding position;
if the distance between the pixel value and the mean value of the ith Gaussian distribution component G in the Gaussian mixture model is less than 2.5 times of the standard deviation, judging that the pixel value I is matched with the Gaussian distribution component, and calculating the formula as follows:
Figure DEST_PATH_IMAGE019
wherein α represents the learning rate for estimating the parameters of the background Gaussian components, and ρ represents the update rate of the component mean and variance; the smaller α is, the slower each parameter index is updated; the larger α is, the faster each parameter index is updated;
as more pixels conform to a newly established Gaussian component, its weight increases; when the number of components reaches saturation, the component with the smallest weight-to-standard-deviation ratio is discarded;
if the pixel value of the image at the current moment matches none of the B background Gaussian components,
the pixel is judged to be a foreground point; foreground pixels are typically represented by the few Gaussian components with smaller weights.
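The matching, update, and foreground-decision steps above can be sketched for a single grayscale pixel as follows; this is a simplified illustration (scalar Gaussians, with ρ taken equal to α) rather than the patented implementation, and the function name, α value, and initial component values are assumptions:

```python
# Per-pixel sketch of the match / update / classify cycle described above.
# components: list of dicts {w, mu, sigma}; x: current grayscale pixel value.

def update_pixel(components, x, alpha=0.05):
    """Return (components, is_foreground) after one observation x."""
    matched = None
    for c in components:
        if abs(x - c["mu"]) < 2.5 * c["sigma"]:   # 2.5-sigma matching test
            matched = c
            break
    if matched is None:
        return components, True                    # no match: foreground point
    for c in components:
        m = 1.0 if c is matched else 0.0
        c["w"] = (1 - alpha) * c["w"] + alpha * m  # weight update
    rho = alpha                                    # simplified update rate
    matched["mu"] = (1 - rho) * matched["mu"] + rho * x
    matched["sigma"] = ((1 - rho) * matched["sigma"] ** 2
                        + rho * (x - matched["mu"]) ** 2) ** 0.5
    total = sum(c["w"] for c in components)
    for c in components:
        c["w"] /= total                            # normalize all weights
    return components, False

comps = [{"w": 1.0, "mu": 120.0, "sigma": 10.0}]
comps, fg = update_pixel(comps, 122.0)   # close to the mean: background
print(fg)                                # False
comps, fg = update_pixel(comps, 250.0)   # far from the mean: foreground
print(fg)                                # True
```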
7. The overhead line foreign object intrusion prevention monitoring method according to claim 6, characterized in that: when an edge appears in the foreground-map window centered on the central pixel,
the difference between the pixel value at (x, y) and the mean of each Gaussian component at the corresponding position is compared with that component's ratio of weight to standard deviation;
if the difference from the component mean is less than or equal to the ratio of weight to standard deviation, the pixel value is judged to match the component successfully;
the sum of the weights of the Gaussian components is computed and all weights are normalized;
the B Gaussian components are taken as the foreground map of the next frame image at the pixel (x, y);
if the difference from the component mean is greater than the ratio of weight to standard deviation, the match between the pixel value and the component is judged to fail;
the Gaussian component is deleted and recalculated;
the sum of the weights of the Gaussian components is computed and all weights are normalized;
and the B Gaussian components are taken as the foreground map of the next frame image at the pixel (x, y).
8. The overhead line foreign object intrusion prevention monitoring method according to claim 7, characterized in that: when the foreground-map pixel value at (x, y) changes continuously, the shape of the foreground map at that position changes; pixels changing continuously in the collected video information indicate that a foreign object has intruded into the overhead line detection area.
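A minimal sketch of this persistence test, assuming a hypothetical consecutive-frame threshold (the claims do not specify one): a pixel that stays in the foreground for several frames in a row is treated as an intrusion, while a transient change is ignored.

```python
# Sketch: flag a foreign-object intrusion when a foreground pixel
# changes continuously for at least min_consecutive frames.

def detect_intrusion(foreground_flags, min_consecutive=3):
    """foreground_flags: per-frame booleans for one pixel of the foreground map.
    Returns True if the pixel stays foreground for min_consecutive frames."""
    run = 0
    for fg in foreground_flags:
        run = run + 1 if fg else 0
        if run >= min_consecutive:
            return True
    return False

# A bird crossing for 2 frames is ignored; an object lodged for 4 frames alarms.
print(detect_intrusion([True, True, False, False]))       # False (transient)
print(detect_intrusion([False, True, True, True, True]))  # True (sustained)
```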
CN202210472326.9A 2022-04-29 2022-04-29 Overhead line foreign matter intrusion prevention monitoring method Pending CN114792405A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210472326.9A CN114792405A (en) 2022-04-29 2022-04-29 Overhead line foreign matter intrusion prevention monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210472326.9A CN114792405A (en) 2022-04-29 2022-04-29 Overhead line foreign matter intrusion prevention monitoring method

Publications (1)

Publication Number Publication Date
CN114792405A true CN114792405A (en) 2022-07-26

Family

ID=82462512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210472326.9A Pending CN114792405A (en) 2022-04-29 2022-04-29 Overhead line foreign matter intrusion prevention monitoring method

Country Status (1)

Country Link
CN (1) CN114792405A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117885116A (en) * 2024-03-15 2024-04-16 中铁电气化铁路运营管理有限公司 Contact network line ranging inspection method and inspection robot based on remote control communication
CN117906615A (en) * 2024-03-15 2024-04-19 苏州艾吉威机器人有限公司 Fusion positioning method and system of intelligent carrying equipment based on environment identification code
CN117885116B (en) * 2024-03-15 2024-05-24 中铁电气化铁路运营管理有限公司 Contact network line ranging inspection method and inspection robot based on remote control communication
CN117906615B (en) * 2024-03-15 2024-06-04 苏州艾吉威机器人有限公司 Fusion positioning method and system of intelligent carrying equipment based on environment identification code

Similar Documents

Publication Publication Date Title
CN114792405A (en) Overhead line foreign matter intrusion prevention monitoring method
CN103106766B (en) Forest fire identification method and forest fire identification system
CN110021133B (en) All-weather fire-fighting fire patrol early-warning monitoring system and fire image detection method
CN107437318B (en) Visible light intelligent recognition algorithm
CN111428617A (en) Video image-based distribution network violation maintenance behavior identification method and system
CN113903081A (en) Visual identification artificial intelligence alarm method and device for images of hydraulic power plant
CN104463253B (en) Passageway for fire apparatus safety detection method based on adaptive background study
CN109034038B (en) Fire identification device based on multi-feature fusion
CN113963301A (en) Space-time feature fused video fire and smoke detection method and system
CN111667655A (en) Infrared image-based high-speed railway safety area intrusion alarm device and method
CN107103324A (en) Transmission line of electricity recognition methods and device
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN111461078A (en) Anti-fishing monitoring method based on computer vision technology
CN113221603A (en) Method and device for detecting shielding of monitoring equipment by foreign matters
CN117789394B (en) Early fire smoke detection method based on motion history image
CN107704818A (en) A kind of fire detection system based on video image
CN108563986B (en) Method and system for judging posture of telegraph pole in jolt area based on long-distance shooting image
CN112347906B (en) Method for detecting abnormal aggregation behavior in bus
CN113989732A (en) Real-time monitoring method, system, equipment and readable medium based on deep learning
CN112434564B (en) Detection system for abnormal aggregation behavior in bus
CN111325708B (en) Transmission line detection method and server
CN115171006B (en) Detection method for automatically identifying person entering electric power dangerous area based on deep learning
CN111432172A (en) Fence alarm method and system based on image fusion
CN116718125A (en) Line induction electricity-taking icing and external-breakage-preventing image online monitoring method and device
YunChang et al. Nighttime video smoke detection based on active infrared video image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination